
Tags: akuligowski101/Flux.jl


v0.14.22

[Diff since v0.14.21](https://github.com/FluxML/Flux.jl/compare/v0.14.21...v0.14.22)

**Merged pull requests:**
- Bump actions/checkout from 4.2.0 to 4.2.1 (FluxML#2489) (@dependabot[bot])
- handle data movement with MLDataDevices.jl (FluxML#2492) (@CarloLucibello)
- remove some v0.13 deprecations (FluxML#2493) (@CarloLucibello)

**Closed issues:**
- use MLDataDevices.jl? (FluxML#2482)
- The dependency error about `Flux->FluxMPIExt` occurs when updating to Julia 1.11 (FluxML#2490)
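With FluxML#2492, device movement is delegated internally to MLDataDevices.jl, while the user-facing `gpu`/`cpu` helpers are intended to keep working unchanged. A minimal sketch (not from the release notes; with no GPU backend package such as CUDA.jl loaded, `gpu` is simply a no-op, so this runs on the CPU as written):

```julia
using Flux

model = Dense(3 => 2)          # a small layer to move between devices
model_d = gpu(model)           # moves to an accelerator if e.g. CUDA.jl is loaded; otherwise a no-op
x = gpu(rand(Float32, 3, 5))   # move the input to the same device as the model
y = cpu(model_d(x))            # bring the result back to the host

size(y)  # → (2, 5)
```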

v0.14.21

[Diff since v0.14.20](https://github.com/FluxML/Flux.jl/compare/v0.14.20...v0.14.21)

**Merged pull requests:**
- Update ci.yml for macos-latest to use aarch64 (FluxML#2481) (@ViralBShah)
- Remove leading empty line in example (FluxML#2486) (@blegat)
- Bump actions/checkout from 4.1.7 to 4.2.0 (FluxML#2487) (@dependabot[bot])
- fix: CUDA package optional for FluxMPIExt (FluxML#2488) (@askorupka)

v0.14.20

[Diff since v0.14.19](https://github.com/FluxML/Flux.jl/compare/v0.14.19...v0.14.20)

**Merged pull requests:**
- feat: Distributed data parallel training support (FluxML#2464) (@askorupka)
- Run Enzyme tests only on CUDA CI machine (FluxML#2478) (@pxl-th)
- Adapt to pending Enzyme breaking change (FluxML#2479) (@wsmoses)
- Update TagBot.yml (FluxML#2480) (@ViralBShah)
- Bump patch version (FluxML#2483) (@wsmoses)

v0.14.19

[Diff since v0.14.18](https://github.com/FluxML/Flux.jl/compare/v0.14.18...v0.14.19)

**Merged pull requests:**
- Allow loading of `ConvTranspose` state without `.outpad` field (FluxML#2477) (@mcabbott)

**Closed issues:**
- Model saved under Flux v0.14.16 does not load on v0.14.17 (FluxML#2476)

v0.14.18

[Diff since v0.14.17](https://github.com/FluxML/Flux.jl/compare/v0.14.17...v0.14.18)

**Merged pull requests:**
- Bump deps (FluxML#2475) (@pxl-th)

v0.14.17

[Diff since v0.14.16](https://github.com/FluxML/Flux.jl/compare/v0.14.16...v0.14.17)

**Merged pull requests:**
- Add Enzyme train function (FluxML#2446) (@wsmoses)
- Bump actions/checkout from 4.1.5 to 4.1.7 (FluxML#2460) (@dependabot[bot])
- Add output padding for ConvTranspose (FluxML#2462) (@guiyrt)
- Fix ConvTranspose symmetric non-constant padding (FluxML#2463) (@paulnovo)
- CompatHelper: add new compat entry for Enzyme at version 0.12, (keep existing compat) (FluxML#2466) (@github-actions[bot])
- move enzyme to extension (FluxML#2467) (@CarloLucibello)
- Fix function `_size_check()` (FluxML#2472) (@gruberchr)
- Fix ConvTranspose output padding on AMDGPU (FluxML#2473) (@paulnovo)

**Closed issues:**
- Hoping to offer a version without cuda (FluxML#2155)
- ConvTranspose errors with symmetric non-constant pad (FluxML#2424)
- Create a flag to use Enzyme as the AD in training/etc. (FluxML#2443)
- Can't load a Fluxml trained & saved model. Getting ERROR: CUDA error: invalid device context (code 201, ERROR_INVALID_CONTEXT) (FluxML#2461)
- Requires deprecated cuNN.jl package (FluxML#2470)
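FluxML#2462 added an `outpad` keyword to `ConvTranspose` (FluxML#2477 later relaxed state loading for it). A minimal sketch of why it exists, assuming Flux v0.14.17 or later: with `stride > 1`, several output sizes convolve down to the same input size, and `outpad` selects among them by padding the output.

```julia
using Flux

# Without outpad the output spatial size is (10 - 1) * 2 + 3 = 21;
# outpad = 1 extends it by one step in each spatial dimension.
layer = ConvTranspose((3, 3), 4 => 2; stride = 2, outpad = 1)
x = rand(Float32, 10, 10, 4, 1)   # width × height × channels × batch
y = layer(x)

size(y)  # → (22, 22, 2, 1)
```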

v0.14.16

[Diff since v0.14.15](https://github.com/FluxML/Flux.jl/compare/v0.14.15...v0.14.16)

**Merged pull requests:**
- Make sure first example in Custom Layers docs uses type parameter (FluxML#2415) (@BioTurboNick)
- Add GPU GC comment to Performance Tips (FluxML#2416) (@BioTurboNick)
- Fix some typos in docs (FluxML#2418) (@JoshuaLampert)
- fix component arrays test (FluxML#2419) (@CarloLucibello)
- Bump julia-actions/setup-julia from 1 to 2 (FluxML#2420) (@dependabot[bot])
- documentation update (FluxML#2422) (@CarloLucibello)
- remove `public dropout` (FluxML#2423) (@mcabbott)
- Allow BatchNorm on CUDA with track_stats=False (FluxML#2427) (@paulnovo)
- Bump actions/checkout from 4.1.2 to 4.1.3 (FluxML#2428) (@dependabot[bot])
- Add working downloads badge (FluxML#2429) (@pricklypointer)
- Bump actions/checkout from 4.1.3 to 4.1.4 (FluxML#2430) (@dependabot[bot])
- Add tip for non-CUDA users (FluxML#2434) (@micahscopes)
- Add hint for choosing a different GPU backend (FluxML#2435) (@micahscopes)
- Patch `Flux._isleaf` for abstract arrays with bitstype elements (FluxML#2436) (@jondeuce)
- Bump julia-actions/cache from 1 to 2 (FluxML#2437) (@dependabot[bot])
- Bump actions/checkout from 4.1.4 to 4.1.5 (FluxML#2438) (@dependabot[bot])
- Enzyme: bump version and mark models as working [test] (FluxML#2439) (@wsmoses)
- Enable remaining enzyme test (FluxML#2442) (@wsmoses)
- Bump AMDGPU to 0.9 (FluxML#2449) (@pxl-th)
- Do not install all GPU backends at once (FluxML#2453) (@pxl-th)
- CompatHelper: add new compat entry for BSON at version 0.3, (keep existing compat) (FluxML#2457) (@github-actions[bot])
- remove BSON dependence (FluxML#2458) (@CarloLucibello)

**Closed issues:**
- How to have a stable GPU memory while being performant? (FluxML#780)
- Why is Flux.destructure type unstable? (FluxML#2405)
- tests are failing due to ComponentArrays (FluxML#2411)
- Significant time spent moving medium-size arrays to GPU, type instability (FluxML#2414)
- Dense layers with shared parameters (FluxML#2432)
- why is my `withgradient` type unstable ? (FluxML#2456)

v0.14.15

[Diff since v0.14.14](https://github.com/FluxML/Flux.jl/compare/v0.14.14...v0.14.15)

**Merged pull requests:**
- Restore some support for Tracker.jl (FluxML#2387) (@mcabbott)
- start testing Enzyme (FluxML#2392) (@CarloLucibello)
- Add Ignite.jl to ecosystem.md (FluxML#2395) (@mcabbott)
- Bump actions/checkout from 4.1.1 to 4.1.2 (FluxML#2401) (@dependabot[bot])
- More lazy strings (FluxML#2402) (@lassepe)
- Fix dead link in docs (FluxML#2403) (@BioTurboNick)
- Improve errors for conv layers (FluxML#2404) (@mcabbott)

**Closed issues:**
- Given that DataLoader implements `length` shouldn't it also be able to provide size? (FluxML#2372)
- Dimensions check for `Conv` is incomplete, leading to confusing error (FluxML#2398)

v0.14.14

[Diff since v0.14.13](https://github.com/FluxML/Flux.jl/compare/v0.14.13...v0.14.14)

**Merged pull requests:**
- Bump actions/cache from 3 to 4 (FluxML#2371) (@dependabot[bot])
- Use LazyString in depwarn (FluxML#2400) (@mcabbott)

**Closed issues:**
- precompilation issue on Julia 1.10 (FluxML#2354)
- Flux installation error under Julia 1.10 on Apple Silicon (FluxML#2366)
- Compilation time of Flux models (FluxML#2391)
- Flux.setup buggy and broken in latest v.0.13.17 (FluxML#2394)
- 2x performance regression due to 5e80211 (FluxML#2399)

v0.14.13

[Diff since v0.14.12](https://github.com/FluxML/Flux.jl/compare/v0.14.12...v0.14.13)

**Merged pull requests:**
- Add a macro to opt-in to fancy printing, and to everything else (FluxML#1932) (@mcabbott)
- Small upgrades to training docs (FluxML#2331) (@mcabbott)
- Bump codecov/codecov-action from 3 to 4 (FluxML#2376) (@dependabot[bot])
- Bump dorny/paths-filter from 3.0.0 to 3.0.1 (FluxML#2381) (@dependabot[bot])
- Bump thollander/actions-comment-pull-request from 2.4.3 to 2.5.0 (FluxML#2382) (@dependabot[bot])
- Fix FluxML#2380 (FluxML#2384) (@diegozea)
- Allow `cpu(::DataLoader)` (FluxML#2388) (@mcabbott)
- Bump dorny/paths-filter from 3.0.1 to 3.0.2 (FluxML#2389) (@dependabot[bot])
- doc changes re at-functor and at-layer (FluxML#2390) (@mcabbott)

**Closed issues:**
- Macro to display model struct the way Flux does (FluxML#2044)
- Update GH Actions across all repos (FluxML#2170)
- Flux.jl Documentation (Training API Reference) (FluxML#2303)
- Flux docs missing withgradient() call for multi-objective loss functions (FluxML#2325)
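FluxML#1932 introduced the `Flux.@layer` macro, closing FluxML#2044. A minimal sketch of opting a custom struct into Flux-style printing and parameter handling, assuming Flux v0.14.13 or later (the `Bias` type here is a hypothetical layer invented for illustration):

```julia
using Flux

struct Bias{T}                     # hypothetical custom layer
  b::T
end
Bias(n::Integer) = Bias(zeros(Float32, n))
(m::Bias)(x) = x .+ m.b

Flux.@layer Bias                   # opts Bias into Flux-style show and marks `b` as trainable

m = Bias(4)
m(ones(Float32, 4))  # → Float32[1.0, 1.0, 1.0, 1.0], since b starts at zero
```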