
v0.13.13

[Diff since v0.13.12](FluxML/Flux.jl@v0.13.12...v0.13.13)

**Closed issues:**
- Normalization layers promote eltype (FluxML#1562)
- Recurrent cell `eltype` restriction breaks `outputsize` (FluxML#1565)
- Performance regression with graph neural networks (FluxML#1577)
- Opaque error caused by Float64 input to RNN (FluxML#1972)
- Binding Flux.setup does not exist (FluxML#2169)
- Unintended behaviour? Should Flux be able to reduce StaticArrays? (FluxML#2180)
- Custom model cannot be trained (FluxML#2187)

**Merged pull requests:**
- Match layer output to weights (FluxML#2156) (@mcabbott)
- Add friendly size check (FluxML#2176) (@mcabbott)
- Add `f16` (FluxML#2184) (@mcabbott) (see the sketch after this list)
- remove Flux.flatten in favor of MLUtils.flatten (FluxML#2188) (@CarloLucibello)
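
As a small illustration of the new `f16` from FluxML#2184 above (a Float16 counterpart to the existing `f32`/`f64` converters), here is a minimal sketch; the layer and its sizes are arbitrary:

```julia
using Flux

model = Dense(2 => 3)      # parameters are Float32 by default
model16 = f16(model)       # recursively convert floating-point parameters to Float16

eltype(model16.weight)     # Float16
```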

v0.13.12

[Diff since v0.13.11](FluxML/Flux.jl@v0.13.11...v0.13.12)

**Closed issues:**
- Delta neural networks inference (FluxML#2129)
- [Bug] Embedding forward pass breaks for onehotbatch with multiple batch dimensions (FluxML#2160)
- MethodError: no method matching when training LSTMs even when loss function is working correctly (FluxML#2168)
- Type instability with Flux.update! when loss function involves extra arguments (FluxML#2175)

**Merged pull requests:**
- Un-deprecate `track_stats` for InstanceNorm (FluxML#2149) (@ToucheSir)
- Move `dropout` to NNlib (FluxML#2150) (@mcabbott)
- Use NNlib's `within_gradient` (FluxML#2152) (@mcabbott)
- Export `rand32` and friends (FluxML#2157) (@mcabbott) (see the sketch after this list)
- Remove piratical array conversion rule (FluxML#2167) (@ToucheSir)
- update: actions node 12 => node 16 (FluxML#2173) (@skyleaworlder)
- cuda 4.0 compat (FluxML#2177) (@CarloLucibello)
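
To illustrate the newly exported `rand32` and friends from FluxML#2157 above, a minimal sketch (the shapes are arbitrary; the point is that the results are `Float32`, avoiding accidental `Float64` promotion inside models):

```julia
using Flux

x = rand32(3, 4)       # 3×4 Matrix{Float32}, uniform in [0, 1)
y = randn32(3, 4)      # 3×4 Matrix{Float32}, standard normal

eltype(x) == Float32   # true
```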

v0.13.11

[Diff since v0.13.10](FluxML/Flux.jl@v0.13.10...v0.13.11)

**Closed issues:**
- Deprecate `track_stats=true` for `GroupNorm` and `InstanceNorm` (FluxML#2006)
- `cpu(x)` errors for `x isa CuArray{<:CartesianIndex}` (FluxML#2116)
- Constructing a Chain from a dictionary (FluxML#2142)
- Method error when using `Flux.setup` with `Embedding` layer (FluxML#2144)
- Method Error when using Flux.withgradient (FluxML#2148)

**Merged pull requests:**
- fix cpu(x) for immutable arrays (FluxML#2117) (@CarloLucibello)
- Fix two bugs re `setup` (FluxML#2145) (@mcabbott) (see the sketch after this list)
- CompatHelper: bump compat for MLUtils to 0.4, (keep existing compat) (FluxML#2147) (@github-actions[bot])
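
A minimal sketch of the explicit-mode `Flux.setup` path touched by FluxML#2144 and FluxML#2145 above, using an `Embedding` layer; the vocabulary size, embedding width, learning rate, and toy loss are arbitrary choices for illustration:

```julia
using Flux

model = Embedding(10 => 4)                 # 10-token vocabulary, 4-dimensional embeddings
opt_state = Flux.setup(Adam(1e-3), model)  # optimiser state for explicit-mode training

x = [1, 3, 5]                              # a batch of token indices
grads = Flux.gradient(m -> sum(m(x)), model)
Flux.update!(opt_state, model, grads[1])
```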

v0.13.10

[Diff since v0.13.9](FluxML/Flux.jl@v0.13.9...v0.13.10)

**Closed issues:**
- remove Bors (FluxML#1843)
- Only generate and upload coverage for one matrix entry (FluxML#1939)
- [Discussion]: Revamped Getting Started guide (FluxML#2012)
- Using a user-provided weight matrix to build LSTM layers (FluxML#2130)

**Merged pull requests:**
- Re-write training docs (FluxML#2114) (@mcabbott)
- Move doc sections to "guide" + "reference" (FluxML#2115) (@mcabbott)
- Allow ForwardDiff in BatchNorm's track_stats (FluxML#2127) (@mcabbott)
- Fix last block in quickstart.md (FluxML#2131) (@simonschnake)
- Delete bors.toml (FluxML#2133) (@CarloLucibello)
- Docs for `onecold` (FluxML#2134) (@nathanielvirgo) (see the sketch after this list)
- [ISSUE 1939] Update workflow to only generate coverage for a specific entry (FluxML#2136) (@skyleaworlder)
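
For the `onecold` documentation added in FluxML#2134 above, a brief usage sketch with made-up scores and labels:

```julia
using Flux

Flux.onecold([0.1, 0.9, 0.0])                          # 2 (index of the largest entry)
Flux.onecold([0.1, 0.9, 0.0], ["cat", "dog", "bird"])  # "dog" (label of the largest entry)
```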

v0.13.9

[Diff since v0.13.8](FluxML/Flux.jl@v0.13.8...v0.13.9)

**Closed issues:**
- Iteration over `params(m)` in explicit mode gives no gradient (FluxML#2091)
- `Flux.Optimise.update!` updating grads instead of params? (FluxML#2121)
- Flux.reset! triggers a BoundsError (FluxML#2124)

**Merged pull requests:**
- Remove `train!` from quickstart example (FluxML#2110) (@mcabbott)
- Re-organise "built-in layers" section (FluxML#2112) (@mcabbott)
- Narrower version of `@non_differentiable params` (FluxML#2118) (@mcabbott)
- allow non-tuple data in the new train! (FluxML#2119) (@CarloLucibello) (see the sketch after this list)
- fix train! test (FluxML#2123) (@CarloLucibello)
- Move 5 tutorials from fluxml.github.io (FluxML#2125) (@mcabbott)
- Remove Flux.Data module (FluxML#2126) (@mcabbott)
- CompatHelper: bump compat for Functors to 0.4, (keep existing compat) (FluxML#2128) (@github-actions[bot])
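
To sketch the explicit-mode `train!` loop touched by FluxML#2110 and FluxML#2119 above: each tuple in `data` is splatted into the loss (non-tuple items should be passed whole after FluxML#2119). The model, optimiser, and toy data below are arbitrary illustrations:

```julia
using Flux

model = Dense(1 => 1)
opt_state = Flux.setup(Descent(0.1), model)

data = [(rand(Float32, 1, 10), rand(Float32, 1, 10))]   # one (x, y) batch
Flux.train!(model, data, opt_state) do m, x, y
    Flux.mse(m(x), y)
end
```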

v0.13.8

[Diff since v0.13.7](FluxML/Flux.jl@v0.13.7...v0.13.8)

**Closed issues:**
- `using Flux` is broken on the Julia nightly (FluxML#2097)
- Chain(Parallel(...), ...) (FluxML#2100)
- Apparent memory leak when using Distributed? (FluxML#2102)
- [API] Preventing errors from misplaced optimizer objects (FluxML#2106)

**Merged pull requests:**
- Safer gradients (by copying before mutating) & less piracy (by removing ArrayInterface) (FluxML#2098) (@mcabbott)
- Allow OneHotArrays.jl v0.2 (FluxML#2109) (@mcabbott)

v0.13.7

[Diff since v0.13.6](FluxML/Flux.jl@v0.13.6...v0.13.7)

**Closed issues:**
- DimensionMismatch("array could not be broadcast to match destination") (FluxML#1457)
- Warn on `NaN` loss (FluxML#1981)
- Make `create_bias` a public API? (FluxML#2049)
- Make `rng_from_array` non-differentiable (FluxML#2062)
- `@autosize` does not work with semi-colon separated kwargs (FluxML#2086)
- early_stopping does not work as expected (FluxML#2089)

**Merged pull requests:**
- Documentation headings & sections (FluxML#2056) (@mcabbott)
- Add a dark mode version of logo (FluxML#2063) (@Saransh-cpp)
- Fix a few crossrefs + update Zygote's page (FluxML#2064) (@Saransh-cpp)
- Make `rng_from_array` non differentiable (FluxML#2065) (@Saransh-cpp)
- Add an example to the readme? (FluxML#2067) (@mcabbott)
- Add a quick start example, and change some headings (FluxML#2069) (@mcabbott)
- Stop training on Inf/NaN loss (FluxML#2070) (@mcabbott)
- Export `Embedding` (FluxML#2072) (@mcognetta)
- Relax `RNN`/`LSTM`/`GRUCell` internal matrix type restrictions (FluxML#2073) (@mcognetta)
- Finish docs for FluxML#2073 (FluxML#2075) (@mcognetta)
- Add `@autosize` (FluxML#2078) (@mcabbott) (see the sketch after this list)
- Back to create_bias (FluxML#2081) (@Saransh-cpp)
- Simplify `Embedding` (FluxML#2084) (@mcabbott)
- Fix `|> gpu` bug in `@autosize` (FluxML#2085) (@mcabbott)
- Fix FluxML#2086 re `@autosize` (FluxML#2087) (@mcabbott)
- Use the standard Documenter.jl local redirect (FluxML#2093) (@ChrisRackauckas)
- CompatHelper: bump compat for MLUtils to 0.3, (keep existing compat) (FluxML#2095) (@github-actions[bot])
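
A minimal sketch of `@autosize` from FluxML#2078 above, where `_` placeholders are filled in from the given input size; the particular layers and sizes are arbitrary:

```julia
using Flux

model = @autosize (28, 28, 1, 1) Chain(
    Conv((3, 3), _ => 8, relu),   # _ becomes the 1 input channel
    Flux.flatten,
    Dense(_ => 10),               # _ becomes the flattened feature count
)

model(rand(Float32, 28, 28, 1, 1)) |> size   # (10, 1)
```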

v0.13.6

[Diff since v0.13.5](FluxML/Flux.jl@v0.13.5...v0.13.6)

**Closed issues:**
- OneHotArrays.jl? (FluxML#1544)
- [Discussion]: doctests, docstrings, documentation manual, and unclear internal API (for newcomers) (FluxML#1990)
- [Bug]: Swapped `alpha` and `beta` in `tversky` loss? (FluxML#1993)
- [Discussion]: documentation for `@reexport`ed and `import`ed (or `using`) packages (FluxML#2038)
- Pull request FluxML#2007 causes Flux.params() calls to not get cached (FluxML#2040)
- v0.13.5 breaks Flux.train! on a custom type (FluxML#2045)
- Bounds error for Flux.reset! in loss function (FluxML#2057)

**Merged pull requests:**
- Miscellaneous docstring additions and fixes (FluxML#1998) (@Saransh-cpp)
- Use muladd for LSTM cell matmuls (FluxML#2023) (@ToucheSir)
- using OneHotArrays (FluxML#2025) (@mcabbott) (see the sketch after this list)
- mark `stop`, `skip`, `@epochs` as deprecated (FluxML#2027) (@mcabbott)
- Fix the last remaining 404 errors (FluxML#2035) (@Saransh-cpp)
- Add ability to filter `loadmodel!` recursion (FluxML#2041) (@darsnack)
- Mark `track_stats=true` as deprecated (FluxML#2042) (@akahard2dj)
- Better docs for reexported packages (FluxML#2046) (@Saransh-cpp)
- Typo in BatchNorm number of channels assertion (FluxML#2047) (@Marcovela)
- Add extra test for params (FluxML#2051) (@christiangnrd)
- Restore some private functions (FluxML#2052) (@ToucheSir)
- Make params non-differentiable (Closes FluxML#2040 & FluxML#2048) (FluxML#2054) (@christiangnrd)
- Leftover changes from FluxML#2046 (FluxML#2055) (@Saransh-cpp)
- `unthunk` in some rules (FluxML#2058) (@mcabbott)
- Fix the failing CI build (FluxML#2059) (@christiangnrd)
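
Following the move to OneHotArrays.jl in FluxML#2025 above, the familiar one-hot helpers remain reachable through Flux; a small sketch with arbitrary labels:

```julia
using Flux

y = Flux.onehotbatch(['a', 'b', 'a'], 'a':'c')   # 3×3 one-hot matrix
Flux.onecold(y, 'a':'c')                          # ['a', 'b', 'a']
```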

v0.13.5

[Diff since v0.13.4](FluxML/Flux.jl@v0.13.4...v0.13.5)

**Closed issues:**
- PINN loss doesn't converge to 0? (FluxML#1966)
- Simple chaining compatibility check (FluxML#2017)
- v0.12.10 => v0.13.4 breaks `Dropout` on CUDA (FluxML#2018)
- Wrong rrule dispatch for Array constructor (FluxML#2033)

**Merged pull requests:**
- Get rid of documentation warnings and 404 pages (FluxML#1987) (@Saransh-cpp)
- use Functors 0.3 in Flux (FluxML#2007) (@mcabbott)
- Typo (FluxML#2020) (@trigaten)
- Add `NNlib.grid_sample` (FluxML#2022) (@scheidan)
- Remove CTC loss (moved to NNlib) (FluxML#2024) (@mcabbott)
- Fix typo in docs (FluxML#2030) (@svilupp)
- fix array constructor rrule (FluxML#2034) (@chengchingwen)

v0.13.4

[Diff since v0.13.3](FluxML/Flux.jl@v0.13.3...v0.13.4)

**Closed issues:**
- Repository: on the addition of loss/distance functions and other niceties to Flux (FluxML#826)
- `trainable` for BatchNorm stops parameters from being saved and loaded (FluxML#1027)
- Non-descriptive arg in `Conv`: why `filter` instead of `size`? (FluxML#1212)
- Ada or ADA (FluxML#1949)
- Make `gpu(::DataLoader)` work or error loudly if it doesn't (FluxML#1974)
- Conversion error when loading a model with v0.13+ with BSON (FluxML#1984)
- GPU broadcasting error when using softmax on GPU (FluxML#1994)
- Error when using CUDA (FluxML#1997)
- type cannot be referred to with structured model function (FluxML#2000)
- [Broken Documentation] Dense(1 => 1) (FluxML#2001)

**Merged pull requests:**
- Fix slight typos in `LayerNorm` docs (FluxML#1975) (@theabhirath)
- Piratical errors for two mistakes (FluxML#1976) (@mcabbott)
- Show `using Flux` before BSON `@load` (FluxML#1977) (@JeffFessler)
- Update docstrings of `basic.jl` and `conv.jl` (FluxML#1978) (@Saransh-cpp)
- Added Common GPU Workflows in Docs (FluxML#1980) (@lfenzo)
- `PairwiseFusion` layer, take 2 (FluxML#1983) (@theabhirath)
- deprecations.jl: depwarn -> Base.depwarn (FluxML#1985) (@skleinbo)
- Update docstrings in `upsample.jl`, `recurrent.jl`, and `normalise.jl`  (FluxML#1995) (@Saransh-cpp)
- replace ADAM with Adam and variants thereof (FluxML#1996) (@Karthik-d-k) (see the sketch after this list)
- Make `Dropout` docs a little more user friendly (FluxML#2014) (@theabhirath)
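
To illustrate the optimiser renaming from FluxML#1996 above, together with the `Dense(1 => 1)` pair syntax discussed in FluxML#2001, a brief sketch; the sizes and learning rate are arbitrary:

```julia
using Flux

model = Dense(1 => 1)   # pair syntax: 1 input feature => 1 output feature
opt = Adam(0.01)        # previously spelled ADAM; the old spelling is deprecated
```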