## Flux v0.13.9

[Diff since v0.13.8](FluxML/Flux.jl@v0.13.8...v0.13.9)

**Closed issues:**
- Iteration over `params(m)` in explicit mode gives no gradient (FluxML#2091)
- `Flux.Optimise.update!` updating grads instead of params? (FluxML#2121)
- Flux.reset! triggers a BoundsError (FluxML#2124)

**Merged pull requests:**
- Remove `train!` from quickstart example (FluxML#2110) (@mcabbott)
- Re-organise "built-in layers" section (FluxML#2112) (@mcabbott)
- Narrower version of `@non_differentiable params` (FluxML#2118) (@mcabbott)
- allow non-tuple data in the new train! (FluxML#2119) (@CarloLucibello)
- fix train! test (FluxML#2123) (@CarloLucibello)
- Move 5 tutorials from fluxml.github.io (FluxML#2125) (@mcabbott)
- Remove Flux.Data module (FluxML#2126) (@mcabbott)
- CompatHelper: bump compat for Functors to 0.4, (keep existing compat) (FluxML#2128) (@github-actions[bot])
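Several of these entries concern the new explicit-mode `train!` (see FluxML#2119), which takes the model directly rather than via `params`. A minimal sketch of that style, assuming the `Flux.setup` / explicit `train!` pair from the v0.13.9 cycle; `model` and `data` here are illustrative only:

```julia
using Flux

model = Dense(1 => 1)
data = [(rand(Float32, 1, 10), rand(Float32, 1, 10))]  # vector of (x, y) batches
opt_state = Flux.setup(Adam(0.01), model)

# The loss receives the model itself; tuple batches are splatted into it,
# and (per FluxML#2119) non-tuple batches are passed through whole:
Flux.train!(model, data, opt_state) do m, x, y
    Flux.mse(m(x), y)
end
```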
## Flux v0.13.8

[Diff since v0.13.7](FluxML/Flux.jl@v0.13.7...v0.13.8)

**Closed issues:**
- `using Flux` is broken on the Julia nightly (FluxML#2097)
- Chain(Parallel(...), ...) (FluxML#2100)
- Apparent memory leak when using Distributed? (FluxML#2102)
- [API] Preventing errors from misplaced optimizer objects (FluxML#2106)

**Merged pull requests:**
- Safer gradients (by copying before mutating) & less piracy (by removing ArrayInterface) (FluxML#2098) (@mcabbott)
- Allow OneHotArrays.jl v0.2 (FluxML#2109) (@mcabbott)
## Flux v0.13.7

[Diff since v0.13.6](FluxML/Flux.jl@v0.13.6...v0.13.7)

**Closed issues:**
- DimensionMismatch("array could not be broadcast to match destination") (FluxML#1457)
- Warn on `NaN` loss (FluxML#1981)
- Make `create_bias` a public API? (FluxML#2049)
- Make `rng_from_array` non-differentiable (FluxML#2062)
- `@autosize` does not work with semi-colon separated kwargs (FluxML#2086)
- early_stopping does not work as expected (FluxML#2089)

**Merged pull requests:**
- Documentation headings & sections (FluxML#2056) (@mcabbott)
- Add a dark mode version of logo (FluxML#2063) (@Saransh-cpp)
- Fix a few crossrefs + update Zygote's page (FluxML#2064) (@Saransh-cpp)
- Make `rng_from_array` non differentiable (FluxML#2065) (@Saransh-cpp)
- Add an example to the readme? (FluxML#2067) (@mcabbott)
- Add a quick start example, and change some headings (FluxML#2069) (@mcabbott)
- Stop training on Inf/NaN loss (FluxML#2070) (@mcabbott)
- Export `Embedding` (FluxML#2072) (@mcognetta)
- Relax `RNN`/`LSTM`/`GRUCell` internal matrix type restrictions (FluxML#2073) (@mcognetta)
- Finish docs for FluxML#2073 (FluxML#2075) (@mcognetta)
- Add `@autosize` (FluxML#2078) (@mcabbott)
- Back to create_bias (FluxML#2081) (@Saransh-cpp)
- Simplify `Embedding` (FluxML#2084) (@mcabbott)
- Fix `|> gpu` bug in `@autosize` (FluxML#2085) (@mcabbott)
- Fix FluxML#2086 re `@autosize` (FluxML#2087) (@mcabbott)
- Use the standard Documenter.jl local redirect (FluxML#2093) (@ChrisRackauckas)
- CompatHelper: bump compat for MLUtils to 0.3, (keep existing compat) (FluxML#2095) (@github-actions[bot])
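The headline feature here is `@autosize` (FluxML#2078), which fills in `_` placeholders in layer constructors by propagating a sample input size. A minimal sketch based on the documented usage:

```julia
using Flux

# @autosize runs size inference for the given input size (28×28×1 images,
# batch of 32) and replaces each `_` with the inferred dimension:
model = @autosize (28, 28, 1, 32) Chain(
    Conv((3, 3), _ => 5, relu, stride = 2),
    Flux.flatten,
    Dense(_ => 10),
)
```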
## Flux v0.13.6

[Diff since v0.13.5](FluxML/Flux.jl@v0.13.5...v0.13.6)

**Closed issues:**
- OneHotArrays.jl? (FluxML#1544)
- [Discussion]: doctests, docstrings, documentation manual, and unclear internal API (for newcomers) (FluxML#1990)
- [Bug]: Swapped `alpha` and `beta` in `tversky` loss? (FluxML#1993)
- [Discussion]: documentation for `@reexport`ed and `import`ed (or `using`) packages (FluxML#2038)
- Pull request FluxML#2007 causes Flux.params() calls to not get cached (FluxML#2040)
- v0.13.5 breaks Flux.train! on a custom type (FluxML#2045)
- Bounds error for Flux.reset! in loss function (FluxML#2057)

**Merged pull requests:**
- Miscellaneous docstring additions and fixes (FluxML#1998) (@Saransh-cpp)
- Use muladd for LSTM cell matmuls (FluxML#2023) (@ToucheSir)
- using OneHotArrays (FluxML#2025) (@mcabbott)
- mark `stop`, `skip`, `@epochs` as deprecated (FluxML#2027) (@mcabbott)
- Fix the last remaining 404 errors (FluxML#2035) (@Saransh-cpp)
- Add ability to filter `loadmodel!` recursion (FluxML#2041) (@darsnack)
- Mark `track_stats=true` as deprecated (FluxML#2042) (@akahard2dj)
- Better docs for reexported packages (FluxML#2046) (@Saransh-cpp)
- Typo in BatchNorm number of channels assertion (FluxML#2047) (@Marcovela)
- Add extra test for params (FluxML#2051) (@christiangnrd)
- Restore some private functions (FluxML#2052) (@ToucheSir)
- Make params non-differentiable (Closes FluxML#2040 & FluxML#2048) (FluxML#2054) (@christiangnrd)
- Leftover changes from FluxML#2046 (FluxML#2055) (@Saransh-cpp)
- `unthunk` in some rules (FluxML#2058) (@mcabbott)
- Fix the failing CI build (FluxML#2059) (@christiangnrd)
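Note that FluxML#2025 moves the one-hot machinery out to OneHotArrays.jl (closing FluxML#1544) while keeping it re-exported, so existing code is unaffected. For example:

```julia
using Flux  # re-exports onehotbatch/onecold from OneHotArrays.jl

labels = [:cat, :dog, :cat]
y = Flux.onehotbatch(labels, [:cat, :dog, :mouse])  # 3×3 one-hot matrix
Flux.onecold(y, [:cat, :dog, :mouse])               # recovers [:cat, :dog, :cat]
```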
## Flux v0.13.5

[Diff since v0.13.4](FluxML/Flux.jl@v0.13.4...v0.13.5)

**Closed issues:**
- PINN loss doesn't converge to 0? (FluxML#1966)
- Simple chaining compatibility check (FluxML#2017)
- v0.12.10 => v0.13.4 breaks `Dropout` on CUDA (FluxML#2018)
- Wrong rrule dispatch for Array constructor (FluxML#2033)

**Merged pull requests:**
- Get rid of documentation warnings and 404 pages (FluxML#1987) (@Saransh-cpp)
- use Functors 0.3 in Flux (FluxML#2007) (@mcabbott)
- Typo (FluxML#2020) (@trigaten)
- Add `NNlib.grid_sample` (FluxML#2022) (@scheidan)
- Remove CTC loss (moved to NNlib) (FluxML#2024) (@mcabbott)
- Fix typo in docs (FluxML#2030) (@svilupp)
- fix array constructor rrule (FluxML#2034) (@chengchingwen)
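FluxML#2022 adds `NNlib.grid_sample` for differentiable resampling. A minimal sketch, assuming the WHCN input layout and a sampling grid normalised to [-1, 1]:

```julia
using NNlib  # installed as a dependency of Flux

x = rand(Float32, 4, 4, 1, 1)                   # input, (width, height, channels, batch)
grid = rand(Float32, 2, 3, 3, 1) .* 2f0 .- 1f0  # (x, y) sampling coordinates in [-1, 1]
y = NNlib.grid_sample(x, grid; padding_mode = :zeros)  # 3×3×1×1 resampled output
```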
## Flux v0.13.4

[Diff since v0.13.3](FluxML/Flux.jl@v0.13.3...v0.13.4)

**Closed issues:**
- Repository: on the addition of loss/distance functions and other niceties to Flux (FluxML#826)
- `trainable` for BatchNorm stops parameters from being saved and loaded (FluxML#1027)
- Non-descriptive arg in `Conv`: why `filter` instead of `size`? (FluxML#1212)
- Ada or ADA (FluxML#1949)
- Make `gpu(::DataLoader)` work or error loudly if it doesn't (FluxML#1974)
- Conversion error when loading a model with v0.13+ with BSON (FluxML#1984)
- GPU broadcasting error when using softmax on GPU (FluxML#1994)
- Error when using CUDA (FluxML#1997)
- Type cannot be referred to with structured model function (FluxML#2000)
- [Broken Documentation] Dense(1 => 1) (FluxML#2001)

**Merged pull requests:**
- Fix slight typos in `LayerNorm` docs (FluxML#1975) (@theabhirath)
- Piratical errors for two mistakes (FluxML#1976) (@mcabbott)
- Show `using Flux` before BSON `@load` (FluxML#1977) (@JeffFessler)
- Update docstrings of `basic.jl` and `conv.jl` (FluxML#1978) (@Saransh-cpp)
- Added Common GPU Workflows in Docs (FluxML#1980) (@lfenzo)
- `PairwiseFusion` layer, take 2 (FluxML#1983) (@theabhirath)
- deprecations.jl: depwarn -> Base.depwarn (FluxML#1985) (@skleinbo)
- Update docstrings in `upsample.jl`, `recurrent.jl`, and `normalise.jl` (FluxML#1995) (@Saransh-cpp)
- Replace ADAM with Adam and its variants (FluxML#1996) (@Karthik-d-k)
- Make `Dropout` docs a little more user friendly (FluxML#2014) (@theabhirath)
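The rename in FluxML#1996 brings the optimiser names in line with standard Julia capitalisation (resolving FluxML#1949); the old all-caps names remain as deprecated aliases. For example:

```julia
using Flux

opt = Adam(0.001)      # new spelling as of v0.13.4; likewise AdamW, NAdam, AdaGrad, ...
# opt = ADAM(0.001)    # old spelling still works, but prints a deprecation warning
```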
## Flux v0.13.3

[Diff since v0.13.2](FluxML/Flux.jl@v0.13.2...v0.13.3)

**Merged pull requests:**
- Use `var` to speed up normalisation (FluxML#1973) (@mcabbott)
## Flux v0.13.2

[Diff since v0.13.1](FluxML/Flux.jl@v0.13.1...v0.13.2)

**Closed issues:**
- Inconsistent "Julia ecosystem" docs (FluxML#1922)
- sigmoid_fast in GRU? (FluxML#1967)

**Merged pull requests:**
- Unify `ecosystem.md` (FluxML#1923) (@Saransh-cpp)
- Updated path to DiffImages.jl (FluxML#1964) (@arcAman07)
- Explain `stride≠1` case for SamePad (FluxML#1965) (@KronosTheLate)
- fast sigmoid (FluxML#1968) (@oysteinsolheim)
- CompatHelper: bump compat for ArrayInterface to 6, (keep existing compat) (FluxML#1969) (@github-actions[bot])
## Flux v0.13.1

[Diff since v0.13.0](FluxML/Flux.jl@v0.13.0...v0.13.1)

**Closed issues:**
- Batchnorm on GPU for Float64 values (FluxML#1897)
- Tag? (FluxML#1924)
- DataLoader causes scalar indexing on GPU in Flux v0.13.0 (regression) (FluxML#1935)
- Flux.flip with broadcasting warning (FluxML#1936)
- Add a workflow to clean-up `gh-pages` branch? (FluxML#1940)
- DimensionMismatch: All data containers must have the same number of observations. (FluxML#1941)
- Type instability in Recur for 3 dimensional arrays (FluxML#1947)
- What is the idiomatic way to get training loss from `gradient()`? (FluxML#1950)
- Dropout erroring on latest CUDA (FluxML#1960)
- AdaBelief issues (FluxML#1962)

**Merged pull requests:**
- Add a ton of doctests + fix outdated documentation in `.md` files (FluxML#1916) (@Saransh-cpp)
- Get the DocBot up again! (FluxML#1937) (@Saransh-cpp)
- Broadcasting replaced with comprehension in the Flux.flip function (FluxML#1938) (@fpartl)
- Fix type instabilities in apply!(optimizer, ...) (FluxML#1942) (@ancapdev)
- Add a workflow to delete PR previews (FluxML#1943) (@Saransh-cpp)
- Fix for progress logging to non-VS Code loggers (FluxML#1944) (@darsnack)
- Add Base.firstindex(c::Chain) = 1 (FluxML#1945) (@KronosTheLate)
- Recur type stability for 3d arrays (FluxML#1948) (@Marcovela)
- Resolve two warnings in the test suite (FluxML#1951) (@mcognetta)
- Update documentation on Split layer (FluxML#1953) (@JLDC)
- [docs] suggest using ADAM with LR=1 when combined with ExpDecay (FluxML#1955) (@ericphanson)
- Type stable `conv_reshape_bias` and AD-friendly `ConvDims` helpers (FluxML#1956) (@ToucheSir)
- onehotbatch with CuArray (FluxML#1959) (@CarloLucibello)
- AdaBelief bias correction (FluxML#1963) (@cossio)
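Regarding FluxML#1955: in an implicit-style `Flux.Optimiser` chain the effective step size is the product of the chained rates, so the docs now suggest giving `ADAM` a rate of 1 and letting `ExpDecay` carry the schedule. A sketch of that pattern, with illustrative hyperparameters:

```julia
using Flux

# ExpDecay(η, decay, decay_step, clip): start at η = 0.001 and multiply
# by 0.1 every 1000 steps, never dropping below 1e-4. ADAM(1.0) contributes
# a factor of one, so the schedule alone controls the step size.
opt = Flux.Optimiser(ExpDecay(0.001, 0.1, 1000, 1e-4), ADAM(1.0))
```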
## Flux v0.13.0

[Diff since v0.12.10](FluxML/Flux.jl@v0.12.10...v0.13.0)

**Merged pull requests:**
- Fix a code block (FluxML#1933) (@prbzrg)