Flux Release Notes

v0.13

  • After a deprecation cycle, the datasets in Flux.Data have been removed in favour of MLDatasets.jl.
  • params is no longer exported, since it is a common name and is also exported by Distributions.jl.
  • flatten is no longer exported due to a clash with Iterators.flatten.
  • Remove Juno.jl progress bar support as it is now obsolete.
  • Dropout gained improved compatibility with Int and Complex arrays and is now twice-differentiable.
  • Notation Dense(2 => 3, σ) for channels matches Conv; the equivalent Dense(2, 3, σ) still works (see the sketch after this list).
  • Many utility functions and the DataLoader are now provided by MLUtils.jl.
  • The DataLoader is now compatible with generic dataset types implementing MLUtils.numobs and MLUtils.getobs.
  • Added truncated normal initialisation of weights.
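A minimal sketch of the v0.13 surface described above, assuming Flux 0.13 with MLUtils.jl installed; the layer sizes and batch size are arbitrary, and the truncated-normal initialiser is assumed to be exposed as Flux.truncated_normal:

```julia
using Flux, MLUtils

# New Pair notation for channel sizes, matching Conv; Dense(2, 3, relu) still works.
layer = Dense(2 => 3, relu)

# Truncated-normal weight initialisation (assumed name: Flux.truncated_normal).
layer2 = Dense(2 => 3, relu; init = Flux.truncated_normal)

# DataLoader now comes from MLUtils.jl and works with any dataset type
# implementing MLUtils.numobs / MLUtils.getobs.
x, y = rand(Float32, 2, 100), rand(Float32, 3, 100)
loader = DataLoader((x, y); batchsize = 16, shuffle = true)
for (xb, yb) in loader
    ŷ = layer(xb)   # forward pass on one mini-batch
end
```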

v0.12.10

v0.12.9

v0.12.8

  • Optimized inference and gradient calculation of OneHotMatrix.

v0.12.7

  • Added support for GRUv3.
  • The layers within Chain and Parallel may now have names (see the sketch below).
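A minimal sketch of named layers, assuming the keyword-style Chain constructor; the names encoder and decoder are illustrative:

```julia
using Flux

# Layers passed as keywords become named sub-layers, reachable by property access.
model = Chain(encoder = Dense(4, 2, relu), decoder = Dense(2, 4))

x = rand(Float32, 4)
h = model.encoder(x)   # run just the named encoder
y = model(x)           # or run the full chain
```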

v0.12.5

  • Added option to configure groups in Conv (see the sketch after this list).
  • REPL printing via show displays parameter counts.
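A minimal sketch of a grouped convolution, assuming the groups keyword divides both channel counts; the sizes here are arbitrary:

```julia
using Flux

# 3×3 convolution split into 4 groups; `groups` must divide both the
# input and output channel counts (8 and 8 here).
layer = Conv((3, 3), 8 => 8, relu; groups = 4)

x = rand(Float32, 32, 32, 8, 1)   # a WHCN image batch
y = layer(x)
```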

v0.12.4

v0.12.1 - v0.12.3

  • CUDA.jl 3.0 support
  • Bug fixes and optimizations.

v0.12.0

v0.11.2

  • Adds the AdaBelief optimiser.
  • Other new features and bug fixes (see the GitHub releases page).
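A minimal sketch of training with the new optimiser, assuming it is exported as AdaBelief with the usual constructor defaults; model and data are placeholders:

```julia
using Flux

model = Dense(4, 1)
x, y = rand(Float32, 4, 8), rand(Float32, 1, 8)

# One pass over the data with the new AdaBelief optimiser.
opt = AdaBelief()
Flux.train!((x, y) -> Flux.Losses.mse(model(x), y), Flux.params(model), [(x, y)], opt)
```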

v0.11

  • Moved CUDA compatibility to use CUDA.jl instead of CuArrays.jl
  • Add kaiming initialization methods: kaiming_uniform and kaiming_normal
  • Use DataLoader with NamedTuples, so that tensors can be accessed by name.
  • Error if a Dense layer's weights and biases are not arrays.
  • Add Adaptive Pooling in Flux layers.
  • Change to DataLoader's constructor
  • Uniform loss interface
  • Loss functions now live in the Flux.Losses module
  • Optimistic ADAM (OADAM) optimizer for adversarial training.
  • Add option for same padding to conv and pooling layers by setting pad=SamePad() (see the sketch after this list).
  • Added option to set bias to Flux.Zeros to exclude the bias from being trained.
  • Added GlobalMaxPool and GlobalMeanPool layers for performing global pooling operations.
  • Added ClipValue and ClipNorm to Flux.Optimise to provide a cleaner API for gradient clipping.
  • Added new kwarg-only constructors for the various convolutional layers.
  • Documented that the convolutional layer constructors accept weight and bias keyword arguments, to supply custom arrays for those fields.
  • The testing suite now checks gradients of all layers, along with GPU support.
  • Functors have now moved to Functors.jl to allow for their use outside of Flux.
  • Added helper functions Flux.convfilter and Flux.depthwiseconvfilter to construct convolution weight arrays outside of layer constructors, so custom implementations need not depend on the default layers.
  • The dropout function now has a mandatory active keyword argument. The Dropout struct (whose behavior is left unchanged) is the recommended choice for common usage.
  • and many more fixes and additions...
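A minimal sketch pulling a few of these v0.11 changes together (SamePad, kaiming initialisation, the NamedTuple-aware DataLoader, the Flux.Losses module, and gradient clipping); shapes and hyperparameters are arbitrary:

```julia
using Flux

# Same padding plus kaiming initialisation for a conv layer.
conv = Conv((3, 3), 1 => 8, relu; pad = SamePad(), init = Flux.kaiming_normal)

# DataLoader over a NamedTuple: each batch keeps the field names.
data = (images = rand(Float32, 28, 28, 1, 100), labels = rand(Float32, 10, 100))
loader = Flux.Data.DataLoader(data; batchsize = 10)
for batch in loader
    x, y = batch.images, batch.labels
end

# Loss functions now live in Flux.Losses.
loss = Flux.Losses.mse(conv(rand(Float32, 28, 28, 1, 1)), rand(Float32, 28, 28, 8, 1))

# ClipNorm / ClipValue compose with an optimiser for gradient clipping.
opt = Flux.Optimise.Optimiser(ClipNorm(1f0), ADAM())
```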

v0.10.1 - v0.10.4

See GitHub's releases.

v0.10.0

  • The default AD engine has switched from Tracker to Zygote.jl
    • The dependency on Tracker.jl has been removed.
    • This means Flux no longer depends on a specialised TrackedArray type, and can be used with normal Array implementations directly.
    • Tracker compatibility is maintained in most common cases, but Zygote will be the preferred AD backend for Flux from now on (see the sketch after this list).
  • The CUDNN wrappers have been moved from Flux into CuArrays, to allow better support for the CUDA backend, improve the user experience, and make Flux leaner.
  • The *crossentropy functions now work as expected with CuArrays (see the PR for binarycrossentropy).
  • Added clearer docs around training and the Optimiser interface.
  • Layer initialisation has been improved, with a clearer API for extending it to other purposes.
  • Better messaging around CUDA availability, with hooks to initialize the GPU as default where possible.
  • @treelike has been formalised as a functor, with an effective deprecation.
  • testmode! is deprecated in favour of istraining.
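A minimal sketch of taking gradients under the new Zygote backend, using the parameter-based API of that era; the model, data, and optimiser are arbitrary placeholders:

```julia
using Flux

model = Dense(10, 2)
x, y = rand(Float32, 10, 5), rand(Float32, 2, 5)

# Zygote differentiates plain Arrays directly; no TrackedArray wrappers are involved.
ps = Flux.params(model)
gs = gradient(() -> Flux.mse(model(x), y), ps)

opt = ADAM()
Flux.Optimise.update!(opt, ps, gs)   # one optimisation step
```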

v0.9.0

v0.8.0

AD Changes:

v0.7.0

Despite the heroic efforts of scholars and archeologists, pre-0.7 history is lost to the sands of time.