
Commit

release
jph00 committed Jan 6, 2021
1 parent c56afb8 commit a0ab7e0
Showing 2 changed files with 42 additions and 21 deletions.
21 changes: 21 additions & 0 deletions CHANGELOG.md
@@ -2,6 +2,27 @@

<!-- do not remove -->

## 2.2.0
### Breaking Changes

- Promote `NativeMixedPrecision` to default `MixedPrecision` (and similar for `Learner.to_fp16`); old `MixedPrecision` is now called `NonNativeMixedPrecision` ([#3127](https://github.com/fastai/fastai/issues/3127))
  - To use gradient clipping, add the new `GradientClip` callback instead of passing a `clip` parameter (see the sketch after this list)
- Adding a `Callback` which has the same name as an attribute no longer raises an exception ([#3109](https://github.com/fastai/fastai/issues/3109))
- RNN training now requires `RNNCallback`, but does not require `RNNRegularizer`; `out` and `raw_out` have moved to `RNNRegularizer` ([#3108](https://github.com/fastai/fastai/issues/3108))
  - Call `rnn_cbs` to get all callbacks needed for RNN training, optionally with regularization
- Replace callback `run_after` with `order`; `after` callbacks are no longer run when an exception is raised ([#3101](https://github.com/fastai/fastai/issues/3101))
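
Below is a minimal sketch of how these changes fit together, assuming a simple pets classifier built with `cnn_learner` (the dataset and learner setup are illustrative, not part of this release): `to_fp16` now attaches the native-AMP `MixedPrecision` callback, and gradient clipping is requested with the `GradientClip` callback rather than a `clip` argument.

```python
from fastai.vision.all import *  # Learner, GradientClip, cnn_learner, untar_data, ...

# Illustrative dataset and learner setup -- an assumption for this sketch,
# not something introduced by this release.
path = untar_data(URLs.PETS)/'images'
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=lambda f: f[0].isupper(), item_tfms=Resize(224))

# `to_fp16` now adds the native-AMP `MixedPrecision` callback (the previous
# implementation remains available as `NonNativeMixedPrecision`).
# Gradient clipping is no longer a `clip` parameter: add `GradientClip` instead.
learn = cnn_learner(dls, resnet18, metrics=accuracy,
                    cbs=GradientClip(1.0)).to_fp16()
learn.fine_tune(1)
```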

### New Features

- Add `GradientClip` callback ([#3107](https://github.com/fastai/fastai/issues/3107))
- Make `Flatten` cast to `TensorBase` to simplify type compatibility ([#3106](https://github.com/fastai/fastai/issues/3106))
- Make flattened metrics compatible with all tensor subclasses ([#3105](https://github.com/fastai/fastai/issues/3105))
- New class method `TensorBase.register_func` to register types for `__torch_function__` ([#3097](https://github.com/fastai/fastai/issues/3097))
- New `dynamic` flag for controlling dynamic loss scaling in `NativeMixedPrecision` ([#3096](https://github.com/fastai/fastai/issues/3096))
- Remove the need to call `to_native_fp32` before `predict`; set `skipped` in `NativeMixedPrecision` after a NaN from dynamic loss scaling ([#3095](https://github.com/fastai/fastai/issues/3095)) (see the text-classifier sketch after this list)
- Make native fp16 extensible with callbacks ([#3094](https://github.com/fastai/fastai/issues/3094))
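
And a sketch of the inference-side changes, assuming an IMDB-sample text classifier like the one in the example notebook updated below (the dataset and training setup are illustrative assumptions): flattened metrics such as `accuracy` now work with the tensor subclasses produced by the text pipeline, and `predict` on an fp16 learner no longer requires calling `to_native_fp32` first.

```python
from fastai.text.all import *  # TextDataLoaders, text_classifier_learner, AWD_LSTM, ...

# Illustrative dataset and learner setup -- an assumption for this sketch.
path = untar_data(URLs.IMDB_SAMPLE)
dls = TextDataLoaders.from_csv(path, 'texts.csv', text_col='text',
                               label_col='label', valid_col='is_valid')

# Flattened metrics such as `accuracy` now accept any tensor subclass (#3105),
# and `to_fp16` uses native mixed precision by default.
learn = text_classifier_learner(dls, AWD_LSTM, metrics=accuracy).to_fp16()
learn.fine_tune(1)

# No `to_native_fp32` call is needed before predicting any more (#3095).
learn.predict("I really liked that movie!")
```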


## 2.1.10

### New Features
42 changes: 21 additions & 21 deletions nbs/examples/app_examples.ipynb
@@ -370,7 +370,7 @@
"name": "stderr",
"output_type": "stream",
"text": [
"/home/jhoward/anaconda3/lib/python3.7/site-packages/numpy/core/_asarray.py:83: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray\n",
"/home/jhoward/mambaforge/lib/python3.8/site-packages/numpy/core/_asarray.py:83: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray\n",
" return array(a, dtype, copy=False, order=order)\n"
]
}
@@ -404,10 +404,10 @@
" <tbody>\n",
" <tr>\n",
" <td>0</td>\n",
" <td>0.743958</td>\n",
" <td>0.617746</td>\n",
" <td>0.720000</td>\n",
" <td>00:04</td>\n",
" <td>0.634987</td>\n",
" <td>0.621853</td>\n",
" <td>0.705000</td>\n",
" <td>00:03</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>"
@@ -435,31 +435,31 @@
" <tbody>\n",
" <tr>\n",
" <td>0</td>\n",
" <td>0.570559</td>\n",
" <td>0.557726</td>\n",
" <td>0.730000</td>\n",
" <td>00:06</td>\n",
" <td>0.435843</td>\n",
" <td>0.508709</td>\n",
" <td>0.805000</td>\n",
" <td>00:05</td>\n",
" </tr>\n",
" <tr>\n",
" <td>1</td>\n",
" <td>0.490613</td>\n",
" <td>0.512760</td>\n",
" <td>0.775000</td>\n",
" <td>00:06</td>\n",
" <td>0.386479</td>\n",
" <td>0.446490</td>\n",
" <td>0.790000</td>\n",
" <td>00:05</td>\n",
" </tr>\n",
" <tr>\n",
" <td>2</td>\n",
" <td>0.419650</td>\n",
" <td>0.440138</td>\n",
" <td>0.775000</td>\n",
" <td>00:06</td>\n",
" <td>0.320961</td>\n",
" <td>0.522036</td>\n",
" <td>0.770000</td>\n",
" <td>00:05</td>\n",
" </tr>\n",
" <tr>\n",
" <td>3</td>\n",
" <td>0.350080</td>\n",
" <td>0.443032</td>\n",
" <td>0.267238</td>\n",
" <td>0.572048</td>\n",
" <td>0.785000</td>\n",
" <td>00:06</td>\n",
" <td>00:05</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>"
@@ -502,7 +502,7 @@
{
"data": {
"text/plain": [
"('positive', tensor(1), tensor([7.0108e-04, 9.9930e-01]))"
"('positive', tensor(1), tensor([1.3686e-05, 9.9999e-01]))"
]
},
"execution_count": null,
