
Commit f25907e
fix documentation of center argument in BN layers (tensorflow#6792)
kashif authored and Vijay Vasudevan committed Jan 11, 2017
1 parent 1e4d6f1 commit f25907e
Showing 2 changed files with 10 additions and 5 deletions.
tensorflow/contrib/layers/python/layers/layers.py (9 changes: 6 additions & 3 deletions)
@@ -176,7 +176,8 @@ def _fused_batch_norm(
to 1.0, typically in the multiple-nines range: 0.999, 0.99, 0.9, etc. Lower
`decay` value (recommend trying `decay`=0.9) if model experiences reasonably
good training performance but poor validation and/or test performance.
-    center: If True, subtract `beta`. If False, `beta` is ignored.
+    center: If True, add offset of `beta` to normalized tensor. If False, `beta`
+      is ignored.
scale: If True, multiply by `gamma`. If False, `gamma` is
not used. When the next layer is linear (also e.g. `nn.relu`), this can be
disabled since the scaling can be done by the next layer.
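(For context: the `decay` described above controls exponential moving averages of the batch statistics used at inference time. A minimal sketch of the conventional EMA update, with illustrative names rather than TensorFlow's internal implementation:)

def update_moving_stats(moving_mean, moving_var, batch_mean, batch_var,
                        decay=0.999):
    # A decay near 1.0 (e.g. 0.999) adapts slowly; 0.9 tracks recent batches
    # more closely, which is why lowering it is suggested when training looks
    # good but validation does not.
    new_mean = decay * moving_mean + (1 - decay) * batch_mean
    new_var = decay * moving_var + (1 - decay) * batch_var
    return new_mean, new_var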
@@ -407,7 +408,8 @@ def batch_norm(
Lower `decay` value (recommend trying `decay`=0.9) if model experiences
reasonably good training performance but poor validation and/or test
performance. Try zero_debias_moving_mean=True for improved stability.
-    center: If True, subtract `beta`. If False, `beta` is ignored.
+    center: If True, add offset of `beta` to normalized tensor. If False, `beta`
+      is ignored.
scale: If True, multiply by `gamma`. If False, `gamma` is
not used. When the next layer is linear (also e.g. `nn.relu`), this can be
disabled since the scaling can be done by the next layer.
@@ -1447,7 +1449,8 @@ def layer_norm(inputs,
Args:
inputs: a tensor with 2 or more dimensions. The normalization
occurs over all but the first dimension.
-    center: If True, subtract `beta`. If False, `beta` is ignored.
+    center: If True, add offset of `beta` to normalized tensor. If False, `beta`
+      is ignored.
scale: If True, multiply by `gamma`. If False, `gamma` is
not used. When the next layer is linear (also e.g. `nn.relu`), this can be
disabled since the scaling can be done by the next layer.
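The corrected wording reflects the standard batch-normalization transform: inputs are normalized to zero mean and unit variance, then optionally scaled by `gamma` and shifted by `beta`. A minimal NumPy sketch of that transform (illustrative names, not TensorFlow's implementation):

import numpy as np

def batch_norm_sketch(x, gamma, beta, center=True, scale=True, eps=1e-3):
    # Normalize over the batch dimension for a [batch, features] input.
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    y = (x - mean) / np.sqrt(var + eps)
    if scale:
        y = gamma * y  # scale=True: multiply by `gamma`
    if center:
        y = y + beta   # center=True: `beta` is added as an offset, not
                       # subtracted, which is what this commit corrects
    return y

x = np.random.randn(32, 8).astype(np.float32)
out = batch_norm_sketch(x, gamma=np.ones(8), beta=np.full(8, 0.5))
print(out.mean(axis=0))  # close to 0.5: the added `beta` offset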
tensorflow/python/layers/normalization.py (6 changes: 4 additions & 2 deletions)
@@ -55,7 +55,8 @@ class BatchNormalization(base._Layer):  # pylint: disable=protected-access
`data_format="channels_first"`, set `axis=1` in `BatchNormalization`.
momentum: Momentum for the moving average.
epsilon: Small float added to variance to avoid dividing by zero.
-    center: If True, subtract `beta`. If False, `beta` is ignored.
+    center: If True, add offset of `beta` to normalized tensor. If False, `beta`
+      is ignored.
scale: If True, multiply by `gamma`. If False, `gamma` is
not used. When the next layer is linear (also e.g. `nn.relu`), this can be
disabled since the scaling can be done by the next layer.
@@ -276,7 +277,8 @@ def batch_normalization(inputs,
`data_format="channels_first"`, set `axis=1` in `BatchNormalization`.
momentum: Momentum for the moving average.
epsilon: Small float added to variance to avoid dividing by zero.
-    center: If True, subtract `beta`. If False, `beta` is ignored.
+    center: If True, add offset of `beta` to normalized tensor. If False, `beta`
+      is ignored.
scale: If True, multiply by `gamma`. If False, `gamma` is
not used. When the next layer is linear (also e.g. `nn.relu`), this can be
disabled since the scaling can be done by the next layer.
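Under the corrected docstring, `center=False` simply drops the learned `beta` offset. A hedged usage sketch, assuming the TF 1.x `tf.layers` endpoint for the `batch_normalization` function above (its `training` flag is not shown in this diff):

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 64])

# center=True (default): a learned `beta` offset is added to the
# normalized tensor, matching the corrected docstring.
y = tf.layers.batch_normalization(x, center=True, scale=True, training=True)

# center=False: `beta` is ignored; no offset is added after normalization.
y_no_offset = tf.layers.batch_normalization(x, center=False, training=True)

Disabling `center` can make sense when the next layer adds its own bias term, mirroring the note about `scale` and linear layers in the docstrings above.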
