
Update Readme.md
mbadry1 authored May 5, 2019
1 parent e91b7dc · commit c6f2c68
Showing 1 changed file with 2 additions and 2 deletions.
2- Improving Deep Neural Networks/Readme.md (2 additions, 2 deletions)
@@ -656,13 +656,13 @@ Implications of L2-regularization on:
- Given `Z[l] = [z[1], ..., z[m]]`, for i = 1 to m (one value per input example)
- Compute `mean = 1/m * sum(z[i])`
- Compute `variance = 1/m * sum((z[i] - mean)^2)`
-- Then `Z_norm[i] = (z(i) - mean) / np.sqrt(variance + epsilon)` (add `epsilon` for numerical stability in case `variance = 0`)
+- Then `Z_norm[i] = (z[i] - mean) / np.sqrt(variance + epsilon)` (add `epsilon` for numerical stability in case `variance = 0`)
  - This forces the inputs to a distribution with zero mean and a variance of 1.
- Then `Z_tilde[i] = gamma * Z_norm[i] + beta`
  - This maps the inputs to a different distribution (with a different mean and variance).
  - `gamma` and `beta` are learnable parameters of the model.
  - They let the NN learn the distribution of the outputs.
-  - _Note:_ if `gamma = sqrt(variance + epsilon)` and `beta = mean` then `Z_tilde[i] = Z_norm[i]`
+  - _Note:_ if `gamma = sqrt(variance + epsilon)` and `beta = mean` then `Z_tilde[i] = z[i]` (the transformation reduces to the identity)
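
The steps above translate directly into NumPy. A minimal sketch (the helper name `batch_norm_forward`, the `(n_units, m)` shape convention with one column per example, and `epsilon = 1e-8` are illustrative assumptions, not code from the course):

```python
import numpy as np

def batch_norm_forward(Z, gamma, beta, epsilon=1e-8):
    """Batch-normalize Z of assumed shape (n_units, m) over the m examples.

    gamma, beta: learnable parameters, broadcastable to Z (e.g. shape (n_units, 1)).
    """
    mean = np.mean(Z, axis=1, keepdims=True)           # mean = 1/m * sum(z[i])
    variance = np.var(Z, axis=1, keepdims=True)        # variance = 1/m * sum((z[i] - mean)^2)
    Z_norm = (Z - mean) / np.sqrt(variance + epsilon)  # zero mean, unit variance
    return gamma * Z_norm + beta                       # Z_tilde: scaled/shifted to a learned distribution

# Check the corrected note: with gamma = sqrt(variance + epsilon) and
# beta = mean, the scale and shift undo the normalization, so the output
# equals the original Z (i.e. z[i], not Z_norm[i]).
Z = np.random.randn(4, 8)
mean = np.mean(Z, axis=1, keepdims=True)
variance = np.var(Z, axis=1, keepdims=True)
epsilon = 1e-8
assert np.allclose(batch_norm_forward(Z, np.sqrt(variance + epsilon), mean, epsilon), Z)
```

The assertion mirrors what this commit fixes: for those particular values of `gamma` and `beta`, Batch Norm becomes the identity and recovers `z[i]` rather than `Z_norm[i]`.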
### Fitting Batch Normalization into a neural network
