Correcting the optimization formula for Adam
debajyotidatta authored May 26, 2018
1 parent 9eb299f commit e2ac0ab
Showing 1 changed file with 2 additions and 2 deletions.
2- Improving Deep Neural Networks/Readme.md (2 additions, 2 deletions)
@@ -550,8 +550,8 @@ L2-regularization relies on the assumption that a model with small weights is si
# can be mini-batch or batch gradient descent
compute dw, db on current mini-batch

-	vdW = (beta1 * dW) + (1 - beta1) * dW # momentum
-	vdb = (beta1 * db) + (1 - beta1) * db # momentum
+	vdW = (beta1 * vdW) + (1 - beta1) * dW # momentum
+	vdb = (beta1 * vdb) + (1 - beta1) * db # momentum

sdW = (beta2 * dW) + (1 - beta2) * dW^2 # RMSprop
sdb = (beta2 * db) + (1 - beta2) * db^2 # RMSprop
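For context, here is a minimal NumPy sketch of one full Adam step that uses the corrected recursive accumulators above (exponentially weighted averages of the gradients and of the squared gradients), together with the bias correction and parameter update. The function name, the dict-based parameter layout, and the default hyperparameter values are illustrative assumptions, not code from the repository.

```python
import numpy as np

def adam_step(params, grads, v, s, t, learning_rate=0.001,
              beta1=0.9, beta2=0.999, epsilon=1e-8):
    """One Adam update. params, grads, v, s are dicts keyed by
    parameter name (e.g. 'W', 'b'); t is the iteration count,
    starting at 1. Illustrative sketch only."""
    for key in params:
        # momentum: exponentially weighted average of the gradients
        v[key] = beta1 * v[key] + (1 - beta1) * grads[key]
        # RMSprop: exponentially weighted average of the squared gradients
        s[key] = beta2 * s[key] + (1 - beta2) * np.square(grads[key])
        # bias correction for the early iterations
        v_corrected = v[key] / (1 - beta1 ** t)
        s_corrected = s[key] / (1 - beta2 ** t)
        # combined update
        params[key] -= learning_rate * v_corrected / (np.sqrt(s_corrected) + epsilon)
    return params, v, s
```

The key point of the fix is the recursive form: each accumulator is a weighted average of its own previous value and the current gradient, rather than a weighted combination of the gradient with itself.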
