Commit

[pytorch] remove enable_grad (d2l-ai#1712)
I don't think it's necessary to have enable_grad, which is enabled by default.
mli authored Apr 12, 2021
1 parent f67824c commit ab9af31
Showing 1 changed file with 5 additions and 7 deletions.
12 changes: 5 additions & 7 deletions chapter_multilayer-perceptrons/weight-decay.md
@@ -321,10 +321,9 @@ def train(lambd):
                             xlim=[5, num_epochs], legend=['train', 'test'])
     for epoch in range(num_epochs):
         for X, y in train_iter:
-            with torch.enable_grad():
-                # The L2 norm penalty term has been added, and broadcasting
-                # makes `l2_penalty(w)` a vector whose length is `batch_size`
-                l = loss(net(X), y) + lambd * l2_penalty(w)
+            # The L2 norm penalty term has been added, and broadcasting
+            # makes `l2_penalty(w)` a vector whose length is `batch_size`
+            l = loss(net(X), y) + lambd * l2_penalty(w)
             l.sum().backward()
             d2l.sgd([w, b], lr, batch_size)
         if (epoch + 1) % 5 == 0:
@@ -464,9 +463,8 @@ def train_concise(wd):
                             xlim=[5, num_epochs], legend=['train', 'test'])
     for epoch in range(num_epochs):
         for X, y in train_iter:
-            with torch.enable_grad():
-                trainer.zero_grad()
-                l = loss(net(X), y)
+            trainer.zero_grad()
+            l = loss(net(X), y)
             l.backward()
             trainer.step()
         if (epoch + 1) % 5 == 0:
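
As context for the change, a minimal sketch (not part of the commit; assumes any recent PyTorch and an illustrative tensor w) showing that gradient tracking is on by default, so torch.enable_grad() only has an effect when it re-enables tracking inside a torch.no_grad() block:

import torch

w = torch.randn(3, requires_grad=True)

# Gradient tracking is active by default; no enable_grad() wrapper is needed.
loss = (w ** 2).sum()
print(loss.requires_grad)   # True
loss.backward()
print(w.grad)               # gradient is populated (equals 2 * w)

# enable_grad() matters only when tracking was disabled, e.g. inside no_grad().
with torch.no_grad():
    y = (w ** 2).sum()          # not tracked
    with torch.enable_grad():
        z = (w ** 2).sum()      # tracked again
print(y.requires_grad, z.requires_grad)   # False True

In the training loops above, the forward pass already runs with tracking enabled, so dropping the wrapper does not change behavior.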
