Commit

Closes pytorch#771. Fix vague description on batch size calculation in imagenet.
yzs981130 committed Mar 17, 2022
1 parent 0352380 commit 4067a39
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion imagenet/main.py
@@ -148,7 +148,7 @@ def main_worker(gpu, ngpus_per_node, args):
 model.cuda(args.gpu)
 # When using a single GPU per process and per
 # DistributedDataParallel, we need to divide the batch size
-# ourselves based on the total number of GPUs we have
+# ourselves based on the total number of GPUs of the current node.
 args.batch_size = int(args.batch_size / ngpus_per_node)
 args.workers = int((args.workers + ngpus_per_node - 1) / ngpus_per_node)
 model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu])
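The clarified comment matters because in single-GPU-per-process DistributedDataParallel, `ngpus_per_node` counts only the GPUs of the current node, not the whole cluster. A minimal sketch of the arithmetic the two `int(...)` lines perform (the function name and example values are hypothetical, not from the commit):

```python
# Sketch of the per-node split described in the patched comment.
# Each process drives one GPU, so the per-node batch size is divided
# evenly across that node's GPUs; data-loader workers are split too,
# rounded up so every process gets at least one worker.
def split_for_ddp(batch_size, workers, ngpus_per_node):
    per_gpu_batch = int(batch_size / ngpus_per_node)
    per_gpu_workers = int((workers + ngpus_per_node - 1) / ngpus_per_node)
    return per_gpu_batch, per_gpu_workers

# e.g. a node with 4 GPUs, a per-node batch size of 256, 8 workers:
print(split_for_ddp(256, 8, 4))  # -> (64, 2)
```

With this split, the effective global batch size stays `batch_size * num_nodes`, regardless of how many GPUs each node happens to have.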
