
Commit: Update frontpage.html
astonzhang committed Dec 9, 2019
1 parent bd5ce03 commit 98fc431
Showing 2 changed files with 4 additions and 2 deletions.
chapter_computational-performance/multiple-gpus.md (5 changes: 3 additions & 2 deletions)
@@ -28,9 +28,10 @@ Assume there are $k$ GPUs on a machine. Given the model to be trained, each GPU

In order to implement data parallelism in a multi-GPU training scenario from scratch, we first import the required packages or modules.

-```{.python .input n=33}
+```{.python .input n=2}
%matplotlib inline
import d2l
+import mxnet as mx
from mxnet import autograd, nd, gluon
```
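
Before moving on, a minimal sketch (not part of this commit) of how a GPU list with a CPU fallback can be built, in the spirit of the `d2l.try_all_gpus()` call that the second hunk replaces with an explicit device list; it assumes MXNet's `mx.context.num_gpus()` helper:

```{.python .input}
# Sketch only: enumerate available GPU contexts, falling back to the
# CPU when none are found (roughly what d2l.try_all_gpus() does).
import mxnet as mx

def try_all_gpus_sketch():
    ctxes = [mx.gpu(i) for i in range(mx.context.num_gpus())]
    return ctxes if ctxes else [mx.cpu()]

print(try_all_gpus_sketch())  # e.g. [gpu(0), gpu(1)] on a two-GPU machine
```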

@@ -120,7 +121,7 @@ Now, we try to divide the 6 data instances equally between 2 GPUs using the `split_and_load` function.

```{.python .input n=8}
data = nd.arange(24).reshape((6, 4))
-ctx = d2l.try_all_gpus()
+ctx = [mx.gpu(0), mx.gpu(1)]
splitted = gluon.utils.split_and_load(data, ctx)
print('input: ', data)
print('load into', ctx)
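
For reference, a minimal sketch (not part of this commit) of what the snippet above computes when two GPUs are actually available: `split_and_load` slices the array along the batch axis (axis 0) and copies each shard to its device, so the 6×4 input becomes two 3×4 shards.

```{.python .input}
# Sketch only: illustrate split_and_load on a machine with at least
# two GPUs.
import mxnet as mx
from mxnet import nd, gluon

data = nd.arange(24).reshape((6, 4))
ctx = [mx.gpu(0), mx.gpu(1)]
shards = gluon.utils.split_and_load(data, ctx)
for shard in shards:
    print(shard.shape, shard.context)  # (3, 4) on gpu(0), then gpu(1)
```

Note that `split_and_load` defaults to `even_split=True`, so the number of data instances (6 here) must be divisible by the number of contexts.
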
static/frontpage/frontpage.html (1 change: 1 addition & 0 deletions)
@@ -213,6 +213,7 @@ <h3>D2L as a textbook or a reference book</h3>
Indian Institute of Technology Kanpur (India)<br>
Indian Institute of Technology Ropar (India)<br>
Kyungpook National University (Korea)<br>
+Massachusetts Institute of Technology (USA)<br>
Shanghai Jiao Tong University (China)<br>
Shanghai University of Finance and Economics (China)<br>
Texas A&amp;M University (USA)<br>
