split between 2 gpus
astonzhang committed Dec 9, 2019
1 parent d9890e5 commit c15f5c5
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions chapter_computational-performance/multiple-gpus.md
@@ -117,11 +117,11 @@ print('after allreduce:\n', data[0], '\n', data[1])

The `utils` module in Gluon provides a function to evenly split an array into multiple parts along the first dimension and then copy the $i$-th part to the $i$-th device. It is straightforward to implement ourselves, but we will use the pre-implemented version so that later chapters can reuse the `split_batch` function that we will define later.
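
For reference, the following is a minimal sketch of what such a `split_batch` helper could look like, assuming it simply applies `gluon.utils.split_and_load` to both the features and the labels; the version defined later in the book may differ in its details.

```{.python .input}
from mxnet import gluon, np, npx
npx.set_np()

def split_batch(X, y, ctx_list):
    """Hypothetical sketch: split features X and labels y evenly across
    the devices in ctx_list using gluon.utils.split_and_load."""
    assert X.shape[0] == y.shape[0]
    return (gluon.utils.split_and_load(X, ctx_list),
            gluon.utils.split_and_load(y, ctx_list))
```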

-Now, we try to divide the 6 data instances equally between 2 GPUs using the `split_and_load` function.
+Now, we try to divide the 4 data instances equally between 2 GPUs using the `split_and_load` function.

```{.python .input n=8}
data = np.arange(24).reshape(4, 6)
-ctx = d2l.try_all_gpus()
+ctx = [npx.gpu(0), npx.gpu(1)]
splitted = gluon.utils.split_and_load(data, ctx)
print('input: ', data)
print('load into', ctx)
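# With 4 rows and 2 devices, `split_and_load` places rows 0-1 (values 0-11)
# on gpu(0) and rows 2-3 (values 12-23) on gpu(1).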
