
# [Example] PyTorch distributed training with minGPT #4464

Open · wants to merge 2 commits into master
Conversation

Michaelvll (Collaborator)

This PR adds a more modern distributed training example.

TODOs:

  • Update the examples in our doc with this example

Tested (run the relevant ones):

  • Code formatting: bash format.sh
  • Any manual or new tests for this PR (please specify below)
  • All smoke tests: pytest tests/test_smoke.py
  • Relevant individual smoke tests: pytest tests/test_smoke.py::test_fill_in_the_name
  • Backward compatibility tests: conda deactivate; bash -i tests/backward_compatibility_tests.sh

@romilbhardwaj (Collaborator) left a comment:

Awesome, thanks @Michaelvll! Left some minor nit comments.

### Using normal `torchrun`


The following command spawn 2 nodes with 2 L4 GPU each.

Suggested change
The following command spawn 2 nodes with 2 L4 GPU each.
The following command will spawn 2 nodes with 2 L4 GPU each:


The main difference between the two for fixed-size distributed training is that the `rdzv` backend handles each node's rank automatically, while normal `torchrun` requires the rank to be set manually.
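
To make the contrast concrete, here is a rough sketch of the two launch styles using standard `torchrun` flags; `MASTER_ADDR` and `NODE_RANK` are placeholder shell variables, and the node/GPU counts mirror the 2x2 example above:

```
# Normal (static) launch: every node runs the same command, but each node
# must be told its own rank explicitly via --node_rank.
torchrun --nnodes=2 --nproc_per_node=2 \
  --node_rank="$NODE_RANK" \
  --master_addr="$MASTER_ADDR" --master_port=8008 \
  main.py

# rdzv launch: ranks are negotiated automatically at the rendezvous
# endpoint, so no per-node rank argument is needed.
torchrun --nnodes=2 --nproc_per_node=2 \
  --rdzv_backend=c10d --rdzv_endpoint="$MASTER_ADDR:29500" --rdzv_id=1 \
  main.py
```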

SkyPilot offers easy built-in environment variables to help you start distributed training easily.

nit

Suggested change
SkyPilot offers easy built-in environment variables to help you start distributed training easily.
SkyPilot offers convenient built-in environment variables to help you start distributed training easily.
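
For reference, these are the main SkyPilot-provided variables (names per SkyPilot's docs; the values shown are illustrative for this 2-node, 2-GPU example):

```
# Available inside a task's `run` section on every node:
echo "$SKYPILOT_NUM_NODES"          # 2 -- number of nodes in the task
echo "$SKYPILOT_NODE_RANK"          # 0 or 1 -- this node's rank
echo "$SKYPILOT_NUM_GPUS_PER_NODE"  # 2 -- accelerators per node
echo "$SKYPILOT_NODE_IPS"           # newline-separated IPs of all nodes

# Common pattern: use the first node's IP as the master address.
MASTER_ADDR=$(echo "$SKYPILOT_NODE_IPS" | head -n1)
```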


The following command spawn 2 nodes with 2 L4 GPU each.

`sky launch -c train.yaml`

Missing cluster name? Also might be nice to put in a code block

Suggested change
`sky launch -c train.yaml`
```
sky launch -c train train.yaml
```


`sky launch -c train.yaml`

In the [train.yaml](./train.yaml), we use `torchrun` to launch the training and set the arguments for distributed training using environment variables provided by SkyPilot.

Suggested change
In the [train.yaml](./train.yaml), we use `torchrun` to launch the training and set the arguments for distributed training using environment variables provided by SkyPilot.
In [train.yaml](./train.yaml), we use `torchrun` to launch the training and set the arguments for distributed training using [environment variables](https://docs.skypilot.co/en/latest/running-jobs/environment-variables.html#skypilot-environment-variables) provided by SkyPilot.
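
For readers following along, a minimal sketch of what such a task YAML could look like (illustrative only; the actual `train.yaml` in this PR may differ in paths, port, and entrypoint):

```
# Hypothetical sketch of a train.yaml-style task; not the PR's exact file.
name: minGPT-ddp

resources:
  accelerators: L4:2
  cpus: 8+

num_nodes: 2

run: |
  # The first node's IP serves as the master address for the static launch.
  MASTER_ADDR=$(echo "$SKYPILOT_NODE_IPS" | head -n1)
  torchrun \
    --nnodes=$SKYPILOT_NUM_NODES \
    --nproc_per_node=$SKYPILOT_NUM_GPUS_PER_NODE \
    --node_rank=$SKYPILOT_NODE_RANK \
    --master_addr=$MASTER_ADDR --master_port=8008 \
    main.py
```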

`rdvz` is an alternative backend for distributed training:

```
sky launch -c train-rdzv.yaml
```

Suggested change
sky launch -c train-rdzv.yaml
sky launch -c train-rdzv train-rdzv.yaml




### Using `rdvz` backend

Suggested change
### Using `rdvz` backend
### Using `rdzv` backend


### Using `rdvz` backend

`rdvz` is an alternative backend for distributed training:

Suggested change
`rdvz` is an alternative backend for distributed training:
`rdzv` is an alternative backend for distributed training:

```
sky launch -c train-rdzv.yaml
```

In the [train-rdzv.yaml](./train-rdzv.yaml), we use `torchrun` to launch the training and set the arguments for distributed training using environment variables provided by SkyPilot.

Suggested change
In the [train-rdzv.yaml](./train-rdzv.yaml), we use `torchrun` to launch the training and set the arguments for distributed training using environment variables provided by SkyPilot.
In [train-rdzv.yaml](./train-rdzv.yaml), we use `torchrun` to launch the training and set the arguments for distributed training using [environment variables](https://docs.skypilot.co/en/latest/running-jobs/environment-variables.html#skypilot-environment-variables) provided by SkyPilot.
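
And a matching sketch of the rdzv variant's `run` section (again illustrative; only the `torchrun` arguments differ from the static version):

```
# Hypothetical run section for a train-rdzv.yaml-style task.
run: |
  MASTER_ADDR=$(echo "$SKYPILOT_NODE_IPS" | head -n1)
  # The c10d rendezvous assigns node ranks automatically; no --node_rank.
  torchrun \
    --nnodes=$SKYPILOT_NUM_NODES \
    --nproc_per_node=$SKYPILOT_NUM_GPUS_PER_NODE \
    --rdzv_backend=c10d \
    --rdzv_endpoint=$MASTER_ADDR:29500 --rdzv_id=1 \
    main.py
```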


For example, the following command will spawn 4 nodes with 4 L4 GPUs each.

`sky launch -c train.yaml --num-nodes 2 --gpus L4:2 --cpus 8+`

Change to num nodes 4 and L4:4.

Suggested change
`sky launch -c train.yaml --num-nodes 2 --gpus L4:2 --cpus 8+`
```
sky launch -c train.yaml --num-nodes 4 --gpus L4:4 --cpus 8+
```
