Add 🤗 Accelerate tutorial (huggingface#15263)
* add accelerate tutorial

* 🖍 apply feedback from review

* 📝 make edits
stevhliu authored Jan 25, 2022
1 parent 637e817 commit 0501beb
Showing 2 changed files with 94 additions and 0 deletions.
2 changes: 2 additions & 0 deletions docs/source/_toctree.yml
@@ -19,6 +19,8 @@
title: Preprocessing data
- local: training
title: Fine-tuning a pretrained model
- local: accelerate
title: Distributed training with 🤗 Accelerate
- local: model_sharing
title: Model sharing and uploading
- local: tokenizer_summary
92 changes: 92 additions & 0 deletions docs/source/accelerate.mdx
@@ -0,0 +1,92 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Distributed training with 🤗 Accelerate

As models get bigger, parallelism has emerged as a strategy for training larger models on limited hardware and accelerating training speed by several orders of magnitude. At Hugging Face, we created the [🤗 Accelerate](https://huggingface.co/docs/accelerate/index.html) library to help users easily train a 🤗 Transformers model on any type of distributed setup, whether it is multiple GPUs on one machine or multiple GPUs across several machines. In this tutorial, learn how to customize your native PyTorch training loop to enable training in a distributed environment.

## Setup

Get started by installing 🤗 Accelerate:

```bash
pip install accelerate
```

Then import and create an [`Accelerator`](https://huggingface.co/docs/accelerate/accelerator.html#accelerate.Accelerator) object. [`Accelerator`] will automatically detect your type of distributed setup and initialize all the necessary components for training. You don't need to explicitly place your model on a device.

```py
>>> from accelerate import Accelerator

>>> accelerator = Accelerator()
```
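
If part of your code still needs an explicit device, for example for tensors you create outside of a DataLoader, you can read it from the accelerator instead of hard-coding `"cuda"`. A minimal sketch using the `Accelerator.device` attribute (the tensor below is purely illustrative):

```py
>>> import torch

>>> device = accelerator.device  # the device 🤗 Accelerate selected for this process
>>> extra_tensor = torch.zeros(2, 2).to(device)  # only needed for tensors you create yourself
```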

## Prepare to accelerate

The next step is to pass all the relevant training objects to [`prepare`](https://huggingface.co/docs/accelerate/accelerator.html#accelerate.Accelerator.prepare). This includes your training and evaluation DataLoaders, a model and an optimizer:

```py
>>> train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare(
...     train_dataloader, eval_dataloader, model, optimizer
... )
```
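
These are the same objects your existing PyTorch training loop already uses; 🤗 Accelerate does not require building them in any special way, and [`prepare`] returns them wrapped for whichever distributed setup was detected. If it helps to see concrete objects, here is a minimal, self-contained sketch in which the checkpoint name, toy dataset, batch size, and learning rate are illustrative assumptions rather than part of this tutorial:

```py
>>> import torch
>>> from torch.optim import AdamW
>>> from torch.utils.data import DataLoader, Dataset
>>> from transformers import AutoModelForSequenceClassification

>>> class ToyDataset(Dataset):
...     """Random token ids and labels standing in for a real tokenized dataset."""
...
...     def __len__(self):
...         return 64
...
...     def __getitem__(self, idx):
...         return {"input_ids": torch.randint(0, 100, (16,)), "labels": torch.tensor(0)}

>>> model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)
>>> optimizer = AdamW(model.parameters(), lr=3e-5)
>>> train_dataloader = DataLoader(ToyDataset(), shuffle=True, batch_size=8)
>>> eval_dataloader = DataLoader(ToyDataset(), batch_size=8)
```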

## Backward

The last addition is to replace the typical `loss.backward()` in your training loop with 🤗 Accelerate's [`backward`](https://huggingface.co/docs/accelerate/accelerator.html#accelerate.Accelerator.backward):

```py
>>> for epoch in range(num_epochs):
...     for batch in train_dataloader:
...         outputs = model(**batch)
...         loss = outputs.loss
...         accelerator.backward(loss)
...
...         optimizer.step()
...         lr_scheduler.step()
...         optimizer.zero_grad()
...         progress_bar.update(1)
```

As you can see in the following image, you only need to add four additional lines of code to your training loop to enable distributed training!

![accelerate](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/accelerate.png)
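
The `eval_dataloader` you prepared earlier is used the same way. As a hedged sketch of an evaluation loop, assuming a `metric` object with an `add_batch`/`compute` API (for example one loaded with the 🤗 Datasets library), predictions and labels can be gathered from every process with [`gather`](https://huggingface.co/docs/accelerate/accelerator.html#accelerate.Accelerator.gather) before the metric is updated:

```py
>>> import torch

>>> model.eval()
>>> for batch in eval_dataloader:
...     with torch.no_grad():
...         outputs = model(**batch)
...     predictions = outputs.logits.argmax(dim=-1)
...     # Collect predictions and labels from all processes before computing the metric.
...     predictions, references = accelerator.gather((predictions, batch["labels"]))
...     metric.add_batch(predictions=predictions, references=references)

>>> metric.compute()
```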

## Train

Once you've added the relevant lines of code, launch your training in a script or a notebook like Colaboratory.

### Train with a script

If you are running your training from a script, run the following command to create and save a configuration file:

```bash
accelerate config
```

Then launch your training with:

```bash
accelerate launch train.py
```
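
Here, `train.py` is simply your training script. As a point of reference, a hypothetical minimal `train.py` that ties the snippets above together might look like the following; the checkpoint, toy dataset, scheduler choice, and hyperparameters are illustrative assumptions:

```py
# train.py - minimal sketch combining the steps from this tutorial
import torch
from accelerate import Accelerator
from torch.optim import AdamW
from torch.utils.data import DataLoader, Dataset
from transformers import AutoModelForSequenceClassification, get_scheduler


class ToyDataset(Dataset):
    """Random token ids and labels standing in for a real tokenized dataset."""

    def __len__(self):
        return 64

    def __getitem__(self, idx):
        return {"input_ids": torch.randint(0, 100, (16,)), "labels": torch.tensor(0)}


def main():
    accelerator = Accelerator()

    model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)
    optimizer = AdamW(model.parameters(), lr=3e-5)
    train_dataloader = DataLoader(ToyDataset(), shuffle=True, batch_size=8)
    eval_dataloader = DataLoader(ToyDataset(), batch_size=8)

    # Let 🤗 Accelerate wrap the objects for the detected distributed setup.
    train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare(
        train_dataloader, eval_dataloader, model, optimizer
    )

    num_epochs = 3
    num_training_steps = num_epochs * len(train_dataloader)
    lr_scheduler = get_scheduler(
        "linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps
    )

    model.train()
    for epoch in range(num_epochs):
        for batch in train_dataloader:
            outputs = model(**batch)
            loss = outputs.loss
            accelerator.backward(loss)

            optimizer.step()
            lr_scheduler.step()
            optimizer.zero_grad()


if __name__ == "__main__":
    main()
```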

### Train with a notebook

🤗 Accelerate can also run in a notebook if you're planning on using Colaboratory's TPUs. Wrap all the code responsible for training in a function, and pass it to `notebook_launcher`:

```py
>>> from accelerate import notebook_launcher

>>> notebook_launcher(training_function)
```
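
If your training function takes arguments, or you want to set the number of processes explicitly (for example, 8 for a Colaboratory TPU), both can be passed to `notebook_launcher`. A small usage sketch, where the `training_function(model_name, batch_size)` signature is purely illustrative:

```py
>>> from accelerate import notebook_launcher

>>> # Illustrative assumption: training_function accepts (model_name, batch_size).
>>> args = ("distilbert-base-uncased", 8)
>>> notebook_launcher(training_function, args, num_processes=8)
```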

For more information about 🤗 Accelerate and its rich features, refer to the [documentation](https://huggingface.co/docs/accelerate/index.html).
