From 0501beb84601651bf6a44c5a16ab0e5b98948f78 Mon Sep 17 00:00:00 2001
From: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Date: Tue, 25 Jan 2022 13:46:11 -0600
Subject: [PATCH] =?UTF-8?q?Add=20=F0=9F=A4=97=20Accelerate=20tutorial=20(#?=
 =?UTF-8?q?15263)?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* add accelerate tutorial

* 🖍 apply feedback from review

* 📝 make edits
---
 docs/source/_toctree.yml   |  2 +
 docs/source/accelerate.mdx | 92 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 94 insertions(+)
 create mode 100644 docs/source/accelerate.mdx

diff --git a/docs/source/_toctree.yml b/docs/source/_toctree.yml
index fce02be49fae..2ed4764d0e41 100644
--- a/docs/source/_toctree.yml
+++ b/docs/source/_toctree.yml
@@ -19,6 +19,8 @@
     title: Preprocessing data
   - local: training
     title: Fine-tuning a pretrained model
+  - local: accelerate
+    title: Distributed training with 🤗 Accelerate
   - local: model_sharing
     title: Model sharing and uploading
   - local: tokenizer_summary
diff --git a/docs/source/accelerate.mdx b/docs/source/accelerate.mdx
new file mode 100644
index 000000000000..75325fa9fa05
--- /dev/null
+++ b/docs/source/accelerate.mdx
@@ -0,0 +1,92 @@

# Distributed training with 🤗 Accelerate

As models get bigger, parallelism has emerged as a strategy for training larger models on limited hardware and accelerating training speed by several orders of magnitude. At Hugging Face, we created the [🤗 Accelerate](https://huggingface.co/docs/accelerate/index.html) library to help users easily train a 🤗 Transformers model on any type of distributed setup, whether it is multiple GPUs on one machine or multiple GPUs across several machines. In this tutorial, learn how to customize your native PyTorch training loop to enable training in a distributed environment.

## Setup

Get started by installing 🤗 Accelerate:

```bash
pip install accelerate
```

Then import and create an [`Accelerator`](https://huggingface.co/docs/accelerate/accelerator.html#accelerate.Accelerator) object. [`Accelerator`] will automatically detect your type of distributed setup and initialize all the necessary components for training. You don't need to explicitly place your model on a device.

```py
>>> from accelerate import Accelerator

>>> accelerator = Accelerator()
```

## Prepare to accelerate

The next step is to pass all the relevant training objects to [`prepare`](https://huggingface.co/docs/accelerate/accelerator.html#accelerate.Accelerator.prepare). This includes your training and evaluation DataLoaders, a model, and an optimizer:

```py
>>> train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare(
...     train_dataloader, eval_dataloader, model, optimizer
... )
```

## Backward

The last addition is to replace the typical `loss.backward()` in your training loop with 🤗 Accelerate's [`backward`](https://huggingface.co/docs/accelerate/accelerator.html#accelerate.Accelerator.backward):

```py
>>> for epoch in range(num_epochs):
...     for batch in train_dataloader:
...         outputs = model(**batch)
...         loss = outputs.loss
...         accelerator.backward(loss)
...         optimizer.step()
...         lr_scheduler.step()
...         optimizer.zero_grad()
...         progress_bar.update(1)
```

As you can see in the following image, you only need to add four additional lines of code to your training loop to enable distributed training!
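
To see those additions in context, here is a minimal end-to-end sketch of a native PyTorch training loop adapted for 🤗 Accelerate; the image below summarizes the same changes. The checkpoint, dataset, and hyperparameters used here (`bert-base-cased`, `yelp_review_full`, three epochs) are illustrative assumptions rather than part of this tutorial, so substitute your own objects.

```py
# Minimal sketch of a plain PyTorch training loop with the 🤗 Accelerate additions.
# The checkpoint, dataset, and hyperparameters below are assumed for illustration only.
from accelerate import Accelerator
from datasets import load_dataset
from torch.optim import AdamW
from torch.utils.data import DataLoader
from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_scheduler

accelerator = Accelerator()  # detects the distributed setup

# Tokenize a small dataset and keep only the columns the model expects
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
dataset = load_dataset("yelp_review_full", split="train[:1000]")
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"), batched=True
)
dataset = dataset.remove_columns(["text"]).rename_column("label", "labels")
dataset.set_format("torch")

train_dataloader = DataLoader(dataset, shuffle=True, batch_size=8)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=5)
optimizer = AdamW(model.parameters(), lr=5e-5)

# Let prepare() wrap the dataloader, model, and optimizer for the current setup;
# no manual .to(device) calls are needed afterwards
train_dataloader, model, optimizer = accelerator.prepare(train_dataloader, model, optimizer)

num_epochs = 3
num_training_steps = num_epochs * len(train_dataloader)
lr_scheduler = get_scheduler(
    "linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps
)

model.train()
for epoch in range(num_epochs):
    for batch in train_dataloader:
        outputs = model(**batch)
        loss = outputs.loss
        accelerator.backward(loss)  # replaces loss.backward()
        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()
```

Because [`prepare`] returns wrapped versions of the same objects, the loop body stays plain PyTorch; device placement and gradient synchronization are handled for you.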
![accelerate](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/accelerate.png)

## Train

Once you've added the relevant lines of code, launch your training in a script or a notebook like Colaboratory.

### Train with a script

If you are running your training from a script, run the following command to create and save a configuration file:

```bash
accelerate config
```

Then launch your training with:

```bash
accelerate launch train.py
```

### Train with a notebook

🤗 Accelerate can also run in a notebook if you're planning on using Colaboratory's TPUs. Wrap all the code responsible for training in a function, and pass it to `notebook_launcher`:

```py
>>> from accelerate import notebook_launcher

>>> notebook_launcher(training_function)
```

For more information about 🤗 Accelerate and its rich features, refer to the [documentation](https://huggingface.co/docs/accelerate/index.html).
\ No newline at end of file
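
If it helps to see the notebook workflow end to end, here is a small sketch; the `learning_rate` argument and `num_processes=8` (matching the 8 TPU cores a Colab runtime typically provides) are assumptions for illustration.

```py
from accelerate import notebook_launcher

def training_function(learning_rate):
    # Build the dataloaders, model, optimizer, and Accelerator *inside* the function
    # so each spawned process constructs its own copies, then run the training loop
    # shown above (prepare the objects, call accelerator.backward(loss), and so on).
    ...

# Hypothetical arguments for the training function
args = (5e-5,)
notebook_launcher(training_function, args, num_processes=8)
```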