This folder contains actively maintained examples of the use of 🤗 Transformers, organized by NLP task. If you are looking for an example that used to be in this folder, it may have moved to our research projects subfolder (which contains frozen snapshots of research projects) or to the legacy subfolder.
While we strive to present as many use cases as possible, the scripts in this folder are just examples. It is expected that they won't work out of the box on your specific problem and that you will be required to change a few lines of code to adapt them to your needs. To help you with that, all the PyTorch versions of the examples fully expose the preprocessing of the data, so you can easily tweak them.
Similarly, if you want the scripts to report a different metric than the one they currently use, look at the `compute_metrics` function inside the script. It takes the full arrays of predictions and labels and has to return a dictionary of string keys and float values. Just change it to add (or replace) your own metric among the ones already reported.
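As an illustration, here is roughly what a classification-style `compute_metrics` could look like (the exact signature varies slightly between scripts; this sketch assumes the `Trainer`-style `EvalPrediction` input):

```python
import numpy as np


def compute_metrics(eval_pred):
    # eval_pred bundles the full prediction and label arrays
    # (the predictions are logits in most classification scripts).
    logits, labels = eval_pred.predictions, eval_pred.label_ids
    preds = np.argmax(logits, axis=-1)
    # Return a dict mapping string keys to float values; every entry is reported.
    return {"accuracy": float((preds == labels).mean())}
```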
Please discuss any feature you would like to implement in an example on the forum or in an issue before submitting a PR: we welcome bug fixes, but since we want to keep the examples as simple as possible, it's unlikely we will merge a pull request that adds more functionality at the cost of readability.
Important
To make sure you can successfully run the latest versions of the example scripts, you have to install the library from source and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:
git clone https://github.com/huggingface/transformers
cd transformers
pip install .
Then `cd` into the example folder of your choice and run:
pip install -r requirements.txt
To browse the examples corresponding to released versions of 🤗 Transformers, click on the line below and then on your desired version of the library:
Examples for older versions of 🤗 Transformers
Alternatively, you can switch your cloned 🤗 Transformers to a specific version (for instance v3.5.1) with
git checkout tags/v3.5.1
and run the example command as usual afterward.
Here is the list of all our examples:
- with information on whether they are built on top of `Trainer`/`TFTrainer` (if not, they still work, they might just lack some features),
- whether or not they leverage the 🤗 Datasets library,
- links to Colab notebooks to walk through the scripts and run them easily.
Task | Example datasets | Trainer support | TFTrainer support | 🤗 Datasets | Colab |
---|---|---|---|---|---|
language-modeling | Raw text | ✅ | - | ✅ | |
multiple-choice | SWAG, RACE, ARC | ✅ | ✅ | ✅ | |
question-answering | SQuAD | ✅ | ✅ | ✅ | |
summarization | CNN/Daily Mail | ✅ | - | - | - |
text-classification | GLUE, XNLI | ✅ | ✅ | ✅ | |
text-generation | - | n/a | n/a | - | |
token-classification | CoNLL NER | ✅ | ✅ | ✅ | |
translation | WMT | ✅ | - | - | - |
Most examples are equipped with a mechanism to truncate the dataset to a desired number of samples. This is useful for debugging, for example to quickly check that all stages of the program can complete before running the same setup on the full dataset, which may take hours. For example, here is how to truncate all three splits to just 50 samples each:
examples/token-classification/run_ner.py \
--max_train_samples 50 \
--max_val_samples 50 \
--max_test_samples 50 \
[...]
Most example scripts support the first two command-line arguments and some support the third one. You can quickly check whether a given example supports any of these by passing it the `-h` option, e.g.:
examples/token-classification/run_ner.py -h
You can resume training from a previous checkpoint like this:
- Pass `--output_dir previous_output_dir` without `--overwrite_output_dir` to resume training from the latest checkpoint in `output_dir` (what you would use if the training was interrupted, for instance).
- Pass `--model_name_or_path path_to_a_specific_checkpoint` to resume training from that checkpoint folder.
Should you want to turn an example into a notebook where you'd no longer have access to the command line, 🤗 Trainer supports resuming from a checkpoint via `trainer.train(resume_from_checkpoint)`:
- If `resume_from_checkpoint` is `True`, it will look for the last checkpoint in the value of `output_dir` passed via `TrainingArguments`.
- If `resume_from_checkpoint` is a path to a specific checkpoint, it will use that saved checkpoint folder to resume the training from.
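Here is a minimal sketch of what this could look like in a notebook, assuming `model`, `args` and `train_dataset` have already been created (the checkpoint path is illustrative):

```python
from transformers import Trainer

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)

# Resume from the last checkpoint found in args.output_dir:
trainer.train(resume_from_checkpoint=True)

# Alternatively, point to a specific saved checkpoint folder:
# trainer.train(resume_from_checkpoint="output/checkpoint-500")
```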
All the PyTorch scripts mentioned above work out of the box with distributed training and mixed precision, thanks to the Trainer API. To launch one of them on `n` GPUs, use the following command:
python -m torch.distributed.launch \
--nproc_per_node number_of_gpu_you_have path_to_script.py \
--all_arguments_of_the_script
As an example, here is how you would fine-tune the BERT large model (with whole word masking) on the text classification MNLI task using the `run_glue` script, with 8 GPUs:
python -m torch.distributed.launch \
--nproc_per_node 8 text-classification/run_glue.py \
--model_name_or_path bert-large-uncased-whole-word-masking \
--task_name mnli \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 8 \
--learning_rate 2e-5 \
--num_train_epochs 3.0 \
--output_dir /tmp/mnli_output/
If you have a GPU with mixed precision capabilities (architecture Pascal or more recent), you can use mixed precision training with PyTorch 1.6.0 or later, or by installing the Apex library for previous versions. Just add the flag `--fp16` to your command launching one of the scripts mentioned above! Using mixed precision training usually results in a 2x speedup with the same final results (as shown in this table for text classification).
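If you drive the `Trainer` from your own code rather than through the example scripts, the rough equivalent of the flag is the `fp16` training argument, e.g. (the output directory is illustrative):

```python
from transformers import TrainingArguments

# fp16=True mirrors the --fp16 command-line flag and enables mixed precision
# (it requires a GPU with mixed precision support, as described above).
args = TrainingArguments(output_dir="/tmp/mnli_output", fp16=True)
```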
When using TensorFlow, TPUs are supported out of the box as a `tf.distribute.Strategy`.
When using PyTorch, we support TPUs thanks to `pytorch/xla`. For more context and information on how to set up your TPU environment, refer to Google's documentation and to the very detailed pytorch/xla README.
In this repo, we provide a very simple launcher script named `xla_spawn.py` that lets you run our example scripts on multiple TPU cores without any boilerplate. Just pass a `--num_cores` flag to this script, then your regular training script with its arguments (this is similar to the `torch.distributed.launch` helper for `torch.distributed`):
python xla_spawn.py --num_cores num_tpu_you_have \
path_to_script.py \
--all_arguments_of_the_script
As an example, here is how you would fine-tune the BERT large model (with whole word masking) on the text classification MNLI task using the `run_glue` script, with 8 TPU cores:
python xla_spawn.py --num_cores 8 \
text-classification/run_glue.py \
--model_name_or_path bert-large-uncased-whole-word-masking \
--task_name mnli \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 8 \
--learning_rate 2e-5 \
--num_train_epochs 3.0 \
--output_dir /tmp/mnli_output/
You can easily log and monitor your runs. The following integrations are currently supported:
To use Weights & Biases, install the wandb package with:
pip install wandb
Then log in from the command line:
wandb login
If you are in a Jupyter or Colab notebook, you should log in with:
import wandb
wandb.login()
To enable logging to W&B, include `"wandb"` in the `report_to` argument of your `TrainingArguments` or script. Or just pass along `--report_to all` if you have `wandb` installed.
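For example, when building the `TrainingArguments` yourself, this could look roughly like (the output directory is illustrative):

```python
from transformers import TrainingArguments

# Report metrics to Weights & Biases only; use "all" to enable every installed integration.
args = TrainingArguments(output_dir="/tmp/mnli_output", report_to=["wandb"])
```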
Whenever you use the `Trainer` or `TFTrainer` classes, your losses, evaluation metrics, model topology and gradients (for `Trainer` only) will automatically be logged.
Advanced configuration is possible by setting environment variables:
Environment Variable | Options |
---|---|
WANDB_LOG_MODEL | Log the model as an artifact at the end of training (`false` by default) |
WANDB_WATCH | One of `gradients` (default), `all`, or `false`: whether to log histograms of gradients and/or parameters |
WANDB_PROJECT | Organize runs by project |
Set run names with the `run_name` argument present in scripts or as part of `TrainingArguments`.
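Putting these together, a rough sketch could look like this (the project and run names below are made up):

```python
import os

from transformers import TrainingArguments

# These environment variables are read by the W&B integration when training starts.
os.environ["WANDB_PROJECT"] = "mnli-experiments"  # group runs under one project
os.environ["WANDB_LOG_MODEL"] = "true"            # upload the final model as an artifact

# run_name sets the display name of the run in the W&B UI.
args = TrainingArguments(
    output_dir="/tmp/mnli_output",
    report_to=["wandb"],
    run_name="bert-large-wwm-mnli",
)
```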
Additional configuration options are available through generic wandb environment variables.
Refer to related documentation & examples.
To use `comet_ml`, install the Python package with:
pip install comet_ml
or if in a Conda environment:
conda install -c comet_ml -c anaconda -c conda-forge comet_ml
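Once `comet_ml` is installed, the Comet integration picks up your credentials from the environment. A minimal sketch (the variable names below are the usual Comet settings, and the values are placeholders):

```python
import os

# Placeholders only; use your own Comet API key and project name.
os.environ["COMET_API_KEY"] = "YOUR_COMET_API_KEY"
os.environ["COMET_PROJECT_NAME"] = "transformers-examples"

# With comet_ml installed, runs launched through the example scripts
# (or your own Trainer code) are then logged to Comet automatically.
```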