OLMo: Open Language Model

Modeling, training, eval, and inference code for OLMo

Installation

pip install ai2-olmo
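
Once installed, a quick way to sanity-check the setup is to load a pretrained checkpoint through Hugging Face transformers. The following is a minimal sketch, assuming the allenai/OLMo-7B checkpoint on the Hugging Face Hub and the hf_olmo module that ships with ai2-olmo to register OLMo with the transformers Auto classes:

import hf_olmo  # registers OLMo with the transformers Auto* classes
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B")
model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B")

# Generate a short continuation as a smoke test.
inputs = tokenizer("Language modeling is", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))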

Fine-tuning

To fine-tune an OLMo model you'll first need to prepare your dataset by tokenizing it and saving the token IDs to a flat numpy memory-mapped array. See scripts/prepare_tulu_data.py for an example with the Tulu V2 dataset, which can be easily modified for other datasets.
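
As a rough illustration of that step, here is a minimal Python sketch that tokenizes a couple of documents and writes the token IDs (plus an optional loss mask) to flat memory-mapped arrays. The tokenizer choice, file names, and dtypes are assumptions for the sake of the example; scripts/prepare_tulu_data.py is the authoritative reference and additionally handles details such as per-example padding and truncation.

import numpy as np
from transformers import AutoTokenizer

# Illustrative sketch only; see scripts/prepare_tulu_data.py for the real script.
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-7B")
documents = ["First training document.", "Second training document."]

token_ids, label_mask = [], []
for doc in documents:
    ids = tokenizer(doc)["input_ids"]
    token_ids.extend(ids)
    label_mask.extend([True] * len(ids))  # True = token counts toward the loss

# Flat memory-mapped arrays on disk; uint16 covers OLMo's ~50k vocabulary.
ids_out = np.memmap("input_ids.npy", dtype=np.uint16, mode="w+", shape=(len(token_ids),))
ids_out[:] = token_ids
ids_out.flush()

mask_out = np.memmap("label_mask.npy", dtype=np.bool_, mode="w+", shape=(len(label_mask),))
mask_out[:] = label_mask
mask_out.flush()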

Next, prepare your training config. There are many examples in the configs/ directory that you can use as a starting point. The most important thing is to make sure the model parameters (the model field in the config) match up with the checkpoint you're starting from. To be safe you can always start from the config that comes with the model checkpoint. At a minimum you'll need to make the following changes to the config or provide the corresponding overrides from the command line:

  • Update load_path to point to the checkpoint you want to start from.
  • Set reset_trainer_state to true.
  • Update data.paths to point to the input_ids.npy file you generated.
  • Optionally update data.label_mask_paths to point to the label_mask.npy file you generated, if you need special masking for the loss.
  • Update evaluators to add/remove in-loop evaluations.

Once you're satisfied with your training config, you can launch the training job via torchrun. For example:

torchrun --nproc_per_node=8 scripts/train.py {path_to_train_config} \
    --data.paths=[{path_to_data}/input_ids.npy] \
    --data.label_mask_paths=[{path_to_data}/label_mask.npy] \
    --load_path={path_to_checkpoint} \
    --reset_trainer_state

Note: passing CLI overrides like --reset_trainer_state is only necessary if you didn't update those fields in your config.
