Fix evaluation.md checkpoint dirs
carmocca committed Oct 4, 2023
1 parent e2da0fc commit 3bd596c
Showing 1 changed file with 3 additions and 3 deletions.
6 changes: 3 additions & 3 deletions tutorials/evaluation.md
@@ -20,7 +20,7 @@ Use the following command to evaluate Lit-GPT models on all tasks in Eleuther AI

```bash
python eval/lm_eval_harness.py \
--checkpoint_dir "checkpoints/Llama-2-7b-hf/" \
--checkpoint_dir "checkpoints/meta-llama/Llama-2-7b-hf" \
--precision "bf16-true" \
--batch_size 4 \
--save_filepath "results.json"
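# Sketch (assumption, not part of this diff): the corrected path matches the layout
# produced by Lit-GPT's download and conversion scripts, which nest checkpoints under
# the Hugging Face organization name. Assuming scripts/download.py and
# scripts/convert_hf_checkpoint.py as they existed in the repository at this time:
python scripts/download.py --repo_id meta-llama/Llama-2-7b-hf
python scripts/convert_hf_checkpoint.py --checkpoint_dir checkpoints/meta-llama/Llama-2-7b-hf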
@@ -30,7 +30,7 @@ To evaluate on LLMs on specific tasks, for example, TruthfulQA and HellaSwag, yo

```bash
python eval/lm_eval_harness.py \
--checkpoint_dir "checkpoints/Llama-2-7b-hf/" \
--checkpoint_dir "checkpoints/meta-llama/Llama-2-7b-hf" \
--eval_tasks "[truthfulqa_mc,hellaswag]" \
--precision "bf16-true" \
--batch_size 4 \
@@ -57,7 +57,7 @@ For LoRA-finetuned models, you need to first merge the LoRA weights with the ori

```shell
python eval/lm_eval_harness.py \
--checkpoint_dir "checkpoints/Llama-2-7b-hf/" \
--checkpoint_dir "checkpoints/meta-llama/Llama-2-7b-hf" \
--precision "bf16-true" \
--eval_tasks "[hendrycksTest*]" \
--batch_size 4 \
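# Sketch (assumption, not part of this diff): the "merge the LoRA weights" step
# mentioned above is taken to use Lit-GPT's scripts/merge_lora.py; the flag names
# and the adapter path below are hypothetical and should be checked against the
# repository's LoRA finetuning tutorial.
python scripts/merge_lora.py \
  --checkpoint_dir "checkpoints/meta-llama/Llama-2-7b-hf" \
  --lora_path "out/lora/lit_model_lora_finetuned.pth" \
  --out_dir "out/lora_merged/Llama-2-7b-hf"
# Then point --checkpoint_dir of eval/lm_eval_harness.py at the merged out_dir.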
