From 644d548549de4a9becb64ae2246993bc5b0c76bf Mon Sep 17 00:00:00 2001
From: Zach Nussbaum
Date: Tue, 28 Mar 2023 13:46:51 -0700
Subject: [PATCH] Update TRAINING_LOG.md

---
 TRAINING_LOG.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/TRAINING_LOG.md b/TRAINING_LOG.md
index 744038ccd1c5..8a2d9489c4ca 100644
--- a/TRAINING_LOG.md
+++ b/TRAINING_LOG.md
@@ -234,4 +234,4 @@ Taking inspiration from [the Alpaca Repo](https://github.com/tatsu-lab/stanford_
 
 Comparing our model LoRa to the [Alpaca LoRa](https://huggingface.co/tloen/alpaca-lora-7b), our model has lower perplexity. Qualitatively, training on 3 epochs performed the best on perplexity as well as qualitative examples.
 
-We tried training a full model using the parameters above, but found that during the second epoch the model overfit.
+We tried training a full model using the parameters above, but found that during the second epoch the model diverged and samples generated post-training were worse than those from the first epoch.