Merge pull request meta-llama#754 from JaredLevi18/fix
Making a small change to avoid confusion.
jspisak authored Sep 3, 2023
2 parents 7565eb6 + 8580eb9 commit 4e24858
Showing 1 changed file with 2 additions and 0 deletions.
llama/model.py (2 additions, 0 deletions)

@@ -448,6 +448,8 @@ def __init__(self, params: ModelArgs):
         )
 
         self.freqs_cis = precompute_freqs_cis(
+            # Note that self.params.max_seq_len is multiplied by 2 because the token limit for the Llama 2 generation of models is 4096.
+            # Adding this multiplier instead of using 4096 directly allows for dynamism of token lengths while training or fine-tuning.
             self.params.dim // self.params.n_heads, self.params.max_seq_len * 2
         )

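For context, precompute_freqs_cis precomputes the rotary positional embedding (RoPE) frequency table, and its second argument sets how many positions that table covers. Below is a minimal sketch of such a computation, assuming the standard RoPE formulation; the function body is not shown in this diff, so treat it as illustrative rather than the file's exact code.

import torch

def precompute_freqs_cis(dim: int, end: int, theta: float = 10000.0) -> torch.Tensor:
    # Rotation frequency for each pair of dimensions: theta^(-2i/dim).
    freqs = 1.0 / (theta ** (torch.arange(0, dim, 2).float() / dim))
    # One rotation angle per position, for `end` positions.
    t = torch.arange(end, dtype=torch.float32)
    angles = torch.outer(t, freqs)
    # Complex exponentials e^(i * angle), shape (end, dim // 2).
    return torch.polar(torch.ones_like(angles), angles)

# Illustrative numbers (assumed 7B-style config, not stated in the diff):
# with dim = 4096, n_heads = 32, and max_seq_len = 2048, the call above
# builds a table of 4096 // 32 = 128 frequency dimensions over
# 2048 * 2 = 4096 positions, matching the 4096-token limit the new
# comments describe.

Sizing the table from max_seq_len * 2 rather than hard-coding 4096 keeps the number of precomputed positions in step with whatever sequence length the model is trained or fine-tuned with, which is the "dynamism" the added comments refer to.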
