Commit

Add fallback token limit in llm.utils.create_chat_completion (Significant-Gravitas#4839)

Co-authored-by: Reinier van der Leer <[email protected]>
lc0rp and Pwuts authored Jun 29, 2023
1 parent 30f153e commit 975094f
Showing 1 changed file with 2 additions and 0 deletions.
2 changes: 2 additions & 0 deletions autogpt/llm/utils/__init__.py
```diff
@@ -115,6 +115,8 @@ def create_chat_completion(
         model = prompt.model.name
     if temperature is None:
         temperature = config.temperature
+    if max_tokens is None:
+        max_tokens = OPEN_AI_CHAT_MODELS[model].max_tokens - prompt.token_length

     logger.debug(
         f"{Fore.GREEN}Creating chat completion with model {model}, temperature {temperature}, max_tokens {max_tokens}{Fore.RESET}"
```
