Question: is chat context lost when using /reset? #3060
Comments
If the chat history grows past the limit of 2048 tokens (for models with a context window of 32k or more; 1024 tokens for smaller ones) and you are in /code mode, then anything beyond that limit gets summarized by the weak model. In /ask mode you can go over the limit.
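For intuition, here is a minimal sketch of what that kind of threshold-triggered summarization can look like. Every name in it (`count_tokens`, `summarize_with_weak_model`) and the "keep the newest half verbatim" split are assumptions made for illustration, not aider's actual implementation:

```python
# A minimal sketch of threshold-triggered history summarization, NOT
# aider's actual code. count_tokens, summarize_with_weak_model, and the
# split strategy are all assumptions chosen for illustration.

def count_tokens(messages: list[dict]) -> int:
    # Crude stand-in for a real tokenizer: roughly 1 token per 4 characters.
    return sum(len(m["content"]) for m in messages) // 4

def summarize_with_weak_model(messages: list[dict]) -> dict:
    # Placeholder for a call to a cheaper "weak" model that condenses
    # the older messages into a single summary message.
    text = " ".join(m["content"] for m in messages)
    return {"role": "assistant",
            "content": f"[summary of {len(messages)} messages] {text[:80]}..."}

def maybe_summarize(history: list[dict], limit: int = 2048) -> list[dict]:
    """If the history exceeds `limit` tokens, replace the oldest messages
    with one summary and keep the most recent messages verbatim."""
    if count_tokens(history) <= limit:
        return history
    history = list(history)  # work on a copy; don't mutate the caller's list
    tail: list[dict] = []
    while history and count_tokens(tail) < limit // 2:
        tail.insert(0, history.pop())  # peel recent messages off the end
    if not history:  # everything fit within the tail budget
        return tail
    return [summarize_with_weak_model(history)] + tail
```

The key point the sketch tries to capture: only the overflow gets condensed, while the newest messages stay in the context verbatim, which matches the behavior described above.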
Thanks for the detailed response. I'll take your word for it and load the conventions file for each chat. But I found the following (see below): the LLM might be using the existing codebase to apply the same conventions even when conventions.md is not read in. Example using the Spring Framework. Repo 1:
Repo 2:
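If you want the conventions applied deterministically rather than relying on that kind of inference, the aider docs suggest loading the conventions file as read-only context, e.g. starting the session with `aider --read CONVENTIONS.md` so the file stays in the chat context (the name CONVENTIONS.md is just the conventional example; substitute whatever your file is called).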
I'm not really familiar with these terms, but if Repo 2 was already using a certain template or writing style that can be recognized from the code (whether seen through the repo map or the files you attached), then the LLM can infer that it should use that template or style and will by default produce code that matches. LLMs are trained as next-token predictors, so they are very good at picking up on patterns and using them to figure out the most likely continuation. If code is written in a certain style, then continuing that code in the same style is pretty much what LLMs do best.
Issue
I was wondering how chat context is defined:
Thanks for your help.
Version and model info
No response