
Question: is chat context lost when using /reset? #3060

Closed
RuminationScape opened this issue Jan 29, 2025 · 3 comments

@RuminationScape

Issue

I was wondering how chat context is defined:

  1. Does every invocation of aider start a new chat context?
  2. Does /reset clear the chat context in addition to dropping the files?
  3. If I load conventions.md at the start, do I need to load it again after a /reset?

Thanks for your help.

Version and model info

No response

@Shadetail

  1. Yes
  2. Yes. Though you can use /drop to drop just the files and /clear to clear just the history.
  3. Yes: if you loaded a file and then dropped it, the LLM will no longer see it. Aider manages history and files itself. Every time you send a message, it sends the model the system prompt instructions first, then the repo map, then the currently attached files, then the history (context) of your exchange so far, and finally your latest message.
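The assembly order described above can be sketched roughly like this (a minimal illustration with hypothetical names, not aider's actual internals):

```python
# Sketch of the per-request prompt assembly order: system prompt,
# repo map, attached files, chat history, then the latest message.
def build_request(system_prompt, repo_map, attached_files, history, user_message):
    """Return the message list sent to the model on each API call."""
    messages = [{"role": "system", "content": system_prompt}]
    if repo_map:
        messages.append({"role": "user", "content": repo_map})
    for path, contents in attached_files:
        messages.append({"role": "user", "content": f"{path}:\n{contents}"})
    messages.extend(history)  # prior exchange, possibly summarized
    messages.append({"role": "user", "content": user_message})
    return messages

request = build_request(
    "You are a coding assistant.",
    "<repo map>",
    [("main.py", "print('hi')")],
    [{"role": "user", "content": "earlier question"}],
    "implement task A",
)
```

The key point is that nothing persists on the model's side: the whole list is rebuilt and resent on every call, which is why anything you /drop simply stops being included.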

If history grows past the limit of 2048 tokens (for models with a context window of 32k or more; 1024 otherwise) and you are in /code mode, anything beyond that gets summarized by the weak model. In /ask mode you can go over the limit. --max-chat-history-tokens lets you raise the maximum history size to whatever you wish.

You can set AIDER_LLM_HISTORY_FILE=.aider.llm.history in your .env file to log the raw output that gets sent to the LLM with every API call; that should help you understand how aider works internally. You can also clone the aider repo, run aider inside it, and ask it questions — that's how I learned everything I couldn't find in the documentation.

One other useful option is --map-tokens followed by the number of tokens you'd like to spend on the repo map. You can disable the repo map by setting it to 0 when you don't need it, or drastically expand it, which is useful for things like asking aider questions about its own codebase. Another useful option is --restore-chat-history, which continues an existing session.
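Putting the options from the comment above together, invocations would look roughly like this (flag names are taken from the comment; exact values and defaults may vary by aider version):

```shell
# Raise the cap on chat history before summarization (value is illustrative)
aider --max-chat-history-tokens 8192

# Repo map: disable it entirely, or expand it for codebase Q&A
aider --map-tokens 0
aider --map-tokens 4096

# Resume the previous session's chat history
aider --restore-chat-history

# In .env, log the raw messages sent to the LLM on every API call:
# AIDER_LLM_HISTORY_FILE=.aider.llm.history
```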

@RuminationScape (Author)

Thanks for the detailed response.

I'll take your word for it and load the conventions file for each chat. But I found something interesting; see below.


The LLM might be using the existing codebase to follow the same conventions even when conventions.md is not read in. Example using the Spring Framework:

Repo 1:

  1. implement task B
  2. LLM uses JpaRepository for the DAO layer.

Repo 2:

  1. /read conventions: instruction to use JdbcTemplate
  2. implement task A. LLM uses JdbcTemplate.
  3. /reset
  4. implement task B. LLM uses JdbcTemplate; expected it to use JPA as in Repo 1.

@Shadetail

I'm not really familiar with these terms, but if Repo 2 was already using a certain template or writing style in a way that is recognizable from the code — whether seen through the repo map or through files you attached — then the LLM can infer that it should use that template or style, and by default it will produce code that matches. LLMs are trained as next-token predictors, so they're very good at picking up on patterns and using them to work out the most likely continuation. If code is written in a certain style, continuing that code in the same style is pretty much what LLMs are best at.
