Found OPENROUTER_API_KEY so using openrouter/anthropic/claude-3.5-sonnet since no --model was specified.
Aider v0.73.0
Model: openrouter/anthropic/claude-3.5-sonnet with whole edit format, infinite output
Git repo: .git with 166 files
Repo-map: using 4092 tokens, auto refresh
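In other words, with no --model flag the default seems to follow whichever API key is present in the environment. A rough sketch of that selection logic (an illustration only, not aider's actual code; the final fallback model name is a guess):

```python
import os

def pick_default_model(env: dict) -> str:
    # Illustration of the behavior shown in the log above: when no
    # --model is given, the default follows whichever API key is set.
    if env.get("OPENROUTER_API_KEY"):
        return "openrouter/anthropic/claude-3.5-sonnet"
    if env.get("DEEPSEEK_API_KEY"):
        return "deepseek/deepseek-chat"
    return "gpt-4o"  # assumption: some built-in fallback

print(pick_default_model(dict(os.environ)))
```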
What I expect is:
If I run aider --model r1, the output is:
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Warning: deepseek/deepseek-reasoner expects these environment variables
- DEEPSEEK_API_KEY: Not set
Warning: deepseek/deepseek-chat expects these environment variables
- DEEPSEEK_API_KEY: Not set
Warning: deepseek/deepseek-chat expects these environment variables
- DEEPSEEK_API_KEY: Not set
You can skip this check with --no-show-model-warnings
https://aider.chat/docs/llms/warnings.html
Open documentation url for more info? (Y)es/(N)o/(D)on't ask again [Yes]: N
Aider v0.73.0
Main model: deepseek/deepseek-reasoner with diff edit format, prompt cache, infinite output
Weak model: deepseek/deepseek-chat
Git repo: .git with 166 files
Repo-map: using 4092 tokens, auto refresh
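For what it's worth, the warnings above should disappear once the key is present in the environment before aider starts. A minimal sketch, assuming a POSIX shell and a placeholder key value:

```shell
# Placeholder value for illustration; substitute a real key from the
# DeepSeek platform once logins work again.
export DEEPSEEK_API_KEY="sk-xxxx"
# then run: aider --model r1
echo "DEEPSEEK_API_KEY is ${DEEPSEEK_API_KEY:+set}"
```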
Right now I can't even get a new DeepSeek API token, because I get a 503 every time I try to log in to the API.

Here are my settings (.aider.model.settings.yml):
- name: openrouter/anthropic/claude-3.5-sonnet
  extra_params:
    extra_body:
      provider:
        # Only use these providers, in this order
        order: ["Anthropic"]
        # Don't fall back to other providers
        allow_fallbacks: false
        # Skip providers that may train on inputs
        data_collection: "deny"
        # Only use providers supporting all parameters
        require_parameters: true
- name: openrouter/deepseek/deepseek-r1:free
  extra_params:
    extra_body:
      provider:
        # Only use these providers, in this order
        order: ["Chute"]
        # Don't fall back to other providers
        allow_fallbacks: false
        # Skip providers that may train on inputs
        data_collection: "deny"
        # Only use providers supporting all parameters
        require_parameters: true
- name: openrouter/deepseek/deepseek-chat
  extra_params:
    extra_body:
      provider:
        # Only use these providers, in this order
        order: ["DeepSeek", "Nebia", "Fireworks"]
        # Don't fall back to other providers
        allow_fallbacks: false
        # Skip providers that may train on inputs
        data_collection:
        # Only use providers supporting all parameters
        require_parameters: true
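As I understand it, aider forwards extra_body into the OpenRouter request, so the first entry above should amount to a request body roughly like the following. This is a sketch assuming OpenRouter's documented provider-routing schema; the message content is made up:

```python
import json

# Sketch of the JSON body the first settings entry above asks OpenRouter
# to receive; the "provider" object is OpenRouter's provider-routing
# block, merged in via extra_body.
payload = {
    "model": "anthropic/claude-3.5-sonnet",
    "messages": [{"role": "user", "content": "hello"}],
    "provider": {
        "order": ["Anthropic"],      # only these providers, in this order
        "allow_fallbacks": False,    # don't fall back to other providers
        "data_collection": "deny",   # skip providers that may train on inputs
        "require_parameters": True,  # only providers supporting all params
    },
}
print(json.dumps(payload, indent=2))
```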
Version and model info
Aider v0.73.0
Model: As above
Repo-Map: 4192
R1 is streaming and much faster overall through Fireworks AI, but it constantly hits its token limit:
Model fireworks_ai/accounts/fireworks/models/deepseek-r1 has hit a token limit!
Token counts below are approximate.
Input tokens: ~9,441 of 0 -- possibly exhausted context window!
Output tokens: ~1,908 of 0 -- possibly exceeded output limit!
Total tokens: ~11,349 of 0 -- possibly exhausted context window!
To reduce output tokens:
- Ask for smaller changes in each request.
- Break your code into smaller source files.
- Use a stronger model that can return diffs.
To reduce input tokens:
- Use /tokens to see token usage.
- Use /drop to remove unneeded files from the chat session.
- Use /clear to clear the chat history.
- Break your code into smaller source files.
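Since the report says "of 0", aider apparently has no limit metadata for this model and the counts are approximate. A common characters-divided-by-four heuristic (an illustration only, not the tokenizer aider actually uses) lands in the same ballpark:

```python
def approx_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text and code.
    # Illustration only; real tokenizers vary by model.
    return max(1, len(text) // 4)

# ~40k characters of context would be roughly 10k tokens, close to the
# totals reported above.
print(approx_tokens("a" * 40000))
```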
Here are my settings, from .aider.config.yml and .aider.model.settings.yml (shown above).