Releases · radareorg/r2ai
0.9.0
What's Changed
- Update litellm by @dnakov in #94
- Make r2ai-plugin work with r2pm by @dnakov in #90
- Automatically create releases on tag pushes by @prodrigestivill in #95
Full Changelog: 0.8.8...0.9.0
0.8.8
What's Changed
- Packaging redo by @dnakov in #67
- Use litellm for remote models in chat mode by @dnakov in #68
- Make sure r2lang and r2 are in sync by @dnakov in #70
- UI fixes and cleanup by @dnakov in #71
- Rewrite auto to use litellm; add spinner by @dnakov in #69
- Makefile fixes by @dnakov in #72
- README updates by @dnakov in #73
- Add .tcss files to package by @dnakov in #74
- Fix llama in auto by @dnakov in #76
- Use chat completions for llama.cpp by @dnakov in #75
- Update claude sonnet version by @dnakov in #78
- Use portable script sourcing by @AdamBromiley in #81
- Make llama verbose respond to logger settings; Add the functionary to… by @dnakov in #82
- Rawdog auto by @dnakov in #83
- Auto updates to handle more params and non-streaming mode by @dnakov in #85
- Fix VV for local models by @dnakov in #86
- Fix VV chat taking over main thread by @dnakov in #87
- fix some auto streaming bugs by @dnakov in #88
- Add new execute_binary command for auto; some UI bug fixes by @dnakov in #89
- Fix not getting python output when running as plugin; fix litellm con… by @dnakov in #91
- Use lazy imports to speed up initial loading time by @dnakov in #92
- Fix auto rawdog mode for qwen-coder by @dnakov in #93
New Contributors
- @AdamBromiley made their first contribution in #81
Full Changelog: 0.8.6...0.8.8
0.8.6
What's Changed
- Allow passing model when using openapi by @nitanmarcel in #36
- Use the requests library for openai/openapi only when necessary by @nitanmarcel in #37
- Use Python's built-in logging instead of printing directly by @nitanmarcel in #38
- Fix a bug introduced in the last 2 commits by @nitanmarcel in #39
- Add llm.repeat_penalty option by @trufae in #42
- Make sure we close the server properly when handling KeyboardInterrupt by @nitanmarcel in #43
- Remove repeat_penalty for openai based ais by @nitanmarcel in #44
- Setup progress handlers with rich Progress by @nitanmarcel in #45
- [WIP] UI by @dnakov in #47
- Add llm.layers to make it configurable by @trufae in #50
- Only log shutting down webserver when running in ^C by @trufae in #51
- Don't lose the prompt when failing to start the webserver by @trufae in #52
- Fix TUI File Browser and chat bug by @dnakov in #53
- Fix TUI Chat messages formatting by @dnakov in #54
- Fix TUI tool calls formatting by @dnakov in #55
- Make more stuff optional to load it as an r2 plugin again by @trufae in #58
- TUI updates and some error handling by @dnakov in #60
- TUI: Ask for API KEY if one is not set in env by @dnakov in #61
- TUI: switch to using r2cmd {; get filename if binary already opened by @dnakov in #62
- Support setting callbacks on env change; Move debug level set in env by @nitanmarcel in #63
- Add filter_print to support filtering prints by @nitanmarcel in #64
- Add support for HuggingFace 🤗 inference API by @brainstorm in #65
New Contributors
- @nitanmarcel made their first contribution in #36
- @brainstorm made their first contribution in #65
Full Changelog: 0.8.4...0.8.6
0.8.4
What's Changed
- Add r2plugin Makefile system install by @prodrigestivill in #32
- Several fixes for standalone mode, add support for AWS bedrock models by @FernandoDoming in #33
New Contributors
- @prodrigestivill made their first contribution in #32
- @FernandoDoming made their first contribution in #33
Full Changelog: 0.8.2...0.8.4
0.8.2
Full Changelog: 0.8.0...0.8.2
- decai plugin: an r2ai-based decompiler for radare2
- Pin the llamacpp version to avoid installation issues
- Fix autocompletion on Linux terminals
- Improved prompt
- Use chromadb instead of vectordb, and use it by default
- Add support for DuckDuckGo web scraping
- Support JSON output
- Handle <<EOF for multiline inputs (see the sketch after this list)
- Add support for llama3
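The multiline input support is heredoc-style; a minimal sketch of how it reads (the prompt and the question are illustrative, not from the release):

```
[r2ai:0x00000000]> explain this function <<EOF
focus on the error handling
and suggest a better name for it
EOF
```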
0.8.0
What's Changed
Packaging:
- Fix installation and usage; requires the latest r2pipe
- Separate r2pm packages for r2ai and r2ai-plugin
- r2pm -r r2ai-server launches llamafile, llama and koboldcpp for use with the openapi backend (see the sketch after this list)
- Use the latest huggingface api
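A minimal sketch of the packaging flow described above, assuming a recent radare2 with r2pm initialized (-ci performs a clean install; package names as listed):

```
r2pm -ci r2ai          # standalone r2ai
r2pm -ci r2ai-plugin   # the r2 core plugin, packaged separately
r2pm -r r2ai-server    # serve llamafile/llama/koboldcpp for the openapi backend
```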
Commands:
- -M and -MM for the short and long lists of supported models (see the sketch after this list)
- -L-# - delete the last N messages from the chat history
- Add 'decai', an AI-based decompiler, as an r2js script
- -w webserver supports tabbyml, ollama, openapi, r2pipe and llamaserver REST endpoints
- -ed command edits the r2ai.rc file with $EDITOR
- ?t command to benchmark commands
- ?V command is now fixed to show version of r2, r2ai and llama
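Inside the r2ai REPL, the new commands read roughly like this (a sketch; the prompt and the argument passed to ?t are illustrative):

```
[r2ai:0x00000000]> -M        # short list of supported models
[r2ai:0x00000000]> -MM       # long list of supported models
[r2ai:0x00000000]> -L-2      # delete the last 2 messages from the chat history
[r2ai:0x00000000]> -ed       # edit the r2ai.rc file with $EDITOR
[r2ai:0x00000000]> ?t -MM    # benchmark a command
[r2ai:0x00000000]> ?V        # versions of r2, r2ai and llama
```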
Backends:
- -e llm.gpu to select CPU or GPU
- OpenAPI (HTTP requests for llamaserver)
- Support ollama servers
- Larger context window by default (32K)
- top_p and top_k parameters can now be tweaked (see the sketch after this list)
- The latest llamapy supports the latest gemma2 models
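Backend tuning goes through -e variables; a hedged sketch (llm.gpu and llm.layers appear elsewhere in these notes, while llm.top_p and llm.top_k are assumed names for the new knobs):

```
[r2ai:0x00000000]> -e llm.gpu=false    # run inference on the CPU
[r2ai:0x00000000]> -e llm.layers=32    # GPU layers to offload (see #50)
[r2ai:0x00000000]> -e llm.top_p=0.9    # assumed name for the new top_p knob
[r2ai:0x00000000]> -e llm.top_k=40     # assumed name for the new top_k knob
```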
- Add support for Google Gemini API by @dnakov in #24
- docs: update README.md by @eltociear in #27
- Package updates by @dnakov in #29
- Fix anthropic chat mode by @dnakov in #31
New Contributors
- @eltociear made their first contribution in #27
Full Changelog: 0.7.0...0.8.0
0.7.0
What's Changed
- Add a few tests and run the linter in the CI
- Use llama3 model by default
- Add TAB autocompletion
- Better support for Python's pip and venv
- Add support for user plugins via the '..' command
- r2ai -repl implemented in rlang and r2pipe modes
Full Changelog: 0.6.1...0.7.0
0.6.0
What's Changed
- [WIP] Auto mode by @dnakov in #3
- Fix live code/message blocks and simplify the code by @trufae in #6
- change :auto to ' by @dnakov in #7
- support chatml-function-calling via llama-cpp by @dnakov in #4
- Support functionary models for auto mode by @dnakov in #10
- Support for anthropic claude + tools by @dnakov in #11
- Support Hermes-2-Pro and auto mode by @dnakov in #13
- Add anthropic claude3 haiku model by @dnakov in #12
- Update repl.py by @Xplo8E in #18
- Support for groq by @dnakov in #19
- Fixes by @dnakov in #20
New Contributors
- @dnakov made their first contribution in #3
- @trufae made their first contribution in #6
- @Xplo8E made their first contribution in #18
Full Changelog: 0.5.0...0.6.0