Releases: radareorg/r2ai

0.9.0

11 Dec 03:07

Full Changelog: 0.8.8...0.9.0

0.8.8

10 Dec 22:49

What's Changed

  • Packaging redo by @dnakov in #67
  • Use litellm for remote models in chat mode by @dnakov in #68
  • Make sure r2lang and r2 are in sync by @dnakov in #70
  • UI fixes and cleanup by @dnakov in #71
  • Rewrite auto to use litellm; add spinner by @dnakov in #69
  • Makefile fixes by @dnakov in #72
  • README updates by @dnakov in #73
  • Add .tcss files to package by @dnakov in #74
  • Fix llama in auto by @dnakov in #76
  • Use chat completions for llama.cpp by @dnakov in #75
  • Update claude sonnet version by @dnakov in #78
  • Use portable script sourcing by @AdamBromiley in #81
  • Make llama verbose respond to logger settings; Add the functionary to… by @dnakov in #82
  • Rawdog auto by @dnakov in #83
  • Auto updates to handle more params and non-streaming mode by @dnakov in #85
  • Fix VV for local models by @dnakov in #86
  • Fix VV chat taking over main thread by @dnakov in #87
  • fix some auto streaming bugs by @dnakov in #88
  • Add new execute_binary command for auto; some UI bug fixes by @dnakov in #89
  • Fix not getting python output when running as plugin; fix litellm con… by @dnakov in #91
  • Use lazy imports to speed up initial loading time by @dnakov in #92
  • Fix auto rawdog mode for qwen-coder by @dnakov in #93

Full Changelog: 0.8.6...0.8.8

0.8.6

10 Dec 22:48

What's Changed

  • Allow passing model when using openapi by @nitanmarcel in #36
  • Use the requests library for openai/openapi only when necessary by @nitanmarcel in #37
  • Use Python's builtin logging instead of directly printing by @nitanmarcel in #38
  • Fix a bug introduced in the last 2 commits by @nitanmarcel in #39
  • Add llm.repeat_penalty option by @trufae in #42
  • Make sure we close the server properly when handling KeyboardInterrupt by @nitanmarcel in #43
  • Remove repeat_penalty for openai based ais by @nitanmarcel in #44
  • Setup progress handlers with rich Progress by @nitanmarcel in #45
  • [WIP] UI by @dnakov in #47
  • Add llm.layers to make it configurable by @trufae in #50 (this and llm.repeat_penalty are sketched after this list)
  • Only log shutting down webserver when running in ^C by @trufae in #51
  • Don't lose the prompt when failing to start the webserver by @trufae in #52
  • Fix TUI File Browser and chat bug by @dnakov in #53
  • Fix TUI Chat messages formatting by @dnakov in #54
  • Fix TUI tool calls formatting by @dnakov in #55
  • Make more stuff optional to load it as an r2 plugin again by @trufae in #58
  • TUI updates and some error handling by @dnakov in #60
  • TUI: Ask for API KEY if one is not set in env by @dnakov in #61
  • TUI: switch to using r2cmd {; get filename if binary already opened by @dnakov in #62
  • Support setting callbacks on env change; Move debug level set in env by @nitanmarcel in #63
  • Add filter_print to support filtering prints by @nitanmarcel in #64
  • Add support for HuggingFace 🤗 inference API by @brainstorm in #65
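
Both new options from #42 and #50 would be set through r2ai's configuration command. A minimal sketch, assuming r2ai mirrors radare2's -e key=value eval syntax; the key names come from the PR titles above, but the values and the reading of llm.layers as a backend layer count are assumptions:

```
[r2ai:0x00000000]> -e llm.repeat_penalty=1.1   # penalty for repeated tokens (value illustrative)
[r2ai:0x00000000]> -e llm.layers=32            # layer count handed to the llama backend (assumed meaning)
```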

Full Changelog: 0.8.4...0.8.6

0.8.4

06 Sep 18:41

Full Changelog: 0.8.2...0.8.4

r2ai-0.8.2

08 Aug 15:06

Full Changelog: 0.8.0...0.8.2

  • decai plugin: an r2ai-based decompiler for radare2
  • Pin llamacpp version to avoid installation issues
  • Fix autocompletion on linux terminals
  • Improved prompt
  • Use chromadb instead of vectordb, and use it by default
  • Add support for DuckDuckGo (DDG) web scraping
  • Support json output
  • Handle <<EOF for multiline inputs (example after this list)
  • Add support for llama3
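
The <<EOF handling makes it possible to send a multiline prompt from the REPL. A minimal sketch, assuming the block is terminated by a bare EOF line as in shell heredocs (prompt string and wording illustrative):

```
[r2ai:0x00000000]> <<EOF
Summarize what the function at the current offset does,
then suggest a more descriptive name for it.
EOF
```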

0.8.0

11 Jul 16:58

What's Changed

Packaging:

  • Fix installation and usage, requires latest r2pipe
  • Separate r2pm packages for r2ai and r2ai-plugin
  • r2pm -r r2ai-server to launch llamafile, llama and koboldcpp to use with the openapi backend (sketch after this list)
  • Use the latest huggingface api
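
A sketch of how the split packages would be installed and run, using r2pm's standard clean-install (-ci) and run (-r) flags; the package names are taken from the notes above:

```
r2pm -U               # refresh the r2pm package database
r2pm -ci r2ai         # standalone r2ai tool
r2pm -ci r2ai-plugin  # plugin variant for use inside r2
r2pm -r r2ai-server   # serve llamafile/llama/koboldcpp for the openapi backend
```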

Commands:

  • -M and -MM for the short and long lists of supported models
  • -L-N - delete the last N messages from the chat history
  • Add 'decai', an AI-based decompiler, as an r2js script
  • -w webserver supports tabbyml, ollama, openapi, r2pipe and llamaserver REST endpoints
  • -ed command edits the r2ai.rc file with $EDITOR
  • ?t command to benchmark commands
  • ?V command now shows the versions of r2, r2ai and llama
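
Taken together, the new commands allow a session like the following hypothetical transcript (output elided; the argument to ?t is assumed to be the command to time, as in radare2):

```
[r2ai:0x00000000]> -M      # short list of supported models
[r2ai:0x00000000]> -MM     # long list of supported models
[r2ai:0x00000000]> -L-2    # delete the last 2 messages from the chat history
[r2ai:0x00000000]> -ed     # edit r2ai.rc with $EDITOR
[r2ai:0x00000000]> ?t -M   # benchmark a command
[r2ai:0x00000000]> ?V      # versions of r2, r2ai and llama
```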

Backends:

  • -e llm.gpu to use cpu or gpu (example after this list)
  • OpenAPI (HTTP requests for llamaserver)
  • Support ollama servers
  • Larger context window by default (32K)
  • top_p and top_k parameters can now be tweaked
  • Latest llamapy supports the latest gemma2 models
  • Add support for Google Gemini API by @dnakov in #24
  • docs: update README.md by @eltociear in #27
  • Package updates by @dnakov in #29
  • Fix anthropic chat mode by @dnakov in #31
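
The llm.gpu switch and the sampling knobs are set through the same -e interface. A minimal sketch; llm.gpu is named in the notes, while llm.top_p and llm.top_k are assumed key names for the newly tweakable parameters:

```
[r2ai:0x00000000]> -e llm.gpu=false   # run inference on the CPU (value illustrative)
[r2ai:0x00000000]> -e llm.top_p=0.9   # nucleus sampling cutoff (assumed key name)
[r2ai:0x00000000]> -e llm.top_k=40    # top-k sampling cutoff (assumed key name)
```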

Full Changelog: 0.7.0...0.8.0

0.7.0

03 May 09:35

What's Changed

  • Add a few tests and run the linter in CI
  • Use llama3 model by default
  • Add TAB autocompletion
  • Better support for Python's pip and venv
  • Add support for user plugins via the '..' command
  • r2ai -repl implemented in rlang and r2pipe modes

Full Changelog: 0.6.1...0.7.0

0.6.0

02 May 23:42

Full Changelog: 0.5.0...0.6.0

0.5.0

29 Feb 13:47
Release 0.5.0

0.4.0

28 Nov 11:11
Release 0.4.0