,______ .______ .______ ,___
: __ \ \____ |: \ : __|
| \____|/ ____|| _,_ || : |
| : \ \ . || : || |
| |___\ \__:__||___| || |
|___| : |___||___|
* --pancake
Run a language model locally, without internet access, to entertain you or to help answer questions about radare2 or reverse engineering in general. Note that the models used by r2ai are pulled from external sources, may behave differently, and may return unreliable information. That's why there's an ongoing effort to improve the fine-tuning using memgpt-like techniques, which can't get better without your help!
- Prompt the language model without internet requirements
- Use local GGUF or remote language models (via http)
- Index large codebases or markdown books using a vector database
- Slurp a file and perform actions on it
- Embed the output of an r2 command and resolve questions on the given data
- Define different system-level assistant roles
- Set environment variables to provide context to the language model
- Live REPL and batch mode from the CLI or the r2 prompt
- Accessible as an r2lang-python plugin, keeping session state inside radare2
- Scriptable from python, bash, r2pipe, and javascript (r2papi); see the Python sketch after this list
- Use different models and dynamically adjust the query template
- Load multiple models and make them talk to each other
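Since r2ai is scriptable from plain Python, a small wrapper can drive the batch mode shown later in this document. This is a minimal sketch, assuming the r2ai launcher is in your PATH (for example after make install or r2pm -r r2ai):

```python
# Minimal sketch: call r2ai in batch mode from Python.
# Assumes the `r2ai` launcher is available in PATH.
import subprocess

result = subprocess.run(
    ["r2ai", "-r act as a calculator", "3+3=?"],  # role prompt + question
    capture_output=True,
    text=True,
)
print(result.stdout)
```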
Running make will set up a Python virtual environment in the current directory, install all the necessary dependencies, and drop you into a shell to run r2ai.
The installation is now split into two different targets:
make install
will place a symlink in $BINDIR/r2ai
make install-plugin
will install the native r2 plugin into your home directory
When installed via r2pm, you can run it like this:
r2pm -r r2ai
Additionally, you can get the r2ai command inside r2 to run as an rlang plugin by installing the bindings:
r2pm -i rlang-python
make user-install
After this you should get the r2ai command inside the radare2 shell. Set the R2_DEBUG=1 environment variable to see why the plugin was not loaded if it's not there.
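If you prefer to script that check, here is a minimal sketch (assuming radare2 and the rlang-python bindings are installed, and using /bin/ls only as a throwaway target):

```python
# Minimal sketch: run radare2 with R2_DEBUG=1 and try the r2ai command,
# so plugin loading problems become visible. Assumes `r2` is in PATH and
# `r2pm -i rlang-python` has been run.
import os
import subprocess

env = dict(os.environ, R2_DEBUG="1")
out = subprocess.run(
    ["r2", "-q", "-c", "r2ai -h", "/bin/ls"],
    env=env,
    capture_output=True,
    text=True,
)
print(out.stdout)   # r2ai help output if the plugin loaded
print(out.stderr)   # plugin loading diagnostics shown with R2_DEBUG=1
```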
On Windows you can follow the same instructions; just ensure you have the right Python environment ready and create the venv to use:
git clone https://github.com/radareorg/r2ai
cd r2ai
set PATH=C:\Users\YOURUSERNAME\AppData\Local\Programs\Python\Python39\;%PATH%
python3 -m pip install -r requirements.txt
python3 main.py
There are several different ways to run r2ai:
- Standalone and interactive: r2pm -r r2ai or python main.py
- Batch mode: r2ai '-r act as a calculator' '3+3=?'
- As an r2 plugin: r2 -i main.py /bin/ls
- From radare2 (requires r2pm -ci rlang-python): r2 -c 'r2ai -h'
- Using r2pipe: #!pipe python main.py (see the Python sketch after this list)
- Define a macro command: '$r2ai=#!pipe python main.py
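As a complement to the list above, here is a minimal r2pipe sketch that drives r2ai from a standalone Python script. It assumes the rlang-python bindings are installed (r2pm -ci rlang-python) so the r2ai command is available inside r2, and uses /bin/ls only as a stand-in binary:

```python
# Minimal sketch: talk to r2ai through r2pipe.
# Assumes radare2 and the r2ai rlang plugin are installed.
import r2pipe

r2 = r2pipe.open("/bin/ls")   # open any binary
r2.cmd("aaa")                 # analyze it so there is context to ask about
print(r2.cmd("r2ai -h"))      # list the available r2ai options
# From here you can forward prompts to the model, for example:
# print(r2.cmd("r2ai decompile current function and explain it"))
r2.quit()
```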
When using OpenAI, Claude or any of the Functionary local models, you can use the auto mode, which permits the language model to execute r2 commands and analyze their output in a loop until the task is resolved. Here's a sample session to achieve that:
$ . env/bin/activate
(env)$ r2 /bin/ls
[0x00000000]> '$r2ai=#!pipe python main.py
[0x00000000]> $r2ai '-m openai:gpt-4'
[0x00000000]> $r2ai "' list the imports for this program"
[0x00000000]> $r2ai "' draw me a donut"
[0x00000000]> $r2ai "' decompile current function and explain it"
You can interact with r2ai from standalone Python, from r2pipe via r2 (keeping a global state), or using the javascript interpreter embedded inside radare2.
- conversation.r2.js - load two models and make them talk to each other
Just run make, or alternatively python3 main.py.
- add "undo" command to drop the last message
- dump / restore conversational states (see -L command)
- Implement ~, | and > and other r2shell features