Development Guide

This guide is for people working on OpenDevin and editing the source code.

Start the server for development

1. Requirements

Make sure you have the required dependencies installed before moving on to make build; at a minimum you will need Python 3.11, NodeJS, and Poetry (the next subsection shows how to install these without sudo access).

Develop without sudo access

If you want to develop without system admin/sudo access to upgrade or install Python and/or NodeJS, you can use conda or mamba to manage the packages for you:

# Download and install Mamba (a faster version of conda)
curl -L -O "https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-$(uname)-$(uname -m).sh"
bash Miniforge3-$(uname)-$(uname -m).sh

# Install Python 3.11, nodejs, and poetry
mamba install python=3.11
mamba install conda-forge::nodejs
mamba install conda-forge::poetry
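
Once the installs finish, a quick sanity check confirms that the build will pick up the mamba-managed toolchain rather than an older system version (exact version output will vary):

# Verify the toolchain resolves to the mamba-managed versions
python --version    # should report Python 3.11.x
node --version
poetry --version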

2. Build and Set Up the Environment

  • Build the Project: Begin by building the project, which includes setting up the environment and installing dependencies (see the sketch after this step for roughly what that involves). This step ensures that OpenDevin is ready to run smoothly on your system.
    make build
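
For orientation only, the build step amounts to installing the backend and frontend dependencies. The commands below are an approximation of what such a target typically runs, assuming the Poetry-plus-npm layout of this repository; the Makefile itself is the authoritative source:

# Rough, illustrative equivalent of the build step (consult the Makefile)
poetry install                 # backend / Python dependencies
cd frontend && npm install     # frontend dependencies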

3. Configuring the Language Model

OpenDevin supports a diverse array of Language Models (LMs) through the powerful litellm library. By default, we've chosen the mighty GPT-4 from OpenAI as our go-to model, but the world is your oyster! You can unleash the potential of Anthropic's suave Claude, the enigmatic Llama, or any other LM that piques your interest.

To configure the LM of your choice, follow these steps:

  1. Using the Makefile (The Effortless Approach): With a single command, you can have a smooth LM setup for your OpenDevin experience. Simply run:
    make setup-config
    This command will prompt you to enter the LLM API key, model name, and other variables, ensuring that OpenDevin is tailored to your specific needs. Note that the model name applies only when you run headless; if you use the UI, set the model in the UI instead.
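
The values you enter are saved to a local configuration file that the backend reads on startup. As a hypothetical illustration only, the same kind of settings expressed as environment variables might look like the following; the variable names here are assumptions, so treat the file generated by make setup-config as the source of truth:

# Illustrative only: LLM settings as environment variables
# (variable names are assumptions; check your generated config for the real keys)
export LLM_API_KEY="sk-..."      # API key for your provider
export LLM_MODEL="gpt-4"         # any model name supported by litellm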

Note on Alternative Models: Some alternative models may prove more challenging to tame than others. Fear not, brave adventurer! We shall soon unveil LLM-specific documentation to guide you on your quest. And if you've already mastered the art of wielding a model other than OpenAI's GPT, we encourage you to share your setup instructions with us.

For a full list of the LM providers and models available, please consult the litellm documentation.

There is also documentation for running with local models using ollama.
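
As a rough sketch of what a local-model setup involves (the variable names are assumptions; follow the ollama documentation mentioned above for the authoritative steps), litellm addresses Ollama-served models with an ollama/ prefix and a local API base:

# Illustrative only: point the backend at a locally running Ollama server
export LLM_MODEL="ollama/llama2"                # litellm's naming convention for Ollama models
export LLM_BASE_URL="http://localhost:11434"    # Ollama's default endpoint; variable name is an assumption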

4. Run the Application

  • Run the Application: Once the setup is complete, launching OpenDevin is as simple as running a single command. This command starts both the backend and frontend servers seamlessly, allowing you to interact with OpenDevin without any hassle.
    make run

5. Individual Server Startup

  • Start the Backend Server: If you prefer, you can start the backend server independently to focus on backend-related tasks or configurations.

    make start-backend
  • Start the Frontend Server: Similarly, you can start the frontend server on its own to work on frontend-related components or interface enhancements.

    make start-frontend
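
A common workflow is to keep the two servers in separate terminals, so that one can be restarted without interrupting the other:

# Terminal 1: backend only
make start-backend

# Terminal 2: frontend only
make start-frontend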

6. LLM Debugging

If you encounter any issues with the Language Model (LM), or you're simply curious, you can inspect the actual LLM prompts and responses. To do so, export DEBUG=1 in the environment and restart the backend. OpenDevin will then log the prompts and responses in the logs/llm/CURRENT_DATE directory, allowing you to trace a problem back to the exact prompt that caused it.
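
Concretely, a debugging session might look like this:

# Enable prompt/response logging, then restart the backend
export DEBUG=1
make start-backend

# In another terminal, browse the captured prompts and responses
ls logs/llm/    # one subdirectory per day; exact naming may vary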

7. Help

  • Get Some Help: Need assistance or information on available targets and commands? The help command provides all the necessary guidance to ensure a smooth experience with OpenDevin.
    make help

8. Testing

Unit tests

poetry run pytest ./tests/unit/test_sandbox.py
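
That command runs a single test file. Standard pytest options apply if you want to run the whole unit suite or filter tests by name (the keyword below is just an illustrative filter):

# Run the entire unit test suite
poetry run pytest ./tests/unit

# Run a subset by keyword, with verbose output
poetry run pytest ./tests/unit -k "sandbox" -v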

Integration tests

Please refer to the integration tests README for details.

9. Add or update a dependency

  1. Add your dependency in pyproject.toml, or use poetry add xxx.
  2. Update the poetry.lock file via poetry lock --no-update.
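
For example (the package name is purely illustrative):

# Add a new dependency; Poetry records it in pyproject.toml and poetry.lock
poetry add requests

# Or, after editing pyproject.toml by hand, refresh the lock file
# without upgrading versions that are already locked
poetry lock --no-update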