
πŸ§‘β€βœˆοΈ GPT PILOT πŸ§‘β€βœˆοΈ




GPT Pilot doesn't just generate code, it builds apps!


See it in action

(click to open the video on YouTube, 1:40 min)



GPT Pilot is the core technology behind the Pythagora VS Code extension, which aims to provide the first real AI developer companion: not just an autocomplete tool or a helper for PR messages, but a real AI developer that can write full features, debug them, talk to you about issues, ask for review, and more.


📫 If you would like to get updates on future releases or just get in touch, join our Discord server or add your email here. 📬



GPT Pilot aims to research how much LLMs can be utilized to generate fully working, production-ready apps while the developer oversees the implementation.

The main idea is that AI can write most of the code for an app (maybe 95%), but for the remaining 5%, a developer is and will be needed until we get full AGI.

If you are interested in our learnings during this project, you can check our latest blog posts.





🔌 Requirements

  • Python 3.9+

🚦 How to start using gpt-pilot?

👉 If you are using VS Code as your IDE, the easiest way to start is by downloading the GPT Pilot VS Code extension. 👈

Otherwise, you can use the CLI tool.

If you're new to GPT Pilot:

After you have Python and (optionally) PostgreSQL installed, follow these steps:

  1. git clone https://github.com/Pythagora-io/gpt-pilot.git (clone the repo)
  2. cd gpt-pilot (go to the repo folder)
  3. python -m venv venv (create a virtual environment)
  4. source venv/bin/activate (or on Windows venv\Scripts\activate) (activate the virtual environment)
  5. pip install -r requirements.txt (install the dependencies)
  6. cp example-config.json config.json (create config.json file)
  7. Set your API key and other settings in the config.json file (a sketch of how these settings might be read follows this list):
    • LLM provider (openai, anthropic, or groq), key, and endpoints (leave null for defaults); note that Azure and OpenRouter are supported via the openai setting
    • Your API key (if null, it will be read from the environment variables)
    • Database settings: SQLite is used by default; PostgreSQL should also work
    • Optionally, update fs.ignore_paths and add files or folders that shouldn't be tracked by GPT Pilot in the workspace; this is useful for ignoring folders created by compilers
  8. python main.py (start GPT Pilot)
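The authoritative schema for config.json is example-config.json in the repo. Purely as a minimal sketch of the settings described in step 7, assuming the nested key names llm.openai.api_key, db.url, and fs.ignore_paths and an OPENAI_API_KEY environment variable as the fallback key source (all of which should be checked against example-config.json), loading and validating the file could look like this:

```python
# Minimal sketch only -- not GPT Pilot's actual config loader.
# Key names ("llm", "openai", "api_key", "db", "fs") are assumptions;
# check example-config.json in the repo for the real schema.
import json
import os

with open("config.json") as f:
    config = json.load(f)

# API key: taken from config.json, falling back to an environment variable if null.
llm = config.get("llm", {}).get("openai", {})
api_key = llm.get("api_key") or os.environ.get("OPENAI_API_KEY")
if not api_key:
    raise SystemExit("No API key found in config.json or the OPENAI_API_KEY environment variable")

# Database settings: SQLite by default, PostgreSQL also possible (see below).
print("Database URL:", config.get("db", {}).get("url", "<default SQLite database>"))

# Paths GPT Pilot should not track in the workspace (e.g. build output).
print("Ignored paths:", config.get("fs", {}).get("ignore_paths", []))
```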

All generated code will be stored in the workspace folder, inside a subfolder named after the app name you enter when starting GPT Pilot.

If you're upgrading from GPT Pilot v0.1

Assuming you already have the git repository with an earlier version:

  1. git pull (update the repo)
  2. source pilot-env/bin/activate (or on Windows pilot-env\Scripts\activate) (activate the virtual environment)
  3. pip install -r requirements.txt (install the new dependencies)
  4. python main.py --import-v0 pilot/gpt-pilot (this should import your settings and existing projects)

This will create a new database pythagora.db and import all apps from the old database. For each app, it will import the start of the latest task you were working on.

To verify that the import was successful, you can run python main.py --list to see all the apps you have created, and inspect config.json to confirm the settings were correctly converted to the new config file format (making any adjustments if needed).

🔎 Examples

Click here to see all example apps created with GPT Pilot.

🐳 How to start gpt-pilot in Docker?

  1. git clone https://github.com/Pythagora-io/gpt-pilot.git (clone the repo)
  2. Update the docker-compose.yml environment variables, which can be done via docker compose config. If you wish to use a local model, please go to https://localai.io/basics/getting_started/.
  3. By default, GPT Pilot will read and write to ~/gpt-pilot-workspace on your machine; you can also edit this in docker-compose.yml.
  4. Run docker compose build. This will build a gpt-pilot container for you.
  5. Run docker compose up.
  6. Access the web terminal on port 7681.
  7. python main.py (start GPT Pilot)

This will start two containers: a new image built from the Dockerfile, and a PostgreSQL database. The new image also has ttyd installed so that you can easily interact with gpt-pilot. Node is also installed on the image, and port 3000 is exposed.

PostgreSQL support

GPT Pilot uses a built-in SQLite database by default. If you want to use PostgreSQL instead, you additionally need to install the asyncpg and psycopg2 packages:

pip install asyncpg psycopg2

Then, you need to update the config.json file to set db.url to postgresql+asyncpg://<user>:<password>@<db-host>/<db-name>.
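The postgresql+asyncpg:// form is a SQLAlchemy-style async connection URL. As a quick sanity check that the database is reachable before starting GPT Pilot (a minimal sketch, not part of GPT Pilot itself, assuming the sqlalchemy package is installed with the project dependencies), you can run something like:

```python
# Illustrative connectivity check only -- not part of GPT Pilot.
# Replace the placeholders with the actual credentials from db.url in config.json.
import asyncio

from sqlalchemy import text
from sqlalchemy.ext.asyncio import create_async_engine


async def check(url: str) -> None:
    engine = create_async_engine(url)
    async with engine.connect() as conn:
        await conn.execute(text("SELECT 1"))  # simplest possible round-trip query
    await engine.dispose()
    print("PostgreSQL connection OK")


asyncio.run(check("postgresql+asyncpg://<user>:<password>@<db-host>/<db-name>"))
```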

πŸ§‘β€πŸ’»οΈ CLI arguments

List created projects (apps)

python main.py --list

Note: for each project (app), this also lists "branches". Currently we only support having one branch (called "main"), and in the future we plan to add support for multiple project branches.

Load and continue from the latest step in a project (app)

python main.py --project <app_id>

Load and continue from a specific step in a project (app)

python main.py --project <app_id> --step <step>

Warning: this will delete all progress after the specified step!

Delete project (app)

python main.py --delete <app_id>

Delete project with the specified app_id. Warning: this cannot be undone!

Import projects from v0.1

python main.py --import-v0 <path>

This will import projects from the old GPT Pilot v0.1 database; the path argument should point to that database. For each project, it will import the start of the latest task you were working on. If a project was already imported, the import procedure will skip it (it won't overwrite the project in the database).

Other command-line options

There are several other command-line options, mostly used when GPT Pilot is invoked from our VS Code extension. To see all the available options, use the --help flag:

python main.py --help

πŸ— How GPT Pilot works?

Here are the steps GPT Pilot takes to create an app (a conceptual sketch of this flow follows the list):

  1. You enter the app name and the description.
  2. Product Owner agent, just like in real life, does nothing. :)
  3. Specification Writer agent asks a couple of questions to understand the requirements better if the project description is not detailed enough.
  4. Architect agent writes up the technologies that will be used for the app, checks whether they are all installed on the machine, and installs any that are missing.
  5. Tech Lead agent writes up development tasks that the Developer must implement.
  6. Developer agent takes each task and writes up what needs to be done to implement it. The description is in human-readable form.
  7. Code Monkey agent takes the Developer's description and the existing file and implements the changes.
  8. Reviewer agent reviews every step of the task, and if something is done wrong, sends it back to Code Monkey.
  9. Troubleshooter agent helps you give good feedback to GPT Pilot when something is wrong.
  10. Debugger agent: you hate to see him, but he is your best friend when things go south.
  11. Technical Writer agent writes documentation for the project.

🕴 How's GPT Pilot different from Smol developer and GPT engineer?

  • GPT Pilot works with the developer to create a fully working production-ready app - I don't think AI can (at least in the near future) create apps without a developer being involved. So, GPT Pilot codes the app step by step just like a developer would in real life. This way, it can debug issues as they arise throughout the development process. If it gets stuck, you, the developer in charge, can review the code and fix the issue. Other similar tools give you the entire codebase at once - this way, bugs are much harder to fix for AI and for you as a developer.

  • Works at scale - GPT Pilot isn't meant just for creating simple apps; it is designed to work at any scale. It has mechanisms that filter out the code, so in each LLM conversation it doesn't need to store the entire codebase in context; instead, it shows the LLM only the code relevant to the task it's currently working on (a toy illustration of this idea follows below). Once an app is finished, you can continue working on it by writing instructions for the feature you want to add.
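This README does not describe GPT Pilot's actual filtering mechanism. Purely as a naive illustration of the general idea of selecting only task-relevant files before building an LLM prompt (the function, scoring scheme, and example data below are made up for this sketch), it might look something like this:

```python
# Naive illustration of context filtering -- NOT GPT Pilot's actual mechanism.
# Select only the files whose contents share keywords with the current task,
# so the LLM prompt doesn't need to contain the whole codebase.

def relevant_files(task_description: str, files: dict[str, str], limit: int = 5) -> list[str]:
    """Return the paths of the files most related to the task, ranked by keyword overlap."""
    keywords = set(task_description.lower().split())

    def score(content: str) -> int:
        return len(keywords & set(content.lower().split()))

    ranked = sorted(files, key=lambda path: score(files[path]), reverse=True)
    return ranked[:limit]


# Example: only the top-ranked files would be included in the LLM conversation for the task.
files = {"auth.py": "login user password token", "charts.py": "plot axis color"}
print(relevant_files("fix the login token expiry bug", files))
```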

🍻 Contributing

If you are interested in contributing to GPT Pilot, join our Discord server, check out open GitHub issues, and see if anything interests you. We would be happy to get help in resolving any of those. The best place to start is by reviewing blog posts mentioned above to understand how the architecture works before diving into the codebase.

🖥 Development

Other than the research, GPT Pilot needs to be debugged to work in different scenarios. For example, we realized that the quality of the generated code is very sensitive to the size of a development task. When the task is too broad, the code has too many bugs that are hard to fix, but when the task is too narrow, GPT also seems to struggle to integrate it into the existing code.

📊 Telemetry

To improve GPT Pilot, we are tracking some events from which you can opt out at any time. You can read more about it here.

🔗 Connect with us

🌟 GPT Pilot is an open-source tool, and it would mean the world to us if you starred the repo 🌟

💬 Join the Discord server to get in touch.
