Building LLM-powered apps is currently very frustrating: it involves significant prompt engineering, many parameters to tune, and countless iterations. Agenta Lab simplifies this process, enabling you to quickly iterate, experiment, and optimize your LLM apps — all without imposing any restrictions on your choice of framework, library, or model.
- Develop your LLM-powered application as you normally would. Feel free to use any framework, library, or model (langchain, llama_index, GPT-3, or open-source models).
- With two lines of code, specify the parameters for your experiment.
- Deploy your app using the Agenta CLI.
- You or your team can iterate, version parameters, test different versions, and run systematic evaluations via a user-friendly web platform.
In the future, we plan to extend Agenta Lab to facilitate your LLM-app development further, providing features for deployment, monitoring, logging, and A/B testing.
- Parameter Playground: With just a few lines of code, define the parameters you wish to experiment with. Through our user-friendly web platform, you or your team can then experiment with and tweak these parameters.
- Version Evaluation: Define test sets, then evaluate and compare different app versions.
- API Deployment Made Easy: Agenta Lab allows you to deploy your LLM applications as APIs without any additional effort. (Currently only available locally.)
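To make the parameter-playground idea concrete, here is a conceptual sketch in plain Python — not Agenta Lab's actual API; the function and parameter names are hypothetical, and a stub stands in for the LLM call so the snippet runs on its own. The point is that parameters like the temperature and the prompt template are exposed so they can be swept and compared:

```python
from itertools import product

def generate_name(product_desc: str, temperature: float, prompt_template: str) -> str:
    # Build the prompt from the exposed template parameter.
    prompt = prompt_template.format(product=product_desc)
    # Stub in place of the real LLM call, so the sketch needs no API key.
    return f"[t={temperature}] {prompt}"

# The playground lets you (or a domain expert) tweak exposed parameters
# like these and compare the outputs side by side.
temperatures = [0.2, 0.9]
templates = ["Suggest a name for a company that makes {product}."]

for temp, tpl in product(temperatures, templates):
    print(generate_name("colorful socks", temp, tpl))
```

In Agenta Lab the same idea is driven from the web platform rather than a loop in code.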
Please go to docs.agenta.ai for the full documentation.
Agenta Lab requires Docker installed on your machine. If you don't have Docker, you can install it from the Docker website.
pip install agenta
git clone https://github.com/Agenta-AI/agenta.git
cd agenta
docker compose -f "docker-compose.yml" up -d --build
Create an empty folder and use the following command to initialize a new project.
mkdir example_app; cd example_app
agenta init
Start a new project based on the template simple_prompt:
This will create a new project in your folder with the following structure:
.
├── README.md // How to use the template
├── app.py // the code of the app
├── config.toml
└── requirements.txt
The app created uses a simple prompt template in langchain and gpt-3.5 to generate names for companies that make {product}.
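In outline, the generated app does something like the following. This is a simplified sketch: the real template calls gpt-3.5 via langchain, while the stub below replaces the model call (returning the sample output shown later in this README) so the snippet stays self-contained:

```python
import sys

PROMPT = "What is a good name for a company that makes {product}?"

def call_llm(prompt: str) -> str:
    # Stub standing in for the gpt-3.5 call made by the real template.
    return "Feetful of Fun"

def generate(product: str) -> str:
    # Fill the prompt template with the product and ask the model for a name.
    return call_llm(PROMPT.format(product=product))

if __name__ == "__main__" and len(sys.argv) > 1:
    print(generate(sys.argv[1]))  # e.g. python app.py "colorful socks"
```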
If you are interested in using your own code in Agenta Lab, please see this tutorial on writing your first LLM app with Agenta Lab.
Create a .env file with your OpenAI API key in the same folder, as instructed in README.md.
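For example, the .env file could look like this (assuming the template reads the standard OPENAI_API_KEY environment variable; replace the placeholder with your own key):

```shell
# .env — keep this file out of version control
OPENAI_API_KEY=<your-openai-api-key>
```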
Before adding the app to Agenta Lab, you can test it in your terminal:
python app.py "colorful socks"
Feetful of Fun
Now let's proceed to add the app variant to Agenta Lab.
agenta variant serve
This command will do two things:
- Package the code and serve it locally as an API endpoint under `localhost/app_name/{variant_name}/openapi.json`.
- Add the code to the Agenta web platform.
Navigate to localhost:3000, select your app, and begin experimenting with the parameters we exposed in the code in the playground.
*(Screenshot: the parameter playground in the Agenta web UI.)*
You can fork new variants, run batch evaluations, and more.
While there are numerous LLMops platforms, we believe Agenta Lab offers unique benefits:
- Developer-Friendly: We cater to complex LLM-apps and pipelines that require more than just a few no-code abstractions. We give you the freedom to develop your apps the way you want.
- Privacy-First: We respect your privacy and do not proxy your data through third-party services. You have the choice to host your data and models.
- Solution-Agnostic: You have the freedom to use any library and model, be it Langchain, llama_index, or a custom-written alternative.
- Collaborative: We recognize that building LLM-powered apps requires the collaboration of developers and domain experts. Our tool enables this collaboration, allowing domain experts to edit and modify parameters (e.g., prompts, hyperparameters, etc.), and label results for evaluation.
- Open-Source: We encourage you to contribute to the platform and customize it to suit your needs.
Currently, we support Q&A applications (no chat) and do not yet support persistent data (like using a persistent vector database). Our future plans include:
- Supporting chat applications.
- Support for persistent data and vector databases.
- Automated Deployment: Enable automatic app deployment with a simple commit.
- Monitoring and Logging: Introduce a dashboard to monitor your app's performance and usage.
- A/B Testing & User Feedback: Allow for experimentation with different app versions and collect user feedback.
- Regression Testing: Introduce regression tests based on real data for each new version deployment.
We warmly welcome contributions to Agenta Lab. Feel free to submit issues, fork the repository, and send pull requests.
- Designers, UI/UX, and Frontend Developers: We need your expertise to enhance the UI/UX of the dashboard and the CLI, and to help improve the dashboard's frontend. Feel free to fork and submit a PR. For bigger ideas, you can contact us via Discord or email ([email protected]).