diff --git a/docs/docs/getting-started/introduction.mdx b/docs/docs/getting-started/01-introduction.mdx
similarity index 88%
rename from docs/docs/getting-started/introduction.mdx
rename to docs/docs/getting-started/01-introduction.mdx
index 503c4fa70c..497ff096b7 100644
--- a/docs/docs/getting-started/introduction.mdx
+++ b/docs/docs/getting-started/01-introduction.mdx
@@ -6,9 +6,15 @@ id: introduction
 sidebar_position: 0
 ---
 
-
-
-Agenta is an open-source platform that helps **developers** and **product teams** build robust AI applications powered by LLMs. It offers all the tools for **prompt management and evaluation**.
+Screenshots of Agenta LLMOPS platform
+Agenta is an open-source platform that helps **developers** and **product teams**
+build robust AI applications powered by LLMs. It offers all the tools for **prompt
+management and evaluation**.
 
 ### With Agenta, you can:
diff --git a/docs/docs/getting-started/quick-start.mdx b/docs/docs/getting-started/02-quick-start.mdx
similarity index 100%
rename from docs/docs/getting-started/quick-start.mdx
rename to docs/docs/getting-started/02-quick-start.mdx
diff --git a/docs/docs/guides/how_does_agenta_work.mdx b/docs/docs/guides/how_does_agenta_work.mdx
deleted file mode 100644
index fe4951cd02..0000000000
--- a/docs/docs/guides/how_does_agenta_work.mdx
+++ /dev/null
@@ -1,57 +0,0 @@
----
-title: "How does Agenta work?"
-description: "An overview of the architecture and main concepts of Agenta"
----
-
-## What problem does Agenta solve?
-
-To build a robust LLM application, you need to:
-
-1. **Rapidly experiment and evaluate** various prompts, models, and architectures/workflows (RAG, chain-of-prompts, etc.).
-2. **Collaborate with non-developers**, such as product managers or domain experts.
-
-While some tools exist that help with the first point via a user interface, they are typically limited to pre-built single-prompt applications and fail to accommodate custom workflows or application logic.
-
-## How does Agenta solve this problem?
-
-Agenta creates a playground in the UI from your LLM applications, regardless of the workflow (RAG, chain-of-prompts, custom logic) or the framework (Langchain, Llama_index, OpenAI calls) in use.
-
-This enables the entire team to collaborate on prompt engineering and experimentation with the application parameters (prompts, models, chunk size, etc.). It also allows them to manage all aspects of the app development lifecycle from the UI: comparing different configurations, evaluating the application, deploying it, and more.
-
-## How does Agenta achieve this?
-
-1. **Microservice-based Applications**:
-
-Agenta treats each application as a microservice. Creating a new application in Agenta automatically generates a container with a REST API. This is true whether the application is created using a pre-built template from the UI or from the CLI using custom application code. Agenta handles the creation of Docker images and container deployment. This means that all interactions with the application (whether from the UI, during evaluations, or post-deployment) occur with the container.
-
-2. **Separation of Logic and Configuration**:
-
-Agenta separates the application logic from the configuration. The application logic refers to the code that defines the application, whether it's a simple prompt, a chain of prompts, RAG, etc. The configuration refers to the parameters used in the application logic, such as the prompt, model, chunk size, etc.
-In the application code, you specify which configuration the application uses. This configuration can be modified from the UI in the playground or the CLI.
-
-## Agenta architecture
-
-
-Agenta decouples the configuration (prompts, model) from the application logic. The configuration is managed by the backend.
-The configuration can then be modified both from the UI (in the playground) and from the CLI.
-
-### The Application
-
-The application describes the logic written in Python code. An application can be created from a pre-built template in the UI or from code in the CLI. In either case, a new container with the application code is launched. The application can then be accessed via a REST API.
-
-Each application has a default configuration specified in its code. This default configuration can be overridden by the user in the UI or the CLI. Additionally, the user can create new configurations from the UI or the CLI. Each new configuration results in the creation of a new application variant, which is a combination of the application logic and a configuration. A single project can house many variants encompassing multiple application logics and configurations.
-
-## The Backend
-
-Agenta's backend manages applications and configurations. It is responsible for building images, deploying containers, and managing configurations and prompts for the application.
-
-## The Frontend / UI
-
-The frontend provides tools to create new applications from a template, create and edit configurations, run evaluations, and deploy applications to different environments (e.g., staging, production).
-
-## The CLI
-
-The CLI offers the same capabilities as the frontend. Additionally, it allows for the creation of custom applications not available as templates. When serving a new application from the CLI, Agenta handles container creation and deployment. After creating a new application, users can edit its configuration and evaluate it in the UI.
-
-## The SDK
-
-The SDK is a Python library used to create new applications from code. It manages the saving of the default configuration, the creation of the REST API, and the actions necessary to create a playground and integrate the application with the Agenta platform.
diff --git a/docs/docs/prompt-management/07-concepts.mdx b/docs/docs/prompt-management/07-concepts.mdx
deleted file mode 100644
index 13e994ec6e..0000000000
--- a/docs/docs/prompt-management/07-concepts.mdx
+++ /dev/null
@@ -1,62 +0,0 @@
----
-title: "Core Concepts"
----
-
-Below are descriptions of the main terms and concepts used in Agenta.
-
-Taxonomy of concepts in Agenta
-
-### Templates
-
-**Templates** are the workflows used by LLM-powered applications. Agenta comes with two default templates:
-
-- **Completion Application Template:** For single-prompt applications that generate text completions.
-- **Chat Application Template:** For applications that handle conversational interactions.
-
-Agenta also allows you to create custom templates for your workflows using our SDK. Examples include:
-
-- Retrieval-Augmented Generation (RAG) Applications
-- Chains of Multiple Prompts
-- Agents Interacting with External APIs
-
-After creating a template, you can interact with it in the playground, run no-code evaluations, and deploy versions, all from the web UI.
-
-### Applications
-
-An **application** uses a **template** to solve a specific use case.
-For instance, an **application** could use the single-prompt **template** for tasks like:
-
-- **Tweet Generation:** Crafting engaging tweets based on input topics.
-- **Article Summarization:** Condensing long articles into key points.
-
-### Variants
-
-Within each application, you can create **variants**. **Variants** are different configurations of the application, allowing you to experiment with and compare multiple approaches. For example, for the "tweet generation" application, you might create **variants** that:
-
-- Use different prompt phrasings.
-- Adjust model parameters like temperature or maximum tokens.
-- Incorporate different styles or tones (e.g., professional vs. casual).
-
-### Versions
-
-Every **variant** is **versioned** and immutable. When you make changes to a **variant**, a new **version** is created. Each **version** has a **commit id** that uniquely identifies it.
-
-### Environments
-
-**Environments** are the interfaces where your deployed variants are accessible. You can deploy a **version** of a **variant** to an **environment**. Each **environment** has a user-defined name (e.g., development, staging, production) that specifies its context or stage.
-
-You can then integrate the **environment** into your codebase to fetch the configuration deployed on that **environment**. Additionally, you can directly invoke the endpoints of the **environment**, which serve the application running with that configuration.
-
-By default, applications come with three predefined **environments**:
-
-- **Development:** For initial testing and experimentation.
-- **Staging:** For pre-production testing and quality assurance.
-- **Production:** For live use with real users.
-
-:::warning
-When deploying a **variant** to an **environment**, the latest **version** of that **variant** gets deployed. Each **environment** points to a specific **version** of a **variant** (a certain **commit**). Updating the **variant** after deploying does not automatically update the **environment**.
-:::
diff --git a/docs/static/images/agenta-cover.png b/docs/static/images/agenta-cover.png
new file mode 100644
index 0000000000..afffab54de
Binary files /dev/null and b/docs/static/images/agenta-cover.png differ
diff --git a/docs/static/images/agenta_mockup_blackbg.png b/docs/static/images/agenta_mockup_blackbg.png
deleted file mode 100644
index 15e3adcd53..0000000000
Binary files a/docs/static/images/agenta_mockup_blackbg.png and /dev/null differ
diff --git a/docs/static/images/agenta_mockup_grey_bg.png b/docs/static/images/agenta_mockup_grey_bg.png
deleted file mode 100644
index 378586aefb..0000000000
Binary files a/docs/static/images/agenta_mockup_grey_bg.png and /dev/null differ
diff --git a/docs/static/images/agenta_mockup_whitebg.png b/docs/static/images/agenta_mockup_whitebg.png
deleted file mode 100644
index ddecc226c9..0000000000
Binary files a/docs/static/images/agenta_mockup_whitebg.png and /dev/null differ
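
The "Separation of Logic and Configuration" section of the removed guide describes application code that declares a default configuration which the playground and CLI can later override. Below is a minimal sketch of that pattern, assuming the Agenta Python SDK exposes `ag.init()`, `ag.config.default()`, `ag.TextParam`, `ag.FloatParam`, and an `@ag.entrypoint` decorator; treat the exact names as illustrative and check them against the SDK reference.

```python
import agenta as ag
from openai import OpenAI

# Default configuration: the parameters that the playground and CLI can override.
# ag.init, ag.config.default, ag.TextParam, ag.FloatParam, and ag.entrypoint are
# assumed SDK names, not confirmed by this diff.
ag.init()
ag.config.default(
    prompt_template=ag.TextParam("Summarize the following text:\n{text}"),
    temperature=ag.FloatParam(0.7),
)

client = OpenAI()


@ag.entrypoint  # exposes this function as the application's REST endpoint
def summarize(text: str) -> str:
    # Application logic reads the active configuration instead of hard-coding values.
    prompt = ag.config.prompt_template.format(text=text)
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=ag.config.temperature,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

Because the parameters live in `ag.config` rather than in the function body, the backend can apply a different configuration (a new variant or version) without any change to the application code.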
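
The "Environments" section of the removed concepts page mentions invoking the endpoints of an environment to call whichever version is deployed there. A rough sketch of such a call is shown below; the host, application name, route, and payload fields are placeholders rather than the documented API, so adapt them to the endpoint your Agenta deployment actually exposes.

```python
import requests

# Placeholders: substitute your own Agenta host, application name, and inputs.
AGENTA_HOST = "http://localhost"
APP_NAME = "tweet-generation"
ENVIRONMENT = "production"

# Hypothetical route for the container serving the version deployed to this environment.
url = f"{AGENTA_HOST}/{APP_NAME}/{ENVIRONMENT}/generate"

response = requests.post(
    url,
    json={"topic": "open-source LLMOps"},  # input fields are defined by the application's entrypoint
    timeout=30,
)
response.raise_for_status()
print(response.json())
```

Since each environment points to a specific version (a certain commit), the same call keeps serving that version until a new one is explicitly deployed to the environment.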