Intelli-Agent

Intelli-Agent: Streamlined Workflow for Building Agent-Based Applications

Intelli-Agent offers a streamlined workflow for developing scalable, production-grade agent-based applications, such as conversational chatbots. Key features include:

  1. Enterprise Knowledge Base Creation: Users can upload private documents in various formats (PDF, DOCX, HTML, TXT, MD, JSON, JSONL, PNG, JPG, JPEG, WEBP) to construct a personalized knowledge base.

  2. Flexible Modeling Options: Choose from multiple models (Agent, Chat, RAG) to suit diverse requirements. For instance, the Agent model can interpret user intent, select appropriate tools, and act on iterative results.

  3. Configurable Chat-Based UI: Our React/Next.js chat interface is user-friendly, making it easy to configure, explore, and customize to meet your specific needs.

  4. Comprehensive RESTful API: Our full-featured API facilitates easy integration with existing applications, enhancing functionality and user experience.

Intelli-Agent is designed to empower developers to rapidly deploy intelligent, context-aware applications with minimal overhead and maximum efficiency.
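As a quick illustration of the supported upload formats listed above, the sketch below validates a file's extension before upload. The function and constant names are illustrative only, not part of the project's actual API.

```python
# Hypothetical helper: check that a document uses one of the formats the
# knowledge base accepts (PDF, DOCX, HTML, TXT, MD, JSON, JSONL, PNG, JPG,
# JPEG, WEBP). Names here are assumptions for illustration.
from pathlib import Path

SUPPORTED_EXTENSIONS = {
    ".pdf", ".docx", ".html", ".txt", ".md", ".json", ".jsonl",
    ".png", ".jpg", ".jpeg", ".webp",
}

def is_supported_document(filename: str) -> bool:
    """Return True if the file extension is accepted by the knowledge base."""
    return Path(filename).suffix.lower() in SUPPORTED_EXTENSIONS
```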


Architecture

[Architecture diagram]

Enterprise Knowledge Base Creation

We manage the entire document ETL process, including format recognition, content extraction, metadata conversion, and semantic segmentation, seamlessly in the background.
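The ETL stages described above can be sketched as a small pipeline. This is a minimal illustration of the flow (recognize format, extract content, attach metadata, segment), not the project's implementation; all names are assumptions, and the fixed-size segmenter stands in for real semantic segmentation.

```python
# Illustrative document ETL pipeline: format recognition -> content
# extraction -> metadata conversion -> segmentation. Function names are
# hypothetical, not taken from the Intelli-Agent codebase.
from dataclasses import dataclass, field

@dataclass
class DocumentChunk:
    text: str
    metadata: dict = field(default_factory=dict)

def recognize_format(filename: str) -> str:
    # Format recognition by extension; real systems may sniff content too.
    return filename.rsplit(".", 1)[-1].lower()

def extract_content(raw: str, fmt: str) -> str:
    # Real extractors are format-specific (PDF parser, HTML stripper, ...).
    return raw.strip()

def segment(text: str, max_chars: int = 200) -> list[str]:
    # Naive fixed-size segmentation standing in for semantic segmentation.
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def etl(filename: str, raw: str) -> list[DocumentChunk]:
    fmt = recognize_format(filename)
    content = extract_content(raw, fmt)
    return [DocumentChunk(t, {"source": filename, "format": fmt})
            for t in segment(content)]
```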

[Offline workflow diagram]

Flexible Modeling Options

We handle complex tasks such as vector embedding, intent detection, knowledge retrieval, and re-ranking behind the scenes.

[Online workflow diagram]
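The retrieval portion of the online flow can be sketched in a few lines. This toy version uses bag-of-words vectors and cosine similarity purely for illustration; the real system uses learned embeddings, OpenSearch retrieval, and a dedicated re-ranking model.

```python
# Toy retrieval sketch: embed the query, score each document by cosine
# similarity, and return the top matches. A stand-in for the real
# embedding + retrieval + re-ranking pipeline, not the project's code.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Bag-of-words "embedding"; real systems use learned dense vectors.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], top_k: int = 2) -> list[str]:
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:top_k]
```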

Quick Start

Follow these steps to get started:

  1. Prerequisites
  2. Prepare Model Assets
  3. Deploy CDK Template
  4. API Reference

Prerequisites

First, clone the repository:

git clone <this repo>

Then, you need to install the following prerequisites:

cd source/infrastructure
npm install

Prepare Model Assets

Execute the preparation script in each model folder. Make sure Python is installed.

First, navigate to the model directory and run the prepare_model.sh script. This script requires an S3 bucket name as an argument, which will be used to upload the model. Make sure the bucket is in the same region as the CDK deployment.

cd source/model/
./prepare_model.sh -s <Your S3 Bucket Name>

Next, navigate to the ETL code directory. Depending on your region, you will use either the Dockerfile or DockerfileCN. The model.sh script requires the Dockerfile, ETL image name, AWS region, and ETL image tag as arguments. The ETL image will be pushed to your ECR repo with the image name you specified.

cd source/model/etl/code
sh model.sh <./Dockerfile or ./DockerfileCN> <EtlImageName> <AWS_REGION> <EtlImageTag>

For example, to prepare the ETL model asset in the GCR (Greater China) region, the command is:

sh model.sh ./DockerfileCN llm-bot-cn cn-northwest-1 latest

Finally, if this is the first time using Amazon OpenSearch in this account, you will need to create a service-linked role for Amazon OpenSearch Service. This role is necessary to allow Amazon OpenSearch Service to manage resources on your behalf.

aws iam create-service-linked-role --aws-service-name es.amazonaws.com

Build Frontend

cd source/portal
npm install
npm run build

Deploy CDK Template

Make sure Docker is installed and that the CDK command is executed in the same region where the model files were uploaded in the previous step.

Log in to AWS ECR Public to pull the image from the public repository.

aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws

Start the deployment by executing the following command:

cd source/infrastructure
npx cdk deploy --parameters S3ModelAssets=<Your S3 Bucket Name> --parameters SubEmail=<Your email address> --parameters EtlImageName=<Your ETL model name> --parameters ETLTag=<Your ETL tag name>

To deploy the offline process only, you can configure context parameters to skip the online process.

npx cdk deploy --parameters S3ModelAssets=<Your S3 Bucket Name> --parameters SubEmail=<Your email address> --parameters EtlImageName=<Your ETL model name> --parameters ETLTag=<Your ETL tag name> --context DeploymentMode="OFFLINE_EXTRACT"

Deployment Parameters

| Parameter | Description |
| --- | --- |
| S3ModelAssets | Name of the S3 bucket that stores the model assets |
| SubEmail | Email address to receive notifications |
| OpenSearchIndex | OpenSearch index name used to store the knowledge; if the index does not exist, the solution will create one |
| EtlImageName | ETL image name (e.g. etl-model); set when you execute the source/model/etl/code/model.sh script |
| EtlTag | ETL image tag (e.g. latest, v1.0, v2.0; default: latest); set when you execute the source/model/etl/code/model.sh script |

Optional Context Parameters

| Context | Description |
| --- | --- |
| DeploymentMode | The deployment mode. There are three modes: OFFLINE_EXTRACT, OFFLINE_OPENSEARCH, and ALL. Default: ALL. |
| LayerPipOption | Options passed to the Python package installer (pip) when building the Lambda layer. Use it to set a PyPI mirror (e.g. -i https://pypi.tuna.tsinghua.edu.cn/simple) when your local development environment is in the GCR region. Default: empty. |

API Reference

After CDK deployment, you can use an HTTP client such as Postman or cURL to invoke the API by following the API schema.
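As a hedged example, the snippet below builds a chat request against the deployed API using only Python's standard library. The endpoint path ("/llm") and payload fields are assumptions for illustration; substitute the actual URL and schema from your CDK deployment outputs.

```python
# Hypothetical API invocation sketch. The "/llm" path and the payload
# fields ("query", "type") are assumed for illustration only; consult your
# deployment's API schema for the real values.
import json
import urllib.request

def build_chat_request(api_base: str, question: str) -> urllib.request.Request:
    payload = {"query": question, "type": "chat"}
    return urllib.request.Request(
        url=f"{api_base}/llm",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Send with: urllib.request.urlopen(build_chat_request(base_url, question))
```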

Test

For detailed test information, please refer to the Test Doc

Optional Steps

Upload Embedding File

Upload the embedding file to the S3 bucket created in the previous step. The object-created event will trigger the Step Functions workflow, which executes the Glue job for online processing.

aws s3 cp <Your documents> s3://llm-bot-documents-<Your account id>-<region>/<Your S3 bucket prefix>/

Other Samples

Try the Bedrock tutorial to quickly get started with the Bedrock models and LangChain.

Contribution

See CONTRIBUTING for more information.

License

This project is licensed under the Apache-2.0 License.
