Services:

- `wren-engine`: the engine service. Check out an example here: `wren-engine/example`.
- `wren-ai-service`: the AI service.
- `qdrant`: the vector store the AI service is using.
- `wren-ui`: the UI service.
- `bootstrap`: puts the required files into the volume for the engine service.

The services share data using the `data` volume.
The path structure is as follows:

- `/mdl`
  - `*.json` (`sample.json` will be placed here during bootstrap)
- `accounts`
- `config.properties`
- Check out the Network drivers overview to learn more about the `bridge` network driver.
- Copy `.env.example` to `.env` and modify the OpenAI API key.
- Copy `config.example.yaml` to `config.yaml` for the AI service configuration.
- Start all services: `docker-compose --env-file .env up -d`.
- Stop all services: `docker-compose --env-file .env down`.
- If your port 3000 is occupied, you can modify `HOST_PORT` in `.env`.
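Put together, a typical first run looks like the following shell session. This is a sketch, assuming you run it from the directory containing the compose file; use any editor in place of the "edit" comments:

```shell
# Prepare the environment and AI service configuration files.
cp .env.example .env                 # then edit .env and set your OpenAI API key
cp config.example.yaml config.yaml   # then edit config.yaml if needed

# Start all services in the background.
docker-compose --env-file .env up -d

# ...later, stop all services.
docker-compose --env-file .env down
```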
To start with a custom LLM, the process is similar to starting with OpenAI. The main difference is that you need to modify the `config.yaml` file created in the previous step. After modifying the file, restart the AI service by running `docker-compose --env-file .env up -d --force-recreate wren-ai-service`.
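As a rough sketch of what such a modification might look like, the fragment below swaps in a different model and endpoint. The key names here (`type`, `provider`, `models`, `api_base`) are assumptions modeled on the shape of `config.example.yaml` and may differ between versions, so treat your own `config.example.yaml` and the AI Service Configuration guide as the authoritative schema:

```yaml
# Illustrative fragment only -- check config.example.yaml for the
# authoritative key names in your version of the AI service.
type: llm
provider: litellm_llm            # assumed provider name
models:
  - model: gpt-4o-mini           # replace with your custom model name
    api_base: https://api.openai.com/v1   # point at your provider's endpoint
    kwargs:
      temperature: 0
```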
For detailed information on how to modify the configuration for different LLM providers and models, please refer to the AI Service Configuration. This guide provides comprehensive instructions on setting up various LLM providers, embedders, and other components of the AI service.