Service

  • wren-engine: the engine service. Check out the example here: wren-engine/example.
  • wren-ai-service: the AI service.
  • qdrant: the vector store used by the AI service.
  • wren-ui: the UI service.
  • bootstrap: puts the required files into the shared volume for the engine service.
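
A minimal docker-compose sketch of how these services fit together is shown below. The service names follow the list above, but the image tags, mount paths, and volume name are illustrative assumptions, not a copy of the actual compose file:

    services:
      bootstrap:
        image: ghcr.io/canner/wren-bootstrap:latest    # illustrative tag
        volumes:
          - data:/app/data        # drops required files (e.g. sample.json) into the shared volume
      wren-engine:
        image: ghcr.io/canner/wren-engine:latest       # illustrative tag
        depends_on:
          - bootstrap
        volumes:
          - data:/usr/src/app/etc # assumed mount point for mdl/, accounts, config.properties
      qdrant:
        image: qdrant/qdrant:latest                    # vector store used by the AI service
      wren-ai-service:
        image: ghcr.io/canner/wren-ai-service:latest   # illustrative tag
        depends_on:
          - qdrant
          - wren-engine
      wren-ui:
        image: ghcr.io/canner/wren-ui:latest           # illustrative tag
        ports:
          - "${HOST_PORT:-3000}:3000"                  # HOST_PORT comes from .env (see Optional below)
        depends_on:
          - wren-ai-service
    volumes:
      data: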

Volume

Services share data using a data volume.

The path structure is as follows:

  • /mdl
    • *.json (sample.json is placed here during bootstrap)
  • accounts
  • config.properties
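
After bootstrap has run, you can sanity-check this layout by listing the volume from a throwaway container. A hedged example: the compose-generated volume name below ("docker_data", i.e. project name plus volume name) is a guess, so confirm it with docker volume ls first.

    # List the shared volume's contents via a disposable Alpine container.
    # Replace docker_data with the actual volume name from `docker volume ls`.
    docker run --rm -v docker_data:/data alpine ls -R /data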

Network

How to start with OpenAI

  1. Copy .env.example to .env and set your OpenAI API key.
  2. Copy config.example.yaml to config.yaml for the AI service configuration.
  3. Start all services: docker-compose --env-file .env up -d.
  4. Stop all services: docker-compose --env-file .env down.
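
Put together, the whole sequence looks like this (sk-... is a placeholder for your real key):

    # 1. Create your .env and set the OpenAI API key
    cp .env.example .env
    #    then edit .env, e.g. OPENAI_API_KEY=sk-...   (check .env.example for the exact variable name)

    # 2. Create the AI service configuration
    cp config.example.yaml config.yaml

    # 3. Start all services in the background
    docker-compose --env-file .env up -d

    # 4. Stop all services when finished
    docker-compose --env-file .env down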

Optional

  • If port 3000 is already in use on your machine, modify HOST_PORT in .env.
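
For example, to serve the UI on port 8080 instead:

    # .env
    HOST_PORT=8080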

How to start with custom LLM

To start with a custom LLM, the process is similar to starting with OpenAI. The main difference is that you need to modify the config.yaml file created in the previous step. After modifying the file, restart the AI service by running docker-compose --env-file .env up -d --force-recreate wren-ai-service.
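
As a loose illustration only, a custom LLM entry in config.yaml might look like the sketch below. The field and provider names here are assumptions, not the actual wren-ai-service schema; the AI Service Configuration guide referenced below defines the real format.

    # Hypothetical config.yaml fragment -- all keys below are placeholders.
    type: llm
    provider: my_custom_provider                       # hypothetical provider id
    models:
      - model: my-model-name                           # hypothetical model id
        api_base: http://host.docker.internal:11434/v1 # e.g. a local OpenAI-compatible endpoint

After editing, recreate the AI service so it picks up the new file:

    docker-compose --env-file .env up -d --force-recreate wren-ai-service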

For detailed information on how to modify the configuration for different LLM providers and models, please refer to the AI Service Configuration guide, which provides comprehensive instructions on setting up various LLM providers, embedders, and other components of the AI service.