### Prerequisites

- Docker Desktop installed and running
- Ollama installed on your host system
- NVIDIA GPU with updated drivers (if using GPU acceleration)
### Start Ollama

Start Ollama on your host system:

```bash
ollama serve
```
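Once the server is running, a quick way to confirm it is listening on the default port (11434) is Ollama's `/api/tags` endpoint, which lists locally installed models:

```bash
# Should return a JSON object listing locally installed models
curl http://localhost:11434/api/tags
```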
### Verify Docker Desktop

- Ensure Docker Desktop is running
- Check that WSL integration is enabled in Docker Desktop settings
- Make sure virtualization is enabled in the BIOS
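You can also confirm the engine is reachable from the command line:

```bash
# Prints the server version if Docker Desktop is up and responding
docker info --format '{{.ServerVersion}}'
```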
### Start the containers

```bash
docker-compose up -d
```

Once the containers are up, the web interfaces are available at:

- Ollama Web UI: http://localhost:3001
- CUDA Web UI: http://localhost:3002
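A quick reachability check for both UIs from the command line (each should print an HTTP status code once its container is healthy):

```bash
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:3001
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:3002
```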
### Downloading and using models

To download a model, open a terminal and run:

```bash
ollama pull <model_name>
```

Example models:

- llama2
- mistral
- codellama
- neural-chat
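For example, pulling llama2 and chatting with it from the terminal looks like this; any model pulled this way also becomes available in the Web UI:

```bash
# Download the model into the host's Ollama store
ollama pull llama2

# Start an interactive chat session in the terminal
ollama run llama2
```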
### Connection Issues

- Ensure Ollama is running on port 11434
- Check Docker Desktop status
- Run `docker ps` to verify the containers are running
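These checks can be run directly from a terminal:

```bash
# Ollama responds on 11434 when the server is up
curl -s http://localhost:11434 && echo

# Confirm both Web UI containers are running
docker ps --filter "name=open-webui" --format "{{.Names}}: {{.Status}}"
```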
### GPU Issues

- Verify NVIDIA drivers are up to date
- Check the `nvidia-smi` output
- Ensure the GPU is recognized in Docker Desktop
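One way to confirm that containers can actually see the GPU is to run `nvidia-smi` inside a CUDA base image; the image tag below is only illustrative, and any CUDA image compatible with your driver will do:

```bash
# On the host: driver version and visible GPUs
nvidia-smi

# Inside a container: should print the same GPU table if passthrough works
docker run --rm --gpus all nvidia/cuda:12.3.2-base-ubuntu22.04 nvidia-smi
```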
### Container Logs

```bash
docker logs open-webui-ollama
docker logs open-webui-cuda
```
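When reproducing an issue it is often easier to stream the logs as they are written; `docker logs` supports follow and tail flags:

```bash
# Stream new log lines, starting from the last 100
docker logs -f --tail 100 open-webui-ollama
```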
### Stopping the containers

```bash
docker-compose down
```

Note that `docker-compose down` leaves named volumes intact, so UI data survives a restart; add the `-v` flag only if you also want to delete the stored volumes.
### Data persistence

- UI settings and chat history are stored in the `open-webui` volume
- Models downloaded through Ollama are stored on your host system
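To see where this data lives, you can inspect the volume and list the host-side models. Note that Compose may prefix the volume name with the project name, so check `docker volume ls` if the name below is not found:

```bash
# Show the mountpoint of the Open WebUI data volume
docker volume inspect open-webui

# List models stored by Ollama on the host
ollama list
```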