LokAI is a self-hosted, ChatGPT-like AI assistant that integrates with Ollama.
The goal of this project was to play around with Ollama, 🦀 Rust 🦀, Axum, Askama, HTMX, and Hyperscript.
I started the project with Leptos, but I spent too much time waiting for compilation, so I moved towards something more lightweight.
The project has many flaws, but I had a tonne of fun working on it, and I hope it can be an inspiration for more ambitious projects out there.
*(Demo video: demo.mov)*
If you use VS Code with the Dev Containers extension, simply open the project in the IDE, and VS Code will recognise it and build the proper dev environment for you.
To develop and run the app locally, you need to install the following:
- Rust
- cargo-watch
- sqlx-cli
- tailwindcss
You can sneak a peek at the commands in .devcontainer/Dockerfile.
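For reference, a minimal sketch of installing those tools (assuming Rust is already set up via rustup and Node.js is available for the Tailwind CLI; the authoritative steps live in the Dockerfile):

```sh
# Hypothetical install steps; check .devcontainer/Dockerfile for the real ones.
cargo install cargo-watch   # re-runs cargo on file changes
cargo install sqlx-cli      # migrations and database tooling for sqlx
npm install -g tailwindcss  # Tailwind CLI (a standalone binary also exists)
```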
Before you run the app, make sure you have Ollama installed.
When it's ready, run:
```sh
ollama serve

# or with Docker
docker run -v ~/.docker-share/ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama # runs once
docker stop ollama
docker start ollama
```
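To confirm the server is up, you can query Ollama's API on its default port (11434; adjust if you changed it):

```sh
# Lists locally available models; an empty "models" array is fine at this point.
curl http://localhost:11434/api/tags
```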
For now, the name of the LLM model is hardcoded in the app (phi3:3.8b), and if you don't want to wait ages for the first response, you should download the model beforehand:
```sh
ollama pull phi3:3.8b
```
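Optionally, you can smoke-test the model against Ollama's generate endpoint before starting the app (this bypasses LokAI entirely):

```sh
# One-off completion request straight to Ollama, without streaming.
curl http://localhost:11434/api/generate -d '{
  "model": "phi3:3.8b",
  "prompt": "Say hello in one sentence.",
  "stream": false
}'
```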
To run the project locally with hot-reloading, type:
```sh
cargo watch -x run
```
Once it's up, navigate to http://localhost:3000 and start playing around with it.
Simply run:
```sh
cargo test
```
The project is licensed under the MIT license.
This project took inspiration from Monte9/nextjs-tailwindcss-chatgpt-clone and MoonKraken/rusty_llama.