Your own self-hosted AI assistant

lukaszKielar/lokai-web

LokAI

LokAI is a self-hosted, ChatGPT-like AI assistant that integrates with Ollama.

The goal of this project was to play around with Ollama, 🦀 Rust 🦀, Axum, Askama, HTMX and Hyperscript.

I started the project with Leptos, but I spent too much time waiting for it to compile, so I moved towards something more lightweight.

The project has many flaws, but I had a tonne of fun working on it, and I hope it may be an inspiration for some more ambitious projects out there.

(Demo video: demo.mov)

Running app

Before you run the app, make sure you have Ollama installed.

When it's ready, run:

ollama serve
# or with docker
docker run -v ~/.docker-share/ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama # create and start the container (first run only)
docker stop ollama  # stop the container
docker start ollama # start it again later
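To confirm the server is up before starting LokAI, you can hit Ollama's root endpoint, which by default answers with a short plain-text status message:

```shell
# Quick health check: a running Ollama server replies with "Ollama is running"
curl http://localhost:11434
```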

By default LokAI uses the phi3:3.8b LLM model, so if you don't want to wait ages for the first response, you should download the model beforehand:

ollama pull phi3:3.8b

Docker is the recommended way to run LokAI locally. Build and run the image:

docker build -t lokai .
docker run --name lokai -p 3000:3000 lokai

Environment variables you can define:

| Env variable | Default value | Description |
| --- | --- | --- |
| DATABASE_URL | sqlite://db.sqlite3 | URL of the SQLite database |
| OLLAMA_URL | http://host.docker.internal:11434 | URL of the Ollama server |
| LOKAI_DEFAULT_LLM_MODEL | phi3:3.8b | Default LLM model used for new conversations |
| LOKAI_HOST | 0.0.0.0 | LokAI host |
| LOKAI_PORT | 3000 | LokAI port |
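These defaults can be overridden at container start with `-e` flags. A sketch (the model name and port here are just illustrations, not values the project requires; any model available to your Ollama server should work):

```shell
# Hypothetical example: run LokAI on port 8080 with a different default model
docker run --name lokai \
  -p 8080:8080 \
  -e LOKAI_PORT=8080 \
  -e LOKAI_DEFAULT_LLM_MODEL=llama3:8b \
  lokai
```

Remember to keep the `-p` host mapping in sync with `LOKAI_PORT`, since the app binds to that port inside the container.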

Once the container is running, navigate to http://localhost:3000 and start playing around with LokAI.

Development

DevContainers

If you use VSCode with the Dev Containers extension, simply open the project in the IDE, and VSCode will recognise the configuration and build the proper dev environment for you.

Manual installs

To be able to develop and run the app locally, you need to install the following:

  • Rust
  • cargo-watch
  • sqlx-cli
  • tailwindcss

You can sneak a peek at the installation commands in .devcontainer/Dockerfile.

Once you have everything installed, you can run the app in hot-reloading mode:

cargo watch -x run
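If you are also changing styles, Tailwind needs its own watcher running alongside `cargo watch`. The input/output paths below are assumptions for illustration, not the project's actual ones; check .devcontainer/Dockerfile or the repository's build setup for the real paths:

```shell
# Hypothetical paths: rebuild the CSS whenever templates or styles change
tailwindcss -i ./styles/input.css -o ./assets/main.css --watch
```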

Unit tests

Simply run:

cargo test

Licensing

The project is licensed under the MIT license.

Acknowledgements

This project took inspiration from Monte9/nextjs-tailwindcss-chatgpt-clone and MoonKraken/rusty_llama.