
LokAI TUI

LokAI is a local AI assistant in your terminal.

Running the app

Before we get started, make sure you have the following tools installed: Ollama and the Rust toolchain (cargo).

The Ollama server has to be up and running before you start LokAI.

ollama serve
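
If you are not sure whether the server is up, you can query its root endpoint; assuming the default address, a running server replies with "Ollama is running":

curl http://localhost:11434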

Open a separate terminal and pull down your favourite model. The default model used by LokAI is phi3:3.8b, but you can use any of the available models.

ollama pull phi3:3.8b
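
To confirm the model was downloaded and is available locally, you can list your models:

ollama list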

Once you have the model, you can run the LokAI app.

cargo run  # for more configuration flags see CLI section

CLI

LokAI allows you to set some options through the CLI.

  • --ollama-url [default: http://localhost:11434] - if you run LokAI in Docker you may need to use http://host.docker.internal:11434
  • --default-llm-model [default: phi3:3.8b] - the model you would like to use for all of your conversations. You can pass any model supported by Ollama, but make sure you have it downloaded before you start LokAI.
  • --database-url [default: sqlite::memory:] - the default value spins up a new in-memory instance that won't persist conversations between restarts. An example value for a persistent database is sqlite://db.sqlite3.

To use one or more of these options, simply type:

cargo run -- --database-url <DB_URL> --ollama-url <OLLAMA_URL>
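
For example, to keep conversations in a local SQLite file while talking to an Ollama server running on the Docker host (values taken from the option descriptions above):

cargo run -- --database-url sqlite://db.sqlite3 --ollama-url http://host.docker.internal:11434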

To print help type:

cargo run -- --help

Shortcuts

Shortcut      Action                         App Context
Ctrl + c      Exit                           Global
Ctrl + n      Add new conversation           Global
Tab           Next focus                     Global
Shift + Tab   Previous focus                 Global
↑ / ↓         Switch between conversations   Conversation sidebar
Delete        Delete selected conversation   Conversation sidebar
↑ / ↓         Scroll up/down                 Chat/Prompt
Esc           Cancel action                  Popups

Roadmap

  • ? Dedicated settings tab, allowing users to change things like:
    • default LLM model
    • ollama URL
  • ? Settings persistence - save a TOML file in the user's dir
  • Better error handling - new Result and Error structs allowing for a clear distinction between critical and non-critical errors (see the sketch after this list)
  • If nothing is displayed in the Chat area, print shortcuts and a welcome graphic (logo)
    • Create logo
  • Conversations
    • Adding new conversation - design a dedicated pop-up
    • Deleting conversation
    • ? Changing settings for conversations, e.g. LLM model
  • Chat
    • Highlighting code snippets returned by LLM
    • Ability to copy chat or selected messages to clipboard
  • Prompt
    • Set the prompt's border to different colors depending on factors like: empty prompt, LLM still replying, error
  • ? Ollama
    • Downloading models (in the background)
    • Polling Ollama Server to get the status - presenting status to users
    • Present all available local models
  • Popup for presenting shortcuts
  • Bar that presents sliding messages (iterator for a piece of text that moves from right to left)
  • Tracing
  • Tests
    • Improve unit test coverage
    • Create integration tests
  • Documentation improvements
  • Release the tool to crates.io
  • Use kalosm instead of Ollama
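
As a rough illustration of the error-handling item above, here is a minimal sketch of what a critical/non-critical split could look like. The names (LokaiError, LokaiResult, handle) are hypothetical and not part of the current codebase:

// Hypothetical sketch: not part of the current codebase.
// Separates errors that should terminate the app from errors
// that can be surfaced to the user while the app keeps running.
#[derive(Debug)]
enum LokaiError {
    // Critical: the app cannot continue (e.g. database unavailable).
    Critical(String),
    // Non-critical: report to the user and carry on
    // (e.g. a single request to Ollama failed).
    NonCritical(String),
}

impl LokaiError {
    fn is_critical(&self) -> bool {
        matches!(self, LokaiError::Critical(_))
    }
}

type LokaiResult<T> = Result<T, LokaiError>;

fn handle(err: LokaiError) {
    if err.is_critical() {
        // Tear down the TUI and exit.
        eprintln!("fatal: {err:?}");
        std::process::exit(1);
    } else {
        // E.g. show a popup and continue the event loop.
        eprintln!("warning: {err:?}");
    }
}

A design like this lets the event loop treat NonCritical errors as UI notifications while reserving process termination for Critical ones.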