Getting Started - Docs - Changelog - Bug reports - Discord
⚠️ Jan is currently in Development: Expect breaking changes and bugs!
Jan lets you run AI on your own hardware, with 1-click installs for the latest models. It's easy to use yet powerful, with helpful tools to monitor and manage software and hardware performance.
Jan runs on a wide variety of hardware. We run on consumer-grade GPUs and Mac Minis, as well as datacenter-grade DGX H100 clusters.
Jan can be run as a server or cloud-native application for enterprise. We offer enterprise plugins for LDAP integration and Audit Logs. Contact us at [email protected] for more details.
Jan is free, open core, and Sustainable Use Licensed.
Self-Hosted AI
- Self-hosted Llama2 and LLMs
- Self-hosted StableDiffusion and Controlnet
- 1-click installs for Models (coming soon)
3rd-party AIs
- Connect to ChatGPT, Claude via API Key (coming soon)
- Security policy engine for 3rd-party AIs (coming soon)
- Pre-flight PII and Sensitive Data checks (coming soon)
Multi-Device
- Web App
- Jan Mobile support for custom Jan server (in progress)
- Cloud deployments (coming soon)
Organization Tools
- Multi-user support
- Audit and Usage logs (coming soon)
- Compliance and Audit policy (coming soon)
Hardware Support
- Nvidia GPUs
- Apple Silicon (in progress)
- CPU support via llama.cpp
- Nvidia GPUs using TensorRT (in progress)
👋 https://docs.jan.ai (Work in Progress)
⚠️ Jan is currently in Development: Expect breaking changes and bugs!
Jan is currently packaged as a Docker Compose application.
- Docker (Installation Instructions)
- Docker Compose (Installation Instructions)
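Before cloning, you can confirm both prerequisites are available from the command line:

# Both commands should print a version string
docker --version
docker compose version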
git clone https://github.com/janhq/jan.git
cd jan
We provide a sample `.env` file that you can use to get started.
cp sample.env .env
You will need to set the following `.env` variables:
# TODO: Document .env variables
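As a rough sketch of what this file might contain until the variables are documented, the services table further down indicates it holds at least the Keycloak admin credentials; the values below are placeholders, not defaults:

# Illustrative values only - see sample.env for the authoritative list
KEYCLOAK_ADMIN=admin
KEYCLOAK_ADMIN_PASSWORD=changeme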
Note: These steps will change soon, as we will be switching to Nitro, an accelerated inference server written in C++.
For complete Mambaforge installation instructions, see the miniforge repo.
Install Mamba to handle the native Python bindings (which can yield better performance on Apple Silicon and NVIDIA hardware):
curl -L -O "https://github.com/conda-forge/miniforge/releases/latest/download/Mambaforge-$(uname)-$(uname -m).sh"
bash Mambaforge-$(uname)-$(uname -m).sh
rm Mambaforge-$(uname)-$(uname -m).sh
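After restarting your shell, a quick check confirms the installation succeeded:

# Should print the installed mamba version
mamba --version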
# Create environment
conda create -n jan python=3.9.16
conda activate jan
Uninstall any previous versions of llama-cpp-python
pip uninstall llama-cpp-python -y
Note: This step will change soon, once Nitro (our accelerated inference server written in C++) is released.
- On Mac
# See https://github.com/abetlen/llama-cpp-python/blob/main/docs/install/macos.md
CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install -U llama-cpp-python --no-cache-dir
pip install 'llama-cpp-python[server]'
- On Linux with NVIDIA GPU Hardware Acceleration
# See https://github.com/abetlen/llama-cpp-python#installation-with-hardware-acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python
pip install 'llama-cpp-python[server]'
- On Linux with Intel/AMD CPUs (AVX2/AVX-512 support)
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" FORCE_CMAKE=1 pip install llama-cpp-python
pip install 'llama-cpp-python[server]'
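Whichever platform you are on, a quick import check (with the jan environment still active) confirms the native build works:

# Import fails with an error if the native build is broken
python3 -c "import llama_cpp; print('llama-cpp-python OK')"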
We recommend Llama2-7B (4-bit quantized) as a basic model to get started.
You will need to download the models to the `models` folder at the root of the repository.
# Downloads the model (~4 GB)
# Download time depends on your internet connection and HuggingFace's bandwidth
# Any model in `.gguf` format will work - https://huggingface.co/models?search=gguf
wget https://huggingface.co/TheBloke/Llama-2-7B-GGUF/resolve/main/llama-2-7b.Q4_0.gguf -P models
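Once the download finishes, verify the file landed in the right place:

# Should show a file of roughly 4 GB
ls -lh models/llama-2-7b.Q4_0.gguf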
- Run the model on the host machine
# Change the value of --model to the path of your downloaded model
# --n_gpu_layers 1 enables the accelerator (Metal on Mac, the NVIDIA GPU on Linux)
# This service runs at `http://localhost:8000` on the host
# The backend service inside Docker Compose connects to it via `http://host.docker.internal:8000`
python3 -m llama_cpp.server --model models/llama-2-7b.Q4_0.gguf --n_gpu_layers 1
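With the server running, a minimal sanity check is to hit its OpenAI-compatible completions endpoint (the prompt and token count here are arbitrary):

# Expects a JSON completion response from the local server
curl http://localhost:8000/v1/completions -H "Content-Type: application/json" -d '{"prompt": "Hello", "max_tokens": 8}'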
Jan utilizes Docker Compose to run all services:
docker compose up -d # Detached mode
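A few companion commands are useful when managing the stack:

# List the running services and their status
docker compose ps
# Follow logs from all services
docker compose logs -f
# Stop and remove the containers when finished
docker compose down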
The table below summarizes the services and their respective URLs and credentials.
| Service | Container Name | URL and Port | Credentials |
|---|---|---|---|
| Jan Web | jan-web-* | http://localhost:3000 | Set in `conf/keycloak_conf/example-realm.json` (default username/password) |
| Hasura (Backend) | jan-graphql-engine-* | http://localhost:8080 | Set in `conf/sample.env_app-backend` (`HASURA_GRAPHQL_ADMIN_SECRET`) |
| Keycloak (Identity) | jan-keycloak-* | http://localhost:8088 | Set in `.env` (`KEYCLOAK_ADMIN`, `KEYCLOAK_ADMIN_PASSWORD`) |
| PostgresDB | jan-postgres-* | http://localhost:5432 | Set in `.env` |
- Refactor Keycloak Instructions into main README.md
- Changing login theme
- Launch the web application via http://localhost:3000.
- Login with the default user (username: `username`, password: `password`).
- TODO
Jan is a commercial company with a Fair Code business model. This means that while we are open source and can be used for free, we require commercial licenses for specific use cases (e.g. hosting Jan as a service).
We are a team of engineers passionate about AI, productivity and the future of work. We are funded through consulting contracts and enterprise licenses. Feel free to reach out to us!
Jan comprises several repositories:
| Repo | Purpose |
|---|---|
| Jan | AI platform to run AI in the enterprise. Easy to use for end users, and packed with useful organizational and compliance features. |
| Jan Mobile | Mobile app that can be pointed to a custom Jan server. |
| Nitro | Inference engine that runs AI on different types of hardware. Offers popular API formats (e.g. OpenAI, Clipdrop). Written in C++ for blazing-fast performance. |
Jan builds on top of several open-source projects:
- Keycloak Community (Apache-2.0)
- Hasura Community Edition (Apache-2.0)
We may re-evaluate this in the future, given different customer requirements.
Contributions are welcome! Please read the CONTRIBUTING.md file for guidelines on how to contribute to this project.
Please note that Jan intends to build a sustainable business that can provide high quality jobs to its contributors. If you are excited about our mission and vision, please contact us to explore opportunities.
- For support: please file a GitHub ticket
- For questions: join our Discord here
- For long form inquiries: please email [email protected]