Getting Started - Docs - Changelog - Bug reports - Discord
⚠️ Jan is currently in Development: Expect breaking changes and bugs!
Jan is a self-hosted AI platform for running AI in the enterprise: easy to use for users, and packed with useful organizational and security features.
We help you run AI on your own hardware, with 1-click installs for the latest models. Jan runs on a wide variety of hardware, from consumer-grade Mac Minis to datacenter-grade Nvidia H100s.
Jan can also connect to the latest AI engines like ChatGPT, with a security policy engine to protect your organization from sensitive data leaks.
Jan is free, source-available, and fair-code licensed.
Self-Hosted AI
- Self-hosted Llama 2 and other LLMs
- Self-hosted Stable Diffusion and ControlNet
- 1-click installs for Models (coming soon)
3rd-party AIs
- Connect to ChatGPT and Claude via API key (coming soon)
- Security policy engine for 3rd-party AIs (coming soon)
- Pre-flight PII and Sensitive Data checks (coming soon)
Multi-Device
- Web App
- Jan Mobile support for custom Jan server (in progress)
- Cloud deployments (coming soon)
Organization Tools
- Multi-user support
- Audit and Usage logs (coming soon)
- Compliance and Audit policy (coming soon)
Hardware Support
- Nvidia GPUs
- Apple Silicon (in progress)
- CPU support via llama.cpp (in progress)
- Nvidia GPUs using TensorRT (in progress)
👋 https://docs.jan.ai (Work in Progress)
Jan is currently packaged as a Docker Compose application.
- Docker (Installation Instructions)
- Docker Compose (Installation Instructions)
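Before continuing, you can confirm both are installed. These are standard Docker CLI commands; the docker compose syntax used in this README assumes the Compose v2 plugin:
# Verify Docker and the Compose v2 plugin are available
docker --version
docker compose version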
git clone https://github.com/janhq/jan.git
cd jan
# Pull latest submodules
git submodule update --init --recursive
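To double-check that the submodules were pulled, git submodule status lists each one with its checked-out commit:
# Lists each submodule and the commit currently checked out
git submodule status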
We provide a sample .env file that you can use to get started.
cp sample.env .env
You will need to set the following .env variables:
# TODO: Document .env variables
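Until the variables are documented, here is a purely illustrative sketch based on the services table further below; the variable names come from that table and the values are placeholders, not real defaults:
# Illustrative only: names taken from the services table below, values are placeholders
KEYCLOAK_ADMIN=admin
KEYCLOAK_ADMIN_PASSWORD=change-me
# Per the services table, Hasura's HASURA_GRAPHQL_ADMIN_SECRET lives in conf/sample.env_app-backend, not .env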
Note: this step will change soon, as Nitro is becoming its own library.
We recommend Llama2-7B (4-bit quantized) as a basic model to get started.
You will need to download the model to the jan-inference/llms/models folder.
cd jan-inference/llms/models
# Downloads the model (~4 GB)
# Download time depends on your internet connection and HuggingFace's bandwidth
wget https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML/resolve/main/llama-2-7b-chat.ggmlv3.q4_1.bin
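Once the download finishes, a quick sanity check on the file (the size is approximate):
# Confirm the model file is present and roughly the expected size (~4 GB)
ls -lh llama-2-7b-chat.ggmlv3.q4_1.bin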
Jan uses Docker Compose to run all services:
docker compose up
docker compose up -d # Detached mode
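Once Compose is up, you can check that the containers started and watch their logs with standard Docker Compose commands:
# List the Jan containers and their exposed ports
docker compose ps
# Follow logs for all services (Ctrl+C to stop following)
docker compose logs -f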
The table below summarizes the services and their respective URLs and credentials.
| Service | Container Name | URL and Port | Credentials |
|---|---|---|---|
| Jan Web | jan-web-* | http://localhost:3000 | Default username / password, set in conf/keycloak_conf/example-realm.json |
| Hasura (Backend) | jan-graphql-engine-* | http://localhost:8080 | HASURA_GRAPHQL_ADMIN_SECRET, set in conf/sample.env_app-backend |
| Keycloak (Identity) | jan-keycloak-* | http://localhost:8088 | KEYCLOAK_ADMIN and KEYCLOAK_ADMIN_PASSWORD, set in .env |
| Inference Service | jan-llm-* | http://localhost:8000 | Set in .env |
| PostgresDB | jan-postgres-* | http://localhost:5432 | Set in .env |
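As a rough smoke test (assuming the services answer plain HTTP on the ports above; the exact responses will vary by service):
# Each service should at least respond on its port once Compose is up
curl -I http://localhost:3000   # Jan Web
curl -I http://localhost:8080   # Hasura
curl -I http://localhost:8088   # Keycloak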
- Refactor Keycloak Instructions into main README.md
- Changing login theme
- Launch the web application via http://localhost:3000.
- Log in with the default user (username: username, password: password).
- TODO
Jan is a commercial company with a Fair Code business model. This means that while we are open-source and can be used for free, we require commercial licenses for specific use cases (e.g., hosting Jan as a service).
We are a team of engineers passionate about AI, productivity and the future of work. We are funded through consulting contracts and enterprise licenses. Feel free to reach out to us!
Jan comprises several repositories:
| Repo | Purpose |
|---|---|
| Jan | AI platform to run AI in the enterprise. Easy to use for users, and packed with useful organizational and compliance features. |
| Jan Mobile | Mobile app that can be pointed to a custom Jan server. |
| Nitro | Inference engine that runs AI on different types of hardware. Offers popular API formats (e.g. OpenAI, Clipdrop). Written in C++ for blazing-fast performance. |
Jan builds on top of several open-source projects:
- Keycloak Community (Apache-2.0)
- Hasura Community Edition (Apache-2.0)
We may re-evaluate this in the future, given different customer requirements.
Contributions are welcome! Please read the CONTRIBUTING.md file for guidelines on how to contribute to this project.
Please note that Jan intends to build a sustainable business that can provide high quality jobs to its contributors. If you are excited about our mission and vision, please contact us to explore opportunities.
- For support: please file a GitHub ticket
- For questions: join our Discord here
- For long form inquiries: please email [email protected]