Jan - Run your own AI

Jan is an open source alternative to ChatGPT that runs 100% offline on your computer.


Getting Started - Docs - Changelog - Bug reports - Discord

⚠️ Jan is currently in Development: Expect breaking changes and bugs!

Jan lets you run AI on your own hardware, with 1-click installs for the latest models. It is easy to use yet powerful, with helpful tools to monitor and manage software and hardware performance.

Jan runs on a wide variety of hardware: from consumer-grade GPUs and Mac Minis to datacenter-grade DGX H100 clusters.

Jan can be run as a server or cloud-native application for enterprise. We offer enterprise plugins for LDAP integration and Audit Logs. Contact us at [email protected] for more details.

Jan is free, open core, and Sustainable Use Licensed.

Demo

[Demo GIF: Jan web app]

Features

Self-Hosted AI

  • Self-hosted Llama2 and other LLMs
  • Self-hosted Stable Diffusion and ControlNet
  • 1-click installs for Models (coming soon)

3rd-party AIs

  • Connect to ChatGPT and Claude via API key (coming soon)
  • Security policy engine for 3rd-party AIs (coming soon)
  • Pre-flight PII and Sensitive Data checks (coming soon)

Multi-Device

  • Web App
  • Jan Mobile support for custom Jan server (in progress)
  • Cloud deployments (coming soon)

Organization Tools

  • Multi-user support
  • Audit and Usage logs (coming soon)
  • Compliance and Audit policy (coming soon)

Hardware Support

  • NVIDIA GPUs
  • Apple Silicon (in progress)
  • CPU support via llama.cpp
  • NVIDIA GPUs using TensorRT (in progress)

Documentation

👋 https://docs.jan.ai (Work in Progress)

Installation

⚠️ Jan is currently in Development: Expect breaking changes and bugs!

Step 1: Install Docker

Jan is currently packaged as a Docker Compose application.
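
You will need both Docker and the Docker Compose v2 plugin. A quick way to verify they are installed (standard Docker CLI commands):

docker --version
docker compose version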

Step 2: Clone Repo

git clone https://github.com/janhq/jan.git
cd jan

Step 3: Configure .env

We provide a sample .env file that you can use to get started.

cp sample.env .env

You will need to set the following .env variables:

# TODO: Document .env variables
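
The authoritative list of variables lives in sample.env. As a hypothetical example based on the services table in Step 5, the Keycloak admin credentials are set here:

# Hypothetical example values - consult sample.env for the full list
KEYCLOAK_ADMIN=admin
KEYCLOAK_ADMIN_PASSWORD=changeme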

Step 4: Install Models

Note: This step will change soon, as we will be switching to Nitro, an accelerated inference server written in C++.

Step 4.1: Install Mamba

For complete Mambaforge installation instructions, see the miniforge repo.

Install Mamba to handle the native Python bindings (which can yield better performance on Apple Silicon Macs and NVIDIA GPUs):

curl -L -O "https://github.com/conda-forge/miniforge/releases/latest/download/Mambaforge-$(uname)-$(uname -m).sh"
bash Mambaforge-$(uname)-$(uname -m).sh
rm Mambaforge-$(uname)-$(uname -m).sh

# Create environment
conda create -n jan python=3.9.16
conda activate jan
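
To confirm the environment was created and activated (a quick sanity check):

python --version   # should report Python 3.9.16
which python       # should resolve inside the jan environment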

Uninstall any previous versions of llama-cpp-python

pip uninstall llama-cpp-python -y

Step 4.2: Install llama-cpp-python

Note: This step will change soon once Nitro (our accelerated inference server written in C++) is released

  • On Mac
# See https://github.com/abetlen/llama-cpp-python/blob/main/docs/install/macos.md
CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install -U llama-cpp-python --no-cache-dir
pip install 'llama-cpp-python[server]'
  • On Linux with NVIDIA GPU Hardware Acceleration
# See https://github.com/abetlen/llama-cpp-python#installation-with-hardware-acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python
pip install 'llama-cpp-python[server]'
  • On Linux with Intel/AMD CPUs (with AVX2/AVX-512 support)
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" FORCE_CMAKE=1 pip install llama-cpp-python
pip install 'llama-cpp-python[server]'
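
Whichever variant you installed, you can sanity-check that the server extra is present by printing its usage (this only imports the module; it does not load a model):

python3 -m llama_cpp.server --help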

We recommend Llama2-7B (4-bit quantized) as a basic model to get started.

You will need to download the models to the models folder at the root level.

# Downloads the model (~4 GB)
# Download time depends on your internet connection and HuggingFace's bandwidth
# You can use any model in `.gguf` format - see https://huggingface.co/models?search=gguf
wget https://huggingface.co/TheBloke/Llama-2-7B-GGUF/resolve/main/llama-2-7b.Q4_0.gguf -P models
  • Run the model on the host machine
# Change the value of --model to your corresponding model path
# --n_gpu_layers 1 enables hardware acceleration (Metal on Mac, CUDA on Linux with an NVIDIA GPU)
# This service will run at `http://localhost:8000` on the host
# The backend service inside Docker Compose will connect to it via `http://host.docker.internal:8000`
python3 -m llama_cpp.server --model models/llama-2-7b.Q4_0.gguf --n_gpu_layers 1
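
With the model server running, you can verify it responds. llama-cpp-python exposes an OpenAI-compatible API, so a minimal completion request from the host looks like this:

curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Hello", "max_tokens": 16}'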

Step 5: docker compose up

Jan utilizes Docker Compose to run all services:

docker compose up -d # Detached mode
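
A few standard Compose commands are useful while the stack is running:

docker compose ps        # list service status
docker compose logs -f   # follow logs from all services
docker compose down      # stop and remove the stack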

The table below summarizes the services and their respective URLs and credentials.

Service             | Container Name       | URL and Port          | Credentials
--------------------|----------------------|-----------------------|------------------------------------------
Jan Web             | jan-web-*            | http://localhost:3000 | Set in conf/keycloak_conf/example-realm.json (default username / password)
Hasura (Backend)    | jan-graphql-engine-* | http://localhost:8080 | Set in conf/sample.env_app-backend (HASURA_GRAPHQL_ADMIN_SECRET)
Keycloak (Identity) | jan-keycloak-*       | http://localhost:8088 | Set in .env (KEYCLOAK_ADMIN, KEYCLOAK_ADMIN_PASSWORD)
PostgresDB          | jan-postgres-*       | localhost:5432        | Set in .env
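
To confirm the web-facing services came up, you can probe them from the host (Hasura's /healthz endpoint is part of its standard API; the others just need to answer HTTP):

curl -I http://localhost:3000          # Jan Web
curl http://localhost:8080/healthz     # Hasura health check
curl -I http://localhost:8088          # Keycloak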

Step 6: Configure Keycloak
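
Based on the services table above, a reasonable starting point is to log in to the Keycloak admin console at http://localhost:8088 with the KEYCLOAK_ADMIN / KEYCLOAK_ADMIN_PASSWORD credentials from .env; an example realm configuration ships in conf/keycloak_conf/example-realm.json.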

Step 7: Use Jan

  • Launch the web application at http://localhost:3000.
  • Log in with the default user (username: username, password: password).

Step 8: Deploying to Production

  • TODO

About Jan

Jan is a commercial company with a Fair Code business model. This means that while Jan is open source and can be used for free, we require commercial licenses for specific use cases (e.g. hosting Jan as a service).

We are a team of engineers passionate about AI, productivity and the future of work. We are funded through consulting contracts and enterprise licenses. Feel free to reach out to us!

Repo Structure

Jan comprises several repositories:

Repo       | Purpose
-----------|--------
Jan        | AI platform to run AI in the enterprise. Easy to use for end users, and packed with organizational and compliance features.
Jan Mobile | Mobile app that can be pointed at a custom Jan server.
Nitro      | Inference engine that runs AI on different types of hardware. Offers popular API formats (e.g. OpenAI, Clipdrop). Written in C++ for performance.

Architecture

Jan builds on top of several open-source projects:

  • Keycloak (identity and access management)
  • Hasura (GraphQL backend)
  • PostgreSQL (database)
  • llama.cpp / llama-cpp-python (inference)

We may re-evaluate this in the future, given different customer requirements.

Contributing

Contributions are welcome! Please read the CONTRIBUTING.md file for guidelines on how to contribute to this project.

Please note that Jan intends to build a sustainable business that can provide high quality jobs to its contributors. If you are excited about our mission and vision, please contact us to explore opportunities.

Contact

  • For support: please file a GitHub ticket
  • For questions: join our Discord
  • For long form inquiries: please email [email protected]
