docs: update using models documentation (janhq#1288)
0xHieu01 authored Jan 4, 2024
2 parents a9c1be1 + f6aba5e commit 7c784ea
Showing 7 changed files with 178 additions and 33 deletions.
63 changes: 30 additions & 33 deletions docs/docs/guides/04-using-models/02-import-manually.mdx
@@ -13,6 +13,7 @@ keywords:
no-subscription fee,
large language model,
import-models-manually,
local model,
]
---

@@ -24,16 +25,12 @@ This is currently under development.
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";

Jan is compatible with all GGUF models.
## Steps to Manually Import a Local Model

If you cannot find the model you want in the Hub, or have a custom model you want to use, you can import it manually.

In this section, we will show you how to import a GGUF model from [HuggingFace](https://huggingface.co/), using our latest model, [Trinity](https://huggingface.co/janhq/trinity-v1-GGUF), as an example.


### 1. Create a Model Folder

Navigate to the `~/jan/models` folder. You can find this folder by going to `App Settings` > `Advanced` > `Open App Directory`.
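
If you prefer to script this step, below is a minimal Node.js sketch. It assumes the default `~/jan` app directory and uses the example id `trinity-v1-7b` from step 2; adjust both to your setup.

```js
// Sketch: create a model folder under ~/jan/models.
// The ~/jan location and the "trinity-v1-7b" id are assumptions from this guide's example.
const fs = require("fs");
const os = require("os");
const path = require("path");

const modelId = "trinity-v1-7b"; // must match the "id" in model.json (step 2)
const modelDir = path.join(os.homedir(), "jan", "models", modelId);

fs.mkdirSync(modelDir, { recursive: true });
console.log(`Created ${modelDir}`);
```
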
@@ -126,45 +123,45 @@ Edit `model.json` and include the following configurations:
- Ensure the filename is `model.json`.
- Ensure the `id` property matches the folder name you created.
- Ensure the GGUF filename matches the `id` property exactly.
- Ensure the `source_url` property is the direct binary download link ending in `.gguf`. On HuggingFace, you can find the direct links in the `Files and versions` tab.
- Ensure you are using the correct `prompt_template`. This is usually provided on the HuggingFace model's description page.
- Ensure the `state` property is set to `ready`.

```js
{
  // highlight-start
  "source_url": "https://huggingface.co/janhq/trinity-v1-GGUF/resolve/main/trinity-v1.Q4_K_M.gguf",
  "id": "trinity-v1-7b",
  // highlight-end
  "object": "model",
  "name": "Trinity-v1 7B Q4",
  "version": "1.0",
  "description": "Trinity is an experimental model merge of GreenNodeLM & LeoScorpius using the Slerp method. Recommended for daily assistance purposes.",
  "format": "gguf",
  "settings": {
    "ctx_len": 4096,
    // highlight-next-line
    "prompt_template": "{system_message}\n### Instruction:\n{prompt}\n### Response:"
  },
  "parameters": {
    "max_tokens": 4096
  },
  "metadata": {
    "author": "Jan",
    "tags": ["7B", "Merged"],
    "size": 4370000000
  },
  "engine": "nitro",
  // highlight-next-line
  "state": "ready"
}
```
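
Before restarting Jan, you can optionally sanity-check the file against the checklist above. Below is a rough Node.js sketch; it assumes the same paths as step 1 and that you saved plain JSON (the `highlight` comments above are documentation markers, not part of the file):

```js
// Sketch: re-check the model.json conventions from the checklist above.
// Paths and the model id are assumptions carried over from this example.
const fs = require("fs");
const os = require("os");
const path = require("path");

const modelId = "trinity-v1-7b";
const modelDir = path.join(os.homedir(), "jan", "models", modelId);
const model = JSON.parse(fs.readFileSync(path.join(modelDir, "model.json"), "utf8"));

console.assert(model.id === modelId, "id must match the folder name");
console.assert(model.source_url.endsWith(".gguf"), "source_url must be a direct .gguf link");
console.assert(model.state === "ready", "state must be set to ready");
console.log("model.json passes the checks above");
```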

### 3. Download the Model

Restart Jan and navigate to the Hub. Locate your model and click the `Download` button to download the model binary.

![image-01](assets/02-manually-import-local-model.png)

Your model is now ready to use in Jan.

148 changes: 148 additions & 0 deletions docs/docs/guides/04-using-models/03-integrate-with-remote-server.mdx
@@ -0,0 +1,148 @@
---
title: Integrate With a Remote Server
slug: /guides/using-models/integrate-with-remote-server
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
import-models-manually,
remote server,
OAI compatible,
]
---

:::caution
This is currently under development.
:::

In this guide, we will show you how to configure Jan as a client and point it at any remote or local (self-hosted) API server.

## OpenAI Platform Configuration

In this section, we will show you how to connect Jan to the OpenAI Platform, using the OpenAI GPT 3.5 Turbo 16k model as an example.

### 1. Create a Model JSON

Navigate to the `~/jan/models` folder. Create a folder named `gpt-3.5-turbo-16k` and, inside it, create a `model.json` file with the following configurations:

- Ensure the filename is `model.json`.
- Ensure the `id` property matches the folder name you created.
- Ensure the `format` property is set to `api`.
- Ensure the `engine` property is set to `openai`.
- Ensure the `state` property is set to `ready`.

```js
{
  "source_url": "https://openai.com",
  // highlight-next-line
  "id": "gpt-3.5-turbo-16k",
  "object": "model",
  "name": "OpenAI GPT 3.5 Turbo 16k",
  "version": "1.0",
  "description": "OpenAI GPT 3.5 Turbo 16k model is extremely good",
  // highlight-start
  "format": "api",
  "settings": {},
  "parameters": {},
  "metadata": {
    "author": "OpenAI",
    "tags": ["General", "Big Context Length"]
  },
  "engine": "openai",
  "state": "ready"
  // highlight-end
}
```

### 2. Configure OpenAI API Keys

You can find your API keys in the [OpenAI Platform](https://platform.openai.com/api-keys) and set them in the `~/jan/engines/openai.json` file.

```js
{
  "full_url": "https://api.openai.com/v1/chat/completions",
  // highlight-next-line
  "api_key": "sk-<your key here>"
}
```
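
To confirm the key works outside of Jan, you can call the same endpoint directly. A minimal sketch using the built-in `fetch` in Node 18+ (save it as a `.mjs` file so top-level `await` is allowed):

```js
// Sketch: call the same endpoint Jan is configured to use, to verify the API key.
const response = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "Authorization": "Bearer sk-<your key here>", // same key as in openai.json
  },
  body: JSON.stringify({
    model: "gpt-3.5-turbo-16k",
    messages: [{ role: "user", content: "Say hello in one sentence." }],
  }),
});
const data = await response.json();
console.log(data.choices[0].message.content);
```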

### 3. Start the Model

Restart Jan and navigate to the Hub. Then, select your configured model and start it.

![image-01](assets/03-openai-platform-configuration.png)

## Engines with OAI Compatible Configuration

In this section, we will show you how to configure a client connection to a remote or local server, using Jan's API server running the model `mistral-ins-7b-q4` as an example.

### 1. Configure a Client Connection

Navigate to the `~/jan/engines` folder and modify the `openai.json` file. Please note that, at the moment, support for OpenAI-compatible endpoints only reads the `engines/openai.json` file; it will not search any other files in this directory.

Configure the `full_url` property with the endpoint of the server you want to connect to. For example, if you want to connect to Jan's API server, you can configure it as follows:

```js
{
  // highlight-start
  // "full_url": "https://<server-ip-address>:<port>/v1/chat/completions"
  "full_url": "https://<server-ip-address>:1337/v1/chat/completions",
  // highlight-end
  // Skip api_key if your local server does not require authentication
  // "api_key": "sk-<your key here>"
}
```
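
Before restarting Jan, you can optionally verify that the endpoint is reachable. A rough sketch for Node 18+ (run as a `.mjs` file), assuming a Jan API server on `localhost:1337` over plain HTTP; substitute the address you configured above:

```js
// Sketch: a rough reachability check for the endpoint configured above.
// localhost:1337 is an assumption; replace it with your server address.
try {
  const res = await fetch("http://localhost:1337/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: "{}", // an empty body is enough to see whether the server answers
  });
  console.log(`Server responded with HTTP ${res.status}`);
} catch (err) {
  console.error("Server unreachable:", err.message);
}
```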

### 2. Create a Model JSON

Navigate to the `~/jan/models` folder. Create a folder named `mistral-ins-7b-q4` and, inside it, create a `model.json` file with the following configurations:

- Ensure the filename is `model.json`.
- Ensure the `id` property matches the folder name you created.
- Ensure the `format` property is set to `api`.
- Ensure the `engine` property is set to `openai`.
- Ensure the `state` property is set to `ready`.

```js
{
  "source_url": "https://jan.ai",
  // highlight-next-line
  "id": "mistral-ins-7b-q4",
  "object": "model",
  "name": "Mistral Instruct 7B Q4 on Jan API Server",
  "version": "1.0",
  "description": "Jan integration with remote Jan API server",
  // highlight-next-line
  "format": "api",
  "settings": {},
  "parameters": {},
  "metadata": {
    "author": "MistralAI, The Bloke",
    "tags": ["remote", "awesome"]
  },
  // highlight-start
  "engine": "openai",
  "state": "ready"
  // highlight-end
}
```

### 3. Start the Model

Restart Jan and navigate to the Hub. Locate your model and click the `Use` button.

![image-02](assets/03-oai-compatible-configuration.png)
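
Once the model is running, any OpenAI-compatible client can use it. As a rough sketch for Node 18+ (run as a `.mjs` file), the round trip below assumes the server is reachable at `http://localhost:1337`; substitute the address from your `openai.json`:

```js
// Sketch: a full round trip through the remote Jan API server.
// localhost:1337 is an assumption; use the address configured in openai.json.
const response = await fetch("http://localhost:1337/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "mistral-ins-7b-q4",
    messages: [{ role: "user", content: "Hello from a remote client!" }],
  }),
});
const data = await response.json();
console.log(data.choices[0].message.content);
```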

## Assistance and Support

If you have questions or are looking for more preconfigured GGUF models, please feel free to join our [Discord community](https://discord.gg/Dt7MxDyNNZ) for support, updates, and discussions.
