docs: update using models documentation (janhq#1288)
Showing 7 changed files with 178 additions and 33 deletions.
docs/docs/guides/04-using-models/03-integrate-with-remote-server.mdx (148 additions, 0 deletions)
---
title: Integrate With a Remote Server
slug: /guides/using-models/integrate-with-remote-server
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
  [
    Jan AI,
    Jan,
    ChatGPT alternative,
    local AI,
    private AI,
    conversational AI,
    no-subscription fee,
    large language model,
    import-models-manually,
    remote server,
    OAI compatible,
  ]
---

:::caution
This is currently under development.
:::

In this guide, we will show you how to configure Jan as a client and point it to any remote or local (self-hosted) API server.

## OpenAI Platform Configuration

In this section, we will show you how to configure Jan to work with the OpenAI Platform, using the OpenAI GPT 3.5 Turbo 16k model as an example.

### 1. Create a Model JSON

Navigate to the `~/jan/models` folder. Create a folder named `gpt-3.5-turbo-16k` and create a `model.json` file inside it with the following configuration:

- Ensure the filename is `model.json`.
- Ensure the `id` property matches the folder name you created.
- Ensure the `format` property is set to `api`.
- Ensure the `engine` property is set to `openai`.
- Ensure the `state` property is set to `ready`.

```js
{
  "source_url": "https://openai.com",
  // highlight-next-line
  "id": "gpt-3.5-turbo-16k",
  "object": "model",
  "name": "OpenAI GPT 3.5 Turbo 16k",
  "version": "1.0",
  "description": "OpenAI GPT 3.5 Turbo 16k model is extremely good",
  // highlight-start
  "format": "api",
  "settings": {},
  "parameters": {},
  "metadata": {
    "author": "OpenAI",
    "tags": ["General", "Big Context Length"]
  },
  "engine": "openai",
  "state": "ready"
  // highlight-end
}
```
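For reference, the expected layout after this step (assuming the default `~/jan` data folder used throughout this guide) looks like this:

```
~/jan
└── models
    └── gpt-3.5-turbo-16k
        └── model.json
```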
### 2. Configure OpenAI API Keys

You can find your API keys in the [OpenAI Platform](https://platform.openai.com/api-keys) and set the OpenAI API keys in the `~/jan/engines/openai.json` file.

```js
{
  "full_url": "https://api.openai.com/v1/chat/completions",
  // highlight-next-line
  "api_key": "sk-<your key here>"
}
```
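To sanity-check the endpoint and key outside of Jan, you can send the same kind of request that the configured `full_url` expects. The sketch below is only an illustration of a standard OpenAI chat completions call, not part of Jan itself; it assumes Node.js 18+ (for the built-in `fetch`) and that the `OPENAI_API_KEY` environment variable holds the same key you placed in `openai.json`.

```js
// Quick sanity check of the configured endpoint and key (Node.js 18+, built-in fetch).
// Assumes OPENAI_API_KEY contains the same key set in ~/jan/engines/openai.json.
const response = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: "gpt-3.5-turbo-16k",
    messages: [{ role: "user", content: "Say hello in one sentence." }],
  }),
});

const data = await response.json();
console.log(data.choices[0].message.content);
```

If this prints a reply, the same URL and key should work once Jan restarts and loads the engine configuration.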
### 3. Start the Model

Restart Jan and navigate to the Hub. Then, select your configured model and start it.

![image-01](assets/03-openai-platform-configuration.png)

## Engines with OAI Compatible Configuration

In this section, we will show you how to configure a client connection to a remote or local server, using Jan's API server running the model `mistral-ins-7b-q4` as an example.

### 1. Configure a Client Connection

Navigate to the `~/jan/engines` folder and modify the `openai.json` file. Please note that, at the moment, the code that supports OpenAI-compatible endpoints only reads the `engines/openai.json` file; it will not search any other files in this directory.

Configure the `full_url` property with the endpoint of the server you want to connect to. For example, if you want to connect to Jan's API server, you can configure it as follows:

```js
{
  // highlight-start
  // "full_url": "https://<server-ip-address>:<port>/v1/chat/completions"
  "full_url": "https://<server-ip-address>:1337/v1/chat/completions",
  // highlight-end
  // Skip api_key if your local server does not require authentication
  // "api_key": "sk-<your key here>"
}
```

### 2. Create a Model JSON

Navigate to the `~/jan/models` folder. Create a folder named `mistral-ins-7b-q4` and create a `model.json` file inside it with the following configuration:

- Ensure the filename is `model.json`.
- Ensure the `id` property matches the folder name you created.
- Ensure the `format` property is set to `api`.
- Ensure the `engine` property is set to `openai`.
- Ensure the `state` property is set to `ready`.

```js
{
  "source_url": "https://jan.ai",
  // highlight-next-line
  "id": "mistral-ins-7b-q4",
  "object": "model",
  "name": "Mistral Instruct 7B Q4 on Jan API Server",
  "version": "1.0",
  "description": "Jan integration with remote Jan API server",
  // highlight-next-line
  "format": "api",
  "settings": {},
  "parameters": {},
  "metadata": {
    "author": "MistralAI, The Bloke",
    "tags": [
      "remote",
      "awesome"
    ]
  },
  // highlight-start
  "engine": "openai",
  "state": "ready"
  // highlight-end
}
```
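Once the remote Jan API server is up and the files above are in place, you can verify the connection outside of Jan with a plain OpenAI-compatible request to the `full_url` you configured. This is only a hypothetical smoke test (Node.js 18+ `fetch`); replace the `<server-ip-address>` placeholder with your server's actual address, and add an `Authorization` header only if the server requires an API key.

```js
// Hypothetical smoke test against the remote Jan API server configured above.
// Replace <server-ip-address> with your server's address; 1337 matches the full_url example.
const response = await fetch(
  "https://<server-ip-address>:1337/v1/chat/completions",
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "mistral-ins-7b-q4",
      messages: [{ role: "user", content: "Hello from a remote client!" }],
    }),
  }
);

const data = await response.json();
console.log(data.choices[0].message.content);
```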
### 3. Start the Model

Restart Jan and navigate to the Hub. Locate your model and click the Use button.

![image-02](assets/03-oai-compatible-configuration.png)

## Assistance and Support

If you have questions or are looking for more preconfigured GGUF models, please feel free to join our [Discord community](https://discord.gg/Dt7MxDyNNZ) for support, updates, and discussions.
File renamed without changes.
File renamed without changes.
File renamed without changes.
Binary file added: docs/docs/guides/04-using-models/assets/03-oai-compatible-configuration.png (+348 KB)
Binary file added: docs/docs/guides/04-using-models/assets/03-openai-platform-configuration.png (+372 KB)