Merge pull request janhq#1047 from janhq/docsNits
docs: improve quickstart docs
freelerobot authored Dec 18, 2023
2 parents 80f953b + 3fb80a1 commit 838b476
Showing 10 changed files with 372 additions and 200 deletions.
13 changes: 0 additions & 13 deletions docs/docs/guides/how-jan-works.md

This file was deleted.

5 changes: 0 additions & 5 deletions docs/docs/guides/models.md

This file was deleted.

159 changes: 159 additions & 0 deletions docs/docs/guides/models.mdx
@@ -0,0 +1,159 @@
---
title: Model Management
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
]
---

{/* Imports */}
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";

Jan is compatible with all GGUF models.

If you don't see the model you want in the Hub, or if you have a custom model, you can add it to Jan.

In this guide we will use our latest model, [Trinity](https://huggingface.co/janhq/trinity-v1-GGUF), as an example.

> We are shipping a UI to make this easier soon, but for now the process is a bit manual. Apologies.

## 1. Create a model folder

Navigate to the `~/jan/models` folder on your computer.

You can open this folder from within Jan: in `App Settings`, go to `Advanced`, then `Open App Directory`.

<Tabs groupId="operating-systems">
<TabItem value="mac" label="macOS">

```sh
cd ~/jan/models
```

</TabItem>
<TabItem value="win" label="Windows">

```sh
cd C:/Users/<your_user_name>/jan/models
```

</TabItem>
<TabItem value="linux" label="Linux">

```sh
cd ~/jan/models
```

</TabItem>
</Tabs>

In the `models` folder, create a folder with the name of the model.

<Tabs groupId="operating-systems">
<TabItem value="mac" label="macOS">

```sh
mkdir trinity-v1-7b
```

</TabItem>
<TabItem value="win" label="Windows">

```sh
mkdir trinity-v1-7b
```

</TabItem>
<TabItem value="linux" label="Linux">

```sh
mkdir trinity-v1-7b
```

</TabItem>
</Tabs>

## 2. Create a model JSON

Jan persists model configurations on your local filesystem in a folder-based, [standard model template](/specs/models) called `model.json`.

This means you can easily and transparently reconfigure your models, and export and share your preferences.

<Tabs groupId="operating-systems">
<TabItem value="mac" label="macOS">

```sh
cd trinity-v1-7b
touch model.json
```

</TabItem>
<TabItem value="win" label="Windows">

```sh
cd trinity-v1-7b
New-Item model.json  # PowerShell; in Git Bash, use: touch model.json
```

</TabItem>
<TabItem value="linux" label="Linux">

```sh
cd trinity-v1-7b
touch model.json
```

</TabItem>
</Tabs>
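After running these commands, the folder layout (using the example model name from step 1) should look like:

```
~/jan/models/
└── trinity-v1-7b/
    └── model.json
```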

Copy the following configurations into the `model.json`.

1. Make sure the `id` property matches the folder name you created.
2. Make sure the `source_url` property is the direct binary download link ending in `.gguf`. On HuggingFace, you can find the direct links in the `Files and versions` tab.
3. Ensure you are using the correct `prompt_template`. This is usually provided on the HuggingFace model's description page.

```json
{
"source_url": "https://huggingface.co/janhq/trinity-v1-GGUF/resolve/main/trinity-v1.Q4_K_M.gguf",
"id": "trinity-v1-7b",
"object": "model",
"name": "Trinity 7B Q4",
"version": "1.0",
"description": "Trinity is an experimental model merge of GreenNodeLM & LeoScorpius using the Slerp method. Recommended for daily assistance purposes.",
"format": "gguf",
"settings": {
"ctx_len": 2048,
"prompt_template": "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
},
"parameters": {
"max_tokens": 2048
},
"metadata": {
"author": "Jan",
"tags": ["7B", "Merged", "Featured"],
"size": 4370000000
},
"engine": "nitro"
}
```
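As a quick sanity check on the first two rules above, a short script like the following can catch mismatches before you restart Jan. This is a generic sketch, not part of Jan itself; the function name and checks are our own.

```python
import json
from pathlib import Path

def check_model_json(model_dir: str) -> list[str]:
    """Return a list of problems found in <model_dir>/model.json."""
    folder = Path(model_dir)
    config = json.loads((folder / "model.json").read_text())
    problems = []
    # Rule 1: the `id` property must match the folder name.
    if config.get("id") != folder.name:
        problems.append(f"id {config.get('id')!r} != folder name {folder.name!r}")
    # Rule 2: the `source_url` must be a direct .gguf download link.
    if not config.get("source_url", "").endswith(".gguf"):
        problems.append("source_url does not end in .gguf")
    return problems
```

Running it against your model folder should print nothing when both rules hold.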

## 3. Download your model

Restart the Jan application and look for your model in the Hub.

Click the green `download` button to download your actual model binary. This pulls from the `source_url` you provided above.

![image](https://hackmd.io/_uploads/HJLAqvwI6.png)

There you go! You are ready to use your model.

If you have any questions or want to request more preconfigured GGUF models, please message us in [Discord](https://discord.gg/Dt7MxDyNNZ).
97 changes: 0 additions & 97 deletions docs/docs/guides/quickstart.md

This file was deleted.

81 changes: 81 additions & 0 deletions docs/docs/guides/quickstart.mdx
@@ -0,0 +1,81 @@
---
title: Quickstart
description: Jan is a ChatGPT-alternative that runs on your own computer, with a local API server.
keywords:
[
Jan AI,
Jan,
ChatGPT alternative,
local AI,
private AI,
conversational AI,
no-subscription fee,
large language model,
]
---

import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";

In this quickstart we'll show you how to:

- Download the Jan Desktop client, compatible with Mac, Windows, and Linux (and toasters)
- Download and customize models
- Import custom models
- Use the local server at port `1337`

## Setup

### Installation

- To download the latest stable release: https://jan.ai/

- To download a nightly release (highly unstable but lots of new features): https://github.com/janhq/jan/releases

- For a detailed installation guide for your operating system, see the following:

<Tabs groupId="operating-systems">
<TabItem value="mac" label="macOS">
[Mac installation guide](/install/mac)
</TabItem>
<TabItem value="win" label="Windows">
[Windows installation guide](/install/windows)
</TabItem>
<TabItem value="linux" label="Linux">
[Linux installation guide](/install/linux)
</TabItem>
</Tabs>

- To build Jan Desktop from source (and have the right to tinker!), see the [Build from Source](/install/from-source) guide.

### Working with Models

Jan provides a list of recommended models to get you started.
You can find them in the in-app Hub.

1. `cmd + k` and type "hub" to open the Hub.
2. Download your preferred models.
3. `cmd + k` and type "chat" to open the conversation UI and start chatting.
4. Your model may take a few seconds to start up.
5. You can customize the model settings, at each conversation thread level, on the right panel.
6. To change model defaults globally, edit the `model.json` file. See the [Models](/guides/models) guide.

### Importing Models

Jan is compatible with all GGUF models.

For more information on how to import custom models, not found in the Hub, see the [Models](/guides/models) guide.

## Working with the Local Server

> This feature is currently under development, so expect bugs!

Jan runs a local server on port `1337` by default.

The endpoints are OpenAI compatible.

See the [API server guide](/guides/server) for more information.
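For illustration, an OpenAI-style chat request to the local server might look like the sketch below. The `/v1/chat/completions` path and the payload shape are assumptions based on the OpenAI API convention; check the server guide for the exact endpoints Jan exposes.

```python
import json
import urllib.request

# Assumed endpoint: OpenAI-compatible chat completions on Jan's default port.
# Verify the exact path against the API server guide.
URL = "http://localhost:1337/v1/chat/completions"

def build_request(model_id: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request (not sent here)."""
    payload = {
        "model": model_id,  # e.g. the `id` from your model.json
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }
    return urllib.request.Request(
        URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("trinity-v1-7b", "Hello!")
# To actually call the server while Jan is running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```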

## Next Steps
File renamed without changes.