docs: fix gpt4all and add llm usage documents (eosphoros-ai#221)
llm usage documents
fix gpt4all problem
Aries-ckt authored Jun 14, 2023
2 parents 9c9c229 + 88684ca commit e32b41b
Showing 7 changed files with 71 additions and 32 deletions.
3 changes: 2 additions & 1 deletion .gitignore
Original file line number Diff line number Diff line change
@@ -41,7 +41,8 @@ MANIFEST
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

*.zuo
*.zip
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
2 changes: 1 addition & 1 deletion README.md
@@ -20,7 +20,7 @@ As large models are released and iterated upon, they are becoming increasingly i
DB-GPT is an experimental open-source project that uses localized GPT large models to interact with your data and environment. With this solution, you can be assured that there is no risk of data leakage, and your data is 100% private and secure.

## News

- [2023/06/14] Support for the gpt4all model, which can run on M1/M2 or CPU-only machines. [documents](https://db-gpt.readthedocs.io/en/latest/modules/llms.html)
- [2023/06/01]🔥 On the basis of the Vicuna-13B basic model, task chain calls are implemented through plugins. For example, the implementation of creating a database with a single sentence.[demo](./assets/auto_plugin.gif)
- [2023/06/01]🔥 QLoRA guanaco(7b, 13b, 33b) support.
- [2023/05/28]🔥 Learning from crawling data from the Internet [demo](./assets/chaturl_en.gif)
2 changes: 1 addition & 1 deletion README.zh.md
@@ -21,7 +21,7 @@ DB-GPT 是一个开源的以数据库为基础的GPT实验项目,使用本地
[DB-GPT视频介绍](https://www.bilibili.com/video/BV1SM4y1a7Nj/?buvid=551b023900b290f9497610b2155a2668&is_story_h5=false&mid=%2BVyE%2Fwau5woPcUKieCWS0A%3D%3D&p=1&plat_id=116&share_from=ugc&share_medium=iphone&share_plat=ios&share_session_id=5D08B533-82A4-4D40-9615-7826065B4574&share_source=GENERIC&share_tag=s_i&timestamp=1686307943&unique_k=bhO3lgQ&up_id=31375446)

## 最新发布

- [2023/06/14]🔥 支持gpt4all模型,可以在M1/M2 或者CPU机器上运行。 [使用文档](https://db-gpt.readthedocs.io/projects/db-gpt-docs-zh-cn/zh_CN/latest/modules/llms.html)
- [2023/06/01]🔥 在Vicuna-13B基础模型的基础上,通过插件实现任务链调用。例如单句创建数据库的实现.[演示](./assets/dbgpt_bytebase_plugin.gif)
- [2023/06/01]🔥 QLoRA guanaco(原驼)支持, 支持4090运行33B
- [2023/05/28]🔥根据URL进行对话 [演示](./assets/chat_url_zh.gif)
60 changes: 41 additions & 19 deletions docs/locales/zh_CN/LC_MESSAGES/modules/llms.po
@@ -8,7 +8,7 @@ msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 0.1.0\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-06-14 21:47+0800\n"
"POT-Creation-Date: 2023-06-14 22:33+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
@@ -19,11 +19,11 @@ msgstr ""
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.12.1\n"

#: ../../modules/llms.md:1 d924f8db422d4adaa4cf0e5d17ffe332
#: ../../modules/llms.md:1 7737e31f5fcc4dc4be573a0bb73ca419
msgid "LLMs"
msgstr "大语言模型"

#: ../../modules/llms.md:3 0fba9725074a4a72be1849e822e69365
#: ../../modules/llms.md:3 8a8422f18e5d4c7aa1c1abf3a89f5d27
#, python-format
msgid ""
"In the underlying large model integration, we have designed an open "
@@ -36,31 +36,53 @@ msgid ""
"of use."
msgstr "在底层大模型接入中,我们设计了开放的接口,支持对接多种大模型。同时对于接入模型的效果,我们有非常严格的把控与评审机制。对大模型能力上与ChatGPT对比,在准确率上需要满足85%以上的能力对齐。我们用更高的标准筛选模型,是期望在用户使用过程中,可以省去前面繁琐的测试评估环节。"

#: ../../modules/llms.md:5 4f65b4e86d8d4b35b95ccac6a96f9b9c
#: ../../modules/llms.md:5 c48ee3f4c51c49ae9dba40b1854dd483
msgid "Multi LLMs Usage"
msgstr "多模型使用"

#: ../../modules/llms.md:6 246e4dbf1c634d9d9f2ab40b4883de73
#: ../../modules/llms.md:6 ad44cef629f64bf4a4d72431568149fe
msgid ""
"To use multiple models, modify the LLM_MODEL parameter in the .env "
"configuration file to switch between the models."
msgstr "如果要使用不同的模型,请修改.env配置文件中的LLM MODEL参数以在模型之间切换。"

#: ../../modules/llms.md:8 379d7756094f4ceda5ef163031b44e97
#: ../../modules/llms.md:8 b08325fd36af4ef582c8a46685986aaf
msgid ""
"Notice: you can create .env file from .env.template, just use command "
"like this:"
msgstr "注意:你可以从 .env.template 创建 .env 文件。只需使用如下命令:"

#: ../../modules/llms.md:14 3b46be1be08b41b0a4f2fa7ca1f09ccf
#: ../../modules/llms.md:14 75ec45409dc84fd9bf3dfa98835d4645
msgid ""
"now we support models vicuna-13b, vicuna-7b, chatglm-6b, flan-t5-base, "
"guanaco-33b-merged, falcon-40b, gorilla-7b."
msgstr ""
"现在我们支持的模型有vicuna-13b, vicuna-7b, chatglm-6b, flan-t5-base, guanaco-33b-"
"merged, falcon-40b, gorilla-7b."

#: ../../modules/llms.md:16 f7e26b7e7a8349159b5c2e8ec2a1abc8
#: ../../modules/llms.md:16 a6e99cf291e049c99e2d6813d03b0427
msgid ""
"if you want use other model, such as chatglm-6b, you just need update "
".env config file."
msgstr "如果你想使用其他模型,比如chatglm-6b, 仅仅需要修改.env 配置文件"

#: ../../modules/llms.md:21 1b9a9aa83dc7420fb5c6c17c982abb20
msgid "Run Model with cpu."
msgstr "用CPU运行模型"

#: ../../modules/llms.md:22 e6bdeeca06764ed583154ddc78f0f26e
msgid ""
"we alse support smaller models, like gpt4all. you can use it with "
"cpu/mps(M1/M2), Download from [gpt4all model](https://gpt4all.io/models"
"/ggml-gpt4all-j-v1.3-groovy.bin)"
msgstr "我们也支持一些小模型,你可以通过CPU/MPS(M1、M2)运行, 模型下载[gpt4all](https://gpt4all.io/models"
"/ggml-gpt4all-j-v1.3-groovy.bin)"

#: ../../modules/llms.md:24 da5b575421ef45e7ad0f56fac151948d
msgid "put it in the models path, then change .env config."
msgstr "将模型放在models路径, 修改.env 配置文件"

#: ../../modules/llms.md:29 03453d147e64404ab2c116faf0147b70
msgid ""
"DB-GPT provides a model load adapter and chat adapter. load adapter which"
" allows you to easily adapt load different LLM models by inheriting the "
@@ -69,15 +91,15 @@ msgstr ""
"DB-GPT提供了多模型适配器load adapter和chat adapter.load adapter通过继承BaseLLMAdapter类,"
" 实现match和loader方法允许你适配不同的LLM."

#: ../../modules/llms.md:18 83ad29017489428cad336894de9b1076
#: ../../modules/llms.md:31 06b14da2931349859182473cd79abd68
msgid "vicuna llm load adapter"
msgstr "vicuna llm load adapter"

#: ../../modules/llms.md:35 56997156cf994abbb12a586d435b1f6c
#: ../../modules/llms.md:48 fe7be51e9e2240c1882ef05f94a39d90
msgid "chatglm load adapter"
msgstr "chatglm load adapter"

#: ../../modules/llms.md:62 e0d989155ca443079402921fd15c1b75
#: ../../modules/llms.md:75 3535d4e0a0b946a49ed13710ae0ae5f3
msgid ""
"chat adapter which allows you to easily adapt chat different LLM models "
"by inheriting the BaseChatAdpter.you just implement match() and "
@@ -86,43 +108,43 @@ msgstr ""
"chat "
"adapter通过继承BaseChatAdpter允许你通过实现match和get_generate_stream_func方法允许你适配不同的LLM."

#: ../../modules/llms.md:64 1cef2736bdd04dc89a3f9f20d6ab7e4e
#: ../../modules/llms.md:77 b8dba90d769d45f090086fa044f22a96
msgid "vicuna llm chat adapter"
msgstr "vicuna llm chat adapter"

#: ../../modules/llms.md:76 0976f3a966b24e458b968c4410e98403
#: ../../modules/llms.md:89 f1d6e8145f704b5bbd1c49224c1e30f9
msgid "chatglm llm chat adapter"
msgstr "chatglm llm chat adapter"

#: ../../modules/llms.md:89 138577d6f1714b4daef8b20108de983d
#: ../../modules/llms.md:102 295e498cec384d9589431b1d5942f590
msgid ""
"if you want to integrate your own model, just need to inheriting "
"BaseLLMAdaper and BaseChatAdpter and implement the methods"
msgstr "如果你想集成自己的模型,只需要继承BaseLLMAdaper和BaseChatAdpter类,然后实现里面的方法即可"

#: ../../modules/llms.md:92 23cd5154bf3a4e7bad206d5e06eef51e
#: ../../modules/llms.md:104 07dd4757b06440fe8e9959446ff05892
#, fuzzy
msgid "Multi Proxy LLMs"
msgstr "多模型使用"

#: ../../modules/llms.md:93 db2739caa40342819965856d6fe83677
#: ../../modules/llms.md:105 1b9fc9ce08b94f6493f4b6ce51878fe2
msgid "1. Openai proxy"
msgstr ""

#: ../../modules/llms.md:94 5a305d769a144543b14f6e7099d0fc81
#: ../../modules/llms.md:106 64e44b3c7c254034a1f72d2e362f4c4d
msgid ""
"If you haven't deployed a private infrastructure for a large model, or if"
" you want to use DB-GPT in a low-cost and high-efficiency way, you can "
"also use OpenAI's large model as your underlying model."
msgstr ""

#: ../../modules/llms.md:96 64d78d8e95174a329f67e638fc63786c
#: ../../modules/llms.md:108 bca7d8118cd546ca8160400fe729be89
msgid ""
"If your environment deploying DB-GPT has access to OpenAI, then modify "
"the .env configuration file as below will work."
msgstr ""

#: ../../modules/llms.md:104 f2bcf0c04da844e0b8fa2a3712d56439
#: ../../modules/llms.md:116 7b84ddb937954787b4c8422f743afeda
msgid ""
"If you can't access OpenAI locally but have an OpenAI proxy service, you "
"can configure as follows."
16 changes: 9 additions & 7 deletions docs/locales/zh_CN/LC_MESSAGES/modules/prompts.po
@@ -8,7 +8,7 @@ msgid ""
msgstr ""
"Project-Id-Version: DB-GPT 0.1.0\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2023-06-11 14:10+0800\n"
"POT-Creation-Date: 2023-06-14 22:33+0800\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_CN\n"
@@ -17,21 +17,23 @@ msgstr ""
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.11.0\n"
"Generated-By: Babel 2.12.1\n"

#: ../../modules/prompts.md:1 bb9583334e6948b98b59126234ae045f
#: ../../modules/prompts.md:1 b9279c238c014a74aecbc75b5d3dc202
msgid "Prompts"
msgstr ""

#: ../../modules/prompts.md:3 e6f5129e260c4a739a40115fff82850f
#: ../../modules/prompts.md:3 f0c720c1c85b401cbc26ed0eb3f6e70e
msgid ""
"Prompt is a very important part of the interaction between the large "
"model and the user, and to a certain extent, it determines the quality "
"and accuracy of the answer generated by the large model. In this project,"
" we will automatically optimize the corresponding prompt according to "
"user input and usage scenarios, making it easier and more efficient for "
"users to use large language models."
msgstr "Prompt是与大模型交互过程中非常重要的部分,一定程度上Prompt决定了"
"大模型生成答案的质量与准确性,在本的项目中,我们会根据用户输入与"
"使用场景,自动优化对应的Prompt,让用户使用大语言模型变得更简单、更高效。"
msgstr "Prompt是与大模型交互过程中非常重要的部分,一定程度上Prompt决定了大模型生成答案的质量与准确性,在本的项目中,我们会根据用户输入与使用场景,自动优化对应的Prompt,让用户使用大语言模型变得更简单、更高效。"

#: ../../modules/prompts.md:5 6576d32e28a14be6a5d8180eed000aa7
msgid "1.DB-GPT Prompt"
msgstr ""

14 changes: 13 additions & 1 deletion docs/modules/llms.md
@@ -13,6 +13,19 @@ MODEL_SERVER=http://127.0.0.1:8000
```
Now we support the models vicuna-13b, vicuna-7b, chatglm-6b, flan-t5-base, guanaco-33b-merged, falcon-40b, and gorilla-7b.

If you want to use another model, such as chatglm-6b, you just need to update the .env config file:
```
LLM_MODEL=chatglm-6b
```
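The model switch above is just a key/value entry in `.env`. As a minimal sketch of the mechanism (a hand-rolled parser for illustration only; DB-GPT itself loads `.env` through its own configuration code), the file names and values below mirror the config shown above:

```python
import tempfile

def parse_env(path):
    """Minimal .env parser for illustration: KEY=VALUE lines, '#' comments ignored."""
    config = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            config[key.strip()] = value.strip()
    return config

# Write a throwaway .env to exercise the switch described above.
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write("# model switch\nLLM_MODEL=chatglm-6b\nMODEL_SERVER=http://127.0.0.1:8000\n")
    path = f.name

cfg = parse_env(path)
print(cfg["LLM_MODEL"])  # chatglm-6b
```

Changing `LLM_MODEL` here is all it takes to switch models; the server reads the value at startup.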

## Run Model with CPU
We also support smaller models, like gpt4all, which you can run on CPU or MPS (M1/M2). Download it from [gpt4all model](https://gpt4all.io/models/ggml-gpt4all-j-v1.3-groovy.bin).

Put it in the models path, then change the .env config:
```
LLM_MODEL=gptj-6b
```

DB-GPT provides a model load adapter and a chat adapter. The load adapter lets you load different LLM models by inheriting BaseLLMAdapter; you just implement the match() and loader() methods.
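The load-adapter pattern just described can be sketched as follows. This is a toy stand-in, not DB-GPT's actual adapter code: only the match()/loader() contract comes from the text above, and the `from_pretrained_kwargs` parameter and registry lookup are illustrative assumptions.

```python
class BaseLLMAdapter:
    """Toy stand-in for the base load adapter described above."""

    def match(self, model_path: str) -> bool:
        raise NotImplementedError

    def loader(self, model_path: str, from_pretrained_kwargs: dict):
        raise NotImplementedError


class VicunaLLMAdapter(BaseLLMAdapter):
    def match(self, model_path: str) -> bool:
        # Claim any model whose path mentions vicuna.
        return "vicuna" in model_path

    def loader(self, model_path: str, from_pretrained_kwargs: dict):
        # A real adapter would load the tokenizer and weights here.
        return f"loaded:{model_path}"


adapters = [VicunaLLMAdapter()]


def get_llm_adapter(model_path: str) -> BaseLLMAdapter:
    """Pick the first registered adapter whose match() accepts the path."""
    for adapter in adapters:
        if adapter.match(model_path):
            return adapter
    raise ValueError(f"no adapter for {model_path}")


adapter = get_llm_adapter("models/vicuna-13b")
print(adapter.loader("models/vicuna-13b", {}))  # loaded:models/vicuna-13b
```

The registry walk is why adding a model is cheap: each adapter only has to recognize its own model path.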

vicuna llm load adapter
@@ -87,7 +100,6 @@ class ChatGLMChatAdapter(BaseChatAdpter):
return chatglm_generate_stream
```
If you want to integrate your own model, you just need to inherit BaseLLMAdaper and BaseChatAdpter and implement their methods.
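Integrating your own model therefore amounts to two small subclasses. A hedged sketch (the class names, including the project's spellings BaseLLMAdaper and BaseChatAdpter, and the match()/loader()/get_generate_stream_func() methods come from the docs above; the base-class bodies and "my-model" naming are illustrative):

```python
class BaseLLMAdaper:
    """Stand-in base; spelling matches the project's class name."""

    def match(self, model_path):
        raise NotImplementedError

    def loader(self, model_path, from_pretrained_kwargs):
        raise NotImplementedError


class BaseChatAdpter:
    def match(self, model_path):
        raise NotImplementedError

    def get_generate_stream_func(self):
        raise NotImplementedError


class MyModelAdapter(BaseLLMAdaper):
    def match(self, model_path):
        return "my-model" in model_path

    def loader(self, model_path, from_pretrained_kwargs):
        # Stand-in for real tokenizer/model loading.
        return ("my-tokenizer", "my-weights")


class MyModelChatAdapter(BaseChatAdpter):
    def match(self, model_path):
        return "my-model" in model_path

    def get_generate_stream_func(self):
        def generate_stream(prompt):
            # Stand-in for token-by-token streaming.
            yield f"echo: {prompt}"

        return generate_stream


chat = MyModelChatAdapter()
stream = chat.get_generate_stream_func()
print(next(stream("hello")))  # echo: hello
```

Both adapters key off the same model path, so the server can pair the loader with the matching chat streamer.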


## Multi Proxy LLMs
### 1. Openai proxy
6 changes: 4 additions & 2 deletions pilot/server/llmserver.py
@@ -75,8 +75,10 @@ def generate_stream_gate(self, params):
):
# Please do not open the output in production!
# The gpt4all thread shares stdout with the parent process,
# and opening it may affect the frontend output.
# print("output: ", output)
# and opening it may affect the frontend output
if not ("gptj" in CFG.LLM_MODEL or "guanaco" in CFG.LLM_MODEL):
print("output: ", output)

ret = {
"text": output,
"error_code": 0,