support gpt-3.5-turbo api
Signed-off-by: pengzhile <[email protected]>
pengzhile committed Mar 4, 2023
1 parent d1c0f8c commit 4f95a79
Showing 17 changed files with 740 additions and 218 deletions.
2 changes: 1 addition & 1 deletion Dockerfile
@@ -4,4 +4,4 @@ WORKDIR /opt/app
 ADD . .
 RUN pip install .
 
-ENTRYPOINT ["bin/startup.sh"]
+ENTRYPOINT ["/opt/app/bin/startup.sh"]
4 changes: 2 additions & 2 deletions README.md
@@ -9,8 +9,6 @@
 [![PyPi workflow](https://github.com/pengzhile/pandora/actions/workflows/python-publish.yml/badge.svg)](https://github.com/pengzhile/pandora/actions/workflows/python-publish.yml)
 [![Docker workflow](https://github.com/pengzhile/pandora/actions/workflows/docker-publish.yml/badge.svg)](https://github.com/pengzhile/pandora/actions/workflows/docker-publish.yml)
 
-[English](https://github.com/pengzhile/pandora/blob/master/doc/README.en.md)
-
 ### `Pandora`, a command-line `ChatGPT` client
 
 ### Implements the main operations of the `ChatGPT` web version. Gets past `Cloudflare`, and speed is decent in theory.
@@ -76,13 +74,15 @@
 * `-p` / `--proxy` specify a proxy, format: `protocol://user:pass@ip:port`
 * `-t` / `--token_file` specify a file holding an `Access Token`, and log in with that token.
 * `-s` / `--server` start as an `http` server, format: `ip:port`
+* `-a` / `--api` send requests through the `gpt-3.5-turbo` API. **You may need to pay `OpenAI` for usage.**
 * `-v` / `--verbose` show debug output and print the stack trace on errors, for troubleshooting.
 
 ## Docker environment variables
 
 * `PANDORA_ACCESS_TOKEN` the `Access Token` string.
 * `PANDORA_PROXY` a proxy, format: `protocol://user:pass@ip:port`
 * `PANDORA_SERVER` start as an `http` server, format: `ip:port`
+* `PANDORA_API` send requests through the `gpt-3.5-turbo` API. **You may need to pay `OpenAI` for usage.**
 * `PANDORA_VERBOSE` show debug output and print the stack trace on errors, for troubleshooting.
 * When running via Docker, just set the environment variables; the `program arguments` above are ignored.
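The mapping from these environment variables to the program arguments above is mechanical. A minimal Python sketch of the idea (the `build_args` helper is hypothetical; only the flags listed in this commit are covered, and the real `bin/startup.sh` also handles `PANDORA_ACCESS_TOKEN`):

```python
def build_args(env):
    """Sketch of the env-var-to-flag assembly done by bin/startup.sh."""
    args = []
    if env.get('PANDORA_PROXY'):
        args += ['-p', env['PANDORA_PROXY']]
    if env.get('PANDORA_SERVER'):
        args += ['-s', env['PANDORA_SERVER']]
    if env.get('PANDORA_API'):
        args.append('-a')  # boolean flag: the variable merely being set is enough
    if env.get('PANDORA_VERBOSE'):
        args.append('-v')
    return args

print(build_args({'PANDORA_SERVER': '0.0.0.0:8008', 'PANDORA_API': 'true'}))
# → ['-s', '0.0.0.0:8008', '-a']
```

Note that `-a`/`PANDORA_API` is presence-only, matching the `[ -n "${PANDORA_API}" ]` test added to `bin/startup.sh` in this commit.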

4 changes: 4 additions & 0 deletions bin/startup.sh
@@ -17,6 +17,10 @@ if [ -n "${PANDORA_SERVER}" ]; then
     PANDORA_ARGS="${PANDORA_ARGS} -s ${PANDORA_SERVER}"
 fi
 
+if [ -n "${PANDORA_API}" ]; then
+    PANDORA_ARGS="${PANDORA_ARGS} -a"
+fi
+
 if [ -n "${PANDORA_VERBOSE}" ]; then
     PANDORA_ARGS="${PANDORA_ARGS} -v"
 fi
4 changes: 2 additions & 2 deletions doc/HTTP-API.md
@@ -65,8 +65,8 @@
 * **JSON fields:**
 * `prompt` the question text.
 * `model` the model used for the conversation; usually unchanged for the whole session.
-* `last_user_message_id` the ID of the user's previous message.
-* `last_parent_message_id` the parent message ID of the user's previous message.
+* `message_id` the ID of the user's previous message.
+* `parent_message_id` the parent message ID of the user's previous message.
 * `conversation_id` the conversation ID; required for this endpoint.
 * `stream` whether to stream the output. Default: `True`
 * **Endpoint description:** have `ChatGPT` regenerate a reply.
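After the renaming in this commit, a client payload for this regenerate endpoint can be sketched as below. All ID values and the model slug are placeholders for illustration; the endpoint URL itself is not shown in this excerpt:

```python
import json

# Payload after this commit: last_user_message_id / last_parent_message_id
# are now message_id / parent_message_id. Every value here is a placeholder.
payload = {
    'prompt': 'Explain streams in one sentence.',
    'model': 'example-model-slug',
    'message_id': '00000000-0000-0000-0000-000000000001',
    'parent_message_id': '00000000-0000-0000-0000-000000000002',
    'conversation_id': '00000000-0000-0000-0000-000000000003',
    'stream': True,
}

# The request body is this object serialized as JSON.
body = json.dumps(payload)
print(sorted(payload))
```

Clients written against the old field names must switch to `message_id` and `parent_message_id`, or the server will raise a `KeyError` on the payload.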
125 changes: 0 additions & 125 deletions doc/README.en.md

This file was deleted.

3 changes: 2 additions & 1 deletion requirements.txt
@@ -5,6 +5,7 @@ requests[socks]==2.28.2
 rich==13.3.1
 appdirs==1.4.4
 werkzeug==2.2.3
-flask[async]==2.2.3
+flask==2.2.3
 flask-cors==3.0.10
 waitress==2.1.2
+tiktoken==0.3.0
5 changes: 3 additions & 2 deletions setup.py
@@ -17,7 +17,7 @@
     long_description=long_description,
     long_description_content_type='text/markdown',
     url="https://github.com/pengzhile/pandora",
-    packages=['pandora', 'pandora.openai', 'pandora.bots'],
+    packages=['pandora', 'pandora.openai', 'pandora.bots', 'pandora.turbo'],
     package_dir={'pandora': 'src/pandora'},
     include_package_data=True,
     install_requires=[
@@ -28,9 +28,10 @@
         'rich == 13.3.1',
         'appdirs == 1.4.4',
         'werkzeug == 2.2.3',
-        'flask[async] == 2.2.3',
+        'flask == 2.2.3',
         'flask-cors == 3.0.10',
         'waitress == 2.1.2',
+        'tiktoken == 0.3.0',
     ],
     entry_points={
         "console_scripts": [
2 changes: 1 addition & 1 deletion src/pandora/__init__.py
@@ -1,3 +1,3 @@
 # -*- coding: utf-8 -*-
 
-__version__ = '0.4.4'
+__version__ = '0.5.0'
24 changes: 13 additions & 11 deletions src/pandora/bots/legacy.py
@@ -1,13 +1,11 @@
 # -*- coding: utf-8 -*-
 
-import asyncio
 import re
 import uuid
 
 from rich.prompt import Prompt, Confirm
 
 from .. import __version__
-from ..openai.api import ChatGPT
 from ..openai.utils import Console
@@ -35,7 +33,7 @@ def __init__(self, title=None, conversation_id=None, model_slug=None, user_promp
 
 
 class ChatBot:
-    def __init__(self, chatgpt: ChatGPT):
+    def __init__(self, chatgpt):
         self.chatgpt = chatgpt
         self.state = None
 
@@ -285,9 +283,9 @@ def __talk(self, prompt):
         else:
             self.state.user_prompt = ChatPrompt(prompt, parent_id=self.state.chatgpt_prompt.message_id)
 
-        generator = self.chatgpt.talk(prompt, self.state.model_slug, self.state.user_prompt.message_id,
-                                      self.state.user_prompt.parent_id, self.state.conversation_id)
-        asyncio.run(self.__print_reply(generator))
+        status, _, generator = self.chatgpt.talk(prompt, self.state.model_slug, self.state.user_prompt.message_id,
+                                                 self.state.user_prompt.parent_id, self.state.conversation_id)
+        self.__print_reply(status, generator)
 
         self.state.user_prompts.append(self.state.user_prompt)
 
@@ -302,15 +300,19 @@ def __regenerate_reply(self, state):
             Console.error('#### Conversation has not been created.')
             return
 
-        generator = self.chatgpt.regenerate_reply(state.user_prompt.prompt, state.model_slug, state.conversation_id,
-                                                  state.user_prompt.message_id, state.user_prompt.parent_id)
+        status, _, generator = self.chatgpt.regenerate_reply(state.user_prompt.prompt, state.model_slug,
+                                                             state.conversation_id, state.user_prompt.message_id,
+                                                             state.user_prompt.parent_id)
         print()
         Console.success_b('ChatGPT:')
-        asyncio.run(self.__print_reply(generator))
+        self.__print_reply(status, generator)
+
+    def __print_reply(self, status, generator):
+        if 200 != status:
+            raise Exception(status, next(generator))
 
-    async def __print_reply(self, generator):
         p = 0
-        async for result in await generator:
+        for result in generator:
             if result['error']:
                 raise Exception(result['error'])
58 changes: 18 additions & 40 deletions src/pandora/bots/server.py
@@ -1,6 +1,5 @@
 # -*- coding: utf-8 -*-
 
-import asyncio
 import logging
 from os.path import join, abspath, dirname
@@ -12,14 +11,14 @@
 from werkzeug.serving import WSGIRequestHandler
 
 from .. import __version__
-from ..openai.api import ChatGPT
+from ..openai.api import API
 
 
 class ChatBot:
     __default_ip = '127.0.0.1'
     __default_port = 8008
 
-    def __init__(self, chatgpt: ChatGPT, debug=False):
+    def __init__(self, chatgpt, debug=False):
         self.chatgpt = chatgpt
         self.debug = debug
         self.log_level = logging.DEBUG if debug else logging.INFO
@@ -103,8 +102,8 @@ def list_models(self):
         return self.__proxy_result(self.chatgpt.list_models(True))
 
     def list_conversations(self):
-        offset = request.args.get('offset', 1)
-        limit = request.args.get('limit', 20)
+        offset = request.args.get('offset', '1')
+        limit = request.args.get('limit', '20')
 
         return self.__proxy_result(self.chatgpt.list_conversations(offset, limit, True))
@@ -129,7 +128,7 @@ def gen_conversation_title(self, conversation_id):
 
         return self.__proxy_result(self.chatgpt.gen_conversation_title(conversation_id, model, message_id, True))
 
-    async def talk(self):
+    def talk(self):
         payload = request.json
         prompt = payload['prompt']
         model = payload['model']
@@ -138,53 +137,32 @@ async def talk(self):
         conversation_id = payload.get('conversation_id')
         stream = payload.get('stream', True)
 
-        async def __talk():
-            generator = self.chatgpt.talk(prompt, model, message_id, parent_message_id, conversation_id, stream)
-            async for line in await generator:
-                yield line
+        return self.__process_stream(
+            *self.chatgpt.talk(prompt, model, message_id, parent_message_id, conversation_id, stream), stream)
 
-        if stream:
-            return Response(self.__to_sync(__talk()), mimetype='text/event-stream')
-
-        last_json = None
-        async for json in __talk():
-            last_json = json
-
-        return jsonify(last_json)
-
-    async def regenerate(self):
+    def regenerate(self):
         payload = request.json
         prompt = payload['prompt']
         model = payload['model']
-        last_user_message_id = payload['last_user_message_id']
-        last_parent_message_id = payload['last_parent_message_id']
+        message_id = payload['message_id']
+        parent_message_id = payload['parent_message_id']
         conversation_id = payload['conversation_id']
         stream = payload.get('stream', True)
 
-        async def __generate():
-            generator = self.chatgpt.regenerate_reply(prompt, model, conversation_id, last_user_message_id,
-                                                      last_parent_message_id, stream)
-            async for line in await generator:
-                yield line
+        return self.__process_stream(
+            *self.chatgpt.regenerate_reply(prompt, model, conversation_id, message_id, parent_message_id, stream),
+            stream)
 
+    @staticmethod
+    def __process_stream(status, headers, generator, stream):
         if stream:
-            return Response(self.__to_sync(__generate()), mimetype='text/event-stream')
+            return Response(API.wrap_stream_out(generator, status), mimetype=headers['Content-Type'], status=status)
 
         last_json = None
-        async for json in __generate():
+        for json in generator:
             last_json = json
 
-        return jsonify(last_json)
-
-    @staticmethod
-    def __to_sync(generator):
-        loop = asyncio.new_event_loop()
-
-        while True:
-            try:
-                yield loop.run_until_complete(generator.__anext__())
-            except StopAsyncIteration:
-                break
+        return make_response(last_json, status)
 
     @staticmethod
     def __proxy_result(remote_resp):
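The new `__process_stream` helper centralizes the stream-versus-collect branching that `talk` and `regenerate` previously duplicated. A standalone sketch of that pattern, using plain generators instead of Flask responses (names are illustrative, not the project's API):

```python
def process_stream(status, chunks, stream):
    """Either hand the chunk generator back for streaming, or drain it
    and keep only the final item (the complete reply) for non-stream mode."""
    if stream:
        return chunks  # the caller wraps this in an event-stream response

    last_json = None
    for item in chunks:  # each yielded item supersedes the previous partial reply
        last_json = item
    return last_json

gen = iter([{'text': 'Hel'}, {'text': 'Hello'}, {'text': 'Hello!'}])
print(process_stream(200, gen, stream=False))
# → {'text': 'Hello!'}
```

Draining to the last item works because each streamed chunk carries the full reply so far, so the final chunk is the finished answer.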
