forked from fuergaosi233/wechat-chatgpt
Increase recordings and adjust OpenAI parameters
- Add GPT3Tokenizer library for tokenization
- Add a default response when the GPT response is empty
- Add a check for blocked words in the GPT response
- Change the temperature parameter to `0.6`
- Add error handling for the OpenAI API request
- Return the OpenAI response as a string
- Add a check for the token limit before adding a user or assistant message
- Delete messages when the token limit is exceeded
- Increase returned recordings from `10` to `100`

[src/utils.ts]
- Add the GPT3Tokenizer library for tokenization
- Add a function to calculate the number of tokens in a chat message
- Add a function to check if the number of tokens exceeds the limit for the current model

[src/bot.ts]
- Add a default response when the GPT response is empty
- Add a check for blocked words in the GPT response

[src/openai.ts]
- Change the temperature parameter to `0.6`
- Add error handling for the OpenAI API request
- Return the OpenAI response as a string (a sketch follows this list)

[package.json]
- Add the `gpt3-tokenizer` package

[src/data.ts]
- Add a check for the token limit before adding a user or assistant message
- Delete messages starting from the second one if the token limit is exceeded (see the usage sketch after the diff)
- Import `isTokenOverLimit` from `./utils.js`
- Remove the initialization of `initState`

[package-lock.json]
- Add the `gpt3-tokenizer` package
- Add the `array-keyed-map` package
- Bump the `dotenv` package version
- Bump the `openai` package version
- Bump the `google-protobuf` package version
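The src/openai.ts diff is not rendered on this page, so the following is only a minimal sketch of what the commit message describes: a temperature of `0.6`, error handling around the API request, and a plain-string return. The `chatgpt` function name and the empty-string fallback are assumptions; the SDK calls are those of the openai v3 Node client that `ChatCompletionRequestMessage` belongs to.

import { Configuration, OpenAIApi, ChatCompletionRequestMessage } from "openai";

const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
);

// Hypothetical sketch, not the committed code: call the chat completion API
// with temperature 0.6, catch request errors, and always return a string.
async function chatgpt(messages: ChatCompletionRequestMessage[]): Promise<string> {
  try {
    const response = await openai.createChatCompletion({
      model: "gpt-3.5-turbo",
      messages,
      temperature: 0.6, // the value the commit message says it changes to
    });
    return response.data.choices[0]?.message?.content ?? "";
  } catch (err: any) {
    console.error(`OpenAI API request failed: ${err?.message ?? err}`);
    return ""; // degrade to an empty string instead of throwing
  }
}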
Showing 6 changed files with 89 additions and 13 deletions.
src/utils.ts
@@ -0,0 +1,31 @@
import { ChatCompletionRequestMessage } from "openai";
import GPT3TokenizerImport from "gpt3-tokenizer";
import { config } from "./config.js";

// gpt3-tokenizer ships a CJS default export; depending on how it is loaded,
// the import may be the constructor itself or an object with a `.default`.
const GPT3Tokenizer: typeof GPT3TokenizerImport =
  typeof GPT3TokenizerImport === "function"
    ? GPT3TokenizerImport
    : (GPT3TokenizerImport as any).default;

// https://github.com/chathub-dev/chathub/blob/main/src/app/bots/chatgpt-api/usage.ts
const tokenizer = new GPT3Tokenizer({ type: "gpt3" });

// Sum the token counts of every message's content and role, plus a small
// fixed overhead.
function calTokens(chatMessage: ChatCompletionRequestMessage[]): number {
  let count = 0;
  for (const msg of chatMessage) {
    count += countTokens(msg.content);
    count += countTokens(msg.role);
  }
  return count + 2;
}

function countTokens(str: string): number {
  const encoded = tokenizer.encode(str);
  return encoded.bpe.length;
}

// gpt-3.5-turbo and gpt-3.5-turbo-0301 share a 4096-token context window,
// which is also the default applied to any other configured model.
export function isTokenOverLimit(chatMessage: ChatCompletionRequestMessage[]): boolean {
  let limit = 4096;
  if (config.model === "gpt-3.5-turbo" || config.model === "gpt-3.5-turbo-0301") {
    limit = 4096;
  }
  return calTokens(chatMessage) > limit;
}
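According to the commit message, src/data.ts consumes `isTokenOverLimit` to trim history before appending a message: when the limit is exceeded, messages are deleted starting from the second one, preserving the first entry (typically the system prompt). That file's diff is not rendered here, so this is only a sketch under that description; the `addMessage` name and the array-based shape are assumptions.

import { ChatCompletionRequestMessage } from "openai";
import { isTokenOverLimit } from "./utils.js";

// Hypothetical sketch of the trimming described for src/data.ts: drop messages
// starting from index 1 (keeping the first, typically the system prompt) until
// the conversation plus the new message fits under the model's token limit.
function addMessage(
  history: ChatCompletionRequestMessage[],
  message: ChatCompletionRequestMessage
): void {
  while (isTokenOverLimit([...history, message]) && history.length > 1) {
    history.splice(1, 1); // delete the second message
  }
  history.push(message);
}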