Merge pull request #18 from techcoderx/update-project
Latest updates
firtoz authored Jun 17, 2023
2 parents 1026f36 + bc965ec commit 4496037
Showing 14 changed files with 179 additions and 2,135 deletions.
134 changes: 96 additions & 38 deletions README.md
@@ -3,7 +3,7 @@
| :exclamation: The new version of GPT-Shell is called Erin! Find out more at [Erin's Website!](https://erin.ac/?ref=github-readme) :exclamation: |
|--------------------------------------------------------------------------------------------------------------------------------------------------|

GPT-Shell is an OpenAI based chat-bot that is similar to OpenAI's ChatGPT (https://chat.openai.com/).
GPT-Shell is an OpenAI based chat-bot that is similar to OpenAI's [ChatGPT](https://chat.openai.com/).

It allows users to converse with a virtual companion. It is built with Node.js and TypeScript,
using modern Yarn, to create a seamless conversation experience.
@@ -20,10 +20,6 @@ You can try the bot on the official Discord server:

## Usage

Set up a discord bot and add it to your server.

Follow the setup instructions below.

To interact with GPT-Shell, users can:
- Use the `/chat-gpt` command to start a conversation with the bot
- Ping the bot in a channel it's in
@@ -36,6 +32,8 @@ The bot is able to handle multiple conversations at once,
so you can start as many conversations as you like.

## Bot Setup
Set up a discord bot [here](https://discord.com/developers/applications/) and add it to your server.

Scopes:
- bot
- applications.commands
@@ -61,52 +59,41 @@ You also need to enable the Message Content Intent:
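The invite URL generated under OAuth2 → URL Generator in the Developer Portal combines these scopes; it typically looks like the following (the client ID is a placeholder for your own application's ID):

```
https://discord.com/api/oauth2/authorize?client_id=YOUR_CLIENT_ID&permissions=0&scope=bot%20applications.commands
```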

## Setup

You can try to fork the replit:

https://replit.com/@Ephemeros/GPT-Shell
- You can try to fork the [Replit here](https://replit.com/@Ephemeros/GPT-Shell)

Or you can set it up on your machine.
- Or you can set it up on your machine.

### Prerequisites:

Nodejs: https://nodejs.org/en/ (18 or above)
- [Node.js](https://nodejs.org/en/) (version 18 or above)

Yarn: https://yarnpkg.com/getting-started/install (after installing nodejs)
- [Yarn](https://yarnpkg.com/getting-started/install) (after installing Node.js)

To use GPT-Shell, you will need to:
- Clone the project
- Open the terminal in the project's folder
- (in windows, right click somewhere in the folder and select "Open In Terminal")
- if you see something about powershell, type `cmd` and hit enter, to go to the simpler command line terminal.
- Run `yarn install`

Set up the environment variables as described below.

Then to start a development environment, run `yarn dev`.
This way, whenever you change the code, it will restart the bot to update.

To build and run, run `yarn build` and then `yarn start`.

Go to your server, and type the config command, and set the API key for your server using the config.
- [pm2](https://pm2.io/docs/runtime/guide/installation/): to keep your bot running even after you close your terminal.

To use GPT-Shell, you will need to:
- Clone the project:
```
git clone https://github.com/firtoz/GPT-Shell.git
```
- Open the terminal in the project's folder:
```
cd GPT-Shell
```
(On Windows, right-click somewhere in the folder and select "Open in Terminal".)
If PowerShell opens, type `cmd` and hit Enter to switch to the simpler command-line terminal.
- Install the dependencies:
```
yarn install
```
- Set up environment variables

<details>
<summary>Expand to see config image</summary>

![config-api-key.png](config-api-key.png)

</details>


## Environment Variables
## Setting up Environment Variables

The following environment variables are required for GPT-Shell to work properly.

You can set the environment variables in any way you like, or place an .env.local file at the root of your project,
next to `package.json`, that looks like this:
You can set the environment variables in any way you like, or place an `.env.local` file at the root of your project, next to `package.json` (rename the provided `example.env.local` to `.env.local`).
Ensure that your `.env.local` looks like this:
<details>
<summary> [EXPAND] Click to see .env.local</summary>

@@ -172,6 +159,77 @@ Extras:

You can create an app at https://developer.wolframalpha.com/portal/myapps and get its ID.

## Start your bot
Set up the environment variables as described above.
- Install pm2:

With yarn:
```bash
yarn global add pm2
```
With npm:
```bash
npm install pm2 -g
```
On Debian, use the install script:
```bash
apt update && apt install sudo curl && curl -sL https://raw.githubusercontent.com/Unitech/pm2/master/packager/setup.deb.sh | sudo -E bash -
```
- Then to start a development environment, run
```bash
yarn dev
```
This way, whenever you change the code, the bot restarts automatically to pick up your changes.

- To build and start the bot, run
```bash
yarn build
```
and then
```bash
yarn start
```
You can also run `npm start` or `npm run start` to start the bot.

NOTE: running `yarn start`, `npm start` or `npm run start` will start the bot with PM2 and give it the name "GPT-Shell". You can replace "GPT-Shell" with a name of your choice in [package.json](https://github.com/firtoz/GPT-Shell/blob/main/package.json). It will also show logs for the PM2 running processes and save them.
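For example, to run the process under a different name, you could edit the `start` script in `package.json` (the name "MyBot" below is purely illustrative):

```json
"scripts": {
  "start": "pm2 start ./lib/index.js --name MyBot && pm2 save && pm2 logs"
}
```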

If you are in a dev environment, use `node .` to test your code:
```bash
node .
```
Once you are satisfied with the changes, run:
```bash
pm2 restart GPT-Shell && pm2 logs
```
In order to stop the bot, run:
```bash
yarn run stop
```
You can also restart it from the [pm2.io dashboard](https://pm2.io/) as shown below:
<details>
<summary>Expand to see image</summary>

![image](https://cdn.discordapp.com/attachments/1072834906742345808/1076183450417123358/image.png)

</details>

## Configuration

Go to your server, type the config command, and set the OpenAI API key for your server using the config:

```
/chat-gpt-config
```

<details>
<summary>Expand to see config image</summary>

![config-api-key.png](config-api-key.png)

</details>



## Long-Term Memory

Starting from 2.0.0, the bot has the capacity to have a long-term memory.
6 changes: 3 additions & 3 deletions package.json
@@ -5,8 +5,9 @@
"scripts": {
"dev": "nodemon ./src/index.ts",
"build": "yarn rimraf ./lib/ && yarn tsc && echo build complete",
"start": "yarn exec forever --minUptime 1000 --spinSleepTime 1000 ./lib/index.js",
"start-replit": "yarn node --version ./lib/index.js",
"start": "pm2 start ./lib/index.js --name GPT-Shell && pm2 save && pm2 logs",
"stop": "pm2 stop all",
"start-replit": "yarn node ./lib/index.js",
"test": "vitest run",
"vitest": "vitest"
},
@@ -41,7 +42,6 @@
"discord-interactions": "^3.3.0",
"discord.js": "^14.9.0",
"dotenv": "^16.0.3",
"forever": "^4.0.3",
"gpt3-tokenizer": "^1.1.5",
"lodash": "^4.17.21",
"mongodb": "^4.12.1",
3 changes: 1 addition & 2 deletions replit-setup.sh
@@ -10,5 +10,4 @@ corepack enable
corepack prepare yarn@stable --activate

yarn node --version
yarn remove canvas
yarn build && yarn start
yarn build && yarn run start-replit
4 changes: 4 additions & 0 deletions replit.nix
@@ -6,5 +6,9 @@

pkgs.nodePackages.typescript
pkgs.nodePackages.typescript-language-server
pkgs.libuuid
];
env = {
LD_LIBRARY_PATH = pkgs.lib.makeLibraryPath [pkgs.libuuid];
};
}
54 changes: 35 additions & 19 deletions src/core/ChatGPTConversation.ts
@@ -4,7 +4,6 @@ import {
AnyThreadChannel,
ChannelType,
EmbedBuilder,
EmbedType,
Message,
TextBasedChannel,
ThreadAutoArchiveDuration,
@@ -13,7 +12,7 @@
import {logMessage, printArg} from "../utils/logMessage";
import {
ChatCompletionRequestMessage,
CreateCompletionResponse, CreateCompletionResponseUsage,
CreateCompletionResponseUsage,
CreateEmbeddingResponse,
CreateModerationResponse,
CreateModerationResponseResultsInnerCategoryScores,
@@ -953,20 +952,21 @@ Thank you for your understanding.`),
this.nextSummaryMessageCount = this.messageHistory.length + 10;
await this.persist();

const response: AxiosResponse<CreateCompletionResponse> = await openai.createCompletion({
model: this.model,
prompt: `Please create a name for a discord thread that contains this conversation:
${lastMessages.map(item => messageToPromptPart(item)).join('\n')}`,
temperature: this.temperature,
max_tokens: 512,
top_p: 0.9,
frequency_penalty: 0,
presence_penalty: 0,
user: userId,
}) as any;
const response = await getChatCompletionSimple({
openai,
messages: this.getSummarisePrompt(lastMessages),
options: {
model: 'gpt-3.5-turbo',
temperature: this.temperature,
max_tokens: 512,
top_p: 0.9,
frequency_penalty: 0,
presence_penalty: 0,
user: userId,
}
})

this.summary = response.data.choices[0].text!;
this.summary = response

logMessage(`Summary for ${await this.getLinkableId()}: ${this.summary}.
Source: ${lastMessages}`);
@@ -1290,6 +1290,18 @@ ${messageToPromptPart(item.message)}`;
return;
}

private getSummarisePrompt(lastMessages: MessageHistoryItem[]): ChatCompletionRequestMessage[] {
const result: ChatCompletionRequestMessage[] = [
{
role: 'system',
content: `Please create a name for a discord thread that contains this conversation:
${lastMessages.map(item => messageToPromptPart(item)).join('\n')}`
}
]
return result
}

private async getFullPrompt(
config: ConfigForIdType,
openai: OpenAIApi,
@@ -1342,7 +1354,7 @@ ${messageToPromptPart(item.message)}`;
case "human":
return {
role: 'user',
name: item.username,
name: this.filterUsername(item.username),
content: item.content,
};
case "response":
@@ -1354,7 +1366,7 @@ ${messageToPromptPart(item.message)}`;
}),
{
role: 'user',
name: inputMessageItem.username,
name: this.filterUsername(inputMessageItem.username),
content: inputMessageItem.content,
},
);
@@ -1458,7 +1470,7 @@ ${messageToPromptPart(item.message)}`;
case "human":
return {
role: 'user',
name: item.username,
name: this.filterUsername(item.username),
content: item.content,
};
case "response":
@@ -1477,7 +1489,7 @@ ${messageToPromptPart(item.message)}`;
case "human":
return {
role: 'user',
name: item.username,
name: this.filterUsername(item.username),
content: item.content,
};
case "response":
@@ -1596,6 +1608,10 @@ ${messageToPromptPart(item.message)}`;
return new MultiMessage(channel, undefined, messageToReplyTo).update(message, true);
}

private filterUsername(username: string): string {
return username.replace(/[^a-zA-Z0-9_-]+/g,'').substring(0,64)
}

private async getDebugName(user: User) {
return this.isDirectMessage ? user.username : await getGuildName(this.guildId);
}
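The `filterUsername` helper added in this file strips characters outside `[a-zA-Z0-9_-]` and caps the result at 64 characters, presumably to satisfy the chat API's restrictions on the `name` field. A standalone sketch of its behavior:

```typescript
// Same logic as the diff's filterUsername: keep only [a-zA-Z0-9_-]
// and truncate to 64 characters.
function filterUsername(username: string): string {
    return username.replace(/[^a-zA-Z0-9_-]+/g, '').substring(0, 64);
}

console.log(filterUsername('Cool User#1234')); // "CoolUser1234"
```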
2 changes: 1 addition & 1 deletion src/core/ConversationFactory.ts
@@ -46,7 +46,7 @@ export class ConversationFactory {
message.author.id,
message.guildId ?? '',
discordClient.user!.username,
'text-davinci-003'
'gpt-3.5-turbo'
);

if (channel.isDMBased()) {
2 changes: 1 addition & 1 deletion src/core/EncodeLength.ts
@@ -14,7 +14,7 @@ export const ChatModelNames: ChatModelName[] = [
'gpt-4-32k',
];

export function numTokensFromMessages(messages: ChatCompletionRequestMessage[], model: ChatModelName = 'gpt-3.5-turbo-0301') {
export function numTokensFromMessages(messages: ChatCompletionRequestMessage[], model: ChatModelName = 'gpt-3.5-turbo') {
if (ChatModelNames.includes(model)) {
let numTokens = 0;
for (const message of messages) {
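The counting loop is truncated in this hunk; judging from the visible signature it presumably follows the common cookbook heuristic of a few tokens of overhead per message plus the encoded content. A rough sketch under that assumption (the character-based `approxTokens` is a crude stand-in for a real tokenizer such as the project's `gpt3-tokenizer` dependency):

```typescript
// Hypothetical sketch of per-message token accounting, NOT the repo's
// actual implementation (which is cut off in the diff above).
type ChatMessage = { role: string; name?: string; content: string };

function approxTokens(text: string): number {
    // stand-in heuristic: ~1 token per 4 characters
    return Math.ceil(text.length / 4);
}

function numTokensFromMessages(messages: ChatMessage[]): number {
    let numTokens = 0;
    for (const message of messages) {
        numTokens += 4; // per-message overhead (cookbook heuristic)
        numTokens += approxTokens(message.content);
    }
    return numTokens + 2; // reply is primed with assistant tokens
}

console.log(numTokensFromMessages([{ role: 'user', content: 'hello world' }])); // 9
```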
2 changes: 1 addition & 1 deletion src/core/ModelInfo.ts
@@ -1 +1 @@
export type ModelName = 'text-davinci-003'
export type ModelName = 'gpt-3.5-turbo'
2 changes: 1 addition & 1 deletion src/core/config.ts
@@ -79,7 +79,7 @@ export type ConfigForIdType = {
const defaultConfigForId: ConfigForIdType = {
maxTokensForRecentMessages: 1000,
modelInfo: {
['text-davinci-003']: {
['gpt-3.5-turbo']: {
MAX_ALLOWED_TOKENS: 2000,
MAX_TOKENS_PER_RESPONSE: 512,
},
Expand Down
4 changes: 2 additions & 2 deletions src/discord/handlers/commands/ChatGptCommand.ts
@@ -224,7 +224,7 @@ export const ChatGptCommand: Command = {
type: ApplicationCommandType.ChatInput,
options,
run: async (client: Client, interaction: CommandInteraction) => {
const model = 'text-davinci-003';
const model = 'gpt-3.5-turbo';
await handleChat(interaction, client, model);
}
};
@@ -237,7 +237,7 @@ export const PrivateChatGptCommand: Command | null = PRIVATE_COMMAND_NAME ? {
type: ApplicationCommandType.ChatInput,
options,
run: async (client: Client, interaction: CommandInteraction) => {
const model = 'text-davinci-003';
const model = 'gpt-3.5-turbo';
await handleChat(interaction, client, model, true);
}
} : null;
4 changes: 2 additions & 2 deletions src/discord/handlers/commands/ConfigCommand.ts
@@ -72,11 +72,11 @@ async function generateFollowUp(configId: string, isDM: boolean, user: User) {
const fields = [
{
name: 'Token limits:',
value: `Max tokens for prompt: ${config.modelInfo['text-davinci-003'].MAX_ALLOWED_TOKENS}.
value: `Max tokens for prompt: ${config.modelInfo['gpt-3.5-turbo'].MAX_ALLOWED_TOKENS}.
Conversations start at less than a cent per message. As a conversation gets longer, the cost starts to rise as more and more tokens are used.
With this configuration, each message can cost at most \$${(0.02 * config.modelInfo['text-davinci-003'].MAX_ALLOWED_TOKENS / 1000).toFixed(2)} USD.
With this configuration, each message can cost at most \$${(0.02 * config.modelInfo['gpt-3.5-turbo'].MAX_ALLOWED_TOKENS / 1000).toFixed(2)} USD.
Max tokens for recent messages: ${config.maxTokensForRecentMessages}.
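The cost ceiling in the embed template above is straightforward arithmetic; with the default `MAX_ALLOWED_TOKENS` of 2000 set in `src/core/config.ts` earlier in this commit, it evaluates as:

```typescript
// Mirrors the cost expression in the config embed above:
// $0.02 per 1000 tokens, times the maximum allowed prompt tokens.
const MAX_ALLOWED_TOKENS = 2000; // default from src/core/config.ts
const maxCostPerMessage = (0.02 * MAX_ALLOWED_TOKENS / 1000).toFixed(2);
console.log(`$${maxCostPerMessage} USD`); // "$0.04 USD"
```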
