This is an implementation of ChatGPT using the official ChatGPT raw model, `text-chat-davinci-002-20230126`. This model name was briefly leaked while I was inspecting the network requests made by the official ChatGPT website, and I discovered that it works with the OpenAI API. Usage of this model currently does not cost any credits.

As far as I'm aware, I was the first to discover this, and usage of the model has since been implemented in libraries like acheong08/ChatGPT.
The previous version of this library that used transitive-bullshit/chatgpt-api is still available on the `archive/old-version` branch.
By itself, the model does not have any conversational support, so this library uses a cache to store conversations and pass them to the model as context. This allows you to have persistent conversations with ChatGPT in a nearly identical way to the official website.
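To make the idea concrete, here is a minimal sketch of how a cached conversation might be turned into a completion prompt. This is an illustrative approximation, not this library's actual implementation; the `User:`/`ChatGPT:` labels and the prompt layout are assumptions.

```javascript
// Illustrative sketch: turn a cached message history into a completion prompt.
// NOTE: the label names and formatting here are assumptions, not this
// library's exact prompt format.
function buildPrompt(promptPrefix, history, newMessage) {
    const lines = history.map(
        (m) => `${m.role === 'user' ? 'User' : 'ChatGPT'}: ${m.text}`
    );
    lines.push(`User: ${newMessage}`);
    lines.push('ChatGPT:'); // leave the model to complete the assistant's turn
    return promptPrefix + lines.join('\n\n');
}

const history = [
    { role: 'user', text: 'Hello!' },
    { role: 'assistant', text: 'Hi! How can I help you today?' },
];
const prompt = buildPrompt('You are ChatGPT...\n\n', history, 'Write a poem about cats.');
console.log(prompt);
```

Because the raw model only sees plain text, everything it "remembers" has to be re-sent this way on every request.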
- Uses the official ChatGPT raw model, `text-chat-davinci-002-20230126`.
- Includes an API server you can run to use ChatGPT in non-Node.js applications.
- Includes a `ChatGPTClient` class that you can use in your own Node.js applications.
- Replicates chat threads from the official ChatGPT website (with conversation IDs and message IDs), with persistent conversations using Keyv.
- Conversations are stored in memory by default, but you can optionally install a storage adapter to persist conversations to a database.
- Node.js
- npm
- OpenAI API key
```shell
npm i @waylaidwanderer/chatgpt-api
```
```js
import ChatGPTClient from '@waylaidwanderer/chatgpt-api';

const clientOptions = {
    // (Optional) Parameters as described in https://platform.openai.com/docs/api-reference/completions
    modelOptions: {
        // The model is set to text-chat-davinci-002-20230126 by default, but you can override
        // it and any other parameters here.
        model: 'text-chat-davinci-002-20230126',
        // The default temperature is 0.7, but you can override it here.
        temperature: 0.7,
    },
    // (Optional) Set a custom prompt prefix. As per my testing, it should work with two newlines.
    promptPrefix: 'You are not ChatGPT...\n\n',
    // (Optional) Set to true to enable `console.debug()` logging.
    debug: false,
};

const cacheOptions = {
    // Options for the Keyv cache, see https://www.npmjs.com/package/keyv.
    // This is used for storing conversations, and supports additional drivers
    // (conversations are stored in memory by default).
};

const chatGptClient = new ChatGPTClient('OPENAI_API_KEY', clientOptions, cacheOptions);

const response = await chatGptClient.sendMessage('Hello!');
console.log(response); // { response: 'Hi! How can I help you today?', conversationId: '...', messageId: '...' }

const response2 = await chatGptClient.sendMessage('Write a poem about cats.', { conversationId: response.conversationId, parentMessageId: response.messageId });
console.log(response2.response); // Cats are the best pets in the world.

const response3 = await chatGptClient.sendMessage('Now write it in French.', { conversationId: response2.conversationId, parentMessageId: response2.messageId });
console.log(response3.response); // Les chats sont les meilleurs animaux de compagnie du monde.
```
You can install the package globally using `npm i -g @waylaidwanderer/chatgpt-api`, then run it using `chatgpt-api`. This takes an optional `--settings=<path_to_settings.js>` parameter, or looks for `settings.js` in the current directory if not set, with the following contents:
```js
module.exports = {
    // Your OpenAI API key
    openaiApiKey: '',
    chatGptClient: {
        // (Optional) Parameters as described in https://platform.openai.com/docs/api-reference/completions
        modelOptions: {
            // The model is set to text-chat-davinci-002-20230126 by default, but you can override
            // it and any other parameters here.
            model: 'text-chat-davinci-002-20230126',
            // The default temperature is 0.7, but you can override it here.
            temperature: 0.7,
        },
        // (Optional) Set a custom prompt prefix. As per my testing, it should work with two newlines.
        promptPrefix: 'You are not ChatGPT...\n\n',
        // (Optional) Set to true to enable `console.debug()` logging.
        debug: false,
    },
    // Options for the Keyv cache, see https://www.npmjs.com/package/keyv.
    // This is used for storing conversations, and supports additional drivers
    // (conversations are stored in memory by default).
    cacheOptions: {},
    // The port the server will run on (optional, defaults to 3000).
    port: 3000,
};
```
Alternatively, you can install and run the package locally:

1. Clone this repository.
2. Install dependencies with `npm install`.
3. Rename `settings.example.js` to `settings.js` in the root directory and change the settings where required.
4. Start the server using `npm start` or `node bin/server.js`.
To start a conversation with ChatGPT, send a POST request to the server's `/conversation` endpoint with a JSON body in the following format:
```json
{
    "message": "Hello, how are you today?",
    "conversationId": "your-conversation-id (optional)",
    "parentMessageId": "your-parent-message-id (optional)"
}
```
The server will return a JSON object containing ChatGPT's response:
```json
{
    "response": "I'm doing well, thank you! How are you?",
    "conversationId": "your-conversation-id",
    "messageId": "response-message-id"
}
```
If the request is unsuccessful, the server will return a JSON object with an error message and a status code of 503.
If there was an error sending the message to ChatGPT:
```json
{
    "error": "There was an error communicating with ChatGPT."
}
```
Since `text-chat-davinci-002-20230126` is ChatGPT's raw model, I had to do my best to replicate the way the official ChatGPT website uses it. This means it may not behave exactly the same in some ways:
- Conversations are not tied to any user IDs, so if that's important to you, you should implement your own user ID system.
- ChatGPT's model parameters are unknown, so I set some defaults that I thought would be reasonable, such as `temperature: 0.7`.
- Conversations are limited to roughly the last 3,000 tokens, so earlier messages may be forgotten during longer conversations.
  - This works in a similar way to ChatGPT, except I'm pretty sure they have some additional way of retrieving context from earlier messages when needed (which can probably be achieved with embeddings, but I consider that out-of-scope for now).
- I removed "knowledge cutoff" from the ChatGPT preamble ("You are ChatGPT..."), which stops it from refusing to answer questions about events after 2021-09, as it does have some training data from after that date. This means it may answer questions about events after 2021-09, but it's not guaranteed to be accurate.
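The rolling ~3,000-token window described above can be sketched as follows. This is an illustrative approximation only: the library's actual token counting is certainly different (here tokens are crudely estimated as ~4 characters each), but the keep-the-most-recent-messages idea is the same:

```javascript
// Sketch of rolling-window context truncation. Illustrative only; tokens are
// crudely estimated as ~4 characters each, not counted with a real tokenizer.
function estimateTokens(text) {
    return Math.ceil(text.length / 4);
}

// Keep only the most recent messages that fit within the token budget,
// dropping the oldest ones first.
function truncateHistory(messages, maxTokens = 3000) {
    const kept = [];
    let used = 0;
    for (let i = messages.length - 1; i >= 0; i--) {
        const cost = estimateTokens(messages[i]);
        if (used + cost > maxTokens) break;
        kept.unshift(messages[i]);
        used += cost;
    }
    return kept;
}

const history = ['a'.repeat(16000), 'short question', 'short answer'];
const trimmed = truncateHistory(history, 3000);
console.log(trimmed); // the oldest (very long) message is dropped
```

This is why, in a long conversation, the model can suddenly "forget" something established early on: those messages simply no longer fit in the prompt.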
If you'd like to contribute to this project, please create a pull request with a detailed description of your changes.
This project is licensed under the MIT License.