A proof-of-concept ChatGPT client for DOS.
Photos of the client running on my 1984 IBM 5155 Portable PC with a 4.77 MHz Intel 8088 CPU, running MS-DOS 6.22.
As there are no native HTTPS APIs for DOS, an HTTP-to-HTTPS proxy like this one I've written must be running on a modern machine.
This program is heavily based on sample code in the DOS networking MTCP library. It also requires a DOS Packet Driver to be loaded and MTCP to be set up for the machine/VM.
This program was written in a short time as a toy project. It has not been rigorously tested and thus is NOT meant for "production" use.
The application binary can be found in the releases directory or the GitHub Releases section, but do the following first.
- OpenAI requires an API key to use its APIs. Follow the instructions on their website to obtain this key before proceeding.
- Download and start up http-to-https-proxy
- The application requires a config file named doschgpt.ini. Modify the configuration file to suit your needs, keeping the fields in this order. A sample file can be found with the binary, and an illustrative sketch follows this list.
  - API key: Place your key without quotes (the API key in the sample file has been revoked)
  - Model: Language model to use, e.g. gpt-3.5-turbo
  - Request Temperature: How random the completion will be (see OpenAI's documentation for more details)
  - Proxy hostname: Hostname or IP address of the proxy
  - Proxy port: Port the proxy listens on
  - Outgoing start port: Start of the range from which an outgoing port is randomly selected
  - Outgoing end port: End of the range from which an outgoing port is randomly selected
  - Socket connect timeout (ms): How long to wait when attempting to connect to the proxy
  - Socket response timeout (ms): How long to wait for OpenAI's servers to reply
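The exact layout of doschgpt.ini is defined by the sample file that ships with the binary; purely as an illustrative sketch, assuming one value per line in the order listed above, a configuration might look like this (all values below are placeholders):

sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
gpt-3.5-turbo
0.7
192.168.1.144
8080
2048
4096
10000
30000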
- Ensure that your DOS environment has loaded the following:
  - Packet Driver
  - MTCP config environment variable MTCPCFG
  - MTCP config file configured by DHCP
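For example, a minimal AUTOEXEC.BAT fragment could look like the following; the packet driver name, software interrupt, and paths are placeholders for your particular hardware and install locations:

# Load the packet driver (driver name and settings depend on your NIC)
C:\DRIVERS\NE2000.COM 0x60
# Point MTCP at its config file, then let DHCP fill it in
SET MTCPCFG=C:\MTCP\TCP.CFG
C:\MTCP\DHCP.EXE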
- Just launch doschgpt.exe on your machine and fire away. Press the ESC key to quit the application. You may use the following command line arguments:
  - -dri: Print the outgoing port and the number of prompt and completion tokens used after each request
  - -drr: Display the raw server return headers and JSON reply
  - -drt: Display the timestamp of the latest request/reply
  - -cp737: Supports Greek Code Page 737. Ensure the code page is loaded before starting the program.
  - -fhistory.txt: Append conversation history to a new/existing text file. The file will also include debug messages if specified. Replace history.txt with any other filepath you desire. There is no space between -f and the filepath.
Parsed options will be displayed.
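For example, to print token usage after each request and keep a transcript of the conversation (the log filename here is arbitrary):

# Show per-request debug info and append the conversation to CHATLOG.TXT
doschgpt.exe -dri -fchatlog.txt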
To compile this application, you have to use the Open Watcom 2.0 beta, which you can download from here. I used the Open Watcom 2.0 build for 64-bit Windows released on 2023-04-01 02:52:44. The v1.9 version seems to create binaries with issues on some platforms.
During installation, Open Watcom may prompt to install the environment variables. I chose not to do that, to avoid making those variables permanent. Instead, I use a batch file to set the variables whenever I need to compile.
The program is compiled via a Makefile adapted from the ones in MTCP.
# Open cmd.exe
cd doschgpt-code
# If using Open Watcom v2.0 beta installed to C:\WATCOM2
20setenv.bat
# If using Open Watcom v1.9 installed to C:\WATCOM (Not recommended)
19setenv.bat
# To compile
wmake
# Only if using Open Watcom 1.9. To patch the Open Watcom runtime to support Compaq Portable. Not needed for Open Watcom 2.0 beta.
PTACH.exe doschgpt.exe doschgpt.map -ml
# To clean
wmake clean
This application compiles against the MTCP library. I unzipped the latest version at the time of development, mTCP-src_2023-03-31.zip, into the mtcpsrc directory. When Brutman updates this library in the future, simply replace the contents of the mtcpsrc directory with the new library.
PTACH.exe is a Win NT program compiled from the MTCP sources.
I use the Visual Studio Code text editor for ease of use. For ease of testing, I used a virtual machine to run the application, since 16-bit DOS applications and the MTCP network stack cannot run on modern Windows.
More details of my setup can be found here.
To easily transfer the binary, I used Python to host my build directory as a temporary web server, then used the MTCP tool htget to fetch the binary.
# On modern machine with binary
python3 -m http.server 8000
# Run on DOS machine/VM
htget -o doschgpt.exe http://X.X.X.X:8000/doschgpt.exe
OpenAI imposes rate limits on its API, so we should minimise calling it repeatedly.
To avoid calling OpenAI's servers during testing, we can mock the server using the mockprox.go Go program, which replays the contents of reply.txt whenever an API call is received.
cp correct.txt reply.txt
go build mockprox.go
mockprox.exe
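The contents of mockprox.go are not reproduced here, but a minimal sketch of such a replay server, assuming reply.txt holds just the JSON body and that the client's proxy setting points at this machine on port 8080, could look like:

package main

import (
	"log"
	"net/http"
	"os"
)

func main() {
	// Answer every request with the canned response saved in reply.txt.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		data, err := os.ReadFile("reply.txt")
		if err != nil {
			http.Error(w, "cannot read reply.txt", http.StatusInternalServerError)
			return
		}
		w.Header().Set("Content-Type", "application/json")
		w.Write(data)
	})
	// Listen where the DOS client expects its proxy to be.
	log.Fatal(http.ListenAndServe(":8080", nil))
}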
Only one ChatGPT API endpoint is used, which is the chat completion.
# Test API directly
curl https://api.openai.com/v1/chat/completions -H "Content-Type: application/json" -H "Authorization: Bearer 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" -d '{ "model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "What is MS-DOS?"}], "temperature": 0.7 }'
# Call through the https proxy for testing
curl --proxy "http://192.168.1.144:8080" https://api.openai.com/v1/chat/completions -H "Content-Type: application/json" -H "Authorization: Bearer sk-EhmTsEsKyH4qHZL2mr3hT3BlbkFJd6AcfdBrujJsBBGPRcQ" -d '{ "model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "What is MS-DOS?"}], "temperature": 0.7 }'
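For reference, a successful reply from this endpoint is JSON shaped roughly like the abridged example below (values are illustrative); the assistant's text sits in choices[0].message.content, and the prompt/completion token counts reported by the -dri option correspond to the usage fields:

{
  "id": "chatcmpl-...",
  "object": "chat.completion",
  "model": "gpt-3.5-turbo",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "MS-DOS is a disk operating system developed by Microsoft..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 13,
    "completion_tokens": 52,
    "total_tokens": 65
  }
}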
- v0.9 (27 Apr 2023):
  - (New feature) Ability to append conversation history and debug messages to a text file
  - (New feature) Display timestamp as a debug option
  - Removed FAR pointers
  - Reduced user entry buffer to 1600 bytes
  - Reduced API body buffer to 12000 bytes
  - Reduced SEND_RECEIVE buffer to 14000 bytes
- v0.8 (9 Apr 2023):
  - Supports Greek Code Page 737 via the -cp737 command line argument.
  - Corrected a small bug where Code Page 437 parsing of UTF-8 characters starting with 0xE2 did not return the designated unknown character when an unknown character was encountered.
- v0.7 (8 Apr 2023):
  - Corrected a bug in the previous release where previous message/reply memory was not freed after the program ends.
  - Now uses one-time malloc allocations for the previous message (5000), temp message (5000), and GPT reply (8000) buffers to avoid memory fragmentation.
  - Corrected a memory allocation issue of not using __far when required
- v0.6 (8 Apr 2023):
  - Added a new feature to send the previous request and ChatGPT reply to give the model more context when answering the latest request.
    - Previous request and ChatGPT reply have to be cached
    - API_BODY_SIZE_BUFFER increased to 15000 bytes
  - Corrected a bug of incorrectly printing the uint16_t outgoing port value in debug mode
- v0.5 (5 Apr 2023):
  - Corrected a bug where the number of bytes to read from MTCP was always the same even though the buffer already had some bytes inside from a previous read.
- v0.4 (1 Apr 2023):
  - Updated to use MTCP 2023-03-31
- v0.3 (1 Apr 2023):
  - Display characters like accents from Code Page 437
  - Escape " and \ characters in user input
  - Print \ without escape from JSON
  - Added a 4096 byte buffer for the post-escaped message string; API body buffer increased to 6144 bytes
  - Wait a further 200 ms after the last non-zero byte received to be sure there are no more bytes incoming from the socket
  - Compiled with Open Watcom 2.0 Beta (2023-04-01 build)
- v0.2 (30 Mar 2023):
  - Compiled with Open Watcom 2.0 Beta (2023-03-04 build), which solves the issue of the app not starting on some PCs.
  - Show date and time of compilation
  - Will parse and print quotes that were escaped in the JSON reply
  - Reduced size of the user text entry buffer from 10240 to 2048 characters to reduce memory usage.
  - Use the same buffer for send and receive on the socket to further cut down memory usage.
  - API body buffer dropped to 4096 bytes
  - Print only one decimal place for temperature at start
- v0.1 (26 Mar 2023):
  - Initial release