Add chat binary for Windows
EliasVincent committed Mar 29, 2023
1 parent 7e468f2 commit 2c5d4a3
Showing 2 changed files with 2 additions and 1 deletion.
README.md (3 changes: 2 additions & 1 deletion)
@@ -21,7 +21,8 @@ Download the CPU quantized gpt4all model checkpoint: [gpt4all-lora-quantized.bin
 Clone this repository down and place the quantized model in the `chat` directory and start chatting by running:
 
 - `cd chat;./gpt4all-lora-quantized-OSX-m1` on M1 Mac/OSX
-- `cd chat;./gpt4all-lora-quantized-linux-x86` on Windows/Linux
+- `cd chat;./gpt4all-lora-quantized-linux-x86` on Linux
+- `cd chat;./gpt4all-lora-quantized-win64.exe` on Windows (PowerShell)
 
 To compile for custom hardware, see our fork of the [Alpaca C++](https://github.com/zanussbaum/gpt4all.cpp) repo.
Binary file added chat/gpt4all-lora-quantized-win64.exe
Binary file not shown.
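
For context, the Windows entry added to the README implies a run sequence roughly like the sketch below. This is a minimal illustration, not part of the commit: the repository URL and the download location of the checkpoint are assumptions; only the `chat/gpt4all-lora-quantized-win64.exe` invocation comes from the diff above.

```powershell
# Minimal sketch of the Windows steps implied by the new README line
# (repository URL and checkpoint download path are assumptions, not from this commit).
git clone https://github.com/nomic-ai/gpt4all.git
cd gpt4all

# Place the separately downloaded quantized checkpoint into chat/
Copy-Item "$env:USERPROFILE\Downloads\gpt4all-lora-quantized.bin" .\chat\

# Run the Windows binary added by this commit from PowerShell
cd chat
.\gpt4all-lora-quantized-win64.exe
```

Any download path works; the only requirement stated in the README is that the quantized model file ends up in the `chat` directory next to the binary.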
