Update README.md
bmschmidt authored Mar 30, 2023
1 parent bc7eb80 commit b9861f7
Showing 1 changed file with 2 additions and 2 deletions.
README.md
@@ -43,7 +43,7 @@ Note: the full model on GPU (16GB of RAM required) performs much better in our q
To get running using the python client with the CPU interface, first install the [nomic client](https://github.com/nomic-ai/nomic) using `pip install nomic`
Then, you can use the following script to interact with GPT4All:
```
-from nomic import GPT4All
+from nomic.gpt4all import GPT4All
 m = GPT4All()
 m.connect()
 m.prompt('write me a story about a lonely computer')
```
@@ -57,7 +57,7 @@ The setup here is slightly more involved than the CPU model.

Once this is done, you can run the model on GPU with a script like the following:
```
-from nomic import GPT4AllGPU
+from nomic.gpt4all import GPT4AllGPU
 m = GPT4AllGPU(LLAMA_PATH)
 config = {'num_beams': 2,
           'min_new_tokens': 10,
```
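Both hunks make the same fix: the `GPT4All` classes live in the `nomic.gpt4all` submodule, not at the package root, so `from nomic import GPT4All` fails. A minimal self-contained sketch of why (using hypothetical stand-in modules `pkg` and `pkg.sub`, not the real `nomic` package): a name defined in a submodule is only importable from the package root if the package's `__init__` re-exports it.

```python
import sys
import types

# Stand-ins for `nomic` and `nomic.gpt4all` -- hypothetical, for illustration.
pkg = types.ModuleType("pkg")
sub = types.ModuleType("pkg.sub")

class GPT4All:  # stand-in for the real client class
    pass

sub.GPT4All = GPT4All
pkg.sub = sub
sys.modules["pkg"] = pkg
sys.modules["pkg.sub"] = sub

from pkg.sub import GPT4All as Client  # works: full submodule path

try:
    from pkg import GPT4All  # fails: the root package does not re-export it
except ImportError:
    print("GPT4All is not importable from the package root")
```

The corrected README imports (`from nomic.gpt4all import ...`) follow the first, working pattern.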