
Commit 9c380d6

Merge branch 'train' of github.com:nomic-ai/gpt4all into train

zanussbaum committed Mar 28, 2023
2 parents 812b807 + cd1f1fe

Showing 1 changed file with 34 additions and 2 deletions: README.md

<h1 align="center">GPT4All</h1>
<p align="center">Demo, data and code to train an assistant-style large language model</p>

# Try it yourself

- TODO: LLaMA C++ code

# Setup


# Reproducibility

You can find trained LoRA model weights at:
- gpt4all-lora: https://huggingface.co/nomic-ai/gpt4all-lora

We are not distributing the LLaMA 7B checkpoint that these weights need to be used in association with.
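
To try the released adapter, you can load it on top of a locally converted LLaMA 7B checkpoint with the `peft` library. A minimal sketch, assuming you have `transformers` and `peft` installed; the base-model path is hypothetical, since the LLaMA 7B checkpoint is not distributed here:

```
# Minimal sketch: apply the gpt4all-lora adapter to a local LLaMA 7B base model.
# The base-model path is hypothetical; supply your own converted checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_path = "/path/to/llama-7b-hf"  # hypothetical: your converted LLaMA 7B weights

tokenizer = AutoTokenizer.from_pretrained(base_path)
base_model = AutoModelForCausalLM.from_pretrained(base_path, torch_dtype=torch.float16)

# Load the released LoRA adapter weights from the Hugging Face Hub.
model = PeftModel.from_pretrained(base_model, "nomic-ai/gpt4all-lora")
model.eval()

prompt = "Explain what a large language model is."
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```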


To reproduce our LoRA training run, do the following:

## Setup

Clone the repo

`git clone --recurse-submodules [email protected]:nomic-ai/gpt4all.git`

`git submodule init && git submodule update`

Set up the environment

```
…
pip install -e .
```
## Train

`accelerate launch --dynamo_backend=inductor --num_processes=8 --num_machines=1 --machine_rank=0 --deepspeed_multinode_launcher standard --mixed_precision=bf16 --use_deepspeed --deepspeed_config_file=configs/deepspeed/ds_config.json train.py --config configs/train/finetune-7b.yaml`
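
Under the hood, a LoRA fine-tune like this wraps the base model with small trainable low-rank adapter matrices rather than updating all 7B parameters. A rough sketch of that setup using `peft`; the rank, alpha, and target modules below are illustrative assumptions, not the values from `configs/train/finetune-7b.yaml`:

```
# Illustrative sketch of a LoRA setup with peft; the hyperparameters are
# assumptions for illustration, not this repo's actual training config.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("/path/to/llama-7b-hf")  # hypothetical path

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # illustrative low-rank dimension
    lora_alpha=32,                        # illustrative scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections commonly adapted
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

Only the adapter parameters receive gradients, which is why the resulting gpt4all-lora weights are small enough to release separately from the base checkpoint.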



If you utilize this repository, models, or data in a downstream project, please consider citing it with:
```
@misc{gpt4all,
  author = {Yuvanesh Anand and Zachary Nussbaum and Brandon Duderstadt and Benjamin Schmidt and Andriy Mulyar},
  title = {GPT4All: Training an Assistant-style Chatbot with Large Scale Data Distillation from GPT-3.5-Turbo},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/nomic-ai/gpt4all}},
}
```
