
Commit

Fix AutoGPTQ version to 0.2.2 in requirements.txt. Adapt README.md.
Right now AutoGPTQ 0.3 is installed from requirements.txt, and users are then asked to uninstall it and install AutoGPTQ 0.2.2 from source. It is much easier to fix the version to 0.2.2 in requirements.txt in the first place.
Also fix some README.md issues.
KonradHoeffner committed Jul 25, 2023
1 parent 6f89470 commit c0c59c3
Showing 2 changed files with 6 additions and 17 deletions.
README.md: 21 changes (5 additions, 16 deletions)
@@ -18,7 +18,7 @@ Install conda
conda create -n localGPT
```

-Activate
+Activate

```shell
conda activate localGPT
@@ -30,17 +30,6 @@ In order to set your environment up to run the code here, first install all requ
pip install -r requirements.txt
```

-Then install AutoGPTQ - if you want to run quantized models for GPU
-
-```shell
-git clone https://github.com/PanQiWei/AutoGPTQ.git
-cd AutoGPTQ
-git checkout v0.2.2
-pip install .
-```
-
-For more support on [AutoGPTQ] (https://github.com/PanQiWei/AutoGPTQ).
-
## Test dataset

This repo uses the [Constitution of the USA](https://constitutioncenter.org/media/files/constitution.pdf) as an example.
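
To try it with that example document, something like the following could fetch the PDF into the folder that ingest.py reads from (a sketch; the `SOURCE_DOCUMENTS` directory name is an assumption, adjust it to wherever your checkout stores source documents):

```shell
# download the example document into the ingestion folder (path is an assumption)
wget -P SOURCE_DOCUMENTS https://constitutioncenter.org/media/files/constitution.pdf
```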
@@ -57,7 +46,7 @@ Run the following command to ingest all the data.
`defaults to cuda`

```shell
-python ingest.py
+python ingest.py
```

Use the device type argument to specify a given device.
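
For example, to run ingestion on the CPU instead of the default CUDA device (a sketch; it assumes the script exposes a `--device_type` option, as described elsewhere in the README):

```shell
# ingest on CPU instead of the default cuda device (flag name assumed)
python ingest.py --device_type cpu
```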
@@ -120,7 +109,7 @@ GGML quantized models for Apple Silicon (M1/M2) are supported through the llama-

## Troubleshooting

-**Install MPS:**
+**Install MPS:**
1- Follow this [page](https://developer.apple.com/metal/pytorch/) to set up PyTorch with Metal Performance Shaders (MPS) support. PyTorch uses the new MPS backend for GPU training acceleration. It is good practice to verify MPS support with a simple Python script, as mentioned in the linked page (a minimal check is sketched below).

2- Following that page, here is an example of what you might run in your terminal
@@ -136,7 +125,7 @@ pip install pdfminer.six
pip install xformers
```
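
For the verification mentioned in step 1, a minimal check from the shell (this only assumes a reasonably recent PyTorch build; it prints `True` when the MPS backend is usable):

```shell
# quick sanity check that PyTorch can see the MPS backend
python -c "import torch; print(torch.backends.mps.is_available())"
```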

-**Upgrade packages:**
+**Upgrade packages:**
Your langchain or llama-cpp version may be outdated. Upgrade your packages by running the install again.

```shell
@@ -278,7 +267,7 @@ This is a test project to validate the feasibility of a fully local solution for
```shell
conda install -c pytorch torchvision cudatoolkit=10.1 pytorch
```
-- If doesn't work try re installing
+- If doesn't work, try reinstalling
```shell
pip uninstall torch
pip cache purge
requirements.txt: 2 changes (1 addition, 1 deletion)
@@ -11,7 +11,7 @@ transformers
protobuf==3.20.0; sys_platform != 'darwin'
protobuf==3.20.0; sys_platform == 'darwin' and platform_machine != 'arm64'
protobuf==3.20.3; sys_platform == 'darwin' and platform_machine == 'arm64'
-auto-gptq
+auto-gptq==0.2.2
docx2txt

# Utilities
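
With the version pinned as above, a plain `pip install -r requirements.txt` should be sufficient; one way to sanity-check the resolved version afterwards (a sketch; the exact output wording may differ):

```shell
pip install -r requirements.txt
# expect the output to report Version: 0.2.2
pip show auto-gptq
```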
