Merge pull request PromtEngineer#146 from LeafmanZ/UI_v1
Adding support for GUI and API. 
Addressed: PromtEngineer#48 PromtEngineer#74 PromtEngineer#19
PromtEngineer authored Jun 15, 2023
2 parents 1647470 + 658a1a4 commit 57d4f5e
Showing 54 changed files with 61,057 additions and 0 deletions.
35 changes: 35 additions & 0 deletions README.md
@@ -100,6 +100,41 @@ In order to ask a question, run a command like:

```shell
python run_localGPT.py --device_type cpu
```

# Run the UI
1. Start by opening `run_localGPT_API.py` in a code editor of your choice. If you are using a GPU, skip to step 3.

2. If you are running on CPU, change `DEVICE_TYPE = 'cuda'` to `DEVICE_TYPE = 'cpu'`.

* Comment out the following:
```python
model_id = "TheBloke/WizardLM-7B-uncensored-GPTQ"
model_basename = "WizardLM-7B-uncensored-GPTQ-4bit-128g.compat.no-act-order.safetensors"
LLM = load_model(device_type=DEVICE_TYPE, model_id=model_id, model_basename=model_basename)
```
* Uncomment:
```python
model_id = "TheBloke/guanaco-7B-HF" # or some other -HF or .bin model
LLM = load_model(device_type=DEVICE_TYPE, model_id=model_id)
```

* If you are running on a GPU, there is nothing to change. Save and close `run_localGPT_API.py`. (A sketch that automates this device choice appears after this list.)

3. Open a terminal and activate the Python environment that contains the dependencies installed from `requirements.txt`.

4. Navigate to the `/LOCALGPT` directory.

5. Run the command `python run_localGPT_API.py`. The API should begin to run.

6. Wait until everything has loaded. You should see something like `INFO:werkzeug:Press CTRL+C to quit`.

7. Open a second terminal and activate the same Python environment.

8. Navigate to the `/LOCALGPT/localGPTUI` directory.

9. Run the command `python localGPTUI.py`.

10. Open a web browser and go to `http://localhost:5111/`. Once both servers are running, you can also drive the API directly; see the sketches after this list.
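
Rather than editing the file by hand, the device choice from step 2 can be automated. The snippet below is an illustrative sketch, not part of the repository: it assumes `torch` is installed and that `load_model` can be imported from `run_localGPT`.

```python
# Illustrative sketch only, not part of the repository: pick the device
# automatically instead of editing run_localGPT_API.py by hand.
# Assumes torch is installed and load_model is importable from run_localGPT.
import torch

from run_localGPT import load_model

DEVICE_TYPE = "cuda" if torch.cuda.is_available() else "cpu"

if DEVICE_TYPE == "cuda":
    # GPTQ quantized model for GPU, matching the defaults from step 2.
    model_id = "TheBloke/WizardLM-7B-uncensored-GPTQ"
    model_basename = "WizardLM-7B-uncensored-GPTQ-4bit-128g.compat.no-act-order.safetensors"
    LLM = load_model(device_type=DEVICE_TYPE, model_id=model_id, model_basename=model_basename)
else:
    # Full-precision HF model for CPU.
    model_id = "TheBloke/guanaco-7B-HF"
    LLM = load_model(device_type=DEVICE_TYPE, model_id=model_id)
```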

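The web UI is a thin client over the API server. Once step 6 completes, you can also post a prompt straight to the API, as the UI does internally. The endpoint and form field below are taken from `localGPTUI/localGPTUI.py` further down; the exact shape of the JSON reply is an assumption based on that file.

```python
# Sketch: query the localGPT API directly, bypassing the web UI.
# Endpoint and form field match what localGPTUI.py sends.
import requests

response = requests.post(
    "http://localhost:5110/api/prompt_route",
    data={"user_prompt": "Summarize the ingested documents."},
)
response.raise_for_status()
print(response.json())  # expected to contain prompt/answer/sources fields
```
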
# How does it work?

By selecting the right local models and leveraging the power of `LangChain`, you can run the entire pipeline locally, without any data leaving your environment, and with reasonable performance.
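
Concretely, ingestion embeds your documents into a local vector store, and each question retrieves the most relevant chunks to use as context for the local LLM. The following is a rough sketch of that retrieval pipeline, not the project's exact code: the persist directory, the embedding model, and the LangChain calls (as of this commit's era, mid-2023) are all assumptions.

```python
# Hedged sketch of the local retrieval-QA pipeline, not the exact project code.
# Assumes documents were already ingested into a Chroma index under "DB/"
# and langchain ~0.0.x APIs as of mid-2023.
from langchain.chains import RetrievalQA
from langchain.embeddings import HuggingFaceInstructEmbeddings
from langchain.vectorstores import Chroma

from run_localGPT import load_model  # assumed importable from the project root

embeddings = HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-large")
db = Chroma(persist_directory="DB", embedding_function=embeddings)
retriever = db.as_retriever()

llm = load_model(device_type="cpu", model_id="TheBloke/guanaco-7B-HF")
qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever)

result = qa("What is this document about?")
print(result["result"])  # the generated answer
```
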
51 changes: 51 additions & 0 deletions localGPTUI/localGPTUI.py
@@ -0,0 +1,51 @@
import os
import sys

import requests
from flask import Flask, render_template, request
from werkzeug.utils import secure_filename

# Make the project root importable so shared modules can be reused.
sys.path.append(os.path.join(os.path.dirname(__file__), '..'))

app = Flask(__name__)
app.secret_key = "LeafmanZSecretKey"


### PAGES ###
@app.route('/', methods=['GET', 'POST'])
def home_page():
    if request.method == 'POST':
        if 'user_prompt' in request.form:
            user_prompt = request.form['user_prompt']
            print(f'User Prompt: {user_prompt}')

            main_prompt_url = 'http://localhost:5110/api/prompt_route'
            response = requests.post(main_prompt_url, data={'user_prompt': user_prompt})
            print(response.status_code)  # print HTTP response status code for debugging
            if response.status_code == 200:
                # print(response.json())  # Print the JSON data from the response
                return render_template('home.html', show_response_modal=True, response_dict=response.json())
        elif 'documents' in request.files:
            delete_source_url = 'http://localhost:5110/api/delete_source'  # URL of the /api/delete_source endpoint
            if request.form.get('action') == 'reset':
                response = requests.get(delete_source_url)

            save_document_url = 'http://localhost:5110/api/save_document'
            run_ingest_url = 'http://localhost:5110/api/run_ingest'  # URL of the /api/run_ingest endpoint
            files = request.files.getlist('documents')
            for file in files:
                print(file.filename)
                filename = secure_filename(file.filename)
                os.makedirs('temp', exist_ok=True)  # ensure the temp upload directory exists
                file_path = os.path.join('temp', filename)  # replace with your preferred path
                file.save(file_path)
                with open(file_path, 'rb') as f:
                    response = requests.post(save_document_url, files={'document': f})
                    print(response.status_code)  # print HTTP response status code for debugging
                os.remove(file_path)  # remove the file after sending the request
            # Make a GET request to the /api/run_ingest endpoint
            response = requests.get(run_ingest_url)
            print(response.status_code)  # print HTTP response status code for debugging

    # Display the form for a GET request (or a POST that did not return above)
    return render_template('home.html', show_response_modal=False, response_dict={'Prompt': 'None', 'Answer': 'None', 'Sources': [('ewf', 'wef')]})


if __name__ == '__main__':
    app.run(debug=False, port=5111)
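
For completeness, the upload-and-ingest flow in `home_page()` can also be driven without the browser. Below is a short sketch against the same endpoints; `my_doc.pdf` is an illustrative path, not a file shipped with the repository.

```python
# Sketch: replicate the UI's document upload flow directly against the API.
# Endpoints match those used in home_page() above; "my_doc.pdf" is illustrative.
import requests

with open("my_doc.pdf", "rb") as f:
    r = requests.post("http://localhost:5110/api/save_document", files={"document": f})
    r.raise_for_status()

# Trigger ingestion of everything uploaded so far, as the UI does after uploads.
r = requests.get("http://localhost:5110/api/run_ingest")
r.raise_for_status()
```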