forked from xtekky/gpt4free
Commit: Merging PR_218 openai_rev package with new streamlit chat app
Showing 8,378 changed files with 2,931,636 additions and 3 deletions. (The diff is too large to view in full; only the first 3,000 changed files are loaded.)
@@ -0,0 +1,3 @@
[email protected]:eyJhbGciOiJSUzI1NiIsImtpZCI6Imluc18yTzZ3UTFYd3dxVFdXUWUyQ1VYZHZ2bnNaY2UiLCJ0eXAiOiJKV1QifQ.eyJhenAiOiJodHRwczovL2FjY291bnRzLmZvcmVmcm9udC5haSIsImV4cCI6MTY4MjYzNTYyMSwiaWF0IjoxNjgyNjM1NTYxLCJpc3MiOiJodHRwczovL2NsZXJrLmZvcmVmcm9udC5haSIsIm5iZiI6MTY4MjYzNTU1MSwic2lkIjoic2Vzc18yUDFyakFuZm0wRGU2b04ydUNZcWU4VmZueGYiLCJzdWIiOiJ1c2VyXzJQMXJqRU5sT2RPQ0N5WktwaVREZjZTenR5SiJ9.MVImDU8Z2rc2Pk154BBUJZnviqw6rwd1_etJc5BV6D4TYGQkFfbGJPUmtMz2wUKZhXwmIJl-TA5S7v_dweebFKq0UQYyfPFML_5_z76Qj1njfq5E4cQgIx2YNixabyVlQNU02WOurn0EuRIRsrd3w-KKlohj05Hdi_Dn6FsHu5iJEfHB9LK9euKXGCzIWLSZJzcExp-n51WNpmPYnAkKPCQSKJxTvppJ7WP2NC1AQRjDUQ0u-ZHSMC-p9ySpQpBCpTG5rzTfPxd9j0T21FW03rYu3s_lFkLVh9p5kLadZ1DBJuXcIv3HPLsQjm_cuXi9iJFGCLs0paURxJofm-5xdg
[email protected]:eyJhbGciOiJSUzI1NiIsImtpZCI6Imluc18yTzZ3UTFYd3dxVFdXUWUyQ1VYZHZ2bnNaY2UiLCJ0eXAiOiJKV1QifQ.eyJhenAiOiJodHRwczovL2FjY291bnRzLmZvcmVmcm9udC5haSIsImV4cCI6MTY4MjYzNTg1OSwiaWF0IjoxNjgyNjM1Nzk5LCJpc3MiOiJodHRwczovL2NsZXJrLmZvcmVmcm9udC5haSIsIm5iZiI6MTY4MjYzNTc4OSwic2lkIjoic2Vzc18yUDFzREFjeTAzU2U3VTNGMXg4c0RXcDBkMU8iLCJzdWIiOiJ1c2VyXzJQMXNEOU5aQlpxYU45eHAyekNSaFhpUmpyQSJ9.QK1OxFJU2CPyXrhLaCTA18chdYmaaCStYk1h4P9BYoYlQbW-FoWT7cHeN0frSGdA75Feo6sH47zFQzVcTTfQcQfcaVBYf_lXtnn01jhip581njgRBLwpoDwHMOJkynF5dedyuW5a7zDPQ_UvCKInTzyeKSTjewYNZTBOuZ6S07TwAO57_mwgrb52RuIqZhnZOVkzWEsuMYgygRM5PUnsJGh9nRWgITO8-VKU3E42MNEAnB5OYnjIcvN_v0-urGHVJyJHO6r5bSVhnZDYmuGK08hIqnpNzFQdQHl1J8dyPwEC5u83Q71xU3dNwRduMAjlw-cuFC0076gTi_TyWaz0gA
[email protected]:eyJhbGciOiJSUzI1NiIsImtpZCI6Imluc18yTzZ3UTFYd3dxVFdXUWUyQ1VYZHZ2bnNaY2UiLCJ0eXAiOiJKV1QifQ.eyJhenAiOiJodHRwczovL2FjY291bnRzLmZvcmVmcm9udC5haSIsImV4cCI6MTY4MjY0MDkzNywiaWF0IjoxNjgyNjQwODc3LCJpc3MiOiJodHRwczovL2NsZXJrLmZvcmVmcm9udC5haSIsIm5iZiI6MTY4MjY0MDg2Nywic2lkIjoic2Vzc18yUDIyVkhTVjFMd0dDU2FTNkZpMXhPNHNRQmMiLCJzdWIiOiJ1c2VyXzJQMjJWRjFYV3I4em9LVHh3V2J2cTROTWUwVSJ9.RDdjina2RyJzyoVzBTGqGaW4hdHoS9_HM40mOOw3DScPqqFTcGkY_JVpo_g-_vE0Zuobc5HT6A2d0kL-iEPi0m6UlNJELrN1LsmKfgiKHb-4IlVcKcd_al1Exho2qYi9Sn7iPc9cFJfBfQRFAJdMIZGtGVgAkflLSGQDIYbLIAH4gv9FCU3gqAJJaXAtdewUo232qNyjoNHLRaUIP_d8qrWm7eTs8woI-fWOEx4CoA_lsytm20dC3FcftiRrBwUzneMPV4SiNgyiH2ithIn12cWOm7aIeveGmbQlRiXYu9Hmb4pTpLQvoBY8U2ZVbJDKX9YcfUqlM5rps8Zpkeo3RQ
@@ -1,11 +1,72 @@
# gpt4free gui

This code provides a Graphical User Interface (GUI) for gpt4free. Users can ask questions and get answers from GPT-4 APIs, utilizing multiple API implementations. The project contains two different Streamlit applications: `streamlit_app.py` and `streamlit_chat_app.py`.

Installation
------------

1. Clone the repository.
2. Install the required dependencies with `pip install -r requirements.txt`.
3. To use `streamlit_chat_app.py`, note that it depends on a pull request (PR #24) from the https://github.com/AI-Yash/st-chat/ repository, which may change in the future. The current dependency archive can be found at https://github.com/AI-Yash/st-chat/archive/refs/pull/24/head.zip (a pinning example follows this list).
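
For reference, a minimal `requirements.txt` that pins that PR archive directly might look like the following (a sketch; the project's actual requirements file may differ and list more packages):

```text
streamlit
streamlit-chat @ https://github.com/AI-Yash/st-chat/archive/refs/pull/24/head.zip
```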

Usage
-----

Choose one of the Streamlit applications to run:

### streamlit_app.py

This application provides a simple interface for asking GPT-4 questions and receiving answers.

To run the application:

```bash
streamlit run gui/streamlit_app.py
```

<br>

<img width="724" alt="image" src="https://user-images.githubusercontent.com/98614666/234232449-0d5cd092-a29d-4759-8197-e00ba712cb1a.png">

<br>
<br>

preview:

<img width="1125" alt="image" src="https://user-images.githubusercontent.com/98614666/234232398-09e9d3c5-08e6-4b8a-b4f2-0666e9790c7d.png">

### streamlit_chat_app.py

This application provides a chat-like interface for asking GPT-4 questions and receiving answers. It supports multiple query methods, and users can select the desired API for their queries. The application also maintains a conversation history.

To run the application:

```bash
streamlit run streamlit_chat_app.py
```
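
Under the hood, the selected API name is dispatched to a provider-specific query function (see `query_methods.py` later in this diff). Here is a minimal, self-contained sketch of the dispatch idea, with hypothetical stand-in providers in place of the real ones:

```python
import random

# Hypothetical stand-ins for the real provider functions in query_methods.py
avail_query_methods = {
    "You": lambda q: f"(you.com answer to: {q})",
    "Theb": lambda q: f"(theb.ai answer to: {q})",
}

def query(user_input: str, selected_method: str = "Random") -> str:
    # Use the chosen provider if it is known; otherwise pick one at random
    if selected_method != "Random" and selected_method in avail_query_methods:
        return avail_query_methods[selected_method](user_input)
    return random.choice(list(avail_query_methods.values()))(user_input)

print(query("hello", selected_method="You"))
```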

<br>

<img width="724" alt="image" src="image1.png">

<br>
<br>

preview:

<img width="1125" alt="image" src="image2.png">

Contributing
------------

Feel free to submit pull requests, report bugs, or request new features by opening issues on the GitHub repository.

Bug
----

There is a bug in `streamlit_chat_app.py` that I haven't pinpointed yet; it is probably something simple, but I haven't had the time to look for it. Whenever you open a new conversation or access an old conversation, prompt-answering only starts after the second time you type into the text input. Other than that, everything else seems to work as expected.
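
A plausible cause (an unverified guess, not a confirmed diagnosis): the text input's widget `key` is derived from the current conversation's length, so switching conversations re-creates the widget and the first submission is spent re-initializing it. One possible direction would be to key the input off the `input_field_key` counter that already exists in session state, sketched below:

```python
import streamlit as st

if 'input_field_key' not in st.session_state:
    st.session_state['input_field_key'] = 0

# Hypothetical fix sketch: tie the widget key to an explicit counter that is
# bumped only when a fresh input field is wanted, instead of deriving the key
# from the conversation length.
user_input = st.text_input('You:', key=f"input_text_{st.session_state['input_field_key']}")
```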

License
-------

This project is licensed under the MIT License.
@@ -0,0 +1,103 @@
import openai_rev
from openai_rev import forefront, quora, theb, you
import random

def query_forefront(question: str) -> str:
    # create an account
    token = forefront.Account.create(logging=True)

    # get a response
    try:
        result = forefront.StreamingCompletion.create(token=token, prompt=question, model='gpt-4')

        return result['response']

    except Exception as e:
        # Return error message if an exception occurs
        return f'An error occurred: {e}. Please make sure you are using a valid cloudflare clearance token and user agent.'
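

# Note: query_forefront and query_quora each create a fresh throwaway account
# on every call, so each request pays the account-creation cost before querying.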


def query_quora(question: str) -> str:
    token = quora.Account.create(logging=False, enable_bot_creation=True)
    response = quora.Completion.create(
        model='gpt-4',
        prompt=question,
        token=token
    )

    return response.completion.choices[0].text


def query_theb(question: str) -> str:
    # Set cloudflare clearance cookie and get answer from GPT-4 model
    try:
        result = theb.Completion.create(prompt=question)

        return result['response']

    except Exception as e:
        # Return error message if an exception occurs
        return f'An error occurred: {e}. Please make sure you are using a valid cloudflare clearance token and user agent.'


def query_you(question: str) -> str:
    # Set cloudflare clearance cookie and get answer from GPT-4 model
    try:
        result = you.Completion.create(prompt=question)

        return result.text

    except Exception as e:
        # Return error message if an exception occurs
        return f'An error occurred: {e}. Please make sure you are using a valid cloudflare clearance token and user agent.'


# Define a dictionary containing all query methods
avail_query_methods = {
    "Forefront": query_forefront,
    "Poe": query_quora,
    "Theb": query_theb,
    "You": query_you,
    # "Writesonic": query_writesonic,
    # "T3nsor": query_t3nsor,
    # "Phind": query_phind,
    # "Ora": query_ora,
}


def query(user_input: str, selected_method: str = "Random") -> str:

    # If a specific query method is selected (not "Random") and the method is in the dictionary, try to call it
    if selected_method != "Random" and selected_method in avail_query_methods:
        try:
            return avail_query_methods[selected_method](user_input)
        except Exception as e:
            print(f"Error with {selected_method}: {e}")
            return "😵 Sorry, some error occurred please try again."

    # Initialize variables for determining success and storing the result
    success = False
    result = "😵 Sorry, some error occurred please try again."
    # Create a list of available query methods
    query_methods_list = list(avail_query_methods.values())

    # Continue trying different methods until a successful result is obtained or all methods have been tried
    while not success and query_methods_list:
        # Choose a random method from the list
        chosen_query = random.choice(query_methods_list)
        # Find the name of the chosen method
        chosen_query_name = [k for k, v in avail_query_methods.items() if v == chosen_query][0]
        try:
            # Try to call the chosen method with the user input
            result = chosen_query(user_input)
            success = True
        except Exception as e:
            print(f"Error with {chosen_query_name}: {e}")
            # Remove the failed method from the list of available methods
            query_methods_list.remove(chosen_query)

    return result


__all__ = ['query', 'avail_query_methods']
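
# Example usage (a sketch; assumes the providers above can create accounts and
# answer at runtime, which depends on the external services):
#
#   from query_methods import query, avail_query_methods
#
#   print(list(avail_query_methods.keys()))     # ['Forefront', 'Poe', 'Theb', 'You']
#   print(query("What is Streamlit?"))          # random provider with fallback
#   print(query("What is Streamlit?", "You"))   # pin a specific provider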
@@ -0,0 +1,96 @@
import os
import sys

sys.path.append(os.path.join(os.path.dirname(__file__), os.path.pardir))

import streamlit as st
from streamlit_chat import message
from query_methods import query, avail_query_methods
import pickle
import openai_rev

conversations_file = "conversations.pkl"

def load_conversations():
    try:
        with open(conversations_file, "rb") as f:
            return pickle.load(f)
    except FileNotFoundError:
        return []


def save_conversations(conversations, current_conversation):
    updated = False
    for i, conversation in enumerate(conversations):
        if conversation == current_conversation:
            conversations[i] = current_conversation
            updated = True
            break
    if not updated:
        conversations.append(current_conversation)
    with open(conversations_file, "wb") as f:
        pickle.dump(conversations, f)
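
# Each conversation is a dict of two parallel lists, e.g.
#   {'user_inputs': ['hi'], 'generated_responses': ['hello!']}
# save_conversations() updates the matching entry in place (or appends a new
# one) and re-pickles the entire list on every call.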

st.header("Chat Placeholder")

if 'conversations' not in st.session_state:
    st.session_state['conversations'] = load_conversations()

if 'input_text' not in st.session_state:
    st.session_state['input_text'] = ''

if 'selected_conversation' not in st.session_state:
    st.session_state['selected_conversation'] = None

if 'input_field_key' not in st.session_state:
    st.session_state['input_field_key'] = 0

if 'query_method' not in st.session_state:
    # Default to "Random" (the fallback mode understood by query());
    # the sidebar selectbox below overwrites this on each rerun.
    st.session_state['query_method'] = "Random"

# Initialize new conversation
if 'current_conversation' not in st.session_state or st.session_state['current_conversation'] is None:
    st.session_state['current_conversation'] = {'user_inputs': [], 'generated_responses': []}

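# Note: Streamlit re-runs this script top to bottom on every interaction;
# the st.session_state entries above are what persists between reruns.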

input_placeholder = st.empty()
user_input = input_placeholder.text_input(
    'You:', key=f'input_text_{len(st.session_state["current_conversation"]["user_inputs"])}'
)
submit_button = st.button("Submit")

if user_input or submit_button:
    output = query(user_input, st.session_state['query_method'])

    st.session_state.current_conversation['user_inputs'].append(user_input)
    st.session_state.current_conversation['generated_responses'].append(output)
    save_conversations(st.session_state.conversations, st.session_state.current_conversation)
    user_input = input_placeholder.text_input(
        'You:', value='', key=f'input_text_{len(st.session_state["current_conversation"]["user_inputs"])}'
    )  # Clear the input field
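
# The text input above is re-created with a new length-derived key after each
# submission so that the visible field is cleared on the next rerun
# (Streamlit widgets keep their state per key).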

# Add a button to create a new conversation
if st.sidebar.button("New Conversation"):
    st.session_state['selected_conversation'] = None
    st.session_state['current_conversation'] = {'user_inputs': [], 'generated_responses': []}
    st.session_state['input_field_key'] += 1

st.session_state['query_method'] = st.sidebar.selectbox(
    "Select API:",
    options=openai_rev.Provider.__members__.keys(),
    index=0
)
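
# Because the script re-runs on every interaction, the selectbox above executes
# each time, so st.session_state['query_method'] already holds the current
# sidebar choice by the time a prompt is submitted.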

# Sidebar
st.sidebar.header("Conversation History")

for i, conversation in enumerate(st.session_state.conversations):
    if st.sidebar.button(f"Conversation {i + 1}: {conversation['user_inputs'][0]}", key=f"sidebar_btn_{i}"):
        st.session_state['selected_conversation'] = i
        st.session_state['current_conversation'] = st.session_state.conversations[i]

if st.session_state['selected_conversation'] is not None:
    conversation_to_display = st.session_state.conversations[st.session_state['selected_conversation']]
else:
    conversation_to_display = st.session_state.current_conversation

if conversation_to_display['generated_responses']:
    for i in range(len(conversation_to_display['generated_responses']) - 1, -1, -1):
        message(conversation_to_display["generated_responses"][i], key=f"display_generated_{i}")
        message(conversation_to_display['user_inputs'][i], is_user=True, key=f"display_user_{i}")
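
# To run this app (per the README): streamlit run streamlit_chat_app.py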