This project implements a chatbot built on a customized CodeLlama 7B model, quantized to 4-bit precision, served through the Ollama API, and presented with a Gradio interface. The model is customized to act as a Python code teaching assistant.
- Clone this repository:

  ```bash
  git clone https://github.com/your-username/custom-gemma-chatbot.git
  cd custom-gemma-chatbot
  ```
- Install the required packages:

  ```bash
  pip install -r requirements.txt
  ```
- Set up the environment variable:
  - Create a `.env` file in the project root
  - Add the following line:

    ```
    API_URL=http://localhost:11434/api/generate
    ```
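As a rough sketch of how the app might talk to the endpoint configured above, the snippet below builds a request body for Ollama's `/api/generate` API and posts it with the standard library. The function names and the non-streaming `stream: False` choice are illustrative, not the project's actual code; the app itself may load `.env` with `python-dotenv` instead of relying on the shell environment.

```python
import json
import os
import urllib.request

# API_URL comes from the environment; falls back to the value from .env above.
API_URL = os.getenv("API_URL", "http://localhost:11434/api/generate")


def build_payload(prompt, model="codellama:7b-code-q4_0"):
    """Request body for Ollama's /api/generate endpoint (non-streaming)."""
    return {"model": model, "prompt": prompt, "stream": False}


def generate(prompt):
    """Send a prompt to the Ollama server and return the generated text."""
    data = json.dumps(build_payload(prompt)).encode("utf-8")
    req = urllib.request.Request(
        API_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        # Ollama returns the completion under the "response" key.
        return json.loads(resp.read())["response"]
```

Calling `generate("Explain list comprehensions.")` requires a running Ollama server at `API_URL`; a Gradio UI can wrap `generate` as its callback.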
The custom model is configured using a Modelfile with the following settings:
- Base model: `codellama:7b-code-q4_0`
- Temperature: 1
- System prompt: configured to act as a code assistant created by Khalil
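A Modelfile matching the settings above might look like the sketch below. The exact `SYSTEM` wording and the model name passed to `ollama create` are placeholders, since the repository's actual Modelfile text is not shown here.

```
# Modelfile sketch — base model, temperature, and an illustrative system prompt
FROM codellama:7b-code-q4_0
PARAMETER temperature 1
SYSTEM You are a Python code teaching assistant created by Khalil.
```

The custom model can then be built with `ollama create pycode-assistant -f ./Modelfile`, where `pycode-assistant` is whatever name the app expects in its API requests.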