This repository contains a reference implementation of a lightweight and friendly AI assistant, designed to run locally without requiring a GPU or cloud services.
While originally created as a secretary AI, it can be adapted for various other use cases, such as customer support, knowledge retrieval, or general AI-powered interactions.
Additionally, it supports deployment behind a local Apache server, allowing the AI to be accessible over the internet if needed.
Live Demo: Access the chatbot, hosted on an Apache server running on a MacBook, here: https://home.tago.so/ai/
```bash
git clone <repository_url>
cd <repository_folder>
```
- Create a `.env` file based on `.env.example`:

  ```bash
  cp .env.example .env
  ```
- Update the `.env` file with the necessary values:
  - `ALLOWED_ORIGIN`
  - `GENERATE_API_URL`
  - `MODEL_NAME`
  - `PORT`
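For example, a local setup's `.env` might look like the following sketch. The values are illustrative only; `GENERATE_API_URL` here assumes Ollama's default generate endpoint at `http://localhost:11434/api/generate`, and the port and origin should match your own environment.

```
# Illustrative values only -- adjust for your environment
ALLOWED_ORIGIN=http://localhost:3000
GENERATE_API_URL=http://localhost:11434/api/generate
MODEL_NAME=mistral
PORT=3000
```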
```bash
npm install
node server.js
```
- Ensure Ollama is installed and running locally.
- Load the required model:

  ```bash
  ollama create <model_name> -f <path_to_Modelfile>
  ```
- This is the fun part -- customize the Modelfile to suit your preferences. Edit the file to define behavior, rules, or responses specific to your needs before creating the model. You can also adjust the temperature and other parameters.
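A minimal Modelfile along these lines might look like the sketch below. The base model, parameter value, and system prompt are illustrative placeholders to adapt to your own use case.

```
FROM mistral

# Lower temperature for more focused, less creative answers
PARAMETER temperature 0.5

SYSTEM """
You are a friendly, concise AI secretary. Keep answers short, clear, and helpful.
"""
```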
- Install and configure Apache to serve the frontend (`ollama-frontend`) at your desired domain or subdomain.
- For basic setup:
- Ensure you have Apache installed on your system.
- Edit your `httpd.conf` file to include a `<VirtualHost>` block pointing to the frontend directory.
- (Optional) Set up SSL/TLS for secure communication using Let's Encrypt or similar tools.
- Use mod_ssl for enabling HTTPS.
- Generate certificates with a tool like certbot.
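A minimal `httpd.conf` virtual host for serving the frontend might look like the following sketch; the domain name and directory paths are placeholders for your own setup.

```apache
<VirtualHost *:80>
    ServerName ai.example.com
    DocumentRoot "/var/www/ollama-frontend"

    <Directory "/var/www/ollama-frontend">
        Require all granted
    </Directory>
</VirtualHost>
```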
Navigate to `http://localhost:<PORT>`, replacing `<PORT>` with the value specified in your `.env` file.
Use the chat interface to interact with your AI secretary. Sample questions are available to help you get started.
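Behind the chat UI, the backend forwards queries to the endpoint configured in `GENERATE_API_URL`. If that points at Ollama's `/api/generate` API, a request body looks roughly like the following (model name and prompt are illustrative):

```json
{
  "model": "mistral",
  "prompt": "What should I prioritize on my to-do list today?",
  "stream": false
}
```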
- Lightweight AI Assistant: Powered by the Mistral 7B language model.
- Secure Backend API: Ensures safe handling of user queries.
- Interactive Web Interface: Provides real-time responses through a sleek chat UI.
A friendly AI secretary designed to keep your life organized. This humble yet efficient assistant:
- Runs entirely on a macOS laptop (no GPU or cloud required).
- Delivers clear, concise responses with minimal hallucinations.
- Embraces occasional harmless mistakes with charm. 😉
- Built with Node.js and Express.js.
- Security features include:
- Helmet for secure headers.
- Domain-restricted CORS.
- Rate limiting.
- Request logging.
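As an illustration of the rate-limiting idea, here is a minimal fixed-window sketch. This is not the actual `server.js` code; in practice, Express middleware such as `express-rate-limit` handles this, but the underlying logic is roughly:

```javascript
// Minimal fixed-window rate limiter: allow at most `maxRequests`
// per client IP within each `windowMs` window.
function createRateLimiter(maxRequests, windowMs) {
  const hits = new Map(); // ip -> { windowStart, count }
  return function isAllowed(ip, now = Date.now()) {
    const entry = hits.get(ip);
    if (!entry || now - entry.windowStart >= windowMs) {
      // New client or expired window: start a fresh window.
      hits.set(ip, { windowStart: now, count: 1 });
      return true;
    }
    entry.count += 1;
    return entry.count <= maxRequests;
  };
}
```

Requests beyond the limit within a window are rejected until the window resets.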
- Interactive chat interface built with HTML, CSS, and JavaScript.
- Real-time query and response functionality.
- Uses a custom Modelfile for behavior and rules configuration.
- I have confirmed that it runs smoothly on Mistral 7B, but you are free to use other models. In that case, you will need to adjust the Modelfile accordingly.
- Local macOS Environment:
- Runs seamlessly on a macOS laptop without requiring a GPU or cloud resources.
- Self-Contained Hosting:
- Backend and frontend are designed for standalone deployment.
- Scalable:
- Optimized for potential production extension.
This project is licensed under the Apache License 2.0. See the `LICENSE` file for details.