This repository contains a reference implementation of the Secretary AI project, a lightweight and friendly AI assistant.
Live Demo: Access the chatbot, hosted on an Apache server running on a MacBook, here: https://home.tago.so/ai/
```bash
git clone <repository_url>
cd <repository_folder>
```
- Create a `.env` file based on `.env.example`:

```bash
cp .env.example .env
```
- Update the `.env` file with the necessary values:
  - `ALLOWED_ORIGIN`
  - `GENERATE_API_URL`
  - `MODEL_NAME`
  - `PORT`
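As a concrete illustration, a filled-in `.env` might look like the following. All values here are assumptions to adapt to your setup (the Ollama endpoint shown is its default `/api/generate` address):

```bash
# Hypothetical example values — adjust for your setup.
ALLOWED_ORIGIN=https://home.tago.so                    # domain allowed by CORS
GENERATE_API_URL=http://localhost:11434/api/generate   # Ollama's generate endpoint (default port)
MODEL_NAME=secretary                                   # name given to the model in `ollama create`
PORT=3000                                              # port the Node.js backend listens on
```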
```bash
npm install
node server.js
```
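The backend's core job is forwarding chat prompts to the Ollama endpoint named in `GENERATE_API_URL`. The helper below is a hypothetical sketch of how that request could be assembled — the function name and return shape are assumptions, not taken from `server.js`, though `model`, `prompt`, and `stream` are the fields Ollama's `/api/generate` endpoint accepts:

```javascript
// Hypothetical sketch: assemble the request server.js might send to Ollama.
// GENERATE_API_URL and MODEL_NAME come from the .env file.
function buildGenerateRequest(env, prompt) {
  return {
    url: env.GENERATE_API_URL,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model: env.MODEL_NAME, // the model created with `ollama create`
        prompt,                // the user's chat message
        stream: false,         // ask Ollama for a single JSON response
      }),
    },
  };
}

// Example: build (but do not send) a request.
const req = buildGenerateRequest(
  { GENERATE_API_URL: "http://localhost:11434/api/generate", MODEL_NAME: "secretary" },
  "Summarize my day."
);
console.log(req.url);
```

The assembled object can then be passed to `fetch(req.url, req.options)` or any HTTP client.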
- Ensure Ollama is installed and running locally.
- Load the required model:

```bash
ollama create <model_name> -f <path_to_Modelfile>
```
- Here is the fun part: customize the Modelfile to suit your preferences. Edit the file to define behavior, rules, or responses specific to your needs before creating the model.
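As an illustration, a minimal Modelfile could look like the sketch below. The base model, parameter value, and system prompt are assumptions to adapt, not the project's actual file — only the `FROM` / `PARAMETER` / `SYSTEM` directives themselves are standard Ollama Modelfile syntax:

```
FROM mistral

# Illustrative value: lower temperature for shorter, more grounded replies.
PARAMETER temperature 0.4

SYSTEM """
You are a friendly, concise AI secretary. Keep responses brief,
admit when you are unsure, and avoid making up facts.
"""
```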
- Install and configure Apache to serve the frontend (`ollama-frontend`) at your desired domain or subdomain.
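A minimal Apache virtual host for this setup might look like the sketch below. The domain, filesystem path, API path, and backend port are all assumptions (the port must match `PORT` in your `.env`), and `mod_proxy`/`mod_proxy_http` must be enabled for the `ProxyPass` lines to work:

```apache
<VirtualHost *:80>
    ServerName home.example.com

    # Static frontend files (ollama-frontend)
    DocumentRoot /var/www/ollama-frontend

    # Forward API calls to the Node.js backend (the /api path is an assumption)
    ProxyPass        /api http://localhost:3000/api
    ProxyPassReverse /api http://localhost:3000/api
</VirtualHost>
```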
Navigate to `http://localhost:<PORT>`, replacing `<PORT>` with the value specified in your `.env` file.
Use the chat interface to interact with your AI secretary. Sample questions are available to help you get started.
- Lightweight AI Assistant: Powered by the Mistral 7B language model.
- Secure Backend API: Ensures safe handling of user queries.
- Interactive Web Interface: Provides real-time responses through a sleek chat UI.
A friendly AI secretary designed to keep your life organized. This humble yet efficient assistant:
- Runs entirely on a macOS laptop (no GPU or cloud required).
- Delivers clear, concise responses with minimal hallucinations.
- Embraces occasional harmless mistakes with charm. 😉
- Built with Node.js and Express.js.
- Security features include:
- Helmet for secure headers.
- Domain-restricted CORS.
- Rate limiting.
- Request logging.
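To make the domain-restricted CORS and rate limiting concrete, here is a stdlib-only sketch of the two checks those middlewares perform. The real backend uses the `cors` and `express-rate-limit` packages; the allowed origin, limit, and window values below are illustrative assumptions:

```javascript
// Hypothetical sketch of the backend's two request checks (stdlib only).

const ALLOWED_ORIGIN = "https://home.tago.so"; // assumed value of ALLOWED_ORIGIN

// CORS check: only requests from the configured origin are allowed.
function isOriginAllowed(origin) {
  return origin === ALLOWED_ORIGIN;
}

// Fixed-window rate limiter: at most `limit` requests per window per client IP.
const hits = new Map();
function rateLimitOk(ip, limit = 5, windowMs = 60_000, now = Date.now()) {
  const entry = hits.get(ip);
  if (!entry || now - entry.start >= windowMs) {
    hits.set(ip, { start: now, count: 1 }); // start a fresh window
    return true;
  }
  entry.count += 1;
  return entry.count <= limit;
}
```

In the real stack these checks run as Express middleware before the request ever reaches the chat handler.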
- Interactive chat interface built with HTML, CSS, and JavaScript.
- Real-time query and response functionality.
- Powered by the Mistral 7B language model.
- Uses a custom Modelfile for behavior and rules configuration.
- Local macOS Environment:
- Runs seamlessly on a macOS laptop without requiring a GPU or cloud resources.
- Self-Contained Hosting:
- Backend and frontend are designed for standalone deployment.
- Scalable:
- Designed so it can be extended toward a production deployment.
This project is licensed under the Apache License 2.0. See the `LICENSE` file for details.