Alpaca-Turbo is a user-friendly web UI for the alpaca.cpp language model (based on LLaMA) that can be run locally with minimal setup. The goal is to provide a seamless chat experience that is easy to configure and use, without sacrificing speed or functionality, with features that set it apart from other implementations.
Note: the Docker container currently works on Linux but not on Windows.
Docker must be installed on your system.
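Before following the steps below, it may be worth confirming that Docker and Docker Compose are on your PATH. A quick sketch, assuming a POSIX shell:

```shell
# Verify the Docker prerequisites before attempting the setup.
for tool in docker docker-compose; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: found"
    else
        echo "$tool: missing -- install it before continuing"
    fi
done
```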
- Download the latest alpaca-turbo.zip from the release page.
- Extract the contents of the zip file into a directory named alpaca-turbo.
- Copy your alpaca models to alpaca-turbo/models/ directory.
- Run the following command from the alpaca-turbo directory to set everything up:
docker-compose up
- Visit http://localhost:7887 to use the chat interface.
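The Docker steps above can be sketched as a shell session (the model filename in the comment is a placeholder, not a file shipped with the release):

```shell
# Create the expected directory layout and stage the model files.
mkdir -p alpaca-turbo/models
# cp /path/to/your/alpaca-model.bin alpaca-turbo/models/   # placeholder path
ls alpaca-turbo/models    # your model files should be listed here
# Then, from inside alpaca-turbo/, bring up the container:
#   docker-compose up
```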
OR
For Windows users, we have a one-click standalone launcher: Alpaca-Turbo.exe.
- Install Miniconda (see the official Miniconda installation instructions).
- Download the latest alpaca-turbo.zip from the release page.
- Extract alpaca-turbo.zip to an Alpaca-Turbo directory.
Make sure you have enough disk space for the models in the extracted location.
- Copy your alpaca models to the Alpaca-Turbo/models/ directory.
- Open cmd as Administrator and run:
conda init
- Close that window.
- Open a new cmd window in your Alpaca-Turbo directory and run:
conda create -n alpaca_turbo python=3.8 -y
conda activate alpaca_turbo
pip install -r requirements.txt
python api.py
- Visit http://localhost:7887, select your model, click Change, and wait for the model to load.
- You are now ready to interact with the chatbot.
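Once `python api.py` is running, a quick probe can confirm that the UI is reachable on port 7887. This sketch assumes curl is installed and falls through gracefully if the server is not up yet:

```shell
# Probe the Alpaca-Turbo web UI; prints a hint if it is not reachable.
if curl -fsS http://localhost:7887 >/dev/null 2>&1; then
    echo "Alpaca-Turbo UI is up at http://localhost:7887"
else
    echo "UI not reachable yet -- is api.py still loading the model?"
fi
```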
As an open-source project in a rapidly developing field, I am open to contributions, whether in the form of a new feature, improved infrastructure, or better documentation.
For detailed information on how to contribute, see the project's contribution guidelines.
- ggerganov/llama.cpp for their amazing C++ library
- antimatter15/alpaca.cpp for the initial versions of their chat app
- cocktailpeanut/dalai for the inspiration
- Meta AI for the LLaMA models
- Stanford for the Alpaca models