Basaran is an open-source alternative to the OpenAI text completion API. It provides a compatible streaming API for your Hugging Face Transformers-based text generation models.
The open source community will eventually witness the Stable Diffusion moment for large language models (LLMs), and Basaran allows you to replace OpenAI's service with the latest open-source model to power your application without modifying a single line of code.
The key features of Basaran are:
- Stream generation using various decoding strategies.
- Support both decoder-only and encoder-decoder models.
- Detokenizer that handles surrogates and whitespace.
- Multi-GPU support with optional 8-bit quantization.
- Real-time partial progress using server-sent events.
- Compatible with OpenAI API and client libraries.
- Comes with a fancy web-based playground!
Replace `user/repo` with the selected model (e.g. `bigscience/bloomz-560m`) and `X.Y.Z` with the latest version, then run:

```bash
docker run -p 80:80 -e MODEL=user/repo hyperonym/basaran:X.Y.Z
```
And you're good to go! 🚀
- Playground: http://127.0.0.1/
- API: http://127.0.0.1/v1/completions
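Once the container is up, you can also exercise the API from code. The following is a minimal smoke test using the `requests` package (a sketch under the assumptions above, not part of Basaran itself):

```python
import requests

# Ask the locally running Basaran container for a short completion.
response = requests.post(
    "http://127.0.0.1/v1/completions",
    json={"prompt": "once upon a time,", "max_tokens": 16},
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["text"])
```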
Docker images are available on Docker Hub and GitHub Packages.
For GPU acceleration, you also need to install the NVIDIA Driver and NVIDIA Container Runtime. Basaran's image already comes with related libraries such as CUDA and cuDNN, so there is no need to install them manually.
Basaran's image can be used in three ways:
- Run directly: By specifying the `MODEL="user/repo"` environment variable, the corresponding model is downloaded from Hugging Face Hub during the first startup.
- Bundle: Create a new Dockerfile to preload a public model or bundle a private model.
- Bind mount: Mount a model from the local file system into the container and point the `MODEL` environment variable to the corresponding path.
For the above use cases, you can find sample Dockerfiles and docker-compose files in the deployments directory.
Basaran is tested on Python 3.8+ and PyTorch 1.13. You should create a virtual environment with the version of Python you want to use, and activate it before proceeding.
- Clone the repository:

```bash
git clone https://github.com/hyperonym/basaran.git && cd basaran
```

- Install dependencies:

```bash
pip install -r requirements.txt
```
- Replace `user/repo` with the selected model and run Basaran:

```bash
MODEL=user/repo python -m basaran
```

For a complete list of environment variables, see `__init__.py`.
Basaran's HTTP request and response formats are consistent with the OpenAI API.
Taking text completion as an example:
```bash
curl http://127.0.0.1/v1/completions \
    -H 'Content-Type: application/json' \
    -d '{ "prompt": "once upon a time,", "echo": true }'
```
Example response
```json
{
    "id": "cmpl-e08c701b4ba032c09ef080e1",
    "object": "text_completion",
    "created": 1678003509,
    "model": "bigscience/bloomz-560m",
    "choices": [
        {
            "text": "once upon a time, the human being faces a complicated situation and he needs to find a new life.",
            "index": 0,
            "logprobs": null,
            "finish_reason": "length"
        }
    ],
    "usage": {
        "prompt_tokens": 5,
        "completion_tokens": 21,
        "total_tokens": 26
    }
}
```
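Partial progress can be consumed over server-sent events by setting `"stream": true`. The sketch below assumes the `requests` package and assumes Basaran uses the same `data:`-prefixed SSE framing (terminated by `data: [DONE]`) as the OpenAI streaming API:

```python
import json
import requests

# Request a streaming completion; partial results arrive as server-sent events.
with requests.post(
    "http://127.0.0.1/v1/completions",
    json={"prompt": "once upon a time,", "stream": True},
    stream=True,
) as response:
    for line in response.iter_lines():
        if not line:
            continue  # skip the blank lines between SSE messages
        if line.startswith(b"data: "):
            line = line[len(b"data: "):]
        if line == b"[DONE]":
            break
        chunk = json.loads(line)
        print(chunk["choices"][0]["text"], end="", flush=True)
```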
If your application uses client libraries provided by OpenAI, you only need to set the `OPENAI_API_BASE` environment variable to Basaran's corresponding endpoint:

```bash
OPENAI_API_BASE="http://127.0.0.1/v1" python your_app.py
```
The examples directory contains examples of using the Python library.
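With the pre-1.0 `openai` Python package, the switch might look like the following sketch (the API key is a placeholder that only satisfies the client, and is assumed to be ignored by the server; the model name is arbitrary, as explained in the compatibility notes below):

```python
import openai

# Point the official OpenAI client at Basaran instead of api.openai.com.
openai.api_base = "http://127.0.0.1/v1"
openai.api_key = "placeholder"  # required by the client; assumed unused by Basaran

completion = openai.Completion.create(
    model="bigscience/bloomz-560m",  # any name is accepted, see the note below
    prompt="once upon a time,",
    max_tokens=16,
)
print(completion["choices"][0]["text"])
```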
Basaran's API format is consistent with OpenAI's, with compatibility differences mainly in parameter support and response fields. The following sections provide detailed information on the compatibility of each endpoint.
Each Basaran process serves only one model, so the result will only contain this model.
Although Basaran does not support the `model` parameter, the OpenAI client library requires this parameter to be present. Therefore, you can fill in any model name you want.
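For example, listing models through the pre-1.0 `openai` package might look like the sketch below (the returned id depends on the `MODEL` the server was started with; the API key is again only a placeholder for the client):

```python
import openai

openai.api_base = "http://127.0.0.1/v1"
openai.api_key = "placeholder"  # required by the client; assumed unused by Basaran

# Each Basaran process serves a single model, so the list has exactly one entry.
models = openai.Model.list()
print([model["id"] for model in models["data"]])
```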
| Parameter | Basaran | OpenAI | Default Value | Maximum Value |
|---|---|---|---|---|
| `model` | ○ | ● | - | - |
| `prompt` | ● | ● | `""` | `COMPLETION_MAX_PROMPT` |
| `suffix` | ○ | ● | - | - |
| `min_tokens` | ● | ○ | `0` | `COMPLETION_MAX_TOKENS` |
| `max_tokens` | ● | ● | `16` | `COMPLETION_MAX_TOKENS` |
| `temperature` | ● | ● | `1.0` | - |
| `top_p` | ● | ● | `1.0` | - |
| `n` | ● | ● | `1` | `COMPLETION_MAX_N` |
| `stream` | ● | ● | `false` | - |
| `logprobs` | ● | ● | `0` | `COMPLETION_MAX_LOGPROBS` |
| `echo` | ● | ● | `false` | - |
| `stop` | ○ | ● | - | - |
| `presence_penalty` | ○ | ● | - | - |
| `frequency_penalty` | ○ | ● | - | - |
| `best_of` | ○ | ● | - | - |
| `logit_bias` | ○ | ● | - | - |
| `user` | ○ | ● | - | - |
- API
  - Models
    - List models
    - Retrieve model
  - Completions
    - Create completion
  - Chat
    - Create chat completion
- Model
  - Architectures
    - Encoder-decoder
    - Decoder-only
  - Decoding strategies
    - Random sampling with temperature
    - Nucleus sampling (top-p)
    - Contrastive search
See the open issues for a full list of proposed features.
This project is open-source. If you have any ideas or questions, please feel free to reach out by creating an issue!
Contributions are greatly appreciated, please refer to CONTRIBUTING.md for more information.
Basaran is available under the MIT License.
© 2023 Hyperonym