
LitServe: Easily serve AI models Lightning fast ⚡


Flexible, high-throughput serving engine for AI models.
Friendly interface. Enterprise scale.


LitServe is a flexible serving engine for AI models built on FastAPI. Features like batching, streaming, and GPU autoscaling eliminate the need to rebuild a FastAPI server per model.

LitServe is at least 2x faster than plain FastAPI.

✅ (2x)+ faster serving   ✅ Self-host or fully managed  ✅ GPU autoscaling  
✅ Multi-modal            ✅ PyTorch/JAX/TF              ✅ OpenAPI compliant
✅ Batching               ✅ Built on FastAPI            ✅ Streaming        


 

 

Quick start

Install LitServe via pip (other install options):

pip install litserve

Define a server

Here's a toy example with 2 models that highlights the flexibility (explore real examples):

# server.py
import litserve as ls

# STEP 1: DEFINE A MODEL API
class SimpleLitAPI(ls.LitAPI):
    def setup(self, device):
        # setup is called once at startup. Build a compound AI system (1+ models), connect DBs, load data, etc...
        self.model1 = lambda x: x**2
        self.model2 = lambda x: x**3

    def decode_request(self, request):
        # Convert the request payload to model input.
        return request["input"] 

    def predict(self, x):
        # Run inference on the AI system, return the output.
        squared = self.model1(x)
        cubed = self.model2(x)
        output = squared + cubed
        return {"output": output}

    def encode_response(self, output):
        # Convert the model output to a response payload.
        return {"output": output} 

# STEP 2: START THE SERVER
if __name__ == "__main__":
    api = SimpleLitAPI()
    server = ls.LitServer(api, accelerator="auto")
    server.run(port=8000)

Now run the server from the command line:

python server.py

The LitAPI class gives you full control and hackability.
LitServer handles optimizations like batching, auto-GPU scaling, etc...
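
For instance, dynamic batching and multiple workers are configured on LitServer rather than in your API code. Below is a minimal sketch, assuming the max_batch_size, batch_timeout, and workers_per_device keyword arguments described in the LitServe docs; the file name and toy model are hypothetical, so adapt it to your installed version:

# server_batched.py -- hypothetical sketch, not part of the quick start above.
import litserve as ls

class BatchedLitAPI(ls.LitAPI):
    def setup(self, device):
        self.model = lambda x: x ** 2  # toy model, same spirit as the example above

    def decode_request(self, request):
        return request["input"]

    def predict(self, inputs):
        # With max_batch_size > 1, LitServe passes a list of decoded inputs here.
        return [self.model(x) for x in inputs]

    def encode_response(self, output):
        # Called once per item of the batched output.
        return {"output": output}

if __name__ == "__main__":
    server = ls.LitServer(
        BatchedLitAPI(),
        accelerator="auto",
        max_batch_size=8,      # group up to 8 concurrent requests into one predict() call
        batch_timeout=0.05,    # wait at most 50 ms while filling a batch
        workers_per_device=2,  # run 2 inference workers per device
    )
    server.run(port=8000)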

Query the server

Use the automatically generated LitServe client:

python client.py
Or write a custom client:
import requests

response = requests.post(
    "http://127.0.0.1:8000/predict",
    json={"input": 4.0}
)
print(response.json())

 

Featured examples

Use LitServe to deploy any model or AI service: Gen AI, classical ML, embedding servers, LLMs, vision, audio, multi-modal systems, etc...

Toy model: Hello world
LLMs: Llama 3 (8B), LLM proxy server
NLP: Hugging Face, BERT, text embedding API
Multimodal: OpenAI CLIP, MiniCPM, Chameleon 30B
Audio: Whisper, AudioCraft, StableAudio, noise cancellation (DeepFilterNet)
Vision: Stable Diffusion 2, AuraFlow, Flux, image super resolution (Aura SR)
Speech: Text-to-speech (XTTS V2)
Classical ML: Random forest, XGBoost
Miscellaneous: Media conversion API (ffmpeg)

Browse 100s of community-built templates.

 

Features

LitServe supports many advanced, state-of-the-art features.

(2x)+ faster serving than plain FastAPI
Self host on your own machines
Host fully managed on Lightning AI
Serve all models: LLMs, vision, time series, etc...
Auto-GPU scaling
Authentication
Autoscaling
Batching
Streaming (see the sketch below)
Scale to zero (serverless)
All ML frameworks: PyTorch, JAX, TensorFlow, Hugging Face...
OpenAPI compliant
OpenAI compatibility

10+ features...

Note: Our goal is not to jump on every hype train, but instead support features that scale under the most demanding enterprise deployments.
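
For example, streaming responses are produced by yielding from predict and encode_response instead of returning a single value. The sketch below assumes the stream=True flag and generator-based interface described in the LitServe docs; the file name and fake token generator are stand-ins, not a real model:

# streaming_server.py -- hypothetical sketch of token streaming.
import litserve as ls

class StreamingLitAPI(ls.LitAPI):
    def setup(self, device):
        # Stand-in for a generative model: emits one "token" (word) at a time.
        self.fake_llm = lambda prompt: (word for word in prompt.split())

    def decode_request(self, request):
        return request["prompt"]

    def predict(self, prompt):
        # Yield partial outputs instead of returning one final value.
        for token in self.fake_llm(prompt):
            yield token

    def encode_response(self, outputs):
        # Receives the generator from predict; yield one response chunk per token.
        for token in outputs:
            yield {"token": token}

if __name__ == "__main__":
    server = ls.LitServer(StreamingLitAPI(), stream=True)
    server.run(port=8000)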

 

Performance

LitServe is designed for AI workloads. Specialized multi-worker handling delivers a minimum 2x speedup over FastAPI.

Additional features like batching and GPU autoscaling can drive performance well beyond 2x, scaling efficiently to handle more simultaneous requests than FastAPI and TorchServe.

Reproduce the full benchmarks here (higher is better).


These results are for image and text classification ML tasks. The performance relationships hold for other ML tasks (embedding, LLM serving, audio, segmentation, object detection, summarization etc...).

💡 Note on LLM serving: For high-performance LLM serving (like Ollama/vLLM), use LitGPT or build your custom vLLM-like server with LitServe. Optimizations like KV caching, which can be implemented with LitServe, are needed to maximize LLM performance.

 

Hosting options

LitServe can be hosted independently on your own machines or fully managed via Lightning Studios.

Self-hosting is ideal for hackers, students, and DIY developers, while fully managed hosting is ideal for enterprise developers needing easy autoscaling, security, release management, and 99.995% uptime and observability.

 

 

Feature                      Self Managed                   Fully Managed on Studios
Deployment                   ✅ Do-it-yourself deployment   ✅ One-button cloud deploy
Load balancing               ❌                             ✅
Autoscaling                  ❌                             ✅
Scale to zero                ❌                             ✅
Multi-machine inference      ❌                             ✅
Authentication               ❌                             ✅
Own VPC                      ❌                             ✅
AWS, GCP                     ❌                             ✅
Use your own cloud commits   ❌                             ✅

 

Community

LitServe is a community project accepting contributions. Let's build the world's most advanced AI inference engine.

💬 Get help on Discord
📋 License: Apache 2.0
