Add GPUStack to LLM Deployment
linyinli committed Aug 6, 2024
1 parent 2a2ae11 commit ee70270
Showing 1 changed file with 1 addition and 0 deletions: README.md
@@ -286,6 +286,7 @@ If you're interested in the field of LLM, you may find the above list of milesto
- [AI Gateway](https://github.com/Portkey-AI/gateway) - Gateway streamlines requests to 100+ open and closed-source models with a unified API. It is also production-ready with support for caching, fallbacks, retries, timeouts, and load balancing, and can be edge-deployed for minimal latency.
- [talkd.ai dialog](https://github.com/talkdai/dialog) - A simple API for deploying any RAG or LLM you want, with support for adding plugins.
- [Wllama](https://github.com/ngxson/wllama) - WebAssembly binding for llama.cpp, enabling in-browser LLM inference.
- [GPUStack](https://github.com/gpustack/gpustack) - An open-source GPU cluster manager for running LLMs
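
Several of the tools above advertise a unified, OpenAI-compatible API in front of whatever model backend you deploy. As a rough sketch of what talking to such a gateway looks like (the base URL and model name below are placeholders, not taken from any project's docs):

```python
import json
from urllib.request import Request, urlopen

# Hypothetical local gateway exposing an OpenAI-compatible
# /v1/chat/completions route; adjust URL and model to your deployment.
BASE_URL = "http://localhost:8080/v1/chat/completions"

payload = {
    "model": "llama-3-8b-instruct",  # placeholder model name
    "messages": [{"role": "user", "content": "Hello!"}],
}

req = Request(
    BASE_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment against a running gateway:
# with urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the request shape is the same across these gateways, swapping backends usually means changing only the base URL and model name.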

## LLM Applications
- [AdalFlow](https://github.com/SylphAI-Inc/AdalFlow) - AdalFlow: The PyTorch library for LLM applications.
