Tiny configuration for Triton Inference Server
Updated Dec 4, 2024 - Python
Triton backend for https://github.com/OpenNMT/CTranslate2
A complete containerized setup for Triton Inference Server and its Python client, using a realistic pre-trained XGBoost classifier model.
Triton's backend can be difficult for clients to consume directly, whether requests are sent over the REST API or gRPC. For clients that need to customize the request body, this repository offers a sidecar deployed alongside the REST API and Triton client on Kubernetes.
A Node.js client for the Triton Inference Server.
QuickStart for Deploying a Basic Model on the Triton Inference Server
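As a rough illustration of what such a quickstart involves, the sketch below builds a KServe-v2 inference request and POSTs it to Triton's standard HTTP endpoint (`/v2/models/<model>/infer`) using only the standard library. The server URL, model name, and input name are hypothetical placeholders; real deployments would typically use the official `tritonclient` package instead.

```python
import json
import urllib.request


def build_infer_request(input_name, data, datatype="FP32"):
    """Build a KServe-v2 inference request body for Triton's HTTP endpoint."""
    return {
        "inputs": [
            {
                "name": input_name,
                "shape": [1, len(data)],  # one batch row of len(data) features
                "datatype": datatype,
                "data": data,
            }
        ]
    }


def infer(server_url, model_name, payload):
    """POST the payload to Triton's v2 inference endpoint and return the JSON reply."""
    req = urllib.request.Request(
        f"{server_url}/v2/models/{model_name}/infer",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


# Hypothetical input tensor name; depends on the deployed model's config.pbtxt.
payload = build_infer_request("input__0", [0.1, 0.2, 0.3, 0.4])
# infer("http://localhost:8000", "my_model", payload)  # requires a running server
```

The response JSON mirrors the request shape, with an `outputs` list carrying each output tensor's name, shape, datatype, and data.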