Triton Inference Server


LATEST RELEASE: You are currently on the main branch which tracks under-development progress towards the next release. The current release is version 2.22.0 and corresponds to the 22.05 container release on NVIDIA GPU Cloud (NGC).


Triton Inference Server is open source inference serving software that streamlines AI inferencing. Triton enables teams to deploy any AI model from multiple deep learning and machine learning frameworks, including TensorRT, TensorFlow, PyTorch, ONNX, OpenVINO, Python, RAPIDS FIL, and more. Triton supports inference across cloud, data center, edge, and embedded devices on NVIDIA GPUs, x86 and ARM CPUs, or AWS Inferentia. Triton delivers optimized performance for many query types, including real-time, batched, ensemble, and audio/video streaming.


Serve a Model in 3 Easy Steps

# Step 1: Create the example model repository 
git clone -b r22.05 https://github.com/triton-inference-server/server.git

cd server/docs/examples

./fetch_models.sh

# Step 2: Launch triton from the NGC Triton container
docker run --gpus=1 --rm --net=host \
  -v /full/path/to/docs/examples/model_repository:/models \
  nvcr.io/nvidia/tritonserver:22.05-py3 \
  tritonserver --model-repository=/models

# Step 3: In a separate console, launch the image_client example from the NGC Triton SDK container
docker run -it --rm --net=host nvcr.io/nvidia/tritonserver:22.05-py3-sdk

/workspace/install/bin/image_client -m densenet_onnx -c 3 -s INCEPTION /workspace/images/mug.jpg

# Inference should return the following
Image '/workspace/images/mug.jpg':
    15.346230 (504) = COFFEE MUG
    13.224326 (968) = CUP
    10.422965 (505) = COFFEEPOT
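
The image_client example above sends the request from the SDK container; the same request can also be sent from your own Python code with the tritonclient package (pip install tritonclient[http]). The following is a minimal sketch, not the official example: the tensor names "data_0" and "fc6_1" are assumptions based on the example densenet_onnx model and should be verified against its config.pbtxt.

# Minimal HTTP client sketch using the tritonclient Python package.
# Assumes Triton is running on localhost:8000 with the example
# densenet_onnx model loaded; the tensor names "data_0" and "fc6_1"
# are assumptions -- check them against the model's config.pbtxt.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build a dummy 3x224x224 FP32 input (a real client would preprocess an image).
image = np.random.rand(3, 224, 224).astype(np.float32)
infer_input = httpclient.InferInput("data_0", list(image.shape), "FP32")
infer_input.set_data_from_numpy(image)

requested_output = httpclient.InferRequestedOutput("fc6_1")

# Send the inference request and read back the output tensor.
result = client.infer("densenet_onnx", inputs=[infer_input], outputs=[requested_output])
scores = result.as_numpy("fc6_1")
print("top class index:", int(np.argmax(scores)))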

Please read the QuickStart guide for additional information regarding this example. The QuickStart guide also contains an example of how to launch Triton on CPU-only systems.

Examples and Tutorials

Specific end-to-end examples for popular models, such as ResNet, BERT, and DLRM, are located on the NVIDIA Deep Learning Examples page on GitHub. The NVIDIA Developer Zone contains additional documentation, presentations, and examples.

Documentation

Build and Deploy

The recommended way to build and use Triton Inference Server is with Docker images.

Using Triton

Preparing Models for Triton Inference Server

The first step in using Triton to serve your models is to place one or more models into a model repository. Depending on the model type and on which Triton capabilities you want to enable, you may also need to create a model configuration for the model.
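
As a rough illustration, the sketch below creates the directory layout Triton expects for a single ONNX model. The model name, tensor names, and dimensions are placeholders, and for ONNX models Triton can often derive a minimal configuration automatically, so treat this as an example rather than a required template.

# Sketch of a minimal model repository layout, assuming a single ONNX model.
# The model name, tensor names, and dimensions below are placeholders;
# adjust them to match your actual model, and copy your exported model
# file (e.g. model.onnx) into the version directory.
from pathlib import Path

repo = Path("model_repository")
version_dir = repo / "my_model" / "1"          # <model-name>/<version>/
version_dir.mkdir(parents=True, exist_ok=True)

config = """\
name: "my_model"
backend: "onnxruntime"
max_batch_size: 8
input [
  {
    name: "INPUT0"
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]
output [
  {
    name: "OUTPUT0"
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]
"""
(repo / "my_model" / "config.pbtxt").write_text(config)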

Configure and Use Triton Inference Server

Client Support and Examples

A Triton client application sends inference and other requests to Triton. The Python and C++ client libraries provide APIs to simplify this communication.
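
For example, before sending requests a client can check server and model readiness and inspect model metadata. The sketch below uses the tritonclient HTTP API and reuses the densenet_onnx model name from the quickstart above; adjust the URL and model name for your deployment.

# Sketch: check server/model health and inspect model metadata over HTTP,
# assuming Triton is serving on localhost:8000 as in the quickstart above.
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

print("server live: ", client.is_server_live())
print("server ready:", client.is_server_ready())
print("model ready: ", client.is_model_ready("densenet_onnx"))

# Model metadata lists the input and output tensors the server expects.
metadata = client.get_model_metadata("densenet_onnx")
print(metadata)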

Extend Triton

Triton Inference Server's architecture is specifically designed for modularity and flexibility.
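
One illustration of this extensibility is the Python backend, where a model is implemented as a model.py file inside the model repository. The sketch below is a minimal, hypothetical example: the triton_python_backend_utils module is only available inside the Triton container, and the tensor names INPUT0/OUTPUT0 must match the model's config.pbtxt.

# Sketch of a minimal Python-backend model (model.py); runnable only
# inside Triton's Python backend. Tensor names are placeholders for a
# hypothetical config.pbtxt with one FP32 input and one FP32 output.
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def initialize(self, args):
        # args contains the model configuration and paths as JSON strings.
        pass

    def execute(self, requests):
        responses = []
        for request in requests:
            # Read the input tensor, apply a trivial transformation,
            # and return it as the output tensor.
            in_tensor = pb_utils.get_input_tensor_by_name(request, "INPUT0")
            out_array = in_tensor.as_numpy() * 2.0
            out_tensor = pb_utils.Tensor("OUTPUT0", out_array)
            responses.append(pb_utils.InferenceResponse(output_tensors=[out_tensor]))
        return responses

    def finalize(self):
        # Called once when the model is unloaded.
        pass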

Additional Documentation

Contributing

Contributions to Triton Inference Server are more than welcome. To contribute, please review the contribution guidelines. If you have a backend, client, example, or similar contribution that does not modify the core of Triton, you should file a PR in the contrib repo.

Reporting problems, asking questions

We appreciate any feedback, questions, or bug reports regarding this project. When posting issues on GitHub, follow the process outlined in the Stack Overflow document. Ensure posted examples are:

  • minimal – use as little code as possible that still produces the same problem
  • complete – provide all parts needed to reproduce the problem. Check whether you can strip external dependencies and still show the problem. The less time we spend reproducing problems, the more time we have to fix them.
  • verifiable – test the code you're about to provide to make sure it reproduces the problem. Remove any other problems that are not related to your request or question.

For more information

Please refer to the NVIDIA Developer Triton page for more information.
