A tutorial project demonstrating how to use the TensorRT C++ API
- Your model must be exported from ONNX with a dynamic batch size (see the optimization profile sketch below this list).
- The motivation for this project is that the official TensorRT documentation is sparse and hard to follow.
- You will need to provide your own ONNX model, as one is not included.
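Because the network has a dynamic batch dimension, the TensorRT builder needs an optimization profile specifying the minimum, optimum, and maximum batch sizes. Here is a minimal sketch, assuming TensorRT 8.x; the tensor name `input` and the `3x224x224` shape are placeholders for your own model's input:

```cpp
#include <NvInfer.h>

// Registers a batch-size range of 1..16 for a model whose input has shape
// [-1, 3, 224, 224]. Assumes `builder` and `config` were already created
// via the TensorRT builder API; "input" is a placeholder tensor name.
void addDynamicBatchProfile(nvinfer1::IBuilder& builder,
                            nvinfer1::IBuilderConfig& config) {
    using namespace nvinfer1;
    IOptimizationProfile* profile = builder.createOptimizationProfile();
    profile->setDimensions("input", OptProfileSelector::kMIN, Dims4(1, 3, 224, 224));
    profile->setDimensions("input", OptProfileSelector::kOPT, Dims4(8, 3, 224, 224));
    profile->setDimensions("input", OptProfileSelector::kMAX, Dims4(16, 3, 224, 224));
    config.addOptimizationProfile(profile);
}
```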
How to use the TensorRT C++ API for high-performance GPU inference.
A Venice Computer Vision Presentation
This project demonstrates how to use the TensorRT C++ API for high-performance GPU inference. It covers how to do the following:
- Convert an ONNX model into a serialized TensorRT engine
- Run GPU inference with the TensorRT C++ API
- Handle models with a dynamic batch size
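To give a feel for the workflow, here is a condensed sketch of building an engine from an ONNX model, assuming TensorRT 8.x; the file names `model.onnx` and `model.engine` are placeholders, and error handling is kept minimal:

```cpp
#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <fstream>
#include <iostream>
#include <memory>

// TensorRT requires an ILogger implementation; this one prints warnings and errors.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
    }
};

int main() {
    using namespace nvinfer1;
    Logger logger;

    // Create the builder and an explicit-batch network (required for ONNX models).
    auto builder = std::unique_ptr<IBuilder>(createInferBuilder(logger));
    auto network = std::unique_ptr<INetworkDefinition>(builder->createNetworkV2(
        1U << static_cast<uint32_t>(NetworkDefinitionCreationFlag::kEXPLICIT_BATCH)));

    // Parse the ONNX model into the network definition.
    auto parser = std::unique_ptr<nvonnxparser::IParser>(
        nvonnxparser::createParser(*network, logger));
    if (!parser->parseFromFile("model.onnx",
            static_cast<int>(ILogger::Severity::kWARNING))) {
        std::cerr << "Failed to parse ONNX model" << std::endl;
        return 1;
    }

    // Build a serialized engine and write it to disk for later reuse.
    // (For dynamic batch sizes, add an optimization profile to the config
    // first; see the sketch earlier in this README.)
    auto config = std::unique_ptr<IBuilderConfig>(builder->createBuilderConfig());
    auto serialized = std::unique_ptr<IHostMemory>(
        builder->buildSerializedNetwork(*network, *config));
    std::ofstream out("model.engine", std::ios::binary);
    out.write(static_cast<const char*>(serialized->data()), serialized->size());
    return 0;
}
```

The engine only needs to be built once; at runtime you deserialize it with `nvinfer1::createInferRuntime` and `IRuntime::deserializeCudaEngine`, then run inference through an `IExecutionContext`.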
The following instructions assume you are using Ubuntu 20.04.

Install the build tools, Python, and CMake (note that `python3-pip` is needed for the `pip3` command below):

```bash
sudo apt install build-essential
sudo apt install python3.8 python3-pip
pip3 install cmake
```
Install TensorRT: download the package matching your CUDA version from the NVIDIA developer site (https://developer.nvidia.com/tensorrt), which requires a free NVIDIA developer account, and follow the official installation guide. CUDA (and, for most TensorRT versions, cuDNN) must be installed first.
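Once TensorRT is installed, a quick sanity check is to compile a tiny program against its headers. The version macros below come from `NvInferVersion.h`, so if this builds and runs, the headers are on your include path (add `-I` with your TensorRT include directory if you used the tar install):

```cpp
#include <NvInferVersion.h>
#include <iostream>

int main() {
    // Print the TensorRT version the installed headers belong to, e.g. 8.4.1.
    std::cout << "TensorRT " << NV_TENSORRT_MAJOR << "."
              << NV_TENSORRT_MINOR << "." << NV_TENSORRT_PATCH << std::endl;
    return 0;
}
```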
To build and install the project:

```bash
mkdir build && cd build
cmake ..
make -j$(nproc)
make install
```