PyTorch-ONNX-TensorRT-CPP

How To Run Inference Using TensorRT C++ API

The blog post is here: https://www.learnopencv.com/how-to-run-inference-using-tensorrt-c-api/

To run the PyTorch part:

python3 -m pip install -r requirements.txt
python3 pytorch_model.py

To run the TensorRT part:

  1. Install CMake version 3.10 or later
  2. Download and install NVIDIA CUDA 10.0 or later, following the official instructions: link
  3. Download and extract the cuDNN library for your CUDA version (login required): link
  4. Download and extract the NVIDIA TensorRT library for your CUDA version (login required): link. The minimum required version is 6.0.1.5
  5. Add the paths to CUDA, TensorRT, and cuDNN to the PATH variable (or LD_LIBRARY_PATH on Linux)
  6. Build or install a pre-built version of OpenCV and OpenCV Contrib. The minimum required version is 4.0.0.
  7. Build and run the sample:
mkdir build
cd build
cmake -DOpenCV_DIR=[path-to-opencv-build] -DTensorRT_DIR=[path-to-tensorrt] ..
make -j8
trt_sample[.exe] resnet50.onnx turkish_coffee.jpg
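Internally, trt_sample has to convert turkish_coffee.jpg into the tensor layout the ONNX graph expects. As a rough reference (a Python sketch of the standard ImageNet preprocessing for ResNet-50, not the sample's actual C++ code), using a synthetic image in place of the real file:

```python
import numpy as np

# Synthetic 224x224 HWC uint8 image standing in for turkish_coffee.jpg
# (a real pipeline would first load and resize the image).
img = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)

# Standard ImageNet normalization that pretrained ResNet-50 models expect.
mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
x = (img.astype(np.float32) / 255.0 - mean) / std

# Reorder to NCHW with a batch dimension, matching the exported graph's input.
x = x.transpose(2, 0, 1)[None]
print(x.shape)  # (1, 3, 224, 224)
```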

AI Courses by OpenCV

Want to become an expert in AI? AI Courses by OpenCV is a great place to start.