The blog post is here: https://www.learnopencv.com/how-to-run-inference-using-tensorrt-c-api/
python3 -m pip install -r requirements.txt
python3 pytorch_model.py
- Install CMake version 3.10 or later
- Download and install NVIDIA CUDA 10.0 or later, following the official instructions: link
- Download and extract the cuDNN library for your CUDA version (login required): link
- Download and extract the NVIDIA TensorRT library for your CUDA version (login required): link. The minimum required version is 6.0.1.5
- Add the CUDA, TensorRT, and cuDNN paths to the PATH variable (or LD_LIBRARY_PATH)
- Build OpenCV and OpenCV Contrib from source, or install pre-built versions. The minimum required version is 4.0.0.
mkdir build
cd build
cmake -DOpenCV_DIR=[path-to-opencv-build] -DTensorRT_DIR=[path-to-tensorrt] ..
make -j8
trt_sample[.exe] resnet50.onnx turkish_coffee.jpg
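The sample follows the usual TensorRT workflow: parse the ONNX model, build an engine, and run inference on a preprocessed image. The sketch below is a minimal, hedged outline of that workflow using the TensorRT 6/7 C++ API and OpenCV, not the exact code from the post; the input shape (1x3x224x224), the 1000-class output, the binding order, and the simplified preprocessing (no mean/std normalization, BGR channel order) are illustrative assumptions for a standard ResNet-50 export.

```cpp
// Minimal sketch of an ONNX -> TensorRT engine -> inference pipeline (TensorRT 6/7 API).
// Assumes binding 0 is a 1x3x224x224 float input and binding 1 is a 1x1000 float output.
#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <cuda_runtime_api.h>
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <iostream>
#include <vector>

// TensorRT needs an ILogger implementation; print only warnings and errors.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
    }
} gLogger;

int main(int argc, char** argv) {
    if (argc < 3) { std::cerr << "usage: trt_sketch model.onnx image.jpg\n"; return 1; }

    // 1. Parse the ONNX model into an explicit-batch network and build an engine.
    auto builder = nvinfer1::createInferBuilder(gLogger);
    const auto flags = 1U << static_cast<uint32_t>(
        nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    auto network = builder->createNetworkV2(flags);
    auto parser = nvonnxparser::createParser(*network, gLogger);
    if (!parser->parseFromFile(argv[1],
            static_cast<int>(nvinfer1::ILogger::Severity::kWARNING))) {
        std::cerr << "failed to parse ONNX model\n"; return 1;
    }
    auto config = builder->createBuilderConfig();
    config->setMaxWorkspaceSize(1ULL << 30);  // up to 1 GB of scratch space
    auto engine = builder->buildEngineWithConfig(*network, *config);
    auto context = engine->createExecutionContext();

    // 2. Preprocess the image: resize to 224x224, scale to [0,1], HWC -> CHW.
    cv::Mat img = cv::imread(argv[2]);
    cv::resize(img, img, cv::Size(224, 224));
    img.convertTo(img, CV_32FC3, 1.0 / 255.0);
    std::vector<cv::Mat> channels;
    cv::split(img, channels);
    std::vector<float> input(3 * 224 * 224);
    for (int c = 0; c < 3; ++c)
        std::copy(channels[c].ptr<float>(0), channels[c].ptr<float>(0) + 224 * 224,
                  input.begin() + c * 224 * 224);

    // 3. Copy the input to the GPU, run inference, copy the output back.
    void* buffers[2];
    std::vector<float> output(1000);
    cudaMalloc(&buffers[0], input.size() * sizeof(float));
    cudaMalloc(&buffers[1], output.size() * sizeof(float));
    cudaStream_t stream;
    cudaStreamCreate(&stream);
    cudaMemcpyAsync(buffers[0], input.data(), input.size() * sizeof(float),
                    cudaMemcpyHostToDevice, stream);
    context->enqueueV2(buffers, stream, nullptr);
    cudaMemcpyAsync(output.data(), buffers[1], output.size() * sizeof(float),
                    cudaMemcpyDeviceToHost, stream);
    cudaStreamSynchronize(stream);

    // 4. Report the most likely class index.
    auto best = std::max_element(output.begin(), output.end());
    std::cout << "top class id: " << best - output.begin() << std::endl;

    // Real code should also release the parser, network, config, and builder.
    context->destroy();
    engine->destroy();
    cudaFree(buffers[0]);
    cudaFree(buffers[1]);
    cudaStreamDestroy(stream);
    return 0;
}
```

If you build such a sketch yourself, compile and link it against the same OpenCV and TensorRT installations you pass to CMake above. Note that building the engine from ONNX is the slowest step, which is why it is usually done once per model and GPU.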
Want to become an expert in AI? AI Courses by OpenCV is a great place to start.