Depth-Anything TensorRT C++

This repo provides a TensorRT C++ implementation of the Depth-Anything model for real-time inference on the GPU.

Getting Started

Load a prebuilt engine, or build one from an ONNX model, and perform depth estimation:

// OpenCV provides imread, applyColorMap, imshow, etc.
#include <opencv2/opencv.hpp>
using namespace cv;

// 1. Build an engine from an onnx file
TRTModule model("./depth_anything_vitb14.onnx");
// 2. Or load an already built engine
// TRTModule model("./depth_anything_vitb14.engine");

Mat image = imread("./zidan.jpg");

// Run inference; the result is a single-channel depth map
Mat depth = model.predict(image);

// Colorize the depth map for visualization
Mat colored_depth;
applyColorMap(depth, colored_depth, COLORMAP_INFERNO);

imshow("Depth", colored_depth);
waitKey(0);
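
For real-time use on a video stream, the same predict call can simply run once per frame. Below is a minimal sketch, assuming the engine file has already been built; the capture source and window name are illustrative, and the TRTModule header include is omitted since its path depends on the project layout:

#include <opencv2/opencv.hpp>
using namespace cv;

int main() {
    // Load the previously built engine (see above)
    TRTModule model("./depth_anything_vitb14.engine");

    VideoCapture cap(0);  // webcam; any cv::VideoCapture source works
    Mat frame, depth, colored_depth;

    while (cap.read(frame)) {
        depth = model.predict(frame);  // per-frame inference
        applyColorMap(depth, colored_depth, COLORMAP_INFERNO);
        imshow("Depth", colored_depth);
        if (waitKey(1) == 27) break;   // Esc to quit
    }
    return 0;
}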

Performance

The inference time includes the pre-processing time and the post-processing time:

Device     Model             Model Input (WxH)   Image Resolution (WxH)   Inference Time (ms)
RTX 4090   Depth-Anything-S  518x518             1280x720                 3
RTX 4090   Depth-Anything-B  518x518             1280x720                 6
RTX 4090   Depth-Anything-L  518x518             1280x720                 12

Note that the inference was conducted using FP16 precision, with a warm-up period of 10 frames, and the reported time corresponds to the last inference.
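
To reproduce this protocol, the measurement loop can be as simple as the sketch below; the model and image variables reuse the Getting Started snippet, and the 10-frame warm-up mirrors the note above:

#include <chrono>
#include <cstdio>

// 10 warm-up inferences, then time the next call
for (int i = 0; i < 10; ++i)
    model.predict(image);  // warm-up: allocator pools, clocks, caches settle

auto t0 = std::chrono::high_resolution_clock::now();
Mat depth = model.predict(image);
auto t1 = std::chrono::high_resolution_clock::now();

double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
printf("Inference time: %.2f ms\n", ms);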

Installation

  1. Download the pretrained model and install Depth-Anything:
git clone https://github.com/LiheYoung/Depth-Anything
cd Depth-Anything
pip install -r requirements.txt
  2. Copy dpt.py to the <depth_anything_installpath>/depth_anything folder. Here I only removed a squeeze operation at the end of the model's forward function in dpt.py to avoid conflicts with TensorRT.
  3. Export the model to ONNX format using export_to_onnx.py.
  4. Install TensorRT using the guide below.
  5. Build the project and run depth_anything.exe (a sketch of the ONNX-to-engine conversion appears after this list).
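
For reference, the ONNX-to-engine conversion can be done with the standard TensorRT 8 C++ API roughly as follows. This is a sketch under those assumptions, not the repo's actual TRTModule code; error handling and object cleanup are omitted:

#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <cstdio>
#include <fstream>
using namespace nvinfer1;

// Minimal logger required by the TensorRT API
class Logger : public ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) printf("%s\n", msg);
    }
} gLogger;

// Build a serialized FP16 engine from an ONNX file and write it to disk
void buildEngine(const char* onnxPath, const char* enginePath) {
    auto builder = createInferBuilder(gLogger);
    auto network = builder->createNetworkV2(
        1U << static_cast<uint32_t>(NetworkDefinitionCreationFlag::kEXPLICIT_BATCH));
    auto parser = nvonnxparser::createParser(*network, gLogger);
    parser->parseFromFile(onnxPath, static_cast<int>(ILogger::Severity::kWARNING));

    auto config = builder->createBuilderConfig();
    config->setFlag(BuilderFlag::kFP16);  // FP16, matching the benchmark above

    auto serialized = builder->buildSerializedNetwork(*network, *config);
    std::ofstream out(enginePath, std::ios::binary);
    out.write(static_cast<const char*>(serialized->data()), serialized->size());
}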
TensorRT/CUDA installation guide
  1. Download the TensorRT zip file that matches the Windows version you are using.
  2. Choose where you want to install TensorRT. The zip file will install everything into a subdirectory called TensorRT-8.x.x.x. This new subdirectory will be referred to as <installpath> in the steps below.
  3. Unzip the TensorRT-8.x.x.x.Windows10.x86_64.cuda-x.x.zip file to the location that you chose. Where:
  • 8.x.x.x is your TensorRT version
  • cuda-x.x is CUDA version 11.6, 11.8 or 12.0
  4. Add the TensorRT library files to your system PATH. To do so, copy the DLL files from <installpath>/lib to your CUDA installation directory, for example, C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\vX.Y\bin, where vX.Y is your CUDA version. The CUDA installer should have already added the CUDA path to your system PATH.
  5. Ensure that the following is present in your Visual Studio Solution project properties:
  • <installpath>/lib has been added to your PATH variable and is present under VC++ Directories > Executable Directories.
  • <installpath>/include is present under C/C++ > General > Additional Include Directories.
  • nvinfer.lib and any other LIB files that your project requires are present under Linker > Input > Additional Dependencies.
  6. Download and install any recent OpenCV for Windows (a quick sanity check for the setup follows this list).
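
Once everything is in place, a tiny program can confirm that the TensorRT headers, nvinfer.lib, and DLLs all resolve correctly; getInferLibVersion is part of the public TensorRT API:

#include <NvInfer.h>
#include <cstdio>

int main() {
    // Prints e.g. 8601 for TensorRT 8.6.1 if headers, libs and DLLs resolve
    printf("TensorRT version: %d\n", getInferLibVersion());
    return 0;
}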

Acknowledgement

This project is based on the following projects:

  • TensorRTx - Implementation of popular deep learning networks with TensorRT network definition API.
  • TensorRT - TensorRT samples and API documentation.
  • Depth-Anything - Unleashing the Power of Large-Scale Unlabeled Data.
