This repo provides a TensorRT C++ implementation of the Depth-Anything model, for real-time depth estimation on GPU.
Build an engine from an ONNX model (or load a previously built engine) and run depth estimation:
```cpp
#include <opencv2/opencv.hpp>

using namespace cv;

int main() {
    // 1. Build an engine from an ONNX file (TRTModule is declared in this project's headers)
    TRTModule model("./depth_anything_vitb14.onnx");
    // 2. Or load a previously built engine:
    // TRTModule model("./depth_anything_vitb14.engine");

    Mat image = imread("./zidan.jpg");
    Mat depth = model.predict(image);

    Mat colored_depth;
    applyColorMap(depth, colored_depth, COLORMAP_INFERNO);
    imshow("Depth", colored_depth);
    waitKey(0);
}
```
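For context, building an engine from an ONNX file boils down to the standard TensorRT builder flow. The sketch below is a minimal illustration using the TensorRT 8 C++ API; the `buildEngine` function and `Logger` class are hypothetical names, not taken from this repo:

```cpp
// Illustrative sketch of the ONNX -> engine build step (TensorRT 8 API).
#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <cstdio>
#include <fstream>

class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::printf("%s\n", msg);
    }
};

void buildEngine(const char* onnxPath, const char* enginePath) {
    Logger logger;
    auto builder = nvinfer1::createInferBuilder(logger);
    auto network = builder->createNetworkV2(
        1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH));
    auto parser = nvonnxparser::createParser(*network, logger);
    parser->parseFromFile(onnxPath, static_cast<int>(nvinfer1::ILogger::Severity::kWARNING));

    auto config = builder->createBuilderConfig();
    config->setFlag(nvinfer1::BuilderFlag::kFP16);  // FP16, as used in the benchmarks below

    // Serialize the engine so later runs can load it instead of rebuilding.
    auto serialized = builder->buildSerializedNetwork(*network, *config);
    std::ofstream out(enginePath, std::ios::binary);
    out.write(static_cast<const char*>(serialized->data()), serialized->size());
}
```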
The inference time includes the pre-processing time and the post-processing time:
Device | Model | Model Input (WxH) | Image Resolution (WxH) | Inference Time (ms) |
---|---|---|---|---|
RTX 4090 | Depth-Anything-S | 518x518 | 1280x720 | 3 |
RTX 4090 | Depth-Anything-B | 518x518 | 1280x720 | 6 |
RTX 4090 | Depth-Anything-L | 518x518 | 1280x720 | 12 |
Note that the inference was conducted using FP16 precision, with a warm-up period of 10 frames, and the reported time corresponds to the last inference.
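That measurement protocol can be reproduced with a loop along these lines (`benchmarkMs` is an illustrative helper, not part of the repo; `predict` is the TRTModule method shown above):

```cpp
// Illustrative timing loop matching the protocol above:
// 10 warm-up frames, then the last inference is timed.
#include <chrono>
#include <opencv2/opencv.hpp>

double benchmarkMs(TRTModule& model, const cv::Mat& image) {
    for (int i = 0; i < 10; ++i) model.predict(image);  // warm-up

    auto t0 = std::chrono::high_resolution_clock::now();
    cv::Mat depth = model.predict(image);               // timed run
    auto t1 = std::chrono::high_resolution_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}
```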
- Download the pretrained model and install Depth-Anything:

```
git clone https://github.com/LiheYoung/Depth-Anything
cd Depth-Anything
pip install -r requirements.txt
```
- Copy `dpt.py` to the `<depth_anything_installpath>/depth_anything` folder. The only change made here is removing a squeeze operation at the end of the model's forward function in `dpt.py`, which otherwise conflicts with TensorRT.
- Export the model to ONNX format using `export_to_onnx.py`.
- Install TensorRT using the guide below.
- Build the project and run `depth_anything.exe`.
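Once the engine has been built and serialized to disk, later runs can load it directly (the `.engine` path in the snippet at the top) and skip ONNX parsing entirely. A minimal sketch of that deserialization step, assuming the TensorRT 8 runtime API (`loadEngine` is a hypothetical helper, not part of this repo):

```cpp
// Illustrative engine deserialization (TensorRT 8 runtime API).
#include <NvInfer.h>
#include <fstream>
#include <iterator>
#include <vector>

nvinfer1::ICudaEngine* loadEngine(const char* enginePath, nvinfer1::ILogger& logger) {
    // Read the serialized engine into memory.
    std::ifstream in(enginePath, std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(in)),
                            std::istreambuf_iterator<char>());
    auto runtime = nvinfer1::createInferRuntime(logger);
    return runtime->deserializeCudaEngine(blob.data(), blob.size());
}
```

Loading a serialized engine is much faster than rebuilding it from ONNX, which is why caching the engine file is worthwhile.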
TensorRT/CUDA installation guide
- Download the TensorRT zip file that matches the Windows version you are using.
- Choose where you want to install TensorRT. The zip file will install everything into a subdirectory called `TensorRT-8.x.x.x`. This new subdirectory will be referred to as `<installpath>` in the steps below.
- Unzip the `TensorRT-8.x.x.x.Windows10.x86_64.cuda-x.x.zip` file to the location that you chose, where:
  - `8.x.x.x` is your TensorRT version
  - `cuda-x.x` is your CUDA version: `11.6`, `11.8`, or `12.0`
- Add the TensorRT library files to your system `PATH`. To do so, copy the DLL files from `<installpath>/lib` to your CUDA installation directory, for example `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\vX.Y\bin`, where `vX.Y` is your CUDA version. The CUDA installer should have already added the CUDA path to your system `PATH`.
- Ensure that the following is present in your Visual Studio solution project properties (a quick link check follows this list):
  - `<installpath>/lib` has been added to your `PATH` variable and is present under VC++ Directories > Executable Directories.
  - `<installpath>/include` is present under C/C++ > General > Additional Include Directories.
  - `nvinfer.lib` and any other LIB files that your project requires are present under Linker > Input > Additional Dependencies.
- Download and install any recent version of OpenCV for Windows.
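A quick way to confirm that the include and linker settings are correct is to compile and run a trivial program against `nvinfer.lib` (this check program is illustrative, not part of the repo):

```cpp
// If this compiles, links, and prints a version, the TensorRT paths are correct.
#include <NvInfer.h>
#include <cstdio>

int main() {
    // getInferLibVersion() is exported by the nvinfer library.
    std::printf("TensorRT version: %d\n", getInferLibVersion());
    return 0;
}
```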
This project is based on the following projects:
- TensorRTx - Implementation of popular deep learning networks with TensorRT network definition API.
- TensorRT - TensorRT samples and API documentation.
- Depth-Anything - Unleashing the Power of Large-Scale Unlabeled Data.