
# TRE - TensorRT Running Engine

This is a tiny wrapper around the TensorRT Python API that loads a serialized TensorRT engine and runs inference. It simplifies the inference process and fills in functionality that ONNX-TensorRT does not provide.

## Installation

### Dependencies

Before installing TensorRT, you may need to install cuDNN and PyCUDA. See Installing cuDNN and Installing PyCUDA. Follow the TensorRT installation instructions carefully, and make sure the TensorRT libraries are on your `LD_LIBRARY_PATH`.
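For example, assuming TensorRT was extracted from a tarball to `/opt/tensorrt` (the path is illustrative; adjust it to your install location), you could expose the libraries and sanity-check the Python bindings like this:

```shell
# Example path; replace with wherever your TensorRT libraries actually live.
export LD_LIBRARY_PATH=/opt/tensorrt/lib:$LD_LIBRARY_PATH

# Quick sanity check that the TensorRT Python bindings and PyCUDA import cleanly.
python -c "import tensorrt; import pycuda.driver; print(tensorrt.__version__)"
```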

### Download the code

Clone the code from GitHub:

```shell
git clone https://github.com/ChengjieWU/TRE.git
```

Build and install the TRE wheel:

```shell
cd TensorRT-Easy-to-Run
python setup.py sdist bdist_wheel
pip install dist/TRE-0.0.1-py3-none-any.whl
```

## Usage

The TensorRT Running Engine can be used in Python as follows:

```python
from TRE import Engine
import numpy as np

# Load a serialized TensorRT engine onto the first CUDA device.
engine = Engine("/path/to/serialized/TensorRT/engine", "CUDA:0")

# Run inference on a random float32 batch in NCHW layout.
input_data = np.random.random(size=(32, 3, 224, 224)).astype(np.float32)
output_data = engine.run(input_data)
print(output_data)
print(output_data.shape)
```
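As in the example above, the engine consumes a contiguous `float32` array in NCHW layout. Real inputs usually arrive as HWC `uint8` images, so a typical preprocessing step converts them first. A minimal sketch using only NumPy (the 224x224 size and the [0, 1] scaling are illustrative assumptions, not requirements of TRE):

```python
import numpy as np

def to_nchw_batch(images):
    """Convert a list of same-sized HWC uint8 images to a float32 NCHW batch."""
    batch = np.stack(images).astype(np.float32) / 255.0  # (N, H, W, C), scaled to [0, 1]
    batch = batch.transpose(0, 3, 1, 2)                  # -> (N, C, H, W)
    return np.ascontiguousarray(batch)                   # TensorRT expects contiguous memory

# Example: four random 224x224 RGB "images".
imgs = [np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8) for _ in range(4)]
batch = to_nchw_batch(imgs)
print(batch.shape, batch.dtype)  # (4, 3, 224, 224) float32
```

The resulting array can then be passed directly to `engine.run`.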
