TensorRT OSS 21.02 release
Signed-off-by: Rajeev Rao <[email protected]>
rajeevsrao committed Feb 5, 2021
1 parent 360eea3 commit d7baf01
Showing 210 changed files with 24,807 additions and 833 deletions.
16 changes: 8 additions & 8 deletions .github/ISSUE_TEMPLATE/bug_report.md
@@ -1,6 +1,6 @@
---
name: Bug report
about: Create a report to help us improve
name: TensorRT OSS Bug Report
about: Report any bugs to help us improve TensorRT.
title: ''
labels: ''
assignees: ''
@@ -15,20 +15,20 @@ assignees: ''
## Environment

**TensorRT Version**:
**GPU Type**:
**Nvidia Driver Version**:
**NVIDIA GPU**:
**NVIDIA Driver Version**:
**CUDA Version**:
**CUDNN Version**:
**Operating System + Version**:
**Operating System**:
**Python Version (if applicable)**:
**TensorFlow Version (if applicable)**:
**Tensorflow Version (if applicable)**:
**PyTorch Version (if applicable)**:
**Baremetal or Container (if container which image + tag)**:
**Baremetal or Container (if so, version)**:


## Relevant Files

<!-- Please include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.) -->
<!-- Please include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive/Dropbox, etc.) -->


## Steps To Reproduce
21 changes: 21 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,26 @@
# TensorRT OSS Release Changelog

## [21.02](https://github.com/NVIDIA/TensorRT/releases/tag/21.02) - 2021-02-01
### Added
- [TensorRT Python API bindings](python)
- [TensorRT Python samples](samples/python)
- FP16 support to batchedNMSPlugin [#1002](https://github.com/NVIDIA/TensorRT/pull/1002)
- Configurable input size for TLT MaskRCNN Plugin [#986](https://github.com/NVIDIA/TensorRT/pull/986)

### Changed
- TensorRT version updated to 7.2.2.3
- [ONNX-TensorRT v21.02 update](https://github.com/onnx/onnx-tensorrt/blob/master/docs/Changelog.md#2102-container-release---2021-01-22)
- [Polygraphy v0.21.1 update](tools/Polygraphy/CHANGELOG.md#v0211-2021-01-12)
- [PyTorch-Quantization Toolkit](tools/pytorch-quantization) v2.1.0 update
- Documentation update, ONNX opset 13 support, ResNet example
- [ONNX-GraphSurgeon v0.28 update](tools/onnx-graphsurgeon/CHANGELOG.md#v028-2020-10-08)
- [demoBERT builder](demo/BERT) updated to work with Tensorflow2 (in compatibility mode)
- Refactor [Dockerfiles](docker) for OSS container

### Removed
- N/A
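
A minimal sketch of picking up this tagged release: the README's clone step with the `21.02` tag (from the release link above) substituted for `master`.

```bash
# Check out the 21.02 release tag instead of master, then pull submodules.
git clone -b 21.02 https://github.com/nvidia/TensorRT TensorRT
cd TensorRT
git submodule update --init --recursive
```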


## [20.12](https://github.com/NVIDIA/TensorRT/releases/tag/20.12) - 2020-12-18
### Added
- Add configurable input size for TLT MaskRCNN Plugin
34 changes: 33 additions & 1 deletion LICENSE
@@ -176,7 +176,7 @@

END OF TERMS AND CONDITIONS

Copyright 2020 NVIDIA Corporation
Copyright 2021 NVIDIA Corporation

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -193,6 +193,38 @@

PORTIONS LICENSED AS FOLLOWS

> tools/pytorch-quantization/examples/torchvision/models/classification/resnet.py

BSD 3-Clause License

Copyright (c) Soumith Chintala 2016,
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.

* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.

* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

> samples/common/windows/getopt.c

Copyright (c) 2002 Todd C. Miller <[email protected]>
8 changes: 8 additions & 0 deletions NOTICE
@@ -0,0 +1,8 @@
TensorRT Open Source Software
Copyright (c) 2021, NVIDIA CORPORATION.

This product includes software developed at
NVIDIA CORPORATION (https://www.nvidia.com/).

This software contains code derived by Soumith Chintala.
BSD 3-Clause License (https://github.com/pytorch/vision/blob/master/LICENSE)
119 changes: 48 additions & 71 deletions README.md
@@ -12,7 +12,7 @@ This repository contains the Open Source Software (OSS) components of NVIDIA Ten
To build the TensorRT-OSS components, you will first need the following software packages.

**TensorRT GA build**
* [TensorRT](https://developer.nvidia.com/nvidia-tensorrt-download) v7.2.1
* [TensorRT](https://developer.nvidia.com/nvidia-tensorrt-download) v7.2.2
- See [Downloading TensorRT Builds](#downloading-tensorrt-builds) for details

**System Packages**
@@ -56,69 +56,55 @@ To build the TensorRT-OSS components, you will first need the following software
git clone -b master https://github.com/nvidia/TensorRT TensorRT
cd TensorRT
git submodule update --init --recursive
export TRT_SOURCE=`pwd`
```
**On Windows: Powershell**
```powershell
git clone -b master https://github.com/nvidia/TensorRT TensorRT
cd TensorRT
git submodule update --init --recursive
$Env:TRT_SOURCE = $(Get-Location)
```

2. #### Download TensorRT GA
To build TensorRT OSS, obtain the corresponding TensorRT GA build from [NVIDIA Developer Zone](https://developer.nvidia.com/nvidia-tensorrt-download).
2. #### Specify the TensorRT Release build

**Example: Ubuntu 18.04 on x86-64 with cuda-11.1**
If using NVIDIA build containers, TensorRT is preinstalled under `/usr/lib/x86_64-linux-gnu`.

Download and extract the latest *TensorRT 7.2.1 GA package for Ubuntu 18.04 and CUDA 11.1*
```bash
cd ~/Downloads
tar -xvzf TensorRT-7.2.1.6.Ubuntu-18.04.x86_64-gnu.cuda-11.1.cudnn8.0.tar.gz
export TRT_RELEASE=`pwd`/TensorRT-7.2.1.6
```
**Example: Ubuntu 18.04 on PowerPC with cuda-11.0**
Else download and extract the TensorRT build from [NVIDIA Developer Zone](https://developer.nvidia.com/nvidia-tensorrt-download).

Download and extract the latest *TensorRT 7.2.1 GA package for Ubuntu 18.04 and CUDA 11.0*
```bash
cd ~/Downloads
tar -xvzf TensorRT-7.2.1.6.Ubuntu-18.04.powerpc64le-gnu.cuda-11.0.cudnn8.0.tar.gz
export TRT_RELEASE=`pwd`/TensorRT-7.2.1.6
```
**Example: CentOS/RedHat 7 on x86-64 with cuda-11.0**
**Example: Ubuntu 18.04 on x86-64 with cuda-11.1**

Download and extract the *TensorRT 7.2.1 GA for CentOS/RedHat 7 and CUDA 11.0 tar package*
```bash
cd ~/Downloads
tar -xvzf TensorRT-7.2.1.6.CentOS-7.6.x86_64-gnu.cuda-11.0.cudnn8.0.tar.gz
export TRT_RELEASE=`pwd`/TensorRT-7.2.1.6
```
**Example: Ubuntu18.04 Cross-Compile for QNX with cuda-10.2**
```bash
cd ~/Downloads
tar -xvzf TensorRT-7.2.2.3.Ubuntu-18.04.x86_64-gnu.cuda-11.1.cudnn8.0.tar.gz
export TRT_LIBPATH=`pwd`/TensorRT-7.2.2.3
```

Download and extract the *TensorRT 7.2.1 GA for QNX and CUDA 10.2 tar package*
```bash
cd ~/Downloads
tar -xvzf TensorRT-7.2.1.6.Ubuntu-18.04.aarch64-qnx.cuda-10.2.cudnn7.6.tar.gz
export TRT_RELEASE=`pwd`/TensorRT-7.2.1.6
export QNX_HOST=/<path-to-qnx-toolchain>/host/linux/x86_64
export QNX_TARGET=/<path-to-qnx-toolchain>/target/qnx7
```
**Example: Windows on x86-64 with cuda-11.0**
**Example: Ubuntu18.04 Cross-Compile for QNX with cuda-10.2**

```bash
cd ~/Downloads
tar -xvzf TensorRT-7.2.2.3.Ubuntu-18.04.aarch64-qnx.cuda-10.2.cudnn7.6.tar.gz
export TRT_LIBPATH=`pwd`/TensorRT-7.2.2.3
export QNX_HOST=/<path-to-qnx-toolchain>/host/linux/x86_64
export QNX_TARGET=/<path-to-qnx-toolchain>/target/qnx7
```

**Example: Windows on x86-64 with cuda-11.0**

```powershell
cd ~\Downloads
Expand-Archive .\TensorRT-7.2.2.3.Windows10.x86_64.cuda-11.0.cudnn8.0.zip
$Env:TRT_LIBPATH = '$(Get-Location)\TensorRT-7.2.2.3'
$Env:PATH += 'C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\15.0\Bin\'
```

Download and extract the *TensorRT 7.2.1 GA for Windows and CUDA 11.0 zip package* and add *msbuild* to *PATH*
```powershell
cd ~\Downloads
Expand-Archive .\TensorRT-7.2.1.6.Windows10.x86_64.cuda-11.0.cudnn8.0.zip
$Env:TRT_RELEASE = '$(Get-Location)\TensorRT-7.2.1.6'
$Env:PATH += 'C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\15.0\Bin\'
```

3. #### (Optional) JetPack SDK for Jetson builds
Using the JetPack SDK manager, download the host components. Steps:
1. Download and launch the SDK manager. Login with your developer account.
2. Select the platform and target OS (example: Jetson AGX Xavier, `Linux Jetpack 4.4`), and click Continue.
3. Under `Download & Install Options` change the download folder and select `Download now, Install later`. Agree to the license terms and click Continue.
4. Move the extracted files into the `$TRT_SOURCE/docker/jetpack_files` folder.
4. Move the extracted files into the `<TensorRT-OSS>/docker/jetpack_files` folder.
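A minimal sketch of that last step, with placeholder paths: `<jetpack-download-dir>` is whichever download folder you selected in SDK Manager, and `<TensorRT-OSS>` is the root of this repository checkout.
```bash
# Copy (or move) the downloaded JetPack host components into the OSS tree.
# Both paths below are placeholders; substitute your own locations.
cp <jetpack-download-dir>/* <TensorRT-OSS>/docker/jetpack_files/
```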
## Setting Up The Build Environment
@@ -129,25 +115,25 @@ For native builds, install the [prerequisite](#prerequisites) *System Packages*.
**Example: Ubuntu 18.04 on x86-64 with cuda-11.1**
```bash
./docker/build.sh --file docker/ubuntu.Dockerfile --tag tensorrt-ubuntu --os 18.04 --cuda 11.1
./docker/build.sh --file docker/ubuntu-18.04.Dockerfile --tag tensorrt-ubuntu-1804 --cuda 11.1
```
**Example: Ubuntu 18.04 on PowerPC with cuda-11.0**
**Example: Ubuntu 18.04 cross-compile for PowerPC with cuda-11.0**
```bash
./docker/build.sh --file docker/ubuntu-cross-ppc64le.Dockerfile --tag tensorrt-ubuntu-ppc --os 18.04 --cuda 11.0
./docker/build.sh --file docker/ubuntu-cross-ppc64le.Dockerfile --tag tensorrt-ubuntu-ppc --cuda 11.0
```
**Example: CentOS/RedHat 7 on x86-64 with cuda-11.0**
```bash
./docker/build.sh --file docker/centos.Dockerfile --tag tensorrt-centos --os 7 --cuda 11.0
./docker/build.sh --file docker/centos-7.Dockerfile --tag tensorrt-centos --cuda 11.0
```
**Example: Ubuntu 18.04 Cross-Compile for Jetson (arm64) with cuda-10.2 (JetPack)**
**Example: Ubuntu 18.04 cross-compile for Jetson (arm64) with cuda-10.2 (JetPack SDK)**
```bash
./docker/build.sh --file docker/ubuntu-cross-aarch64.Dockerfile --tag tensorrt-cross-jetpack --os 18.04 --cuda 10.2
./docker/build.sh --file docker/ubuntu-cross-aarch64.Dockerfile --tag tensorrt-cross-jetpack --cuda 10.2
```
2. #### Launch the TensorRT-OSS build container.
**Example: Ubuntu 18.04 build container**
```bash
./docker/launch.sh --tag tensorrt-ubuntu --gpus all --release $TRT_RELEASE --source $TRT_SOURCE
./docker/launch.sh --tag tensorrt-ubuntu-1804 --gpus all
```
> NOTE:
1. Use the tag corresponding to the build container you generated in
@@ -158,37 +144,37 @@ For native builds, install the [prerequisite](#prerequisites) *System Packages*.
**Example: Linux (x86-64) build with default cuda-11.1**
```bash
cd $TRT_SOURCE
cd $TRT_OSSPATH
mkdir -p build && cd build
cmake .. -DTRT_LIB_DIR=$TRT_RELEASE/lib -DTRT_OUT_DIR=`pwd`/out
cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`/out
make -j$(nproc)
```
**Example: Native build on Jetson (arm64) with cuda-10.2**
```bash
cd $TRT_SOURCE
cd $TRT_OSSPATH
mkdir -p build && cd build
cmake .. -DTRT_LIB_DIR=$TRT_RELEASE/lib -DTRT_OUT_DIR=`pwd`/out -DTRT_PLATFORM_ID=aarch64 -DCUDA_VERSION=10.2
cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`/out -DTRT_PLATFORM_ID=aarch64 -DCUDA_VERSION=10.2
make -j$(nproc)
```
**Example: Ubuntu 18.04 Cross-Compile for Jetson (arm64) with cuda-10.2 (JetPack)**
```bash
cd $TRT_SOURCE
cd $TRT_OSSPATH
mkdir -p build && cd build
cmake .. -DTRT_LIB_DIR=$TRT_RELEASE/lib -DTRT_OUT_DIR=`pwd`/out -DCMAKE_TOOLCHAIN_FILE=$TRT_SOURCE/cmake/toolchains/cmake_aarch64.toolchain -DCUDA_VERSION=10.2
cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`/out -DCMAKE_TOOLCHAIN_FILE=$TRT_OSSPATH/cmake/toolchains/cmake_aarch64.toolchain -DCUDA_VERSION=10.2
make -j$(nproc)
```
**Example: Cross-Compile for QNX with cuda-10.2**
```bash
cd $TRT_SOURCE
cd $TRT_OSSPATH
mkdir -p build && cd build
cmake .. -DTRT_LIB_DIR=$TRT_RELEASE/lib -DTRT_OUT_DIR=`pwd`/out -DCMAKE_TOOLCHAIN_FILE=$TRT_SOURCE/cmake/toolchains/cmake_qnx.toolchain -DCUDA_VERSION=10.2
cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`/out -DCMAKE_TOOLCHAIN_FILE=$TRT_OSSPATH/cmake/toolchains/cmake_qnx.toolchain -DCUDA_VERSION=10.2
make -j$(nproc)
```
**Example: Windows (x86-64) build in Powershell**
```powershell
cd $Env:TRT_SOURCE
cd $Env:TRT_OSSPATH
mkdir -p build ; cd build
cmake .. -DTRT_LIB_DIR=$Env:TRT_RELEASE\lib -DTRT_OUT_DIR='$(Get-Location)\out' -DCMAKE_TOOLCHAIN_FILE=..\cmake\toolchains\cmake_x64_win.toolchain
cmake .. -DTRT_LIB_DIR=$Env:TRT_LIBPATH -DTRT_OUT_DIR='$(Get-Location)\out' -DCMAKE_TOOLCHAIN_FILE=..\cmake\toolchains\cmake_x64_win.toolchain
msbuild ALL_BUILD.vcxproj
```
> NOTE:
@@ -215,15 +201,6 @@ For native builds, install the [prerequisite](#prerequisites) *System Packages*.
- Multiple SMs: `-DGPU_ARCHS="80 75"`
- `TRT_PLATFORM_ID`: Bare-metal build (unlike containerized cross-compilation) on non Linux/x86 platforms must explicitly specify the target platform. Currently supported options: `x86_64` (default), `aarch64`
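As a sketch, the optional flags above can simply be appended to the default Linux configure step shown earlier; the SM values here are illustrative.
```bash
cd $TRT_OSSPATH
mkdir -p build && cd build
# Same as the default Linux build, but restricted to Ampere (SM 80) and Turing (SM 75).
cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`/out -DGPU_ARCHS="80 75"
make -j$(nproc)
```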
#### (Optional) Install TensorRT python bindings

* The TensorRT python API bindings must be installed for running TensorRT python applications

**Example: install TensorRT wheel for python 3.6**
```bash
pip3 install $TRT_RELEASE/python/tensorrt-7.2.1.6-cp36-none-linux_x86_64.whl
```

# References
## TensorRT Resources
@@ -236,5 +213,5 @@ For native builds, install the [prerequisite](#prerequisites) *System Packages*.
## Known Issues
#### TensorRT 7.2.1
#### TensorRT 7.2.2
* None
2 changes: 1 addition & 1 deletion VERSION
@@ -1 +1 @@
7.2.1.6
7.2.2.3
8 changes: 4 additions & 4 deletions demo/BERT/CMakeLists.txt
@@ -33,10 +33,10 @@ endif()

set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-deprecated-declarations")

include($ENV{TRT_SOURCE}/cmake/modules/set_ifndef.cmake)
set_ifndef(TRT_INC_DIR $ENV{TRT_SOURCE}/include)
set_ifndef(TRT_LIB_DIR $ENV{TRT_RELEASE}/lib)
set_ifndef(TRT_OUT_DIR $ENV{TRT_SOURCE}/build/out)
include($ENV{TRT_OSSPATH}/cmake/modules/set_ifndef.cmake)
set_ifndef(TRT_INC_DIR $ENV{TRT_OSSPATH}/include)
set_ifndef(TRT_LIB_DIR $ENV{TRT_LIBPATH})
set_ifndef(TRT_OUT_DIR $ENV{TRT_OSSPATH}/build/out)

include_directories(
infer_c
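
A minimal sketch of how these environment variables reach the demoBERT build after this change, assuming the usual out-of-source CMake flow; both paths are illustrative (inside the build containers TensorRT sits under `/usr/lib/x86_64-linux-gnu`, as noted in the README).

```bash
# Illustrative paths: TRT_OSSPATH is this repository checkout,
# TRT_LIBPATH is wherever the TensorRT libraries live.
export TRT_OSSPATH=/workspace/TensorRT
export TRT_LIBPATH=/usr/lib/x86_64-linux-gnu
cd $TRT_OSSPATH/demo/BERT
mkdir -p build && cd build
cmake .. && make -j$(nproc)
```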