8.6.0.12 release updates

Signed-off-by: Shuyue Lan <[email protected]>
Signed-off-by: Rajeev Rao <[email protected]>

shuyuelan authored and rajeevsrao committed Mar 17, 2023
1 parent f1a4bd3 commit b0b2d3e

Showing 12 changed files with 47 additions and 34 deletions.
13 changes: 12 additions & 1 deletion CHANGELOG.md
@@ -1,5 +1,16 @@
# TensorRT OSS Release Changelog

## [8.6.0 EA](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/tensorrt-8.html#tensorrt-8) - 2023-03-10

TensorRT OSS release corresponding to TensorRT 8.6.0.12 EA release.
- Updates since [TensorRT 8.5.3 GA release](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/tensorrt-8.html#rel-8-5-3).
- Please refer to the [TensorRT 8.6.0.12 EA release notes](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/tensorrt-8.html#tensorrt-8) for more information.

Key Features and Updates:

- demoDiffusion acceleration is now supported out of the box in TensorRT without requiring plugins.
- Added a new sample called onnx_custom_plugin.

## [8.5.3 GA](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/tensorrt-8.html#rel-8-5-3) - 2023-01-30

TensorRT OSS release corresponding to TensorRT 8.5.3.1 GA release.
@@ -416,7 +427,7 @@ Identical to the TensorRT-OSS [8.0.1](https://github.com/NVIDIA/TensorRT/release

## [8.0.1](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/tensorrt-8.html#tensorrt-8) - 2021-07-02
### Added
- Added support for the following ONNX operators: `Celu`, `CumSum`, `EyeLike`, `GatherElements`, `GlobalLpPool`, `GreaterOrEqual`, `LessOrEqual`, `LpNormalization`, `LpPool`, `ReverseSequence`, and `SoftmaxCrossEntropyLoss` [details]().
- Rehauled `Resize` ONNX operator, now fully supporting the following modes:
- Coordinate Transformation modes: `half_pixel`, `pytorch_half_pixel`, `tf_half_pixel_for_nn`, `asymmetric`, and `align_corners`.
- Modes: `nearest`, `linear`.
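For readers unfamiliar with these modes: the ONNX Resize specification defines each coordinate transformation mode as a mapping from an output index back to an input coordinate. A standalone Python sketch of three of the modes listed above (illustrating the spec formulas, not TensorRT source):

```python
# Map an output index back to an input coordinate, per the ONNX Resize spec.
# scale = out_len / in_len along the resized axis.
def to_input_coord(x_out, scale, in_len, out_len, mode):
    if mode == "half_pixel":
        return (x_out + 0.5) / scale - 0.5
    if mode == "asymmetric":
        return x_out / scale
    if mode == "align_corners":
        return x_out * (in_len - 1) / (out_len - 1)
    raise ValueError(f"unsupported mode: {mode}")

# Upsampling a length-4 axis to length 8 (scale = 2.0):
print(to_input_coord(3, 2.0, 4, 8, "half_pixel"))     # 1.25
print(to_input_coord(3, 2.0, 4, 8, "asymmetric"))     # 1.5
print(to_input_coord(7, 2.0, 4, 8, "align_corners"))  # 3.0
```

Note how `align_corners` pins the first and last output samples exactly onto the first and last input samples, while `half_pixel` treats samples as pixel centers.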
34 changes: 17 additions & 17 deletions README.md
@@ -25,14 +25,14 @@ You can skip the **Build** section to enjoy TensorRT with Python.
## Prerequisites
To build the TensorRT-OSS components, you will first need the following software packages.

-**TensorRT GA build**
-* [TensorRT](https://developer.nvidia.com/nvidia-tensorrt-download) v8.5.3.1
+**TensorRT EA build**
+* [TensorRT](https://developer.nvidia.com/nvidia-tensorrt-download) v8.6.0.12

**System Packages**
* [CUDA](https://developer.nvidia.com/cuda-toolkit)
* Recommended versions:
-  * cuda-11.8.0 + cuDNN-8.6
-  * cuda-10.2 + cuDNN-8.4
+  * cuda-12.0.1 + cuDNN-8.8
+  * cuda-11.8.0 + cuDNN-8.8
* [GNU make](https://ftp.gnu.org/gnu/make/) >= v4.1
* [cmake](https://github.com/Kitware/CMake/releases) >= v3.13
* [python](<https://www.python.org/downloads/>) >= v3.6.9, <= v3.10.x
@@ -70,18 +70,18 @@ To build the TensorRT-OSS components, you will first need the following software
git submodule update --init --recursive
```

-2. #### (Optional - if not using TensorRT container) Specify the TensorRT GA release build path
+2. #### (Optional - if not using TensorRT container) Specify the TensorRT EA release build path

If using the TensorRT OSS build container, TensorRT libraries are preinstalled under `/usr/lib/x86_64-linux-gnu` and you may skip this step.

-Else download and extract the TensorRT GA build from [NVIDIA Developer Zone](https://developer.nvidia.com/nvidia-tensorrt-download).
+Else download and extract the TensorRT EA build from [NVIDIA Developer Zone](https://developer.nvidia.com/nvidia-tensorrt-download).

-**Example: Ubuntu 20.04 on x86-64 with cuda-11.8.0**
+**Example: Ubuntu 20.04 on x86-64 with cuda-12.0**

```bash
cd ~/Downloads
-tar -xvzf TensorRT-8.5.3.1.Linux.x86_64-gnu.cuda-11.8.cudnn8.6.tar.gz
-export TRT_LIBPATH=`pwd`/TensorRT-8.5.3.1
+tar -xvzf TensorRT-8.6.0.12.Linux.x86_64-gnu.cuda-12.0.tar.gz
+export TRT_LIBPATH=`pwd`/TensorRT-8.6.0.12
```


@@ -99,13 +99,13 @@ For Linux platforms, we recommend that you generate a docker container for build
1. #### Generate the TensorRT-OSS build container.
The TensorRT-OSS build container can be generated using the supplied Dockerfiles and build scripts. The build containers are configured for building TensorRT OSS out-of-the-box.
-**Example: Ubuntu 20.04 on x86-64 with cuda-11.8.0 (default)**
+**Example: Ubuntu 20.04 on x86-64 with cuda-12.0 (default)**
```bash
-./docker/build.sh --file docker/ubuntu-20.04.Dockerfile --tag tensorrt-ubuntu20.04-cuda11.8
+./docker/build.sh --file docker/ubuntu-20.04.Dockerfile --tag tensorrt-ubuntu20.04-cuda12.0
```
-**Example: CentOS/RedHat 7 on x86-64 with cuda-10.2**
+**Example: CentOS/RedHat 7 on x86-64 with cuda-12.0**
```bash
-./docker/build.sh --file docker/centos-7.Dockerfile --tag tensorrt-centos7-cuda10.2 --cuda 10.2
+./docker/build.sh --file docker/centos-7.Dockerfile --tag tensorrt-centos7-cuda12.0 --cuda 12.0
```
**Example: Ubuntu 20.04 cross-compile for Jetson (aarch64) with cuda-11.4.2 (JetPack SDK)**
```bash
@@ -119,7 +119,7 @@ For Linux platforms, we recommend that you generate a docker container for build
2. #### Launch the TensorRT-OSS build container.
**Example: Ubuntu 20.04 build container**
```bash
-./docker/launch.sh --tag tensorrt-ubuntu20.04-cuda11.8 --gpus all
+./docker/launch.sh --tag tensorrt-ubuntu20.04-cuda12.0 --gpus all
```
> NOTE:
<br> 1. Use the `--tag` corresponding to the build container generated in Step 1.
@@ -130,7 +130,7 @@ For Linux platforms, we recommend that you generate a docker container for build
## Building TensorRT-OSS
* Generate Makefiles and build.
-**Example: Linux (x86-64) build with default cuda-11.8.0**
+**Example: Linux (x86-64) build with default cuda-12.0**
```bash
cd $TRT_OSSPATH
mkdir -p build && cd build
@@ -146,7 +146,7 @@ For Linux platforms, we recommend that you generate a docker container for build
export PATH="/opt/rh/devtoolset-8/root/bin:${PATH}"
```
-**Example: Linux (aarch64) build with default cuda-11.8.0**
+**Example: Linux (aarch64) build with default cuda-12.0**
```bash
cd $TRT_OSSPATH
mkdir -p build && cd build
@@ -209,4 +209,4 @@ For Linux platforms, we recommend that you generate a docker container for build
## Known Issues
-* Please refer to [TensorRT 8.5 Release Notes](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/tensorrt-8.html#tensorrt-8)
+* Please refer to [TensorRT 8.6 Release Notes](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/tensorrt-8.html#tensorrt-8)
2 changes: 1 addition & 1 deletion VERSION
@@ -1 +1 @@
-8.5.3.1
+8.6.0.12
2 changes: 1 addition & 1 deletion docker/build.sh
@@ -18,7 +18,7 @@

arg_dockerfile=docker/ubuntu-20.04.Dockerfile
arg_imagename=tensorrt-ubuntu
-arg_cudaversion=11.8.0
+arg_cudaversion=12.0.1
arg_help=0

while [[ "$#" -gt 0 ]]; do case $1 in
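The bumped default above feeds the script's flag-parsing loop. A minimal standalone sketch of that `while`/`case` pattern, with a hypothetical flag set rather than the full script's options:

```shell
# Sketch of the build.sh option-parsing style: defaults first, then a
# loop that consumes "--flag value" pairs (hypothetical flags).
parse_args() {
  arg_cudaversion=12.0.1
  arg_imagename=tensorrt-ubuntu
  while [ "$#" -gt 0 ]; do case $1 in
    --cuda) arg_cudaversion="$2"; shift;;
    --tag)  arg_imagename="$2"; shift;;
    *) echo "Unknown option: $1" >&2; return 1;;
  esac; shift; done
}

parse_args --tag tensorrt-centos7-cuda12.0 --cuda 12.0
echo "cuda=${arg_cudaversion} image=${arg_imagename}"
```

Each recognized flag stores its argument and shifts it away; an unknown flag aborts with a usage error, matching the defensive style of the real script.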
6 changes: 3 additions & 3 deletions docker/centos-7.Dockerfile
@@ -15,13 +15,13 @@
# limitations under the License.
#

-ARG CUDA_VERSION=11.8.0
+ARG CUDA_VERSION=12.0.1
ARG OS_VERSION=7

FROM nvidia/cuda:${CUDA_VERSION}-cudnn8-devel-centos${OS_VERSION}
LABEL maintainer="NVIDIA CORPORATION"

-ENV TRT_VERSION 8.5.3.1
+ENV TRT_VERSION 8.6.0.12
SHELL ["/bin/bash", "-c"]

# Setup user account
@@ -61,7 +61,7 @@ else \
yum -y install libnvinfer8-${v} libnvparsers8-${v} libnvonnxparsers8-${v} libnvinfer-plugin8-${v} \
libnvinfer-devel-${v} libnvparsers-devel-${v} libnvonnxparsers-devel-${v} libnvinfer-plugin-devel-${v} \
python3-libnvinfer-${v}; \
fi

# Install dev-toolset-8 for g++ version that supports c++14
RUN yum -y install centos-release-scl
6 changes: 3 additions & 3 deletions docker/ubuntu-18.04.Dockerfile
@@ -15,13 +15,13 @@
# limitations under the License.
#

-ARG CUDA_VERSION=11.8.0
+ARG CUDA_VERSION=12.0.1
ARG OS_VERSION=18.04

FROM nvidia/cuda:${CUDA_VERSION}-cudnn8-devel-ubuntu${OS_VERSION}
LABEL maintainer="NVIDIA CORPORATION"

-ENV TRT_VERSION 8.5.3.1
+ENV TRT_VERSION 8.6.0.12
SHELL ["/bin/bash", "-c"]

# Setup user account
@@ -78,7 +78,7 @@ else \
sudo apt-get install libnvinfer8=${v} libnvonnxparsers8=${v} libnvparsers8=${v} libnvinfer-plugin8=${v} \
libnvinfer-dev=${v} libnvonnxparsers-dev=${v} libnvparsers-dev=${v} libnvinfer-plugin-dev=${v} \
python3-libnvinfer=${v}; \
fi

# Install PyPI packages
RUN pip3 install --upgrade pip
4 changes: 2 additions & 2 deletions docker/ubuntu-20.04-aarch64.Dockerfile
@@ -16,9 +16,9 @@
#

# Multi-arch container support available in non-cudnn containers.
-FROM nvidia/cuda:11.8.0-devel-ubuntu20.04
+FROM nvidia/cuda:12.0.1-devel-ubuntu20.04

-ENV TRT_VERSION 8.5.3.1
+ENV TRT_VERSION 8.6.0.12
SHELL ["/bin/bash", "-c"]

# Setup user account
4 changes: 2 additions & 2 deletions docker/ubuntu-20.04.Dockerfile
@@ -15,13 +15,13 @@
# limitations under the License.
#

-ARG CUDA_VERSION=11.8.0
+ARG CUDA_VERSION=12.0.1
ARG OS_VERSION=20.04

FROM nvidia/cuda:${CUDA_VERSION}-cudnn8-devel-ubuntu${OS_VERSION}
LABEL maintainer="NVIDIA CORPORATION"

-ENV TRT_VERSION 8.5.3.1
+ENV TRT_VERSION 8.6.0.12
SHELL ["/bin/bash", "-c"]

# Setup user account
2 changes: 1 addition & 1 deletion docker/ubuntu-cross-aarch64.Dockerfile
@@ -21,7 +21,7 @@ ARG OS_VERSION=20.04
FROM nvidia/cuda:${CUDA_VERSION}-devel-ubuntu${OS_VERSION}
LABEL maintainer="NVIDIA CORPORATION"

-ENV TRT_VERSION 8.5.3.1
+ENV TRT_VERSION 8.6.0.12
ENV DEBIAN_FRONTEND=noninteractive

ARG uid=1000
4 changes: 3 additions & 1 deletion include/NvInfer.h
@@ -9009,7 +9009,9 @@ enum class PreviewFeature : int32_t
//!
//! The default value for this flag is on.
//!
-    kFASTER_DYNAMIC_SHAPES_0805 = 0,
+    //! \deprecated Turning it off is deprecated in TensorRT 8.6. The flag kFASTER_DYNAMIC_SHAPES_0805 will be removed in 9.0.
+    //!
+    kFASTER_DYNAMIC_SHAPES_0805 TRT_DEPRECATED_ENUM = 0,

//!
//! Disable usage of cuDNN/cuBLAS/cuBLASLt tactics in the TensorRT core library.
2 changes: 1 addition & 1 deletion include/NvInferPlugin.h
@@ -1,5 +1,5 @@
/*
- * SPDX-FileCopyrightText: Copyright (c) 1993-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+ * SPDX-FileCopyrightText: Copyright (c) 1993-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: LicenseRef-NvidiaProprietary
*
* NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
2 changes: 1 addition & 1 deletion include/NvInferVersion.h
@@ -21,7 +21,7 @@
#define NV_TENSORRT_MAJOR 8 //!< TensorRT major version.
#define NV_TENSORRT_MINOR 6 //!< TensorRT minor version.
#define NV_TENSORRT_PATCH 0 //!< TensorRT patch version.
-#define NV_TENSORRT_BUILD 9 //!< TensorRT build number.
+#define NV_TENSORRT_BUILD 12 //!< TensorRT build number.

#define NV_TENSORRT_LWS_MAJOR 0 //!< TensorRT LWS major version.
#define NV_TENSORRT_LWS_MINOR 0 //!< TensorRT LWS minor version.
