
Commit c89bc83

ilyasher authored and rajeevsrao committed
Update TensorRT to 8.6.1
Signed-off-by: Ilya Sherstyuk <[email protected]>
1 parent ccde81b commit c89bc83

File tree

578 files changed (+4943, -3505 lines)


.github/ISSUE_TEMPLATE/bug_report.md

+39 -16

@@ -1,44 +1,67 @@
 ---
-name: TensorRT OSS Bug Report
-about: Report any bugs to help us improve TensorRT.
-title: ''
+name: Report a TensorRT issue
+about: The more information you share, the more feedback we can provide.
+title: 'XXX failure of TensorRT X.Y when running XXX on GPU XXX'
 labels: ''
 assignees: ''
 
 ---
 
 ## Description
 
-<!-- A clear and concise description of the bug or issue. -->
+<!--
+A clear and concise description of the issue.
+
+For example: I tried to run model ABC on GPU, but it fails with the error below (share a 2-3 line error log).
+-->
 
 
 ## Environment
 
-**TensorRT Version**:
-**NVIDIA GPU**:
-**NVIDIA Driver Version**:
-**CUDA Version**:
-**CUDNN Version**:
-**Operating System**:
-**Python Version (if applicable)**:
-**Tensorflow Version (if applicable)**:
-**PyTorch Version (if applicable)**:
-**Baremetal or Container (if so, version)**:
+<!-- Please share any setup information you know. This will help us to understand and address your case. -->
+
+**TensorRT Version**:
+
+**NVIDIA GPU**:
+
+**NVIDIA Driver Version**:
+
+**CUDA Version**:
+
+**CUDNN Version**:
+
+
+Operating System:
+
+Python Version (if applicable):
+
+Tensorflow Version (if applicable):
+
+PyTorch Version (if applicable):
+
+Baremetal or Container (if so, version):
 
 
 ## Relevant Files
 
 <!-- Please include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive/Dropbox, etc.) -->
 
+**Model link**:
+
 
 ## Steps To Reproduce
 
-<!--
+<!--
 Craft a minimal bug report following this guide - https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports
 
 Please include:
 * Exact steps/commands to build your repro
 * Exact steps/commands to run your repro
-* Full traceback of errors encountered
+* Full traceback of errors encountered
 -->
 
+**Commands or scripts**:
+
+**Have you tried [the latest release](https://developer.nvidia.com/tensorrt)?**:
+
+**Can this model run on other frameworks?** For example run ONNX model with ONNXRuntime (`polygraphy run <model.onnx> --onnxrt`):
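The new template front-loads environment details. A small helper like the one below can gather those fields in one pass; `probe` is a hypothetical name, not part of this commit, and it degrades gracefully when a tool such as `nvidia-smi` is absent:

```shell
# Hedged sketch: collect the environment fields the updated bug-report form asks for.
# "probe" is a made-up helper, not part of the TensorRT repository.
probe() {
  # Print "label: value", or "label: unavailable" if the command fails or is missing.
  label="$1"; shift
  if out="$("$@" 2>/dev/null)"; then
    printf '%s: %s\n' "$label" "$out"
  else
    printf '%s: unavailable\n' "$label"
  fi
}

probe "NVIDIA Driver Version" nvidia-smi --query-gpu=driver_version --format=csv,noheader
probe "CUDA Version" nvcc --version
probe "Operating System" uname -sr
probe "Python Version (if applicable)" python3 --version
```

Pasting its output into the Environment section covers most of the fields above without hand-copying version strings.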

CHANGELOG.md

+29 -18

@@ -1,34 +1,45 @@
 # TensorRT OSS Release Changelog
 
-## [8.6.0 EA](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/tensorrt-8.html#tensorrt-8) - 2023-03-14
+## [8.6.1 GA](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/#rel-8-6-1) - 2023-05-02
+
+TensorRT OSS release corresponding to TensorRT 8.6.1.6 GA release.
+- Updates since [TensorRT 8.6.0 EA release](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/#rel-8-6-0-EA).
+- Please refer to the [TensorRT 8.6.1.6 GA release notes](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/#rel-8-6-1) for more information.
+
+Key Features and Updates:
+
+- Added a new flag `--use-cuda-graph` to demoDiffusion to improve performance.
+- Optimized GPT2 and T5 HuggingFace demos to use fp16 I/O tensors for fp16 networks.
+
+## [8.6.0 EA](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/#rel-8-6-0-EA) - 2023-03-10
 
 TensorRT OSS release corresponding to TensorRT 8.6.0.12 EA release.
-- Updates since [TensorRT 8.5.3 GA release](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/tensorrt-8.html#rel-8-5-3).
-- Please refer to the [TensorRT 8.6.0.12 EA release notes](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/tensorrt-8.html#tensorrt-8) for more information.
+- Updates since [TensorRT 8.5.3 GA release](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/#rel-8-5-3).
+- Please refer to the [TensorRT 8.6.0.12 EA release notes](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/#rel-8-6-0-EA) for more information.
 
 Key Features and Updates:
 
 - demoDiffusion acceleration is now supported out of the box in TensorRT without requiring plugins.
 - The following plugins have been removed accordingly: GroupNorm, LayerNorm, MultiHeadCrossAttention, MultiHeadFlashAttention, SeqLen2Spatial, and SplitGeLU.
 - Added a new sample called onnx_custom_plugin.
 
-## [8.5.3 GA](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/tensorrt-8.html#rel-8-5-3) - 2023-01-30
+## [8.5.3 GA](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/#rel-8-5-3) - 2023-01-30
 
 TensorRT OSS release corresponding to TensorRT 8.5.3.1 GA release.
-- Updates since [TensorRT 8.5.2 GA release](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/tensorrt-8.html#rel-8-5-2).
-- Please refer to the [TensorRT 8.5.3 GA release notes](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/tensorrt-8.html#rel-8-5-3) for more information.
+- Updates since [TensorRT 8.5.2 GA release](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/#rel-8-5-2).
+- Please refer to the [TensorRT 8.5.3 GA release notes](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/#rel-8-5-3) for more information.
 
 Key Features and Updates:
 
 - Added the following HuggingFace demos: GPT-J-6B, GPT2-XL, and GPT2-Medium
 - Added nvinfer1::plugin namespace
 - Optimized KV Cache performance for T5
 
-## [8.5.2 GA](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/tensorrt-8.html#rel-8-5-2) - 2022-12-12
+## [8.5.2 GA](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/#rel-8-5-2) - 2022-12-12
 
 TensorRT OSS release corresponding to TensorRT 8.5.2.2 GA release.
-- Updates since [TensorRT 8.5.1 GA release](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/tensorrt-8.html#rel-8-5-1).
-- Please refer to the [TensorRT 8.5.2 GA release notes](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/tensorrt-8.html#rel-8-5-2) for more information.
+- Updates since [TensorRT 8.5.1 GA release](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/#rel-8-5-1).
+- Please refer to the [TensorRT 8.5.2 GA release notes](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/#rel-8-5-2) for more information.
 
 Key Features and Updates:
 
@@ -51,11 +62,11 @@ Key Features and Updates:
 ### Removed
 - None
 
-## [8.5.1 GA](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/tensorrt-8.html#rel-8-5-1) - 2022-11-01
+## [8.5.1 GA](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/#rel-8-5-1) - 2022-11-01
 
 TensorRT OSS release corresponding to TensorRT 8.5.1.7 GA release.
 - Updates since [TensorRT 8.4.1 GA release](https://github.com/NVIDIA/TensorRT/releases/tag/8.4.1).
-- Please refer to the [TensorRT 8.5.1 GA release notes](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/tensorrt-8.html#rel-8-5-1) for more information.
+- Please refer to the [TensorRT 8.5.1 GA release notes](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/#rel-8-5-1) for more information.
 
 Key Features and Updates:
 
@@ -84,7 +95,7 @@ Key Features and Updates:
 
 ## [22.08](https://github.com/NVIDIA/TensorRT/releases/tag/22.08) - 2022-08-16
 
-Updated TensorRT version to 8.4.2 - see the [TensorRT 8.4.2 release notes](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/tensorrt-8.html#rel-8-4-2) for more information
+Updated TensorRT version to 8.4.2 - see the [TensorRT 8.4.2 release notes](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/#rel-8-4-2) for more information
 
 ### Changed
 - Updated default protobuf version to 3.20.x
@@ -114,11 +125,11 @@ Updated TensorRT version to 8.4.2 - see the [TensorRT 8.4.2 release notes](https
 ### Removed
 - None
 
-## [8.4.1 GA](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/tensorrt-8.html#rel-8-4-1) - 2022-06-14
+## [8.4.1 GA](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/#rel-8-4-1) - 2022-06-14
 
 TensorRT OSS release corresponding to TensorRT 8.4.1.5 GA release.
 - Updates since [TensorRT 8.2.1 GA release](https://github.com/NVIDIA/TensorRT/releases/tag/8.2.1).
-- Please refer to the [TensorRT 8.4.1 GA release notes](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/tensorrt-8.html#rel-8-4-1) for more information.
+- Please refer to the [TensorRT 8.4.1 GA release notes](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/#rel-8-4-1) for more information.
 
 Key Features and Updates:
 
@@ -258,11 +269,11 @@ Key Features and Updates:
 ### Removed
 - Unused source file(s) in demo/BERT
 
-## [8.2.1 GA](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/tensorrt-8.html#rel-8-2-1) - 2021-11-24
+## [8.2.1 GA](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/#rel-8-2-1) - 2021-11-24
 
 TensorRT OSS release corresponding to TensorRT 8.2.1.8 GA release.
 - Updates since [TensorRT 8.2.0 EA release](https://github.com/NVIDIA/TensorRT/releases/tag/8.2.0-EA).
-- Please refer to the [TensorRT 8.2.1 GA release notes](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/tensorrt-8.html#rel-8-2-1) for more information.
+- Please refer to the [TensorRT 8.2.1 GA release notes](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/#rel-8-2-1) for more information.
 
 - ONNX parser [v8.2.1](https://github.com/onnx/onnx-tensorrt/releases/tag/release%2F8.2-GA)
   - Removed duplicate constant layer checks that caused some performance regressions
@@ -316,7 +327,7 @@ TensorRT OSS release corresponding to TensorRT 8.2.1.8 GA release.
 - Updated Python documentation for `add_reduce`, `add_top_k`, and `ISoftMaxLayer`
 - Renamed default GitHub branch to `main` and updated hyperlinks
 
-## [8.2.0 EA](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/tensorrt-8.html#rel-8-2-0-EA) - 2021-10-05
+## [8.2.0 EA](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/#rel-8-2-0-EA) - 2021-10-05
 ### Added
 - [Demo applications](demo/HuggingFace) showcasing TensorRT inference of [HuggingFace Transformers](https://huggingface.co/transformers).
   - Support is currently extended to GPT-2 and T5 models.
@@ -426,7 +437,7 @@ TensorRT OSS release corresponding to TensorRT 8.2.1.8 GA release.
 ## [21.07](https://github.com/NVIDIA/TensorRT/releases/tag/21.07) - 2021-07-21
 Identical to the TensorRT-OSS [8.0.1](https://github.com/NVIDIA/TensorRT/releases/tag/8.0.1) Release.
 
-## [8.0.1](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/tensorrt-8.html#tensorrt-8) - 2021-07-02
+## [8.0.1](https://docs.nvidia.com/deeplearning/tensorrt/release-notes/#tensorrt-8) - 2021-07-02
 ### Added
 - Added support for the following ONNX operators: `Celu`, `CumSum`, `EyeLike`, `GatherElements`, `GlobalLpPool`, `GreaterOrEqual`, `LessOrEqual`, `LpNormalization`, `LpPool`, `ReverseSequence`, and `SoftmaxCrossEntropyLoss` [details]().
 - Rehauled `Resize` ONNX operator, now fully supporting the following modes:

CMakeLists.txt

+3 -3

@@ -48,7 +48,7 @@ set(CMAKE_SKIP_BUILD_RPATH True)
 project(TensorRT
         LANGUAGES CXX CUDA
         VERSION ${TRT_VERSION}
-        DESCRIPTION "TensorRT is a C++ library that facilitates high performance inference on NVIDIA GPUs and deep learning accelerators."
+        DESCRIPTION "TensorRT is a C++ library that facilitates high-performance inference on NVIDIA GPUs and deep learning accelerators."
         HOMEPAGE_URL "https://github.com/NVIDIA/TensorRT")
 
 if(CMAKE_INSTALL_PREFIX_INITIALIZED_TO_DEFAULT)
@@ -88,8 +88,8 @@ endif()
 ############################################################################################
 # Dependencies
 
-set(DEFAULT_CUDA_VERSION 11.3.1)
-set(DEFAULT_CUDNN_VERSION 8.2)
+set(DEFAULT_CUDA_VERSION 12.0.1)
+set(DEFAULT_CUDNN_VERSION 8.8)
 set(DEFAULT_PROTOBUF_VERSION 3.20.1)
 
 # Dependency Version Resolution

README.md

+13 -13

@@ -25,8 +25,8 @@ You can skip the **Build** section to enjoy TensorRT with Python.
 ## Prerequisites
 To build the TensorRT-OSS components, you will first need the following software packages.
 
-**TensorRT EA build**
-* [TensorRT](https://developer.nvidia.com/nvidia-tensorrt-download) v8.6.0.12
+**TensorRT GA build**
+* [TensorRT](https://developer.nvidia.com/nvidia-tensorrt-download) v8.6.1.6
 
 **System Packages**
 * [CUDA](https://developer.nvidia.com/cuda-toolkit)
@@ -48,8 +48,8 @@ To build the TensorRT-OSS components, you will first need the following software
 * (Cross compilation for Jetson platform) [NVIDIA JetPack](https://developer.nvidia.com/embedded/jetpack) >= 5.0 (current support only for TensorRT 8.4.0 and TensorRT 8.5.2)
 * (Cross compilation for QNX platform) [QNX Toolchain](https://blackberry.qnx.com/en)
 * PyPI packages (for demo applications/tests)
-  * [onnx](https://pypi.org/project/onnx/) 1.9.0
-  * [onnxruntime](https://pypi.org/project/onnxruntime/) 1.8.0
+  * [onnx](https://pypi.org/project/onnx/)
+  * [onnxruntime](https://pypi.org/project/onnxruntime/)
   * [tensorflow-gpu](https://pypi.org/project/tensorflow/) >= 2.5.1
   * [Pillow](https://pypi.org/project/Pillow/) >= 9.0.1
   * [pycuda](https://pypi.org/project/pycuda/) < 2021.1
@@ -70,18 +70,18 @@ To build the TensorRT-OSS components, you will first need the following software
    git submodule update --init --recursive
    ```
 
-2. #### (Optional - if not using TensorRT container) Specify the TensorRT EA release build path
+2. #### (Optional - if not using TensorRT container) Specify the TensorRT GA release build path
 
    If using the TensorRT OSS build container, TensorRT libraries are preinstalled under `/usr/lib/x86_64-linux-gnu` and you may skip this step.
 
-   Else download and extract the TensorRT EA build from [NVIDIA Developer Zone](https://developer.nvidia.com/nvidia-tensorrt-download).
+   Else download and extract the TensorRT GA build from [NVIDIA Developer Zone](https://developer.nvidia.com/nvidia-tensorrt-download).
 
    **Example: Ubuntu 20.04 on x86-64 with cuda-12.0**
 
    ```bash
   cd ~/Downloads
-  tar -xvzf TensorRT-8.6.0.12.Linux.x86_64-gnu.cuda-12.0.tar.gz
-  export TRT_LIBPATH=`pwd`/TensorRT-8.6.0.12
+  tar -xvzf TensorRT-8.6.1.6.Linux.x86_64-gnu.cuda-12.0.tar.gz
+  export TRT_LIBPATH=`pwd`/TensorRT-8.6.1.6
   ```
 
 
@@ -111,9 +111,9 @@ For Linux platforms, we recommend that you generate a docker container for build
   ```bash
   ./docker/build.sh --file docker/ubuntu-cross-aarch64.Dockerfile --tag tensorrt-jetpack-cuda11.4
   ```
-  **Example: Ubuntu 20.04 on aarch64 with cuda-11.4.2**
+  **Example: Ubuntu 20.04 on aarch64 with cuda-11.8**
   ```bash
-  ./docker/build.sh --file docker/ubuntu-20.04-aarch64.Dockerfile --tag tensorrt-aarch64-ubuntu20.04-cuda11.4
+  ./docker/build.sh --file docker/ubuntu-20.04-aarch64.Dockerfile --tag tensorrt-aarch64-ubuntu20.04-cuda11.8 --cuda 11.8.0
   ```
 
 2. #### Launch the TensorRT-OSS build container.
@@ -143,7 +143,7 @@ For Linux platforms, we recommend that you generate a docker container for build
   yum -y install centos-release-scl
   yum-config-manager --enable rhel-server-rhscl-7-rpms
   yum -y install devtoolset-8
-  export PATH="/opt/rh/devtoolset-8/root/bin:${PATH}
+  export PATH="/opt/rh/devtoolset-8/root/bin:${PATH}"
   ```
 
  **Example: Linux (aarch64) build with default cuda-12.0**
@@ -174,14 +174,14 @@ For Linux platforms, we recommend that you generate a docker container for build
  > NOTE: The latest JetPack SDK v5.1 only supports TensorRT 8.5.2.
 
 > NOTE:
-<br> 1. The default CUDA version used by CMake is 11.8.0. To override this, for example to 10.2, append `-DCUDA_VERSION=10.2` to the cmake command.
+<br> 1. The default CUDA version used by CMake is 12.0.1. To override this, for example to 11.8, append `-DCUDA_VERSION=11.8` to the cmake command.
 <br> 2. If samples fail to link on CentOS7, create this symbolic link: `ln -s $TRT_OUT_DIR/libnvinfer_plugin.so $TRT_OUT_DIR/libnvinfer_plugin.so.8`
 * Required CMake build arguments are:
   - `TRT_LIB_DIR`: Path to the TensorRT installation directory containing libraries.
   - `TRT_OUT_DIR`: Output directory where generated build artifacts will be copied.
 * Optional CMake build arguments:
   - `CMAKE_BUILD_TYPE`: Specify if binaries generated are for release or debug (contain debug symbols). Values consists of [`Release`] | `Debug`
-  - `CUDA_VERISON`: The version of CUDA to target, for example [`11.7.1`].
+  - `CUDA_VERSION`: The version of CUDA to target, for example [`11.7.1`].
   - `CUDNN_VERSION`: The version of cuDNN to target, for example [`8.6`].
   - `PROTOBUF_VERSION`: The version of Protobuf to use, for example [`3.0.0`]. Note: Changing this will not configure CMake to use a system version of Protobuf, it will configure CMake to download and try building that version.
   - `CMAKE_TOOLCHAIN_FILE`: The path to a toolchain file for cross compilation.
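The README change above boils down to extracting the 8.6.1.6 GA tarball and pointing `TRT_LIBPATH` at it. A minimal sketch of that step, assuming the archive has already been downloaded; here a placeholder directory stands in for the extracted archive so the path logic can be exercised without the real download:

```shell
# Sketch of README step 2: set TRT_LIBPATH to the extracted GA build.
workdir=$(mktemp -d)
cd "$workdir"
# Real step, after downloading from NVIDIA Developer Zone:
# tar -xvzf TensorRT-8.6.1.6.Linux.x86_64-gnu.cuda-12.0.tar.gz
mkdir -p TensorRT-8.6.1.6   # simulates the directory the tarball would create
export TRT_LIBPATH="$(pwd)/TensorRT-8.6.1.6"
echo "TRT_LIBPATH=$TRT_LIBPATH"
```

Using `$(pwd)` rather than backticks is equivalent to the README's form; the exported variable is what the CMake build later consumes via `TRT_LIB_DIR`.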

VERSION

+1 -1

@@ -1 +1 @@
-8.6.0.12
+8.6.1.6

cmake/toolchains/cmake_aarch64-native.toolchain

+1 -1

@@ -1,5 +1,5 @@
 #
-# SPDX-FileCopyrightText: Copyright (c) 1993-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-FileCopyrightText: Copyright (c) 1993-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
 # SPDX-License-Identifier: Apache-2.0
 #
 # Licensed under the Apache License, Version 2.0 (the "License");

demo/BERT/CMakeLists.txt

+1 -1

@@ -1,5 +1,5 @@
 #
-# SPDX-FileCopyrightText: Copyright (c) 1993-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-FileCopyrightText: Copyright (c) 1993-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
 # SPDX-License-Identifier: Apache-2.0
 #
 # Licensed under the Apache License, Version 2.0 (the "License");

demo/BERT/README.md

mode changed 100644 → 100755
+2 -1

@@ -64,7 +64,7 @@ Since the tokenizer and projection of the final predictions are not nearly as co
 
 The tokenizer splits the input text into tokens that can be consumed by the model. For details on this process, see [this tutorial](https://mccormickml.com/2019/05/14/BERT-word-embeddings-tutorial/).
 
-To run the BERT model in TensorRT, we construct the model using TensorRT APIs and import the weights from a pre-trained TensorFlow checkpoint from [NGC](https://ngc.nvidia.com/models/nvidian:bert_tf_v2_large_fp16_128). Finally, a TensorRT engine is generated and serialized to the disk. The various inference scripts then load this engine for inference.
+To run the BERT model in TensorRT, we construct the model using TensorRT APIs and import the weights from a pre-trained TensorFlow checkpoint from [NGC](https://catalog.ngc.nvidia.com/orgs/nvidia/models/bert_tf_ckpt_large_qa_squad2_amp_128). Finally, a TensorRT engine is generated and serialized to the disk. The various inference scripts then load this engine for inference.
 
 Lastly, the tokens predicted by the model are projected back to the original text to get a final result.
 
@@ -586,3 +586,4 @@ Results were obtained by running `scripts/inference_benchmark.sh --gpu Ampere` o
 | 384 | 32 | 40.79 | 40.97 | 40.46 |
 | 384 | 64 | 78.04 | 78.41 | 77.51 |
 | 384 | 128 | 151.33 | 151.62 | 150.76 |
+

demo/BERT/builder.py

+1 -1

@@ -1,6 +1,6 @@
 #!/usr/bin/env python3
 #
-# SPDX-FileCopyrightText: Copyright (c) 1993-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-FileCopyrightText: Copyright (c) 1993-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
 # SPDX-License-Identifier: Apache-2.0
 #
 # Licensed under the Apache License, Version 2.0 (the "License");

0 commit comments
