
Commit 099d83b

Fix: links that lead to page not found (#308)
1 parent 6341e40 commit 099d83b

3 files changed: +3 −3 lines changed

nvidia_deeplearningexamples_resnet50.md

+1 −1
@@ -31,7 +31,7 @@ The model is initialized as described in [Delving deep into rectifiers: Surpassi
 
 This model is trained with mixed precision using Tensor Cores on Volta, Turing, and the NVIDIA Ampere GPU architectures. Therefore, researchers can get results over 2x faster than training without Tensor Cores, while experiencing the benefits of mixed precision training. This model is tested against each NGC monthly container release to ensure consistent accuracy and performance over time.
 
-Note that the ResNet50 v1.5 model can be deployed for inference on the [NVIDIA Triton Inference Server](https://github.com/NVIDIA/trtis-inference-server) using TorchScript, ONNX Runtime or TensorRT as an execution backend. For details check [NGC](https://ngc.nvidia.com/catalog/resources/nvidia:resnet_for_triton_from_pytorch)
+Note that the ResNet50 v1.5 model can be deployed for inference on the [NVIDIA Triton Inference Server](https://github.com/triton-inference-server/server) using TorchScript, ONNX Runtime or TensorRT as an execution backend. For details check [NGC](https://ngc.nvidia.com/catalog/resources/nvidia:resnet_for_triton_from_pytorch)
 
 
 ### Example
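
Since the changed line mentions deploying the model on Triton via TorchScript, here is a minimal sketch of that export step, assuming the `nvidia_resnet50` torch.hub entry point documented on the hub page and a standard Triton model-repository layout (`model_repository/resnet50/1/model.pt` is a hypothetical path):

```python
import torch
from pathlib import Path

# Load ResNet50 v1.5 from the NVIDIA DeepLearningExamples hub
# (entry point as documented on the hub page above).
model = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub',
                       'nvidia_resnet50', pretrained=True)
model.eval()

# Trace with a representative NCHW input to obtain TorchScript,
# the format Triton's PyTorch backend serves.
example = torch.randn(1, 3, 224, 224)
traced = torch.jit.trace(model, example)

# Hypothetical repository path; Triton expects <model>/<version>/model.pt.
out = Path('model_repository/resnet50/1')
out.mkdir(parents=True, exist_ok=True)
traced.save(str(out / 'model.pt'))
```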

nvidia_deeplearningexamples_resnext.md

+1 −1
@@ -28,7 +28,7 @@ This model is trained with mixed precision using Tensor Cores on Volta, Turing,
 
 We use [NHWC data layout](https://pytorch.org/tutorials/intermediate/memory_format_tutorial.html) when training using Mixed Precision.
 
-Note that the ResNeXt101-32x4d model can be deployed for inference on the [NVIDIA Triton Inference Server](https://github.com/NVIDIA/trtis-inference-server) using TorchScript, ONNX Runtime or TensorRT as an execution backend. For details check [NGC](https://ngc.nvidia.com/catalog/resources/nvidia:resnext_for_triton_from_pytorch)
+Note that the ResNeXt101-32x4d model can be deployed for inference on the [NVIDIA Triton Inference Server](https://github.com/triton-inference-server/server) using TorchScript, ONNX Runtime or TensorRT as an execution backend. For details check [NGC](https://ngc.nvidia.com/catalog/resources/nvidia:resnext_for_triton_from_pytorch)
 
 #### Model architecture
 
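
The NHWC remark in the context lines above is worth showing concretely. Below is a minimal sketch of channels-last (NHWC) mixed-precision training, assuming a CUDA device; the tiny model, optimizer, and random batch are placeholders, not the repository's training recipe:

```python
import torch

# Placeholder model; the real recipe trains ResNeXt101-32x4d.
model = torch.nn.Conv2d(3, 64, kernel_size=7).cuda()
model = model.to(memory_format=torch.channels_last)  # NHWC weight layout
opt = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler()                 # loss scaling for FP16

x = torch.randn(8, 3, 224, 224, device='cuda')
x = x.to(memory_format=torch.channels_last)          # NHWC activations

with torch.cuda.amp.autocast():     # run the conv in FP16 on Tensor Cores
    loss = model(x).float().mean()  # placeholder loss

scaler.scale(loss).backward()
scaler.step(opt)
scaler.update()
```

NHWC is paired with mixed precision because cuDNN's Tensor Core convolution kernels run fastest on channels-last tensors, avoiding internal layout transposes.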
nvidia_deeplearningexamples_se-resnext.md

+1 −1
@@ -37,7 +37,7 @@ _Image source: [Squeeze-and-Excitation Networks](https://arxiv.org/pdf/1709.0150
 Image shows the architecture of SE block and where is it placed in ResNet bottleneck block.
 
 
-Note that the SE-ResNeXt101-32x4d model can be deployed for inference on the [NVIDIA Triton Inference Server](https://github.com/NVIDIA/trtis-inference-server) using TorchScript, ONNX Runtime or TensorRT as an execution backend. For details check [NGC](https://catalog.ngc.nvidia.com/orgs/nvidia/resources/se_resnext_for_triton_from_pytorch).
+Note that the SE-ResNeXt101-32x4d model can be deployed for inference on the [NVIDIA Triton Inference Server](https://github.com/triton-inference-server/server) using TorchScript, ONNX Runtime or TensorRT as an execution backend. For details check [NGC](https://catalog.ngc.nvidia.com/orgs/nvidia/resources/se_resnext_for_triton_from_pytorch).
 
 ### Example
 
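As a companion to the TorchScript sketch above, here is a minimal sketch of the ONNX export path for Triton's ONNX Runtime backend. The `nvidia_se_resnext101_32x4d` entry point and the repository path are assumptions patterned on the sibling hub pages:

```python
import torch
from pathlib import Path

# Assumed hub entry point, following the naming on the related pages.
model = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub',
                       'nvidia_se_resnext101_32x4d', pretrained=True)
model.eval()

dummy = torch.randn(1, 3, 224, 224)  # representative NCHW input

# Hypothetical repository path; Triton's ONNX backend expects
# <model>/<version>/model.onnx.
out = Path('model_repository/se_resnext/1')
out.mkdir(parents=True, exist_ok=True)
torch.onnx.export(
    model, dummy, str(out / 'model.onnx'),
    input_names=['input'], output_names=['output'],
    dynamic_axes={'input': {0: 'batch'}, 'output': {0: 'batch'}},
)
```

From the same ONNX file, TensorRT can also build an engine, so one export covers two of the three execution backends the note mentions.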