Is your feature request related to a problem? Please describe.
Currently, the Dockerfile uses nvidia/cuda:11.8.0-cudnn8-devel-ubuntu22.04, which supports only CUDA 11.8. When attempting to use TensorRT with newer GPU hardware and drivers that require CUDA 12.x, the existing Docker environment fails to provide compatible dependencies, leading to runtime errors and compatibility issues with modern AI frameworks.
Describe the solution you'd like
Update the Docker base image to:

FROM nvidia/cuda:12.4.1-cudnn-devel-ubuntu22.04

This change will:
- Provide native CUDA 12.4.1 and cuDNN support
- Enable seamless TensorRT integration
- Align with modern GPU driver requirements (>= 545.x)
- Maintain Ubuntu 22.04 LTS compatibility
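The driver floor in the list above (>= 545.x for CUDA 12.4) can be expressed as a quick pre-flight helper. This is a sketch for illustration only: the function name and parsing logic are assumptions, not an NVIDIA-provided API, and the mapping covers only the versions named in this issue.

```python
# Sketch: check whether the driver version string reported by nvidia-smi
# (e.g. "550.54.14") satisfies the >= 545.x requirement for CUDA 12.4.
MIN_DRIVER_MAJOR_FOR_CUDA_12_4 = 545  # from this issue's requirement list


def driver_supports_cuda_12_4(driver_version: str) -> bool:
    """Compare only the leading major component of the driver version."""
    major = int(driver_version.split(".")[0])
    return major >= MIN_DRIVER_MAJOR_FOR_CUDA_12_4


print(driver_supports_cuda_12_4("550.54.14"))   # True
print(driver_supports_cuda_12_4("535.104.05"))  # False: below 545.x
```
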
Describe alternatives you've considered
Manually installing CUDA 12.4 in the existing container:
- Would require complex Dockerfile modifications
- Risks version conflicts with the base CUDA 11.8 libraries

Using CUDA forward compatibility:
- May introduce unexpected behavior
- Doesn't resolve the underlying dependency mismatches

Maintaining separate Dockerfiles:
- Increases maintenance overhead
- Fragments development/production environments
Additional context
Validation Plan:
1. Verify the CUDA version post-build:
   nvidia-smi | grep CUDA
2. Confirm TensorRT availability in the Python environment:
   import tensorrt
   print(tensorrt.__version__)  # Should return >= 10.0.1
Potential Impact:
- May require updating related dependency versions (e.g., PyTorch, TorchAudio)
- Need to validate all existing pipeline functionality with CUDA 12.x
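The impact checks above can be automated with a small version gate once the image is rebuilt. The 10.0.1 floor comes from the validation plan; the parsing helpers are illustrative assumptions and handle plain dotted versions only (local suffixes such as "+cu121" are not handled here).

```python
# Sketch: gate on the minimum TensorRT version named in the validation plan.
def parse_version(v: str) -> tuple:
    """Turn '10.0.1' into (10, 0, 1); plain dotted versions only."""
    return tuple(int(part) for part in v.split(".")[:3])


def meets_minimum(installed: str, minimum: str) -> bool:
    return parse_version(installed) >= parse_version(minimum)


try:
    import tensorrt  # only present inside the rebuilt container
    status = "OK" if meets_minimum(tensorrt.__version__, "10.0.1") else "too old"
    print("tensorrt", tensorrt.__version__, status)
except ImportError:
    print("tensorrt: not installed")
```
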
- Upgrade base image from nvidia/cuda:11.8.0-cudnn8-devel-ubuntu22.04 to nvidia/cuda:12.4.1-cudnn-devel-ubuntu22.04
- Enable CUDA 12.4 runtime environment
- Ensure TensorRT dependency compatibility
- Validation steps:
- Verify CUDA version via nvidia-smi after build
- Test import tensorrt in container without errors
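The checklist above can be sketched as a Dockerfile fragment. The apt and pip install lines are illustrative assumptions (the project's actual Dockerfile carries its own setup steps), and the tensorrt version pin mirrors the >= 10.0.1 floor from the validation plan:

```dockerfile
# Upgraded base image: CUDA 12.4.1 + cuDNN on Ubuntu 22.04 LTS
FROM nvidia/cuda:12.4.1-cudnn-devel-ubuntu22.04

# pip is not guaranteed in the devel image; install it first (assumption)
RUN apt-get update && apt-get install -y --no-install-recommends python3-pip \
    && rm -rf /var/lib/apt/lists/*

# TensorRT Python bindings; pin is illustrative, match the project's requirements
RUN pip install --no-cache-dir "tensorrt>=10.0.1"
```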
Closes FunAudioLLM#935
References:
- NVIDIA CUDA 12.4 Release Notes
- TensorRT CUDA Compatibility Matrix