Releases: NVIDIA/DALI
DALI v1.44.0
Key Features and Enhancements
This DALI release includes the following key features and enhancements:
- The dynamic executor (exec_dynamic) is no longer experimental. It supports GPU-to-CPU transfers and reduces memory consumption. (#5704) See the pipeline sketch after this list.
- Added support for zero-copy output transfer with the dynamic executor. (#5684, #5673)
- Eliminated the outputs copy in PyTorch plugin. (#5699)
- Added dynamic executor support to TF plugin. (#5686)
- Optimized pipeline's output contiguity handling. (#5677)
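The sketch below illustrates how the promoted executor might be enabled. It is a minimal example, assuming exec_dynamic is accepted as a pipeline keyword argument (as the flag name above suggests) and that DataNode.cpu() performs the GPU-to-CPU transfer this executor enables; the dataset path is hypothetical.
```python
from nvidia.dali import pipeline_def, fn

@pipeline_def(batch_size=8, num_threads=4, device_id=0, exec_dynamic=True)
def mixed_pipeline():
    # Hypothetical dataset location.
    encoded, labels = fn.readers.file(file_root="data/images")
    images = fn.decoders.image(encoded, device="mixed")  # decoded on the GPU
    # With the dynamic executor, GPU results can be transferred back to the CPU
    # in the middle of the graph and returned as CPU outputs.
    return images, images.cpu(), labels

pipe = mixed_pipeline()
pipe.build()
gpu_images, cpu_images, labels = pipe.run()
```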
Fixed Issues
- Restricted nvImageCodec version in DALI wheel dependencies list, as the most recent nvImageCodec (0.4.0) is incompatible. (#5709)
- Fixed custom stream handling on a non-default device in fn.external_source (#5690).
- Fixed a problem with using DALI on Python 3.12 when distutils/setuptools is not installed.
- Fixed incorrect stream usage in fn.experimental.inputs.video (#5682).
- Fixed a possible hang in the video decoder when rewinding near the last keyframe (#5676, #5669).
- Fixed dont_use_mmap option handling in fn.readers.webdataset (#5683).
- Fixed redundant usage of pinned memory in the CPU fn.readers.numpy reader (#5678).
- Fixed the dynamic executor's handling of operators that produce no outputs (#5674).
Improvements
- Make exec_dynamic non-experimental (alternative formatting) (#5704)
- Use zero-copy outputs with PyTorch (#5699)
- Add Python 3.13 (experimental) support (#5692)
- Add proper NVTX markers to Executor2. (#5694)
- Add Efficientnet pipeline to hw_bench script (#5691)
- Stream aware outputs (#5684)
- Update DALI_DEPS_VERSION for new OpenSSL (#5689)
- Add dynamic executor support to TF plugin. (#5686)
- Make black and flake8 run independently. (#5685)
- Update of FFmpeg to n7.1 (#5681)
- Deps update 10 2024 (#5670)
- Refactor operator output contiguity handling (#5677)
- Add ready event to Tensor and TensorList. (#5673)
Bug Fixes
- Fix nvimgcodec version check, do not install it separately in tests env (#5713)
- Limit the upper versions of DALI wheel installation dependencies (#5710)
- Limit the maximum version of nvimagecodec for current DALI (#5709)
- Use exec-dynamic in RNN-t pipeline. Minor fix to exec2. (#5706)
- Check JAX version and invoke dlpack manually for jax pre-0.4.16. (#5702)
- Fix nose imports (#5698)
- ExternalSource refactoring and fixing (#5690)
- Move from deprecated distutils to packaging (#5687)
- Make sure that the proper video stream index is used by the GPU decoder (#5682)
- Add an ability to rewind at the end of the video (#5676)
- Fix inverted mmap inside webdataset reader (#5683)
- Fix the redundant usage of pinned memory in the numpy cpu reader (#5678)
- Fix handling of tasks with zero outputs. (#5674)
- Add an ability to retry rewind to the one before the last keyframe (#5669)
Breaking API changes
There are no breaking changes in this DALI release.
Deprecated features
No features were deprecated in this release.
Known issues:
- The most recent nvImageCodec (0.4.0) is currently incompatible with DALI. The Python wheel for DALI 1.44 pins the dependency to 0.3.0, but older releases do not specify the required version explicitly. Users of previous DALI releases may need to manually install an older nvImageCodec in order to use fn.experimental.decoders.image.* or, for DALI 1.39 and 1.40, fn.decoders.image.*. The compatible version can be installed with pip install nvidia-nvimgcodec-cu12~=0.3.0.
- The following operators: experimental.readers.fits, experimental.decoders.video, and experimental.inputs.video do not currently support checkpointing.
- The video loader operator requires that key frames occur at least every 10 to 15 frames of the video stream. If key frames occur less frequently than that, the returned frames might be out of sync.
- The experimental VideoReaderDecoder does not support open GOP. It will not report an error and might produce invalid frames. VideoReader uses a heuristic approach to detect open GOP and should work in most common cases.
- The DALI TensorFlow plugin might not be compatible with TensorFlow versions 1.15.0 and later. To use DALI with a TensorFlow version that does not have a prebuilt plugin binary shipped with DALI, make sure that the compiler used to build TensorFlow exists on the system during the plugin installation. (Depending on the particular version, you can use GCC 4.8.4, GCC 4.8.5, or GCC 5.4.)
- In experimental debug and eager modes, the GPU external source is not properly synchronized with DALI internal streams. As a workaround, you can manually synchronize the device before returning the data from the callback (see the sketch after this list).
- Due to some known issues with Meltdown/Spectre mitigations, DALI shows the best performance when running in Docker with escalated privileges, for example: privileged=yes in Extra Settings for AWS data points, --privileged or --security-opt seccomp=unconfined for bare Docker.
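For the debug/eager-mode synchronization workaround mentioned above, a minimal sketch is shown below. It assumes the GPU batches are produced with CuPy; the batch size and shapes are illustrative only.
```python
import cupy as cp
from nvidia.dali import pipeline_def, fn

def gpu_batch_callback():
    # Produce one GPU batch per call (illustrative shapes).
    batch = [cp.random.random((224, 224, 3), dtype=cp.float32) for _ in range(8)]
    # Workaround: make sure the data is fully written before DALI consumes it.
    cp.cuda.Device().synchronize()
    return batch

@pipeline_def(batch_size=8, num_threads=2, device_id=0)
def debug_pipeline():
    images = fn.external_source(source=gpu_batch_callback, device="gpu")
    return images
```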
Binary builds
NOTE: DALI builds for CUDA 12 dynamically link the CUDA toolkit. To use DALI, install the latest CUDA toolkit.
CUDA 11.0 and CUDA 12.0 builds use CUDA toolkit enhanced compatibility.
They are built with the latest CUDA 11.x/12.x toolkit respectively but they can run on the latest,
stable CUDA 11.0/CUDA 12.0 capable drivers (450.80 or later and 525.60 or later respectively).
However, using the most recent driver may enable additional functionality.
More details can be found in enhanced CUDA compatibility guide.
Install via pip for CUDA 12.0:
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-cuda120==1.44.0
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-tf-plugin-cuda120==1.44.0
or just:
pip install nvidia-dali-cuda120==1.44.0
pip install nvidia-dali-tf-plugin-cuda120==1.44.0
For CUDA 11:
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-cuda110==1.44.0
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-tf-plugin-cuda110==1.44.0
or just:
pip install nvidia-dali-cuda110==1.44.0
pip install nvidia-dali-tf-plugin-cuda110==1.44.0
Or use direct download links (CUDA 12.0):
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda120/nvidia_dali_cuda120-1.44.0-20402542-py3-none-manylinux2014_x86_64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda120/nvidia_dali_cuda120-1.44.0-20402542-py3-none-manylinux2014_aarch64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-tf-plugin-cuda120/nvidia-dali-tf-plugin-cuda120-1.44.0.tar.gz
Or use direct download links (CUDA 11.0):
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda110/nvidia_dali_cuda110-1.44.0-20402554-py3-none-manylinux2014_x86_64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda110/nvidia_dali_cuda110-1.44.0-20402554-py3-none-manylinux2014_aarch64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-tf-plugin-cuda110/nvidia-dali-tf-plugin-cuda110-1.44.0.tar.gz
FFmpeg source code:
Libsndfile source code:
DALI v1.43.0
Key Features and Enhancements
This DALI release includes the following key features and enhancements:
- Added DataNode methods for runtime access to a batch's shape, layout, and source_info (#5650, #5648). See the sketch after this list.
- Added support for CUDA 12.6U2 (#5657)
- Added an experimental CV-CUDA resize operator (#5637)
- Improved performance of TensorList resizing and TypeTable (#5638, #5634).
- Improved DLPack support (to enable sharing ownership and pinned memory) (#5661).
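A minimal sketch of the runtime metadata access is shown below. DataNode.shape() follows the PR title listed under Improvements; source_info is read through fn.get_property, and the dataset path is hypothetical, so treat the exact spellings as assumptions to check against the documentation.
```python
from nvidia.dali import pipeline_def, fn

@pipeline_def(batch_size=4, num_threads=2, device_id=0)
def metadata_pipeline():
    encoded, _ = fn.readers.file(file_root="data/images")   # hypothetical path
    images = fn.decoders.image(encoded, device="mixed")
    shapes = images.shape()                                  # per-sample image shapes
    source = fn.get_property(encoded, key="source_info")     # originating file info
    return images, shapes, source
```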
Fixed Issues
- Fixed cleanup of pipelines containing PythonFunction. (#5668)
- Fixed CPU resize operator running with multiple resampling modes in a batch. (#5647)
Improvements
- Add support for bool type for the numba operator (#5666)
- Bump numpy version in Xavier tests. (#5663)
- DLPack support rework (#5661)
- Update links in DALI readme (#5660)
- Bump required NumPy version to 1.23. (#5658)
- Move to CUDA 12.6 update 2 (#5657)
- Increase number of the decoder bench iterations (#5655)
- GetProperty refactor + DataNode.property accessor (#5650)
- Remove and forbid direct inclusion of half.hpp. (#5654)
- Add DataNode.shape() (#5648)
- Fix conda build for Python 3.9 (#5649)
- Increase batch size in RN50 test for TF as on H100 it works well again (#5645)
- Add experimental CV-CUDA resize (#5637)
- Pin libprotobuf and protobuf to 5.24 which works with python 3.8-3.12 in conda (#5643)
- Optimize TensorList::Resize (#5638)
- TypeTable/TypeInfo optimization (#5634)
Bug Fixes
- Fix Pipeline reference leak in PythonFunction. (#5668)
- Fix constness in (Const)SampleView. Improve diagnostics. (#5664)
- Fix issues detected by Coverity (2024.09.30) (#5652)
- Fix CPU resize with mixed NN/other resampling filters. (#5647)
- Fix block size in TransposeTiled kernel test. (#5641)
- Fix the lack of the previous release in the docs switcher list (#5640)
Breaking API changes
There are no breaking changes in this DALI release.
Deprecated features
No features were deprecated in this release.
Known issues:
- The following operators: experimental.readers.fits, experimental.decoders.video, and experimental.inputs.video do not currently support checkpointing.
- The video loader operator requires that key frames occur at least every 10 to 15 frames of the video stream. If key frames occur less frequently than that, the returned frames might be out of sync.
- The experimental VideoReaderDecoder does not support open GOP. It will not report an error and might produce invalid frames. VideoReader uses a heuristic approach to detect open GOP and should work in most common cases.
- The DALI TensorFlow plugin might not be compatible with TensorFlow versions 1.15.0 and later. To use DALI with a TensorFlow version that does not have a prebuilt plugin binary shipped with DALI, make sure that the compiler used to build TensorFlow exists on the system during the plugin installation. (Depending on the particular version, you can use GCC 4.8.4, GCC 4.8.5, or GCC 5.4.)
- In experimental debug and eager modes, the GPU external source is not properly synchronized with DALI internal streams. As a workaround, you can manually synchronize the device before returning the data from the callback.
- Due to some known issues with Meltdown/Spectre mitigations, DALI shows the best performance when running in Docker with escalated privileges, for example: privileged=yes in Extra Settings for AWS data points, --privileged or --security-opt seccomp=unconfined for bare Docker.
Binary builds
NOTE: DALI builds for CUDA 12 dynamically link the CUDA toolkit. To use DALI, install the latest CUDA toolkit.
CUDA 11.0 and CUDA 12.0 builds use CUDA toolkit enhanced compatibility.
They are built with the latest CUDA 11.x/12.x toolkit respectively but they can run on the latest,
stable CUDA 11.0/CUDA 12.0 capable drivers (450.80 or later and 525.60 or later respectively).
However, using the most recent driver may enable additional functionality.
More details can be found in enhanced CUDA compatibility guide.
Install via pip for CUDA 12.0:
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-cuda120==1.43.0
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-tf-plugin-cuda120==1.43.0
or just:
pip install nvidia-dali-cuda120==1.43.0
pip install nvidia-dali-tf-plugin-cuda120==1.43.0
For CUDA 11:
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-cuda110==1.43.0
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-tf-plugin-cuda110==1.43.0
or just:
pip install nvidia-dali-cuda110==1.43.0
pip install nvidia-dali-tf-plugin-cuda110==1.43.0
Or use direct download links (CUDA 12.0):
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda120/nvidia_dali_cuda120-1.43.0-19497385-py3-none-manylinux2014_x86_64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda120/nvidia_dali_cuda120-1.43.0-19497385-py3-none-manylinux2014_aarch64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-tf-plugin-cuda120/nvidia-dali-tf-plugin-cuda120-1.43.0.tar.gz
Or use direct download links (CUDA 11.0):
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda110/nvidia_dali_cuda110-1.43.0-19497391-py3-none-manylinux2014_x86_64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda110/nvidia_dali_cuda110-1.43.0-19497391-py3-none-manylinux2014_aarch64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-tf-plugin-cuda110/nvidia-dali-tf-plugin-cuda110-1.43.0.tar.gz
FFmpeg source code:
Libsndfile source code:
DALI v1.42.0
Key Features and Enhancements
This DALI release includes the following key features and enhancements:
- Introduced more flexible execution in the DALI pipeline with the experimental_exec_dynamic flag (#5635, #5631, #5593, #5528, #5620, #5602, #5529, #5595); see the sketch after this list:
  - Enabled support for GPU-to-CPU transfers in a pipeline.
  - Added support for accessing CPU metadata of GPU outputs (e.g., the shape of GPU-decoded images/videos).
- Added support for CUDA 12.6U1 (#5616).
- Added an option to return the number of frames in the experimental video reader (#5628).
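The sketch below shows how the experimental flag might be used; the keyword name experimental_exec_dynamic is taken from the notes above, while the DataNode.cpu() transfer used to expose GPU metadata on the host and the dataset path are assumptions.
```python
from nvidia.dali import pipeline_def, fn

@pipeline_def(batch_size=4, num_threads=2, device_id=0,
              experimental_exec_dynamic=True)
def dynamic_pipeline():
    encoded, labels = fn.readers.file(file_root="data/images")  # hypothetical path
    images = fn.decoders.image(encoded, device="mixed")
    # GPU-to-CPU transfer inside the pipeline; shapes of the GPU-decoded
    # images become available as ordinary CPU outputs.
    return images, fn.shapes(images.cpu()), labels
```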
Fixed Issues
- Fixed handling of optical flow initialization failure (#5624).
Improvements
- Add metadata-only inputs. (#5635)
- Schema-based input device check (#5631)
- Enable GPU->CPU transfers (#5593)
- Adds enable_frame_num to the experimental video reader (#5628)
- Executor2 class implementation & tests (#5528)
- Executor 2.0: Per-operator stream assignment policy (#5620)
- Move to CUDA 12.6U1 (#5616)
- Executor 2.0: Stream assignment (#5602)
- Tasking: Test returning multiple outputs of type std::any. (#5529)
- Patch OSS vulnerabilities (#5612)
- Executor 2.0: Graph lowering (#5595)
- Make DALI tests compatible with Python 3.12 (#5452)
- Adjust the L3 perf test threshold for H100 runners (#5606)
- Add L1 image decoder DALI test (#5601)
Bug Fixes
- Fix multiple initialization attempts in optical flow operator. (#5624)
- Fix null pointer access when clearing incomplete workspace payload. (#5622)
Breaking API changes
There are no breaking changes in this DALI release.
Deprecated features
No features were deprecated in this release.
Known issues:
- The following operators: experimental.readers.fits, experimental.decoders.video, and experimental.inputs.video do not currently support checkpointing.
- The video loader operator requires that key frames occur at least every 10 to 15 frames of the video stream. If key frames occur less frequently than that, the returned frames might be out of sync.
- The experimental VideoReaderDecoder does not support open GOP. It will not report an error and might produce invalid frames. VideoReader uses a heuristic approach to detect open GOP and should work in most common cases.
- The DALI TensorFlow plugin might not be compatible with TensorFlow versions 1.15.0 and later. To use DALI with a TensorFlow version that does not have a prebuilt plugin binary shipped with DALI, make sure that the compiler used to build TensorFlow exists on the system during the plugin installation. (Depending on the particular version, you can use GCC 4.8.4, GCC 4.8.5, or GCC 5.4.)
- In experimental debug and eager modes, the GPU external source is not properly synchronized with DALI internal streams. As a workaround, you can manually synchronize the device before returning the data from the callback.
- Due to some known issues with Meltdown/Spectre mitigations, DALI shows the best performance when running in Docker with escalated privileges, for example: privileged=yes in Extra Settings for AWS data points, --privileged or --security-opt seccomp=unconfined for bare Docker.
Binary builds
NOTE: DALI builds for CUDA 12 dynamically link the CUDA toolkit. To use DALI, install the latest CUDA toolkit.
CUDA 11.0 and CUDA 12.0 builds use CUDA toolkit enhanced compatibility.
They are built with the latest CUDA 11.x/12.x toolkit respectively but they can run on the latest,
stable CUDA 11.0/CUDA 12.0 capable drivers (450.80 or later and 525.60 or later respectively).
However, using the most recent driver may enable additional functionality.
More details can be found in enhanced CUDA compatibility guide.
Install via pip for CUDA 12.0:
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-cuda120==1.42.0
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-tf-plugin-cuda120==1.42.0
or just:
pip install nvidia-dali-cuda120==1.42.0
pip install nvidia-dali-tf-plugin-cuda120==1.42.0
For CUDA 11:
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-cuda110==1.42.0
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-tf-plugin-cuda110==1.42.0
or just:
pip install nvidia-dali-cuda110==1.42.0
pip install nvidia-dali-tf-plugin-cuda110==1.42.0
Or use direct download links (CUDA 12.0):
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda120/nvidia_dali_cuda120-1.42.0-18507157-py3-none-manylinux2014_x86_64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda120/nvidia_dali_cuda120-1.42.0-18507157-py3-none-manylinux2014_aarch64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-tf-plugin-cuda120/nvidia-dali-tf-plugin-cuda120-1.42.0.tar.gz
Or use direct download links (CUDA 11.0):
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda110/nvidia_dali_cuda110-1.42.0-18507137-py3-none-manylinux2014_x86_64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda110/nvidia_dali_cuda110-1.42.0-18507137-py3-none-manylinux2014_aarch64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-tf-plugin-cuda110/nvidia-dali-tf-plugin-cuda110-1.42.0.tar.gz
FFmpeg source code:
Libsndfile source code:
DALI v1.41.0
Key Features and Enhancements
This DALI release includes the following key features and enhancements:
- Added support for CUDA 12.6. (#5596)
- Added the fn.experimental.warp_perspective operator. (#5542, #5575)
- Added the fn.random.beta random variate sampling operator. (#5550, #5571) See the sketch after this list.
- Added the fn.io.file.read operator that supports loading files from dynamically specified paths. (#5552, #5572)
- Enabled support for more simple types in fn.python_function, fn.ones, and fn.zeros. (#5598)
- Removed an unnecessary copy of tensor arguments fed into GPU operators. (#5590)
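A hedged sketch of the new fn.random.beta operator is shown below; the alpha/beta argument names follow the usual Beta-distribution parameterization and, like the dataset path, are assumptions.
```python
from nvidia.dali import pipeline_def, fn, types

@pipeline_def(batch_size=8, num_threads=2, device_id=0)
def beta_pipeline():
    encoded, _ = fn.readers.file(file_root="data/images")  # hypothetical path
    images = fn.decoders.image(encoded, device="mixed", output_type=types.RGB)
    # One Beta(0.5, 0.5)-distributed scalar per sample, e.g. a mixing weight.
    weight = fn.random.beta(alpha=0.5, beta=0.5)
    return images, weight
```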
Fixed Issues
- Reverted fn.decoders.image* to use the legacy decoders due to a performance regression in nvImageCodec. (#5582, #5578, #5586)
- Optimized S3 downloading in the TFRecord reader. (#5554)
- Added missing validation for number of inputs in argument promotion. (#5592)
- Added missing header to support compilation with GCC 14. (#5594)
- Fixed empty batch handling when copying a batch from CPU to GPU. (#5567)
Improvements
- Executor 2.0: ExecGraph (#5587)
- Enable more Python types to be supported by the DALI python function (#5598)
- Remove usages of std::call_once. (#5599)
- Move to CUDA 12.6 (#5596)
- Remove MakeContiguous before CPU inputs of GPU ops. (#5590)
- nvImageCodec related fixes (#5586)
- Mark PropagateError as [[noreturn]] (#5589)
- Make test_beta_distribution compatible with Python 3.8 (#5571)
- Add default_batch_size to IterationData. (#5588)
- Add thread_setup callback to tasking::Executor (#5581)
- Fix librosa deprecated usage (#5579)
- Bring back the legacy image decoder operator (#5578)
- Extract librosa's effects.trim and stft to DALI test utils, to avoid issues with breaking changes (#5568)
- Remove libjpeg and libtiff deps (#5569)
- Add warp_perspective operator (#5542)
- Remove legacy image decoder (#5559)
- Optimize S3 downloading for TFRecord reader (#5554)
- Add io.file.read operator (#5552)
- Add fn.random.beta random variate (#5550)
- Reduce the batch size in the TensorFlow RN50 L3 test (#5565)
- Use MakeContiguous when copying CPU->CPU. (#5562)
- Update the DALI EfficientNet example to be compatible with the latest NumPy (#5561)
Bug Fixes
- Fixes problems with fetching LFS objects during nvImageCodec conda build (#5603)
- Fix the --python-tag option passed to python setup.py bdist_wheel command (#5600)
- Revert "Reintroduce "Move old ImageDecoder to legacy module and make the nvImageCodec based ImageDecoder the default" (#5470)" (#5582)
- Adding cstdint header to support GCC 14 compilation (#5594)
- Add missing validation for input count in argument promotion (#5592)
- Don't return pointers to a local variable in dali_operator_test. (#5585)
- Fix operator trace caching (#5580)
- Fix readlink usage - readlink doesn't null-terminate strings. (#5577)
- Fix WarpPerspective::GetFillValue (#5575)
- Prevent stack-use-after-scope (#5572)
- Add missing #include <optional> in nvcvop.h (#5570)
- Fix MakeContiguous sample_dim for empty batches. (#5567)
- Set affinity by device UUID. (#5566)
- Unchecked return value from CUDA library (#5564)
Breaking API changes
- DALI 1.39 was the final release to support the MXNet integration.
Deprecated features
No features were deprecated in this release.
Known issues:
- The following operators: experimental.readers.fits, experimental.decoders.video, and experimental.inputs.video do not currently support checkpointing.
- The video loader operator requires that key frames occur at least every 10 to 15 frames of the video stream. If key frames occur less frequently than that, the returned frames might be out of sync.
- The experimental VideoReaderDecoder does not support open GOP. It will not report an error and might produce invalid frames. VideoReader uses a heuristic approach to detect open GOP and should work in most common cases.
- The DALI TensorFlow plugin might not be compatible with TensorFlow versions 1.15.0 and later. To use DALI with a TensorFlow version that does not have a prebuilt plugin binary shipped with DALI, make sure that the compiler used to build TensorFlow exists on the system during the plugin installation. (Depending on the particular version, you can use GCC 4.8.4, GCC 4.8.5, or GCC 5.4.)
- In experimental debug and eager modes, the GPU external source is not properly synchronized with DALI internal streams. As a workaround, you can manually synchronize the device before returning the data from the callback.
- Due to some known issues with Meltdown/Spectre mitigations, DALI shows the best performance when running in Docker with escalated privileges, for example: privileged=yes in Extra Settings for AWS data points, --privileged or --security-opt seccomp=unconfined for bare Docker.
Binary builds
NOTE: DALI builds for CUDA 12 dynamically link the CUDA toolkit. To use DALI, install the latest CUDA toolkit.
CUDA 11.0 and CUDA 12.0 builds use CUDA toolkit enhanced compatibility.
They are built with the latest CUDA 11.x/12.x toolkit respectively but they can run on the latest,
stable CUDA 11.0/CUDA 12.0 capable drivers (450.80 or later and 525.60 or later respectively).
However, using the most recent driver may enable additional functionality.
More details can be found in enhanced CUDA compatibility guide.
Install via pip for CUDA 12.0:
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-cuda120==1.41.0
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-tf-plugin-cuda120==1.41.0
or just:
pip install nvidia-dali-cuda120==1.41.0
pip install nvidia-dali-tf-plugin-cuda120==1.41.0
For CUDA 11:
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-cuda110==1.41.0
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-tf-plugin-cuda110==1.41.0
or just:
pip install nvidia-dali-cuda110==1.41.0
pip install nvidia-dali-tf-plugin-cuda110==1.41.0
Or use direct download links (CUDA 12.0):
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda120/nvidia_dali_cuda120-1.41.0-17427117-py3-none-manylinux2014_x86_64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda120/nvidia_dali_cuda120-1.41.0-17427117-py3-none-manylinux2014_aarch64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-tf-plugin-cuda120/nvidia-dali-tf-plugin-cuda120-1.41.0.tar.gz
Or use direct download links (CUDA 11.0):
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda110/nvidia_dali_cuda110-1.41.0-17427118-py3-none-manylinux2014_x86_64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda110/nvidia_dali_cuda110-1.41.0-17427118-py3-none-manylinux2014_aarch64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-tf-plugin-cuda110/nvidia-dali-tf-plugin-cuda110-1.41.0.tar.gz
FFmpeg source code:
Libsndfile source code:
DALI v1.40.0
Key Features and Enhancements
This DALI release includes the following key features and enhancements:
- Added operators: fn.zeros, fn.zeros_like, fn.ones, fn.ones_like, fn.full, and fn.full_like (#5505). See the sketch after this list.
- Added support for H264, H265, and AV1 video formats to fn.plugin.video (#5504).
- Added support for CUDA 12.5U1 (#5545).
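A short sketch of the new constant-creation operators is shown below; the NumPy-like argument names (shape, dtype) are assumptions.
```python
from nvidia.dali import pipeline_def, fn, types

@pipeline_def(batch_size=4, num_threads=2, device_id=0)
def constants_pipeline():
    zeros = fn.zeros(shape=[2, 3], dtype=types.FLOAT)  # a 2x3 zero tensor per sample
    ones = fn.ones_like(zeros)                         # same shape and type, filled with ones
    return zeros, ones
```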
Fixed Issues
- Fixed several issues with reading files from S3; see the Bug Fixes section below (#5525, #5533, #5515).
Improvements
- Dependency update 07/2024 (#5556)
- Move checkpoint to IterationData. Remove ExecIterData. (#5555)
- Remove pruning from the Executor. (#5553)
- Move most of Operator to OperatorBase. Unify and simplify operator interfaces. (#5548)
- Move graph visiting utilities to a separate file. (#5549)
- Move to CUDA 12.5U1 (#5545)
- Extend the external source signature to include all arguments (#5541)
- Update DALI_deps version (#5536)
- Pin numpy to <1.24 in TensorFlow examples (#5534)
- Use new graph in Pipeline (#5520)
- Deps update 06/24 (#5514)
- Revert reducing the number of epoch in SBSA training test case (#5531)
- Add AV1 support (#5504)
- Removes MXNet support from DALI (#5526)
- Video decoder in plugin (#5477)
- Checkpoint refactoring - recognize checkpoints by operator instance name. (#5503)
- Keep separate per-pipeline operator counters. Error out when "stealing" subgraphs from other pipelines results in duplicate names. (#5506)
- Graph lowering. (#5496)
- Use "device" and "preserve" built-in arguments in OpGraph2. (#5516)
- Add fn.zeros, fn.zeros_like, fn.ones, fn.ones_like, fn.full and fn.full_like (#5505)
Bug Fixes
- Support spaces in S3 paths (#5525)
- Fix device ID in s3_client_manager (#5533)
- Add failure tests for stealing subgraphs. Minor fix in pipeline validation. (#5518)
- TFRecord to support S3 index URIs (#5515)
- Exclude docs line length adjustment PR from the blame history (#5509)
- Fix keras compat mode for ResNet50 tensorflow example (#5530)
Breaking API changes
- DALI 1.39 was the final release to support the MXNet integration.
Deprecated features
No features were deprecated in this release.
Known issues:
- Starting with DALI 1.39, a performance regression was observed in hardware-accelerated image decoders in setups with a high number of worker threads. The nvImageCodec hardware decoder pre-allocation uses a larger mini-batch size, causing extra cuMemFree calls that may slow down decoding in some iterations. The issue will be fixed in an upcoming release.
- The following operators: experimental.readers.fits, experimental.decoders.video, and experimental.inputs.video do not currently support checkpointing.
- The video loader operator requires that key frames occur at least every 10 to 15 frames of the video stream. If key frames occur less frequently than that, the returned frames might be out of sync.
- The experimental VideoReaderDecoder does not support open GOP. It will not report an error and might produce invalid frames. VideoReader uses a heuristic approach to detect open GOP and should work in most common cases.
- The DALI TensorFlow plugin might not be compatible with TensorFlow versions 1.15.0 and later. To use DALI with a TensorFlow version that does not have a prebuilt plugin binary shipped with DALI, make sure that the compiler used to build TensorFlow exists on the system during the plugin installation. (Depending on the particular version, you can use GCC 4.8.4, GCC 4.8.5, or GCC 5.4.)
- In experimental debug and eager modes, the GPU external source is not properly synchronized with DALI internal streams. As a workaround, you can manually synchronize the device before returning the data from the callback.
- Due to some known issues with Meltdown/Spectre mitigations, DALI shows the best performance when running in Docker with escalated privileges, for example: privileged=yes in Extra Settings for AWS data points, --privileged or --security-opt seccomp=unconfined for bare Docker.
Binary builds
NOTE: DALI builds for CUDA 12 dynamically link the CUDA toolkit. To use DALI, install the latest CUDA toolkit.
CUDA 11.0 and CUDA 12.0 builds use CUDA toolkit enhanced compatibility.
They are built with the latest CUDA 11.x/12.x toolkit respectively but they can run on the latest,
stable CUDA 11.0/CUDA 12.0 capable drivers (450.80 or later and 525.60 or later respectively).
However, using the most recent driver may enable additional functionality.
More details can be found in enhanced CUDA compatibility guide.
Install via pip for CUDA 12.0:
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-cuda120==1.40.0
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-tf-plugin-cuda120==1.40.0
or just:
pip install nvidia-dali-cuda120==1.40.0
pip install nvidia-dali-tf-plugin-cuda120==1.40.0
For CUDA 11:
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-cuda110==1.40.0
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-tf-plugin-cuda110==1.40.0
or just:
pip install nvidia-dali-cuda110==1.40.0
pip install nvidia-dali-tf-plugin-cuda110==1.40.0
Or use direct download links (CUDA 12.0):
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda120/nvidia_dali_cuda120-1.40.0-16741769-py3-none-manylinux2014_x86_64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda120/nvidia_dali_cuda120-1.40.0-16741769-py3-none-manylinux2014_aarch64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-tf-plugin-cuda120/nvidia-dali-tf-plugin-cuda120-1.40.0.tar.gz
Or use direct download links (CUDA 11.0):
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda110/nvidia_dali_cuda110-1.40.0-16741760-py3-none-manylinux2014_x86_64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda110/nvidia_dali_cuda110-1.40.0-16741760-py3-none-manylinux2014_aarch64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-tf-plugin-cuda110/nvidia-dali-tf-plugin-cuda110-1.40.0.tar.gz
FFmpeg source code:
Libsndfile source code:
DALI v1.39.0
Key Features and Enhancements
This DALI release includes the following key features and enhancements:
- Added support for CUDA 12.5 (#5478).
- Migrated fn.decoders.image* operators to use nvImageCodec as the decoding backend (#5470).
- Improved error handling (#5466, #5494, #5486, #5491).
Fixed Issues
- Fixed DALI TF plugin compatibility with TensorFlow 2.9 (#5499).
- Fixed S3 fn.readers.file support for pad_last_batch=True (#5493).
- Fixed a bug that resulted in long build times for some pipelines with conditional execution enabled (#5475).
Improvements
- Add a mention of blogpost in Automatic Augmentation docs (#5508)
- Removal of Python 3.8 notes from documentation (#5502)
- Add default schema and use it in OpSpec argument queries. (#5500)
- Add missing blocking argument documentation to the external source operator (#5501)
- Trim line length in the documentation/examples for the new theme (#5479)
- Refactoring in Pipeline, OpGraph and old Executor + name lookup improvement in old OpGraph and Pipeline. (#5495)
- Improve error message about FFmpeg not being available (#5494)
- Extend docs by adding info about @do_not_convert for NUMBA and Python ops (#5488)
- New OpGraph (#5485)
- Fix tests for sanitizer build (#5492)
- GitHub comment acceptance formatting table fix (#5490)
- Remove image decoder memory padding from examples (#5484)
- Adding git lfs as a compilation prerequisite (#5483)
- Remove unused JIT workspace policy. (#5487)
- Add a warning about pipeline definition being executed only once. (#5486)
- Move to CUDA 12.5 (#5478)
- Pin NPP version for CUDA 12 (#5480)
- Reintroduce "Move old ImageDecoder to legacy module and make the nvImageCodec based ImageDecoder the default" (#5470)
- Move to new, unified, NVIDIA sphinx theme (#5471)
- Add DALI video plugin skeleton (#5328)
- Don't initialize NVML when not setting affinity. (#5472)
- Add MXNet deprecation message to the docs and plugin (#5465)
- Add first-class check for nested datanodes in math/arithmetic ops. (#5466)
Bug Fixes
- Fix DALI TF plugin incompatibility with TF 2.9 (#5499)
- Coverity May 2024 (#5497)
- Fix S3 FileReader when using repeated samples (pad_last_batch=True) (#5493)
- Improve the video decoder errors (#5491)
- Add extra rpath for prebuilt ffmpeg dependencies for video plugin (#5481)
- Use dynamic programming in OpGraph::HasConsumersInOtherStage (#5475)
Breaking API changes
There are no breaking changes in this DALI release.
Deprecated features
DALI 1.39 is the final release that will support the MXNet integration.
Known issues:
- The following operators: experimental.readers.fits, experimental.decoders.video, and experimental.inputs.video do not currently support checkpointing.
- The video loader operator requires that key frames occur at least every 10 to 15 frames of the video stream. If key frames occur less frequently than that, the returned frames might be out of sync.
- The experimental VideoReaderDecoder does not support open GOP. It will not report an error and might produce invalid frames. VideoReader uses a heuristic approach to detect open GOP and should work in most common cases.
- The DALI TensorFlow plugin might not be compatible with TensorFlow versions 1.15.0 and later. To use DALI with a TensorFlow version that does not have a prebuilt plugin binary shipped with DALI, make sure that the compiler used to build TensorFlow exists on the system during the plugin installation. (Depending on the particular version, you can use GCC 4.8.4, GCC 4.8.5, or GCC 5.4.)
- In experimental debug and eager modes, the GPU external source is not properly synchronized with DALI internal streams. As a workaround, you can manually synchronize the device before returning the data from the callback.
- Due to some known issues with Meltdown/Spectre mitigations, DALI shows the best performance when running in Docker with escalated privileges, for example: privileged=yes in Extra Settings for AWS data points, --privileged or --security-opt seccomp=unconfined for bare Docker.
Binary builds
NOTE: DALI builds for CUDA 12 dynamically link the CUDA toolkit. To use DALI, install the latest CUDA toolkit.
CUDA 11.0 and CUDA 12.0 builds use CUDA toolkit enhanced compatibility.
They are built with the latest CUDA 11.x/12.x toolkit respectively but they can run on the latest,
stable CUDA 11.0/CUDA 12.0 capable drivers (450.80 or later and 525.60 or later respectively).
However, using the most recent driver may enable additional functionality.
More details can be found in enhanced CUDA compatibility guide.
Install via pip for CUDA 12.0:
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-cuda120==1.39.0
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-tf-plugin-cuda120==1.39.0
or just:
pip install nvidia-dali-cuda120==1.39.0
pip install nvidia-dali-tf-plugin-cuda120==1.39.0
For CUDA 11:
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-cuda110==1.39.0
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-tf-plugin-cuda110==1.39.0
or just:
pip install nvidia-dali-cuda110==1.39.0
pip install nvidia-dali-tf-plugin-cuda110==1.39.0
Or use direct download links (CUDA 12.0):
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda120/nvidia_dali_cuda120-1.39.0-15829601-py3-none-manylinux2014_x86_64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda120/nvidia_dali_cuda120-1.39.0-15829601-py3-none-manylinux2014_aarch64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-tf-plugin-cuda120/nvidia-dali-tf-plugin-cuda120-1.39.0.tar.gz
Or use direct download links (CUDA 11.0):
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda110/nvidia_dali_cuda110-1.39.0-15829602-py3-none-manylinux2014_x86_64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda110/nvidia_dali_cuda110-1.39.0-15829602-py3-none-manylinux2014_aarch64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-tf-plugin-cuda110/nvidia-dali-tf-plugin-cuda110-1.39.0.tar.gz
FFmpeg source code:
Libsndfile source code:
DALI v1.38.0
Key Features and Enhancements
This DALI release includes the following key features and enhancements:
- Added support for AWS S3 URLs in DALI readers (#5415, #5434). See the sketch after this list.
- Improved support for enum types in types.Constant, fn.cast, and fn.random.choice (#5422).
- Improved error reporting (#5428).
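A sketch of reading training data directly from S3 is shown below; the bucket and prefix are hypothetical, and AWS credentials are assumed to be provided through the usual environment variables or configuration picked up by the AWS SDK.
```python
from nvidia.dali import pipeline_def, fn

@pipeline_def(batch_size=8, num_threads=4, device_id=0)
def s3_pipeline():
    encoded, labels = fn.readers.file(file_root="s3://my-bucket/train/")  # hypothetical bucket
    images = fn.decoders.image(encoded, device="mixed")
    return images, labels
```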
Fixed Issues
- Fixed checkpoint clean-up in C API. (#5453)
Improvements
- Dependency update for May 2024 - black, boost-pp, cv-cuda, pybind11, rapidjson (#5458)
- Introduce DALI_PRELOAD_PLUGINS (#5457)
- Move old ImageDecoder to legacy module and make the nvImageCodec based ImageDecoder the default (#5445)
- Bump up NUMBA version used in tests to 0.59.1 (#5451)
- Extend the documentation footer (#5454)
- Remove the use of (soon deprecated) aligned_storage. (#5455)
- Make shared IterationData a first class member of Workspace. (#5447)
- Tasking module (#5436)
- Add AWS SDK support to all file readers (FileReader, NumpyReader, WebdatasetReader...) (#5415)
- Fix test_enum_types.py for Python3.11 (#5443)
- Remove files related to QNX that are no longer used (#5438)
- Remove usage of THRUST host&device vector (#5439)
- Add CMake to aarch64 base docker images (#5437)
- Refactoring of File Reader classes to accommodate for AWS SDK S3 integration (#5434)
- Replace Ops class name with proper operator API name (#5428)
- Use CMake binary release (#5435)
- Improve support for DALI enum types (#5422)
- Disable some JAX iterator tests in sanitizer run (#5427)
Bug Fixes
- Fix GTest Death Style Tests and LoadDirectory test in conda (#5469)
- Revert "Move old ImageDecoder to legacy module and make the nvImageCodec based ImageDecoder the default (#5445)" (#5464)
- Pin JAX version for multigpu test (#5460)
- Use C++17 standard in nodeps test. (#5459)
- Fix Coverity issues (May/2024) (#5453)
- Fix equalize unit test (#5456)
Breaking API changes
There are no breaking changes in this DALI release.
Deprecated features
DALI 1.39 will be the last release to support MXNet integration.
Known issues:
- The following operators: experimental.readers.fits, experimental.decoders.video, experimental.inputs.video, and experimental.decoders.image_random_crop do not currently support checkpointing.
- The video loader operator requires that key frames occur at least every 10 to 15 frames of the video stream. If key frames occur less frequently than that, the returned frames might be out of sync.
- The experimental VideoReaderDecoder does not support open GOP. It will not report an error and might produce invalid frames. VideoReader uses a heuristic approach to detect open GOP and should work in most common cases.
- The DALI TensorFlow plugin might not be compatible with TensorFlow versions 1.15.0 and later. To use DALI with a TensorFlow version that does not have a prebuilt plugin binary shipped with DALI, make sure that the compiler used to build TensorFlow exists on the system during the plugin installation. (Depending on the particular version, you can use GCC 4.8.4, GCC 4.8.5, or GCC 5.4.)
- In experimental debug and eager modes, the GPU external source is not properly synchronized with DALI internal streams. As a workaround, you can manually synchronize the device before returning the data from the callback.
- Due to some known issues with Meltdown/Spectre mitigations, DALI shows the best performance when running in Docker with escalated privileges, for example: privileged=yes in Extra Settings for AWS data points, --privileged or --security-opt seccomp=unconfined for bare Docker.
Binary builds
NOTE: DALI builds for CUDA 12 dynamically link the CUDA toolkit. To use DALI, install the latest CUDA toolkit.
CUDA 11.0 and CUDA 12.0 builds use CUDA toolkit enhanced compatibility.
They are built with the latest CUDA 11.x/12.x toolkit respectively but they can run on the latest,
stable CUDA 11.0/CUDA 12.0 capable drivers (450.80 or later and 525.60 or later respectively).
However, using the most recent driver may enable additional functionality.
More details can be found in enhanced CUDA compatibility guide.
Install via pip for CUDA 12.0:
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-cuda120==1.38.0
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-tf-plugin-cuda120==1.38.0
or just:
pip install nvidia-dali-cuda120==1.38.0
pip install nvidia-dali-tf-plugin-cuda120==1.38.0
For CUDA 11:
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-cuda110==1.38.0
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-tf-plugin-cuda110==1.38.0
or just:
pip install nvidia-dali-cuda110==1.38.0
pip install nvidia-dali-tf-plugin-cuda110==1.38.0
Or use direct download links (CUDA 12.0):
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda120/nvidia_dali_cuda120-1.38.0-15028468-py3-none-manylinux2014_x86_64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda120/nvidia_dali_cuda120-1.38.0-15028468-py3-none-manylinux2014_aarch64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-tf-plugin-cuda120/nvidia-dali-tf-plugin-cuda120-1.38.0.tar.gz
Or use direct download links (CUDA 11.0):
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda110/nvidia_dali_cuda110-1.38.0-15028467-py3-none-manylinux2014_x86_64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda110/nvidia_dali_cuda110-1.38.0-15028467-py3-none-manylinux2014_aarch64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-tf-plugin-cuda110/nvidia-dali-tf-plugin-cuda110-1.38.0.tar.gz
FFmpeg source code:
Libsndfile source code:
DALI v1.37.1
Key Features and Enhancements
There are no new features in this release
Fixed Issues
- Fixed DALI TF plugin source compilation during installation (#5448)
Improvements
There are no new improvements in this release
Bug Fixes
- Fixed DALI TF plugin source compilation during installation (#5448)
- Pin all nvJPEG2k subpackages (#5442)
Breaking API changes
There are no breaking changes in this DALI release.
Deprecated features
No features were deprecated in this release.
Known issues:
- The following operators: experimental.readers.fits, experimental.decoders.video, experimental.inputs.video, and experimental.decoders.image_random_crop do not currently support checkpointing.
- The video loader operator requires that key frames occur at least every 10 to 15 frames of the video stream. If key frames occur less frequently than that, the returned frames might be out of sync.
- The experimental VideoReaderDecoder does not support open GOP. It will not report an error and might produce invalid frames. VideoReader uses a heuristic approach to detect open GOP and should work in most common cases.
- The DALI TensorFlow plugin might not be compatible with TensorFlow versions 1.15.0 and later. To use DALI with a TensorFlow version that does not have a prebuilt plugin binary shipped with DALI, make sure that the compiler used to build TensorFlow exists on the system during the plugin installation. (Depending on the particular version, you can use GCC 4.8.4, GCC 4.8.5, or GCC 5.4.)
- In experimental debug and eager modes, the GPU external source is not properly synchronized with DALI internal streams. As a workaround, you can manually synchronize the device before returning the data from the callback.
- Due to some known issues with Meltdown/Spectre mitigations, DALI shows the best performance when running in Docker with escalated privileges, for example: privileged=yes in Extra Settings for AWS data points, --privileged or --security-opt seccomp=unconfined for bare Docker.
Binary builds
NOTE: DALI builds for CUDA 12 dynamically link the CUDA toolkit. To use DALI, install the latest CUDA toolkit.
CUDA 11.0 and CUDA 12.0 builds use CUDA toolkit enhanced compatibility.
They are built with the latest CUDA 11.x/12.x toolkit respectively but they can run on the latest,
stable CUDA 11.0/CUDA 12.0 capable drivers (450.80 or later and 525.60 or later respectively).
However, using the most recent driver may enable additional functionality.
More details can be found in enhanced CUDA compatibility guide.
Install via pip for CUDA 12.0:
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-cuda120==1.37.1
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-tf-plugin-cuda120==1.37.1
or just:
pip install nvidia-dali-cuda120==1.37.1
pip install nvidia-dali-tf-plugin-cuda120==1.37.1
For CUDA 11:
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-cuda110==1.37.1
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-tf-plugin-cuda110==1.37.1
or just:
pip install nvidia-dali-cuda110==1.37.1
pip install nvidia-dali-tf-plugin-cuda110==1.37.1
Or use direct download links (CUDA 12.0):
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda120/nvidia_dali_cuda120-1.37.1-14636516-py3-none-manylinux2014_x86_64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda120/nvidia_dali_cuda120-1.37.1-14636516-py3-none-manylinux2014_aarch64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-tf-plugin-cuda120/nvidia-dali-tf-plugin-cuda120-1.37.1.tar.gz
Or use direct download links (CUDA 11.0):
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda110/nvidia_dali_cuda110-1.37.1-14636526-py3-none-manylinux2014_x86_64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda110/nvidia_dali_cuda110-1.37.1-14636526-py3-none-manylinux2014_aarch64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-tf-plugin-cuda110/nvidia-dali-tf-plugin-cuda110-1.37.1.tar.gz
FFmpeg source code:
Libsndfile source code:
DALI v1.37.0
Key Features and Enhancements
This DALI release includes the following key features and enhancements:
- Added support for running JAX defined augmentations in the iterator and pipeline. (#5406, #5426, #5432)
- Improved error reporting with a stack trace pointing to the offending operation in user code. (#5357, #5396)
- Added the CPU fn.random.choice operator. (#5380, #5387) See the sketch after this list.
- Added support for CUDA 12.4. (#5353, #5410)
- Improved iterator checkpointing (#5374, #5375, #5371, #5356).
- Optimized the fn.resize operator for better GPU utilization (#5382).
- Added an option to skip bboxes in fn.random_bbox_crop when the fraction of their area within the crop is below a user-provided threshold. (#5368)
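A hedged sketch of the new CPU fn.random.choice operator is shown below; the NumPy-like call (a set of candidate values plus p= probabilities) is an assumption based on the description above.
```python
import numpy as np
from nvidia.dali import pipeline_def, fn

@pipeline_def(batch_size=8, num_threads=2, device_id=0)
def choice_pipeline():
    angles = np.array([0, 90, 180, 270], dtype=np.int32)
    # Draw one rotation angle per sample, with non-uniform probabilities.
    angle = fn.random.choice(angles, p=[0.4, 0.2, 0.2, 0.2])
    return angle
```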
Fixed Issues
- Fixed handling of special values of the stream field in CUDA Array Interface v3 (#5425).
- Fixed insufficient synchronization around scratch memory in nvImageCodec-based decoders (fn.experimental.decoders.*). (#5408)
- Fixed readers saving an incorrect checkpoint when restored and saved back within the same epoch. (#5378)
Improvements
- Add JAX-defined augmentation examples (#5426)
- Extend context and name propagation in errors (#5396)
- Add experimental jax operator (#5406)
- Enable Bandit security scan (#5402)
- Reworks links in the RST documentation (#5413)
- Refactor to remove duplicated logic in traverse_directories utility function (#5419)
- Update DALI deps version (#5417)
- Changes to dali/util/numpy (#5416)
- Add libcurl-devel (#5412)
- Move to CUDA 12.4 U1 (#5410)
- Separate executor interface and implementation files. (#5411)
- Make the video reader use cudaVideoDeinterlaceMode_Adaptive only for non-progressive videos (#5392)
- Skip AutoAug test when sanitizers are on (#5403)
- Unpin typing_extensions in tests (#5405)
- Dependency update 03-2024 (#5397)
- Review Bandit reported vulnerabilities (#5398)
- Support checkpointing in JAX decorators (#5374)
- Workaround ASAN bug ignoring RPATH (#5388)
- Update supported TensorFlow version (#5386)
- Disable more video tests on selected machines (#5385)
- Extend fn.random.choice to support n-D inputs (#5387)
- Add random choice CPU operator for 0D samples (#5380)
- Resize: Optimize block sizes, use dynamic amount of shared mem. (#5382)
- Support checkpointing in JAX peekable iterator (#5375)
- Increase DALI TF Plugin loading timeout (#5381)
- Improve iterator checkpointing (#5371)
- Improve logs when the DALI TF plugin loading process fails (#5379)
- Add option to prune bboxes based on % area in Crop ROI (#5368)
- Improve op deprecation and deprecate sequence reader (#5372)
- Fix typo in nvcuvid error (#5373)
- Optimize sanitizer operator tests (#5352)
- Introduce operator origin stack trace in the error message (#5357)
- Make ExternalContext more flexible (#5356)
- Enable CUDA 12.4 build (#5353)
Bug Fixes
- Add nose as a dependency to iterators tests (#5433)
- Disable jax_function notebook conversions for unsupported Python3.8 (#5432)
- Improve handling of CUDA Array Interface v3 (#5425)
- Fix debug build (#5414)
- Add stream synchronization before decode for nvImageCodec <= 0.2 (#5408)
- Fix Loader checkpointing bug (#5378)
- Fix pixelwise_masks support when the ratio is on in the coco reader (#5407)
- Fix storage of non-POD random distributions. (#5395)
- Fix nvImageCodec version check. (#5399)
- Fix bug in checkpointing C API (#5390)
- Add nose to the package list for TL1_separate_executor. (#5393)
- Use host sync allocation for nvImageCodec <= 0.2 (#5391)
- Remove temporary lock file from wheel (#5384)
- Disable type annotation tests in sanitizer build (#5383)
- Fix CUDA 12.4 with ASAN (#5370)
- Skip video tests on M60 (#5369)
- Enable eager mode tests, fix mixed ops and improve coverage (#5367)
Breaking API changes
There are no breaking changes in this DALI release.
Deprecated features
No features were deprecated in this release.
Known issues:
- The following operators: experimental.readers.fits, experimental.decoders.video, experimental.inputs.video, and experimental.decoders.image_random_crop do not currently support checkpointing.
- The video loader operator requires that key frames occur at least every 10 to 15 frames of the video stream. If key frames occur less frequently than that, the returned frames might be out of sync.
- The experimental VideoReaderDecoder does not support open GOP. It will not report an error and might produce invalid frames. VideoReader uses a heuristic approach to detect open GOP and should work in most common cases.
- The DALI TensorFlow plugin might not be compatible with TensorFlow versions 1.15.0 and later. To use DALI with a TensorFlow version that does not have a prebuilt plugin binary shipped with DALI, make sure that the compiler used to build TensorFlow exists on the system during the plugin installation. (Depending on the particular version, you can use GCC 4.8.4, GCC 4.8.5, or GCC 5.4.)
- In experimental debug and eager modes, the GPU external source is not properly synchronized with DALI internal streams. As a workaround, you can manually synchronize the device before returning the data from the callback.
- Due to some known issues with Meltdown/Spectre mitigations, DALI shows the best performance when running in Docker with escalated privileges, for example: privileged=yes in Extra Settings for AWS data points, --privileged or --security-opt seccomp=unconfined for bare Docker.
Binary builds
NOTE: DALI builds for CUDA 12 dynamically link the CUDA toolkit. To use DALI, install the latest CUDA toolkit.
CUDA 11.0 and CUDA 12.0 builds use CUDA toolkit enhanced compatibility.
They are built with the latest CUDA 11.x/12.x toolkit respectively but they can run on the latest,
stable CUDA 11.0/CUDA 12.0 capable drivers (450.80 or later and 525.60 or later respectively).
However, using the most recent driver may enable additional functionality.
More details can be found in enhanced CUDA compatibility guide.
Install via pip for CUDA 12.0:
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-cuda120==1.37.0
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-tf-plugin-cuda120==1.37.0
or just:
pip install nvidia-dali-cuda120==1.37.0
pip install nvidia-dali-tf-plugin-cuda120==1.37.0
For CUDA 11:
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-cuda110==1.37.0
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-tf-plugin-cuda110==1.37.0
or just:
pip install nvidia-dali-cuda110==1.37.0
pip install nvidia-dali-tf-plugin-cuda110==1.37.0
Or use direct download links (CUDA 12.0):
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda120/nvidia_dali_cuda120-1.37.0-14338028-py3-none-manylinux2014_x86_64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda120/nvidia_dali_cuda120-1.37.0-14338028-py3-none-manylinux2014_aarch64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-tf-plugin-cuda120/nvidia-dali-tf-plugin-cuda120-1.37.0.tar.gz
Or use direct download links (CUDA 11.0):
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda110/nvidia_dali_cuda110-1.37.0-14338056-py3-none-manylinux2014_x86_64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda110/nvidia_dali_cuda110-1.37.0-14338056-py3-none-manylinux2014_aarch64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-tf-plugin-cuda110/nvidia-dali-tf-plugin-cuda110-1.37.0.tar.gz
FFmpeg source code:
Libsndfile source code:
DALI v1.36.0
Key Features and Enhancements
This DALI release includes the following key features and enhancements:
- Added support for checkpointing in MXNet iterator and CPU TensorFlow plugin (#5334, #5315).
- Added morphological operators (fn.experimental.dilate, fn.experimental.erode) (#5294). See the sketch after this list.
- Integrated nvImageCodec for decoding in fn.experimental.decoders (#5297, #5336, #5324, #5333, #5339).
- Added the fn.random_crop_generator operator (#5304).
- Added support for multiple inputs and relative shapes and anchors in fn.multi_paste (#5331).
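A hedged sketch of the new morphology operators is shown below; default structuring elements are assumed and the dataset path is hypothetical. Images are decoded with device="mixed" so that the CV-CUDA-backed operators receive GPU data.
```python
from nvidia.dali import pipeline_def, fn, types

@pipeline_def(batch_size=4, num_threads=2, device_id=0)
def morphology_pipeline():
    encoded, _ = fn.readers.file(file_root="data/masks")  # hypothetical path
    masks = fn.decoders.image(encoded, device="mixed", output_type=types.GRAY)
    dilated = fn.experimental.dilate(masks)
    eroded = fn.experimental.erode(masks)
    return dilated, eroded
```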
Fixed Issues
- Fixed insufficient synchronization in MXNet iterator (#5364).
- Fixed auto_reset argument handling in iterator plugins (#5340).
- Fixed missing calls to nvml::Shutdown (#5317).
- Limited the number of progressive scans for JPEG decoding (#5316).
Improvements
- Propagate module and display name of the operator to backend (#5344)
- Update dependencies (#5349)
- Map backend exceptions into Python exception types (#5345)
- Emphasise the optical flow is calculated at input resolution. (#5350)
- Refactor custom ops classes to use python_op_factory as base (#5338)
- Add origin stack trace capture for DALI operators (#5302)
- Test fused decoder with two separate pipelines (#5343)
- [Cutmix] Make fn.multi_paste more flexible, fix validation (#5331)
- Enable checkpointing in TensorFlow plugin (CPU only) (#5334)
- Copy out nvImageCodec conda package from the build (#5336)
- Add error message when GPU is not available (#5329)
- Enable build with statically linked nvimgcodec + hard dependency for dynamic linking (#5324)
- Add tf_stack util to autograph (#5322)
- Rewrite median blur to use nvcvop tools (#5327)
- Add morphological operators and the nvcvop module (#5294)
- Add OpSpec::ArgumentInputIdx (#5330)
- Simplify workspace object. Ensure predictable argument order in OpSpec. (#5325)
- Support checkpointing in MXNet iterator (#5315)
- Set rpath at cmake level (do not wait for bundle-wheel) (#5323)
- Interpolation modes documentation upgrade (#5314)
- Update links in DALI documentation (#5321)
- Integrate nvimagecodec (#5297)
- Add naive_histogram custom operator to test suite (#4731)
- Add RandomCropGenerator (#5304)
- Use small videos in checkpointing tests (#5305)
Bug Fixes
- Use synchronous copy to framework array in the absence of a stream (#5364)
- Process TFRecord reader binding classes only when it is enabled (#5360)
- Adjust stack formatting in backend to match Python (#5354)
- Link test operators against nvml wrapper (#5355)
- Fix range check in Workspace::SetInput (#5358)
- Make async_pool immune to stream handle reuse. (#5348)
- Coverity fixes for 1.36 (#5342)
- Fix "auto_reset" argument handling (#5340)
- Fix cupy tests (#5341)
- Add nvimagecodec libs to DALI_EXCLUDES + test utils to dump mismatched images (#5339)
- Fix warning about nvImageCodec version (#5333)
- Silence warning about DOWNLOAD_EXTRACT_TIMESTAMP while fixing the cmake <3.24 builds (#5326)
- Fix inconsistent calls to nvml::Init and nvml::Shutdown (#5317)
- Limit the number of progressive scans for jpeg decoding (#5316)
Breaking API changes
There are no breaking changes in this DALI release.
Deprecated features
No features were deprecated in this release.
Known issues:
- The following operators: experimental.readers.fits, experimental.decoders.video, experimental.inputs.video, and experimental.decoders.image_random_crop do not currently support checkpointing.
- The video loader operator requires that key frames occur at least every 10 to 15 frames of the video stream. If key frames occur less frequently than that, the returned frames might be out of sync.
- The experimental VideoReaderDecoder does not support open GOP. It will not report an error and might produce invalid frames. VideoReader uses a heuristic approach to detect open GOP and should work in most common cases.
- The DALI TensorFlow plugin might not be compatible with TensorFlow versions 1.15.0 and later. To use DALI with a TensorFlow version that does not have a prebuilt plugin binary shipped with DALI, make sure that the compiler used to build TensorFlow exists on the system during the plugin installation. (Depending on the particular version, you can use GCC 4.8.4, GCC 4.8.5, or GCC 5.4.)
- In experimental debug and eager modes, the GPU external source is not properly synchronized with DALI internal streams. As a workaround, you can manually synchronize the device before returning the data from the callback.
- Due to some known issues with Meltdown/Spectre mitigations, DALI shows the best performance when running in Docker with escalated privileges, for example: privileged=yes in Extra Settings for AWS data points, --privileged or --security-opt seccomp=unconfined for bare Docker.
Binary builds
NOTE: DALI builds for CUDA 12 dynamically link the CUDA toolkit. To use DALI, install the latest CUDA toolkit.
CUDA 11.0 and CUDA 12.0 builds use CUDA toolkit enhanced compatibility.
They are built with the latest CUDA 11.x/12.x toolkit respectively but they can run on the latest,
stable CUDA 11.0/CUDA 12.0 capable drivers (450.80 or later and 525.60 or later respectively).
However, using the most recent driver may enable additional functionality.
More details can be found in enhanced CUDA compatibility guide.
Install via pip for CUDA 12.0:
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-cuda120==1.36.0
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-tf-plugin-cuda120==1.36.0
For CUDA 11:
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-cuda110==1.36.0
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist/ nvidia-dali-tf-plugin-cuda110==1.36.0
Or use direct download links (CUDA 12.0):
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda120/nvidia_dali_cuda120-1.36.0-13435171-py3-none-manylinux2014_x86_64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda120/nvidia_dali_cuda120-1.36.0-13435171-py3-none-manylinux2014_aarch64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-tf-plugin-cuda120/nvidia-dali-tf-plugin-cuda120-1.36.0.tar.gz
Or use direct download links (CUDA 11.0):
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda110/nvidia_dali_cuda110-1.36.0-13435172-py3-none-manylinux2014_x86_64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-cuda110/nvidia_dali_cuda110-1.36.0-13435172-py3-none-manylinux2014_aarch64.whl
- https://developer.download.nvidia.com/compute/redist/nvidia-dali-tf-plugin-cuda110/nvidia-dali-tf-plugin-cuda110-1.36.0.tar.gz
FFmpeg source code:
Libsndfile source code: