
Use pool memory resource for CUDA allocations #65

Open
mlxd opened this issue Oct 20, 2022 · 0 comments
Labels
enhancement New feature or request

Comments

@mlxd
Member

mlxd commented Oct 20, 2022

Issue description


With batching of observables in the adjoint GPU pipeline, multiple allocations and frees often occur for state-vectors. These can create natural synchronization points on the GPU and introduce delays for each allocation and free. A preallocated pool resource that enables reuse of state-vector memory blocks would remove the need for explicit frees: the memory resource can simply be reset and reused. To achieve this, I propose either adapting the DataBuffer and DeviceResource classes to track a fixed number of GPU buffers, or providing explicit bindings to RMM's pool memory resource.
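A minimal sketch of the RMM option (assuming RMM is available as a dependency; `num_qubits`, `sv_len`, and the 1 GiB initial pool size are illustrative, not taken from the existing code):

```cpp
// Sketch: route per-observable state-vector copies through an RMM pool so
// repeated allocate/free cycles reuse pooled blocks instead of hitting
// cudaMalloc/cudaFree and their implicit device synchronization.
#include <complex>
#include <cstddef>
#include <rmm/cuda_stream_view.hpp>
#include <rmm/device_uvector.hpp>
#include <rmm/mr/device/cuda_memory_resource.hpp>
#include <rmm/mr/device/pool_memory_resource.hpp>

int main() {
    // Upstream resource that actually calls cudaMalloc/cudaFree.
    rmm::mr::cuda_memory_resource upstream;

    // Pool that grabs a large slab up front; suballocations are cheap and
    // do not synchronize the device. Initial size here is illustrative.
    rmm::mr::pool_memory_resource<rmm::mr::cuda_memory_resource> pool{
        &upstream, std::size_t{1} << 30};

    rmm::cuda_stream_view stream = rmm::cuda_stream_default;
    constexpr std::size_t num_qubits = 20; // illustrative problem size
    constexpr std::size_t sv_len = std::size_t{1} << num_qubits;

    // Each batched observable gets a state-vector copy from the pool; the
    // block returns to the pool on destruction, ready for immediate reuse
    // by the next iteration without an explicit cudaFree.
    for (int obs = 0; obs < 8; ++obs) {
        rmm::device_uvector<std::complex<double>> sv_copy(sv_len, stream, &pool);
        // ... apply observable / adjoint pass on sv_copy ...
    }
    return 0;
}
```

Because suballocations come from the preallocated slab, the batched adjoint loop avoids the synchronization points described above; the same pattern would apply if DataBuffer/DeviceResource were adapted to manage a fixed set of buffers internally instead.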

@mlxd mlxd added the enhancement label Oct 20, 2022