Increase MAX_DMA_REGIONS for vIOMMU use case #588
The major reason we have this limit is that we have an array, and it's sized based on that. We could reasonably easily make this a resizable array (we're already not thread safe). We'd still want some upper limit to prevent a malicious client from exhausting server memory, but it can probably be much bigger.
For a 1TB guest, if we allowed every 4KB page to be its own region, we'd end up with an array size of 22GB. So we will still need some "reasonable" limit. Any suggestions?
Maybe make it configurable?
It's relatively easy to change the array to be resizable. But even 1GB seems a bit pricey for servers that might be dealing with lots of clients.
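A sketch of what a resizable table with a safety cap could look like. All names here (`struct dma_region`, `region_table`, `REGION_HARD_LIMIT`) are illustrative, not the actual libvfio-user internals: the array starts at today's size (16), doubles on demand, and refuses to grow past a hard cap so a misbehaving client can't exhaust server memory:

```c
#include <stdlib.h>

/* Illustrative sketch, not the real libvfio-user code. */
struct dma_region { void *vaddr; size_t size; };

struct region_table {
    struct dma_region *regions;
    size_t nr;   /* entries in use */
    size_t cap;  /* allocated slots */
};

#define REGION_HARD_LIMIT 65536 /* assumed upper bound, open to tuning */

static int table_add(struct region_table *t, struct dma_region r)
{
    if (t->nr == t->cap) {
        size_t ncap = t->cap ? t->cap * 2 : 16; /* start at today's 16 */
        if (ncap > REGION_HARD_LIMIT) {
            return -1; /* reject rather than grow without bound */
        }
        struct dma_region *grown = realloc(t->regions,
                                           ncap * sizeof(*grown));
        if (grown == NULL) {
            return -1; /* allocation failure, table unchanged */
        }
        t->regions = grown;
        t->cap = ncap;
    }
    t->regions[t->nr++] = r;
    return 0;
}
```

The cap keeps the worst-case memory bounded while letting well-behaved clients with many small mappings go well past 16.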
I fully understand. The solution of making the size configurable is not great either, because we don't really know how much we're gonna need; it depends on the device that's exposed and the workload/driver running in the guest. This is really a tricky situation, and maybe the best solution would be to explicitly state in the specification that vIOMMU is not supported.
We can definitely make clear that it's not currently supported well enough, certainly. I don't want to say we'll never have some form of support though.
I think that would help, as we can see there are several issues to tackle before being able to claim it is supported.
We can make the DMA controller configurable to use one of two bookkeeping methods (static array vs. list or resizable array) to accommodate the vIOMMU case, and hide all this complexity behind functions so the DMA controller itself doesn't get too complicated. We could even make it a compile-time option.
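One way to hide that choice behind functions, sketched with hypothetical names (none of this is the actual libvfio-user API): the DMA controller holds an ops table chosen at setup time, and each bookkeeping strategy implements the same operations. Shown here with a fixed-array backend like today's scheme; a resizable backend would just supply different function pointers:

```c
#include <stddef.h>

/* Hypothetical indirection so the DMA controller is agnostic to
 * the bookkeeping strategy; names are illustrative only. */
struct dma_region { char *vaddr; size_t size; };

struct dma_bookkeeping_ops {
    int (*add)(void *state, struct dma_region r);
    struct dma_region *(*find)(void *state, char *vaddr);
};

struct dma_controller {
    const struct dma_bookkeeping_ops *ops; /* picked at ctx creation */
    void *state;                           /* backend-private data   */
};

/* One possible backend: a small fixed array, like the current
 * MAX_DMA_REGIONS scheme. */
#define FIXED_MAX 16
struct fixed_state {
    struct dma_region regions[FIXED_MAX];
    size_t nr;
};

static int fixed_add(void *state, struct dma_region r)
{
    struct fixed_state *s = state;
    if (s->nr == FIXED_MAX) {
        return -1; /* table full */
    }
    s->regions[s->nr++] = r;
    return 0;
}

static struct dma_region *fixed_find(void *state, char *vaddr)
{
    struct fixed_state *s = state;
    for (size_t i = 0; i < s->nr; i++) {
        if (s->regions[i].vaddr == vaddr) {
            return &s->regions[i];
        }
    }
    return NULL;
}

static const struct dma_bookkeeping_ops fixed_ops = {
    .add  = fixed_add,
    .find = fixed_find,
};
```

The controller only ever calls `d->ops->add(d->state, ...)` and friends, so whether the backend is a compile-time or runtime choice stays invisible to the rest of the DMA code.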
Compile time would be annoying; I do like the idea of switching book-keeping types - but perhaps that means we need the SPDK guarantees to not use vfu_map* to also apply to DMA_MAP? So we can resize the array dynamically or whatever?
Sorry, I don't understand - which guarantees?
I'm referring to whether SPDK allows active vfu_map_sg()/vfu_unmap_sg() users in I/O threads when it gets an unmap request. They want to use those APIs locklessly, so they can't do an unmap at the same time. We still need to think really carefully about that though: once SPDK finishes its DMA unregister callback, we modify the structs. Do we need extra callbacks or something?
MAX_DMA_REGIONS is currently 16, which is way too small if we try to support the vIOMMU use case, as we might end up with way more than 16 active mappings (especially if the mappings are 4KiB).