drm/i915: Allow dead vm to unbind vma's without lock.
i915_gem_vm_close may take the lock, and we currently have no better way
of handling this. At least for now, allow a path in which holding vm->mutex
alone is sufficient. This is safe because the object destroy path now
forcefully takes vm->mutex.

Signed-off-by: Maarten Lankhorst <[email protected]>
Link: https://patchwork.freedesktop.org/patch/msgid/[email protected]
Reviewed-by: Thomas Hellstrom <[email protected]>
mlankhorst committed Jan 28, 2022
1 parent 7a05c5a commit a594525
Showing 1 changed file with 13 additions and 2 deletions.
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -39,6 +39,17 @@
 #include "i915_vma.h"
 #include "i915_vma_resource.h"
 
+static inline void assert_vma_held_evict(const struct i915_vma *vma)
+{
+	/*
+	 * We may be forced to unbind when the vm is dead, to clean it up.
+	 * This is the only exception to the requirement of the object lock
+	 * being held.
+	 */
+	if (atomic_read(&vma->vm->open))
+		assert_object_held_shared(vma->obj);
+}
+
 static struct kmem_cache *slab_vmas;
 
 static struct i915_vma *i915_vma_alloc(void)
@@ -1721,7 +1732,7 @@ struct dma_fence *__i915_vma_evict(struct i915_vma *vma, bool async)
 	struct dma_fence *unbind_fence;
 
 	GEM_BUG_ON(i915_vma_is_pinned(vma));
-	assert_object_held_shared(vma->obj);
+	assert_vma_held_evict(vma);
 
 	if (i915_vma_is_map_and_fenceable(vma)) {
 		/* Force a pagefault for domain tracking on next user access */
@@ -1788,7 +1799,7 @@ int __i915_vma_unbind(struct i915_vma *vma)
 	int ret;
 
 	lockdep_assert_held(&vma->vm->mutex);
-	assert_object_held_shared(vma->obj);
+	assert_vma_held_evict(vma);
 
 	if (!drm_mm_node_allocated(&vma->node))
 		return 0;
