locking/rwsem: Simplify the is-owner-spinnable checks
Add the trivial owner_on_cpu() helper for rwsem_can_spin_on_owner() and
rwsem_spin_on_owner(); it also makes rwsem_can_spin_on_owner() a bit
clearer.

Signed-off-by: Oleg Nesterov <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Acked-by: Waiman Long <[email protected]>
Cc: Amir Goldstein <[email protected]>
Cc: Davidlohr Bueso <[email protected]>
Cc: Jan Kara <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Theodore Y. Ts'o <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
Authored by oleg-nesterov, committed by Ingo Molnar on May 25, 2018 (commit 1b22fc6, 1 parent: 675c00c).
Showing 1 changed file with 13 additions and 12 deletions.
kernel/locking/rwsem-xadd.c: 25 changes (13 additions & 12 deletions)

--- a/kernel/locking/rwsem-xadd.c
+++ b/kernel/locking/rwsem-xadd.c
@@ -347,6 +347,15 @@ static inline bool rwsem_try_write_lock_unqueued(struct rw_semaphore *sem)
 	}
 }
 
+static inline bool owner_on_cpu(struct task_struct *owner)
+{
+	/*
+	 * As lock holder preemption issue, we both skip spinning if
+	 * task is not on cpu or its cpu is preempted
+	 */
+	return owner->on_cpu && !vcpu_is_preempted(task_cpu(owner));
+}
+
 static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
 {
 	struct task_struct *owner;
@@ -359,17 +368,10 @@ static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
 
 	rcu_read_lock();
 	owner = READ_ONCE(sem->owner);
-	if (!owner || !is_rwsem_owner_spinnable(owner)) {
-		ret = !owner;	/* !owner is spinnable */
-		goto done;
+	if (owner) {
+		ret = is_rwsem_owner_spinnable(owner) &&
+		      owner_on_cpu(owner);
 	}
-
-	/*
-	 * As lock holder preemption issue, we both skip spinning if task is not
-	 * on cpu or its cpu is preempted
-	 */
-	ret = owner->on_cpu && !vcpu_is_preempted(task_cpu(owner));
-done:
 	rcu_read_unlock();
 	return ret;
 }
@@ -398,8 +400,7 @@ static noinline bool rwsem_spin_on_owner(struct rw_semaphore *sem)
 		 * abort spinning when need_resched or owner is not running or
 		 * owner's cpu is preempted.
 		 */
-		if (!owner->on_cpu || need_resched() ||
-		    vcpu_is_preempted(task_cpu(owner))) {
+		if (need_resched() || !owner_on_cpu(owner)) {
 			rcu_read_unlock();
 			return false;
 		}
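For quick reference, here is a condensed sketch of how the two spinnability checks read once the patch is applied, assembled from the added lines above. Only the touched statements are shown; the surrounding bodies of rwsem_can_spin_on_owner() and rwsem_spin_on_owner() are elided, and ret is assumed to have been initialized to the "spinnable" value earlier in the function, since the !owner case no longer assigns it explicitly.

/* New helper introduced by this patch: */
static inline bool owner_on_cpu(struct task_struct *owner)
{
	/* Don't spin if the owner is off-CPU or its vCPU has been preempted. */
	return owner->on_cpu && !vcpu_is_preempted(task_cpu(owner));
}

/* rwsem_can_spin_on_owner(): the owner check, still under rcu_read_lock(): */
	owner = READ_ONCE(sem->owner);
	if (owner)
		ret = is_rwsem_owner_spinnable(owner) && owner_on_cpu(owner);

/* rwsem_spin_on_owner(): the abort condition while spinning on the owner: */
	if (need_resched() || !owner_on_cpu(owner)) {
		rcu_read_unlock();
		return false;
	}

Both call sites run under rcu_read_lock(), which keeps the owner's task_struct valid while owner->on_cpu is read; the vcpu_is_preempted() part avoids wasting cycles spinning on a lock whose holder runs on a preempted guest vCPU.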
