locking/mutexes/mcs: Correct barrier usage
This patch corrects the way memory barriers are used in the MCS lock
with the smp_load_acquire() and smp_store_release() functions.  The
previous barriers could allow memory operations to leak out of the
critical section if the MCS lock is used by itself.  That is not a
problem while the MCS lock is embedded in the mutex, but it will become
an issue when the MCS lock is used elsewhere.

The patch removes the incorrect barriers and puts in the correct ones,
pairing smp_load_acquire() in the lock path with smp_store_release() in
the unlock path.

Suggested-by: Michel Lespinasse <[email protected]>
Reviewed-by: Paul E. McKenney <[email protected]>
Signed-off-by: Waiman Long <[email protected]>
Signed-off-by: Jason Low <[email protected]>
Signed-off-by: Tim Chen <[email protected]>
Signed-off-by: Peter Zijlstra <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Link: http://lkml.kernel.org/r/1390347353.3138.62.camel@schen9-DESK
Signed-off-by: Ingo Molnar <[email protected]>
longman88 authored and Ingo Molnar committed Jan 28, 2014
1 parent 270750d commit aff7385
Showing 1 changed file with 13 additions and 5 deletions.
kernel/locking/mutex.c (13 additions, 5 deletions)
@@ -136,9 +136,12 @@ void mspin_lock(struct mspin_node **lock, struct mspin_node *node)
                 return;
         }
         ACCESS_ONCE(prev->next) = node;
-        smp_wmb();
-        /* Wait until the lock holder passes the lock down */
-        while (!ACCESS_ONCE(node->locked))
+        /*
+         * Wait until the lock holder passes the lock down.
+         * Using smp_load_acquire() provides a memory barrier that
+         * ensures subsequent operations happen after the lock is acquired.
+         */
+        while (!(smp_load_acquire(&node->locked)))
                 arch_mutex_cpu_relax();
 }

@@ -156,8 +159,13 @@ static void mspin_unlock(struct mspin_node **lock, struct mspin_node *node)
                 while (!(next = ACCESS_ONCE(node->next)))
                         arch_mutex_cpu_relax();
         }
-        ACCESS_ONCE(next->locked) = 1;
-        smp_wmb();
+        /*
+         * Pass lock to next waiter.
+         * smp_store_release() provides a memory barrier to ensure
+         * all operations in the critical section has been completed
+         * before unlocking.
+         */
+        smp_store_release(&next->locked, 1);
 }
 
 /*
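To make the acquire/release pairing in the hunks above concrete, here is a minimal, hedged sketch of the same MCS hand-off pattern written against C11 <stdatomic.h> rather than the kernel's smp_load_acquire()/smp_store_release() primitives. The mcs_node, mcs_lock and mcs_unlock names are illustrative only and do not come from mutex.c.

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct mcs_node {
        struct mcs_node *_Atomic next;
        atomic_bool locked;
};

static void mcs_lock(struct mcs_node *_Atomic *lock, struct mcs_node *node)
{
        struct mcs_node *prev;

        atomic_store_explicit(&node->next, NULL, memory_order_relaxed);
        atomic_store_explicit(&node->locked, false, memory_order_relaxed);

        /* Join the tail of the waiter queue. */
        prev = atomic_exchange_explicit(lock, node, memory_order_acq_rel);
        if (prev == NULL)
                return; /* Queue was empty: lock acquired immediately. */

        /* Publish our node to the previous tail. */
        atomic_store_explicit(&prev->next, node, memory_order_release);

        /*
         * Wait until the lock holder passes the lock down.  The acquire
         * load pairs with the release store in mcs_unlock(), so the
         * critical section cannot start before the hand-off is visible.
         */
        while (!atomic_load_explicit(&node->locked, memory_order_acquire))
                ; /* the kernel would call arch_mutex_cpu_relax() here */
}

static void mcs_unlock(struct mcs_node *_Atomic *lock, struct mcs_node *node)
{
        struct mcs_node *next =
                atomic_load_explicit(&node->next, memory_order_acquire);

        if (next == NULL) {
                struct mcs_node *expected = node;

                /* No visible successor: try to empty the queue. */
                if (atomic_compare_exchange_strong_explicit(lock, &expected,
                                NULL, memory_order_release,
                                memory_order_relaxed))
                        return;

                /* A successor is enqueueing itself; wait for the link. */
                while (!(next = atomic_load_explicit(&node->next,
                                                     memory_order_acquire)))
                        ;
        }

        /*
         * Pass the lock to the next waiter.  The release store pairs with
         * the acquire load in mcs_lock(), so every write made inside the
         * critical section is visible to the successor before it observes
         * locked == true.
         */
        atomic_store_explicit(&next->locked, true, memory_order_release);
}

This is only a sketch of the acquire/release discipline the patch introduces; the kernel code additionally relies on ACCESS_ONCE, xchg() and arch_mutex_cpu_relax() as shown in the diff.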
