ipc/sem.c: update/correct memory barriers
sem_lock() did not properly pair memory barriers:

!spin_is_locked() and spin_unlock_wait() are both only control barriers.
The code needs an acquire barrier, otherwise the CPU might perform read
operations before the lock test.

As no primitive exists inside <include/spinlock.h> and since it seems
no one wants another primitive, the code creates a local primitive within
ipc/sem.c.

With regards to -stable:

The change of sem_wait_array() is a bugfix, the change to sem_lock() is a
nop (just a preprocessor redefinition to improve the readability).  The
bugfix is necessary for all kernels that use sem_wait_array() (i.e.:
starting from 3.10).

Signed-off-by: Manfred Spraul <[email protected]>
Reported-by: Oleg Nesterov <[email protected]>
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Cc: "Paul E. McKenney" <[email protected]>
Cc: Kirill Tkhai <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: Davidlohr Bueso <[email protected]>
Cc: <[email protected]>	[3.10+]
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
manfred-colorfu authored and torvalds committed Aug 14, 2015
1 parent 7f6bf39 commit 3ed1f8a
Showing 1 changed file with 14 additions and 4 deletions.

ipc/sem.c
@@ -252,6 +252,16 @@ static void sem_rcu_free(struct rcu_head *head)
 	ipc_rcu_free(head);
 }
 
+/*
+ * spin_unlock_wait() and !spin_is_locked() are not memory barriers, they
+ * are only control barriers.
+ * The code must pair with spin_unlock(&sem->lock) or
+ * spin_unlock(&sem_perm.lock), thus just the control barrier is insufficient.
+ *
+ * smp_rmb() is sufficient, as writes cannot pass the control barrier.
+ */
+#define ipc_smp_acquire__after_spin_is_unlocked()	smp_rmb()
+
 /*
  * Wait until all currently ongoing simple ops have completed.
  * Caller must own sem_perm.lock.
@@ -275,6 +285,7 @@ static void sem_wait_array(struct sem_array *sma)
 		sem = sma->sem_base + i;
 		spin_unlock_wait(&sem->lock);
 	}
+	ipc_smp_acquire__after_spin_is_unlocked();
 }
 
 /*
@@ -327,13 +338,12 @@ static inline int sem_lock(struct sem_array *sma, struct sembuf *sops,
 		/* Then check that the global lock is free */
 		if (!spin_is_locked(&sma->sem_perm.lock)) {
 			/*
-			 * The ipc object lock check must be visible on all
-			 * cores before rechecking the complex count. Otherwise
-			 * we can race with another thread that does:
+			 * We need a memory barrier with acquire semantics,
+			 * otherwise we can race with another thread that does:
 			 *	complex_count++;
 			 *	spin_unlock(sem_perm.lock);
 			 */
-			smp_rmb();
+			ipc_smp_acquire__after_spin_is_unlocked();
 
 			/*
 			 * Now repeat the test of complex_count:
