ipc/sem.c: use READ_ONCE()/WRITE_ONCE() for use_global_lock
The patch solves three weaknesses in ipc/sem.c:

1) The initial read of use_global_lock in sem_lock() is an intentional
   race.  KCSAN detects these accesses and prints a warning.

2) The code assumes that plain C read/writes are not mangled by the CPU
   or the compiler.

3) The comment in sysvipc_sem_proc_show() was hard to understand: the
   rest of the comments in ipc/sem.c speak about sem_perm.lock, and
   suddenly this function speaks about ipc_lock_object().

To solve 1) and 2), use READ_ONCE()/WRITE_ONCE().  Plain C reads are used
in code that owns sma->sem_perm.lock.

The comment is updated to solve 3).
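
For illustration, a minimal userspace sketch of the pattern follows: the racy
fast-path check uses READ_ONCE(), while the lock owner still publishes updates
with WRITE_ONCE() so the lockless reader never observes a torn or
compiler-mangled value. Everything below (struct sem_like, the function names,
and the simplified READ_ONCE()/WRITE_ONCE() definitions) is illustrative only,
not the kernel code itself.

/*
 * Minimal sketch, not kernel code: struct/function names are made up and
 * READ_ONCE()/WRITE_ONCE() are simplified stand-ins for the kernel macros.
 */
#include <pthread.h>

#define READ_ONCE(x)		(*(volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, val)	(*(volatile __typeof__(x) *)&(x) = (val))

struct sem_like {
	pthread_mutex_t lock;	/* plays the role of sma->sem_perm.lock */
	int use_global_lock;	/* hint, read locklessly on the fast path */
};

static void global_mode_enter(struct sem_like *s)
{
	pthread_mutex_lock(&s->lock);
	/*
	 * The lock owner may use plain reads for its own checks, but every
	 * write pairs with the lockless reader below via WRITE_ONCE(), so
	 * the reader can never see a torn or compiler-invented value.
	 */
	WRITE_ONCE(s->use_global_lock, 10);
	pthread_mutex_unlock(&s->lock);
}

static int fast_path_possible(struct sem_like *s)
{
	/*
	 * Intentionally racy check, like the initial test in sem_lock():
	 * no locking, no barrier.  READ_ONCE() only stops the compiler
	 * from tearing, fusing or re-reading the load.
	 */
	return !READ_ONCE(s->use_global_lock);
}

int main(void)
{
	struct sem_like s = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.use_global_lock = 0,
	};

	global_mode_enter(&s);
	return fast_path_possible(&s) ? 1 : 0;
}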

[[email protected]: use READ_ONCE()/WRITE_ONCE() for use_global_lock]
  Link: https://lkml.kernel.org/r/[email protected]

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Manfred Spraul <[email protected]>
Reviewed-by: Paul E. McKenney <[email protected]>
Reviewed-by: Davidlohr Bueso <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
manfred-colorfu authored and torvalds committed Jul 1, 2021
1 parent bc8136a commit 17d056e
14 changes: 9 additions & 5 deletions ipc/sem.c
@@ -217,6 +217,8 @@ static int sysvipc_sem_proc_show(struct seq_file *s, void *it);
  * this smp_load_acquire(), this is guaranteed because the smp_load_acquire()
  * is inside a spin_lock() and after a write from 0 to non-zero a
  * spin_lock()+spin_unlock() is done.
+ * To prevent the compiler/cpu temporarily writing 0 to use_global_lock,
+ * READ_ONCE()/WRITE_ONCE() is used.
  *
  * 2) queue.status: (SEM_BARRIER_2)
  * Initialization is done while holding sem_lock(), so no further barrier is
@@ -342,10 +344,10 @@ static void complexmode_enter(struct sem_array *sma)
 		 * Nothing to do, just reset the
 		 * counter until we return to simple mode.
 		 */
-		sma->use_global_lock = USE_GLOBAL_LOCK_HYSTERESIS;
+		WRITE_ONCE(sma->use_global_lock, USE_GLOBAL_LOCK_HYSTERESIS);
 		return;
 	}
-	sma->use_global_lock = USE_GLOBAL_LOCK_HYSTERESIS;
+	WRITE_ONCE(sma->use_global_lock, USE_GLOBAL_LOCK_HYSTERESIS);
 
 	for (i = 0; i < sma->sem_nsems; i++) {
 		sem = &sma->sems[i];
@@ -371,7 +373,8 @@ static void complexmode_tryleave(struct sem_array *sma)
 		/* See SEM_BARRIER_1 for purpose/pairing */
 		smp_store_release(&sma->use_global_lock, 0);
 	} else {
-		sma->use_global_lock--;
+		WRITE_ONCE(sma->use_global_lock,
+			   sma->use_global_lock-1);
 	}
 }
 
@@ -412,7 +415,7 @@ static inline int sem_lock(struct sem_array *sma, struct sembuf *sops,
 	 * Initial check for use_global_lock. Just an optimization,
 	 * no locking, no memory barrier.
 	 */
-	if (!sma->use_global_lock) {
+	if (!READ_ONCE(sma->use_global_lock)) {
 		/*
 		 * It appears that no complex operation is around.
 		 * Acquire the per-semaphore lock.
@@ -2436,7 +2439,8 @@ static int sysvipc_sem_proc_show(struct seq_file *s, void *it)
 
 	/*
 	 * The proc interface isn't aware of sem_lock(), it calls
-	 * ipc_lock_object() directly (in sysvipc_find_ipc).
+	 * ipc_lock_object(), i.e. spin_lock(&sma->sem_perm.lock).
+	 * (in sysvipc_find_ipc)
 	 * In order to stay compatible with sem_lock(), we must
 	 * enter / leave complex_mode.
 	 */