Merge tag 'locking-core-2020-08-03' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull locking updates from Ingo Molnar:

 - LKMM updates: mostly documentation changes, but also some new litmus
   tests for atomic ops.

 - KCSAN updates: the most important change is that GCC 11 now has all
   fixes in place to support KCSAN, so GCC support can be enabled again.
   Also more annotations.

 - futex updates: minor cleanups and simplifications

 - seqlock updates: merge preparatory changes/cleanups for the
   'associated locks' facilities.

 - lockdep updates:
    - simplify IRQ trace event handling
    - add various new debug checks
    - simplify header dependencies, split out <linux/lockdep_types.h>,
      decouple lockdep from other low level headers some more
    - fix NMI handling

 - misc cleanups and smaller fixes

* tag 'locking-core-2020-08-03' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (60 commits)
  kcsan: Improve IRQ state trace reporting
  lockdep: Refactor IRQ trace events fields into struct
  seqlock: lockdep assert non-preemptibility on seqcount_t write
  lockdep: Add preemption enabled/disabled assertion APIs
  seqlock: Implement raw_seqcount_begin() in terms of raw_read_seqcount()
  seqlock: Add kernel-doc for seqcount_t and seqlock_t APIs
  seqlock: Reorder seqcount_t and seqlock_t API definitions
  seqlock: seqcount_t latch: End read sections with read_seqcount_retry()
  seqlock: Properly format kernel-doc code samples
  Documentation: locking: Describe seqlock design and usage
  locking/qspinlock: Do not include atomic.h from qspinlock_types.h
  locking/atomic: Move ATOMIC_INIT into linux/types.h
  lockdep: Move list.h inclusion into lockdep.h
  locking/lockdep: Fix TRACE_IRQFLAGS vs. NMIs
  futex: Remove unused or redundant includes
  futex: Consistently use fshared as boolean
  futex: Remove needless goto's
  futex: Remove put_futex_key()
  rwsem: fix commas in initialisation
  docs: locking: Replace HTTP links with HTTPS ones
  ...
torvalds committed Aug 3, 2020
2 parents 8f0cb66 + 992414a commit 9ba19cc
Showing 86 changed files with 2,797 additions and 854 deletions.
24 changes: 12 additions & 12 deletions Documentation/atomic_t.txt
@@ -85,21 +85,21 @@ smp_store_release() respectively. Therefore, if you find yourself only using
the Non-RMW operations of atomic_t, you do not in fact need atomic_t at all
and are doing it wrong.

-A subtle detail of atomic_set{}() is that it should be observable to the RMW
-ops. That is:
+A note for the implementation of atomic_set{}() is that it must not break the
+atomicity of the RMW ops. That is:

-C atomic-set
+C Atomic-RMW-ops-are-atomic-WRT-atomic_set

{
-	atomic_set(v, 1);
+	atomic_t v = ATOMIC_INIT(1);
}

-P1(atomic_t *v)
+P0(atomic_t *v)
{
-	atomic_add_unless(v, 1, 0);
+	(void)atomic_add_unless(v, 1, 0);
}

-P2(atomic_t *v)
+P1(atomic_t *v)
{
	atomic_set(v, 0);
}
@@ -233,34 +233,34 @@ as well. Similarly, something like:
is an ACQUIRE pattern (though very much not typical), but again the barrier is
strictly stronger than ACQUIRE. As illustrated:

-C strong-acquire
+C Atomic-RMW+mb__after_atomic-is-stronger-than-acquire

{
}

-P1(int *x, atomic_t *y)
+P0(int *x, atomic_t *y)
{
	r0 = READ_ONCE(*x);
	smp_rmb();
	r1 = atomic_read(y);
}

-P2(int *x, atomic_t *y)
+P1(int *x, atomic_t *y)
{
	atomic_inc(y);
	smp_mb__after_atomic();
	WRITE_ONCE(*x, 1);
}

exists
-(r0=1 /\ r1=0)
+(0:r0=1 /\ 0:r1=0)

This should not happen; but a hypothetical atomic_inc_acquire() --
(void)atomic_fetch_inc_acquire() for instance -- would allow the outcome,
because it would not order the W part of the RMW against the following
WRITE_ONCE. Thus:

-	P1			P2
+	P0			P1

	t = LL.acq *y (0)
	t++;
3 changes: 2 additions & 1 deletion Documentation/dev-tools/kcsan.rst
@@ -8,7 +8,8 @@ approach to detect races. KCSAN's primary purpose is to detect `data races`_.
Usage
-----

-KCSAN requires Clang version 11 or later.
+KCSAN is supported by both GCC and Clang. With GCC we require version 11 or
+later, and with Clang we also require version 11 or later.

To enable KCSAN configure the kernel with::

35 changes: 35 additions & 0 deletions Documentation/litmus-tests/README
@@ -0,0 +1,35 @@
============
LITMUS TESTS
============

Each subdirectory contains litmus tests that illustrate the semantics of
the respective kernel APIs.
For more information about how to "run" a litmus test or how to generate
a kernel test module based on a litmus test, please see
tools/memory-model/README.


atomic (/atomic directory)
--------------------------

Atomic-RMW+mb__after_atomic-is-stronger-than-acquire.litmus
Test that an atomic RMW followed by a smp_mb__after_atomic() is
stronger than a normal acquire: both the read and write parts of
the RMW are ordered before the subsequent memory accesses.

Atomic-RMW-ops-are-atomic-WRT-atomic_set.litmus
Test that atomic_set() cannot break the atomicity of atomic RMWs.
NOTE: Requires herd7 7.56 or later, which supports "(void)expr".


RCU (/rcu directory)
--------------------

MP+onceassign+derefonce.litmus (under tools/memory-model/litmus-tests/)
Demonstrates the use of rcu_assign_pointer() and rcu_dereference() to
ensure that an RCU reader will not see pre-initialization garbage.

RCU+sync+read.litmus
RCU+sync+free.litmus
Both the above litmus tests demonstrate the RCU grace period guarantee
that an RCU read-side critical section can never span a grace period.
32 changes: 32 additions & 0 deletions Documentation/litmus-tests/atomic/Atomic-RMW+mb__after_atomic-is-stronger-than-acquire.litmus
@@ -0,0 +1,32 @@
C Atomic-RMW+mb__after_atomic-is-stronger-than-acquire

(*
* Result: Never
*
* Test that an atomic RMW followed by a smp_mb__after_atomic() is
* stronger than a normal acquire: both the read and write parts of
* the RMW are ordered before the subsequent memory accesses.
*)

{
}

P0(int *x, atomic_t *y)
{
int r0;
int r1;

r0 = READ_ONCE(*x);
smp_rmb();
r1 = atomic_read(y);
}

P1(int *x, atomic_t *y)
{
atomic_inc(y);
smp_mb__after_atomic();
WRITE_ONCE(*x, 1);
}

exists
(0:r0=1 /\ 0:r1=0)
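
In kernel code, the ordering this test checks matters wherever an atomic
RMW publishes data to other CPUs. A minimal kernel-style sketch of the
pattern (hypothetical code, not part of this commit; producer() and
consumer() are made-up names mirroring P1 and P0):

#include <linux/atomic.h>

static atomic_t y = ATOMIC_INIT(0);
static int x;

void producer(void)			/* plays the role of P1 */
{
	atomic_inc(&y);			/* atomic RMW */
	smp_mb__after_atomic();		/* full barrier: orders both the R
					 * and W parts of the RMW before
					 * the following store */
	WRITE_ONCE(x, 1);
}

int consumer(void)			/* plays the role of P0 */
{
	int r0, r1;

	r0 = READ_ONCE(x);
	smp_rmb();
	r1 = atomic_read(&y);
	return r0 == 1 && r1 == 0;	/* "Never" per the test above */
}
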
25 changes: 25 additions & 0 deletions Documentation/litmus-tests/atomic/Atomic-RMW-ops-are-atomic-WRT-atomic_set.litmus
@@ -0,0 +1,25 @@
C Atomic-RMW-ops-are-atomic-WRT-atomic_set

(*
* Result: Never
*
* Test that atomic_set() cannot break the atomicity of atomic RMWs.
* NOTE: This requires herd7 7.56 or later which supports "(void)expr".
*)

{
atomic_t v = ATOMIC_INIT(1);
}

P0(atomic_t *v)
{
(void)atomic_add_unless(v, 1, 0);
}

P1(atomic_t *v)
{
atomic_set(v, 0);
}

exists
(v=2)
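
For intuition without herd7, a rough user-space analogue of the same
guarantee can be written with C11 atomics (hypothetical code, not part of
this commit; a CAS loop stands in for atomic_add_unless() and
atomic_store() for atomic_set()). Note that one run only samples a single
interleaving, whereas herd7 enumerates them all:

#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>

static atomic_int v = 1;

static void *add_unless(void *arg)	/* P0: add 1 unless v == 0 */
{
	int old = atomic_load(&v);

	while (old != 0 &&
	       !atomic_compare_exchange_weak(&v, &old, old + 1))
		;			/* CAS failure refreshes 'old' */
	return NULL;
}

static void *set_zero(void *arg)	/* P1 */
{
	atomic_store(&v, 0);
	return NULL;
}

int main(void)
{
	pthread_t t0, t1;

	pthread_create(&t0, NULL, add_unless, NULL);
	pthread_create(&t1, NULL, set_zero, NULL);
	pthread_join(t0, NULL);
	pthread_join(t1, NULL);

	/* v == 2 would mean the store tore the RMW apart: impossible. */
	assert(atomic_load(&v) != 2);
	return 0;
}
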
42 changes: 42 additions & 0 deletions Documentation/litmus-tests/rcu/RCU+sync+free.litmus
@@ -0,0 +1,42 @@
C RCU+sync+free

(*
* Result: Never
*
* This litmus test demonstrates that an RCU reader can never see a write that
* follows a grace period, if it did not see writes that precede that grace
* period.
*
* This is a typical pattern of RCU usage, where the write before the grace
* period assigns a pointer, and the writes following the grace period destroy
* the object that the pointer used to point to.
*
* This is one implication of the RCU grace-period guarantee, which says (among
* other things) that an RCU read-side critical section cannot span a grace period.
*)

{
int x = 1;
int *y = &x;
int z = 1;
}

P0(int *x, int *z, int **y)
{
int *r0;
int r1;

rcu_read_lock();
r0 = rcu_dereference(*y);
r1 = READ_ONCE(*r0);
rcu_read_unlock();
}

P1(int *x, int *z, int **y)
{
rcu_assign_pointer(*y, z);
synchronize_rcu();
WRITE_ONCE(*x, 0);
}

exists (0:r0=x /\ 0:r1=0)
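
Concretely, the typical usage pattern the test's comment describes looks
like the following kernel-style sketch (hypothetical code, not part of
this commit; struct foo, gp, reader() and updater() are made-up names):

#include <linux/rcupdate.h>
#include <linux/slab.h>

struct foo {
	int data;
};

static struct foo __rcu *gp;		/* RCU-protected pointer */

int reader(void)			/* P0: runs under rcu_read_lock() */
{
	struct foo *p;
	int val = -1;

	rcu_read_lock();
	p = rcu_dereference(gp);
	if (p)
		val = READ_ONCE(p->data);
	rcu_read_unlock();
	return val;
}

void updater(struct foo *newp)		/* P1: publish, wait, then free */
{
	struct foo *old = rcu_dereference_protected(gp, 1);

	rcu_assign_pointer(gp, newp);
	synchronize_rcu();		/* pre-existing readers are done */
	kfree(old);			/* no reader can still hold 'old' */
}
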
37 changes: 37 additions & 0 deletions Documentation/litmus-tests/rcu/RCU+sync+read.litmus
@@ -0,0 +1,37 @@
C RCU+sync+read

(*
* Result: Never
*
* This litmus test demonstrates that after a grace period, an RCU updater always
* sees all stores done in prior RCU read-side critical sections. Such
* read-side critical sections would have ended before the grace period ended.
*
* This is one implication of the RCU grace-period guarantee, which says (among
* other things) that an RCU read-side critical section cannot span a grace period.
*)

{
int x = 0;
int y = 0;
}

P0(int *x, int *y)
{
rcu_read_lock();
WRITE_ONCE(*x, 1);
WRITE_ONCE(*y, 1);
rcu_read_unlock();
}

P1(int *x, int *y)
{
int r0;
int r1;

r0 = READ_ONCE(*x);
synchronize_rcu();
r1 = READ_ONCE(*y);
}

exists (1:r0=1 /\ 1:r1=0)
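
Read as kernel code, the guarantee is: if the updater sees a store made
inside a read-side critical section, then after synchronize_rcu() it sees
every store from that section. A minimal sketch (hypothetical code, not
part of this commit):

#include <linux/bug.h>
#include <linux/rcupdate.h>

static int x, y;

void reader(void)			/* P0 */
{
	rcu_read_lock();
	WRITE_ONCE(x, 1);
	WRITE_ONCE(y, 1);
	rcu_read_unlock();
}

void updater(void)			/* P1 */
{
	int r0, r1;

	r0 = READ_ONCE(x);
	synchronize_rcu();		/* waits out the reader's section */
	r1 = READ_ONCE(y);
	WARN_ON(r0 == 1 && r1 == 0);	/* the "Never" outcome above */
}
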
1 change: 1 addition & 0 deletions Documentation/locking/index.rst
@@ -14,6 +14,7 @@ locking
    mutex-design
    rt-mutex-design
    rt-mutex
+   seqlock
    spinlocks
    ww-mutex-design
    preempt-locking
2 changes: 1 addition & 1 deletion Documentation/locking/mutex-design.rst
@@ -18,7 +18,7 @@ as an alternative to these. This new data structure provided a number
of advantages, including simpler interfaces, and at that time smaller
code (see Disadvantages).

-[1] http://lwn.net/Articles/164802/
+[1] https://lwn.net/Articles/164802/

Implementation
--------------