tools/memory-model: litmus: Add two tests for unlock(A)+lock(B) ordering
The memory model has been updated to provide a stronger ordering
guarantee for unlock(A)+lock(B) on the same CPU/thread. Therefore, add
two litmus tests describing this new guarantee. These tests are simple
yet clearly show how the guarantee can be used, and they also serve as
self-tests for the corresponding change to the model.
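
For example, the tests can be checked directly against the model with
herd7, run from the tools/memory-model directory (assuming a herdtools7
installation and the linux-kernel.cfg file shipped in that directory):

  $ herd7 -conf linux-kernel.cfg litmus-tests/LB+unlocklockonceonce+poacquireonce.litmus

The "Observation ... Never" line in the output should confirm that the
"exists" clause is never satisfied.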

Co-developed-by: Alan Stern <[email protected]>
Signed-off-by: Alan Stern <[email protected]>
Signed-off-by: Boqun Feng <[email protected]>
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Paul E. McKenney <[email protected]>
fbq authored and paulmckrcu committed Dec 1, 2021
1 parent b47c05e commit c438b7d
Showing 3 changed files with 76 additions and 0 deletions.
35 changes: 35 additions & 0 deletions tools/memory-model/litmus-tests/LB+unlocklockonceonce+poacquireonce.litmus
@@ -0,0 +1,35 @@
C LB+unlocklockonceonce+poacquireonce

(*
* Result: Never
*
* If two locked critical sections execute on the same CPU, all accesses
* in the first must execute before any accesses in the second, even if the
* critical sections are protected by different locks. Note: Even when a
* write executes before a read, their memory effects can be reordered from
* the viewpoint of another CPU (the kind of reordering allowed by TSO).
*)

{}

P0(spinlock_t *s, spinlock_t *t, int *x, int *y)
{
	int r1;

	spin_lock(s);
	r1 = READ_ONCE(*x);
	spin_unlock(s);
	spin_lock(t);
	WRITE_ONCE(*y, 1);
	spin_unlock(t);
}

P1(int *x, int *y)
{
	int r2;

	r2 = smp_load_acquire(y);
	WRITE_ONCE(*x, 1);
}

exists (0:r1=1 /\ 1:r2=1)
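
For contrast, consider a hypothetical variant (not part of this patch;
the name and expected result below are this note's assumptions) that
weakens P1's smp_load_acquire() to READ_ONCE(). P1 then no longer
orders its load before its store, so the cycle should become reachable,
just as in LB+poonceonces:

C LB+unlocklockonceonce+poonceonce

(*
 * Result: Sometimes (assumed)
 *
 * With no acquire on P1, nothing orders P1's load of y before its
 * store to x, so P0's unlock+lock ordering alone cannot forbid the
 * cycle.
 *)

{}

P0(spinlock_t *s, spinlock_t *t, int *x, int *y)
{
	int r1;

	spin_lock(s);
	r1 = READ_ONCE(*x);
	spin_unlock(s);
	spin_lock(t);
	WRITE_ONCE(*y, 1);
	spin_unlock(t);
}

P1(int *x, int *y)
{
	int r2;

	r2 = READ_ONCE(*y);
	WRITE_ONCE(*x, 1);
}

exists (0:r1=1 /\ 1:r2=1)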
33 changes: 33 additions & 0 deletions tools/memory-model/litmus-tests/MP+unlocklockonceonce+fencermbonceonce.litmus
@@ -0,0 +1,33 @@
C MP+unlocklockonceonce+fencermbonceonce

(*
* Result: Never
*
* If two locked critical sections execute on the same CPU, stores in the
* first must propagate to each CPU before stores in the second do, even if
* the critical sections are protected by different locks.
*)

{}

P0(spinlock_t *s, spinlock_t *t, int *x, int *y)
{
	spin_lock(s);
	WRITE_ONCE(*x, 1);
	spin_unlock(s);
	spin_lock(t);
	WRITE_ONCE(*y, 1);
	spin_unlock(t);
}

P1(int *x, int *y)
{
	int r1;
	int r2;

	r1 = READ_ONCE(*y);
	smp_rmb();
	r2 = READ_ONCE(*x);
}

exists (1:r1=1 /\ 1:r2=0)
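
In kernel terms, the pattern this test captures might look like the
following sketch (all names here are illustrative, not taken from the
patch): a writer publishes data under one lock and sets a flag under
another, and a reader that observes the flag is guaranteed to also
observe the data.

static DEFINE_SPINLOCK(data_lock);
static DEFINE_SPINLOCK(flag_lock);
static int shared_data;
static int data_ready;

/* Writer, on one CPU: the unlock+lock pair orders the two stores,
 * even though they are made under different locks. */
void writer(void)
{
	spin_lock(&data_lock);
	WRITE_ONCE(shared_data, 42);
	spin_unlock(&data_lock);
	spin_lock(&flag_lock);
	WRITE_ONCE(data_ready, 1);
	spin_unlock(&flag_lock);
}

/* Reader, on another CPU: if the flag is seen, smp_rmb() ensures the
 * data is seen as well. */
int reader(void)
{
	int r = -1;

	if (READ_ONCE(data_ready)) {
		smp_rmb();
		r = READ_ONCE(shared_data);	/* must be 42 */
	}
	return r;
}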
8 changes: 8 additions & 0 deletions tools/memory-model/litmus-tests/README
@@ -63,6 +63,10 @@ LB+poonceonces.litmus
As above, but with store-release replaced with WRITE_ONCE()
and load-acquire replaced with READ_ONCE().

LB+unlocklockonceonce+poacquireonce.litmus
Does an unlock+lock pair provide an ordering guarantee between a
load and a store?

MP+onceassign+derefonce.litmus
As below, but with rcu_assign_pointer() and an rcu_dereference().

@@ -90,6 +94,10 @@ MP+porevlocks.litmus
As below, but with the first access of the writer process
and the second access of reader process protected by a lock.

MP+unlocklockonceonce+fencermbonceonce.litmus
Does an unlock+lock pair provide an ordering guarantee between a
store and another store?

MP+fencewmbonceonce+fencermbonceonce.litmus
Does a smp_wmb() (between the stores) and an smp_rmb() (between
the loads) suffice for the message-passing litmus test, where one