2023-03-21 01:02:53

by Paul E. McKenney

Subject: [PATCH memory-model 0/8] LKMM updates for v6.4

Hello!

This series provides LKMM updates:

1. tools/memory-model: Update some warning labels, courtesy of
Alan Stern.

2. tools/memory-model: Unify UNLOCK+LOCK pairings to
po-unlock-lock-po, courtesy of Jonas Oberhauser.

3. tools/memory-model: Add smp_mb__after_srcu_read_unlock().

4. tools/memory-model: Restrict to-r to read-read address dependency,
courtesy of "Joel Fernandes (Google)".

5. tools/memory-model: Provide exact SRCU semantics, courtesy of
Alan Stern.

6. tools/memory-model: Make ppo a subrelation of po, courtesy of
Jonas Oberhauser.

7. tools/memory-model: Add documentation about SRCU read-side
critical sections, courtesy of Alan Stern.

8. Documentation: litmus-tests: Correct spelling, courtesy of
Randy Dunlap.

Thanx, Paul

------------------------------------------------------------------------

b/Documentation/litmus-tests/README | 2
b/tools/memory-model/Documentation/explanation.txt | 178 +++++++++++++++++++--
b/tools/memory-model/linux-kernel.bell | 10 -
b/tools/memory-model/linux-kernel.cat | 15 +
b/tools/memory-model/linux-kernel.def | 1
b/tools/memory-model/lock.cat | 6
tools/memory-model/linux-kernel.bell | 20 --
tools/memory-model/linux-kernel.cat | 7
tools/memory-model/linux-kernel.def | 6
9 files changed, 205 insertions(+), 40 deletions(-)


2023-03-21 01:03:17

by Paul E. McKenney

Subject: [PATCH memory-model 1/8] tools/memory-model: Update some warning labels

From: Alan Stern <[email protected]>

Some of the warning labels used in the LKMM are unfortunately
ambiguous. In particular, the same warning is used for both an
unmatched rcu_read_lock() call and for an unmatched rcu_read_unlock()
call. Likewise for the srcu_* equivalents. Also, the warning about
passing a wrong value to srcu_read_unlock() -- i.e., a value different
from the one returned by the matching srcu_read_lock() -- talks about
bad nesting rather than non-matching values.

Let's update the warning labels to make their meanings more clear.
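
For example (an illustrative fragment, not part of this patch), a
litmus-test process that takes an SRCU lock and never releases it:

idx = srcu_read_lock(&s); // no matching srcu_read_unlock(&s, idx)

is now flagged as unmatched-srcu-lock, while a stray srcu_read_unlock()
is flagged as unmatched-srcu-unlock, so the two failure modes are no
longer reported under a single ambiguous label.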

Signed-off-by: Alan Stern <[email protected]>
Reviewed-by: Jonas Oberhauser <[email protected]>
Signed-off-by: Paul E. McKenney <[email protected]>
---
tools/memory-model/linux-kernel.bell | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/tools/memory-model/linux-kernel.bell b/tools/memory-model/linux-kernel.bell
index 70a9073dec3e..dc464854d28a 100644
--- a/tools/memory-model/linux-kernel.bell
+++ b/tools/memory-model/linux-kernel.bell
@@ -53,8 +53,8 @@ let rcu-rscs = let rec
in matched

(* Validate nesting *)
-flag ~empty Rcu-lock \ domain(rcu-rscs) as unbalanced-rcu-locking
-flag ~empty Rcu-unlock \ range(rcu-rscs) as unbalanced-rcu-locking
+flag ~empty Rcu-lock \ domain(rcu-rscs) as unmatched-rcu-lock
+flag ~empty Rcu-unlock \ range(rcu-rscs) as unmatched-rcu-unlock

(* Compute matching pairs of nested Srcu-lock and Srcu-unlock *)
let srcu-rscs = let rec
@@ -69,14 +69,14 @@ let srcu-rscs = let rec
in matched

(* Validate nesting *)
-flag ~empty Srcu-lock \ domain(srcu-rscs) as unbalanced-srcu-locking
-flag ~empty Srcu-unlock \ range(srcu-rscs) as unbalanced-srcu-locking
+flag ~empty Srcu-lock \ domain(srcu-rscs) as unmatched-srcu-lock
+flag ~empty Srcu-unlock \ range(srcu-rscs) as unmatched-srcu-unlock

(* Check for use of synchronize_srcu() inside an RCU critical section *)
flag ~empty rcu-rscs & (po ; [Sync-srcu] ; po) as invalid-sleep

(* Validate SRCU dynamic match *)
-flag ~empty different-values(srcu-rscs) as srcu-bad-nesting
+flag ~empty different-values(srcu-rscs) as srcu-bad-value-match

(* Compute marked and plain memory accesses *)
let Marked = (~M) | IW | Once | Release | Acquire | domain(rmw) | range(rmw) |
--
2.40.0.rc2


2023-03-21 01:03:22

by Paul E. McKenney

Subject: [PATCH memory-model 7/8] tools/memory-model: Add documentation about SRCU read-side critical sections

From: Alan Stern <[email protected]>

Expand the discussion of SRCU and its read-side critical sections in
the Linux Kernel Memory Model documentation file explanation.txt. The
new material discusses recent changes to the memory model made in
commit 6cd244c87428 ("tools/memory-model: Provide exact SRCU
semantics").

Signed-off-by: Alan Stern <[email protected]>
Co-developed-by: Joel Fernandes (Google) <[email protected]>
Signed-off-by: Joel Fernandes (Google) <[email protected]>
Reviewed-by: Akira Yokosawa <[email protected]>
Cc: Andrea Parri <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Jade Alglave <[email protected]>
Cc: Jonas Oberhauser <[email protected]>
Cc: Luc Maranget <[email protected]>
Cc: "Paul E. McKenney" <[email protected]>
Cc: Peter Zijlstra <[email protected]>
CC: Will Deacon <[email protected]>
Signed-off-by: Paul E. McKenney <[email protected]>
---
.../Documentation/explanation.txt | 178 ++++++++++++++++--
1 file changed, 167 insertions(+), 11 deletions(-)

diff --git a/tools/memory-model/Documentation/explanation.txt b/tools/memory-model/Documentation/explanation.txt
index 8e7085238470..6dc8b3642458 100644
--- a/tools/memory-model/Documentation/explanation.txt
+++ b/tools/memory-model/Documentation/explanation.txt
@@ -28,9 +28,10 @@ Explanation of the Linux-Kernel Memory Consistency Model
20. THE HAPPENS-BEFORE RELATION: hb
21. THE PROPAGATES-BEFORE RELATION: pb
22. RCU RELATIONS: rcu-link, rcu-gp, rcu-rscsi, rcu-order, rcu-fence, and rb
- 23. LOCKING
- 24. PLAIN ACCESSES AND DATA RACES
- 25. ODDS AND ENDS
+ 23. SRCU READ-SIDE CRITICAL SECTIONS
+ 24. LOCKING
+ 25. PLAIN ACCESSES AND DATA RACES
+ 26. ODDS AND ENDS



@@ -1848,14 +1849,169 @@ section in P0 both starts before P1's grace period does and ends
before it does, and the critical section in P2 both starts after P1's
grace period does and ends after it does.

-Addendum: The LKMM now supports SRCU (Sleepable Read-Copy-Update) in
-addition to normal RCU. The ideas involved are much the same as
-above, with new relations srcu-gp and srcu-rscsi added to represent
-SRCU grace periods and read-side critical sections. There is a
-restriction on the srcu-gp and srcu-rscsi links that can appear in an
-rcu-order sequence (the srcu-rscsi links must be paired with srcu-gp
-links having the same SRCU domain with proper nesting); the details
-are relatively unimportant.
+The LKMM supports SRCU (Sleepable Read-Copy-Update) in addition to
+normal RCU. The ideas involved are much the same as above, with new
+relations srcu-gp and srcu-rscsi added to represent SRCU grace periods
+and read-side critical sections. However, there are some significant
+differences between RCU read-side critical sections and their SRCU
+counterparts, as described in the next section.
+
+
+SRCU READ-SIDE CRITICAL SECTIONS
+--------------------------------
+
+The LKMM uses the srcu-rscsi relation to model SRCU read-side critical
+sections. They differ from RCU read-side critical sections in the
+following respects:
+
+1. Unlike the analogous RCU primitives, synchronize_srcu(),
+ srcu_read_lock(), and srcu_read_unlock() take a pointer to a
+ struct srcu_struct as an argument. This structure is called
+ an SRCU domain, and calls linked by srcu-rscsi must have the
+ same domain. Read-side critical sections and grace periods
+ associated with different domains are independent of one
+ another; the SRCU version of the RCU Guarantee applies only
+ to pairs of critical sections and grace periods having the
+ same domain.
+
+2. srcu_read_lock() returns a value, called the index, which must
+ be passed to the matching srcu_read_unlock() call. Unlike
+ rcu_read_lock() and rcu_read_unlock(), an srcu_read_lock()
+ call does not always have to match the next unpaired
+ srcu_read_unlock(). In fact, it is possible for two SRCU
+ read-side critical sections to overlap partially, as in the
+ following example (where s is an srcu_struct and idx1 and idx2
+ are integer variables):
+
+ idx1 = srcu_read_lock(&s); // Start of first RSCS
+ idx2 = srcu_read_lock(&s); // Start of second RSCS
+ srcu_read_unlock(&s, idx1); // End of first RSCS
+ srcu_read_unlock(&s, idx2); // End of second RSCS
+
+ The matching is determined entirely by the domain pointer and
+ index value. By contrast, if the calls had been
+ rcu_read_lock() and rcu_read_unlock() then they would have
+ created two nested (fully overlapping) read-side critical
+ sections: an inner one and an outer one.
+
+3. The srcu_down_read() and srcu_up_read() primitives work
+ exactly like srcu_read_lock() and srcu_read_unlock(), except
+ that matching calls don't have to execute on the same CPU.
+ (The names are meant to be suggestive of operations on
+ semaphores.) Since the matching is determined by the domain
+ pointer and index value, these primitives make it possible for
+ an SRCU read-side critical section to start on one CPU and end
+ on another, so to speak.
+
+In order to account for these properties of SRCU, the LKMM models
+srcu_read_lock() as a special type of load event (which is
+appropriate, since it takes a memory location as argument and returns
+a value, just as a load does) and srcu_read_unlock() as a special type
+of store event (again appropriate, since it takes as arguments a
+memory location and a value). These loads and stores are annotated as
+belonging to the "srcu-lock" and "srcu-unlock" event classes
+respectively.
+
+This approach allows the LKMM to tell whether two events are
+associated with the same SRCU domain, simply by checking whether they
+access the same memory location (i.e., they are linked by the loc
+relation). It also gives a way to tell which unlock matches a
+particular lock, by checking for the presence of a data dependency
+from the load (srcu-lock) to the store (srcu-unlock). For example,
+given the situation outlined earlier (with statement labels added):
+
+ A: idx1 = srcu_read_lock(&s);
+ B: idx2 = srcu_read_lock(&s);
+ C: srcu_read_unlock(&s, idx1);
+ D: srcu_read_unlock(&s, idx2);
+
+the LKMM will treat A and B as loads from s yielding values saved in
+idx1 and idx2 respectively. Similarly, it will treat C and D as
+though they stored the values from idx1 and idx2 in s. The end result
+is much as if we had written:
+
+ A: idx1 = READ_ONCE(s);
+ B: idx2 = READ_ONCE(s);
+ C: WRITE_ONCE(s, idx1);
+ D: WRITE_ONCE(s, idx2);
+
+except for the presence of the special srcu-lock and srcu-unlock
+annotations. You can see at once that we have A ->data C and
+B ->data D. These dependencies tell the LKMM that C is the
+srcu-unlock event matching srcu-lock event A, and D is the
+srcu-unlock event matching srcu-lock event B.
+
+This approach is admittedly a hack, and it has the potential to lead
+to problems. For example, in:
+
+ idx1 = srcu_read_lock(&s);
+ srcu_read_unlock(&s, idx1);
+ idx2 = srcu_read_lock(&s);
+ srcu_read_unlock(&s, idx2);
+
+the LKMM will believe that idx2 must have the same value as idx1,
+since it reads from the immediately preceding store of idx1 in s.
+Fortunately this won't matter, assuming that litmus tests never do
+anything with SRCU index values other than pass them to
+srcu_read_unlock() or srcu_up_read() calls.
+
+However, sometimes it is necessary to store an index value in a
+shared variable temporarily. In fact, this is the only way for
+srcu_down_read() to pass the index it gets to an srcu_up_read() call
+on a different CPU. In more detail, we might have something like:
+
+ struct srcu_struct s;
+ int x;
+
+ P0()
+ {
+ int r0;
+
+ A: r0 = srcu_down_read(&s);
+ B: WRITE_ONCE(x, r0);
+ }
+
+ P1()
+ {
+ int r1;
+
+ C: r1 = READ_ONCE(x);
+ D: srcu_up_read(&s, r1);
+ }
+
+Assuming that P1 executes after P0 and does read the index value
+stored in x, we can write this (using brackets to represent event
+annotations) as:
+
+ A[srcu-lock] ->data B[once] ->rf C[once] ->data D[srcu-unlock].
+
+The LKMM defines a carry-srcu-data relation to express this pattern;
+it permits an arbitrarily long sequence of
+
+ data ; rf
+
+pairs (that is, a data link followed by an rf link) to occur between
+an srcu-lock event and the final data dependency leading to the
+matching srcu-unlock event. carry-srcu-data is complicated by the
+need to ensure that none of the intermediate store events in this
+sequence are instances of srcu-unlock. This is necessary because in a
+pattern like the one above:
+
+ A: idx1 = srcu_read_lock(&s);
+ B: srcu_read_unlock(&s, idx1);
+ C: idx2 = srcu_read_lock(&s);
+ D: srcu_read_unlock(&s, idx2);
+
+the LKMM treats B as a store to the variable s and C as a load from
+that variable, creating an undesirable rf link from B to C:
+
+ A ->data B ->rf C ->data D.
+
+This would cause carry-srcu-data to mistakenly extend a data
+dependency from A to D, giving the impression that D was the
+srcu-unlock event matching A's srcu-lock. To avoid such problems,
+carry-srcu-data does not accept sequences in which the end of any of
+the intermediate ->data links (B above) is an srcu-unlock event.


LOCKING
--
2.40.0.rc2


2023-03-21 01:03:26

by Paul E. McKenney

Subject: [PATCH memory-model 5/8] tools/memory-model: Provide exact SRCU semantics

From: Alan Stern <[email protected]>

LKMM has long provided only approximate handling of SRCU read-side
critical sections. This has not been a pressing problem because LKMM's
traditional handling is correct for the common cases of non-overlapping
and properly nested critical sections. However, LKMM's traditional
handling of partially overlapping critical sections incorrectly fuses
them into one large critical section.

For example, consider the following litmus test:

------------------------------------------------------------------------

C C-srcu-nest-5

(*
* Result: Sometimes
*
* This demonstrates non-nested overlapping of SRCU read-side critical
* sections. Unlike RCU, SRCU critical sections do not unconditionally
* nest.
*)

{}

P0(int *x, int *y, struct srcu_struct *s1)
{
int r1;
int r2;
int r3;
int r4;

r3 = srcu_read_lock(s1);
r2 = READ_ONCE(*y);
r4 = srcu_read_lock(s1);
srcu_read_unlock(s1, r3);
r1 = READ_ONCE(*x);
srcu_read_unlock(s1, r4);
}

P1(int *x, int *y, struct srcu_struct *s1)
{
WRITE_ONCE(*y, 1);
synchronize_srcu(s1);
WRITE_ONCE(*x, 1);
}

locations [0:r1]
exists (0:r1=1 /\ 0:r2=0)

------------------------------------------------------------------------

Current mainline incorrectly flattens the two critical sections into
one larger critical section, giving "Never" instead of the correct
"Sometimes":

------------------------------------------------------------------------

$ herd7 -conf linux-kernel.cfg C-srcu-nest-5.litmus
Test C-srcu-nest-5 Allowed
States 3
0:r1=0; 0:r2=0;
0:r1=0; 0:r2=1;
0:r1=1; 0:r2=1;
No
Witnesses
Positive: 0 Negative: 3
Flag srcu-bad-nesting
Condition exists (0:r1=1 /\ 0:r2=0)
Observation C-srcu-nest-5 Never 0 3
Time C-srcu-nest-5 0.01
Hash=e692c106cf3e84e20f12991dc438ff1b

------------------------------------------------------------------------

To its credit, it does complain about bad nesting. But with this
commit we get the following result, which has the virtue of being
correct:

------------------------------------------------------------------------

$ herd7 -conf linux-kernel.cfg C-srcu-nest-5.litmus
Test C-srcu-nest-5 Allowed
States 4
0:r1=0; 0:r2=0;
0:r1=0; 0:r2=1;
0:r1=1; 0:r2=0;
0:r1=1; 0:r2=1;
Ok
Witnesses
Positive: 1 Negative: 3
Condition exists (0:r1=1 /\ 0:r2=0)
Observation C-srcu-nest-5 Sometimes 1 3
Time C-srcu-nest-5 0.05
Hash=e692c106cf3e84e20f12991dc438ff1b

------------------------------------------------------------------------

In addition, there are new srcu_down_read() and srcu_up_read()
functions on their way to mainline. Roughly speaking, these are to
srcu_read_lock() and srcu_read_unlock() as down() and up() are to
mutex_lock() and mutex_unlock(). The key point is that
srcu_down_read() can execute in one process and the matching
srcu_up_read() in another, as shown in this litmus test:

------------------------------------------------------------------------

C C-srcu-nest-6

(*
* Result: Never
*
* This would be valid for srcu_down_read() and srcu_up_read().
*)

{}

P0(int *x, int *y, struct srcu_struct *s1, int *idx, int *f)
{
int r1;
int r2;
int r3;

r3 = srcu_down_read(s1);
WRITE_ONCE(*idx, r3);
r2 = READ_ONCE(*y);
smp_store_release(f, 1);
}

P1(int *x, int *y, struct srcu_struct *s1, int *idx, int *f)
{
int r1;
int r3;
int r4;

r4 = smp_load_acquire(f);
r1 = READ_ONCE(*x);
r3 = READ_ONCE(*idx);
srcu_up_read(s1, r3);
}

P2(int *x, int *y, struct srcu_struct *s1)
{
WRITE_ONCE(*y, 1);
synchronize_srcu(s1);
WRITE_ONCE(*x, 1);
}

locations [0:r1]
filter (1:r4=1)
exists (1:r1=1 /\ 0:r2=0)

------------------------------------------------------------------------

When run on current mainline, this litmus test gets a complaint about
an unknown macro srcu_down_read(). With this commit:

------------------------------------------------------------------------

$ herd7 -conf linux-kernel.cfg C-srcu-nest-6.litmus
Test C-srcu-nest-6 Allowed
States 3
0:r1=0; 0:r2=0; 1:r1=0;
0:r1=0; 0:r2=1; 1:r1=0;
0:r1=0; 0:r2=1; 1:r1=1;
No
Witnesses
Positive: 0 Negative: 3
Condition exists (1:r1=1 /\ 0:r2=0)
Observation C-srcu-nest-6 Never 0 3
Time C-srcu-nest-6 0.02
Hash=c1f20257d052ca5e899be508bedcb2a1

------------------------------------------------------------------------

Note that the user must supply the flag "f" and the "filter" clause,
similar to what must be done to emulate call_rcu().

The commit works by treating srcu_read_lock()/srcu_down_read() as
loads and srcu_read_unlock()/srcu_up_read() as stores. This allows us
to determine which unlock matches which lock by looking for a data
dependency between them. In order for this to work properly, the data
dependencies have to be tracked through stores to intermediate
variables such as "idx" in the litmus test above; this is handled by
the new carry-srcu-data relation. But it's important here (and in the
existing carry-dep relation) to avoid tracking the dependencies
through SRCU unlock stores. Otherwise, in situations resembling:

A: r1 = srcu_read_lock(s);
B: srcu_read_unlock(s, r1);
C: r2 = srcu_read_lock(s);
D: srcu_read_unlock(s, r2);

it would look as if D were dependent on both A and C, because "s" would
appear to be an intermediate variable written by B and read by C.
This explains the complications in the definitions of carry-srcu-data
and carry-dep.

As a debugging aid, the commit adds a check for errors in which the
value returned by one call to srcu_read_lock()/srcu_down_read() is
passed to more than one instance of srcu_read_unlock()/srcu_up_read().
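
For example (an illustrative fragment, not from the patch), passing a
single index to two unlock-side calls:

r1 = srcu_read_lock(&s);
srcu_read_unlock(&s, r1);
srcu_up_read(&s, r1); // second use of the same index

is flagged as multiple-srcu-matches.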

Finally, since these SRCU-related primitives are now treated as
ordinary reads and writes, we have to add them into the lists of
marked accesses (i.e., not subject to data races) and lock-related
accesses (i.e., one shouldn't try to access an srcu_struct with a
non-lock-related primitive such as READ_ONCE() or a plain write).

Portions of this approach were suggested by Boqun Feng and Jonas
Oberhauser.

[ paulmck: Fix space-before-tab whitespace nit. ]

Reported-by: Paul E. McKenney <[email protected]>
Signed-off-by: Alan Stern <[email protected]>
Reviewed-by: Jonas Oberhauser <[email protected]>
Signed-off-by: Paul E. McKenney <[email protected]>
---
tools/memory-model/linux-kernel.bell | 17 +++++------------
tools/memory-model/linux-kernel.def | 6 ++++--
tools/memory-model/lock.cat | 6 +++---
3 files changed, 12 insertions(+), 17 deletions(-)

diff --git a/tools/memory-model/linux-kernel.bell b/tools/memory-model/linux-kernel.bell
index b92fdf7f6eeb..ce068700939c 100644
--- a/tools/memory-model/linux-kernel.bell
+++ b/tools/memory-model/linux-kernel.bell
@@ -58,20 +58,13 @@ flag ~empty Rcu-lock \ domain(rcu-rscs) as unmatched-rcu-lock
flag ~empty Rcu-unlock \ range(rcu-rscs) as unmatched-rcu-unlock

(* Compute matching pairs of nested Srcu-lock and Srcu-unlock *)
-let srcu-rscs = let rec
- unmatched-locks = Srcu-lock \ domain(matched)
- and unmatched-unlocks = Srcu-unlock \ range(matched)
- and unmatched = unmatched-locks | unmatched-unlocks
- and unmatched-po = ([unmatched] ; po ; [unmatched]) & loc
- and unmatched-locks-to-unlocks =
- ([unmatched-locks] ; po ; [unmatched-unlocks]) & loc
- and matched = matched | (unmatched-locks-to-unlocks \
- (unmatched-po ; unmatched-po))
- in matched
+let carry-srcu-data = (data ; [~ Srcu-unlock] ; rf)*
+let srcu-rscs = ([Srcu-lock] ; carry-srcu-data ; data ; [Srcu-unlock]) & loc

(* Validate nesting *)
flag ~empty Srcu-lock \ domain(srcu-rscs) as unmatched-srcu-lock
flag ~empty Srcu-unlock \ range(srcu-rscs) as unmatched-srcu-unlock
+flag ~empty (srcu-rscs^-1 ; srcu-rscs) \ id as multiple-srcu-matches

(* Check for use of synchronize_srcu() inside an RCU critical section *)
flag ~empty rcu-rscs & (po ; [Sync-srcu] ; po) as invalid-sleep
@@ -81,11 +74,11 @@ flag ~empty different-values(srcu-rscs) as srcu-bad-value-match

(* Compute marked and plain memory accesses *)
let Marked = (~M) | IW | Once | Release | Acquire | domain(rmw) | range(rmw) |
- LKR | LKW | UL | LF | RL | RU
+ LKR | LKW | UL | LF | RL | RU | Srcu-lock | Srcu-unlock
let Plain = M \ Marked

(* Redefine dependencies to include those carried through plain accesses *)
-let carry-dep = (data ; rfi)*
+let carry-dep = (data ; [~ Srcu-unlock] ; rfi)*
let addr = carry-dep ; addr
let ctrl = carry-dep ; ctrl
let data = carry-dep ; data
diff --git a/tools/memory-model/linux-kernel.def b/tools/memory-model/linux-kernel.def
index a6b6fbc9d0b2..88a39601f525 100644
--- a/tools/memory-model/linux-kernel.def
+++ b/tools/memory-model/linux-kernel.def
@@ -50,8 +50,10 @@ synchronize_rcu() { __fence{sync-rcu}; }
synchronize_rcu_expedited() { __fence{sync-rcu}; }

// SRCU
-srcu_read_lock(X) __srcu{srcu-lock}(X)
-srcu_read_unlock(X,Y) { __srcu{srcu-unlock}(X,Y); }
+srcu_read_lock(X) __load{srcu-lock}(*X)
+srcu_read_unlock(X,Y) { __store{srcu-unlock}(*X,Y); }
+srcu_down_read(X) __load{srcu-lock}(*X)
+srcu_up_read(X,Y) { __store{srcu-unlock}(*X,Y); }
synchronize_srcu(X) { __srcu{sync-srcu}(X); }
synchronize_srcu_expedited(X) { __srcu{sync-srcu}(X); }

diff --git a/tools/memory-model/lock.cat b/tools/memory-model/lock.cat
index 6b52f365d73a..53b5a492739d 100644
--- a/tools/memory-model/lock.cat
+++ b/tools/memory-model/lock.cat
@@ -36,9 +36,9 @@ let RU = try RU with emptyset
(* Treat RL as a kind of LF: a read with no ordering properties *)
let LF = LF | RL

-(* There should be no ordinary R or W accesses to spinlocks *)
-let ALL-LOCKS = LKR | LKW | UL | LF | RU
-flag ~empty [M \ IW] ; loc ; [ALL-LOCKS] as mixed-lock-accesses
+(* There should be no ordinary R or W accesses to spinlocks or SRCU structs *)
+let ALL-LOCKS = LKR | LKW | UL | LF | RU | Srcu-lock | Srcu-unlock | Sync-srcu
+flag ~empty [M \ IW \ ALL-LOCKS] ; loc ; [ALL-LOCKS] as mixed-lock-accesses

(* Link Lock-Reads to their RMW-partner Lock-Writes *)
let lk-rmw = ([LKR] ; po-loc ; [LKW]) \ (po ; po)
--
2.40.0.rc2


2023-03-21 01:03:29

by Paul E. McKenney

Subject: [PATCH memory-model 3/8] tools/memory-model: Add smp_mb__after_srcu_read_unlock()

This commit adds support for smp_mb__after_srcu_read_unlock(), which,
when combined with a prior srcu_read_unlock(), implies a full memory
barrier. No ordering is guaranteed for accesses between those two
calls, and placing accesses between them is bad practice in any case.
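
As a sketch of the intended usage pattern (not taken from the patch
itself):

idx = srcu_read_lock(&s);
/* SRCU read-side critical section */
srcu_read_unlock(&s, idx);
smp_mb__after_srcu_read_unlock(); /* with the unlock: full barrier */

Here, accesses preceding the srcu_read_unlock() are fully ordered
against accesses following the fence, but not against anything placed
between the two calls.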

Tests may be found at https://github.com/paulmckrcu/litmus in files
matching manual/kernel/C-srcu-mb-*.litmus.

Signed-off-by: Paul E. McKenney <[email protected]>
---
tools/memory-model/linux-kernel.bell | 3 ++-
tools/memory-model/linux-kernel.cat | 3 ++-
tools/memory-model/linux-kernel.def | 1 +
3 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/tools/memory-model/linux-kernel.bell b/tools/memory-model/linux-kernel.bell
index dc464854d28a..b92fdf7f6eeb 100644
--- a/tools/memory-model/linux-kernel.bell
+++ b/tools/memory-model/linux-kernel.bell
@@ -31,7 +31,8 @@ enum Barriers = 'wmb (*smp_wmb*) ||
'before-atomic (*smp_mb__before_atomic*) ||
'after-atomic (*smp_mb__after_atomic*) ||
'after-spinlock (*smp_mb__after_spinlock*) ||
- 'after-unlock-lock (*smp_mb__after_unlock_lock*)
+ 'after-unlock-lock (*smp_mb__after_unlock_lock*) ||
+ 'after-srcu-read-unlock (*smp_mb__after_srcu_read_unlock*)
instructions F[Barriers]

(* SRCU *)
diff --git a/tools/memory-model/linux-kernel.cat b/tools/memory-model/linux-kernel.cat
index 6e531457bb73..3a4d3b49e85c 100644
--- a/tools/memory-model/linux-kernel.cat
+++ b/tools/memory-model/linux-kernel.cat
@@ -49,7 +49,8 @@ let mb = ([M] ; fencerel(Mb) ; [M]) |
* also affected by the fence.
*)
([M] ; po-unlock-lock-po ;
- [After-unlock-lock] ; po ; [M])
+ [After-unlock-lock] ; po ; [M]) |
+ ([M] ; po? ; [Srcu-unlock] ; fencerel(After-srcu-read-unlock) ; [M])
let gp = po ; [Sync-rcu | Sync-srcu] ; po?
let strong-fence = mb | gp

diff --git a/tools/memory-model/linux-kernel.def b/tools/memory-model/linux-kernel.def
index ef0f3c1850de..a6b6fbc9d0b2 100644
--- a/tools/memory-model/linux-kernel.def
+++ b/tools/memory-model/linux-kernel.def
@@ -24,6 +24,7 @@ smp_mb__before_atomic() { __fence{before-atomic}; }
smp_mb__after_atomic() { __fence{after-atomic}; }
smp_mb__after_spinlock() { __fence{after-spinlock}; }
smp_mb__after_unlock_lock() { __fence{after-unlock-lock}; }
+smp_mb__after_srcu_read_unlock() { __fence{after-srcu-read-unlock}; }
barrier() { __fence{barrier}; }

// Exchange
--
2.40.0.rc2


2023-03-21 01:03:33

by Paul E. McKenney

Subject: [PATCH memory-model 4/8] tools/memory-model: Restrict to-r to read-read address dependency

From: "Joel Fernandes (Google)" <[email protected]>

During a code-reading exercise of the linux-kernel.cat file, I generated
a graph to show the to-r relation. While likely not problematic for the
model, I found it confusing that a read-write address dependency would
show up as a to-r edge in the graph.

This patch therefore restricts the to-r links derived from addr to only
read-read address dependencies, so that read-write address dependencies don't
show as to-r in the graphs. This should also prevent future users of to-r from
deriving incorrect relations. Note that a read-write address dep, obviously,
still ends up in the ppo relation via the to-w relation.
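
Concretely (an illustrative C fragment, not from the patch), an address
dependency can feed either a read or a write:

r1 = READ_ONCE(*p);
r2 = READ_ONCE(a[r1]); /* read-read addr dependency: remains in to-r */
WRITE_ONCE(b[r1], 1);  /* read-write addr dependency: to-w only now */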

I verified that a read-read address dependency still shows up as a to-r
link in the graph, as it did before.

For reference, the problematic graph was generated with the following
command:
herd7 -conf linux-kernel.cfg \
-doshow dep -doshow to-r -doshow to-w ./foo.litmus -show all -o OUT/

Signed-off-by: Joel Fernandes (Google) <[email protected]>
Acked-by: Alan Stern <[email protected]>
Signed-off-by: Paul E. McKenney <[email protected]>
---
tools/memory-model/linux-kernel.cat | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/memory-model/linux-kernel.cat b/tools/memory-model/linux-kernel.cat
index 3a4d3b49e85c..cfc1b8fd46da 100644
--- a/tools/memory-model/linux-kernel.cat
+++ b/tools/memory-model/linux-kernel.cat
@@ -81,7 +81,7 @@ let dep = addr | data
let rwdep = (dep | ctrl) ; [W]
let overwrite = co | fr
let to-w = rwdep | (overwrite & int) | (addr ; [Plain] ; wmb)
-let to-r = addr | (dep ; [Marked] ; rfi)
+let to-r = (addr ; [R]) | (dep ; [Marked] ; rfi)
let ppo = to-r | to-w | fence | (po-unlock-lock-po & int)

(* Propagation: Ordering from release operations and strong fences. *)
--
2.40.0.rc2


2023-03-21 01:03:36

by Paul E. McKenney

Subject: [PATCH memory-model 2/8] tools/memory-model: Unify UNLOCK+LOCK pairings to po-unlock-lock-po

From: Jonas Oberhauser <[email protected]>

LKMM uses two relations for talking about UNLOCK+LOCK pairings:

1) po-unlock-lock-po, which handles UNLOCK+LOCK pairings
on the same CPU or immediate lock handovers on the same
lock variable

2) po;[UL];(co|po);[LKW];po, which handles UNLOCK+LOCK pairs
literally as described in rcupdate.h#L1002, i.e., even
after a sequence of handovers on the same lock variable.

The latter relation is used only once, to provide the guarantee
defined in rcupdate.h#L1002 by smp_mb__after_unlock_lock(), which
makes any UNLOCK+LOCK pair followed by the fence behave like a full
barrier.

This patch drops this use in favor of using po-unlock-lock-po
everywhere, which unifies the way the model talks about UNLOCK+LOCK
pairings. At first glance this seems to weaken the guarantee given
by LKMM: When considering a long sequence of lock handovers
such as below, where P0 hands the lock to P1, which hands it to P2,
which finally executes such an after_unlock_lock fence, the mb
relation currently links any stores in the critical section of P0
to instructions P2 executes after its fence, but not so after the
patch.

P0(int *x, int *y, spinlock_t *mylock)
{
spin_lock(mylock);
WRITE_ONCE(*x, 2);
spin_unlock(mylock);
WRITE_ONCE(*y, 1);
}

P1(int *y, int *z, spinlock_t *mylock)
{
int r0 = READ_ONCE(*y); // reads 1
spin_lock(mylock);
spin_unlock(mylock);
WRITE_ONCE(*z,1);
}

P2(int *z, int *d, spinlock_t *mylock)
{
int r1 = READ_ONCE(*z); // reads 1
spin_lock(mylock);
spin_unlock(mylock);
smp_mb__after_unlock_lock();
WRITE_ONCE(*d,1);
}

P3(int *x, int *d)
{
WRITE_ONCE(*d,2);
smp_mb();
WRITE_ONCE(*x,1);
}

exists (1:r0=1 /\ 2:r1=1 /\ x=2 /\ d=2)

Nevertheless, the ordering guarantee given in rcupdate.h is actually
not weakened. This is because the unlock operations along the
sequence of handovers are A-cumulative fences. They ensure that any
stores that propagate to the CPU performing the first unlock
operation in the sequence must also propagate to every CPU that
performs a subsequent lock operation in the sequence. Therefore any
such stores will also be ordered correctly by the fence even if only
the final handover is considered a full barrier.

Indeed this patch does not affect the behaviors allowed by LKMM at
all. The mb relation is used to define ordering through:
1) mb/.../ppo/hb, where the ordering is subsumed by hb+ where the
lock-release, rfe, and unlock-acquire orderings each provide hb
2) mb/strong-fence/cumul-fence/prop, where the rfe and A-cumulative
lock-release orderings simply add more fine-grained cumul-fence
edges to substitute a single strong-fence edge provided by a long
lock handover sequence
3) mb/strong-fence/pb and various similar uses in the definition of
data races, where as discussed above any long handover sequence
can be turned into a sequence of cumul-fence edges that provide
the same ordering.

Signed-off-by: Jonas Oberhauser <[email protected]>
Reviewed-by: Alan Stern <[email protected]>
Signed-off-by: Paul E. McKenney <[email protected]>
---
tools/memory-model/linux-kernel.cat | 15 +++++++++++++--
1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/tools/memory-model/linux-kernel.cat b/tools/memory-model/linux-kernel.cat
index 07f884f9b2bf..6e531457bb73 100644
--- a/tools/memory-model/linux-kernel.cat
+++ b/tools/memory-model/linux-kernel.cat
@@ -37,8 +37,19 @@ let mb = ([M] ; fencerel(Mb) ; [M]) |
([M] ; fencerel(Before-atomic) ; [RMW] ; po? ; [M]) |
([M] ; po? ; [RMW] ; fencerel(After-atomic) ; [M]) |
([M] ; po? ; [LKW] ; fencerel(After-spinlock) ; [M]) |
- ([M] ; po ; [UL] ; (co | po) ; [LKW] ;
- fencerel(After-unlock-lock) ; [M])
+(*
+ * Note: The po-unlock-lock-po relation only passes the lock to the direct
+ * successor, perhaps giving the impression that the ordering of the
+ * smp_mb__after_unlock_lock() fence only affects a single lock handover.
+ * However, in a longer sequence of lock handovers, the implicit
+ * A-cumulative release fences of lock-release ensure that any stores that
+ * propagate to one of the involved CPUs before it hands over the lock to
+ * the next CPU will also propagate to the final CPU handing over the lock
+ * to the CPU that executes the fence. Therefore, all those stores are
+ * also affected by the fence.
+ *)
+ ([M] ; po-unlock-lock-po ;
+ [After-unlock-lock] ; po ; [M])
let gp = po ; [Sync-rcu | Sync-srcu] ; po?
let strong-fence = mb | gp

--
2.40.0.rc2


2023-03-21 01:03:39

by Paul E. McKenney

Subject: [PATCH memory-model 8/8] Documentation: litmus-tests: Correct spelling

From: Randy Dunlap <[email protected]>

Correct spelling problems for Documentation/litmus-tests/ as reported
by codespell.

Signed-off-by: Randy Dunlap <[email protected]>
Cc: Alan Stern <[email protected]>
Cc: Andrea Parri <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: David Howells <[email protected]>
Cc: Jade Alglave <[email protected]>
Cc: Luc Maranget <[email protected]>
Cc: "Paul E. McKenney" <[email protected]>
Cc: [email protected]
Cc: Jonathan Corbet <[email protected]>
Cc: [email protected]
Reviewed-by: David Howells <[email protected]>
Signed-off-by: Paul E. McKenney <[email protected]>
---
Documentation/litmus-tests/README | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Documentation/litmus-tests/README b/Documentation/litmus-tests/README
index 7f5c6c3ed6c3..658d37860d39 100644
--- a/Documentation/litmus-tests/README
+++ b/Documentation/litmus-tests/README
@@ -9,7 +9,7 @@ a kernel test module based on a litmus test, please see
tools/memory-model/README.


-atomic (/atomic derectory)
+atomic (/atomic directory)
--------------------------

Atomic-RMW+mb__after_atomic-is-stronger-than-acquire.litmus
--
2.40.0.rc2


2023-03-21 01:03:43

by Paul E. McKenney

Subject: [PATCH memory-model 6/8] tools/memory-model: Make ppo a subrelation of po

From: Jonas Oberhauser <[email protected]>

As stated in the documentation and implied by its name, the ppo
(preserved program order) relation is intended to link po-earlier
to po-later instructions under certain conditions. However, a
corner case currently allows instructions to be linked by ppo that
are not executed by the same thread, i.e., instructions are being
linked that have no po relation.

This happens due to the mb/strong-fence/fence relations, which (in
one case) provide ordering when a lock is passed between threads and
the acquisition is followed by an smp_mb__after_unlock_lock() fence.
This is illustrated in the following litmus test (as can be seen when
running herd7 with `-doshow ppo`):

P0(spinlock_t *x, spinlock_t *y)
{
spin_lock(x);
spin_unlock(x);
}

P1(spinlock_t *x, spinlock_t *y)
{
spin_lock(x);
smp_mb__after_unlock_lock();
*y = 1;
}

The ppo relation will link P0's spin_lock(x) and P1's *y=1, because
P0 passes the lock to P1, which then uses this fence.

The patch makes ppo a subrelation of po by letting fence contribute
to ppo only in case the fence links events of the same thread.

Signed-off-by: Jonas Oberhauser <[email protected]>
Acked-by: Alan Stern <[email protected]>
Acked-by: Andrea Parri <[email protected]>
Signed-off-by: Paul E. McKenney <[email protected]>
---
tools/memory-model/linux-kernel.cat | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/memory-model/linux-kernel.cat b/tools/memory-model/linux-kernel.cat
index cfc1b8fd46da..adf3c4f41229 100644
--- a/tools/memory-model/linux-kernel.cat
+++ b/tools/memory-model/linux-kernel.cat
@@ -82,7 +82,7 @@ let rwdep = (dep | ctrl) ; [W]
let overwrite = co | fr
let to-w = rwdep | (overwrite & int) | (addr ; [Plain] ; wmb)
let to-r = (addr ; [R]) | (dep ; [Marked] ; rfi)
-let ppo = to-r | to-w | fence | (po-unlock-lock-po & int)
+let ppo = to-r | to-w | (fence & int) | (po-unlock-lock-po & int)

(* Propagation: Ordering from release operations and strong fences. *)
let A-cumul(r) = (rfe ; [Marked])? ; r
--
2.40.0.rc2


2023-03-22 01:09:25

by Andrea Parri

Subject: Re: [PATCH memory-model 1/8] tools/memory-model: Update some warning labels

On Mon, Mar 20, 2023 at 06:02:39PM -0700, Paul E. McKenney wrote:
> From: Alan Stern <[email protected]>
>
> Some of the warning labels used in the LKMM are unfortunately
> ambiguous. In particular, the same warning is used for both an
> unmatched rcu_read_lock() call and for an unmatched rcu_read_unlock()
> call. Likewise for the srcu_* equivalents. Also, the warning about
> passing a wrong value to srcu_read_unlock() -- i.e., a value different
> from the one returned by the matching srcu_read_lock() -- talks about
> bad nesting rather than non-matching values.
>
> Let's update the warning labels to make their meanings more clear.
>
> Signed-off-by: Alan Stern <[email protected]>
> Reviewed-by: Jonas Oberhauser <[email protected]>
> Signed-off-by: Paul E. McKenney <[email protected]>

Acked-by: Andrea Parri <[email protected]>

Andrea


> ---
> tools/memory-model/linux-kernel.bell | 10 +++++-----
> 1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/tools/memory-model/linux-kernel.bell b/tools/memory-model/linux-kernel.bell
> index 70a9073dec3e..dc464854d28a 100644
> --- a/tools/memory-model/linux-kernel.bell
> +++ b/tools/memory-model/linux-kernel.bell
> @@ -53,8 +53,8 @@ let rcu-rscs = let rec
> in matched
>
> (* Validate nesting *)
> -flag ~empty Rcu-lock \ domain(rcu-rscs) as unbalanced-rcu-locking
> -flag ~empty Rcu-unlock \ range(rcu-rscs) as unbalanced-rcu-locking
> +flag ~empty Rcu-lock \ domain(rcu-rscs) as unmatched-rcu-lock
> +flag ~empty Rcu-unlock \ range(rcu-rscs) as unmatched-rcu-unlock
>
> (* Compute matching pairs of nested Srcu-lock and Srcu-unlock *)
> let srcu-rscs = let rec
> @@ -69,14 +69,14 @@ let srcu-rscs = let rec
> in matched
>
> (* Validate nesting *)
> -flag ~empty Srcu-lock \ domain(srcu-rscs) as unbalanced-srcu-locking
> -flag ~empty Srcu-unlock \ range(srcu-rscs) as unbalanced-srcu-locking
> +flag ~empty Srcu-lock \ domain(srcu-rscs) as unmatched-srcu-lock
> +flag ~empty Srcu-unlock \ range(srcu-rscs) as unmatched-srcu-unlock
>
> (* Check for use of synchronize_srcu() inside an RCU critical section *)
> flag ~empty rcu-rscs & (po ; [Sync-srcu] ; po) as invalid-sleep
>
> (* Validate SRCU dynamic match *)
> -flag ~empty different-values(srcu-rscs) as srcu-bad-nesting
> +flag ~empty different-values(srcu-rscs) as srcu-bad-value-match
>
> (* Compute marked and plain memory accesses *)
> let Marked = (~M) | IW | Once | Release | Acquire | domain(rmw) | range(rmw) |
> --
> 2.40.0.rc2
>

2023-03-22 01:15:19

by Andrea Parri

Subject: Re: [PATCH memory-model 2/8] tools/memory-model: Unify UNLOCK+LOCK pairings to po-unlock-lock-po

On Mon, Mar 20, 2023 at 06:02:40PM -0700, Paul E. McKenney wrote:
> From: Jonas Oberhauser <[email protected]>
>
> LKMM uses two relations for talking about UNLOCK+LOCK pairings:
>
> 1) po-unlock-lock-po, which handles UNLOCK+LOCK pairings
> on the same CPU or immediate lock handovers on the same
> lock variable
>
> 2) po;[UL];(co|po);[LKW];po, which handles UNLOCK+LOCK pairs
> literally as described in rcupdate.h#L1002, i.e., even
> after a sequence of handovers on the same lock variable.
>
> The latter relation is used only once, to provide the guarantee
> defined in rcupdate.h#L1002 by smp_mb__after_unlock_lock(), which
> makes any UNLOCK+LOCK pair followed by the fence behave like a full
> barrier.
>
> This patch drops this use in favor of using po-unlock-lock-po
> everywhere, which unifies the way the model talks about UNLOCK+LOCK
> pairings. At first glance this seems to weaken the guarantee given
> by LKMM: When considering a long sequence of lock handovers
> such as below, where P0 hands the lock to P1, which hands it to P2,
> which finally executes such an after_unlock_lock fence, the mb
> relation currently links any stores in the critical section of P0
> to instructions P2 executes after its fence, but not so after the
> patch.
>
> P0(int *x, int *y, spinlock_t *mylock)
> {
> spin_lock(mylock);
> WRITE_ONCE(*x, 2);
> spin_unlock(mylock);
> WRITE_ONCE(*y, 1);
> }
>
> P1(int *y, int *z, spinlock_t *mylock)
> {
> int r0 = READ_ONCE(*y); // reads 1
> spin_lock(mylock);
> spin_unlock(mylock);
> WRITE_ONCE(*z,1);
> }
>
> P2(int *z, int *d, spinlock_t *mylock)
> {
> int r1 = READ_ONCE(*z); // reads 1
> spin_lock(mylock);
> spin_unlock(mylock);
> smp_mb__after_unlock_lock();
> WRITE_ONCE(*d,1);
> }
>
> P3(int *x, int *d)
> {
> WRITE_ONCE(*d,2);
> smp_mb();
> WRITE_ONCE(*x,1);
> }
>
> exists (1:r0=1 /\ 2:r1=1 /\ x=2 /\ d=2)
>
> Nevertheless, the ordering guarantee given in rcupdate.h is actually
> not weakened. This is because the unlock operations along the
> sequence of handovers are A-cumulative fences. They ensure that any
> stores that propagate to the CPU performing the first unlock
> operation in the sequence must also propagate to every CPU that
> performs a subsequent lock operation in the sequence. Therefore any
> such stores will also be ordered correctly by the fence even if only
> the final handover is considered a full barrier.
>
> Indeed this patch does not affect the behaviors allowed by LKMM at
> all. The mb relation is used to define ordering through:
> 1) mb/.../ppo/hb, where the ordering is subsumed by hb+ where the
> lock-release, rfe, and unlock-acquire orderings each provide hb
> 2) mb/strong-fence/cumul-fence/prop, where the rfe and A-cumulative
> lock-release orderings simply add more fine-grained cumul-fence
> edges to substitute a single strong-fence edge provided by a long
> lock handover sequence
> 3) mb/strong-fence/pb and various similar uses in the definition of
> data races, where as discussed above any long handover sequence
> can be turned into a sequence of cumul-fence edges that provide
> the same ordering.
>
> Signed-off-by: Jonas Oberhauser <[email protected]>
> Reviewed-by: Alan Stern <[email protected]>
> Signed-off-by: Paul E. McKenney <[email protected]>

Looks like after-unlock-lock has just won the single fattest inline comment
in linux-kernel.cat. :-)

Acked-by: Andrea Parri <[email protected]>

Andrea


> ---
> tools/memory-model/linux-kernel.cat | 15 +++++++++++++--
> 1 file changed, 13 insertions(+), 2 deletions(-)
>
> diff --git a/tools/memory-model/linux-kernel.cat b/tools/memory-model/linux-kernel.cat
> index 07f884f9b2bf..6e531457bb73 100644
> --- a/tools/memory-model/linux-kernel.cat
> +++ b/tools/memory-model/linux-kernel.cat
> @@ -37,8 +37,19 @@ let mb = ([M] ; fencerel(Mb) ; [M]) |
> ([M] ; fencerel(Before-atomic) ; [RMW] ; po? ; [M]) |
> ([M] ; po? ; [RMW] ; fencerel(After-atomic) ; [M]) |
> ([M] ; po? ; [LKW] ; fencerel(After-spinlock) ; [M]) |
> - ([M] ; po ; [UL] ; (co | po) ; [LKW] ;
> - fencerel(After-unlock-lock) ; [M])
> +(*
> + * Note: The po-unlock-lock-po relation only passes the lock to the direct
> + * successor, perhaps giving the impression that the ordering of the
> + * smp_mb__after_unlock_lock() fence only affects a single lock handover.
> + * However, in a longer sequence of lock handovers, the implicit
> + * A-cumulative release fences of lock-release ensure that any stores that
> + * propagate to one of the involved CPUs before it hands over the lock to
> + * the next CPU will also propagate to the final CPU handing over the lock
> + * to the CPU that executes the fence. Therefore, all those stores are
> + * also affected by the fence.
> + *)
> + ([M] ; po-unlock-lock-po ;
> + [After-unlock-lock] ; po ; [M])
> let gp = po ; [Sync-rcu | Sync-srcu] ; po?
> let strong-fence = mb | gp
>
> --
> 2.40.0.rc2
>

2023-03-22 01:15:22

by Andrea Parri

Subject: Re: [PATCH memory-model 4/8] tools/memory-model: Restrict to-r to read-read address dependency

On Mon, Mar 20, 2023 at 06:02:42PM -0700, Paul E. McKenney wrote:
> From: "Joel Fernandes (Google)" <[email protected]>
>
> During a code-reading exercise of the linux-kernel.cat file, I generated
> a graph to show the to-r relation. While likely not problematic for the
> model, I found it confusing that a read-write address dependency would
> show up as a to-r edge in the graph.
>
> This patch therefore restricts the to-r links derived from addr to only
> read-read address dependencies, so that read-write address dependencies don't
> show as to-r in the graphs. This should also prevent future users of to-r from
> deriving incorrect relations. Note that a read-write address dep, obviously,
> still ends up in the ppo relation via the to-w relation.
>
> I verified that a read-read address dependency still shows up as a to-r
> link in the graph, as it did before.
>
> For reference, the problematic graph was generated with the following
> command:
> herd7 -conf linux-kernel.cfg \
> -doshow dep -doshow to-r -doshow to-w ./foo.litmus -show all -o OUT/
>
> Signed-off-by: Joel Fernandes (Google) <[email protected]>
> Acked-by: Alan Stern <[email protected]>
> Signed-off-by: Paul E. McKenney <[email protected]>

Acked-by: Andrea Parri <[email protected]>

Andrea


> ---
> tools/memory-model/linux-kernel.cat | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/tools/memory-model/linux-kernel.cat b/tools/memory-model/linux-kernel.cat
> index 3a4d3b49e85c..cfc1b8fd46da 100644
> --- a/tools/memory-model/linux-kernel.cat
> +++ b/tools/memory-model/linux-kernel.cat
> @@ -81,7 +81,7 @@ let dep = addr | data
> let rwdep = (dep | ctrl) ; [W]
> let overwrite = co | fr
> let to-w = rwdep | (overwrite & int) | (addr ; [Plain] ; wmb)
> -let to-r = addr | (dep ; [Marked] ; rfi)
> +let to-r = (addr ; [R]) | (dep ; [Marked] ; rfi)
> let ppo = to-r | to-w | fence | (po-unlock-lock-po & int)
>
> (* Propagation: Ordering from release operations and strong fences. *)
> --
> 2.40.0.rc2
>

2023-03-22 01:15:33

by Andrea Parri

Subject: Re: [PATCH memory-model 5/8] tools/memory-model: Provide exact SRCU semantics

> +let carry-srcu-data = (data ; [~ Srcu-unlock] ; rf)*

> +let carry-dep = (data ; [~ Srcu-unlock] ; rfi)*

Nit: we use "~M" (no space after the unary operator) in this file, is
there a reason for the different styles?

Andrea

2023-03-22 01:17:12

by Alan Stern

Subject: Re: [PATCH memory-model 5/8] tools/memory-model: Provide exact SRCU semantics

On Wed, Mar 22, 2023 at 02:07:40AM +0100, Andrea Parri wrote:
> > +let carry-srcu-data = (data ; [~ Srcu-unlock] ; rf)*
>
> > +let carry-dep = (data ; [~ Srcu-unlock] ; rfi)*
>
> Nit: we use "~M" (no space after the unary operator) in this file, is
> there a reason for the different styles?

No reason that I know of, just lack of thought (as far as
carry-srcu-data is concerned). Feel free to change it.
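
(Presumably, that means rewriting the two quoted lines as:

let carry-srcu-data = (data ; [~Srcu-unlock] ; rf)*
let carry-dep = (data ; [~Srcu-unlock] ; rfi)*

to match the file's existing "~M" style.)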

Alan

2023-03-22 01:41:55

by Andrea Parri

Subject: Re: [PATCH memory-model 7/8] tools/memory-model: Add documentation about SRCU read-side critical sections

On Mon, Mar 20, 2023 at 06:02:45PM -0700, Paul E. McKenney wrote:
> From: Alan Stern <[email protected]>
>
> Expand the discussion of SRCU and its read-side critical sections in
> the Linux Kernel Memory Model documentation file explanation.txt. The
> new material discusses recent changes to the memory model made in
> commit 6cd244c87428 ("tools/memory-model: Provide exact SRCU
> semantics").

How about squashing the diff below (adjusting subject and changelog):

Andrea

diff --git a/tools/memory-model/Documentation/litmus-tests.txt b/tools/memory-model/Documentation/litmus-tests.txt
index 26554b1c5575e..acac527328a1f 100644
--- a/tools/memory-model/Documentation/litmus-tests.txt
+++ b/tools/memory-model/Documentation/litmus-tests.txt
@@ -1028,32 +1028,7 @@ Limitations of the Linux-kernel memory model (LKMM) include:
additional call_rcu() process to the site of the
emulated rcu-barrier().

- e. Although sleepable RCU (SRCU) is now modeled, there
- are some subtle differences between its semantics and
- those in the Linux kernel. For example, the kernel
- might interpret the following sequence as two partially
- overlapping SRCU read-side critical sections:
-
- 1 r1 = srcu_read_lock(&my_srcu);
- 2 do_something_1();
- 3 r2 = srcu_read_lock(&my_srcu);
- 4 do_something_2();
- 5 srcu_read_unlock(&my_srcu, r1);
- 6 do_something_3();
- 7 srcu_read_unlock(&my_srcu, r2);
-
- In contrast, LKMM will interpret this as a nested pair of
- SRCU read-side critical sections, with the outer critical
- section spanning lines 1-7 and the inner critical section
- spanning lines 3-5.
-
- This difference would be more of a concern had anyone
- identified a reasonable use case for partially overlapping
- SRCU read-side critical sections. For more information
- on the trickiness of such overlapping, please see:
- https://paulmck.livejournal.com/40593.html
-
- f. Reader-writer locking is not modeled. It can be
+ e. Reader-writer locking is not modeled. It can be
emulated in litmus tests using atomic read-modify-write
operations.


Andrea

2023-03-22 02:20:42

by Joel Fernandes

Subject: Re: [PATCH memory-model 7/8] tools/memory-model: Add documentation about SRCU read-side critical sections

On Tue, Mar 21, 2023 at 9:40 PM Andrea Parri <[email protected]> wrote:
>
> On Mon, Mar 20, 2023 at 06:02:45PM -0700, Paul E. McKenney wrote:
> > From: Alan Stern <[email protected]>
> >
> > Expand the discussion of SRCU and its read-side critical sections in
> > the Linux Kernel Memory Model documentation file explanation.txt. The
> > new material discusses recent changes to the memory model made in
> > commit 6cd244c87428 ("tools/memory-model: Provide exact SRCU
> > semantics").
>
> How about squashing the diff below (adjusting subject and changelog):
>
> Andrea
>
> diff --git a/tools/memory-model/Documentation/litmus-tests.txt b/tools/memory-model/Documentation/litmus-tests.txt
> index 26554b1c5575e..acac527328a1f 100644
> --- a/tools/memory-model/Documentation/litmus-tests.txt
> +++ b/tools/memory-model/Documentation/litmus-tests.txt
> @@ -1028,32 +1028,7 @@ Limitations of the Linux-kernel memory model (LKMM) include:
> additional call_rcu() process to the site of the
> emulated rcu-barrier().
>
> - e. Although sleepable RCU (SRCU) is now modeled, there
> - are some subtle differences between its semantics and
> - those in the Linux kernel. For example, the kernel
> - might interpret the following sequence as two partially
> - overlapping SRCU read-side critical sections:
> -
> - 1 r1 = srcu_read_lock(&my_srcu);
> - 2 do_something_1();
> - 3 r2 = srcu_read_lock(&my_srcu);
> - 4 do_something_2();
> - 5 srcu_read_unlock(&my_srcu, r1);
> - 6 do_something_3();
> - 7 srcu_read_unlock(&my_srcu, r2);
> -
> - In contrast, LKMM will interpret this as a nested pair of
> - SRCU read-side critical sections, with the outer critical
> - section spanning lines 1-7 and the inner critical section
> - spanning lines 3-5.
> -
> - This difference would be more of a concern had anyone
> - identified a reasonable use case for partially overlapping
> - SRCU read-side critical sections. For more information
> - on the trickiness of such overlapping, please see:
> - https://paulmck.livejournal.com/40593.html
> -
> - f. Reader-writer locking is not modeled. It can be
> + e. Reader-writer locking is not modeled. It can be
> emulated in litmus tests using atomic read-modify-write

Good point! And for the diff:

Reviewed-by: Joel Fernandes (Google) <[email protected]>

thanks,

- Joel


> operations.
>
>
> Andrea

2023-03-22 14:36:43

by Alan Stern

Subject: Re: [PATCH memory-model 7/8] tools/memory-model: Add documentation about SRCU read-side critical sections

On Wed, Mar 22, 2023 at 02:40:52AM +0100, Andrea Parri wrote:
> On Mon, Mar 20, 2023 at 06:02:45PM -0700, Paul E. McKenney wrote:
> > From: Alan Stern <[email protected]>
> >
> > Expand the discussion of SRCU and its read-side critical sections in
> > the Linux Kernel Memory Model documentation file explanation.txt. The
> > new material discusses recent changes to the memory model made in
> > commit 6cd244c87428 ("tools/memory-model: Provide exact SRCU
> > semantics").
>
> How about squashing the diff below (adjusting subject and changelog):
>
> Andrea
>
> diff --git a/tools/memory-model/Documentation/litmus-tests.txt b/tools/memory-model/Documentation/litmus-tests.txt
> index 26554b1c5575e..acac527328a1f 100644
> --- a/tools/memory-model/Documentation/litmus-tests.txt
> +++ b/tools/memory-model/Documentation/litmus-tests.txt
> @@ -1028,32 +1028,7 @@ Limitations of the Linux-kernel memory model (LKMM) include:
> additional call_rcu() process to the site of the
> emulated rcu-barrier().
>
> - e. Although sleepable RCU (SRCU) is now modeled, there
> - are some subtle differences between its semantics and
> - those in the Linux kernel. For example, the kernel
> - might interpret the following sequence as two partially
> - overlapping SRCU read-side critical sections:
> -
> - 1 r1 = srcu_read_lock(&my_srcu);
> - 2 do_something_1();
> - 3 r2 = srcu_read_lock(&my_srcu);
> - 4 do_something_2();
> - 5 srcu_read_unlock(&my_srcu, r1);
> - 6 do_something_3();
> - 7 srcu_read_unlock(&my_srcu, r2);
> -
> - In contrast, LKMM will interpret this as a nested pair of
> - SRCU read-side critical sections, with the outer critical
> - section spanning lines 1-7 and the inner critical section
> - spanning lines 3-5.
> -
> - This difference would be more of a concern had anyone
> - identified a reasonable use case for partially overlapping
> - SRCU read-side critical sections. For more information
> - on the trickiness of such overlapping, please see:
> - https://paulmck.livejournal.com/40593.html
> -
> - f. Reader-writer locking is not modeled. It can be
> + e. Reader-writer locking is not modeled. It can be
> emulated in litmus tests using atomic read-modify-write
> operations.

Excellent suggestion!

Acked-by: Alan Stern <[email protected]>

Alan

2023-03-22 18:08:27

by Paul E. McKenney

Subject: Re: [PATCH memory-model 7/8] tools/memory-model: Add documentation about SRCU read-side critical sections

On Wed, Mar 22, 2023 at 02:40:52AM +0100, Andrea Parri wrote:
> On Mon, Mar 20, 2023 at 06:02:45PM -0700, Paul E. McKenney wrote:
> > From: Alan Stern <[email protected]>
> >
> > Expand the discussion of SRCU and its read-side critical sections in
> > the Linux Kernel Memory Model documentation file explanation.txt. The
> > new material discusses recent changes to the memory model made in
> > commit 6cd244c87428 ("tools/memory-model: Provide exact SRCU
> > semantics").
>
> How about squashing the diff below (adjusting subject and changelog):

Looks good to me, thank you!

Please submit an official patch with Joel's and Alan's tags and I
will be happy to pull it in.

Thanx, Paul

> Andrea
>
> diff --git a/tools/memory-model/Documentation/litmus-tests.txt b/tools/memory-model/Documentation/litmus-tests.txt
> index 26554b1c5575e..acac527328a1f 100644
> --- a/tools/memory-model/Documentation/litmus-tests.txt
> +++ b/tools/memory-model/Documentation/litmus-tests.txt
> @@ -1028,32 +1028,7 @@ Limitations of the Linux-kernel memory model (LKMM) include:
> additional call_rcu() process to the site of the
> emulated rcu-barrier().
>
> - e. Although sleepable RCU (SRCU) is now modeled, there
> - are some subtle differences between its semantics and
> - those in the Linux kernel. For example, the kernel
> - might interpret the following sequence as two partially
> - overlapping SRCU read-side critical sections:
> -
> - 1 r1 = srcu_read_lock(&my_srcu);
> - 2 do_something_1();
> - 3 r2 = srcu_read_lock(&my_srcu);
> - 4 do_something_2();
> - 5 srcu_read_unlock(&my_srcu, r1);
> - 6 do_something_3();
> - 7 srcu_read_unlock(&my_srcu, r2);
> -
> - In contrast, LKMM will interpret this as a nested pair of
> - SRCU read-side critical sections, with the outer critical
> - section spanning lines 1-7 and the inner critical section
> - spanning lines 3-5.
> -
> - This difference would be more of a concern had anyone
> - identified a reasonable use case for partially overlapping
> - SRCU read-side critical sections. For more information
> - on the trickiness of such overlapping, please see:
> - https://paulmck.livejournal.com/40593.html
> -
> - f. Reader-writer locking is not modeled. It can be
> + e. Reader-writer locking is not modeled. It can be
> emulated in litmus tests using atomic read-modify-write
> operations.
>
>
> Andrea

2023-03-22 18:08:31

by Paul E. McKenney

Subject: Re: [PATCH memory-model 2/8] tools/memory-model: Unify UNLOCK+LOCK pairings to po-unlock-lock-po

On Wed, Mar 22, 2023 at 01:59:00AM +0100, Andrea Parri wrote:
> On Mon, Mar 20, 2023 at 06:02:40PM -0700, Paul E. McKenney wrote:
> > From: Jonas Oberhauser <[email protected]>
> >
> > LKMM uses two relations for talking about UNLOCK+LOCK pairings:
> >
> > 1) po-unlock-lock-po, which handles UNLOCK+LOCK pairings
> > on the same CPU or immediate lock handovers on the same
> > lock variable
> >
> > 2) po;[UL];(co|po);[LKW];po, which handles UNLOCK+LOCK pairs
> > literally as described in rcupdate.h#L1002, i.e., even
> > after a sequence of handovers on the same lock variable.
> >
> > The latter relation is used only once, to provide the guarantee
> > defined in rcupdate.h#L1002 by smp_mb__after_unlock_lock(), which
> > makes any UNLOCK+LOCK pair followed by the fence behave like a full
> > barrier.
> >
> > This patch drops this use in favor of using po-unlock-lock-po
> > everywhere, which unifies the way the model talks about UNLOCK+LOCK
> > pairings. At first glance this seems to weaken the guarantee given
> > by LKMM: When considering a long sequence of lock handovers
> > such as below, where P0 hands the lock to P1, which hands it to P2,
> > which finally executes such an after_unlock_lock fence, the mb
> > relation currently links any stores in the critical section of P0
> > to instructions P2 executes after its fence, but not so after the
> > patch.
> >
> > P0(int *x, int *y, spinlock_t *mylock)
> > {
> > spin_lock(mylock);
> > WRITE_ONCE(*x, 2);
> > spin_unlock(mylock);
> > WRITE_ONCE(*y, 1);
> > }
> >
> > P1(int *y, int *z, spinlock_t *mylock)
> > {
> > int r0 = READ_ONCE(*y); // reads 1
> > spin_lock(mylock);
> > spin_unlock(mylock);
> > WRITE_ONCE(*z,1);
> > }
> >
> > P2(int *z, int *d, spinlock_t *mylock)
> > {
> > int r1 = READ_ONCE(*z); // reads 1
> > spin_lock(mylock);
> > spin_unlock(mylock);
> > smp_mb__after_unlock_lock();
> > WRITE_ONCE(*d,1);
> > }
> >
> > P3(int *x, int *d)
> > {
> > WRITE_ONCE(*d,2);
> > smp_mb();
> > WRITE_ONCE(*x,1);
> > }
> >
> > exists (1:r0=1 /\ 2:r1=1 /\ x=2 /\ d=2)
> >
> > Nevertheless, the ordering guarantee given in rcupdate.h is actually
> > not weakened. This is because the unlock operations along the
> > sequence of handovers are A-cumulative fences. They ensure that any
> > stores that propagate to the CPU performing the first unlock
> > operation in the sequence must also propagate to every CPU that
> > performs a subsequent lock operation in the sequence. Therefore any
> > such stores will also be ordered correctly by the fence even if only
> > the final handover is considered a full barrier.
> >
> > Indeed this patch does not affect the behaviors allowed by LKMM at
> > all. The mb relation is used to define ordering through:
> > 1) mb/.../ppo/hb, where the ordering is subsumed by hb+ where the
> > lock-release, rfe, and unlock-acquire orderings each provide hb
> > 2) mb/strong-fence/cumul-fence/prop, where the rfe and A-cumulative
> > lock-release orderings simply add more fine-grained cumul-fence
> > edges to substitute a single strong-fence edge provided by a long
> > lock handover sequence
> > 3) mb/strong-fence/pb and various similar uses in the definition of
> > data races, where as discussed above any long handover sequence
> > can be turned into a sequence of cumul-fence edges that provide
> > the same ordering.
> >
> > Signed-off-by: Jonas Oberhauser <[email protected]>
> > Reviewed-by: Alan Stern <[email protected]>
> > Signed-off-by: Paul E. McKenney <[email protected]>
>
> Looks like after-unlock-lock has just won the single fattest inline comment
> in linux-kernel.cat. :-)
>
> Acked-by: Andrea Parri <[email protected]>

Thank you! I will apply these tags (1, 2, and 4) on my next rebase.

Thanx, Paul

> Andrea
>
>
> > ---
> > tools/memory-model/linux-kernel.cat | 15 +++++++++++++--
> > 1 file changed, 13 insertions(+), 2 deletions(-)
> >
> > diff --git a/tools/memory-model/linux-kernel.cat b/tools/memory-model/linux-kernel.cat
> > index 07f884f9b2bf..6e531457bb73 100644
> > --- a/tools/memory-model/linux-kernel.cat
> > +++ b/tools/memory-model/linux-kernel.cat
> > @@ -37,8 +37,19 @@ let mb = ([M] ; fencerel(Mb) ; [M]) |
> > ([M] ; fencerel(Before-atomic) ; [RMW] ; po? ; [M]) |
> > ([M] ; po? ; [RMW] ; fencerel(After-atomic) ; [M]) |
> > ([M] ; po? ; [LKW] ; fencerel(After-spinlock) ; [M]) |
> > - ([M] ; po ; [UL] ; (co | po) ; [LKW] ;
> > - fencerel(After-unlock-lock) ; [M])
> > +(*
> > + * Note: The po-unlock-lock-po relation only passes the lock to the direct
> > + * successor, perhaps giving the impression that the ordering of the
> > + * smp_mb__after_unlock_lock() fence only affects a single lock handover.
> > + * However, in a longer sequence of lock handovers, the implicit
> > + * A-cumulative release fences of lock-release ensure that any stores that
> > + * propagate to one of the involved CPUs before it hands over the lock to
> > + * the next CPU will also propagate to the final CPU handing over the lock
> > + * to the CPU that executes the fence. Therefore, all those stores are
> > + * also affected by the fence.
> > + *)
> > + ([M] ; po-unlock-lock-po ;
> > + [After-unlock-lock] ; po ; [M])
> > let gp = po ; [Sync-rcu | Sync-srcu] ; po?
> > let strong-fence = mb | gp
> >
> > --
> > 2.40.0.rc2
> >