2020-02-27 00:42:57

by Boqun Feng

Subject: [PATCH v3 5/5] Documentation/locking/atomic: Add a litmus test for smp_mb__after_atomic()

We already use a litmus test in atomic_t.txt to show that atomic RMW +
smp_mb__after_atomic() is stronger than acquire (both the read and the
write parts of the RMW are ordered). So add it as a proper litmus test
in the atomic-tests directory, so that people can access the test
easily.
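
As a hedged sketch of what this buys in kernel code: the struct and
function names below are made up for illustration, but the primitives
(atomic_inc(), smp_mb__after_atomic(), smp_rmb(), READ_ONCE(),
WRITE_ONCE(), atomic_read()) are the real kernel ones.

	#include <linux/atomic.h>	/* atomic_t, atomic_inc(), ... */
	#include <linux/compiler.h>	/* READ_ONCE(), WRITE_ONCE() */
	#include <linux/types.h>	/* bool */

	/* Hypothetical type, for illustration only. */
	struct foo {
		atomic_t refs;		/* assumed to start at 0 */
		int published;		/* assumed to start at 0 */
	};

	/*
	 * Writer: smp_mb__after_atomic() upgrades the non-value-returning
	 * atomic_inc() to a full barrier, so both the load and the store
	 * of the RMW are ordered before the WRITE_ONCE() of the flag.
	 */
	static void foo_publish(struct foo *f)
	{
		atomic_inc(&f->refs);
		smp_mb__after_atomic();
		WRITE_ONCE(f->published, 1);
	}

	/*
	 * Reader: mirrors P0 of the litmus test below; smp_rmb() pairs
	 * with the full barrier on the writer side, so observing
	 * published == 1 implies observing refs >= 1.
	 */
	static bool foo_seen(struct foo *f)
	{
		if (!READ_ONCE(f->published))
			return false;
		smp_rmb();
		return atomic_read(&f->refs) >= 1;
	}

With only an acquire on the writer side, the store to refs could be
reordered after the store of the flag, which is the point made by the
atomic_t.txt change below.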

Additionally, change the processor numbers "P1, P2" to "P0, P1" in
atomic_t.txt for consistency with the litmus test, which uses the
processor numbering that herd can handle.

Signed-off-by: Boqun Feng <[email protected]>
---
...ter_atomic-is-stronger-than-acquire.litmus | 32 +++++++++++++++++++
Documentation/atomic-tests/README | 5 +++
Documentation/atomic_t.txt | 10 +++---
3 files changed, 42 insertions(+), 5 deletions(-)
create mode 100644 Documentation/atomic-tests/Atomic-RMW+mb__after_atomic-is-stronger-than-acquire.litmus

diff --git a/Documentation/atomic-tests/Atomic-RMW+mb__after_atomic-is-stronger-than-acquire.litmus b/Documentation/atomic-tests/Atomic-RMW+mb__after_atomic-is-stronger-than-acquire.litmus
new file mode 100644
index 000000000000..9a8e31a44b28
--- /dev/null
+++ b/Documentation/atomic-tests/Atomic-RMW+mb__after_atomic-is-stronger-than-acquire.litmus
@@ -0,0 +1,32 @@
+C Atomic-RMW+mb__after_atomic-is-stronger-than-acquire
+
+(*
+ * Result: Never
+ *
+ * Test that an atomic RMW followed by a smp_mb__after_atomic() is
+ * stronger than a normal acquire: both the read and write parts of
+ * the RMW are ordered before the subsequent memory accesses.
+ *)
+
+{
+}
+
+P0(int *x, atomic_t *y)
+{
+ int r0;
+ int r1;
+
+ r0 = READ_ONCE(*x);
+ smp_rmb();
+ r1 = atomic_read(y);
+}
+
+P1(int *x, atomic_t *y)
+{
+ atomic_inc(y);
+ smp_mb__after_atomic();
+ WRITE_ONCE(*x, 1);
+}
+
+exists
+(0:r0=1 /\ 0:r1=0)
diff --git a/Documentation/atomic-tests/README b/Documentation/atomic-tests/README
index a1b72410b539..714cf93816ea 100644
--- a/Documentation/atomic-tests/README
+++ b/Documentation/atomic-tests/README
@@ -7,5 +7,10 @@ tools/memory-model/README.
LITMUS TESTS
============

+Atomic-RMW+mb__after_atomic-is-stronger-than-acquire
+ Test that an atomic RMW followed by a smp_mb__after_atomic() is
+ stronger than a normal acquire: both the read and write parts of
+ the RMW are ordered before the subsequent memory accesses.
+
Atomic-RMW-ops-are-atomic-WRT-atomic_set.litmus
Test that atomic_set() cannot break the atomicity of atomic RMWs.
diff --git a/Documentation/atomic_t.txt b/Documentation/atomic_t.txt
index 67d1d99f8589..0f1fdedf36bb 100644
--- a/Documentation/atomic_t.txt
+++ b/Documentation/atomic_t.txt
@@ -233,19 +233,19 @@ as well. Similarly, something like:
is an ACQUIRE pattern (though very much not typical), but again the barrier is
strictly stronger than ACQUIRE. As illustrated:

- C strong-acquire
+ C Atomic-RMW+mb__after_atomic-is-stronger-than-acquire

{
}

- P1(int *x, atomic_t *y)
+ P0(int *x, atomic_t *y)
{
r0 = READ_ONCE(*x);
smp_rmb();
r1 = atomic_read(y);
}

- P2(int *x, atomic_t *y)
+ P1(int *x, atomic_t *y)
{
atomic_inc(y);
smp_mb__after_atomic();
@@ -253,14 +253,14 @@ strictly stronger than ACQUIRE. As illustrated:
}

exists
- (r0=1 /\ r1=0)
+ (0:r0=1 /\ 0:r1=0)

This should not happen; but a hypothetical atomic_inc_acquire() --
(void)atomic_fetch_inc_acquire() for instance -- would allow the outcome,
because it would not order the W part of the RMW against the following
WRITE_ONCE. Thus:

- P1 P2
+ P0 P1

t = LL.acq *y (0)
t++;
--
2.25.0
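
The atomic_t.txt hunk above ends by noting that a hypothetical
atomic_inc_acquire() -- (void)atomic_fetch_inc_acquire() for instance --
would allow the outcome, because it would not order the W part of the
RMW against the following WRITE_ONCE(). As a hedged sketch (the test
name and expected verdict below are assumptions drawn from that
paragraph; atomic_fetch_inc_acquire() itself is a real primitive that
LKMM's linux-kernel.def should recognize), the acquire-only variant of
the new litmus test would look like this, and one would expect herd7
(e.g. "herd7 -conf linux-kernel.cfg <file>", per
tools/memory-model/README) to report the exists clause as satisfiable
rather than Never:

	C Atomic-RMW-acquire-does-not-order-the-write-part

	(*
	 * Expected result: Sometimes (assumption, per the atomic_t.txt
	 * text above: the acquire orders only the read part of the RMW,
	 * so the write part may be reordered with the later WRITE_ONCE()).
	 *)

	{
	}

	P0(int *x, atomic_t *y)
	{
		int r0;
		int r1;

		r0 = READ_ONCE(*x);
		smp_rmb();
		r1 = atomic_read(y);
	}

	P1(int *x, atomic_t *y)
	{
		int r2;

		r2 = atomic_fetch_inc_acquire(y);
		WRITE_ONCE(*x, 1);
	}

	exists
	(0:r0=1 /\ 0:r1=0)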


2020-02-27 16:38:43

by Alan Stern

Subject: Re: [PATCH v3 5/5] Documentation/locking/atomic: Add a litmus test for smp_mb__after_atomic()

On Thu, 27 Feb 2020, Boqun Feng wrote:

> We already use a litmus test in atomic_t.txt to show that atomic RMW +
> smp_mb__after_atomic() is stronger than acquire (both the read and the
> write parts of the RMW are ordered). So add it as a proper litmus test
> in the atomic-tests directory, so that people can access the test
> easily.
>
> Additionally, change the processor numbers "P1, P2" to "P0, P1" in
> atomic_t.txt for consistency with the litmus test, which uses the
> processor numbering that herd can handle.
>
> Signed-off-by: Boqun Feng <[email protected]>
> ---

Acked-by: Alan Stern <[email protected]>

