2023-05-10 18:33:39

by Paul E. McKenney

Subject: [PATCH locking/atomics 0/19] Add kernel-doc for more atomic operations

Hello!

This series adds kernel-doc for additional atomic operations by adding
them to the gen-atomics.sh-based scripting. Doing so duplicates a few
kernel-doc entries currently in the x86-specific headers, which this
series removes in order to avoid duplicate-entry warnings from
"make htmldocs".

1. locking/atomic: Fix fetch_add_unless missing-period typo.

2. locking/atomic: Add "@" before "true" and "false" for fallback
templates.

3. locking/atomic: Add kernel-doc and docbook_oldnew variables
for headers.

4. locking/atomic: Add kernel-doc header for
arch_${atomic}_${pfx}inc${sfx}${order}.

5. locking/atomic: Add kernel-doc header for
arch_${atomic}_${pfx}dec${sfx}${order}.

6. locking/atomic: Add kernel-doc header for
arch_${atomic}_${pfx}andnot${sfx}${order}.

7. locking/atomic: Add kernel-doc header for
arch_${atomic}_try_cmpxchg${order}.

8. locking/atomic: Add kernel-doc header for
arch_${atomic}_dec_if_positive.

9. locking/atomic: Add kernel-doc header for
arch_${atomic}_dec_unless_positive.

10. locking/atomic: Add kernel-doc header for
arch_${atomic}_inc_unless_negative.

11. locking/atomic: Add kernel-doc header for
arch_${atomic}_set_release.

12. locking/atomic: Add kernel-doc header for
arch_${atomic}_read_acquire.

13. locking/atomic: Script to auto-generate acquire, fence, and
release headers.

14. locking/atomic: Add kernel-doc header for
arch_${atomic}_${pfx}${name}${sfx}_acquire.

15. locking/atomic: Add kernel-doc header for
arch_${atomic}_${pfx}${name}${sfx}_release.

16. locking/atomic: Add kernel-doc header for
arch_${atomic}_${pfx}${name}${sfx}.

17. x86/atomic.h: Remove duplicate kernel-doc headers.

18. locking/atomic: Refrain from generating duplicate fallback
kernel-doc.

19. Add atomic operations to the driver basic API documentation.

Aside: There was much, much less drama and pain involved in installing
and running "make htmldocs" than last time around. A big "thank you!"
to whoever made this happen. Here is hoping for a similar degree of
improvement for the next required upgrade! ;-)

Thanx, Paul

------------------------------------------------------------------------

b/Documentation/driver-api/basics.rst | 3
b/arch/x86/include/asm/atomic.h | 60
b/include/linux/atomic/atomic-arch-fallback.h | 6
b/scripts/atomic/acqrel.sh | 67
b/scripts/atomic/chkdup.sh | 27
b/scripts/atomic/fallbacks/acquire | 4
b/scripts/atomic/fallbacks/add_negative | 4
b/scripts/atomic/fallbacks/add_unless | 2
b/scripts/atomic/fallbacks/andnot | 8
b/scripts/atomic/fallbacks/dec | 7
b/scripts/atomic/fallbacks/dec_and_test | 2
b/scripts/atomic/fallbacks/dec_if_positive | 10
b/scripts/atomic/fallbacks/dec_unless_positive | 8
b/scripts/atomic/fallbacks/fence | 2
b/scripts/atomic/fallbacks/fetch_add_unless | 2
b/scripts/atomic/fallbacks/inc | 7
b/scripts/atomic/fallbacks/inc_and_test | 2
b/scripts/atomic/fallbacks/inc_not_zero | 2
b/scripts/atomic/fallbacks/inc_unless_negative | 8
b/scripts/atomic/fallbacks/read_acquire | 7
b/scripts/atomic/fallbacks/release | 2
b/scripts/atomic/fallbacks/set_release | 7
b/scripts/atomic/fallbacks/sub_and_test | 2
b/scripts/atomic/fallbacks/try_cmpxchg | 10
b/scripts/atomic/gen-atomic-fallback.sh | 17
b/scripts/atomic/gen-atomic-instrumented.sh | 17
b/scripts/atomic/gen-atomics.sh | 4
include/linux/atomic/atomic-arch-fallback.h | 1754 +++++++++++++++++++------
scripts/atomic/fallbacks/acquire | 3
scripts/atomic/fallbacks/add_negative | 5
scripts/atomic/fallbacks/add_unless | 5
scripts/atomic/fallbacks/andnot | 5
scripts/atomic/fallbacks/dec | 5
scripts/atomic/fallbacks/dec_and_test | 5
scripts/atomic/fallbacks/dec_if_positive | 5
scripts/atomic/fallbacks/dec_unless_positive | 5
scripts/atomic/fallbacks/fence | 3
scripts/atomic/fallbacks/fetch_add_unless | 5
scripts/atomic/fallbacks/inc | 5
scripts/atomic/fallbacks/inc_and_test | 5
scripts/atomic/fallbacks/inc_not_zero | 5
scripts/atomic/fallbacks/inc_unless_negative | 5
scripts/atomic/fallbacks/read_acquire | 5
scripts/atomic/fallbacks/release | 3
scripts/atomic/fallbacks/set_release | 5
scripts/atomic/fallbacks/sub_and_test | 5
scripts/atomic/fallbacks/try_cmpxchg | 5
47 files changed, 1686 insertions(+), 454 deletions(-)


2023-05-10 18:35:05

by Paul E. McKenney

Subject: [PATCH locking/atomic 12/19] locking/atomic: Add kernel-doc header for arch_${atomic}_read_acquire

Add a kernel-doc header template for the arch_${atomic}_read_acquire
function family.

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Mark Rutland <[email protected]>
---
include/linux/atomic/atomic-arch-fallback.h | 16 +++++++++++++++-
scripts/atomic/fallbacks/read_acquire | 7 +++++++
2 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index 7ba75143c149..c3552b83bf49 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -240,6 +240,13 @@
#endif /* arch_try_cmpxchg64_local */

#ifndef arch_atomic_read_acquire
+/**
+ * arch_atomic_read_acquire - Atomic load acquire
+ * @v: pointer of type atomic_t
+ *
+ * Atomically load from *@v with acquire ordering, returning the value
+ * loaded.
+ */
static __always_inline int
arch_atomic_read_acquire(const atomic_t *v)
{
@@ -1695,6 +1702,13 @@ arch_atomic_dec_if_positive(atomic_t *v)
#endif

#ifndef arch_atomic64_read_acquire
+/**
+ * arch_atomic64_read_acquire - Atomic load acquire
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically load from *@v with acquire ordering, returning the value
+ * loaded.
+ */
static __always_inline s64
arch_atomic64_read_acquire(const atomic64_t *v)
{
@@ -3146,4 +3160,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v)
#endif

#endif /* _LINUX_ATOMIC_FALLBACK_H */
-// c46693a9f3b3ceacef003cd42764251148e3457d
+// 96c8a3c4d13b12c9f3e0f715709c8af1653a7e79
diff --git a/scripts/atomic/fallbacks/read_acquire b/scripts/atomic/fallbacks/read_acquire
index a0ea1d26e6b2..779f40c07018 100755
--- a/scripts/atomic/fallbacks/read_acquire
+++ b/scripts/atomic/fallbacks/read_acquire
@@ -1,4 +1,11 @@
cat <<EOF
+/**
+ * arch_${atomic}_read_acquire - Atomic load acquire
+ * @v: pointer of type ${atomic}_t
+ *
+ * Atomically load from *@v with acquire ordering, returning the value
+ * loaded.
+ */
static __always_inline ${ret}
arch_${atomic}_read_acquire(const ${atomic}_t *v)
{
--
2.40.1


2023-05-10 18:42:01

by Paul E. McKenney

Subject: [PATCH locking/atomic 19/19] docs: Add atomic operations to the driver basic API documentation

Add the include/linux/atomic/atomic-arch-fallback.h file to
Documentation/driver-api/basics.rst in order to provide documentation
for the Linux kernel's atomic operations.

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Akira Yokosawa <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: <[email protected]>
---
Documentation/driver-api/basics.rst | 3 +++
1 file changed, 3 insertions(+)

diff --git a/Documentation/driver-api/basics.rst b/Documentation/driver-api/basics.rst
index 4b4d8e28d3be..0ae07f0d8601 100644
--- a/Documentation/driver-api/basics.rst
+++ b/Documentation/driver-api/basics.rst
@@ -87,6 +87,9 @@ Atomics
.. kernel-doc:: arch/x86/include/asm/atomic.h
:internal:

+.. kernel-doc:: include/linux/atomic/atomic-arch-fallback.h
+ :internal:
+
Kernel objects manipulation
---------------------------

--
2.40.1


2023-05-10 18:43:13

by Paul E. McKenney

Subject: [PATCH locking/atomic 04/19] locking/atomic: Add kernel-doc header for arch_${atomic}_${pfx}inc${sfx}${order}

Add a kernel-doc header template for the arch_${atomic}_${pfx}inc${sfx}${order}
function family.

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Mark Rutland <[email protected]>
---
include/linux/atomic/atomic-arch-fallback.h | 128 +++++++++++++++++++-
scripts/atomic/fallbacks/inc | 7 ++
2 files changed, 134 insertions(+), 1 deletion(-)

diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index 606be9d3aa22..e7e83f18d192 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -440,6 +440,13 @@ arch_atomic_fetch_sub(int i, atomic_t *v)
#endif /* arch_atomic_fetch_sub_relaxed */

#ifndef arch_atomic_inc
+/**
+ * arch_atomic_inc - Atomic increment
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increment @v with full ordering,
+ * returning no value.
+ */
static __always_inline void
arch_atomic_inc(atomic_t *v)
{
@@ -456,6 +463,13 @@ arch_atomic_inc(atomic_t *v)
#endif /* arch_atomic_inc_return */

#ifndef arch_atomic_inc_return
+/**
+ * arch_atomic_inc_return - Atomic increment
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increment @v with full ordering,
+ * returning new value.
+ */
static __always_inline int
arch_atomic_inc_return(atomic_t *v)
{
@@ -465,6 +479,13 @@ arch_atomic_inc_return(atomic_t *v)
#endif

#ifndef arch_atomic_inc_return_acquire
+/**
+ * arch_atomic_inc_return_acquire - Atomic increment
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increment @v with acquire ordering,
+ * returning new value.
+ */
static __always_inline int
arch_atomic_inc_return_acquire(atomic_t *v)
{
@@ -474,6 +495,13 @@ arch_atomic_inc_return_acquire(atomic_t *v)
#endif

#ifndef arch_atomic_inc_return_release
+/**
+ * arch_atomic_inc_return_release - Atomic increment
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increment @v with release ordering,
+ * returning new value.
+ */
static __always_inline int
arch_atomic_inc_return_release(atomic_t *v)
{
@@ -483,6 +511,13 @@ arch_atomic_inc_return_release(atomic_t *v)
#endif

#ifndef arch_atomic_inc_return_relaxed
+/**
+ * arch_atomic_inc_return_relaxed - Atomic increment
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increment @v with no ordering,
+ * returning new value.
+ */
static __always_inline int
arch_atomic_inc_return_relaxed(atomic_t *v)
{
@@ -537,6 +572,13 @@ arch_atomic_inc_return(atomic_t *v)
#endif /* arch_atomic_fetch_inc */

#ifndef arch_atomic_fetch_inc
+/**
+ * arch_atomic_fetch_inc - Atomic increment
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increment @v with full ordering,
+ * returning old value.
+ */
static __always_inline int
arch_atomic_fetch_inc(atomic_t *v)
{
@@ -546,6 +588,13 @@ arch_atomic_fetch_inc(atomic_t *v)
#endif

#ifndef arch_atomic_fetch_inc_acquire
+/**
+ * arch_atomic_fetch_inc_acquire - Atomic increment
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increment @v with acquire ordering,
+ * returning old value.
+ */
static __always_inline int
arch_atomic_fetch_inc_acquire(atomic_t *v)
{
@@ -555,6 +604,13 @@ arch_atomic_fetch_inc_acquire(atomic_t *v)
#endif

#ifndef arch_atomic_fetch_inc_release
+/**
+ * arch_atomic_fetch_inc_release - Atomic increment
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increment @v with release ordering,
+ * returning old value.
+ */
static __always_inline int
arch_atomic_fetch_inc_release(atomic_t *v)
{
@@ -564,6 +620,13 @@ arch_atomic_fetch_inc_release(atomic_t *v)
#endif

#ifndef arch_atomic_fetch_inc_relaxed
+/**
+ * arch_atomic_fetch_inc_relaxed - Atomic increment
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increment @v with no ordering,
+ * returning old value.
+ */
static __always_inline int
arch_atomic_fetch_inc_relaxed(atomic_t *v)
{
@@ -1656,6 +1719,13 @@ arch_atomic64_fetch_sub(s64 i, atomic64_t *v)
#endif /* arch_atomic64_fetch_sub_relaxed */

#ifndef arch_atomic64_inc
+/**
+ * arch_atomic64_inc - Atomic increment
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically increment @v with full ordering,
+ * returning no value.
+ */
static __always_inline void
arch_atomic64_inc(atomic64_t *v)
{
@@ -1672,6 +1742,13 @@ arch_atomic64_inc(atomic64_t *v)
#endif /* arch_atomic64_inc_return */

#ifndef arch_atomic64_inc_return
+/**
+ * arch_atomic64_inc_return - Atomic increment
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically increment @v with full ordering,
+ * returning new value.
+ */
static __always_inline s64
arch_atomic64_inc_return(atomic64_t *v)
{
@@ -1681,6 +1758,13 @@ arch_atomic64_inc_return(atomic64_t *v)
#endif

#ifndef arch_atomic64_inc_return_acquire
+/**
+ * arch_atomic64_inc_return_acquire - Atomic increment
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically increment @v with acquire ordering,
+ * returning new value.
+ */
static __always_inline s64
arch_atomic64_inc_return_acquire(atomic64_t *v)
{
@@ -1690,6 +1774,13 @@ arch_atomic64_inc_return_acquire(atomic64_t *v)
#endif

#ifndef arch_atomic64_inc_return_release
+/**
+ * arch_atomic64_inc_return_release - Atomic increment
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically increment @v with release ordering,
+ * returning new value.
+ */
static __always_inline s64
arch_atomic64_inc_return_release(atomic64_t *v)
{
@@ -1699,6 +1790,13 @@ arch_atomic64_inc_return_release(atomic64_t *v)
#endif

#ifndef arch_atomic64_inc_return_relaxed
+/**
+ * arch_atomic64_inc_return_relaxed - Atomic increment
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically increment @v with no ordering,
+ * returning new value.
+ */
static __always_inline s64
arch_atomic64_inc_return_relaxed(atomic64_t *v)
{
@@ -1753,6 +1851,13 @@ arch_atomic64_inc_return(atomic64_t *v)
#endif /* arch_atomic64_fetch_inc */

#ifndef arch_atomic64_fetch_inc
+/**
+ * arch_atomic64_fetch_inc - Atomic increment
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically increment @v with full ordering,
+ * returning old value.
+ */
static __always_inline s64
arch_atomic64_fetch_inc(atomic64_t *v)
{
@@ -1762,6 +1867,13 @@ arch_atomic64_fetch_inc(atomic64_t *v)
#endif

#ifndef arch_atomic64_fetch_inc_acquire
+/**
+ * arch_atomic64_fetch_inc_acquire - Atomic increment
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically increment @v with acquire ordering,
+ * returning old value.
+ */
static __always_inline s64
arch_atomic64_fetch_inc_acquire(atomic64_t *v)
{
@@ -1771,6 +1883,13 @@ arch_atomic64_fetch_inc_acquire(atomic64_t *v)
#endif

#ifndef arch_atomic64_fetch_inc_release
+/**
+ * arch_atomic64_fetch_inc_release - Atomic increment
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically increment @v with release ordering,
+ * returning old value.
+ */
static __always_inline s64
arch_atomic64_fetch_inc_release(atomic64_t *v)
{
@@ -1780,6 +1899,13 @@ arch_atomic64_fetch_inc_release(atomic64_t *v)
#endif

#ifndef arch_atomic64_fetch_inc_relaxed
+/**
+ * arch_atomic64_fetch_inc_relaxed - Atomic increment
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically increment @v with no ordering,
+ * returning old value.
+ */
static __always_inline s64
arch_atomic64_fetch_inc_relaxed(atomic64_t *v)
{
@@ -2668,4 +2794,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v)
#endif

#endif /* _LINUX_ATOMIC_FALLBACK_H */
-// e914194a1a82dfbc39d4d1c79ce1f59f64fb37da
+// 17cefb0ff9b450685d4072202d4a1c309b0606c2
diff --git a/scripts/atomic/fallbacks/inc b/scripts/atomic/fallbacks/inc
index 3c2c3739169e..3f2c0730cd0c 100755
--- a/scripts/atomic/fallbacks/inc
+++ b/scripts/atomic/fallbacks/inc
@@ -1,4 +1,11 @@
cat <<EOF
+/**
+ * arch_${atomic}_${pfx}inc${sfx}${order} - Atomic increment
+ * @v: pointer of type ${atomic}_t
+ *
+ * Atomically increment @v with ${docbook_order} ordering,
+ * returning ${docbook_oldnew} value.
+ */
static __always_inline ${ret}
arch_${atomic}_${pfx}inc${sfx}${order}(${atomic}_t *v)
{
--
2.40.1
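For reference, the template expansion shown in scripts/atomic/fallbacks/inc above can be sketched in plain shell. The variable values below are hypothetical examples chosen for illustration; in the real machinery, gen-atomic-fallback.sh sets them before sourcing each fallback template.

```shell
#!/bin/sh
# Hypothetical values standing in for what gen-atomic-fallback.sh
# would supply when emitting arch_atomic_fetch_inc_acquire().
atomic=atomic
pfx=fetch_
sfx=
order=_acquire
docbook_order=acquire
docbook_oldnew=old
ret=int

# The template body is just a heredoc; the shell substitutes the
# variables to produce one concrete function and its kernel-doc.
cat <<EOF
/**
 * arch_${atomic}_${pfx}inc${sfx}${order} - Atomic increment
 * @v: pointer of type ${atomic}_t
 *
 * Atomically increment @v with ${docbook_order} ordering,
 * returning ${docbook_oldnew} value.
 */
static __always_inline ${ret}
arch_${atomic}_${pfx}inc${sfx}${order}(${atomic}_t *v)
EOF
```

With these values the heredoc expands to the arch_atomic_fetch_inc_acquire() header seen in the generated atomic-arch-fallback.h hunks above.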


2023-05-10 18:43:14

by Paul E. McKenney

Subject: [PATCH locking/atomic 17/19] x86/atomic.h: Remove duplicate kernel-doc headers

Scripting the kernel-doc headers resulted in a few duplicates. Remove the
duplicates from the x86-specific files.

Reported-by: Akira Yokosawa <[email protected]>
Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: <[email protected]>
---
arch/x86/include/asm/atomic.h | 60 -----------------------------------
1 file changed, 60 deletions(-)

diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h
index 5e754e895767..5df979d65fb5 100644
--- a/arch/x86/include/asm/atomic.h
+++ b/arch/x86/include/asm/atomic.h
@@ -69,27 +69,12 @@ static __always_inline void arch_atomic_sub(int i, atomic_t *v)
: "ir" (i) : "memory");
}

-/**
- * arch_atomic_sub_and_test - subtract value from variable and test result
- * @i: integer value to subtract
- * @v: pointer of type atomic_t
- *
- * Atomically subtracts @i from @v and returns
- * true if the result is zero, or false for all
- * other cases.
- */
static __always_inline bool arch_atomic_sub_and_test(int i, atomic_t *v)
{
return GEN_BINARY_RMWcc(LOCK_PREFIX "subl", v->counter, e, "er", i);
}
#define arch_atomic_sub_and_test arch_atomic_sub_and_test

-/**
- * arch_atomic_inc - increment atomic variable
- * @v: pointer of type atomic_t
- *
- * Atomically increments @v by 1.
- */
static __always_inline void arch_atomic_inc(atomic_t *v)
{
asm volatile(LOCK_PREFIX "incl %0"
@@ -97,12 +82,6 @@ static __always_inline void arch_atomic_inc(atomic_t *v)
}
#define arch_atomic_inc arch_atomic_inc

-/**
- * arch_atomic_dec - decrement atomic variable
- * @v: pointer of type atomic_t
- *
- * Atomically decrements @v by 1.
- */
static __always_inline void arch_atomic_dec(atomic_t *v)
{
asm volatile(LOCK_PREFIX "decl %0"
@@ -110,69 +89,30 @@ static __always_inline void arch_atomic_dec(atomic_t *v)
}
#define arch_atomic_dec arch_atomic_dec

-/**
- * arch_atomic_dec_and_test - decrement and test
- * @v: pointer of type atomic_t
- *
- * Atomically decrements @v by 1 and
- * returns true if the result is 0, or false for all other
- * cases.
- */
static __always_inline bool arch_atomic_dec_and_test(atomic_t *v)
{
return GEN_UNARY_RMWcc(LOCK_PREFIX "decl", v->counter, e);
}
#define arch_atomic_dec_and_test arch_atomic_dec_and_test

-/**
- * arch_atomic_inc_and_test - increment and test
- * @v: pointer of type atomic_t
- *
- * Atomically increments @v by 1
- * and returns true if the result is zero, or false for all
- * other cases.
- */
static __always_inline bool arch_atomic_inc_and_test(atomic_t *v)
{
return GEN_UNARY_RMWcc(LOCK_PREFIX "incl", v->counter, e);
}
#define arch_atomic_inc_and_test arch_atomic_inc_and_test

-/**
- * arch_atomic_add_negative - add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic_t
- *
- * Atomically adds @i to @v and returns true
- * if the result is negative, or false when
- * result is greater than or equal to zero.
- */
static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v)
{
return GEN_BINARY_RMWcc(LOCK_PREFIX "addl", v->counter, s, "er", i);
}
#define arch_atomic_add_negative arch_atomic_add_negative

-/**
- * arch_atomic_add_return - add integer and return
- * @i: integer value to add
- * @v: pointer of type atomic_t
- *
- * Atomically adds @i to @v and returns @i + @v
- */
static __always_inline int arch_atomic_add_return(int i, atomic_t *v)
{
return i + xadd(&v->counter, i);
}
#define arch_atomic_add_return arch_atomic_add_return

-/**
- * arch_atomic_sub_return - subtract integer and return
- * @v: pointer of type atomic_t
- * @i: integer value to subtract
- *
- * Atomically subtracts @i from @v and returns @v - @i
- */
static __always_inline int arch_atomic_sub_return(int i, atomic_t *v)
{
return arch_atomic_add_return(-i, v);
--
2.40.1


2023-05-10 18:44:09

by Paul E. McKenney

Subject: [PATCH locking/atomic 18/19] locking/atomic: Refrain from generating duplicate fallback kernel-doc

The gen-atomics.sh script currently generates duplicate kernel-doc headers for the following 42 definitions:

arch_atomic64_add_negative
arch_atomic64_add_negative_acquire
arch_atomic64_add_negative_release
arch_atomic64_dec_return
arch_atomic64_dec_return_acquire
arch_atomic64_dec_return_release
arch_atomic64_fetch_andnot
arch_atomic64_fetch_andnot_acquire
arch_atomic64_fetch_andnot_release
arch_atomic64_fetch_dec
arch_atomic64_fetch_dec_acquire
arch_atomic64_fetch_dec_release
arch_atomic64_fetch_inc
arch_atomic64_fetch_inc_acquire
arch_atomic64_fetch_inc_release
arch_atomic64_inc_return
arch_atomic64_inc_return_acquire
arch_atomic64_inc_return_release
arch_atomic64_try_cmpxchg
arch_atomic64_try_cmpxchg_acquire
arch_atomic64_try_cmpxchg_release
arch_atomic_add_negative
arch_atomic_add_negative_acquire
arch_atomic_add_negative_release
arch_atomic_dec_return
arch_atomic_dec_return_acquire
arch_atomic_dec_return_release
arch_atomic_fetch_andnot
arch_atomic_fetch_andnot_acquire
arch_atomic_fetch_andnot_release
arch_atomic_fetch_dec
arch_atomic_fetch_dec_acquire
arch_atomic_fetch_dec_release
arch_atomic_fetch_inc
arch_atomic_fetch_inc_acquire
arch_atomic_fetch_inc_release
arch_atomic_inc_return
arch_atomic_inc_return_acquire
arch_atomic_inc_return_release
arch_atomic_try_cmpxchg
arch_atomic_try_cmpxchg_acquire
arch_atomic_try_cmpxchg_release

These duplicates presumably exist because different architectures
provide hand-coded definitions for different subsets of the atomic
operations. However, generating duplicate kernel-doc headers is
undesirable.

Therefore, generate only the first kernel-doc definition in a group
of duplicates. A comment indicates the name of the function and the
fallback script that generated it.
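The actual detection lives in the new scripts/atomic/chkdup.sh; the core idea can be sketched as a short pipeline that extracts the kernel-doc subject names from a generated header and reports any that occur more than once (a sketch only, not the real script):

```shell
#!/bin/sh
# Report kernel-doc subjects that appear more than once in a generated
# atomic header.  Kernel-doc subject lines look like:
#   " * arch_atomic_inc - Atomic increment"
# so field 2 (after the leading " *") is the function name.
header=${1:-include/linux/atomic/atomic-arch-fallback.h}

grep -E '^ \* arch_atomic' "$header" |
	awk '{ print $2 }' |
	sort | uniq -d
```

Any name the pipeline prints is a duplicate, which is the signal used to emit the one-line "omitting duplicate ... kernel-doc header" comment instead of a second header.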

Reported-by: Akira Yokosawa <[email protected]>
Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Mark Rutland <[email protected]>
---
include/linux/atomic/atomic-arch-fallback.h | 386 +++----------------
scripts/atomic/chkdup.sh | 27 ++
scripts/atomic/fallbacks/acquire | 3 +
scripts/atomic/fallbacks/add_negative | 5 +
scripts/atomic/fallbacks/add_unless | 5 +
scripts/atomic/fallbacks/andnot | 5 +
scripts/atomic/fallbacks/dec | 5 +
scripts/atomic/fallbacks/dec_and_test | 5 +
scripts/atomic/fallbacks/dec_if_positive | 5 +
scripts/atomic/fallbacks/dec_unless_positive | 5 +
scripts/atomic/fallbacks/fence | 3 +
scripts/atomic/fallbacks/fetch_add_unless | 5 +
scripts/atomic/fallbacks/inc | 5 +
scripts/atomic/fallbacks/inc_and_test | 5 +
scripts/atomic/fallbacks/inc_not_zero | 5 +
scripts/atomic/fallbacks/inc_unless_negative | 5 +
scripts/atomic/fallbacks/read_acquire | 5 +
scripts/atomic/fallbacks/release | 3 +
scripts/atomic/fallbacks/set_release | 5 +
scripts/atomic/fallbacks/sub_and_test | 5 +
scripts/atomic/fallbacks/try_cmpxchg | 5 +
scripts/atomic/gen-atomics.sh | 4 +
22 files changed, 163 insertions(+), 343 deletions(-)
create mode 100644 scripts/atomic/chkdup.sh

diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index 41aa94f0aacd..2d56726f8662 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -639,13 +639,7 @@ arch_atomic_inc_return_relaxed(atomic_t *v)
#else /* arch_atomic_inc_return_relaxed */

#ifndef arch_atomic_inc_return_acquire
-/**
- * arch_atomic_inc_return_acquire - Atomic inc with acquire ordering
- * @v: pointer of type atomic_t
- *
- * Atomically increment @v using acquire ordering.
- * Return new value.
- */
+// Fallback acquire omitting duplicate arch_atomic_inc_return_acquire() kernel-doc header.
static __always_inline int
arch_atomic_inc_return_acquire(atomic_t *v)
{
@@ -657,13 +651,7 @@ arch_atomic_inc_return_acquire(atomic_t *v)
#endif

#ifndef arch_atomic_inc_return_release
-/**
- * arch_atomic_inc_return_release - Atomic inc with release ordering
- * @v: pointer of type atomic_t
- *
- * Atomically increment @v using release ordering.
- * Return new value.
- */
+// Fallback release omitting duplicate arch_atomic_inc_return_release() kernel-doc header.
static __always_inline int
arch_atomic_inc_return_release(atomic_t *v)
{
@@ -674,13 +662,7 @@ arch_atomic_inc_return_release(atomic_t *v)
#endif

#ifndef arch_atomic_inc_return
-/**
- * arch_atomic_inc_return - Atomic inc with full ordering
- * @v: pointer of type atomic_t
- *
- * Atomically increment @v using full ordering.
- * Return new value.
- */
+// Fallback fence omitting duplicate arch_atomic_inc_return() kernel-doc header.
static __always_inline int
arch_atomic_inc_return(atomic_t *v)
{
@@ -769,13 +751,7 @@ arch_atomic_fetch_inc_relaxed(atomic_t *v)
#else /* arch_atomic_fetch_inc_relaxed */

#ifndef arch_atomic_fetch_inc_acquire
-/**
- * arch_atomic_fetch_inc_acquire - Atomic inc with acquire ordering
- * @v: pointer of type atomic_t
- *
- * Atomically increment @v using acquire ordering.
- * Return old value.
- */
+// Fallback acquire omitting duplicate arch_atomic_fetch_inc_acquire() kernel-doc header.
static __always_inline int
arch_atomic_fetch_inc_acquire(atomic_t *v)
{
@@ -787,13 +763,7 @@ arch_atomic_fetch_inc_acquire(atomic_t *v)
#endif

#ifndef arch_atomic_fetch_inc_release
-/**
- * arch_atomic_fetch_inc_release - Atomic inc with release ordering
- * @v: pointer of type atomic_t
- *
- * Atomically increment @v using release ordering.
- * Return old value.
- */
+// Fallback release omitting duplicate arch_atomic_fetch_inc_release() kernel-doc header.
static __always_inline int
arch_atomic_fetch_inc_release(atomic_t *v)
{
@@ -804,13 +774,7 @@ arch_atomic_fetch_inc_release(atomic_t *v)
#endif

#ifndef arch_atomic_fetch_inc
-/**
- * arch_atomic_fetch_inc - Atomic inc with full ordering
- * @v: pointer of type atomic_t
- *
- * Atomically increment @v using full ordering.
- * Return old value.
- */
+// Fallback fence omitting duplicate arch_atomic_fetch_inc() kernel-doc header.
static __always_inline int
arch_atomic_fetch_inc(atomic_t *v)
{
@@ -915,13 +879,7 @@ arch_atomic_dec_return_relaxed(atomic_t *v)
#else /* arch_atomic_dec_return_relaxed */

#ifndef arch_atomic_dec_return_acquire
-/**
- * arch_atomic_dec_return_acquire - Atomic dec with acquire ordering
- * @v: pointer of type atomic_t
- *
- * Atomically decrement @v using acquire ordering.
- * Return new value.
- */
+// Fallback acquire omitting duplicate arch_atomic_dec_return_acquire() kernel-doc header.
static __always_inline int
arch_atomic_dec_return_acquire(atomic_t *v)
{
@@ -933,13 +891,7 @@ arch_atomic_dec_return_acquire(atomic_t *v)
#endif

#ifndef arch_atomic_dec_return_release
-/**
- * arch_atomic_dec_return_release - Atomic dec with release ordering
- * @v: pointer of type atomic_t
- *
- * Atomically decrement @v using release ordering.
- * Return new value.
- */
+// Fallback release omitting duplicate arch_atomic_dec_return_release() kernel-doc header.
static __always_inline int
arch_atomic_dec_return_release(atomic_t *v)
{
@@ -950,13 +902,7 @@ arch_atomic_dec_return_release(atomic_t *v)
#endif

#ifndef arch_atomic_dec_return
-/**
- * arch_atomic_dec_return - Atomic dec with full ordering
- * @v: pointer of type atomic_t
- *
- * Atomically decrement @v using full ordering.
- * Return new value.
- */
+// Fallback fence omitting duplicate arch_atomic_dec_return() kernel-doc header.
static __always_inline int
arch_atomic_dec_return(atomic_t *v)
{
@@ -1045,13 +991,7 @@ arch_atomic_fetch_dec_relaxed(atomic_t *v)
#else /* arch_atomic_fetch_dec_relaxed */

#ifndef arch_atomic_fetch_dec_acquire
-/**
- * arch_atomic_fetch_dec_acquire - Atomic dec with acquire ordering
- * @v: pointer of type atomic_t
- *
- * Atomically decrement @v using acquire ordering.
- * Return old value.
- */
+// Fallback acquire omitting duplicate arch_atomic_fetch_dec_acquire() kernel-doc header.
static __always_inline int
arch_atomic_fetch_dec_acquire(atomic_t *v)
{
@@ -1063,13 +1003,7 @@ arch_atomic_fetch_dec_acquire(atomic_t *v)
#endif

#ifndef arch_atomic_fetch_dec_release
-/**
- * arch_atomic_fetch_dec_release - Atomic dec with release ordering
- * @v: pointer of type atomic_t
- *
- * Atomically decrement @v using release ordering.
- * Return old value.
- */
+// Fallback release omitting duplicate arch_atomic_fetch_dec_release() kernel-doc header.
static __always_inline int
arch_atomic_fetch_dec_release(atomic_t *v)
{
@@ -1080,13 +1014,7 @@ arch_atomic_fetch_dec_release(atomic_t *v)
#endif

#ifndef arch_atomic_fetch_dec
-/**
- * arch_atomic_fetch_dec - Atomic dec with full ordering
- * @v: pointer of type atomic_t
- *
- * Atomically decrement @v using full ordering.
- * Return old value.
- */
+// Fallback fence omitting duplicate arch_atomic_fetch_dec() kernel-doc header.
static __always_inline int
arch_atomic_fetch_dec(atomic_t *v)
{
@@ -1262,14 +1190,7 @@ arch_atomic_fetch_andnot_relaxed(int i, atomic_t *v)
#else /* arch_atomic_fetch_andnot_relaxed */

#ifndef arch_atomic_fetch_andnot_acquire
-/**
- * arch_atomic_fetch_andnot_acquire - Atomic andnot with acquire ordering
- * @i: value to complement then AND
- * @v: pointer of type atomic_t
- *
- * Atomically complement then AND @i with @v using acquire ordering.
- * Return old value.
- */
+// Fallback acquire omitting duplicate arch_atomic_fetch_andnot_acquire() kernel-doc header.
static __always_inline int
arch_atomic_fetch_andnot_acquire(int i, atomic_t *v)
{
@@ -1281,14 +1202,7 @@ arch_atomic_fetch_andnot_acquire(int i, atomic_t *v)
#endif

#ifndef arch_atomic_fetch_andnot_release
-/**
- * arch_atomic_fetch_andnot_release - Atomic andnot with release ordering
- * @i: value to complement then AND
- * @v: pointer of type atomic_t
- *
- * Atomically complement then AND @i with @v using release ordering.
- * Return old value.
- */
+// Fallback release omitting duplicate arch_atomic_fetch_andnot_release() kernel-doc header.
static __always_inline int
arch_atomic_fetch_andnot_release(int i, atomic_t *v)
{
@@ -1299,14 +1213,7 @@ arch_atomic_fetch_andnot_release(int i, atomic_t *v)
#endif

#ifndef arch_atomic_fetch_andnot
-/**
- * arch_atomic_fetch_andnot - Atomic andnot with full ordering
- * @i: value to complement then AND
- * @v: pointer of type atomic_t
- *
- * Atomically complement then AND @i with @v using full ordering.
- * Return old value.
- */
+// Fallback fence omitting duplicate arch_atomic_fetch_andnot() kernel-doc header.
static __always_inline int
arch_atomic_fetch_andnot(int i, atomic_t *v)
{
@@ -1699,18 +1606,7 @@ arch_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
#else /* arch_atomic_try_cmpxchg_relaxed */

#ifndef arch_atomic_try_cmpxchg_acquire
-/**
- * arch_atomic_try_cmpxchg_acquire - Atomic try_cmpxchg with acquire ordering
- * @v: pointer of type atomic_t
- * @old: desired old value to match
- * @new: new value to put in
- *
- * Atomically compares @new to *@v, and if equal,
- * stores @new to *@v, providing acquire ordering.
- * Returns @true if the cmpxchg operation succeeded,
- * and false otherwise. Either way, stores the old
- * value of *@v to *@old.
- */
+// Fallback acquire omitting duplicate arch_atomic_try_cmpxchg_acquire() kernel-doc header.
static __always_inline bool
arch_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
{
@@ -1722,18 +1618,7 @@ arch_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
#endif

#ifndef arch_atomic_try_cmpxchg_release
-/**
- * arch_atomic_try_cmpxchg_release - Atomic try_cmpxchg with release ordering
- * @v: pointer of type atomic_t
- * @old: desired old value to match
- * @new: new value to put in
- *
- * Atomically compares @new to *@v, and if equal,
- * stores @new to *@v, providing release ordering.
- * Returns @true if the cmpxchg operation succeeded,
- * and false otherwise. Either way, stores the old
- * value of *@v to *@old.
- */
+// Fallback release omitting duplicate arch_atomic_try_cmpxchg_release() kernel-doc header.
static __always_inline bool
arch_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
{
@@ -1744,18 +1629,7 @@ arch_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
#endif

#ifndef arch_atomic_try_cmpxchg
-/**
- * arch_atomic_try_cmpxchg - Atomic try_cmpxchg with full ordering
- * @v: pointer of type atomic_t
- * @old: desired old value to match
- * @new: new value to put in
- *
- * Atomically compares @new to *@v, and if equal,
- * stores @new to *@v, providing full ordering.
- * Returns @true if the cmpxchg operation succeeded,
- * and false otherwise. Either way, stores the old
- * value of *@v to *@old.
- */
+// Fallback fence omitting duplicate arch_atomic_try_cmpxchg() kernel-doc header.
static __always_inline bool
arch_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
{
@@ -1900,15 +1774,7 @@ arch_atomic_add_negative_relaxed(int i, atomic_t *v)
#else /* arch_atomic_add_negative_relaxed */

#ifndef arch_atomic_add_negative_acquire
-/**
- * arch_atomic_add_negative_acquire - Atomic add_negative with acquire ordering
- * @i: value to add
- * @v: pointer of type atomic_t
- *
- * Atomically add @i with @v using acquire ordering.
- * Return @true if the result is negative, or @false when
- * the result is greater than or equal to zero.
- */
+// Fallback acquire omitting duplicate arch_atomic_add_negative_acquire() kernel-doc header.
static __always_inline bool
arch_atomic_add_negative_acquire(int i, atomic_t *v)
{
@@ -1920,15 +1786,7 @@ arch_atomic_add_negative_acquire(int i, atomic_t *v)
#endif

#ifndef arch_atomic_add_negative_release
-/**
- * arch_atomic_add_negative_release - Atomic add_negative with release ordering
- * @i: value to add
- * @v: pointer of type atomic_t
- *
- * Atomically add @i with @v using release ordering.
- * Return @true if the result is negative, or @false when
- * the result is greater than or equal to zero.
- */
+// Fallback release omitting duplicate arch_atomic_add_negative_release() kernel-doc header.
static __always_inline bool
arch_atomic_add_negative_release(int i, atomic_t *v)
{
@@ -1939,15 +1797,7 @@ arch_atomic_add_negative_release(int i, atomic_t *v)
#endif

#ifndef arch_atomic_add_negative
-/**
- * arch_atomic_add_negative - Atomic add_negative with full ordering
- * @i: value to add
- * @v: pointer of type atomic_t
- *
- * Atomically add @i with @v using full ordering.
- * Return @true if the result is negative, or @false when
- * the result is greater than or equal to zero.
- */
+// Fallback fence omitting duplicate arch_atomic_add_negative() kernel-doc header.
static __always_inline bool
arch_atomic_add_negative(int i, atomic_t *v)
{
@@ -2500,13 +2350,7 @@ arch_atomic64_inc_return_relaxed(atomic64_t *v)
#else /* arch_atomic64_inc_return_relaxed */

#ifndef arch_atomic64_inc_return_acquire
-/**
- * arch_atomic64_inc_return_acquire - Atomic inc with acquire ordering
- * @v: pointer of type atomic64_t
- *
- * Atomically increment @v using acquire ordering.
- * Return new value.
- */
+// Fallback acquire omitting duplicate arch_atomic64_inc_return_acquire() kernel-doc header.
static __always_inline s64
arch_atomic64_inc_return_acquire(atomic64_t *v)
{
@@ -2518,13 +2362,7 @@ arch_atomic64_inc_return_acquire(atomic64_t *v)
#endif

#ifndef arch_atomic64_inc_return_release
-/**
- * arch_atomic64_inc_return_release - Atomic inc with release ordering
- * @v: pointer of type atomic64_t
- *
- * Atomically increment @v using release ordering.
- * Return new value.
- */
+// Fallback release omitting duplicate arch_atomic64_inc_return_release() kernel-doc header.
static __always_inline s64
arch_atomic64_inc_return_release(atomic64_t *v)
{
@@ -2535,13 +2373,7 @@ arch_atomic64_inc_return_release(atomic64_t *v)
#endif

#ifndef arch_atomic64_inc_return
-/**
- * arch_atomic64_inc_return - Atomic inc with full ordering
- * @v: pointer of type atomic64_t
- *
- * Atomically increment @v using full ordering.
- * Return new value.
- */
+// Fallback fence omitting duplicate arch_atomic64_inc_return() kernel-doc header.
static __always_inline s64
arch_atomic64_inc_return(atomic64_t *v)
{
@@ -2630,13 +2462,7 @@ arch_atomic64_fetch_inc_relaxed(atomic64_t *v)
#else /* arch_atomic64_fetch_inc_relaxed */

#ifndef arch_atomic64_fetch_inc_acquire
-/**
- * arch_atomic64_fetch_inc_acquire - Atomic inc with acquire ordering
- * @v: pointer of type atomic64_t
- *
- * Atomically increment @v using acquire ordering.
- * Return old value.
- */
+// Fallback acquire omitting duplicate arch_atomic64_fetch_inc_acquire() kernel-doc header.
static __always_inline s64
arch_atomic64_fetch_inc_acquire(atomic64_t *v)
{
@@ -2648,13 +2474,7 @@ arch_atomic64_fetch_inc_acquire(atomic64_t *v)
#endif

#ifndef arch_atomic64_fetch_inc_release
-/**
- * arch_atomic64_fetch_inc_release - Atomic inc with release ordering
- * @v: pointer of type atomic64_t
- *
- * Atomically increment @v using release ordering.
- * Return old value.
- */
+// Fallback release omitting duplicate arch_atomic64_fetch_inc_release() kernel-doc header.
static __always_inline s64
arch_atomic64_fetch_inc_release(atomic64_t *v)
{
@@ -2665,13 +2485,7 @@ arch_atomic64_fetch_inc_release(atomic64_t *v)
#endif

#ifndef arch_atomic64_fetch_inc
-/**
- * arch_atomic64_fetch_inc - Atomic inc with full ordering
- * @v: pointer of type atomic64_t
- *
- * Atomically increment @v using full ordering.
- * Return old value.
- */
+// Fallback fence omitting duplicate arch_atomic64_fetch_inc() kernel-doc header.
static __always_inline s64
arch_atomic64_fetch_inc(atomic64_t *v)
{
@@ -2776,13 +2590,7 @@ arch_atomic64_dec_return_relaxed(atomic64_t *v)
#else /* arch_atomic64_dec_return_relaxed */

#ifndef arch_atomic64_dec_return_acquire
-/**
- * arch_atomic64_dec_return_acquire - Atomic dec with acquire ordering
- * @v: pointer of type atomic64_t
- *
- * Atomically decrement @v using acquire ordering.
- * Return new value.
- */
+// Fallback acquire omitting duplicate arch_atomic64_dec_return_acquire() kernel-doc header.
static __always_inline s64
arch_atomic64_dec_return_acquire(atomic64_t *v)
{
@@ -2794,13 +2602,7 @@ arch_atomic64_dec_return_acquire(atomic64_t *v)
#endif

#ifndef arch_atomic64_dec_return_release
-/**
- * arch_atomic64_dec_return_release - Atomic dec with release ordering
- * @v: pointer of type atomic64_t
- *
- * Atomically decrement @v using release ordering.
- * Return new value.
- */
+// Fallback release omitting duplicate arch_atomic64_dec_return_release() kernel-doc header.
static __always_inline s64
arch_atomic64_dec_return_release(atomic64_t *v)
{
@@ -2811,13 +2613,7 @@ arch_atomic64_dec_return_release(atomic64_t *v)
#endif

#ifndef arch_atomic64_dec_return
-/**
- * arch_atomic64_dec_return - Atomic dec with full ordering
- * @v: pointer of type atomic64_t
- *
- * Atomically decrement @v using full ordering.
- * Return new value.
- */
+// Fallback fence omitting duplicate arch_atomic64_dec_return() kernel-doc header.
static __always_inline s64
arch_atomic64_dec_return(atomic64_t *v)
{
@@ -2906,13 +2702,7 @@ arch_atomic64_fetch_dec_relaxed(atomic64_t *v)
#else /* arch_atomic64_fetch_dec_relaxed */

#ifndef arch_atomic64_fetch_dec_acquire
-/**
- * arch_atomic64_fetch_dec_acquire - Atomic dec with acquire ordering
- * @v: pointer of type atomic64_t
- *
- * Atomically decrement @v using acquire ordering.
- * Return old value.
- */
+// Fallback acquire omitting duplicate arch_atomic64_fetch_dec_acquire() kernel-doc header.
static __always_inline s64
arch_atomic64_fetch_dec_acquire(atomic64_t *v)
{
@@ -2924,13 +2714,7 @@ arch_atomic64_fetch_dec_acquire(atomic64_t *v)
#endif

#ifndef arch_atomic64_fetch_dec_release
-/**
- * arch_atomic64_fetch_dec_release - Atomic dec with release ordering
- * @v: pointer of type atomic64_t
- *
- * Atomically decrement @v using release ordering.
- * Return old value.
- */
+// Fallback release omitting duplicate arch_atomic64_fetch_dec_release() kernel-doc header.
static __always_inline s64
arch_atomic64_fetch_dec_release(atomic64_t *v)
{
@@ -2941,13 +2725,7 @@ arch_atomic64_fetch_dec_release(atomic64_t *v)
#endif

#ifndef arch_atomic64_fetch_dec
-/**
- * arch_atomic64_fetch_dec - Atomic dec with full ordering
- * @v: pointer of type atomic64_t
- *
- * Atomically decrement @v using full ordering.
- * Return old value.
- */
+// Fallback fence omitting duplicate arch_atomic64_fetch_dec() kernel-doc header.
static __always_inline s64
arch_atomic64_fetch_dec(atomic64_t *v)
{
@@ -3123,14 +2901,7 @@ arch_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
#else /* arch_atomic64_fetch_andnot_relaxed */

#ifndef arch_atomic64_fetch_andnot_acquire
-/**
- * arch_atomic64_fetch_andnot_acquire - Atomic andnot with acquire ordering
- * @i: value to complement then AND
- * @v: pointer of type atomic64_t
- *
- * Atomically complement then AND @i with @v using acquire ordering.
- * Return old value.
- */
+// Fallback acquire omitting duplicate arch_atomic64_fetch_andnot_acquire() kernel-doc header.
static __always_inline s64
arch_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
{
@@ -3142,14 +2913,7 @@ arch_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
#endif

#ifndef arch_atomic64_fetch_andnot_release
-/**
- * arch_atomic64_fetch_andnot_release - Atomic andnot with release ordering
- * @i: value to complement then AND
- * @v: pointer of type atomic64_t
- *
- * Atomically complement then AND @i with @v using release ordering.
- * Return old value.
- */
+// Fallback release omitting duplicate arch_atomic64_fetch_andnot_release() kernel-doc header.
static __always_inline s64
arch_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
{
@@ -3160,14 +2924,7 @@ arch_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
#endif

#ifndef arch_atomic64_fetch_andnot
-/**
- * arch_atomic64_fetch_andnot - Atomic andnot with full ordering
- * @i: value to complement then AND
- * @v: pointer of type atomic64_t
- *
- * Atomically complement then AND @i with @v using full ordering.
- * Return old value.
- */
+// Fallback fence omitting duplicate arch_atomic64_fetch_andnot() kernel-doc header.
static __always_inline s64
arch_atomic64_fetch_andnot(s64 i, atomic64_t *v)
{
@@ -3560,18 +3317,7 @@ arch_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
#else /* arch_atomic64_try_cmpxchg_relaxed */

#ifndef arch_atomic64_try_cmpxchg_acquire
-/**
- * arch_atomic64_try_cmpxchg_acquire - Atomic try_cmpxchg with acquire ordering
- * @v: pointer of type atomic64_t
- * @old: desired old value to match
- * @new: new value to put in
- *
- * Atomically compares @new to *@v, and if equal,
- * stores @new to *@v, providing acquire ordering.
- * Returns @true if the cmpxchg operation succeeded,
- * and false otherwise. Either way, stores the old
- * value of *@v to *@old.
- */
+// Fallback acquire omitting duplicate arch_atomic64_try_cmpxchg_acquire() kernel-doc header.
static __always_inline bool
arch_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
{
@@ -3583,18 +3329,7 @@ arch_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
#endif

#ifndef arch_atomic64_try_cmpxchg_release
-/**
- * arch_atomic64_try_cmpxchg_release - Atomic try_cmpxchg with release ordering
- * @v: pointer of type atomic64_t
- * @old: desired old value to match
- * @new: new value to put in
- *
- * Atomically compares @new to *@v, and if equal,
- * stores @new to *@v, providing release ordering.
- * Returns @true if the cmpxchg operation succeeded,
- * and false otherwise. Either way, stores the old
- * value of *@v to *@old.
- */
+// Fallback release omitting duplicate arch_atomic64_try_cmpxchg_release() kernel-doc header.
static __always_inline bool
arch_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
{
@@ -3605,18 +3340,7 @@ arch_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
#endif

#ifndef arch_atomic64_try_cmpxchg
-/**
- * arch_atomic64_try_cmpxchg - Atomic try_cmpxchg with full ordering
- * @v: pointer of type atomic64_t
- * @old: desired old value to match
- * @new: new value to put in
- *
- * Atomically compares @new to *@v, and if equal,
- * stores @new to *@v, providing full ordering.
- * Returns @true if the cmpxchg operation succeeded,
- * and false otherwise. Either way, stores the old
- * value of *@v to *@old.
- */
+// Fallback fence omitting duplicate arch_atomic64_try_cmpxchg() kernel-doc header.
static __always_inline bool
arch_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
{
@@ -3761,15 +3485,7 @@ arch_atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
#else /* arch_atomic64_add_negative_relaxed */

#ifndef arch_atomic64_add_negative_acquire
-/**
- * arch_atomic64_add_negative_acquire - Atomic add_negative with acquire ordering
- * @i: value to add
- * @v: pointer of type atomic64_t
- *
- * Atomically add @i with @v using acquire ordering.
- * Return @true if the result is negative, or @false when
- * the result is greater than or equal to zero.
- */
+// Fallback acquire omitting duplicate arch_atomic64_add_negative_acquire() kernel-doc header.
static __always_inline bool
arch_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
{
@@ -3781,15 +3497,7 @@ arch_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
#endif

#ifndef arch_atomic64_add_negative_release
-/**
- * arch_atomic64_add_negative_release - Atomic add_negative with release ordering
- * @i: value to add
- * @v: pointer of type atomic64_t
- *
- * Atomically add @i with @v using release ordering.
- * Return @true if the result is negative, or @false when
- * the result is greater than or equal to zero.
- */
+// Fallback release omitting duplicate arch_atomic64_add_negative_release() kernel-doc header.
static __always_inline bool
arch_atomic64_add_negative_release(s64 i, atomic64_t *v)
{
@@ -3800,15 +3508,7 @@ arch_atomic64_add_negative_release(s64 i, atomic64_t *v)
#endif

#ifndef arch_atomic64_add_negative
-/**
- * arch_atomic64_add_negative - Atomic add_negative with full ordering
- * @i: value to add
- * @v: pointer of type atomic64_t
- *
- * Atomically add @i with @v using full ordering.
- * Return @true if the result is negative, or @false when
- * the result is greater than or equal to zero.
- */
+// Fallback fence omitting duplicate arch_atomic64_add_negative() kernel-doc header.
static __always_inline bool
arch_atomic64_add_negative(s64 i, atomic64_t *v)
{
@@ -3958,4 +3658,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v)
#endif

#endif /* _LINUX_ATOMIC_FALLBACK_H */
-// 7c2c97cd48cf9c672efc44b9fed5a37b8970dde4
+// 9bf9febc5288ed9539d1b3cfbbc6e36743b74c3b
diff --git a/scripts/atomic/chkdup.sh b/scripts/atomic/chkdup.sh
new file mode 100644
index 000000000000..04bb4f5c5c34
--- /dev/null
+++ b/scripts/atomic/chkdup.sh
@@ -0,0 +1,24 @@
+#!/bin/sh
+# SPDX-License-Identifier: GPL-2.0
+#
+# Check to see if the specified atomic is already in use. This is
+# done by keeping filenames in the temporary directory specified by the
+# environment variable T.
+#
+# Usage:
+# chkdup.sh name fallback
+#
+# The "name" argument is the name of the function to be generated, and
+# the "fallback" argument is the name of the fallback script that is
+# doing the generation.
+#
+# If the function is a duplicate, output a comment saying so and
+# exit with non-zero (error) status. Otherwise exit successfully.
+
+if test -f ${T}/${1}
+then
+ echo // Fallback ${2} omitting duplicate "${1}()" kernel-doc header.
+ exit 1
+fi
+touch ${T}/${1}
+exit 0
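For readers unfamiliar with the flag-file idiom, the following stand-alone sketch reproduces chkdup.sh's behavior as a shell function. The function name and the atomic names are illustrative only, not part of the patch:

```shell
# Minimal sketch of the chkdup.sh duplicate check: the first call for a
# given name records it in $T and succeeds; any later call for the same
# name emits the "omitting duplicate" comment and fails.
T=$(mktemp -d)
trap 'rm -rf "$T"' 0

chkdup() {	# chkdup name fallback
	if test -f "${T}/${1}"
	then
		echo "// Fallback ${2} omitting duplicate ${1}() kernel-doc header."
		return 1
	fi
	touch "${T}/${1}"
	return 0
}

chkdup arch_atomic_inc inc && echo "first call: emit kernel-doc"
chkdup arch_atomic_inc inc || echo "second call: kernel-doc suppressed"
```

The second call prints the same "// Fallback ... omitting duplicate ..." comment that appears in the generated header hunks above.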
diff --git a/scripts/atomic/fallbacks/acquire b/scripts/atomic/fallbacks/acquire
index 08fc6c30a9ef..a349935ac7fe 100755
--- a/scripts/atomic/fallbacks/acquire
+++ b/scripts/atomic/fallbacks/acquire
@@ -1,5 +1,8 @@
+if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_${pfx}${name}${sfx}_acquire acquire
+then
acqrel=acquire
. ${ATOMICDIR}/acqrel.sh
+fi
cat << EOF
static __always_inline ${ret}
arch_${atomic}_${pfx}${name}${sfx}_acquire(${params})
diff --git a/scripts/atomic/fallbacks/add_negative b/scripts/atomic/fallbacks/add_negative
index c032e8bec6e2..b105fdfe8fd1 100755
--- a/scripts/atomic/fallbacks/add_negative
+++ b/scripts/atomic/fallbacks/add_negative
@@ -1,3 +1,5 @@
+if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_add_negative${order} add_negative
+then
cat <<EOF
/**
* arch_${atomic}_add_negative${order} - Add and test if negative
@@ -7,6 +9,9 @@ cat <<EOF
* Atomically adds @i to @v and returns @true if the result is negative,
* or @false when the result is greater than or equal to zero.
*/
+EOF
+fi
+cat <<EOF
static __always_inline bool
arch_${atomic}_add_negative${order}(${int} i, ${atomic}_t *v)
{
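The heredoc split above is the key move: the kernel-doc comment sits in its own `cat <<EOF` that runs only when chkdup.sh reports the name as new, while the function definition is emitted by an unconditional second heredoc. A stand-alone sketch of that control flow, with chkdup.sh simulated by a flag file and all names illustrative:

```shell
T=$(mktemp -d)
trap 'rm -rf "$T"' 0
name=arch_atomic_inc

emit_template() {
	# Kernel-doc header: emitted only the first time this name is seen.
	if test ! -f "${T}/${name}"
	then
		touch "${T}/${name}"
		cat <<EOF
/**
 * ${name} - Atomic increment
 */
EOF
	fi
	# Function definition: always emitted, duplicate or not.
	cat <<EOF
static __always_inline void ${name}(atomic_t *v) { }
EOF
}

emit_template	# prints header and definition
emit_template	# prints definition only
```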
diff --git a/scripts/atomic/fallbacks/add_unless b/scripts/atomic/fallbacks/add_unless
index 650fee935aed..d72d382e3757 100755
--- a/scripts/atomic/fallbacks/add_unless
+++ b/scripts/atomic/fallbacks/add_unless
@@ -1,3 +1,5 @@
+if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_add_unless add_unless
+then
cat << EOF
/**
* arch_${atomic}_add_unless - add unless the number is already a given value
@@ -8,6 +10,9 @@ cat << EOF
* Atomically adds @a to @v, if @v was not already @u.
* Returns @true if the addition was done.
*/
+EOF
+fi
+cat << EOF
static __always_inline bool
arch_${atomic}_add_unless(${atomic}_t *v, ${int} a, ${int} u)
{
diff --git a/scripts/atomic/fallbacks/andnot b/scripts/atomic/fallbacks/andnot
index 9fbc0ce75a7c..57b2a187374a 100755
--- a/scripts/atomic/fallbacks/andnot
+++ b/scripts/atomic/fallbacks/andnot
@@ -1,3 +1,5 @@
+if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_${pfx}andnot${sfx}${order} andnot
+then
cat <<EOF
/**
* arch_${atomic}_${pfx}andnot${sfx}${order} - Atomic and-not
@@ -7,6 +9,9 @@ cat <<EOF
 * Atomically and-not @i with @v using ${docbook_order} ordering,
* returning ${docbook_oldnew} value.
*/
+EOF
+fi
+cat <<EOF
static __always_inline ${ret}
arch_${atomic}_${pfx}andnot${sfx}${order}(${int} i, ${atomic}_t *v)
{
diff --git a/scripts/atomic/fallbacks/dec b/scripts/atomic/fallbacks/dec
index e99c8edd36a3..e44d3eb96d2b 100755
--- a/scripts/atomic/fallbacks/dec
+++ b/scripts/atomic/fallbacks/dec
@@ -1,3 +1,5 @@
+if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_${pfx}dec${sfx}${order} dec
+then
cat <<EOF
/**
* arch_${atomic}_${pfx}dec${sfx}${order} - Atomic decrement
@@ -6,6 +8,9 @@ cat <<EOF
* Atomically decrement @v with ${docbook_order} ordering,
* returning ${docbook_oldnew} value.
*/
+EOF
+fi
+cat <<EOF
static __always_inline ${ret}
arch_${atomic}_${pfx}dec${sfx}${order}(${atomic}_t *v)
{
diff --git a/scripts/atomic/fallbacks/dec_and_test b/scripts/atomic/fallbacks/dec_and_test
index 3720896b1afc..94f5a6d4827c 100755
--- a/scripts/atomic/fallbacks/dec_and_test
+++ b/scripts/atomic/fallbacks/dec_and_test
@@ -1,3 +1,5 @@
+if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_dec_and_test dec_and_test
+then
cat <<EOF
/**
* arch_${atomic}_dec_and_test - decrement and test
@@ -7,6 +9,9 @@ cat <<EOF
* returns @true if the result is 0, or @false for all other
* cases.
*/
+EOF
+fi
+cat <<EOF
static __always_inline bool
arch_${atomic}_dec_and_test(${atomic}_t *v)
{
diff --git a/scripts/atomic/fallbacks/dec_if_positive b/scripts/atomic/fallbacks/dec_if_positive
index dedbdbc1487d..e27eb71dd1b2 100755
--- a/scripts/atomic/fallbacks/dec_if_positive
+++ b/scripts/atomic/fallbacks/dec_if_positive
@@ -1,3 +1,5 @@
+if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_dec_if_positive dec_if_positive
+then
cat <<EOF
/**
* arch_${atomic}_dec_if_positive - Atomic decrement if old value is positive
@@ -9,6 +11,9 @@ cat <<EOF
* there @v will not be decremented, but -4 will be returned. As a result,
* if the return value is non-negative, then the value was in fact decremented.
*/
+EOF
+fi
+cat <<EOF
static __always_inline ${ret}
arch_${atomic}_dec_if_positive(${atomic}_t *v)
{
diff --git a/scripts/atomic/fallbacks/dec_unless_positive b/scripts/atomic/fallbacks/dec_unless_positive
index c3d01d201c63..ee00fffc5f11 100755
--- a/scripts/atomic/fallbacks/dec_unless_positive
+++ b/scripts/atomic/fallbacks/dec_unless_positive
@@ -1,3 +1,5 @@
+if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_dec_unless_positive dec_unless_positive
+then
cat <<EOF
/**
* arch_${atomic}_dec_unless_positive - Atomic decrement if old value is non-positive
@@ -7,6 +9,9 @@ cat <<EOF
* than or equal to zero. Return @true if the decrement happened and
* @false otherwise.
*/
+EOF
+fi
+cat <<EOF
static __always_inline bool
arch_${atomic}_dec_unless_positive(${atomic}_t *v)
{
diff --git a/scripts/atomic/fallbacks/fence b/scripts/atomic/fallbacks/fence
index 975855dfba25..f4901343cd2b 100755
--- a/scripts/atomic/fallbacks/fence
+++ b/scripts/atomic/fallbacks/fence
@@ -1,5 +1,8 @@
+if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_${pfx}${name}${sfx} fence
+then
acqrel=full
. ${ATOMICDIR}/acqrel.sh
+fi
cat <<EOF
static __always_inline ${ret}
arch_${atomic}_${pfx}${name}${sfx}(${params})
diff --git a/scripts/atomic/fallbacks/fetch_add_unless b/scripts/atomic/fallbacks/fetch_add_unless
index a1692df0d514..ec583d340785 100755
--- a/scripts/atomic/fallbacks/fetch_add_unless
+++ b/scripts/atomic/fallbacks/fetch_add_unless
@@ -1,3 +1,5 @@
+if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_fetch_add_unless fetch_add_unless
+then
cat << EOF
/**
* arch_${atomic}_fetch_add_unless - add unless the number is already a given value
@@ -8,6 +10,9 @@ cat << EOF
* Atomically adds @a to @v, so long as @v was not already @u.
* Returns original value of @v.
*/
+EOF
+fi
+cat << EOF
static __always_inline ${int}
arch_${atomic}_fetch_add_unless(${atomic}_t *v, ${int} a, ${int} u)
{
diff --git a/scripts/atomic/fallbacks/inc b/scripts/atomic/fallbacks/inc
index 3f2c0730cd0c..bb1d5ea6846c 100755
--- a/scripts/atomic/fallbacks/inc
+++ b/scripts/atomic/fallbacks/inc
@@ -1,3 +1,5 @@
+if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_${pfx}inc${sfx}${order} inc
+then
cat <<EOF
/**
* arch_${atomic}_${pfx}inc${sfx}${order} - Atomic increment
@@ -6,6 +8,9 @@ cat <<EOF
* Atomically increment @v with ${docbook_order} ordering,
* returning ${docbook_oldnew} value.
*/
+EOF
+fi
+cat <<EOF
static __always_inline ${ret}
arch_${atomic}_${pfx}inc${sfx}${order}(${atomic}_t *v)
{
diff --git a/scripts/atomic/fallbacks/inc_and_test b/scripts/atomic/fallbacks/inc_and_test
index cc3ac1dde508..dd74f6a5ca4a 100755
--- a/scripts/atomic/fallbacks/inc_and_test
+++ b/scripts/atomic/fallbacks/inc_and_test
@@ -1,3 +1,5 @@
+if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_inc_and_test inc_and_test
+then
cat <<EOF
/**
* arch_${atomic}_inc_and_test - increment and test
@@ -7,6 +9,9 @@ cat <<EOF
* and returns @true if the result is zero, or @false for all
* other cases.
*/
+EOF
+fi
+cat <<EOF
static __always_inline bool
arch_${atomic}_inc_and_test(${atomic}_t *v)
{
diff --git a/scripts/atomic/fallbacks/inc_not_zero b/scripts/atomic/fallbacks/inc_not_zero
index 891fa3c057f6..38e2c13dab62 100755
--- a/scripts/atomic/fallbacks/inc_not_zero
+++ b/scripts/atomic/fallbacks/inc_not_zero
@@ -1,3 +1,5 @@
+if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_inc_not_zero inc_not_zero
+then
cat <<EOF
/**
* arch_${atomic}_inc_not_zero - increment unless the number is zero
@@ -6,6 +8,9 @@ cat <<EOF
* Atomically increments @v by 1, if @v is non-zero.
* Returns @true if the increment was done.
*/
+EOF
+fi
+cat <<EOF
static __always_inline bool
arch_${atomic}_inc_not_zero(${atomic}_t *v)
{
diff --git a/scripts/atomic/fallbacks/inc_unless_negative b/scripts/atomic/fallbacks/inc_unless_negative
index 98830b0dcdb1..2dc853c4e5b9 100755
--- a/scripts/atomic/fallbacks/inc_unless_negative
+++ b/scripts/atomic/fallbacks/inc_unless_negative
@@ -1,3 +1,5 @@
+if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_inc_unless_negative inc_unless_negative
+then
cat <<EOF
/**
* arch_${atomic}_inc_unless_negative - Atomic increment if old value is non-negative
@@ -7,6 +9,9 @@ cat <<EOF
* than or equal to zero. Return @true if the increment happened and
* @false otherwise.
*/
+EOF
+fi
+cat <<EOF
static __always_inline bool
arch_${atomic}_inc_unless_negative(${atomic}_t *v)
{
diff --git a/scripts/atomic/fallbacks/read_acquire b/scripts/atomic/fallbacks/read_acquire
index 779f40c07018..680cd43080cb 100755
--- a/scripts/atomic/fallbacks/read_acquire
+++ b/scripts/atomic/fallbacks/read_acquire
@@ -1,3 +1,5 @@
+if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_read_acquire read_acquire
+then
cat <<EOF
/**
* arch_${atomic}_read_acquire - Atomic load acquire
@@ -6,6 +8,9 @@ cat <<EOF
* Atomically load from *@v with acquire ordering, returning the value
* loaded.
*/
+EOF
+fi
+cat <<EOF
static __always_inline ${ret}
arch_${atomic}_read_acquire(const ${atomic}_t *v)
{
diff --git a/scripts/atomic/fallbacks/release b/scripts/atomic/fallbacks/release
index bce3a1cbd497..a1604df66ece 100755
--- a/scripts/atomic/fallbacks/release
+++ b/scripts/atomic/fallbacks/release
@@ -1,5 +1,8 @@
+if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_${pfx}${name}${sfx}_release release
+then
acqrel=release
. ${ATOMICDIR}/acqrel.sh
+fi
cat <<EOF
static __always_inline ${ret}
arch_${atomic}_${pfx}${name}${sfx}_release(${params})
diff --git a/scripts/atomic/fallbacks/set_release b/scripts/atomic/fallbacks/set_release
index 46effb6203e5..2a65d3b29f4b 100755
--- a/scripts/atomic/fallbacks/set_release
+++ b/scripts/atomic/fallbacks/set_release
@@ -1,3 +1,5 @@
+if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_set_release set_release
+then
cat <<EOF
/**
* arch_${atomic}_set_release - Atomic store release
@@ -6,6 +8,9 @@ cat <<EOF
*
* Atomically store @i into *@v with release ordering.
*/
+EOF
+fi
+cat <<EOF
static __always_inline void
arch_${atomic}_set_release(${atomic}_t *v, ${int} i)
{
diff --git a/scripts/atomic/fallbacks/sub_and_test b/scripts/atomic/fallbacks/sub_and_test
index 204282e260ea..0397b0e92192 100755
--- a/scripts/atomic/fallbacks/sub_and_test
+++ b/scripts/atomic/fallbacks/sub_and_test
@@ -1,3 +1,5 @@
+if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_sub_and_test sub_and_test
+then
cat <<EOF
/**
* arch_${atomic}_sub_and_test - subtract value from variable and test result
@@ -8,6 +10,9 @@ cat <<EOF
* @true if the result is zero, or @false for all
* other cases.
*/
+EOF
+fi
+cat <<EOF
static __always_inline bool
arch_${atomic}_sub_and_test(${int} i, ${atomic}_t *v)
{
diff --git a/scripts/atomic/fallbacks/try_cmpxchg b/scripts/atomic/fallbacks/try_cmpxchg
index baf7412f9bf4..e08c5962dd83 100755
--- a/scripts/atomic/fallbacks/try_cmpxchg
+++ b/scripts/atomic/fallbacks/try_cmpxchg
@@ -1,3 +1,5 @@
+if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_try_cmpxchg${order} try_cmpxchg
+then
cat <<EOF
/**
* arch_${atomic}_try_cmpxchg${order} - Atomic cmpxchg with bool return value
@@ -9,6 +11,9 @@ cat <<EOF
* providing ${docbook_order} ordering.
* Returns @true if the cmpxchg operation succeeded, and false otherwise.
*/
+EOF
+fi
+cat <<EOF
static __always_inline bool
arch_${atomic}_try_cmpxchg${order}(${atomic}_t *v, ${int} *old, ${int} new)
{
diff --git a/scripts/atomic/gen-atomics.sh b/scripts/atomic/gen-atomics.sh
index 5b98a8307693..69bf3754df5a 100755
--- a/scripts/atomic/gen-atomics.sh
+++ b/scripts/atomic/gen-atomics.sh
@@ -3,6 +3,10 @@
#
# Generate atomic headers

+T="`mktemp -d ${TMPDIR-/tmp}/gen-atomics.sh.XXXXXX`"
+trap 'rm -rf $T' 0
+export T
+
ATOMICDIR=$(dirname $0)
ATOMICTBL=${ATOMICDIR}/atomics.tbl
LINUXDIR=${ATOMICDIR}/../..
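The driver-script change follows the standard mktemp-plus-trap idiom: create a per-run scratch directory, arrange for it to be removed on exit, and export it so the per-operation fallback scripts (which run as child shells) can share state through it. A stand-alone sketch of that lifecycle; the child-shell touch is illustrative of what chkdup.sh does:

```shell
# Scratch directory shared with child scripts, removed automatically on
# exit (a trap on 0 fires when the shell terminates).
T="`mktemp -d ${TMPDIR-/tmp}/gen-atomics.sh.XXXXXX`"
trap 'rm -rf $T' 0
export T

# A child shell inherits $T and can record per-name state there, which
# is how chkdup.sh tracks already-generated functions across scripts.
sh -c 'touch "${T}/arch_atomic_inc"'
test -f "${T}/arch_atomic_inc" && echo "state visible to parent"
```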
--
2.40.1


2023-05-10 18:44:21

by Paul E. McKenney

[permalink] [raw]
Subject: [PATCH locking/atomic 01/19] locking/atomic: Fix fetch_add_unless missing-period typo

The fetch_add_unless() kernel-doc header is missing a period (".").
Therefore, add it.

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Mark Rutland <[email protected]>
---
include/linux/atomic/atomic-arch-fallback.h | 6 +++---
scripts/atomic/fallbacks/fetch_add_unless | 2 +-
2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index a6e4437c5f36..c4087c32fb0e 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -1351,7 +1351,7 @@ arch_atomic_add_negative(int i, atomic_t *v)
* @u: ...unless v is equal to u.
*
* Atomically adds @a to @v, so long as @v was not already @u.
- * Returns original value of @v
+ * Returns original value of @v.
*/
static __always_inline int
arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
@@ -2567,7 +2567,7 @@ arch_atomic64_add_negative(s64 i, atomic64_t *v)
* @u: ...unless v is equal to u.
*
* Atomically adds @a to @v, so long as @v was not already @u.
- * Returns original value of @v
+ * Returns original value of @v.
*/
static __always_inline s64
arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
@@ -2668,4 +2668,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v)
#endif

#endif /* _LINUX_ATOMIC_FALLBACK_H */
-// ad2e2b4d168dbc60a73922616047a9bfa446af36
+// 201cc01b616875888e0b2c79965c569a89c0edcd
diff --git a/scripts/atomic/fallbacks/fetch_add_unless b/scripts/atomic/fallbacks/fetch_add_unless
index 68ce13c8b9da..a1692df0d514 100755
--- a/scripts/atomic/fallbacks/fetch_add_unless
+++ b/scripts/atomic/fallbacks/fetch_add_unless
@@ -6,7 +6,7 @@ cat << EOF
* @u: ...unless v is equal to u.
*
* Atomically adds @a to @v, so long as @v was not already @u.
- * Returns original value of @v
+ * Returns original value of @v.
*/
static __always_inline ${int}
arch_${atomic}_fetch_add_unless(${atomic}_t *v, ${int} a, ${int} u)
--
2.40.1


2023-05-10 18:44:53

by Paul E. McKenney

[permalink] [raw]
Subject: [PATCH locking/atomic 06/19] locking/atomic: Add kernel-doc header for arch_${atomic}_${pfx}andnot${sfx}${order}

Add a kernel-doc header template for the arch_${atomic}_${pfx}andnot${sfx}${order}
function family.

[ paulmck: Apply feedback from Akira Yokosawa. ]

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Mark Rutland <[email protected]>
---
include/linux/atomic/atomic-arch-fallback.h | 82 ++++++++++++++++++++-
scripts/atomic/fallbacks/andnot | 8 ++
2 files changed, 89 insertions(+), 1 deletion(-)

diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index 41e43e8dff8d..d5ff29a7128d 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -950,6 +950,14 @@ arch_atomic_fetch_and(int i, atomic_t *v)
#endif /* arch_atomic_fetch_and_relaxed */

#ifndef arch_atomic_andnot
+/**
+ * arch_atomic_andnot - Atomic and-not
+ * @i: the quantity to and-not with *@v
+ * @v: pointer of type atomic_t
+ *
+ * Atomically and-not @i with @v using full ordering,
+ * returning no value.
+ */
static __always_inline void
arch_atomic_andnot(int i, atomic_t *v)
{
@@ -966,6 +974,14 @@ arch_atomic_andnot(int i, atomic_t *v)
#endif /* arch_atomic_fetch_andnot */

#ifndef arch_atomic_fetch_andnot
+/**
+ * arch_atomic_fetch_andnot - Atomic and-not
+ * @i: the quantity to and-not with *@v
+ * @v: pointer of type atomic_t
+ *
+ * Atomically and-not @i with @v using full ordering,
+ * returning old value.
+ */
static __always_inline int
arch_atomic_fetch_andnot(int i, atomic_t *v)
{
@@ -975,6 +991,14 @@ arch_atomic_fetch_andnot(int i, atomic_t *v)
#endif

#ifndef arch_atomic_fetch_andnot_acquire
+/**
+ * arch_atomic_fetch_andnot_acquire - Atomic and-not
+ * @i: the quantity to and-not with *@v
+ * @v: pointer of type atomic_t
+ *
+ * Atomically and-not @i with @v using acquire ordering,
+ * returning old value.
+ */
static __always_inline int
arch_atomic_fetch_andnot_acquire(int i, atomic_t *v)
{
@@ -984,6 +1008,14 @@ arch_atomic_fetch_andnot_acquire(int i, atomic_t *v)
#endif

#ifndef arch_atomic_fetch_andnot_release
+/**
+ * arch_atomic_fetch_andnot_release - Atomic and-not
+ * @i: the quantity to and-not with *@v
+ * @v: pointer of type atomic_t
+ *
+ * Atomically and-not @i with @v using release ordering,
+ * returning old value.
+ */
static __always_inline int
arch_atomic_fetch_andnot_release(int i, atomic_t *v)
{
@@ -993,6 +1025,14 @@ arch_atomic_fetch_andnot_release(int i, atomic_t *v)
#endif

#ifndef arch_atomic_fetch_andnot_relaxed
+/**
+ * arch_atomic_fetch_andnot_relaxed - Atomic and-not
+ * @i: the quantity to and-not with *@v
+ * @v: pointer of type atomic_t
+ *
+ * Atomically and-not @i with @v using no ordering,
+ * returning old value.
+ */
static __always_inline int
arch_atomic_fetch_andnot_relaxed(int i, atomic_t *v)
{
@@ -2292,6 +2332,14 @@ arch_atomic64_fetch_and(s64 i, atomic64_t *v)
#endif /* arch_atomic64_fetch_and_relaxed */

#ifndef arch_atomic64_andnot
+/**
+ * arch_atomic64_andnot - Atomic and-not
+ * @i: the quantity to and-not with *@v
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically and-not @i with @v using full ordering,
+ * returning no value.
+ */
static __always_inline void
arch_atomic64_andnot(s64 i, atomic64_t *v)
{
@@ -2308,6 +2356,14 @@ arch_atomic64_andnot(s64 i, atomic64_t *v)
#endif /* arch_atomic64_fetch_andnot */

#ifndef arch_atomic64_fetch_andnot
+/**
+ * arch_atomic64_fetch_andnot - Atomic and-not
+ * @i: the quantity to and-not with *@v
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically and-not @i with @v using full ordering,
+ * returning old value.
+ */
static __always_inline s64
arch_atomic64_fetch_andnot(s64 i, atomic64_t *v)
{
@@ -2317,6 +2373,14 @@ arch_atomic64_fetch_andnot(s64 i, atomic64_t *v)
#endif

#ifndef arch_atomic64_fetch_andnot_acquire
+/**
+ * arch_atomic64_fetch_andnot_acquire - Atomic and-not
+ * @i: the quantity to and-not with *@v
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically and-not @i with @v using acquire ordering,
+ * returning old value.
+ */
static __always_inline s64
arch_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
{
@@ -2326,6 +2390,14 @@ arch_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
#endif

#ifndef arch_atomic64_fetch_andnot_release
+/**
+ * arch_atomic64_fetch_andnot_release - Atomic and-not
+ * @i: the quantity to and-not with *@v
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically and-not @i with @v using release ordering,
+ * returning old value.
+ */
static __always_inline s64
arch_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
{
@@ -2335,6 +2407,14 @@ arch_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
#endif

#ifndef arch_atomic64_fetch_andnot_relaxed
+/**
+ * arch_atomic64_fetch_andnot_relaxed - Atomic and-not
+ * @i: the quantity to and-not with *@v
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically and-not @i with @v using no ordering,
+ * returning old value.
+ */
static __always_inline s64
arch_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
{
@@ -2920,4 +3000,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v)
#endif

#endif /* _LINUX_ATOMIC_FALLBACK_H */
-// 1a1d30491494653253bfe3b5d2e2c6583cb57473
+// e403f06ce98fe72ae0698e8f2c78f8a45894e465
diff --git a/scripts/atomic/fallbacks/andnot b/scripts/atomic/fallbacks/andnot
index 5a42f54a3595..9fbc0ce75a7c 100755
--- a/scripts/atomic/fallbacks/andnot
+++ b/scripts/atomic/fallbacks/andnot
@@ -1,4 +1,12 @@
cat <<EOF
+/**
+ * arch_${atomic}_${pfx}andnot${sfx}${order} - Atomic and-not
+ * @i: the quantity to and-not with *@v
+ * @v: pointer of type ${atomic}_t
+ *
+ * Atomically and-not @i with @v using ${docbook_order} ordering,
+ * returning ${docbook_oldnew} value.
+ */
static __always_inline ${ret}
arch_${atomic}_${pfx}andnot${sfx}${order}(${int} i, ${atomic}_t *v)
{
--
2.40.1


2023-05-10 18:45:03

by Paul E. McKenney

Subject: [PATCH locking/atomic 05/19] locking/atomic: Add kernel-doc header for arch_${atomic}_${pfx}dec${sfx}${order}

Add kernel-doc header template for arch_${atomic}_${pfx}dec${sfx}${order}
function family.

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Mark Rutland <[email protected]>
---
include/linux/atomic/atomic-arch-fallback.h | 128 +++++++++++++++++++-
scripts/atomic/fallbacks/dec | 7 ++
2 files changed, 134 insertions(+), 1 deletion(-)

diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index e7e83f18d192..41e43e8dff8d 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -674,6 +674,13 @@ arch_atomic_fetch_inc(atomic_t *v)
#endif /* arch_atomic_fetch_inc_relaxed */

#ifndef arch_atomic_dec
+/**
+ * arch_atomic_dec - Atomic decrement
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrement @v with full ordering,
+ * returning no value.
+ */
static __always_inline void
arch_atomic_dec(atomic_t *v)
{
@@ -690,6 +697,13 @@ arch_atomic_dec(atomic_t *v)
#endif /* arch_atomic_dec_return */

#ifndef arch_atomic_dec_return
+/**
+ * arch_atomic_dec_return - Atomic decrement
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrement @v with full ordering,
+ * returning new value.
+ */
static __always_inline int
arch_atomic_dec_return(atomic_t *v)
{
@@ -699,6 +713,13 @@ arch_atomic_dec_return(atomic_t *v)
#endif

#ifndef arch_atomic_dec_return_acquire
+/**
+ * arch_atomic_dec_return_acquire - Atomic decrement
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrement @v with acquire ordering,
+ * returning new value.
+ */
static __always_inline int
arch_atomic_dec_return_acquire(atomic_t *v)
{
@@ -708,6 +729,13 @@ arch_atomic_dec_return_acquire(atomic_t *v)
#endif

#ifndef arch_atomic_dec_return_release
+/**
+ * arch_atomic_dec_return_release - Atomic decrement
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrement @v with release ordering,
+ * returning new value.
+ */
static __always_inline int
arch_atomic_dec_return_release(atomic_t *v)
{
@@ -717,6 +745,13 @@ arch_atomic_dec_return_release(atomic_t *v)
#endif

#ifndef arch_atomic_dec_return_relaxed
+/**
+ * arch_atomic_dec_return_relaxed - Atomic decrement
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrement @v with no ordering,
+ * returning new value.
+ */
static __always_inline int
arch_atomic_dec_return_relaxed(atomic_t *v)
{
@@ -771,6 +806,13 @@ arch_atomic_dec_return(atomic_t *v)
#endif /* arch_atomic_fetch_dec */

#ifndef arch_atomic_fetch_dec
+/**
+ * arch_atomic_fetch_dec - Atomic decrement
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrement @v with full ordering,
+ * returning old value.
+ */
static __always_inline int
arch_atomic_fetch_dec(atomic_t *v)
{
@@ -780,6 +822,13 @@ arch_atomic_fetch_dec(atomic_t *v)
#endif

#ifndef arch_atomic_fetch_dec_acquire
+/**
+ * arch_atomic_fetch_dec_acquire - Atomic decrement
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrement @v with acquire ordering,
+ * returning old value.
+ */
static __always_inline int
arch_atomic_fetch_dec_acquire(atomic_t *v)
{
@@ -789,6 +838,13 @@ arch_atomic_fetch_dec_acquire(atomic_t *v)
#endif

#ifndef arch_atomic_fetch_dec_release
+/**
+ * arch_atomic_fetch_dec_release - Atomic decrement
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrement @v with release ordering,
+ * returning old value.
+ */
static __always_inline int
arch_atomic_fetch_dec_release(atomic_t *v)
{
@@ -798,6 +854,13 @@ arch_atomic_fetch_dec_release(atomic_t *v)
#endif

#ifndef arch_atomic_fetch_dec_relaxed
+/**
+ * arch_atomic_fetch_dec_relaxed - Atomic decrement
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrement @v with no ordering,
+ * returning old value.
+ */
static __always_inline int
arch_atomic_fetch_dec_relaxed(atomic_t *v)
{
@@ -1953,6 +2016,13 @@ arch_atomic64_fetch_inc(atomic64_t *v)
#endif /* arch_atomic64_fetch_inc_relaxed */

#ifndef arch_atomic64_dec
+/**
+ * arch_atomic64_dec - Atomic decrement
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically decrement @v with full ordering,
+ * returning no value.
+ */
static __always_inline void
arch_atomic64_dec(atomic64_t *v)
{
@@ -1969,6 +2039,13 @@ arch_atomic64_dec(atomic64_t *v)
#endif /* arch_atomic64_dec_return */

#ifndef arch_atomic64_dec_return
+/**
+ * arch_atomic64_dec_return - Atomic decrement
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically decrement @v with full ordering,
+ * returning new value.
+ */
static __always_inline s64
arch_atomic64_dec_return(atomic64_t *v)
{
@@ -1978,6 +2055,13 @@ arch_atomic64_dec_return(atomic64_t *v)
#endif

#ifndef arch_atomic64_dec_return_acquire
+/**
+ * arch_atomic64_dec_return_acquire - Atomic decrement
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically decrement @v with acquire ordering,
+ * returning new value.
+ */
static __always_inline s64
arch_atomic64_dec_return_acquire(atomic64_t *v)
{
@@ -1987,6 +2071,13 @@ arch_atomic64_dec_return_acquire(atomic64_t *v)
#endif

#ifndef arch_atomic64_dec_return_release
+/**
+ * arch_atomic64_dec_return_release - Atomic decrement
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically decrement @v with release ordering,
+ * returning new value.
+ */
static __always_inline s64
arch_atomic64_dec_return_release(atomic64_t *v)
{
@@ -1996,6 +2087,13 @@ arch_atomic64_dec_return_release(atomic64_t *v)
#endif

#ifndef arch_atomic64_dec_return_relaxed
+/**
+ * arch_atomic64_dec_return_relaxed - Atomic decrement
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically decrement @v with no ordering,
+ * returning new value.
+ */
static __always_inline s64
arch_atomic64_dec_return_relaxed(atomic64_t *v)
{
@@ -2050,6 +2148,13 @@ arch_atomic64_dec_return(atomic64_t *v)
#endif /* arch_atomic64_fetch_dec */

#ifndef arch_atomic64_fetch_dec
+/**
+ * arch_atomic64_fetch_dec - Atomic decrement
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically decrement @v with full ordering,
+ * returning old value.
+ */
static __always_inline s64
arch_atomic64_fetch_dec(atomic64_t *v)
{
@@ -2059,6 +2164,13 @@ arch_atomic64_fetch_dec(atomic64_t *v)
#endif

#ifndef arch_atomic64_fetch_dec_acquire
+/**
+ * arch_atomic64_fetch_dec_acquire - Atomic decrement
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically decrement @v with acquire ordering,
+ * returning old value.
+ */
static __always_inline s64
arch_atomic64_fetch_dec_acquire(atomic64_t *v)
{
@@ -2068,6 +2180,13 @@ arch_atomic64_fetch_dec_acquire(atomic64_t *v)
#endif

#ifndef arch_atomic64_fetch_dec_release
+/**
+ * arch_atomic64_fetch_dec_release - Atomic decrement
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically decrement @v with release ordering,
+ * returning old value.
+ */
static __always_inline s64
arch_atomic64_fetch_dec_release(atomic64_t *v)
{
@@ -2077,6 +2196,13 @@ arch_atomic64_fetch_dec_release(atomic64_t *v)
#endif

#ifndef arch_atomic64_fetch_dec_relaxed
+/**
+ * arch_atomic64_fetch_dec_relaxed - Atomic decrement
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically decrement @v with no ordering,
+ * returning old value.
+ */
static __always_inline s64
arch_atomic64_fetch_dec_relaxed(atomic64_t *v)
{
@@ -2794,4 +2920,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v)
#endif

#endif /* _LINUX_ATOMIC_FALLBACK_H */
-// 17cefb0ff9b450685d4072202d4a1c309b0606c2
+// 1a1d30491494653253bfe3b5d2e2c6583cb57473
diff --git a/scripts/atomic/fallbacks/dec b/scripts/atomic/fallbacks/dec
index 8c144c818e9e..e99c8edd36a3 100755
--- a/scripts/atomic/fallbacks/dec
+++ b/scripts/atomic/fallbacks/dec
@@ -1,4 +1,11 @@
cat <<EOF
+/**
+ * arch_${atomic}_${pfx}dec${sfx}${order} - Atomic decrement
+ * @v: pointer of type ${atomic}_t
+ *
+ * Atomically decrement @v with ${docbook_order} ordering,
+ * returning ${docbook_oldnew} value.
+ */
static __always_inline ${ret}
arch_${atomic}_${pfx}dec${sfx}${order}(${atomic}_t *v)
{
--
2.40.1


2023-05-10 18:45:22

by Paul E. McKenney

Subject: [PATCH locking/atomic 02/19] locking/atomic: Add "@" before "true" and "false" for fallback templates

Fix up kernel-doc pretty-printing by adding "@" before "true" and "false"
for atomic-operation fallback scripts lacking them.

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Mark Rutland <[email protected]>
---
include/linux/atomic/atomic-arch-fallback.h | 54 ++++++++++-----------
scripts/atomic/fallbacks/add_negative | 4 +-
scripts/atomic/fallbacks/add_unless | 2 +-
scripts/atomic/fallbacks/dec_and_test | 2 +-
scripts/atomic/fallbacks/inc_and_test | 2 +-
scripts/atomic/fallbacks/inc_not_zero | 2 +-
scripts/atomic/fallbacks/sub_and_test | 2 +-
7 files changed, 34 insertions(+), 34 deletions(-)

diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index c4087c32fb0e..606be9d3aa22 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -1185,7 +1185,7 @@ arch_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
* @v: pointer of type atomic_t
*
* Atomically subtracts @i from @v and returns
- * true if the result is zero, or false for all
+ * @true if the result is zero, or @false for all
* other cases.
*/
static __always_inline bool
@@ -1202,7 +1202,7 @@ arch_atomic_sub_and_test(int i, atomic_t *v)
* @v: pointer of type atomic_t
*
* Atomically decrements @v by 1 and
- * returns true if the result is 0, or false for all other
+ * returns @true if the result is 0, or @false for all other
* cases.
*/
static __always_inline bool
@@ -1219,7 +1219,7 @@ arch_atomic_dec_and_test(atomic_t *v)
* @v: pointer of type atomic_t
*
* Atomically increments @v by 1
- * and returns true if the result is zero, or false for all
+ * and returns @true if the result is zero, or @false for all
* other cases.
*/
static __always_inline bool
@@ -1243,8 +1243,8 @@ arch_atomic_inc_and_test(atomic_t *v)
* @i: integer value to add
* @v: pointer of type atomic_t
*
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
+ * Atomically adds @i to @v and returns @true if the result is negative,
+ * or @false when the result is greater than or equal to zero.
*/
static __always_inline bool
arch_atomic_add_negative(int i, atomic_t *v)
@@ -1260,8 +1260,8 @@ arch_atomic_add_negative(int i, atomic_t *v)
* @i: integer value to add
* @v: pointer of type atomic_t
*
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
+ * Atomically adds @i to @v and returns @true if the result is negative,
+ * or @false when the result is greater than or equal to zero.
*/
static __always_inline bool
arch_atomic_add_negative_acquire(int i, atomic_t *v)
@@ -1277,8 +1277,8 @@ arch_atomic_add_negative_acquire(int i, atomic_t *v)
* @i: integer value to add
* @v: pointer of type atomic_t
*
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
+ * Atomically adds @i to @v and returns @true if the result is negative,
+ * or @false when the result is greater than or equal to zero.
*/
static __always_inline bool
arch_atomic_add_negative_release(int i, atomic_t *v)
@@ -1294,8 +1294,8 @@ arch_atomic_add_negative_release(int i, atomic_t *v)
* @i: integer value to add
* @v: pointer of type atomic_t
*
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
+ * Atomically adds @i to @v and returns @true if the result is negative,
+ * or @false when the result is greater than or equal to zero.
*/
static __always_inline bool
arch_atomic_add_negative_relaxed(int i, atomic_t *v)
@@ -1376,7 +1376,7 @@ arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
* @u: ...unless v is equal to u.
*
* Atomically adds @a to @v, if @v was not already @u.
- * Returns true if the addition was done.
+ * Returns @true if the addition was done.
*/
static __always_inline bool
arch_atomic_add_unless(atomic_t *v, int a, int u)
@@ -1392,7 +1392,7 @@ arch_atomic_add_unless(atomic_t *v, int a, int u)
* @v: pointer of type atomic_t
*
* Atomically increments @v by 1, if @v is non-zero.
- * Returns true if the increment was done.
+ * Returns @true if the increment was done.
*/
static __always_inline bool
arch_atomic_inc_not_zero(atomic_t *v)
@@ -2401,7 +2401,7 @@ arch_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
* @v: pointer of type atomic64_t
*
* Atomically subtracts @i from @v and returns
- * true if the result is zero, or false for all
+ * @true if the result is zero, or @false for all
* other cases.
*/
static __always_inline bool
@@ -2418,7 +2418,7 @@ arch_atomic64_sub_and_test(s64 i, atomic64_t *v)
* @v: pointer of type atomic64_t
*
* Atomically decrements @v by 1 and
- * returns true if the result is 0, or false for all other
+ * returns @true if the result is 0, or @false for all other
* cases.
*/
static __always_inline bool
@@ -2435,7 +2435,7 @@ arch_atomic64_dec_and_test(atomic64_t *v)
* @v: pointer of type atomic64_t
*
* Atomically increments @v by 1
- * and returns true if the result is zero, or false for all
+ * and returns @true if the result is zero, or @false for all
* other cases.
*/
static __always_inline bool
@@ -2459,8 +2459,8 @@ arch_atomic64_inc_and_test(atomic64_t *v)
* @i: integer value to add
* @v: pointer of type atomic64_t
*
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
+ * Atomically adds @i to @v and returns @true if the result is negative,
+ * or @false when the result is greater than or equal to zero.
*/
static __always_inline bool
arch_atomic64_add_negative(s64 i, atomic64_t *v)
@@ -2476,8 +2476,8 @@ arch_atomic64_add_negative(s64 i, atomic64_t *v)
* @i: integer value to add
* @v: pointer of type atomic64_t
*
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
+ * Atomically adds @i to @v and returns @true if the result is negative,
+ * or @false when the result is greater than or equal to zero.
*/
static __always_inline bool
arch_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
@@ -2493,8 +2493,8 @@ arch_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
* @i: integer value to add
* @v: pointer of type atomic64_t
*
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
+ * Atomically adds @i to @v and returns @true if the result is negative,
+ * or @false when the result is greater than or equal to zero.
*/
static __always_inline bool
arch_atomic64_add_negative_release(s64 i, atomic64_t *v)
@@ -2510,8 +2510,8 @@ arch_atomic64_add_negative_release(s64 i, atomic64_t *v)
* @i: integer value to add
* @v: pointer of type atomic64_t
*
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
+ * Atomically adds @i to @v and returns @true if the result is negative,
+ * or @false when the result is greater than or equal to zero.
*/
static __always_inline bool
arch_atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
@@ -2592,7 +2592,7 @@ arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
* @u: ...unless v is equal to u.
*
* Atomically adds @a to @v, if @v was not already @u.
- * Returns true if the addition was done.
+ * Returns @true if the addition was done.
*/
static __always_inline bool
arch_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
@@ -2608,7 +2608,7 @@ arch_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
* @v: pointer of type atomic64_t
*
* Atomically increments @v by 1, if @v is non-zero.
- * Returns true if the increment was done.
+ * Returns @true if the increment was done.
*/
static __always_inline bool
arch_atomic64_inc_not_zero(atomic64_t *v)
@@ -2668,4 +2668,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v)
#endif

#endif /* _LINUX_ATOMIC_FALLBACK_H */
-// 201cc01b616875888e0b2c79965c569a89c0edcd
+// e914194a1a82dfbc39d4d1c79ce1f59f64fb37da
diff --git a/scripts/atomic/fallbacks/add_negative b/scripts/atomic/fallbacks/add_negative
index e5980abf5904..c032e8bec6e2 100755
--- a/scripts/atomic/fallbacks/add_negative
+++ b/scripts/atomic/fallbacks/add_negative
@@ -4,8 +4,8 @@ cat <<EOF
* @i: integer value to add
* @v: pointer of type ${atomic}_t
*
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
+ * Atomically adds @i to @v and returns @true if the result is negative,
+ * or @false when the result is greater than or equal to zero.
*/
static __always_inline bool
arch_${atomic}_add_negative${order}(${int} i, ${atomic}_t *v)
diff --git a/scripts/atomic/fallbacks/add_unless b/scripts/atomic/fallbacks/add_unless
index 9e5159c2ccfc..650fee935aed 100755
--- a/scripts/atomic/fallbacks/add_unless
+++ b/scripts/atomic/fallbacks/add_unless
@@ -6,7 +6,7 @@ cat << EOF
* @u: ...unless v is equal to u.
*
* Atomically adds @a to @v, if @v was not already @u.
- * Returns true if the addition was done.
+ * Returns @true if the addition was done.
*/
static __always_inline bool
arch_${atomic}_add_unless(${atomic}_t *v, ${int} a, ${int} u)
diff --git a/scripts/atomic/fallbacks/dec_and_test b/scripts/atomic/fallbacks/dec_and_test
index 8549f359bd0e..3720896b1afc 100755
--- a/scripts/atomic/fallbacks/dec_and_test
+++ b/scripts/atomic/fallbacks/dec_and_test
@@ -4,7 +4,7 @@ cat <<EOF
* @v: pointer of type ${atomic}_t
*
* Atomically decrements @v by 1 and
- * returns true if the result is 0, or false for all other
+ * returns @true if the result is 0, or @false for all other
* cases.
*/
static __always_inline bool
diff --git a/scripts/atomic/fallbacks/inc_and_test b/scripts/atomic/fallbacks/inc_and_test
index 0cf23fe1efb8..cc3ac1dde508 100755
--- a/scripts/atomic/fallbacks/inc_and_test
+++ b/scripts/atomic/fallbacks/inc_and_test
@@ -4,7 +4,7 @@ cat <<EOF
* @v: pointer of type ${atomic}_t
*
* Atomically increments @v by 1
- * and returns true if the result is zero, or false for all
+ * and returns @true if the result is zero, or @false for all
* other cases.
*/
static __always_inline bool
diff --git a/scripts/atomic/fallbacks/inc_not_zero b/scripts/atomic/fallbacks/inc_not_zero
index ed8a1f562667..891fa3c057f6 100755
--- a/scripts/atomic/fallbacks/inc_not_zero
+++ b/scripts/atomic/fallbacks/inc_not_zero
@@ -4,7 +4,7 @@ cat <<EOF
* @v: pointer of type ${atomic}_t
*
* Atomically increments @v by 1, if @v is non-zero.
- * Returns true if the increment was done.
+ * Returns @true if the increment was done.
*/
static __always_inline bool
arch_${atomic}_inc_not_zero(${atomic}_t *v)
diff --git a/scripts/atomic/fallbacks/sub_and_test b/scripts/atomic/fallbacks/sub_and_test
index 260f37341c88..204282e260ea 100755
--- a/scripts/atomic/fallbacks/sub_and_test
+++ b/scripts/atomic/fallbacks/sub_and_test
@@ -5,7 +5,7 @@ cat <<EOF
* @v: pointer of type ${atomic}_t
*
* Atomically subtracts @i from @v and returns
- * true if the result is zero, or false for all
+ * @true if the result is zero, or @false for all
* other cases.
*/
static __always_inline bool
--
2.40.1


2023-05-10 18:45:39

by Paul E. McKenney

Subject: [PATCH locking/atomic 09/19] locking/atomic: Add kernel-doc header for arch_${atomic}_dec_unless_positive

Add kernel-doc header template for arch_${atomic}_dec_unless_positive
function family.

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Mark Rutland <[email protected]>
---
include/linux/atomic/atomic-arch-fallback.h | 18 +++++++++++++++++-
scripts/atomic/fallbacks/dec_unless_positive | 8 ++++++++
2 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index 4d4d94925cb0..e6c7356d5dfc 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -1625,6 +1625,14 @@ arch_atomic_inc_unless_negative(atomic_t *v)
#endif

#ifndef arch_atomic_dec_unless_positive
+/**
+ * arch_atomic_dec_unless_positive - Atomic decrement if old value is non-positive
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrement @v, but only if the original value is less
+ * than or equal to zero. Return @true if the decrement happened and
+ * @false otherwise.
+ */
static __always_inline bool
arch_atomic_dec_unless_positive(atomic_t *v)
{
@@ -3057,6 +3065,14 @@ arch_atomic64_inc_unless_negative(atomic64_t *v)
#endif

#ifndef arch_atomic64_dec_unless_positive
+/**
+ * arch_atomic64_dec_unless_positive - Atomic decrement if old value is non-positive
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically decrement @v, but only if the original value is less
+ * than or equal to zero. Return @true if the decrement happened and
+ * @false otherwise.
+ */
static __always_inline bool
arch_atomic64_dec_unless_positive(atomic64_t *v)
{
@@ -3100,4 +3116,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v)
#endif

#endif /* _LINUX_ATOMIC_FALLBACK_H */
-// c7041896e7e66a52d8005ba021f3b3b05f99bcb3
+// 225b2fe3eb6bbe34729abed7a856b91abc8d434e
diff --git a/scripts/atomic/fallbacks/dec_unless_positive b/scripts/atomic/fallbacks/dec_unless_positive
index c531d5afecc4..c3d01d201c63 100755
--- a/scripts/atomic/fallbacks/dec_unless_positive
+++ b/scripts/atomic/fallbacks/dec_unless_positive
@@ -1,4 +1,12 @@
cat <<EOF
+/**
+ * arch_${atomic}_dec_unless_positive - Atomic decrement if old value is non-positive
+ * @v: pointer of type ${atomic}_t
+ *
+ * Atomically decrement @v, but only if the original value is less
+ * than or equal to zero. Return @true if the decrement happened and
+ * @false otherwise.
+ */
static __always_inline bool
arch_${atomic}_dec_unless_positive(${atomic}_t *v)
{
--
2.40.1


2023-05-10 18:46:22

by Paul E. McKenney

Subject: [PATCH locking/atomic 13/19] locking/atomic: Script to auto-generate acquire, fence, and release headers

The scripts/atomic/fallbacks/{acquire,fence,release} scripts require almost
identical scripting to automatically generate the required kernel-doc
headers. Therefore, provide a single acqrel.sh script that does this
work. This new script is to be invoked from each of those scripts using
the "." command, and with the shell variable "acqrel" set to either
"acquire", "full", or "release".
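As a rough standalone sketch of the mechanism (the variable values here
are made-up examples; in practice gen-atomics.sh sets them before the
fallback script sources acqrel.sh with "."):

```shell
# Approximate the per-argument pass: split the argument list and emit
# one " * @arg" kernel-doc line per argument, bracketed by the comment
# opener and closer.
args="i, v"
name=add
acqrel=acquire
echo "${args}" | tr -d ' ' | tr ',' '\n' |
  awk -v name_op="${name}" -v acqrel="${acqrel}" '
    BEGIN { print "/**"
            print " * arch_atomic_" name_op "_" acqrel \
                  " - Atomic " name_op " with " acqrel " ordering" }
    { print " * @" $1 }
    END { print " */" }'
```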

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Mark Rutland <[email protected]>
---
scripts/atomic/acqrel.sh | 67 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 67 insertions(+)
create mode 100644 scripts/atomic/acqrel.sh

diff --git a/scripts/atomic/acqrel.sh b/scripts/atomic/acqrel.sh
new file mode 100644
index 000000000000..5009a54fdac5
--- /dev/null
+++ b/scripts/atomic/acqrel.sh
@@ -0,0 +1,67 @@
+echo ${args} | tr -d ' ' | tr ',' '\012' |
+ awk -v atomic=${atomic} \
+ -v name_op=${name} \
+ -v ret=${ret} \
+ -v oldnew=${docbook_oldnew} \
+ -v acqrel=${acqrel} \
+ -v basefuncname=arch_${atomic}_${pfx}${name}${sfx} '
+ BEGIN {
+ print "/**";
+ sfxord = "_" acqrel;
+ if (acqrel == "full")
+ sfxord = "";
+ print " * " basefuncname sfxord " - Atomic " name_op " with " acqrel " ordering";
+ longname["add"] = "add";
+ longname["sub"] = "subtract";
+ longname["inc"] = "increment";
+ longname["dec"] = "decrement";
+ longname["and"] = "AND";
+ longname["andnot"] = "complement then AND";
+ longname["or"] = "OR";
+ longname["xor"] = "XOR";
+ longname["xchg"] = "exchange";
+ longname["add_negative"] = "add";
+ desc["i"] = "value to " longname[name_op];
+ desc["v"] = "pointer of type " atomic "_t";
+ desc["old"] = "desired old value to match";
+ desc["new"] = "new value to put in";
+ opmod = "with";
+ if (name_op == "add")
+ opmod = "to";
+ else if (name_op == "sub")
+ opmod = "from";
+ }
+
+ {
+ print " * @" $1 ": " desc[$1];
+ have[$1] = 1;
+ }
+
+ END {
+ print " *";
+ if (name_op ~ /cmpxchg/) {
+ print " * Atomically compares @old to *@v, and if equal,";
+ print " * stores @new to *@v, providing " acqrel " ordering.";
+ } else if (have["i"]) {
+ print " * Atomically " longname[name_op] " @i " opmod " @v using " acqrel " ordering.";
+ } else {
+ print " * Atomically " longname[name_op] " @v using " acqrel " ordering.";
+ }
+ if (name_op ~ /cmpxchg/ && ret == "bool") {
+ print " * Returns @true if the cmpxchg operation succeeded,";
+ print " * and @false otherwise. Either way, stores the old";
+ print " * value of *@v to *@old.";
+ } else if (name_op == "cmpxchg") {
+ print " * Returns the old value *@v regardless of the result of";
+ print " * the comparison. Therefore, if the return value is not";
+ print " * equal to @old, the cmpxchg operation failed.";
+ } else if (name_op == "xchg") {
+ print " * Return old value.";
+ } else if (name_op == "add_negative") {
+ print " * Return @true if the result is negative, or @false when"
+ print " * the result is greater than or equal to zero.";
+ } else {
+ print " * Return " oldnew " value.";
+ }
+ print " */";
+ }'
--
2.40.1


2023-05-10 18:47:14

by Paul E. McKenney

Subject: [PATCH locking/atomic 07/19] locking/atomic: Add kernel-doc header for arch_${atomic}_try_cmpxchg${order}

Add kernel-doc header template for arch_${atomic}_try_cmpxchg${order}
function family.

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Mark Rutland <[email protected]>
---
include/linux/atomic/atomic-arch-fallback.h | 82 ++++++++++++++++++++-
scripts/atomic/fallbacks/try_cmpxchg | 10 +++
2 files changed, 91 insertions(+), 1 deletion(-)

diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index d5ff29a7128d..ed72d94346e9 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -1255,6 +1255,16 @@ arch_atomic_cmpxchg(atomic_t *v, int old, int new)
#endif /* arch_atomic_try_cmpxchg */

#ifndef arch_atomic_try_cmpxchg
+/**
+ * arch_atomic_try_cmpxchg - Atomic cmpxchg with bool return value
+ * @v: pointer of type atomic_t
+ * @old: desired old value to match
+ * @new: new value to put in
+ *
+ * Atomically compares @old to *@v, and if equal, stores @new to *@v,
+ * providing full ordering.
+ * Returns @true if the cmpxchg operation succeeded, and @false otherwise.
+ */
static __always_inline bool
arch_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
{
@@ -1268,6 +1278,16 @@ arch_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
#endif

#ifndef arch_atomic_try_cmpxchg_acquire
+/**
+ * arch_atomic_try_cmpxchg_acquire - Atomic cmpxchg with bool return value
+ * @v: pointer of type atomic_t
+ * @old: desired old value to match
+ * @new: new value to put in
+ *
+ * Atomically compares @old to *@v, and if equal, stores @new to *@v,
+ * providing acquire ordering.
+ * Returns @true if the cmpxchg operation succeeded, and @false otherwise.
+ */
static __always_inline bool
arch_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
{
@@ -1281,6 +1301,16 @@ arch_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
#endif

#ifndef arch_atomic_try_cmpxchg_release
+/**
+ * arch_atomic_try_cmpxchg_release - Atomic cmpxchg with bool return value
+ * @v: pointer of type atomic_t
+ * @old: desired old value to match
+ * @new: new value to put in
+ *
+ * Atomically compares @old to *@v, and if equal, stores @new to *@v,
+ * providing release ordering.
+ * Returns @true if the cmpxchg operation succeeded, and @false otherwise.
+ */
static __always_inline bool
arch_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
{
@@ -1294,6 +1324,16 @@ arch_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
#endif

#ifndef arch_atomic_try_cmpxchg_relaxed
+/**
+ * arch_atomic_try_cmpxchg_relaxed - Atomic cmpxchg with bool return value
+ * @v: pointer of type atomic_t
+ * @old: pointer to desired old value to match
+ * @new: new value to put in
+ *
+ * Atomically compares *@old to *@v, and if equal, stores @new to *@v,
+ * providing no ordering.
+ * Returns @true if the cmpxchg operation succeeded, and @false otherwise.
+ */
static __always_inline bool
arch_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
{
@@ -2637,6 +2677,16 @@ arch_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
#endif /* arch_atomic64_try_cmpxchg */

#ifndef arch_atomic64_try_cmpxchg
+/**
+ * arch_atomic64_try_cmpxchg - Atomic cmpxchg with bool return value
+ * @v: pointer of type atomic64_t
+ * @old: pointer to desired old value to match
+ * @new: new value to put in
+ *
+ * Atomically compares *@old to *@v, and if equal, stores @new to *@v,
+ * providing full ordering.
+ * Returns @true if the cmpxchg operation succeeded, and @false otherwise.
+ */
static __always_inline bool
arch_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
{
@@ -2650,6 +2700,16 @@ arch_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
#endif

#ifndef arch_atomic64_try_cmpxchg_acquire
+/**
+ * arch_atomic64_try_cmpxchg_acquire - Atomic cmpxchg with bool return value
+ * @v: pointer of type atomic64_t
+ * @old: pointer to desired old value to match
+ * @new: new value to put in
+ *
+ * Atomically compares *@old to *@v, and if equal, stores @new to *@v,
+ * providing acquire ordering.
+ * Returns @true if the cmpxchg operation succeeded, and @false otherwise.
+ */
static __always_inline bool
arch_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
{
@@ -2663,6 +2723,16 @@ arch_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
#endif

#ifndef arch_atomic64_try_cmpxchg_release
+/**
+ * arch_atomic64_try_cmpxchg_release - Atomic cmpxchg with bool return value
+ * @v: pointer of type atomic64_t
+ * @old: pointer to desired old value to match
+ * @new: new value to put in
+ *
+ * Atomically compares *@old to *@v, and if equal, stores @new to *@v,
+ * providing release ordering.
+ * Returns @true if the cmpxchg operation succeeded, and @false otherwise.
+ */
static __always_inline bool
arch_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
{
@@ -2676,6 +2746,16 @@ arch_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
#endif

#ifndef arch_atomic64_try_cmpxchg_relaxed
+/**
+ * arch_atomic64_try_cmpxchg_relaxed - Atomic cmpxchg with bool return value
+ * @v: pointer of type atomic64_t
+ * @old: pointer to desired old value to match
+ * @new: new value to put in
+ *
+ * Atomically compares *@old to *@v, and if equal, stores @new to *@v,
+ * providing no ordering.
+ * Returns @true if the cmpxchg operation succeeded, and @false otherwise.
+ */
static __always_inline bool
arch_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
{
@@ -3000,4 +3080,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v)
#endif

#endif /* _LINUX_ATOMIC_FALLBACK_H */
-// e403f06ce98fe72ae0698e8f2c78f8a45894e465
+// 3b29d5595f48f921507f19bc794c91aecb782ad3
diff --git a/scripts/atomic/fallbacks/try_cmpxchg b/scripts/atomic/fallbacks/try_cmpxchg
index 890f850ede37..baf7412f9bf4 100755
--- a/scripts/atomic/fallbacks/try_cmpxchg
+++ b/scripts/atomic/fallbacks/try_cmpxchg
@@ -1,4 +1,14 @@
cat <<EOF
+/**
+ * arch_${atomic}_try_cmpxchg${order} - Atomic cmpxchg with bool return value
+ * @v: pointer of type ${atomic}_t
+ * @old: pointer to desired old value to match
+ * @new: new value to put in
+ *
+ * Atomically compares *@old to *@v, and if equal, stores @new to *@v,
+ * providing ${docbook_order} ordering.
+ * Returns @true if the cmpxchg operation succeeded, and @false otherwise.
+ */
static __always_inline bool
arch_${atomic}_try_cmpxchg${order}(${atomic}_t *v, ${int} *old, ${int} new)
{
--
2.40.1
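To make the documented try_cmpxchg contract concrete, here is a minimal
user-space sketch written against C11 atomics rather than the kernel's
arch_atomic API (the function name and the retry-loop helper `compute()`
are hypothetical, for illustration only):

```c
#include <stdatomic.h>
#include <stdbool.h>

/*
 * User-space sketch of the try_cmpxchg contract: compare *old with
 * *v; on a match, store new and return true; on a mismatch, return
 * false, with the value actually found in *v written back via *old.
 */
static bool try_cmpxchg_sketch(atomic_int *v, int *old, int new)
{
	/* C11 strong compare-exchange has exactly these semantics. */
	return atomic_compare_exchange_strong(v, old, new);
}

/*
 * The bool return plus the write-back through *old lets the usual
 * retry loop avoid an explicit re-read of *v after a failure:
 *
 *	int old = atomic_load_explicit(v, memory_order_relaxed);
 *	do {
 *		new = compute(old);
 *	} while (!try_cmpxchg_sketch(v, &old, new));
 */
```

This is why try_cmpxchg is preferred over plain cmpxchg in loops: the
failure path hands back the conflicting value for free.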


2023-05-10 18:47:29

by Paul E. McKenney

[permalink] [raw]
Subject: [PATCH locking/atomic 14/19] locking/atomic: Add kernel-doc header for arch_${atomic}_${pfx}${name}${sfx}_acquire

Add a kernel-doc header template for the
arch_${atomic}_${pfx}${name}${sfx}_acquire function family with the help
of my good friend awk, as encapsulated in acqrel.sh.

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Mark Rutland <[email protected]>
---
include/linux/atomic/atomic-arch-fallback.h | 268 +++++++++++++++++++-
scripts/atomic/fallbacks/acquire | 4 +-
2 files changed, 270 insertions(+), 2 deletions(-)

diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index c3552b83bf49..fc80113ca60a 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -292,6 +292,14 @@ arch_atomic_set_release(atomic_t *v, int i)
#else /* arch_atomic_add_return_relaxed */

#ifndef arch_atomic_add_return_acquire
+/**
+ * arch_atomic_add_return_acquire - Atomic add with acquire ordering
+ * @i: value to add
+ * @v: pointer of type atomic_t
+ *
+ * Atomically add @i to @v using acquire ordering.
+ * Return new value.
+ */
static __always_inline int
arch_atomic_add_return_acquire(int i, atomic_t *v)
{
@@ -334,6 +342,14 @@ arch_atomic_add_return(int i, atomic_t *v)
#else /* arch_atomic_fetch_add_relaxed */

#ifndef arch_atomic_fetch_add_acquire
+/**
+ * arch_atomic_fetch_add_acquire - Atomic add with acquire ordering
+ * @i: value to add
+ * @v: pointer of type atomic_t
+ *
+ * Atomically add @i to @v using acquire ordering.
+ * Return old value.
+ */
static __always_inline int
arch_atomic_fetch_add_acquire(int i, atomic_t *v)
{
@@ -376,6 +392,14 @@ arch_atomic_fetch_add(int i, atomic_t *v)
#else /* arch_atomic_sub_return_relaxed */

#ifndef arch_atomic_sub_return_acquire
+/**
+ * arch_atomic_sub_return_acquire - Atomic sub with acquire ordering
+ * @i: value to subtract
+ * @v: pointer of type atomic_t
+ *
+ * Atomically subtract @i from @v using acquire ordering.
+ * Return new value.
+ */
static __always_inline int
arch_atomic_sub_return_acquire(int i, atomic_t *v)
{
@@ -418,6 +442,14 @@ arch_atomic_sub_return(int i, atomic_t *v)
#else /* arch_atomic_fetch_sub_relaxed */

#ifndef arch_atomic_fetch_sub_acquire
+/**
+ * arch_atomic_fetch_sub_acquire - Atomic sub with acquire ordering
+ * @i: value to subtract
+ * @v: pointer of type atomic_t
+ *
+ * Atomically subtract @i from @v using acquire ordering.
+ * Return old value.
+ */
static __always_inline int
arch_atomic_fetch_sub_acquire(int i, atomic_t *v)
{
@@ -543,6 +575,13 @@ arch_atomic_inc_return_relaxed(atomic_t *v)
#else /* arch_atomic_inc_return_relaxed */

#ifndef arch_atomic_inc_return_acquire
+/**
+ * arch_atomic_inc_return_acquire - Atomic inc with acquire ordering
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increment @v using acquire ordering.
+ * Return new value.
+ */
static __always_inline int
arch_atomic_inc_return_acquire(atomic_t *v)
{
@@ -652,6 +691,13 @@ arch_atomic_fetch_inc_relaxed(atomic_t *v)
#else /* arch_atomic_fetch_inc_relaxed */

#ifndef arch_atomic_fetch_inc_acquire
+/**
+ * arch_atomic_fetch_inc_acquire - Atomic inc with acquire ordering
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increment @v using acquire ordering.
+ * Return old value.
+ */
static __always_inline int
arch_atomic_fetch_inc_acquire(atomic_t *v)
{
@@ -777,6 +823,13 @@ arch_atomic_dec_return_relaxed(atomic_t *v)
#else /* arch_atomic_dec_return_relaxed */

#ifndef arch_atomic_dec_return_acquire
+/**
+ * arch_atomic_dec_return_acquire - Atomic dec with acquire ordering
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrement @v using acquire ordering.
+ * Return new value.
+ */
static __always_inline int
arch_atomic_dec_return_acquire(atomic_t *v)
{
@@ -886,6 +939,13 @@ arch_atomic_fetch_dec_relaxed(atomic_t *v)
#else /* arch_atomic_fetch_dec_relaxed */

#ifndef arch_atomic_fetch_dec_acquire
+/**
+ * arch_atomic_fetch_dec_acquire - Atomic dec with acquire ordering
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrement @v using acquire ordering.
+ * Return old value.
+ */
static __always_inline int
arch_atomic_fetch_dec_acquire(atomic_t *v)
{
@@ -928,6 +988,14 @@ arch_atomic_fetch_dec(atomic_t *v)
#else /* arch_atomic_fetch_and_relaxed */

#ifndef arch_atomic_fetch_and_acquire
+/**
+ * arch_atomic_fetch_and_acquire - Atomic and with acquire ordering
+ * @i: value to AND
+ * @v: pointer of type atomic_t
+ *
+ * Atomically AND @i with @v using acquire ordering.
+ * Return old value.
+ */
static __always_inline int
arch_atomic_fetch_and_acquire(int i, atomic_t *v)
{
@@ -1058,6 +1126,14 @@ arch_atomic_fetch_andnot_relaxed(int i, atomic_t *v)
#else /* arch_atomic_fetch_andnot_relaxed */

#ifndef arch_atomic_fetch_andnot_acquire
+/**
+ * arch_atomic_fetch_andnot_acquire - Atomic andnot with acquire ordering
+ * @i: value to complement then AND
+ * @v: pointer of type atomic_t
+ *
+ * Atomically complement then AND @i with @v using acquire ordering.
+ * Return old value.
+ */
static __always_inline int
arch_atomic_fetch_andnot_acquire(int i, atomic_t *v)
{
@@ -1100,6 +1176,14 @@ arch_atomic_fetch_andnot(int i, atomic_t *v)
#else /* arch_atomic_fetch_or_relaxed */

#ifndef arch_atomic_fetch_or_acquire
+/**
+ * arch_atomic_fetch_or_acquire - Atomic or with acquire ordering
+ * @i: value to OR
+ * @v: pointer of type atomic_t
+ *
+ * Atomically OR @i with @v using acquire ordering.
+ * Return old value.
+ */
static __always_inline int
arch_atomic_fetch_or_acquire(int i, atomic_t *v)
{
@@ -1142,6 +1226,14 @@ arch_atomic_fetch_or(int i, atomic_t *v)
#else /* arch_atomic_fetch_xor_relaxed */

#ifndef arch_atomic_fetch_xor_acquire
+/**
+ * arch_atomic_fetch_xor_acquire - Atomic xor with acquire ordering
+ * @i: value to XOR
+ * @v: pointer of type atomic_t
+ *
+ * Atomically XOR @i with @v using acquire ordering.
+ * Return old value.
+ */
static __always_inline int
arch_atomic_fetch_xor_acquire(int i, atomic_t *v)
{
@@ -1184,6 +1276,14 @@ arch_atomic_fetch_xor(int i, atomic_t *v)
#else /* arch_atomic_xchg_relaxed */

#ifndef arch_atomic_xchg_acquire
+/**
+ * arch_atomic_xchg_acquire - Atomic xchg with acquire ordering
+ * @v: pointer of type atomic_t
+ * @i: value to exchange
+ *
+ * Atomically exchange @i with @v using acquire ordering.
+ * Return old value.
+ */
static __always_inline int
arch_atomic_xchg_acquire(atomic_t *v, int i)
{
@@ -1226,6 +1326,18 @@ arch_atomic_xchg(atomic_t *v, int i)
#else /* arch_atomic_cmpxchg_relaxed */

#ifndef arch_atomic_cmpxchg_acquire
+/**
+ * arch_atomic_cmpxchg_acquire - Atomic cmpxchg with acquire ordering
+ * @v: pointer of type atomic_t
+ * @old: desired old value to match
+ * @new: new value to put in
+ *
+ * Atomically compares @old to *@v, and if equal,
+ * stores @new to *@v, providing acquire ordering.
+ * Returns the old value of *@v regardless of the result of
+ * the comparison. Therefore, if the return value is not
+ * equal to @old, the cmpxchg operation failed.
+ */
static __always_inline int
arch_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
{
@@ -1363,6 +1475,18 @@ arch_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
#else /* arch_atomic_try_cmpxchg_relaxed */

#ifndef arch_atomic_try_cmpxchg_acquire
+/**
+ * arch_atomic_try_cmpxchg_acquire - Atomic try_cmpxchg with acquire ordering
+ * @v: pointer of type atomic_t
+ * @old: pointer to desired old value to match
+ * @new: new value to put in
+ *
+ * Atomically compares *@old to *@v, and if equal,
+ * stores @new to *@v, providing acquire ordering.
+ * Returns @true if the cmpxchg operation succeeded,
+ * and @false otherwise. Either way, stores the old
+ * value of *@v to *@old.
+ */
static __always_inline bool
arch_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
{
@@ -1528,6 +1652,15 @@ arch_atomic_add_negative_relaxed(int i, atomic_t *v)
#else /* arch_atomic_add_negative_relaxed */

#ifndef arch_atomic_add_negative_acquire
+/**
+ * arch_atomic_add_negative_acquire - Atomic add_negative with acquire ordering
+ * @i: value to add
+ * @v: pointer of type atomic_t
+ *
+ * Atomically add @i to @v using acquire ordering.
+ * Return @true if the result is negative, or @false when
+ * the result is greater than or equal to zero.
+ */
static __always_inline bool
arch_atomic_add_negative_acquire(int i, atomic_t *v)
{
@@ -1754,6 +1887,14 @@ arch_atomic64_set_release(atomic64_t *v, s64 i)
#else /* arch_atomic64_add_return_relaxed */

#ifndef arch_atomic64_add_return_acquire
+/**
+ * arch_atomic64_add_return_acquire - Atomic add with acquire ordering
+ * @i: value to add
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically add @i to @v using acquire ordering.
+ * Return new value.
+ */
static __always_inline s64
arch_atomic64_add_return_acquire(s64 i, atomic64_t *v)
{
@@ -1796,6 +1937,14 @@ arch_atomic64_add_return(s64 i, atomic64_t *v)
#else /* arch_atomic64_fetch_add_relaxed */

#ifndef arch_atomic64_fetch_add_acquire
+/**
+ * arch_atomic64_fetch_add_acquire - Atomic add with acquire ordering
+ * @i: value to add
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically add @i to @v using acquire ordering.
+ * Return old value.
+ */
static __always_inline s64
arch_atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
{
@@ -1838,6 +1987,14 @@ arch_atomic64_fetch_add(s64 i, atomic64_t *v)
#else /* arch_atomic64_sub_return_relaxed */

#ifndef arch_atomic64_sub_return_acquire
+/**
+ * arch_atomic64_sub_return_acquire - Atomic sub with acquire ordering
+ * @i: value to subtract
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically subtract @i from @v using acquire ordering.
+ * Return new value.
+ */
static __always_inline s64
arch_atomic64_sub_return_acquire(s64 i, atomic64_t *v)
{
@@ -1880,6 +2037,14 @@ arch_atomic64_sub_return(s64 i, atomic64_t *v)
#else /* arch_atomic64_fetch_sub_relaxed */

#ifndef arch_atomic64_fetch_sub_acquire
+/**
+ * arch_atomic64_fetch_sub_acquire - Atomic sub with acquire ordering
+ * @i: value to subtract
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically subtract @i from @v using acquire ordering.
+ * Return old value.
+ */
static __always_inline s64
arch_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
{
@@ -2005,6 +2170,13 @@ arch_atomic64_inc_return_relaxed(atomic64_t *v)
#else /* arch_atomic64_inc_return_relaxed */

#ifndef arch_atomic64_inc_return_acquire
+/**
+ * arch_atomic64_inc_return_acquire - Atomic inc with acquire ordering
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically increment @v using acquire ordering.
+ * Return new value.
+ */
static __always_inline s64
arch_atomic64_inc_return_acquire(atomic64_t *v)
{
@@ -2114,6 +2286,13 @@ arch_atomic64_fetch_inc_relaxed(atomic64_t *v)
#else /* arch_atomic64_fetch_inc_relaxed */

#ifndef arch_atomic64_fetch_inc_acquire
+/**
+ * arch_atomic64_fetch_inc_acquire - Atomic inc with acquire ordering
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically increment @v using acquire ordering.
+ * Return old value.
+ */
static __always_inline s64
arch_atomic64_fetch_inc_acquire(atomic64_t *v)
{
@@ -2239,6 +2418,13 @@ arch_atomic64_dec_return_relaxed(atomic64_t *v)
#else /* arch_atomic64_dec_return_relaxed */

#ifndef arch_atomic64_dec_return_acquire
+/**
+ * arch_atomic64_dec_return_acquire - Atomic dec with acquire ordering
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically decrement @v using acquire ordering.
+ * Return new value.
+ */
static __always_inline s64
arch_atomic64_dec_return_acquire(atomic64_t *v)
{
@@ -2348,6 +2534,13 @@ arch_atomic64_fetch_dec_relaxed(atomic64_t *v)
#else /* arch_atomic64_fetch_dec_relaxed */

#ifndef arch_atomic64_fetch_dec_acquire
+/**
+ * arch_atomic64_fetch_dec_acquire - Atomic dec with acquire ordering
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically decrement @v using acquire ordering.
+ * Return old value.
+ */
static __always_inline s64
arch_atomic64_fetch_dec_acquire(atomic64_t *v)
{
@@ -2390,6 +2583,14 @@ arch_atomic64_fetch_dec(atomic64_t *v)
#else /* arch_atomic64_fetch_and_relaxed */

#ifndef arch_atomic64_fetch_and_acquire
+/**
+ * arch_atomic64_fetch_and_acquire - Atomic and with acquire ordering
+ * @i: value to AND
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically AND @i with @v using acquire ordering.
+ * Return old value.
+ */
static __always_inline s64
arch_atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
{
@@ -2520,6 +2721,14 @@ arch_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
#else /* arch_atomic64_fetch_andnot_relaxed */

#ifndef arch_atomic64_fetch_andnot_acquire
+/**
+ * arch_atomic64_fetch_andnot_acquire - Atomic andnot with acquire ordering
+ * @i: value to complement then AND
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically complement then AND @i with @v using acquire ordering.
+ * Return old value.
+ */
static __always_inline s64
arch_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
{
@@ -2562,6 +2771,14 @@ arch_atomic64_fetch_andnot(s64 i, atomic64_t *v)
#else /* arch_atomic64_fetch_or_relaxed */

#ifndef arch_atomic64_fetch_or_acquire
+/**
+ * arch_atomic64_fetch_or_acquire - Atomic or with acquire ordering
+ * @i: value to OR
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically OR @i with @v using acquire ordering.
+ * Return old value.
+ */
static __always_inline s64
arch_atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
{
@@ -2604,6 +2821,14 @@ arch_atomic64_fetch_or(s64 i, atomic64_t *v)
#else /* arch_atomic64_fetch_xor_relaxed */

#ifndef arch_atomic64_fetch_xor_acquire
+/**
+ * arch_atomic64_fetch_xor_acquire - Atomic xor with acquire ordering
+ * @i: value to XOR
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically XOR @i with @v using acquire ordering.
+ * Return old value.
+ */
static __always_inline s64
arch_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
{
@@ -2646,6 +2871,14 @@ arch_atomic64_fetch_xor(s64 i, atomic64_t *v)
#else /* arch_atomic64_xchg_relaxed */

#ifndef arch_atomic64_xchg_acquire
+/**
+ * arch_atomic64_xchg_acquire - Atomic xchg with acquire ordering
+ * @v: pointer of type atomic64_t
+ * @i: value to exchange
+ *
+ * Atomically exchange @i with @v using acquire ordering.
+ * Return old value.
+ */
static __always_inline s64
arch_atomic64_xchg_acquire(atomic64_t *v, s64 i)
{
@@ -2688,6 +2921,18 @@ arch_atomic64_xchg(atomic64_t *v, s64 i)
#else /* arch_atomic64_cmpxchg_relaxed */

#ifndef arch_atomic64_cmpxchg_acquire
+/**
+ * arch_atomic64_cmpxchg_acquire - Atomic cmpxchg with acquire ordering
+ * @v: pointer of type atomic64_t
+ * @old: desired old value to match
+ * @new: new value to put in
+ *
+ * Atomically compares @old to *@v, and if equal,
+ * stores @new to *@v, providing acquire ordering.
+ * Returns the old value of *@v regardless of the result of
+ * the comparison. Therefore, if the return value is not
+ * equal to @old, the cmpxchg operation failed.
+ */
static __always_inline s64
arch_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
{
@@ -2825,6 +3070,18 @@ arch_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
#else /* arch_atomic64_try_cmpxchg_relaxed */

#ifndef arch_atomic64_try_cmpxchg_acquire
+/**
+ * arch_atomic64_try_cmpxchg_acquire - Atomic try_cmpxchg with acquire ordering
+ * @v: pointer of type atomic64_t
+ * @old: pointer to desired old value to match
+ * @new: new value to put in
+ *
+ * Atomically compares *@old to *@v, and if equal,
+ * stores @new to *@v, providing acquire ordering.
+ * Returns @true if the cmpxchg operation succeeded,
+ * and @false otherwise. Either way, stores the old
+ * value of *@v to *@old.
+ */
static __always_inline bool
arch_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
{
@@ -2990,6 +3247,15 @@ arch_atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
#else /* arch_atomic64_add_negative_relaxed */

#ifndef arch_atomic64_add_negative_acquire
+/**
+ * arch_atomic64_add_negative_acquire - Atomic add_negative with acquire ordering
+ * @i: value to add
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically add @i to @v using acquire ordering.
+ * Return @true if the result is negative, or @false when
+ * the result is greater than or equal to zero.
+ */
static __always_inline bool
arch_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
{
@@ -3160,4 +3426,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v)
#endif

#endif /* _LINUX_ATOMIC_FALLBACK_H */
-// 96c8a3c4d13b12c9f3e0f715709c8af1653a7e79
+// a7944792460cf5adb72d49025850800d2cd178be
diff --git a/scripts/atomic/fallbacks/acquire b/scripts/atomic/fallbacks/acquire
index ef764085c79a..08fc6c30a9ef 100755
--- a/scripts/atomic/fallbacks/acquire
+++ b/scripts/atomic/fallbacks/acquire
@@ -1,4 +1,6 @@
-cat <<EOF
+acqrel=acquire
+. ${ATOMICDIR}/acqrel.sh
+cat << EOF
static __always_inline ${ret}
arch_${atomic}_${pfx}${name}${sfx}_acquire(${params})
{
--
2.40.1
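The acquire fallback generated by this patch layers an acquire fence on
top of the architecture's _relaxed primitive. A minimal user-space
sketch of that shape, using C11 atomics as a stand-in for the arch_
primitives (the function name here is hypothetical):

```c
#include <stdatomic.h>

/*
 * Sketch of how a generated _acquire fallback is built when the
 * architecture supplies only the _relaxed form: perform the relaxed
 * RMW, then issue an acquire fence so that later memory accesses
 * cannot be reordered before the atomic operation.
 */
static int fetch_add_acquire_sketch(atomic_int *v, int i)
{
	int ret = atomic_fetch_add_explicit(v, i, memory_order_relaxed);
	atomic_thread_fence(memory_order_acquire);
	return ret;	/* fetch_ convention: return the old value */
}
```

The release fallback in the next patch is the mirror image: a release
fence placed before the relaxed operation.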


2023-05-10 18:48:20

by Paul E. McKenney

[permalink] [raw]
Subject: [PATCH locking/atomic 15/19] locking/atomic: Add kernel-doc header for arch_${atomic}_${pfx}${name}${sfx}_release

Add a kernel-doc header template for the
arch_${atomic}_${pfx}${name}${sfx}_release function family with the help
of my good friend awk, as encapsulated in acqrel.sh.

Signed-off-by: Paul E. McKenney <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Mark Rutland <[email protected]>
---
include/linux/atomic/atomic-arch-fallback.h | 268 +++++++++++++++++++-
scripts/atomic/fallbacks/release | 2 +
2 files changed, 269 insertions(+), 1 deletion(-)

diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index fc80113ca60a..ec6821b4bbc1 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -311,6 +311,14 @@ arch_atomic_add_return_acquire(int i, atomic_t *v)
#endif

#ifndef arch_atomic_add_return_release
+/**
+ * arch_atomic_add_return_release - Atomic add with release ordering
+ * @i: value to add
+ * @v: pointer of type atomic_t
+ *
+ * Atomically add @i to @v using release ordering.
+ * Return new value.
+ */
static __always_inline int
arch_atomic_add_return_release(int i, atomic_t *v)
{
@@ -361,6 +369,14 @@ arch_atomic_fetch_add_acquire(int i, atomic_t *v)
#endif

#ifndef arch_atomic_fetch_add_release
+/**
+ * arch_atomic_fetch_add_release - Atomic add with release ordering
+ * @i: value to add
+ * @v: pointer of type atomic_t
+ *
+ * Atomically add @i to @v using release ordering.
+ * Return old value.
+ */
static __always_inline int
arch_atomic_fetch_add_release(int i, atomic_t *v)
{
@@ -411,6 +427,14 @@ arch_atomic_sub_return_acquire(int i, atomic_t *v)
#endif

#ifndef arch_atomic_sub_return_release
+/**
+ * arch_atomic_sub_return_release - Atomic sub with release ordering
+ * @i: value to subtract
+ * @v: pointer of type atomic_t
+ *
+ * Atomically subtract @i from @v using release ordering.
+ * Return new value.
+ */
static __always_inline int
arch_atomic_sub_return_release(int i, atomic_t *v)
{
@@ -461,6 +485,14 @@ arch_atomic_fetch_sub_acquire(int i, atomic_t *v)
#endif

#ifndef arch_atomic_fetch_sub_release
+/**
+ * arch_atomic_fetch_sub_release - Atomic sub with release ordering
+ * @i: value to subtract
+ * @v: pointer of type atomic_t
+ *
+ * Atomically subtract @i from @v using release ordering.
+ * Return old value.
+ */
static __always_inline int
arch_atomic_fetch_sub_release(int i, atomic_t *v)
{
@@ -593,6 +625,13 @@ arch_atomic_inc_return_acquire(atomic_t *v)
#endif

#ifndef arch_atomic_inc_return_release
+/**
+ * arch_atomic_inc_return_release - Atomic inc with release ordering
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increment @v using release ordering.
+ * Return new value.
+ */
static __always_inline int
arch_atomic_inc_return_release(atomic_t *v)
{
@@ -709,6 +748,13 @@ arch_atomic_fetch_inc_acquire(atomic_t *v)
#endif

#ifndef arch_atomic_fetch_inc_release
+/**
+ * arch_atomic_fetch_inc_release - Atomic inc with release ordering
+ * @v: pointer of type atomic_t
+ *
+ * Atomically increment @v using release ordering.
+ * Return old value.
+ */
static __always_inline int
arch_atomic_fetch_inc_release(atomic_t *v)
{
@@ -841,6 +887,13 @@ arch_atomic_dec_return_acquire(atomic_t *v)
#endif

#ifndef arch_atomic_dec_return_release
+/**
+ * arch_atomic_dec_return_release - Atomic dec with release ordering
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrement @v using release ordering.
+ * Return new value.
+ */
static __always_inline int
arch_atomic_dec_return_release(atomic_t *v)
{
@@ -957,6 +1010,13 @@ arch_atomic_fetch_dec_acquire(atomic_t *v)
#endif

#ifndef arch_atomic_fetch_dec_release
+/**
+ * arch_atomic_fetch_dec_release - Atomic dec with release ordering
+ * @v: pointer of type atomic_t
+ *
+ * Atomically decrement @v using release ordering.
+ * Return old value.
+ */
static __always_inline int
arch_atomic_fetch_dec_release(atomic_t *v)
{
@@ -1007,6 +1067,14 @@ arch_atomic_fetch_and_acquire(int i, atomic_t *v)
#endif

#ifndef arch_atomic_fetch_and_release
+/**
+ * arch_atomic_fetch_and_release - Atomic and with release ordering
+ * @i: value to AND
+ * @v: pointer of type atomic_t
+ *
+ * Atomically AND @i with @v using release ordering.
+ * Return old value.
+ */
static __always_inline int
arch_atomic_fetch_and_release(int i, atomic_t *v)
{
@@ -1145,6 +1213,14 @@ arch_atomic_fetch_andnot_acquire(int i, atomic_t *v)
#endif

#ifndef arch_atomic_fetch_andnot_release
+/**
+ * arch_atomic_fetch_andnot_release - Atomic andnot with release ordering
+ * @i: value to complement then AND
+ * @v: pointer of type atomic_t
+ *
+ * Atomically complement then AND @i with @v using release ordering.
+ * Return old value.
+ */
static __always_inline int
arch_atomic_fetch_andnot_release(int i, atomic_t *v)
{
@@ -1195,6 +1271,14 @@ arch_atomic_fetch_or_acquire(int i, atomic_t *v)
#endif

#ifndef arch_atomic_fetch_or_release
+/**
+ * arch_atomic_fetch_or_release - Atomic or with release ordering
+ * @i: value to OR
+ * @v: pointer of type atomic_t
+ *
+ * Atomically OR @i with @v using release ordering.
+ * Return old value.
+ */
static __always_inline int
arch_atomic_fetch_or_release(int i, atomic_t *v)
{
@@ -1245,6 +1329,14 @@ arch_atomic_fetch_xor_acquire(int i, atomic_t *v)
#endif

#ifndef arch_atomic_fetch_xor_release
+/**
+ * arch_atomic_fetch_xor_release - Atomic xor with release ordering
+ * @i: value to XOR
+ * @v: pointer of type atomic_t
+ *
+ * Atomically XOR @i with @v using release ordering.
+ * Return old value.
+ */
static __always_inline int
arch_atomic_fetch_xor_release(int i, atomic_t *v)
{
@@ -1295,6 +1387,14 @@ arch_atomic_xchg_acquire(atomic_t *v, int i)
#endif

#ifndef arch_atomic_xchg_release
+/**
+ * arch_atomic_xchg_release - Atomic xchg with release ordering
+ * @v: pointer of type atomic_t
+ * @i: value to exchange
+ *
+ * Atomically exchange @i with @v using release ordering.
+ * Return old value.
+ */
static __always_inline int
arch_atomic_xchg_release(atomic_t *v, int i)
{
@@ -1349,6 +1449,18 @@ arch_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
#endif

#ifndef arch_atomic_cmpxchg_release
+/**
+ * arch_atomic_cmpxchg_release - Atomic cmpxchg with release ordering
+ * @v: pointer of type atomic_t
+ * @old: desired old value to match
+ * @new: new value to put in
+ *
+ * Atomically compares @old to *@v, and if equal,
+ * stores @new to *@v, providing release ordering.
+ * Returns the old value of *@v regardless of the result of
+ * the comparison. Therefore, if the return value is not
+ * equal to @old, the cmpxchg operation failed.
+ */
static __always_inline int
arch_atomic_cmpxchg_release(atomic_t *v, int old, int new)
{
@@ -1498,6 +1610,18 @@ arch_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
#endif

#ifndef arch_atomic_try_cmpxchg_release
+/**
+ * arch_atomic_try_cmpxchg_release - Atomic try_cmpxchg with release ordering
+ * @v: pointer of type atomic_t
+ * @old: pointer to desired old value to match
+ * @new: new value to put in
+ *
+ * Atomically compares *@old to *@v, and if equal,
+ * stores @new to *@v, providing release ordering.
+ * Returns @true if the cmpxchg operation succeeded,
+ * and @false otherwise. Either way, stores the old
+ * value of *@v to *@old.
+ */
static __always_inline bool
arch_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
{
@@ -1672,6 +1796,15 @@ arch_atomic_add_negative_acquire(int i, atomic_t *v)
#endif

#ifndef arch_atomic_add_negative_release
+/**
+ * arch_atomic_add_negative_release - Atomic add_negative with release ordering
+ * @i: value to add
+ * @v: pointer of type atomic_t
+ *
+ * Atomically add @i to @v using release ordering.
+ * Return @true if the result is negative, or @false when
+ * the result is greater than or equal to zero.
+ */
static __always_inline bool
arch_atomic_add_negative_release(int i, atomic_t *v)
{
@@ -1906,6 +2039,14 @@ arch_atomic64_add_return_acquire(s64 i, atomic64_t *v)
#endif

#ifndef arch_atomic64_add_return_release
+/**
+ * arch_atomic64_add_return_release - Atomic add with release ordering
+ * @i: value to add
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically add @i to @v using release ordering.
+ * Return new value.
+ */
static __always_inline s64
arch_atomic64_add_return_release(s64 i, atomic64_t *v)
{
@@ -1956,6 +2097,14 @@ arch_atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
#endif

#ifndef arch_atomic64_fetch_add_release
+/**
+ * arch_atomic64_fetch_add_release - Atomic add with release ordering
+ * @i: value to add
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically add @i to @v using release ordering.
+ * Return old value.
+ */
static __always_inline s64
arch_atomic64_fetch_add_release(s64 i, atomic64_t *v)
{
@@ -2006,6 +2155,14 @@ arch_atomic64_sub_return_acquire(s64 i, atomic64_t *v)
#endif

#ifndef arch_atomic64_sub_return_release
+/**
+ * arch_atomic64_sub_return_release - Atomic sub with release ordering
+ * @i: value to subtract
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically subtract @i from @v using release ordering.
+ * Return new value.
+ */
static __always_inline s64
arch_atomic64_sub_return_release(s64 i, atomic64_t *v)
{
@@ -2056,6 +2213,14 @@ arch_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
#endif

#ifndef arch_atomic64_fetch_sub_release
+/**
+ * arch_atomic64_fetch_sub_release - Atomic sub with release ordering
+ * @i: value to subtract
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically subtract @i from @v using release ordering.
+ * Return old value.
+ */
static __always_inline s64
arch_atomic64_fetch_sub_release(s64 i, atomic64_t *v)
{
@@ -2188,6 +2353,13 @@ arch_atomic64_inc_return_acquire(atomic64_t *v)
#endif

#ifndef arch_atomic64_inc_return_release
+/**
+ * arch_atomic64_inc_return_release - Atomic inc with release ordering
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically increment @v using release ordering.
+ * Return new value.
+ */
static __always_inline s64
arch_atomic64_inc_return_release(atomic64_t *v)
{
@@ -2304,6 +2476,13 @@ arch_atomic64_fetch_inc_acquire(atomic64_t *v)
#endif

#ifndef arch_atomic64_fetch_inc_release
+/**
+ * arch_atomic64_fetch_inc_release - Atomic inc with release ordering
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically increment @v using release ordering.
+ * Return old value.
+ */
static __always_inline s64
arch_atomic64_fetch_inc_release(atomic64_t *v)
{
@@ -2436,6 +2615,13 @@ arch_atomic64_dec_return_acquire(atomic64_t *v)
#endif

#ifndef arch_atomic64_dec_return_release
+/**
+ * arch_atomic64_dec_return_release - Atomic dec with release ordering
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically decrement @v using release ordering.
+ * Return new value.
+ */
static __always_inline s64
arch_atomic64_dec_return_release(atomic64_t *v)
{
@@ -2552,6 +2738,13 @@ arch_atomic64_fetch_dec_acquire(atomic64_t *v)
#endif

#ifndef arch_atomic64_fetch_dec_release
+/**
+ * arch_atomic64_fetch_dec_release - Atomic dec with release ordering
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically decrement @v using release ordering.
+ * Return old value.
+ */
static __always_inline s64
arch_atomic64_fetch_dec_release(atomic64_t *v)
{
@@ -2602,6 +2795,14 @@ arch_atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
#endif

#ifndef arch_atomic64_fetch_and_release
+/**
+ * arch_atomic64_fetch_and_release - Atomic and with release ordering
+ * @i: value to AND
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically AND @i with @v using release ordering.
+ * Return old value.
+ */
static __always_inline s64
arch_atomic64_fetch_and_release(s64 i, atomic64_t *v)
{
@@ -2740,6 +2941,14 @@ arch_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
#endif

#ifndef arch_atomic64_fetch_andnot_release
+/**
+ * arch_atomic64_fetch_andnot_release - Atomic andnot with release ordering
+ * @i: value to complement then AND
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically complement then AND @i with @v using release ordering.
+ * Return old value.
+ */
static __always_inline s64
arch_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
{
@@ -2790,6 +2999,14 @@ arch_atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
#endif

#ifndef arch_atomic64_fetch_or_release
+/**
+ * arch_atomic64_fetch_or_release - Atomic or with release ordering
+ * @i: value to OR
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically OR @i with @v using release ordering.
+ * Return old value.
+ */
static __always_inline s64
arch_atomic64_fetch_or_release(s64 i, atomic64_t *v)
{
@@ -2840,6 +3057,14 @@ arch_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
#endif

#ifndef arch_atomic64_fetch_xor_release
+/**
+ * arch_atomic64_fetch_xor_release - Atomic xor with release ordering
+ * @i: value to XOR
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically XOR @i with @v using release ordering.
+ * Return old value.
+ */
static __always_inline s64
arch_atomic64_fetch_xor_release(s64 i, atomic64_t *v)
{
@@ -2890,6 +3115,14 @@ arch_atomic64_xchg_acquire(atomic64_t *v, s64 i)
#endif

#ifndef arch_atomic64_xchg_release
+/**
+ * arch_atomic64_xchg_release - Atomic xchg with release ordering
+ * @v: pointer of type atomic64_t
+ * @i: value to exchange
+ *
+ * Atomically exchange @i with @v using release ordering.
+ * Return old value.
+ */
static __always_inline s64
arch_atomic64_xchg_release(atomic64_t *v, s64 i)
{
@@ -2944,6 +3177,18 @@ arch_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
#endif

#ifndef arch_atomic64_cmpxchg_release
+/**
+ * arch_atomic64_cmpxchg_release - Atomic cmpxchg with release ordering
+ * @v: pointer of type atomic64_t
+ * @old: desired old value to match
+ * @new: new value to put in
+ *
+ * Atomically compares @old to *@v, and if equal,
+ * stores @new to *@v, providing release ordering.
+ * Returns the old value of *@v regardless of the result of
+ * the comparison. Therefore, if the return value is not
+ * equal to @old, the cmpxchg operation failed.
+ */
static __always_inline s64
arch_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
{
@@ -3093,6 +3338,18 @@ arch_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
#endif

#ifndef arch_atomic64_try_cmpxchg_release
+/**
+ * arch_atomic64_try_cmpxchg_release - Atomic try_cmpxchg with release ordering
+ * @v: pointer of type atomic64_t
+ * @old: desired old value to match
+ * @new: new value to put in
+ *
+ * Atomically compares @old to *@v, and if equal,
+ * stores @new to *@v, providing release ordering.
+ * Returns @true if the cmpxchg operation succeeded,
+ * and @false otherwise. Either way, stores the old
+ * value of *@v to *@old.
+ */
static __always_inline bool
arch_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
{
@@ -3267,6 +3524,15 @@ arch_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
#endif

#ifndef arch_atomic64_add_negative_release
+/**
+ * arch_atomic64_add_negative_release - Atomic add_negative with release ordering
+ * @i: value to add
+ * @v: pointer of type atomic64_t
+ *
+ * Atomically add @i to @v using release ordering.
+ * Return @true if the result is negative, or @false when
+ * the result is greater than or equal to zero.
+ */
static __always_inline bool
arch_atomic64_add_negative_release(s64 i, atomic64_t *v)
{
@@ -3426,4 +3692,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v)
#endif

#endif /* _LINUX_ATOMIC_FALLBACK_H */
-// a7944792460cf5adb72d49025850800d2cd178be
+// 2caf9e8360f71f3841789431533b32b620a12c1e
diff --git a/scripts/atomic/fallbacks/release b/scripts/atomic/fallbacks/release
index b46feb56d69c..bce3a1cbd497 100755
--- a/scripts/atomic/fallbacks/release
+++ b/scripts/atomic/fallbacks/release
@@ -1,3 +1,5 @@
+acqrel=release
+. ${ATOMICDIR}/acqrel.sh
cat <<EOF
static __always_inline ${ret}
arch_${atomic}_${pfx}${name}${sfx}_release(${params})
--
2.40.1


2023-05-11 17:33:27

by Mark Rutland

[permalink] [raw]
Subject: Re: [PATCH locking/atomic 18/19] locking/atomic: Refrain from generating duplicate fallback kernel-doc

Hi Paul

On Wed, May 10, 2023 at 11:17:16AM -0700, Paul E. McKenney wrote:
> The gen-atomics.sh script currently generates 42 duplicate definitions:
>
> arch_atomic64_add_negative
> arch_atomic64_add_negative_acquire
> arch_atomic64_add_negative_release

[...]

> These duplicates are presumably to handle different architectures
> generating hand-coded definitions for different subsets of the atomic
> operations.

Yup, for each FULL/ACQUIRE/RELEASE/RELAXED variant of each op, we allow the
architecture to choose between:

* Providing the ordering variant directly
* Providing the FULL ordering variant only
* Providing the RELAXED ordering variant only
* Providing an equivalent op that we can build from

> However, generating duplicate kernel-doc headers is undesirable.

Understood -- I hadn't understood that duplication was a problem when this was
originally written.

The way this is currently done is largely an artifact of our ifdeffery (and the
kerneldoc for fallbacks living in the fallback templates), and I think we can
fix both of those.

> Therefore, generate only the first kernel-doc definition in a group
> of duplicates. A comment indicates the name of the function and the
> fallback script that generated it.

I'm not keen on this approach, especially with the chkdup.sh script -- it feels
like we're working around an underlying structural issue.

I think that we can restructure the ifdeffery so that each ordering variant
gets its own ifdeffery, and then we could place the kerneldoc immediately above
that, e.g.

/**
* arch_atomic_inc_return_release()
*
* [ full kerneldoc block here ]
*/
#if defined(arch_atomic_inc_return_release)
/* defined in arch code */
#elif defined(arch_atomic_inc_return_relaxed)
[ define in terms of arch_atomic_inc_return_relaxed ]
#elif defined(arch_atomic_inc_return)
[ define in terms of arch_atomic_inc_return ]
#else
[ define in terms of arch_atomic_fetch_inc_release ]
#endif

... with similar for the mandatory ops that each arch must provide, e.g.

/**
* arch_atomic_or()
*
* [ full kerneldoc block here ]
*/
/* arch_atomic_or() is mandatory -- architectures must define it! */

I had a go at that restructuring today, and while local build testing indicates
I haven't got it quite right, I think it's possible:

https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=atomics/fallback-rework

Does that sound ok to you?

Thanks,
Mark.

> Reported-by: Akira Yokosawa <[email protected]>
> Signed-off-by: Paul E. McKenney <[email protected]>
> Cc: Will Deacon <[email protected]>
> Cc: Peter Zijlstra <[email protected]>
> Cc: Boqun Feng <[email protected]>
> Cc: Mark Rutland <[email protected]>
> ---
> include/linux/atomic/atomic-arch-fallback.h | 386 +++----------------
> scripts/atomic/chkdup.sh | 27 ++
> scripts/atomic/fallbacks/acquire | 3 +
> scripts/atomic/fallbacks/add_negative | 5 +
> scripts/atomic/fallbacks/add_unless | 5 +
> scripts/atomic/fallbacks/andnot | 5 +
> scripts/atomic/fallbacks/dec | 5 +
> scripts/atomic/fallbacks/dec_and_test | 5 +
> scripts/atomic/fallbacks/dec_if_positive | 5 +
> scripts/atomic/fallbacks/dec_unless_positive | 5 +
> scripts/atomic/fallbacks/fence | 3 +
> scripts/atomic/fallbacks/fetch_add_unless | 5 +
> scripts/atomic/fallbacks/inc | 5 +
> scripts/atomic/fallbacks/inc_and_test | 5 +
> scripts/atomic/fallbacks/inc_not_zero | 5 +
> scripts/atomic/fallbacks/inc_unless_negative | 5 +
> scripts/atomic/fallbacks/read_acquire | 5 +
> scripts/atomic/fallbacks/release | 3 +
> scripts/atomic/fallbacks/set_release | 5 +
> scripts/atomic/fallbacks/sub_and_test | 5 +
> scripts/atomic/fallbacks/try_cmpxchg | 5 +
> scripts/atomic/gen-atomics.sh | 4 +
> 22 files changed, 163 insertions(+), 343 deletions(-)
> create mode 100644 scripts/atomic/chkdup.sh
>
> diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
> index 41aa94f0aacd..2d56726f8662 100644
> --- a/include/linux/atomic/atomic-arch-fallback.h
> +++ b/include/linux/atomic/atomic-arch-fallback.h
> @@ -639,13 +639,7 @@ arch_atomic_inc_return_relaxed(atomic_t *v)
> #else /* arch_atomic_inc_return_relaxed */
>
> #ifndef arch_atomic_inc_return_acquire
> -/**
> - * arch_atomic_inc_return_acquire - Atomic inc with acquire ordering
> - * @v: pointer of type atomic_t
> - *
> - * Atomically increment @v using acquire ordering.
> - * Return new value.
> - */
> +// Fallback acquire omitting duplicate arch_atomic_inc_return_acquire() kernel-doc header.
> static __always_inline int
> arch_atomic_inc_return_acquire(atomic_t *v)
> {
> @@ -657,13 +651,7 @@ arch_atomic_inc_return_acquire(atomic_t *v)
> #endif
>
> #ifndef arch_atomic_inc_return_release
> -/**
> - * arch_atomic_inc_return_release - Atomic inc with release ordering
> - * @v: pointer of type atomic_t
> - *
> - * Atomically increment @v using release ordering.
> - * Return new value.
> - */
> +// Fallback release omitting duplicate arch_atomic_inc_return_release() kernel-doc header.
> static __always_inline int
> arch_atomic_inc_return_release(atomic_t *v)
> {
> @@ -674,13 +662,7 @@ arch_atomic_inc_return_release(atomic_t *v)
> #endif
>
> #ifndef arch_atomic_inc_return
> -/**
> - * arch_atomic_inc_return - Atomic inc with full ordering
> - * @v: pointer of type atomic_t
> - *
> - * Atomically increment @v using full ordering.
> - * Return new value.
> - */
> +// Fallback fence omitting duplicate arch_atomic_inc_return() kernel-doc header.
> static __always_inline int
> arch_atomic_inc_return(atomic_t *v)
> {
> @@ -769,13 +751,7 @@ arch_atomic_fetch_inc_relaxed(atomic_t *v)
> #else /* arch_atomic_fetch_inc_relaxed */
>
> #ifndef arch_atomic_fetch_inc_acquire
> -/**
> - * arch_atomic_fetch_inc_acquire - Atomic inc with acquire ordering
> - * @v: pointer of type atomic_t
> - *
> - * Atomically increment @v using acquire ordering.
> - * Return old value.
> - */
> +// Fallback acquire omitting duplicate arch_atomic_fetch_inc_acquire() kernel-doc header.
> static __always_inline int
> arch_atomic_fetch_inc_acquire(atomic_t *v)
> {
> @@ -787,13 +763,7 @@ arch_atomic_fetch_inc_acquire(atomic_t *v)
> #endif
>
> #ifndef arch_atomic_fetch_inc_release
> -/**
> - * arch_atomic_fetch_inc_release - Atomic inc with release ordering
> - * @v: pointer of type atomic_t
> - *
> - * Atomically increment @v using release ordering.
> - * Return old value.
> - */
> +// Fallback release omitting duplicate arch_atomic_fetch_inc_release() kernel-doc header.
> static __always_inline int
> arch_atomic_fetch_inc_release(atomic_t *v)
> {
> @@ -804,13 +774,7 @@ arch_atomic_fetch_inc_release(atomic_t *v)
> #endif
>
> #ifndef arch_atomic_fetch_inc
> -/**
> - * arch_atomic_fetch_inc - Atomic inc with full ordering
> - * @v: pointer of type atomic_t
> - *
> - * Atomically increment @v using full ordering.
> - * Return old value.
> - */
> +// Fallback fence omitting duplicate arch_atomic_fetch_inc() kernel-doc header.
> static __always_inline int
> arch_atomic_fetch_inc(atomic_t *v)
> {
> @@ -915,13 +879,7 @@ arch_atomic_dec_return_relaxed(atomic_t *v)
> #else /* arch_atomic_dec_return_relaxed */
>
> #ifndef arch_atomic_dec_return_acquire
> -/**
> - * arch_atomic_dec_return_acquire - Atomic dec with acquire ordering
> - * @v: pointer of type atomic_t
> - *
> - * Atomically decrement @v using acquire ordering.
> - * Return new value.
> - */
> +// Fallback acquire omitting duplicate arch_atomic_dec_return_acquire() kernel-doc header.
> static __always_inline int
> arch_atomic_dec_return_acquire(atomic_t *v)
> {
> @@ -933,13 +891,7 @@ arch_atomic_dec_return_acquire(atomic_t *v)
> #endif
>
> #ifndef arch_atomic_dec_return_release
> -/**
> - * arch_atomic_dec_return_release - Atomic dec with release ordering
> - * @v: pointer of type atomic_t
> - *
> - * Atomically decrement @v using release ordering.
> - * Return new value.
> - */
> +// Fallback release omitting duplicate arch_atomic_dec_return_release() kernel-doc header.
> static __always_inline int
> arch_atomic_dec_return_release(atomic_t *v)
> {
> @@ -950,13 +902,7 @@ arch_atomic_dec_return_release(atomic_t *v)
> #endif
>
> #ifndef arch_atomic_dec_return
> -/**
> - * arch_atomic_dec_return - Atomic dec with full ordering
> - * @v: pointer of type atomic_t
> - *
> - * Atomically decrement @v using full ordering.
> - * Return new value.
> - */
> +// Fallback fence omitting duplicate arch_atomic_dec_return() kernel-doc header.
> static __always_inline int
> arch_atomic_dec_return(atomic_t *v)
> {
> @@ -1045,13 +991,7 @@ arch_atomic_fetch_dec_relaxed(atomic_t *v)
> #else /* arch_atomic_fetch_dec_relaxed */
>
> #ifndef arch_atomic_fetch_dec_acquire
> -/**
> - * arch_atomic_fetch_dec_acquire - Atomic dec with acquire ordering
> - * @v: pointer of type atomic_t
> - *
> - * Atomically decrement @v using acquire ordering.
> - * Return old value.
> - */
> +// Fallback acquire omitting duplicate arch_atomic_fetch_dec_acquire() kernel-doc header.
> static __always_inline int
> arch_atomic_fetch_dec_acquire(atomic_t *v)
> {
> @@ -1063,13 +1003,7 @@ arch_atomic_fetch_dec_acquire(atomic_t *v)
> #endif
>
> #ifndef arch_atomic_fetch_dec_release
> -/**
> - * arch_atomic_fetch_dec_release - Atomic dec with release ordering
> - * @v: pointer of type atomic_t
> - *
> - * Atomically decrement @v using release ordering.
> - * Return old value.
> - */
> +// Fallback release omitting duplicate arch_atomic_fetch_dec_release() kernel-doc header.
> static __always_inline int
> arch_atomic_fetch_dec_release(atomic_t *v)
> {
> @@ -1080,13 +1014,7 @@ arch_atomic_fetch_dec_release(atomic_t *v)
> #endif
>
> #ifndef arch_atomic_fetch_dec
> -/**
> - * arch_atomic_fetch_dec - Atomic dec with full ordering
> - * @v: pointer of type atomic_t
> - *
> - * Atomically decrement @v using full ordering.
> - * Return old value.
> - */
> +// Fallback fence omitting duplicate arch_atomic_fetch_dec() kernel-doc header.
> static __always_inline int
> arch_atomic_fetch_dec(atomic_t *v)
> {
> @@ -1262,14 +1190,7 @@ arch_atomic_fetch_andnot_relaxed(int i, atomic_t *v)
> #else /* arch_atomic_fetch_andnot_relaxed */
>
> #ifndef arch_atomic_fetch_andnot_acquire
> -/**
> - * arch_atomic_fetch_andnot_acquire - Atomic andnot with acquire ordering
> - * @i: value to complement then AND
> - * @v: pointer of type atomic_t
> - *
> - * Atomically complement then AND @i with @v using acquire ordering.
> - * Return old value.
> - */
> +// Fallback acquire omitting duplicate arch_atomic_fetch_andnot_acquire() kernel-doc header.
> static __always_inline int
> arch_atomic_fetch_andnot_acquire(int i, atomic_t *v)
> {
> @@ -1281,14 +1202,7 @@ arch_atomic_fetch_andnot_acquire(int i, atomic_t *v)
> #endif
>
> #ifndef arch_atomic_fetch_andnot_release
> -/**
> - * arch_atomic_fetch_andnot_release - Atomic andnot with release ordering
> - * @i: value to complement then AND
> - * @v: pointer of type atomic_t
> - *
> - * Atomically complement then AND @i with @v using release ordering.
> - * Return old value.
> - */
> +// Fallback release omitting duplicate arch_atomic_fetch_andnot_release() kernel-doc header.
> static __always_inline int
> arch_atomic_fetch_andnot_release(int i, atomic_t *v)
> {
> @@ -1299,14 +1213,7 @@ arch_atomic_fetch_andnot_release(int i, atomic_t *v)
> #endif
>
> #ifndef arch_atomic_fetch_andnot
> -/**
> - * arch_atomic_fetch_andnot - Atomic andnot with full ordering
> - * @i: value to complement then AND
> - * @v: pointer of type atomic_t
> - *
> - * Atomically complement then AND @i with @v using full ordering.
> - * Return old value.
> - */
> +// Fallback fence omitting duplicate arch_atomic_fetch_andnot() kernel-doc header.
> static __always_inline int
> arch_atomic_fetch_andnot(int i, atomic_t *v)
> {
> @@ -1699,18 +1606,7 @@ arch_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
> #else /* arch_atomic_try_cmpxchg_relaxed */
>
> #ifndef arch_atomic_try_cmpxchg_acquire
> -/**
> - * arch_atomic_try_cmpxchg_acquire - Atomic try_cmpxchg with acquire ordering
> - * @v: pointer of type atomic_t
> - * @old: desired old value to match
> - * @new: new value to put in
> - *
> - * Atomically compares @new to *@v, and if equal,
> - * stores @new to *@v, providing acquire ordering.
> - * Returns @true if the cmpxchg operation succeeded,
> - * and false otherwise. Either way, stores the old
> - * value of *@v to *@old.
> - */
> +// Fallback acquire omitting duplicate arch_atomic_try_cmpxchg_acquire() kernel-doc header.
> static __always_inline bool
> arch_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
> {
> @@ -1722,18 +1618,7 @@ arch_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
> #endif
>
> #ifndef arch_atomic_try_cmpxchg_release
> -/**
> - * arch_atomic_try_cmpxchg_release - Atomic try_cmpxchg with release ordering
> - * @v: pointer of type atomic_t
> - * @old: desired old value to match
> - * @new: new value to put in
> - *
> - * Atomically compares @new to *@v, and if equal,
> - * stores @new to *@v, providing release ordering.
> - * Returns @true if the cmpxchg operation succeeded,
> - * and false otherwise. Either way, stores the old
> - * value of *@v to *@old.
> - */
> +// Fallback release omitting duplicate arch_atomic_try_cmpxchg_release() kernel-doc header.
> static __always_inline bool
> arch_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
> {
> @@ -1744,18 +1629,7 @@ arch_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
> #endif
>
> #ifndef arch_atomic_try_cmpxchg
> -/**
> - * arch_atomic_try_cmpxchg - Atomic try_cmpxchg with full ordering
> - * @v: pointer of type atomic_t
> - * @old: desired old value to match
> - * @new: new value to put in
> - *
> - * Atomically compares @new to *@v, and if equal,
> - * stores @new to *@v, providing full ordering.
> - * Returns @true if the cmpxchg operation succeeded,
> - * and false otherwise. Either way, stores the old
> - * value of *@v to *@old.
> - */
> +// Fallback fence omitting duplicate arch_atomic_try_cmpxchg() kernel-doc header.
> static __always_inline bool
> arch_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
> {
> @@ -1900,15 +1774,7 @@ arch_atomic_add_negative_relaxed(int i, atomic_t *v)
> #else /* arch_atomic_add_negative_relaxed */
>
> #ifndef arch_atomic_add_negative_acquire
> -/**
> - * arch_atomic_add_negative_acquire - Atomic add_negative with acquire ordering
> - * @i: value to add
> - * @v: pointer of type atomic_t
> - *
> - * Atomically add @i with @v using acquire ordering.
> - * Return @true if the result is negative, or @false when
> - * the result is greater than or equal to zero.
> - */
> +// Fallback acquire omitting duplicate arch_atomic_add_negative_acquire() kernel-doc header.
> static __always_inline bool
> arch_atomic_add_negative_acquire(int i, atomic_t *v)
> {
> @@ -1920,15 +1786,7 @@ arch_atomic_add_negative_acquire(int i, atomic_t *v)
> #endif
>
> #ifndef arch_atomic_add_negative_release
> -/**
> - * arch_atomic_add_negative_release - Atomic add_negative with release ordering
> - * @i: value to add
> - * @v: pointer of type atomic_t
> - *
> - * Atomically add @i with @v using release ordering.
> - * Return @true if the result is negative, or @false when
> - * the result is greater than or equal to zero.
> - */
> +// Fallback release omitting duplicate arch_atomic_add_negative_release() kernel-doc header.
> static __always_inline bool
> arch_atomic_add_negative_release(int i, atomic_t *v)
> {
> @@ -1939,15 +1797,7 @@ arch_atomic_add_negative_release(int i, atomic_t *v)
> #endif
>
> #ifndef arch_atomic_add_negative
> -/**
> - * arch_atomic_add_negative - Atomic add_negative with full ordering
> - * @i: value to add
> - * @v: pointer of type atomic_t
> - *
> - * Atomically add @i with @v using full ordering.
> - * Return @true if the result is negative, or @false when
> - * the result is greater than or equal to zero.
> - */
> +// Fallback fence omitting duplicate arch_atomic_add_negative() kernel-doc header.
> static __always_inline bool
> arch_atomic_add_negative(int i, atomic_t *v)
> {
> @@ -2500,13 +2350,7 @@ arch_atomic64_inc_return_relaxed(atomic64_t *v)
> #else /* arch_atomic64_inc_return_relaxed */
>
> #ifndef arch_atomic64_inc_return_acquire
> -/**
> - * arch_atomic64_inc_return_acquire - Atomic inc with acquire ordering
> - * @v: pointer of type atomic64_t
> - *
> - * Atomically increment @v using acquire ordering.
> - * Return new value.
> - */
> +// Fallback acquire omitting duplicate arch_atomic64_inc_return_acquire() kernel-doc header.
> static __always_inline s64
> arch_atomic64_inc_return_acquire(atomic64_t *v)
> {
> @@ -2518,13 +2362,7 @@ arch_atomic64_inc_return_acquire(atomic64_t *v)
> #endif
>
> #ifndef arch_atomic64_inc_return_release
> -/**
> - * arch_atomic64_inc_return_release - Atomic inc with release ordering
> - * @v: pointer of type atomic64_t
> - *
> - * Atomically increment @v using release ordering.
> - * Return new value.
> - */
> +// Fallback release omitting duplicate arch_atomic64_inc_return_release() kernel-doc header.
> static __always_inline s64
> arch_atomic64_inc_return_release(atomic64_t *v)
> {
> @@ -2535,13 +2373,7 @@ arch_atomic64_inc_return_release(atomic64_t *v)
> #endif
>
> #ifndef arch_atomic64_inc_return
> -/**
> - * arch_atomic64_inc_return - Atomic inc with full ordering
> - * @v: pointer of type atomic64_t
> - *
> - * Atomically increment @v using full ordering.
> - * Return new value.
> - */
> +// Fallback fence omitting duplicate arch_atomic64_inc_return() kernel-doc header.
> static __always_inline s64
> arch_atomic64_inc_return(atomic64_t *v)
> {
> @@ -2630,13 +2462,7 @@ arch_atomic64_fetch_inc_relaxed(atomic64_t *v)
> #else /* arch_atomic64_fetch_inc_relaxed */
>
> #ifndef arch_atomic64_fetch_inc_acquire
> -/**
> - * arch_atomic64_fetch_inc_acquire - Atomic inc with acquire ordering
> - * @v: pointer of type atomic64_t
> - *
> - * Atomically increment @v using acquire ordering.
> - * Return old value.
> - */
> +// Fallback acquire omitting duplicate arch_atomic64_fetch_inc_acquire() kernel-doc header.
> static __always_inline s64
> arch_atomic64_fetch_inc_acquire(atomic64_t *v)
> {
> @@ -2648,13 +2474,7 @@ arch_atomic64_fetch_inc_acquire(atomic64_t *v)
> #endif
>
> #ifndef arch_atomic64_fetch_inc_release
> -/**
> - * arch_atomic64_fetch_inc_release - Atomic inc with release ordering
> - * @v: pointer of type atomic64_t
> - *
> - * Atomically increment @v using release ordering.
> - * Return old value.
> - */
> +// Fallback release omitting duplicate arch_atomic64_fetch_inc_release() kernel-doc header.
> static __always_inline s64
> arch_atomic64_fetch_inc_release(atomic64_t *v)
> {
> @@ -2665,13 +2485,7 @@ arch_atomic64_fetch_inc_release(atomic64_t *v)
> #endif
>
> #ifndef arch_atomic64_fetch_inc
> -/**
> - * arch_atomic64_fetch_inc - Atomic inc with full ordering
> - * @v: pointer of type atomic64_t
> - *
> - * Atomically increment @v using full ordering.
> - * Return old value.
> - */
> +// Fallback fence omitting duplicate arch_atomic64_fetch_inc() kernel-doc header.
> static __always_inline s64
> arch_atomic64_fetch_inc(atomic64_t *v)
> {
> @@ -2776,13 +2590,7 @@ arch_atomic64_dec_return_relaxed(atomic64_t *v)
> #else /* arch_atomic64_dec_return_relaxed */
>
> #ifndef arch_atomic64_dec_return_acquire
> -/**
> - * arch_atomic64_dec_return_acquire - Atomic dec with acquire ordering
> - * @v: pointer of type atomic64_t
> - *
> - * Atomically decrement @v using acquire ordering.
> - * Return new value.
> - */
> +// Fallback acquire omitting duplicate arch_atomic64_dec_return_acquire() kernel-doc header.
> static __always_inline s64
> arch_atomic64_dec_return_acquire(atomic64_t *v)
> {
> @@ -2794,13 +2602,7 @@ arch_atomic64_dec_return_acquire(atomic64_t *v)
> #endif
>
> #ifndef arch_atomic64_dec_return_release
> -/**
> - * arch_atomic64_dec_return_release - Atomic dec with release ordering
> - * @v: pointer of type atomic64_t
> - *
> - * Atomically decrement @v using release ordering.
> - * Return new value.
> - */
> +// Fallback release omitting duplicate arch_atomic64_dec_return_release() kernel-doc header.
> static __always_inline s64
> arch_atomic64_dec_return_release(atomic64_t *v)
> {
> @@ -2811,13 +2613,7 @@ arch_atomic64_dec_return_release(atomic64_t *v)
> #endif
>
> #ifndef arch_atomic64_dec_return
> -/**
> - * arch_atomic64_dec_return - Atomic dec with full ordering
> - * @v: pointer of type atomic64_t
> - *
> - * Atomically decrement @v using full ordering.
> - * Return new value.
> - */
> +// Fallback fence omitting duplicate arch_atomic64_dec_return() kernel-doc header.
> static __always_inline s64
> arch_atomic64_dec_return(atomic64_t *v)
> {
> @@ -2906,13 +2702,7 @@ arch_atomic64_fetch_dec_relaxed(atomic64_t *v)
> #else /* arch_atomic64_fetch_dec_relaxed */
>
> #ifndef arch_atomic64_fetch_dec_acquire
> -/**
> - * arch_atomic64_fetch_dec_acquire - Atomic dec with acquire ordering
> - * @v: pointer of type atomic64_t
> - *
> - * Atomically decrement @v using acquire ordering.
> - * Return old value.
> - */
> +// Fallback acquire omitting duplicate arch_atomic64_fetch_dec_acquire() kernel-doc header.
> static __always_inline s64
> arch_atomic64_fetch_dec_acquire(atomic64_t *v)
> {
> @@ -2924,13 +2714,7 @@ arch_atomic64_fetch_dec_acquire(atomic64_t *v)
> #endif
>
> #ifndef arch_atomic64_fetch_dec_release
> -/**
> - * arch_atomic64_fetch_dec_release - Atomic dec with release ordering
> - * @v: pointer of type atomic64_t
> - *
> - * Atomically decrement @v using release ordering.
> - * Return old value.
> - */
> +// Fallback release omitting duplicate arch_atomic64_fetch_dec_release() kernel-doc header.
> static __always_inline s64
> arch_atomic64_fetch_dec_release(atomic64_t *v)
> {
> @@ -2941,13 +2725,7 @@ arch_atomic64_fetch_dec_release(atomic64_t *v)
> #endif
>
> #ifndef arch_atomic64_fetch_dec
> -/**
> - * arch_atomic64_fetch_dec - Atomic dec with full ordering
> - * @v: pointer of type atomic64_t
> - *
> - * Atomically decrement @v using full ordering.
> - * Return old value.
> - */
> +// Fallback fence omitting duplicate arch_atomic64_fetch_dec() kernel-doc header.
> static __always_inline s64
> arch_atomic64_fetch_dec(atomic64_t *v)
> {
> @@ -3123,14 +2901,7 @@ arch_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
> #else /* arch_atomic64_fetch_andnot_relaxed */
>
> #ifndef arch_atomic64_fetch_andnot_acquire
> -/**
> - * arch_atomic64_fetch_andnot_acquire - Atomic andnot with acquire ordering
> - * @i: value to complement then AND
> - * @v: pointer of type atomic64_t
> - *
> - * Atomically complement then AND @i with @v using acquire ordering.
> - * Return old value.
> - */
> +// Fallback acquire omitting duplicate arch_atomic64_fetch_andnot_acquire() kernel-doc header.
> static __always_inline s64
> arch_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
> {
> @@ -3142,14 +2913,7 @@ arch_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
> #endif
>
> #ifndef arch_atomic64_fetch_andnot_release
> -/**
> - * arch_atomic64_fetch_andnot_release - Atomic andnot with release ordering
> - * @i: value to complement then AND
> - * @v: pointer of type atomic64_t
> - *
> - * Atomically complement then AND @i with @v using release ordering.
> - * Return old value.
> - */
> +// Fallback release omitting duplicate arch_atomic64_fetch_andnot_release() kernel-doc header.
> static __always_inline s64
> arch_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
> {
> @@ -3160,14 +2924,7 @@ arch_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
> #endif
>
> #ifndef arch_atomic64_fetch_andnot
> -/**
> - * arch_atomic64_fetch_andnot - Atomic andnot with full ordering
> - * @i: value to complement then AND
> - * @v: pointer of type atomic64_t
> - *
> - * Atomically complement then AND @i with @v using full ordering.
> - * Return old value.
> - */
> +// Fallback fence omitting duplicate arch_atomic64_fetch_andnot() kernel-doc header.
> static __always_inline s64
> arch_atomic64_fetch_andnot(s64 i, atomic64_t *v)
> {
> @@ -3560,18 +3317,7 @@ arch_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
> #else /* arch_atomic64_try_cmpxchg_relaxed */
>
> #ifndef arch_atomic64_try_cmpxchg_acquire
> -/**
> - * arch_atomic64_try_cmpxchg_acquire - Atomic try_cmpxchg with acquire ordering
> - * @v: pointer of type atomic64_t
> - * @old: desired old value to match
> - * @new: new value to put in
> - *
> - * Atomically compares @new to *@v, and if equal,
> - * stores @new to *@v, providing acquire ordering.
> - * Returns @true if the cmpxchg operation succeeded,
> - * and false otherwise. Either way, stores the old
> - * value of *@v to *@old.
> - */
> +// Fallback acquire omitting duplicate arch_atomic64_try_cmpxchg_acquire() kernel-doc header.
> static __always_inline bool
> arch_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
> {
> @@ -3583,18 +3329,7 @@ arch_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
> #endif
>
> #ifndef arch_atomic64_try_cmpxchg_release
> -/**
> - * arch_atomic64_try_cmpxchg_release - Atomic try_cmpxchg with release ordering
> - * @v: pointer of type atomic64_t
> - * @old: desired old value to match
> - * @new: new value to put in
> - *
> - * Atomically compares @new to *@v, and if equal,
> - * stores @new to *@v, providing release ordering.
> - * Returns @true if the cmpxchg operation succeeded,
> - * and false otherwise. Either way, stores the old
> - * value of *@v to *@old.
> - */
> +// Fallback release omitting duplicate arch_atomic64_try_cmpxchg_release() kernel-doc header.
> static __always_inline bool
> arch_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
> {
> @@ -3605,18 +3340,7 @@ arch_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
> #endif
>
> #ifndef arch_atomic64_try_cmpxchg
> -/**
> - * arch_atomic64_try_cmpxchg - Atomic try_cmpxchg with full ordering
> - * @v: pointer of type atomic64_t
> - * @old: desired old value to match
> - * @new: new value to put in
> - *
> - * Atomically compares @new to *@v, and if equal,
> - * stores @new to *@v, providing full ordering.
> - * Returns @true if the cmpxchg operation succeeded,
> - * and false otherwise. Either way, stores the old
> - * value of *@v to *@old.
> - */
> +// Fallback fence omitting duplicate arch_atomic64_try_cmpxchg() kernel-doc header.
> static __always_inline bool
> arch_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
> {
> @@ -3761,15 +3485,7 @@ arch_atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
> #else /* arch_atomic64_add_negative_relaxed */
>
> #ifndef arch_atomic64_add_negative_acquire
> -/**
> - * arch_atomic64_add_negative_acquire - Atomic add_negative with acquire ordering
> - * @i: value to add
> - * @v: pointer of type atomic64_t
> - *
> - * Atomically add @i with @v using acquire ordering.
> - * Return @true if the result is negative, or @false when
> - * the result is greater than or equal to zero.
> - */
> +// Fallback acquire omitting duplicate arch_atomic64_add_negative_acquire() kernel-doc header.
> static __always_inline bool
> arch_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
> {
> @@ -3781,15 +3497,7 @@ arch_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
> #endif
>
> #ifndef arch_atomic64_add_negative_release
> -/**
> - * arch_atomic64_add_negative_release - Atomic add_negative with release ordering
> - * @i: value to add
> - * @v: pointer of type atomic64_t
> - *
> - * Atomically add @i with @v using release ordering.
> - * Return @true if the result is negative, or @false when
> - * the result is greater than or equal to zero.
> - */
> +// Fallback release omitting duplicate arch_atomic64_add_negative_release() kernel-doc header.
> static __always_inline bool
> arch_atomic64_add_negative_release(s64 i, atomic64_t *v)
> {
> @@ -3800,15 +3508,7 @@ arch_atomic64_add_negative_release(s64 i, atomic64_t *v)
> #endif
>
> #ifndef arch_atomic64_add_negative
> -/**
> - * arch_atomic64_add_negative - Atomic add_negative with full ordering
> - * @i: value to add
> - * @v: pointer of type atomic64_t
> - *
> - * Atomically add @i with @v using full ordering.
> - * Return @true if the result is negative, or @false when
> - * the result is greater than or equal to zero.
> - */
> +// Fallback fence omitting duplicate arch_atomic64_add_negative() kernel-doc header.
> static __always_inline bool
> arch_atomic64_add_negative(s64 i, atomic64_t *v)
> {
> @@ -3958,4 +3658,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v)
> #endif
>
> #endif /* _LINUX_ATOMIC_FALLBACK_H */
> -// 7c2c97cd48cf9c672efc44b9fed5a37b8970dde4
> +// 9bf9febc5288ed9539d1b3cfbbc6e36743b74c3b
> diff --git a/scripts/atomic/chkdup.sh b/scripts/atomic/chkdup.sh
> new file mode 100644
> index 000000000000..04bb4f5c5c34
> --- /dev/null
> +++ b/scripts/atomic/chkdup.sh
> @@ -0,0 +1,27 @@
> +#!/bin/sh
> +# SPDX-License-Identifier: GPL-2.0
> +#
> +# Check to see if the specified atomic is already in use. This is
> +# done by keeping filenames in the temporary directory specified by the
> +# environment variable T.
> +#
> +# Usage:
> +# chkdup.sh name fallback
> +#
> +# The "name" argument is the name of the function to be generated, and
> +# the "fallback" argument is the name of the fallback script that is
> +# doing the generation.
> +#
> +# If the function is a duplicate, output a comment saying so and
> +# exit with non-zero (error) status. Otherwise exit successfully.
> +
> +if test -f ${T}/${1}
> +then
> + echo // Fallback ${2} omitting duplicate "${1}()" kernel-doc header.
> + exit 1
> +fi
> +touch ${T}/${1}
> +exit 0
> diff --git a/scripts/atomic/fallbacks/acquire b/scripts/atomic/fallbacks/acquire
> index 08fc6c30a9ef..a349935ac7fe 100755
> --- a/scripts/atomic/fallbacks/acquire
> +++ b/scripts/atomic/fallbacks/acquire
> @@ -1,5 +1,8 @@
> +if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_${pfx}${name}${sfx}_acquire acquire
> +then
> acqrel=acquire
> . ${ATOMICDIR}/acqrel.sh
> +fi
> cat << EOF
> static __always_inline ${ret}
> arch_${atomic}_${pfx}${name}${sfx}_acquire(${params})
> diff --git a/scripts/atomic/fallbacks/add_negative b/scripts/atomic/fallbacks/add_negative
> index c032e8bec6e2..b105fdfe8fd1 100755
> --- a/scripts/atomic/fallbacks/add_negative
> +++ b/scripts/atomic/fallbacks/add_negative
> @@ -1,3 +1,5 @@
> +if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_add_negative${order} add_negative
> +then
> cat <<EOF
> /**
> * arch_${atomic}_add_negative${order} - Add and test if negative
> @@ -7,6 +9,9 @@ cat <<EOF
> * Atomically adds @i to @v and returns @true if the result is negative,
> * or @false when the result is greater than or equal to zero.
> */
> +EOF
> +fi
> +cat <<EOF
> static __always_inline bool
> arch_${atomic}_add_negative${order}(${int} i, ${atomic}_t *v)
> {
> diff --git a/scripts/atomic/fallbacks/add_unless b/scripts/atomic/fallbacks/add_unless
> index 650fee935aed..d72d382e3757 100755
> --- a/scripts/atomic/fallbacks/add_unless
> +++ b/scripts/atomic/fallbacks/add_unless
> @@ -1,3 +1,5 @@
> +if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_add_unless add_unless
> +then
> cat << EOF
> /**
> * arch_${atomic}_add_unless - add unless the number is already a given value
> @@ -8,6 +10,9 @@ cat << EOF
> * Atomically adds @a to @v, if @v was not already @u.
> * Returns @true if the addition was done.
> */
> +EOF
> +fi
> +cat << EOF
> static __always_inline bool
> arch_${atomic}_add_unless(${atomic}_t *v, ${int} a, ${int} u)
> {
> diff --git a/scripts/atomic/fallbacks/andnot b/scripts/atomic/fallbacks/andnot
> index 9fbc0ce75a7c..57b2a187374a 100755
> --- a/scripts/atomic/fallbacks/andnot
> +++ b/scripts/atomic/fallbacks/andnot
> @@ -1,3 +1,5 @@
> +if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_${pfx}andnot${sfx}${order} andnot
> +then
> cat <<EOF
> /**
> * arch_${atomic}_${pfx}andnot${sfx}${order} - Atomic and-not
> @@ -7,6 +9,9 @@ cat <<EOF
> * Atomically and-not @i with @v using ${docbook_order} ordering.
> * returning ${docbook_oldnew} value.
> */
> +EOF
> +fi
> +cat <<EOF
> static __always_inline ${ret}
> arch_${atomic}_${pfx}andnot${sfx}${order}(${int} i, ${atomic}_t *v)
> {
> diff --git a/scripts/atomic/fallbacks/dec b/scripts/atomic/fallbacks/dec
> index e99c8edd36a3..e44d3eb96d2b 100755
> --- a/scripts/atomic/fallbacks/dec
> +++ b/scripts/atomic/fallbacks/dec
> @@ -1,3 +1,5 @@
> +if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_${pfx}dec${sfx}${order} dec
> +then
> cat <<EOF
> /**
> * arch_${atomic}_${pfx}dec${sfx}${order} - Atomic decrement
> @@ -6,6 +8,9 @@ cat <<EOF
> * Atomically decrement @v with ${docbook_order} ordering,
> * returning ${docbook_oldnew} value.
> */
> +EOF
> +fi
> +cat <<EOF
> static __always_inline ${ret}
> arch_${atomic}_${pfx}dec${sfx}${order}(${atomic}_t *v)
> {
> diff --git a/scripts/atomic/fallbacks/dec_and_test b/scripts/atomic/fallbacks/dec_and_test
> index 3720896b1afc..94f5a6d4827c 100755
> --- a/scripts/atomic/fallbacks/dec_and_test
> +++ b/scripts/atomic/fallbacks/dec_and_test
> @@ -1,3 +1,5 @@
> +if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_dec_and_test dec_and_test
> +then
> cat <<EOF
> /**
> * arch_${atomic}_dec_and_test - decrement and test
> @@ -7,6 +9,9 @@ cat <<EOF
> * returns @true if the result is 0, or @false for all other
> * cases.
> */
> +EOF
> +fi
> +cat <<EOF
> static __always_inline bool
> arch_${atomic}_dec_and_test(${atomic}_t *v)
> {
> diff --git a/scripts/atomic/fallbacks/dec_if_positive b/scripts/atomic/fallbacks/dec_if_positive
> index dedbdbc1487d..e27eb71dd1b2 100755
> --- a/scripts/atomic/fallbacks/dec_if_positive
> +++ b/scripts/atomic/fallbacks/dec_if_positive
> @@ -1,3 +1,5 @@
> +if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_dec_if_positive dec_if_positive
> +then
> cat <<EOF
> /**
> * arch_${atomic}_dec_if_positive - Atomic decrement if old value is positive
> @@ -9,6 +11,9 @@ cat <<EOF
> * there @v will not be decremented, but -4 will be returned. As a result,
> * if the return value is non-negative, then the value was in fact decremented.
> */
> +EOF
> +fi
> +cat <<EOF
> static __always_inline ${ret}
> arch_${atomic}_dec_if_positive(${atomic}_t *v)
> {
> diff --git a/scripts/atomic/fallbacks/dec_unless_positive b/scripts/atomic/fallbacks/dec_unless_positive
> index c3d01d201c63..ee00fffc5f11 100755
> --- a/scripts/atomic/fallbacks/dec_unless_positive
> +++ b/scripts/atomic/fallbacks/dec_unless_positive
> @@ -1,3 +1,5 @@
> +if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_dec_unless_positive dec_unless_positive
> +then
> cat <<EOF
> /**
> * arch_${atomic}_dec_unless_positive - Atomic decrement if old value is non-positive
> @@ -7,6 +9,9 @@ cat <<EOF
> * than or equal to zero. Return @true if the decrement happened and
> * @false otherwise.
> */
> +EOF
> +fi
> +cat <<EOF
> static __always_inline bool
> arch_${atomic}_dec_unless_positive(${atomic}_t *v)
> {
> diff --git a/scripts/atomic/fallbacks/fence b/scripts/atomic/fallbacks/fence
> index 975855dfba25..f4901343cd2b 100755
> --- a/scripts/atomic/fallbacks/fence
> +++ b/scripts/atomic/fallbacks/fence
> @@ -1,5 +1,8 @@
> +if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_${pfx}${name}${sfx} fence
> +then
> acqrel=full
> . ${ATOMICDIR}/acqrel.sh
> +fi
> cat <<EOF
> static __always_inline ${ret}
> arch_${atomic}_${pfx}${name}${sfx}(${params})
> diff --git a/scripts/atomic/fallbacks/fetch_add_unless b/scripts/atomic/fallbacks/fetch_add_unless
> index a1692df0d514..ec583d340785 100755
> --- a/scripts/atomic/fallbacks/fetch_add_unless
> +++ b/scripts/atomic/fallbacks/fetch_add_unless
> @@ -1,3 +1,5 @@
> +if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_fetch_add_unless fetch_add_unless
> +then
> cat << EOF
> /**
> * arch_${atomic}_fetch_add_unless - add unless the number is already a given value
> @@ -8,6 +10,9 @@ cat << EOF
> * Atomically adds @a to @v, so long as @v was not already @u.
> * Returns original value of @v.
> */
> +EOF
> +fi
> +cat << EOF
> static __always_inline ${int}
> arch_${atomic}_fetch_add_unless(${atomic}_t *v, ${int} a, ${int} u)
> {
> diff --git a/scripts/atomic/fallbacks/inc b/scripts/atomic/fallbacks/inc
> index 3f2c0730cd0c..bb1d5ea6846c 100755
> --- a/scripts/atomic/fallbacks/inc
> +++ b/scripts/atomic/fallbacks/inc
> @@ -1,3 +1,5 @@
> +if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_${pfx}inc${sfx}${order} inc
> +then
> cat <<EOF
> /**
> * arch_${atomic}_${pfx}inc${sfx}${order} - Atomic increment
> @@ -6,6 +8,9 @@ cat <<EOF
> * Atomically increment @v with ${docbook_order} ordering,
> * returning ${docbook_oldnew} value.
> */
> +EOF
> +fi
> +cat <<EOF
> static __always_inline ${ret}
> arch_${atomic}_${pfx}inc${sfx}${order}(${atomic}_t *v)
> {
> diff --git a/scripts/atomic/fallbacks/inc_and_test b/scripts/atomic/fallbacks/inc_and_test
> index cc3ac1dde508..dd74f6a5ca4a 100755
> --- a/scripts/atomic/fallbacks/inc_and_test
> +++ b/scripts/atomic/fallbacks/inc_and_test
> @@ -1,3 +1,5 @@
> +if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_inc_and_test inc_and_test
> +then
> cat <<EOF
> /**
> * arch_${atomic}_inc_and_test - increment and test
> @@ -7,6 +9,9 @@ cat <<EOF
> * and returns @true if the result is zero, or @false for all
> * other cases.
> */
> +EOF
> +fi
> +cat <<EOF
> static __always_inline bool
> arch_${atomic}_inc_and_test(${atomic}_t *v)
> {
> diff --git a/scripts/atomic/fallbacks/inc_not_zero b/scripts/atomic/fallbacks/inc_not_zero
> index 891fa3c057f6..38e2c13dab62 100755
> --- a/scripts/atomic/fallbacks/inc_not_zero
> +++ b/scripts/atomic/fallbacks/inc_not_zero
> @@ -1,3 +1,5 @@
> +if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_inc_not_zero inc_not_zero
> +then
> cat <<EOF
> /**
> * arch_${atomic}_inc_not_zero - increment unless the number is zero
> @@ -6,6 +8,9 @@ cat <<EOF
> * Atomically increments @v by 1, if @v is non-zero.
> * Returns @true if the increment was done.
> */
> +EOF
> +fi
> +cat <<EOF
> static __always_inline bool
> arch_${atomic}_inc_not_zero(${atomic}_t *v)
> {
> diff --git a/scripts/atomic/fallbacks/inc_unless_negative b/scripts/atomic/fallbacks/inc_unless_negative
> index 98830b0dcdb1..2dc853c4e5b9 100755
> --- a/scripts/atomic/fallbacks/inc_unless_negative
> +++ b/scripts/atomic/fallbacks/inc_unless_negative
> @@ -1,3 +1,5 @@
> +if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_inc_unless_negative inc_unless_negative
> +then
> cat <<EOF
> /**
> * arch_${atomic}_inc_unless_negative - Atomic increment if old value is non-negative
> @@ -7,6 +9,9 @@ cat <<EOF
> * than or equal to zero. Return @true if the increment happened and
> * @false otherwise.
> */
> +EOF
> +fi
> +cat <<EOF
> static __always_inline bool
> arch_${atomic}_inc_unless_negative(${atomic}_t *v)
> {
> diff --git a/scripts/atomic/fallbacks/read_acquire b/scripts/atomic/fallbacks/read_acquire
> index 779f40c07018..680cd43080cb 100755
> --- a/scripts/atomic/fallbacks/read_acquire
> +++ b/scripts/atomic/fallbacks/read_acquire
> @@ -1,3 +1,5 @@
> +if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_read_acquire read_acquire
> +then
> cat <<EOF
> /**
> * arch_${atomic}_read_acquire - Atomic load acquire
> @@ -6,6 +8,9 @@ cat <<EOF
> * Atomically load from *@v with acquire ordering, returning the value
> * loaded.
> */
> +EOF
> +fi
> +cat <<EOF
> static __always_inline ${ret}
> arch_${atomic}_read_acquire(const ${atomic}_t *v)
> {
> diff --git a/scripts/atomic/fallbacks/release b/scripts/atomic/fallbacks/release
> index bce3a1cbd497..a1604df66ece 100755
> --- a/scripts/atomic/fallbacks/release
> +++ b/scripts/atomic/fallbacks/release
> @@ -1,5 +1,8 @@
> +if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_${pfx}${name}${sfx}_release release
> +then
> acqrel=release
> . ${ATOMICDIR}/acqrel.sh
> +fi
> cat <<EOF
> static __always_inline ${ret}
> arch_${atomic}_${pfx}${name}${sfx}_release(${params})
> diff --git a/scripts/atomic/fallbacks/set_release b/scripts/atomic/fallbacks/set_release
> index 46effb6203e5..2a65d3b29f4b 100755
> --- a/scripts/atomic/fallbacks/set_release
> +++ b/scripts/atomic/fallbacks/set_release
> @@ -1,3 +1,5 @@
> +if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_set_release set_release
> +then
> cat <<EOF
> /**
> * arch_${atomic}_set_release - Atomic store release
> @@ -6,6 +8,9 @@ cat <<EOF
> *
> * Atomically store @i into *@v with release ordering.
> */
> +EOF
> +fi
> +cat <<EOF
> static __always_inline void
> arch_${atomic}_set_release(${atomic}_t *v, ${int} i)
> {
> diff --git a/scripts/atomic/fallbacks/sub_and_test b/scripts/atomic/fallbacks/sub_and_test
> index 204282e260ea..0397b0e92192 100755
> --- a/scripts/atomic/fallbacks/sub_and_test
> +++ b/scripts/atomic/fallbacks/sub_and_test
> @@ -1,3 +1,5 @@
> +if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_sub_and_test sub_and_test
> +then
> cat <<EOF
> /**
> * arch_${atomic}_sub_and_test - subtract value from variable and test result
> @@ -8,6 +10,9 @@ cat <<EOF
> * @true if the result is zero, or @false for all
> * other cases.
> */
> +EOF
> +fi
> +cat <<EOF
> static __always_inline bool
> arch_${atomic}_sub_and_test(${int} i, ${atomic}_t *v)
> {
> diff --git a/scripts/atomic/fallbacks/try_cmpxchg b/scripts/atomic/fallbacks/try_cmpxchg
> index baf7412f9bf4..e08c5962dd83 100755
> --- a/scripts/atomic/fallbacks/try_cmpxchg
> +++ b/scripts/atomic/fallbacks/try_cmpxchg
> @@ -1,3 +1,5 @@
> +if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_try_cmpxchg${order} try_cmpxchg
> +then
> cat <<EOF
> /**
> * arch_${atomic}_try_cmpxchg${order} - Atomic cmpxchg with bool return value
> @@ -9,6 +11,9 @@ cat <<EOF
> * providing ${docbook_order} ordering.
> * Returns @true if the cmpxchg operation succeeded, and false otherwise.
> */
> +EOF
> +fi
> +cat <<EOF
> static __always_inline bool
> arch_${atomic}_try_cmpxchg${order}(${atomic}_t *v, ${int} *old, ${int} new)
> {
> diff --git a/scripts/atomic/gen-atomics.sh b/scripts/atomic/gen-atomics.sh
> index 5b98a8307693..69bf3754df5a 100755
> --- a/scripts/atomic/gen-atomics.sh
> +++ b/scripts/atomic/gen-atomics.sh
> @@ -3,6 +3,10 @@
> #
> # Generate atomic headers
>
> +T="`mktemp -d ${TMPDIR-/tmp}/gen-atomics.sh.XXXXXX`"
> +trap 'rm -rf $T' 0
> +export T
> +
> ATOMICDIR=$(dirname $0)
> ATOMICTBL=${ATOMICDIR}/atomics.tbl
> LINUXDIR=${ATOMICDIR}/../..
> --
> 2.40.1
>

2023-05-11 19:23:15

by Paul E. McKenney

Subject: Re: [PATCH locking/atomic 18/19] locking/atomic: Refrain from generating duplicate fallback kernel-doc

On Thu, May 11, 2023 at 06:10:00PM +0100, Mark Rutland wrote:
> Hi Paul
>
> On Wed, May 10, 2023 at 11:17:16AM -0700, Paul E. McKenney wrote:
> > The gen-atomics.sh script currently generates 42 duplicate definitions:
> >
> > arch_atomic64_add_negative
> > arch_atomic64_add_negative_acquire
> > arch_atomic64_add_negative_release
>
> [...]
>
> > These duplicates are presumably to handle different architectures
> > generating hand-coded definitions for different subsets of the atomic
> > operations.
>
> Yup, for each FULL/ACQUIRE/RELEASE/RELAXED variant of each op, we allow the
> architecture to choose between:
>
> * Providing the ordering variant directly
> * Providing the FULL ordering variant only
> * Providing the RELAXED ordering variant only
> * Providing an equivalent op that we can build from

Thank you for the explanation!
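
For illustration, here is a simplified stand-alone sketch of the shape that
a fallbacks/acquire-style template takes: the acquire variant is emitted as
the relaxed variant plus an acquire barrier. This mirrors
scripts/atomic/fallbacks/acquire but is not the in-tree generator; the
helper name gen_acquire is made up for this sketch.

```shell
# gen_acquire opname: emit the C acquire fallback for arch_atomic_<opname>,
# built from the _relaxed variant followed by an acquire barrier, the same
# way the real template composes ordering variants.
gen_acquire() {
	name=$1
	cat <<EOF
static __always_inline int
arch_atomic_${name}_acquire(atomic_t *v)
{
	int ret = arch_atomic_${name}_relaxed(v);
	__atomic_acquire_fence();
	return ret;
}
EOF
}

gen_acquire inc_return
```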

> > However, generating duplicate kernel-doc headers is undesirable.
>
> Understood -- I hadn't understood that duplication was a problem when this was
> originally written.

And neither did I!!!

Instead Akira kindly ran "make htmldocs" on my original attempt and let
me know of the breakage.

> The way this is currently done is largely an artifact of our ifdeffery (and the
> kerneldoc for fallbacks living in the fallback templates), and I think we can
> fix both of those.

Fair enough!

> > Therefore, generate only the first kernel-doc definition in a group
> > of duplicates. A comment indicates the name of the function and the
> > fallback script that generated it.
>
> I'm not keen on this approach, especially with the chkdup.sh script -- it feels
> like we're working around an underlying structural issue.

I freely admit that I was taking the most expedient path. ;-)
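
The expedient path in question boils down to a marker-file check, which can
be sketched as the following simplified re-implementation of the chkdup.sh
logic (not the in-tree script): the first caller for a given function name
creates a marker in $T and succeeds; any later caller gets a "duplicate"
comment and a non-zero status, which the templates use to skip the
kernel-doc header.

```shell
# T holds one marker file per already-documented function, as in
# gen-atomics.sh; the trap cleans the directory up on exit.
T=$(mktemp -d)
trap 'rm -rf "$T"' 0

chkdup() {	# chkdup name fallback
	if test -f "${T}/${1}"
	then
		echo "// Fallback ${2} omitting duplicate ${1}() kernel-doc header."
		return 1
	fi
	touch "${T}/${1}"
	return 0
}
```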

> I think that we can restructure the ifdeffery so that each ordering variant
> gets its own ifdeffery, and then we could place the kerneldoc immediately above
> that, e.g.
>
> /**
> * arch_atomic_inc_return_release()
> *
> * [ full kerneldoc block here ]
> */
> #if defined(arch_atomic_inc_return_release)
> /* defined in arch code */
> #elif defined(arch_atomic_inc_return_relaxed)
> [ define in terms of arch_atomic_inc_return_relaxed ]
> #elif defined(arch_atomic_inc_return)
> [ define in terms of arch_atomic_inc_return ]
> #else
> [ define in terms of arch_atomic_fetch_inc_release ]
> #endif
>
> ... with similar for the mandatory ops that each arch must provide, e.g.
>
> /**
> * arch_atomic_or()
> *
> * [ full kerneldoc block here ]
> */
> /* arch_atomic_or() is mandatory -- architectures must define it! */
>
> I had a go at that restructuring today, and while local build testing indicates
> I haven't got it quite right, I think it's possible:
>
> https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=atomics/fallback-rework
>
> Does that sound ok to you?

At first glance, it appears that your "TODO" locations have the same
information that I was using, so it should not be hard for me to adapt the
current kernel-doc generation to your new scheme. (Famous last words!)

Plus having the kernel-doc generation all in one place does have some
serious attractions.
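
Concretely, under that layout the generation could plausibly emit the
kernel-doc block exactly once, directly above the whole per-variant ifdef
chain, along these lines (a sketch following Mark's example above, not his
actual branch; the helper name gen_variant is made up here):

```shell
# gen_variant opname order: emit one kernel-doc header followed by the
# ifdef chain for that ordering variant, so the documentation appears
# exactly once regardless of which definition the architecture provides.
gen_variant() {
	name=$1; order=$2
	cat <<EOF
/**
 * arch_atomic_${name}_${order}()
 *
 * [ full kernel-doc block here ]
 */
#if defined(arch_atomic_${name}_${order})
/* defined in arch code */
#elif defined(arch_atomic_${name}_relaxed)
/* [ define in terms of arch_atomic_${name}_relaxed ] */
#else
/* [ define in terms of arch_atomic_${name} ] */
#endif
EOF
}

gen_variant inc_return release
```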

I will continue maintaining my current stack, but would of course be
happy to port it on top of your refactoring. If it turns out that
the refactoring will take a long time, we can discuss what to do in
the meantime. But here is hoping that the refactoring goes smoothly!
That would be easier all around. ;-)

Thanx, Paul

> Thanks,
> Mark.
>
> > Reported-by: Akira Yokosawa <[email protected]>
> > Signed-off-by: Paul E. McKenney <[email protected]>
> > Cc: Will Deacon <[email protected]>
> > Cc: Peter Zijlstra <[email protected]>
> > Cc: Boqun Feng <[email protected]>
> > Cc: Mark Rutland <[email protected]>
> > ---
> > include/linux/atomic/atomic-arch-fallback.h | 386 +++----------------
> > scripts/atomic/chkdup.sh | 27 ++
> > scripts/atomic/fallbacks/acquire | 3 +
> > scripts/atomic/fallbacks/add_negative | 5 +
> > scripts/atomic/fallbacks/add_unless | 5 +
> > scripts/atomic/fallbacks/andnot | 5 +
> > scripts/atomic/fallbacks/dec | 5 +
> > scripts/atomic/fallbacks/dec_and_test | 5 +
> > scripts/atomic/fallbacks/dec_if_positive | 5 +
> > scripts/atomic/fallbacks/dec_unless_positive | 5 +
> > scripts/atomic/fallbacks/fence | 3 +
> > scripts/atomic/fallbacks/fetch_add_unless | 5 +
> > scripts/atomic/fallbacks/inc | 5 +
> > scripts/atomic/fallbacks/inc_and_test | 5 +
> > scripts/atomic/fallbacks/inc_not_zero | 5 +
> > scripts/atomic/fallbacks/inc_unless_negative | 5 +
> > scripts/atomic/fallbacks/read_acquire | 5 +
> > scripts/atomic/fallbacks/release | 3 +
> > scripts/atomic/fallbacks/set_release | 5 +
> > scripts/atomic/fallbacks/sub_and_test | 5 +
> > scripts/atomic/fallbacks/try_cmpxchg | 5 +
> > scripts/atomic/gen-atomics.sh | 4 +
> > 22 files changed, 163 insertions(+), 343 deletions(-)
> > create mode 100644 scripts/atomic/chkdup.sh
> >
> > diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
> > index 41aa94f0aacd..2d56726f8662 100644
> > --- a/include/linux/atomic/atomic-arch-fallback.h
> > +++ b/include/linux/atomic/atomic-arch-fallback.h
> > @@ -639,13 +639,7 @@ arch_atomic_inc_return_relaxed(atomic_t *v)
> > #else /* arch_atomic_inc_return_relaxed */
> >
> > #ifndef arch_atomic_inc_return_acquire
> > -/**
> > - * arch_atomic_inc_return_acquire - Atomic inc with acquire ordering
> > - * @v: pointer of type atomic_t
> > - *
> > - * Atomically increment @v using acquire ordering.
> > - * Return new value.
> > - */
> > +// Fallback acquire omitting duplicate arch_atomic_inc_return_acquire() kernel-doc header.
> > static __always_inline int
> > arch_atomic_inc_return_acquire(atomic_t *v)
> > {
> > @@ -657,13 +651,7 @@ arch_atomic_inc_return_acquire(atomic_t *v)
> > #endif
> >
> > #ifndef arch_atomic_inc_return_release
> > -/**
> > - * arch_atomic_inc_return_release - Atomic inc with release ordering
> > - * @v: pointer of type atomic_t
> > - *
> > - * Atomically increment @v using release ordering.
> > - * Return new value.
> > - */
> > +// Fallback release omitting duplicate arch_atomic_inc_return_release() kernel-doc header.
> > static __always_inline int
> > arch_atomic_inc_return_release(atomic_t *v)
> > {
> > @@ -674,13 +662,7 @@ arch_atomic_inc_return_release(atomic_t *v)
> > #endif
> >
> > #ifndef arch_atomic_inc_return
> > -/**
> > - * arch_atomic_inc_return - Atomic inc with full ordering
> > - * @v: pointer of type atomic_t
> > - *
> > - * Atomically increment @v using full ordering.
> > - * Return new value.
> > - */
> > +// Fallback fence omitting duplicate arch_atomic_inc_return() kernel-doc header.
> > static __always_inline int
> > arch_atomic_inc_return(atomic_t *v)
> > {
> > @@ -769,13 +751,7 @@ arch_atomic_fetch_inc_relaxed(atomic_t *v)
> > #else /* arch_atomic_fetch_inc_relaxed */
> >
> > #ifndef arch_atomic_fetch_inc_acquire
> > -/**
> > - * arch_atomic_fetch_inc_acquire - Atomic inc with acquire ordering
> > - * @v: pointer of type atomic_t
> > - *
> > - * Atomically increment @v using acquire ordering.
> > - * Return old value.
> > - */
> > +// Fallback acquire omitting duplicate arch_atomic_fetch_inc_acquire() kernel-doc header.
> > static __always_inline int
> > arch_atomic_fetch_inc_acquire(atomic_t *v)
> > {
> > @@ -787,13 +763,7 @@ arch_atomic_fetch_inc_acquire(atomic_t *v)
> > #endif
> >
> > #ifndef arch_atomic_fetch_inc_release
> > -/**
> > - * arch_atomic_fetch_inc_release - Atomic inc with release ordering
> > - * @v: pointer of type atomic_t
> > - *
> > - * Atomically increment @v using release ordering.
> > - * Return old value.
> > - */
> > +// Fallback release omitting duplicate arch_atomic_fetch_inc_release() kernel-doc header.
> > static __always_inline int
> > arch_atomic_fetch_inc_release(atomic_t *v)
> > {
> > @@ -804,13 +774,7 @@ arch_atomic_fetch_inc_release(atomic_t *v)
> > #endif
> >
> > #ifndef arch_atomic_fetch_inc
> > -/**
> > - * arch_atomic_fetch_inc - Atomic inc with full ordering
> > - * @v: pointer of type atomic_t
> > - *
> > - * Atomically increment @v using full ordering.
> > - * Return old value.
> > - */
> > +// Fallback fence omitting duplicate arch_atomic_fetch_inc() kernel-doc header.
> > static __always_inline int
> > arch_atomic_fetch_inc(atomic_t *v)
> > {
> > @@ -915,13 +879,7 @@ arch_atomic_dec_return_relaxed(atomic_t *v)
> > #else /* arch_atomic_dec_return_relaxed */
> >
> > #ifndef arch_atomic_dec_return_acquire
> > -/**
> > - * arch_atomic_dec_return_acquire - Atomic dec with acquire ordering
> > - * @v: pointer of type atomic_t
> > - *
> > - * Atomically decrement @v using acquire ordering.
> > - * Return new value.
> > - */
> > +// Fallback acquire omitting duplicate arch_atomic_dec_return_acquire() kernel-doc header.
> > static __always_inline int
> > arch_atomic_dec_return_acquire(atomic_t *v)
> > {
> > @@ -933,13 +891,7 @@ arch_atomic_dec_return_acquire(atomic_t *v)
> > #endif
> >
> > #ifndef arch_atomic_dec_return_release
> > -/**
> > - * arch_atomic_dec_return_release - Atomic dec with release ordering
> > - * @v: pointer of type atomic_t
> > - *
> > - * Atomically decrement @v using release ordering.
> > - * Return new value.
> > - */
> > +// Fallback release omitting duplicate arch_atomic_dec_return_release() kernel-doc header.
> > static __always_inline int
> > arch_atomic_dec_return_release(atomic_t *v)
> > {
> > @@ -950,13 +902,7 @@ arch_atomic_dec_return_release(atomic_t *v)
> > #endif
> >
> > #ifndef arch_atomic_dec_return
> > -/**
> > - * arch_atomic_dec_return - Atomic dec with full ordering
> > - * @v: pointer of type atomic_t
> > - *
> > - * Atomically decrement @v using full ordering.
> > - * Return new value.
> > - */
> > +// Fallback fence omitting duplicate arch_atomic_dec_return() kernel-doc header.
> > static __always_inline int
> > arch_atomic_dec_return(atomic_t *v)
> > {
> > @@ -1045,13 +991,7 @@ arch_atomic_fetch_dec_relaxed(atomic_t *v)
> > #else /* arch_atomic_fetch_dec_relaxed */
> >
> > #ifndef arch_atomic_fetch_dec_acquire
> > -/**
> > - * arch_atomic_fetch_dec_acquire - Atomic dec with acquire ordering
> > - * @v: pointer of type atomic_t
> > - *
> > - * Atomically decrement @v using acquire ordering.
> > - * Return old value.
> > - */
> > +// Fallback acquire omitting duplicate arch_atomic_fetch_dec_acquire() kernel-doc header.
> > static __always_inline int
> > arch_atomic_fetch_dec_acquire(atomic_t *v)
> > {
> > @@ -1063,13 +1003,7 @@ arch_atomic_fetch_dec_acquire(atomic_t *v)
> > #endif
> >
> > #ifndef arch_atomic_fetch_dec_release
> > -/**
> > - * arch_atomic_fetch_dec_release - Atomic dec with release ordering
> > - * @v: pointer of type atomic_t
> > - *
> > - * Atomically decrement @v using release ordering.
> > - * Return old value.
> > - */
> > +// Fallback release omitting duplicate arch_atomic_fetch_dec_release() kernel-doc header.
> > static __always_inline int
> > arch_atomic_fetch_dec_release(atomic_t *v)
> > {
> > @@ -1080,13 +1014,7 @@ arch_atomic_fetch_dec_release(atomic_t *v)
> > #endif
> >
> > #ifndef arch_atomic_fetch_dec
> > -/**
> > - * arch_atomic_fetch_dec - Atomic dec with full ordering
> > - * @v: pointer of type atomic_t
> > - *
> > - * Atomically decrement @v using full ordering.
> > - * Return old value.
> > - */
> > +// Fallback fence omitting duplicate arch_atomic_fetch_dec() kernel-doc header.
> > static __always_inline int
> > arch_atomic_fetch_dec(atomic_t *v)
> > {
> > @@ -1262,14 +1190,7 @@ arch_atomic_fetch_andnot_relaxed(int i, atomic_t *v)
> > #else /* arch_atomic_fetch_andnot_relaxed */
> >
> > #ifndef arch_atomic_fetch_andnot_acquire
> > -/**
> > - * arch_atomic_fetch_andnot_acquire - Atomic andnot with acquire ordering
> > - * @i: value to complement then AND
> > - * @v: pointer of type atomic_t
> > - *
> > - * Atomically complement then AND @i with @v using acquire ordering.
> > - * Return old value.
> > - */
> > +// Fallback acquire omitting duplicate arch_atomic_fetch_andnot_acquire() kernel-doc header.
> > static __always_inline int
> > arch_atomic_fetch_andnot_acquire(int i, atomic_t *v)
> > {
> > @@ -1281,14 +1202,7 @@ arch_atomic_fetch_andnot_acquire(int i, atomic_t *v)
> > #endif
> >
> > #ifndef arch_atomic_fetch_andnot_release
> > -/**
> > - * arch_atomic_fetch_andnot_release - Atomic andnot with release ordering
> > - * @i: value to complement then AND
> > - * @v: pointer of type atomic_t
> > - *
> > - * Atomically complement then AND @i with @v using release ordering.
> > - * Return old value.
> > - */
> > +// Fallback release omitting duplicate arch_atomic_fetch_andnot_release() kernel-doc header.
> > static __always_inline int
> > arch_atomic_fetch_andnot_release(int i, atomic_t *v)
> > {
> > @@ -1299,14 +1213,7 @@ arch_atomic_fetch_andnot_release(int i, atomic_t *v)
> > #endif
> >
> > #ifndef arch_atomic_fetch_andnot
> > -/**
> > - * arch_atomic_fetch_andnot - Atomic andnot with full ordering
> > - * @i: value to complement then AND
> > - * @v: pointer of type atomic_t
> > - *
> > - * Atomically complement then AND @i with @v using full ordering.
> > - * Return old value.
> > - */
> > +// Fallback fence omitting duplicate arch_atomic_fetch_andnot() kernel-doc header.
> > static __always_inline int
> > arch_atomic_fetch_andnot(int i, atomic_t *v)
> > {
> > @@ -1699,18 +1606,7 @@ arch_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
> > #else /* arch_atomic_try_cmpxchg_relaxed */
> >
> > #ifndef arch_atomic_try_cmpxchg_acquire
> > -/**
> > - * arch_atomic_try_cmpxchg_acquire - Atomic try_cmpxchg with acquire ordering
> > - * @v: pointer of type atomic_t
> > - * @old: desired old value to match
> > - * @new: new value to put in
> > - *
> > - * Atomically compares @new to *@v, and if equal,
> > - * stores @new to *@v, providing acquire ordering.
> > - * Returns @true if the cmpxchg operation succeeded,
> > - * and false otherwise. Either way, stores the old
> > - * value of *@v to *@old.
> > - */
> > +// Fallback acquire omitting duplicate arch_atomic_try_cmpxchg_acquire() kernel-doc header.
> > static __always_inline bool
> > arch_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
> > {
> > @@ -1722,18 +1618,7 @@ arch_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
> > #endif
> >
> > #ifndef arch_atomic_try_cmpxchg_release
> > -/**
> > - * arch_atomic_try_cmpxchg_release - Atomic try_cmpxchg with release ordering
> > - * @v: pointer of type atomic_t
> > - * @old: desired old value to match
> > - * @new: new value to put in
> > - *
> > - * Atomically compares @new to *@v, and if equal,
> > - * stores @new to *@v, providing release ordering.
> > - * Returns @true if the cmpxchg operation succeeded,
> > - * and false otherwise. Either way, stores the old
> > - * value of *@v to *@old.
> > - */
> > +// Fallback release omitting duplicate arch_atomic_try_cmpxchg_release() kernel-doc header.
> > static __always_inline bool
> > arch_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
> > {
> > @@ -1744,18 +1629,7 @@ arch_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
> > #endif
> >
> > #ifndef arch_atomic_try_cmpxchg
> > -/**
> > - * arch_atomic_try_cmpxchg - Atomic try_cmpxchg with full ordering
> > - * @v: pointer of type atomic_t
> > - * @old: desired old value to match
> > - * @new: new value to put in
> > - *
> > - * Atomically compares @new to *@v, and if equal,
> > - * stores @new to *@v, providing full ordering.
> > - * Returns @true if the cmpxchg operation succeeded,
> > - * and false otherwise. Either way, stores the old
> > - * value of *@v to *@old.
> > - */
> > +// Fallback fence omitting duplicate arch_atomic_try_cmpxchg() kernel-doc header.
> > static __always_inline bool
> > arch_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
> > {
> > @@ -1900,15 +1774,7 @@ arch_atomic_add_negative_relaxed(int i, atomic_t *v)
> > #else /* arch_atomic_add_negative_relaxed */
> >
> > #ifndef arch_atomic_add_negative_acquire
> > -/**
> > - * arch_atomic_add_negative_acquire - Atomic add_negative with acquire ordering
> > - * @i: value to add
> > - * @v: pointer of type atomic_t
> > - *
> > - * Atomically add @i with @v using acquire ordering.
> > - * Return @true if the result is negative, or @false when
> > - * the result is greater than or equal to zero.
> > - */
> > +// Fallback acquire omitting duplicate arch_atomic_add_negative_acquire() kernel-doc header.
> > static __always_inline bool
> > arch_atomic_add_negative_acquire(int i, atomic_t *v)
> > {
> > @@ -1920,15 +1786,7 @@ arch_atomic_add_negative_acquire(int i, atomic_t *v)
> > #endif
> >
> > #ifndef arch_atomic_add_negative_release
> > -/**
> > - * arch_atomic_add_negative_release - Atomic add_negative with release ordering
> > - * @i: value to add
> > - * @v: pointer of type atomic_t
> > - *
> > - * Atomically add @i with @v using release ordering.
> > - * Return @true if the result is negative, or @false when
> > - * the result is greater than or equal to zero.
> > - */
> > +// Fallback release omitting duplicate arch_atomic_add_negative_release() kernel-doc header.
> > static __always_inline bool
> > arch_atomic_add_negative_release(int i, atomic_t *v)
> > {
> > @@ -1939,15 +1797,7 @@ arch_atomic_add_negative_release(int i, atomic_t *v)
> > #endif
> >
> > #ifndef arch_atomic_add_negative
> > -/**
> > - * arch_atomic_add_negative - Atomic add_negative with full ordering
> > - * @i: value to add
> > - * @v: pointer of type atomic_t
> > - *
> > - * Atomically add @i with @v using full ordering.
> > - * Return @true if the result is negative, or @false when
> > - * the result is greater than or equal to zero.
> > - */
> > +// Fallback fence omitting duplicate arch_atomic_add_negative() kernel-doc header.
> > static __always_inline bool
> > arch_atomic_add_negative(int i, atomic_t *v)
> > {
> > @@ -2500,13 +2350,7 @@ arch_atomic64_inc_return_relaxed(atomic64_t *v)
> > #else /* arch_atomic64_inc_return_relaxed */
> >
> > #ifndef arch_atomic64_inc_return_acquire
> > -/**
> > - * arch_atomic64_inc_return_acquire - Atomic inc with acquire ordering
> > - * @v: pointer of type atomic64_t
> > - *
> > - * Atomically increment @v using acquire ordering.
> > - * Return new value.
> > - */
> > +// Fallback acquire omitting duplicate arch_atomic64_inc_return_acquire() kernel-doc header.
> > static __always_inline s64
> > arch_atomic64_inc_return_acquire(atomic64_t *v)
> > {
> > @@ -2518,13 +2362,7 @@ arch_atomic64_inc_return_acquire(atomic64_t *v)
> > #endif
> >
> > #ifndef arch_atomic64_inc_return_release
> > -/**
> > - * arch_atomic64_inc_return_release - Atomic inc with release ordering
> > - * @v: pointer of type atomic64_t
> > - *
> > - * Atomically increment @v using release ordering.
> > - * Return new value.
> > - */
> > +// Fallback release omitting duplicate arch_atomic64_inc_return_release() kernel-doc header.
> > static __always_inline s64
> > arch_atomic64_inc_return_release(atomic64_t *v)
> > {
> > @@ -2535,13 +2373,7 @@ arch_atomic64_inc_return_release(atomic64_t *v)
> > #endif
> >
> > #ifndef arch_atomic64_inc_return
> > -/**
> > - * arch_atomic64_inc_return - Atomic inc with full ordering
> > - * @v: pointer of type atomic64_t
> > - *
> > - * Atomically increment @v using full ordering.
> > - * Return new value.
> > - */
> > +// Fallback fence omitting duplicate arch_atomic64_inc_return() kernel-doc header.
> > static __always_inline s64
> > arch_atomic64_inc_return(atomic64_t *v)
> > {
> > @@ -2630,13 +2462,7 @@ arch_atomic64_fetch_inc_relaxed(atomic64_t *v)
> > #else /* arch_atomic64_fetch_inc_relaxed */
> >
> > #ifndef arch_atomic64_fetch_inc_acquire
> > -/**
> > - * arch_atomic64_fetch_inc_acquire - Atomic inc with acquire ordering
> > - * @v: pointer of type atomic64_t
> > - *
> > - * Atomically increment @v using acquire ordering.
> > - * Return old value.
> > - */
> > +// Fallback acquire omitting duplicate arch_atomic64_fetch_inc_acquire() kernel-doc header.
> > static __always_inline s64
> > arch_atomic64_fetch_inc_acquire(atomic64_t *v)
> > {
> > @@ -2648,13 +2474,7 @@ arch_atomic64_fetch_inc_acquire(atomic64_t *v)
> > #endif
> >
> > #ifndef arch_atomic64_fetch_inc_release
> > -/**
> > - * arch_atomic64_fetch_inc_release - Atomic inc with release ordering
> > - * @v: pointer of type atomic64_t
> > - *
> > - * Atomically increment @v using release ordering.
> > - * Return old value.
> > - */
> > +// Fallback release omitting duplicate arch_atomic64_fetch_inc_release() kernel-doc header.
> > static __always_inline s64
> > arch_atomic64_fetch_inc_release(atomic64_t *v)
> > {
> > @@ -2665,13 +2485,7 @@ arch_atomic64_fetch_inc_release(atomic64_t *v)
> > #endif
> >
> > #ifndef arch_atomic64_fetch_inc
> > -/**
> > - * arch_atomic64_fetch_inc - Atomic inc with full ordering
> > - * @v: pointer of type atomic64_t
> > - *
> > - * Atomically increment @v using full ordering.
> > - * Return old value.
> > - */
> > +// Fallback fence omitting duplicate arch_atomic64_fetch_inc() kernel-doc header.
> > static __always_inline s64
> > arch_atomic64_fetch_inc(atomic64_t *v)
> > {
> > @@ -2776,13 +2590,7 @@ arch_atomic64_dec_return_relaxed(atomic64_t *v)
> > #else /* arch_atomic64_dec_return_relaxed */
> >
> > #ifndef arch_atomic64_dec_return_acquire
> > -/**
> > - * arch_atomic64_dec_return_acquire - Atomic dec with acquire ordering
> > - * @v: pointer of type atomic64_t
> > - *
> > - * Atomically decrement @v using acquire ordering.
> > - * Return new value.
> > - */
> > +// Fallback acquire omitting duplicate arch_atomic64_dec_return_acquire() kernel-doc header.
> > static __always_inline s64
> > arch_atomic64_dec_return_acquire(atomic64_t *v)
> > {
> > @@ -2794,13 +2602,7 @@ arch_atomic64_dec_return_acquire(atomic64_t *v)
> > #endif
> >
> > #ifndef arch_atomic64_dec_return_release
> > -/**
> > - * arch_atomic64_dec_return_release - Atomic dec with release ordering
> > - * @v: pointer of type atomic64_t
> > - *
> > - * Atomically decrement @v using release ordering.
> > - * Return new value.
> > - */
> > +// Fallback release omitting duplicate arch_atomic64_dec_return_release() kernel-doc header.
> > static __always_inline s64
> > arch_atomic64_dec_return_release(atomic64_t *v)
> > {
> > @@ -2811,13 +2613,7 @@ arch_atomic64_dec_return_release(atomic64_t *v)
> > #endif
> >
> > #ifndef arch_atomic64_dec_return
> > -/**
> > - * arch_atomic64_dec_return - Atomic dec with full ordering
> > - * @v: pointer of type atomic64_t
> > - *
> > - * Atomically decrement @v using full ordering.
> > - * Return new value.
> > - */
> > +// Fallback fence omitting duplicate arch_atomic64_dec_return() kernel-doc header.
> > static __always_inline s64
> > arch_atomic64_dec_return(atomic64_t *v)
> > {
> > @@ -2906,13 +2702,7 @@ arch_atomic64_fetch_dec_relaxed(atomic64_t *v)
> > #else /* arch_atomic64_fetch_dec_relaxed */
> >
> > #ifndef arch_atomic64_fetch_dec_acquire
> > -/**
> > - * arch_atomic64_fetch_dec_acquire - Atomic dec with acquire ordering
> > - * @v: pointer of type atomic64_t
> > - *
> > - * Atomically decrement @v using acquire ordering.
> > - * Return old value.
> > - */
> > +// Fallback acquire omitting duplicate arch_atomic64_fetch_dec_acquire() kernel-doc header.
> > static __always_inline s64
> > arch_atomic64_fetch_dec_acquire(atomic64_t *v)
> > {
> > @@ -2924,13 +2714,7 @@ arch_atomic64_fetch_dec_acquire(atomic64_t *v)
> > #endif
> >
> > #ifndef arch_atomic64_fetch_dec_release
> > -/**
> > - * arch_atomic64_fetch_dec_release - Atomic dec with release ordering
> > - * @v: pointer of type atomic64_t
> > - *
> > - * Atomically decrement @v using release ordering.
> > - * Return old value.
> > - */
> > +// Fallback release omitting duplicate arch_atomic64_fetch_dec_release() kernel-doc header.
> > static __always_inline s64
> > arch_atomic64_fetch_dec_release(atomic64_t *v)
> > {
> > @@ -2941,13 +2725,7 @@ arch_atomic64_fetch_dec_release(atomic64_t *v)
> > #endif
> >
> > #ifndef arch_atomic64_fetch_dec
> > -/**
> > - * arch_atomic64_fetch_dec - Atomic dec with full ordering
> > - * @v: pointer of type atomic64_t
> > - *
> > - * Atomically decrement @v using full ordering.
> > - * Return old value.
> > - */
> > +// Fallback fence omitting duplicate arch_atomic64_fetch_dec() kernel-doc header.
> > static __always_inline s64
> > arch_atomic64_fetch_dec(atomic64_t *v)
> > {
> > @@ -3123,14 +2901,7 @@ arch_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
> > #else /* arch_atomic64_fetch_andnot_relaxed */
> >
> > #ifndef arch_atomic64_fetch_andnot_acquire
> > -/**
> > - * arch_atomic64_fetch_andnot_acquire - Atomic andnot with acquire ordering
> > - * @i: value to complement then AND
> > - * @v: pointer of type atomic64_t
> > - *
> > - * Atomically complement then AND @i with @v using acquire ordering.
> > - * Return old value.
> > - */
> > +// Fallback acquire omitting duplicate arch_atomic64_fetch_andnot_acquire() kernel-doc header.
> > static __always_inline s64
> > arch_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
> > {
> > @@ -3142,14 +2913,7 @@ arch_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
> > #endif
> >
> > #ifndef arch_atomic64_fetch_andnot_release
> > -/**
> > - * arch_atomic64_fetch_andnot_release - Atomic andnot with release ordering
> > - * @i: value to complement then AND
> > - * @v: pointer of type atomic64_t
> > - *
> > - * Atomically complement then AND @i with @v using release ordering.
> > - * Return old value.
> > - */
> > +// Fallback release omitting duplicate arch_atomic64_fetch_andnot_release() kernel-doc header.
> > static __always_inline s64
> > arch_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
> > {
> > @@ -3160,14 +2924,7 @@ arch_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
> > #endif
> >
> > #ifndef arch_atomic64_fetch_andnot
> > -/**
> > - * arch_atomic64_fetch_andnot - Atomic andnot with full ordering
> > - * @i: value to complement then AND
> > - * @v: pointer of type atomic64_t
> > - *
> > - * Atomically complement then AND @i with @v using full ordering.
> > - * Return old value.
> > - */
> > +// Fallback fence omitting duplicate arch_atomic64_fetch_andnot() kernel-doc header.
> > static __always_inline s64
> > arch_atomic64_fetch_andnot(s64 i, atomic64_t *v)
> > {
> > @@ -3560,18 +3317,7 @@ arch_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
> > #else /* arch_atomic64_try_cmpxchg_relaxed */
> >
> > #ifndef arch_atomic64_try_cmpxchg_acquire
> > -/**
> > - * arch_atomic64_try_cmpxchg_acquire - Atomic try_cmpxchg with acquire ordering
> > - * @v: pointer of type atomic64_t
> > - * @old: desired old value to match
> > - * @new: new value to put in
> > - *
> > - * Atomically compares @new to *@v, and if equal,
> > - * stores @new to *@v, providing acquire ordering.
> > - * Returns @true if the cmpxchg operation succeeded,
> > - * and false otherwise. Either way, stores the old
> > - * value of *@v to *@old.
> > - */
> > +// Fallback acquire omitting duplicate arch_atomic64_try_cmpxchg_acquire() kernel-doc header.
> > static __always_inline bool
> > arch_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
> > {
> > @@ -3583,18 +3329,7 @@ arch_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
> > #endif
> >
> > #ifndef arch_atomic64_try_cmpxchg_release
> > -/**
> > - * arch_atomic64_try_cmpxchg_release - Atomic try_cmpxchg with release ordering
> > - * @v: pointer of type atomic64_t
> > - * @old: desired old value to match
> > - * @new: new value to put in
> > - *
> > - * Atomically compares @new to *@v, and if equal,
> > - * stores @new to *@v, providing release ordering.
> > - * Returns @true if the cmpxchg operation succeeded,
> > - * and false otherwise. Either way, stores the old
> > - * value of *@v to *@old.
> > - */
> > +// Fallback release omitting duplicate arch_atomic64_try_cmpxchg_release() kernel-doc header.
> > static __always_inline bool
> > arch_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
> > {
> > @@ -3605,18 +3340,7 @@ arch_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
> > #endif
> >
> > #ifndef arch_atomic64_try_cmpxchg
> > -/**
> > - * arch_atomic64_try_cmpxchg - Atomic try_cmpxchg with full ordering
> > - * @v: pointer of type atomic64_t
> > - * @old: desired old value to match
> > - * @new: new value to put in
> > - *
> > - * Atomically compares @new to *@v, and if equal,
> > - * stores @new to *@v, providing full ordering.
> > - * Returns @true if the cmpxchg operation succeeded,
> > - * and false otherwise. Either way, stores the old
> > - * value of *@v to *@old.
> > - */
> > +// Fallback fence omitting duplicate arch_atomic64_try_cmpxchg() kernel-doc header.
> > static __always_inline bool
> > arch_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
> > {
> > @@ -3761,15 +3485,7 @@ arch_atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
> > #else /* arch_atomic64_add_negative_relaxed */
> >
> > #ifndef arch_atomic64_add_negative_acquire
> > -/**
> > - * arch_atomic64_add_negative_acquire - Atomic add_negative with acquire ordering
> > - * @i: value to add
> > - * @v: pointer of type atomic64_t
> > - *
> > - * Atomically add @i with @v using acquire ordering.
> > - * Return @true if the result is negative, or @false when
> > - * the result is greater than or equal to zero.
> > - */
> > +// Fallback acquire omitting duplicate arch_atomic64_add_negative_acquire() kernel-doc header.
> > static __always_inline bool
> > arch_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
> > {
> > @@ -3781,15 +3497,7 @@ arch_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
> > #endif
> >
> > #ifndef arch_atomic64_add_negative_release
> > -/**
> > - * arch_atomic64_add_negative_release - Atomic add_negative with release ordering
> > - * @i: value to add
> > - * @v: pointer of type atomic64_t
> > - *
> > - * Atomically add @i with @v using release ordering.
> > - * Return @true if the result is negative, or @false when
> > - * the result is greater than or equal to zero.
> > - */
> > +// Fallback release omitting duplicate arch_atomic64_add_negative_release() kernel-doc header.
> > static __always_inline bool
> > arch_atomic64_add_negative_release(s64 i, atomic64_t *v)
> > {
> > @@ -3800,15 +3508,7 @@ arch_atomic64_add_negative_release(s64 i, atomic64_t *v)
> > #endif
> >
> > #ifndef arch_atomic64_add_negative
> > -/**
> > - * arch_atomic64_add_negative - Atomic add_negative with full ordering
> > - * @i: value to add
> > - * @v: pointer of type atomic64_t
> > - *
> > - * Atomically add @i with @v using full ordering.
> > - * Return @true if the result is negative, or @false when
> > - * the result is greater than or equal to zero.
> > - */
> > +// Fallback fence omitting duplicate arch_atomic64_add_negative() kernel-doc header.
> > static __always_inline bool
> > arch_atomic64_add_negative(s64 i, atomic64_t *v)
> > {
> > @@ -3958,4 +3658,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v)
> > #endif
> >
> > #endif /* _LINUX_ATOMIC_FALLBACK_H */
> > -// 7c2c97cd48cf9c672efc44b9fed5a37b8970dde4
> > +// 9bf9febc5288ed9539d1b3cfbbc6e36743b74c3b
> > diff --git a/scripts/atomic/chkdup.sh b/scripts/atomic/chkdup.sh
> > new file mode 100644
> > index 000000000000..04bb4f5c5c34
> > --- /dev/null
> > +++ b/scripts/atomic/chkdup.sh
> > @@ -0,0 +1,27 @@
> > +#!/bin/sh
> > +# SPDX-License-Identifier: GPL-2.0
> > +#
> > +# Check to see if the specified atomic is already in use. This is
> > +# done by keeping filenames in the temporary directory specified by the
> > +# environment variable T.
> > +#
> > +# Usage:
> > +# chkdup.sh name fallback
> > +#
> > +# The "name" argument is the name of the function to be generated, and
> > +# the "fallback" argument is the name of the fallback script that is
> > +# doing the generation.
> > +#
> > +# If the function is a duplicate, output a comment saying so and
> > +# exit with non-zero (error) status. Otherwise exit successfully.
> > +
> > +if test -f ${T}/${1}
> > +then
> > + echo // Fallback ${2} omitting duplicate "${1}()" kernel-doc header.
> > + exit 1
> > +fi
> > +touch ${T}/${1}
> > +exit 0
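To see the dedup mechanism in isolation, the logic of chkdup.sh can be exercised standalone (the `chkdup` shell function below is a hypothetical wrapper for illustration, not part of the actual script):

```shell
#!/bin/sh
# Standalone demo of the chkdup.sh mechanism: the first request for a
# given name succeeds; a repeat emits the "omitting duplicate" comment
# and fails, so the caller skips generating a second kernel-doc header.
T=$(mktemp -d)
trap 'rm -rf "$T"' 0

chkdup() {	# args: name fallback
	if test -f "${T}/${1}"
	then
		echo "// Fallback ${2} omitting duplicate ${1}() kernel-doc header."
		return 1
	fi
	touch "${T}/${1}"
	return 0
}

chkdup arch_atomic_inc acquire && echo "first use: emit kernel-doc"
chkdup arch_atomic_inc release || echo "duplicate: kernel-doc skipped"
```

Run under any POSIX shell; the second call prints the same style of comment seen in the generated atomic-arch-fallback.h hunks above.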
> > diff --git a/scripts/atomic/fallbacks/acquire b/scripts/atomic/fallbacks/acquire
> > index 08fc6c30a9ef..a349935ac7fe 100755
> > --- a/scripts/atomic/fallbacks/acquire
> > +++ b/scripts/atomic/fallbacks/acquire
> > @@ -1,5 +1,8 @@
> > +if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_${pfx}${name}${sfx}_acquire acquire
> > +then
> > acqrel=acquire
> > . ${ATOMICDIR}/acqrel.sh
> > +fi
> > cat << EOF
> > static __always_inline ${ret}
> > arch_${atomic}_${pfx}${name}${sfx}_acquire(${params})
> > diff --git a/scripts/atomic/fallbacks/add_negative b/scripts/atomic/fallbacks/add_negative
> > index c032e8bec6e2..b105fdfe8fd1 100755
> > --- a/scripts/atomic/fallbacks/add_negative
> > +++ b/scripts/atomic/fallbacks/add_negative
> > @@ -1,3 +1,5 @@
> > +if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_add_negative${order} add_negative
> > +then
> > cat <<EOF
> > /**
> > * arch_${atomic}_add_negative${order} - Add and test if negative
> > @@ -7,6 +9,9 @@ cat <<EOF
> > * Atomically adds @i to @v and returns @true if the result is negative,
> > * or @false when the result is greater than or equal to zero.
> > */
> > +EOF
> > +fi
> > +cat <<EOF
> > static __always_inline bool
> > arch_${atomic}_add_negative${order}(${int} i, ${atomic}_t *v)
> > {
> > diff --git a/scripts/atomic/fallbacks/add_unless b/scripts/atomic/fallbacks/add_unless
> > index 650fee935aed..d72d382e3757 100755
> > --- a/scripts/atomic/fallbacks/add_unless
> > +++ b/scripts/atomic/fallbacks/add_unless
> > @@ -1,3 +1,5 @@
> > +if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_add_unless add_unless
> > +then
> > cat << EOF
> > /**
> > * arch_${atomic}_add_unless - add unless the number is already a given value
> > @@ -8,6 +10,9 @@ cat << EOF
> > * Atomically adds @a to @v, if @v was not already @u.
> > * Returns @true if the addition was done.
> > */
> > +EOF
> > +fi
> > +cat << EOF
> > static __always_inline bool
> > arch_${atomic}_add_unless(${atomic}_t *v, ${int} a, ${int} u)
> > {
> > diff --git a/scripts/atomic/fallbacks/andnot b/scripts/atomic/fallbacks/andnot
> > index 9fbc0ce75a7c..57b2a187374a 100755
> > --- a/scripts/atomic/fallbacks/andnot
> > +++ b/scripts/atomic/fallbacks/andnot
> > @@ -1,3 +1,5 @@
> > +if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_${pfx}andnot${sfx}${order} andnot
> > +then
> > cat <<EOF
> > /**
> > * arch_${atomic}_${pfx}andnot${sfx}${order} - Atomic and-not
> > @@ -7,6 +9,9 @@ cat <<EOF
> > * Atomically and-not @i with @v using ${docbook_order} ordering.
> > * returning ${docbook_oldnew} value.
> > */
> > +EOF
> > +fi
> > +cat <<EOF
> > static __always_inline ${ret}
> > arch_${atomic}_${pfx}andnot${sfx}${order}(${int} i, ${atomic}_t *v)
> > {
> > diff --git a/scripts/atomic/fallbacks/dec b/scripts/atomic/fallbacks/dec
> > index e99c8edd36a3..e44d3eb96d2b 100755
> > --- a/scripts/atomic/fallbacks/dec
> > +++ b/scripts/atomic/fallbacks/dec
> > @@ -1,3 +1,5 @@
> > +if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_${pfx}dec${sfx}${order} dec
> > +then
> > cat <<EOF
> > /**
> > * arch_${atomic}_${pfx}dec${sfx}${order} - Atomic decrement
> > @@ -6,6 +8,9 @@ cat <<EOF
> > * Atomically decrement @v with ${docbook_order} ordering,
> > * returning ${docbook_oldnew} value.
> > */
> > +EOF
> > +fi
> > +cat <<EOF
> > static __always_inline ${ret}
> > arch_${atomic}_${pfx}dec${sfx}${order}(${atomic}_t *v)
> > {
> > diff --git a/scripts/atomic/fallbacks/dec_and_test b/scripts/atomic/fallbacks/dec_and_test
> > index 3720896b1afc..94f5a6d4827c 100755
> > --- a/scripts/atomic/fallbacks/dec_and_test
> > +++ b/scripts/atomic/fallbacks/dec_and_test
> > @@ -1,3 +1,5 @@
> > +if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_dec_and_test dec_and_test
> > +then
> > cat <<EOF
> > /**
> > * arch_${atomic}_dec_and_test - decrement and test
> > @@ -7,6 +9,9 @@ cat <<EOF
> > * returns @true if the result is 0, or @false for all other
> > * cases.
> > */
> > +EOF
> > +fi
> > +cat <<EOF
> > static __always_inline bool
> > arch_${atomic}_dec_and_test(${atomic}_t *v)
> > {
> > diff --git a/scripts/atomic/fallbacks/dec_if_positive b/scripts/atomic/fallbacks/dec_if_positive
> > index dedbdbc1487d..e27eb71dd1b2 100755
> > --- a/scripts/atomic/fallbacks/dec_if_positive
> > +++ b/scripts/atomic/fallbacks/dec_if_positive
> > @@ -1,3 +1,5 @@
> > +if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_dec_if_positive dec_if_positive
> > +then
> > cat <<EOF
> > /**
> > * arch_${atomic}_dec_if_positive - Atomic decrement if old value is positive
> > @@ -9,6 +11,9 @@ cat <<EOF
> > * there @v will not be decremented, but -4 will be returned. As a result,
> > * if the return value is non-negative, then the value was in fact decremented.
> > */
> > +EOF
> > +fi
> > +cat <<EOF
> > static __always_inline ${ret}
> > arch_${atomic}_dec_if_positive(${atomic}_t *v)
> > {
> > diff --git a/scripts/atomic/fallbacks/dec_unless_positive b/scripts/atomic/fallbacks/dec_unless_positive
> > index c3d01d201c63..ee00fffc5f11 100755
> > --- a/scripts/atomic/fallbacks/dec_unless_positive
> > +++ b/scripts/atomic/fallbacks/dec_unless_positive
> > @@ -1,3 +1,5 @@
> > +if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_dec_unless_positive dec_unless_positive
> > +then
> > cat <<EOF
> > /**
> > * arch_${atomic}_dec_unless_positive - Atomic decrement if old value is non-positive
> > @@ -7,6 +9,9 @@ cat <<EOF
> > * than or equal to zero. Return @true if the decrement happened and
> > * @false otherwise.
> > */
> > +EOF
> > +fi
> > +cat <<EOF
> > static __always_inline bool
> > arch_${atomic}_dec_unless_positive(${atomic}_t *v)
> > {
> > diff --git a/scripts/atomic/fallbacks/fence b/scripts/atomic/fallbacks/fence
> > index 975855dfba25..f4901343cd2b 100755
> > --- a/scripts/atomic/fallbacks/fence
> > +++ b/scripts/atomic/fallbacks/fence
> > @@ -1,5 +1,8 @@
> > +if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_${pfx}${name}${sfx} fence
> > +then
> > acqrel=full
> > . ${ATOMICDIR}/acqrel.sh
> > +fi
> > cat <<EOF
> > static __always_inline ${ret}
> > arch_${atomic}_${pfx}${name}${sfx}(${params})
> > diff --git a/scripts/atomic/fallbacks/fetch_add_unless b/scripts/atomic/fallbacks/fetch_add_unless
> > index a1692df0d514..ec583d340785 100755
> > --- a/scripts/atomic/fallbacks/fetch_add_unless
> > +++ b/scripts/atomic/fallbacks/fetch_add_unless
> > @@ -1,3 +1,5 @@
> > +if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_fetch_add_unless fetch_add_unless
> > +then
> > cat << EOF
> > /**
> > * arch_${atomic}_fetch_add_unless - add unless the number is already a given value
> > @@ -8,6 +10,9 @@ cat << EOF
> > * Atomically adds @a to @v, so long as @v was not already @u.
> > * Returns original value of @v.
> > */
> > +EOF
> > +fi
> > +cat << EOF
> > static __always_inline ${int}
> > arch_${atomic}_fetch_add_unless(${atomic}_t *v, ${int} a, ${int} u)
> > {
> > diff --git a/scripts/atomic/fallbacks/inc b/scripts/atomic/fallbacks/inc
> > index 3f2c0730cd0c..bb1d5ea6846c 100755
> > --- a/scripts/atomic/fallbacks/inc
> > +++ b/scripts/atomic/fallbacks/inc
> > @@ -1,3 +1,5 @@
> > +if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_${pfx}inc${sfx}${order} inc
> > +then
> > cat <<EOF
> > /**
> > * arch_${atomic}_${pfx}inc${sfx}${order} - Atomic increment
> > @@ -6,6 +8,9 @@ cat <<EOF
> > * Atomically increment @v with ${docbook_order} ordering,
> > * returning ${docbook_oldnew} value.
> > */
> > +EOF
> > +fi
> > +cat <<EOF
> > static __always_inline ${ret}
> > arch_${atomic}_${pfx}inc${sfx}${order}(${atomic}_t *v)
> > {
> > diff --git a/scripts/atomic/fallbacks/inc_and_test b/scripts/atomic/fallbacks/inc_and_test
> > index cc3ac1dde508..dd74f6a5ca4a 100755
> > --- a/scripts/atomic/fallbacks/inc_and_test
> > +++ b/scripts/atomic/fallbacks/inc_and_test
> > @@ -1,3 +1,5 @@
> > +if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_inc_and_test inc_and_test
> > +then
> > cat <<EOF
> > /**
> > * arch_${atomic}_inc_and_test - increment and test
> > @@ -7,6 +9,9 @@ cat <<EOF
> > * and returns @true if the result is zero, or @false for all
> > * other cases.
> > */
> > +EOF
> > +fi
> > +cat <<EOF
> > static __always_inline bool
> > arch_${atomic}_inc_and_test(${atomic}_t *v)
> > {
> > diff --git a/scripts/atomic/fallbacks/inc_not_zero b/scripts/atomic/fallbacks/inc_not_zero
> > index 891fa3c057f6..38e2c13dab62 100755
> > --- a/scripts/atomic/fallbacks/inc_not_zero
> > +++ b/scripts/atomic/fallbacks/inc_not_zero
> > @@ -1,3 +1,5 @@
> > +if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_inc_not_zero inc_not_zero
> > +then
> > cat <<EOF
> > /**
> > * arch_${atomic}_inc_not_zero - increment unless the number is zero
> > @@ -6,6 +8,9 @@ cat <<EOF
> > * Atomically increments @v by 1, if @v is non-zero.
> > * Returns @true if the increment was done.
> > */
> > +EOF
> > +fi
> > +cat <<EOF
> > static __always_inline bool
> > arch_${atomic}_inc_not_zero(${atomic}_t *v)
> > {
> > diff --git a/scripts/atomic/fallbacks/inc_unless_negative b/scripts/atomic/fallbacks/inc_unless_negative
> > index 98830b0dcdb1..2dc853c4e5b9 100755
> > --- a/scripts/atomic/fallbacks/inc_unless_negative
> > +++ b/scripts/atomic/fallbacks/inc_unless_negative
> > @@ -1,3 +1,5 @@
> > +if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_inc_unless_negative inc_unless_negative
> > +then
> > cat <<EOF
> > /**
> > * arch_${atomic}_inc_unless_negative - Atomic increment if old value is non-negative
> > @@ -7,6 +9,9 @@ cat <<EOF
> > * than or equal to zero. Return @true if the increment happened and
> > * @false otherwise.
> > */
> > +EOF
> > +fi
> > +cat <<EOF
> > static __always_inline bool
> > arch_${atomic}_inc_unless_negative(${atomic}_t *v)
> > {
> > diff --git a/scripts/atomic/fallbacks/read_acquire b/scripts/atomic/fallbacks/read_acquire
> > index 779f40c07018..680cd43080cb 100755
> > --- a/scripts/atomic/fallbacks/read_acquire
> > +++ b/scripts/atomic/fallbacks/read_acquire
> > @@ -1,3 +1,5 @@
> > +if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_read_acquire read_acquire
> > +then
> > cat <<EOF
> > /**
> > * arch_${atomic}_read_acquire - Atomic load acquire
> > @@ -6,6 +8,9 @@ cat <<EOF
> > * Atomically load from *@v with acquire ordering, returning the value
> > * loaded.
> > */
> > +EOF
> > +fi
> > +cat <<EOF
> > static __always_inline ${ret}
> > arch_${atomic}_read_acquire(const ${atomic}_t *v)
> > {
> > diff --git a/scripts/atomic/fallbacks/release b/scripts/atomic/fallbacks/release
> > index bce3a1cbd497..a1604df66ece 100755
> > --- a/scripts/atomic/fallbacks/release
> > +++ b/scripts/atomic/fallbacks/release
> > @@ -1,5 +1,8 @@
> > +if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_${pfx}${name}${sfx}_release release
> > +then
> > acqrel=release
> > . ${ATOMICDIR}/acqrel.sh
> > +fi
> > cat <<EOF
> > static __always_inline ${ret}
> > arch_${atomic}_${pfx}${name}${sfx}_release(${params})
> > diff --git a/scripts/atomic/fallbacks/set_release b/scripts/atomic/fallbacks/set_release
> > index 46effb6203e5..2a65d3b29f4b 100755
> > --- a/scripts/atomic/fallbacks/set_release
> > +++ b/scripts/atomic/fallbacks/set_release
> > @@ -1,3 +1,5 @@
> > +if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_set_release set_release
> > +then
> > cat <<EOF
> > /**
> > * arch_${atomic}_set_release - Atomic store release
> > @@ -6,6 +8,9 @@ cat <<EOF
> > *
> > * Atomically store @i into *@v with release ordering.
> > */
> > +EOF
> > +fi
> > +cat <<EOF
> > static __always_inline void
> > arch_${atomic}_set_release(${atomic}_t *v, ${int} i)
> > {
> > diff --git a/scripts/atomic/fallbacks/sub_and_test b/scripts/atomic/fallbacks/sub_and_test
> > index 204282e260ea..0397b0e92192 100755
> > --- a/scripts/atomic/fallbacks/sub_and_test
> > +++ b/scripts/atomic/fallbacks/sub_and_test
> > @@ -1,3 +1,5 @@
> > +if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_sub_and_test sub_and_test
> > +then
> > cat <<EOF
> > /**
> > * arch_${atomic}_sub_and_test - subtract value from variable and test result
> > @@ -8,6 +10,9 @@ cat <<EOF
> > * @true if the result is zero, or @false for all
> > * other cases.
> > */
> > +EOF
> > +fi
> > +cat <<EOF
> > static __always_inline bool
> > arch_${atomic}_sub_and_test(${int} i, ${atomic}_t *v)
> > {
> > diff --git a/scripts/atomic/fallbacks/try_cmpxchg b/scripts/atomic/fallbacks/try_cmpxchg
> > index baf7412f9bf4..e08c5962dd83 100755
> > --- a/scripts/atomic/fallbacks/try_cmpxchg
> > +++ b/scripts/atomic/fallbacks/try_cmpxchg
> > @@ -1,3 +1,5 @@
> > +if /bin/sh ${ATOMICDIR}/chkdup.sh arch_${atomic}_try_cmpxchg${order} try_cmpxchg
> > +then
> > cat <<EOF
> > /**
> > * arch_${atomic}_try_cmpxchg${order} - Atomic cmpxchg with bool return value
> > @@ -9,6 +11,9 @@ cat <<EOF
> > * providing ${docbook_order} ordering.
> > * Returns @true if the cmpxchg operation succeeded, and false otherwise.
> > */
> > +EOF
> > +fi
> > +cat <<EOF
> > static __always_inline bool
> > arch_${atomic}_try_cmpxchg${order}(${atomic}_t *v, ${int} *old, ${int} new)
> > {
> > diff --git a/scripts/atomic/gen-atomics.sh b/scripts/atomic/gen-atomics.sh
> > index 5b98a8307693..69bf3754df5a 100755
> > --- a/scripts/atomic/gen-atomics.sh
> > +++ b/scripts/atomic/gen-atomics.sh
> > @@ -3,6 +3,10 @@
> > #
> > # Generate atomic headers
> >
> > +T="`mktemp -d ${TMPDIR-/tmp}/gen-atomics.sh.XXXXXX`"
> > +trap 'rm -rf $T' 0
> > +export T
> > +
> > ATOMICDIR=$(dirname $0)
> > ATOMICTBL=${ATOMICDIR}/atomics.tbl
> > LINUXDIR=${ATOMICDIR}/../..
> > --
> > 2.40.1
> >

2023-05-11 19:59:07

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH locking/atomic 18/19] locking/atomic: Refrain from generating duplicate fallback kernel-doc

On Thu, May 11, 2023 at 06:10:00PM +0100, Mark Rutland wrote:
> Hi Paul
>
> On Wed, May 10, 2023 at 11:17:16AM -0700, Paul E. McKenney wrote:
> > The gen-atomics.sh script currently generates 42 duplicate definitions:
> >
> > arch_atomic64_add_negative
> > arch_atomic64_add_negative_acquire
> > arch_atomic64_add_negative_release
>
> [...]
>
> > These duplicates are presumably to handle different architectures
> > generating hand-coded definitions for different subsets of the atomic
> > operations.
>
> Yup, for each FULL/ACQUIRE/RELEASE/RELAXED variant of each op, we allow the
> architecture to choose between:
>
> * Providing the ordering variant directly
> * Providing the FULL ordering variant only
> * Providing the RELAXED ordering variant only
> * Providing an equivalent op that we can build from
>
> > However, generating duplicate kernel-doc headers is undesirable.
>
> Understood -- I hadn't understood that duplication was a problem when this was
> originally written.
>
> The way this is currently done is largely an artifact of our ifdeffery (and the
> kerneldoc for fallbacks living in the fallback templates), and I think we can
> fix both of those.
>
> > Therefore, generate only the first kernel-doc definition in a group
> > of duplicates. A comment indicates the name of the function and the
> > fallback script that generated it.
>
> I'm not keen on this approach, especially with the chkdup.sh script -- it feels
> like we're working around an underlying structural issue.
>
> I think that we can restructure the ifdeffery so that each ordering variant
> gets its own ifdeffery, and then we could place the kerneldoc immediately above
> that, e.g.
>
> /**
> * arch_atomic_inc_return_release()
> *
> * [ full kerneldoc block here ]
> */
> #if defined(arch_atomic_inc_return_release)
> /* defined in arch code */
> #elif defined(arch_atomic_inc_return_relaxed)
> [ define in terms of arch_atomic_inc_return_relaxed ]
> #elif defined(arch_atomic_inc_return)
> [ define in terms of arch_atomic_inc_return ]
> #else
> [ define in terms of arch_atomic_fetch_inc_release ]
> #endif
>
> ... with similar for the mandatory ops that each arch must provide, e.g.
>
> /**
> * arch_atomic_or()
> *
> * [ full kerneldoc block here ]
> */
> /* arch_atomic_or() is mandatory -- architectures must define it! */
>
> I had a go at that restructuring today, and while local build testing indicates
> I haven't got it quite right, I think it's possible:
>
> https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=atomics/fallback-rework
>
> Does that sound ok to you?

If the end result is simpler scripts, sure.

I'm not at all keen to complicate the scripts for something daft like
kernel-doc. The last thing we need is documentation style weenies making
an unholy mess of things.

2023-05-11 20:08:45

by Paul E. McKenney

[permalink] [raw]
Subject: Re: [PATCH locking/atomic 18/19] locking/atomic: Refrain from generating duplicate fallback kernel-doc

On Thu, May 11, 2023 at 09:38:56PM +0200, Peter Zijlstra wrote:
> On Thu, May 11, 2023 at 06:10:00PM +0100, Mark Rutland wrote:
> > Hi Paul
> >
> > On Wed, May 10, 2023 at 11:17:16AM -0700, Paul E. McKenney wrote:
> > > The gen-atomics.sh script currently generates 42 duplicate definitions:
> > >
> > > arch_atomic64_add_negative
> > > arch_atomic64_add_negative_acquire
> > > arch_atomic64_add_negative_release
> >
> > [...]
> >
> > > These duplicates are presumably to handle different architectures
> > > generating hand-coded definitions for different subsets of the atomic
> > > operations.
> >
> > Yup, for each FULL/ACQUIRE/RELEASE/RELAXED variant of each op, we allow the
> > architecture to choose between:
> >
> > * Providing the ordering variant directly
> > * Providing the FULL ordering variant only
> > * Providing the RELAXED ordering variant only
> > * Providing an equivalent op that we can build from
> >
> > > However, generating duplicate kernel-doc headers is undesirable.
> >
> > Understood -- I hadn't understood that duplication was a problem when this was
> > originally written.
> >
> > The way this is currently done is largely an artifact of our ifdeffery (and the
> > kerneldoc for fallbacks living in the fallback templates), and I think we can
> > fix both of those.
> >
> > > Therefore, generate only the first kernel-doc definition in a group
> > > of duplicates. A comment indicates the name of the function and the
> > > fallback script that generated it.
> >
> > I'm not keen on this approach, especially with the chkdup.sh script -- it feels
> > like we're working around an underlying structural issue.
> >
> > I think that we can restructure the ifdeffery so that each ordering variant
> > gets its own ifdeffery, and then we could place the kerneldoc immediately above
> > that, e.g.
> >
> > /**
> > * arch_atomic_inc_return_release()
> > *
> > * [ full kerneldoc block here ]
> > */
> > #if defined(arch_atomic_inc_return_release)
> > /* defined in arch code */
> > #elif defined(arch_atomic_inc_return_relaxed)
> > [ define in terms of arch_atomic_inc_return_relaxed ]
> > #elif defined(arch_atomic_inc_return)
> > [ define in terms of arch_atomic_inc_return ]
> > #else
> > [ define in terms of arch_atomic_fetch_inc_release ]
> > #endif
> >
> > ... with similar for the mandatory ops that each arch must provide, e.g.
> >
> > /**
> > * arch_atomic_or()
> > *
> > * [ full kerneldoc block here ]
> > */
> > /* arch_atomic_or() is mandatory -- architectures must define it! */
> >
> > I had a go at that restructuring today, and while local build testing indicates
> > I haven't got it quite right, I think it's possible:
> >
> > https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=atomics/fallback-rework
> >
> > Does that sound ok to you?
>
> If the end result is simpler scripts, sure.
>
> I'm not at all keen to complicate the scripts for something daft like
> kernel-doc. The last thing we need is documentation style weenies making
> an unholy mess of things.

Do you have an alternative suggestion for generating the kernel-doc?
The current lack of it is problematic.

Thanx, Paul
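[The four-way choice Mark lists above (ordering variant directly, FULL only, RELAXED only, or an equivalent op to build from) can be sketched as a small shell helper. This is a loose illustration of the selection logic, not the actual gen-atomics.sh code; the function names and the `arch_defines` variable are hypothetical.]

```shell
#!/bin/sh
# Hypothetical sketch of the fallback selection the generator performs
# for one ordering variant of one op. "defined" is simulated here by a
# space-separated list of symbols the architecture provides.

arch_defines="arch_atomic_inc_return_relaxed arch_atomic_or"

is_defined() {
	case " ${arch_defines} " in
	*" $1 "*) return 0 ;;
	*) return 1 ;;
	esac
}

# Decide how arch_atomic_inc_return_release gets its definition,
# mirroring the #if/#elif/#else chain proposed above.
pick_fallback() {
	if is_defined arch_atomic_inc_return_release; then
		echo "arch code"
	elif is_defined arch_atomic_inc_return_relaxed; then
		echo "from relaxed + release barrier"
	elif is_defined arch_atomic_inc_return; then
		echo "from fully-ordered variant"
	else
		echo "from arch_atomic_fetch_inc_release"
	fi
}

pick_fallback   # prints: from relaxed + release barrier
```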

2023-05-11 20:25:28

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH locking/atomic 18/19] locking/atomic: Refrain from generating duplicate fallback kernel-doc

On Thu, May 11, 2023 at 12:53:46PM -0700, Paul E. McKenney wrote:
> Do you have an alternative suggestion for generating the kernel-doc?
> The current lack of it is problematic.

I've never found a lack of kernel-doc to be a problem. And I'm very much
against complicating the scripts to add it.

Also, there's Documentation/atomic_t.txt

2023-05-11 20:47:56

by Paul E. McKenney

Subject: Re: [PATCH locking/atomic 18/19] locking/atomic: Refrain from generating duplicate fallback kernel-doc

On Thu, May 11, 2023 at 10:01:42PM +0200, Peter Zijlstra wrote:
> On Thu, May 11, 2023 at 12:53:46PM -0700, Paul E. McKenney wrote:
> > Do you have an alternative suggestion for generating the kernel-doc?
> > The current lack of it is problematic.
>
> I've never found a lack of kernel-doc to be a problem. And I'm very much
> against complicating the scripts to add it.

I am sure that you have not recently found the lack of kernel-doc for
the atomic operations to be a problem, given that you wrote many of
these functions.

OK, you mentioned concerns about documentation people nitpicking. This
can be dealt with. The added scripting is not that large or complex.

> Also, there's Documentation/atomic_t.txt

Yes, if you very carefully read that document end to end, correctly
interpreting it all, you will know what you need to. Of course, first
you have to find it. And then you must avoid any lapses while reading
it while under pressure. Not particularly friendly to someone trying
to chase a bug.

Thanx, Paul

2023-05-11 20:48:41

by Peter Zijlstra

Subject: Re: [PATCH locking/atomic 18/19] locking/atomic: Refrain from generating duplicate fallback kernel-doc

On Wed, May 10, 2023 at 11:17:16AM -0700, Paul E. McKenney wrote:
> The gen-atomics.sh script currently generates 42 duplicate definitions:
>
> arch_atomic64_add_negative
> arch_atomic64_add_negative_acquire
> arch_atomic64_add_negative_release
> arch_atomic64_dec_return
> arch_atomic64_dec_return_acquire
> arch_atomic64_dec_return_release
> arch_atomic64_fetch_andnot
> arch_atomic64_fetch_andnot_acquire
> arch_atomic64_fetch_andnot_release
> arch_atomic64_fetch_dec
> arch_atomic64_fetch_dec_acquire
> arch_atomic64_fetch_dec_release
> arch_atomic64_fetch_inc
> arch_atomic64_fetch_inc_acquire
> arch_atomic64_fetch_inc_release
> arch_atomic64_inc_return
> arch_atomic64_inc_return_acquire
> arch_atomic64_inc_return_release
> arch_atomic64_try_cmpxchg
> arch_atomic64_try_cmpxchg_acquire
> arch_atomic64_try_cmpxchg_release
> arch_atomic_add_negative
> arch_atomic_add_negative_acquire
> arch_atomic_add_negative_release
> arch_atomic_dec_return
> arch_atomic_dec_return_acquire
> arch_atomic_dec_return_release
> arch_atomic_fetch_andnot
> arch_atomic_fetch_andnot_acquire
> arch_atomic_fetch_andnot_release
> arch_atomic_fetch_dec
> arch_atomic_fetch_dec_acquire
> arch_atomic_fetch_dec_release
> arch_atomic_fetch_inc
> arch_atomic_fetch_inc_acquire
> arch_atomic_fetch_inc_release
> arch_atomic_inc_return
> arch_atomic_inc_return_acquire
> arch_atomic_inc_return_release
> arch_atomic_try_cmpxchg
> arch_atomic_try_cmpxchg_acquire
> arch_atomic_try_cmpxchg_release
>
> These duplicates are presumably to handle different architectures
> generating hand-coded definitions for different subsets of the atomic
> operations. However, generating duplicate kernel-doc headers is
> undesirable.
>
> Therefore, generate only the first kernel-doc definition in a group
> of duplicates. A comment indicates the name of the function and the
> fallback script that generated it.

So my canonical solution to fixing kernel-doc related problems is this
trivial regex:

s/\/\*\*/\/\*/

works every time.

And is *much* simpler than this:

> scripts/atomic/chkdup.sh | 27 ++
> scripts/atomic/fallbacks/acquire | 3 +
> scripts/atomic/fallbacks/add_negative | 5 +
> scripts/atomic/fallbacks/add_unless | 5 +
> scripts/atomic/fallbacks/andnot | 5 +
> scripts/atomic/fallbacks/dec | 5 +
> scripts/atomic/fallbacks/dec_and_test | 5 +
> scripts/atomic/fallbacks/dec_if_positive | 5 +
> scripts/atomic/fallbacks/dec_unless_positive | 5 +
> scripts/atomic/fallbacks/fence | 3 +
> scripts/atomic/fallbacks/fetch_add_unless | 5 +
> scripts/atomic/fallbacks/inc | 5 +
> scripts/atomic/fallbacks/inc_and_test | 5 +
> scripts/atomic/fallbacks/inc_not_zero | 5 +
> scripts/atomic/fallbacks/inc_unless_negative | 5 +
> scripts/atomic/fallbacks/read_acquire | 5 +
> scripts/atomic/fallbacks/release | 3 +
> scripts/atomic/fallbacks/set_release | 5 +
> scripts/atomic/fallbacks/sub_and_test | 5 +
> scripts/atomic/fallbacks/try_cmpxchg | 5 +
> scripts/atomic/gen-atomics.sh | 4 +
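[For concreteness, the regex above turns a kernel-doc opener (`/**`) into an ordinary comment opener (`/*`), which makes the kernel-doc tooling skip the block entirely. Applied with sed it would look like the following; the `demote_kerneldoc` wrapper name is made up for illustration.]

```shell
# Demote kernel-doc comments (/**) to ordinary comments (/*) so that
# the kernel-doc tooling no longer parses them.
demote_kerneldoc() {
	sed 's/\/\*\*/\/\*/'
}

printf '/**\n * arch_atomic_or() - atomic OR\n */\n' | demote_kerneldoc
```

Only the `/**` opener is rewritten; the comment body and closing `*/` pass through unchanged.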

2023-05-11 21:01:35

by Peter Zijlstra

Subject: Re: [PATCH locking/atomic 18/19] locking/atomic: Refrain from generating duplicate fallback kernel-doc

On Thu, May 11, 2023 at 01:25:18PM -0700, Paul E. McKenney wrote:
> On Thu, May 11, 2023 at 10:01:42PM +0200, Peter Zijlstra wrote:
> > On Thu, May 11, 2023 at 12:53:46PM -0700, Paul E. McKenney wrote:
> > > Do you have an alternative suggestion for generating the kernel-doc?
> > > The current lack of it is problematic.
> >
> > I've never found a lack of kernel-doc to be a problem. And I'm very much
> > against complicating the scripts to add it.
>
> I am sure that you have not recently found the lack of kernel-doc for
> the atomic operations to be a problem, given that you wrote many of
> these functions.

Sure; but I meant in general -- I've *never* used kernel-doc. Comments I
occasionally read, and sometimes they're not even broken either, but
kernel-doc, nope.

> OK, you mentioned concerns about documentation people nitpicking. This
> can be dealt with. The added scripting is not that large or complex.
>
> > Also, there's Documentation/atomic_t.txt
>
> Yes, if you very carefully read that document end to end, correctly
> interpreting it all, you will know what you need to. Of course, first
> you have to find it. And then you must avoid any lapses while reading
> it while under pressure. Not particularly friendly to someone trying
> to chase a bug.

It's either brief and terse or tediously long -- I vastly prefer the
former, my brain can much better parse structure than English prose.

Also, I find, pressure is never conducive to anything, except perhaps
cooking rice and steam trains (because nothing is as delicious as a
pressure cooked train -- oh wait).

Add enough pressure and the human brain reduces to driven and can't read
even the most coherent of text no matter how easy to find.

In such situations it's for the manager to take the pressure away and
the engineer to think in relative peace.

2023-05-11 21:04:28

by Peter Zijlstra

Subject: Re: [PATCH locking/atomic 18/19] locking/atomic: Refrain from generating duplicate fallback kernel-doc

On Thu, May 11, 2023 at 10:46:33PM +0200, Peter Zijlstra wrote:
> On Thu, May 11, 2023 at 01:25:18PM -0700, Paul E. McKenney wrote:
> > On Thu, May 11, 2023 at 10:01:42PM +0200, Peter Zijlstra wrote:
> > > On Thu, May 11, 2023 at 12:53:46PM -0700, Paul E. McKenney wrote:
> > > > Do you have an alternative suggestion for generating the kernel-doc?
> > > > The current lack of it is problematic.
> > >
> > > I've never found a lack of kernel-doc to be a problem. And I'm very much
> > > against complicating the scripts to add it.
> >
> > I am sure that you have not recently found the lack of kernel-doc for
> > the atomic operations to be a problem, given that you wrote many of
> > these functions.
>
> Sure; but I meant in general -- I've *never* used kernel-doc. Comments I
> occasionally read, and sometimes they're not even broken either, but
> kernel-doc, nope.
>
> > OK, you mentioned concerns about documentation people nitpicking. This
> > can be dealt with. The added scripting is not that large or complex.
> >
> > > Also, there's Documentation/atomic_t.txt
> >
> > Yes, if you very carefully read that document end to end, correctly
> > interpreting it all, you will know what you need to. Of course, first
> > you have to find it. And then you must avoid any lapses while reading
> > it while under pressure. Not particularly friendly to someone trying
> > to chase a bug.
>
> It's either brief and terse or tediously long -- I vastly prefer the
> former, my brain can much better parse structure than English prose.
>
> Also, I find, pressure is never conducive to anything, except perhaps
> cooking rice and steam trains (because nothing is as delicious as a
> pressure cooked train -- oh wait).
>
> Add enough pressure and the human brain reduces to driven and can't read

Just in case it weren't clear: s/driven/drivel/

> even the most coherent of text no matter how easy to find.
>
> In such situations it's for the manager to take the pressure away and
> the engineer to think in relative peace.

2023-05-11 21:06:40

by Paul E. McKenney

Subject: Re: [PATCH locking/atomic 18/19] locking/atomic: Refrain from generating duplicate fallback kernel-doc

On Thu, May 11, 2023 at 10:18:37PM +0200, Peter Zijlstra wrote:
> On Wed, May 10, 2023 at 11:17:16AM -0700, Paul E. McKenney wrote:
> > The gen-atomics.sh script currently generates 42 duplicate definitions:
> >
> > arch_atomic64_add_negative
> > arch_atomic64_add_negative_acquire
> > arch_atomic64_add_negative_release
> > arch_atomic64_dec_return
> > arch_atomic64_dec_return_acquire
> > arch_atomic64_dec_return_release
> > arch_atomic64_fetch_andnot
> > arch_atomic64_fetch_andnot_acquire
> > arch_atomic64_fetch_andnot_release
> > arch_atomic64_fetch_dec
> > arch_atomic64_fetch_dec_acquire
> > arch_atomic64_fetch_dec_release
> > arch_atomic64_fetch_inc
> > arch_atomic64_fetch_inc_acquire
> > arch_atomic64_fetch_inc_release
> > arch_atomic64_inc_return
> > arch_atomic64_inc_return_acquire
> > arch_atomic64_inc_return_release
> > arch_atomic64_try_cmpxchg
> > arch_atomic64_try_cmpxchg_acquire
> > arch_atomic64_try_cmpxchg_release
> > arch_atomic_add_negative
> > arch_atomic_add_negative_acquire
> > arch_atomic_add_negative_release
> > arch_atomic_dec_return
> > arch_atomic_dec_return_acquire
> > arch_atomic_dec_return_release
> > arch_atomic_fetch_andnot
> > arch_atomic_fetch_andnot_acquire
> > arch_atomic_fetch_andnot_release
> > arch_atomic_fetch_dec
> > arch_atomic_fetch_dec_acquire
> > arch_atomic_fetch_dec_release
> > arch_atomic_fetch_inc
> > arch_atomic_fetch_inc_acquire
> > arch_atomic_fetch_inc_release
> > arch_atomic_inc_return
> > arch_atomic_inc_return_acquire
> > arch_atomic_inc_return_release
> > arch_atomic_try_cmpxchg
> > arch_atomic_try_cmpxchg_acquire
> > arch_atomic_try_cmpxchg_release
> >
> > These duplicates are presumably to handle different architectures
> > generating hand-coded definitions for different subsets of the atomic
> > operations. However, generating duplicate kernel-doc headers is
> > undesirable.
> >
> > Therefore, generate only the first kernel-doc definition in a group
> > of duplicates. A comment indicates the name of the function and the
> > fallback script that generated it.
>
> So my canonical solution to fixing kernel-doc related problems is this
> trivial regex:
>
> s/\/\*\*/\/\*/
>
> works every time.

Can't say that I am a fan of that approach.

> And is *much* simpler than this:
>
> > scripts/atomic/chkdup.sh | 27 ++
> > scripts/atomic/fallbacks/acquire | 3 +
> > scripts/atomic/fallbacks/add_negative | 5 +
> > scripts/atomic/fallbacks/add_unless | 5 +
> > scripts/atomic/fallbacks/andnot | 5 +
> > scripts/atomic/fallbacks/dec | 5 +
> > scripts/atomic/fallbacks/dec_and_test | 5 +
> > scripts/atomic/fallbacks/dec_if_positive | 5 +
> > scripts/atomic/fallbacks/dec_unless_positive | 5 +
> > scripts/atomic/fallbacks/fence | 3 +
> > scripts/atomic/fallbacks/fetch_add_unless | 5 +
> > scripts/atomic/fallbacks/inc | 5 +
> > scripts/atomic/fallbacks/inc_and_test | 5 +
> > scripts/atomic/fallbacks/inc_not_zero | 5 +
> > scripts/atomic/fallbacks/inc_unless_negative | 5 +
> > scripts/atomic/fallbacks/read_acquire | 5 +
> > scripts/atomic/fallbacks/release | 3 +
> > scripts/atomic/fallbacks/set_release | 5 +
> > scripts/atomic/fallbacks/sub_and_test | 5 +
> > scripts/atomic/fallbacks/try_cmpxchg | 5 +
> > scripts/atomic/gen-atomics.sh | 4 +

This is not a huge addition, now is it?

Thanx, Paul
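[The chkdup.sh script in the diffstat above presumably flags kernel-doc headers that would be emitted more than once. A minimal sketch of that idea (a guess at its shape, not the actual 27-line script) could use sort/uniq to report repeated function names.]

```shell
# Hypothetical sketch: list arch_atomic* function names that appear
# more than once in generated output read from stdin.
find_dup_kerneldoc() {
	grep -o 'arch_atomic[a-z0-9_]*' | sort | uniq -d
}

printf 'arch_atomic_inc_return\narch_atomic_dec_return\narch_atomic_inc_return\n' |
	find_dup_kerneldoc   # prints: arch_atomic_inc_return
```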

2023-05-11 21:37:39

by Paul E. McKenney

Subject: Re: [PATCH locking/atomic 18/19] locking/atomic: Refrain from generating duplicate fallback kernel-doc

On Thu, May 11, 2023 at 10:48:06PM +0200, Peter Zijlstra wrote:
> On Thu, May 11, 2023 at 10:46:33PM +0200, Peter Zijlstra wrote:
> > On Thu, May 11, 2023 at 01:25:18PM -0700, Paul E. McKenney wrote:
> > > On Thu, May 11, 2023 at 10:01:42PM +0200, Peter Zijlstra wrote:
> > > > On Thu, May 11, 2023 at 12:53:46PM -0700, Paul E. McKenney wrote:
> > > > > Do you have an alternative suggestion for generating the kernel-doc?
> > > > > The current lack of it is problematic.
> > > >
> > > > I've never found a lack of kernel-doc to be a problem. And I'm very much
> > > > against complicating the scripts to add it.
> > >
> > > I am sure that you have not recently found the lack of kernel-doc for
> > > the atomic operations to be a problem, given that you wrote many of
> > > these functions.
> >
> > Sure; but I meant in general -- I've *never* used kernel-doc. Comments I
> > occasionally read, and sometimes they're not even broken either, but
> > kernel-doc, nope.

I am not arguing that *you* need kernel-doc, and I must admit that I
also tend to look much more carefully at the code than the comments.
But not everyone has your level of code-reading talent, nor does everyone
have my half century of practice reading code.

(OK, OK, so it won't really be a half century until this coming
September!)

> > > OK, you mentioned concerns about documentation people nitpicking. This
> > > can be dealt with. The added scripting is not that large or complex.
> > >
> > > > Also, there's Documentation/atomic_t.txt
> > >
> > > Yes, if you very carefully read that document end to end, correctly
> > > interpreting it all, you will know what you need to. Of course, first
> > > you have to find it. And then you must avoid any lapses while reading
> > > it while under pressure. Not particularly friendly to someone trying
> > > to chase a bug.
> >
> > It's either brief and terse or tediously long -- I vastly prefer the
> > former, my brain can much better parse structure than English prose.

Agreed, English prose does have its challenges, no two ways about it.

But in order to successfully communicate with someone, it is necessary
to start where that person is, not where we might prefer them to be.
And there are quite a few people who benefit from a bit of English prose.

As long as we are picking on languages, there are some who assert that
Dutch is the easiest language for native English speakers to learn.
I know nothing myself, but for the purposes of this discussion, I will
assume that this is because Dutch uses similar words but is better
structured than is English. As opposed to, say, Dutch being messed up in
almost exactly the same ways that English is. ;-)

> > Also, I find, pressure is never conducive to anything, except perhaps
> > cooking rice and steam trains (because nothing is as delicious as a
> > pressure cooked train -- oh wait).
> >
> > Add enough pressure and the human brain reduces to driven and can't read
>
> Just in case it weren't clear: s/driven/drivel/

You know, my brain auto-corrected and I didn't even notice. And I still
sometimes wonder why I fail to spot bugs. ;-)

> > even the most coherent of text no matter how easy to find.
> >
> > In such situations it's for the manager to take the pressure away and
> > the engineer to think in relative peace.

I won't argue, having forced that a few times back in past lives.
"No, you don't have to get this done by Friday, and if anyone tells
you differently, you tell them to talk to me." In one memorable case,
once pressure was relieved, they actually got it done before Friday.

Still, that is no reason to make that poor engineer's life even harder.
After all, there never have been more than a handful of managers that
I had that kind of influence over. Plus there is the occasional true
emergency, though admittedly far fewer true emergencies than proclaimed
emergencies.

But suppose your response to someone nitpicking the atomic-operation
kernel-doc could simply be "Not my problem, go talk to Paul." I would
of course be plenty happy to respond appropriately to people who expect
to read a long series of kernel-doc entries for atomic operations as
if it was a novel. For some appropriate definition of "appropriately",
of course.

Would that help?

Thanx, Paul

2023-05-12 13:21:57

by Mark Rutland

Subject: Re: [PATCH locking/atomic 18/19] locking/atomic: Refrain from generating duplicate fallback kernel-doc

On Thu, May 11, 2023 at 12:12:16PM -0700, Paul E. McKenney wrote:
> On Thu, May 11, 2023 at 06:10:00PM +0100, Mark Rutland wrote:
> > I think that we can restructure the ifdeffery so that each ordering variant
> > gets its own ifdeffery, and then we could place the kerneldoc immediately above
> > that, e.g.
> >
> > /**
> > * arch_atomic_inc_return_release()
> > *
> > * [ full kerneldoc block here ]
> > */
> > #if defined(arch_atomic_inc_return_release)
> > /* defined in arch code */
> > #elif defined(arch_atomic_inc_return_relaxed)
> > [ define in terms of arch_atomic_inc_return_relaxed ]
> > #elif defined(arch_atomic_inc_return)
> > [ define in terms of arch_atomic_inc_return ]
> > #else
> > [ define in terms of arch_atomic_fetch_inc_release ]
> > #endif
> >
> > ... with similar for the mandatory ops that each arch must provide, e.g.
> >
> > /**
> > * arch_atomic_or()
> > *
> > * [ full kerneldoc block here ]
> > */
> > /* arch_atomic_or() is mandatory -- architectures must define it! */
> >
> > I had a go at that restructuring today, and while local build testing indicates
> > I haven't got it quite right, I think it's possible:
> >
> > https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=atomics/fallback-rework
> >
> > Does that sound ok to you?
>
> At first glance, it appears that your "TODO" locations have the same
> information that I was using, so it should not be hard for me to adapt the
> current kernel-doc generation to your new scheme. (Famous last words!)

Great!

> Plus having the kernel-doc generation all in one place does have some
> serious attractions.

:)

> I will continue maintaining my current stack, but would of course be
> happy to port it on top of your refactoring. If it turns out that
> the refactoring will take a long time, we can discuss what to do in
> the meantime. But here is hoping that the refactoring goes smoothly!
> That would be easier all around. ;-)

FWIW, I think that's working now; every cross-build I've tried works.

I've updated the branch at:

https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=atomics/fallback-rework

Tagged as:

atomics-fallback-rework-20230512

Thanks,
Mark.

2023-05-12 13:57:36

by Mark Rutland

Subject: Re: [PATCH locking/atomic 18/19] locking/atomic: Refrain from generating duplicate fallback kernel-doc

On Thu, May 11, 2023 at 09:38:56PM +0200, Peter Zijlstra wrote:
> On Thu, May 11, 2023 at 06:10:00PM +0100, Mark Rutland wrote:
> > Hi Paul
> >
> > On Wed, May 10, 2023 at 11:17:16AM -0700, Paul E. McKenney wrote:
> > > The gen-atomics.sh script currently generates 42 duplicate definitions:
> > >
> > > arch_atomic64_add_negative
> > > arch_atomic64_add_negative_acquire
> > > arch_atomic64_add_negative_release
> >
> > [...]
> >
> > > These duplicates are presumably to handle different architectures
> > > generating hand-coded definitions for different subsets of the atomic
> > > operations.
> >
> > Yup, for each FULL/ACQUIRE/RELEASE/RELAXED variant of each op, we allow the
> > architecture to choose between:
> >
> > * Providing the ordering variant directly
> > * Providing the FULL ordering variant only
> > * Providing the RELAXED ordering variant only
> > * Providing an equivalent op that we can build from
> >
> > > However, generating duplicate kernel-doc headers is undesirable.
> >
> > Understood -- I hadn't understood that duplication was a problem when this was
> > originally written.
> >
> > The way this is currently done is largely an artifact of our ifdeffery (and the
> > kerneldoc for fallbacks living in the fallback templates), and I think we can
> > fix both of those.
> >
> > > Therefore, generate only the first kernel-doc definition in a group
> > > of duplicates. A comment indicates the name of the function and the
> > > fallback script that generated it.
> >
> > I'm not keen on this approach, especially with the chkdup.sh script -- it feels
> > like we're working around an underlying structural issue.
> >
> > I think that we can restructure the ifdeffery so that each ordering variant
> > gets its own ifdeffery, and then we could place the kerneldoc immediately above
> > that, e.g.
> >
> > /**
> > * arch_atomic_inc_return_release()
> > *
> > * [ full kerneldoc block here ]
> > */
> > #if defined(arch_atomic_inc_return_release)
> > /* defined in arch code */
> > #elif defined(arch_atomic_inc_return_relaxed)
> > [ define in terms of arch_atomic_inc_return_relaxed ]
> > #elif defined(arch_atomic_inc_return)
> > [ define in terms of arch_atomic_inc_return ]
> > #else
> > [ define in terms of arch_atomic_fetch_inc_release ]
> > #endif
> >
> > ... with similar for the mandatory ops that each arch must provide, e.g.
> >
> > /**
> > * arch_atomic_or()
> > *
> > * [ full kerneldoc block here ]
> > */
> > /* arch_atomic_or() is mandatory -- architectures must define it! */
> >
> > I had a go at that restructuring today, and while local build testing indicates
> > I haven't got it quite right, I think it's possible:
> >
> > https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=atomics/fallback-rework
> >
> > Does that sound ok to you?
>
> If the end result is simpler scripts, sure.

FWIW, regardless of the comments, I'd like to make this restructuring as it
makes it easier to add some more fallback cases, and I find the generated
ifdeffery a bit easier to follow when it's a chain of if-elif-elif-else-end
rather than a few nested cases.

> I'm not at all keen to complicate the scripts for something daft like
> kernel-doc. The last thing we need is documentation style weenies making
> an unholy mess of things.

Sure. I agree we don't want to bend over backwards for it at the cost of
maintainability, but I think it can be made pretty simple and self-contained,
and hopefully we can prove that with a v2 or v3. ;)

If nothing else, handling this centrally means that we'll have *one* set of
comments for this rather than a tonne of randomly managed copies in arch
code, which seems like a win...

Thanks,
Mark.

2023-05-12 16:10:09

by Paul E. McKenney

Subject: Re: [PATCH locking/atomic 18/19] locking/atomic: Refrain from generating duplicate fallback kernel-doc

On Fri, May 12, 2023 at 02:18:48PM +0100, Mark Rutland wrote:
> On Thu, May 11, 2023 at 12:12:16PM -0700, Paul E. McKenney wrote:
> > On Thu, May 11, 2023 at 06:10:00PM +0100, Mark Rutland wrote:
> > > I think that we can restructure the ifdeffery so that each ordering variant
> > > gets its own ifdeffery, and then we could place the kerneldoc immediately above
> > > that, e.g.
> > >
> > > /**
> > > * arch_atomic_inc_return_release()
> > > *
> > > * [ full kerneldoc block here ]
> > > */
> > > #if defined(arch_atomic_inc_return_release)
> > > /* defined in arch code */
> > > #elif defined(arch_atomic_inc_return_relaxed)
> > > [ define in terms of arch_atomic_inc_return_relaxed ]
> > > #elif defined(arch_atomic_inc_return)
> > > [ define in terms of arch_atomic_inc_return ]
> > > #else
> > > [ define in terms of arch_atomic_fetch_inc_release ]
> > > #endif
> > >
> > > ... with similar for the mandatory ops that each arch must provide, e.g.
> > >
> > > /**
> > > * arch_atomic_or()
> > > *
> > > * [ full kerneldoc block here ]
> > > */
> > > /* arch_atomic_or() is mandatory -- architectures must define it! */
> > >
> > > I had a go at that restructuring today, and while local build testing indicates
> > > I haven't got it quite right, I think it's possible:
> > >
> > > https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=atomics/fallback-rework
> > >
> > > Does that sound ok to you?
> >
> > At first glance, it appears that your "TODO" locations have the same
> > information that I was using, so it should not be hard for me to adapt the
> > current kernel-doc generation to your new scheme. (Famous last words!)
>
> Great!
>
> > Plus having the kernel-doc generation all in one place does have some
> > serious attractions.
>
> :)
>
> > I will continue maintaining my current stack, but would of course be
> > happy to port it on top of your refactoring. If it turns out that
> > the refactoring will take a long time, we can discuss what to do in
> > the meantime. But here is hoping that the refactoring goes smoothly!
> > That would be easier all around. ;-)
>
> FWIW, I think that's working now; every cross-build I've tried works.
>
> I've updated the branch at:
>
> https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=atomics/fallback-rework
>
> Tagged as:
>
> atomics-fallback-rework-20230512

Thank you very much!

I expect to send v2 of my original late today on the perhaps unlikely
off-chance that someone might be interested in reviewing the verbiage.

More to the point, I have started porting my changes on top of your
stack. My thought is to have a separate "."-included script that does
the kernel-doc work.

I am also thinking in terms of putting the kernel-doc generation into
an "else" clause to the "is mandatory" check, and leaving the kernel-doc
for the mandatory functions in arch/x86/include/asm/atomic.h.

But in both cases, please let me know if something else would work better.

Thanx, Paul
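[The structure Paul sketches above (kernel-doc generation in an "else" arm of the is-mandatory check, with the helper living in a "."-included script) might look roughly like the following. All names here are guesses; in the real series the helper would be sourced from a separate file rather than defined inline.]

```shell
# Hypothetical sketch. In the real series the helper below would live
# in a separate "."-included script, e.g.:
#     . "${ATOMICDIR}/kerneldoc.sh"
gen_kerneldoc_for() {
	printf '/**\n * %s() - generated fallback\n */\n' "$1"
}

# Emit kernel-doc only in the "else" arm of the is-mandatory check;
# mandatory ops just get a marker comment.
emit_op() {
	if [ "$2" = "mandatory" ]; then
		printf '/* %s() is mandatory -- architectures must define it! */\n' "$1"
	else
		gen_kerneldoc_for "$1"
	fi
}

emit_op arch_atomic_or mandatory
emit_op arch_atomic_inc_return_release fallback
```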

2023-05-12 17:10:35

by Mark Rutland

Subject: Re: [PATCH locking/atomic 18/19] locking/atomic: Refrain from generating duplicate fallback kernel-doc

On Fri, May 12, 2023 at 09:01:27AM -0700, Paul E. McKenney wrote:
> On Fri, May 12, 2023 at 02:18:48PM +0100, Mark Rutland wrote:
> > On Thu, May 11, 2023 at 12:12:16PM -0700, Paul E. McKenney wrote:
> > > On Thu, May 11, 2023 at 06:10:00PM +0100, Mark Rutland wrote:
> > > > I think that we can restructure the ifdeffery so that each ordering variant
> > > > gets its own ifdeffery, and then we could place the kerneldoc immediately above
> > > > that, e.g.
> > > >
> > > > /**
> > > > * arch_atomic_inc_return_release()
> > > > *
> > > > * [ full kerneldoc block here ]
> > > > */
> > > > #if defined(arch_atomic_inc_return_release)
> > > > /* defined in arch code */
> > > > #elif defined(arch_atomic_inc_return_relaxed)
> > > > [ define in terms of arch_atomic_inc_return_relaxed ]
> > > > #elif defined(arch_atomic_inc_return)
> > > > [ define in terms of arch_atomic_inc_return ]
> > > > #else
> > > > [ define in terms of arch_atomic_fetch_inc_release ]
> > > > #endif
> > > >
> > > > ... with similar for the mandatory ops that each arch must provide, e.g.
> > > >
> > > > /**
> > > > * arch_atomic_or()
> > > > *
> > > > * [ full kerneldoc block here ]
> > > > */
> > > > /* arch_atomic_or() is mandatory -- architectures must define it! */
> > > >
> > > > I had a go at that restructuring today, and while local build testing indicates
> > > > I haven't got it quite right, I think it's possible:
> > > >
> > > > https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=atomics/fallback-rework
> > > >
> > > > Does that sound ok to you?
> > >
> > > At first glance, it appears that your "TODO" locations have the same
> > > information that I was using, so it should not be hard for me to adapt the
> > > current kernel-doc generation to your new scheme. (Famous last words!)
> >
> > Great!
> >
> > > Plus having the kernel-doc generation all in one place does have some
> > > serious attractions.
> >
> > :)
> >
> > > I will continue maintaining my current stack, but would of course be
> > > happy to port it on top of your refactoring. If it turns out that
> > > the refactoring will take a long time, we can discuss what to do in
> > > the meantime. But here is hoping that the refactoring goes smoothly!
> > > That would be easier all around. ;-)
> >
> > FWIW, I think that's working now; every cross-build I've tried works.
> >
> > I've updated the branch at:
> >
> > https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=atomics/fallback-rework
> >
> > Tagged as:
> >
> > atomics-fallback-rework-20230512
>
> Thank you very much!
>
> I expect to send v2 of my original late today on the perhaps unlikely
> off-chance that someone might be interested in reviewing the verbiage.

I'll be more than happy to, though I suspect "late today" is far too late today
for me in UK time terms, so I probably won't look until Monday.

> More to the point, I have started porting my changes on top of your
> stack. My thought is to have a separate "."-included script that does
> the kernel-doc work.

I was thinking that we'd have a gen_kerneldoc(...) shell function (probably in
atomic-tbl.sh), but that's an easy thing to refactor after v2, so either way is
fine for now!

> I am also thinking in terms of putting the kernel-doc generation into
> an "else" clause to the "is mandatory" check, and leaving the kernel-doc
> for the mandatory functions in arch/x86/include/asm/atomic.h.

My thinking was that all the kernel-doc bits should live in the common header
so that they're all easy to find when looking at the source code; it also
feels a bit weird to have to look into arch/x86/ to figure out the semantics of
a function on !x86.

That said, if that's painful for some reason, please go with the easiest option
for now and we can figure out how to attack it for v3. :)

Thanks,
Mark.

2023-05-12 19:24:49

by Paul E. McKenney

Subject: Re: [PATCH locking/atomic 18/19] locking/atomic: Refrain from generating duplicate fallback kernel-doc

On Fri, May 12, 2023 at 06:03:26PM +0100, Mark Rutland wrote:
> On Fri, May 12, 2023 at 09:01:27AM -0700, Paul E. McKenney wrote:
> > On Fri, May 12, 2023 at 02:18:48PM +0100, Mark Rutland wrote:
> > > On Thu, May 11, 2023 at 12:12:16PM -0700, Paul E. McKenney wrote:
> > > > On Thu, May 11, 2023 at 06:10:00PM +0100, Mark Rutland wrote:
> > > > > I think that we can restructure the ifdeffery so that each ordering variant
> > > > > gets its own ifdeffery, and then we could place the kerneldoc immediately above
> > > > > that, e.g.
> > > > >
> > > > > /**
> > > > > * arch_atomic_inc_return_release()
> > > > > *
> > > > > * [ full kerneldoc block here ]
> > > > > */
> > > > > #if defined(arch_atomic_inc_return_release)
> > > > > /* defined in arch code */
> > > > > #elif defined(arch_atomic_inc_return_relaxed)
> > > > > [ define in terms of arch_atomic_inc_return_relaxed ]
> > > > > #elif defined(arch_atomic_inc_return)
> > > > > [ define in terms of arch_atomic_inc_return ]
> > > > > #else
> > > > > [ define in terms of arch_atomic_fetch_inc_release ]
> > > > > #endif
> > > > >
> > > > > ... with similar for the mandatory ops that each arch must provide, e.g.
> > > > >
> > > > > /**
> > > > > * arch_atomic_or()
> > > > > *
> > > > > * [ full kerneldoc block here ]
> > > > > */
> > > > > /* arch_atomic_or() is mandatory -- architectures must define it! */
> > > > >
> > > > > I had a go at that restructuring today, and while local build testing indicates
> > > > > I haven't got it quite right, I think it's possible:
> > > > >
> > > > > https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=atomics/fallback-rework
> > > > >
> > > > > Does that sound ok to you?
> > > >
> > > > At first glance, it appears that your "TODO" locations have the same
> > > > information that I was using, so it should not be hard for me to adapt the
> > > > current kernel-doc generation to your new scheme. (Famous last words!)
> > >
> > > Great!
> > >
> > > > Plus having the kernel-doc generation all in one place does have some
> > > > serious attractions.
> > >
> > > :)
> > >
> > > > I will continue maintaining my current stack, but would of course be
> > > > happy to port it on top of your refactoring. If it turns out that
> > > > the refactoring will take a long time, we can discuss what to do in
> > > > the meantime. But here is hoping that the refactoring goes smoothly!
> > > > That would be easier all around. ;-)
> > >
> > > FWIW, I think that's working now; every cross-build I've tried works.
> > >
> > > I've updated the branch at:
> > >
> > > https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=atomics/fallback-rework
> > >
> > > Tagged as:
> > >
> > > atomics-fallback-rework-20230512
> >
> > Thank you very much!
> >
> > I expect to send v2 of my original late today on the perhaps unlikely
> > off-chance that someone might be interested in reviewing the verbiage.
>
> I'll be more than happy to, though I suspect "late today" is far too late today
> for me in UK time terms, so I probably won't look until Monday.

Works for me!

> > More to the point, I have started porting my changes on top of your
> > stack. My thought is to have a separate "."-included script that does
> > the kernel-doc work.
>
> I was thinking that we'd have a gen_kerneldoc(...) shell function (probably in
> atomic-tbl.sh), but that's an easy thing to refactor after v2, so either way is
> fine for now!

Good point, will make that happen. Easy to move the code, so might
as well be v1. ;-)

> > I am also thinking in terms of putting the kernel-doc generation into
> > an "else" clause to the "is mandatory" check, and leaving the kernel-doc
> > for the mandatory functions in arch/x86/include/asm/atomic.h.
>
> My thinking was that all the kernel-doc bits should live in the common header
> so that they're all easy to find when looking at the source code; it also
> feels a bit weird to have to look into arch/x86/ to figure out the semantics of
> a function on !x86.
>
> That said, if that's painful for some reason, please go with the easiest option
> for now and we can figure out how to attack it for v3. :)

I will give it a shot.

Thanx, Paul

2023-05-13 06:33:45

by Paul E. McKenney

Subject: Re: [PATCH locking/atomic 18/19] locking/atomic: Refrain from generating duplicate fallback kernel-doc

On Fri, May 12, 2023 at 11:42:02AM -0700, Paul E. McKenney wrote:
> On Fri, May 12, 2023 at 06:03:26PM +0100, Mark Rutland wrote:
> > On Fri, May 12, 2023 at 09:01:27AM -0700, Paul E. McKenney wrote:
> > > On Fri, May 12, 2023 at 02:18:48PM +0100, Mark Rutland wrote:
> > > > On Thu, May 11, 2023 at 12:12:16PM -0700, Paul E. McKenney wrote:
> > > > > On Thu, May 11, 2023 at 06:10:00PM +0100, Mark Rutland wrote:
> > > > > > I think that we can restructure the ifdeffery so that each ordering variant
> > > > > > gets its own ifdeffery, and then we could place the kerneldoc immediately above
> > > > > > that, e.g.
> > > > > >
> > > > > > /**
> > > > > > * arch_atomic_inc_return_release()
> > > > > > *
> > > > > > * [ full kerneldoc block here ]
> > > > > > */
> > > > > > #if defined(arch_atomic_inc_return_release)
> > > > > > /* defined in arch code */
> > > > > > #elif defined(arch_atomic_inc_return_relaxed)
> > > > > > [ define in terms of arch_atomic_inc_return_relaxed ]
> > > > > > #elif defined(arch_atomic_inc_return)
> > > > > > [ define in terms of arch_atomic_inc_return ]
> > > > > > #else
> > > > > > [ define in terms of arch_atomic_fetch_inc_release ]
> > > > > > #endif
> > > > > >
> > > > > > ... with similar for the mandatory ops that each arch must provide, e.g.
> > > > > >
> > > > > > /**
> > > > > > * arch_atomic_or()
> > > > > > *
> > > > > > * [ full kerneldoc block here ]
> > > > > > */
> > > > > > /* arch_atomic_or() is mandatory -- architectures must define it! */
> > > > > >
> > > > > > I had a go at that restructuring today, and while local build testing indicates
> > > > > > I haven't got it quite right, I think it's possible:
> > > > > >
> > > > > > https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=atomics/fallback-rework
> > > > > >
> > > > > > Does that sound ok to you?
> > > > >
> > > > > At first glance, it appears that your "TODO" locations have the same
> > > > > information that I was using, so it should not be hard for me to adapt the
> > > > > current kernel-doc generation to your new scheme. (Famous last words!)
> > > >
> > > > Great!
> > > >
> > > > > Plus having the kernel-doc generation all in one place does have some
> > > > > serious attractions.
> > > >
> > > > :)
> > > >
> > > > > I will continue maintaining my current stack, but would of course be
> > > > > happy to port it on top of your refactoring. If it turns out that
> > > > > the refactoring will take a long time, we can discuss what to do in
> > > > > the meantime. But here is hoping that the refactoring goes smoothly!
> > > > > That would be easier all around. ;-)
> > > >
> > > > FWIW, I think that's working now; every cross-build I've tried works.
> > > >
> > > > I've updated the branch at:
> > > >
> > > > https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=atomics/fallback-rework
> > > >
> > > > Tagged as:
> > > >
> > > > atomics-fallback-rework-20230512
> > >
> > > Thank you very much!
> > >
> > > I expect to send v2 of my original late today on the perhaps unlikely
> > > off-chance that someone might be interested in reviewing the verbiage.
> >
> > I'll be more than happy to, though I suspect "late today" is far too late today
> > for me in UK time terms, so I probably won't look until Monday.
>
> Works for me!

Except that cleaning up the old version proved more obnoxious than
creating a new one, adding more evidence behind the wisdom of your
reworking. So no v2 of the previous series, for the moment, at least.

> > > More to the point, I have started porting my changes on top of your
> > > stack. My thought is to have a separate "."-included script that does
> > > the kernel-doc work.
> >
> > I was thinking that we'd have a gen_kerneldoc(...) shell function (probably in
> > atomic-tbl.sh), but that's an easy thing to refactor after v2, so either way is
> > fine for now!
>
> Good point, will make that happen. Easy to move the code, so might
> as well be v1. ;-)
>
> > > I am also thinking in terms of putting the kernel-doc generation into
> > > an "else" clause to the "is mandatory" check, and leaving the kernel-doc
> > > for the mandatory functions in arch/x86/include/asm/atomic.h.
> >
> > My thinking was that all the kernel-doc bits should live in the common header
> > so that they're all easy to find when looking at the source code; it also
> > feels a bit weird to have to look into arch/x86/ to figure out the semantics of
> > a function on !x86.
> >
> > That said, if that's painful for some reason, please go with the easiest option
> > for now and we can figure out how to attack it for v3. :)
>
> I will give it a shot.

And here is a rough first cut:

git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git tags/fallback-rework-kernel-doc.2023.05.12a

Or via HTML:

https://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git/log/?h=fallback-rework-kernel-doc.2023.05.12a

Thoughts?

In the meantime, enjoy the weekend!

Thanx, Paul

2023-05-14 00:31:24

by Akira Yokosawa

Subject: Re: [PATCH locking/atomic 18/19] locking/atomic: Refrain from generating duplicate fallback kernel-doc

Hi,

On Fri, 12 May 2023 19:11:15 -0700, Paul E. McKenney wrote:
[...]

>
> And here is a rough first cut:
>
> git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git tags/fallback-rework-kernel-doc.2023.05.12a
>
> Or via HTML:
>
> https://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git/log/?h=fallback-rework-kernel-doc.2023.05.12a

Running "./scripts/kernel-doc -none include/linux/atomic/atomic-arch-fallback.h"
on the tag emits a lot of warnings.

Looks like there are kernel-doc comments that don't have a corresponding
function signature next to them.

/**
* function_name() - Brief description of function.
* @arg1: Describe the first argument.
* @arg2: Describe the second argument.
* One can provide multiple line descriptions
* for arguments.
*
* A longer description, with more discussion of the function function_name()
* that might be useful to those using or modifying it. Begins with an
* empty comment line, and may include additional embedded empty
* comment lines.
*/
int function_name(int arg1, int arg2) <---

Note that the kernel-doc script ignores #ifdef -- #else.

BTW, I couldn't check out the tag, so I downloaded the tarball via
HTML.

Thanks, Akira

>
> Thoughts?
>
> In the meantime, enjoy the weekend!
>
> Thanx, Paul

2023-05-14 01:42:23

by Paul E. McKenney

Subject: Re: [PATCH locking/atomic 18/19] locking/atomic: Refrain from generating duplicate fallback kernel-doc

On Sun, May 14, 2023 at 08:58:00AM +0900, Akira Yokosawa wrote:
> Hi,
>
> On Fri, 12 May 2023 19:11:15 -0700, Paul E. McKenney wrote:
> [...]
>
> >
> > And here is a rough first cut:
> >
> > git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git tags/fallback-rework-kernel-doc.2023.05.12a
> >
> > Or via HTML:
> >
> > https://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git/log/?h=fallback-rework-kernel-doc.2023.05.12a
>
> Running "./scripts/kernel-doc -none include/linux/atomic/atomic-arch-fallback.h"
> on the tag emits a lot of warnings.
>
> Looks like there are kernel-doc comments that don't have a corresponding
> function signature next to them.
>
> /**
> * function_name() - Brief description of function.
> * @arg1: Describe the first argument.
> * @arg2: Describe the second argument.
> * One can provide multiple line descriptions
> * for arguments.
> *
> * A longer description, with more discussion of the function function_name()
> * that might be useful to those using or modifying it. Begins with an
> * empty comment line, and may include additional embedded empty
> * comment lines.
> */
> int function_name(int arg1, int arg2) <---
>
> Note that the kernel-doc script ignores #ifdef -- #else.

Me, I was thinking in terms of making this diagnostic ignore
include/linux/atomic/atomic-arch-fallback.h. ;-)

The actual definitions are off in architecture-specific files, and
the kernel-doc headers could be left there. But there are benefits to
automatically generating all of them.

Another approach might be to put a "it is OK for the definition to
be elsewhere" comment following those kernel-doc headers.

Any other ways to make this work? For me, the option of making this
diagnostic ignore include/linux/atomic/atomic-arch-fallback.h has
considerable attraction.

> BTW, I couldn't check out the tag, so I downloaded the tarball via
> HTML.

Tags can be a bit annoying in that way. I will provide a branch next
time.

Thanx, Paul

> Thanks, Akira
>
> >
> > Thoughts?
> >
> > In the meantime, enjoy the weekend!
> >
> > Thanx, Paul

2023-05-16 17:00:39

by Mark Rutland

Subject: Re: [PATCH locking/atomic 18/19] locking/atomic: Refrain from generating duplicate fallback kernel-doc

Hi Paul,

On Sat, May 13, 2023 at 06:14:36PM -0700, Paul E. McKenney wrote:
> On Sun, May 14, 2023 at 08:58:00AM +0900, Akira Yokosawa wrote:
> > Running "./scripts/kernel-doc -none include/linux/atomic/atomic-arch-fallback.h"
> > on the tag emits a lot of warnings.
> >
> > Looks like there are kernel-doc comments that don't have a corresponding
> > function signature next to them.
> >
> > /**
> > * function_name() - Brief description of function.
> > * @arg1: Describe the first argument.
> > * @arg2: Describe the second argument.
> > * One can provide multiple line descriptions
> > * for arguments.
> > *
> > * A longer description, with more discussion of the function function_name()
> > * that might be useful to those using or modifying it. Begins with an
> > * empty comment line, and may include additional embedded empty
> > * comment lines.
> > */
> > int function_name(int arg1, int arg2) <---
> >
> > Note that the kernel-doc script ignores #ifdef -- #else.
>
> Me, I was thinking in terms of making this diagnostic ignore
> include/linux/atomic/atomic-arch-fallback.h. ;-)
>
> The actual definitions are off in architecture-specific files, and
> the kernel-doc headers could be left there. But there are benefits to
> automatically generating all of them.
>
> Another approach might be to put a "it is OK for the definition to
> be elsewhere" comment following those kernel-doc headers.
>
> Any other ways to make this work?

I've spent the last day or so playing with this, and I think we can do this by
relegating the arch_atomic*() functions to an implementation detail (and not
documenting those with kerneldoc), and having a raw_atomic*() layer where we
flesh out the API, with each op having a mandatory function definition as
below:

/**
* raw_atomic_fetch_inc_release() - does a thing atomically
*
* TODO: fill this in
*
* This is a version of atomic_fetch_inc_release() which is safe to use in
* noinstr code. Unless instrumentation needs to be avoided,
* atomic_fetch_inc_release() should be used in preference.
*/
static __always_inline int
raw_atomic_fetch_inc_release(atomic_t *v)
{
#if defined(arch_atomic_fetch_inc_release)
return arch_atomic_fetch_inc_release(v);
#elif defined(arch_atomic_fetch_inc_relaxed)
__atomic_release_fence();
return arch_atomic_fetch_inc_relaxed(v);
#elif defined(arch_atomic_fetch_inc)
return arch_atomic_fetch_inc(v);
#else
return raw_atomic_fetch_add_release(1, v);
#endif
}

... and likewise we can add comments for the regular instrumented atomics.

I've pushed out the WIP patches to my atomics/fallback-rework branch; if you're
happy to give me another day or two I can get a bit further.

> For me, the option of making this
> diagnostic ignore include/linux/atomic/atomic-arch-fallback.h has
> considerable attraction.

It's certainly appealing...

Thanks,
Mark.

2023-05-16 19:04:09

by Paul E. McKenney

Subject: Re: [PATCH locking/atomic 18/19] locking/atomic: Refrain from generating duplicate fallback kernel-doc

On Tue, May 16, 2023 at 05:52:03PM +0100, Mark Rutland wrote:
> Hi Paul,
>
> On Sat, May 13, 2023 at 06:14:36PM -0700, Paul E. McKenney wrote:
> > On Sun, May 14, 2023 at 08:58:00AM +0900, Akira Yokosawa wrote:
> > > Running "./scripts/kernel-doc -none include/linux/atomic/atomic-arch-fallback.h"
> > > on the tag emits a lot of warnings.
> > >
> > > Looks like there are kernel-doc comments that don't have a corresponding
> > > function signature next to them.
> > >
> > > /**
> > > * function_name() - Brief description of function.
> > > * @arg1: Describe the first argument.
> > > * @arg2: Describe the second argument.
> > > * One can provide multiple line descriptions
> > > * for arguments.
> > > *
> > > * A longer description, with more discussion of the function function_name()
> > > * that might be useful to those using or modifying it. Begins with an
> > > * empty comment line, and may include additional embedded empty
> > > * comment lines.
> > > */
> > > int function_name(int arg1, int arg2) <---
> > >
> > > Note that the kernel-doc script ignores #ifdef -- #else.
> >
> > Me, I was thinking in terms of making this diagnostic ignore
> > include/linux/atomic/atomic-arch-fallback.h. ;-)
> >
> > The actual definitions are off in architecture-specific files, and
> > the kernel-doc headers could be left there. But there are benefits to
> > automatically generating all of them.
> >
> > Another approach might be to put a "it is OK for the definition to
> > be elsewhere" comment following those kernel-doc headers.
> >
> > Any other ways to make this work?
>
> I've spent the last day or so playing with this, and I think we can do this by
> relegating the arch_atomic*() functions to an implementation detail (and not
> documenting those with kerneldoc), and having a raw_atomic*() layer where we
> flesh out the API, with each op having a mandatory function definition as
> below:
>
> /**
> * raw_atomic_fetch_inc_release() - does a thing atomically
> *
> * TODO: fill this in
> *
> * This is a version of atomic_fetch_inc_release() which is safe to use in
> * noinstr code. Unless instrumentation needs to be avoided,
> * atomic_fetch_inc_release() should be used in preference.
> */
> static __always_inline int
> raw_atomic_fetch_inc_release(atomic_t *v)
> {
> #if defined(arch_atomic_fetch_inc_release)
> return arch_atomic_fetch_inc_release(v);
> #elif defined(arch_atomic_fetch_inc_relaxed)
> __atomic_release_fence();
> return arch_atomic_fetch_inc_relaxed(v);
> #elif defined(arch_atomic_fetch_inc)
> return arch_atomic_fetch_inc(v);
> #else
> return raw_atomic_fetch_add_release(1, v);
> #endif
> }
>
> ... and likewise we can add comments for the regular instrumented atomics.

I do like that approach! It should be easy to adapt the kernel-doc
scripting to this.

> I've pushed out the WIP patches to my atomics/fallback-rework branch; if you're
> happy to give me another day or two I can get a bit further.

An RCU issue currently has me by the ankle, so I am quite happy to give
you a day or two. ;-)

Just FYI, I will be in the air this coming Friday, your time.

> > For me, the option of making this
> > diagnostic ignore include/linux/atomic/atomic-arch-fallback.h has
> > considerable attraction.
>
> It's certainly appealing...

But I do like your approach of simply always having the function prototype
available. ;-)

Thanx, Paul

2023-05-16 21:36:15

by Kees Cook

Subject: Re: [PATCH locking/atomic 19/19] docs: Add atomic operations to the driver basic API documentation

On Wed, May 10, 2023 at 11:17:17AM -0700, Paul E. McKenney wrote:
> Add the include/linux/atomic/atomic-arch-fallback.h file to the
> driver-api/basics.rst in order to provide documentation for the Linux
> kernel's atomic operations.
>
> Signed-off-by: Paul E. McKenney <[email protected]>

Reviewed-by: Kees Cook <[email protected]>

--
Kees Cook

2023-05-17 10:52:29

by Paul E. McKenney

Subject: Re: [PATCH locking/atomic 19/19] docs: Add atomic operations to the driver basic API documentation

On Tue, May 16, 2023 at 02:33:29PM -0700, Kees Cook wrote:
> On Wed, May 10, 2023 at 11:17:17AM -0700, Paul E. McKenney wrote:
> > Add the include/linux/atomic/atomic-arch-fallback.h file to the
> > driver-api/basics.rst in order to provide documentation for the Linux
> > kernel's atomic operations.
> >
> > Signed-off-by: Paul E. McKenney <[email protected]>
>
> Reviewed-by: Kees Cook <[email protected]>

Thank you, Kees, I will apply on my next rebase.

Given Mark's ongoing atomics rework, a later version of this patch is
likely to remove ".. kernel-doc:: arch/x86/include/asm/atomic.h" from
that same file. (One of the benefits of Mark's rework is that all the
kernel-doc headers can be in the same file.)

Unless you tell me otherwise, I will retain your Reviewed-by across that
change.

Thanx, Paul

2023-05-22 13:02:38

by Mark Rutland

Subject: Re: [PATCH locking/atomic 19/19] docs: Add atomic operations to the driver basic API documentation

On Wed, May 17, 2023 at 03:10:44AM -0700, Paul E. McKenney wrote:
> On Tue, May 16, 2023 at 02:33:29PM -0700, Kees Cook wrote:
> > On Wed, May 10, 2023 at 11:17:17AM -0700, Paul E. McKenney wrote:
> > > Add the include/linux/atomic/atomic-arch-fallback.h file to the
> > > driver-api/basics.rst in order to provide documentation for the Linux
> > > kernel's atomic operations.
> > >
> > > Signed-off-by: Paul E. McKenney <[email protected]>
> >
> > Reviewed-by: Kees Cook <[email protected]>
>
> Thank you, Kees, I will apply on my next rebase.
>
> Given Mark's ongoing atomics rework, a later version of this patch is
> likely to remove ".. kernel-doc:: arch/x86/include/asm/atomic.h" from
> that same file. (One of the benefits of Mark's rework is that all the
> kernel-doc headers can be in the same file.)

FWIW, I retained that in the series I just posted at:

https://lore.kernel.org/lkml/[email protected]/

... which drops ".. kernel-doc:: arch/x86/include/asm/atomic.h" as mentioned
above.

Please let me know if you'd like that R-b dropped.

Thanks,
Mark.