Hi all,
These patches restructure the generated atomic headers, and add
kerneldoc comments for all of the generic atomic{,64,_long}_t
operations.
The core headers now generate raw_atomic*() operations as the
fundamental instrumentation-safe atomics, with the arch_atomic*()
functions being an implementation detail that shouldn't be used
directly.
Each raw_atomic*() op is given a single definition with all related
ifdeffery inside, e.g.
| /**
| * raw_atomic_inc_return_acquire() - atomic increment with acquire ordering
| * @v: pointer to atomic_t
| *
| * Atomically updates @v to (@v + 1) with acquire ordering.
| *
| * Safe to use in noinstr code; prefer atomic_inc_return_acquire() elsewhere.
| *
| * Return: the updated value of @v.
| */
| static __always_inline int
| raw_atomic_inc_return_acquire(atomic_t *v)
| {
| #if defined(arch_atomic_inc_return_acquire)
| return arch_atomic_inc_return_acquire(v);
| #elif defined(arch_atomic_inc_return_relaxed)
| int ret = arch_atomic_inc_return_relaxed(v);
| __atomic_acquire_fence();
| return ret;
| #elif defined(arch_atomic_inc_return)
| return arch_atomic_inc_return(v);
| #else
| return raw_atomic_add_return_acquire(1, v);
| #endif
| }
Similarly, the regular atomic*() ops (which already have a single
definition) are given kerneldoc comments, e.g.
| /**
| * atomic_inc_return_acquire() - atomic increment with acquire ordering
| * @v: pointer to atomic_t
| *
| * Atomically updates @v to (@v + 1) with acquire ordering.
| *
| * Unsafe to use in noinstr code; use raw_atomic_inc_return_acquire() there.
| *
| * Return: the updated value of @v.
| */
| static __always_inline int
| atomic_inc_return_acquire(atomic_t *v)
| {
| instrument_atomic_read_write(v, sizeof(*v));
| return raw_atomic_inc_return_acquire(v);
| }
The kerneldoc comments themselves are built from templates, as the
fallbacks are, which should allow them to be extended in future if
necessary.
I've compile-tested this for a number of architectures and
configurations, but as usual this probably needs to see some testing by
build robots.
The patches are based on Peter Zijlstra's queued locking/core branch,
specifically commit:
bb6e9a06cba6b850 ("s390/cpum_sf: Convert to cmpxchg128()")
This can be found in the git tree at:
https://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git/
Since v1 [1]:
* Add kernel-doc handling of "~@v"
* Add atomic-instrumented.h to included documentation headers
* Fix typos and punctuation
* Clarify kerneldoc wording
[1] https://lore.kernel.org/lkml/[email protected]/
Thanks,
Mark.
Mark Rutland (26):
locking/atomic: arm: fix sync ops
locking/atomic: remove fallback comments
locking/atomic: hexagon: remove redundant arch_atomic_cmpxchg
locking/atomic: make atomic*_{cmp,}xchg optional
locking/atomic: arc: add preprocessor symbols
locking/atomic: arm: add preprocessor symbols
locking/atomic: hexagon: add preprocessor symbols
locking/atomic: m68k: add preprocessor symbols
locking/atomic: parisc: add preprocessor symbols
locking/atomic: sh: add preprocessor symbols
locking/atomic: sparc: add preprocessor symbols
locking/atomic: x86: add preprocessor symbols
locking/atomic: xtensa: add preprocessor symbols
locking/atomic: scripts: remove bogus order parameter
locking/atomic: scripts: remove leftover "${mult}"
locking/atomic: scripts: factor out order template generation
locking/atomic: scripts: add trivial raw_atomic*_<op>()
locking/atomic: treewide: use raw_atomic*_<op>()
locking/atomic: scripts: build raw_atomic_long*() directly
locking/atomic: scripts: restructure fallback ifdeffery
locking/atomic: scripts: split pfx/name/sfx/order
locking/atomic: scripts: simplify raw_atomic_long*() definitions
locking/atomic: scripts: simplify raw_atomic*() definitions
docs: scripts: kernel-doc: accept bitwise negation like ~@var
locking/atomic: scripts: generate kerneldoc comments
locking/atomic: treewide: delete arch_atomic_*() kerneldoc
Paul E. McKenney (1):
locking/atomic: docs: Add atomic operations to the driver basic API
documentation
Documentation/driver-api/basics.rst | 8 +-
arch/alpha/include/asm/atomic.h | 35 -
arch/arc/include/asm/atomic-spinlock.h | 9 +
arch/arc/include/asm/atomic.h | 24 -
arch/arc/include/asm/atomic64-arcv2.h | 19 +-
arch/arm/include/asm/assembler.h | 17 +
arch/arm/include/asm/atomic.h | 15 +-
arch/arm/include/asm/sync_bitops.h | 29 +-
arch/arm/lib/bitops.h | 14 +-
arch/arm/lib/testchangebit.S | 4 +
arch/arm/lib/testclearbit.S | 4 +
arch/arm/lib/testsetbit.S | 4 +
arch/arm64/include/asm/atomic.h | 28 -
arch/csky/include/asm/atomic.h | 35 -
arch/hexagon/include/asm/atomic.h | 69 +-
arch/ia64/include/asm/atomic.h | 7 -
arch/loongarch/include/asm/atomic.h | 56 -
arch/m68k/include/asm/atomic.h | 18 +-
arch/mips/include/asm/atomic.h | 11 -
arch/openrisc/include/asm/atomic.h | 3 -
arch/parisc/include/asm/atomic.h | 27 +-
arch/powerpc/include/asm/atomic.h | 24 -
arch/powerpc/kernel/smp.c | 12 +-
arch/riscv/include/asm/atomic.h | 72 -
arch/sh/include/asm/atomic-grb.h | 9 +
arch/sh/include/asm/atomic-irq.h | 9 +
arch/sh/include/asm/atomic-llsc.h | 9 +
arch/sh/include/asm/atomic.h | 3 -
arch/sparc/include/asm/atomic_32.h | 18 +-
arch/sparc/include/asm/atomic_64.h | 29 +-
arch/x86/include/asm/atomic.h | 87 -
arch/x86/include/asm/atomic64_32.h | 76 -
arch/x86/include/asm/atomic64_64.h | 81 -
arch/x86/include/asm/cmpxchg_64.h | 4 +
arch/x86/kernel/alternative.c | 4 +-
arch/x86/kernel/cpu/mce/core.c | 16 +-
arch/x86/kernel/nmi.c | 2 +-
arch/x86/kernel/pvclock.c | 4 +-
arch/x86/kvm/x86.c | 2 +-
arch/xtensa/include/asm/atomic.h | 12 +-
include/asm-generic/atomic.h | 3 -
include/asm-generic/bitops/atomic.h | 12 +-
include/asm-generic/bitops/lock.h | 8 +-
include/linux/atomic/atomic-arch-fallback.h | 5200 ++++++++++++------
include/linux/atomic/atomic-instrumented.h | 3484 ++++++++++--
include/linux/atomic/atomic-long.h | 2122 ++++---
include/linux/context_tracking.h | 4 +-
include/linux/context_tracking_state.h | 2 +-
include/linux/cpumask.h | 2 +-
include/linux/jump_label.h | 2 +-
kernel/context_tracking.c | 12 +-
kernel/sched/clock.c | 2 +-
scripts/atomic/atomic-tbl.sh | 112 +-
scripts/atomic/atomics.tbl | 2 +-
scripts/atomic/fallbacks/acquire | 4 -
scripts/atomic/fallbacks/add_negative | 14 +-
scripts/atomic/fallbacks/add_unless | 15 +-
scripts/atomic/fallbacks/andnot | 6 +-
scripts/atomic/fallbacks/cmpxchg | 3 +
scripts/atomic/fallbacks/dec | 6 +-
scripts/atomic/fallbacks/dec_and_test | 14 +-
scripts/atomic/fallbacks/dec_if_positive | 8 +-
scripts/atomic/fallbacks/dec_unless_positive | 8 +-
scripts/atomic/fallbacks/fence | 4 -
scripts/atomic/fallbacks/fetch_add_unless | 17 +-
scripts/atomic/fallbacks/inc | 6 +-
scripts/atomic/fallbacks/inc_and_test | 14 +-
scripts/atomic/fallbacks/inc_not_zero | 13 +-
scripts/atomic/fallbacks/inc_unless_negative | 8 +-
scripts/atomic/fallbacks/read_acquire | 6 +-
scripts/atomic/fallbacks/release | 4 -
scripts/atomic/fallbacks/set_release | 6 +-
scripts/atomic/fallbacks/sub_and_test | 15 +-
scripts/atomic/fallbacks/try_cmpxchg | 6 +-
scripts/atomic/fallbacks/xchg | 3 +
scripts/atomic/gen-atomic-fallback.sh | 264 +-
scripts/atomic/gen-atomic-instrumented.sh | 23 +-
scripts/atomic/gen-atomic-long.sh | 38 +-
scripts/atomic/kerneldoc/add | 13 +
scripts/atomic/kerneldoc/add_negative | 13 +
scripts/atomic/kerneldoc/add_unless | 18 +
scripts/atomic/kerneldoc/and | 13 +
scripts/atomic/kerneldoc/andnot | 13 +
scripts/atomic/kerneldoc/cmpxchg | 14 +
scripts/atomic/kerneldoc/dec | 12 +
scripts/atomic/kerneldoc/dec_and_test | 12 +
scripts/atomic/kerneldoc/dec_if_positive | 12 +
scripts/atomic/kerneldoc/dec_unless_positive | 12 +
scripts/atomic/kerneldoc/inc | 12 +
scripts/atomic/kerneldoc/inc_and_test | 12 +
scripts/atomic/kerneldoc/inc_not_zero | 12 +
scripts/atomic/kerneldoc/inc_unless_negative | 12 +
scripts/atomic/kerneldoc/or | 13 +
scripts/atomic/kerneldoc/read | 12 +
scripts/atomic/kerneldoc/set | 13 +
scripts/atomic/kerneldoc/sub | 13 +
scripts/atomic/kerneldoc/sub_and_test | 13 +
scripts/atomic/kerneldoc/try_cmpxchg | 15 +
scripts/atomic/kerneldoc/xchg | 13 +
scripts/atomic/kerneldoc/xor | 13 +
scripts/kernel-doc | 2 +-
101 files changed, 8979 insertions(+), 3689 deletions(-)
create mode 100755 scripts/atomic/fallbacks/cmpxchg
create mode 100755 scripts/atomic/fallbacks/xchg
create mode 100644 scripts/atomic/kerneldoc/add
create mode 100644 scripts/atomic/kerneldoc/add_negative
create mode 100644 scripts/atomic/kerneldoc/add_unless
create mode 100644 scripts/atomic/kerneldoc/and
create mode 100644 scripts/atomic/kerneldoc/andnot
create mode 100644 scripts/atomic/kerneldoc/cmpxchg
create mode 100644 scripts/atomic/kerneldoc/dec
create mode 100644 scripts/atomic/kerneldoc/dec_and_test
create mode 100644 scripts/atomic/kerneldoc/dec_if_positive
create mode 100644 scripts/atomic/kerneldoc/dec_unless_positive
create mode 100644 scripts/atomic/kerneldoc/inc
create mode 100644 scripts/atomic/kerneldoc/inc_and_test
create mode 100644 scripts/atomic/kerneldoc/inc_not_zero
create mode 100644 scripts/atomic/kerneldoc/inc_unless_negative
create mode 100644 scripts/atomic/kerneldoc/or
create mode 100644 scripts/atomic/kerneldoc/read
create mode 100644 scripts/atomic/kerneldoc/set
create mode 100644 scripts/atomic/kerneldoc/sub
create mode 100644 scripts/atomic/kerneldoc/sub_and_test
create mode 100644 scripts/atomic/kerneldoc/try_cmpxchg
create mode 100644 scripts/atomic/kerneldoc/xchg
create mode 100644 scripts/atomic/kerneldoc/xor
--
2.30.2
The sync_*() ops on arch/arm are defined in terms of the regular bitops
with no special handling. This is not correct, as UP kernels elide
barriers for the fully-ordered operations, and so the required ordering
is lost when such UP kernels are run under a hypervisor on an SMP
system.
Fix this by defining sync ops with the required barriers.
Note: On 32-bit arm, the sync_*() ops are currently only used by Xen,
which requires ARMv7, but the semantics can be implemented for ARMv6+.
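For context (not part of this patch): on UP kernels the SMP barriers
reduce to compiler barriers, roughly as sketched below, and on 32-bit
arm the smp_dmb assembler macro is likewise patched out via
ALT_SMP()/ALT_UP(), so the regular bitops give no ordering against
other agents (such as a hypervisor) running on an SMP host.
	/*
	 * Rough sketch of the generic behaviour; see
	 * include/asm-generic/barrier.h for the real definitions.
	 */
	#ifdef CONFIG_SMP
	#define smp_mb()	__smp_mb()	/* real memory barrier */
	#else
	#define smp_mb()	barrier()	/* compiler barrier only */
	#endif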
Fixes: e54d2f61528165bb ("xen/arm: sync_bitops")
Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Russell King <[email protected]>
Cc: Stefano Stabellini <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/arm/include/asm/assembler.h | 17 +++++++++++++++++
arch/arm/include/asm/sync_bitops.h | 29 +++++++++++++++++++++++++----
arch/arm/lib/bitops.h | 14 +++++++++++---
arch/arm/lib/testchangebit.S | 4 ++++
arch/arm/lib/testclearbit.S | 4 ++++
arch/arm/lib/testsetbit.S | 4 ++++
6 files changed, 65 insertions(+), 7 deletions(-)
diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h
index 505a306e0271a..aebe2c8f6a686 100644
--- a/arch/arm/include/asm/assembler.h
+++ b/arch/arm/include/asm/assembler.h
@@ -394,6 +394,23 @@ ALT_UP_B(.L0_\@)
#endif
.endm
+/*
+ * Raw SMP data memory barrier
+ */
+ .macro __smp_dmb mode
+#if __LINUX_ARM_ARCH__ >= 7
+ .ifeqs "\mode","arm"
+ dmb ish
+ .else
+ W(dmb) ish
+ .endif
+#elif __LINUX_ARM_ARCH__ == 6
+ mcr p15, 0, r0, c7, c10, 5 @ dmb
+#else
+ .error "Incompatible SMP platform"
+#endif
+ .endm
+
#if defined(CONFIG_CPU_V7M)
/*
* setmode is used to assert to be in svc mode during boot. For v7-M
diff --git a/arch/arm/include/asm/sync_bitops.h b/arch/arm/include/asm/sync_bitops.h
index 6f5d627c44a3c..f46b3c570f92e 100644
--- a/arch/arm/include/asm/sync_bitops.h
+++ b/arch/arm/include/asm/sync_bitops.h
@@ -14,14 +14,35 @@
* ops which are SMP safe even on a UP kernel.
*/
+/*
+ * Unordered
+ */
+
#define sync_set_bit(nr, p) _set_bit(nr, p)
#define sync_clear_bit(nr, p) _clear_bit(nr, p)
#define sync_change_bit(nr, p) _change_bit(nr, p)
-#define sync_test_and_set_bit(nr, p) _test_and_set_bit(nr, p)
-#define sync_test_and_clear_bit(nr, p) _test_and_clear_bit(nr, p)
-#define sync_test_and_change_bit(nr, p) _test_and_change_bit(nr, p)
#define sync_test_bit(nr, addr) test_bit(nr, addr)
-#define arch_sync_cmpxchg arch_cmpxchg
+/*
+ * Fully ordered
+ */
+
+int _sync_test_and_set_bit(int nr, volatile unsigned long * p);
+#define sync_test_and_set_bit(nr, p) _sync_test_and_set_bit(nr, p)
+
+int _sync_test_and_clear_bit(int nr, volatile unsigned long * p);
+#define sync_test_and_clear_bit(nr, p) _sync_test_and_clear_bit(nr, p)
+
+int _sync_test_and_change_bit(int nr, volatile unsigned long * p);
+#define sync_test_and_change_bit(nr, p) _sync_test_and_change_bit(nr, p)
+
+#define arch_sync_cmpxchg(ptr, old, new) \
+({ \
+ __typeof__(*(ptr)) __ret; \
+ __smp_mb__before_atomic(); \
+ __ret = arch_cmpxchg_relaxed((ptr), (old), (new)); \
+ __smp_mb__after_atomic(); \
+ __ret; \
+})
#endif
diff --git a/arch/arm/lib/bitops.h b/arch/arm/lib/bitops.h
index 95bd359912889..f069d1b2318e6 100644
--- a/arch/arm/lib/bitops.h
+++ b/arch/arm/lib/bitops.h
@@ -28,7 +28,7 @@ UNWIND( .fnend )
ENDPROC(\name )
.endm
- .macro testop, name, instr, store
+ .macro __testop, name, instr, store, barrier
ENTRY( \name )
UNWIND( .fnstart )
ands ip, r1, #3
@@ -38,7 +38,7 @@ UNWIND( .fnstart )
mov r0, r0, lsr #5
add r1, r1, r0, lsl #2 @ Get word offset
mov r3, r2, lsl r3 @ create mask
- smp_dmb
+ \barrier
#if __LINUX_ARM_ARCH__ >= 7 && defined(CONFIG_SMP)
.arch_extension mp
ALT_SMP(W(pldw) [r1])
@@ -50,13 +50,21 @@ UNWIND( .fnstart )
strex ip, r2, [r1]
cmp ip, #0
bne 1b
- smp_dmb
+ \barrier
cmp r0, #0
movne r0, #1
2: bx lr
UNWIND( .fnend )
ENDPROC(\name )
.endm
+
+ .macro testop, name, instr, store
+ __testop \name, \instr, \store, smp_dmb
+ .endm
+
+ .macro sync_testop, name, instr, store
+ __testop \name, \instr, \store, __smp_dmb
+ .endm
#else
.macro bitop, name, instr
ENTRY( \name )
diff --git a/arch/arm/lib/testchangebit.S b/arch/arm/lib/testchangebit.S
index 4ebecc67e6e04..f13fe9bc2399a 100644
--- a/arch/arm/lib/testchangebit.S
+++ b/arch/arm/lib/testchangebit.S
@@ -10,3 +10,7 @@
.text
testop _test_and_change_bit, eor, str
+
+#if __LINUX_ARM_ARCH__ >= 6
+sync_testop _sync_test_and_change_bit, eor, str
+#endif
diff --git a/arch/arm/lib/testclearbit.S b/arch/arm/lib/testclearbit.S
index 009afa0f5b4a7..4d2c5ca620ebf 100644
--- a/arch/arm/lib/testclearbit.S
+++ b/arch/arm/lib/testclearbit.S
@@ -10,3 +10,7 @@
.text
testop _test_and_clear_bit, bicne, strne
+
+#if __LINUX_ARM_ARCH__ >= 6
+sync_testop _sync_test_and_clear_bit, bicne, strne
+#endif
diff --git a/arch/arm/lib/testsetbit.S b/arch/arm/lib/testsetbit.S
index f3192e55acc87..649dbab65d8d0 100644
--- a/arch/arm/lib/testsetbit.S
+++ b/arch/arm/lib/testsetbit.S
@@ -10,3 +10,7 @@
.text
testop _test_and_set_bit, orreq, streq
+
+#if __LINUX_ARM_ARCH__ >= 6
+sync_testop _sync_test_and_set_bit, orreq, streq
+#endif
--
2.30.2
Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of
RELAXED atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in terms
of FULL ordered atomics. Other atomics are optional, and don't exist in
some configurations (e.g. not all architectures implement the 128-bit
cmpxchg ops).
Subsequent patches will require that architectures define a preprocessor
symbol for any atomic (or ordering variant) which is optional. This will
make the fallback ifdeffery more robust, and simplify future changes.
Add the required definitions to arch/sparc.
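For reference, the pattern (mirroring the hunks below) is simply to
define a preprocessor symbol with the same name as each optional op,
e.g.:
	int arch_atomic_add_return(int, atomic_t *);
	#define arch_atomic_add_return arch_atomic_add_return
so that the generated fallback code can test for the op with #ifdef.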
Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/sparc/include/asm/atomic_32.h | 16 ++++++++++++++--
arch/sparc/include/asm/atomic_64.h | 18 ++++++++++++++++++
2 files changed, 32 insertions(+), 2 deletions(-)
diff --git a/arch/sparc/include/asm/atomic_32.h b/arch/sparc/include/asm/atomic_32.h
index 1c9e6c7366e41..60ce2fe57fcd7 100644
--- a/arch/sparc/include/asm/atomic_32.h
+++ b/arch/sparc/include/asm/atomic_32.h
@@ -19,19 +19,31 @@
#include <asm-generic/atomic64.h>
int arch_atomic_add_return(int, atomic_t *);
+#define arch_atomic_add_return arch_atomic_add_return
+
int arch_atomic_fetch_add(int, atomic_t *);
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+
int arch_atomic_fetch_and(int, atomic_t *);
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+
int arch_atomic_fetch_or(int, atomic_t *);
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+
int arch_atomic_fetch_xor(int, atomic_t *);
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
int arch_atomic_cmpxchg(atomic_t *, int, int);
#define arch_atomic_cmpxchg arch_atomic_cmpxchg
+
int arch_atomic_xchg(atomic_t *, int);
#define arch_atomic_xchg arch_atomic_xchg
-int arch_atomic_fetch_add_unless(atomic_t *, int, int);
-void arch_atomic_set(atomic_t *, int);
+int arch_atomic_fetch_add_unless(atomic_t *, int, int);
#define arch_atomic_fetch_add_unless arch_atomic_fetch_add_unless
+void arch_atomic_set(atomic_t *, int);
+
#define arch_atomic_set_release(v, i) arch_atomic_set((v), (i))
#define arch_atomic_read(v) READ_ONCE((v)->counter)
diff --git a/arch/sparc/include/asm/atomic_64.h b/arch/sparc/include/asm/atomic_64.h
index df6a8b07d7e63..a5e9c37605a70 100644
--- a/arch/sparc/include/asm/atomic_64.h
+++ b/arch/sparc/include/asm/atomic_64.h
@@ -37,6 +37,16 @@ s64 arch_atomic64_fetch_##op(s64, atomic64_t *);
ATOMIC_OPS(add)
ATOMIC_OPS(sub)
+#define arch_atomic_add_return arch_atomic_add_return
+#define arch_atomic_sub_return arch_atomic_sub_return
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+
+#define arch_atomic64_add_return arch_atomic64_add_return
+#define arch_atomic64_sub_return arch_atomic64_sub_return
+#define arch_atomic64_fetch_add arch_atomic64_fetch_add
+#define arch_atomic64_fetch_sub arch_atomic64_fetch_sub
+
#undef ATOMIC_OPS
#define ATOMIC_OPS(op) ATOMIC_OP(op) ATOMIC_FETCH_OP(op)
@@ -44,6 +54,14 @@ ATOMIC_OPS(and)
ATOMIC_OPS(or)
ATOMIC_OPS(xor)
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
+#define arch_atomic64_fetch_and arch_atomic64_fetch_and
+#define arch_atomic64_fetch_or arch_atomic64_fetch_or
+#define arch_atomic64_fetch_xor arch_atomic64_fetch_xor
+
#undef ATOMIC_OPS
#undef ATOMIC_FETCH_OP
#undef ATOMIC_OP_RETURN
--
2.30.2
Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of
RELAXED atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in terms
of FULL ordered atomics. Other atomics are optional, and don't exist in
some configurations (e.g. not all architectures implement the 128-bit
cmpxchg ops).
Subsequent patches will require that architectures define a preprocessor
symbol for any atomic (or ordering variant) which is optional. This will
make the fallback ifdeffery more robust, and simplify future changes.
Add the required definitions to arch/parisc.
Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/parisc/include/asm/atomic.h | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)
diff --git a/arch/parisc/include/asm/atomic.h b/arch/parisc/include/asm/atomic.h
index 0b3f64c92e3c0..d4f023887ff87 100644
--- a/arch/parisc/include/asm/atomic.h
+++ b/arch/parisc/include/asm/atomic.h
@@ -118,6 +118,11 @@ static __inline__ int arch_atomic_fetch_##op(int i, atomic_t *v) \
ATOMIC_OPS(add, +=)
ATOMIC_OPS(sub, -=)
+#define arch_atomic_add_return arch_atomic_add_return
+#define arch_atomic_sub_return arch_atomic_sub_return
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+
#undef ATOMIC_OPS
#define ATOMIC_OPS(op, c_op) \
ATOMIC_OP(op, c_op) \
@@ -127,6 +132,10 @@ ATOMIC_OPS(and, &=)
ATOMIC_OPS(or, |=)
ATOMIC_OPS(xor, ^=)
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
#undef ATOMIC_OPS
#undef ATOMIC_FETCH_OP
#undef ATOMIC_OP_RETURN
@@ -181,6 +190,11 @@ static __inline__ s64 arch_atomic64_fetch_##op(s64 i, atomic64_t *v) \
ATOMIC64_OPS(add, +=)
ATOMIC64_OPS(sub, -=)
+#define arch_atomic64_add_return arch_atomic64_add_return
+#define arch_atomic64_sub_return arch_atomic64_sub_return
+#define arch_atomic64_fetch_add arch_atomic64_fetch_add
+#define arch_atomic64_fetch_sub arch_atomic64_fetch_sub
+
#undef ATOMIC64_OPS
#define ATOMIC64_OPS(op, c_op) \
ATOMIC64_OP(op, c_op) \
@@ -190,6 +204,10 @@ ATOMIC64_OPS(and, &=)
ATOMIC64_OPS(or, |=)
ATOMIC64_OPS(xor, ^=)
+#define arch_atomic64_fetch_and arch_atomic64_fetch_and
+#define arch_atomic64_fetch_or arch_atomic64_fetch_or
+#define arch_atomic64_fetch_xor arch_atomic64_fetch_xor
+
#undef ATOMIC64_OPS
#undef ATOMIC64_FETCH_OP
#undef ATOMIC64_OP_RETURN
--
2.30.2
Currently gen_proto_order_variants() hard-codes the paths of the templates
used for order fallbacks. Factor this out into a helper so that it can be
reused
elsewhere.
This results in no change to the generated headers, so there should be
no functional change as a result of this patch.
Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
---
scripts/atomic/gen-atomic-fallback.sh | 34 +++++++++++++--------------
1 file changed, 17 insertions(+), 17 deletions(-)
diff --git a/scripts/atomic/gen-atomic-fallback.sh b/scripts/atomic/gen-atomic-fallback.sh
index 7a6bcea8f565b..337330865fa2e 100755
--- a/scripts/atomic/gen-atomic-fallback.sh
+++ b/scripts/atomic/gen-atomic-fallback.sh
@@ -32,6 +32,20 @@ gen_template_fallback()
fi
}
+#gen_order_fallback(meta, pfx, name, sfx, order, atomic, int, args...)
+gen_order_fallback()
+{
+ local meta="$1"; shift
+ local pfx="$1"; shift
+ local name="$1"; shift
+ local sfx="$1"; shift
+ local order="$1"; shift
+
+ local tmpl_order=${order#_}
+ local tmpl="${ATOMICDIR}/fallbacks/${tmpl_order:-fence}"
+ gen_template_fallback "${tmpl}" "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "$@"
+}
+
#gen_proto_fallback(meta, pfx, name, sfx, order, atomic, int, args...)
gen_proto_fallback()
{
@@ -56,20 +70,6 @@ cat << EOF
EOF
}
-gen_proto_order_variant()
-{
- local meta="$1"; shift
- local pfx="$1"; shift
- local name="$1"; shift
- local sfx="$1"; shift
- local order="$1"; shift
- local atomic="$1"
-
- local basename="arch_${atomic}_${pfx}${name}${sfx}"
-
- printf "#define ${basename}${order} ${basename}${order}\n"
-}
-
#gen_proto_order_variants(meta, pfx, name, sfx, atomic, int, args...)
gen_proto_order_variants()
{
@@ -117,9 +117,9 @@ gen_proto_order_variants()
printf "#else /* ${basename}_relaxed */\n\n"
- gen_template_fallback "${ATOMICDIR}/fallbacks/acquire" "${meta}" "${pfx}" "${name}" "${sfx}" "_acquire" "$@"
- gen_template_fallback "${ATOMICDIR}/fallbacks/release" "${meta}" "${pfx}" "${name}" "${sfx}" "_release" "$@"
- gen_template_fallback "${ATOMICDIR}/fallbacks/fence" "${meta}" "${pfx}" "${name}" "${sfx}" "" "$@"
+ gen_order_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_acquire" "$@"
+ gen_order_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_release" "$@"
+ gen_order_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "" "$@"
printf "#endif /* ${basename}_relaxed */\n\n"
}
--
2.30.2
Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of
RELAXED atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in terms
of FULL ordered atomics. Other atomics are optional, and don't exist in
some configurations (e.g. not all architectures implement the 128-bit
cmpxchg ops).
Subsequent patches will require that architectures define a preprocessor
symbol for any atomic (or ordering variant) which is optional. This will
make the fallback ifdeffery more robust, and simplify future changes.
Add the required definitions to arch/xtensa.
Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/xtensa/include/asm/atomic.h | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/arch/xtensa/include/asm/atomic.h b/arch/xtensa/include/asm/atomic.h
index 1d323a864002c..7308b7f777d79 100644
--- a/arch/xtensa/include/asm/atomic.h
+++ b/arch/xtensa/include/asm/atomic.h
@@ -245,6 +245,11 @@ static inline int arch_atomic_fetch_##op(int i, atomic_t * v) \
ATOMIC_OPS(add)
ATOMIC_OPS(sub)
+#define arch_atomic_add_return arch_atomic_add_return
+#define arch_atomic_sub_return arch_atomic_sub_return
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+
#undef ATOMIC_OPS
#define ATOMIC_OPS(op) ATOMIC_OP(op) ATOMIC_FETCH_OP(op)
@@ -252,6 +257,10 @@ ATOMIC_OPS(and)
ATOMIC_OPS(or)
ATOMIC_OPS(xor)
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
#undef ATOMIC_OPS
#undef ATOMIC_FETCH_OP
#undef ATOMIC_OP_RETURN
--
2.30.2
Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of
RELAXED atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in terms
of FULL ordered atomics. Other atomics are optional, and don't exist in
some configurations (e.g. not all architectures implement the 128-bit
cmpxchg ops).
Subsequent patches will require that architectures define a preprocessor
symbol for any atomic (or ordering variant) which is optional. This will
make the fallback ifdeffery more robust, and simplify future changes.
Add the required definitions to arch/hexagon.
Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/hexagon/include/asm/atomic.h | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/arch/hexagon/include/asm/atomic.h b/arch/hexagon/include/asm/atomic.h
index ad6c111e9c10f..5c8440016c762 100644
--- a/arch/hexagon/include/asm/atomic.h
+++ b/arch/hexagon/include/asm/atomic.h
@@ -91,6 +91,11 @@ static inline int arch_atomic_fetch_##op(int i, atomic_t *v) \
ATOMIC_OPS(add)
ATOMIC_OPS(sub)
+#define arch_atomic_add_return arch_atomic_add_return
+#define arch_atomic_sub_return arch_atomic_sub_return
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+
#undef ATOMIC_OPS
#define ATOMIC_OPS(op) ATOMIC_OP(op) ATOMIC_FETCH_OP(op)
@@ -98,6 +103,10 @@ ATOMIC_OPS(and)
ATOMIC_OPS(or)
ATOMIC_OPS(xor)
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
#undef ATOMIC_OPS
#undef ATOMIC_FETCH_OP
#undef ATOMIC_OP_RETURN
--
2.30.2
Currently a number of arch_atomic*_<op>() functions are optional, and
where an arch does not provide a given arch_atomic*_<op>(), we will
define an implementation of arch_atomic*_<op>() in
atomic-arch-fallback.h.
Filling in the missing ops requires special care as we want to select
the optimal definition of each op (e.g. preferentially defining ops in
terms of their relaxed form rather than their fully-ordered form). The
ifdeffery necessary for this requires us to group ordering variants
together, which can be a bit painful to read and awkward for
kerneldoc generation.
It would be easier to handle this if we generated ops into a separate
namespace, as this would remove the need to take special care with the
ifdeffery, and allow each ordering variant to be generated separately.
This patch adds a new set of raw_atomic_<op>() definitions, which are
currently trivial wrappers of their arch_atomic_<op>() equivalent. This
will allow us to move treewide users of arch_atomic_<op>() over to the
raw atomic ops before we rework the fallback generation to generate
raw_atomic_<op>() directly.
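As a rough sketch (the real definitions are generated by
scripts/atomic/gen-atomic-raw.sh into atomic-raw.h), each wrapper has
the form:
	static __always_inline int
	raw_atomic_add_return(int i, atomic_t *v)
	{
		return arch_atomic_add_return(i, v);
	}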
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
---
include/linux/atomic.h | 1 +
include/linux/atomic/atomic-instrumented.h | 595 ++++---
include/linux/atomic/atomic-raw.h | 1645 ++++++++++++++++++++
scripts/atomic/gen-atomic-instrumented.sh | 19 +-
scripts/atomic/gen-atomic-raw.sh | 84 +
scripts/atomic/gen-atomics.sh | 1 +
6 files changed, 2033 insertions(+), 312 deletions(-)
create mode 100644 include/linux/atomic/atomic-raw.h
create mode 100755 scripts/atomic/gen-atomic-raw.sh
diff --git a/include/linux/atomic.h b/include/linux/atomic.h
index 8dd57c3a99e9b..127f5dc63a7df 100644
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -79,6 +79,7 @@
#include <linux/atomic/atomic-arch-fallback.h>
#include <linux/atomic/atomic-long.h>
+#include <linux/atomic/atomic-raw.h>
#include <linux/atomic/atomic-instrumented.h>
#endif /* _LINUX_ATOMIC_H */
diff --git a/include/linux/atomic/atomic-instrumented.h b/include/linux/atomic/atomic-instrumented.h
index a55b5b70a3e15..90ee2f55af770 100644
--- a/include/linux/atomic/atomic-instrumented.h
+++ b/include/linux/atomic/atomic-instrumented.h
@@ -4,15 +4,10 @@
// DO NOT MODIFY THIS FILE DIRECTLY
/*
- * This file provides wrappers with KASAN instrumentation for atomic operations.
- * To use this functionality an arch's atomic.h file needs to define all
- * atomic operations with arch_ prefix (e.g. arch_atomic_read()) and include
- * this file at the end. This file provides atomic_read() that forwards to
- * arch_atomic_read() for actual atomic operation.
- * Note: if an arch atomic operation is implemented by means of other atomic
- * operations (e.g. atomic_read()/atomic_cmpxchg() loop), then it needs to use
- * arch_ variants (i.e. arch_atomic_read()/arch_atomic_cmpxchg()) to avoid
- * double instrumentation.
+ * This file provides atomic operations with explicit instrumentation (e.g.
+ * KASAN, KCSAN), which should be used unless it is necessary to avoid
+ * instrumentation. Where it is necessary to avoid instrumentation, the
+ * raw_atomic*() operations should be used.
*/
#ifndef _LINUX_ATOMIC_INSTRUMENTED_H
#define _LINUX_ATOMIC_INSTRUMENTED_H
@@ -25,21 +20,21 @@ static __always_inline int
atomic_read(const atomic_t *v)
{
instrument_atomic_read(v, sizeof(*v));
- return arch_atomic_read(v);
+ return raw_atomic_read(v);
}
static __always_inline int
atomic_read_acquire(const atomic_t *v)
{
instrument_atomic_read(v, sizeof(*v));
- return arch_atomic_read_acquire(v);
+ return raw_atomic_read_acquire(v);
}
static __always_inline void
atomic_set(atomic_t *v, int i)
{
instrument_atomic_write(v, sizeof(*v));
- arch_atomic_set(v, i);
+ raw_atomic_set(v, i);
}
static __always_inline void
@@ -47,14 +42,14 @@ atomic_set_release(atomic_t *v, int i)
{
kcsan_release();
instrument_atomic_write(v, sizeof(*v));
- arch_atomic_set_release(v, i);
+ raw_atomic_set_release(v, i);
}
static __always_inline void
atomic_add(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic_add(i, v);
+ raw_atomic_add(i, v);
}
static __always_inline int
@@ -62,14 +57,14 @@ atomic_add_return(int i, atomic_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_add_return(i, v);
+ return raw_atomic_add_return(i, v);
}
static __always_inline int
atomic_add_return_acquire(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_add_return_acquire(i, v);
+ return raw_atomic_add_return_acquire(i, v);
}
static __always_inline int
@@ -77,14 +72,14 @@ atomic_add_return_release(int i, atomic_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_add_return_release(i, v);
+ return raw_atomic_add_return_release(i, v);
}
static __always_inline int
atomic_add_return_relaxed(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_add_return_relaxed(i, v);
+ return raw_atomic_add_return_relaxed(i, v);
}
static __always_inline int
@@ -92,14 +87,14 @@ atomic_fetch_add(int i, atomic_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_add(i, v);
+ return raw_atomic_fetch_add(i, v);
}
static __always_inline int
atomic_fetch_add_acquire(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_add_acquire(i, v);
+ return raw_atomic_fetch_add_acquire(i, v);
}
static __always_inline int
@@ -107,21 +102,21 @@ atomic_fetch_add_release(int i, atomic_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_add_release(i, v);
+ return raw_atomic_fetch_add_release(i, v);
}
static __always_inline int
atomic_fetch_add_relaxed(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_add_relaxed(i, v);
+ return raw_atomic_fetch_add_relaxed(i, v);
}
static __always_inline void
atomic_sub(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic_sub(i, v);
+ raw_atomic_sub(i, v);
}
static __always_inline int
@@ -129,14 +124,14 @@ atomic_sub_return(int i, atomic_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_sub_return(i, v);
+ return raw_atomic_sub_return(i, v);
}
static __always_inline int
atomic_sub_return_acquire(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_sub_return_acquire(i, v);
+ return raw_atomic_sub_return_acquire(i, v);
}
static __always_inline int
@@ -144,14 +139,14 @@ atomic_sub_return_release(int i, atomic_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_sub_return_release(i, v);
+ return raw_atomic_sub_return_release(i, v);
}
static __always_inline int
atomic_sub_return_relaxed(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_sub_return_relaxed(i, v);
+ return raw_atomic_sub_return_relaxed(i, v);
}
static __always_inline int
@@ -159,14 +154,14 @@ atomic_fetch_sub(int i, atomic_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_sub(i, v);
+ return raw_atomic_fetch_sub(i, v);
}
static __always_inline int
atomic_fetch_sub_acquire(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_sub_acquire(i, v);
+ return raw_atomic_fetch_sub_acquire(i, v);
}
static __always_inline int
@@ -174,21 +169,21 @@ atomic_fetch_sub_release(int i, atomic_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_sub_release(i, v);
+ return raw_atomic_fetch_sub_release(i, v);
}
static __always_inline int
atomic_fetch_sub_relaxed(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_sub_relaxed(i, v);
+ return raw_atomic_fetch_sub_relaxed(i, v);
}
static __always_inline void
atomic_inc(atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic_inc(v);
+ raw_atomic_inc(v);
}
static __always_inline int
@@ -196,14 +191,14 @@ atomic_inc_return(atomic_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_inc_return(v);
+ return raw_atomic_inc_return(v);
}
static __always_inline int
atomic_inc_return_acquire(atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_inc_return_acquire(v);
+ return raw_atomic_inc_return_acquire(v);
}
static __always_inline int
@@ -211,14 +206,14 @@ atomic_inc_return_release(atomic_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_inc_return_release(v);
+ return raw_atomic_inc_return_release(v);
}
static __always_inline int
atomic_inc_return_relaxed(atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_inc_return_relaxed(v);
+ return raw_atomic_inc_return_relaxed(v);
}
static __always_inline int
@@ -226,14 +221,14 @@ atomic_fetch_inc(atomic_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_inc(v);
+ return raw_atomic_fetch_inc(v);
}
static __always_inline int
atomic_fetch_inc_acquire(atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_inc_acquire(v);
+ return raw_atomic_fetch_inc_acquire(v);
}
static __always_inline int
@@ -241,21 +236,21 @@ atomic_fetch_inc_release(atomic_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_inc_release(v);
+ return raw_atomic_fetch_inc_release(v);
}
static __always_inline int
atomic_fetch_inc_relaxed(atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_inc_relaxed(v);
+ return raw_atomic_fetch_inc_relaxed(v);
}
static __always_inline void
atomic_dec(atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic_dec(v);
+ raw_atomic_dec(v);
}
static __always_inline int
@@ -263,14 +258,14 @@ atomic_dec_return(atomic_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_dec_return(v);
+ return raw_atomic_dec_return(v);
}
static __always_inline int
atomic_dec_return_acquire(atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_dec_return_acquire(v);
+ return raw_atomic_dec_return_acquire(v);
}
static __always_inline int
@@ -278,14 +273,14 @@ atomic_dec_return_release(atomic_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_dec_return_release(v);
+ return raw_atomic_dec_return_release(v);
}
static __always_inline int
atomic_dec_return_relaxed(atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_dec_return_relaxed(v);
+ return raw_atomic_dec_return_relaxed(v);
}
static __always_inline int
@@ -293,14 +288,14 @@ atomic_fetch_dec(atomic_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_dec(v);
+ return raw_atomic_fetch_dec(v);
}
static __always_inline int
atomic_fetch_dec_acquire(atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_dec_acquire(v);
+ return raw_atomic_fetch_dec_acquire(v);
}
static __always_inline int
@@ -308,21 +303,21 @@ atomic_fetch_dec_release(atomic_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_dec_release(v);
+ return raw_atomic_fetch_dec_release(v);
}
static __always_inline int
atomic_fetch_dec_relaxed(atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_dec_relaxed(v);
+ return raw_atomic_fetch_dec_relaxed(v);
}
static __always_inline void
atomic_and(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic_and(i, v);
+ raw_atomic_and(i, v);
}
static __always_inline int
@@ -330,14 +325,14 @@ atomic_fetch_and(int i, atomic_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_and(i, v);
+ return raw_atomic_fetch_and(i, v);
}
static __always_inline int
atomic_fetch_and_acquire(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_and_acquire(i, v);
+ return raw_atomic_fetch_and_acquire(i, v);
}
static __always_inline int
@@ -345,21 +340,21 @@ atomic_fetch_and_release(int i, atomic_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_and_release(i, v);
+ return raw_atomic_fetch_and_release(i, v);
}
static __always_inline int
atomic_fetch_and_relaxed(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_and_relaxed(i, v);
+ return raw_atomic_fetch_and_relaxed(i, v);
}
static __always_inline void
atomic_andnot(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic_andnot(i, v);
+ raw_atomic_andnot(i, v);
}
static __always_inline int
@@ -367,14 +362,14 @@ atomic_fetch_andnot(int i, atomic_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_andnot(i, v);
+ return raw_atomic_fetch_andnot(i, v);
}
static __always_inline int
atomic_fetch_andnot_acquire(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_andnot_acquire(i, v);
+ return raw_atomic_fetch_andnot_acquire(i, v);
}
static __always_inline int
@@ -382,21 +377,21 @@ atomic_fetch_andnot_release(int i, atomic_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_andnot_release(i, v);
+ return raw_atomic_fetch_andnot_release(i, v);
}
static __always_inline int
atomic_fetch_andnot_relaxed(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_andnot_relaxed(i, v);
+ return raw_atomic_fetch_andnot_relaxed(i, v);
}
static __always_inline void
atomic_or(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic_or(i, v);
+ raw_atomic_or(i, v);
}
static __always_inline int
@@ -404,14 +399,14 @@ atomic_fetch_or(int i, atomic_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_or(i, v);
+ return raw_atomic_fetch_or(i, v);
}
static __always_inline int
atomic_fetch_or_acquire(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_or_acquire(i, v);
+ return raw_atomic_fetch_or_acquire(i, v);
}
static __always_inline int
@@ -419,21 +414,21 @@ atomic_fetch_or_release(int i, atomic_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_or_release(i, v);
+ return raw_atomic_fetch_or_release(i, v);
}
static __always_inline int
atomic_fetch_or_relaxed(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_or_relaxed(i, v);
+ return raw_atomic_fetch_or_relaxed(i, v);
}
static __always_inline void
atomic_xor(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic_xor(i, v);
+ raw_atomic_xor(i, v);
}
static __always_inline int
@@ -441,14 +436,14 @@ atomic_fetch_xor(int i, atomic_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_xor(i, v);
+ return raw_atomic_fetch_xor(i, v);
}
static __always_inline int
atomic_fetch_xor_acquire(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_xor_acquire(i, v);
+ return raw_atomic_fetch_xor_acquire(i, v);
}
static __always_inline int
@@ -456,14 +451,14 @@ atomic_fetch_xor_release(int i, atomic_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_xor_release(i, v);
+ return raw_atomic_fetch_xor_release(i, v);
}
static __always_inline int
atomic_fetch_xor_relaxed(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_xor_relaxed(i, v);
+ return raw_atomic_fetch_xor_relaxed(i, v);
}
static __always_inline int
@@ -471,14 +466,14 @@ atomic_xchg(atomic_t *v, int i)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_xchg(v, i);
+ return raw_atomic_xchg(v, i);
}
static __always_inline int
atomic_xchg_acquire(atomic_t *v, int i)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_xchg_acquire(v, i);
+ return raw_atomic_xchg_acquire(v, i);
}
static __always_inline int
@@ -486,14 +481,14 @@ atomic_xchg_release(atomic_t *v, int i)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_xchg_release(v, i);
+ return raw_atomic_xchg_release(v, i);
}
static __always_inline int
atomic_xchg_relaxed(atomic_t *v, int i)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_xchg_relaxed(v, i);
+ return raw_atomic_xchg_relaxed(v, i);
}
static __always_inline int
@@ -501,14 +496,14 @@ atomic_cmpxchg(atomic_t *v, int old, int new)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_cmpxchg(v, old, new);
+ return raw_atomic_cmpxchg(v, old, new);
}
static __always_inline int
atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_cmpxchg_acquire(v, old, new);
+ return raw_atomic_cmpxchg_acquire(v, old, new);
}
static __always_inline int
@@ -516,14 +511,14 @@ atomic_cmpxchg_release(atomic_t *v, int old, int new)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_cmpxchg_release(v, old, new);
+ return raw_atomic_cmpxchg_release(v, old, new);
}
static __always_inline int
atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_cmpxchg_relaxed(v, old, new);
+ return raw_atomic_cmpxchg_relaxed(v, old, new);
}
static __always_inline bool
@@ -532,7 +527,7 @@ atomic_try_cmpxchg(atomic_t *v, int *old, int new)
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
instrument_atomic_read_write(old, sizeof(*old));
- return arch_atomic_try_cmpxchg(v, old, new);
+ return raw_atomic_try_cmpxchg(v, old, new);
}
static __always_inline bool
@@ -540,7 +535,7 @@ atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
{
instrument_atomic_read_write(v, sizeof(*v));
instrument_atomic_read_write(old, sizeof(*old));
- return arch_atomic_try_cmpxchg_acquire(v, old, new);
+ return raw_atomic_try_cmpxchg_acquire(v, old, new);
}
static __always_inline bool
@@ -549,7 +544,7 @@ atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
instrument_atomic_read_write(old, sizeof(*old));
- return arch_atomic_try_cmpxchg_release(v, old, new);
+ return raw_atomic_try_cmpxchg_release(v, old, new);
}
static __always_inline bool
@@ -557,7 +552,7 @@ atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
{
instrument_atomic_read_write(v, sizeof(*v));
instrument_atomic_read_write(old, sizeof(*old));
- return arch_atomic_try_cmpxchg_relaxed(v, old, new);
+ return raw_atomic_try_cmpxchg_relaxed(v, old, new);
}
static __always_inline bool
@@ -565,7 +560,7 @@ atomic_sub_and_test(int i, atomic_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_sub_and_test(i, v);
+ return raw_atomic_sub_and_test(i, v);
}
static __always_inline bool
@@ -573,7 +568,7 @@ atomic_dec_and_test(atomic_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_dec_and_test(v);
+ return raw_atomic_dec_and_test(v);
}
static __always_inline bool
@@ -581,7 +576,7 @@ atomic_inc_and_test(atomic_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_inc_and_test(v);
+ return raw_atomic_inc_and_test(v);
}
static __always_inline bool
@@ -589,14 +584,14 @@ atomic_add_negative(int i, atomic_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_add_negative(i, v);
+ return raw_atomic_add_negative(i, v);
}
static __always_inline bool
atomic_add_negative_acquire(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_add_negative_acquire(i, v);
+ return raw_atomic_add_negative_acquire(i, v);
}
static __always_inline bool
@@ -604,14 +599,14 @@ atomic_add_negative_release(int i, atomic_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_add_negative_release(i, v);
+ return raw_atomic_add_negative_release(i, v);
}
static __always_inline bool
atomic_add_negative_relaxed(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_add_negative_relaxed(i, v);
+ return raw_atomic_add_negative_relaxed(i, v);
}
static __always_inline int
@@ -619,7 +614,7 @@ atomic_fetch_add_unless(atomic_t *v, int a, int u)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_add_unless(v, a, u);
+ return raw_atomic_fetch_add_unless(v, a, u);
}
static __always_inline bool
@@ -627,7 +622,7 @@ atomic_add_unless(atomic_t *v, int a, int u)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_add_unless(v, a, u);
+ return raw_atomic_add_unless(v, a, u);
}
static __always_inline bool
@@ -635,7 +630,7 @@ atomic_inc_not_zero(atomic_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_inc_not_zero(v);
+ return raw_atomic_inc_not_zero(v);
}
static __always_inline bool
@@ -643,7 +638,7 @@ atomic_inc_unless_negative(atomic_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_inc_unless_negative(v);
+ return raw_atomic_inc_unless_negative(v);
}
static __always_inline bool
@@ -651,7 +646,7 @@ atomic_dec_unless_positive(atomic_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_dec_unless_positive(v);
+ return raw_atomic_dec_unless_positive(v);
}
static __always_inline int
@@ -659,28 +654,28 @@ atomic_dec_if_positive(atomic_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_dec_if_positive(v);
+ return raw_atomic_dec_if_positive(v);
}
static __always_inline s64
atomic64_read(const atomic64_t *v)
{
instrument_atomic_read(v, sizeof(*v));
- return arch_atomic64_read(v);
+ return raw_atomic64_read(v);
}
static __always_inline s64
atomic64_read_acquire(const atomic64_t *v)
{
instrument_atomic_read(v, sizeof(*v));
- return arch_atomic64_read_acquire(v);
+ return raw_atomic64_read_acquire(v);
}
static __always_inline void
atomic64_set(atomic64_t *v, s64 i)
{
instrument_atomic_write(v, sizeof(*v));
- arch_atomic64_set(v, i);
+ raw_atomic64_set(v, i);
}
static __always_inline void
@@ -688,14 +683,14 @@ atomic64_set_release(atomic64_t *v, s64 i)
{
kcsan_release();
instrument_atomic_write(v, sizeof(*v));
- arch_atomic64_set_release(v, i);
+ raw_atomic64_set_release(v, i);
}
static __always_inline void
atomic64_add(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic64_add(i, v);
+ raw_atomic64_add(i, v);
}
static __always_inline s64
@@ -703,14 +698,14 @@ atomic64_add_return(s64 i, atomic64_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_add_return(i, v);
+ return raw_atomic64_add_return(i, v);
}
static __always_inline s64
atomic64_add_return_acquire(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_add_return_acquire(i, v);
+ return raw_atomic64_add_return_acquire(i, v);
}
static __always_inline s64
@@ -718,14 +713,14 @@ atomic64_add_return_release(s64 i, atomic64_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_add_return_release(i, v);
+ return raw_atomic64_add_return_release(i, v);
}
static __always_inline s64
atomic64_add_return_relaxed(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_add_return_relaxed(i, v);
+ return raw_atomic64_add_return_relaxed(i, v);
}
static __always_inline s64
@@ -733,14 +728,14 @@ atomic64_fetch_add(s64 i, atomic64_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_add(i, v);
+ return raw_atomic64_fetch_add(i, v);
}
static __always_inline s64
atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_add_acquire(i, v);
+ return raw_atomic64_fetch_add_acquire(i, v);
}
static __always_inline s64
@@ -748,21 +743,21 @@ atomic64_fetch_add_release(s64 i, atomic64_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_add_release(i, v);
+ return raw_atomic64_fetch_add_release(i, v);
}
static __always_inline s64
atomic64_fetch_add_relaxed(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_add_relaxed(i, v);
+ return raw_atomic64_fetch_add_relaxed(i, v);
}
static __always_inline void
atomic64_sub(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic64_sub(i, v);
+ raw_atomic64_sub(i, v);
}
static __always_inline s64
@@ -770,14 +765,14 @@ atomic64_sub_return(s64 i, atomic64_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_sub_return(i, v);
+ return raw_atomic64_sub_return(i, v);
}
static __always_inline s64
atomic64_sub_return_acquire(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_sub_return_acquire(i, v);
+ return raw_atomic64_sub_return_acquire(i, v);
}
static __always_inline s64
@@ -785,14 +780,14 @@ atomic64_sub_return_release(s64 i, atomic64_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_sub_return_release(i, v);
+ return raw_atomic64_sub_return_release(i, v);
}
static __always_inline s64
atomic64_sub_return_relaxed(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_sub_return_relaxed(i, v);
+ return raw_atomic64_sub_return_relaxed(i, v);
}
static __always_inline s64
@@ -800,14 +795,14 @@ atomic64_fetch_sub(s64 i, atomic64_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_sub(i, v);
+ return raw_atomic64_fetch_sub(i, v);
}
static __always_inline s64
atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_sub_acquire(i, v);
+ return raw_atomic64_fetch_sub_acquire(i, v);
}
static __always_inline s64
@@ -815,21 +810,21 @@ atomic64_fetch_sub_release(s64 i, atomic64_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_sub_release(i, v);
+ return raw_atomic64_fetch_sub_release(i, v);
}
static __always_inline s64
atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_sub_relaxed(i, v);
+ return raw_atomic64_fetch_sub_relaxed(i, v);
}
static __always_inline void
atomic64_inc(atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic64_inc(v);
+ raw_atomic64_inc(v);
}
static __always_inline s64
@@ -837,14 +832,14 @@ atomic64_inc_return(atomic64_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_inc_return(v);
+ return raw_atomic64_inc_return(v);
}
static __always_inline s64
atomic64_inc_return_acquire(atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_inc_return_acquire(v);
+ return raw_atomic64_inc_return_acquire(v);
}
static __always_inline s64
@@ -852,14 +847,14 @@ atomic64_inc_return_release(atomic64_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_inc_return_release(v);
+ return raw_atomic64_inc_return_release(v);
}
static __always_inline s64
atomic64_inc_return_relaxed(atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_inc_return_relaxed(v);
+ return raw_atomic64_inc_return_relaxed(v);
}
static __always_inline s64
@@ -867,14 +862,14 @@ atomic64_fetch_inc(atomic64_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_inc(v);
+ return raw_atomic64_fetch_inc(v);
}
static __always_inline s64
atomic64_fetch_inc_acquire(atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_inc_acquire(v);
+ return raw_atomic64_fetch_inc_acquire(v);
}
static __always_inline s64
@@ -882,21 +877,21 @@ atomic64_fetch_inc_release(atomic64_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_inc_release(v);
+ return raw_atomic64_fetch_inc_release(v);
}
static __always_inline s64
atomic64_fetch_inc_relaxed(atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_inc_relaxed(v);
+ return raw_atomic64_fetch_inc_relaxed(v);
}
static __always_inline void
atomic64_dec(atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic64_dec(v);
+ raw_atomic64_dec(v);
}
static __always_inline s64
@@ -904,14 +899,14 @@ atomic64_dec_return(atomic64_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_dec_return(v);
+ return raw_atomic64_dec_return(v);
}
static __always_inline s64
atomic64_dec_return_acquire(atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_dec_return_acquire(v);
+ return raw_atomic64_dec_return_acquire(v);
}
static __always_inline s64
@@ -919,14 +914,14 @@ atomic64_dec_return_release(atomic64_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_dec_return_release(v);
+ return raw_atomic64_dec_return_release(v);
}
static __always_inline s64
atomic64_dec_return_relaxed(atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_dec_return_relaxed(v);
+ return raw_atomic64_dec_return_relaxed(v);
}
static __always_inline s64
@@ -934,14 +929,14 @@ atomic64_fetch_dec(atomic64_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_dec(v);
+ return raw_atomic64_fetch_dec(v);
}
static __always_inline s64
atomic64_fetch_dec_acquire(atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_dec_acquire(v);
+ return raw_atomic64_fetch_dec_acquire(v);
}
static __always_inline s64
@@ -949,21 +944,21 @@ atomic64_fetch_dec_release(atomic64_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_dec_release(v);
+ return raw_atomic64_fetch_dec_release(v);
}
static __always_inline s64
atomic64_fetch_dec_relaxed(atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_dec_relaxed(v);
+ return raw_atomic64_fetch_dec_relaxed(v);
}
static __always_inline void
atomic64_and(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic64_and(i, v);
+ raw_atomic64_and(i, v);
}
static __always_inline s64
@@ -971,14 +966,14 @@ atomic64_fetch_and(s64 i, atomic64_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_and(i, v);
+ return raw_atomic64_fetch_and(i, v);
}
static __always_inline s64
atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_and_acquire(i, v);
+ return raw_atomic64_fetch_and_acquire(i, v);
}
static __always_inline s64
@@ -986,21 +981,21 @@ atomic64_fetch_and_release(s64 i, atomic64_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_and_release(i, v);
+ return raw_atomic64_fetch_and_release(i, v);
}
static __always_inline s64
atomic64_fetch_and_relaxed(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_and_relaxed(i, v);
+ return raw_atomic64_fetch_and_relaxed(i, v);
}
static __always_inline void
atomic64_andnot(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic64_andnot(i, v);
+ raw_atomic64_andnot(i, v);
}
static __always_inline s64
@@ -1008,14 +1003,14 @@ atomic64_fetch_andnot(s64 i, atomic64_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_andnot(i, v);
+ return raw_atomic64_fetch_andnot(i, v);
}
static __always_inline s64
atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_andnot_acquire(i, v);
+ return raw_atomic64_fetch_andnot_acquire(i, v);
}
static __always_inline s64
@@ -1023,21 +1018,21 @@ atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_andnot_release(i, v);
+ return raw_atomic64_fetch_andnot_release(i, v);
}
static __always_inline s64
atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_andnot_relaxed(i, v);
+ return raw_atomic64_fetch_andnot_relaxed(i, v);
}
static __always_inline void
atomic64_or(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic64_or(i, v);
+ raw_atomic64_or(i, v);
}
static __always_inline s64
@@ -1045,14 +1040,14 @@ atomic64_fetch_or(s64 i, atomic64_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_or(i, v);
+ return raw_atomic64_fetch_or(i, v);
}
static __always_inline s64
atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_or_acquire(i, v);
+ return raw_atomic64_fetch_or_acquire(i, v);
}
static __always_inline s64
@@ -1060,21 +1055,21 @@ atomic64_fetch_or_release(s64 i, atomic64_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_or_release(i, v);
+ return raw_atomic64_fetch_or_release(i, v);
}
static __always_inline s64
atomic64_fetch_or_relaxed(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_or_relaxed(i, v);
+ return raw_atomic64_fetch_or_relaxed(i, v);
}
static __always_inline void
atomic64_xor(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic64_xor(i, v);
+ raw_atomic64_xor(i, v);
}
static __always_inline s64
@@ -1082,14 +1077,14 @@ atomic64_fetch_xor(s64 i, atomic64_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_xor(i, v);
+ return raw_atomic64_fetch_xor(i, v);
}
static __always_inline s64
atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_xor_acquire(i, v);
+ return raw_atomic64_fetch_xor_acquire(i, v);
}
static __always_inline s64
@@ -1097,14 +1092,14 @@ atomic64_fetch_xor_release(s64 i, atomic64_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_xor_release(i, v);
+ return raw_atomic64_fetch_xor_release(i, v);
}
static __always_inline s64
atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_xor_relaxed(i, v);
+ return raw_atomic64_fetch_xor_relaxed(i, v);
}
static __always_inline s64
@@ -1112,14 +1107,14 @@ atomic64_xchg(atomic64_t *v, s64 i)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_xchg(v, i);
+ return raw_atomic64_xchg(v, i);
}
static __always_inline s64
atomic64_xchg_acquire(atomic64_t *v, s64 i)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_xchg_acquire(v, i);
+ return raw_atomic64_xchg_acquire(v, i);
}
static __always_inline s64
@@ -1127,14 +1122,14 @@ atomic64_xchg_release(atomic64_t *v, s64 i)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_xchg_release(v, i);
+ return raw_atomic64_xchg_release(v, i);
}
static __always_inline s64
atomic64_xchg_relaxed(atomic64_t *v, s64 i)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_xchg_relaxed(v, i);
+ return raw_atomic64_xchg_relaxed(v, i);
}
static __always_inline s64
@@ -1142,14 +1137,14 @@ atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_cmpxchg(v, old, new);
+ return raw_atomic64_cmpxchg(v, old, new);
}
static __always_inline s64
atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_cmpxchg_acquire(v, old, new);
+ return raw_atomic64_cmpxchg_acquire(v, old, new);
}
static __always_inline s64
@@ -1157,14 +1152,14 @@ atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_cmpxchg_release(v, old, new);
+ return raw_atomic64_cmpxchg_release(v, old, new);
}
static __always_inline s64
atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_cmpxchg_relaxed(v, old, new);
+ return raw_atomic64_cmpxchg_relaxed(v, old, new);
}
static __always_inline bool
@@ -1173,7 +1168,7 @@ atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
instrument_atomic_read_write(old, sizeof(*old));
- return arch_atomic64_try_cmpxchg(v, old, new);
+ return raw_atomic64_try_cmpxchg(v, old, new);
}
static __always_inline bool
@@ -1181,7 +1176,7 @@ atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
{
instrument_atomic_read_write(v, sizeof(*v));
instrument_atomic_read_write(old, sizeof(*old));
- return arch_atomic64_try_cmpxchg_acquire(v, old, new);
+ return raw_atomic64_try_cmpxchg_acquire(v, old, new);
}
static __always_inline bool
@@ -1190,7 +1185,7 @@ atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
instrument_atomic_read_write(old, sizeof(*old));
- return arch_atomic64_try_cmpxchg_release(v, old, new);
+ return raw_atomic64_try_cmpxchg_release(v, old, new);
}
static __always_inline bool
@@ -1198,7 +1193,7 @@ atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
{
instrument_atomic_read_write(v, sizeof(*v));
instrument_atomic_read_write(old, sizeof(*old));
- return arch_atomic64_try_cmpxchg_relaxed(v, old, new);
+ return raw_atomic64_try_cmpxchg_relaxed(v, old, new);
}
static __always_inline bool
@@ -1206,7 +1201,7 @@ atomic64_sub_and_test(s64 i, atomic64_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_sub_and_test(i, v);
+ return raw_atomic64_sub_and_test(i, v);
}
static __always_inline bool
@@ -1214,7 +1209,7 @@ atomic64_dec_and_test(atomic64_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_dec_and_test(v);
+ return raw_atomic64_dec_and_test(v);
}
static __always_inline bool
@@ -1222,7 +1217,7 @@ atomic64_inc_and_test(atomic64_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_inc_and_test(v);
+ return raw_atomic64_inc_and_test(v);
}
static __always_inline bool
@@ -1230,14 +1225,14 @@ atomic64_add_negative(s64 i, atomic64_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_add_negative(i, v);
+ return raw_atomic64_add_negative(i, v);
}
static __always_inline bool
atomic64_add_negative_acquire(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_add_negative_acquire(i, v);
+ return raw_atomic64_add_negative_acquire(i, v);
}
static __always_inline bool
@@ -1245,14 +1240,14 @@ atomic64_add_negative_release(s64 i, atomic64_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_add_negative_release(i, v);
+ return raw_atomic64_add_negative_release(i, v);
}
static __always_inline bool
atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_add_negative_relaxed(i, v);
+ return raw_atomic64_add_negative_relaxed(i, v);
}
static __always_inline s64
@@ -1260,7 +1255,7 @@ atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_add_unless(v, a, u);
+ return raw_atomic64_fetch_add_unless(v, a, u);
}
static __always_inline bool
@@ -1268,7 +1263,7 @@ atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_add_unless(v, a, u);
+ return raw_atomic64_add_unless(v, a, u);
}
static __always_inline bool
@@ -1276,7 +1271,7 @@ atomic64_inc_not_zero(atomic64_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_inc_not_zero(v);
+ return raw_atomic64_inc_not_zero(v);
}
static __always_inline bool
@@ -1284,7 +1279,7 @@ atomic64_inc_unless_negative(atomic64_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_inc_unless_negative(v);
+ return raw_atomic64_inc_unless_negative(v);
}
static __always_inline bool
@@ -1292,7 +1287,7 @@ atomic64_dec_unless_positive(atomic64_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_dec_unless_positive(v);
+ return raw_atomic64_dec_unless_positive(v);
}
static __always_inline s64
@@ -1300,28 +1295,28 @@ atomic64_dec_if_positive(atomic64_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_dec_if_positive(v);
+ return raw_atomic64_dec_if_positive(v);
}
static __always_inline long
atomic_long_read(const atomic_long_t *v)
{
instrument_atomic_read(v, sizeof(*v));
- return arch_atomic_long_read(v);
+ return raw_atomic_long_read(v);
}
static __always_inline long
atomic_long_read_acquire(const atomic_long_t *v)
{
instrument_atomic_read(v, sizeof(*v));
- return arch_atomic_long_read_acquire(v);
+ return raw_atomic_long_read_acquire(v);
}
static __always_inline void
atomic_long_set(atomic_long_t *v, long i)
{
instrument_atomic_write(v, sizeof(*v));
- arch_atomic_long_set(v, i);
+ raw_atomic_long_set(v, i);
}
static __always_inline void
@@ -1329,14 +1324,14 @@ atomic_long_set_release(atomic_long_t *v, long i)
{
kcsan_release();
instrument_atomic_write(v, sizeof(*v));
- arch_atomic_long_set_release(v, i);
+ raw_atomic_long_set_release(v, i);
}
static __always_inline void
atomic_long_add(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic_long_add(i, v);
+ raw_atomic_long_add(i, v);
}
static __always_inline long
@@ -1344,14 +1339,14 @@ atomic_long_add_return(long i, atomic_long_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_add_return(i, v);
+ return raw_atomic_long_add_return(i, v);
}
static __always_inline long
atomic_long_add_return_acquire(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_add_return_acquire(i, v);
+ return raw_atomic_long_add_return_acquire(i, v);
}
static __always_inline long
@@ -1359,14 +1354,14 @@ atomic_long_add_return_release(long i, atomic_long_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_add_return_release(i, v);
+ return raw_atomic_long_add_return_release(i, v);
}
static __always_inline long
atomic_long_add_return_relaxed(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_add_return_relaxed(i, v);
+ return raw_atomic_long_add_return_relaxed(i, v);
}
static __always_inline long
@@ -1374,14 +1369,14 @@ atomic_long_fetch_add(long i, atomic_long_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_add(i, v);
+ return raw_atomic_long_fetch_add(i, v);
}
static __always_inline long
atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_add_acquire(i, v);
+ return raw_atomic_long_fetch_add_acquire(i, v);
}
static __always_inline long
@@ -1389,21 +1384,21 @@ atomic_long_fetch_add_release(long i, atomic_long_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_add_release(i, v);
+ return raw_atomic_long_fetch_add_release(i, v);
}
static __always_inline long
atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_add_relaxed(i, v);
+ return raw_atomic_long_fetch_add_relaxed(i, v);
}
static __always_inline void
atomic_long_sub(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic_long_sub(i, v);
+ raw_atomic_long_sub(i, v);
}
static __always_inline long
@@ -1411,14 +1406,14 @@ atomic_long_sub_return(long i, atomic_long_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_sub_return(i, v);
+ return raw_atomic_long_sub_return(i, v);
}
static __always_inline long
atomic_long_sub_return_acquire(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_sub_return_acquire(i, v);
+ return raw_atomic_long_sub_return_acquire(i, v);
}
static __always_inline long
@@ -1426,14 +1421,14 @@ atomic_long_sub_return_release(long i, atomic_long_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_sub_return_release(i, v);
+ return raw_atomic_long_sub_return_release(i, v);
}
static __always_inline long
atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_sub_return_relaxed(i, v);
+ return raw_atomic_long_sub_return_relaxed(i, v);
}
static __always_inline long
@@ -1441,14 +1436,14 @@ atomic_long_fetch_sub(long i, atomic_long_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_sub(i, v);
+ return raw_atomic_long_fetch_sub(i, v);
}
static __always_inline long
atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_sub_acquire(i, v);
+ return raw_atomic_long_fetch_sub_acquire(i, v);
}
static __always_inline long
@@ -1456,21 +1451,21 @@ atomic_long_fetch_sub_release(long i, atomic_long_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_sub_release(i, v);
+ return raw_atomic_long_fetch_sub_release(i, v);
}
static __always_inline long
atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_sub_relaxed(i, v);
+ return raw_atomic_long_fetch_sub_relaxed(i, v);
}
static __always_inline void
atomic_long_inc(atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic_long_inc(v);
+ raw_atomic_long_inc(v);
}
static __always_inline long
@@ -1478,14 +1473,14 @@ atomic_long_inc_return(atomic_long_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_inc_return(v);
+ return raw_atomic_long_inc_return(v);
}
static __always_inline long
atomic_long_inc_return_acquire(atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_inc_return_acquire(v);
+ return raw_atomic_long_inc_return_acquire(v);
}
static __always_inline long
@@ -1493,14 +1488,14 @@ atomic_long_inc_return_release(atomic_long_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_inc_return_release(v);
+ return raw_atomic_long_inc_return_release(v);
}
static __always_inline long
atomic_long_inc_return_relaxed(atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_inc_return_relaxed(v);
+ return raw_atomic_long_inc_return_relaxed(v);
}
static __always_inline long
@@ -1508,14 +1503,14 @@ atomic_long_fetch_inc(atomic_long_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_inc(v);
+ return raw_atomic_long_fetch_inc(v);
}
static __always_inline long
atomic_long_fetch_inc_acquire(atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_inc_acquire(v);
+ return raw_atomic_long_fetch_inc_acquire(v);
}
static __always_inline long
@@ -1523,21 +1518,21 @@ atomic_long_fetch_inc_release(atomic_long_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_inc_release(v);
+ return raw_atomic_long_fetch_inc_release(v);
}
static __always_inline long
atomic_long_fetch_inc_relaxed(atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_inc_relaxed(v);
+ return raw_atomic_long_fetch_inc_relaxed(v);
}
static __always_inline void
atomic_long_dec(atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic_long_dec(v);
+ raw_atomic_long_dec(v);
}
static __always_inline long
@@ -1545,14 +1540,14 @@ atomic_long_dec_return(atomic_long_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_dec_return(v);
+ return raw_atomic_long_dec_return(v);
}
static __always_inline long
atomic_long_dec_return_acquire(atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_dec_return_acquire(v);
+ return raw_atomic_long_dec_return_acquire(v);
}
static __always_inline long
@@ -1560,14 +1555,14 @@ atomic_long_dec_return_release(atomic_long_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_dec_return_release(v);
+ return raw_atomic_long_dec_return_release(v);
}
static __always_inline long
atomic_long_dec_return_relaxed(atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_dec_return_relaxed(v);
+ return raw_atomic_long_dec_return_relaxed(v);
}
static __always_inline long
@@ -1575,14 +1570,14 @@ atomic_long_fetch_dec(atomic_long_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_dec(v);
+ return raw_atomic_long_fetch_dec(v);
}
static __always_inline long
atomic_long_fetch_dec_acquire(atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_dec_acquire(v);
+ return raw_atomic_long_fetch_dec_acquire(v);
}
static __always_inline long
@@ -1590,21 +1585,21 @@ atomic_long_fetch_dec_release(atomic_long_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_dec_release(v);
+ return raw_atomic_long_fetch_dec_release(v);
}
static __always_inline long
atomic_long_fetch_dec_relaxed(atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_dec_relaxed(v);
+ return raw_atomic_long_fetch_dec_relaxed(v);
}
static __always_inline void
atomic_long_and(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic_long_and(i, v);
+ raw_atomic_long_and(i, v);
}
static __always_inline long
@@ -1612,14 +1607,14 @@ atomic_long_fetch_and(long i, atomic_long_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_and(i, v);
+ return raw_atomic_long_fetch_and(i, v);
}
static __always_inline long
atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_and_acquire(i, v);
+ return raw_atomic_long_fetch_and_acquire(i, v);
}
static __always_inline long
@@ -1627,21 +1622,21 @@ atomic_long_fetch_and_release(long i, atomic_long_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_and_release(i, v);
+ return raw_atomic_long_fetch_and_release(i, v);
}
static __always_inline long
atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_and_relaxed(i, v);
+ return raw_atomic_long_fetch_and_relaxed(i, v);
}
static __always_inline void
atomic_long_andnot(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic_long_andnot(i, v);
+ raw_atomic_long_andnot(i, v);
}
static __always_inline long
@@ -1649,14 +1644,14 @@ atomic_long_fetch_andnot(long i, atomic_long_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_andnot(i, v);
+ return raw_atomic_long_fetch_andnot(i, v);
}
static __always_inline long
atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_andnot_acquire(i, v);
+ return raw_atomic_long_fetch_andnot_acquire(i, v);
}
static __always_inline long
@@ -1664,21 +1659,21 @@ atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_andnot_release(i, v);
+ return raw_atomic_long_fetch_andnot_release(i, v);
}
static __always_inline long
atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_andnot_relaxed(i, v);
+ return raw_atomic_long_fetch_andnot_relaxed(i, v);
}
static __always_inline void
atomic_long_or(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic_long_or(i, v);
+ raw_atomic_long_or(i, v);
}
static __always_inline long
@@ -1686,14 +1681,14 @@ atomic_long_fetch_or(long i, atomic_long_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_or(i, v);
+ return raw_atomic_long_fetch_or(i, v);
}
static __always_inline long
atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_or_acquire(i, v);
+ return raw_atomic_long_fetch_or_acquire(i, v);
}
static __always_inline long
@@ -1701,21 +1696,21 @@ atomic_long_fetch_or_release(long i, atomic_long_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_or_release(i, v);
+ return raw_atomic_long_fetch_or_release(i, v);
}
static __always_inline long
atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_or_relaxed(i, v);
+ return raw_atomic_long_fetch_or_relaxed(i, v);
}
static __always_inline void
atomic_long_xor(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic_long_xor(i, v);
+ raw_atomic_long_xor(i, v);
}
static __always_inline long
@@ -1723,14 +1718,14 @@ atomic_long_fetch_xor(long i, atomic_long_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_xor(i, v);
+ return raw_atomic_long_fetch_xor(i, v);
}
static __always_inline long
atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_xor_acquire(i, v);
+ return raw_atomic_long_fetch_xor_acquire(i, v);
}
static __always_inline long
@@ -1738,14 +1733,14 @@ atomic_long_fetch_xor_release(long i, atomic_long_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_xor_release(i, v);
+ return raw_atomic_long_fetch_xor_release(i, v);
}
static __always_inline long
atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_xor_relaxed(i, v);
+ return raw_atomic_long_fetch_xor_relaxed(i, v);
}
static __always_inline long
@@ -1753,14 +1748,14 @@ atomic_long_xchg(atomic_long_t *v, long i)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_xchg(v, i);
+ return raw_atomic_long_xchg(v, i);
}
static __always_inline long
atomic_long_xchg_acquire(atomic_long_t *v, long i)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_xchg_acquire(v, i);
+ return raw_atomic_long_xchg_acquire(v, i);
}
static __always_inline long
@@ -1768,14 +1763,14 @@ atomic_long_xchg_release(atomic_long_t *v, long i)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_xchg_release(v, i);
+ return raw_atomic_long_xchg_release(v, i);
}
static __always_inline long
atomic_long_xchg_relaxed(atomic_long_t *v, long i)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_xchg_relaxed(v, i);
+ return raw_atomic_long_xchg_relaxed(v, i);
}
static __always_inline long
@@ -1783,14 +1778,14 @@ atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_cmpxchg(v, old, new);
+ return raw_atomic_long_cmpxchg(v, old, new);
}
static __always_inline long
atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_cmpxchg_acquire(v, old, new);
+ return raw_atomic_long_cmpxchg_acquire(v, old, new);
}
static __always_inline long
@@ -1798,14 +1793,14 @@ atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_cmpxchg_release(v, old, new);
+ return raw_atomic_long_cmpxchg_release(v, old, new);
}
static __always_inline long
atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_cmpxchg_relaxed(v, old, new);
+ return raw_atomic_long_cmpxchg_relaxed(v, old, new);
}
static __always_inline bool
@@ -1814,7 +1809,7 @@ atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
instrument_atomic_read_write(old, sizeof(*old));
- return arch_atomic_long_try_cmpxchg(v, old, new);
+ return raw_atomic_long_try_cmpxchg(v, old, new);
}
static __always_inline bool
@@ -1822,7 +1817,7 @@ atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
{
instrument_atomic_read_write(v, sizeof(*v));
instrument_atomic_read_write(old, sizeof(*old));
- return arch_atomic_long_try_cmpxchg_acquire(v, old, new);
+ return raw_atomic_long_try_cmpxchg_acquire(v, old, new);
}
static __always_inline bool
@@ -1831,7 +1826,7 @@ atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
instrument_atomic_read_write(old, sizeof(*old));
- return arch_atomic_long_try_cmpxchg_release(v, old, new);
+ return raw_atomic_long_try_cmpxchg_release(v, old, new);
}
static __always_inline bool
@@ -1839,7 +1834,7 @@ atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
{
instrument_atomic_read_write(v, sizeof(*v));
instrument_atomic_read_write(old, sizeof(*old));
- return arch_atomic_long_try_cmpxchg_relaxed(v, old, new);
+ return raw_atomic_long_try_cmpxchg_relaxed(v, old, new);
}
static __always_inline bool
@@ -1847,7 +1842,7 @@ atomic_long_sub_and_test(long i, atomic_long_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_sub_and_test(i, v);
+ return raw_atomic_long_sub_and_test(i, v);
}
static __always_inline bool
@@ -1855,7 +1850,7 @@ atomic_long_dec_and_test(atomic_long_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_dec_and_test(v);
+ return raw_atomic_long_dec_and_test(v);
}
static __always_inline bool
@@ -1863,7 +1858,7 @@ atomic_long_inc_and_test(atomic_long_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_inc_and_test(v);
+ return raw_atomic_long_inc_and_test(v);
}
static __always_inline bool
@@ -1871,14 +1866,14 @@ atomic_long_add_negative(long i, atomic_long_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_add_negative(i, v);
+ return raw_atomic_long_add_negative(i, v);
}
static __always_inline bool
atomic_long_add_negative_acquire(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_add_negative_acquire(i, v);
+ return raw_atomic_long_add_negative_acquire(i, v);
}
static __always_inline bool
@@ -1886,14 +1881,14 @@ atomic_long_add_negative_release(long i, atomic_long_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_add_negative_release(i, v);
+ return raw_atomic_long_add_negative_release(i, v);
}
static __always_inline bool
atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_add_negative_relaxed(i, v);
+ return raw_atomic_long_add_negative_relaxed(i, v);
}
static __always_inline long
@@ -1901,7 +1896,7 @@ atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_add_unless(v, a, u);
+ return raw_atomic_long_fetch_add_unless(v, a, u);
}
static __always_inline bool
@@ -1909,7 +1904,7 @@ atomic_long_add_unless(atomic_long_t *v, long a, long u)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_add_unless(v, a, u);
+ return raw_atomic_long_add_unless(v, a, u);
}
static __always_inline bool
@@ -1917,7 +1912,7 @@ atomic_long_inc_not_zero(atomic_long_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_inc_not_zero(v);
+ return raw_atomic_long_inc_not_zero(v);
}
static __always_inline bool
@@ -1925,7 +1920,7 @@ atomic_long_inc_unless_negative(atomic_long_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_inc_unless_negative(v);
+ return raw_atomic_long_inc_unless_negative(v);
}
static __always_inline bool
@@ -1933,7 +1928,7 @@ atomic_long_dec_unless_positive(atomic_long_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_dec_unless_positive(v);
+ return raw_atomic_long_dec_unless_positive(v);
}
static __always_inline long
@@ -1941,7 +1936,7 @@ atomic_long_dec_if_positive(atomic_long_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_dec_if_positive(v);
+ return raw_atomic_long_dec_if_positive(v);
}
#define xchg(ptr, ...) \
@@ -1949,14 +1944,14 @@ atomic_long_dec_if_positive(atomic_long_t *v)
typeof(ptr) __ai_ptr = (ptr); \
kcsan_mb(); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
- arch_xchg(__ai_ptr, __VA_ARGS__); \
+ raw_xchg(__ai_ptr, __VA_ARGS__); \
})
#define xchg_acquire(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
- arch_xchg_acquire(__ai_ptr, __VA_ARGS__); \
+ raw_xchg_acquire(__ai_ptr, __VA_ARGS__); \
})
#define xchg_release(ptr, ...) \
@@ -1964,14 +1959,14 @@ atomic_long_dec_if_positive(atomic_long_t *v)
typeof(ptr) __ai_ptr = (ptr); \
kcsan_release(); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
- arch_xchg_release(__ai_ptr, __VA_ARGS__); \
+ raw_xchg_release(__ai_ptr, __VA_ARGS__); \
})
#define xchg_relaxed(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
- arch_xchg_relaxed(__ai_ptr, __VA_ARGS__); \
+ raw_xchg_relaxed(__ai_ptr, __VA_ARGS__); \
})
#define cmpxchg(ptr, ...) \
@@ -1979,14 +1974,14 @@ atomic_long_dec_if_positive(atomic_long_t *v)
typeof(ptr) __ai_ptr = (ptr); \
kcsan_mb(); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
- arch_cmpxchg(__ai_ptr, __VA_ARGS__); \
+ raw_cmpxchg(__ai_ptr, __VA_ARGS__); \
})
#define cmpxchg_acquire(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
- arch_cmpxchg_acquire(__ai_ptr, __VA_ARGS__); \
+ raw_cmpxchg_acquire(__ai_ptr, __VA_ARGS__); \
})
#define cmpxchg_release(ptr, ...) \
@@ -1994,14 +1989,14 @@ atomic_long_dec_if_positive(atomic_long_t *v)
typeof(ptr) __ai_ptr = (ptr); \
kcsan_release(); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
- arch_cmpxchg_release(__ai_ptr, __VA_ARGS__); \
+ raw_cmpxchg_release(__ai_ptr, __VA_ARGS__); \
})
#define cmpxchg_relaxed(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
- arch_cmpxchg_relaxed(__ai_ptr, __VA_ARGS__); \
+ raw_cmpxchg_relaxed(__ai_ptr, __VA_ARGS__); \
})
#define cmpxchg64(ptr, ...) \
@@ -2009,14 +2004,14 @@ atomic_long_dec_if_positive(atomic_long_t *v)
typeof(ptr) __ai_ptr = (ptr); \
kcsan_mb(); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
- arch_cmpxchg64(__ai_ptr, __VA_ARGS__); \
+ raw_cmpxchg64(__ai_ptr, __VA_ARGS__); \
})
#define cmpxchg64_acquire(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
- arch_cmpxchg64_acquire(__ai_ptr, __VA_ARGS__); \
+ raw_cmpxchg64_acquire(__ai_ptr, __VA_ARGS__); \
})
#define cmpxchg64_release(ptr, ...) \
@@ -2024,14 +2019,14 @@ atomic_long_dec_if_positive(atomic_long_t *v)
typeof(ptr) __ai_ptr = (ptr); \
kcsan_release(); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
- arch_cmpxchg64_release(__ai_ptr, __VA_ARGS__); \
+ raw_cmpxchg64_release(__ai_ptr, __VA_ARGS__); \
})
#define cmpxchg64_relaxed(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
- arch_cmpxchg64_relaxed(__ai_ptr, __VA_ARGS__); \
+ raw_cmpxchg64_relaxed(__ai_ptr, __VA_ARGS__); \
})
#define cmpxchg128(ptr, ...) \
@@ -2039,14 +2034,14 @@ atomic_long_dec_if_positive(atomic_long_t *v)
typeof(ptr) __ai_ptr = (ptr); \
kcsan_mb(); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
- arch_cmpxchg128(__ai_ptr, __VA_ARGS__); \
+ raw_cmpxchg128(__ai_ptr, __VA_ARGS__); \
})
#define cmpxchg128_acquire(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
- arch_cmpxchg128_acquire(__ai_ptr, __VA_ARGS__); \
+ raw_cmpxchg128_acquire(__ai_ptr, __VA_ARGS__); \
})
#define cmpxchg128_release(ptr, ...) \
@@ -2054,14 +2049,14 @@ atomic_long_dec_if_positive(atomic_long_t *v)
typeof(ptr) __ai_ptr = (ptr); \
kcsan_release(); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
- arch_cmpxchg128_release(__ai_ptr, __VA_ARGS__); \
+ raw_cmpxchg128_release(__ai_ptr, __VA_ARGS__); \
})
#define cmpxchg128_relaxed(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
- arch_cmpxchg128_relaxed(__ai_ptr, __VA_ARGS__); \
+ raw_cmpxchg128_relaxed(__ai_ptr, __VA_ARGS__); \
})
#define try_cmpxchg(ptr, oldp, ...) \
@@ -2071,7 +2066,7 @@ atomic_long_dec_if_positive(atomic_long_t *v)
kcsan_mb(); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \
- arch_try_cmpxchg(__ai_ptr, __ai_oldp, __VA_ARGS__); \
+ raw_try_cmpxchg(__ai_ptr, __ai_oldp, __VA_ARGS__); \
})
#define try_cmpxchg_acquire(ptr, oldp, ...) \
@@ -2080,7 +2075,7 @@ atomic_long_dec_if_positive(atomic_long_t *v)
typeof(oldp) __ai_oldp = (oldp); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \
- arch_try_cmpxchg_acquire(__ai_ptr, __ai_oldp, __VA_ARGS__); \
+ raw_try_cmpxchg_acquire(__ai_ptr, __ai_oldp, __VA_ARGS__); \
})
#define try_cmpxchg_release(ptr, oldp, ...) \
@@ -2090,7 +2085,7 @@ atomic_long_dec_if_positive(atomic_long_t *v)
kcsan_release(); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \
- arch_try_cmpxchg_release(__ai_ptr, __ai_oldp, __VA_ARGS__); \
+ raw_try_cmpxchg_release(__ai_ptr, __ai_oldp, __VA_ARGS__); \
})
#define try_cmpxchg_relaxed(ptr, oldp, ...) \
@@ -2099,7 +2094,7 @@ atomic_long_dec_if_positive(atomic_long_t *v)
typeof(oldp) __ai_oldp = (oldp); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \
- arch_try_cmpxchg_relaxed(__ai_ptr, __ai_oldp, __VA_ARGS__); \
+ raw_try_cmpxchg_relaxed(__ai_ptr, __ai_oldp, __VA_ARGS__); \
})
#define try_cmpxchg64(ptr, oldp, ...) \
@@ -2109,7 +2104,7 @@ atomic_long_dec_if_positive(atomic_long_t *v)
kcsan_mb(); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \
- arch_try_cmpxchg64(__ai_ptr, __ai_oldp, __VA_ARGS__); \
+ raw_try_cmpxchg64(__ai_ptr, __ai_oldp, __VA_ARGS__); \
})
#define try_cmpxchg64_acquire(ptr, oldp, ...) \
@@ -2118,7 +2113,7 @@ atomic_long_dec_if_positive(atomic_long_t *v)
typeof(oldp) __ai_oldp = (oldp); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \
- arch_try_cmpxchg64_acquire(__ai_ptr, __ai_oldp, __VA_ARGS__); \
+ raw_try_cmpxchg64_acquire(__ai_ptr, __ai_oldp, __VA_ARGS__); \
})
#define try_cmpxchg64_release(ptr, oldp, ...) \
@@ -2128,7 +2123,7 @@ atomic_long_dec_if_positive(atomic_long_t *v)
kcsan_release(); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \
- arch_try_cmpxchg64_release(__ai_ptr, __ai_oldp, __VA_ARGS__); \
+ raw_try_cmpxchg64_release(__ai_ptr, __ai_oldp, __VA_ARGS__); \
})
#define try_cmpxchg64_relaxed(ptr, oldp, ...) \
@@ -2137,7 +2132,7 @@ atomic_long_dec_if_positive(atomic_long_t *v)
typeof(oldp) __ai_oldp = (oldp); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \
- arch_try_cmpxchg64_relaxed(__ai_ptr, __ai_oldp, __VA_ARGS__); \
+ raw_try_cmpxchg64_relaxed(__ai_ptr, __ai_oldp, __VA_ARGS__); \
})
#define try_cmpxchg128(ptr, oldp, ...) \
@@ -2147,7 +2142,7 @@ atomic_long_dec_if_positive(atomic_long_t *v)
kcsan_mb(); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \
- arch_try_cmpxchg128(__ai_ptr, __ai_oldp, __VA_ARGS__); \
+ raw_try_cmpxchg128(__ai_ptr, __ai_oldp, __VA_ARGS__); \
})
#define try_cmpxchg128_acquire(ptr, oldp, ...) \
@@ -2156,7 +2151,7 @@ atomic_long_dec_if_positive(atomic_long_t *v)
typeof(oldp) __ai_oldp = (oldp); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \
- arch_try_cmpxchg128_acquire(__ai_ptr, __ai_oldp, __VA_ARGS__); \
+ raw_try_cmpxchg128_acquire(__ai_ptr, __ai_oldp, __VA_ARGS__); \
})
#define try_cmpxchg128_release(ptr, oldp, ...) \
@@ -2166,7 +2161,7 @@ atomic_long_dec_if_positive(atomic_long_t *v)
kcsan_release(); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \
- arch_try_cmpxchg128_release(__ai_ptr, __ai_oldp, __VA_ARGS__); \
+ raw_try_cmpxchg128_release(__ai_ptr, __ai_oldp, __VA_ARGS__); \
})
#define try_cmpxchg128_relaxed(ptr, oldp, ...) \
@@ -2175,28 +2170,28 @@ atomic_long_dec_if_positive(atomic_long_t *v)
typeof(oldp) __ai_oldp = (oldp); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \
- arch_try_cmpxchg128_relaxed(__ai_ptr, __ai_oldp, __VA_ARGS__); \
+ raw_try_cmpxchg128_relaxed(__ai_ptr, __ai_oldp, __VA_ARGS__); \
})
#define cmpxchg_local(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
- arch_cmpxchg_local(__ai_ptr, __VA_ARGS__); \
+ raw_cmpxchg_local(__ai_ptr, __VA_ARGS__); \
})
#define cmpxchg64_local(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
- arch_cmpxchg64_local(__ai_ptr, __VA_ARGS__); \
+ raw_cmpxchg64_local(__ai_ptr, __VA_ARGS__); \
})
#define cmpxchg128_local(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
- arch_cmpxchg128_local(__ai_ptr, __VA_ARGS__); \
+ raw_cmpxchg128_local(__ai_ptr, __VA_ARGS__); \
})
#define sync_cmpxchg(ptr, ...) \
@@ -2204,7 +2199,7 @@ atomic_long_dec_if_positive(atomic_long_t *v)
typeof(ptr) __ai_ptr = (ptr); \
kcsan_mb(); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
- arch_sync_cmpxchg(__ai_ptr, __VA_ARGS__); \
+ raw_sync_cmpxchg(__ai_ptr, __VA_ARGS__); \
})
#define try_cmpxchg_local(ptr, oldp, ...) \
@@ -2213,7 +2208,7 @@ atomic_long_dec_if_positive(atomic_long_t *v)
typeof(oldp) __ai_oldp = (oldp); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \
- arch_try_cmpxchg_local(__ai_ptr, __ai_oldp, __VA_ARGS__); \
+ raw_try_cmpxchg_local(__ai_ptr, __ai_oldp, __VA_ARGS__); \
})
#define try_cmpxchg64_local(ptr, oldp, ...) \
@@ -2222,7 +2217,7 @@ atomic_long_dec_if_positive(atomic_long_t *v)
typeof(oldp) __ai_oldp = (oldp); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \
- arch_try_cmpxchg64_local(__ai_ptr, __ai_oldp, __VA_ARGS__); \
+ raw_try_cmpxchg64_local(__ai_ptr, __ai_oldp, __VA_ARGS__); \
})
#define try_cmpxchg128_local(ptr, oldp, ...) \
@@ -2231,9 +2226,9 @@ atomic_long_dec_if_positive(atomic_long_t *v)
typeof(oldp) __ai_oldp = (oldp); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \
- arch_try_cmpxchg128_local(__ai_ptr, __ai_oldp, __VA_ARGS__); \
+ raw_try_cmpxchg128_local(__ai_ptr, __ai_oldp, __VA_ARGS__); \
})
#endif /* _LINUX_ATOMIC_INSTRUMENTED_H */
-// 3611991b015450e119bcd7417a9431af7f3ba13c
+// f6502977180430e61c1a7c4e5e665f04f501fb8d
diff --git a/include/linux/atomic/atomic-raw.h b/include/linux/atomic/atomic-raw.h
new file mode 100644
index 0000000000000..83ff0269657e7
--- /dev/null
+++ b/include/linux/atomic/atomic-raw.h
@@ -0,0 +1,1645 @@
+// SPDX-License-Identifier: GPL-2.0
+
+// Generated by scripts/atomic/gen-atomic-raw.sh
+// DO NOT MODIFY THIS FILE DIRECTLY
+
+#ifndef _LINUX_ATOMIC_RAW_H
+#define _LINUX_ATOMIC_RAW_H
+
+static __always_inline int
+raw_atomic_read(const atomic_t *v)
+{
+ return arch_atomic_read(v);
+}
+
+static __always_inline int
+raw_atomic_read_acquire(const atomic_t *v)
+{
+ return arch_atomic_read_acquire(v);
+}
+
+static __always_inline void
+raw_atomic_set(atomic_t *v, int i)
+{
+ arch_atomic_set(v, i);
+}
+
+static __always_inline void
+raw_atomic_set_release(atomic_t *v, int i)
+{
+ arch_atomic_set_release(v, i);
+}
+
+static __always_inline void
+raw_atomic_add(int i, atomic_t *v)
+{
+ arch_atomic_add(i, v);
+}
+
+static __always_inline int
+raw_atomic_add_return(int i, atomic_t *v)
+{
+ return arch_atomic_add_return(i, v);
+}
+
+static __always_inline int
+raw_atomic_add_return_acquire(int i, atomic_t *v)
+{
+ return arch_atomic_add_return_acquire(i, v);
+}
+
+static __always_inline int
+raw_atomic_add_return_release(int i, atomic_t *v)
+{
+ return arch_atomic_add_return_release(i, v);
+}
+
+static __always_inline int
+raw_atomic_add_return_relaxed(int i, atomic_t *v)
+{
+ return arch_atomic_add_return_relaxed(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_add(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_add(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_add_acquire(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_add_acquire(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_add_release(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_add_release(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_add_relaxed(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_add_relaxed(i, v);
+}
+
+static __always_inline void
+raw_atomic_sub(int i, atomic_t *v)
+{
+ arch_atomic_sub(i, v);
+}
+
+static __always_inline int
+raw_atomic_sub_return(int i, atomic_t *v)
+{
+ return arch_atomic_sub_return(i, v);
+}
+
+static __always_inline int
+raw_atomic_sub_return_acquire(int i, atomic_t *v)
+{
+ return arch_atomic_sub_return_acquire(i, v);
+}
+
+static __always_inline int
+raw_atomic_sub_return_release(int i, atomic_t *v)
+{
+ return arch_atomic_sub_return_release(i, v);
+}
+
+static __always_inline int
+raw_atomic_sub_return_relaxed(int i, atomic_t *v)
+{
+ return arch_atomic_sub_return_relaxed(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_sub(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_sub(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_sub_acquire(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_sub_acquire(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_sub_release(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_sub_release(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_sub_relaxed(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_sub_relaxed(i, v);
+}
+
+static __always_inline void
+raw_atomic_inc(atomic_t *v)
+{
+ arch_atomic_inc(v);
+}
+
+static __always_inline int
+raw_atomic_inc_return(atomic_t *v)
+{
+ return arch_atomic_inc_return(v);
+}
+
+static __always_inline int
+raw_atomic_inc_return_acquire(atomic_t *v)
+{
+ return arch_atomic_inc_return_acquire(v);
+}
+
+static __always_inline int
+raw_atomic_inc_return_release(atomic_t *v)
+{
+ return arch_atomic_inc_return_release(v);
+}
+
+static __always_inline int
+raw_atomic_inc_return_relaxed(atomic_t *v)
+{
+ return arch_atomic_inc_return_relaxed(v);
+}
+
+static __always_inline int
+raw_atomic_fetch_inc(atomic_t *v)
+{
+ return arch_atomic_fetch_inc(v);
+}
+
+static __always_inline int
+raw_atomic_fetch_inc_acquire(atomic_t *v)
+{
+ return arch_atomic_fetch_inc_acquire(v);
+}
+
+static __always_inline int
+raw_atomic_fetch_inc_release(atomic_t *v)
+{
+ return arch_atomic_fetch_inc_release(v);
+}
+
+static __always_inline int
+raw_atomic_fetch_inc_relaxed(atomic_t *v)
+{
+ return arch_atomic_fetch_inc_relaxed(v);
+}
+
+static __always_inline void
+raw_atomic_dec(atomic_t *v)
+{
+ arch_atomic_dec(v);
+}
+
+static __always_inline int
+raw_atomic_dec_return(atomic_t *v)
+{
+ return arch_atomic_dec_return(v);
+}
+
+static __always_inline int
+raw_atomic_dec_return_acquire(atomic_t *v)
+{
+ return arch_atomic_dec_return_acquire(v);
+}
+
+static __always_inline int
+raw_atomic_dec_return_release(atomic_t *v)
+{
+ return arch_atomic_dec_return_release(v);
+}
+
+static __always_inline int
+raw_atomic_dec_return_relaxed(atomic_t *v)
+{
+ return arch_atomic_dec_return_relaxed(v);
+}
+
+static __always_inline int
+raw_atomic_fetch_dec(atomic_t *v)
+{
+ return arch_atomic_fetch_dec(v);
+}
+
+static __always_inline int
+raw_atomic_fetch_dec_acquire(atomic_t *v)
+{
+ return arch_atomic_fetch_dec_acquire(v);
+}
+
+static __always_inline int
+raw_atomic_fetch_dec_release(atomic_t *v)
+{
+ return arch_atomic_fetch_dec_release(v);
+}
+
+static __always_inline int
+raw_atomic_fetch_dec_relaxed(atomic_t *v)
+{
+ return arch_atomic_fetch_dec_relaxed(v);
+}
+
+static __always_inline void
+raw_atomic_and(int i, atomic_t *v)
+{
+ arch_atomic_and(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_and(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_and(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_and_acquire(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_and_acquire(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_and_release(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_and_release(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_and_relaxed(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_and_relaxed(i, v);
+}
+
+static __always_inline void
+raw_atomic_andnot(int i, atomic_t *v)
+{
+ arch_atomic_andnot(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_andnot(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_andnot(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_andnot_acquire(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_andnot_release(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_andnot_release(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_andnot_relaxed(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_andnot_relaxed(i, v);
+}
+
+static __always_inline void
+raw_atomic_or(int i, atomic_t *v)
+{
+ arch_atomic_or(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_or(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_or(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_or_acquire(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_or_acquire(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_or_release(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_or_release(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_or_relaxed(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_or_relaxed(i, v);
+}
+
+static __always_inline void
+raw_atomic_xor(int i, atomic_t *v)
+{
+ arch_atomic_xor(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_xor(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_xor(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_xor_acquire(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_xor_acquire(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_xor_release(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_xor_release(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_xor_relaxed(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_xor_relaxed(i, v);
+}
+
+static __always_inline int
+raw_atomic_xchg(atomic_t *v, int i)
+{
+ return arch_atomic_xchg(v, i);
+}
+
+static __always_inline int
+raw_atomic_xchg_acquire(atomic_t *v, int i)
+{
+ return arch_atomic_xchg_acquire(v, i);
+}
+
+static __always_inline int
+raw_atomic_xchg_release(atomic_t *v, int i)
+{
+ return arch_atomic_xchg_release(v, i);
+}
+
+static __always_inline int
+raw_atomic_xchg_relaxed(atomic_t *v, int i)
+{
+ return arch_atomic_xchg_relaxed(v, i);
+}
+
+static __always_inline int
+raw_atomic_cmpxchg(atomic_t *v, int old, int new)
+{
+ return arch_atomic_cmpxchg(v, old, new);
+}
+
+static __always_inline int
+raw_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
+{
+ return arch_atomic_cmpxchg_acquire(v, old, new);
+}
+
+static __always_inline int
+raw_atomic_cmpxchg_release(atomic_t *v, int old, int new)
+{
+ return arch_atomic_cmpxchg_release(v, old, new);
+}
+
+static __always_inline int
+raw_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
+{
+ return arch_atomic_cmpxchg_relaxed(v, old, new);
+}
+
+static __always_inline bool
+raw_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
+{
+ return arch_atomic_try_cmpxchg(v, old, new);
+}
+
+static __always_inline bool
+raw_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
+{
+ return arch_atomic_try_cmpxchg_acquire(v, old, new);
+}
+
+static __always_inline bool
+raw_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
+{
+ return arch_atomic_try_cmpxchg_release(v, old, new);
+}
+
+static __always_inline bool
+raw_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
+{
+ return arch_atomic_try_cmpxchg_relaxed(v, old, new);
+}
+
+static __always_inline bool
+raw_atomic_sub_and_test(int i, atomic_t *v)
+{
+ return arch_atomic_sub_and_test(i, v);
+}
+
+static __always_inline bool
+raw_atomic_dec_and_test(atomic_t *v)
+{
+ return arch_atomic_dec_and_test(v);
+}
+
+static __always_inline bool
+raw_atomic_inc_and_test(atomic_t *v)
+{
+ return arch_atomic_inc_and_test(v);
+}
+
+static __always_inline bool
+raw_atomic_add_negative(int i, atomic_t *v)
+{
+ return arch_atomic_add_negative(i, v);
+}
+
+static __always_inline bool
+raw_atomic_add_negative_acquire(int i, atomic_t *v)
+{
+ return arch_atomic_add_negative_acquire(i, v);
+}
+
+static __always_inline bool
+raw_atomic_add_negative_release(int i, atomic_t *v)
+{
+ return arch_atomic_add_negative_release(i, v);
+}
+
+static __always_inline bool
+raw_atomic_add_negative_relaxed(int i, atomic_t *v)
+{
+ return arch_atomic_add_negative_relaxed(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_add_unless(atomic_t *v, int a, int u)
+{
+ return arch_atomic_fetch_add_unless(v, a, u);
+}
+
+static __always_inline bool
+raw_atomic_add_unless(atomic_t *v, int a, int u)
+{
+ return arch_atomic_add_unless(v, a, u);
+}
+
+static __always_inline bool
+raw_atomic_inc_not_zero(atomic_t *v)
+{
+ return arch_atomic_inc_not_zero(v);
+}
+
+static __always_inline bool
+raw_atomic_inc_unless_negative(atomic_t *v)
+{
+ return arch_atomic_inc_unless_negative(v);
+}
+
+static __always_inline bool
+raw_atomic_dec_unless_positive(atomic_t *v)
+{
+ return arch_atomic_dec_unless_positive(v);
+}
+
+static __always_inline int
+raw_atomic_dec_if_positive(atomic_t *v)
+{
+ return arch_atomic_dec_if_positive(v);
+}
+
+static __always_inline s64
+raw_atomic64_read(const atomic64_t *v)
+{
+ return arch_atomic64_read(v);
+}
+
+static __always_inline s64
+raw_atomic64_read_acquire(const atomic64_t *v)
+{
+ return arch_atomic64_read_acquire(v);
+}
+
+static __always_inline void
+raw_atomic64_set(atomic64_t *v, s64 i)
+{
+ arch_atomic64_set(v, i);
+}
+
+static __always_inline void
+raw_atomic64_set_release(atomic64_t *v, s64 i)
+{
+ arch_atomic64_set_release(v, i);
+}
+
+static __always_inline void
+raw_atomic64_add(s64 i, atomic64_t *v)
+{
+ arch_atomic64_add(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_add_return(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_add_return(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_add_return_acquire(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_add_return_acquire(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_add_return_release(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_add_return_release(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_add_return_relaxed(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_add_return_relaxed(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_add(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_add(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_add_acquire(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_add_release(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_add_release(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_add_relaxed(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_add_relaxed(i, v);
+}
+
+static __always_inline void
+raw_atomic64_sub(s64 i, atomic64_t *v)
+{
+ arch_atomic64_sub(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_sub_return(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_sub_return(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_sub_return_acquire(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_sub_return_acquire(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_sub_return_release(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_sub_return_release(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_sub_return_relaxed(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_sub_return_relaxed(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_sub(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_sub(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_sub_acquire(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_sub_release(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_sub_release(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_sub_relaxed(i, v);
+}
+
+static __always_inline void
+raw_atomic64_inc(atomic64_t *v)
+{
+ arch_atomic64_inc(v);
+}
+
+static __always_inline s64
+raw_atomic64_inc_return(atomic64_t *v)
+{
+ return arch_atomic64_inc_return(v);
+}
+
+static __always_inline s64
+raw_atomic64_inc_return_acquire(atomic64_t *v)
+{
+ return arch_atomic64_inc_return_acquire(v);
+}
+
+static __always_inline s64
+raw_atomic64_inc_return_release(atomic64_t *v)
+{
+ return arch_atomic64_inc_return_release(v);
+}
+
+static __always_inline s64
+raw_atomic64_inc_return_relaxed(atomic64_t *v)
+{
+ return arch_atomic64_inc_return_relaxed(v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_inc(atomic64_t *v)
+{
+ return arch_atomic64_fetch_inc(v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_inc_acquire(atomic64_t *v)
+{
+ return arch_atomic64_fetch_inc_acquire(v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_inc_release(atomic64_t *v)
+{
+ return arch_atomic64_fetch_inc_release(v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_inc_relaxed(atomic64_t *v)
+{
+ return arch_atomic64_fetch_inc_relaxed(v);
+}
+
+static __always_inline void
+raw_atomic64_dec(atomic64_t *v)
+{
+ arch_atomic64_dec(v);
+}
+
+static __always_inline s64
+raw_atomic64_dec_return(atomic64_t *v)
+{
+ return arch_atomic64_dec_return(v);
+}
+
+static __always_inline s64
+raw_atomic64_dec_return_acquire(atomic64_t *v)
+{
+ return arch_atomic64_dec_return_acquire(v);
+}
+
+static __always_inline s64
+raw_atomic64_dec_return_release(atomic64_t *v)
+{
+ return arch_atomic64_dec_return_release(v);
+}
+
+static __always_inline s64
+raw_atomic64_dec_return_relaxed(atomic64_t *v)
+{
+ return arch_atomic64_dec_return_relaxed(v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_dec(atomic64_t *v)
+{
+ return arch_atomic64_fetch_dec(v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_dec_acquire(atomic64_t *v)
+{
+ return arch_atomic64_fetch_dec_acquire(v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_dec_release(atomic64_t *v)
+{
+ return arch_atomic64_fetch_dec_release(v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_dec_relaxed(atomic64_t *v)
+{
+ return arch_atomic64_fetch_dec_relaxed(v);
+}
+
+static __always_inline void
+raw_atomic64_and(s64 i, atomic64_t *v)
+{
+ arch_atomic64_and(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_and(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_and(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_and_acquire(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_and_release(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_and_release(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_and_relaxed(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_and_relaxed(i, v);
+}
+
+static __always_inline void
+raw_atomic64_andnot(s64 i, atomic64_t *v)
+{
+ arch_atomic64_andnot(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_andnot(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_andnot(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_andnot_acquire(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_andnot_release(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_andnot_relaxed(i, v);
+}
+
+static __always_inline void
+raw_atomic64_or(s64 i, atomic64_t *v)
+{
+ arch_atomic64_or(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_or(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_or(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_or_acquire(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_or_release(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_or_release(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_or_relaxed(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_or_relaxed(i, v);
+}
+
+static __always_inline void
+raw_atomic64_xor(s64 i, atomic64_t *v)
+{
+ arch_atomic64_xor(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_xor(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_xor(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_xor_acquire(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_xor_release(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_xor_release(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_xor_relaxed(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_xchg(atomic64_t *v, s64 i)
+{
+ return arch_atomic64_xchg(v, i);
+}
+
+static __always_inline s64
+raw_atomic64_xchg_acquire(atomic64_t *v, s64 i)
+{
+ return arch_atomic64_xchg_acquire(v, i);
+}
+
+static __always_inline s64
+raw_atomic64_xchg_release(atomic64_t *v, s64 i)
+{
+ return arch_atomic64_xchg_release(v, i);
+}
+
+static __always_inline s64
+raw_atomic64_xchg_relaxed(atomic64_t *v, s64 i)
+{
+ return arch_atomic64_xchg_relaxed(v, i);
+}
+
+static __always_inline s64
+raw_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
+{
+ return arch_atomic64_cmpxchg(v, old, new);
+}
+
+static __always_inline s64
+raw_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
+{
+ return arch_atomic64_cmpxchg_acquire(v, old, new);
+}
+
+static __always_inline s64
+raw_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
+{
+ return arch_atomic64_cmpxchg_release(v, old, new);
+}
+
+static __always_inline s64
+raw_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
+{
+ return arch_atomic64_cmpxchg_relaxed(v, old, new);
+}
+
+static __always_inline bool
+raw_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
+{
+ return arch_atomic64_try_cmpxchg(v, old, new);
+}
+
+static __always_inline bool
+raw_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
+{
+ return arch_atomic64_try_cmpxchg_acquire(v, old, new);
+}
+
+static __always_inline bool
+raw_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
+{
+ return arch_atomic64_try_cmpxchg_release(v, old, new);
+}
+
+static __always_inline bool
+raw_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
+{
+ return arch_atomic64_try_cmpxchg_relaxed(v, old, new);
+}
+
+static __always_inline bool
+raw_atomic64_sub_and_test(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_sub_and_test(i, v);
+}
+
+static __always_inline bool
+raw_atomic64_dec_and_test(atomic64_t *v)
+{
+ return arch_atomic64_dec_and_test(v);
+}
+
+static __always_inline bool
+raw_atomic64_inc_and_test(atomic64_t *v)
+{
+ return arch_atomic64_inc_and_test(v);
+}
+
+static __always_inline bool
+raw_atomic64_add_negative(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_add_negative(i, v);
+}
+
+static __always_inline bool
+raw_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_add_negative_acquire(i, v);
+}
+
+static __always_inline bool
+raw_atomic64_add_negative_release(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_add_negative_release(i, v);
+}
+
+static __always_inline bool
+raw_atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_add_negative_relaxed(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
+{
+ return arch_atomic64_fetch_add_unless(v, a, u);
+}
+
+static __always_inline bool
+raw_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
+{
+ return arch_atomic64_add_unless(v, a, u);
+}
+
+static __always_inline bool
+raw_atomic64_inc_not_zero(atomic64_t *v)
+{
+ return arch_atomic64_inc_not_zero(v);
+}
+
+static __always_inline bool
+raw_atomic64_inc_unless_negative(atomic64_t *v)
+{
+ return arch_atomic64_inc_unless_negative(v);
+}
+
+static __always_inline bool
+raw_atomic64_dec_unless_positive(atomic64_t *v)
+{
+ return arch_atomic64_dec_unless_positive(v);
+}
+
+static __always_inline s64
+raw_atomic64_dec_if_positive(atomic64_t *v)
+{
+ return arch_atomic64_dec_if_positive(v);
+}
+
+static __always_inline long
+raw_atomic_long_read(const atomic_long_t *v)
+{
+ return arch_atomic_long_read(v);
+}
+
+static __always_inline long
+raw_atomic_long_read_acquire(const atomic_long_t *v)
+{
+ return arch_atomic_long_read_acquire(v);
+}
+
+static __always_inline void
+raw_atomic_long_set(atomic_long_t *v, long i)
+{
+ arch_atomic_long_set(v, i);
+}
+
+static __always_inline void
+raw_atomic_long_set_release(atomic_long_t *v, long i)
+{
+ arch_atomic_long_set_release(v, i);
+}
+
+static __always_inline void
+raw_atomic_long_add(long i, atomic_long_t *v)
+{
+ arch_atomic_long_add(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_add_return(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_add_return(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_add_return_acquire(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_add_return_acquire(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_add_return_release(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_add_return_release(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_add_return_relaxed(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_add(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_add(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_add_acquire(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_add_release(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_add_release(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_add_relaxed(i, v);
+}
+
+static __always_inline void
+raw_atomic_long_sub(long i, atomic_long_t *v)
+{
+ arch_atomic_long_sub(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_sub_return(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_sub_return(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_sub_return_acquire(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_sub_return_release(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_sub_return_release(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_sub_return_relaxed(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_sub(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_sub(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_sub_acquire(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_sub_release(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_sub_relaxed(i, v);
+}
+
+static __always_inline void
+raw_atomic_long_inc(atomic_long_t *v)
+{
+ arch_atomic_long_inc(v);
+}
+
+static __always_inline long
+raw_atomic_long_inc_return(atomic_long_t *v)
+{
+ return arch_atomic_long_inc_return(v);
+}
+
+static __always_inline long
+raw_atomic_long_inc_return_acquire(atomic_long_t *v)
+{
+ return arch_atomic_long_inc_return_acquire(v);
+}
+
+static __always_inline long
+raw_atomic_long_inc_return_release(atomic_long_t *v)
+{
+ return arch_atomic_long_inc_return_release(v);
+}
+
+static __always_inline long
+raw_atomic_long_inc_return_relaxed(atomic_long_t *v)
+{
+ return arch_atomic_long_inc_return_relaxed(v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_inc(atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_inc(v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_inc_acquire(atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_inc_acquire(v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_inc_release(atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_inc_release(v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_inc_relaxed(v);
+}
+
+static __always_inline void
+raw_atomic_long_dec(atomic_long_t *v)
+{
+ arch_atomic_long_dec(v);
+}
+
+static __always_inline long
+raw_atomic_long_dec_return(atomic_long_t *v)
+{
+ return arch_atomic_long_dec_return(v);
+}
+
+static __always_inline long
+raw_atomic_long_dec_return_acquire(atomic_long_t *v)
+{
+ return arch_atomic_long_dec_return_acquire(v);
+}
+
+static __always_inline long
+raw_atomic_long_dec_return_release(atomic_long_t *v)
+{
+ return arch_atomic_long_dec_return_release(v);
+}
+
+static __always_inline long
+raw_atomic_long_dec_return_relaxed(atomic_long_t *v)
+{
+ return arch_atomic_long_dec_return_relaxed(v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_dec(atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_dec(v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_dec_acquire(atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_dec_acquire(v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_dec_release(atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_dec_release(v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_dec_relaxed(v);
+}
+
+static __always_inline void
+raw_atomic_long_and(long i, atomic_long_t *v)
+{
+ arch_atomic_long_and(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_and(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_and(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_and_acquire(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_and_release(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_and_release(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_and_relaxed(i, v);
+}
+
+static __always_inline void
+raw_atomic_long_andnot(long i, atomic_long_t *v)
+{
+ arch_atomic_long_andnot(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_andnot(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_andnot(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_andnot_acquire(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_andnot_release(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_andnot_relaxed(i, v);
+}
+
+static __always_inline void
+raw_atomic_long_or(long i, atomic_long_t *v)
+{
+ arch_atomic_long_or(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_or(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_or(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_or_acquire(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_or_release(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_or_release(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_or_relaxed(i, v);
+}
+
+static __always_inline void
+raw_atomic_long_xor(long i, atomic_long_t *v)
+{
+ arch_atomic_long_xor(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_xor(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_xor(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_xor_acquire(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_xor_release(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_xor_relaxed(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_xchg(atomic_long_t *v, long i)
+{
+ return arch_atomic_long_xchg(v, i);
+}
+
+static __always_inline long
+raw_atomic_long_xchg_acquire(atomic_long_t *v, long i)
+{
+ return arch_atomic_long_xchg_acquire(v, i);
+}
+
+static __always_inline long
+raw_atomic_long_xchg_release(atomic_long_t *v, long i)
+{
+ return arch_atomic_long_xchg_release(v, i);
+}
+
+static __always_inline long
+raw_atomic_long_xchg_relaxed(atomic_long_t *v, long i)
+{
+ return arch_atomic_long_xchg_relaxed(v, i);
+}
+
+static __always_inline long
+raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
+{
+ return arch_atomic_long_cmpxchg(v, old, new);
+}
+
+static __always_inline long
+raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
+{
+ return arch_atomic_long_cmpxchg_acquire(v, old, new);
+}
+
+static __always_inline long
+raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
+{
+ return arch_atomic_long_cmpxchg_release(v, old, new);
+}
+
+static __always_inline long
+raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
+{
+ return arch_atomic_long_cmpxchg_relaxed(v, old, new);
+}
+
+static __always_inline bool
+raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
+{
+ return arch_atomic_long_try_cmpxchg(v, old, new);
+}
+
+static __always_inline bool
+raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
+{
+ return arch_atomic_long_try_cmpxchg_acquire(v, old, new);
+}
+
+static __always_inline bool
+raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
+{
+ return arch_atomic_long_try_cmpxchg_release(v, old, new);
+}
+
+static __always_inline bool
+raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
+{
+ return arch_atomic_long_try_cmpxchg_relaxed(v, old, new);
+}
+
+static __always_inline bool
+raw_atomic_long_sub_and_test(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_sub_and_test(i, v);
+}
+
+static __always_inline bool
+raw_atomic_long_dec_and_test(atomic_long_t *v)
+{
+ return arch_atomic_long_dec_and_test(v);
+}
+
+static __always_inline bool
+raw_atomic_long_inc_and_test(atomic_long_t *v)
+{
+ return arch_atomic_long_inc_and_test(v);
+}
+
+static __always_inline bool
+raw_atomic_long_add_negative(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_add_negative(i, v);
+}
+
+static __always_inline bool
+raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_add_negative_acquire(i, v);
+}
+
+static __always_inline bool
+raw_atomic_long_add_negative_release(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_add_negative_release(i, v);
+}
+
+static __always_inline bool
+raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_add_negative_relaxed(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
+{
+ return arch_atomic_long_fetch_add_unless(v, a, u);
+}
+
+static __always_inline bool
+raw_atomic_long_add_unless(atomic_long_t *v, long a, long u)
+{
+ return arch_atomic_long_add_unless(v, a, u);
+}
+
+static __always_inline bool
+raw_atomic_long_inc_not_zero(atomic_long_t *v)
+{
+ return arch_atomic_long_inc_not_zero(v);
+}
+
+static __always_inline bool
+raw_atomic_long_inc_unless_negative(atomic_long_t *v)
+{
+ return arch_atomic_long_inc_unless_negative(v);
+}
+
+static __always_inline bool
+raw_atomic_long_dec_unless_positive(atomic_long_t *v)
+{
+ return arch_atomic_long_dec_unless_positive(v);
+}
+
+static __always_inline long
+raw_atomic_long_dec_if_positive(atomic_long_t *v)
+{
+ return arch_atomic_long_dec_if_positive(v);
+}
+
+#define raw_xchg(...) \
+ arch_xchg(__VA_ARGS__)
+
+#define raw_xchg_acquire(...) \
+ arch_xchg_acquire(__VA_ARGS__)
+
+#define raw_xchg_release(...) \
+ arch_xchg_release(__VA_ARGS__)
+
+#define raw_xchg_relaxed(...) \
+ arch_xchg_relaxed(__VA_ARGS__)
+
+#define raw_cmpxchg(...) \
+ arch_cmpxchg(__VA_ARGS__)
+
+#define raw_cmpxchg_acquire(...) \
+ arch_cmpxchg_acquire(__VA_ARGS__)
+
+#define raw_cmpxchg_release(...) \
+ arch_cmpxchg_release(__VA_ARGS__)
+
+#define raw_cmpxchg_relaxed(...) \
+ arch_cmpxchg_relaxed(__VA_ARGS__)
+
+#define raw_cmpxchg64(...) \
+ arch_cmpxchg64(__VA_ARGS__)
+
+#define raw_cmpxchg64_acquire(...) \
+ arch_cmpxchg64_acquire(__VA_ARGS__)
+
+#define raw_cmpxchg64_release(...) \
+ arch_cmpxchg64_release(__VA_ARGS__)
+
+#define raw_cmpxchg64_relaxed(...) \
+ arch_cmpxchg64_relaxed(__VA_ARGS__)
+
+#define raw_cmpxchg128(...) \
+ arch_cmpxchg128(__VA_ARGS__)
+
+#define raw_cmpxchg128_acquire(...) \
+ arch_cmpxchg128_acquire(__VA_ARGS__)
+
+#define raw_cmpxchg128_release(...) \
+ arch_cmpxchg128_release(__VA_ARGS__)
+
+#define raw_cmpxchg128_relaxed(...) \
+ arch_cmpxchg128_relaxed(__VA_ARGS__)
+
+#define raw_try_cmpxchg(...) \
+ arch_try_cmpxchg(__VA_ARGS__)
+
+#define raw_try_cmpxchg_acquire(...) \
+ arch_try_cmpxchg_acquire(__VA_ARGS__)
+
+#define raw_try_cmpxchg_release(...) \
+ arch_try_cmpxchg_release(__VA_ARGS__)
+
+#define raw_try_cmpxchg_relaxed(...) \
+ arch_try_cmpxchg_relaxed(__VA_ARGS__)
+
+#define raw_try_cmpxchg64(...) \
+ arch_try_cmpxchg64(__VA_ARGS__)
+
+#define raw_try_cmpxchg64_acquire(...) \
+ arch_try_cmpxchg64_acquire(__VA_ARGS__)
+
+#define raw_try_cmpxchg64_release(...) \
+ arch_try_cmpxchg64_release(__VA_ARGS__)
+
+#define raw_try_cmpxchg64_relaxed(...) \
+ arch_try_cmpxchg64_relaxed(__VA_ARGS__)
+
+#define raw_try_cmpxchg128(...) \
+ arch_try_cmpxchg128(__VA_ARGS__)
+
+#define raw_try_cmpxchg128_acquire(...) \
+ arch_try_cmpxchg128_acquire(__VA_ARGS__)
+
+#define raw_try_cmpxchg128_release(...) \
+ arch_try_cmpxchg128_release(__VA_ARGS__)
+
+#define raw_try_cmpxchg128_relaxed(...) \
+ arch_try_cmpxchg128_relaxed(__VA_ARGS__)
+
+#define raw_cmpxchg_local(...) \
+ arch_cmpxchg_local(__VA_ARGS__)
+
+#define raw_cmpxchg64_local(...) \
+ arch_cmpxchg64_local(__VA_ARGS__)
+
+#define raw_cmpxchg128_local(...) \
+ arch_cmpxchg128_local(__VA_ARGS__)
+
+#define raw_sync_cmpxchg(...) \
+ arch_sync_cmpxchg(__VA_ARGS__)
+
+#define raw_try_cmpxchg_local(...) \
+ arch_try_cmpxchg_local(__VA_ARGS__)
+
+#define raw_try_cmpxchg64_local(...) \
+ arch_try_cmpxchg64_local(__VA_ARGS__)
+
+#define raw_try_cmpxchg128_local(...) \
+ arch_try_cmpxchg128_local(__VA_ARGS__)
+
+#endif /* _LINUX_ATOMIC_RAW_H */
+// 01d54200571b3857755a07c10074a4fd58cef6b1
diff --git a/scripts/atomic/gen-atomic-instrumented.sh b/scripts/atomic/gen-atomic-instrumented.sh
index 68557bfbbdc5e..93c949aa9e544 100755
--- a/scripts/atomic/gen-atomic-instrumented.sh
+++ b/scripts/atomic/gen-atomic-instrumented.sh
@@ -73,7 +73,7 @@ static __always_inline ${ret}
${atomicname}(${params})
{
${checks}
- ${retstmt}arch_${atomicname}(${args});
+ ${retstmt}raw_${atomicname}(${args});
}
EOF
@@ -105,7 +105,7 @@ EOF
cat <<EOF
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \\
instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \\
- arch_${xchg}${order}(__ai_ptr, __ai_oldp, __VA_ARGS__); \\
+ raw_${xchg}${order}(__ai_ptr, __ai_oldp, __VA_ARGS__); \\
})
EOF
@@ -119,7 +119,7 @@ EOF
[ -n "$kcsan_barrier" ] && printf "\t${kcsan_barrier}; \\\\\n"
cat <<EOF
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \\
- arch_${xchg}${order}(__ai_ptr, __VA_ARGS__); \\
+ raw_${xchg}${order}(__ai_ptr, __VA_ARGS__); \\
})
EOF
@@ -133,15 +133,10 @@ cat << EOF
// DO NOT MODIFY THIS FILE DIRECTLY
/*
- * This file provides wrappers with KASAN instrumentation for atomic operations.
- * To use this functionality an arch's atomic.h file needs to define all
- * atomic operations with arch_ prefix (e.g. arch_atomic_read()) and include
- * this file at the end. This file provides atomic_read() that forwards to
- * arch_atomic_read() for actual atomic operation.
- * Note: if an arch atomic operation is implemented by means of other atomic
- * operations (e.g. atomic_read()/atomic_cmpxchg() loop), then it needs to use
- * arch_ variants (i.e. arch_atomic_read()/arch_atomic_cmpxchg()) to avoid
- * double instrumentation.
+ * This file provides atomic operations with explicit instrumentation (e.g.
+ * KASAN, KCSAN), which should be used unless it is necessary to avoid
+ * instrumentation. Where it is necessary to avoid instrumentation, the
+ * raw_atomic*() operations should be used.
*/
#ifndef _LINUX_ATOMIC_INSTRUMENTED_H
#define _LINUX_ATOMIC_INSTRUMENTED_H
diff --git a/scripts/atomic/gen-atomic-raw.sh b/scripts/atomic/gen-atomic-raw.sh
new file mode 100755
index 0000000000000..ba8d136f30e4c
--- /dev/null
+++ b/scripts/atomic/gen-atomic-raw.sh
@@ -0,0 +1,84 @@
+#!/bin/sh
+# SPDX-License-Identifier: GPL-2.0
+
+ATOMICDIR=$(dirname $0)
+
+. ${ATOMICDIR}/atomic-tbl.sh
+
+#gen_proto_order_variant(meta, pfx, name, sfx, order, atomic, int, arg...)
+gen_proto_order_variant()
+{
+ local meta="$1"; shift
+ local pfx="$1"; shift
+ local name="$1"; shift
+ local sfx="$1"; shift
+ local order="$1"; shift
+ local atomic="$1"; shift
+ local int="$1"; shift
+
+ local atomicname="${atomic}_${pfx}${name}${sfx}${order}"
+
+ local ret="$(gen_ret_type "${meta}" "${int}")"
+ local params="$(gen_params "${int}" "${atomic}" "$@")"
+ local args="$(gen_args "$@")"
+ local retstmt="$(gen_ret_stmt "${meta}")"
+
+cat <<EOF
+static __always_inline ${ret}
+raw_${atomicname}(${params})
+{
+ ${retstmt}arch_${atomicname}(${args});
+}
+
+EOF
+}
+
+gen_xchg()
+{
+ local xchg="$1"; shift
+ local order="$1"; shift
+
+cat <<EOF
+#define raw_${xchg}${order}(...) \\
+ arch_${xchg}${order}(__VA_ARGS__)
+EOF
+}
+
+cat << EOF
+// SPDX-License-Identifier: GPL-2.0
+
+// Generated by $0
+// DO NOT MODIFY THIS FILE DIRECTLY
+
+#ifndef _LINUX_ATOMIC_RAW_H
+#define _LINUX_ATOMIC_RAW_H
+
+EOF
+
+grep '^[a-z]' "$1" | while read name meta args; do
+ gen_proto "${meta}" "${name}" "atomic" "int" ${args}
+done
+
+grep '^[a-z]' "$1" | while read name meta args; do
+ gen_proto "${meta}" "${name}" "atomic64" "s64" ${args}
+done
+
+grep '^[a-z]' "$1" | while read name meta args; do
+ gen_proto "${meta}" "${name}" "atomic_long" "long" ${args}
+done
+
+for xchg in "xchg" "cmpxchg" "cmpxchg64" "cmpxchg128" "try_cmpxchg" "try_cmpxchg64" "try_cmpxchg128"; do
+ for order in "" "_acquire" "_release" "_relaxed"; do
+ gen_xchg "${xchg}" "${order}"
+ printf "\n"
+ done
+done
+
+for xchg in "cmpxchg_local" "cmpxchg64_local" "cmpxchg128_local" "sync_cmpxchg" "try_cmpxchg_local" "try_cmpxchg64_local" "try_cmpxchg128_local"; do
+ gen_xchg "${xchg}" ""
+ printf "\n"
+done
+
+cat <<EOF
+#endif /* _LINUX_ATOMIC_RAW_H */
+EOF
diff --git a/scripts/atomic/gen-atomics.sh b/scripts/atomic/gen-atomics.sh
index 5b98a83076932..631d351f9f1f3 100755
--- a/scripts/atomic/gen-atomics.sh
+++ b/scripts/atomic/gen-atomics.sh
@@ -11,6 +11,7 @@ cat <<EOF |
gen-atomic-instrumented.sh linux/atomic/atomic-instrumented.h
gen-atomic-long.sh linux/atomic/atomic-long.h
gen-atomic-fallback.sh linux/atomic/atomic-arch-fallback.h
+gen-atomic-raw.sh linux/atomic/atomic-raw.h
EOF
while read script header args; do
/bin/sh ${ATOMICDIR}/${script} ${ATOMICTBL} ${args} > ${LINUXDIR}/include/${header}
--
2.30.2
Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of
RELAXED atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in terms
of FULL ordered atomics. Other atomics are optional, and don't exist in
some configurations (e.g. not all architectures implement the 128-bit
cmpxchg ops).
Subsequent patches will require that architectures define a preprocessor
symbol for any atomic (or ordering variant) which is optional. This will
make the fallback ifdeffery more robust, and simplify future changes.
Add the required definitions to arch/x86.
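For context, a brief sketch of the idiom (the actual additions are in the
diff below): the x86 ops here are static inline functions, which the
preprocessor cannot detect on its own, so each definition gains a
same-named macro that the fallback ifdeffery can test:
| /*
|  * Sketch: arch_cmpxchg128() is a static inline function, so an
|  * #ifdef cannot see it unless a same-named macro is also defined.
|  */
| #define arch_cmpxchg128 arch_cmpxchg128
|
| /* Generic/generated ifdeffery can then detect the optional op: */
| #if defined(arch_cmpxchg128)
| /* 128-bit cmpxchg is available in this configuration */
| #endif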
Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/x86/include/asm/cmpxchg_64.h | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/arch/x86/include/asm/cmpxchg_64.h b/arch/x86/include/asm/cmpxchg_64.h
index 3e6e3eef701b3..44b08b53ab32f 100644
--- a/arch/x86/include/asm/cmpxchg_64.h
+++ b/arch/x86/include/asm/cmpxchg_64.h
@@ -45,11 +45,13 @@ static __always_inline u128 arch_cmpxchg128(volatile u128 *ptr, u128 old, u128 n
{
return __arch_cmpxchg128(ptr, old, new, LOCK_PREFIX);
}
+#define arch_cmpxchg128 arch_cmpxchg128
static __always_inline u128 arch_cmpxchg128_local(volatile u128 *ptr, u128 old, u128 new)
{
return __arch_cmpxchg128(ptr, old, new,);
}
+#define arch_cmpxchg128_local arch_cmpxchg128_local
#define __arch_try_cmpxchg128(_ptr, _oldp, _new, _lock) \
({ \
@@ -75,11 +77,13 @@ static __always_inline bool arch_try_cmpxchg128(volatile u128 *ptr, u128 *oldp,
{
return __arch_try_cmpxchg128(ptr, oldp, new, LOCK_PREFIX);
}
+#define arch_try_cmpxchg128 arch_try_cmpxchg128
static __always_inline bool arch_try_cmpxchg128_local(volatile u128 *ptr, u128 *oldp, u128 new)
{
return __arch_try_cmpxchg128(ptr, oldp, new,);
}
+#define arch_try_cmpxchg128_local arch_try_cmpxchg128_local
#define system_has_cmpxchg128() boot_cpu_has(X86_FEATURE_CX16)
--
2.30.2
Currently gen-atomic-long.sh's gen_proto_order_variant() function
combines the pfx/name/sfx/order variables immediately, unlike other
functions in gen-atomic-*.sh.
This is fine today, but subsequent patches will require the individual
pfx/name/sfx/order variables within gen-atomic-long.sh's
gen_proto_order_variant() function. In preparation for this, split the
variables in the style of other gen-atomic-*.sh scripts.
This results in no change to the generated headers, so there should be
no functional change as a result of this patch.
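As an illustration (one op only, values shown are how the split works
out), for the acquire-ordered fetch_add operation the variables are
roughly pfx="fetch_", name="add", sfx="", order="_acquire", so
${atomicname} becomes "fetch_add_acquire" and the script still emits the
same wrapper as before, e.g. for CONFIG_64BIT:
| static __always_inline long
| raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
| {
| 	return raw_atomic64_fetch_add_acquire(i, v);
| }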
Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
---
scripts/atomic/gen-atomic-long.sh | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/scripts/atomic/gen-atomic-long.sh b/scripts/atomic/gen-atomic-long.sh
index 75e91d6da30d3..13832171f7219 100755
--- a/scripts/atomic/gen-atomic-long.sh
+++ b/scripts/atomic/gen-atomic-long.sh
@@ -36,10 +36,15 @@ gen_args_cast()
gen_proto_order_variant()
{
local meta="$1"; shift
- local name="$1$2$3$4"; shift; shift; shift; shift
+ local pfx="$1"; shift
+ local name="$1"; shift
+ local sfx="$1"; shift
+ local order="$1"; shift
local atomic="$1"; shift
local int="$1"; shift
+ local atomicname="${pfx}${name}${sfx}${order}"
+
local ret="$(gen_ret_type "${meta}" "long")"
local params="$(gen_params "long" "atomic_long" "$@")"
local argscast="$(gen_args_cast "${int}" "${atomic}" "$@")"
@@ -47,9 +52,9 @@ gen_proto_order_variant()
cat <<EOF
static __always_inline ${ret}
-raw_atomic_long_${name}(${params})
+raw_atomic_long_${atomicname}(${params})
{
- ${retstmt}raw_${atomic}_${name}(${argscast});
+ ${retstmt}raw_${atomic}_${atomicname}(${argscast});
}
EOF
--
2.30.2
Currently, atomic-long is split into two sections, one defining the
raw_atomic_long_*() ops for CONFIG_64BIT, and one defining the
raw_atomic_long_*() ops for !CONFIG_64BIT.
With many lines elided, this looks like:
| #ifdef CONFIG_64BIT
| ...
| static __always_inline bool
| raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
| {
| return raw_atomic64_try_cmpxchg(v, (s64 *)old, new);
| }
| ...
| #else /* CONFIG_64BIT */
| ...
| static __always_inline bool
| raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
| {
| return raw_atomic_try_cmpxchg(v, (int *)old, new);
| }
| ...
| #endif
The two definitions are spread far apart in the file, and duplicate the
prototype, making it hard to have a legible set of kerneldoc comments.
Make this simpler by defining the C prototype once, and writing the two
definitions inline. For example, the above becomes:
| static __always_inline bool
| raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
| {
| #ifdef CONFIG_64BIT
| return raw_atomic64_try_cmpxchg(v, (s64 *)old, new);
| #else
| return raw_atomic_try_cmpxchg(v, (int *)old, new);
| #endif
| }
As we now always have a single copy of the C prototype wrapping all the
potential definitions, there is an obvious single location for kerneldoc
comments. As a bonus, both the script and the generated file are
somewhat shorter.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
---
include/linux/atomic/atomic-long.h | 857 ++++++++++++-----------------
scripts/atomic/gen-atomic-long.sh | 27 +-
2 files changed, 350 insertions(+), 534 deletions(-)
diff --git a/include/linux/atomic/atomic-long.h b/include/linux/atomic/atomic-long.h
index 92dc82ce1ce6d..63e0b4078ebd5 100644
--- a/include/linux/atomic/atomic-long.h
+++ b/include/linux/atomic/atomic-long.h
@@ -21,1030 +21,855 @@ typedef atomic_t atomic_long_t;
#define atomic_long_cond_read_relaxed atomic_cond_read_relaxed
#endif
-#ifdef CONFIG_64BIT
-
-static __always_inline long
-raw_atomic_long_read(const atomic_long_t *v)
-{
- return raw_atomic64_read(v);
-}
-
-static __always_inline long
-raw_atomic_long_read_acquire(const atomic_long_t *v)
-{
- return raw_atomic64_read_acquire(v);
-}
-
-static __always_inline void
-raw_atomic_long_set(atomic_long_t *v, long i)
-{
- raw_atomic64_set(v, i);
-}
-
-static __always_inline void
-raw_atomic_long_set_release(atomic_long_t *v, long i)
-{
- raw_atomic64_set_release(v, i);
-}
-
-static __always_inline void
-raw_atomic_long_add(long i, atomic_long_t *v)
-{
- raw_atomic64_add(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_add_return(long i, atomic_long_t *v)
-{
- return raw_atomic64_add_return(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_add_return_acquire(long i, atomic_long_t *v)
-{
- return raw_atomic64_add_return_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_add_return_release(long i, atomic_long_t *v)
-{
- return raw_atomic64_add_return_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v)
-{
- return raw_atomic64_add_return_relaxed(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_add(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_add(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_add_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_add_release(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_add_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_add_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic_long_sub(long i, atomic_long_t *v)
-{
- raw_atomic64_sub(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_sub_return(long i, atomic_long_t *v)
-{
- return raw_atomic64_sub_return(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v)
-{
- return raw_atomic64_sub_return_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_sub_return_release(long i, atomic_long_t *v)
-{
- return raw_atomic64_sub_return_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
-{
- return raw_atomic64_sub_return_relaxed(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_sub(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_sub(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_sub_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_sub_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_sub_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic_long_inc(atomic_long_t *v)
-{
- raw_atomic64_inc(v);
-}
-
-static __always_inline long
-raw_atomic_long_inc_return(atomic_long_t *v)
-{
- return raw_atomic64_inc_return(v);
-}
-
-static __always_inline long
-raw_atomic_long_inc_return_acquire(atomic_long_t *v)
-{
- return raw_atomic64_inc_return_acquire(v);
-}
-
-static __always_inline long
-raw_atomic_long_inc_return_release(atomic_long_t *v)
-{
- return raw_atomic64_inc_return_release(v);
-}
-
-static __always_inline long
-raw_atomic_long_inc_return_relaxed(atomic_long_t *v)
-{
- return raw_atomic64_inc_return_relaxed(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_inc(atomic_long_t *v)
-{
- return raw_atomic64_fetch_inc(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_inc_acquire(atomic_long_t *v)
-{
- return raw_atomic64_fetch_inc_acquire(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_inc_release(atomic_long_t *v)
-{
- return raw_atomic64_fetch_inc_release(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v)
-{
- return raw_atomic64_fetch_inc_relaxed(v);
-}
-
-static __always_inline void
-raw_atomic_long_dec(atomic_long_t *v)
-{
- raw_atomic64_dec(v);
-}
-
-static __always_inline long
-raw_atomic_long_dec_return(atomic_long_t *v)
-{
- return raw_atomic64_dec_return(v);
-}
-
-static __always_inline long
-raw_atomic_long_dec_return_acquire(atomic_long_t *v)
-{
- return raw_atomic64_dec_return_acquire(v);
-}
-
-static __always_inline long
-raw_atomic_long_dec_return_release(atomic_long_t *v)
-{
- return raw_atomic64_dec_return_release(v);
-}
-
-static __always_inline long
-raw_atomic_long_dec_return_relaxed(atomic_long_t *v)
-{
- return raw_atomic64_dec_return_relaxed(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_dec(atomic_long_t *v)
-{
- return raw_atomic64_fetch_dec(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_dec_acquire(atomic_long_t *v)
-{
- return raw_atomic64_fetch_dec_acquire(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_dec_release(atomic_long_t *v)
-{
- return raw_atomic64_fetch_dec_release(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v)
-{
- return raw_atomic64_fetch_dec_relaxed(v);
-}
-
-static __always_inline void
-raw_atomic_long_and(long i, atomic_long_t *v)
-{
- raw_atomic64_and(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_and(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_and(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_and_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_and_release(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_and_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_and_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic_long_andnot(long i, atomic_long_t *v)
-{
- raw_atomic64_andnot(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_andnot(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_andnot(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_andnot_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_andnot_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_andnot_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic_long_or(long i, atomic_long_t *v)
-{
- raw_atomic64_or(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_or(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_or(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_or_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_or_release(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_or_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_or_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic_long_xor(long i, atomic_long_t *v)
-{
- raw_atomic64_xor(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_xor(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_xor(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_xor_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_xor_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_xor_relaxed(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_xchg(atomic_long_t *v, long i)
-{
- return raw_atomic64_xchg(v, i);
-}
-
-static __always_inline long
-raw_atomic_long_xchg_acquire(atomic_long_t *v, long i)
-{
- return raw_atomic64_xchg_acquire(v, i);
-}
-
-static __always_inline long
-raw_atomic_long_xchg_release(atomic_long_t *v, long i)
-{
- return raw_atomic64_xchg_release(v, i);
-}
-
-static __always_inline long
-raw_atomic_long_xchg_relaxed(atomic_long_t *v, long i)
-{
- return raw_atomic64_xchg_relaxed(v, i);
-}
-
-static __always_inline long
-raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
-{
- return raw_atomic64_cmpxchg(v, old, new);
-}
-
-static __always_inline long
-raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
-{
- return raw_atomic64_cmpxchg_acquire(v, old, new);
-}
-
-static __always_inline long
-raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
-{
- return raw_atomic64_cmpxchg_release(v, old, new);
-}
-
-static __always_inline long
-raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
-{
- return raw_atomic64_cmpxchg_relaxed(v, old, new);
-}
-
-static __always_inline bool
-raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
-{
- return raw_atomic64_try_cmpxchg(v, (s64 *)old, new);
-}
-
-static __always_inline bool
-raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
-{
- return raw_atomic64_try_cmpxchg_acquire(v, (s64 *)old, new);
-}
-
-static __always_inline bool
-raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
-{
- return raw_atomic64_try_cmpxchg_release(v, (s64 *)old, new);
-}
-
-static __always_inline bool
-raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
-{
- return raw_atomic64_try_cmpxchg_relaxed(v, (s64 *)old, new);
-}
-
-static __always_inline bool
-raw_atomic_long_sub_and_test(long i, atomic_long_t *v)
-{
- return raw_atomic64_sub_and_test(i, v);
-}
-
-static __always_inline bool
-raw_atomic_long_dec_and_test(atomic_long_t *v)
-{
- return raw_atomic64_dec_and_test(v);
-}
-
-static __always_inline bool
-raw_atomic_long_inc_and_test(atomic_long_t *v)
-{
- return raw_atomic64_inc_and_test(v);
-}
-
-static __always_inline bool
-raw_atomic_long_add_negative(long i, atomic_long_t *v)
-{
- return raw_atomic64_add_negative(i, v);
-}
-
-static __always_inline bool
-raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v)
-{
- return raw_atomic64_add_negative_acquire(i, v);
-}
-
-static __always_inline bool
-raw_atomic_long_add_negative_release(long i, atomic_long_t *v)
-{
- return raw_atomic64_add_negative_release(i, v);
-}
-
-static __always_inline bool
-raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
-{
- return raw_atomic64_add_negative_relaxed(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
-{
- return raw_atomic64_fetch_add_unless(v, a, u);
-}
-
-static __always_inline bool
-raw_atomic_long_add_unless(atomic_long_t *v, long a, long u)
-{
- return raw_atomic64_add_unless(v, a, u);
-}
-
-static __always_inline bool
-raw_atomic_long_inc_not_zero(atomic_long_t *v)
-{
- return raw_atomic64_inc_not_zero(v);
-}
-
-static __always_inline bool
-raw_atomic_long_inc_unless_negative(atomic_long_t *v)
-{
- return raw_atomic64_inc_unless_negative(v);
-}
-
-static __always_inline bool
-raw_atomic_long_dec_unless_positive(atomic_long_t *v)
-{
- return raw_atomic64_dec_unless_positive(v);
-}
-
-static __always_inline long
-raw_atomic_long_dec_if_positive(atomic_long_t *v)
-{
- return raw_atomic64_dec_if_positive(v);
-}
-
-#else /* CONFIG_64BIT */
-
static __always_inline long
raw_atomic_long_read(const atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_read(v);
+#else
return raw_atomic_read(v);
+#endif
}
static __always_inline long
raw_atomic_long_read_acquire(const atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_read_acquire(v);
+#else
return raw_atomic_read_acquire(v);
+#endif
}
static __always_inline void
raw_atomic_long_set(atomic_long_t *v, long i)
{
+#ifdef CONFIG_64BIT
+ raw_atomic64_set(v, i);
+#else
raw_atomic_set(v, i);
+#endif
}
static __always_inline void
raw_atomic_long_set_release(atomic_long_t *v, long i)
{
+#ifdef CONFIG_64BIT
+ raw_atomic64_set_release(v, i);
+#else
raw_atomic_set_release(v, i);
+#endif
}
static __always_inline void
raw_atomic_long_add(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ raw_atomic64_add(i, v);
+#else
raw_atomic_add(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_add_return(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_add_return(i, v);
+#else
return raw_atomic_add_return(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_add_return_acquire(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_add_return_acquire(i, v);
+#else
return raw_atomic_add_return_acquire(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_add_return_release(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_add_return_release(i, v);
+#else
return raw_atomic_add_return_release(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_add_return_relaxed(i, v);
+#else
return raw_atomic_add_return_relaxed(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_add(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_add(i, v);
+#else
return raw_atomic_fetch_add(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_add_acquire(i, v);
+#else
return raw_atomic_fetch_add_acquire(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_add_release(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_add_release(i, v);
+#else
return raw_atomic_fetch_add_release(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_add_relaxed(i, v);
+#else
return raw_atomic_fetch_add_relaxed(i, v);
+#endif
}
static __always_inline void
raw_atomic_long_sub(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ raw_atomic64_sub(i, v);
+#else
raw_atomic_sub(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_sub_return(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_sub_return(i, v);
+#else
return raw_atomic_sub_return(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_sub_return_acquire(i, v);
+#else
return raw_atomic_sub_return_acquire(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_sub_return_release(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_sub_return_release(i, v);
+#else
return raw_atomic_sub_return_release(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_sub_return_relaxed(i, v);
+#else
return raw_atomic_sub_return_relaxed(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_sub(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_sub(i, v);
+#else
return raw_atomic_fetch_sub(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_sub_acquire(i, v);
+#else
return raw_atomic_fetch_sub_acquire(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_sub_release(i, v);
+#else
return raw_atomic_fetch_sub_release(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_sub_relaxed(i, v);
+#else
return raw_atomic_fetch_sub_relaxed(i, v);
+#endif
}
static __always_inline void
raw_atomic_long_inc(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ raw_atomic64_inc(v);
+#else
raw_atomic_inc(v);
+#endif
}
static __always_inline long
raw_atomic_long_inc_return(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_inc_return(v);
+#else
return raw_atomic_inc_return(v);
+#endif
}
static __always_inline long
raw_atomic_long_inc_return_acquire(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_inc_return_acquire(v);
+#else
return raw_atomic_inc_return_acquire(v);
+#endif
}
static __always_inline long
raw_atomic_long_inc_return_release(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_inc_return_release(v);
+#else
return raw_atomic_inc_return_release(v);
+#endif
}
static __always_inline long
raw_atomic_long_inc_return_relaxed(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_inc_return_relaxed(v);
+#else
return raw_atomic_inc_return_relaxed(v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_inc(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_inc(v);
+#else
return raw_atomic_fetch_inc(v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_inc_acquire(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_inc_acquire(v);
+#else
return raw_atomic_fetch_inc_acquire(v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_inc_release(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_inc_release(v);
+#else
return raw_atomic_fetch_inc_release(v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_inc_relaxed(v);
+#else
return raw_atomic_fetch_inc_relaxed(v);
+#endif
}
static __always_inline void
raw_atomic_long_dec(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ raw_atomic64_dec(v);
+#else
raw_atomic_dec(v);
+#endif
}
static __always_inline long
raw_atomic_long_dec_return(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_dec_return(v);
+#else
return raw_atomic_dec_return(v);
+#endif
}
static __always_inline long
raw_atomic_long_dec_return_acquire(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_dec_return_acquire(v);
+#else
return raw_atomic_dec_return_acquire(v);
+#endif
}
static __always_inline long
raw_atomic_long_dec_return_release(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_dec_return_release(v);
+#else
return raw_atomic_dec_return_release(v);
+#endif
}
static __always_inline long
raw_atomic_long_dec_return_relaxed(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_dec_return_relaxed(v);
+#else
return raw_atomic_dec_return_relaxed(v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_dec(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_dec(v);
+#else
return raw_atomic_fetch_dec(v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_dec_acquire(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_dec_acquire(v);
+#else
return raw_atomic_fetch_dec_acquire(v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_dec_release(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_dec_release(v);
+#else
return raw_atomic_fetch_dec_release(v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_dec_relaxed(v);
+#else
return raw_atomic_fetch_dec_relaxed(v);
+#endif
}
static __always_inline void
raw_atomic_long_and(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ raw_atomic64_and(i, v);
+#else
raw_atomic_and(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_and(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_and(i, v);
+#else
return raw_atomic_fetch_and(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_and_acquire(i, v);
+#else
return raw_atomic_fetch_and_acquire(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_and_release(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_and_release(i, v);
+#else
return raw_atomic_fetch_and_release(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_and_relaxed(i, v);
+#else
return raw_atomic_fetch_and_relaxed(i, v);
+#endif
}
static __always_inline void
raw_atomic_long_andnot(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ raw_atomic64_andnot(i, v);
+#else
raw_atomic_andnot(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_andnot(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_andnot(i, v);
+#else
return raw_atomic_fetch_andnot(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_andnot_acquire(i, v);
+#else
return raw_atomic_fetch_andnot_acquire(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_andnot_release(i, v);
+#else
return raw_atomic_fetch_andnot_release(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_andnot_relaxed(i, v);
+#else
return raw_atomic_fetch_andnot_relaxed(i, v);
+#endif
}
static __always_inline void
raw_atomic_long_or(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ raw_atomic64_or(i, v);
+#else
raw_atomic_or(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_or(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_or(i, v);
+#else
return raw_atomic_fetch_or(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_or_acquire(i, v);
+#else
return raw_atomic_fetch_or_acquire(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_or_release(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_or_release(i, v);
+#else
return raw_atomic_fetch_or_release(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_or_relaxed(i, v);
+#else
return raw_atomic_fetch_or_relaxed(i, v);
+#endif
}
static __always_inline void
raw_atomic_long_xor(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ raw_atomic64_xor(i, v);
+#else
raw_atomic_xor(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_xor(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_xor(i, v);
+#else
return raw_atomic_fetch_xor(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_xor_acquire(i, v);
+#else
return raw_atomic_fetch_xor_acquire(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_xor_release(i, v);
+#else
return raw_atomic_fetch_xor_release(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_xor_relaxed(i, v);
+#else
return raw_atomic_fetch_xor_relaxed(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_xchg(atomic_long_t *v, long i)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_xchg(v, i);
+#else
return raw_atomic_xchg(v, i);
+#endif
}
static __always_inline long
raw_atomic_long_xchg_acquire(atomic_long_t *v, long i)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_xchg_acquire(v, i);
+#else
return raw_atomic_xchg_acquire(v, i);
+#endif
}
static __always_inline long
raw_atomic_long_xchg_release(atomic_long_t *v, long i)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_xchg_release(v, i);
+#else
return raw_atomic_xchg_release(v, i);
+#endif
}
static __always_inline long
raw_atomic_long_xchg_relaxed(atomic_long_t *v, long i)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_xchg_relaxed(v, i);
+#else
return raw_atomic_xchg_relaxed(v, i);
+#endif
}
static __always_inline long
raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_cmpxchg(v, old, new);
+#else
return raw_atomic_cmpxchg(v, old, new);
+#endif
}
static __always_inline long
raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_cmpxchg_acquire(v, old, new);
+#else
return raw_atomic_cmpxchg_acquire(v, old, new);
+#endif
}
static __always_inline long
raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_cmpxchg_release(v, old, new);
+#else
return raw_atomic_cmpxchg_release(v, old, new);
+#endif
}
static __always_inline long
raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_cmpxchg_relaxed(v, old, new);
+#else
return raw_atomic_cmpxchg_relaxed(v, old, new);
+#endif
}
static __always_inline bool
raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_try_cmpxchg(v, (s64 *)old, new);
+#else
return raw_atomic_try_cmpxchg(v, (int *)old, new);
+#endif
}
static __always_inline bool
raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_try_cmpxchg_acquire(v, (s64 *)old, new);
+#else
return raw_atomic_try_cmpxchg_acquire(v, (int *)old, new);
+#endif
}
static __always_inline bool
raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_try_cmpxchg_release(v, (s64 *)old, new);
+#else
return raw_atomic_try_cmpxchg_release(v, (int *)old, new);
+#endif
}
static __always_inline bool
raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_try_cmpxchg_relaxed(v, (s64 *)old, new);
+#else
return raw_atomic_try_cmpxchg_relaxed(v, (int *)old, new);
+#endif
}
static __always_inline bool
raw_atomic_long_sub_and_test(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_sub_and_test(i, v);
+#else
return raw_atomic_sub_and_test(i, v);
+#endif
}
static __always_inline bool
raw_atomic_long_dec_and_test(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_dec_and_test(v);
+#else
return raw_atomic_dec_and_test(v);
+#endif
}
static __always_inline bool
raw_atomic_long_inc_and_test(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_inc_and_test(v);
+#else
return raw_atomic_inc_and_test(v);
+#endif
}
static __always_inline bool
raw_atomic_long_add_negative(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_add_negative(i, v);
+#else
return raw_atomic_add_negative(i, v);
+#endif
}
static __always_inline bool
raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_add_negative_acquire(i, v);
+#else
return raw_atomic_add_negative_acquire(i, v);
+#endif
}
static __always_inline bool
raw_atomic_long_add_negative_release(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_add_negative_release(i, v);
+#else
return raw_atomic_add_negative_release(i, v);
+#endif
}
static __always_inline bool
raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_add_negative_relaxed(i, v);
+#else
return raw_atomic_add_negative_relaxed(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_add_unless(v, a, u);
+#else
return raw_atomic_fetch_add_unless(v, a, u);
+#endif
}
static __always_inline bool
raw_atomic_long_add_unless(atomic_long_t *v, long a, long u)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_add_unless(v, a, u);
+#else
return raw_atomic_add_unless(v, a, u);
+#endif
}
static __always_inline bool
raw_atomic_long_inc_not_zero(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_inc_not_zero(v);
+#else
return raw_atomic_inc_not_zero(v);
+#endif
}
static __always_inline bool
raw_atomic_long_inc_unless_negative(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_inc_unless_negative(v);
+#else
return raw_atomic_inc_unless_negative(v);
+#endif
}
static __always_inline bool
raw_atomic_long_dec_unless_positive(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_dec_unless_positive(v);
+#else
return raw_atomic_dec_unless_positive(v);
+#endif
}
static __always_inline long
raw_atomic_long_dec_if_positive(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_dec_if_positive(v);
+#else
return raw_atomic_dec_if_positive(v);
+#endif
}
-#endif /* CONFIG_64BIT */
#endif /* _LINUX_ATOMIC_LONG_H */
-// 108784846d3bbbb201b8dabe621c5dc30b216206
+// ad09f849db0db5b30c82e497eeb9056a394c5f22
diff --git a/scripts/atomic/gen-atomic-long.sh b/scripts/atomic/gen-atomic-long.sh
index 13832171f7219..af27a71b37ef1 100755
--- a/scripts/atomic/gen-atomic-long.sh
+++ b/scripts/atomic/gen-atomic-long.sh
@@ -32,7 +32,7 @@ gen_args_cast()
done
}
-#gen_proto_order_variant(meta, pfx, name, sfx, order, atomic, int, arg...)
+#gen_proto_order_variant(meta, pfx, name, sfx, order, arg...)
gen_proto_order_variant()
{
local meta="$1"; shift
@@ -40,21 +40,24 @@ gen_proto_order_variant()
local name="$1"; shift
local sfx="$1"; shift
local order="$1"; shift
- local atomic="$1"; shift
- local int="$1"; shift
local atomicname="${pfx}${name}${sfx}${order}"
local ret="$(gen_ret_type "${meta}" "long")"
local params="$(gen_params "long" "atomic_long" "$@")"
- local argscast="$(gen_args_cast "${int}" "${atomic}" "$@")"
+ local argscast_32="$(gen_args_cast "int" "atomic" "$@")"
+ local argscast_64="$(gen_args_cast "s64" "atomic64" "$@")"
local retstmt="$(gen_ret_stmt "${meta}")"
cat <<EOF
static __always_inline ${ret}
raw_atomic_long_${atomicname}(${params})
{
- ${retstmt}raw_${atomic}_${atomicname}(${argscast});
+#ifdef CONFIG_64BIT
+ ${retstmt}raw_atomic64_${atomicname}(${argscast_64});
+#else
+ ${retstmt}raw_atomic_${atomicname}(${argscast_32});
+#endif
}
EOF
@@ -84,24 +87,12 @@ typedef atomic_t atomic_long_t;
#define atomic_long_cond_read_relaxed atomic_cond_read_relaxed
#endif
-#ifdef CONFIG_64BIT
-
-EOF
-
-grep '^[a-z]' "$1" | while read name meta args; do
- gen_proto "${meta}" "${name}" "atomic64" "s64" ${args}
-done
-
-cat <<EOF
-#else /* CONFIG_64BIT */
-
EOF
grep '^[a-z]' "$1" | while read name meta args; do
- gen_proto "${meta}" "${name}" "atomic" "int" ${args}
+ gen_proto "${meta}" "${name}" ${args}
done
cat <<EOF
-#endif /* CONFIG_64BIT */
#endif /* _LINUX_ATOMIC_LONG_H */
EOF
--
2.30.2
Currently a subset of the fallback templates have kerneldoc comments,
resulting in a haphazard set of generated kerneldoc comments as only
some operations have fallback templates to begin with.
We'd like to generate more consistent kerneldoc comments, and to do so
we'll need to restructure the way the fallback code is generated.
To minimize churn and to make it easier to restructure the fallback
code, this patch removes the existing kerneldoc comments from the
fallback templates. We can add new kerneldoc comments in subsequent
patches.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
---
include/linux/atomic/atomic-arch-fallback.h | 166 +-------------------
scripts/atomic/fallbacks/add_negative | 8 -
scripts/atomic/fallbacks/add_unless | 9 --
scripts/atomic/fallbacks/dec_and_test | 8 -
scripts/atomic/fallbacks/fetch_add_unless | 9 --
scripts/atomic/fallbacks/inc_and_test | 8 -
scripts/atomic/fallbacks/inc_not_zero | 7 -
scripts/atomic/fallbacks/sub_and_test | 9 --
8 files changed, 1 insertion(+), 223 deletions(-)
diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index 1722ddb6f17e0..3ce4cb5e790c5 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -1272,15 +1272,6 @@ arch_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
#endif /* arch_atomic_try_cmpxchg_relaxed */
#ifndef arch_atomic_sub_and_test
-/**
- * arch_atomic_sub_and_test - subtract value from variable and test result
- * @i: integer value to subtract
- * @v: pointer of type atomic_t
- *
- * Atomically subtracts @i from @v and returns
- * true if the result is zero, or false for all
- * other cases.
- */
static __always_inline bool
arch_atomic_sub_and_test(int i, atomic_t *v)
{
@@ -1290,14 +1281,6 @@ arch_atomic_sub_and_test(int i, atomic_t *v)
#endif
#ifndef arch_atomic_dec_and_test
-/**
- * arch_atomic_dec_and_test - decrement and test
- * @v: pointer of type atomic_t
- *
- * Atomically decrements @v by 1 and
- * returns true if the result is 0, or false for all other
- * cases.
- */
static __always_inline bool
arch_atomic_dec_and_test(atomic_t *v)
{
@@ -1307,14 +1290,6 @@ arch_atomic_dec_and_test(atomic_t *v)
#endif
#ifndef arch_atomic_inc_and_test
-/**
- * arch_atomic_inc_and_test - increment and test
- * @v: pointer of type atomic_t
- *
- * Atomically increments @v by 1
- * and returns true if the result is zero, or false for all
- * other cases.
- */
static __always_inline bool
arch_atomic_inc_and_test(atomic_t *v)
{
@@ -1331,14 +1306,6 @@ arch_atomic_inc_and_test(atomic_t *v)
#endif /* arch_atomic_add_negative */
#ifndef arch_atomic_add_negative
-/**
- * arch_atomic_add_negative - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
static __always_inline bool
arch_atomic_add_negative(int i, atomic_t *v)
{
@@ -1348,14 +1315,6 @@ arch_atomic_add_negative(int i, atomic_t *v)
#endif
#ifndef arch_atomic_add_negative_acquire
-/**
- * arch_atomic_add_negative_acquire - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
static __always_inline bool
arch_atomic_add_negative_acquire(int i, atomic_t *v)
{
@@ -1365,14 +1324,6 @@ arch_atomic_add_negative_acquire(int i, atomic_t *v)
#endif
#ifndef arch_atomic_add_negative_release
-/**
- * arch_atomic_add_negative_release - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
static __always_inline bool
arch_atomic_add_negative_release(int i, atomic_t *v)
{
@@ -1382,14 +1333,6 @@ arch_atomic_add_negative_release(int i, atomic_t *v)
#endif
#ifndef arch_atomic_add_negative_relaxed
-/**
- * arch_atomic_add_negative_relaxed - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
static __always_inline bool
arch_atomic_add_negative_relaxed(int i, atomic_t *v)
{
@@ -1437,15 +1380,6 @@ arch_atomic_add_negative(int i, atomic_t *v)
#endif /* arch_atomic_add_negative_relaxed */
#ifndef arch_atomic_fetch_add_unless
-/**
- * arch_atomic_fetch_add_unless - add unless the number is already a given value
- * @v: pointer of type atomic_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, so long as @v was not already @u.
- * Returns original value of @v
- */
static __always_inline int
arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
{
@@ -1462,15 +1396,6 @@ arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
#endif
#ifndef arch_atomic_add_unless
-/**
- * arch_atomic_add_unless - add unless the number is already a given value
- * @v: pointer of type atomic_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, if @v was not already @u.
- * Returns true if the addition was done.
- */
static __always_inline bool
arch_atomic_add_unless(atomic_t *v, int a, int u)
{
@@ -1480,13 +1405,6 @@ arch_atomic_add_unless(atomic_t *v, int a, int u)
#endif
#ifndef arch_atomic_inc_not_zero
-/**
- * arch_atomic_inc_not_zero - increment unless the number is zero
- * @v: pointer of type atomic_t
- *
- * Atomically increments @v by 1, if @v is non-zero.
- * Returns true if the increment was done.
- */
static __always_inline bool
arch_atomic_inc_not_zero(atomic_t *v)
{
@@ -2488,15 +2406,6 @@ arch_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
#endif /* arch_atomic64_try_cmpxchg_relaxed */
#ifndef arch_atomic64_sub_and_test
-/**
- * arch_atomic64_sub_and_test - subtract value from variable and test result
- * @i: integer value to subtract
- * @v: pointer of type atomic64_t
- *
- * Atomically subtracts @i from @v and returns
- * true if the result is zero, or false for all
- * other cases.
- */
static __always_inline bool
arch_atomic64_sub_and_test(s64 i, atomic64_t *v)
{
@@ -2506,14 +2415,6 @@ arch_atomic64_sub_and_test(s64 i, atomic64_t *v)
#endif
#ifndef arch_atomic64_dec_and_test
-/**
- * arch_atomic64_dec_and_test - decrement and test
- * @v: pointer of type atomic64_t
- *
- * Atomically decrements @v by 1 and
- * returns true if the result is 0, or false for all other
- * cases.
- */
static __always_inline bool
arch_atomic64_dec_and_test(atomic64_t *v)
{
@@ -2523,14 +2424,6 @@ arch_atomic64_dec_and_test(atomic64_t *v)
#endif
#ifndef arch_atomic64_inc_and_test
-/**
- * arch_atomic64_inc_and_test - increment and test
- * @v: pointer of type atomic64_t
- *
- * Atomically increments @v by 1
- * and returns true if the result is zero, or false for all
- * other cases.
- */
static __always_inline bool
arch_atomic64_inc_and_test(atomic64_t *v)
{
@@ -2547,14 +2440,6 @@ arch_atomic64_inc_and_test(atomic64_t *v)
#endif /* arch_atomic64_add_negative */
#ifndef arch_atomic64_add_negative
-/**
- * arch_atomic64_add_negative - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic64_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
static __always_inline bool
arch_atomic64_add_negative(s64 i, atomic64_t *v)
{
@@ -2564,14 +2449,6 @@ arch_atomic64_add_negative(s64 i, atomic64_t *v)
#endif
#ifndef arch_atomic64_add_negative_acquire
-/**
- * arch_atomic64_add_negative_acquire - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic64_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
static __always_inline bool
arch_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
{
@@ -2581,14 +2458,6 @@ arch_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
#endif
#ifndef arch_atomic64_add_negative_release
-/**
- * arch_atomic64_add_negative_release - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic64_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
static __always_inline bool
arch_atomic64_add_negative_release(s64 i, atomic64_t *v)
{
@@ -2598,14 +2467,6 @@ arch_atomic64_add_negative_release(s64 i, atomic64_t *v)
#endif
#ifndef arch_atomic64_add_negative_relaxed
-/**
- * arch_atomic64_add_negative_relaxed - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic64_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
static __always_inline bool
arch_atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
{
@@ -2653,15 +2514,6 @@ arch_atomic64_add_negative(s64 i, atomic64_t *v)
#endif /* arch_atomic64_add_negative_relaxed */
#ifndef arch_atomic64_fetch_add_unless
-/**
- * arch_atomic64_fetch_add_unless - add unless the number is already a given value
- * @v: pointer of type atomic64_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, so long as @v was not already @u.
- * Returns original value of @v
- */
static __always_inline s64
arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
{
@@ -2678,15 +2530,6 @@ arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
#endif
#ifndef arch_atomic64_add_unless
-/**
- * arch_atomic64_add_unless - add unless the number is already a given value
- * @v: pointer of type atomic64_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, if @v was not already @u.
- * Returns true if the addition was done.
- */
static __always_inline bool
arch_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
{
@@ -2696,13 +2539,6 @@ arch_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
#endif
#ifndef arch_atomic64_inc_not_zero
-/**
- * arch_atomic64_inc_not_zero - increment unless the number is zero
- * @v: pointer of type atomic64_t
- *
- * Atomically increments @v by 1, if @v is non-zero.
- * Returns true if the increment was done.
- */
static __always_inline bool
arch_atomic64_inc_not_zero(atomic64_t *v)
{
@@ -2761,4 +2597,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v)
#endif
#endif /* _LINUX_ATOMIC_FALLBACK_H */
-// 52dfc6fe4a2e7234bbd2aa3e16a377c1db793a53
+// 9f0fd6ed53267c6ec64e36cd18e6fd8df57ea277
diff --git a/scripts/atomic/fallbacks/add_negative b/scripts/atomic/fallbacks/add_negative
index e5980abf5904e..d0bd2dfbb244c 100755
--- a/scripts/atomic/fallbacks/add_negative
+++ b/scripts/atomic/fallbacks/add_negative
@@ -1,12 +1,4 @@
cat <<EOF
-/**
- * arch_${atomic}_add_negative${order} - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type ${atomic}_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
static __always_inline bool
arch_${atomic}_add_negative${order}(${int} i, ${atomic}_t *v)
{
diff --git a/scripts/atomic/fallbacks/add_unless b/scripts/atomic/fallbacks/add_unless
index 9e5159c2ccfc8..cf79b9da38dbb 100755
--- a/scripts/atomic/fallbacks/add_unless
+++ b/scripts/atomic/fallbacks/add_unless
@@ -1,13 +1,4 @@
cat << EOF
-/**
- * arch_${atomic}_add_unless - add unless the number is already a given value
- * @v: pointer of type ${atomic}_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, if @v was not already @u.
- * Returns true if the addition was done.
- */
static __always_inline bool
arch_${atomic}_add_unless(${atomic}_t *v, ${int} a, ${int} u)
{
diff --git a/scripts/atomic/fallbacks/dec_and_test b/scripts/atomic/fallbacks/dec_and_test
index 8549f359bd0ef..3f6b6a8b47733 100755
--- a/scripts/atomic/fallbacks/dec_and_test
+++ b/scripts/atomic/fallbacks/dec_and_test
@@ -1,12 +1,4 @@
cat <<EOF
-/**
- * arch_${atomic}_dec_and_test - decrement and test
- * @v: pointer of type ${atomic}_t
- *
- * Atomically decrements @v by 1 and
- * returns true if the result is 0, or false for all other
- * cases.
- */
static __always_inline bool
arch_${atomic}_dec_and_test(${atomic}_t *v)
{
diff --git a/scripts/atomic/fallbacks/fetch_add_unless b/scripts/atomic/fallbacks/fetch_add_unless
index 68ce13c8b9dad..81d2834f03d23 100755
--- a/scripts/atomic/fallbacks/fetch_add_unless
+++ b/scripts/atomic/fallbacks/fetch_add_unless
@@ -1,13 +1,4 @@
cat << EOF
-/**
- * arch_${atomic}_fetch_add_unless - add unless the number is already a given value
- * @v: pointer of type ${atomic}_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, so long as @v was not already @u.
- * Returns original value of @v
- */
static __always_inline ${int}
arch_${atomic}_fetch_add_unless(${atomic}_t *v, ${int} a, ${int} u)
{
diff --git a/scripts/atomic/fallbacks/inc_and_test b/scripts/atomic/fallbacks/inc_and_test
index 0cf23fe1efb85..c726a6d0634d3 100755
--- a/scripts/atomic/fallbacks/inc_and_test
+++ b/scripts/atomic/fallbacks/inc_and_test
@@ -1,12 +1,4 @@
cat <<EOF
-/**
- * arch_${atomic}_inc_and_test - increment and test
- * @v: pointer of type ${atomic}_t
- *
- * Atomically increments @v by 1
- * and returns true if the result is zero, or false for all
- * other cases.
- */
static __always_inline bool
arch_${atomic}_inc_and_test(${atomic}_t *v)
{
diff --git a/scripts/atomic/fallbacks/inc_not_zero b/scripts/atomic/fallbacks/inc_not_zero
index ed8a1f5626675..97603591aac2a 100755
--- a/scripts/atomic/fallbacks/inc_not_zero
+++ b/scripts/atomic/fallbacks/inc_not_zero
@@ -1,11 +1,4 @@
cat <<EOF
-/**
- * arch_${atomic}_inc_not_zero - increment unless the number is zero
- * @v: pointer of type ${atomic}_t
- *
- * Atomically increments @v by 1, if @v is non-zero.
- * Returns true if the increment was done.
- */
static __always_inline bool
arch_${atomic}_inc_not_zero(${atomic}_t *v)
{
diff --git a/scripts/atomic/fallbacks/sub_and_test b/scripts/atomic/fallbacks/sub_and_test
index 260f37341c888..da8a049c9b02b 100755
--- a/scripts/atomic/fallbacks/sub_and_test
+++ b/scripts/atomic/fallbacks/sub_and_test
@@ -1,13 +1,4 @@
cat <<EOF
-/**
- * arch_${atomic}_sub_and_test - subtract value from variable and test result
- * @i: integer value to subtract
- * @v: pointer of type ${atomic}_t
- *
- * Atomically subtracts @i from @v and returns
- * true if the result is zero, or false for all
- * other cases.
- */
static __always_inline bool
arch_${atomic}_sub_and_test(${int} i, ${atomic}_t *v)
{
--
2.30.2
Most architectures define the atomic/atomic64 xchg and cmpxchg
operations in terms of arch_xchg and arch_cmpxchg respectively.
Add fallbacks for these cases and remove the trivial cases from arch
code. On some architectures the existing definitions are kept as these
are used to build other arch_atomic*() operations.
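For illustration, the generated fallback for each op takes roughly the
following shape (a sketch; the real definitions come from the new
cmpxchg/xchg templates, with one variant per ordering). An architecture
that keeps its own implementation provides a matching #define, so the
#ifndef block is skipped:
| #ifndef arch_atomic_cmpxchg
| static __always_inline int
| arch_atomic_cmpxchg(atomic_t *v, int old, int new)
| {
| 	/* fall back to the arch's plain cmpxchg on the counter field */
| 	return arch_cmpxchg(&v->counter, old, new);
| }
| #define arch_atomic_cmpxchg arch_atomic_cmpxchg
| #endif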
Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/alpha/include/asm/atomic.h | 10 --
arch/arc/include/asm/atomic.h | 24 ---
arch/arc/include/asm/atomic64-arcv2.h | 2 +
arch/arm/include/asm/atomic.h | 3 +-
arch/arm64/include/asm/atomic.h | 28 ----
arch/csky/include/asm/atomic.h | 35 -----
arch/hexagon/include/asm/atomic.h | 6 -
arch/ia64/include/asm/atomic.h | 7 -
arch/loongarch/include/asm/atomic.h | 7 -
arch/m68k/include/asm/atomic.h | 9 +-
arch/mips/include/asm/atomic.h | 11 --
arch/openrisc/include/asm/atomic.h | 3 -
arch/parisc/include/asm/atomic.h | 9 --
arch/powerpc/include/asm/atomic.h | 24 ---
arch/riscv/include/asm/atomic.h | 72 ---------
arch/sh/include/asm/atomic.h | 3 -
arch/sparc/include/asm/atomic_32.h | 2 +
arch/sparc/include/asm/atomic_64.h | 11 --
arch/xtensa/include/asm/atomic.h | 3 -
include/asm-generic/atomic.h | 3 -
include/linux/atomic/atomic-arch-fallback.h | 158 +++++++++++++++++++-
scripts/atomic/fallbacks/cmpxchg | 7 +
scripts/atomic/fallbacks/xchg | 7 +
23 files changed, 179 insertions(+), 265 deletions(-)
create mode 100755 scripts/atomic/fallbacks/cmpxchg
create mode 100755 scripts/atomic/fallbacks/xchg
diff --git a/arch/alpha/include/asm/atomic.h b/arch/alpha/include/asm/atomic.h
index f2861a43a61ef..ec8ab552c527a 100644
--- a/arch/alpha/include/asm/atomic.h
+++ b/arch/alpha/include/asm/atomic.h
@@ -200,16 +200,6 @@ ATOMIC_OPS(xor, xor)
#undef ATOMIC_OP_RETURN
#undef ATOMIC_OP
-#define arch_atomic64_cmpxchg(v, old, new) \
- (arch_cmpxchg(&((v)->counter), old, new))
-#define arch_atomic64_xchg(v, new) \
- (arch_xchg(&((v)->counter), new))
-
-#define arch_atomic_cmpxchg(v, old, new) \
- (arch_cmpxchg(&((v)->counter), old, new))
-#define arch_atomic_xchg(v, new) \
- (arch_xchg(&((v)->counter), new))
-
/**
* arch_atomic_fetch_add_unless - add unless the number is a given value
* @v: pointer of type atomic_t
diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h
index 52ee51e1ff7c2..592d7fffc223c 100644
--- a/arch/arc/include/asm/atomic.h
+++ b/arch/arc/include/asm/atomic.h
@@ -22,30 +22,6 @@
#include <asm/atomic-spinlock.h>
#endif
-#define arch_atomic_cmpxchg(v, o, n) \
-({ \
- arch_cmpxchg(&((v)->counter), (o), (n)); \
-})
-
-#ifdef arch_cmpxchg_relaxed
-#define arch_atomic_cmpxchg_relaxed(v, o, n) \
-({ \
- arch_cmpxchg_relaxed(&((v)->counter), (o), (n)); \
-})
-#endif
-
-#define arch_atomic_xchg(v, n) \
-({ \
- arch_xchg(&((v)->counter), (n)); \
-})
-
-#ifdef arch_xchg_relaxed
-#define arch_atomic_xchg_relaxed(v, n) \
-({ \
- arch_xchg_relaxed(&((v)->counter), (n)); \
-})
-#endif
-
/*
* 64-bit atomics
*/
diff --git a/arch/arc/include/asm/atomic64-arcv2.h b/arch/arc/include/asm/atomic64-arcv2.h
index c5a8010fdc97d..2b7c9e61a2947 100644
--- a/arch/arc/include/asm/atomic64-arcv2.h
+++ b/arch/arc/include/asm/atomic64-arcv2.h
@@ -159,6 +159,7 @@ arch_atomic64_cmpxchg(atomic64_t *ptr, s64 expected, s64 new)
return prev;
}
+#define arch_atomic64_cmpxchg arch_atomic64_cmpxchg
static inline s64 arch_atomic64_xchg(atomic64_t *ptr, s64 new)
{
@@ -179,6 +180,7 @@ static inline s64 arch_atomic64_xchg(atomic64_t *ptr, s64 new)
return prev;
}
+#define arch_atomic64_xchg arch_atomic64_xchg
/**
* arch_atomic64_dec_if_positive - decrement by 1 if old value positive
diff --git a/arch/arm/include/asm/atomic.h b/arch/arm/include/asm/atomic.h
index db8512d9a918d..9458d47ff209c 100644
--- a/arch/arm/include/asm/atomic.h
+++ b/arch/arm/include/asm/atomic.h
@@ -210,6 +210,7 @@ static inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
return ret;
}
+#define arch_atomic_cmpxchg arch_atomic_cmpxchg
#define arch_atomic_fetch_andnot arch_atomic_fetch_andnot
@@ -240,8 +241,6 @@ ATOMIC_OPS(xor, ^=, eor)
#undef ATOMIC_OP_RETURN
#undef ATOMIC_OP
-#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), new))
-
#ifndef CONFIG_GENERIC_ATOMIC64
typedef struct {
s64 counter;
diff --git a/arch/arm64/include/asm/atomic.h b/arch/arm64/include/asm/atomic.h
index c9979273d3898..400d279e0f8d0 100644
--- a/arch/arm64/include/asm/atomic.h
+++ b/arch/arm64/include/asm/atomic.h
@@ -142,24 +142,6 @@ static __always_inline long arch_atomic64_dec_if_positive(atomic64_t *v)
#define arch_atomic_fetch_xor_release arch_atomic_fetch_xor_release
#define arch_atomic_fetch_xor arch_atomic_fetch_xor
-#define arch_atomic_xchg_relaxed(v, new) \
- arch_xchg_relaxed(&((v)->counter), (new))
-#define arch_atomic_xchg_acquire(v, new) \
- arch_xchg_acquire(&((v)->counter), (new))
-#define arch_atomic_xchg_release(v, new) \
- arch_xchg_release(&((v)->counter), (new))
-#define arch_atomic_xchg(v, new) \
- arch_xchg(&((v)->counter), (new))
-
-#define arch_atomic_cmpxchg_relaxed(v, old, new) \
- arch_cmpxchg_relaxed(&((v)->counter), (old), (new))
-#define arch_atomic_cmpxchg_acquire(v, old, new) \
- arch_cmpxchg_acquire(&((v)->counter), (old), (new))
-#define arch_atomic_cmpxchg_release(v, old, new) \
- arch_cmpxchg_release(&((v)->counter), (old), (new))
-#define arch_atomic_cmpxchg(v, old, new) \
- arch_cmpxchg(&((v)->counter), (old), (new))
-
#define arch_atomic_andnot arch_atomic_andnot
/*
@@ -209,16 +191,6 @@ static __always_inline long arch_atomic64_dec_if_positive(atomic64_t *v)
#define arch_atomic64_fetch_xor_release arch_atomic64_fetch_xor_release
#define arch_atomic64_fetch_xor arch_atomic64_fetch_xor
-#define arch_atomic64_xchg_relaxed arch_atomic_xchg_relaxed
-#define arch_atomic64_xchg_acquire arch_atomic_xchg_acquire
-#define arch_atomic64_xchg_release arch_atomic_xchg_release
-#define arch_atomic64_xchg arch_atomic_xchg
-
-#define arch_atomic64_cmpxchg_relaxed arch_atomic_cmpxchg_relaxed
-#define arch_atomic64_cmpxchg_acquire arch_atomic_cmpxchg_acquire
-#define arch_atomic64_cmpxchg_release arch_atomic_cmpxchg_release
-#define arch_atomic64_cmpxchg arch_atomic_cmpxchg
-
#define arch_atomic64_andnot arch_atomic64_andnot
#define arch_atomic64_dec_if_positive arch_atomic64_dec_if_positive
diff --git a/arch/csky/include/asm/atomic.h b/arch/csky/include/asm/atomic.h
index 60406ef9c2bbc..4dab44f6143a5 100644
--- a/arch/csky/include/asm/atomic.h
+++ b/arch/csky/include/asm/atomic.h
@@ -195,41 +195,6 @@ arch_atomic_dec_if_positive(atomic_t *v)
}
#define arch_atomic_dec_if_positive arch_atomic_dec_if_positive
-#define ATOMIC_OP() \
-static __always_inline \
-int arch_atomic_xchg_relaxed(atomic_t *v, int n) \
-{ \
- return __xchg_relaxed(n, &(v->counter), 4); \
-} \
-static __always_inline \
-int arch_atomic_cmpxchg_relaxed(atomic_t *v, int o, int n) \
-{ \
- return __cmpxchg_relaxed(&(v->counter), o, n, 4); \
-} \
-static __always_inline \
-int arch_atomic_cmpxchg_acquire(atomic_t *v, int o, int n) \
-{ \
- return __cmpxchg_acquire(&(v->counter), o, n, 4); \
-} \
-static __always_inline \
-int arch_atomic_cmpxchg(atomic_t *v, int o, int n) \
-{ \
- return __cmpxchg(&(v->counter), o, n, 4); \
-}
-
-#define ATOMIC_OPS() \
- ATOMIC_OP()
-
-ATOMIC_OPS()
-
-#define arch_atomic_xchg_relaxed arch_atomic_xchg_relaxed
-#define arch_atomic_cmpxchg_relaxed arch_atomic_cmpxchg_relaxed
-#define arch_atomic_cmpxchg_acquire arch_atomic_cmpxchg_acquire
-#define arch_atomic_cmpxchg arch_atomic_cmpxchg
-
-#undef ATOMIC_OPS
-#undef ATOMIC_OP
-
#else
#include <asm-generic/atomic.h>
#endif
diff --git a/arch/hexagon/include/asm/atomic.h b/arch/hexagon/include/asm/atomic.h
index 738857e10d6ec..ad6c111e9c10f 100644
--- a/arch/hexagon/include/asm/atomic.h
+++ b/arch/hexagon/include/asm/atomic.h
@@ -36,12 +36,6 @@ static inline void arch_atomic_set(atomic_t *v, int new)
*/
#define arch_atomic_read(v) READ_ONCE((v)->counter)
-#define arch_atomic_xchg(v, new) \
- (arch_xchg(&((v)->counter), (new)))
-
-#define arch_atomic_cmpxchg(v, old, new) \
- (arch_cmpxchg(&((v)->counter), (old), (new)))
-
#define ATOMIC_OP(op) \
static inline void arch_atomic_##op(int i, atomic_t *v) \
{ \
diff --git a/arch/ia64/include/asm/atomic.h b/arch/ia64/include/asm/atomic.h
index 266c429b91372..6540a628d2573 100644
--- a/arch/ia64/include/asm/atomic.h
+++ b/arch/ia64/include/asm/atomic.h
@@ -207,13 +207,6 @@ ATOMIC64_FETCH_OP(xor, ^)
#undef ATOMIC64_FETCH_OP
#undef ATOMIC64_OP
-#define arch_atomic_cmpxchg(v, old, new) (arch_cmpxchg(&((v)->counter), old, new))
-#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), new))
-
-#define arch_atomic64_cmpxchg(v, old, new) \
- (arch_cmpxchg(&((v)->counter), old, new))
-#define arch_atomic64_xchg(v, new) (arch_xchg(&((v)->counter), new))
-
#define arch_atomic_add(i,v) (void)arch_atomic_add_return((i), (v))
#define arch_atomic_sub(i,v) (void)arch_atomic_sub_return((i), (v))
diff --git a/arch/loongarch/include/asm/atomic.h b/arch/loongarch/include/asm/atomic.h
index 6b9aca9ab6e9f..8d73c85911b08 100644
--- a/arch/loongarch/include/asm/atomic.h
+++ b/arch/loongarch/include/asm/atomic.h
@@ -181,9 +181,6 @@ static inline int arch_atomic_sub_if_positive(int i, atomic_t *v)
return result;
}
-#define arch_atomic_cmpxchg(v, o, n) (arch_cmpxchg(&((v)->counter), (o), (n)))
-#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), (new)))
-
/*
* arch_atomic_dec_if_positive - decrement by 1 if old value positive
* @v: pointer of type atomic_t
@@ -342,10 +339,6 @@ static inline long arch_atomic64_sub_if_positive(long i, atomic64_t *v)
return result;
}
-#define arch_atomic64_cmpxchg(v, o, n) \
- ((__typeof__((v)->counter))arch_cmpxchg(&((v)->counter), (o), (n)))
-#define arch_atomic64_xchg(v, new) (arch_xchg(&((v)->counter), (new)))
-
/*
* arch_atomic64_dec_if_positive - decrement by 1 if old value positive
* @v: pointer of type atomic64_t
diff --git a/arch/m68k/include/asm/atomic.h b/arch/m68k/include/asm/atomic.h
index cfba83d230fde..190a032f19be7 100644
--- a/arch/m68k/include/asm/atomic.h
+++ b/arch/m68k/include/asm/atomic.h
@@ -158,12 +158,7 @@ static inline int arch_atomic_inc_and_test(atomic_t *v)
}
#define arch_atomic_inc_and_test arch_atomic_inc_and_test
-#ifdef CONFIG_RMW_INSNS
-
-#define arch_atomic_cmpxchg(v, o, n) ((int)arch_cmpxchg(&((v)->counter), (o), (n)))
-#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), new))
-
-#else /* !CONFIG_RMW_INSNS */
+#ifndef CONFIG_RMW_INSNS
static inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
{
@@ -177,6 +172,7 @@ static inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
local_irq_restore(flags);
return prev;
}
+#define arch_atomic_cmpxchg arch_atomic_cmpxchg
static inline int arch_atomic_xchg(atomic_t *v, int new)
{
@@ -189,6 +185,7 @@ static inline int arch_atomic_xchg(atomic_t *v, int new)
local_irq_restore(flags);
return prev;
}
+#define arch_atomic_xchg arch_atomic_xchg
#endif /* !CONFIG_RMW_INSNS */
diff --git a/arch/mips/include/asm/atomic.h b/arch/mips/include/asm/atomic.h
index 712fb5a6a5682..ba188e77768b2 100644
--- a/arch/mips/include/asm/atomic.h
+++ b/arch/mips/include/asm/atomic.h
@@ -33,17 +33,6 @@ static __always_inline void arch_##pfx##_set(pfx##_t *v, type i) \
{ \
WRITE_ONCE(v->counter, i); \
} \
- \
-static __always_inline type \
-arch_##pfx##_cmpxchg(pfx##_t *v, type o, type n) \
-{ \
- return arch_cmpxchg(&v->counter, o, n); \
-} \
- \
-static __always_inline type arch_##pfx##_xchg(pfx##_t *v, type n) \
-{ \
- return arch_xchg(&v->counter, n); \
-}
ATOMIC_OPS(atomic, int)
diff --git a/arch/openrisc/include/asm/atomic.h b/arch/openrisc/include/asm/atomic.h
index 326167e4783a9..8ce67ec7c9a30 100644
--- a/arch/openrisc/include/asm/atomic.h
+++ b/arch/openrisc/include/asm/atomic.h
@@ -130,7 +130,4 @@ static inline int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
#include <asm/cmpxchg.h>
-#define arch_atomic_xchg(ptr, v) (arch_xchg(&(ptr)->counter, (v)))
-#define arch_atomic_cmpxchg(v, old, new) (arch_cmpxchg(&((v)->counter), (old), (new)))
-
#endif /* __ASM_OPENRISC_ATOMIC_H */
diff --git a/arch/parisc/include/asm/atomic.h b/arch/parisc/include/asm/atomic.h
index dd5a299ada695..0b3f64c92e3c0 100644
--- a/arch/parisc/include/asm/atomic.h
+++ b/arch/parisc/include/asm/atomic.h
@@ -73,10 +73,6 @@ static __inline__ int arch_atomic_read(const atomic_t *v)
return READ_ONCE((v)->counter);
}
-/* exported interface */
-#define arch_atomic_cmpxchg(v, o, n) (arch_cmpxchg(&((v)->counter), (o), (n)))
-#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), new))
-
#define ATOMIC_OP(op, c_op) \
static __inline__ void arch_atomic_##op(int i, atomic_t *v) \
{ \
@@ -218,11 +214,6 @@ arch_atomic64_read(const atomic64_t *v)
return READ_ONCE((v)->counter);
}
-/* exported interface */
-#define arch_atomic64_cmpxchg(v, o, n) \
- ((__typeof__((v)->counter))arch_cmpxchg(&((v)->counter), (o), (n)))
-#define arch_atomic64_xchg(v, new) (arch_xchg(&((v)->counter), new))
-
#endif /* !CONFIG_64BIT */
diff --git a/arch/powerpc/include/asm/atomic.h b/arch/powerpc/include/asm/atomic.h
index 47228b1774781..5bf6a4d49268c 100644
--- a/arch/powerpc/include/asm/atomic.h
+++ b/arch/powerpc/include/asm/atomic.h
@@ -126,18 +126,6 @@ ATOMIC_OPS(xor, xor, "", K)
#undef ATOMIC_OP_RETURN_RELAXED
#undef ATOMIC_OP
-#define arch_atomic_cmpxchg(v, o, n) \
- (arch_cmpxchg(&((v)->counter), (o), (n)))
-#define arch_atomic_cmpxchg_relaxed(v, o, n) \
- arch_cmpxchg_relaxed(&((v)->counter), (o), (n))
-#define arch_atomic_cmpxchg_acquire(v, o, n) \
- arch_cmpxchg_acquire(&((v)->counter), (o), (n))
-
-#define arch_atomic_xchg(v, new) \
- (arch_xchg(&((v)->counter), new))
-#define arch_atomic_xchg_relaxed(v, new) \
- arch_xchg_relaxed(&((v)->counter), (new))
-
/**
* atomic_fetch_add_unless - add unless the number is a given value
* @v: pointer of type atomic_t
@@ -396,18 +384,6 @@ static __inline__ s64 arch_atomic64_dec_if_positive(atomic64_t *v)
}
#define arch_atomic64_dec_if_positive arch_atomic64_dec_if_positive
-#define arch_atomic64_cmpxchg(v, o, n) \
- (arch_cmpxchg(&((v)->counter), (o), (n)))
-#define arch_atomic64_cmpxchg_relaxed(v, o, n) \
- arch_cmpxchg_relaxed(&((v)->counter), (o), (n))
-#define arch_atomic64_cmpxchg_acquire(v, o, n) \
- arch_cmpxchg_acquire(&((v)->counter), (o), (n))
-
-#define arch_atomic64_xchg(v, new) \
- (arch_xchg(&((v)->counter), new))
-#define arch_atomic64_xchg_relaxed(v, new) \
- arch_xchg_relaxed(&((v)->counter), (new))
-
/**
* atomic64_fetch_add_unless - add unless the number is a given value
* @v: pointer of type atomic64_t
diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
index bba472928b539..f5dfef6c2153f 100644
--- a/arch/riscv/include/asm/atomic.h
+++ b/arch/riscv/include/asm/atomic.h
@@ -238,78 +238,6 @@ static __always_inline s64 arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a,
#define arch_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless
#endif
-/*
- * atomic_{cmp,}xchg is required to have exactly the same ordering semantics as
- * {cmp,}xchg and the operations that return, so they need a full barrier.
- */
-#define ATOMIC_OP(c_t, prefix, size) \
-static __always_inline \
-c_t arch_atomic##prefix##_xchg_relaxed(atomic##prefix##_t *v, c_t n) \
-{ \
- return __xchg_relaxed(&(v->counter), n, size); \
-} \
-static __always_inline \
-c_t arch_atomic##prefix##_xchg_acquire(atomic##prefix##_t *v, c_t n) \
-{ \
- return __xchg_acquire(&(v->counter), n, size); \
-} \
-static __always_inline \
-c_t arch_atomic##prefix##_xchg_release(atomic##prefix##_t *v, c_t n) \
-{ \
- return __xchg_release(&(v->counter), n, size); \
-} \
-static __always_inline \
-c_t arch_atomic##prefix##_xchg(atomic##prefix##_t *v, c_t n) \
-{ \
- return __arch_xchg(&(v->counter), n, size); \
-} \
-static __always_inline \
-c_t arch_atomic##prefix##_cmpxchg_relaxed(atomic##prefix##_t *v, \
- c_t o, c_t n) \
-{ \
- return __cmpxchg_relaxed(&(v->counter), o, n, size); \
-} \
-static __always_inline \
-c_t arch_atomic##prefix##_cmpxchg_acquire(atomic##prefix##_t *v, \
- c_t o, c_t n) \
-{ \
- return __cmpxchg_acquire(&(v->counter), o, n, size); \
-} \
-static __always_inline \
-c_t arch_atomic##prefix##_cmpxchg_release(atomic##prefix##_t *v, \
- c_t o, c_t n) \
-{ \
- return __cmpxchg_release(&(v->counter), o, n, size); \
-} \
-static __always_inline \
-c_t arch_atomic##prefix##_cmpxchg(atomic##prefix##_t *v, c_t o, c_t n) \
-{ \
- return __cmpxchg(&(v->counter), o, n, size); \
-}
-
-#ifdef CONFIG_GENERIC_ATOMIC64
-#define ATOMIC_OPS() \
- ATOMIC_OP(int, , 4)
-#else
-#define ATOMIC_OPS() \
- ATOMIC_OP(int, , 4) \
- ATOMIC_OP(s64, 64, 8)
-#endif
-
-ATOMIC_OPS()
-
-#define arch_atomic_xchg_relaxed arch_atomic_xchg_relaxed
-#define arch_atomic_xchg_acquire arch_atomic_xchg_acquire
-#define arch_atomic_xchg_release arch_atomic_xchg_release
-#define arch_atomic_xchg arch_atomic_xchg
-#define arch_atomic_cmpxchg_relaxed arch_atomic_cmpxchg_relaxed
-#define arch_atomic_cmpxchg_acquire arch_atomic_cmpxchg_acquire
-#define arch_atomic_cmpxchg_release arch_atomic_cmpxchg_release
-#define arch_atomic_cmpxchg arch_atomic_cmpxchg
-
-#undef ATOMIC_OPS
-#undef ATOMIC_OP
-
static __always_inline bool arch_atomic_inc_unless_negative(atomic_t *v)
{
int prev, rc;
diff --git a/arch/sh/include/asm/atomic.h b/arch/sh/include/asm/atomic.h
index 528bfeda78f56..7a18cb2a1c1ac 100644
--- a/arch/sh/include/asm/atomic.h
+++ b/arch/sh/include/asm/atomic.h
@@ -30,9 +30,6 @@
#include <asm/atomic-irq.h>
#endif
-#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), new))
-#define arch_atomic_cmpxchg(v, o, n) (arch_cmpxchg(&((v)->counter), (o), (n)))
-
#endif /* CONFIG_CPU_J2 */
#endif /* __ASM_SH_ATOMIC_H */
diff --git a/arch/sparc/include/asm/atomic_32.h b/arch/sparc/include/asm/atomic_32.h
index d775daa83d129..1c9e6c7366e41 100644
--- a/arch/sparc/include/asm/atomic_32.h
+++ b/arch/sparc/include/asm/atomic_32.h
@@ -24,7 +24,9 @@ int arch_atomic_fetch_and(int, atomic_t *);
int arch_atomic_fetch_or(int, atomic_t *);
int arch_atomic_fetch_xor(int, atomic_t *);
int arch_atomic_cmpxchg(atomic_t *, int, int);
+#define arch_atomic_cmpxchg arch_atomic_cmpxchg
int arch_atomic_xchg(atomic_t *, int);
+#define arch_atomic_xchg arch_atomic_xchg
int arch_atomic_fetch_add_unless(atomic_t *, int, int);
void arch_atomic_set(atomic_t *, int);
diff --git a/arch/sparc/include/asm/atomic_64.h b/arch/sparc/include/asm/atomic_64.h
index 077891686715a..df6a8b07d7e63 100644
--- a/arch/sparc/include/asm/atomic_64.h
+++ b/arch/sparc/include/asm/atomic_64.h
@@ -49,17 +49,6 @@ ATOMIC_OPS(xor)
#undef ATOMIC_OP_RETURN
#undef ATOMIC_OP
-#define arch_atomic_cmpxchg(v, o, n) (arch_cmpxchg(&((v)->counter), (o), (n)))
-
-static inline int arch_atomic_xchg(atomic_t *v, int new)
-{
- return arch_xchg(&v->counter, new);
-}
-
-#define arch_atomic64_cmpxchg(v, o, n) \
- ((__typeof__((v)->counter))arch_cmpxchg(&((v)->counter), (o), (n)))
-#define arch_atomic64_xchg(v, new) (arch_xchg(&((v)->counter), new))
-
s64 arch_atomic64_dec_if_positive(atomic64_t *v);
#define arch_atomic64_dec_if_positive arch_atomic64_dec_if_positive
diff --git a/arch/xtensa/include/asm/atomic.h b/arch/xtensa/include/asm/atomic.h
index 52da614f953ce..1d323a864002c 100644
--- a/arch/xtensa/include/asm/atomic.h
+++ b/arch/xtensa/include/asm/atomic.h
@@ -257,7 +257,4 @@ ATOMIC_OPS(xor)
#undef ATOMIC_OP_RETURN
#undef ATOMIC_OP
-#define arch_atomic_cmpxchg(v, o, n) ((int)arch_cmpxchg(&((v)->counter), (o), (n)))
-#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), new))
-
#endif /* _XTENSA_ATOMIC_H */
diff --git a/include/asm-generic/atomic.h b/include/asm-generic/atomic.h
index e271d6708c876..22142c71d35a1 100644
--- a/include/asm-generic/atomic.h
+++ b/include/asm-generic/atomic.h
@@ -130,7 +130,4 @@ ATOMIC_OP(xor, ^)
#define arch_atomic_read(v) READ_ONCE((v)->counter)
#define arch_atomic_set(v, i) WRITE_ONCE(((v)->counter), (i))
-#define arch_atomic_xchg(ptr, v) (arch_xchg(&(ptr)->counter, (u32)(v)))
-#define arch_atomic_cmpxchg(v, old, new) (arch_cmpxchg(&((v)->counter), (u32)(old), (u32)(new)))
-
#endif /* __ASM_GENERIC_ATOMIC_H */
diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index 3ce4cb5e790c5..1a2d81dbc2e48 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -1091,9 +1091,48 @@ arch_atomic_fetch_xor(int i, atomic_t *v)
#endif /* arch_atomic_fetch_xor_relaxed */
#ifndef arch_atomic_xchg_relaxed
+#ifdef arch_atomic_xchg
#define arch_atomic_xchg_acquire arch_atomic_xchg
#define arch_atomic_xchg_release arch_atomic_xchg
#define arch_atomic_xchg_relaxed arch_atomic_xchg
+#endif /* arch_atomic_xchg */
+
+#ifndef arch_atomic_xchg
+static __always_inline int
+arch_atomic_xchg(atomic_t *v, int new)
+{
+ return arch_xchg(&v->counter, new);
+}
+#define arch_atomic_xchg arch_atomic_xchg
+#endif
+
+#ifndef arch_atomic_xchg_acquire
+static __always_inline int
+arch_atomic_xchg_acquire(atomic_t *v, int new)
+{
+ return arch_xchg_acquire(&v->counter, new);
+}
+#define arch_atomic_xchg_acquire arch_atomic_xchg_acquire
+#endif
+
+#ifndef arch_atomic_xchg_release
+static __always_inline int
+arch_atomic_xchg_release(atomic_t *v, int new)
+{
+ return arch_xchg_release(&v->counter, new);
+}
+#define arch_atomic_xchg_release arch_atomic_xchg_release
+#endif
+
+#ifndef arch_atomic_xchg_relaxed
+static __always_inline int
+arch_atomic_xchg_relaxed(atomic_t *v, int new)
+{
+ return arch_xchg_relaxed(&v->counter, new);
+}
+#define arch_atomic_xchg_relaxed arch_atomic_xchg_relaxed
+#endif
+
#else /* arch_atomic_xchg_relaxed */
#ifndef arch_atomic_xchg_acquire
@@ -1133,9 +1172,48 @@ arch_atomic_xchg(atomic_t *v, int i)
#endif /* arch_atomic_xchg_relaxed */
#ifndef arch_atomic_cmpxchg_relaxed
+#ifdef arch_atomic_cmpxchg
#define arch_atomic_cmpxchg_acquire arch_atomic_cmpxchg
#define arch_atomic_cmpxchg_release arch_atomic_cmpxchg
#define arch_atomic_cmpxchg_relaxed arch_atomic_cmpxchg
+#endif /* arch_atomic_cmpxchg */
+
+#ifndef arch_atomic_cmpxchg
+static __always_inline int
+arch_atomic_cmpxchg(atomic_t *v, int old, int new)
+{
+ return arch_cmpxchg(&v->counter, old, new);
+}
+#define arch_atomic_cmpxchg arch_atomic_cmpxchg
+#endif
+
+#ifndef arch_atomic_cmpxchg_acquire
+static __always_inline int
+arch_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
+{
+ return arch_cmpxchg_acquire(&v->counter, old, new);
+}
+#define arch_atomic_cmpxchg_acquire arch_atomic_cmpxchg_acquire
+#endif
+
+#ifndef arch_atomic_cmpxchg_release
+static __always_inline int
+arch_atomic_cmpxchg_release(atomic_t *v, int old, int new)
+{
+ return arch_cmpxchg_release(&v->counter, old, new);
+}
+#define arch_atomic_cmpxchg_release arch_atomic_cmpxchg_release
+#endif
+
+#ifndef arch_atomic_cmpxchg_relaxed
+static __always_inline int
+arch_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
+{
+ return arch_cmpxchg_relaxed(&v->counter, old, new);
+}
+#define arch_atomic_cmpxchg_relaxed arch_atomic_cmpxchg_relaxed
+#endif
+
#else /* arch_atomic_cmpxchg_relaxed */
#ifndef arch_atomic_cmpxchg_acquire
@@ -2225,9 +2303,48 @@ arch_atomic64_fetch_xor(s64 i, atomic64_t *v)
#endif /* arch_atomic64_fetch_xor_relaxed */
#ifndef arch_atomic64_xchg_relaxed
+#ifdef arch_atomic64_xchg
#define arch_atomic64_xchg_acquire arch_atomic64_xchg
#define arch_atomic64_xchg_release arch_atomic64_xchg
#define arch_atomic64_xchg_relaxed arch_atomic64_xchg
+#endif /* arch_atomic64_xchg */
+
+#ifndef arch_atomic64_xchg
+static __always_inline s64
+arch_atomic64_xchg(atomic64_t *v, s64 new)
+{
+ return arch_xchg(&v->counter, new);
+}
+#define arch_atomic64_xchg arch_atomic64_xchg
+#endif
+
+#ifndef arch_atomic64_xchg_acquire
+static __always_inline s64
+arch_atomic64_xchg_acquire(atomic64_t *v, s64 new)
+{
+ return arch_xchg_acquire(&v->counter, new);
+}
+#define arch_atomic64_xchg_acquire arch_atomic64_xchg_acquire
+#endif
+
+#ifndef arch_atomic64_xchg_release
+static __always_inline s64
+arch_atomic64_xchg_release(atomic64_t *v, s64 new)
+{
+ return arch_xchg_release(&v->counter, new);
+}
+#define arch_atomic64_xchg_release arch_atomic64_xchg_release
+#endif
+
+#ifndef arch_atomic64_xchg_relaxed
+static __always_inline s64
+arch_atomic64_xchg_relaxed(atomic64_t *v, s64 new)
+{
+ return arch_xchg_relaxed(&v->counter, new);
+}
+#define arch_atomic64_xchg_relaxed arch_atomic64_xchg_relaxed
+#endif
+
#else /* arch_atomic64_xchg_relaxed */
#ifndef arch_atomic64_xchg_acquire
@@ -2267,9 +2384,48 @@ arch_atomic64_xchg(atomic64_t *v, s64 i)
#endif /* arch_atomic64_xchg_relaxed */
#ifndef arch_atomic64_cmpxchg_relaxed
+#ifdef arch_atomic64_cmpxchg
#define arch_atomic64_cmpxchg_acquire arch_atomic64_cmpxchg
#define arch_atomic64_cmpxchg_release arch_atomic64_cmpxchg
#define arch_atomic64_cmpxchg_relaxed arch_atomic64_cmpxchg
+#endif /* arch_atomic64_cmpxchg */
+
+#ifndef arch_atomic64_cmpxchg
+static __always_inline s64
+arch_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
+{
+ return arch_cmpxchg(&v->counter, old, new);
+}
+#define arch_atomic64_cmpxchg arch_atomic64_cmpxchg
+#endif
+
+#ifndef arch_atomic64_cmpxchg_acquire
+static __always_inline s64
+arch_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
+{
+ return arch_cmpxchg_acquire(&v->counter, old, new);
+}
+#define arch_atomic64_cmpxchg_acquire arch_atomic64_cmpxchg_acquire
+#endif
+
+#ifndef arch_atomic64_cmpxchg_release
+static __always_inline s64
+arch_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
+{
+ return arch_cmpxchg_release(&v->counter, old, new);
+}
+#define arch_atomic64_cmpxchg_release arch_atomic64_cmpxchg_release
+#endif
+
+#ifndef arch_atomic64_cmpxchg_relaxed
+static __always_inline s64
+arch_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
+{
+ return arch_cmpxchg_relaxed(&v->counter, old, new);
+}
+#define arch_atomic64_cmpxchg_relaxed arch_atomic64_cmpxchg_relaxed
+#endif
+
#else /* arch_atomic64_cmpxchg_relaxed */
#ifndef arch_atomic64_cmpxchg_acquire
@@ -2597,4 +2753,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v)
#endif
#endif /* _LINUX_ATOMIC_FALLBACK_H */
-// 9f0fd6ed53267c6ec64e36cd18e6fd8df57ea277
+// e1cee558cc61cae887890db30fcdf93baca9f498
diff --git a/scripts/atomic/fallbacks/cmpxchg b/scripts/atomic/fallbacks/cmpxchg
new file mode 100755
index 0000000000000..87cd010f98d58
--- /dev/null
+++ b/scripts/atomic/fallbacks/cmpxchg
@@ -0,0 +1,7 @@
+cat <<EOF
+static __always_inline ${int}
+arch_${atomic}_cmpxchg${order}(${atomic}_t *v, ${int} old, ${int} new)
+{
+ return arch_cmpxchg${order}(&v->counter, old, new);
+}
+EOF
diff --git a/scripts/atomic/fallbacks/xchg b/scripts/atomic/fallbacks/xchg
new file mode 100755
index 0000000000000..733b8980b2f3b
--- /dev/null
+++ b/scripts/atomic/fallbacks/xchg
@@ -0,0 +1,7 @@
+cat <<EOF
+static __always_inline ${int}
+arch_${atomic}_xchg${order}(${atomic}_t *v, ${int} new)
+{
+ return arch_xchg${order}(&v->counter, new);
+}
+EOF
--
2.30.2
Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of
RELAXED atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in terms
of FULL ordered atomics. Other atomics are optional, and don't exist in
some configurations (e.g. not all architectures implement the 128-bit
cmpxchg ops).
Subsequent patches will require that architectures define a preprocessor
symbol for any atomic (or ordering variant) which is optional. This will
make the fallback ifdeffery more robust, and simplify future changes.
Add the required definitions to arch/arm.
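As a sketch of the pattern (using arch_atomic_fetch_add, one of the ops
below; the function bodies are unchanged and only the preprocessor
symbols are new), each implemented op gains a self-referential #define
which the generated headers can then test with ordinary ifdeffery:
| /* after defining the function, the arch adds a matching symbol ... */
| #define arch_atomic_fetch_add arch_atomic_fetch_add
|
| /* ... which the generated fallback code can detect: */
| #ifdef arch_atomic_fetch_add
| 	/* use the architecture's implementation */
| #else
| 	/* emit a generic fallback */
| #endif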
Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/arm/include/asm/atomic.h | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/arch/arm/include/asm/atomic.h b/arch/arm/include/asm/atomic.h
index 9458d47ff209c..f0e3b01afa746 100644
--- a/arch/arm/include/asm/atomic.h
+++ b/arch/arm/include/asm/atomic.h
@@ -197,6 +197,16 @@ static inline int arch_atomic_fetch_##op(int i, atomic_t *v) \
return val; \
}
+#define arch_atomic_add_return arch_atomic_add_return
+#define arch_atomic_sub_return arch_atomic_sub_return
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+#define arch_atomic_fetch_andnot arch_atomic_fetch_andnot
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
static inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
{
int ret;
@@ -212,8 +222,6 @@ static inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
}
#define arch_atomic_cmpxchg arch_atomic_cmpxchg
-#define arch_atomic_fetch_andnot arch_atomic_fetch_andnot
-
#endif /* __LINUX_ARM_ARCH__ */
#define ATOMIC_OPS(op, c_op, asm_op) \
--
2.30.2
In some cases we'd like to indicate the bitwise negation of a parameter,
e.g.
~@var
This will be helpful for describing the atomic andnot operations, where
we'd like to write comments of the form:
Atomically updates @v to (@v & ~@i)
Which kernel-doc currently transforms to:
Atomically updates **v** to (**v** & ~**i**)
Rather than the preferable form:
Atomically updates **v** to (**v** & **~i**)
This is similar to what we did for '!@var' in commit:
ee2aa7590398 ("scripts: kernel-doc: accept negation like !@var")
This patch follows the same pattern that commit used to permit a '!'
prefix on a param ref, additionally allowing a '~' prefix, causing
kernel-doc to generate the preferred form above.
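As an example of the intended use (a sketch of the kerneldoc form; the
real comments are generated by later patches in this series), the andnot
ops can then be described as:
| /**
|  * atomic_andnot() - atomic bitwise AND NOT
|  * @i: value whose complement is ANDed into @v
|  * @v: pointer to atomic_t
|  *
|  * Atomically updates @v to (@v & ~@i).
|  */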
Suggested-by: Akira Yokosawa <[email protected]>
Link: https://lore.kernel.org/lkml/[email protected]
Signed-off-by: Mark Rutland <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Mauro Carvalho Chehab <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Randy Dunlap <[email protected]>
Cc: Will Deacon <[email protected]>
---
scripts/kernel-doc | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/scripts/kernel-doc b/scripts/kernel-doc
index 2486689ffc7b4..eb70c1fd4e868 100755
--- a/scripts/kernel-doc
+++ b/scripts/kernel-doc
@@ -64,7 +64,7 @@ my $type_constant = '\b``([^\`]+)``\b';
my $type_constant2 = '\%([-_\w]+)';
my $type_func = '(\w+)\(\)';
my $type_param = '\@(\w*((\.\w+)|(->\w+))*(\.\.\.)?)';
-my $type_param_ref = '([\!]?)\@(\w*((\.\w+)|(->\w+))*(\.\.\.)?)';
+my $type_param_ref = '([\!~]?)\@(\w*((\.\w+)|(->\w+))*(\.\.\.)?)';
my $type_fp_param = '\@(\w+)\(\)'; # Special RST handling for func ptr params
my $type_fp_param2 = '\@(\w+->\S+)\(\)'; # Special RST handling for structs with func ptr params
my $type_env = '(\$\w+)';
--
2.30.2
Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of
RELAXED atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in terms
of FULL ordered atomics. Other atomics are optional, and don't exist in
some configurations (e.g. not all architectures implement the 128-bit
cmpxchg ops).
Subsequent patches will require that architectures define a preprocessor
symbol for any atomic (or ordering variant) which is optional. This will
make the fallback ifdeffery more robust, and simplify future changes.
Add the required definitions to arch/sh.
Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/sh/include/asm/atomic-grb.h | 9 +++++++++
arch/sh/include/asm/atomic-irq.h | 9 +++++++++
arch/sh/include/asm/atomic-llsc.h | 9 +++++++++
3 files changed, 27 insertions(+)
diff --git a/arch/sh/include/asm/atomic-grb.h b/arch/sh/include/asm/atomic-grb.h
index 059791fd394fc..cf1c10f15528b 100644
--- a/arch/sh/include/asm/atomic-grb.h
+++ b/arch/sh/include/asm/atomic-grb.h
@@ -71,6 +71,11 @@ static inline int arch_atomic_fetch_##op(int i, atomic_t *v) \
ATOMIC_OPS(add)
ATOMIC_OPS(sub)
+#define arch_atomic_add_return arch_atomic_add_return
+#define arch_atomic_sub_return arch_atomic_sub_return
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+
#undef ATOMIC_OPS
#define ATOMIC_OPS(op) ATOMIC_OP(op) ATOMIC_FETCH_OP(op)
@@ -78,6 +83,10 @@ ATOMIC_OPS(and)
ATOMIC_OPS(or)
ATOMIC_OPS(xor)
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
#undef ATOMIC_OPS
#undef ATOMIC_FETCH_OP
#undef ATOMIC_OP_RETURN
diff --git a/arch/sh/include/asm/atomic-irq.h b/arch/sh/include/asm/atomic-irq.h
index 7665de9d00d0d..b4090cc354935 100644
--- a/arch/sh/include/asm/atomic-irq.h
+++ b/arch/sh/include/asm/atomic-irq.h
@@ -55,6 +55,11 @@ static inline int arch_atomic_fetch_##op(int i, atomic_t *v) \
ATOMIC_OPS(add, +=)
ATOMIC_OPS(sub, -=)
+#define arch_atomic_add_return arch_atomic_add_return
+#define arch_atomic_sub_return arch_atomic_sub_return
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+
#undef ATOMIC_OPS
#define ATOMIC_OPS(op, c_op) \
ATOMIC_OP(op, c_op) \
@@ -64,6 +69,10 @@ ATOMIC_OPS(and, &=)
ATOMIC_OPS(or, |=)
ATOMIC_OPS(xor, ^=)
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
#undef ATOMIC_OPS
#undef ATOMIC_FETCH_OP
#undef ATOMIC_OP_RETURN
diff --git a/arch/sh/include/asm/atomic-llsc.h b/arch/sh/include/asm/atomic-llsc.h
index b63dcfbfa14ef..9ef1fb1dd12ee 100644
--- a/arch/sh/include/asm/atomic-llsc.h
+++ b/arch/sh/include/asm/atomic-llsc.h
@@ -73,6 +73,11 @@ static inline int arch_atomic_fetch_##op(int i, atomic_t *v) \
ATOMIC_OPS(add)
ATOMIC_OPS(sub)
+#define arch_atomic_add_return arch_atomic_add_return
+#define arch_atomic_sub_return arch_atomic_sub_return
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+
#undef ATOMIC_OPS
#define ATOMIC_OPS(op) ATOMIC_OP(op) ATOMIC_FETCH_OP(op)
@@ -80,6 +85,10 @@ ATOMIC_OPS(and)
ATOMIC_OPS(or)
ATOMIC_OPS(xor)
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
#undef ATOMIC_OPS
#undef ATOMIC_FETCH_OP
#undef ATOMIC_OP_RETURN
--
2.30.2
Currently several architectures have kerneldoc comments for
arch_atomic_*(), which is unhelpful as these live in a shared namespace
where they clash, and the arch_atomic_*() ops are now an implementation
detail of the raw_atomic_*() ops, and no-one should use the
arch_atomic_*() ops directly.
Delete the kerneldoc comments for arch_atomic_*(), along with
pseudo-kerneldoc comments which are in the correct style but are missing
the leading '/**' necessary to be true kerneldoc comments.
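(Note that kernel-doc only parses comments which open with '/**'; the
pseudo-kerneldoc comments removed here open with a plain '/*', e.g.:
| /*
|  * arch_atomic64_dec_if_positive - decrement by 1 if old value positive
and so were never picked up by the tooling in the first place.)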
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/alpha/include/asm/atomic.h | 25 --------
arch/arc/include/asm/atomic64-arcv2.h | 17 ------
arch/hexagon/include/asm/atomic.h | 16 -----
arch/loongarch/include/asm/atomic.h | 49 ---------------
arch/x86/include/asm/atomic.h | 87 ---------------------------
arch/x86/include/asm/atomic64_32.h | 76 -----------------------
arch/x86/include/asm/atomic64_64.h | 81 -------------------------
7 files changed, 351 deletions(-)
diff --git a/arch/alpha/include/asm/atomic.h b/arch/alpha/include/asm/atomic.h
index ec8ab552c527a..cbd9244571af0 100644
--- a/arch/alpha/include/asm/atomic.h
+++ b/arch/alpha/include/asm/atomic.h
@@ -200,15 +200,6 @@ ATOMIC_OPS(xor, xor)
#undef ATOMIC_OP_RETURN
#undef ATOMIC_OP
-/**
- * arch_atomic_fetch_add_unless - add unless the number is a given value
- * @v: pointer of type atomic_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, so long as it was not @u.
- * Returns the old value of @v.
- */
static __inline__ int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
{
int c, new, old;
@@ -232,15 +223,6 @@ static __inline__ int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
}
#define arch_atomic_fetch_add_unless arch_atomic_fetch_add_unless
-/**
- * arch_atomic64_fetch_add_unless - add unless the number is a given value
- * @v: pointer of type atomic64_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, so long as it was not @u.
- * Returns the old value of @v.
- */
static __inline__ s64 arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
{
s64 c, new, old;
@@ -264,13 +246,6 @@ static __inline__ s64 arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u
}
#define arch_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless
-/*
- * arch_atomic64_dec_if_positive - decrement by 1 if old value positive
- * @v: pointer of type atomic_t
- *
- * The function returns the old value of *v minus 1, even if
- * the atomic variable, v, was not decremented.
- */
static inline s64 arch_atomic64_dec_if_positive(atomic64_t *v)
{
s64 old, tmp;
diff --git a/arch/arc/include/asm/atomic64-arcv2.h b/arch/arc/include/asm/atomic64-arcv2.h
index 2b7c9e61a2947..6b6db981967ae 100644
--- a/arch/arc/include/asm/atomic64-arcv2.h
+++ b/arch/arc/include/asm/atomic64-arcv2.h
@@ -182,14 +182,6 @@ static inline s64 arch_atomic64_xchg(atomic64_t *ptr, s64 new)
}
#define arch_atomic64_xchg arch_atomic64_xchg
-/**
- * arch_atomic64_dec_if_positive - decrement by 1 if old value positive
- * @v: pointer of type atomic64_t
- *
- * The function returns the old value of *v minus 1, even if
- * the atomic variable, v, was not decremented.
- */
-
static inline s64 arch_atomic64_dec_if_positive(atomic64_t *v)
{
s64 val;
@@ -214,15 +206,6 @@ static inline s64 arch_atomic64_dec_if_positive(atomic64_t *v)
}
#define arch_atomic64_dec_if_positive arch_atomic64_dec_if_positive
-/**
- * arch_atomic64_fetch_add_unless - add unless the number is a given value
- * @v: pointer of type atomic64_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, if it was not @u.
- * Returns the old value of @v
- */
static inline s64 arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
{
s64 old, temp;
diff --git a/arch/hexagon/include/asm/atomic.h b/arch/hexagon/include/asm/atomic.h
index 5c8440016c762..2447d083c432f 100644
--- a/arch/hexagon/include/asm/atomic.h
+++ b/arch/hexagon/include/asm/atomic.h
@@ -28,12 +28,6 @@ static inline void arch_atomic_set(atomic_t *v, int new)
#define arch_atomic_set_release(v, i) arch_atomic_set((v), (i))
-/**
- * arch_atomic_read - reads a word, atomically
- * @v: pointer to atomic value
- *
- * Assumes all word reads on our architecture are atomic.
- */
#define arch_atomic_read(v) READ_ONCE((v)->counter)
#define ATOMIC_OP(op) \
@@ -112,16 +106,6 @@ ATOMIC_OPS(xor)
#undef ATOMIC_OP_RETURN
#undef ATOMIC_OP
-/**
- * arch_atomic_fetch_add_unless - add unless the number is a given value
- * @v: pointer to value
- * @a: amount to add
- * @u: unless value is equal to u
- *
- * Returns old value.
- *
- */
-
static inline int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
{
int __oldval;
diff --git a/arch/loongarch/include/asm/atomic.h b/arch/loongarch/include/asm/atomic.h
index 8d73c85911b08..e27f0c72d3242 100644
--- a/arch/loongarch/include/asm/atomic.h
+++ b/arch/loongarch/include/asm/atomic.h
@@ -29,21 +29,7 @@
#define ATOMIC_INIT(i) { (i) }
-/*
- * arch_atomic_read - read atomic variable
- * @v: pointer of type atomic_t
- *
- * Atomically reads the value of @v.
- */
#define arch_atomic_read(v) READ_ONCE((v)->counter)
-
-/*
- * arch_atomic_set - set atomic variable
- * @v: pointer of type atomic_t
- * @i: required value
- *
- * Atomically sets the value of @v to @i.
- */
#define arch_atomic_set(v, i) WRITE_ONCE((v)->counter, (i))
#define ATOMIC_OP(op, I, asm_op) \
@@ -139,14 +125,6 @@ static inline int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
}
#define arch_atomic_fetch_add_unless arch_atomic_fetch_add_unless
-/*
- * arch_atomic_sub_if_positive - conditionally subtract integer from atomic variable
- * @i: integer value to subtract
- * @v: pointer of type atomic_t
- *
- * Atomically test @v and subtract @i if @v is greater or equal than @i.
- * The function returns the old value of @v minus @i.
- */
static inline int arch_atomic_sub_if_positive(int i, atomic_t *v)
{
int result;
@@ -181,28 +159,13 @@ static inline int arch_atomic_sub_if_positive(int i, atomic_t *v)
return result;
}
-/*
- * arch_atomic_dec_if_positive - decrement by 1 if old value positive
- * @v: pointer of type atomic_t
- */
#define arch_atomic_dec_if_positive(v) arch_atomic_sub_if_positive(1, v)
#ifdef CONFIG_64BIT
#define ATOMIC64_INIT(i) { (i) }
-/*
- * arch_atomic64_read - read atomic variable
- * @v: pointer of type atomic64_t
- *
- */
#define arch_atomic64_read(v) READ_ONCE((v)->counter)
-
-/*
- * arch_atomic64_set - set atomic variable
- * @v: pointer of type atomic64_t
- * @i: required value
- */
#define arch_atomic64_set(v, i) WRITE_ONCE((v)->counter, (i))
#define ATOMIC64_OP(op, I, asm_op) \
@@ -297,14 +260,6 @@ static inline long arch_atomic64_fetch_add_unless(atomic64_t *v, long a, long u)
}
#define arch_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless
-/*
- * arch_atomic64_sub_if_positive - conditionally subtract integer from atomic variable
- * @i: integer value to subtract
- * @v: pointer of type atomic64_t
- *
- * Atomically test @v and subtract @i if @v is greater or equal than @i.
- * The function returns the old value of @v minus @i.
- */
static inline long arch_atomic64_sub_if_positive(long i, atomic64_t *v)
{
long result;
@@ -339,10 +294,6 @@ static inline long arch_atomic64_sub_if_positive(long i, atomic64_t *v)
return result;
}
-/*
- * arch_atomic64_dec_if_positive - decrement by 1 if old value positive
- * @v: pointer of type atomic64_t
- */
#define arch_atomic64_dec_if_positive(v) arch_atomic64_sub_if_positive(1, v)
#endif /* CONFIG_64BIT */
diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h
index 5e754e8957671..55a55ec043502 100644
--- a/arch/x86/include/asm/atomic.h
+++ b/arch/x86/include/asm/atomic.h
@@ -14,12 +14,6 @@
* resource counting etc..
*/
-/**
- * arch_atomic_read - read atomic variable
- * @v: pointer of type atomic_t
- *
- * Atomically reads the value of @v.
- */
static __always_inline int arch_atomic_read(const atomic_t *v)
{
/*
@@ -29,25 +23,11 @@ static __always_inline int arch_atomic_read(const atomic_t *v)
return __READ_ONCE((v)->counter);
}
-/**
- * arch_atomic_set - set atomic variable
- * @v: pointer of type atomic_t
- * @i: required value
- *
- * Atomically sets the value of @v to @i.
- */
static __always_inline void arch_atomic_set(atomic_t *v, int i)
{
__WRITE_ONCE(v->counter, i);
}
-/**
- * arch_atomic_add - add integer to atomic variable
- * @i: integer value to add
- * @v: pointer of type atomic_t
- *
- * Atomically adds @i to @v.
- */
static __always_inline void arch_atomic_add(int i, atomic_t *v)
{
asm volatile(LOCK_PREFIX "addl %1,%0"
@@ -55,13 +35,6 @@ static __always_inline void arch_atomic_add(int i, atomic_t *v)
: "ir" (i) : "memory");
}
-/**
- * arch_atomic_sub - subtract integer from atomic variable
- * @i: integer value to subtract
- * @v: pointer of type atomic_t
- *
- * Atomically subtracts @i from @v.
- */
static __always_inline void arch_atomic_sub(int i, atomic_t *v)
{
asm volatile(LOCK_PREFIX "subl %1,%0"
@@ -69,27 +42,12 @@ static __always_inline void arch_atomic_sub(int i, atomic_t *v)
: "ir" (i) : "memory");
}
-/**
- * arch_atomic_sub_and_test - subtract value from variable and test result
- * @i: integer value to subtract
- * @v: pointer of type atomic_t
- *
- * Atomically subtracts @i from @v and returns
- * true if the result is zero, or false for all
- * other cases.
- */
static __always_inline bool arch_atomic_sub_and_test(int i, atomic_t *v)
{
return GEN_BINARY_RMWcc(LOCK_PREFIX "subl", v->counter, e, "er", i);
}
#define arch_atomic_sub_and_test arch_atomic_sub_and_test
-/**
- * arch_atomic_inc - increment atomic variable
- * @v: pointer of type atomic_t
- *
- * Atomically increments @v by 1.
- */
static __always_inline void arch_atomic_inc(atomic_t *v)
{
asm volatile(LOCK_PREFIX "incl %0"
@@ -97,12 +55,6 @@ static __always_inline void arch_atomic_inc(atomic_t *v)
}
#define arch_atomic_inc arch_atomic_inc
-/**
- * arch_atomic_dec - decrement atomic variable
- * @v: pointer of type atomic_t
- *
- * Atomically decrements @v by 1.
- */
static __always_inline void arch_atomic_dec(atomic_t *v)
{
asm volatile(LOCK_PREFIX "decl %0"
@@ -110,69 +62,30 @@ static __always_inline void arch_atomic_dec(atomic_t *v)
}
#define arch_atomic_dec arch_atomic_dec
-/**
- * arch_atomic_dec_and_test - decrement and test
- * @v: pointer of type atomic_t
- *
- * Atomically decrements @v by 1 and
- * returns true if the result is 0, or false for all other
- * cases.
- */
static __always_inline bool arch_atomic_dec_and_test(atomic_t *v)
{
return GEN_UNARY_RMWcc(LOCK_PREFIX "decl", v->counter, e);
}
#define arch_atomic_dec_and_test arch_atomic_dec_and_test
-/**
- * arch_atomic_inc_and_test - increment and test
- * @v: pointer of type atomic_t
- *
- * Atomically increments @v by 1
- * and returns true if the result is zero, or false for all
- * other cases.
- */
static __always_inline bool arch_atomic_inc_and_test(atomic_t *v)
{
return GEN_UNARY_RMWcc(LOCK_PREFIX "incl", v->counter, e);
}
#define arch_atomic_inc_and_test arch_atomic_inc_and_test
-/**
- * arch_atomic_add_negative - add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic_t
- *
- * Atomically adds @i to @v and returns true
- * if the result is negative, or false when
- * result is greater than or equal to zero.
- */
static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v)
{
return GEN_BINARY_RMWcc(LOCK_PREFIX "addl", v->counter, s, "er", i);
}
#define arch_atomic_add_negative arch_atomic_add_negative
-/**
- * arch_atomic_add_return - add integer and return
- * @i: integer value to add
- * @v: pointer of type atomic_t
- *
- * Atomically adds @i to @v and returns @i + @v
- */
static __always_inline int arch_atomic_add_return(int i, atomic_t *v)
{
return i + xadd(&v->counter, i);
}
#define arch_atomic_add_return arch_atomic_add_return
-/**
- * arch_atomic_sub_return - subtract integer and return
- * @v: pointer of type atomic_t
- * @i: integer value to subtract
- *
- * Atomically subtracts @i from @v and returns @v - @i
- */
static __always_inline int arch_atomic_sub_return(int i, atomic_t *v)
{
return arch_atomic_add_return(-i, v);
diff --git a/arch/x86/include/asm/atomic64_32.h b/arch/x86/include/asm/atomic64_32.h
index 808b4eece251e..3486d91b8595f 100644
--- a/arch/x86/include/asm/atomic64_32.h
+++ b/arch/x86/include/asm/atomic64_32.h
@@ -61,30 +61,12 @@ ATOMIC64_DECL(add_unless);
#undef __ATOMIC64_DECL
#undef ATOMIC64_EXPORT
-/**
- * arch_atomic64_cmpxchg - cmpxchg atomic64 variable
- * @v: pointer to type atomic64_t
- * @o: expected value
- * @n: new value
- *
- * Atomically sets @v to @n if it was equal to @o and returns
- * the old value.
- */
-
static __always_inline s64 arch_atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n)
{
return arch_cmpxchg64(&v->counter, o, n);
}
#define arch_atomic64_cmpxchg arch_atomic64_cmpxchg
-/**
- * arch_atomic64_xchg - xchg atomic64 variable
- * @v: pointer to type atomic64_t
- * @n: value to assign
- *
- * Atomically xchgs the value of @v to @n and returns
- * the old value.
- */
static __always_inline s64 arch_atomic64_xchg(atomic64_t *v, s64 n)
{
s64 o;
@@ -97,13 +79,6 @@ static __always_inline s64 arch_atomic64_xchg(atomic64_t *v, s64 n)
}
#define arch_atomic64_xchg arch_atomic64_xchg
-/**
- * arch_atomic64_set - set atomic64 variable
- * @v: pointer to type atomic64_t
- * @i: value to assign
- *
- * Atomically sets the value of @v to @n.
- */
static __always_inline void arch_atomic64_set(atomic64_t *v, s64 i)
{
unsigned high = (unsigned)(i >> 32);
@@ -113,12 +88,6 @@ static __always_inline void arch_atomic64_set(atomic64_t *v, s64 i)
: "eax", "edx", "memory");
}
-/**
- * arch_atomic64_read - read atomic64 variable
- * @v: pointer to type atomic64_t
- *
- * Atomically reads the value of @v and returns it.
- */
static __always_inline s64 arch_atomic64_read(const atomic64_t *v)
{
s64 r;
@@ -126,13 +95,6 @@ static __always_inline s64 arch_atomic64_read(const atomic64_t *v)
return r;
}
-/**
- * arch_atomic64_add_return - add and return
- * @i: integer value to add
- * @v: pointer to type atomic64_t
- *
- * Atomically adds @i to @v and returns @i + *@v
- */
static __always_inline s64 arch_atomic64_add_return(s64 i, atomic64_t *v)
{
alternative_atomic64(add_return,
@@ -142,9 +104,6 @@ static __always_inline s64 arch_atomic64_add_return(s64 i, atomic64_t *v)
}
#define arch_atomic64_add_return arch_atomic64_add_return
-/*
- * Other variants with different arithmetic operators:
- */
static __always_inline s64 arch_atomic64_sub_return(s64 i, atomic64_t *v)
{
alternative_atomic64(sub_return,
@@ -172,13 +131,6 @@ static __always_inline s64 arch_atomic64_dec_return(atomic64_t *v)
}
#define arch_atomic64_dec_return arch_atomic64_dec_return
-/**
- * arch_atomic64_add - add integer to atomic64 variable
- * @i: integer value to add
- * @v: pointer to type atomic64_t
- *
- * Atomically adds @i to @v.
- */
static __always_inline s64 arch_atomic64_add(s64 i, atomic64_t *v)
{
__alternative_atomic64(add, add_return,
@@ -187,13 +139,6 @@ static __always_inline s64 arch_atomic64_add(s64 i, atomic64_t *v)
return i;
}
-/**
- * arch_atomic64_sub - subtract the atomic64 variable
- * @i: integer value to subtract
- * @v: pointer to type atomic64_t
- *
- * Atomically subtracts @i from @v.
- */
static __always_inline s64 arch_atomic64_sub(s64 i, atomic64_t *v)
{
__alternative_atomic64(sub, sub_return,
@@ -202,12 +147,6 @@ static __always_inline s64 arch_atomic64_sub(s64 i, atomic64_t *v)
return i;
}
-/**
- * arch_atomic64_inc - increment atomic64 variable
- * @v: pointer to type atomic64_t
- *
- * Atomically increments @v by 1.
- */
static __always_inline void arch_atomic64_inc(atomic64_t *v)
{
__alternative_atomic64(inc, inc_return, /* no output */,
@@ -215,12 +154,6 @@ static __always_inline void arch_atomic64_inc(atomic64_t *v)
}
#define arch_atomic64_inc arch_atomic64_inc
-/**
- * arch_atomic64_dec - decrement atomic64 variable
- * @v: pointer to type atomic64_t
- *
- * Atomically decrements @v by 1.
- */
static __always_inline void arch_atomic64_dec(atomic64_t *v)
{
__alternative_atomic64(dec, dec_return, /* no output */,
@@ -228,15 +161,6 @@ static __always_inline void arch_atomic64_dec(atomic64_t *v)
}
#define arch_atomic64_dec arch_atomic64_dec
-/**
- * arch_atomic64_add_unless - add unless the number is a given value
- * @v: pointer of type atomic64_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, so long as it was not @u.
- * Returns non-zero if the add was done, zero otherwise.
- */
static __always_inline int arch_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
{
unsigned low = (unsigned)u;
diff --git a/arch/x86/include/asm/atomic64_64.h b/arch/x86/include/asm/atomic64_64.h
index c496595bf6012..3165c0feedf74 100644
--- a/arch/x86/include/asm/atomic64_64.h
+++ b/arch/x86/include/asm/atomic64_64.h
@@ -10,37 +10,16 @@
#define ATOMIC64_INIT(i) { (i) }
-/**
- * arch_atomic64_read - read atomic64 variable
- * @v: pointer of type atomic64_t
- *
- * Atomically reads the value of @v.
- * Doesn't imply a read memory barrier.
- */
static __always_inline s64 arch_atomic64_read(const atomic64_t *v)
{
return __READ_ONCE((v)->counter);
}
-/**
- * arch_atomic64_set - set atomic64 variable
- * @v: pointer to type atomic64_t
- * @i: required value
- *
- * Atomically sets the value of @v to @i.
- */
static __always_inline void arch_atomic64_set(atomic64_t *v, s64 i)
{
__WRITE_ONCE(v->counter, i);
}
-/**
- * arch_atomic64_add - add integer to atomic64 variable
- * @i: integer value to add
- * @v: pointer to type atomic64_t
- *
- * Atomically adds @i to @v.
- */
static __always_inline void arch_atomic64_add(s64 i, atomic64_t *v)
{
asm volatile(LOCK_PREFIX "addq %1,%0"
@@ -48,13 +27,6 @@ static __always_inline void arch_atomic64_add(s64 i, atomic64_t *v)
: "er" (i), "m" (v->counter) : "memory");
}
-/**
- * arch_atomic64_sub - subtract the atomic64 variable
- * @i: integer value to subtract
- * @v: pointer to type atomic64_t
- *
- * Atomically subtracts @i from @v.
- */
static __always_inline void arch_atomic64_sub(s64 i, atomic64_t *v)
{
asm volatile(LOCK_PREFIX "subq %1,%0"
@@ -62,27 +34,12 @@ static __always_inline void arch_atomic64_sub(s64 i, atomic64_t *v)
: "er" (i), "m" (v->counter) : "memory");
}
-/**
- * arch_atomic64_sub_and_test - subtract value from variable and test result
- * @i: integer value to subtract
- * @v: pointer to type atomic64_t
- *
- * Atomically subtracts @i from @v and returns
- * true if the result is zero, or false for all
- * other cases.
- */
static __always_inline bool arch_atomic64_sub_and_test(s64 i, atomic64_t *v)
{
return GEN_BINARY_RMWcc(LOCK_PREFIX "subq", v->counter, e, "er", i);
}
#define arch_atomic64_sub_and_test arch_atomic64_sub_and_test
-/**
- * arch_atomic64_inc - increment atomic64 variable
- * @v: pointer to type atomic64_t
- *
- * Atomically increments @v by 1.
- */
static __always_inline void arch_atomic64_inc(atomic64_t *v)
{
asm volatile(LOCK_PREFIX "incq %0"
@@ -91,12 +48,6 @@ static __always_inline void arch_atomic64_inc(atomic64_t *v)
}
#define arch_atomic64_inc arch_atomic64_inc
-/**
- * arch_atomic64_dec - decrement atomic64 variable
- * @v: pointer to type atomic64_t
- *
- * Atomically decrements @v by 1.
- */
static __always_inline void arch_atomic64_dec(atomic64_t *v)
{
asm volatile(LOCK_PREFIX "decq %0"
@@ -105,56 +56,24 @@ static __always_inline void arch_atomic64_dec(atomic64_t *v)
}
#define arch_atomic64_dec arch_atomic64_dec
-/**
- * arch_atomic64_dec_and_test - decrement and test
- * @v: pointer to type atomic64_t
- *
- * Atomically decrements @v by 1 and
- * returns true if the result is 0, or false for all other
- * cases.
- */
static __always_inline bool arch_atomic64_dec_and_test(atomic64_t *v)
{
return GEN_UNARY_RMWcc(LOCK_PREFIX "decq", v->counter, e);
}
#define arch_atomic64_dec_and_test arch_atomic64_dec_and_test
-/**
- * arch_atomic64_inc_and_test - increment and test
- * @v: pointer to type atomic64_t
- *
- * Atomically increments @v by 1
- * and returns true if the result is zero, or false for all
- * other cases.
- */
static __always_inline bool arch_atomic64_inc_and_test(atomic64_t *v)
{
return GEN_UNARY_RMWcc(LOCK_PREFIX "incq", v->counter, e);
}
#define arch_atomic64_inc_and_test arch_atomic64_inc_and_test
-/**
- * arch_atomic64_add_negative - add and test if negative
- * @i: integer value to add
- * @v: pointer to type atomic64_t
- *
- * Atomically adds @i to @v and returns true
- * if the result is negative, or false when
- * result is greater than or equal to zero.
- */
static __always_inline bool arch_atomic64_add_negative(s64 i, atomic64_t *v)
{
return GEN_BINARY_RMWcc(LOCK_PREFIX "addq", v->counter, s, "er", i);
}
#define arch_atomic64_add_negative arch_atomic64_add_negative
-/**
- * arch_atomic64_add_return - add and return
- * @i: integer value to add
- * @v: pointer to type atomic64_t
- *
- * Atomically adds @i to @v and returns @i + @v
- */
static __always_inline s64 arch_atomic64_add_return(s64 i, atomic64_t *v)
{
return i + xadd(&v->counter, i);
--
2.30.2
Now that we have raw_atomic*_<op>() definitions, there's no need to use
arch_atomic*_<op>() definitions outside of the low-level atomic
definitions.
Move treewide users of arch_atomic*_<op>() over to the equivalent
raw_atomic*_<op>().
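The conversion is purely mechanical, e.g. (taken from the cpumask
change below):
| - return arch_atomic_read(&__num_online_cpus);
| + return raw_atomic_read(&__num_online_cpus);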
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/powerpc/kernel/smp.c | 12 ++++++------
arch/x86/kernel/alternative.c | 4 ++--
arch/x86/kernel/cpu/mce/core.c | 16 ++++++++--------
arch/x86/kernel/nmi.c | 2 +-
arch/x86/kernel/pvclock.c | 4 ++--
arch/x86/kvm/x86.c | 2 +-
include/asm-generic/bitops/atomic.h | 12 ++++++------
include/asm-generic/bitops/lock.h | 8 ++++----
include/linux/context_tracking.h | 4 ++--
include/linux/context_tracking_state.h | 2 +-
include/linux/cpumask.h | 2 +-
include/linux/jump_label.h | 2 +-
kernel/context_tracking.c | 12 ++++++------
kernel/sched/clock.c | 2 +-
14 files changed, 42 insertions(+), 42 deletions(-)
diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 265801a3e94cf..e8965f18686f0 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -417,9 +417,9 @@ noinstr static void nmi_ipi_lock_start(unsigned long *flags)
{
raw_local_irq_save(*flags);
hard_irq_disable();
- while (arch_atomic_cmpxchg(&__nmi_ipi_lock, 0, 1) == 1) {
+ while (raw_atomic_cmpxchg(&__nmi_ipi_lock, 0, 1) == 1) {
raw_local_irq_restore(*flags);
- spin_until_cond(arch_atomic_read(&__nmi_ipi_lock) == 0);
+ spin_until_cond(raw_atomic_read(&__nmi_ipi_lock) == 0);
raw_local_irq_save(*flags);
hard_irq_disable();
}
@@ -427,15 +427,15 @@ noinstr static void nmi_ipi_lock_start(unsigned long *flags)
noinstr static void nmi_ipi_lock(void)
{
- while (arch_atomic_cmpxchg(&__nmi_ipi_lock, 0, 1) == 1)
- spin_until_cond(arch_atomic_read(&__nmi_ipi_lock) == 0);
+ while (raw_atomic_cmpxchg(&__nmi_ipi_lock, 0, 1) == 1)
+ spin_until_cond(raw_atomic_read(&__nmi_ipi_lock) == 0);
}
noinstr static void nmi_ipi_unlock(void)
{
smp_mb();
- WARN_ON(arch_atomic_read(&__nmi_ipi_lock) != 1);
- arch_atomic_set(&__nmi_ipi_lock, 0);
+ WARN_ON(raw_atomic_read(&__nmi_ipi_lock) != 1);
+ raw_atomic_set(&__nmi_ipi_lock, 0);
}
noinstr static void nmi_ipi_unlock_end(unsigned long *flags)
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index f615e0cb6d932..18f16e93838fe 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -1799,7 +1799,7 @@ struct bp_patching_desc *try_get_desc(void)
{
struct bp_patching_desc *desc = &bp_desc;
- if (!arch_atomic_inc_not_zero(&desc->refs))
+ if (!raw_atomic_inc_not_zero(&desc->refs))
return NULL;
return desc;
@@ -1810,7 +1810,7 @@ static __always_inline void put_desc(void)
struct bp_patching_desc *desc = &bp_desc;
smp_mb__before_atomic();
- arch_atomic_dec(&desc->refs);
+ raw_atomic_dec(&desc->refs);
}
static __always_inline void *text_poke_addr(struct text_poke_loc *tp)
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 2eec60f50057a..ab156e6e71208 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -1022,12 +1022,12 @@ static noinstr int mce_start(int *no_way_out)
if (!timeout)
return ret;
- arch_atomic_add(*no_way_out, &global_nwo);
+ raw_atomic_add(*no_way_out, &global_nwo);
/*
* Rely on the implied barrier below, such that global_nwo
* is updated before mce_callin.
*/
- order = arch_atomic_inc_return(&mce_callin);
+ order = raw_atomic_inc_return(&mce_callin);
arch_cpumask_clear_cpu(smp_processor_id(), &mce_missing_cpus);
/* Enable instrumentation around calls to external facilities */
@@ -1036,10 +1036,10 @@ static noinstr int mce_start(int *no_way_out)
/*
* Wait for everyone.
*/
- while (arch_atomic_read(&mce_callin) != num_online_cpus()) {
+ while (raw_atomic_read(&mce_callin) != num_online_cpus()) {
if (mce_timed_out(&timeout,
"Timeout: Not all CPUs entered broadcast exception handler")) {
- arch_atomic_set(&global_nwo, 0);
+ raw_atomic_set(&global_nwo, 0);
goto out;
}
ndelay(SPINUNIT);
@@ -1054,7 +1054,7 @@ static noinstr int mce_start(int *no_way_out)
/*
* Monarch: Starts executing now, the others wait.
*/
- arch_atomic_set(&mce_executing, 1);
+ raw_atomic_set(&mce_executing, 1);
} else {
/*
* Subject: Now start the scanning loop one by one in
@@ -1062,10 +1062,10 @@ static noinstr int mce_start(int *no_way_out)
* This way when there are any shared banks it will be
* only seen by one CPU before cleared, avoiding duplicates.
*/
- while (arch_atomic_read(&mce_executing) < order) {
+ while (raw_atomic_read(&mce_executing) < order) {
if (mce_timed_out(&timeout,
"Timeout: Subject CPUs unable to finish machine check processing")) {
- arch_atomic_set(&global_nwo, 0);
+ raw_atomic_set(&global_nwo, 0);
goto out;
}
ndelay(SPINUNIT);
@@ -1075,7 +1075,7 @@ static noinstr int mce_start(int *no_way_out)
/*
* Cache the global no_way_out state.
*/
- *no_way_out = arch_atomic_read(&global_nwo);
+ *no_way_out = raw_atomic_read(&global_nwo);
ret = order;
diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c
index 776f4b1e395b5..a0c551846b35f 100644
--- a/arch/x86/kernel/nmi.c
+++ b/arch/x86/kernel/nmi.c
@@ -496,7 +496,7 @@ DEFINE_IDTENTRY_RAW(exc_nmi)
*/
sev_es_nmi_complete();
if (IS_ENABLED(CONFIG_NMI_CHECK_CPU))
- arch_atomic_long_inc(&nsp->idt_calls);
+ raw_atomic_long_inc(&nsp->idt_calls);
if (IS_ENABLED(CONFIG_SMP) && arch_cpu_is_offline(smp_processor_id()))
return;
diff --git a/arch/x86/kernel/pvclock.c b/arch/x86/kernel/pvclock.c
index 56acf53a782ad..b3f81379c2fc0 100644
--- a/arch/x86/kernel/pvclock.c
+++ b/arch/x86/kernel/pvclock.c
@@ -101,11 +101,11 @@ u64 __pvclock_clocksource_read(struct pvclock_vcpu_time_info *src, bool dowd)
* updating at the same time, and one of them could be slightly behind,
* making the assumption that last_value always go forward fail to hold.
*/
- last = arch_atomic64_read(&last_value);
+ last = raw_atomic64_read(&last_value);
do {
if (ret <= last)
return last;
- } while (!arch_atomic64_try_cmpxchg(&last_value, &last, ret));
+ } while (!raw_atomic64_try_cmpxchg(&last_value, &last, ret));
return ret;
}
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index ceb7c5e9cf9e9..ac6f609068106 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -13155,7 +13155,7 @@ EXPORT_SYMBOL_GPL(kvm_arch_end_assignment);
bool noinstr kvm_arch_has_assigned_device(struct kvm *kvm)
{
- return arch_atomic_read(&kvm->arch.assigned_device_count);
+ return raw_atomic_read(&kvm->arch.assigned_device_count);
}
EXPORT_SYMBOL_GPL(kvm_arch_has_assigned_device);
diff --git a/include/asm-generic/bitops/atomic.h b/include/asm-generic/bitops/atomic.h
index 71ab4ba9c25d1..e076e079f6b2e 100644
--- a/include/asm-generic/bitops/atomic.h
+++ b/include/asm-generic/bitops/atomic.h
@@ -15,21 +15,21 @@ static __always_inline void
arch_set_bit(unsigned int nr, volatile unsigned long *p)
{
p += BIT_WORD(nr);
- arch_atomic_long_or(BIT_MASK(nr), (atomic_long_t *)p);
+ raw_atomic_long_or(BIT_MASK(nr), (atomic_long_t *)p);
}
static __always_inline void
arch_clear_bit(unsigned int nr, volatile unsigned long *p)
{
p += BIT_WORD(nr);
- arch_atomic_long_andnot(BIT_MASK(nr), (atomic_long_t *)p);
+ raw_atomic_long_andnot(BIT_MASK(nr), (atomic_long_t *)p);
}
static __always_inline void
arch_change_bit(unsigned int nr, volatile unsigned long *p)
{
p += BIT_WORD(nr);
- arch_atomic_long_xor(BIT_MASK(nr), (atomic_long_t *)p);
+ raw_atomic_long_xor(BIT_MASK(nr), (atomic_long_t *)p);
}
static __always_inline int
@@ -39,7 +39,7 @@ arch_test_and_set_bit(unsigned int nr, volatile unsigned long *p)
unsigned long mask = BIT_MASK(nr);
p += BIT_WORD(nr);
- old = arch_atomic_long_fetch_or(mask, (atomic_long_t *)p);
+ old = raw_atomic_long_fetch_or(mask, (atomic_long_t *)p);
return !!(old & mask);
}
@@ -50,7 +50,7 @@ arch_test_and_clear_bit(unsigned int nr, volatile unsigned long *p)
unsigned long mask = BIT_MASK(nr);
p += BIT_WORD(nr);
- old = arch_atomic_long_fetch_andnot(mask, (atomic_long_t *)p);
+ old = raw_atomic_long_fetch_andnot(mask, (atomic_long_t *)p);
return !!(old & mask);
}
@@ -61,7 +61,7 @@ arch_test_and_change_bit(unsigned int nr, volatile unsigned long *p)
unsigned long mask = BIT_MASK(nr);
p += BIT_WORD(nr);
- old = arch_atomic_long_fetch_xor(mask, (atomic_long_t *)p);
+ old = raw_atomic_long_fetch_xor(mask, (atomic_long_t *)p);
return !!(old & mask);
}
diff --git a/include/asm-generic/bitops/lock.h b/include/asm-generic/bitops/lock.h
index 630f2f6b95956..40913516e654c 100644
--- a/include/asm-generic/bitops/lock.h
+++ b/include/asm-generic/bitops/lock.h
@@ -25,7 +25,7 @@ arch_test_and_set_bit_lock(unsigned int nr, volatile unsigned long *p)
if (READ_ONCE(*p) & mask)
return 1;
- old = arch_atomic_long_fetch_or_acquire(mask, (atomic_long_t *)p);
+ old = raw_atomic_long_fetch_or_acquire(mask, (atomic_long_t *)p);
return !!(old & mask);
}
@@ -41,7 +41,7 @@ static __always_inline void
arch_clear_bit_unlock(unsigned int nr, volatile unsigned long *p)
{
p += BIT_WORD(nr);
- arch_atomic_long_fetch_andnot_release(BIT_MASK(nr), (atomic_long_t *)p);
+ raw_atomic_long_fetch_andnot_release(BIT_MASK(nr), (atomic_long_t *)p);
}
/**
@@ -63,7 +63,7 @@ arch___clear_bit_unlock(unsigned int nr, volatile unsigned long *p)
p += BIT_WORD(nr);
old = READ_ONCE(*p);
old &= ~BIT_MASK(nr);
- arch_atomic_long_set_release((atomic_long_t *)p, old);
+ raw_atomic_long_set_release((atomic_long_t *)p, old);
}
/**
@@ -83,7 +83,7 @@ static inline bool arch_clear_bit_unlock_is_negative_byte(unsigned int nr,
unsigned long mask = BIT_MASK(nr);
p += BIT_WORD(nr);
- old = arch_atomic_long_fetch_andnot_release(mask, (atomic_long_t *)p);
+ old = raw_atomic_long_fetch_andnot_release(mask, (atomic_long_t *)p);
return !!(old & BIT(7));
}
#define arch_clear_bit_unlock_is_negative_byte arch_clear_bit_unlock_is_negative_byte
diff --git a/include/linux/context_tracking.h b/include/linux/context_tracking.h
index d3cbb6c16babf..6e76b9dba00e7 100644
--- a/include/linux/context_tracking.h
+++ b/include/linux/context_tracking.h
@@ -119,7 +119,7 @@ extern void ct_idle_exit(void);
*/
static __always_inline bool rcu_dynticks_curr_cpu_in_eqs(void)
{
- return !(arch_atomic_read(this_cpu_ptr(&context_tracking.state)) & RCU_DYNTICKS_IDX);
+ return !(raw_atomic_read(this_cpu_ptr(&context_tracking.state)) & RCU_DYNTICKS_IDX);
}
/*
@@ -128,7 +128,7 @@ static __always_inline bool rcu_dynticks_curr_cpu_in_eqs(void)
*/
static __always_inline unsigned long ct_state_inc(int incby)
{
- return arch_atomic_add_return(incby, this_cpu_ptr(&context_tracking.state));
+ return raw_atomic_add_return(incby, this_cpu_ptr(&context_tracking.state));
}
static __always_inline bool warn_rcu_enter(void)
diff --git a/include/linux/context_tracking_state.h b/include/linux/context_tracking_state.h
index fdd537ea513ff..bbff5f7f88030 100644
--- a/include/linux/context_tracking_state.h
+++ b/include/linux/context_tracking_state.h
@@ -51,7 +51,7 @@ DECLARE_PER_CPU(struct context_tracking, context_tracking);
#ifdef CONFIG_CONTEXT_TRACKING_USER
static __always_inline int __ct_state(void)
{
- return arch_atomic_read(this_cpu_ptr(&context_tracking.state)) & CT_STATE_MASK;
+ return raw_atomic_read(this_cpu_ptr(&context_tracking.state)) & CT_STATE_MASK;
}
#endif
diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h
index ca736b05ec7b0..0d2e2a38b92d0 100644
--- a/include/linux/cpumask.h
+++ b/include/linux/cpumask.h
@@ -1071,7 +1071,7 @@ static inline const struct cpumask *get_cpu_mask(unsigned int cpu)
*/
static __always_inline unsigned int num_online_cpus(void)
{
- return arch_atomic_read(&__num_online_cpus);
+ return raw_atomic_read(&__num_online_cpus);
}
#define num_possible_cpus() cpumask_weight(cpu_possible_mask)
#define num_present_cpus() cpumask_weight(cpu_present_mask)
diff --git a/include/linux/jump_label.h b/include/linux/jump_label.h
index 4e968ebadce60..f0a949b7c9733 100644
--- a/include/linux/jump_label.h
+++ b/include/linux/jump_label.h
@@ -257,7 +257,7 @@ extern enum jump_label_type jump_label_init_type(struct jump_entry *entry);
static __always_inline int static_key_count(struct static_key *key)
{
- return arch_atomic_read(&key->enabled);
+ return raw_atomic_read(&key->enabled);
}
static __always_inline void jump_label_init(void)
diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c
index a09f1c19336ae..6ef0b35fc28c5 100644
--- a/kernel/context_tracking.c
+++ b/kernel/context_tracking.c
@@ -510,7 +510,7 @@ void noinstr __ct_user_enter(enum ctx_state state)
* In this we case we don't care about any concurrency/ordering.
*/
if (!IS_ENABLED(CONFIG_CONTEXT_TRACKING_IDLE))
- arch_atomic_set(&ct->state, state);
+ raw_atomic_set(&ct->state, state);
} else {
/*
* Even if context tracking is disabled on this CPU, because it's outside
@@ -527,7 +527,7 @@ void noinstr __ct_user_enter(enum ctx_state state)
*/
if (!IS_ENABLED(CONFIG_CONTEXT_TRACKING_IDLE)) {
/* Tracking for vtime only, no concurrent RCU EQS accounting */
- arch_atomic_set(&ct->state, state);
+ raw_atomic_set(&ct->state, state);
} else {
/*
* Tracking for vtime and RCU EQS. Make sure we don't race
@@ -535,7 +535,7 @@ void noinstr __ct_user_enter(enum ctx_state state)
* RCU only requires RCU_DYNTICKS_IDX increments to be fully
* ordered.
*/
- arch_atomic_add(state, &ct->state);
+ raw_atomic_add(state, &ct->state);
}
}
}
@@ -630,12 +630,12 @@ void noinstr __ct_user_exit(enum ctx_state state)
* In this we case we don't care about any concurrency/ordering.
*/
if (!IS_ENABLED(CONFIG_CONTEXT_TRACKING_IDLE))
- arch_atomic_set(&ct->state, CONTEXT_KERNEL);
+ raw_atomic_set(&ct->state, CONTEXT_KERNEL);
} else {
if (!IS_ENABLED(CONFIG_CONTEXT_TRACKING_IDLE)) {
/* Tracking for vtime only, no concurrent RCU EQS accounting */
- arch_atomic_set(&ct->state, CONTEXT_KERNEL);
+ raw_atomic_set(&ct->state, CONTEXT_KERNEL);
} else {
/*
* Tracking for vtime and RCU EQS. Make sure we don't race
@@ -643,7 +643,7 @@ void noinstr __ct_user_exit(enum ctx_state state)
* RCU only requires RCU_DYNTICKS_IDX increments to be fully
* ordered.
*/
- arch_atomic_sub(state, &ct->state);
+ raw_atomic_sub(state, &ct->state);
}
}
}
diff --git a/kernel/sched/clock.c b/kernel/sched/clock.c
index b5cc2b53464de..71443cff31f0d 100644
--- a/kernel/sched/clock.c
+++ b/kernel/sched/clock.c
@@ -287,7 +287,7 @@ static __always_inline u64 sched_clock_local(struct sched_clock_data *scd)
clock = wrap_max(clock, min_clock);
clock = wrap_min(clock, max_clock);
- if (!arch_try_cmpxchg64(&scd->clock, &old_clock, clock))
+ if (!raw_try_cmpxchg64(&scd->clock, &old_clock, clock))
goto again;
return clock;
--
2.30.2
Currently each ordering variant has several potential definitions,
with a mixture of preprocessor and C definitions, including several
copies of its C prototype, e.g.
| #if defined(arch_atomic_fetch_andnot_acquire)
| #define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot_acquire
| #elif defined(arch_atomic_fetch_andnot_relaxed)
| static __always_inline int
| raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
| {
| int ret = arch_atomic_fetch_andnot_relaxed(i, v);
| __atomic_acquire_fence();
| return ret;
| }
| #elif defined(arch_atomic_fetch_andnot)
| #define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot
| #else
| static __always_inline int
| raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
| {
| return raw_atomic_fetch_and_acquire(~i, v);
| }
| #endif
Make this a bit simpler by defining the C prototype once, and writing
the various potential definitions as plain C code guarded by ifdeffery.
For example, the above becomes:
| static __always_inline int
| raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
| {
| #if defined(arch_atomic_fetch_andnot_acquire)
| return arch_atomic_fetch_andnot_acquire(i, v);
| #elif defined(arch_atomic_fetch_andnot_relaxed)
| int ret = arch_atomic_fetch_andnot_relaxed(i, v);
| __atomic_acquire_fence();
| return ret;
| #elif defined(arch_atomic_fetch_andnot)
| return arch_atomic_fetch_andnot(i, v);
| #else
| return raw_atomic_fetch_and_acquire(~i, v);
| #endif
| }
This is far easier to read. As there is now always a single copy of the
C prototype wrapping all the potential definitions, there is an obvious
single location for kerneldoc comments.
At the same time, the fallbacks for raw_atomic*_xchg() are made to use
'new' rather than 'i' as the name of the new value. This is what the
existing fallback template used, and is more consistent with the
raw_atomic_{try_,}cmpxchg() fallbacks.
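For example, the xchg fallbacks now take the form (a sketch following
the pattern above; the final fallback branch shown here is an
assumption):
| static __always_inline int
| raw_atomic_xchg_relaxed(atomic_t *v, int new)
| {
| #if defined(arch_atomic_xchg_relaxed)
| 	return arch_atomic_xchg_relaxed(v, new);
| #elif defined(arch_atomic_xchg)
| 	return arch_atomic_xchg(v, new);
| #else
| 	return raw_xchg_relaxed(&v->counter, new);
| #endif
| }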
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
---
include/linux/atomic/atomic-arch-fallback.h | 1790 +++++++++---------
include/linux/atomic/atomic-instrumented.h | 50 +-
include/linux/atomic/atomic-long.h | 26 +-
scripts/atomic/atomics.tbl | 2 +-
scripts/atomic/fallbacks/acquire | 4 -
scripts/atomic/fallbacks/add_negative | 4 -
scripts/atomic/fallbacks/add_unless | 4 -
scripts/atomic/fallbacks/andnot | 4 -
scripts/atomic/fallbacks/cmpxchg | 4 -
scripts/atomic/fallbacks/dec | 4 -
scripts/atomic/fallbacks/dec_and_test | 4 -
scripts/atomic/fallbacks/dec_if_positive | 4 -
scripts/atomic/fallbacks/dec_unless_positive | 4 -
scripts/atomic/fallbacks/fence | 4 -
scripts/atomic/fallbacks/fetch_add_unless | 4 -
scripts/atomic/fallbacks/inc | 4 -
scripts/atomic/fallbacks/inc_and_test | 4 -
scripts/atomic/fallbacks/inc_not_zero | 4 -
scripts/atomic/fallbacks/inc_unless_negative | 4 -
scripts/atomic/fallbacks/read_acquire | 4 -
scripts/atomic/fallbacks/release | 4 -
scripts/atomic/fallbacks/set_release | 4 -
scripts/atomic/fallbacks/sub_and_test | 4 -
scripts/atomic/fallbacks/try_cmpxchg | 4 -
scripts/atomic/fallbacks/xchg | 4 -
scripts/atomic/gen-atomic-fallback.sh | 26 +-
26 files changed, 901 insertions(+), 1077 deletions(-)
diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index 99bc1a871dc12..470c2890ab8d6 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -428,16 +428,20 @@ extern void raw_cmpxchg128_relaxed_not_implemented(void);
#define raw_sync_cmpxchg arch_sync_cmpxchg
-#define raw_atomic_read arch_atomic_read
+static __always_inline int
+raw_atomic_read(const atomic_t *v)
+{
+ return arch_atomic_read(v);
+}
-#if defined(arch_atomic_read_acquire)
-#define raw_atomic_read_acquire arch_atomic_read_acquire
-#elif defined(arch_atomic_read)
-#define raw_atomic_read_acquire arch_atomic_read
-#else
static __always_inline int
raw_atomic_read_acquire(const atomic_t *v)
{
+#if defined(arch_atomic_read_acquire)
+ return arch_atomic_read_acquire(v);
+#elif defined(arch_atomic_read)
+ return arch_atomic_read(v);
+#else
int ret;
if (__native_word(atomic_t)) {
@@ -448,1144 +452,1088 @@ raw_atomic_read_acquire(const atomic_t *v)
}
return ret;
-}
#endif
+}
-#define raw_atomic_set arch_atomic_set
+static __always_inline void
+raw_atomic_set(atomic_t *v, int i)
+{
+ arch_atomic_set(v, i);
+}
-#if defined(arch_atomic_set_release)
-#define raw_atomic_set_release arch_atomic_set_release
-#elif defined(arch_atomic_set)
-#define raw_atomic_set_release arch_atomic_set
-#else
static __always_inline void
raw_atomic_set_release(atomic_t *v, int i)
{
+#if defined(arch_atomic_set_release)
+ arch_atomic_set_release(v, i);
+#elif defined(arch_atomic_set)
+ arch_atomic_set(v, i);
+#else
if (__native_word(atomic_t)) {
smp_store_release(&(v)->counter, i);
} else {
__atomic_release_fence();
raw_atomic_set(v, i);
}
-}
#endif
+}
-#define raw_atomic_add arch_atomic_add
+static __always_inline void
+raw_atomic_add(int i, atomic_t *v)
+{
+ arch_atomic_add(i, v);
+}
-#if defined(arch_atomic_add_return)
-#define raw_atomic_add_return arch_atomic_add_return
-#elif defined(arch_atomic_add_return_relaxed)
static __always_inline int
raw_atomic_add_return(int i, atomic_t *v)
{
+#if defined(arch_atomic_add_return)
+ return arch_atomic_add_return(i, v);
+#elif defined(arch_atomic_add_return_relaxed)
int ret;
__atomic_pre_full_fence();
ret = arch_atomic_add_return_relaxed(i, v);
__atomic_post_full_fence();
return ret;
-}
#else
#error "Unable to define raw_atomic_add_return"
#endif
+}
-#if defined(arch_atomic_add_return_acquire)
-#define raw_atomic_add_return_acquire arch_atomic_add_return_acquire
-#elif defined(arch_atomic_add_return_relaxed)
static __always_inline int
raw_atomic_add_return_acquire(int i, atomic_t *v)
{
+#if defined(arch_atomic_add_return_acquire)
+ return arch_atomic_add_return_acquire(i, v);
+#elif defined(arch_atomic_add_return_relaxed)
int ret = arch_atomic_add_return_relaxed(i, v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic_add_return)
-#define raw_atomic_add_return_acquire arch_atomic_add_return
+ return arch_atomic_add_return(i, v);
#else
#error "Unable to define raw_atomic_add_return_acquire"
#endif
+}
-#if defined(arch_atomic_add_return_release)
-#define raw_atomic_add_return_release arch_atomic_add_return_release
-#elif defined(arch_atomic_add_return_relaxed)
static __always_inline int
raw_atomic_add_return_release(int i, atomic_t *v)
{
+#if defined(arch_atomic_add_return_release)
+ return arch_atomic_add_return_release(i, v);
+#elif defined(arch_atomic_add_return_relaxed)
__atomic_release_fence();
return arch_atomic_add_return_relaxed(i, v);
-}
#elif defined(arch_atomic_add_return)
-#define raw_atomic_add_return_release arch_atomic_add_return
+ return arch_atomic_add_return(i, v);
#else
#error "Unable to define raw_atomic_add_return_release"
#endif
+}
+static __always_inline int
+raw_atomic_add_return_relaxed(int i, atomic_t *v)
+{
#if defined(arch_atomic_add_return_relaxed)
-#define raw_atomic_add_return_relaxed arch_atomic_add_return_relaxed
+ return arch_atomic_add_return_relaxed(i, v);
#elif defined(arch_atomic_add_return)
-#define raw_atomic_add_return_relaxed arch_atomic_add_return
+ return arch_atomic_add_return(i, v);
#else
#error "Unable to define raw_atomic_add_return_relaxed"
#endif
+}
-#if defined(arch_atomic_fetch_add)
-#define raw_atomic_fetch_add arch_atomic_fetch_add
-#elif defined(arch_atomic_fetch_add_relaxed)
static __always_inline int
raw_atomic_fetch_add(int i, atomic_t *v)
{
+#if defined(arch_atomic_fetch_add)
+ return arch_atomic_fetch_add(i, v);
+#elif defined(arch_atomic_fetch_add_relaxed)
int ret;
__atomic_pre_full_fence();
ret = arch_atomic_fetch_add_relaxed(i, v);
__atomic_post_full_fence();
return ret;
-}
#else
#error "Unable to define raw_atomic_fetch_add"
#endif
+}
-#if defined(arch_atomic_fetch_add_acquire)
-#define raw_atomic_fetch_add_acquire arch_atomic_fetch_add_acquire
-#elif defined(arch_atomic_fetch_add_relaxed)
static __always_inline int
raw_atomic_fetch_add_acquire(int i, atomic_t *v)
{
+#if defined(arch_atomic_fetch_add_acquire)
+ return arch_atomic_fetch_add_acquire(i, v);
+#elif defined(arch_atomic_fetch_add_relaxed)
int ret = arch_atomic_fetch_add_relaxed(i, v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic_fetch_add)
-#define raw_atomic_fetch_add_acquire arch_atomic_fetch_add
+ return arch_atomic_fetch_add(i, v);
#else
#error "Unable to define raw_atomic_fetch_add_acquire"
#endif
+}
-#if defined(arch_atomic_fetch_add_release)
-#define raw_atomic_fetch_add_release arch_atomic_fetch_add_release
-#elif defined(arch_atomic_fetch_add_relaxed)
static __always_inline int
raw_atomic_fetch_add_release(int i, atomic_t *v)
{
+#if defined(arch_atomic_fetch_add_release)
+ return arch_atomic_fetch_add_release(i, v);
+#elif defined(arch_atomic_fetch_add_relaxed)
__atomic_release_fence();
return arch_atomic_fetch_add_relaxed(i, v);
-}
#elif defined(arch_atomic_fetch_add)
-#define raw_atomic_fetch_add_release arch_atomic_fetch_add
+ return arch_atomic_fetch_add(i, v);
#else
#error "Unable to define raw_atomic_fetch_add_release"
#endif
+}
+static __always_inline int
+raw_atomic_fetch_add_relaxed(int i, atomic_t *v)
+{
#if defined(arch_atomic_fetch_add_relaxed)
-#define raw_atomic_fetch_add_relaxed arch_atomic_fetch_add_relaxed
+ return arch_atomic_fetch_add_relaxed(i, v);
#elif defined(arch_atomic_fetch_add)
-#define raw_atomic_fetch_add_relaxed arch_atomic_fetch_add
+ return arch_atomic_fetch_add(i, v);
#else
#error "Unable to define raw_atomic_fetch_add_relaxed"
#endif
+}
-#define raw_atomic_sub arch_atomic_sub
+static __always_inline void
+raw_atomic_sub(int i, atomic_t *v)
+{
+ arch_atomic_sub(i, v);
+}
-#if defined(arch_atomic_sub_return)
-#define raw_atomic_sub_return arch_atomic_sub_return
-#elif defined(arch_atomic_sub_return_relaxed)
static __always_inline int
raw_atomic_sub_return(int i, atomic_t *v)
{
+#if defined(arch_atomic_sub_return)
+ return arch_atomic_sub_return(i, v);
+#elif defined(arch_atomic_sub_return_relaxed)
int ret;
__atomic_pre_full_fence();
ret = arch_atomic_sub_return_relaxed(i, v);
__atomic_post_full_fence();
return ret;
-}
#else
#error "Unable to define raw_atomic_sub_return"
#endif
+}
-#if defined(arch_atomic_sub_return_acquire)
-#define raw_atomic_sub_return_acquire arch_atomic_sub_return_acquire
-#elif defined(arch_atomic_sub_return_relaxed)
static __always_inline int
raw_atomic_sub_return_acquire(int i, atomic_t *v)
{
+#if defined(arch_atomic_sub_return_acquire)
+ return arch_atomic_sub_return_acquire(i, v);
+#elif defined(arch_atomic_sub_return_relaxed)
int ret = arch_atomic_sub_return_relaxed(i, v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic_sub_return)
-#define raw_atomic_sub_return_acquire arch_atomic_sub_return
+ return arch_atomic_sub_return(i, v);
#else
#error "Unable to define raw_atomic_sub_return_acquire"
#endif
+}
-#if defined(arch_atomic_sub_return_release)
-#define raw_atomic_sub_return_release arch_atomic_sub_return_release
-#elif defined(arch_atomic_sub_return_relaxed)
static __always_inline int
raw_atomic_sub_return_release(int i, atomic_t *v)
{
+#if defined(arch_atomic_sub_return_release)
+ return arch_atomic_sub_return_release(i, v);
+#elif defined(arch_atomic_sub_return_relaxed)
__atomic_release_fence();
return arch_atomic_sub_return_relaxed(i, v);
-}
#elif defined(arch_atomic_sub_return)
-#define raw_atomic_sub_return_release arch_atomic_sub_return
+ return arch_atomic_sub_return(i, v);
#else
#error "Unable to define raw_atomic_sub_return_release"
#endif
+}
+static __always_inline int
+raw_atomic_sub_return_relaxed(int i, atomic_t *v)
+{
#if defined(arch_atomic_sub_return_relaxed)
-#define raw_atomic_sub_return_relaxed arch_atomic_sub_return_relaxed
+ return arch_atomic_sub_return_relaxed(i, v);
#elif defined(arch_atomic_sub_return)
-#define raw_atomic_sub_return_relaxed arch_atomic_sub_return
+ return arch_atomic_sub_return(i, v);
#else
#error "Unable to define raw_atomic_sub_return_relaxed"
#endif
+}
-#if defined(arch_atomic_fetch_sub)
-#define raw_atomic_fetch_sub arch_atomic_fetch_sub
-#elif defined(arch_atomic_fetch_sub_relaxed)
static __always_inline int
raw_atomic_fetch_sub(int i, atomic_t *v)
{
+#if defined(arch_atomic_fetch_sub)
+ return arch_atomic_fetch_sub(i, v);
+#elif defined(arch_atomic_fetch_sub_relaxed)
int ret;
__atomic_pre_full_fence();
ret = arch_atomic_fetch_sub_relaxed(i, v);
__atomic_post_full_fence();
return ret;
-}
#else
#error "Unable to define raw_atomic_fetch_sub"
#endif
+}
-#if defined(arch_atomic_fetch_sub_acquire)
-#define raw_atomic_fetch_sub_acquire arch_atomic_fetch_sub_acquire
-#elif defined(arch_atomic_fetch_sub_relaxed)
static __always_inline int
raw_atomic_fetch_sub_acquire(int i, atomic_t *v)
{
+#if defined(arch_atomic_fetch_sub_acquire)
+ return arch_atomic_fetch_sub_acquire(i, v);
+#elif defined(arch_atomic_fetch_sub_relaxed)
int ret = arch_atomic_fetch_sub_relaxed(i, v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic_fetch_sub)
-#define raw_atomic_fetch_sub_acquire arch_atomic_fetch_sub
+ return arch_atomic_fetch_sub(i, v);
#else
#error "Unable to define raw_atomic_fetch_sub_acquire"
#endif
+}
-#if defined(arch_atomic_fetch_sub_release)
-#define raw_atomic_fetch_sub_release arch_atomic_fetch_sub_release
-#elif defined(arch_atomic_fetch_sub_relaxed)
static __always_inline int
raw_atomic_fetch_sub_release(int i, atomic_t *v)
{
+#if defined(arch_atomic_fetch_sub_release)
+ return arch_atomic_fetch_sub_release(i, v);
+#elif defined(arch_atomic_fetch_sub_relaxed)
__atomic_release_fence();
return arch_atomic_fetch_sub_relaxed(i, v);
-}
#elif defined(arch_atomic_fetch_sub)
-#define raw_atomic_fetch_sub_release arch_atomic_fetch_sub
+ return arch_atomic_fetch_sub(i, v);
#else
#error "Unable to define raw_atomic_fetch_sub_release"
#endif
+}
+static __always_inline int
+raw_atomic_fetch_sub_relaxed(int i, atomic_t *v)
+{
#if defined(arch_atomic_fetch_sub_relaxed)
-#define raw_atomic_fetch_sub_relaxed arch_atomic_fetch_sub_relaxed
+ return arch_atomic_fetch_sub_relaxed(i, v);
#elif defined(arch_atomic_fetch_sub)
-#define raw_atomic_fetch_sub_relaxed arch_atomic_fetch_sub
+ return arch_atomic_fetch_sub(i, v);
#else
#error "Unable to define raw_atomic_fetch_sub_relaxed"
#endif
+}
-#if defined(arch_atomic_inc)
-#define raw_atomic_inc arch_atomic_inc
-#else
static __always_inline void
raw_atomic_inc(atomic_t *v)
{
+#if defined(arch_atomic_inc)
+ arch_atomic_inc(v);
+#else
raw_atomic_add(1, v);
-}
#endif
+}
-#if defined(arch_atomic_inc_return)
-#define raw_atomic_inc_return arch_atomic_inc_return
-#elif defined(arch_atomic_inc_return_relaxed)
static __always_inline int
raw_atomic_inc_return(atomic_t *v)
{
+#if defined(arch_atomic_inc_return)
+ return arch_atomic_inc_return(v);
+#elif defined(arch_atomic_inc_return_relaxed)
int ret;
__atomic_pre_full_fence();
ret = arch_atomic_inc_return_relaxed(v);
__atomic_post_full_fence();
return ret;
-}
#else
-static __always_inline int
-raw_atomic_inc_return(atomic_t *v)
-{
return raw_atomic_add_return(1, v);
-}
#endif
+}
-#if defined(arch_atomic_inc_return_acquire)
-#define raw_atomic_inc_return_acquire arch_atomic_inc_return_acquire
-#elif defined(arch_atomic_inc_return_relaxed)
static __always_inline int
raw_atomic_inc_return_acquire(atomic_t *v)
{
+#if defined(arch_atomic_inc_return_acquire)
+ return arch_atomic_inc_return_acquire(v);
+#elif defined(arch_atomic_inc_return_relaxed)
int ret = arch_atomic_inc_return_relaxed(v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic_inc_return)
-#define raw_atomic_inc_return_acquire arch_atomic_inc_return
+ return arch_atomic_inc_return(v);
#else
-static __always_inline int
-raw_atomic_inc_return_acquire(atomic_t *v)
-{
return raw_atomic_add_return_acquire(1, v);
-}
#endif
+}
-#if defined(arch_atomic_inc_return_release)
-#define raw_atomic_inc_return_release arch_atomic_inc_return_release
-#elif defined(arch_atomic_inc_return_relaxed)
static __always_inline int
raw_atomic_inc_return_release(atomic_t *v)
{
+#if defined(arch_atomic_inc_return_release)
+ return arch_atomic_inc_return_release(v);
+#elif defined(arch_atomic_inc_return_relaxed)
__atomic_release_fence();
return arch_atomic_inc_return_relaxed(v);
-}
#elif defined(arch_atomic_inc_return)
-#define raw_atomic_inc_return_release arch_atomic_inc_return
+ return arch_atomic_inc_return(v);
#else
-static __always_inline int
-raw_atomic_inc_return_release(atomic_t *v)
-{
return raw_atomic_add_return_release(1, v);
-}
#endif
+}
-#if defined(arch_atomic_inc_return_relaxed)
-#define raw_atomic_inc_return_relaxed arch_atomic_inc_return_relaxed
-#elif defined(arch_atomic_inc_return)
-#define raw_atomic_inc_return_relaxed arch_atomic_inc_return
-#else
static __always_inline int
raw_atomic_inc_return_relaxed(atomic_t *v)
{
+#if defined(arch_atomic_inc_return_relaxed)
+ return arch_atomic_inc_return_relaxed(v);
+#elif defined(arch_atomic_inc_return)
+ return arch_atomic_inc_return(v);
+#else
return raw_atomic_add_return_relaxed(1, v);
-}
#endif
+}
-#if defined(arch_atomic_fetch_inc)
-#define raw_atomic_fetch_inc arch_atomic_fetch_inc
-#elif defined(arch_atomic_fetch_inc_relaxed)
static __always_inline int
raw_atomic_fetch_inc(atomic_t *v)
{
+#if defined(arch_atomic_fetch_inc)
+ return arch_atomic_fetch_inc(v);
+#elif defined(arch_atomic_fetch_inc_relaxed)
int ret;
__atomic_pre_full_fence();
ret = arch_atomic_fetch_inc_relaxed(v);
__atomic_post_full_fence();
return ret;
-}
#else
-static __always_inline int
-raw_atomic_fetch_inc(atomic_t *v)
-{
return raw_atomic_fetch_add(1, v);
-}
#endif
+}
-#if defined(arch_atomic_fetch_inc_acquire)
-#define raw_atomic_fetch_inc_acquire arch_atomic_fetch_inc_acquire
-#elif defined(arch_atomic_fetch_inc_relaxed)
static __always_inline int
raw_atomic_fetch_inc_acquire(atomic_t *v)
{
+#if defined(arch_atomic_fetch_inc_acquire)
+ return arch_atomic_fetch_inc_acquire(v);
+#elif defined(arch_atomic_fetch_inc_relaxed)
int ret = arch_atomic_fetch_inc_relaxed(v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic_fetch_inc)
-#define raw_atomic_fetch_inc_acquire arch_atomic_fetch_inc
+ return arch_atomic_fetch_inc(v);
#else
-static __always_inline int
-raw_atomic_fetch_inc_acquire(atomic_t *v)
-{
return raw_atomic_fetch_add_acquire(1, v);
-}
#endif
+}
-#if defined(arch_atomic_fetch_inc_release)
-#define raw_atomic_fetch_inc_release arch_atomic_fetch_inc_release
-#elif defined(arch_atomic_fetch_inc_relaxed)
static __always_inline int
raw_atomic_fetch_inc_release(atomic_t *v)
{
+#if defined(arch_atomic_fetch_inc_release)
+ return arch_atomic_fetch_inc_release(v);
+#elif defined(arch_atomic_fetch_inc_relaxed)
__atomic_release_fence();
return arch_atomic_fetch_inc_relaxed(v);
-}
#elif defined(arch_atomic_fetch_inc)
-#define raw_atomic_fetch_inc_release arch_atomic_fetch_inc
+ return arch_atomic_fetch_inc(v);
#else
-static __always_inline int
-raw_atomic_fetch_inc_release(atomic_t *v)
-{
return raw_atomic_fetch_add_release(1, v);
-}
#endif
+}
-#if defined(arch_atomic_fetch_inc_relaxed)
-#define raw_atomic_fetch_inc_relaxed arch_atomic_fetch_inc_relaxed
-#elif defined(arch_atomic_fetch_inc)
-#define raw_atomic_fetch_inc_relaxed arch_atomic_fetch_inc
-#else
static __always_inline int
raw_atomic_fetch_inc_relaxed(atomic_t *v)
{
+#if defined(arch_atomic_fetch_inc_relaxed)
+ return arch_atomic_fetch_inc_relaxed(v);
+#elif defined(arch_atomic_fetch_inc)
+ return arch_atomic_fetch_inc(v);
+#else
return raw_atomic_fetch_add_relaxed(1, v);
-}
#endif
+}
-#if defined(arch_atomic_dec)
-#define raw_atomic_dec arch_atomic_dec
-#else
static __always_inline void
raw_atomic_dec(atomic_t *v)
{
+#if defined(arch_atomic_dec)
+ arch_atomic_dec(v);
+#else
raw_atomic_sub(1, v);
-}
#endif
+}
-#if defined(arch_atomic_dec_return)
-#define raw_atomic_dec_return arch_atomic_dec_return
-#elif defined(arch_atomic_dec_return_relaxed)
static __always_inline int
raw_atomic_dec_return(atomic_t *v)
{
+#if defined(arch_atomic_dec_return)
+ return arch_atomic_dec_return(v);
+#elif defined(arch_atomic_dec_return_relaxed)
int ret;
__atomic_pre_full_fence();
ret = arch_atomic_dec_return_relaxed(v);
__atomic_post_full_fence();
return ret;
-}
#else
-static __always_inline int
-raw_atomic_dec_return(atomic_t *v)
-{
return raw_atomic_sub_return(1, v);
-}
#endif
+}
-#if defined(arch_atomic_dec_return_acquire)
-#define raw_atomic_dec_return_acquire arch_atomic_dec_return_acquire
-#elif defined(arch_atomic_dec_return_relaxed)
static __always_inline int
raw_atomic_dec_return_acquire(atomic_t *v)
{
+#if defined(arch_atomic_dec_return_acquire)
+ return arch_atomic_dec_return_acquire(v);
+#elif defined(arch_atomic_dec_return_relaxed)
int ret = arch_atomic_dec_return_relaxed(v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic_dec_return)
-#define raw_atomic_dec_return_acquire arch_atomic_dec_return
+ return arch_atomic_dec_return(v);
#else
-static __always_inline int
-raw_atomic_dec_return_acquire(atomic_t *v)
-{
return raw_atomic_sub_return_acquire(1, v);
-}
#endif
+}
-#if defined(arch_atomic_dec_return_release)
-#define raw_atomic_dec_return_release arch_atomic_dec_return_release
-#elif defined(arch_atomic_dec_return_relaxed)
static __always_inline int
raw_atomic_dec_return_release(atomic_t *v)
{
+#if defined(arch_atomic_dec_return_release)
+ return arch_atomic_dec_return_release(v);
+#elif defined(arch_atomic_dec_return_relaxed)
__atomic_release_fence();
return arch_atomic_dec_return_relaxed(v);
-}
#elif defined(arch_atomic_dec_return)
-#define raw_atomic_dec_return_release arch_atomic_dec_return
+ return arch_atomic_dec_return(v);
#else
-static __always_inline int
-raw_atomic_dec_return_release(atomic_t *v)
-{
return raw_atomic_sub_return_release(1, v);
-}
#endif
+}
-#if defined(arch_atomic_dec_return_relaxed)
-#define raw_atomic_dec_return_relaxed arch_atomic_dec_return_relaxed
-#elif defined(arch_atomic_dec_return)
-#define raw_atomic_dec_return_relaxed arch_atomic_dec_return
-#else
static __always_inline int
raw_atomic_dec_return_relaxed(atomic_t *v)
{
+#if defined(arch_atomic_dec_return_relaxed)
+ return arch_atomic_dec_return_relaxed(v);
+#elif defined(arch_atomic_dec_return)
+ return arch_atomic_dec_return(v);
+#else
return raw_atomic_sub_return_relaxed(1, v);
-}
#endif
+}
-#if defined(arch_atomic_fetch_dec)
-#define raw_atomic_fetch_dec arch_atomic_fetch_dec
-#elif defined(arch_atomic_fetch_dec_relaxed)
static __always_inline int
raw_atomic_fetch_dec(atomic_t *v)
{
+#if defined(arch_atomic_fetch_dec)
+ return arch_atomic_fetch_dec(v);
+#elif defined(arch_atomic_fetch_dec_relaxed)
int ret;
__atomic_pre_full_fence();
ret = arch_atomic_fetch_dec_relaxed(v);
__atomic_post_full_fence();
return ret;
-}
#else
-static __always_inline int
-raw_atomic_fetch_dec(atomic_t *v)
-{
return raw_atomic_fetch_sub(1, v);
-}
#endif
+}
-#if defined(arch_atomic_fetch_dec_acquire)
-#define raw_atomic_fetch_dec_acquire arch_atomic_fetch_dec_acquire
-#elif defined(arch_atomic_fetch_dec_relaxed)
static __always_inline int
raw_atomic_fetch_dec_acquire(atomic_t *v)
{
+#if defined(arch_atomic_fetch_dec_acquire)
+ return arch_atomic_fetch_dec_acquire(v);
+#elif defined(arch_atomic_fetch_dec_relaxed)
int ret = arch_atomic_fetch_dec_relaxed(v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic_fetch_dec)
-#define raw_atomic_fetch_dec_acquire arch_atomic_fetch_dec
+ return arch_atomic_fetch_dec(v);
#else
-static __always_inline int
-raw_atomic_fetch_dec_acquire(atomic_t *v)
-{
return raw_atomic_fetch_sub_acquire(1, v);
-}
#endif
+}
-#if defined(arch_atomic_fetch_dec_release)
-#define raw_atomic_fetch_dec_release arch_atomic_fetch_dec_release
-#elif defined(arch_atomic_fetch_dec_relaxed)
static __always_inline int
raw_atomic_fetch_dec_release(atomic_t *v)
{
+#if defined(arch_atomic_fetch_dec_release)
+ return arch_atomic_fetch_dec_release(v);
+#elif defined(arch_atomic_fetch_dec_relaxed)
__atomic_release_fence();
return arch_atomic_fetch_dec_relaxed(v);
-}
#elif defined(arch_atomic_fetch_dec)
-#define raw_atomic_fetch_dec_release arch_atomic_fetch_dec
+ return arch_atomic_fetch_dec(v);
#else
-static __always_inline int
-raw_atomic_fetch_dec_release(atomic_t *v)
-{
return raw_atomic_fetch_sub_release(1, v);
-}
#endif
+}
-#if defined(arch_atomic_fetch_dec_relaxed)
-#define raw_atomic_fetch_dec_relaxed arch_atomic_fetch_dec_relaxed
-#elif defined(arch_atomic_fetch_dec)
-#define raw_atomic_fetch_dec_relaxed arch_atomic_fetch_dec
-#else
static __always_inline int
raw_atomic_fetch_dec_relaxed(atomic_t *v)
{
+#if defined(arch_atomic_fetch_dec_relaxed)
+ return arch_atomic_fetch_dec_relaxed(v);
+#elif defined(arch_atomic_fetch_dec)
+ return arch_atomic_fetch_dec(v);
+#else
return raw_atomic_fetch_sub_relaxed(1, v);
-}
#endif
+}
-#define raw_atomic_and arch_atomic_and
+static __always_inline void
+raw_atomic_and(int i, atomic_t *v)
+{
+ arch_atomic_and(i, v);
+}
-#if defined(arch_atomic_fetch_and)
-#define raw_atomic_fetch_and arch_atomic_fetch_and
-#elif defined(arch_atomic_fetch_and_relaxed)
static __always_inline int
raw_atomic_fetch_and(int i, atomic_t *v)
{
+#if defined(arch_atomic_fetch_and)
+ return arch_atomic_fetch_and(i, v);
+#elif defined(arch_atomic_fetch_and_relaxed)
int ret;
__atomic_pre_full_fence();
ret = arch_atomic_fetch_and_relaxed(i, v);
__atomic_post_full_fence();
return ret;
-}
#else
#error "Unable to define raw_atomic_fetch_and"
#endif
+}
-#if defined(arch_atomic_fetch_and_acquire)
-#define raw_atomic_fetch_and_acquire arch_atomic_fetch_and_acquire
-#elif defined(arch_atomic_fetch_and_relaxed)
static __always_inline int
raw_atomic_fetch_and_acquire(int i, atomic_t *v)
{
+#if defined(arch_atomic_fetch_and_acquire)
+ return arch_atomic_fetch_and_acquire(i, v);
+#elif defined(arch_atomic_fetch_and_relaxed)
int ret = arch_atomic_fetch_and_relaxed(i, v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic_fetch_and)
-#define raw_atomic_fetch_and_acquire arch_atomic_fetch_and
+ return arch_atomic_fetch_and(i, v);
#else
#error "Unable to define raw_atomic_fetch_and_acquire"
#endif
+}
-#if defined(arch_atomic_fetch_and_release)
-#define raw_atomic_fetch_and_release arch_atomic_fetch_and_release
-#elif defined(arch_atomic_fetch_and_relaxed)
static __always_inline int
raw_atomic_fetch_and_release(int i, atomic_t *v)
{
+#if defined(arch_atomic_fetch_and_release)
+ return arch_atomic_fetch_and_release(i, v);
+#elif defined(arch_atomic_fetch_and_relaxed)
__atomic_release_fence();
return arch_atomic_fetch_and_relaxed(i, v);
-}
#elif defined(arch_atomic_fetch_and)
-#define raw_atomic_fetch_and_release arch_atomic_fetch_and
+ return arch_atomic_fetch_and(i, v);
#else
#error "Unable to define raw_atomic_fetch_and_release"
#endif
+}
+static __always_inline int
+raw_atomic_fetch_and_relaxed(int i, atomic_t *v)
+{
#if defined(arch_atomic_fetch_and_relaxed)
-#define raw_atomic_fetch_and_relaxed arch_atomic_fetch_and_relaxed
+ return arch_atomic_fetch_and_relaxed(i, v);
#elif defined(arch_atomic_fetch_and)
-#define raw_atomic_fetch_and_relaxed arch_atomic_fetch_and
+ return arch_atomic_fetch_and(i, v);
#else
#error "Unable to define raw_atomic_fetch_and_relaxed"
#endif
+}
-#if defined(arch_atomic_andnot)
-#define raw_atomic_andnot arch_atomic_andnot
-#else
static __always_inline void
raw_atomic_andnot(int i, atomic_t *v)
{
+#if defined(arch_atomic_andnot)
+ arch_atomic_andnot(i, v);
+#else
raw_atomic_and(~i, v);
-}
#endif
+}
-#if defined(arch_atomic_fetch_andnot)
-#define raw_atomic_fetch_andnot arch_atomic_fetch_andnot
-#elif defined(arch_atomic_fetch_andnot_relaxed)
static __always_inline int
raw_atomic_fetch_andnot(int i, atomic_t *v)
{
+#if defined(arch_atomic_fetch_andnot)
+ return arch_atomic_fetch_andnot(i, v);
+#elif defined(arch_atomic_fetch_andnot_relaxed)
int ret;
__atomic_pre_full_fence();
ret = arch_atomic_fetch_andnot_relaxed(i, v);
__atomic_post_full_fence();
return ret;
-}
#else
-static __always_inline int
-raw_atomic_fetch_andnot(int i, atomic_t *v)
-{
return raw_atomic_fetch_and(~i, v);
-}
#endif
+}
-#if defined(arch_atomic_fetch_andnot_acquire)
-#define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot_acquire
-#elif defined(arch_atomic_fetch_andnot_relaxed)
static __always_inline int
raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
{
+#if defined(arch_atomic_fetch_andnot_acquire)
+ return arch_atomic_fetch_andnot_acquire(i, v);
+#elif defined(arch_atomic_fetch_andnot_relaxed)
int ret = arch_atomic_fetch_andnot_relaxed(i, v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic_fetch_andnot)
-#define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot
+ return arch_atomic_fetch_andnot(i, v);
#else
-static __always_inline int
-raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
-{
return raw_atomic_fetch_and_acquire(~i, v);
-}
#endif
+}
-#if defined(arch_atomic_fetch_andnot_release)
-#define raw_atomic_fetch_andnot_release arch_atomic_fetch_andnot_release
-#elif defined(arch_atomic_fetch_andnot_relaxed)
static __always_inline int
raw_atomic_fetch_andnot_release(int i, atomic_t *v)
{
+#if defined(arch_atomic_fetch_andnot_release)
+ return arch_atomic_fetch_andnot_release(i, v);
+#elif defined(arch_atomic_fetch_andnot_relaxed)
__atomic_release_fence();
return arch_atomic_fetch_andnot_relaxed(i, v);
-}
#elif defined(arch_atomic_fetch_andnot)
-#define raw_atomic_fetch_andnot_release arch_atomic_fetch_andnot
+ return arch_atomic_fetch_andnot(i, v);
#else
-static __always_inline int
-raw_atomic_fetch_andnot_release(int i, atomic_t *v)
-{
return raw_atomic_fetch_and_release(~i, v);
-}
#endif
+}
-#if defined(arch_atomic_fetch_andnot_relaxed)
-#define raw_atomic_fetch_andnot_relaxed arch_atomic_fetch_andnot_relaxed
-#elif defined(arch_atomic_fetch_andnot)
-#define raw_atomic_fetch_andnot_relaxed arch_atomic_fetch_andnot
-#else
static __always_inline int
raw_atomic_fetch_andnot_relaxed(int i, atomic_t *v)
{
+#if defined(arch_atomic_fetch_andnot_relaxed)
+ return arch_atomic_fetch_andnot_relaxed(i, v);
+#elif defined(arch_atomic_fetch_andnot)
+ return arch_atomic_fetch_andnot(i, v);
+#else
return raw_atomic_fetch_and_relaxed(~i, v);
-}
#endif
+}
-#define raw_atomic_or arch_atomic_or
+static __always_inline void
+raw_atomic_or(int i, atomic_t *v)
+{
+ arch_atomic_or(i, v);
+}
-#if defined(arch_atomic_fetch_or)
-#define raw_atomic_fetch_or arch_atomic_fetch_or
-#elif defined(arch_atomic_fetch_or_relaxed)
static __always_inline int
raw_atomic_fetch_or(int i, atomic_t *v)
{
+#if defined(arch_atomic_fetch_or)
+ return arch_atomic_fetch_or(i, v);
+#elif defined(arch_atomic_fetch_or_relaxed)
int ret;
__atomic_pre_full_fence();
ret = arch_atomic_fetch_or_relaxed(i, v);
__atomic_post_full_fence();
return ret;
-}
#else
#error "Unable to define raw_atomic_fetch_or"
#endif
+}
-#if defined(arch_atomic_fetch_or_acquire)
-#define raw_atomic_fetch_or_acquire arch_atomic_fetch_or_acquire
-#elif defined(arch_atomic_fetch_or_relaxed)
static __always_inline int
raw_atomic_fetch_or_acquire(int i, atomic_t *v)
{
+#if defined(arch_atomic_fetch_or_acquire)
+ return arch_atomic_fetch_or_acquire(i, v);
+#elif defined(arch_atomic_fetch_or_relaxed)
int ret = arch_atomic_fetch_or_relaxed(i, v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic_fetch_or)
-#define raw_atomic_fetch_or_acquire arch_atomic_fetch_or
+ return arch_atomic_fetch_or(i, v);
#else
#error "Unable to define raw_atomic_fetch_or_acquire"
#endif
+}
-#if defined(arch_atomic_fetch_or_release)
-#define raw_atomic_fetch_or_release arch_atomic_fetch_or_release
-#elif defined(arch_atomic_fetch_or_relaxed)
static __always_inline int
raw_atomic_fetch_or_release(int i, atomic_t *v)
{
+#if defined(arch_atomic_fetch_or_release)
+ return arch_atomic_fetch_or_release(i, v);
+#elif defined(arch_atomic_fetch_or_relaxed)
__atomic_release_fence();
return arch_atomic_fetch_or_relaxed(i, v);
-}
#elif defined(arch_atomic_fetch_or)
-#define raw_atomic_fetch_or_release arch_atomic_fetch_or
+ return arch_atomic_fetch_or(i, v);
#else
#error "Unable to define raw_atomic_fetch_or_release"
#endif
+}
+static __always_inline int
+raw_atomic_fetch_or_relaxed(int i, atomic_t *v)
+{
#if defined(arch_atomic_fetch_or_relaxed)
-#define raw_atomic_fetch_or_relaxed arch_atomic_fetch_or_relaxed
+ return arch_atomic_fetch_or_relaxed(i, v);
#elif defined(arch_atomic_fetch_or)
-#define raw_atomic_fetch_or_relaxed arch_atomic_fetch_or
+ return arch_atomic_fetch_or(i, v);
#else
#error "Unable to define raw_atomic_fetch_or_relaxed"
#endif
+}
-#define raw_atomic_xor arch_atomic_xor
+static __always_inline void
+raw_atomic_xor(int i, atomic_t *v)
+{
+ arch_atomic_xor(i, v);
+}
-#if defined(arch_atomic_fetch_xor)
-#define raw_atomic_fetch_xor arch_atomic_fetch_xor
-#elif defined(arch_atomic_fetch_xor_relaxed)
static __always_inline int
raw_atomic_fetch_xor(int i, atomic_t *v)
{
+#if defined(arch_atomic_fetch_xor)
+ return arch_atomic_fetch_xor(i, v);
+#elif defined(arch_atomic_fetch_xor_relaxed)
int ret;
__atomic_pre_full_fence();
ret = arch_atomic_fetch_xor_relaxed(i, v);
__atomic_post_full_fence();
return ret;
-}
#else
#error "Unable to define raw_atomic_fetch_xor"
#endif
+}
-#if defined(arch_atomic_fetch_xor_acquire)
-#define raw_atomic_fetch_xor_acquire arch_atomic_fetch_xor_acquire
-#elif defined(arch_atomic_fetch_xor_relaxed)
static __always_inline int
raw_atomic_fetch_xor_acquire(int i, atomic_t *v)
{
+#if defined(arch_atomic_fetch_xor_acquire)
+ return arch_atomic_fetch_xor_acquire(i, v);
+#elif defined(arch_atomic_fetch_xor_relaxed)
int ret = arch_atomic_fetch_xor_relaxed(i, v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic_fetch_xor)
-#define raw_atomic_fetch_xor_acquire arch_atomic_fetch_xor
+ return arch_atomic_fetch_xor(i, v);
#else
#error "Unable to define raw_atomic_fetch_xor_acquire"
#endif
+}
-#if defined(arch_atomic_fetch_xor_release)
-#define raw_atomic_fetch_xor_release arch_atomic_fetch_xor_release
-#elif defined(arch_atomic_fetch_xor_relaxed)
static __always_inline int
raw_atomic_fetch_xor_release(int i, atomic_t *v)
{
+#if defined(arch_atomic_fetch_xor_release)
+ return arch_atomic_fetch_xor_release(i, v);
+#elif defined(arch_atomic_fetch_xor_relaxed)
__atomic_release_fence();
return arch_atomic_fetch_xor_relaxed(i, v);
-}
#elif defined(arch_atomic_fetch_xor)
-#define raw_atomic_fetch_xor_release arch_atomic_fetch_xor
+ return arch_atomic_fetch_xor(i, v);
#else
#error "Unable to define raw_atomic_fetch_xor_release"
#endif
+}
+static __always_inline int
+raw_atomic_fetch_xor_relaxed(int i, atomic_t *v)
+{
#if defined(arch_atomic_fetch_xor_relaxed)
-#define raw_atomic_fetch_xor_relaxed arch_atomic_fetch_xor_relaxed
+ return arch_atomic_fetch_xor_relaxed(i, v);
#elif defined(arch_atomic_fetch_xor)
-#define raw_atomic_fetch_xor_relaxed arch_atomic_fetch_xor
+ return arch_atomic_fetch_xor(i, v);
#else
#error "Unable to define raw_atomic_fetch_xor_relaxed"
#endif
+}
-#if defined(arch_atomic_xchg)
-#define raw_atomic_xchg arch_atomic_xchg
-#elif defined(arch_atomic_xchg_relaxed)
static __always_inline int
-raw_atomic_xchg(atomic_t *v, int i)
+raw_atomic_xchg(atomic_t *v, int new)
{
+#if defined(arch_atomic_xchg)
+ return arch_atomic_xchg(v, new);
+#elif defined(arch_atomic_xchg_relaxed)
int ret;
__atomic_pre_full_fence();
- ret = arch_atomic_xchg_relaxed(v, i);
+ ret = arch_atomic_xchg_relaxed(v, new);
__atomic_post_full_fence();
return ret;
-}
#else
-static __always_inline int
-raw_atomic_xchg(atomic_t *v, int new)
-{
return raw_xchg(&v->counter, new);
-}
#endif
+}
-#if defined(arch_atomic_xchg_acquire)
-#define raw_atomic_xchg_acquire arch_atomic_xchg_acquire
-#elif defined(arch_atomic_xchg_relaxed)
static __always_inline int
-raw_atomic_xchg_acquire(atomic_t *v, int i)
+raw_atomic_xchg_acquire(atomic_t *v, int new)
{
- int ret = arch_atomic_xchg_relaxed(v, i);
+#if defined(arch_atomic_xchg_acquire)
+ return arch_atomic_xchg_acquire(v, new);
+#elif defined(arch_atomic_xchg_relaxed)
+ int ret = arch_atomic_xchg_relaxed(v, new);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic_xchg)
-#define raw_atomic_xchg_acquire arch_atomic_xchg
+ return arch_atomic_xchg(v, new);
#else
-static __always_inline int
-raw_atomic_xchg_acquire(atomic_t *v, int new)
-{
return raw_xchg_acquire(&v->counter, new);
-}
#endif
+}
-#if defined(arch_atomic_xchg_release)
-#define raw_atomic_xchg_release arch_atomic_xchg_release
-#elif defined(arch_atomic_xchg_relaxed)
static __always_inline int
-raw_atomic_xchg_release(atomic_t *v, int i)
+raw_atomic_xchg_release(atomic_t *v, int new)
{
+#if defined(arch_atomic_xchg_release)
+ return arch_atomic_xchg_release(v, new);
+#elif defined(arch_atomic_xchg_relaxed)
__atomic_release_fence();
- return arch_atomic_xchg_relaxed(v, i);
-}
+ return arch_atomic_xchg_relaxed(v, new);
#elif defined(arch_atomic_xchg)
-#define raw_atomic_xchg_release arch_atomic_xchg
+ return arch_atomic_xchg(v, new);
#else
-static __always_inline int
-raw_atomic_xchg_release(atomic_t *v, int new)
-{
return raw_xchg_release(&v->counter, new);
-}
#endif
+}
-#if defined(arch_atomic_xchg_relaxed)
-#define raw_atomic_xchg_relaxed arch_atomic_xchg_relaxed
-#elif defined(arch_atomic_xchg)
-#define raw_atomic_xchg_relaxed arch_atomic_xchg
-#else
static __always_inline int
raw_atomic_xchg_relaxed(atomic_t *v, int new)
{
+#if defined(arch_atomic_xchg_relaxed)
+ return arch_atomic_xchg_relaxed(v, new);
+#elif defined(arch_atomic_xchg)
+ return arch_atomic_xchg(v, new);
+#else
return raw_xchg_relaxed(&v->counter, new);
-}
#endif
+}
-#if defined(arch_atomic_cmpxchg)
-#define raw_atomic_cmpxchg arch_atomic_cmpxchg
-#elif defined(arch_atomic_cmpxchg_relaxed)
static __always_inline int
raw_atomic_cmpxchg(atomic_t *v, int old, int new)
{
+#if defined(arch_atomic_cmpxchg)
+ return arch_atomic_cmpxchg(v, old, new);
+#elif defined(arch_atomic_cmpxchg_relaxed)
int ret;
__atomic_pre_full_fence();
ret = arch_atomic_cmpxchg_relaxed(v, old, new);
__atomic_post_full_fence();
return ret;
-}
#else
-static __always_inline int
-raw_atomic_cmpxchg(atomic_t *v, int old, int new)
-{
return raw_cmpxchg(&v->counter, old, new);
-}
#endif
+}
-#if defined(arch_atomic_cmpxchg_acquire)
-#define raw_atomic_cmpxchg_acquire arch_atomic_cmpxchg_acquire
-#elif defined(arch_atomic_cmpxchg_relaxed)
static __always_inline int
raw_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
{
+#if defined(arch_atomic_cmpxchg_acquire)
+ return arch_atomic_cmpxchg_acquire(v, old, new);
+#elif defined(arch_atomic_cmpxchg_relaxed)
int ret = arch_atomic_cmpxchg_relaxed(v, old, new);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic_cmpxchg)
-#define raw_atomic_cmpxchg_acquire arch_atomic_cmpxchg
+ return arch_atomic_cmpxchg(v, old, new);
#else
-static __always_inline int
-raw_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
-{
return raw_cmpxchg_acquire(&v->counter, old, new);
-}
#endif
+}
-#if defined(arch_atomic_cmpxchg_release)
-#define raw_atomic_cmpxchg_release arch_atomic_cmpxchg_release
-#elif defined(arch_atomic_cmpxchg_relaxed)
static __always_inline int
raw_atomic_cmpxchg_release(atomic_t *v, int old, int new)
{
+#if defined(arch_atomic_cmpxchg_release)
+ return arch_atomic_cmpxchg_release(v, old, new);
+#elif defined(arch_atomic_cmpxchg_relaxed)
__atomic_release_fence();
return arch_atomic_cmpxchg_relaxed(v, old, new);
-}
#elif defined(arch_atomic_cmpxchg)
-#define raw_atomic_cmpxchg_release arch_atomic_cmpxchg
+ return arch_atomic_cmpxchg(v, old, new);
#else
-static __always_inline int
-raw_atomic_cmpxchg_release(atomic_t *v, int old, int new)
-{
return raw_cmpxchg_release(&v->counter, old, new);
-}
#endif
+}
-#if defined(arch_atomic_cmpxchg_relaxed)
-#define raw_atomic_cmpxchg_relaxed arch_atomic_cmpxchg_relaxed
-#elif defined(arch_atomic_cmpxchg)
-#define raw_atomic_cmpxchg_relaxed arch_atomic_cmpxchg
-#else
static __always_inline int
raw_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
{
+#if defined(arch_atomic_cmpxchg_relaxed)
+ return arch_atomic_cmpxchg_relaxed(v, old, new);
+#elif defined(arch_atomic_cmpxchg)
+ return arch_atomic_cmpxchg(v, old, new);
+#else
return raw_cmpxchg_relaxed(&v->counter, old, new);
-}
#endif
+}
-#if defined(arch_atomic_try_cmpxchg)
-#define raw_atomic_try_cmpxchg arch_atomic_try_cmpxchg
-#elif defined(arch_atomic_try_cmpxchg_relaxed)
static __always_inline bool
raw_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
{
+#if defined(arch_atomic_try_cmpxchg)
+ return arch_atomic_try_cmpxchg(v, old, new);
+#elif defined(arch_atomic_try_cmpxchg_relaxed)
bool ret;
__atomic_pre_full_fence();
ret = arch_atomic_try_cmpxchg_relaxed(v, old, new);
__atomic_post_full_fence();
return ret;
-}
#else
-static __always_inline bool
-raw_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
-{
int r, o = *old;
r = raw_atomic_cmpxchg(v, o, new);
if (unlikely(r != o))
*old = r;
return likely(r == o);
-}
#endif
+}
-#if defined(arch_atomic_try_cmpxchg_acquire)
-#define raw_atomic_try_cmpxchg_acquire arch_atomic_try_cmpxchg_acquire
-#elif defined(arch_atomic_try_cmpxchg_relaxed)
static __always_inline bool
raw_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
{
+#if defined(arch_atomic_try_cmpxchg_acquire)
+ return arch_atomic_try_cmpxchg_acquire(v, old, new);
+#elif defined(arch_atomic_try_cmpxchg_relaxed)
bool ret = arch_atomic_try_cmpxchg_relaxed(v, old, new);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic_try_cmpxchg)
-#define raw_atomic_try_cmpxchg_acquire arch_atomic_try_cmpxchg
+ return arch_atomic_try_cmpxchg(v, old, new);
#else
-static __always_inline bool
-raw_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
-{
int r, o = *old;
r = raw_atomic_cmpxchg_acquire(v, o, new);
if (unlikely(r != o))
*old = r;
return likely(r == o);
-}
#endif
+}
-#if defined(arch_atomic_try_cmpxchg_release)
-#define raw_atomic_try_cmpxchg_release arch_atomic_try_cmpxchg_release
-#elif defined(arch_atomic_try_cmpxchg_relaxed)
static __always_inline bool
raw_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
{
+#if defined(arch_atomic_try_cmpxchg_release)
+ return arch_atomic_try_cmpxchg_release(v, old, new);
+#elif defined(arch_atomic_try_cmpxchg_relaxed)
__atomic_release_fence();
return arch_atomic_try_cmpxchg_relaxed(v, old, new);
-}
#elif defined(arch_atomic_try_cmpxchg)
-#define raw_atomic_try_cmpxchg_release arch_atomic_try_cmpxchg
+ return arch_atomic_try_cmpxchg(v, old, new);
#else
-static __always_inline bool
-raw_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
-{
int r, o = *old;
r = raw_atomic_cmpxchg_release(v, o, new);
if (unlikely(r != o))
*old = r;
return likely(r == o);
-}
#endif
+}
-#if defined(arch_atomic_try_cmpxchg_relaxed)
-#define raw_atomic_try_cmpxchg_relaxed arch_atomic_try_cmpxchg_relaxed
-#elif defined(arch_atomic_try_cmpxchg)
-#define raw_atomic_try_cmpxchg_relaxed arch_atomic_try_cmpxchg
-#else
static __always_inline bool
raw_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
{
+#if defined(arch_atomic_try_cmpxchg_relaxed)
+ return arch_atomic_try_cmpxchg_relaxed(v, old, new);
+#elif defined(arch_atomic_try_cmpxchg)
+ return arch_atomic_try_cmpxchg(v, old, new);
+#else
int r, o = *old;
r = raw_atomic_cmpxchg_relaxed(v, o, new);
if (unlikely(r != o))
*old = r;
return likely(r == o);
-}
#endif
+}
-#if defined(arch_atomic_sub_and_test)
-#define raw_atomic_sub_and_test arch_atomic_sub_and_test
-#else
static __always_inline bool
raw_atomic_sub_and_test(int i, atomic_t *v)
{
+#if defined(arch_atomic_sub_and_test)
+ return arch_atomic_sub_and_test(i, v);
+#else
return raw_atomic_sub_return(i, v) == 0;
-}
#endif
+}
-#if defined(arch_atomic_dec_and_test)
-#define raw_atomic_dec_and_test arch_atomic_dec_and_test
-#else
static __always_inline bool
raw_atomic_dec_and_test(atomic_t *v)
{
+#if defined(arch_atomic_dec_and_test)
+ return arch_atomic_dec_and_test(v);
+#else
return raw_atomic_dec_return(v) == 0;
-}
#endif
+}
-#if defined(arch_atomic_inc_and_test)
-#define raw_atomic_inc_and_test arch_atomic_inc_and_test
-#else
static __always_inline bool
raw_atomic_inc_and_test(atomic_t *v)
{
+#if defined(arch_atomic_inc_and_test)
+ return arch_atomic_inc_and_test(v);
+#else
return raw_atomic_inc_return(v) == 0;
-}
#endif
+}
-#if defined(arch_atomic_add_negative)
-#define raw_atomic_add_negative arch_atomic_add_negative
-#elif defined(arch_atomic_add_negative_relaxed)
static __always_inline bool
raw_atomic_add_negative(int i, atomic_t *v)
{
+#if defined(arch_atomic_add_negative)
+ return arch_atomic_add_negative(i, v);
+#elif defined(arch_atomic_add_negative_relaxed)
bool ret;
__atomic_pre_full_fence();
ret = arch_atomic_add_negative_relaxed(i, v);
__atomic_post_full_fence();
return ret;
-}
#else
-static __always_inline bool
-raw_atomic_add_negative(int i, atomic_t *v)
-{
return raw_atomic_add_return(i, v) < 0;
-}
#endif
+}
-#if defined(arch_atomic_add_negative_acquire)
-#define raw_atomic_add_negative_acquire arch_atomic_add_negative_acquire
-#elif defined(arch_atomic_add_negative_relaxed)
static __always_inline bool
raw_atomic_add_negative_acquire(int i, atomic_t *v)
{
+#if defined(arch_atomic_add_negative_acquire)
+ return arch_atomic_add_negative_acquire(i, v);
+#elif defined(arch_atomic_add_negative_relaxed)
bool ret = arch_atomic_add_negative_relaxed(i, v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic_add_negative)
-#define raw_atomic_add_negative_acquire arch_atomic_add_negative
+ return arch_atomic_add_negative(i, v);
#else
-static __always_inline bool
-raw_atomic_add_negative_acquire(int i, atomic_t *v)
-{
return raw_atomic_add_return_acquire(i, v) < 0;
-}
#endif
+}
-#if defined(arch_atomic_add_negative_release)
-#define raw_atomic_add_negative_release arch_atomic_add_negative_release
-#elif defined(arch_atomic_add_negative_relaxed)
static __always_inline bool
raw_atomic_add_negative_release(int i, atomic_t *v)
{
+#if defined(arch_atomic_add_negative_release)
+ return arch_atomic_add_negative_release(i, v);
+#elif defined(arch_atomic_add_negative_relaxed)
__atomic_release_fence();
return arch_atomic_add_negative_relaxed(i, v);
-}
#elif defined(arch_atomic_add_negative)
-#define raw_atomic_add_negative_release arch_atomic_add_negative
+ return arch_atomic_add_negative(i, v);
#else
-static __always_inline bool
-raw_atomic_add_negative_release(int i, atomic_t *v)
-{
return raw_atomic_add_return_release(i, v) < 0;
-}
#endif
+}
-#if defined(arch_atomic_add_negative_relaxed)
-#define raw_atomic_add_negative_relaxed arch_atomic_add_negative_relaxed
-#elif defined(arch_atomic_add_negative)
-#define raw_atomic_add_negative_relaxed arch_atomic_add_negative
-#else
static __always_inline bool
raw_atomic_add_negative_relaxed(int i, atomic_t *v)
{
+#if defined(arch_atomic_add_negative_relaxed)
+ return arch_atomic_add_negative_relaxed(i, v);
+#elif defined(arch_atomic_add_negative)
+ return arch_atomic_add_negative(i, v);
+#else
return raw_atomic_add_return_relaxed(i, v) < 0;
-}
#endif
+}
-#if defined(arch_atomic_fetch_add_unless)
-#define raw_atomic_fetch_add_unless arch_atomic_fetch_add_unless
-#else
static __always_inline int
raw_atomic_fetch_add_unless(atomic_t *v, int a, int u)
{
+#if defined(arch_atomic_fetch_add_unless)
+ return arch_atomic_fetch_add_unless(v, a, u);
+#else
int c = raw_atomic_read(v);
do {
@@ -1594,35 +1542,35 @@ raw_atomic_fetch_add_unless(atomic_t *v, int a, int u)
} while (!raw_atomic_try_cmpxchg(v, &c, c + a));
return c;
-}
#endif
+}
-#if defined(arch_atomic_add_unless)
-#define raw_atomic_add_unless arch_atomic_add_unless
-#else
static __always_inline bool
raw_atomic_add_unless(atomic_t *v, int a, int u)
{
+#if defined(arch_atomic_add_unless)
+ return arch_atomic_add_unless(v, a, u);
+#else
return raw_atomic_fetch_add_unless(v, a, u) != u;
-}
#endif
+}
-#if defined(arch_atomic_inc_not_zero)
-#define raw_atomic_inc_not_zero arch_atomic_inc_not_zero
-#else
static __always_inline bool
raw_atomic_inc_not_zero(atomic_t *v)
{
+#if defined(arch_atomic_inc_not_zero)
+ return arch_atomic_inc_not_zero(v);
+#else
return raw_atomic_add_unless(v, 1, 0);
-}
#endif
+}
-#if defined(arch_atomic_inc_unless_negative)
-#define raw_atomic_inc_unless_negative arch_atomic_inc_unless_negative
-#else
static __always_inline bool
raw_atomic_inc_unless_negative(atomic_t *v)
{
+#if defined(arch_atomic_inc_unless_negative)
+ return arch_atomic_inc_unless_negative(v);
+#else
int c = raw_atomic_read(v);
do {
@@ -1631,15 +1579,15 @@ raw_atomic_inc_unless_negative(atomic_t *v)
} while (!raw_atomic_try_cmpxchg(v, &c, c + 1));
return true;
-}
#endif
+}
-#if defined(arch_atomic_dec_unless_positive)
-#define raw_atomic_dec_unless_positive arch_atomic_dec_unless_positive
-#else
static __always_inline bool
raw_atomic_dec_unless_positive(atomic_t *v)
{
+#if defined(arch_atomic_dec_unless_positive)
+ return arch_atomic_dec_unless_positive(v);
+#else
int c = raw_atomic_read(v);
do {
@@ -1648,15 +1596,15 @@ raw_atomic_dec_unless_positive(atomic_t *v)
} while (!raw_atomic_try_cmpxchg(v, &c, c - 1));
return true;
-}
#endif
+}
-#if defined(arch_atomic_dec_if_positive)
-#define raw_atomic_dec_if_positive arch_atomic_dec_if_positive
-#else
static __always_inline int
raw_atomic_dec_if_positive(atomic_t *v)
{
+#if defined(arch_atomic_dec_if_positive)
+ return arch_atomic_dec_if_positive(v);
+#else
int dec, c = raw_atomic_read(v);
do {
@@ -1666,23 +1614,27 @@ raw_atomic_dec_if_positive(atomic_t *v)
} while (!raw_atomic_try_cmpxchg(v, &c, dec));
return dec;
-}
#endif
+}
#ifdef CONFIG_GENERIC_ATOMIC64
#include <asm-generic/atomic64.h>
#endif
-#define raw_atomic64_read arch_atomic64_read
+static __always_inline s64
+raw_atomic64_read(const atomic64_t *v)
+{
+ return arch_atomic64_read(v);
+}
-#if defined(arch_atomic64_read_acquire)
-#define raw_atomic64_read_acquire arch_atomic64_read_acquire
-#elif defined(arch_atomic64_read)
-#define raw_atomic64_read_acquire arch_atomic64_read
-#else
static __always_inline s64
raw_atomic64_read_acquire(const atomic64_t *v)
{
+#if defined(arch_atomic64_read_acquire)
+ return arch_atomic64_read_acquire(v);
+#elif defined(arch_atomic64_read)
+ return arch_atomic64_read(v);
+#else
s64 ret;
if (__native_word(atomic64_t)) {
@@ -1693,1144 +1645,1088 @@ raw_atomic64_read_acquire(const atomic64_t *v)
}
return ret;
-}
#endif
+}
-#define raw_atomic64_set arch_atomic64_set
+static __always_inline void
+raw_atomic64_set(atomic64_t *v, s64 i)
+{
+ arch_atomic64_set(v, i);
+}
-#if defined(arch_atomic64_set_release)
-#define raw_atomic64_set_release arch_atomic64_set_release
-#elif defined(arch_atomic64_set)
-#define raw_atomic64_set_release arch_atomic64_set
-#else
static __always_inline void
raw_atomic64_set_release(atomic64_t *v, s64 i)
{
+#if defined(arch_atomic64_set_release)
+ arch_atomic64_set_release(v, i);
+#elif defined(arch_atomic64_set)
+ arch_atomic64_set(v, i);
+#else
if (__native_word(atomic64_t)) {
smp_store_release(&(v)->counter, i);
} else {
__atomic_release_fence();
raw_atomic64_set(v, i);
}
-}
#endif
+}
-#define raw_atomic64_add arch_atomic64_add
+static __always_inline void
+raw_atomic64_add(s64 i, atomic64_t *v)
+{
+ arch_atomic64_add(i, v);
+}
-#if defined(arch_atomic64_add_return)
-#define raw_atomic64_add_return arch_atomic64_add_return
-#elif defined(arch_atomic64_add_return_relaxed)
static __always_inline s64
raw_atomic64_add_return(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_add_return)
+ return arch_atomic64_add_return(i, v);
+#elif defined(arch_atomic64_add_return_relaxed)
s64 ret;
__atomic_pre_full_fence();
ret = arch_atomic64_add_return_relaxed(i, v);
__atomic_post_full_fence();
return ret;
-}
#else
#error "Unable to define raw_atomic64_add_return"
#endif
+}
-#if defined(arch_atomic64_add_return_acquire)
-#define raw_atomic64_add_return_acquire arch_atomic64_add_return_acquire
-#elif defined(arch_atomic64_add_return_relaxed)
static __always_inline s64
raw_atomic64_add_return_acquire(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_add_return_acquire)
+ return arch_atomic64_add_return_acquire(i, v);
+#elif defined(arch_atomic64_add_return_relaxed)
s64 ret = arch_atomic64_add_return_relaxed(i, v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic64_add_return)
-#define raw_atomic64_add_return_acquire arch_atomic64_add_return
+ return arch_atomic64_add_return(i, v);
#else
#error "Unable to define raw_atomic64_add_return_acquire"
#endif
+}
-#if defined(arch_atomic64_add_return_release)
-#define raw_atomic64_add_return_release arch_atomic64_add_return_release
-#elif defined(arch_atomic64_add_return_relaxed)
static __always_inline s64
raw_atomic64_add_return_release(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_add_return_release)
+ return arch_atomic64_add_return_release(i, v);
+#elif defined(arch_atomic64_add_return_relaxed)
__atomic_release_fence();
return arch_atomic64_add_return_relaxed(i, v);
-}
#elif defined(arch_atomic64_add_return)
-#define raw_atomic64_add_return_release arch_atomic64_add_return
+ return arch_atomic64_add_return(i, v);
#else
#error "Unable to define raw_atomic64_add_return_release"
#endif
+}
+static __always_inline s64
+raw_atomic64_add_return_relaxed(s64 i, atomic64_t *v)
+{
#if defined(arch_atomic64_add_return_relaxed)
-#define raw_atomic64_add_return_relaxed arch_atomic64_add_return_relaxed
+ return arch_atomic64_add_return_relaxed(i, v);
#elif defined(arch_atomic64_add_return)
-#define raw_atomic64_add_return_relaxed arch_atomic64_add_return
+ return arch_atomic64_add_return(i, v);
#else
#error "Unable to define raw_atomic64_add_return_relaxed"
#endif
+}
-#if defined(arch_atomic64_fetch_add)
-#define raw_atomic64_fetch_add arch_atomic64_fetch_add
-#elif defined(arch_atomic64_fetch_add_relaxed)
static __always_inline s64
raw_atomic64_fetch_add(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_add)
+ return arch_atomic64_fetch_add(i, v);
+#elif defined(arch_atomic64_fetch_add_relaxed)
s64 ret;
__atomic_pre_full_fence();
ret = arch_atomic64_fetch_add_relaxed(i, v);
__atomic_post_full_fence();
return ret;
-}
#else
#error "Unable to define raw_atomic64_fetch_add"
#endif
+}
-#if defined(arch_atomic64_fetch_add_acquire)
-#define raw_atomic64_fetch_add_acquire arch_atomic64_fetch_add_acquire
-#elif defined(arch_atomic64_fetch_add_relaxed)
static __always_inline s64
raw_atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_add_acquire)
+ return arch_atomic64_fetch_add_acquire(i, v);
+#elif defined(arch_atomic64_fetch_add_relaxed)
s64 ret = arch_atomic64_fetch_add_relaxed(i, v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic64_fetch_add)
-#define raw_atomic64_fetch_add_acquire arch_atomic64_fetch_add
+ return arch_atomic64_fetch_add(i, v);
#else
#error "Unable to define raw_atomic64_fetch_add_acquire"
#endif
+}
-#if defined(arch_atomic64_fetch_add_release)
-#define raw_atomic64_fetch_add_release arch_atomic64_fetch_add_release
-#elif defined(arch_atomic64_fetch_add_relaxed)
static __always_inline s64
raw_atomic64_fetch_add_release(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_add_release)
+ return arch_atomic64_fetch_add_release(i, v);
+#elif defined(arch_atomic64_fetch_add_relaxed)
__atomic_release_fence();
return arch_atomic64_fetch_add_relaxed(i, v);
-}
#elif defined(arch_atomic64_fetch_add)
-#define raw_atomic64_fetch_add_release arch_atomic64_fetch_add
+ return arch_atomic64_fetch_add(i, v);
#else
#error "Unable to define raw_atomic64_fetch_add_release"
#endif
+}
+static __always_inline s64
+raw_atomic64_fetch_add_relaxed(s64 i, atomic64_t *v)
+{
#if defined(arch_atomic64_fetch_add_relaxed)
-#define raw_atomic64_fetch_add_relaxed arch_atomic64_fetch_add_relaxed
+ return arch_atomic64_fetch_add_relaxed(i, v);
#elif defined(arch_atomic64_fetch_add)
-#define raw_atomic64_fetch_add_relaxed arch_atomic64_fetch_add
+ return arch_atomic64_fetch_add(i, v);
#else
#error "Unable to define raw_atomic64_fetch_add_relaxed"
#endif
+}
-#define raw_atomic64_sub arch_atomic64_sub
+static __always_inline void
+raw_atomic64_sub(s64 i, atomic64_t *v)
+{
+ arch_atomic64_sub(i, v);
+}
-#if defined(arch_atomic64_sub_return)
-#define raw_atomic64_sub_return arch_atomic64_sub_return
-#elif defined(arch_atomic64_sub_return_relaxed)
static __always_inline s64
raw_atomic64_sub_return(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_sub_return)
+ return arch_atomic64_sub_return(i, v);
+#elif defined(arch_atomic64_sub_return_relaxed)
s64 ret;
__atomic_pre_full_fence();
ret = arch_atomic64_sub_return_relaxed(i, v);
__atomic_post_full_fence();
return ret;
-}
#else
#error "Unable to define raw_atomic64_sub_return"
#endif
+}
-#if defined(arch_atomic64_sub_return_acquire)
-#define raw_atomic64_sub_return_acquire arch_atomic64_sub_return_acquire
-#elif defined(arch_atomic64_sub_return_relaxed)
static __always_inline s64
raw_atomic64_sub_return_acquire(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_sub_return_acquire)
+ return arch_atomic64_sub_return_acquire(i, v);
+#elif defined(arch_atomic64_sub_return_relaxed)
s64 ret = arch_atomic64_sub_return_relaxed(i, v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic64_sub_return)
-#define raw_atomic64_sub_return_acquire arch_atomic64_sub_return
+ return arch_atomic64_sub_return(i, v);
#else
#error "Unable to define raw_atomic64_sub_return_acquire"
#endif
+}
-#if defined(arch_atomic64_sub_return_release)
-#define raw_atomic64_sub_return_release arch_atomic64_sub_return_release
-#elif defined(arch_atomic64_sub_return_relaxed)
static __always_inline s64
raw_atomic64_sub_return_release(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_sub_return_release)
+ return arch_atomic64_sub_return_release(i, v);
+#elif defined(arch_atomic64_sub_return_relaxed)
__atomic_release_fence();
return arch_atomic64_sub_return_relaxed(i, v);
-}
#elif defined(arch_atomic64_sub_return)
-#define raw_atomic64_sub_return_release arch_atomic64_sub_return
+ return arch_atomic64_sub_return(i, v);
#else
#error "Unable to define raw_atomic64_sub_return_release"
#endif
+}
+static __always_inline s64
+raw_atomic64_sub_return_relaxed(s64 i, atomic64_t *v)
+{
#if defined(arch_atomic64_sub_return_relaxed)
-#define raw_atomic64_sub_return_relaxed arch_atomic64_sub_return_relaxed
+ return arch_atomic64_sub_return_relaxed(i, v);
#elif defined(arch_atomic64_sub_return)
-#define raw_atomic64_sub_return_relaxed arch_atomic64_sub_return
+ return arch_atomic64_sub_return(i, v);
#else
#error "Unable to define raw_atomic64_sub_return_relaxed"
#endif
+}
-#if defined(arch_atomic64_fetch_sub)
-#define raw_atomic64_fetch_sub arch_atomic64_fetch_sub
-#elif defined(arch_atomic64_fetch_sub_relaxed)
static __always_inline s64
raw_atomic64_fetch_sub(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_sub)
+ return arch_atomic64_fetch_sub(i, v);
+#elif defined(arch_atomic64_fetch_sub_relaxed)
s64 ret;
__atomic_pre_full_fence();
ret = arch_atomic64_fetch_sub_relaxed(i, v);
__atomic_post_full_fence();
return ret;
-}
#else
#error "Unable to define raw_atomic64_fetch_sub"
#endif
+}
-#if defined(arch_atomic64_fetch_sub_acquire)
-#define raw_atomic64_fetch_sub_acquire arch_atomic64_fetch_sub_acquire
-#elif defined(arch_atomic64_fetch_sub_relaxed)
static __always_inline s64
raw_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_sub_acquire)
+ return arch_atomic64_fetch_sub_acquire(i, v);
+#elif defined(arch_atomic64_fetch_sub_relaxed)
s64 ret = arch_atomic64_fetch_sub_relaxed(i, v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic64_fetch_sub)
-#define raw_atomic64_fetch_sub_acquire arch_atomic64_fetch_sub
+ return arch_atomic64_fetch_sub(i, v);
#else
#error "Unable to define raw_atomic64_fetch_sub_acquire"
#endif
+}
-#if defined(arch_atomic64_fetch_sub_release)
-#define raw_atomic64_fetch_sub_release arch_atomic64_fetch_sub_release
-#elif defined(arch_atomic64_fetch_sub_relaxed)
static __always_inline s64
raw_atomic64_fetch_sub_release(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_sub_release)
+ return arch_atomic64_fetch_sub_release(i, v);
+#elif defined(arch_atomic64_fetch_sub_relaxed)
__atomic_release_fence();
return arch_atomic64_fetch_sub_relaxed(i, v);
-}
#elif defined(arch_atomic64_fetch_sub)
-#define raw_atomic64_fetch_sub_release arch_atomic64_fetch_sub
+ return arch_atomic64_fetch_sub(i, v);
#else
#error "Unable to define raw_atomic64_fetch_sub_release"
#endif
+}
+static __always_inline s64
+raw_atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v)
+{
#if defined(arch_atomic64_fetch_sub_relaxed)
-#define raw_atomic64_fetch_sub_relaxed arch_atomic64_fetch_sub_relaxed
+ return arch_atomic64_fetch_sub_relaxed(i, v);
#elif defined(arch_atomic64_fetch_sub)
-#define raw_atomic64_fetch_sub_relaxed arch_atomic64_fetch_sub
+ return arch_atomic64_fetch_sub(i, v);
#else
#error "Unable to define raw_atomic64_fetch_sub_relaxed"
#endif
+}
-#if defined(arch_atomic64_inc)
-#define raw_atomic64_inc arch_atomic64_inc
-#else
static __always_inline void
raw_atomic64_inc(atomic64_t *v)
{
+#if defined(arch_atomic64_inc)
+ arch_atomic64_inc(v);
+#else
raw_atomic64_add(1, v);
-}
#endif
+}
-#if defined(arch_atomic64_inc_return)
-#define raw_atomic64_inc_return arch_atomic64_inc_return
-#elif defined(arch_atomic64_inc_return_relaxed)
static __always_inline s64
raw_atomic64_inc_return(atomic64_t *v)
{
+#if defined(arch_atomic64_inc_return)
+ return arch_atomic64_inc_return(v);
+#elif defined(arch_atomic64_inc_return_relaxed)
s64 ret;
__atomic_pre_full_fence();
ret = arch_atomic64_inc_return_relaxed(v);
__atomic_post_full_fence();
return ret;
-}
#else
-static __always_inline s64
-raw_atomic64_inc_return(atomic64_t *v)
-{
return raw_atomic64_add_return(1, v);
-}
#endif
+}
-#if defined(arch_atomic64_inc_return_acquire)
-#define raw_atomic64_inc_return_acquire arch_atomic64_inc_return_acquire
-#elif defined(arch_atomic64_inc_return_relaxed)
static __always_inline s64
raw_atomic64_inc_return_acquire(atomic64_t *v)
{
+#if defined(arch_atomic64_inc_return_acquire)
+ return arch_atomic64_inc_return_acquire(v);
+#elif defined(arch_atomic64_inc_return_relaxed)
s64 ret = arch_atomic64_inc_return_relaxed(v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic64_inc_return)
-#define raw_atomic64_inc_return_acquire arch_atomic64_inc_return
+ return arch_atomic64_inc_return(v);
#else
-static __always_inline s64
-raw_atomic64_inc_return_acquire(atomic64_t *v)
-{
return raw_atomic64_add_return_acquire(1, v);
-}
#endif
+}
-#if defined(arch_atomic64_inc_return_release)
-#define raw_atomic64_inc_return_release arch_atomic64_inc_return_release
-#elif defined(arch_atomic64_inc_return_relaxed)
static __always_inline s64
raw_atomic64_inc_return_release(atomic64_t *v)
{
+#if defined(arch_atomic64_inc_return_release)
+ return arch_atomic64_inc_return_release(v);
+#elif defined(arch_atomic64_inc_return_relaxed)
__atomic_release_fence();
return arch_atomic64_inc_return_relaxed(v);
-}
#elif defined(arch_atomic64_inc_return)
-#define raw_atomic64_inc_return_release arch_atomic64_inc_return
+ return arch_atomic64_inc_return(v);
#else
-static __always_inline s64
-raw_atomic64_inc_return_release(atomic64_t *v)
-{
return raw_atomic64_add_return_release(1, v);
-}
#endif
+}
-#if defined(arch_atomic64_inc_return_relaxed)
-#define raw_atomic64_inc_return_relaxed arch_atomic64_inc_return_relaxed
-#elif defined(arch_atomic64_inc_return)
-#define raw_atomic64_inc_return_relaxed arch_atomic64_inc_return
-#else
static __always_inline s64
raw_atomic64_inc_return_relaxed(atomic64_t *v)
{
+#if defined(arch_atomic64_inc_return_relaxed)
+ return arch_atomic64_inc_return_relaxed(v);
+#elif defined(arch_atomic64_inc_return)
+ return arch_atomic64_inc_return(v);
+#else
return raw_atomic64_add_return_relaxed(1, v);
-}
#endif
+}
-#if defined(arch_atomic64_fetch_inc)
-#define raw_atomic64_fetch_inc arch_atomic64_fetch_inc
-#elif defined(arch_atomic64_fetch_inc_relaxed)
static __always_inline s64
raw_atomic64_fetch_inc(atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_inc)
+ return arch_atomic64_fetch_inc(v);
+#elif defined(arch_atomic64_fetch_inc_relaxed)
s64 ret;
__atomic_pre_full_fence();
ret = arch_atomic64_fetch_inc_relaxed(v);
__atomic_post_full_fence();
return ret;
-}
#else
-static __always_inline s64
-raw_atomic64_fetch_inc(atomic64_t *v)
-{
return raw_atomic64_fetch_add(1, v);
-}
#endif
+}
-#if defined(arch_atomic64_fetch_inc_acquire)
-#define raw_atomic64_fetch_inc_acquire arch_atomic64_fetch_inc_acquire
-#elif defined(arch_atomic64_fetch_inc_relaxed)
static __always_inline s64
raw_atomic64_fetch_inc_acquire(atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_inc_acquire)
+ return arch_atomic64_fetch_inc_acquire(v);
+#elif defined(arch_atomic64_fetch_inc_relaxed)
s64 ret = arch_atomic64_fetch_inc_relaxed(v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic64_fetch_inc)
-#define raw_atomic64_fetch_inc_acquire arch_atomic64_fetch_inc
+ return arch_atomic64_fetch_inc(v);
#else
-static __always_inline s64
-raw_atomic64_fetch_inc_acquire(atomic64_t *v)
-{
return raw_atomic64_fetch_add_acquire(1, v);
-}
#endif
+}
-#if defined(arch_atomic64_fetch_inc_release)
-#define raw_atomic64_fetch_inc_release arch_atomic64_fetch_inc_release
-#elif defined(arch_atomic64_fetch_inc_relaxed)
static __always_inline s64
raw_atomic64_fetch_inc_release(atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_inc_release)
+ return arch_atomic64_fetch_inc_release(v);
+#elif defined(arch_atomic64_fetch_inc_relaxed)
__atomic_release_fence();
return arch_atomic64_fetch_inc_relaxed(v);
-}
#elif defined(arch_atomic64_fetch_inc)
-#define raw_atomic64_fetch_inc_release arch_atomic64_fetch_inc
+ return arch_atomic64_fetch_inc(v);
#else
-static __always_inline s64
-raw_atomic64_fetch_inc_release(atomic64_t *v)
-{
return raw_atomic64_fetch_add_release(1, v);
-}
#endif
+}
-#if defined(arch_atomic64_fetch_inc_relaxed)
-#define raw_atomic64_fetch_inc_relaxed arch_atomic64_fetch_inc_relaxed
-#elif defined(arch_atomic64_fetch_inc)
-#define raw_atomic64_fetch_inc_relaxed arch_atomic64_fetch_inc
-#else
static __always_inline s64
raw_atomic64_fetch_inc_relaxed(atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_inc_relaxed)
+ return arch_atomic64_fetch_inc_relaxed(v);
+#elif defined(arch_atomic64_fetch_inc)
+ return arch_atomic64_fetch_inc(v);
+#else
return raw_atomic64_fetch_add_relaxed(1, v);
-}
#endif
+}
-#if defined(arch_atomic64_dec)
-#define raw_atomic64_dec arch_atomic64_dec
-#else
static __always_inline void
raw_atomic64_dec(atomic64_t *v)
{
+#if defined(arch_atomic64_dec)
+ arch_atomic64_dec(v);
+#else
raw_atomic64_sub(1, v);
-}
#endif
+}
-#if defined(arch_atomic64_dec_return)
-#define raw_atomic64_dec_return arch_atomic64_dec_return
-#elif defined(arch_atomic64_dec_return_relaxed)
static __always_inline s64
raw_atomic64_dec_return(atomic64_t *v)
{
+#if defined(arch_atomic64_dec_return)
+ return arch_atomic64_dec_return(v);
+#elif defined(arch_atomic64_dec_return_relaxed)
s64 ret;
__atomic_pre_full_fence();
ret = arch_atomic64_dec_return_relaxed(v);
__atomic_post_full_fence();
return ret;
-}
#else
-static __always_inline s64
-raw_atomic64_dec_return(atomic64_t *v)
-{
return raw_atomic64_sub_return(1, v);
-}
#endif
+}
-#if defined(arch_atomic64_dec_return_acquire)
-#define raw_atomic64_dec_return_acquire arch_atomic64_dec_return_acquire
-#elif defined(arch_atomic64_dec_return_relaxed)
static __always_inline s64
raw_atomic64_dec_return_acquire(atomic64_t *v)
{
+#if defined(arch_atomic64_dec_return_acquire)
+ return arch_atomic64_dec_return_acquire(v);
+#elif defined(arch_atomic64_dec_return_relaxed)
s64 ret = arch_atomic64_dec_return_relaxed(v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic64_dec_return)
-#define raw_atomic64_dec_return_acquire arch_atomic64_dec_return
+ return arch_atomic64_dec_return(v);
#else
-static __always_inline s64
-raw_atomic64_dec_return_acquire(atomic64_t *v)
-{
return raw_atomic64_sub_return_acquire(1, v);
-}
#endif
+}
-#if defined(arch_atomic64_dec_return_release)
-#define raw_atomic64_dec_return_release arch_atomic64_dec_return_release
-#elif defined(arch_atomic64_dec_return_relaxed)
static __always_inline s64
raw_atomic64_dec_return_release(atomic64_t *v)
{
+#if defined(arch_atomic64_dec_return_release)
+ return arch_atomic64_dec_return_release(v);
+#elif defined(arch_atomic64_dec_return_relaxed)
__atomic_release_fence();
return arch_atomic64_dec_return_relaxed(v);
-}
#elif defined(arch_atomic64_dec_return)
-#define raw_atomic64_dec_return_release arch_atomic64_dec_return
+ return arch_atomic64_dec_return(v);
#else
-static __always_inline s64
-raw_atomic64_dec_return_release(atomic64_t *v)
-{
return raw_atomic64_sub_return_release(1, v);
-}
#endif
+}
-#if defined(arch_atomic64_dec_return_relaxed)
-#define raw_atomic64_dec_return_relaxed arch_atomic64_dec_return_relaxed
-#elif defined(arch_atomic64_dec_return)
-#define raw_atomic64_dec_return_relaxed arch_atomic64_dec_return
-#else
static __always_inline s64
raw_atomic64_dec_return_relaxed(atomic64_t *v)
{
+#if defined(arch_atomic64_dec_return_relaxed)
+ return arch_atomic64_dec_return_relaxed(v);
+#elif defined(arch_atomic64_dec_return)
+ return arch_atomic64_dec_return(v);
+#else
return raw_atomic64_sub_return_relaxed(1, v);
-}
#endif
+}
-#if defined(arch_atomic64_fetch_dec)
-#define raw_atomic64_fetch_dec arch_atomic64_fetch_dec
-#elif defined(arch_atomic64_fetch_dec_relaxed)
static __always_inline s64
raw_atomic64_fetch_dec(atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_dec)
+ return arch_atomic64_fetch_dec(v);
+#elif defined(arch_atomic64_fetch_dec_relaxed)
s64 ret;
__atomic_pre_full_fence();
ret = arch_atomic64_fetch_dec_relaxed(v);
__atomic_post_full_fence();
return ret;
-}
#else
-static __always_inline s64
-raw_atomic64_fetch_dec(atomic64_t *v)
-{
return raw_atomic64_fetch_sub(1, v);
-}
#endif
+}
-#if defined(arch_atomic64_fetch_dec_acquire)
-#define raw_atomic64_fetch_dec_acquire arch_atomic64_fetch_dec_acquire
-#elif defined(arch_atomic64_fetch_dec_relaxed)
static __always_inline s64
raw_atomic64_fetch_dec_acquire(atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_dec_acquire)
+ return arch_atomic64_fetch_dec_acquire(v);
+#elif defined(arch_atomic64_fetch_dec_relaxed)
s64 ret = arch_atomic64_fetch_dec_relaxed(v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic64_fetch_dec)
-#define raw_atomic64_fetch_dec_acquire arch_atomic64_fetch_dec
+ return arch_atomic64_fetch_dec(v);
#else
-static __always_inline s64
-raw_atomic64_fetch_dec_acquire(atomic64_t *v)
-{
return raw_atomic64_fetch_sub_acquire(1, v);
-}
#endif
+}
-#if defined(arch_atomic64_fetch_dec_release)
-#define raw_atomic64_fetch_dec_release arch_atomic64_fetch_dec_release
-#elif defined(arch_atomic64_fetch_dec_relaxed)
static __always_inline s64
raw_atomic64_fetch_dec_release(atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_dec_release)
+ return arch_atomic64_fetch_dec_release(v);
+#elif defined(arch_atomic64_fetch_dec_relaxed)
__atomic_release_fence();
return arch_atomic64_fetch_dec_relaxed(v);
-}
#elif defined(arch_atomic64_fetch_dec)
-#define raw_atomic64_fetch_dec_release arch_atomic64_fetch_dec
+ return arch_atomic64_fetch_dec(v);
#else
-static __always_inline s64
-raw_atomic64_fetch_dec_release(atomic64_t *v)
-{
return raw_atomic64_fetch_sub_release(1, v);
-}
#endif
+}
-#if defined(arch_atomic64_fetch_dec_relaxed)
-#define raw_atomic64_fetch_dec_relaxed arch_atomic64_fetch_dec_relaxed
-#elif defined(arch_atomic64_fetch_dec)
-#define raw_atomic64_fetch_dec_relaxed arch_atomic64_fetch_dec
-#else
static __always_inline s64
raw_atomic64_fetch_dec_relaxed(atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_dec_relaxed)
+ return arch_atomic64_fetch_dec_relaxed(v);
+#elif defined(arch_atomic64_fetch_dec)
+ return arch_atomic64_fetch_dec(v);
+#else
return raw_atomic64_fetch_sub_relaxed(1, v);
-}
#endif
+}
-#define raw_atomic64_and arch_atomic64_and
+static __always_inline void
+raw_atomic64_and(s64 i, atomic64_t *v)
+{
+ arch_atomic64_and(i, v);
+}
-#if defined(arch_atomic64_fetch_and)
-#define raw_atomic64_fetch_and arch_atomic64_fetch_and
-#elif defined(arch_atomic64_fetch_and_relaxed)
static __always_inline s64
raw_atomic64_fetch_and(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_and)
+ return arch_atomic64_fetch_and(i, v);
+#elif defined(arch_atomic64_fetch_and_relaxed)
s64 ret;
__atomic_pre_full_fence();
ret = arch_atomic64_fetch_and_relaxed(i, v);
__atomic_post_full_fence();
return ret;
-}
#else
#error "Unable to define raw_atomic64_fetch_and"
#endif
+}
-#if defined(arch_atomic64_fetch_and_acquire)
-#define raw_atomic64_fetch_and_acquire arch_atomic64_fetch_and_acquire
-#elif defined(arch_atomic64_fetch_and_relaxed)
static __always_inline s64
raw_atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_and_acquire)
+ return arch_atomic64_fetch_and_acquire(i, v);
+#elif defined(arch_atomic64_fetch_and_relaxed)
s64 ret = arch_atomic64_fetch_and_relaxed(i, v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic64_fetch_and)
-#define raw_atomic64_fetch_and_acquire arch_atomic64_fetch_and
+ return arch_atomic64_fetch_and(i, v);
#else
#error "Unable to define raw_atomic64_fetch_and_acquire"
#endif
+}
-#if defined(arch_atomic64_fetch_and_release)
-#define raw_atomic64_fetch_and_release arch_atomic64_fetch_and_release
-#elif defined(arch_atomic64_fetch_and_relaxed)
static __always_inline s64
raw_atomic64_fetch_and_release(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_and_release)
+ return arch_atomic64_fetch_and_release(i, v);
+#elif defined(arch_atomic64_fetch_and_relaxed)
__atomic_release_fence();
return arch_atomic64_fetch_and_relaxed(i, v);
-}
#elif defined(arch_atomic64_fetch_and)
-#define raw_atomic64_fetch_and_release arch_atomic64_fetch_and
+ return arch_atomic64_fetch_and(i, v);
#else
#error "Unable to define raw_atomic64_fetch_and_release"
#endif
+}
+static __always_inline s64
+raw_atomic64_fetch_and_relaxed(s64 i, atomic64_t *v)
+{
#if defined(arch_atomic64_fetch_and_relaxed)
-#define raw_atomic64_fetch_and_relaxed arch_atomic64_fetch_and_relaxed
+ return arch_atomic64_fetch_and_relaxed(i, v);
#elif defined(arch_atomic64_fetch_and)
-#define raw_atomic64_fetch_and_relaxed arch_atomic64_fetch_and
+ return arch_atomic64_fetch_and(i, v);
#else
#error "Unable to define raw_atomic64_fetch_and_relaxed"
#endif
+}
-#if defined(arch_atomic64_andnot)
-#define raw_atomic64_andnot arch_atomic64_andnot
-#else
static __always_inline void
raw_atomic64_andnot(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_andnot)
+ arch_atomic64_andnot(i, v);
+#else
raw_atomic64_and(~i, v);
-}
#endif
+}
-#if defined(arch_atomic64_fetch_andnot)
-#define raw_atomic64_fetch_andnot arch_atomic64_fetch_andnot
-#elif defined(arch_atomic64_fetch_andnot_relaxed)
static __always_inline s64
raw_atomic64_fetch_andnot(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_andnot)
+ return arch_atomic64_fetch_andnot(i, v);
+#elif defined(arch_atomic64_fetch_andnot_relaxed)
s64 ret;
__atomic_pre_full_fence();
ret = arch_atomic64_fetch_andnot_relaxed(i, v);
__atomic_post_full_fence();
return ret;
-}
#else
-static __always_inline s64
-raw_atomic64_fetch_andnot(s64 i, atomic64_t *v)
-{
return raw_atomic64_fetch_and(~i, v);
-}
#endif
+}
-#if defined(arch_atomic64_fetch_andnot_acquire)
-#define raw_atomic64_fetch_andnot_acquire arch_atomic64_fetch_andnot_acquire
-#elif defined(arch_atomic64_fetch_andnot_relaxed)
static __always_inline s64
raw_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_andnot_acquire)
+ return arch_atomic64_fetch_andnot_acquire(i, v);
+#elif defined(arch_atomic64_fetch_andnot_relaxed)
s64 ret = arch_atomic64_fetch_andnot_relaxed(i, v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic64_fetch_andnot)
-#define raw_atomic64_fetch_andnot_acquire arch_atomic64_fetch_andnot
+ return arch_atomic64_fetch_andnot(i, v);
#else
-static __always_inline s64
-raw_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
-{
return raw_atomic64_fetch_and_acquire(~i, v);
-}
#endif
+}
-#if defined(arch_atomic64_fetch_andnot_release)
-#define raw_atomic64_fetch_andnot_release arch_atomic64_fetch_andnot_release
-#elif defined(arch_atomic64_fetch_andnot_relaxed)
static __always_inline s64
raw_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_andnot_release)
+ return arch_atomic64_fetch_andnot_release(i, v);
+#elif defined(arch_atomic64_fetch_andnot_relaxed)
__atomic_release_fence();
return arch_atomic64_fetch_andnot_relaxed(i, v);
-}
#elif defined(arch_atomic64_fetch_andnot)
-#define raw_atomic64_fetch_andnot_release arch_atomic64_fetch_andnot
+ return arch_atomic64_fetch_andnot(i, v);
#else
-static __always_inline s64
-raw_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
-{
return raw_atomic64_fetch_and_release(~i, v);
-}
#endif
+}
-#if defined(arch_atomic64_fetch_andnot_relaxed)
-#define raw_atomic64_fetch_andnot_relaxed arch_atomic64_fetch_andnot_relaxed
-#elif defined(arch_atomic64_fetch_andnot)
-#define raw_atomic64_fetch_andnot_relaxed arch_atomic64_fetch_andnot
-#else
static __always_inline s64
raw_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_andnot_relaxed)
+ return arch_atomic64_fetch_andnot_relaxed(i, v);
+#elif defined(arch_atomic64_fetch_andnot)
+ return arch_atomic64_fetch_andnot(i, v);
+#else
return raw_atomic64_fetch_and_relaxed(~i, v);
-}
#endif
+}
-#define raw_atomic64_or arch_atomic64_or
+static __always_inline void
+raw_atomic64_or(s64 i, atomic64_t *v)
+{
+ arch_atomic64_or(i, v);
+}
-#if defined(arch_atomic64_fetch_or)
-#define raw_atomic64_fetch_or arch_atomic64_fetch_or
-#elif defined(arch_atomic64_fetch_or_relaxed)
static __always_inline s64
raw_atomic64_fetch_or(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_or)
+ return arch_atomic64_fetch_or(i, v);
+#elif defined(arch_atomic64_fetch_or_relaxed)
s64 ret;
__atomic_pre_full_fence();
ret = arch_atomic64_fetch_or_relaxed(i, v);
__atomic_post_full_fence();
return ret;
-}
#else
#error "Unable to define raw_atomic64_fetch_or"
#endif
+}
-#if defined(arch_atomic64_fetch_or_acquire)
-#define raw_atomic64_fetch_or_acquire arch_atomic64_fetch_or_acquire
-#elif defined(arch_atomic64_fetch_or_relaxed)
static __always_inline s64
raw_atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_or_acquire)
+ return arch_atomic64_fetch_or_acquire(i, v);
+#elif defined(arch_atomic64_fetch_or_relaxed)
s64 ret = arch_atomic64_fetch_or_relaxed(i, v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic64_fetch_or)
-#define raw_atomic64_fetch_or_acquire arch_atomic64_fetch_or
+ return arch_atomic64_fetch_or(i, v);
#else
#error "Unable to define raw_atomic64_fetch_or_acquire"
#endif
+}
-#if defined(arch_atomic64_fetch_or_release)
-#define raw_atomic64_fetch_or_release arch_atomic64_fetch_or_release
-#elif defined(arch_atomic64_fetch_or_relaxed)
static __always_inline s64
raw_atomic64_fetch_or_release(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_or_release)
+ return arch_atomic64_fetch_or_release(i, v);
+#elif defined(arch_atomic64_fetch_or_relaxed)
__atomic_release_fence();
return arch_atomic64_fetch_or_relaxed(i, v);
-}
#elif defined(arch_atomic64_fetch_or)
-#define raw_atomic64_fetch_or_release arch_atomic64_fetch_or
+ return arch_atomic64_fetch_or(i, v);
#else
#error "Unable to define raw_atomic64_fetch_or_release"
#endif
+}
+static __always_inline s64
+raw_atomic64_fetch_or_relaxed(s64 i, atomic64_t *v)
+{
#if defined(arch_atomic64_fetch_or_relaxed)
-#define raw_atomic64_fetch_or_relaxed arch_atomic64_fetch_or_relaxed
+ return arch_atomic64_fetch_or_relaxed(i, v);
#elif defined(arch_atomic64_fetch_or)
-#define raw_atomic64_fetch_or_relaxed arch_atomic64_fetch_or
+ return arch_atomic64_fetch_or(i, v);
#else
#error "Unable to define raw_atomic64_fetch_or_relaxed"
#endif
+}
-#define raw_atomic64_xor arch_atomic64_xor
+static __always_inline void
+raw_atomic64_xor(s64 i, atomic64_t *v)
+{
+ arch_atomic64_xor(i, v);
+}
-#if defined(arch_atomic64_fetch_xor)
-#define raw_atomic64_fetch_xor arch_atomic64_fetch_xor
-#elif defined(arch_atomic64_fetch_xor_relaxed)
static __always_inline s64
raw_atomic64_fetch_xor(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_xor)
+ return arch_atomic64_fetch_xor(i, v);
+#elif defined(arch_atomic64_fetch_xor_relaxed)
s64 ret;
__atomic_pre_full_fence();
ret = arch_atomic64_fetch_xor_relaxed(i, v);
__atomic_post_full_fence();
return ret;
-}
#else
#error "Unable to define raw_atomic64_fetch_xor"
#endif
+}
-#if defined(arch_atomic64_fetch_xor_acquire)
-#define raw_atomic64_fetch_xor_acquire arch_atomic64_fetch_xor_acquire
-#elif defined(arch_atomic64_fetch_xor_relaxed)
static __always_inline s64
raw_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_xor_acquire)
+ return arch_atomic64_fetch_xor_acquire(i, v);
+#elif defined(arch_atomic64_fetch_xor_relaxed)
s64 ret = arch_atomic64_fetch_xor_relaxed(i, v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic64_fetch_xor)
-#define raw_atomic64_fetch_xor_acquire arch_atomic64_fetch_xor
+ return arch_atomic64_fetch_xor(i, v);
#else
#error "Unable to define raw_atomic64_fetch_xor_acquire"
#endif
+}
-#if defined(arch_atomic64_fetch_xor_release)
-#define raw_atomic64_fetch_xor_release arch_atomic64_fetch_xor_release
-#elif defined(arch_atomic64_fetch_xor_relaxed)
static __always_inline s64
raw_atomic64_fetch_xor_release(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_xor_release)
+ return arch_atomic64_fetch_xor_release(i, v);
+#elif defined(arch_atomic64_fetch_xor_relaxed)
__atomic_release_fence();
return arch_atomic64_fetch_xor_relaxed(i, v);
-}
#elif defined(arch_atomic64_fetch_xor)
-#define raw_atomic64_fetch_xor_release arch_atomic64_fetch_xor
+ return arch_atomic64_fetch_xor(i, v);
#else
#error "Unable to define raw_atomic64_fetch_xor_release"
#endif
+}
+static __always_inline s64
+raw_atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v)
+{
#if defined(arch_atomic64_fetch_xor_relaxed)
-#define raw_atomic64_fetch_xor_relaxed arch_atomic64_fetch_xor_relaxed
+ return arch_atomic64_fetch_xor_relaxed(i, v);
#elif defined(arch_atomic64_fetch_xor)
-#define raw_atomic64_fetch_xor_relaxed arch_atomic64_fetch_xor
+ return arch_atomic64_fetch_xor(i, v);
#else
#error "Unable to define raw_atomic64_fetch_xor_relaxed"
#endif
+}
-#if defined(arch_atomic64_xchg)
-#define raw_atomic64_xchg arch_atomic64_xchg
-#elif defined(arch_atomic64_xchg_relaxed)
static __always_inline s64
-raw_atomic64_xchg(atomic64_t *v, s64 i)
+raw_atomic64_xchg(atomic64_t *v, s64 new)
{
+#if defined(arch_atomic64_xchg)
+ return arch_atomic64_xchg(v, new);
+#elif defined(arch_atomic64_xchg_relaxed)
s64 ret;
__atomic_pre_full_fence();
- ret = arch_atomic64_xchg_relaxed(v, i);
+ ret = arch_atomic64_xchg_relaxed(v, new);
__atomic_post_full_fence();
return ret;
-}
#else
-static __always_inline s64
-raw_atomic64_xchg(atomic64_t *v, s64 new)
-{
return raw_xchg(&v->counter, new);
-}
#endif
+}
-#if defined(arch_atomic64_xchg_acquire)
-#define raw_atomic64_xchg_acquire arch_atomic64_xchg_acquire
-#elif defined(arch_atomic64_xchg_relaxed)
static __always_inline s64
-raw_atomic64_xchg_acquire(atomic64_t *v, s64 i)
+raw_atomic64_xchg_acquire(atomic64_t *v, s64 new)
{
- s64 ret = arch_atomic64_xchg_relaxed(v, i);
+#if defined(arch_atomic64_xchg_acquire)
+ return arch_atomic64_xchg_acquire(v, new);
+#elif defined(arch_atomic64_xchg_relaxed)
+ s64 ret = arch_atomic64_xchg_relaxed(v, new);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic64_xchg)
-#define raw_atomic64_xchg_acquire arch_atomic64_xchg
+ return arch_atomic64_xchg(v, new);
#else
-static __always_inline s64
-raw_atomic64_xchg_acquire(atomic64_t *v, s64 new)
-{
return raw_xchg_acquire(&v->counter, new);
-}
#endif
+}
-#if defined(arch_atomic64_xchg_release)
-#define raw_atomic64_xchg_release arch_atomic64_xchg_release
-#elif defined(arch_atomic64_xchg_relaxed)
static __always_inline s64
-raw_atomic64_xchg_release(atomic64_t *v, s64 i)
+raw_atomic64_xchg_release(atomic64_t *v, s64 new)
{
+#if defined(arch_atomic64_xchg_release)
+ return arch_atomic64_xchg_release(v, new);
+#elif defined(arch_atomic64_xchg_relaxed)
__atomic_release_fence();
- return arch_atomic64_xchg_relaxed(v, i);
-}
+ return arch_atomic64_xchg_relaxed(v, new);
#elif defined(arch_atomic64_xchg)
-#define raw_atomic64_xchg_release arch_atomic64_xchg
+ return arch_atomic64_xchg(v, new);
#else
-static __always_inline s64
-raw_atomic64_xchg_release(atomic64_t *v, s64 new)
-{
return raw_xchg_release(&v->counter, new);
-}
#endif
+}
-#if defined(arch_atomic64_xchg_relaxed)
-#define raw_atomic64_xchg_relaxed arch_atomic64_xchg_relaxed
-#elif defined(arch_atomic64_xchg)
-#define raw_atomic64_xchg_relaxed arch_atomic64_xchg
-#else
static __always_inline s64
raw_atomic64_xchg_relaxed(atomic64_t *v, s64 new)
{
+#if defined(arch_atomic64_xchg_relaxed)
+ return arch_atomic64_xchg_relaxed(v, new);
+#elif defined(arch_atomic64_xchg)
+ return arch_atomic64_xchg(v, new);
+#else
return raw_xchg_relaxed(&v->counter, new);
-}
#endif
+}
-#if defined(arch_atomic64_cmpxchg)
-#define raw_atomic64_cmpxchg arch_atomic64_cmpxchg
-#elif defined(arch_atomic64_cmpxchg_relaxed)
static __always_inline s64
raw_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
{
+#if defined(arch_atomic64_cmpxchg)
+ return arch_atomic64_cmpxchg(v, old, new);
+#elif defined(arch_atomic64_cmpxchg_relaxed)
s64 ret;
__atomic_pre_full_fence();
ret = arch_atomic64_cmpxchg_relaxed(v, old, new);
__atomic_post_full_fence();
return ret;
-}
#else
-static __always_inline s64
-raw_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
-{
return raw_cmpxchg(&v->counter, old, new);
-}
#endif
+}
-#if defined(arch_atomic64_cmpxchg_acquire)
-#define raw_atomic64_cmpxchg_acquire arch_atomic64_cmpxchg_acquire
-#elif defined(arch_atomic64_cmpxchg_relaxed)
static __always_inline s64
raw_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
{
+#if defined(arch_atomic64_cmpxchg_acquire)
+ return arch_atomic64_cmpxchg_acquire(v, old, new);
+#elif defined(arch_atomic64_cmpxchg_relaxed)
s64 ret = arch_atomic64_cmpxchg_relaxed(v, old, new);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic64_cmpxchg)
-#define raw_atomic64_cmpxchg_acquire arch_atomic64_cmpxchg
+ return arch_atomic64_cmpxchg(v, old, new);
#else
-static __always_inline s64
-raw_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
-{
return raw_cmpxchg_acquire(&v->counter, old, new);
-}
#endif
+}
-#if defined(arch_atomic64_cmpxchg_release)
-#define raw_atomic64_cmpxchg_release arch_atomic64_cmpxchg_release
-#elif defined(arch_atomic64_cmpxchg_relaxed)
static __always_inline s64
raw_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
{
+#if defined(arch_atomic64_cmpxchg_release)
+ return arch_atomic64_cmpxchg_release(v, old, new);
+#elif defined(arch_atomic64_cmpxchg_relaxed)
__atomic_release_fence();
return arch_atomic64_cmpxchg_relaxed(v, old, new);
-}
#elif defined(arch_atomic64_cmpxchg)
-#define raw_atomic64_cmpxchg_release arch_atomic64_cmpxchg
+ return arch_atomic64_cmpxchg(v, old, new);
#else
-static __always_inline s64
-raw_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
-{
return raw_cmpxchg_release(&v->counter, old, new);
-}
#endif
+}
-#if defined(arch_atomic64_cmpxchg_relaxed)
-#define raw_atomic64_cmpxchg_relaxed arch_atomic64_cmpxchg_relaxed
-#elif defined(arch_atomic64_cmpxchg)
-#define raw_atomic64_cmpxchg_relaxed arch_atomic64_cmpxchg
-#else
static __always_inline s64
raw_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
{
+#if defined(arch_atomic64_cmpxchg_relaxed)
+ return arch_atomic64_cmpxchg_relaxed(v, old, new);
+#elif defined(arch_atomic64_cmpxchg)
+ return arch_atomic64_cmpxchg(v, old, new);
+#else
return raw_cmpxchg_relaxed(&v->counter, old, new);
-}
#endif
+}
-#if defined(arch_atomic64_try_cmpxchg)
-#define raw_atomic64_try_cmpxchg arch_atomic64_try_cmpxchg
-#elif defined(arch_atomic64_try_cmpxchg_relaxed)
static __always_inline bool
raw_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
{
+#if defined(arch_atomic64_try_cmpxchg)
+ return arch_atomic64_try_cmpxchg(v, old, new);
+#elif defined(arch_atomic64_try_cmpxchg_relaxed)
bool ret;
__atomic_pre_full_fence();
ret = arch_atomic64_try_cmpxchg_relaxed(v, old, new);
__atomic_post_full_fence();
return ret;
-}
#else
-static __always_inline bool
-raw_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
-{
s64 r, o = *old;
r = raw_atomic64_cmpxchg(v, o, new);
if (unlikely(r != o))
*old = r;
return likely(r == o);
-}
#endif
+}
-#if defined(arch_atomic64_try_cmpxchg_acquire)
-#define raw_atomic64_try_cmpxchg_acquire arch_atomic64_try_cmpxchg_acquire
-#elif defined(arch_atomic64_try_cmpxchg_relaxed)
static __always_inline bool
raw_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
{
+#if defined(arch_atomic64_try_cmpxchg_acquire)
+ return arch_atomic64_try_cmpxchg_acquire(v, old, new);
+#elif defined(arch_atomic64_try_cmpxchg_relaxed)
bool ret = arch_atomic64_try_cmpxchg_relaxed(v, old, new);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic64_try_cmpxchg)
-#define raw_atomic64_try_cmpxchg_acquire arch_atomic64_try_cmpxchg
+ return arch_atomic64_try_cmpxchg(v, old, new);
#else
-static __always_inline bool
-raw_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
-{
s64 r, o = *old;
r = raw_atomic64_cmpxchg_acquire(v, o, new);
if (unlikely(r != o))
*old = r;
return likely(r == o);
-}
#endif
+}
-#if defined(arch_atomic64_try_cmpxchg_release)
-#define raw_atomic64_try_cmpxchg_release arch_atomic64_try_cmpxchg_release
-#elif defined(arch_atomic64_try_cmpxchg_relaxed)
static __always_inline bool
raw_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
{
+#if defined(arch_atomic64_try_cmpxchg_release)
+ return arch_atomic64_try_cmpxchg_release(v, old, new);
+#elif defined(arch_atomic64_try_cmpxchg_relaxed)
__atomic_release_fence();
return arch_atomic64_try_cmpxchg_relaxed(v, old, new);
-}
#elif defined(arch_atomic64_try_cmpxchg)
-#define raw_atomic64_try_cmpxchg_release arch_atomic64_try_cmpxchg
+ return arch_atomic64_try_cmpxchg(v, old, new);
#else
-static __always_inline bool
-raw_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
-{
s64 r, o = *old;
r = raw_atomic64_cmpxchg_release(v, o, new);
if (unlikely(r != o))
*old = r;
return likely(r == o);
-}
#endif
+}
-#if defined(arch_atomic64_try_cmpxchg_relaxed)
-#define raw_atomic64_try_cmpxchg_relaxed arch_atomic64_try_cmpxchg_relaxed
-#elif defined(arch_atomic64_try_cmpxchg)
-#define raw_atomic64_try_cmpxchg_relaxed arch_atomic64_try_cmpxchg
-#else
static __always_inline bool
raw_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
{
+#if defined(arch_atomic64_try_cmpxchg_relaxed)
+ return arch_atomic64_try_cmpxchg_relaxed(v, old, new);
+#elif defined(arch_atomic64_try_cmpxchg)
+ return arch_atomic64_try_cmpxchg(v, old, new);
+#else
s64 r, o = *old;
r = raw_atomic64_cmpxchg_relaxed(v, o, new);
if (unlikely(r != o))
*old = r;
return likely(r == o);
-}
#endif
+}
-#if defined(arch_atomic64_sub_and_test)
-#define raw_atomic64_sub_and_test arch_atomic64_sub_and_test
-#else
static __always_inline bool
raw_atomic64_sub_and_test(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_sub_and_test)
+ return arch_atomic64_sub_and_test(i, v);
+#else
return raw_atomic64_sub_return(i, v) == 0;
-}
#endif
+}
-#if defined(arch_atomic64_dec_and_test)
-#define raw_atomic64_dec_and_test arch_atomic64_dec_and_test
-#else
static __always_inline bool
raw_atomic64_dec_and_test(atomic64_t *v)
{
+#if defined(arch_atomic64_dec_and_test)
+ return arch_atomic64_dec_and_test(v);
+#else
return raw_atomic64_dec_return(v) == 0;
-}
#endif
+}
-#if defined(arch_atomic64_inc_and_test)
-#define raw_atomic64_inc_and_test arch_atomic64_inc_and_test
-#else
static __always_inline bool
raw_atomic64_inc_and_test(atomic64_t *v)
{
+#if defined(arch_atomic64_inc_and_test)
+ return arch_atomic64_inc_and_test(v);
+#else
return raw_atomic64_inc_return(v) == 0;
-}
#endif
+}
-#if defined(arch_atomic64_add_negative)
-#define raw_atomic64_add_negative arch_atomic64_add_negative
-#elif defined(arch_atomic64_add_negative_relaxed)
static __always_inline bool
raw_atomic64_add_negative(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_add_negative)
+ return arch_atomic64_add_negative(i, v);
+#elif defined(arch_atomic64_add_negative_relaxed)
bool ret;
__atomic_pre_full_fence();
ret = arch_atomic64_add_negative_relaxed(i, v);
__atomic_post_full_fence();
return ret;
-}
#else
-static __always_inline bool
-raw_atomic64_add_negative(s64 i, atomic64_t *v)
-{
return raw_atomic64_add_return(i, v) < 0;
-}
#endif
+}
-#if defined(arch_atomic64_add_negative_acquire)
-#define raw_atomic64_add_negative_acquire arch_atomic64_add_negative_acquire
-#elif defined(arch_atomic64_add_negative_relaxed)
static __always_inline bool
raw_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_add_negative_acquire)
+ return arch_atomic64_add_negative_acquire(i, v);
+#elif defined(arch_atomic64_add_negative_relaxed)
bool ret = arch_atomic64_add_negative_relaxed(i, v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic64_add_negative)
-#define raw_atomic64_add_negative_acquire arch_atomic64_add_negative
+ return arch_atomic64_add_negative(i, v);
#else
-static __always_inline bool
-raw_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
-{
return raw_atomic64_add_return_acquire(i, v) < 0;
-}
#endif
+}
-#if defined(arch_atomic64_add_negative_release)
-#define raw_atomic64_add_negative_release arch_atomic64_add_negative_release
-#elif defined(arch_atomic64_add_negative_relaxed)
static __always_inline bool
raw_atomic64_add_negative_release(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_add_negative_release)
+ return arch_atomic64_add_negative_release(i, v);
+#elif defined(arch_atomic64_add_negative_relaxed)
__atomic_release_fence();
return arch_atomic64_add_negative_relaxed(i, v);
-}
#elif defined(arch_atomic64_add_negative)
-#define raw_atomic64_add_negative_release arch_atomic64_add_negative
+ return arch_atomic64_add_negative(i, v);
#else
-static __always_inline bool
-raw_atomic64_add_negative_release(s64 i, atomic64_t *v)
-{
return raw_atomic64_add_return_release(i, v) < 0;
-}
#endif
+}
-#if defined(arch_atomic64_add_negative_relaxed)
-#define raw_atomic64_add_negative_relaxed arch_atomic64_add_negative_relaxed
-#elif defined(arch_atomic64_add_negative)
-#define raw_atomic64_add_negative_relaxed arch_atomic64_add_negative
-#else
static __always_inline bool
raw_atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_add_negative_relaxed)
+ return arch_atomic64_add_negative_relaxed(i, v);
+#elif defined(arch_atomic64_add_negative)
+ return arch_atomic64_add_negative(i, v);
+#else
return raw_atomic64_add_return_relaxed(i, v) < 0;
-}
#endif
+}
-#if defined(arch_atomic64_fetch_add_unless)
-#define raw_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless
-#else
static __always_inline s64
raw_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
{
+#if defined(arch_atomic64_fetch_add_unless)
+ return arch_atomic64_fetch_add_unless(v, a, u);
+#else
s64 c = raw_atomic64_read(v);
do {
@@ -2839,35 +2735,35 @@ raw_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
} while (!raw_atomic64_try_cmpxchg(v, &c, c + a));
return c;
-}
#endif
+}
-#if defined(arch_atomic64_add_unless)
-#define raw_atomic64_add_unless arch_atomic64_add_unless
-#else
static __always_inline bool
raw_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
{
+#if defined(arch_atomic64_add_unless)
+ return arch_atomic64_add_unless(v, a, u);
+#else
return raw_atomic64_fetch_add_unless(v, a, u) != u;
-}
#endif
+}
-#if defined(arch_atomic64_inc_not_zero)
-#define raw_atomic64_inc_not_zero arch_atomic64_inc_not_zero
-#else
static __always_inline bool
raw_atomic64_inc_not_zero(atomic64_t *v)
{
+#if defined(arch_atomic64_inc_not_zero)
+ return arch_atomic64_inc_not_zero(v);
+#else
return raw_atomic64_add_unless(v, 1, 0);
-}
#endif
+}
-#if defined(arch_atomic64_inc_unless_negative)
-#define raw_atomic64_inc_unless_negative arch_atomic64_inc_unless_negative
-#else
static __always_inline bool
raw_atomic64_inc_unless_negative(atomic64_t *v)
{
+#if defined(arch_atomic64_inc_unless_negative)
+ return arch_atomic64_inc_unless_negative(v);
+#else
s64 c = raw_atomic64_read(v);
do {
@@ -2876,15 +2772,15 @@ raw_atomic64_inc_unless_negative(atomic64_t *v)
} while (!raw_atomic64_try_cmpxchg(v, &c, c + 1));
return true;
-}
#endif
+}
-#if defined(arch_atomic64_dec_unless_positive)
-#define raw_atomic64_dec_unless_positive arch_atomic64_dec_unless_positive
-#else
static __always_inline bool
raw_atomic64_dec_unless_positive(atomic64_t *v)
{
+#if defined(arch_atomic64_dec_unless_positive)
+ return arch_atomic64_dec_unless_positive(v);
+#else
s64 c = raw_atomic64_read(v);
do {
@@ -2893,15 +2789,15 @@ raw_atomic64_dec_unless_positive(atomic64_t *v)
} while (!raw_atomic64_try_cmpxchg(v, &c, c - 1));
return true;
-}
#endif
+}
-#if defined(arch_atomic64_dec_if_positive)
-#define raw_atomic64_dec_if_positive arch_atomic64_dec_if_positive
-#else
static __always_inline s64
raw_atomic64_dec_if_positive(atomic64_t *v)
{
+#if defined(arch_atomic64_dec_if_positive)
+ return arch_atomic64_dec_if_positive(v);
+#else
s64 dec, c = raw_atomic64_read(v);
do {
@@ -2911,8 +2807,8 @@ raw_atomic64_dec_if_positive(atomic64_t *v)
} while (!raw_atomic64_try_cmpxchg(v, &c, dec));
return dec;
-}
#endif
+}
#endif /* _LINUX_ATOMIC_FALLBACK_H */
-// c2048fccede6fac923252290e2b303949d5dec83
+// 205e090382132f1fc85e48b46e722865f9c81309
diff --git a/include/linux/atomic/atomic-instrumented.h b/include/linux/atomic/atomic-instrumented.h
index 90ee2f55af770..5491c89dc03a0 100644
--- a/include/linux/atomic/atomic-instrumented.h
+++ b/include/linux/atomic/atomic-instrumented.h
@@ -462,33 +462,33 @@ atomic_fetch_xor_relaxed(int i, atomic_t *v)
}
static __always_inline int
-atomic_xchg(atomic_t *v, int i)
+atomic_xchg(atomic_t *v, int new)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return raw_atomic_xchg(v, i);
+ return raw_atomic_xchg(v, new);
}
static __always_inline int
-atomic_xchg_acquire(atomic_t *v, int i)
+atomic_xchg_acquire(atomic_t *v, int new)
{
instrument_atomic_read_write(v, sizeof(*v));
- return raw_atomic_xchg_acquire(v, i);
+ return raw_atomic_xchg_acquire(v, new);
}
static __always_inline int
-atomic_xchg_release(atomic_t *v, int i)
+atomic_xchg_release(atomic_t *v, int new)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return raw_atomic_xchg_release(v, i);
+ return raw_atomic_xchg_release(v, new);
}
static __always_inline int
-atomic_xchg_relaxed(atomic_t *v, int i)
+atomic_xchg_relaxed(atomic_t *v, int new)
{
instrument_atomic_read_write(v, sizeof(*v));
- return raw_atomic_xchg_relaxed(v, i);
+ return raw_atomic_xchg_relaxed(v, new);
}
static __always_inline int
@@ -1103,33 +1103,33 @@ atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v)
}
static __always_inline s64
-atomic64_xchg(atomic64_t *v, s64 i)
+atomic64_xchg(atomic64_t *v, s64 new)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return raw_atomic64_xchg(v, i);
+ return raw_atomic64_xchg(v, new);
}
static __always_inline s64
-atomic64_xchg_acquire(atomic64_t *v, s64 i)
+atomic64_xchg_acquire(atomic64_t *v, s64 new)
{
instrument_atomic_read_write(v, sizeof(*v));
- return raw_atomic64_xchg_acquire(v, i);
+ return raw_atomic64_xchg_acquire(v, new);
}
static __always_inline s64
-atomic64_xchg_release(atomic64_t *v, s64 i)
+atomic64_xchg_release(atomic64_t *v, s64 new)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return raw_atomic64_xchg_release(v, i);
+ return raw_atomic64_xchg_release(v, new);
}
static __always_inline s64
-atomic64_xchg_relaxed(atomic64_t *v, s64 i)
+atomic64_xchg_relaxed(atomic64_t *v, s64 new)
{
instrument_atomic_read_write(v, sizeof(*v));
- return raw_atomic64_xchg_relaxed(v, i);
+ return raw_atomic64_xchg_relaxed(v, new);
}
static __always_inline s64
@@ -1744,33 +1744,33 @@ atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
}
static __always_inline long
-atomic_long_xchg(atomic_long_t *v, long i)
+atomic_long_xchg(atomic_long_t *v, long new)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return raw_atomic_long_xchg(v, i);
+ return raw_atomic_long_xchg(v, new);
}
static __always_inline long
-atomic_long_xchg_acquire(atomic_long_t *v, long i)
+atomic_long_xchg_acquire(atomic_long_t *v, long new)
{
instrument_atomic_read_write(v, sizeof(*v));
- return raw_atomic_long_xchg_acquire(v, i);
+ return raw_atomic_long_xchg_acquire(v, new);
}
static __always_inline long
-atomic_long_xchg_release(atomic_long_t *v, long i)
+atomic_long_xchg_release(atomic_long_t *v, long new)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return raw_atomic_long_xchg_release(v, i);
+ return raw_atomic_long_xchg_release(v, new);
}
static __always_inline long
-atomic_long_xchg_relaxed(atomic_long_t *v, long i)
+atomic_long_xchg_relaxed(atomic_long_t *v, long new)
{
instrument_atomic_read_write(v, sizeof(*v));
- return raw_atomic_long_xchg_relaxed(v, i);
+ return raw_atomic_long_xchg_relaxed(v, new);
}
static __always_inline long
@@ -2231,4 +2231,4 @@ atomic_long_dec_if_positive(atomic_long_t *v)
#endif /* _LINUX_ATOMIC_INSTRUMENTED_H */
-// f6502977180430e61c1a7c4e5e665f04f501fb8d
+// a4c3d2b229f907654cc53cb5d40e80f7fed1ec9c
diff --git a/include/linux/atomic/atomic-long.h b/include/linux/atomic/atomic-long.h
index 63e0b4078ebd5..f564f71ff8afc 100644
--- a/include/linux/atomic/atomic-long.h
+++ b/include/linux/atomic/atomic-long.h
@@ -622,42 +622,42 @@ raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
}
static __always_inline long
-raw_atomic_long_xchg(atomic_long_t *v, long i)
+raw_atomic_long_xchg(atomic_long_t *v, long new)
{
#ifdef CONFIG_64BIT
- return raw_atomic64_xchg(v, i);
+ return raw_atomic64_xchg(v, new);
#else
- return raw_atomic_xchg(v, i);
+ return raw_atomic_xchg(v, new);
#endif
}
static __always_inline long
-raw_atomic_long_xchg_acquire(atomic_long_t *v, long i)
+raw_atomic_long_xchg_acquire(atomic_long_t *v, long new)
{
#ifdef CONFIG_64BIT
- return raw_atomic64_xchg_acquire(v, i);
+ return raw_atomic64_xchg_acquire(v, new);
#else
- return raw_atomic_xchg_acquire(v, i);
+ return raw_atomic_xchg_acquire(v, new);
#endif
}
static __always_inline long
-raw_atomic_long_xchg_release(atomic_long_t *v, long i)
+raw_atomic_long_xchg_release(atomic_long_t *v, long new)
{
#ifdef CONFIG_64BIT
- return raw_atomic64_xchg_release(v, i);
+ return raw_atomic64_xchg_release(v, new);
#else
- return raw_atomic_xchg_release(v, i);
+ return raw_atomic_xchg_release(v, new);
#endif
}
static __always_inline long
-raw_atomic_long_xchg_relaxed(atomic_long_t *v, long i)
+raw_atomic_long_xchg_relaxed(atomic_long_t *v, long new)
{
#ifdef CONFIG_64BIT
- return raw_atomic64_xchg_relaxed(v, i);
+ return raw_atomic64_xchg_relaxed(v, new);
#else
- return raw_atomic_xchg_relaxed(v, i);
+ return raw_atomic_xchg_relaxed(v, new);
#endif
}
@@ -872,4 +872,4 @@ raw_atomic_long_dec_if_positive(atomic_long_t *v)
}
#endif /* _LINUX_ATOMIC_LONG_H */
-// ad09f849db0db5b30c82e497eeb9056a394c5f22
+// e785d25cc3f220b7d473d36aac9da85dd7eb13a8
diff --git a/scripts/atomic/atomics.tbl b/scripts/atomic/atomics.tbl
index 85ca8d9b5c279..903946cbf1b3e 100644
--- a/scripts/atomic/atomics.tbl
+++ b/scripts/atomic/atomics.tbl
@@ -27,7 +27,7 @@ and vF i v
andnot vF i v
or vF i v
xor vF i v
-xchg I v i
+xchg I v i:new
cmpxchg I v i:old i:new
try_cmpxchg B v p:old i:new
sub_and_test b i v
diff --git a/scripts/atomic/fallbacks/acquire b/scripts/atomic/fallbacks/acquire
index b0f732a5c46ef..4da0cab3604e2 100755
--- a/scripts/atomic/fallbacks/acquire
+++ b/scripts/atomic/fallbacks/acquire
@@ -1,9 +1,5 @@
cat <<EOF
-static __always_inline ${ret}
-raw_${atomic}_${pfx}${name}${sfx}_acquire(${params})
-{
${ret} ret = arch_${atomic}_${pfx}${name}${sfx}_relaxed(${args});
__atomic_acquire_fence();
return ret;
-}
EOF
diff --git a/scripts/atomic/fallbacks/add_negative b/scripts/atomic/fallbacks/add_negative
index 16876118019ec..1d3d4ab3a9d29 100755
--- a/scripts/atomic/fallbacks/add_negative
+++ b/scripts/atomic/fallbacks/add_negative
@@ -1,7 +1,3 @@
cat <<EOF
-static __always_inline bool
-raw_${atomic}_add_negative${order}(${int} i, ${atomic}_t *v)
-{
return raw_${atomic}_add_return${order}(i, v) < 0;
-}
EOF
diff --git a/scripts/atomic/fallbacks/add_unless b/scripts/atomic/fallbacks/add_unless
index 88593e28b1637..95ecb2b7405be 100755
--- a/scripts/atomic/fallbacks/add_unless
+++ b/scripts/atomic/fallbacks/add_unless
@@ -1,7 +1,3 @@
cat << EOF
-static __always_inline bool
-raw_${atomic}_add_unless(${atomic}_t *v, ${int} a, ${int} u)
-{
return raw_${atomic}_fetch_add_unless(v, a, u) != u;
-}
EOF
diff --git a/scripts/atomic/fallbacks/andnot b/scripts/atomic/fallbacks/andnot
index 5b83bb63f7284..66760457e67a5 100755
--- a/scripts/atomic/fallbacks/andnot
+++ b/scripts/atomic/fallbacks/andnot
@@ -1,7 +1,3 @@
cat <<EOF
-static __always_inline ${ret}
-raw_${atomic}_${pfx}andnot${sfx}${order}(${int} i, ${atomic}_t *v)
-{
${retstmt}raw_${atomic}_${pfx}and${sfx}${order}(~i, v);
-}
EOF
diff --git a/scripts/atomic/fallbacks/cmpxchg b/scripts/atomic/fallbacks/cmpxchg
index 312ee67f1743e..1c8507f62e049 100755
--- a/scripts/atomic/fallbacks/cmpxchg
+++ b/scripts/atomic/fallbacks/cmpxchg
@@ -1,7 +1,3 @@
cat <<EOF
-static __always_inline ${int}
-raw_${atomic}_cmpxchg${order}(${atomic}_t *v, ${int} old, ${int} new)
-{
return raw_cmpxchg${order}(&v->counter, old, new);
-}
EOF
diff --git a/scripts/atomic/fallbacks/dec b/scripts/atomic/fallbacks/dec
index a660ac65994bd..60d286d40300f 100755
--- a/scripts/atomic/fallbacks/dec
+++ b/scripts/atomic/fallbacks/dec
@@ -1,7 +1,3 @@
cat <<EOF
-static __always_inline ${ret}
-raw_${atomic}_${pfx}dec${sfx}${order}(${atomic}_t *v)
-{
${retstmt}raw_${atomic}_${pfx}sub${sfx}${order}(1, v);
-}
EOF
diff --git a/scripts/atomic/fallbacks/dec_and_test b/scripts/atomic/fallbacks/dec_and_test
index 521dfcae03f24..3a0278e0ddd73 100755
--- a/scripts/atomic/fallbacks/dec_and_test
+++ b/scripts/atomic/fallbacks/dec_and_test
@@ -1,7 +1,3 @@
cat <<EOF
-static __always_inline bool
-raw_${atomic}_dec_and_test(${atomic}_t *v)
-{
return raw_${atomic}_dec_return(v) == 0;
-}
EOF
diff --git a/scripts/atomic/fallbacks/dec_if_positive b/scripts/atomic/fallbacks/dec_if_positive
index 7acb205e6ce35..f65c11b4b85bd 100755
--- a/scripts/atomic/fallbacks/dec_if_positive
+++ b/scripts/atomic/fallbacks/dec_if_positive
@@ -1,7 +1,4 @@
cat <<EOF
-static __always_inline ${ret}
-raw_${atomic}_dec_if_positive(${atomic}_t *v)
-{
${int} dec, c = raw_${atomic}_read(v);
do {
@@ -11,5 +8,4 @@ raw_${atomic}_dec_if_positive(${atomic}_t *v)
} while (!raw_${atomic}_try_cmpxchg(v, &c, dec));
return dec;
-}
EOF
diff --git a/scripts/atomic/fallbacks/dec_unless_positive b/scripts/atomic/fallbacks/dec_unless_positive
index bcb4f27945eaa..d025361d7b85a 100755
--- a/scripts/atomic/fallbacks/dec_unless_positive
+++ b/scripts/atomic/fallbacks/dec_unless_positive
@@ -1,7 +1,4 @@
cat <<EOF
-static __always_inline bool
-raw_${atomic}_dec_unless_positive(${atomic}_t *v)
-{
${int} c = raw_${atomic}_read(v);
do {
@@ -10,5 +7,4 @@ raw_${atomic}_dec_unless_positive(${atomic}_t *v)
} while (!raw_${atomic}_try_cmpxchg(v, &c, c - 1));
return true;
-}
EOF
diff --git a/scripts/atomic/fallbacks/fence b/scripts/atomic/fallbacks/fence
index 067eea553f5e0..40d5b397658f7 100755
--- a/scripts/atomic/fallbacks/fence
+++ b/scripts/atomic/fallbacks/fence
@@ -1,11 +1,7 @@
cat <<EOF
-static __always_inline ${ret}
-raw_${atomic}_${pfx}${name}${sfx}(${params})
-{
${ret} ret;
__atomic_pre_full_fence();
ret = arch_${atomic}_${pfx}${name}${sfx}_relaxed(${args});
__atomic_post_full_fence();
return ret;
-}
EOF
diff --git a/scripts/atomic/fallbacks/fetch_add_unless b/scripts/atomic/fallbacks/fetch_add_unless
index c18b940153dfd..8db7e9e17facf 100755
--- a/scripts/atomic/fallbacks/fetch_add_unless
+++ b/scripts/atomic/fallbacks/fetch_add_unless
@@ -1,7 +1,4 @@
cat << EOF
-static __always_inline ${int}
-raw_${atomic}_fetch_add_unless(${atomic}_t *v, ${int} a, ${int} u)
-{
${int} c = raw_${atomic}_read(v);
do {
@@ -10,5 +7,4 @@ raw_${atomic}_fetch_add_unless(${atomic}_t *v, ${int} a, ${int} u)
} while (!raw_${atomic}_try_cmpxchg(v, &c, c + a));
return c;
-}
EOF
diff --git a/scripts/atomic/fallbacks/inc b/scripts/atomic/fallbacks/inc
index 7d838f0b66391..56c770f5919c0 100755
--- a/scripts/atomic/fallbacks/inc
+++ b/scripts/atomic/fallbacks/inc
@@ -1,7 +1,3 @@
cat <<EOF
-static __always_inline ${ret}
-raw_${atomic}_${pfx}inc${sfx}${order}(${atomic}_t *v)
-{
${retstmt}raw_${atomic}_${pfx}add${sfx}${order}(1, v);
-}
EOF
diff --git a/scripts/atomic/fallbacks/inc_and_test b/scripts/atomic/fallbacks/inc_and_test
index de25aebee715d..7d16a10f2257e 100755
--- a/scripts/atomic/fallbacks/inc_and_test
+++ b/scripts/atomic/fallbacks/inc_and_test
@@ -1,7 +1,3 @@
cat <<EOF
-static __always_inline bool
-raw_${atomic}_inc_and_test(${atomic}_t *v)
-{
return raw_${atomic}_inc_return(v) == 0;
-}
EOF
diff --git a/scripts/atomic/fallbacks/inc_not_zero b/scripts/atomic/fallbacks/inc_not_zero
index e02206d017f62..1fcef1e55bc97 100755
--- a/scripts/atomic/fallbacks/inc_not_zero
+++ b/scripts/atomic/fallbacks/inc_not_zero
@@ -1,7 +1,3 @@
cat <<EOF
-static __always_inline bool
-raw_${atomic}_inc_not_zero(${atomic}_t *v)
-{
return raw_${atomic}_add_unless(v, 1, 0);
-}
EOF
diff --git a/scripts/atomic/fallbacks/inc_unless_negative b/scripts/atomic/fallbacks/inc_unless_negative
index 7b85cc5b00d2b..7b4b09868842d 100755
--- a/scripts/atomic/fallbacks/inc_unless_negative
+++ b/scripts/atomic/fallbacks/inc_unless_negative
@@ -1,7 +1,4 @@
cat <<EOF
-static __always_inline bool
-raw_${atomic}_inc_unless_negative(${atomic}_t *v)
-{
${int} c = raw_${atomic}_read(v);
do {
@@ -10,5 +7,4 @@ raw_${atomic}_inc_unless_negative(${atomic}_t *v)
} while (!raw_${atomic}_try_cmpxchg(v, &c, c + 1));
return true;
-}
EOF
diff --git a/scripts/atomic/fallbacks/read_acquire b/scripts/atomic/fallbacks/read_acquire
index 26d15ad92d043..e319862d2f1a5 100755
--- a/scripts/atomic/fallbacks/read_acquire
+++ b/scripts/atomic/fallbacks/read_acquire
@@ -1,7 +1,4 @@
cat <<EOF
-static __always_inline ${ret}
-raw_${atomic}_read_acquire(const ${atomic}_t *v)
-{
${int} ret;
if (__native_word(${atomic}_t)) {
@@ -12,5 +9,4 @@ raw_${atomic}_read_acquire(const ${atomic}_t *v)
}
return ret;
-}
EOF
diff --git a/scripts/atomic/fallbacks/release b/scripts/atomic/fallbacks/release
index cbbff708129b8..1e6daf57b4ba5 100755
--- a/scripts/atomic/fallbacks/release
+++ b/scripts/atomic/fallbacks/release
@@ -1,8 +1,4 @@
cat <<EOF
-static __always_inline ${ret}
-raw_${atomic}_${pfx}${name}${sfx}_release(${params})
-{
__atomic_release_fence();
${retstmt}arch_${atomic}_${pfx}${name}${sfx}_relaxed(${args});
-}
EOF
diff --git a/scripts/atomic/fallbacks/set_release b/scripts/atomic/fallbacks/set_release
index 104693bc3c660..16a374ae6bb16 100755
--- a/scripts/atomic/fallbacks/set_release
+++ b/scripts/atomic/fallbacks/set_release
@@ -1,12 +1,8 @@
cat <<EOF
-static __always_inline void
-raw_${atomic}_set_release(${atomic}_t *v, ${int} i)
-{
if (__native_word(${atomic}_t)) {
smp_store_release(&(v)->counter, i);
} else {
__atomic_release_fence();
raw_${atomic}_set(v, i);
}
-}
EOF
diff --git a/scripts/atomic/fallbacks/sub_and_test b/scripts/atomic/fallbacks/sub_and_test
index 8975a496d495c..d1f746fe0ca4d 100755
--- a/scripts/atomic/fallbacks/sub_and_test
+++ b/scripts/atomic/fallbacks/sub_and_test
@@ -1,7 +1,3 @@
cat <<EOF
-static __always_inline bool
-raw_${atomic}_sub_and_test(${int} i, ${atomic}_t *v)
-{
return raw_${atomic}_sub_return(i, v) == 0;
-}
EOF
diff --git a/scripts/atomic/fallbacks/try_cmpxchg b/scripts/atomic/fallbacks/try_cmpxchg
index 4c911a6cced94..d4da82092baf7 100755
--- a/scripts/atomic/fallbacks/try_cmpxchg
+++ b/scripts/atomic/fallbacks/try_cmpxchg
@@ -1,11 +1,7 @@
cat <<EOF
-static __always_inline bool
-raw_${atomic}_try_cmpxchg${order}(${atomic}_t *v, ${int} *old, ${int} new)
-{
${int} r, o = *old;
r = raw_${atomic}_cmpxchg${order}(v, o, new);
if (unlikely(r != o))
*old = r;
return likely(r == o);
-}
EOF
diff --git a/scripts/atomic/fallbacks/xchg b/scripts/atomic/fallbacks/xchg
index bdd788aa575ff..e4def1e0d0926 100755
--- a/scripts/atomic/fallbacks/xchg
+++ b/scripts/atomic/fallbacks/xchg
@@ -1,7 +1,3 @@
cat <<EOF
-static __always_inline ${int}
-raw_${atomic}_xchg${order}(${atomic}_t *v, ${int} new)
-{
return raw_xchg${order}(&v->counter, new);
-}
EOF
diff --git a/scripts/atomic/gen-atomic-fallback.sh b/scripts/atomic/gen-atomic-fallback.sh
index 86aca4f9f315a..2b470d31e3539 100755
--- a/scripts/atomic/gen-atomic-fallback.sh
+++ b/scripts/atomic/gen-atomic-fallback.sh
@@ -60,13 +60,23 @@ gen_proto_order_variant()
local name="$1"; shift
local sfx="$1"; shift
local order="$1"; shift
- local atomic="$1"
+ local atomic="$1"; shift
+ local int="$1"; shift
local atomicname="${atomic}_${pfx}${name}${sfx}${order}"
local basename="${atomic}_${pfx}${name}${sfx}"
local template="$(find_fallback_template "${pfx}" "${name}" "${sfx}" "${order}")"
+ local ret="$(gen_ret_type "${meta}" "${int}")"
+ local retstmt="$(gen_ret_stmt "${meta}")"
+ local params="$(gen_params "${int}" "${atomic}" "$@")"
+ local args="$(gen_args "$@")"
+
+ printf "static __always_inline ${ret}\n"
+ printf "raw_${atomicname}(${params})\n"
+ printf "{\n"
+
# Where there is no possible fallback, this order variant is mandatory
# and must be provided by arch code. Add a comment to the header to
# make this obvious.
@@ -75,33 +85,35 @@ gen_proto_order_variant()
# define this order variant as a C function without a preprocessor
# symbol.
if [ -z ${template} ] && [ -z "${order}" ] && ! meta_has_relaxed "${meta}"; then
- printf "#define raw_${atomicname} arch_${atomicname}\n\n"
+ printf "\t${retstmt}arch_${atomicname}(${args});\n"
+ printf "}\n\n"
return
fi
printf "#if defined(arch_${atomicname})\n"
- printf "#define raw_${atomicname} arch_${atomicname}\n"
+ printf "\t${retstmt}arch_${atomicname}(${args});\n"
# Allow FULL/ACQUIRE/RELEASE ops to be defined in terms of RELAXED ops
if [ "${order}" != "_relaxed" ] && meta_has_relaxed "${meta}"; then
printf "#elif defined(arch_${basename}_relaxed)\n"
- gen_order_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "$@"
+ gen_order_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "${atomic}" "${int}" "$@"
fi
# Allow ACQUIRE/RELEASE/RELAXED ops to be defined in terms of FULL ops
if [ ! -z "${order}" ]; then
printf "#elif defined(arch_${basename})\n"
- printf "#define raw_${atomicname} arch_${basename}\n"
+ printf "\t${retstmt}arch_${basename}(${args});\n"
fi
printf "#else\n"
if [ ! -z "${template}" ]; then
- gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "$@"
+ gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "${atomic}" "${int}" "$@"
else
printf "#error \"Unable to define raw_${atomicname}\"\n"
fi
- printf "#endif\n\n"
+ printf "#endif\n"
+ printf "}\n\n"
}
--
2.30.2
Hexagon's implementation of arch_atomic_cmpxchg() is identical to its
implementation of arch_cmpxchg(). Have it define arch_atomic_cmpxchg()
in terms of arch_cmpxchg(), matching what it does for arch_atomic_xchg()
and arch_xchg().
At the same time, remove the kerneldoc comments for hexagon's
arch_atomic_xchg() and arch_atomic_cmpxchg(). The arch_atomic_*()
namespace is shared by all architectures, so the API should be
documented centrally; the existing comments aren't all that helpful as-is.
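As a brief aside for readers unfamiliar with the semantics the deleted
comment described: the caller passes in the expected old value and gets
back the value actually observed. A hypothetical, standalone usage
sketch (not part of this patch, using the generic atomic_*() API; the
function name is made up for illustration):
| static inline bool inc_if_positive(atomic_t *v)
| {
| 	int old = atomic_read(v);
|
| 	while (old > 0) {
| 		int prev = atomic_cmpxchg(v, old, old + 1);
|
| 		if (prev == old)	/* the exchange happened */
| 			return true;
| 		old = prev;		/* raced; retry with the observed value */
| 	}
| 	return false;
| }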
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/hexagon/include/asm/atomic.h | 46 +++----------------------------
1 file changed, 4 insertions(+), 42 deletions(-)
diff --git a/arch/hexagon/include/asm/atomic.h b/arch/hexagon/include/asm/atomic.h
index 6e94f8d04146f..738857e10d6ec 100644
--- a/arch/hexagon/include/asm/atomic.h
+++ b/arch/hexagon/include/asm/atomic.h
@@ -36,49 +36,11 @@ static inline void arch_atomic_set(atomic_t *v, int new)
*/
#define arch_atomic_read(v) READ_ONCE((v)->counter)
-/**
- * arch_atomic_xchg - atomic
- * @v: pointer to memory to change
- * @new: new value (technically passed in a register -- see xchg)
- */
-#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), (new)))
-
-
-/**
- * arch_atomic_cmpxchg - atomic compare-and-exchange values
- * @v: pointer to value to change
- * @old: desired old value to match
- * @new: new value to put in
- *
- * Parameters are then pointer, value-in-register, value-in-register,
- * and the output is the old value.
- *
- * Apparently this is complicated for archs that don't support
- * the memw_locked like we do (or it's broken or whatever).
- *
- * Kind of the lynchpin of the rest of the generically defined routines.
- * Remember V2 had that bug with dotnew predicate set by memw_locked.
- *
- * "old" is "expected" old val, __oldval is actual old value
- */
-static inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
-{
- int __oldval;
+#define arch_atomic_xchg(v, new) \
+ (arch_xchg(&((v)->counter), (new)))
- asm volatile(
- "1: %0 = memw_locked(%1);\n"
- " { P0 = cmp.eq(%0,%2);\n"
- " if (!P0.new) jump:nt 2f; }\n"
- " memw_locked(%1,P0) = %3;\n"
- " if (!P0) jump 1b;\n"
- "2:\n"
- : "=&r" (__oldval)
- : "r" (&v->counter), "r" (old), "r" (new)
- : "memory", "p0"
- );
-
- return __oldval;
-}
+#define arch_atomic_cmpxchg(v, old, new) \
+ (arch_cmpxchg(&((v)->counter), (old), (new)))
#define ATOMIC_OP(op) \
static inline void arch_atomic_##op(int i, atomic_t *v) \
--
2.30.2
At the start of gen_proto_order_variants(), the ${order} variable has
not yet been defined, so it is substituted with an empty string.
Replace this bogus use of ${order} with an explicit empty string instead.
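As a standalone shell sketch (not the kernel script) of why the two
spellings are equivalent:
| #!/bin/sh
| # An undefined variable expands to the empty string, so passing
| # "${order}" here is the same as passing "".
| unset order
| [ "${order}" = "" ] && echo '${order} expands to the empty string'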
This results in no change to the generated headers.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
---
scripts/atomic/gen-atomic-fallback.sh | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/scripts/atomic/gen-atomic-fallback.sh b/scripts/atomic/gen-atomic-fallback.sh
index a70acd548fcd8..7a6bcea8f565b 100755
--- a/scripts/atomic/gen-atomic-fallback.sh
+++ b/scripts/atomic/gen-atomic-fallback.sh
@@ -81,7 +81,7 @@ gen_proto_order_variants()
local basename="arch_${atomic}_${pfx}${name}${sfx}"
- local template="$(find_fallback_template "${pfx}" "${name}" "${sfx}" "${order}")"
+ local template="$(find_fallback_template "${pfx}" "${name}" "${sfx}" "")"
# If we don't have relaxed atomics, then we don't bother with ordering fallbacks
# read_acquire and set_release need to be templated, though
--
2.30.2
Currently the full/acquire/release/relaxed ordering variants of an
atomic operation are defined as a group under shared ifdeffery, with
several potential definitions of each variant down different branches
of that ifdeffery.
This makes it painful for a human to find the relevant definition of a
given variant, and leaves no single location to place anything common
to all of its definitions (e.g. kerneldoc).
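Schematically (mirroring the shape of the xchg/cmpxchg blocks removed
below), the grouped form looks like:
| #ifndef arch_atomic_<op>_relaxed
| #define arch_atomic_<op>_acquire arch_atomic_<op>
| #define arch_atomic_<op>_release arch_atomic_<op>
| #define arch_atomic_<op>_relaxed arch_atomic_<op>
| #else /* arch_atomic_<op>_relaxed */
| #ifndef arch_atomic_<op>_acquire
| /* ... acquire fallback built from the relaxed op ... */
| #endif
| /* ... likewise for the release and fully-ordered variants ... */
| #endif /* arch_atomic_<op>_relaxed */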
Historically the grouping of full/acquire/release/relaxed ordering
variants was necessary as we filled in the missing atomics in the same
namespace as the architecture used. It would be easy to accidentally
define one ordering fallback in terms of another ordering fallback with
redundant barriers, and avoiding that would otherwise require a lot of
baroque ifdeffery.
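For example, a hypothetical (not generated) acquire fallback naively
built on top of a fully-ordered fallback would end up with a redundant
barrier:
| static __always_inline int
| bad_atomic_inc_return_acquire(atomic_t *v)
| {
| 	int ret = arch_atomic_inc_return(v);	/* already fully ordered */
| 	__atomic_acquire_fence();		/* redundant extra barrier */
| 	return ret;
| }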
With recent changes we no longer need to fill in the missing atomics in
the arch_atomic*_<op>() namespace, and only need to fill in the
raw_atomic*_<op>() namespace. Due to this, there's no risk of a
namespace collision, and we can define each raw_atomic*_<op> ordering
variant with its own ifdeffery checking for the arch_atomic*_<op>
ordering variants.
Restructure the fallbacks in this way, with each ordering variant having
its own ifdeffery of the form:
| #if defined(arch_atomic_fetch_andnot_acquire)
| #define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot_acquire
| #elif defined(arch_atomic_fetch_andnot_relaxed)
| static __always_inline int
| raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
| {
| int ret = arch_atomic_fetch_andnot_relaxed(i, v);
| __atomic_acquire_fence();
| return ret;
| }
| #elif defined(arch_atomic_fetch_andnot)
| #define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot
| #else
| static __always_inline int
| raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
| {
| return raw_atomic_fetch_and_acquire(~i, v);
| }
| #endif
Note that where there's no relevant arch_atomic*_<op>() ordering
variant, we'll define the operation in terms of a distinct
raw_atomic*_<otherop>(), as this itself might have been filled in with a
fallback.
As we now generate the raw_atomic*_<op>() implementations directly, we
no longer need the trivial wrappers, so they are removed.
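For reference, the removed wrappers in atomic-raw.h were of roughly
this shape, simply forwarding to the same-named arch_atomic*_<op>():
| static __always_inline int
| raw_atomic_inc_return_acquire(atomic_t *v)
| {
| 	return arch_atomic_inc_return_acquire(v);
| }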
This makes the ifdeffery easier to follow, and will allow for further
improvements in subsequent patches.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
---
include/linux/atomic.h | 1 -
include/linux/atomic/atomic-arch-fallback.h | 3178 +++++++++---------
include/linux/atomic/atomic-raw.h | 1135 -------
scripts/atomic/fallbacks/acquire | 2 +-
scripts/atomic/fallbacks/add_negative | 4 +-
scripts/atomic/fallbacks/add_unless | 4 +-
scripts/atomic/fallbacks/andnot | 4 +-
scripts/atomic/fallbacks/cmpxchg | 4 +-
scripts/atomic/fallbacks/dec | 4 +-
scripts/atomic/fallbacks/dec_and_test | 4 +-
scripts/atomic/fallbacks/dec_if_positive | 6 +-
scripts/atomic/fallbacks/dec_unless_positive | 6 +-
scripts/atomic/fallbacks/fence | 2 +-
scripts/atomic/fallbacks/fetch_add_unless | 6 +-
scripts/atomic/fallbacks/inc | 4 +-
scripts/atomic/fallbacks/inc_and_test | 4 +-
scripts/atomic/fallbacks/inc_not_zero | 4 +-
scripts/atomic/fallbacks/inc_unless_negative | 6 +-
scripts/atomic/fallbacks/read_acquire | 4 +-
scripts/atomic/fallbacks/release | 2 +-
scripts/atomic/fallbacks/set_release | 4 +-
scripts/atomic/fallbacks/sub_and_test | 4 +-
scripts/atomic/fallbacks/try_cmpxchg | 4 +-
scripts/atomic/fallbacks/xchg | 4 +-
scripts/atomic/gen-atomic-fallback.sh | 236 +-
scripts/atomic/gen-atomic-raw.sh | 80 -
scripts/atomic/gen-atomics.sh | 1 -
27 files changed, 1866 insertions(+), 2851 deletions(-)
delete mode 100644 include/linux/atomic/atomic-raw.h
delete mode 100755 scripts/atomic/gen-atomic-raw.sh
diff --git a/include/linux/atomic.h b/include/linux/atomic.h
index 296cfae0389fe..8dd57c3a99e9b 100644
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -78,7 +78,6 @@
})
#include <linux/atomic/atomic-arch-fallback.h>
-#include <linux/atomic/atomic-raw.h>
#include <linux/atomic/atomic-long.h>
#include <linux/atomic/atomic-instrumented.h>
diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index 1a2d81dbc2e48..99bc1a871dc12 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -8,2749 +8,2911 @@
#include <linux/compiler.h>
-#ifndef arch_xchg_relaxed
-#define arch_xchg_acquire arch_xchg
-#define arch_xchg_release arch_xchg
-#define arch_xchg_relaxed arch_xchg
-#else /* arch_xchg_relaxed */
-
-#ifndef arch_xchg_acquire
-#define arch_xchg_acquire(...) \
- __atomic_op_acquire(arch_xchg, __VA_ARGS__)
+#if defined(arch_xchg)
+#define raw_xchg arch_xchg
+#elif defined(arch_xchg_relaxed)
+#define raw_xchg(...) \
+ __atomic_op_fence(arch_xchg, __VA_ARGS__)
+#else
+extern void raw_xchg_not_implemented(void);
+#define raw_xchg(...) raw_xchg_not_implemented()
#endif
-#ifndef arch_xchg_release
-#define arch_xchg_release(...) \
- __atomic_op_release(arch_xchg, __VA_ARGS__)
+#if defined(arch_xchg_acquire)
+#define raw_xchg_acquire arch_xchg_acquire
+#elif defined(arch_xchg_relaxed)
+#define raw_xchg_acquire(...) \
+ __atomic_op_acquire(arch_xchg, __VA_ARGS__)
+#elif defined(arch_xchg)
+#define raw_xchg_acquire arch_xchg
+#else
+extern void raw_xchg_acquire_not_implemented(void);
+#define raw_xchg_acquire(...) raw_xchg_acquire_not_implemented()
#endif
-#ifndef arch_xchg
-#define arch_xchg(...) \
- __atomic_op_fence(arch_xchg, __VA_ARGS__)
+#if defined(arch_xchg_release)
+#define raw_xchg_release arch_xchg_release
+#elif defined(arch_xchg_relaxed)
+#define raw_xchg_release(...) \
+ __atomic_op_release(arch_xchg, __VA_ARGS__)
+#elif defined(arch_xchg)
+#define raw_xchg_release arch_xchg
+#else
+extern void raw_xchg_release_not_implemented(void);
+#define raw_xchg_release(...) raw_xchg_release_not_implemented()
+#endif
+
+#if defined(arch_xchg_relaxed)
+#define raw_xchg_relaxed arch_xchg_relaxed
+#elif defined(arch_xchg)
+#define raw_xchg_relaxed arch_xchg
+#else
+extern void raw_xchg_relaxed_not_implemented(void);
+#define raw_xchg_relaxed(...) raw_xchg_relaxed_not_implemented()
+#endif
+
+#if defined(arch_cmpxchg)
+#define raw_cmpxchg arch_cmpxchg
+#elif defined(arch_cmpxchg_relaxed)
+#define raw_cmpxchg(...) \
+ __atomic_op_fence(arch_cmpxchg, __VA_ARGS__)
+#else
+extern void raw_cmpxchg_not_implemented(void);
+#define raw_cmpxchg(...) raw_cmpxchg_not_implemented()
#endif
-#endif /* arch_xchg_relaxed */
-
-#ifndef arch_cmpxchg_relaxed
-#define arch_cmpxchg_acquire arch_cmpxchg
-#define arch_cmpxchg_release arch_cmpxchg
-#define arch_cmpxchg_relaxed arch_cmpxchg
-#else /* arch_cmpxchg_relaxed */
-
-#ifndef arch_cmpxchg_acquire
-#define arch_cmpxchg_acquire(...) \
+#if defined(arch_cmpxchg_acquire)
+#define raw_cmpxchg_acquire arch_cmpxchg_acquire
+#elif defined(arch_cmpxchg_relaxed)
+#define raw_cmpxchg_acquire(...) \
__atomic_op_acquire(arch_cmpxchg, __VA_ARGS__)
+#elif defined(arch_cmpxchg)
+#define raw_cmpxchg_acquire arch_cmpxchg
+#else
+extern void raw_cmpxchg_acquire_not_implemented(void);
+#define raw_cmpxchg_acquire(...) raw_cmpxchg_acquire_not_implemented()
#endif
-#ifndef arch_cmpxchg_release
-#define arch_cmpxchg_release(...) \
+#if defined(arch_cmpxchg_release)
+#define raw_cmpxchg_release arch_cmpxchg_release
+#elif defined(arch_cmpxchg_relaxed)
+#define raw_cmpxchg_release(...) \
__atomic_op_release(arch_cmpxchg, __VA_ARGS__)
+#elif defined(arch_cmpxchg)
+#define raw_cmpxchg_release arch_cmpxchg
+#else
+extern void raw_cmpxchg_release_not_implemented(void);
+#define raw_cmpxchg_release(...) raw_cmpxchg_release_not_implemented()
+#endif
+
+#if defined(arch_cmpxchg_relaxed)
+#define raw_cmpxchg_relaxed arch_cmpxchg_relaxed
+#elif defined(arch_cmpxchg)
+#define raw_cmpxchg_relaxed arch_cmpxchg
+#else
+extern void raw_cmpxchg_relaxed_not_implemented(void);
+#define raw_cmpxchg_relaxed(...) raw_cmpxchg_relaxed_not_implemented()
+#endif
+
+#if defined(arch_cmpxchg64)
+#define raw_cmpxchg64 arch_cmpxchg64
+#elif defined(arch_cmpxchg64_relaxed)
+#define raw_cmpxchg64(...) \
+ __atomic_op_fence(arch_cmpxchg64, __VA_ARGS__)
+#else
+extern void raw_cmpxchg64_not_implemented(void);
+#define raw_cmpxchg64(...) raw_cmpxchg64_not_implemented()
#endif
-#ifndef arch_cmpxchg
-#define arch_cmpxchg(...) \
- __atomic_op_fence(arch_cmpxchg, __VA_ARGS__)
-#endif
-
-#endif /* arch_cmpxchg_relaxed */
-
-#ifndef arch_cmpxchg64_relaxed
-#define arch_cmpxchg64_acquire arch_cmpxchg64
-#define arch_cmpxchg64_release arch_cmpxchg64
-#define arch_cmpxchg64_relaxed arch_cmpxchg64
-#else /* arch_cmpxchg64_relaxed */
-
-#ifndef arch_cmpxchg64_acquire
-#define arch_cmpxchg64_acquire(...) \
+#if defined(arch_cmpxchg64_acquire)
+#define raw_cmpxchg64_acquire arch_cmpxchg64_acquire
+#elif defined(arch_cmpxchg64_relaxed)
+#define raw_cmpxchg64_acquire(...) \
__atomic_op_acquire(arch_cmpxchg64, __VA_ARGS__)
+#elif defined(arch_cmpxchg64)
+#define raw_cmpxchg64_acquire arch_cmpxchg64
+#else
+extern void raw_cmpxchg64_acquire_not_implemented(void);
+#define raw_cmpxchg64_acquire(...) raw_cmpxchg64_acquire_not_implemented()
#endif
-#ifndef arch_cmpxchg64_release
-#define arch_cmpxchg64_release(...) \
+#if defined(arch_cmpxchg64_release)
+#define raw_cmpxchg64_release arch_cmpxchg64_release
+#elif defined(arch_cmpxchg64_relaxed)
+#define raw_cmpxchg64_release(...) \
__atomic_op_release(arch_cmpxchg64, __VA_ARGS__)
+#elif defined(arch_cmpxchg64)
+#define raw_cmpxchg64_release arch_cmpxchg64
+#else
+extern void raw_cmpxchg64_release_not_implemented(void);
+#define raw_cmpxchg64_release(...) raw_cmpxchg64_release_not_implemented()
+#endif
+
+#if defined(arch_cmpxchg64_relaxed)
+#define raw_cmpxchg64_relaxed arch_cmpxchg64_relaxed
+#elif defined(arch_cmpxchg64)
+#define raw_cmpxchg64_relaxed arch_cmpxchg64
+#else
+extern void raw_cmpxchg64_relaxed_not_implemented(void);
+#define raw_cmpxchg64_relaxed(...) raw_cmpxchg64_relaxed_not_implemented()
+#endif
+
+#if defined(arch_cmpxchg128)
+#define raw_cmpxchg128 arch_cmpxchg128
+#elif defined(arch_cmpxchg128_relaxed)
+#define raw_cmpxchg128(...) \
+ __atomic_op_fence(arch_cmpxchg128, __VA_ARGS__)
+#else
+extern void raw_cmpxchg128_not_implemented(void);
+#define raw_cmpxchg128(...) raw_cmpxchg128_not_implemented()
#endif
-#ifndef arch_cmpxchg64
-#define arch_cmpxchg64(...) \
- __atomic_op_fence(arch_cmpxchg64, __VA_ARGS__)
-#endif
-
-#endif /* arch_cmpxchg64_relaxed */
-
-#ifndef arch_cmpxchg128_relaxed
-#define arch_cmpxchg128_acquire arch_cmpxchg128
-#define arch_cmpxchg128_release arch_cmpxchg128
-#define arch_cmpxchg128_relaxed arch_cmpxchg128
-#else /* arch_cmpxchg128_relaxed */
-
-#ifndef arch_cmpxchg128_acquire
-#define arch_cmpxchg128_acquire(...) \
+#if defined(arch_cmpxchg128_acquire)
+#define raw_cmpxchg128_acquire arch_cmpxchg128_acquire
+#elif defined(arch_cmpxchg128_relaxed)
+#define raw_cmpxchg128_acquire(...) \
__atomic_op_acquire(arch_cmpxchg128, __VA_ARGS__)
+#elif defined(arch_cmpxchg128)
+#define raw_cmpxchg128_acquire arch_cmpxchg128
+#else
+extern void raw_cmpxchg128_acquire_not_implemented(void);
+#define raw_cmpxchg128_acquire(...) raw_cmpxchg128_acquire_not_implemented()
#endif
-#ifndef arch_cmpxchg128_release
-#define arch_cmpxchg128_release(...) \
+#if defined(arch_cmpxchg128_release)
+#define raw_cmpxchg128_release arch_cmpxchg128_release
+#elif defined(arch_cmpxchg128_relaxed)
+#define raw_cmpxchg128_release(...) \
__atomic_op_release(arch_cmpxchg128, __VA_ARGS__)
-#endif
-
-#ifndef arch_cmpxchg128
-#define arch_cmpxchg128(...) \
- __atomic_op_fence(arch_cmpxchg128, __VA_ARGS__)
-#endif
-
-#endif /* arch_cmpxchg128_relaxed */
-
-#ifndef arch_try_cmpxchg_relaxed
-#ifdef arch_try_cmpxchg
-#define arch_try_cmpxchg_acquire arch_try_cmpxchg
-#define arch_try_cmpxchg_release arch_try_cmpxchg
-#define arch_try_cmpxchg_relaxed arch_try_cmpxchg
-#endif /* arch_try_cmpxchg */
-
-#ifndef arch_try_cmpxchg
-#define arch_try_cmpxchg(_ptr, _oldp, _new) \
+#elif defined(arch_cmpxchg128)
+#define raw_cmpxchg128_release arch_cmpxchg128
+#else
+extern void raw_cmpxchg128_release_not_implemented(void);
+#define raw_cmpxchg128_release(...) raw_cmpxchg128_release_not_implemented()
+#endif
+
+#if defined(arch_cmpxchg128_relaxed)
+#define raw_cmpxchg128_relaxed arch_cmpxchg128_relaxed
+#elif defined(arch_cmpxchg128)
+#define raw_cmpxchg128_relaxed arch_cmpxchg128
+#else
+extern void raw_cmpxchg128_relaxed_not_implemented(void);
+#define raw_cmpxchg128_relaxed(...) raw_cmpxchg128_relaxed_not_implemented()
+#endif
+
+#if defined(arch_try_cmpxchg)
+#define raw_try_cmpxchg arch_try_cmpxchg
+#elif defined(arch_try_cmpxchg_relaxed)
+#define raw_try_cmpxchg(...) \
+ __atomic_op_fence(arch_try_cmpxchg, __VA_ARGS__)
+#else
+#define raw_try_cmpxchg(_ptr, _oldp, _new) \
({ \
typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
- ___r = arch_cmpxchg((_ptr), ___o, (_new)); \
+ ___r = raw_cmpxchg((_ptr), ___o, (_new)); \
if (unlikely(___r != ___o)) \
*___op = ___r; \
likely(___r == ___o); \
})
-#endif /* arch_try_cmpxchg */
+#endif
-#ifndef arch_try_cmpxchg_acquire
-#define arch_try_cmpxchg_acquire(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg_acquire)
+#define raw_try_cmpxchg_acquire arch_try_cmpxchg_acquire
+#elif defined(arch_try_cmpxchg_relaxed)
+#define raw_try_cmpxchg_acquire(...) \
+ __atomic_op_acquire(arch_try_cmpxchg, __VA_ARGS__)
+#elif defined(arch_try_cmpxchg)
+#define raw_try_cmpxchg_acquire arch_try_cmpxchg
+#else
+#define raw_try_cmpxchg_acquire(_ptr, _oldp, _new) \
({ \
typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
- ___r = arch_cmpxchg_acquire((_ptr), ___o, (_new)); \
+ ___r = raw_cmpxchg_acquire((_ptr), ___o, (_new)); \
if (unlikely(___r != ___o)) \
*___op = ___r; \
likely(___r == ___o); \
})
-#endif /* arch_try_cmpxchg_acquire */
+#endif
-#ifndef arch_try_cmpxchg_release
-#define arch_try_cmpxchg_release(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg_release)
+#define raw_try_cmpxchg_release arch_try_cmpxchg_release
+#elif defined(arch_try_cmpxchg_relaxed)
+#define raw_try_cmpxchg_release(...) \
+ __atomic_op_release(arch_try_cmpxchg, __VA_ARGS__)
+#elif defined(arch_try_cmpxchg)
+#define raw_try_cmpxchg_release arch_try_cmpxchg
+#else
+#define raw_try_cmpxchg_release(_ptr, _oldp, _new) \
({ \
typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
- ___r = arch_cmpxchg_release((_ptr), ___o, (_new)); \
+ ___r = raw_cmpxchg_release((_ptr), ___o, (_new)); \
if (unlikely(___r != ___o)) \
*___op = ___r; \
likely(___r == ___o); \
})
-#endif /* arch_try_cmpxchg_release */
+#endif
-#ifndef arch_try_cmpxchg_relaxed
-#define arch_try_cmpxchg_relaxed(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg_relaxed)
+#define raw_try_cmpxchg_relaxed arch_try_cmpxchg_relaxed
+#elif defined(arch_try_cmpxchg)
+#define raw_try_cmpxchg_relaxed arch_try_cmpxchg
+#else
+#define raw_try_cmpxchg_relaxed(_ptr, _oldp, _new) \
({ \
typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
- ___r = arch_cmpxchg_relaxed((_ptr), ___o, (_new)); \
+ ___r = raw_cmpxchg_relaxed((_ptr), ___o, (_new)); \
if (unlikely(___r != ___o)) \
*___op = ___r; \
likely(___r == ___o); \
})
-#endif /* arch_try_cmpxchg_relaxed */
-
-#else /* arch_try_cmpxchg_relaxed */
-
-#ifndef arch_try_cmpxchg_acquire
-#define arch_try_cmpxchg_acquire(...) \
- __atomic_op_acquire(arch_try_cmpxchg, __VA_ARGS__)
-#endif
-
-#ifndef arch_try_cmpxchg_release
-#define arch_try_cmpxchg_release(...) \
- __atomic_op_release(arch_try_cmpxchg, __VA_ARGS__)
#endif
-#ifndef arch_try_cmpxchg
-#define arch_try_cmpxchg(...) \
- __atomic_op_fence(arch_try_cmpxchg, __VA_ARGS__)
-#endif
-
-#endif /* arch_try_cmpxchg_relaxed */
-
-#ifndef arch_try_cmpxchg64_relaxed
-#ifdef arch_try_cmpxchg64
-#define arch_try_cmpxchg64_acquire arch_try_cmpxchg64
-#define arch_try_cmpxchg64_release arch_try_cmpxchg64
-#define arch_try_cmpxchg64_relaxed arch_try_cmpxchg64
-#endif /* arch_try_cmpxchg64 */
-
-#ifndef arch_try_cmpxchg64
-#define arch_try_cmpxchg64(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg64)
+#define raw_try_cmpxchg64 arch_try_cmpxchg64
+#elif defined(arch_try_cmpxchg64_relaxed)
+#define raw_try_cmpxchg64(...) \
+ __atomic_op_fence(arch_try_cmpxchg64, __VA_ARGS__)
+#else
+#define raw_try_cmpxchg64(_ptr, _oldp, _new) \
({ \
typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
- ___r = arch_cmpxchg64((_ptr), ___o, (_new)); \
+ ___r = raw_cmpxchg64((_ptr), ___o, (_new)); \
if (unlikely(___r != ___o)) \
*___op = ___r; \
likely(___r == ___o); \
})
-#endif /* arch_try_cmpxchg64 */
+#endif
-#ifndef arch_try_cmpxchg64_acquire
-#define arch_try_cmpxchg64_acquire(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg64_acquire)
+#define raw_try_cmpxchg64_acquire arch_try_cmpxchg64_acquire
+#elif defined(arch_try_cmpxchg64_relaxed)
+#define raw_try_cmpxchg64_acquire(...) \
+ __atomic_op_acquire(arch_try_cmpxchg64, __VA_ARGS__)
+#elif defined(arch_try_cmpxchg64)
+#define raw_try_cmpxchg64_acquire arch_try_cmpxchg64
+#else
+#define raw_try_cmpxchg64_acquire(_ptr, _oldp, _new) \
({ \
typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
- ___r = arch_cmpxchg64_acquire((_ptr), ___o, (_new)); \
+ ___r = raw_cmpxchg64_acquire((_ptr), ___o, (_new)); \
if (unlikely(___r != ___o)) \
*___op = ___r; \
likely(___r == ___o); \
})
-#endif /* arch_try_cmpxchg64_acquire */
+#endif
-#ifndef arch_try_cmpxchg64_release
-#define arch_try_cmpxchg64_release(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg64_release)
+#define raw_try_cmpxchg64_release arch_try_cmpxchg64_release
+#elif defined(arch_try_cmpxchg64_relaxed)
+#define raw_try_cmpxchg64_release(...) \
+ __atomic_op_release(arch_try_cmpxchg64, __VA_ARGS__)
+#elif defined(arch_try_cmpxchg64)
+#define raw_try_cmpxchg64_release arch_try_cmpxchg64
+#else
+#define raw_try_cmpxchg64_release(_ptr, _oldp, _new) \
({ \
typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
- ___r = arch_cmpxchg64_release((_ptr), ___o, (_new)); \
+ ___r = raw_cmpxchg64_release((_ptr), ___o, (_new)); \
if (unlikely(___r != ___o)) \
*___op = ___r; \
likely(___r == ___o); \
})
-#endif /* arch_try_cmpxchg64_release */
+#endif
-#ifndef arch_try_cmpxchg64_relaxed
-#define arch_try_cmpxchg64_relaxed(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg64_relaxed)
+#define raw_try_cmpxchg64_relaxed arch_try_cmpxchg64_relaxed
+#elif defined(arch_try_cmpxchg64)
+#define raw_try_cmpxchg64_relaxed arch_try_cmpxchg64
+#else
+#define raw_try_cmpxchg64_relaxed(_ptr, _oldp, _new) \
({ \
typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
- ___r = arch_cmpxchg64_relaxed((_ptr), ___o, (_new)); \
+ ___r = raw_cmpxchg64_relaxed((_ptr), ___o, (_new)); \
if (unlikely(___r != ___o)) \
*___op = ___r; \
likely(___r == ___o); \
})
-#endif /* arch_try_cmpxchg64_relaxed */
-
-#else /* arch_try_cmpxchg64_relaxed */
-
-#ifndef arch_try_cmpxchg64_acquire
-#define arch_try_cmpxchg64_acquire(...) \
- __atomic_op_acquire(arch_try_cmpxchg64, __VA_ARGS__)
-#endif
-
-#ifndef arch_try_cmpxchg64_release
-#define arch_try_cmpxchg64_release(...) \
- __atomic_op_release(arch_try_cmpxchg64, __VA_ARGS__)
-#endif
-
-#ifndef arch_try_cmpxchg64
-#define arch_try_cmpxchg64(...) \
- __atomic_op_fence(arch_try_cmpxchg64, __VA_ARGS__)
#endif
-#endif /* arch_try_cmpxchg64_relaxed */
-
-#ifndef arch_try_cmpxchg128_relaxed
-#ifdef arch_try_cmpxchg128
-#define arch_try_cmpxchg128_acquire arch_try_cmpxchg128
-#define arch_try_cmpxchg128_release arch_try_cmpxchg128
-#define arch_try_cmpxchg128_relaxed arch_try_cmpxchg128
-#endif /* arch_try_cmpxchg128 */
-
-#ifndef arch_try_cmpxchg128
-#define arch_try_cmpxchg128(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg128)
+#define raw_try_cmpxchg128 arch_try_cmpxchg128
+#elif defined(arch_try_cmpxchg128_relaxed)
+#define raw_try_cmpxchg128(...) \
+ __atomic_op_fence(arch_try_cmpxchg128, __VA_ARGS__)
+#else
+#define raw_try_cmpxchg128(_ptr, _oldp, _new) \
({ \
typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
- ___r = arch_cmpxchg128((_ptr), ___o, (_new)); \
+ ___r = raw_cmpxchg128((_ptr), ___o, (_new)); \
if (unlikely(___r != ___o)) \
*___op = ___r; \
likely(___r == ___o); \
})
-#endif /* arch_try_cmpxchg128 */
+#endif
-#ifndef arch_try_cmpxchg128_acquire
-#define arch_try_cmpxchg128_acquire(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg128_acquire)
+#define raw_try_cmpxchg128_acquire arch_try_cmpxchg128_acquire
+#elif defined(arch_try_cmpxchg128_relaxed)
+#define raw_try_cmpxchg128_acquire(...) \
+ __atomic_op_acquire(arch_try_cmpxchg128, __VA_ARGS__)
+#elif defined(arch_try_cmpxchg128)
+#define raw_try_cmpxchg128_acquire arch_try_cmpxchg128
+#else
+#define raw_try_cmpxchg128_acquire(_ptr, _oldp, _new) \
({ \
typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
- ___r = arch_cmpxchg128_acquire((_ptr), ___o, (_new)); \
+ ___r = raw_cmpxchg128_acquire((_ptr), ___o, (_new)); \
if (unlikely(___r != ___o)) \
*___op = ___r; \
likely(___r == ___o); \
})
-#endif /* arch_try_cmpxchg128_acquire */
+#endif
-#ifndef arch_try_cmpxchg128_release
-#define arch_try_cmpxchg128_release(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg128_release)
+#define raw_try_cmpxchg128_release arch_try_cmpxchg128_release
+#elif defined(arch_try_cmpxchg128_relaxed)
+#define raw_try_cmpxchg128_release(...) \
+ __atomic_op_release(arch_try_cmpxchg128, __VA_ARGS__)
+#elif defined(arch_try_cmpxchg128)
+#define raw_try_cmpxchg128_release arch_try_cmpxchg128
+#else
+#define raw_try_cmpxchg128_release(_ptr, _oldp, _new) \
({ \
typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
- ___r = arch_cmpxchg128_release((_ptr), ___o, (_new)); \
+ ___r = raw_cmpxchg128_release((_ptr), ___o, (_new)); \
if (unlikely(___r != ___o)) \
*___op = ___r; \
likely(___r == ___o); \
})
-#endif /* arch_try_cmpxchg128_release */
+#endif
-#ifndef arch_try_cmpxchg128_relaxed
-#define arch_try_cmpxchg128_relaxed(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg128_relaxed)
+#define raw_try_cmpxchg128_relaxed arch_try_cmpxchg128_relaxed
+#elif defined(arch_try_cmpxchg128)
+#define raw_try_cmpxchg128_relaxed arch_try_cmpxchg128
+#else
+#define raw_try_cmpxchg128_relaxed(_ptr, _oldp, _new) \
({ \
typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
- ___r = arch_cmpxchg128_relaxed((_ptr), ___o, (_new)); \
+ ___r = raw_cmpxchg128_relaxed((_ptr), ___o, (_new)); \
if (unlikely(___r != ___o)) \
*___op = ___r; \
likely(___r == ___o); \
})
-#endif /* arch_try_cmpxchg128_relaxed */
-
-#else /* arch_try_cmpxchg128_relaxed */
-
-#ifndef arch_try_cmpxchg128_acquire
-#define arch_try_cmpxchg128_acquire(...) \
- __atomic_op_acquire(arch_try_cmpxchg128, __VA_ARGS__)
#endif
-#ifndef arch_try_cmpxchg128_release
-#define arch_try_cmpxchg128_release(...) \
- __atomic_op_release(arch_try_cmpxchg128, __VA_ARGS__)
-#endif
+#define raw_cmpxchg_local arch_cmpxchg_local
-#ifndef arch_try_cmpxchg128
-#define arch_try_cmpxchg128(...) \
- __atomic_op_fence(arch_try_cmpxchg128, __VA_ARGS__)
+#ifdef arch_try_cmpxchg_local
+#define raw_try_cmpxchg_local arch_try_cmpxchg_local
+#else
+#define raw_try_cmpxchg_local(_ptr, _oldp, _new) \
+({ \
+ typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
+ ___r = raw_cmpxchg_local((_ptr), ___o, (_new)); \
+ if (unlikely(___r != ___o)) \
+ *___op = ___r; \
+ likely(___r == ___o); \
+})
#endif
-#endif /* arch_try_cmpxchg128_relaxed */
+#define raw_cmpxchg64_local arch_cmpxchg64_local
-#ifndef arch_try_cmpxchg_local
-#define arch_try_cmpxchg_local(_ptr, _oldp, _new) \
+#ifdef arch_try_cmpxchg64_local
+#define raw_try_cmpxchg64_local arch_try_cmpxchg64_local
+#else
+#define raw_try_cmpxchg64_local(_ptr, _oldp, _new) \
({ \
typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
- ___r = arch_cmpxchg_local((_ptr), ___o, (_new)); \
+ ___r = raw_cmpxchg64_local((_ptr), ___o, (_new)); \
if (unlikely(___r != ___o)) \
*___op = ___r; \
likely(___r == ___o); \
})
-#endif /* arch_try_cmpxchg_local */
+#endif
+
+#define raw_cmpxchg128_local arch_cmpxchg128_local
-#ifndef arch_try_cmpxchg64_local
-#define arch_try_cmpxchg64_local(_ptr, _oldp, _new) \
+#ifdef arch_try_cmpxchg128_local
+#define raw_try_cmpxchg128_local arch_try_cmpxchg128_local
+#else
+#define raw_try_cmpxchg128_local(_ptr, _oldp, _new) \
({ \
typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
- ___r = arch_cmpxchg64_local((_ptr), ___o, (_new)); \
+ ___r = raw_cmpxchg128_local((_ptr), ___o, (_new)); \
if (unlikely(___r != ___o)) \
*___op = ___r; \
likely(___r == ___o); \
})
-#endif /* arch_try_cmpxchg64_local */
+#endif
+
+#define raw_sync_cmpxchg arch_sync_cmpxchg
-#ifndef arch_atomic_read_acquire
+#define raw_atomic_read arch_atomic_read
+
+#if defined(arch_atomic_read_acquire)
+#define raw_atomic_read_acquire arch_atomic_read_acquire
+#elif defined(arch_atomic_read)
+#define raw_atomic_read_acquire arch_atomic_read
+#else
static __always_inline int
-arch_atomic_read_acquire(const atomic_t *v)
+raw_atomic_read_acquire(const atomic_t *v)
{
int ret;
if (__native_word(atomic_t)) {
ret = smp_load_acquire(&(v)->counter);
} else {
- ret = arch_atomic_read(v);
+ ret = raw_atomic_read(v);
__atomic_acquire_fence();
}
return ret;
}
-#define arch_atomic_read_acquire arch_atomic_read_acquire
#endif
-#ifndef arch_atomic_set_release
+#define raw_atomic_set arch_atomic_set
+
+#if defined(arch_atomic_set_release)
+#define raw_atomic_set_release arch_atomic_set_release
+#elif defined(arch_atomic_set)
+#define raw_atomic_set_release arch_atomic_set
+#else
static __always_inline void
-arch_atomic_set_release(atomic_t *v, int i)
+raw_atomic_set_release(atomic_t *v, int i)
{
if (__native_word(atomic_t)) {
smp_store_release(&(v)->counter, i);
} else {
__atomic_release_fence();
- arch_atomic_set(v, i);
+ raw_atomic_set(v, i);
}
}
-#define arch_atomic_set_release arch_atomic_set_release
#endif
-#ifndef arch_atomic_add_return_relaxed
-#define arch_atomic_add_return_acquire arch_atomic_add_return
-#define arch_atomic_add_return_release arch_atomic_add_return
-#define arch_atomic_add_return_relaxed arch_atomic_add_return
-#else /* arch_atomic_add_return_relaxed */
+#define raw_atomic_add arch_atomic_add
+
+#if defined(arch_atomic_add_return)
+#define raw_atomic_add_return arch_atomic_add_return
+#elif defined(arch_atomic_add_return_relaxed)
+static __always_inline int
+raw_atomic_add_return(int i, atomic_t *v)
+{
+ int ret;
+ __atomic_pre_full_fence();
+ ret = arch_atomic_add_return_relaxed(i, v);
+ __atomic_post_full_fence();
+ return ret;
+}
+#else
+#error "Unable to define raw_atomic_add_return"
+#endif
-#ifndef arch_atomic_add_return_acquire
+#if defined(arch_atomic_add_return_acquire)
+#define raw_atomic_add_return_acquire arch_atomic_add_return_acquire
+#elif defined(arch_atomic_add_return_relaxed)
static __always_inline int
-arch_atomic_add_return_acquire(int i, atomic_t *v)
+raw_atomic_add_return_acquire(int i, atomic_t *v)
{
int ret = arch_atomic_add_return_relaxed(i, v);
__atomic_acquire_fence();
return ret;
}
-#define arch_atomic_add_return_acquire arch_atomic_add_return_acquire
+#elif defined(arch_atomic_add_return)
+#define raw_atomic_add_return_acquire arch_atomic_add_return
+#else
+#error "Unable to define raw_atomic_add_return_acquire"
#endif
-#ifndef arch_atomic_add_return_release
+#if defined(arch_atomic_add_return_release)
+#define raw_atomic_add_return_release arch_atomic_add_return_release
+#elif defined(arch_atomic_add_return_relaxed)
static __always_inline int
-arch_atomic_add_return_release(int i, atomic_t *v)
+raw_atomic_add_return_release(int i, atomic_t *v)
{
__atomic_release_fence();
return arch_atomic_add_return_relaxed(i, v);
}
-#define arch_atomic_add_return_release arch_atomic_add_return_release
+#elif defined(arch_atomic_add_return)
+#define raw_atomic_add_return_release arch_atomic_add_return
+#else
+#error "Unable to define raw_atomic_add_return_release"
#endif
-#ifndef arch_atomic_add_return
+#if defined(arch_atomic_add_return_relaxed)
+#define raw_atomic_add_return_relaxed arch_atomic_add_return_relaxed
+#elif defined(arch_atomic_add_return)
+#define raw_atomic_add_return_relaxed arch_atomic_add_return
+#else
+#error "Unable to define raw_atomic_add_return_relaxed"
+#endif
+
+#if defined(arch_atomic_fetch_add)
+#define raw_atomic_fetch_add arch_atomic_fetch_add
+#elif defined(arch_atomic_fetch_add_relaxed)
static __always_inline int
-arch_atomic_add_return(int i, atomic_t *v)
+raw_atomic_fetch_add(int i, atomic_t *v)
{
int ret;
__atomic_pre_full_fence();
- ret = arch_atomic_add_return_relaxed(i, v);
+ ret = arch_atomic_fetch_add_relaxed(i, v);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic_add_return arch_atomic_add_return
+#else
+#error "Unable to define raw_atomic_fetch_add"
#endif
-#endif /* arch_atomic_add_return_relaxed */
-
-#ifndef arch_atomic_fetch_add_relaxed
-#define arch_atomic_fetch_add_acquire arch_atomic_fetch_add
-#define arch_atomic_fetch_add_release arch_atomic_fetch_add
-#define arch_atomic_fetch_add_relaxed arch_atomic_fetch_add
-#else /* arch_atomic_fetch_add_relaxed */
-
-#ifndef arch_atomic_fetch_add_acquire
+#if defined(arch_atomic_fetch_add_acquire)
+#define raw_atomic_fetch_add_acquire arch_atomic_fetch_add_acquire
+#elif defined(arch_atomic_fetch_add_relaxed)
static __always_inline int
-arch_atomic_fetch_add_acquire(int i, atomic_t *v)
+raw_atomic_fetch_add_acquire(int i, atomic_t *v)
{
int ret = arch_atomic_fetch_add_relaxed(i, v);
__atomic_acquire_fence();
return ret;
}
-#define arch_atomic_fetch_add_acquire arch_atomic_fetch_add_acquire
+#elif defined(arch_atomic_fetch_add)
+#define raw_atomic_fetch_add_acquire arch_atomic_fetch_add
+#else
+#error "Unable to define raw_atomic_fetch_add_acquire"
#endif
-#ifndef arch_atomic_fetch_add_release
+#if defined(arch_atomic_fetch_add_release)
+#define raw_atomic_fetch_add_release arch_atomic_fetch_add_release
+#elif defined(arch_atomic_fetch_add_relaxed)
static __always_inline int
-arch_atomic_fetch_add_release(int i, atomic_t *v)
+raw_atomic_fetch_add_release(int i, atomic_t *v)
{
__atomic_release_fence();
return arch_atomic_fetch_add_relaxed(i, v);
}
-#define arch_atomic_fetch_add_release arch_atomic_fetch_add_release
+#elif defined(arch_atomic_fetch_add)
+#define raw_atomic_fetch_add_release arch_atomic_fetch_add
+#else
+#error "Unable to define raw_atomic_fetch_add_release"
+#endif
+
+#if defined(arch_atomic_fetch_add_relaxed)
+#define raw_atomic_fetch_add_relaxed arch_atomic_fetch_add_relaxed
+#elif defined(arch_atomic_fetch_add)
+#define raw_atomic_fetch_add_relaxed arch_atomic_fetch_add
+#else
+#error "Unable to define raw_atomic_fetch_add_relaxed"
#endif
-#ifndef arch_atomic_fetch_add
+#define raw_atomic_sub arch_atomic_sub
+
+#if defined(arch_atomic_sub_return)
+#define raw_atomic_sub_return arch_atomic_sub_return
+#elif defined(arch_atomic_sub_return_relaxed)
static __always_inline int
-arch_atomic_fetch_add(int i, atomic_t *v)
+raw_atomic_sub_return(int i, atomic_t *v)
{
int ret;
__atomic_pre_full_fence();
- ret = arch_atomic_fetch_add_relaxed(i, v);
+ ret = arch_atomic_sub_return_relaxed(i, v);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic_fetch_add arch_atomic_fetch_add
+#else
+#error "Unable to define raw_atomic_sub_return"
#endif
-#endif /* arch_atomic_fetch_add_relaxed */
-
-#ifndef arch_atomic_sub_return_relaxed
-#define arch_atomic_sub_return_acquire arch_atomic_sub_return
-#define arch_atomic_sub_return_release arch_atomic_sub_return
-#define arch_atomic_sub_return_relaxed arch_atomic_sub_return
-#else /* arch_atomic_sub_return_relaxed */
-
-#ifndef arch_atomic_sub_return_acquire
+#if defined(arch_atomic_sub_return_acquire)
+#define raw_atomic_sub_return_acquire arch_atomic_sub_return_acquire
+#elif defined(arch_atomic_sub_return_relaxed)
static __always_inline int
-arch_atomic_sub_return_acquire(int i, atomic_t *v)
+raw_atomic_sub_return_acquire(int i, atomic_t *v)
{
int ret = arch_atomic_sub_return_relaxed(i, v);
__atomic_acquire_fence();
return ret;
}
-#define arch_atomic_sub_return_acquire arch_atomic_sub_return_acquire
+#elif defined(arch_atomic_sub_return)
+#define raw_atomic_sub_return_acquire arch_atomic_sub_return
+#else
+#error "Unable to define raw_atomic_sub_return_acquire"
#endif
-#ifndef arch_atomic_sub_return_release
+#if defined(arch_atomic_sub_return_release)
+#define raw_atomic_sub_return_release arch_atomic_sub_return_release
+#elif defined(arch_atomic_sub_return_relaxed)
static __always_inline int
-arch_atomic_sub_return_release(int i, atomic_t *v)
+raw_atomic_sub_return_release(int i, atomic_t *v)
{
__atomic_release_fence();
return arch_atomic_sub_return_relaxed(i, v);
}
-#define arch_atomic_sub_return_release arch_atomic_sub_return_release
+#elif defined(arch_atomic_sub_return)
+#define raw_atomic_sub_return_release arch_atomic_sub_return
+#else
+#error "Unable to define raw_atomic_sub_return_release"
+#endif
+
+#if defined(arch_atomic_sub_return_relaxed)
+#define raw_atomic_sub_return_relaxed arch_atomic_sub_return_relaxed
+#elif defined(arch_atomic_sub_return)
+#define raw_atomic_sub_return_relaxed arch_atomic_sub_return
+#else
+#error "Unable to define raw_atomic_sub_return_relaxed"
#endif
-#ifndef arch_atomic_sub_return
+#if defined(arch_atomic_fetch_sub)
+#define raw_atomic_fetch_sub arch_atomic_fetch_sub
+#elif defined(arch_atomic_fetch_sub_relaxed)
static __always_inline int
-arch_atomic_sub_return(int i, atomic_t *v)
+raw_atomic_fetch_sub(int i, atomic_t *v)
{
int ret;
__atomic_pre_full_fence();
- ret = arch_atomic_sub_return_relaxed(i, v);
+ ret = arch_atomic_fetch_sub_relaxed(i, v);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic_sub_return arch_atomic_sub_return
+#else
+#error "Unable to define raw_atomic_fetch_sub"
#endif
-#endif /* arch_atomic_sub_return_relaxed */
-
-#ifndef arch_atomic_fetch_sub_relaxed
-#define arch_atomic_fetch_sub_acquire arch_atomic_fetch_sub
-#define arch_atomic_fetch_sub_release arch_atomic_fetch_sub
-#define arch_atomic_fetch_sub_relaxed arch_atomic_fetch_sub
-#else /* arch_atomic_fetch_sub_relaxed */
-
-#ifndef arch_atomic_fetch_sub_acquire
+#if defined(arch_atomic_fetch_sub_acquire)
+#define raw_atomic_fetch_sub_acquire arch_atomic_fetch_sub_acquire
+#elif defined(arch_atomic_fetch_sub_relaxed)
static __always_inline int
-arch_atomic_fetch_sub_acquire(int i, atomic_t *v)
+raw_atomic_fetch_sub_acquire(int i, atomic_t *v)
{
int ret = arch_atomic_fetch_sub_relaxed(i, v);
__atomic_acquire_fence();
return ret;
}
-#define arch_atomic_fetch_sub_acquire arch_atomic_fetch_sub_acquire
+#elif defined(arch_atomic_fetch_sub)
+#define raw_atomic_fetch_sub_acquire arch_atomic_fetch_sub
+#else
+#error "Unable to define raw_atomic_fetch_sub_acquire"
#endif
-#ifndef arch_atomic_fetch_sub_release
+#if defined(arch_atomic_fetch_sub_release)
+#define raw_atomic_fetch_sub_release arch_atomic_fetch_sub_release
+#elif defined(arch_atomic_fetch_sub_relaxed)
static __always_inline int
-arch_atomic_fetch_sub_release(int i, atomic_t *v)
+raw_atomic_fetch_sub_release(int i, atomic_t *v)
{
__atomic_release_fence();
return arch_atomic_fetch_sub_relaxed(i, v);
}
-#define arch_atomic_fetch_sub_release arch_atomic_fetch_sub_release
+#elif defined(arch_atomic_fetch_sub)
+#define raw_atomic_fetch_sub_release arch_atomic_fetch_sub
+#else
+#error "Unable to define raw_atomic_fetch_sub_release"
#endif
-#ifndef arch_atomic_fetch_sub
-static __always_inline int
-arch_atomic_fetch_sub(int i, atomic_t *v)
-{
- int ret;
- __atomic_pre_full_fence();
- ret = arch_atomic_fetch_sub_relaxed(i, v);
- __atomic_post_full_fence();
- return ret;
-}
-#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+#if defined(arch_atomic_fetch_sub_relaxed)
+#define raw_atomic_fetch_sub_relaxed arch_atomic_fetch_sub_relaxed
+#elif defined(arch_atomic_fetch_sub)
+#define raw_atomic_fetch_sub_relaxed arch_atomic_fetch_sub
+#else
+#error "Unable to define raw_atomic_fetch_sub_relaxed"
#endif
-#endif /* arch_atomic_fetch_sub_relaxed */
-
-#ifndef arch_atomic_inc
+#if defined(arch_atomic_inc)
+#define raw_atomic_inc arch_atomic_inc
+#else
static __always_inline void
-arch_atomic_inc(atomic_t *v)
+raw_atomic_inc(atomic_t *v)
{
- arch_atomic_add(1, v);
+ raw_atomic_add(1, v);
}
-#define arch_atomic_inc arch_atomic_inc
#endif
-#ifndef arch_atomic_inc_return_relaxed
-#ifdef arch_atomic_inc_return
-#define arch_atomic_inc_return_acquire arch_atomic_inc_return
-#define arch_atomic_inc_return_release arch_atomic_inc_return
-#define arch_atomic_inc_return_relaxed arch_atomic_inc_return
-#endif /* arch_atomic_inc_return */
-
-#ifndef arch_atomic_inc_return
+#if defined(arch_atomic_inc_return)
+#define raw_atomic_inc_return arch_atomic_inc_return
+#elif defined(arch_atomic_inc_return_relaxed)
+static __always_inline int
+raw_atomic_inc_return(atomic_t *v)
+{
+ int ret;
+ __atomic_pre_full_fence();
+ ret = arch_atomic_inc_return_relaxed(v);
+ __atomic_post_full_fence();
+ return ret;
+}
+#else
static __always_inline int
-arch_atomic_inc_return(atomic_t *v)
+raw_atomic_inc_return(atomic_t *v)
{
- return arch_atomic_add_return(1, v);
+ return raw_atomic_add_return(1, v);
}
-#define arch_atomic_inc_return arch_atomic_inc_return
#endif
-#ifndef arch_atomic_inc_return_acquire
+#if defined(arch_atomic_inc_return_acquire)
+#define raw_atomic_inc_return_acquire arch_atomic_inc_return_acquire
+#elif defined(arch_atomic_inc_return_relaxed)
static __always_inline int
-arch_atomic_inc_return_acquire(atomic_t *v)
+raw_atomic_inc_return_acquire(atomic_t *v)
{
- return arch_atomic_add_return_acquire(1, v);
+ int ret = arch_atomic_inc_return_relaxed(v);
+ __atomic_acquire_fence();
+ return ret;
}
-#define arch_atomic_inc_return_acquire arch_atomic_inc_return_acquire
-#endif
-
-#ifndef arch_atomic_inc_return_release
+#elif defined(arch_atomic_inc_return)
+#define raw_atomic_inc_return_acquire arch_atomic_inc_return
+#else
static __always_inline int
-arch_atomic_inc_return_release(atomic_t *v)
+raw_atomic_inc_return_acquire(atomic_t *v)
{
- return arch_atomic_add_return_release(1, v);
+ return raw_atomic_add_return_acquire(1, v);
}
-#define arch_atomic_inc_return_release arch_atomic_inc_return_release
#endif
-#ifndef arch_atomic_inc_return_relaxed
+#if defined(arch_atomic_inc_return_release)
+#define raw_atomic_inc_return_release arch_atomic_inc_return_release
+#elif defined(arch_atomic_inc_return_relaxed)
static __always_inline int
-arch_atomic_inc_return_relaxed(atomic_t *v)
+raw_atomic_inc_return_release(atomic_t *v)
{
- return arch_atomic_add_return_relaxed(1, v);
+ __atomic_release_fence();
+ return arch_atomic_inc_return_relaxed(v);
}
-#define arch_atomic_inc_return_relaxed arch_atomic_inc_return_relaxed
-#endif
-
-#else /* arch_atomic_inc_return_relaxed */
-
-#ifndef arch_atomic_inc_return_acquire
+#elif defined(arch_atomic_inc_return)
+#define raw_atomic_inc_return_release arch_atomic_inc_return
+#else
static __always_inline int
-arch_atomic_inc_return_acquire(atomic_t *v)
+raw_atomic_inc_return_release(atomic_t *v)
{
- int ret = arch_atomic_inc_return_relaxed(v);
- __atomic_acquire_fence();
- return ret;
+ return raw_atomic_add_return_release(1, v);
}
-#define arch_atomic_inc_return_acquire arch_atomic_inc_return_acquire
#endif
-#ifndef arch_atomic_inc_return_release
+#if defined(arch_atomic_inc_return_relaxed)
+#define raw_atomic_inc_return_relaxed arch_atomic_inc_return_relaxed
+#elif defined(arch_atomic_inc_return)
+#define raw_atomic_inc_return_relaxed arch_atomic_inc_return
+#else
static __always_inline int
-arch_atomic_inc_return_release(atomic_t *v)
+raw_atomic_inc_return_relaxed(atomic_t *v)
{
- __atomic_release_fence();
- return arch_atomic_inc_return_relaxed(v);
+ return raw_atomic_add_return_relaxed(1, v);
}
-#define arch_atomic_inc_return_release arch_atomic_inc_return_release
#endif
-#ifndef arch_atomic_inc_return
+#if defined(arch_atomic_fetch_inc)
+#define raw_atomic_fetch_inc arch_atomic_fetch_inc
+#elif defined(arch_atomic_fetch_inc_relaxed)
static __always_inline int
-arch_atomic_inc_return(atomic_t *v)
+raw_atomic_fetch_inc(atomic_t *v)
{
int ret;
__atomic_pre_full_fence();
- ret = arch_atomic_inc_return_relaxed(v);
+ ret = arch_atomic_fetch_inc_relaxed(v);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic_inc_return arch_atomic_inc_return
-#endif
-
-#endif /* arch_atomic_inc_return_relaxed */
-
-#ifndef arch_atomic_fetch_inc_relaxed
-#ifdef arch_atomic_fetch_inc
-#define arch_atomic_fetch_inc_acquire arch_atomic_fetch_inc
-#define arch_atomic_fetch_inc_release arch_atomic_fetch_inc
-#define arch_atomic_fetch_inc_relaxed arch_atomic_fetch_inc
-#endif /* arch_atomic_fetch_inc */
-
-#ifndef arch_atomic_fetch_inc
+#else
static __always_inline int
-arch_atomic_fetch_inc(atomic_t *v)
+raw_atomic_fetch_inc(atomic_t *v)
{
- return arch_atomic_fetch_add(1, v);
+ return raw_atomic_fetch_add(1, v);
}
-#define arch_atomic_fetch_inc arch_atomic_fetch_inc
#endif
-#ifndef arch_atomic_fetch_inc_acquire
+#if defined(arch_atomic_fetch_inc_acquire)
+#define raw_atomic_fetch_inc_acquire arch_atomic_fetch_inc_acquire
+#elif defined(arch_atomic_fetch_inc_relaxed)
static __always_inline int
-arch_atomic_fetch_inc_acquire(atomic_t *v)
+raw_atomic_fetch_inc_acquire(atomic_t *v)
{
- return arch_atomic_fetch_add_acquire(1, v);
+ int ret = arch_atomic_fetch_inc_relaxed(v);
+ __atomic_acquire_fence();
+ return ret;
}
-#define arch_atomic_fetch_inc_acquire arch_atomic_fetch_inc_acquire
-#endif
-
-#ifndef arch_atomic_fetch_inc_release
+#elif defined(arch_atomic_fetch_inc)
+#define raw_atomic_fetch_inc_acquire arch_atomic_fetch_inc
+#else
static __always_inline int
-arch_atomic_fetch_inc_release(atomic_t *v)
+raw_atomic_fetch_inc_acquire(atomic_t *v)
{
- return arch_atomic_fetch_add_release(1, v);
+ return raw_atomic_fetch_add_acquire(1, v);
}
-#define arch_atomic_fetch_inc_release arch_atomic_fetch_inc_release
#endif
-#ifndef arch_atomic_fetch_inc_relaxed
+#if defined(arch_atomic_fetch_inc_release)
+#define raw_atomic_fetch_inc_release arch_atomic_fetch_inc_release
+#elif defined(arch_atomic_fetch_inc_relaxed)
+static __always_inline int
+raw_atomic_fetch_inc_release(atomic_t *v)
+{
+ __atomic_release_fence();
+ return arch_atomic_fetch_inc_relaxed(v);
+}
+#elif defined(arch_atomic_fetch_inc)
+#define raw_atomic_fetch_inc_release arch_atomic_fetch_inc
+#else
static __always_inline int
-arch_atomic_fetch_inc_relaxed(atomic_t *v)
+raw_atomic_fetch_inc_release(atomic_t *v)
{
- return arch_atomic_fetch_add_relaxed(1, v);
+ return raw_atomic_fetch_add_release(1, v);
}
-#define arch_atomic_fetch_inc_relaxed arch_atomic_fetch_inc_relaxed
#endif
-#else /* arch_atomic_fetch_inc_relaxed */
-
-#ifndef arch_atomic_fetch_inc_acquire
+#if defined(arch_atomic_fetch_inc_relaxed)
+#define raw_atomic_fetch_inc_relaxed arch_atomic_fetch_inc_relaxed
+#elif defined(arch_atomic_fetch_inc)
+#define raw_atomic_fetch_inc_relaxed arch_atomic_fetch_inc
+#else
static __always_inline int
-arch_atomic_fetch_inc_acquire(atomic_t *v)
+raw_atomic_fetch_inc_relaxed(atomic_t *v)
{
- int ret = arch_atomic_fetch_inc_relaxed(v);
- __atomic_acquire_fence();
- return ret;
+ return raw_atomic_fetch_add_relaxed(1, v);
}
-#define arch_atomic_fetch_inc_acquire arch_atomic_fetch_inc_acquire
#endif
-#ifndef arch_atomic_fetch_inc_release
-static __always_inline int
-arch_atomic_fetch_inc_release(atomic_t *v)
+#if defined(arch_atomic_dec)
+#define raw_atomic_dec arch_atomic_dec
+#else
+static __always_inline void
+raw_atomic_dec(atomic_t *v)
{
- __atomic_release_fence();
- return arch_atomic_fetch_inc_relaxed(v);
+ raw_atomic_sub(1, v);
}
-#define arch_atomic_fetch_inc_release arch_atomic_fetch_inc_release
#endif
-#ifndef arch_atomic_fetch_inc
+#if defined(arch_atomic_dec_return)
+#define raw_atomic_dec_return arch_atomic_dec_return
+#elif defined(arch_atomic_dec_return_relaxed)
static __always_inline int
-arch_atomic_fetch_inc(atomic_t *v)
+raw_atomic_dec_return(atomic_t *v)
{
int ret;
__atomic_pre_full_fence();
- ret = arch_atomic_fetch_inc_relaxed(v);
+ ret = arch_atomic_dec_return_relaxed(v);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic_fetch_inc arch_atomic_fetch_inc
-#endif
-
-#endif /* arch_atomic_fetch_inc_relaxed */
-
-#ifndef arch_atomic_dec
-static __always_inline void
-arch_atomic_dec(atomic_t *v)
-{
- arch_atomic_sub(1, v);
-}
-#define arch_atomic_dec arch_atomic_dec
-#endif
-
-#ifndef arch_atomic_dec_return_relaxed
-#ifdef arch_atomic_dec_return
-#define arch_atomic_dec_return_acquire arch_atomic_dec_return
-#define arch_atomic_dec_return_release arch_atomic_dec_return
-#define arch_atomic_dec_return_relaxed arch_atomic_dec_return
-#endif /* arch_atomic_dec_return */
-
-#ifndef arch_atomic_dec_return
+#else
static __always_inline int
-arch_atomic_dec_return(atomic_t *v)
+raw_atomic_dec_return(atomic_t *v)
{
- return arch_atomic_sub_return(1, v);
+ return raw_atomic_sub_return(1, v);
}
-#define arch_atomic_dec_return arch_atomic_dec_return
#endif
-#ifndef arch_atomic_dec_return_acquire
+#if defined(arch_atomic_dec_return_acquire)
+#define raw_atomic_dec_return_acquire arch_atomic_dec_return_acquire
+#elif defined(arch_atomic_dec_return_relaxed)
static __always_inline int
-arch_atomic_dec_return_acquire(atomic_t *v)
+raw_atomic_dec_return_acquire(atomic_t *v)
{
- return arch_atomic_sub_return_acquire(1, v);
+ int ret = arch_atomic_dec_return_relaxed(v);
+ __atomic_acquire_fence();
+ return ret;
}
-#define arch_atomic_dec_return_acquire arch_atomic_dec_return_acquire
-#endif
-
-#ifndef arch_atomic_dec_return_release
+#elif defined(arch_atomic_dec_return)
+#define raw_atomic_dec_return_acquire arch_atomic_dec_return
+#else
static __always_inline int
-arch_atomic_dec_return_release(atomic_t *v)
+raw_atomic_dec_return_acquire(atomic_t *v)
{
- return arch_atomic_sub_return_release(1, v);
+ return raw_atomic_sub_return_acquire(1, v);
}
-#define arch_atomic_dec_return_release arch_atomic_dec_return_release
#endif
-#ifndef arch_atomic_dec_return_relaxed
+#if defined(arch_atomic_dec_return_release)
+#define raw_atomic_dec_return_release arch_atomic_dec_return_release
+#elif defined(arch_atomic_dec_return_relaxed)
static __always_inline int
-arch_atomic_dec_return_relaxed(atomic_t *v)
+raw_atomic_dec_return_release(atomic_t *v)
{
- return arch_atomic_sub_return_relaxed(1, v);
+ __atomic_release_fence();
+ return arch_atomic_dec_return_relaxed(v);
}
-#define arch_atomic_dec_return_relaxed arch_atomic_dec_return_relaxed
-#endif
-
-#else /* arch_atomic_dec_return_relaxed */
-
-#ifndef arch_atomic_dec_return_acquire
+#elif defined(arch_atomic_dec_return)
+#define raw_atomic_dec_return_release arch_atomic_dec_return
+#else
static __always_inline int
-arch_atomic_dec_return_acquire(atomic_t *v)
+raw_atomic_dec_return_release(atomic_t *v)
{
- int ret = arch_atomic_dec_return_relaxed(v);
- __atomic_acquire_fence();
- return ret;
+ return raw_atomic_sub_return_release(1, v);
}
-#define arch_atomic_dec_return_acquire arch_atomic_dec_return_acquire
#endif
-#ifndef arch_atomic_dec_return_release
+#if defined(arch_atomic_dec_return_relaxed)
+#define raw_atomic_dec_return_relaxed arch_atomic_dec_return_relaxed
+#elif defined(arch_atomic_dec_return)
+#define raw_atomic_dec_return_relaxed arch_atomic_dec_return
+#else
static __always_inline int
-arch_atomic_dec_return_release(atomic_t *v)
+raw_atomic_dec_return_relaxed(atomic_t *v)
{
- __atomic_release_fence();
- return arch_atomic_dec_return_relaxed(v);
+ return raw_atomic_sub_return_relaxed(1, v);
}
-#define arch_atomic_dec_return_release arch_atomic_dec_return_release
#endif
-#ifndef arch_atomic_dec_return
+#if defined(arch_atomic_fetch_dec)
+#define raw_atomic_fetch_dec arch_atomic_fetch_dec
+#elif defined(arch_atomic_fetch_dec_relaxed)
static __always_inline int
-arch_atomic_dec_return(atomic_t *v)
+raw_atomic_fetch_dec(atomic_t *v)
{
int ret;
__atomic_pre_full_fence();
- ret = arch_atomic_dec_return_relaxed(v);
+ ret = arch_atomic_fetch_dec_relaxed(v);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic_dec_return arch_atomic_dec_return
-#endif
-
-#endif /* arch_atomic_dec_return_relaxed */
-
-#ifndef arch_atomic_fetch_dec_relaxed
-#ifdef arch_atomic_fetch_dec
-#define arch_atomic_fetch_dec_acquire arch_atomic_fetch_dec
-#define arch_atomic_fetch_dec_release arch_atomic_fetch_dec
-#define arch_atomic_fetch_dec_relaxed arch_atomic_fetch_dec
-#endif /* arch_atomic_fetch_dec */
-
-#ifndef arch_atomic_fetch_dec
+#else
static __always_inline int
-arch_atomic_fetch_dec(atomic_t *v)
+raw_atomic_fetch_dec(atomic_t *v)
{
- return arch_atomic_fetch_sub(1, v);
+ return raw_atomic_fetch_sub(1, v);
}
-#define arch_atomic_fetch_dec arch_atomic_fetch_dec
#endif
-#ifndef arch_atomic_fetch_dec_acquire
+#if defined(arch_atomic_fetch_dec_acquire)
+#define raw_atomic_fetch_dec_acquire arch_atomic_fetch_dec_acquire
+#elif defined(arch_atomic_fetch_dec_relaxed)
static __always_inline int
-arch_atomic_fetch_dec_acquire(atomic_t *v)
+raw_atomic_fetch_dec_acquire(atomic_t *v)
{
- return arch_atomic_fetch_sub_acquire(1, v);
+ int ret = arch_atomic_fetch_dec_relaxed(v);
+ __atomic_acquire_fence();
+ return ret;
}
-#define arch_atomic_fetch_dec_acquire arch_atomic_fetch_dec_acquire
-#endif
-
-#ifndef arch_atomic_fetch_dec_release
+#elif defined(arch_atomic_fetch_dec)
+#define raw_atomic_fetch_dec_acquire arch_atomic_fetch_dec
+#else
static __always_inline int
-arch_atomic_fetch_dec_release(atomic_t *v)
+raw_atomic_fetch_dec_acquire(atomic_t *v)
{
- return arch_atomic_fetch_sub_release(1, v);
+ return raw_atomic_fetch_sub_acquire(1, v);
}
-#define arch_atomic_fetch_dec_release arch_atomic_fetch_dec_release
#endif
-#ifndef arch_atomic_fetch_dec_relaxed
+#if defined(arch_atomic_fetch_dec_release)
+#define raw_atomic_fetch_dec_release arch_atomic_fetch_dec_release
+#elif defined(arch_atomic_fetch_dec_relaxed)
static __always_inline int
-arch_atomic_fetch_dec_relaxed(atomic_t *v)
+raw_atomic_fetch_dec_release(atomic_t *v)
{
- return arch_atomic_fetch_sub_relaxed(1, v);
+ __atomic_release_fence();
+ return arch_atomic_fetch_dec_relaxed(v);
}
-#define arch_atomic_fetch_dec_relaxed arch_atomic_fetch_dec_relaxed
-#endif
-
-#else /* arch_atomic_fetch_dec_relaxed */
-
-#ifndef arch_atomic_fetch_dec_acquire
+#elif defined(arch_atomic_fetch_dec)
+#define raw_atomic_fetch_dec_release arch_atomic_fetch_dec
+#else
static __always_inline int
-arch_atomic_fetch_dec_acquire(atomic_t *v)
+raw_atomic_fetch_dec_release(atomic_t *v)
{
- int ret = arch_atomic_fetch_dec_relaxed(v);
- __atomic_acquire_fence();
- return ret;
+ return raw_atomic_fetch_sub_release(1, v);
}
-#define arch_atomic_fetch_dec_acquire arch_atomic_fetch_dec_acquire
#endif
-#ifndef arch_atomic_fetch_dec_release
+#if defined(arch_atomic_fetch_dec_relaxed)
+#define raw_atomic_fetch_dec_relaxed arch_atomic_fetch_dec_relaxed
+#elif defined(arch_atomic_fetch_dec)
+#define raw_atomic_fetch_dec_relaxed arch_atomic_fetch_dec
+#else
static __always_inline int
-arch_atomic_fetch_dec_release(atomic_t *v)
+raw_atomic_fetch_dec_relaxed(atomic_t *v)
{
- __atomic_release_fence();
- return arch_atomic_fetch_dec_relaxed(v);
+ return raw_atomic_fetch_sub_relaxed(1, v);
}
-#define arch_atomic_fetch_dec_release arch_atomic_fetch_dec_release
#endif
-#ifndef arch_atomic_fetch_dec
+#define raw_atomic_and arch_atomic_and
+
+#if defined(arch_atomic_fetch_and)
+#define raw_atomic_fetch_and arch_atomic_fetch_and
+#elif defined(arch_atomic_fetch_and_relaxed)
static __always_inline int
-arch_atomic_fetch_dec(atomic_t *v)
+raw_atomic_fetch_and(int i, atomic_t *v)
{
int ret;
__atomic_pre_full_fence();
- ret = arch_atomic_fetch_dec_relaxed(v);
+ ret = arch_atomic_fetch_and_relaxed(i, v);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic_fetch_dec arch_atomic_fetch_dec
+#else
+#error "Unable to define raw_atomic_fetch_and"
#endif
-#endif /* arch_atomic_fetch_dec_relaxed */
-
-#ifndef arch_atomic_fetch_and_relaxed
-#define arch_atomic_fetch_and_acquire arch_atomic_fetch_and
-#define arch_atomic_fetch_and_release arch_atomic_fetch_and
-#define arch_atomic_fetch_and_relaxed arch_atomic_fetch_and
-#else /* arch_atomic_fetch_and_relaxed */
-
-#ifndef arch_atomic_fetch_and_acquire
+#if defined(arch_atomic_fetch_and_acquire)
+#define raw_atomic_fetch_and_acquire arch_atomic_fetch_and_acquire
+#elif defined(arch_atomic_fetch_and_relaxed)
static __always_inline int
-arch_atomic_fetch_and_acquire(int i, atomic_t *v)
+raw_atomic_fetch_and_acquire(int i, atomic_t *v)
{
int ret = arch_atomic_fetch_and_relaxed(i, v);
__atomic_acquire_fence();
return ret;
}
-#define arch_atomic_fetch_and_acquire arch_atomic_fetch_and_acquire
+#elif defined(arch_atomic_fetch_and)
+#define raw_atomic_fetch_and_acquire arch_atomic_fetch_and
+#else
+#error "Unable to define raw_atomic_fetch_and_acquire"
#endif
-#ifndef arch_atomic_fetch_and_release
+#if defined(arch_atomic_fetch_and_release)
+#define raw_atomic_fetch_and_release arch_atomic_fetch_and_release
+#elif defined(arch_atomic_fetch_and_relaxed)
static __always_inline int
-arch_atomic_fetch_and_release(int i, atomic_t *v)
+raw_atomic_fetch_and_release(int i, atomic_t *v)
{
__atomic_release_fence();
return arch_atomic_fetch_and_relaxed(i, v);
}
-#define arch_atomic_fetch_and_release arch_atomic_fetch_and_release
+#elif defined(arch_atomic_fetch_and)
+#define raw_atomic_fetch_and_release arch_atomic_fetch_and
+#else
+#error "Unable to define raw_atomic_fetch_and_release"
#endif
-#ifndef arch_atomic_fetch_and
+#if defined(arch_atomic_fetch_and_relaxed)
+#define raw_atomic_fetch_and_relaxed arch_atomic_fetch_and_relaxed
+#elif defined(arch_atomic_fetch_and)
+#define raw_atomic_fetch_and_relaxed arch_atomic_fetch_and
+#else
+#error "Unable to define raw_atomic_fetch_and_relaxed"
+#endif
+
+#if defined(arch_atomic_andnot)
+#define raw_atomic_andnot arch_atomic_andnot
+#else
+static __always_inline void
+raw_atomic_andnot(int i, atomic_t *v)
+{
+ raw_atomic_and(~i, v);
+}
+#endif
+
+#if defined(arch_atomic_fetch_andnot)
+#define raw_atomic_fetch_andnot arch_atomic_fetch_andnot
+#elif defined(arch_atomic_fetch_andnot_relaxed)
static __always_inline int
-arch_atomic_fetch_and(int i, atomic_t *v)
+raw_atomic_fetch_andnot(int i, atomic_t *v)
{
int ret;
__atomic_pre_full_fence();
- ret = arch_atomic_fetch_and_relaxed(i, v);
+ ret = arch_atomic_fetch_andnot_relaxed(i, v);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic_fetch_and arch_atomic_fetch_and
-#endif
-
-#endif /* arch_atomic_fetch_and_relaxed */
-
-#ifndef arch_atomic_andnot
-static __always_inline void
-arch_atomic_andnot(int i, atomic_t *v)
+#else
+static __always_inline int
+raw_atomic_fetch_andnot(int i, atomic_t *v)
{
- arch_atomic_and(~i, v);
+ return raw_atomic_fetch_and(~i, v);
}
-#define arch_atomic_andnot arch_atomic_andnot
#endif
-#ifndef arch_atomic_fetch_andnot_relaxed
-#ifdef arch_atomic_fetch_andnot
-#define arch_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot
-#define arch_atomic_fetch_andnot_release arch_atomic_fetch_andnot
-#define arch_atomic_fetch_andnot_relaxed arch_atomic_fetch_andnot
-#endif /* arch_atomic_fetch_andnot */
-
-#ifndef arch_atomic_fetch_andnot
+#if defined(arch_atomic_fetch_andnot_acquire)
+#define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot_acquire
+#elif defined(arch_atomic_fetch_andnot_relaxed)
+static __always_inline int
+raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
+{
+ int ret = arch_atomic_fetch_andnot_relaxed(i, v);
+ __atomic_acquire_fence();
+ return ret;
+}
+#elif defined(arch_atomic_fetch_andnot)
+#define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot
+#else
static __always_inline int
-arch_atomic_fetch_andnot(int i, atomic_t *v)
+raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
{
- return arch_atomic_fetch_and(~i, v);
+ return raw_atomic_fetch_and_acquire(~i, v);
}
-#define arch_atomic_fetch_andnot arch_atomic_fetch_andnot
#endif
-#ifndef arch_atomic_fetch_andnot_acquire
+#if defined(arch_atomic_fetch_andnot_release)
+#define raw_atomic_fetch_andnot_release arch_atomic_fetch_andnot_release
+#elif defined(arch_atomic_fetch_andnot_relaxed)
static __always_inline int
-arch_atomic_fetch_andnot_acquire(int i, atomic_t *v)
+raw_atomic_fetch_andnot_release(int i, atomic_t *v)
{
- return arch_atomic_fetch_and_acquire(~i, v);
+ __atomic_release_fence();
+ return arch_atomic_fetch_andnot_relaxed(i, v);
}
-#define arch_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot_acquire
-#endif
-
-#ifndef arch_atomic_fetch_andnot_release
+#elif defined(arch_atomic_fetch_andnot)
+#define raw_atomic_fetch_andnot_release arch_atomic_fetch_andnot
+#else
static __always_inline int
-arch_atomic_fetch_andnot_release(int i, atomic_t *v)
+raw_atomic_fetch_andnot_release(int i, atomic_t *v)
{
- return arch_atomic_fetch_and_release(~i, v);
+ return raw_atomic_fetch_and_release(~i, v);
}
-#define arch_atomic_fetch_andnot_release arch_atomic_fetch_andnot_release
#endif
-#ifndef arch_atomic_fetch_andnot_relaxed
+#if defined(arch_atomic_fetch_andnot_relaxed)
+#define raw_atomic_fetch_andnot_relaxed arch_atomic_fetch_andnot_relaxed
+#elif defined(arch_atomic_fetch_andnot)
+#define raw_atomic_fetch_andnot_relaxed arch_atomic_fetch_andnot
+#else
static __always_inline int
-arch_atomic_fetch_andnot_relaxed(int i, atomic_t *v)
+raw_atomic_fetch_andnot_relaxed(int i, atomic_t *v)
{
- return arch_atomic_fetch_and_relaxed(~i, v);
+ return raw_atomic_fetch_and_relaxed(~i, v);
}
-#define arch_atomic_fetch_andnot_relaxed arch_atomic_fetch_andnot_relaxed
#endif
-#else /* arch_atomic_fetch_andnot_relaxed */
+#define raw_atomic_or arch_atomic_or
-#ifndef arch_atomic_fetch_andnot_acquire
+#if defined(arch_atomic_fetch_or)
+#define raw_atomic_fetch_or arch_atomic_fetch_or
+#elif defined(arch_atomic_fetch_or_relaxed)
static __always_inline int
-arch_atomic_fetch_andnot_acquire(int i, atomic_t *v)
+raw_atomic_fetch_or(int i, atomic_t *v)
{
- int ret = arch_atomic_fetch_andnot_relaxed(i, v);
- __atomic_acquire_fence();
+ int ret;
+ __atomic_pre_full_fence();
+ ret = arch_atomic_fetch_or_relaxed(i, v);
+ __atomic_post_full_fence();
return ret;
}
-#define arch_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot_acquire
+#else
+#error "Unable to define raw_atomic_fetch_or"
#endif
-#ifndef arch_atomic_fetch_andnot_release
+#if defined(arch_atomic_fetch_or_acquire)
+#define raw_atomic_fetch_or_acquire arch_atomic_fetch_or_acquire
+#elif defined(arch_atomic_fetch_or_relaxed)
static __always_inline int
-arch_atomic_fetch_andnot_release(int i, atomic_t *v)
-{
- __atomic_release_fence();
- return arch_atomic_fetch_andnot_relaxed(i, v);
-}
-#define arch_atomic_fetch_andnot_release arch_atomic_fetch_andnot_release
-#endif
-
-#ifndef arch_atomic_fetch_andnot
-static __always_inline int
-arch_atomic_fetch_andnot(int i, atomic_t *v)
-{
- int ret;
- __atomic_pre_full_fence();
- ret = arch_atomic_fetch_andnot_relaxed(i, v);
- __atomic_post_full_fence();
- return ret;
-}
-#define arch_atomic_fetch_andnot arch_atomic_fetch_andnot
-#endif
-
-#endif /* arch_atomic_fetch_andnot_relaxed */
-
-#ifndef arch_atomic_fetch_or_relaxed
-#define arch_atomic_fetch_or_acquire arch_atomic_fetch_or
-#define arch_atomic_fetch_or_release arch_atomic_fetch_or
-#define arch_atomic_fetch_or_relaxed arch_atomic_fetch_or
-#else /* arch_atomic_fetch_or_relaxed */
-
-#ifndef arch_atomic_fetch_or_acquire
-static __always_inline int
-arch_atomic_fetch_or_acquire(int i, atomic_t *v)
+raw_atomic_fetch_or_acquire(int i, atomic_t *v)
{
int ret = arch_atomic_fetch_or_relaxed(i, v);
__atomic_acquire_fence();
return ret;
}
-#define arch_atomic_fetch_or_acquire arch_atomic_fetch_or_acquire
+#elif defined(arch_atomic_fetch_or)
+#define raw_atomic_fetch_or_acquire arch_atomic_fetch_or
+#else
+#error "Unable to define raw_atomic_fetch_or_acquire"
#endif
-#ifndef arch_atomic_fetch_or_release
+#if defined(arch_atomic_fetch_or_release)
+#define raw_atomic_fetch_or_release arch_atomic_fetch_or_release
+#elif defined(arch_atomic_fetch_or_relaxed)
static __always_inline int
-arch_atomic_fetch_or_release(int i, atomic_t *v)
+raw_atomic_fetch_or_release(int i, atomic_t *v)
{
__atomic_release_fence();
return arch_atomic_fetch_or_relaxed(i, v);
}
-#define arch_atomic_fetch_or_release arch_atomic_fetch_or_release
+#elif defined(arch_atomic_fetch_or)
+#define raw_atomic_fetch_or_release arch_atomic_fetch_or
+#else
+#error "Unable to define raw_atomic_fetch_or_release"
#endif
-#ifndef arch_atomic_fetch_or
+#if defined(arch_atomic_fetch_or_relaxed)
+#define raw_atomic_fetch_or_relaxed arch_atomic_fetch_or_relaxed
+#elif defined(arch_atomic_fetch_or)
+#define raw_atomic_fetch_or_relaxed arch_atomic_fetch_or
+#else
+#error "Unable to define raw_atomic_fetch_or_relaxed"
+#endif
+
+#define raw_atomic_xor arch_atomic_xor
+
+#if defined(arch_atomic_fetch_xor)
+#define raw_atomic_fetch_xor arch_atomic_fetch_xor
+#elif defined(arch_atomic_fetch_xor_relaxed)
static __always_inline int
-arch_atomic_fetch_or(int i, atomic_t *v)
+raw_atomic_fetch_xor(int i, atomic_t *v)
{
int ret;
__atomic_pre_full_fence();
- ret = arch_atomic_fetch_or_relaxed(i, v);
+ ret = arch_atomic_fetch_xor_relaxed(i, v);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic_fetch_or arch_atomic_fetch_or
+#else
+#error "Unable to define raw_atomic_fetch_xor"
#endif
-#endif /* arch_atomic_fetch_or_relaxed */
-
-#ifndef arch_atomic_fetch_xor_relaxed
-#define arch_atomic_fetch_xor_acquire arch_atomic_fetch_xor
-#define arch_atomic_fetch_xor_release arch_atomic_fetch_xor
-#define arch_atomic_fetch_xor_relaxed arch_atomic_fetch_xor
-#else /* arch_atomic_fetch_xor_relaxed */
-
-#ifndef arch_atomic_fetch_xor_acquire
+#if defined(arch_atomic_fetch_xor_acquire)
+#define raw_atomic_fetch_xor_acquire arch_atomic_fetch_xor_acquire
+#elif defined(arch_atomic_fetch_xor_relaxed)
static __always_inline int
-arch_atomic_fetch_xor_acquire(int i, atomic_t *v)
+raw_atomic_fetch_xor_acquire(int i, atomic_t *v)
{
int ret = arch_atomic_fetch_xor_relaxed(i, v);
__atomic_acquire_fence();
return ret;
}
-#define arch_atomic_fetch_xor_acquire arch_atomic_fetch_xor_acquire
+#elif defined(arch_atomic_fetch_xor)
+#define raw_atomic_fetch_xor_acquire arch_atomic_fetch_xor
+#else
+#error "Unable to define raw_atomic_fetch_xor_acquire"
#endif
-#ifndef arch_atomic_fetch_xor_release
+#if defined(arch_atomic_fetch_xor_release)
+#define raw_atomic_fetch_xor_release arch_atomic_fetch_xor_release
+#elif defined(arch_atomic_fetch_xor_relaxed)
static __always_inline int
-arch_atomic_fetch_xor_release(int i, atomic_t *v)
+raw_atomic_fetch_xor_release(int i, atomic_t *v)
{
__atomic_release_fence();
return arch_atomic_fetch_xor_relaxed(i, v);
}
-#define arch_atomic_fetch_xor_release arch_atomic_fetch_xor_release
+#elif defined(arch_atomic_fetch_xor)
+#define raw_atomic_fetch_xor_release arch_atomic_fetch_xor
+#else
+#error "Unable to define raw_atomic_fetch_xor_release"
#endif
-#ifndef arch_atomic_fetch_xor
+#if defined(arch_atomic_fetch_xor_relaxed)
+#define raw_atomic_fetch_xor_relaxed arch_atomic_fetch_xor_relaxed
+#elif defined(arch_atomic_fetch_xor)
+#define raw_atomic_fetch_xor_relaxed arch_atomic_fetch_xor
+#else
+#error "Unable to define raw_atomic_fetch_xor_relaxed"
+#endif
+
+#if defined(arch_atomic_xchg)
+#define raw_atomic_xchg arch_atomic_xchg
+#elif defined(arch_atomic_xchg_relaxed)
static __always_inline int
-arch_atomic_fetch_xor(int i, atomic_t *v)
+raw_atomic_xchg(atomic_t *v, int i)
{
int ret;
__atomic_pre_full_fence();
- ret = arch_atomic_fetch_xor_relaxed(i, v);
+ ret = arch_atomic_xchg_relaxed(v, i);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic_fetch_xor arch_atomic_fetch_xor
-#endif
-
-#endif /* arch_atomic_fetch_xor_relaxed */
-
-#ifndef arch_atomic_xchg_relaxed
-#ifdef arch_atomic_xchg
-#define arch_atomic_xchg_acquire arch_atomic_xchg
-#define arch_atomic_xchg_release arch_atomic_xchg
-#define arch_atomic_xchg_relaxed arch_atomic_xchg
-#endif /* arch_atomic_xchg */
-
-#ifndef arch_atomic_xchg
+#else
static __always_inline int
-arch_atomic_xchg(atomic_t *v, int new)
+raw_atomic_xchg(atomic_t *v, int new)
{
- return arch_xchg(&v->counter, new);
+ return raw_xchg(&v->counter, new);
}
-#define arch_atomic_xchg arch_atomic_xchg
#endif
-#ifndef arch_atomic_xchg_acquire
+#if defined(arch_atomic_xchg_acquire)
+#define raw_atomic_xchg_acquire arch_atomic_xchg_acquire
+#elif defined(arch_atomic_xchg_relaxed)
static __always_inline int
-arch_atomic_xchg_acquire(atomic_t *v, int new)
+raw_atomic_xchg_acquire(atomic_t *v, int i)
{
- return arch_xchg_acquire(&v->counter, new);
+ int ret = arch_atomic_xchg_relaxed(v, i);
+ __atomic_acquire_fence();
+ return ret;
}
-#define arch_atomic_xchg_acquire arch_atomic_xchg_acquire
-#endif
-
-#ifndef arch_atomic_xchg_release
+#elif defined(arch_atomic_xchg)
+#define raw_atomic_xchg_acquire arch_atomic_xchg
+#else
static __always_inline int
-arch_atomic_xchg_release(atomic_t *v, int new)
+raw_atomic_xchg_acquire(atomic_t *v, int new)
{
- return arch_xchg_release(&v->counter, new);
+ return raw_xchg_acquire(&v->counter, new);
}
-#define arch_atomic_xchg_release arch_atomic_xchg_release
#endif
-#ifndef arch_atomic_xchg_relaxed
+#if defined(arch_atomic_xchg_release)
+#define raw_atomic_xchg_release arch_atomic_xchg_release
+#elif defined(arch_atomic_xchg_relaxed)
static __always_inline int
-arch_atomic_xchg_relaxed(atomic_t *v, int new)
+raw_atomic_xchg_release(atomic_t *v, int i)
{
- return arch_xchg_relaxed(&v->counter, new);
+ __atomic_release_fence();
+ return arch_atomic_xchg_relaxed(v, i);
}
-#define arch_atomic_xchg_relaxed arch_atomic_xchg_relaxed
-#endif
-
-#else /* arch_atomic_xchg_relaxed */
-
-#ifndef arch_atomic_xchg_acquire
+#elif defined(arch_atomic_xchg)
+#define raw_atomic_xchg_release arch_atomic_xchg
+#else
static __always_inline int
-arch_atomic_xchg_acquire(atomic_t *v, int i)
+raw_atomic_xchg_release(atomic_t *v, int new)
{
- int ret = arch_atomic_xchg_relaxed(v, i);
- __atomic_acquire_fence();
- return ret;
+ return raw_xchg_release(&v->counter, new);
}
-#define arch_atomic_xchg_acquire arch_atomic_xchg_acquire
#endif
-#ifndef arch_atomic_xchg_release
+#if defined(arch_atomic_xchg_relaxed)
+#define raw_atomic_xchg_relaxed arch_atomic_xchg_relaxed
+#elif defined(arch_atomic_xchg)
+#define raw_atomic_xchg_relaxed arch_atomic_xchg
+#else
static __always_inline int
-arch_atomic_xchg_release(atomic_t *v, int i)
+raw_atomic_xchg_relaxed(atomic_t *v, int new)
{
- __atomic_release_fence();
- return arch_atomic_xchg_relaxed(v, i);
+ return raw_xchg_relaxed(&v->counter, new);
}
-#define arch_atomic_xchg_release arch_atomic_xchg_release
#endif
-#ifndef arch_atomic_xchg
+#if defined(arch_atomic_cmpxchg)
+#define raw_atomic_cmpxchg arch_atomic_cmpxchg
+#elif defined(arch_atomic_cmpxchg_relaxed)
static __always_inline int
-arch_atomic_xchg(atomic_t *v, int i)
+raw_atomic_cmpxchg(atomic_t *v, int old, int new)
{
int ret;
__atomic_pre_full_fence();
- ret = arch_atomic_xchg_relaxed(v, i);
+ ret = arch_atomic_cmpxchg_relaxed(v, old, new);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic_xchg arch_atomic_xchg
-#endif
-
-#endif /* arch_atomic_xchg_relaxed */
-
-#ifndef arch_atomic_cmpxchg_relaxed
-#ifdef arch_atomic_cmpxchg
-#define arch_atomic_cmpxchg_acquire arch_atomic_cmpxchg
-#define arch_atomic_cmpxchg_release arch_atomic_cmpxchg
-#define arch_atomic_cmpxchg_relaxed arch_atomic_cmpxchg
-#endif /* arch_atomic_cmpxchg */
-
-#ifndef arch_atomic_cmpxchg
+#else
static __always_inline int
-arch_atomic_cmpxchg(atomic_t *v, int old, int new)
+raw_atomic_cmpxchg(atomic_t *v, int old, int new)
{
- return arch_cmpxchg(&v->counter, old, new);
+ return raw_cmpxchg(&v->counter, old, new);
}
-#define arch_atomic_cmpxchg arch_atomic_cmpxchg
#endif
-#ifndef arch_atomic_cmpxchg_acquire
+#if defined(arch_atomic_cmpxchg_acquire)
+#define raw_atomic_cmpxchg_acquire arch_atomic_cmpxchg_acquire
+#elif defined(arch_atomic_cmpxchg_relaxed)
static __always_inline int
-arch_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
+raw_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
{
- return arch_cmpxchg_acquire(&v->counter, old, new);
+ int ret = arch_atomic_cmpxchg_relaxed(v, old, new);
+ __atomic_acquire_fence();
+ return ret;
}
-#define arch_atomic_cmpxchg_acquire arch_atomic_cmpxchg_acquire
-#endif
-
-#ifndef arch_atomic_cmpxchg_release
+#elif defined(arch_atomic_cmpxchg)
+#define raw_atomic_cmpxchg_acquire arch_atomic_cmpxchg
+#else
static __always_inline int
-arch_atomic_cmpxchg_release(atomic_t *v, int old, int new)
+raw_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
{
- return arch_cmpxchg_release(&v->counter, old, new);
+ return raw_cmpxchg_acquire(&v->counter, old, new);
}
-#define arch_atomic_cmpxchg_release arch_atomic_cmpxchg_release
#endif
-#ifndef arch_atomic_cmpxchg_relaxed
+#if defined(arch_atomic_cmpxchg_release)
+#define raw_atomic_cmpxchg_release arch_atomic_cmpxchg_release
+#elif defined(arch_atomic_cmpxchg_relaxed)
static __always_inline int
-arch_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
+raw_atomic_cmpxchg_release(atomic_t *v, int old, int new)
{
- return arch_cmpxchg_relaxed(&v->counter, old, new);
+ __atomic_release_fence();
+ return arch_atomic_cmpxchg_relaxed(v, old, new);
}
-#define arch_atomic_cmpxchg_relaxed arch_atomic_cmpxchg_relaxed
-#endif
-
-#else /* arch_atomic_cmpxchg_relaxed */
-
-#ifndef arch_atomic_cmpxchg_acquire
+#elif defined(arch_atomic_cmpxchg)
+#define raw_atomic_cmpxchg_release arch_atomic_cmpxchg
+#else
static __always_inline int
-arch_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
+raw_atomic_cmpxchg_release(atomic_t *v, int old, int new)
{
- int ret = arch_atomic_cmpxchg_relaxed(v, old, new);
- __atomic_acquire_fence();
- return ret;
+ return raw_cmpxchg_release(&v->counter, old, new);
}
-#define arch_atomic_cmpxchg_acquire arch_atomic_cmpxchg_acquire
#endif
-#ifndef arch_atomic_cmpxchg_release
+#if defined(arch_atomic_cmpxchg_relaxed)
+#define raw_atomic_cmpxchg_relaxed arch_atomic_cmpxchg_relaxed
+#elif defined(arch_atomic_cmpxchg)
+#define raw_atomic_cmpxchg_relaxed arch_atomic_cmpxchg
+#else
static __always_inline int
-arch_atomic_cmpxchg_release(atomic_t *v, int old, int new)
+raw_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
{
- __atomic_release_fence();
- return arch_atomic_cmpxchg_relaxed(v, old, new);
+ return raw_cmpxchg_relaxed(&v->counter, old, new);
}
-#define arch_atomic_cmpxchg_release arch_atomic_cmpxchg_release
#endif
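+/*
+ * raw_atomic_try_cmpxchg*(): use whichever arch_atomic_try_cmpxchg*()
+ * form is available (adding fences around a relaxed form as needed),
+ * otherwise fall back to raw_atomic_cmpxchg*(), writing the observed
+ * value back to *old on failure.
+ */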
-#ifndef arch_atomic_cmpxchg
-static __always_inline int
-arch_atomic_cmpxchg(atomic_t *v, int old, int new)
+#if defined(arch_atomic_try_cmpxchg)
+#define raw_atomic_try_cmpxchg arch_atomic_try_cmpxchg
+#elif defined(arch_atomic_try_cmpxchg_relaxed)
+static __always_inline bool
+raw_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
{
- int ret;
+ bool ret;
__atomic_pre_full_fence();
- ret = arch_atomic_cmpxchg_relaxed(v, old, new);
+ ret = arch_atomic_try_cmpxchg_relaxed(v, old, new);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic_cmpxchg arch_atomic_cmpxchg
-#endif
-
-#endif /* arch_atomic_cmpxchg_relaxed */
-
-#ifndef arch_atomic_try_cmpxchg_relaxed
-#ifdef arch_atomic_try_cmpxchg
-#define arch_atomic_try_cmpxchg_acquire arch_atomic_try_cmpxchg
-#define arch_atomic_try_cmpxchg_release arch_atomic_try_cmpxchg
-#define arch_atomic_try_cmpxchg_relaxed arch_atomic_try_cmpxchg
-#endif /* arch_atomic_try_cmpxchg */
-
-#ifndef arch_atomic_try_cmpxchg
+#else
static __always_inline bool
-arch_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
+raw_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
{
int r, o = *old;
- r = arch_atomic_cmpxchg(v, o, new);
+ r = raw_atomic_cmpxchg(v, o, new);
if (unlikely(r != o))
*old = r;
return likely(r == o);
}
-#define arch_atomic_try_cmpxchg arch_atomic_try_cmpxchg
#endif
-#ifndef arch_atomic_try_cmpxchg_acquire
+#if defined(arch_atomic_try_cmpxchg_acquire)
+#define raw_atomic_try_cmpxchg_acquire arch_atomic_try_cmpxchg_acquire
+#elif defined(arch_atomic_try_cmpxchg_relaxed)
+static __always_inline bool
+raw_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
+{
+ bool ret = arch_atomic_try_cmpxchg_relaxed(v, old, new);
+ __atomic_acquire_fence();
+ return ret;
+}
+#elif defined(arch_atomic_try_cmpxchg)
+#define raw_atomic_try_cmpxchg_acquire arch_atomic_try_cmpxchg
+#else
static __always_inline bool
-arch_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
+raw_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
{
int r, o = *old;
- r = arch_atomic_cmpxchg_acquire(v, o, new);
+ r = raw_atomic_cmpxchg_acquire(v, o, new);
if (unlikely(r != o))
*old = r;
return likely(r == o);
}
-#define arch_atomic_try_cmpxchg_acquire arch_atomic_try_cmpxchg_acquire
#endif
-#ifndef arch_atomic_try_cmpxchg_release
+#if defined(arch_atomic_try_cmpxchg_release)
+#define raw_atomic_try_cmpxchg_release arch_atomic_try_cmpxchg_release
+#elif defined(arch_atomic_try_cmpxchg_relaxed)
static __always_inline bool
-arch_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
+raw_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
+{
+ __atomic_release_fence();
+ return arch_atomic_try_cmpxchg_relaxed(v, old, new);
+}
+#elif defined(arch_atomic_try_cmpxchg)
+#define raw_atomic_try_cmpxchg_release arch_atomic_try_cmpxchg
+#else
+static __always_inline bool
+raw_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
{
int r, o = *old;
- r = arch_atomic_cmpxchg_release(v, o, new);
+ r = raw_atomic_cmpxchg_release(v, o, new);
if (unlikely(r != o))
*old = r;
return likely(r == o);
}
-#define arch_atomic_try_cmpxchg_release arch_atomic_try_cmpxchg_release
#endif
-#ifndef arch_atomic_try_cmpxchg_relaxed
+#if defined(arch_atomic_try_cmpxchg_relaxed)
+#define raw_atomic_try_cmpxchg_relaxed arch_atomic_try_cmpxchg_relaxed
+#elif defined(arch_atomic_try_cmpxchg)
+#define raw_atomic_try_cmpxchg_relaxed arch_atomic_try_cmpxchg
+#else
static __always_inline bool
-arch_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
+raw_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
{
int r, o = *old;
- r = arch_atomic_cmpxchg_relaxed(v, o, new);
+ r = raw_atomic_cmpxchg_relaxed(v, o, new);
if (unlikely(r != o))
*old = r;
return likely(r == o);
}
-#define arch_atomic_try_cmpxchg_relaxed arch_atomic_try_cmpxchg_relaxed
-#endif
-
-#else /* arch_atomic_try_cmpxchg_relaxed */
-
-#ifndef arch_atomic_try_cmpxchg_acquire
-static __always_inline bool
-arch_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
-{
- bool ret = arch_atomic_try_cmpxchg_relaxed(v, old, new);
- __atomic_acquire_fence();
- return ret;
-}
-#define arch_atomic_try_cmpxchg_acquire arch_atomic_try_cmpxchg_acquire
-#endif
-
-#ifndef arch_atomic_try_cmpxchg_release
-static __always_inline bool
-arch_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
-{
- __atomic_release_fence();
- return arch_atomic_try_cmpxchg_relaxed(v, old, new);
-}
-#define arch_atomic_try_cmpxchg_release arch_atomic_try_cmpxchg_release
-#endif
-
-#ifndef arch_atomic_try_cmpxchg
-static __always_inline bool
-arch_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
-{
- bool ret;
- __atomic_pre_full_fence();
- ret = arch_atomic_try_cmpxchg_relaxed(v, old, new);
- __atomic_post_full_fence();
- return ret;
-}
-#define arch_atomic_try_cmpxchg arch_atomic_try_cmpxchg
#endif
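+/*
+ * The *_and_test() ops default to testing the corresponding
+ * raw_atomic_*_return() result against zero when the architecture
+ * provides no dedicated implementation.
+ */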
-#endif /* arch_atomic_try_cmpxchg_relaxed */
-
-#ifndef arch_atomic_sub_and_test
+#if defined(arch_atomic_sub_and_test)
+#define raw_atomic_sub_and_test arch_atomic_sub_and_test
+#else
static __always_inline bool
-arch_atomic_sub_and_test(int i, atomic_t *v)
+raw_atomic_sub_and_test(int i, atomic_t *v)
{
- return arch_atomic_sub_return(i, v) == 0;
+ return raw_atomic_sub_return(i, v) == 0;
}
-#define arch_atomic_sub_and_test arch_atomic_sub_and_test
#endif
-#ifndef arch_atomic_dec_and_test
+#if defined(arch_atomic_dec_and_test)
+#define raw_atomic_dec_and_test arch_atomic_dec_and_test
+#else
static __always_inline bool
-arch_atomic_dec_and_test(atomic_t *v)
+raw_atomic_dec_and_test(atomic_t *v)
{
- return arch_atomic_dec_return(v) == 0;
+ return raw_atomic_dec_return(v) == 0;
}
-#define arch_atomic_dec_and_test arch_atomic_dec_and_test
#endif
-#ifndef arch_atomic_inc_and_test
+#if defined(arch_atomic_inc_and_test)
+#define raw_atomic_inc_and_test arch_atomic_inc_and_test
+#else
static __always_inline bool
-arch_atomic_inc_and_test(atomic_t *v)
+raw_atomic_inc_and_test(atomic_t *v)
{
- return arch_atomic_inc_return(v) == 0;
+ return raw_atomic_inc_return(v) == 0;
}
-#define arch_atomic_inc_and_test arch_atomic_inc_and_test
#endif
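+/*
+ * raw_atomic_add_negative*(): use whichever arch_atomic_add_negative*()
+ * form is available (with fences as needed), otherwise test the sign of
+ * raw_atomic_add_return*().
+ */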
-#ifndef arch_atomic_add_negative_relaxed
-#ifdef arch_atomic_add_negative
-#define arch_atomic_add_negative_acquire arch_atomic_add_negative
-#define arch_atomic_add_negative_release arch_atomic_add_negative
-#define arch_atomic_add_negative_relaxed arch_atomic_add_negative
-#endif /* arch_atomic_add_negative */
-
-#ifndef arch_atomic_add_negative
+#if defined(arch_atomic_add_negative)
+#define raw_atomic_add_negative arch_atomic_add_negative
+#elif defined(arch_atomic_add_negative_relaxed)
static __always_inline bool
-arch_atomic_add_negative(int i, atomic_t *v)
+raw_atomic_add_negative(int i, atomic_t *v)
{
- return arch_atomic_add_return(i, v) < 0;
+ bool ret;
+ __atomic_pre_full_fence();
+ ret = arch_atomic_add_negative_relaxed(i, v);
+ __atomic_post_full_fence();
+ return ret;
}
-#define arch_atomic_add_negative arch_atomic_add_negative
-#endif
-
-#ifndef arch_atomic_add_negative_acquire
+#else
static __always_inline bool
-arch_atomic_add_negative_acquire(int i, atomic_t *v)
+raw_atomic_add_negative(int i, atomic_t *v)
{
- return arch_atomic_add_return_acquire(i, v) < 0;
+ return raw_atomic_add_return(i, v) < 0;
}
-#define arch_atomic_add_negative_acquire arch_atomic_add_negative_acquire
#endif
-#ifndef arch_atomic_add_negative_release
+#if defined(arch_atomic_add_negative_acquire)
+#define raw_atomic_add_negative_acquire arch_atomic_add_negative_acquire
+#elif defined(arch_atomic_add_negative_relaxed)
static __always_inline bool
-arch_atomic_add_negative_release(int i, atomic_t *v)
+raw_atomic_add_negative_acquire(int i, atomic_t *v)
{
- return arch_atomic_add_return_release(i, v) < 0;
+ bool ret = arch_atomic_add_negative_relaxed(i, v);
+ __atomic_acquire_fence();
+ return ret;
}
-#define arch_atomic_add_negative_release arch_atomic_add_negative_release
-#endif
-
-#ifndef arch_atomic_add_negative_relaxed
+#elif defined(arch_atomic_add_negative)
+#define raw_atomic_add_negative_acquire arch_atomic_add_negative
+#else
static __always_inline bool
-arch_atomic_add_negative_relaxed(int i, atomic_t *v)
+raw_atomic_add_negative_acquire(int i, atomic_t *v)
{
- return arch_atomic_add_return_relaxed(i, v) < 0;
+ return raw_atomic_add_return_acquire(i, v) < 0;
}
-#define arch_atomic_add_negative_relaxed arch_atomic_add_negative_relaxed
#endif
-#else /* arch_atomic_add_negative_relaxed */
-
-#ifndef arch_atomic_add_negative_acquire
+#if defined(arch_atomic_add_negative_release)
+#define raw_atomic_add_negative_release arch_atomic_add_negative_release
+#elif defined(arch_atomic_add_negative_relaxed)
static __always_inline bool
-arch_atomic_add_negative_acquire(int i, atomic_t *v)
+raw_atomic_add_negative_release(int i, atomic_t *v)
{
- bool ret = arch_atomic_add_negative_relaxed(i, v);
- __atomic_acquire_fence();
- return ret;
+ __atomic_release_fence();
+ return arch_atomic_add_negative_relaxed(i, v);
}
-#define arch_atomic_add_negative_acquire arch_atomic_add_negative_acquire
-#endif
-
-#ifndef arch_atomic_add_negative_release
+#elif defined(arch_atomic_add_negative)
+#define raw_atomic_add_negative_release arch_atomic_add_negative
+#else
static __always_inline bool
-arch_atomic_add_negative_release(int i, atomic_t *v)
+raw_atomic_add_negative_release(int i, atomic_t *v)
{
- __atomic_release_fence();
- return arch_atomic_add_negative_relaxed(i, v);
+ return raw_atomic_add_return_release(i, v) < 0;
}
-#define arch_atomic_add_negative_release arch_atomic_add_negative_release
#endif
-#ifndef arch_atomic_add_negative
+#if defined(arch_atomic_add_negative_relaxed)
+#define raw_atomic_add_negative_relaxed arch_atomic_add_negative_relaxed
+#elif defined(arch_atomic_add_negative)
+#define raw_atomic_add_negative_relaxed arch_atomic_add_negative
+#else
static __always_inline bool
-arch_atomic_add_negative(int i, atomic_t *v)
+raw_atomic_add_negative_relaxed(int i, atomic_t *v)
{
- bool ret;
- __atomic_pre_full_fence();
- ret = arch_atomic_add_negative_relaxed(i, v);
- __atomic_post_full_fence();
- return ret;
+ return raw_atomic_add_return_relaxed(i, v) < 0;
}
-#define arch_atomic_add_negative arch_atomic_add_negative
#endif
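+/*
+ * The conditional ops below (fetch_add_unless() and friends) default to
+ * loops built from raw_atomic_read() and raw_atomic_try_cmpxchg() when
+ * the architecture provides no dedicated implementation.
+ */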
-#endif /* arch_atomic_add_negative_relaxed */
-
-#ifndef arch_atomic_fetch_add_unless
+#if defined(arch_atomic_fetch_add_unless)
+#define raw_atomic_fetch_add_unless arch_atomic_fetch_add_unless
+#else
static __always_inline int
-arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
+raw_atomic_fetch_add_unless(atomic_t *v, int a, int u)
{
- int c = arch_atomic_read(v);
+ int c = raw_atomic_read(v);
do {
if (unlikely(c == u))
break;
- } while (!arch_atomic_try_cmpxchg(v, &c, c + a));
+ } while (!raw_atomic_try_cmpxchg(v, &c, c + a));
return c;
}
-#define arch_atomic_fetch_add_unless arch_atomic_fetch_add_unless
#endif
-#ifndef arch_atomic_add_unless
+#if defined(arch_atomic_add_unless)
+#define raw_atomic_add_unless arch_atomic_add_unless
+#else
static __always_inline bool
-arch_atomic_add_unless(atomic_t *v, int a, int u)
+raw_atomic_add_unless(atomic_t *v, int a, int u)
{
- return arch_atomic_fetch_add_unless(v, a, u) != u;
+ return raw_atomic_fetch_add_unless(v, a, u) != u;
}
-#define arch_atomic_add_unless arch_atomic_add_unless
#endif
-#ifndef arch_atomic_inc_not_zero
+#if defined(arch_atomic_inc_not_zero)
+#define raw_atomic_inc_not_zero arch_atomic_inc_not_zero
+#else
static __always_inline bool
-arch_atomic_inc_not_zero(atomic_t *v)
+raw_atomic_inc_not_zero(atomic_t *v)
{
- return arch_atomic_add_unless(v, 1, 0);
+ return raw_atomic_add_unless(v, 1, 0);
}
-#define arch_atomic_inc_not_zero arch_atomic_inc_not_zero
#endif
-#ifndef arch_atomic_inc_unless_negative
+#if defined(arch_atomic_inc_unless_negative)
+#define raw_atomic_inc_unless_negative arch_atomic_inc_unless_negative
+#else
static __always_inline bool
-arch_atomic_inc_unless_negative(atomic_t *v)
+raw_atomic_inc_unless_negative(atomic_t *v)
{
- int c = arch_atomic_read(v);
+ int c = raw_atomic_read(v);
do {
if (unlikely(c < 0))
return false;
- } while (!arch_atomic_try_cmpxchg(v, &c, c + 1));
+ } while (!raw_atomic_try_cmpxchg(v, &c, c + 1));
return true;
}
-#define arch_atomic_inc_unless_negative arch_atomic_inc_unless_negative
#endif
-#ifndef arch_atomic_dec_unless_positive
+#if defined(arch_atomic_dec_unless_positive)
+#define raw_atomic_dec_unless_positive arch_atomic_dec_unless_positive
+#else
static __always_inline bool
-arch_atomic_dec_unless_positive(atomic_t *v)
+raw_atomic_dec_unless_positive(atomic_t *v)
{
- int c = arch_atomic_read(v);
+ int c = raw_atomic_read(v);
do {
if (unlikely(c > 0))
return false;
- } while (!arch_atomic_try_cmpxchg(v, &c, c - 1));
+ } while (!raw_atomic_try_cmpxchg(v, &c, c - 1));
return true;
}
-#define arch_atomic_dec_unless_positive arch_atomic_dec_unless_positive
#endif
-#ifndef arch_atomic_dec_if_positive
+#if defined(arch_atomic_dec_if_positive)
+#define raw_atomic_dec_if_positive arch_atomic_dec_if_positive
+#else
static __always_inline int
-arch_atomic_dec_if_positive(atomic_t *v)
+raw_atomic_dec_if_positive(atomic_t *v)
{
- int dec, c = arch_atomic_read(v);
+ int dec, c = raw_atomic_read(v);
do {
dec = c - 1;
if (unlikely(dec < 0))
break;
- } while (!arch_atomic_try_cmpxchg(v, &c, dec));
+ } while (!raw_atomic_try_cmpxchg(v, &c, dec));
return dec;
}
-#define arch_atomic_dec_if_positive arch_atomic_dec_if_positive
#endif
#ifdef CONFIG_GENERIC_ATOMIC64
#include <asm-generic/atomic64.h>
#endif
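+/*
+ * The atomic64_t ops follow the same scheme as the atomic_t ops above;
+ * arch_atomic64_read() and arch_atomic64_set() are mandatory, so the
+ * raw_*() forms map onto them directly.
+ */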
-#ifndef arch_atomic64_read_acquire
+#define raw_atomic64_read arch_atomic64_read
+
+#if defined(arch_atomic64_read_acquire)
+#define raw_atomic64_read_acquire arch_atomic64_read_acquire
+#elif defined(arch_atomic64_read)
+#define raw_atomic64_read_acquire arch_atomic64_read
+#else
static __always_inline s64
-arch_atomic64_read_acquire(const atomic64_t *v)
+raw_atomic64_read_acquire(const atomic64_t *v)
{
s64 ret;
if (__native_word(atomic64_t)) {
ret = smp_load_acquire(&(v)->counter);
} else {
- ret = arch_atomic64_read(v);
+ ret = raw_atomic64_read(v);
__atomic_acquire_fence();
}
return ret;
}
-#define arch_atomic64_read_acquire arch_atomic64_read_acquire
#endif
-#ifndef arch_atomic64_set_release
+#define raw_atomic64_set arch_atomic64_set
+
+#if defined(arch_atomic64_set_release)
+#define raw_atomic64_set_release arch_atomic64_set_release
+#elif defined(arch_atomic64_set)
+#define raw_atomic64_set_release arch_atomic64_set
+#else
static __always_inline void
-arch_atomic64_set_release(atomic64_t *v, s64 i)
+raw_atomic64_set_release(atomic64_t *v, s64 i)
{
if (__native_word(atomic64_t)) {
smp_store_release(&(v)->counter, i);
} else {
__atomic_release_fence();
- arch_atomic64_set(v, i);
+ raw_atomic64_set(v, i);
}
}
-#define arch_atomic64_set_release arch_atomic64_set_release
#endif
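+/*
+ * arch_atomic64_add() is mandatory. For arch_atomic64_add_return*(),
+ * at least one of the fully-ordered or relaxed forms must be provided;
+ * otherwise the #error below fires at build time.
+ */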
-#ifndef arch_atomic64_add_return_relaxed
-#define arch_atomic64_add_return_acquire arch_atomic64_add_return
-#define arch_atomic64_add_return_release arch_atomic64_add_return
-#define arch_atomic64_add_return_relaxed arch_atomic64_add_return
-#else /* arch_atomic64_add_return_relaxed */
+#define raw_atomic64_add arch_atomic64_add
+
+#if defined(arch_atomic64_add_return)
+#define raw_atomic64_add_return arch_atomic64_add_return
+#elif defined(arch_atomic64_add_return_relaxed)
+static __always_inline s64
+raw_atomic64_add_return(s64 i, atomic64_t *v)
+{
+ s64 ret;
+ __atomic_pre_full_fence();
+ ret = arch_atomic64_add_return_relaxed(i, v);
+ __atomic_post_full_fence();
+ return ret;
+}
+#else
+#error "Unable to define raw_atomic64_add_return"
+#endif
-#ifndef arch_atomic64_add_return_acquire
+#if defined(arch_atomic64_add_return_acquire)
+#define raw_atomic64_add_return_acquire arch_atomic64_add_return_acquire
+#elif defined(arch_atomic64_add_return_relaxed)
static __always_inline s64
-arch_atomic64_add_return_acquire(s64 i, atomic64_t *v)
+raw_atomic64_add_return_acquire(s64 i, atomic64_t *v)
{
s64 ret = arch_atomic64_add_return_relaxed(i, v);
__atomic_acquire_fence();
return ret;
}
-#define arch_atomic64_add_return_acquire arch_atomic64_add_return_acquire
+#elif defined(arch_atomic64_add_return)
+#define raw_atomic64_add_return_acquire arch_atomic64_add_return
+#else
+#error "Unable to define raw_atomic64_add_return_acquire"
#endif
-#ifndef arch_atomic64_add_return_release
+#if defined(arch_atomic64_add_return_release)
+#define raw_atomic64_add_return_release arch_atomic64_add_return_release
+#elif defined(arch_atomic64_add_return_relaxed)
static __always_inline s64
-arch_atomic64_add_return_release(s64 i, atomic64_t *v)
+raw_atomic64_add_return_release(s64 i, atomic64_t *v)
{
__atomic_release_fence();
return arch_atomic64_add_return_relaxed(i, v);
}
-#define arch_atomic64_add_return_release arch_atomic64_add_return_release
+#elif defined(arch_atomic64_add_return)
+#define raw_atomic64_add_return_release arch_atomic64_add_return
+#else
+#error "Unable to define raw_atomic64_add_return_release"
+#endif
+
+#if defined(arch_atomic64_add_return_relaxed)
+#define raw_atomic64_add_return_relaxed arch_atomic64_add_return_relaxed
+#elif defined(arch_atomic64_add_return)
+#define raw_atomic64_add_return_relaxed arch_atomic64_add_return
+#else
+#error "Unable to define raw_atomic64_add_return_relaxed"
#endif
-#ifndef arch_atomic64_add_return
+#if defined(arch_atomic64_fetch_add)
+#define raw_atomic64_fetch_add arch_atomic64_fetch_add
+#elif defined(arch_atomic64_fetch_add_relaxed)
static __always_inline s64
-arch_atomic64_add_return(s64 i, atomic64_t *v)
+raw_atomic64_fetch_add(s64 i, atomic64_t *v)
{
s64 ret;
__atomic_pre_full_fence();
- ret = arch_atomic64_add_return_relaxed(i, v);
+ ret = arch_atomic64_fetch_add_relaxed(i, v);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic64_add_return arch_atomic64_add_return
+#else
+#error "Unable to define raw_atomic64_fetch_add"
#endif
-#endif /* arch_atomic64_add_return_relaxed */
-
-#ifndef arch_atomic64_fetch_add_relaxed
-#define arch_atomic64_fetch_add_acquire arch_atomic64_fetch_add
-#define arch_atomic64_fetch_add_release arch_atomic64_fetch_add
-#define arch_atomic64_fetch_add_relaxed arch_atomic64_fetch_add
-#else /* arch_atomic64_fetch_add_relaxed */
-
-#ifndef arch_atomic64_fetch_add_acquire
+#if defined(arch_atomic64_fetch_add_acquire)
+#define raw_atomic64_fetch_add_acquire arch_atomic64_fetch_add_acquire
+#elif defined(arch_atomic64_fetch_add_relaxed)
static __always_inline s64
-arch_atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
+raw_atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
{
s64 ret = arch_atomic64_fetch_add_relaxed(i, v);
__atomic_acquire_fence();
return ret;
}
-#define arch_atomic64_fetch_add_acquire arch_atomic64_fetch_add_acquire
+#elif defined(arch_atomic64_fetch_add)
+#define raw_atomic64_fetch_add_acquire arch_atomic64_fetch_add
+#else
+#error "Unable to define raw_atomic64_fetch_add_acquire"
#endif
-#ifndef arch_atomic64_fetch_add_release
+#if defined(arch_atomic64_fetch_add_release)
+#define raw_atomic64_fetch_add_release arch_atomic64_fetch_add_release
+#elif defined(arch_atomic64_fetch_add_relaxed)
static __always_inline s64
-arch_atomic64_fetch_add_release(s64 i, atomic64_t *v)
+raw_atomic64_fetch_add_release(s64 i, atomic64_t *v)
{
__atomic_release_fence();
return arch_atomic64_fetch_add_relaxed(i, v);
}
-#define arch_atomic64_fetch_add_release arch_atomic64_fetch_add_release
+#elif defined(arch_atomic64_fetch_add)
+#define raw_atomic64_fetch_add_release arch_atomic64_fetch_add
+#else
+#error "Unable to define raw_atomic64_fetch_add_release"
+#endif
+
+#if defined(arch_atomic64_fetch_add_relaxed)
+#define raw_atomic64_fetch_add_relaxed arch_atomic64_fetch_add_relaxed
+#elif defined(arch_atomic64_fetch_add)
+#define raw_atomic64_fetch_add_relaxed arch_atomic64_fetch_add
+#else
+#error "Unable to define raw_atomic64_fetch_add_relaxed"
#endif
-#ifndef arch_atomic64_fetch_add
+#define raw_atomic64_sub arch_atomic64_sub
+
+#if defined(arch_atomic64_sub_return)
+#define raw_atomic64_sub_return arch_atomic64_sub_return
+#elif defined(arch_atomic64_sub_return_relaxed)
static __always_inline s64
-arch_atomic64_fetch_add(s64 i, atomic64_t *v)
+raw_atomic64_sub_return(s64 i, atomic64_t *v)
{
s64 ret;
__atomic_pre_full_fence();
- ret = arch_atomic64_fetch_add_relaxed(i, v);
+ ret = arch_atomic64_sub_return_relaxed(i, v);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic64_fetch_add arch_atomic64_fetch_add
+#else
+#error "Unable to define raw_atomic64_sub_return"
#endif
-#endif /* arch_atomic64_fetch_add_relaxed */
-
-#ifndef arch_atomic64_sub_return_relaxed
-#define arch_atomic64_sub_return_acquire arch_atomic64_sub_return
-#define arch_atomic64_sub_return_release arch_atomic64_sub_return
-#define arch_atomic64_sub_return_relaxed arch_atomic64_sub_return
-#else /* arch_atomic64_sub_return_relaxed */
-
-#ifndef arch_atomic64_sub_return_acquire
+#if defined(arch_atomic64_sub_return_acquire)
+#define raw_atomic64_sub_return_acquire arch_atomic64_sub_return_acquire
+#elif defined(arch_atomic64_sub_return_relaxed)
static __always_inline s64
-arch_atomic64_sub_return_acquire(s64 i, atomic64_t *v)
+raw_atomic64_sub_return_acquire(s64 i, atomic64_t *v)
{
s64 ret = arch_atomic64_sub_return_relaxed(i, v);
__atomic_acquire_fence();
return ret;
}
-#define arch_atomic64_sub_return_acquire arch_atomic64_sub_return_acquire
+#elif defined(arch_atomic64_sub_return)
+#define raw_atomic64_sub_return_acquire arch_atomic64_sub_return
+#else
+#error "Unable to define raw_atomic64_sub_return_acquire"
#endif
-#ifndef arch_atomic64_sub_return_release
+#if defined(arch_atomic64_sub_return_release)
+#define raw_atomic64_sub_return_release arch_atomic64_sub_return_release
+#elif defined(arch_atomic64_sub_return_relaxed)
static __always_inline s64
-arch_atomic64_sub_return_release(s64 i, atomic64_t *v)
+raw_atomic64_sub_return_release(s64 i, atomic64_t *v)
{
__atomic_release_fence();
return arch_atomic64_sub_return_relaxed(i, v);
}
-#define arch_atomic64_sub_return_release arch_atomic64_sub_return_release
+#elif defined(arch_atomic64_sub_return)
+#define raw_atomic64_sub_return_release arch_atomic64_sub_return
+#else
+#error "Unable to define raw_atomic64_sub_return_release"
+#endif
+
+#if defined(arch_atomic64_sub_return_relaxed)
+#define raw_atomic64_sub_return_relaxed arch_atomic64_sub_return_relaxed
+#elif defined(arch_atomic64_sub_return)
+#define raw_atomic64_sub_return_relaxed arch_atomic64_sub_return
+#else
+#error "Unable to define raw_atomic64_sub_return_relaxed"
#endif
-#ifndef arch_atomic64_sub_return
+#if defined(arch_atomic64_fetch_sub)
+#define raw_atomic64_fetch_sub arch_atomic64_fetch_sub
+#elif defined(arch_atomic64_fetch_sub_relaxed)
static __always_inline s64
-arch_atomic64_sub_return(s64 i, atomic64_t *v)
+raw_atomic64_fetch_sub(s64 i, atomic64_t *v)
{
s64 ret;
__atomic_pre_full_fence();
- ret = arch_atomic64_sub_return_relaxed(i, v);
+ ret = arch_atomic64_fetch_sub_relaxed(i, v);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic64_sub_return arch_atomic64_sub_return
+#else
+#error "Unable to define raw_atomic64_fetch_sub"
#endif
-#endif /* arch_atomic64_sub_return_relaxed */
-
-#ifndef arch_atomic64_fetch_sub_relaxed
-#define arch_atomic64_fetch_sub_acquire arch_atomic64_fetch_sub
-#define arch_atomic64_fetch_sub_release arch_atomic64_fetch_sub
-#define arch_atomic64_fetch_sub_relaxed arch_atomic64_fetch_sub
-#else /* arch_atomic64_fetch_sub_relaxed */
-
-#ifndef arch_atomic64_fetch_sub_acquire
+#if defined(arch_atomic64_fetch_sub_acquire)
+#define raw_atomic64_fetch_sub_acquire arch_atomic64_fetch_sub_acquire
+#elif defined(arch_atomic64_fetch_sub_relaxed)
static __always_inline s64
-arch_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
+raw_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
{
s64 ret = arch_atomic64_fetch_sub_relaxed(i, v);
__atomic_acquire_fence();
return ret;
}
-#define arch_atomic64_fetch_sub_acquire arch_atomic64_fetch_sub_acquire
+#elif defined(arch_atomic64_fetch_sub)
+#define raw_atomic64_fetch_sub_acquire arch_atomic64_fetch_sub
+#else
+#error "Unable to define raw_atomic64_fetch_sub_acquire"
#endif
-#ifndef arch_atomic64_fetch_sub_release
+#if defined(arch_atomic64_fetch_sub_release)
+#define raw_atomic64_fetch_sub_release arch_atomic64_fetch_sub_release
+#elif defined(arch_atomic64_fetch_sub_relaxed)
static __always_inline s64
-arch_atomic64_fetch_sub_release(s64 i, atomic64_t *v)
+raw_atomic64_fetch_sub_release(s64 i, atomic64_t *v)
{
__atomic_release_fence();
return arch_atomic64_fetch_sub_relaxed(i, v);
}
-#define arch_atomic64_fetch_sub_release arch_atomic64_fetch_sub_release
+#elif defined(arch_atomic64_fetch_sub)
+#define raw_atomic64_fetch_sub_release arch_atomic64_fetch_sub
+#else
+#error "Unable to define raw_atomic64_fetch_sub_release"
#endif
-#ifndef arch_atomic64_fetch_sub
-static __always_inline s64
-arch_atomic64_fetch_sub(s64 i, atomic64_t *v)
-{
- s64 ret;
- __atomic_pre_full_fence();
- ret = arch_atomic64_fetch_sub_relaxed(i, v);
- __atomic_post_full_fence();
- return ret;
-}
-#define arch_atomic64_fetch_sub arch_atomic64_fetch_sub
+#if defined(arch_atomic64_fetch_sub_relaxed)
+#define raw_atomic64_fetch_sub_relaxed arch_atomic64_fetch_sub_relaxed
+#elif defined(arch_atomic64_fetch_sub)
+#define raw_atomic64_fetch_sub_relaxed arch_atomic64_fetch_sub
+#else
+#error "Unable to define raw_atomic64_fetch_sub_relaxed"
#endif
-#endif /* arch_atomic64_fetch_sub_relaxed */
-
-#ifndef arch_atomic64_inc
+#if defined(arch_atomic64_inc)
+#define raw_atomic64_inc arch_atomic64_inc
+#else
static __always_inline void
-arch_atomic64_inc(atomic64_t *v)
+raw_atomic64_inc(atomic64_t *v)
{
- arch_atomic64_add(1, v);
+ raw_atomic64_add(1, v);
}
-#define arch_atomic64_inc arch_atomic64_inc
#endif
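+/*
+ * raw_atomic64_inc_return*() and raw_atomic64_fetch_inc*() prefer native
+ * arch forms (with fences around relaxed variants as needed) and fall
+ * back to raw_atomic64_add_return*() / raw_atomic64_fetch_add*() with an
+ * increment of 1.
+ */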
-#ifndef arch_atomic64_inc_return_relaxed
-#ifdef arch_atomic64_inc_return
-#define arch_atomic64_inc_return_acquire arch_atomic64_inc_return
-#define arch_atomic64_inc_return_release arch_atomic64_inc_return
-#define arch_atomic64_inc_return_relaxed arch_atomic64_inc_return
-#endif /* arch_atomic64_inc_return */
-
-#ifndef arch_atomic64_inc_return
+#if defined(arch_atomic64_inc_return)
+#define raw_atomic64_inc_return arch_atomic64_inc_return
+#elif defined(arch_atomic64_inc_return_relaxed)
static __always_inline s64
-arch_atomic64_inc_return(atomic64_t *v)
+raw_atomic64_inc_return(atomic64_t *v)
{
- return arch_atomic64_add_return(1, v);
+ s64 ret;
+ __atomic_pre_full_fence();
+ ret = arch_atomic64_inc_return_relaxed(v);
+ __atomic_post_full_fence();
+ return ret;
}
-#define arch_atomic64_inc_return arch_atomic64_inc_return
-#endif
-
-#ifndef arch_atomic64_inc_return_acquire
+#else
static __always_inline s64
-arch_atomic64_inc_return_acquire(atomic64_t *v)
+raw_atomic64_inc_return(atomic64_t *v)
{
- return arch_atomic64_add_return_acquire(1, v);
+ return raw_atomic64_add_return(1, v);
}
-#define arch_atomic64_inc_return_acquire arch_atomic64_inc_return_acquire
#endif
-#ifndef arch_atomic64_inc_return_release
+#if defined(arch_atomic64_inc_return_acquire)
+#define raw_atomic64_inc_return_acquire arch_atomic64_inc_return_acquire
+#elif defined(arch_atomic64_inc_return_relaxed)
static __always_inline s64
-arch_atomic64_inc_return_release(atomic64_t *v)
+raw_atomic64_inc_return_acquire(atomic64_t *v)
{
- return arch_atomic64_add_return_release(1, v);
+ s64 ret = arch_atomic64_inc_return_relaxed(v);
+ __atomic_acquire_fence();
+ return ret;
}
-#define arch_atomic64_inc_return_release arch_atomic64_inc_return_release
-#endif
-
-#ifndef arch_atomic64_inc_return_relaxed
+#elif defined(arch_atomic64_inc_return)
+#define raw_atomic64_inc_return_acquire arch_atomic64_inc_return
+#else
static __always_inline s64
-arch_atomic64_inc_return_relaxed(atomic64_t *v)
+raw_atomic64_inc_return_acquire(atomic64_t *v)
{
- return arch_atomic64_add_return_relaxed(1, v);
+ return raw_atomic64_add_return_acquire(1, v);
}
-#define arch_atomic64_inc_return_relaxed arch_atomic64_inc_return_relaxed
#endif
-#else /* arch_atomic64_inc_return_relaxed */
-
-#ifndef arch_atomic64_inc_return_acquire
+#if defined(arch_atomic64_inc_return_release)
+#define raw_atomic64_inc_return_release arch_atomic64_inc_return_release
+#elif defined(arch_atomic64_inc_return_relaxed)
static __always_inline s64
-arch_atomic64_inc_return_acquire(atomic64_t *v)
+raw_atomic64_inc_return_release(atomic64_t *v)
{
- s64 ret = arch_atomic64_inc_return_relaxed(v);
- __atomic_acquire_fence();
- return ret;
+ __atomic_release_fence();
+ return arch_atomic64_inc_return_relaxed(v);
+}
+#elif defined(arch_atomic64_inc_return)
+#define raw_atomic64_inc_return_release arch_atomic64_inc_return
+#else
+static __always_inline s64
+raw_atomic64_inc_return_release(atomic64_t *v)
+{
+ return raw_atomic64_add_return_release(1, v);
}
-#define arch_atomic64_inc_return_acquire arch_atomic64_inc_return_acquire
#endif
-#ifndef arch_atomic64_inc_return_release
+#if defined(arch_atomic64_inc_return_relaxed)
+#define raw_atomic64_inc_return_relaxed arch_atomic64_inc_return_relaxed
+#elif defined(arch_atomic64_inc_return)
+#define raw_atomic64_inc_return_relaxed arch_atomic64_inc_return
+#else
static __always_inline s64
-arch_atomic64_inc_return_release(atomic64_t *v)
+raw_atomic64_inc_return_relaxed(atomic64_t *v)
{
- __atomic_release_fence();
- return arch_atomic64_inc_return_relaxed(v);
+ return raw_atomic64_add_return_relaxed(1, v);
}
-#define arch_atomic64_inc_return_release arch_atomic64_inc_return_release
#endif
-#ifndef arch_atomic64_inc_return
+#if defined(arch_atomic64_fetch_inc)
+#define raw_atomic64_fetch_inc arch_atomic64_fetch_inc
+#elif defined(arch_atomic64_fetch_inc_relaxed)
static __always_inline s64
-arch_atomic64_inc_return(atomic64_t *v)
+raw_atomic64_fetch_inc(atomic64_t *v)
{
s64 ret;
__atomic_pre_full_fence();
- ret = arch_atomic64_inc_return_relaxed(v);
+ ret = arch_atomic64_fetch_inc_relaxed(v);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic64_inc_return arch_atomic64_inc_return
-#endif
-
-#endif /* arch_atomic64_inc_return_relaxed */
-
-#ifndef arch_atomic64_fetch_inc_relaxed
-#ifdef arch_atomic64_fetch_inc
-#define arch_atomic64_fetch_inc_acquire arch_atomic64_fetch_inc
-#define arch_atomic64_fetch_inc_release arch_atomic64_fetch_inc
-#define arch_atomic64_fetch_inc_relaxed arch_atomic64_fetch_inc
-#endif /* arch_atomic64_fetch_inc */
-
-#ifndef arch_atomic64_fetch_inc
+#else
static __always_inline s64
-arch_atomic64_fetch_inc(atomic64_t *v)
+raw_atomic64_fetch_inc(atomic64_t *v)
{
- return arch_atomic64_fetch_add(1, v);
+ return raw_atomic64_fetch_add(1, v);
}
-#define arch_atomic64_fetch_inc arch_atomic64_fetch_inc
#endif
-#ifndef arch_atomic64_fetch_inc_acquire
+#if defined(arch_atomic64_fetch_inc_acquire)
+#define raw_atomic64_fetch_inc_acquire arch_atomic64_fetch_inc_acquire
+#elif defined(arch_atomic64_fetch_inc_relaxed)
+static __always_inline s64
+raw_atomic64_fetch_inc_acquire(atomic64_t *v)
+{
+ s64 ret = arch_atomic64_fetch_inc_relaxed(v);
+ __atomic_acquire_fence();
+ return ret;
+}
+#elif defined(arch_atomic64_fetch_inc)
+#define raw_atomic64_fetch_inc_acquire arch_atomic64_fetch_inc
+#else
static __always_inline s64
-arch_atomic64_fetch_inc_acquire(atomic64_t *v)
+raw_atomic64_fetch_inc_acquire(atomic64_t *v)
{
- return arch_atomic64_fetch_add_acquire(1, v);
+ return raw_atomic64_fetch_add_acquire(1, v);
}
-#define arch_atomic64_fetch_inc_acquire arch_atomic64_fetch_inc_acquire
#endif
-#ifndef arch_atomic64_fetch_inc_release
+#if defined(arch_atomic64_fetch_inc_release)
+#define raw_atomic64_fetch_inc_release arch_atomic64_fetch_inc_release
+#elif defined(arch_atomic64_fetch_inc_relaxed)
static __always_inline s64
-arch_atomic64_fetch_inc_release(atomic64_t *v)
+raw_atomic64_fetch_inc_release(atomic64_t *v)
{
- return arch_atomic64_fetch_add_release(1, v);
+ __atomic_release_fence();
+ return arch_atomic64_fetch_inc_relaxed(v);
}
-#define arch_atomic64_fetch_inc_release arch_atomic64_fetch_inc_release
-#endif
-
-#ifndef arch_atomic64_fetch_inc_relaxed
+#elif defined(arch_atomic64_fetch_inc)
+#define raw_atomic64_fetch_inc_release arch_atomic64_fetch_inc
+#else
static __always_inline s64
-arch_atomic64_fetch_inc_relaxed(atomic64_t *v)
+raw_atomic64_fetch_inc_release(atomic64_t *v)
{
- return arch_atomic64_fetch_add_relaxed(1, v);
+ return raw_atomic64_fetch_add_release(1, v);
}
-#define arch_atomic64_fetch_inc_relaxed arch_atomic64_fetch_inc_relaxed
#endif
-#else /* arch_atomic64_fetch_inc_relaxed */
-
-#ifndef arch_atomic64_fetch_inc_acquire
+#if defined(arch_atomic64_fetch_inc_relaxed)
+#define raw_atomic64_fetch_inc_relaxed arch_atomic64_fetch_inc_relaxed
+#elif defined(arch_atomic64_fetch_inc)
+#define raw_atomic64_fetch_inc_relaxed arch_atomic64_fetch_inc
+#else
static __always_inline s64
-arch_atomic64_fetch_inc_acquire(atomic64_t *v)
+raw_atomic64_fetch_inc_relaxed(atomic64_t *v)
{
- s64 ret = arch_atomic64_fetch_inc_relaxed(v);
- __atomic_acquire_fence();
- return ret;
+ return raw_atomic64_fetch_add_relaxed(1, v);
}
-#define arch_atomic64_fetch_inc_acquire arch_atomic64_fetch_inc_acquire
#endif
-#ifndef arch_atomic64_fetch_inc_release
-static __always_inline s64
-arch_atomic64_fetch_inc_release(atomic64_t *v)
+#if defined(arch_atomic64_dec)
+#define raw_atomic64_dec arch_atomic64_dec
+#else
+static __always_inline void
+raw_atomic64_dec(atomic64_t *v)
{
- __atomic_release_fence();
- return arch_atomic64_fetch_inc_relaxed(v);
+ raw_atomic64_sub(1, v);
}
-#define arch_atomic64_fetch_inc_release arch_atomic64_fetch_inc_release
#endif
-#ifndef arch_atomic64_fetch_inc
+#if defined(arch_atomic64_dec_return)
+#define raw_atomic64_dec_return arch_atomic64_dec_return
+#elif defined(arch_atomic64_dec_return_relaxed)
static __always_inline s64
-arch_atomic64_fetch_inc(atomic64_t *v)
+raw_atomic64_dec_return(atomic64_t *v)
{
s64 ret;
__atomic_pre_full_fence();
- ret = arch_atomic64_fetch_inc_relaxed(v);
+ ret = arch_atomic64_dec_return_relaxed(v);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic64_fetch_inc arch_atomic64_fetch_inc
-#endif
-
-#endif /* arch_atomic64_fetch_inc_relaxed */
-
-#ifndef arch_atomic64_dec
-static __always_inline void
-arch_atomic64_dec(atomic64_t *v)
-{
- arch_atomic64_sub(1, v);
-}
-#define arch_atomic64_dec arch_atomic64_dec
-#endif
-
-#ifndef arch_atomic64_dec_return_relaxed
-#ifdef arch_atomic64_dec_return
-#define arch_atomic64_dec_return_acquire arch_atomic64_dec_return
-#define arch_atomic64_dec_return_release arch_atomic64_dec_return
-#define arch_atomic64_dec_return_relaxed arch_atomic64_dec_return
-#endif /* arch_atomic64_dec_return */
-
-#ifndef arch_atomic64_dec_return
+#else
static __always_inline s64
-arch_atomic64_dec_return(atomic64_t *v)
+raw_atomic64_dec_return(atomic64_t *v)
{
- return arch_atomic64_sub_return(1, v);
+ return raw_atomic64_sub_return(1, v);
}
-#define arch_atomic64_dec_return arch_atomic64_dec_return
#endif
-#ifndef arch_atomic64_dec_return_acquire
+#if defined(arch_atomic64_dec_return_acquire)
+#define raw_atomic64_dec_return_acquire arch_atomic64_dec_return_acquire
+#elif defined(arch_atomic64_dec_return_relaxed)
static __always_inline s64
-arch_atomic64_dec_return_acquire(atomic64_t *v)
+raw_atomic64_dec_return_acquire(atomic64_t *v)
{
- return arch_atomic64_sub_return_acquire(1, v);
+ s64 ret = arch_atomic64_dec_return_relaxed(v);
+ __atomic_acquire_fence();
+ return ret;
}
-#define arch_atomic64_dec_return_acquire arch_atomic64_dec_return_acquire
-#endif
-
-#ifndef arch_atomic64_dec_return_release
+#elif defined(arch_atomic64_dec_return)
+#define raw_atomic64_dec_return_acquire arch_atomic64_dec_return
+#else
static __always_inline s64
-arch_atomic64_dec_return_release(atomic64_t *v)
+raw_atomic64_dec_return_acquire(atomic64_t *v)
{
- return arch_atomic64_sub_return_release(1, v);
+ return raw_atomic64_sub_return_acquire(1, v);
}
-#define arch_atomic64_dec_return_release arch_atomic64_dec_return_release
#endif
-#ifndef arch_atomic64_dec_return_relaxed
+#if defined(arch_atomic64_dec_return_release)
+#define raw_atomic64_dec_return_release arch_atomic64_dec_return_release
+#elif defined(arch_atomic64_dec_return_relaxed)
static __always_inline s64
-arch_atomic64_dec_return_relaxed(atomic64_t *v)
+raw_atomic64_dec_return_release(atomic64_t *v)
{
- return arch_atomic64_sub_return_relaxed(1, v);
+ __atomic_release_fence();
+ return arch_atomic64_dec_return_relaxed(v);
}
-#define arch_atomic64_dec_return_relaxed arch_atomic64_dec_return_relaxed
-#endif
-
-#else /* arch_atomic64_dec_return_relaxed */
-
-#ifndef arch_atomic64_dec_return_acquire
+#elif defined(arch_atomic64_dec_return)
+#define raw_atomic64_dec_return_release arch_atomic64_dec_return
+#else
static __always_inline s64
-arch_atomic64_dec_return_acquire(atomic64_t *v)
+raw_atomic64_dec_return_release(atomic64_t *v)
{
- s64 ret = arch_atomic64_dec_return_relaxed(v);
- __atomic_acquire_fence();
- return ret;
+ return raw_atomic64_sub_return_release(1, v);
}
-#define arch_atomic64_dec_return_acquire arch_atomic64_dec_return_acquire
#endif
-#ifndef arch_atomic64_dec_return_release
+#if defined(arch_atomic64_dec_return_relaxed)
+#define raw_atomic64_dec_return_relaxed arch_atomic64_dec_return_relaxed
+#elif defined(arch_atomic64_dec_return)
+#define raw_atomic64_dec_return_relaxed arch_atomic64_dec_return
+#else
static __always_inline s64
-arch_atomic64_dec_return_release(atomic64_t *v)
+raw_atomic64_dec_return_relaxed(atomic64_t *v)
{
- __atomic_release_fence();
- return arch_atomic64_dec_return_relaxed(v);
+ return raw_atomic64_sub_return_relaxed(1, v);
}
-#define arch_atomic64_dec_return_release arch_atomic64_dec_return_release
#endif
-#ifndef arch_atomic64_dec_return
+#if defined(arch_atomic64_fetch_dec)
+#define raw_atomic64_fetch_dec arch_atomic64_fetch_dec
+#elif defined(arch_atomic64_fetch_dec_relaxed)
static __always_inline s64
-arch_atomic64_dec_return(atomic64_t *v)
+raw_atomic64_fetch_dec(atomic64_t *v)
{
s64 ret;
__atomic_pre_full_fence();
- ret = arch_atomic64_dec_return_relaxed(v);
+ ret = arch_atomic64_fetch_dec_relaxed(v);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic64_dec_return arch_atomic64_dec_return
-#endif
-
-#endif /* arch_atomic64_dec_return_relaxed */
-
-#ifndef arch_atomic64_fetch_dec_relaxed
-#ifdef arch_atomic64_fetch_dec
-#define arch_atomic64_fetch_dec_acquire arch_atomic64_fetch_dec
-#define arch_atomic64_fetch_dec_release arch_atomic64_fetch_dec
-#define arch_atomic64_fetch_dec_relaxed arch_atomic64_fetch_dec
-#endif /* arch_atomic64_fetch_dec */
-
-#ifndef arch_atomic64_fetch_dec
+#else
static __always_inline s64
-arch_atomic64_fetch_dec(atomic64_t *v)
+raw_atomic64_fetch_dec(atomic64_t *v)
{
- return arch_atomic64_fetch_sub(1, v);
+ return raw_atomic64_fetch_sub(1, v);
}
-#define arch_atomic64_fetch_dec arch_atomic64_fetch_dec
#endif
-#ifndef arch_atomic64_fetch_dec_acquire
+#if defined(arch_atomic64_fetch_dec_acquire)
+#define raw_atomic64_fetch_dec_acquire arch_atomic64_fetch_dec_acquire
+#elif defined(arch_atomic64_fetch_dec_relaxed)
static __always_inline s64
-arch_atomic64_fetch_dec_acquire(atomic64_t *v)
+raw_atomic64_fetch_dec_acquire(atomic64_t *v)
{
- return arch_atomic64_fetch_sub_acquire(1, v);
+ s64 ret = arch_atomic64_fetch_dec_relaxed(v);
+ __atomic_acquire_fence();
+ return ret;
}
-#define arch_atomic64_fetch_dec_acquire arch_atomic64_fetch_dec_acquire
-#endif
-
-#ifndef arch_atomic64_fetch_dec_release
+#elif defined(arch_atomic64_fetch_dec)
+#define raw_atomic64_fetch_dec_acquire arch_atomic64_fetch_dec
+#else
static __always_inline s64
-arch_atomic64_fetch_dec_release(atomic64_t *v)
+raw_atomic64_fetch_dec_acquire(atomic64_t *v)
{
- return arch_atomic64_fetch_sub_release(1, v);
+ return raw_atomic64_fetch_sub_acquire(1, v);
}
-#define arch_atomic64_fetch_dec_release arch_atomic64_fetch_dec_release
#endif
-#ifndef arch_atomic64_fetch_dec_relaxed
+#if defined(arch_atomic64_fetch_dec_release)
+#define raw_atomic64_fetch_dec_release arch_atomic64_fetch_dec_release
+#elif defined(arch_atomic64_fetch_dec_relaxed)
static __always_inline s64
-arch_atomic64_fetch_dec_relaxed(atomic64_t *v)
+raw_atomic64_fetch_dec_release(atomic64_t *v)
{
- return arch_atomic64_fetch_sub_relaxed(1, v);
+ __atomic_release_fence();
+ return arch_atomic64_fetch_dec_relaxed(v);
}
-#define arch_atomic64_fetch_dec_relaxed arch_atomic64_fetch_dec_relaxed
-#endif
-
-#else /* arch_atomic64_fetch_dec_relaxed */
-
-#ifndef arch_atomic64_fetch_dec_acquire
+#elif defined(arch_atomic64_fetch_dec)
+#define raw_atomic64_fetch_dec_release arch_atomic64_fetch_dec
+#else
static __always_inline s64
-arch_atomic64_fetch_dec_acquire(atomic64_t *v)
+raw_atomic64_fetch_dec_release(atomic64_t *v)
{
- s64 ret = arch_atomic64_fetch_dec_relaxed(v);
- __atomic_acquire_fence();
- return ret;
+ return raw_atomic64_fetch_sub_release(1, v);
}
-#define arch_atomic64_fetch_dec_acquire arch_atomic64_fetch_dec_acquire
#endif
-#ifndef arch_atomic64_fetch_dec_release
+#if defined(arch_atomic64_fetch_dec_relaxed)
+#define raw_atomic64_fetch_dec_relaxed arch_atomic64_fetch_dec_relaxed
+#elif defined(arch_atomic64_fetch_dec)
+#define raw_atomic64_fetch_dec_relaxed arch_atomic64_fetch_dec
+#else
static __always_inline s64
-arch_atomic64_fetch_dec_release(atomic64_t *v)
+raw_atomic64_fetch_dec_relaxed(atomic64_t *v)
{
- __atomic_release_fence();
- return arch_atomic64_fetch_dec_relaxed(v);
+ return raw_atomic64_fetch_sub_relaxed(1, v);
}
-#define arch_atomic64_fetch_dec_release arch_atomic64_fetch_dec_release
#endif
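+/*
+ * arch_atomic64_and() is mandatory, and arch_atomic64_fetch_and() must
+ * exist in at least a relaxed form (hence the #error below), while
+ * raw_atomic64_andnot*() can always fall back to the and ops with ~i.
+ */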
-#ifndef arch_atomic64_fetch_dec
+#define raw_atomic64_and arch_atomic64_and
+
+#if defined(arch_atomic64_fetch_and)
+#define raw_atomic64_fetch_and arch_atomic64_fetch_and
+#elif defined(arch_atomic64_fetch_and_relaxed)
static __always_inline s64
-arch_atomic64_fetch_dec(atomic64_t *v)
+raw_atomic64_fetch_and(s64 i, atomic64_t *v)
{
s64 ret;
__atomic_pre_full_fence();
- ret = arch_atomic64_fetch_dec_relaxed(v);
+ ret = arch_atomic64_fetch_and_relaxed(i, v);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic64_fetch_dec arch_atomic64_fetch_dec
+#else
+#error "Unable to define raw_atomic64_fetch_and"
#endif
-#endif /* arch_atomic64_fetch_dec_relaxed */
-
-#ifndef arch_atomic64_fetch_and_relaxed
-#define arch_atomic64_fetch_and_acquire arch_atomic64_fetch_and
-#define arch_atomic64_fetch_and_release arch_atomic64_fetch_and
-#define arch_atomic64_fetch_and_relaxed arch_atomic64_fetch_and
-#else /* arch_atomic64_fetch_and_relaxed */
-
-#ifndef arch_atomic64_fetch_and_acquire
+#if defined(arch_atomic64_fetch_and_acquire)
+#define raw_atomic64_fetch_and_acquire arch_atomic64_fetch_and_acquire
+#elif defined(arch_atomic64_fetch_and_relaxed)
static __always_inline s64
-arch_atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
+raw_atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
{
s64 ret = arch_atomic64_fetch_and_relaxed(i, v);
__atomic_acquire_fence();
return ret;
}
-#define arch_atomic64_fetch_and_acquire arch_atomic64_fetch_and_acquire
+#elif defined(arch_atomic64_fetch_and)
+#define raw_atomic64_fetch_and_acquire arch_atomic64_fetch_and
+#else
+#error "Unable to define raw_atomic64_fetch_and_acquire"
#endif
-#ifndef arch_atomic64_fetch_and_release
+#if defined(arch_atomic64_fetch_and_release)
+#define raw_atomic64_fetch_and_release arch_atomic64_fetch_and_release
+#elif defined(arch_atomic64_fetch_and_relaxed)
static __always_inline s64
-arch_atomic64_fetch_and_release(s64 i, atomic64_t *v)
+raw_atomic64_fetch_and_release(s64 i, atomic64_t *v)
{
__atomic_release_fence();
return arch_atomic64_fetch_and_relaxed(i, v);
}
-#define arch_atomic64_fetch_and_release arch_atomic64_fetch_and_release
+#elif defined(arch_atomic64_fetch_and)
+#define raw_atomic64_fetch_and_release arch_atomic64_fetch_and
+#else
+#error "Unable to define raw_atomic64_fetch_and_release"
#endif
-#ifndef arch_atomic64_fetch_and
-static __always_inline s64
-arch_atomic64_fetch_and(s64 i, atomic64_t *v)
-{
- s64 ret;
- __atomic_pre_full_fence();
- ret = arch_atomic64_fetch_and_relaxed(i, v);
- __atomic_post_full_fence();
- return ret;
-}
-#define arch_atomic64_fetch_and arch_atomic64_fetch_and
+#if defined(arch_atomic64_fetch_and_relaxed)
+#define raw_atomic64_fetch_and_relaxed arch_atomic64_fetch_and_relaxed
+#elif defined(arch_atomic64_fetch_and)
+#define raw_atomic64_fetch_and_relaxed arch_atomic64_fetch_and
+#else
+#error "Unable to define raw_atomic64_fetch_and_relaxed"
#endif
-#endif /* arch_atomic64_fetch_and_relaxed */
-
-#ifndef arch_atomic64_andnot
+#if defined(arch_atomic64_andnot)
+#define raw_atomic64_andnot arch_atomic64_andnot
+#else
static __always_inline void
-arch_atomic64_andnot(s64 i, atomic64_t *v)
+raw_atomic64_andnot(s64 i, atomic64_t *v)
{
- arch_atomic64_and(~i, v);
+ raw_atomic64_and(~i, v);
}
-#define arch_atomic64_andnot arch_atomic64_andnot
#endif
-#ifndef arch_atomic64_fetch_andnot_relaxed
-#ifdef arch_atomic64_fetch_andnot
-#define arch_atomic64_fetch_andnot_acquire arch_atomic64_fetch_andnot
-#define arch_atomic64_fetch_andnot_release arch_atomic64_fetch_andnot
-#define arch_atomic64_fetch_andnot_relaxed arch_atomic64_fetch_andnot
-#endif /* arch_atomic64_fetch_andnot */
-
-#ifndef arch_atomic64_fetch_andnot
+#if defined(arch_atomic64_fetch_andnot)
+#define raw_atomic64_fetch_andnot arch_atomic64_fetch_andnot
+#elif defined(arch_atomic64_fetch_andnot_relaxed)
static __always_inline s64
-arch_atomic64_fetch_andnot(s64 i, atomic64_t *v)
+raw_atomic64_fetch_andnot(s64 i, atomic64_t *v)
{
- return arch_atomic64_fetch_and(~i, v);
+ s64 ret;
+ __atomic_pre_full_fence();
+ ret = arch_atomic64_fetch_andnot_relaxed(i, v);
+ __atomic_post_full_fence();
+ return ret;
}
-#define arch_atomic64_fetch_andnot arch_atomic64_fetch_andnot
-#endif
-
-#ifndef arch_atomic64_fetch_andnot_acquire
+#else
static __always_inline s64
-arch_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
+raw_atomic64_fetch_andnot(s64 i, atomic64_t *v)
{
- return arch_atomic64_fetch_and_acquire(~i, v);
+ return raw_atomic64_fetch_and(~i, v);
}
-#define arch_atomic64_fetch_andnot_acquire arch_atomic64_fetch_andnot_acquire
#endif
-#ifndef arch_atomic64_fetch_andnot_release
+#if defined(arch_atomic64_fetch_andnot_acquire)
+#define raw_atomic64_fetch_andnot_acquire arch_atomic64_fetch_andnot_acquire
+#elif defined(arch_atomic64_fetch_andnot_relaxed)
static __always_inline s64
-arch_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
+raw_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
{
- return arch_atomic64_fetch_and_release(~i, v);
+ s64 ret = arch_atomic64_fetch_andnot_relaxed(i, v);
+ __atomic_acquire_fence();
+ return ret;
}
-#define arch_atomic64_fetch_andnot_release arch_atomic64_fetch_andnot_release
-#endif
-
-#ifndef arch_atomic64_fetch_andnot_relaxed
+#elif defined(arch_atomic64_fetch_andnot)
+#define raw_atomic64_fetch_andnot_acquire arch_atomic64_fetch_andnot
+#else
static __always_inline s64
-arch_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
+raw_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
{
- return arch_atomic64_fetch_and_relaxed(~i, v);
+ return raw_atomic64_fetch_and_acquire(~i, v);
}
-#define arch_atomic64_fetch_andnot_relaxed arch_atomic64_fetch_andnot_relaxed
#endif
-#else /* arch_atomic64_fetch_andnot_relaxed */
-
-#ifndef arch_atomic64_fetch_andnot_acquire
+#if defined(arch_atomic64_fetch_andnot_release)
+#define raw_atomic64_fetch_andnot_release arch_atomic64_fetch_andnot_release
+#elif defined(arch_atomic64_fetch_andnot_relaxed)
static __always_inline s64
-arch_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
+raw_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
{
- s64 ret = arch_atomic64_fetch_andnot_relaxed(i, v);
- __atomic_acquire_fence();
- return ret;
+ __atomic_release_fence();
+ return arch_atomic64_fetch_andnot_relaxed(i, v);
+}
+#elif defined(arch_atomic64_fetch_andnot)
+#define raw_atomic64_fetch_andnot_release arch_atomic64_fetch_andnot
+#else
+static __always_inline s64
+raw_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
+{
+ return raw_atomic64_fetch_and_release(~i, v);
}
-#define arch_atomic64_fetch_andnot_acquire arch_atomic64_fetch_andnot_acquire
#endif
-#ifndef arch_atomic64_fetch_andnot_release
+#if defined(arch_atomic64_fetch_andnot_relaxed)
+#define raw_atomic64_fetch_andnot_relaxed arch_atomic64_fetch_andnot_relaxed
+#elif defined(arch_atomic64_fetch_andnot)
+#define raw_atomic64_fetch_andnot_relaxed arch_atomic64_fetch_andnot
+#else
static __always_inline s64
-arch_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
+raw_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
{
- __atomic_release_fence();
- return arch_atomic64_fetch_andnot_relaxed(i, v);
+ return raw_atomic64_fetch_and_relaxed(~i, v);
}
-#define arch_atomic64_fetch_andnot_release arch_atomic64_fetch_andnot_release
#endif
-#ifndef arch_atomic64_fetch_andnot
+#define raw_atomic64_or arch_atomic64_or
+
+#if defined(arch_atomic64_fetch_or)
+#define raw_atomic64_fetch_or arch_atomic64_fetch_or
+#elif defined(arch_atomic64_fetch_or_relaxed)
static __always_inline s64
-arch_atomic64_fetch_andnot(s64 i, atomic64_t *v)
+raw_atomic64_fetch_or(s64 i, atomic64_t *v)
{
s64 ret;
__atomic_pre_full_fence();
- ret = arch_atomic64_fetch_andnot_relaxed(i, v);
+ ret = arch_atomic64_fetch_or_relaxed(i, v);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic64_fetch_andnot arch_atomic64_fetch_andnot
+#else
+#error "Unable to define raw_atomic64_fetch_or"
#endif
-#endif /* arch_atomic64_fetch_andnot_relaxed */
-
-#ifndef arch_atomic64_fetch_or_relaxed
-#define arch_atomic64_fetch_or_acquire arch_atomic64_fetch_or
-#define arch_atomic64_fetch_or_release arch_atomic64_fetch_or
-#define arch_atomic64_fetch_or_relaxed arch_atomic64_fetch_or
-#else /* arch_atomic64_fetch_or_relaxed */
-
-#ifndef arch_atomic64_fetch_or_acquire
+#if defined(arch_atomic64_fetch_or_acquire)
+#define raw_atomic64_fetch_or_acquire arch_atomic64_fetch_or_acquire
+#elif defined(arch_atomic64_fetch_or_relaxed)
static __always_inline s64
-arch_atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
+raw_atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
{
s64 ret = arch_atomic64_fetch_or_relaxed(i, v);
__atomic_acquire_fence();
return ret;
}
-#define arch_atomic64_fetch_or_acquire arch_atomic64_fetch_or_acquire
+#elif defined(arch_atomic64_fetch_or)
+#define raw_atomic64_fetch_or_acquire arch_atomic64_fetch_or
+#else
+#error "Unable to define raw_atomic64_fetch_or_acquire"
#endif
-#ifndef arch_atomic64_fetch_or_release
+#if defined(arch_atomic64_fetch_or_release)
+#define raw_atomic64_fetch_or_release arch_atomic64_fetch_or_release
+#elif defined(arch_atomic64_fetch_or_relaxed)
static __always_inline s64
-arch_atomic64_fetch_or_release(s64 i, atomic64_t *v)
+raw_atomic64_fetch_or_release(s64 i, atomic64_t *v)
{
__atomic_release_fence();
return arch_atomic64_fetch_or_relaxed(i, v);
}
-#define arch_atomic64_fetch_or_release arch_atomic64_fetch_or_release
+#elif defined(arch_atomic64_fetch_or)
+#define raw_atomic64_fetch_or_release arch_atomic64_fetch_or
+#else
+#error "Unable to define raw_atomic64_fetch_or_release"
+#endif
+
+#if defined(arch_atomic64_fetch_or_relaxed)
+#define raw_atomic64_fetch_or_relaxed arch_atomic64_fetch_or_relaxed
+#elif defined(arch_atomic64_fetch_or)
+#define raw_atomic64_fetch_or_relaxed arch_atomic64_fetch_or
+#else
+#error "Unable to define raw_atomic64_fetch_or_relaxed"
#endif
-#ifndef arch_atomic64_fetch_or
+#define raw_atomic64_xor arch_atomic64_xor
+
+#if defined(arch_atomic64_fetch_xor)
+#define raw_atomic64_fetch_xor arch_atomic64_fetch_xor
+#elif defined(arch_atomic64_fetch_xor_relaxed)
static __always_inline s64
-arch_atomic64_fetch_or(s64 i, atomic64_t *v)
+raw_atomic64_fetch_xor(s64 i, atomic64_t *v)
{
s64 ret;
__atomic_pre_full_fence();
- ret = arch_atomic64_fetch_or_relaxed(i, v);
+ ret = arch_atomic64_fetch_xor_relaxed(i, v);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic64_fetch_or arch_atomic64_fetch_or
+#else
+#error "Unable to define raw_atomic64_fetch_xor"
#endif
-#endif /* arch_atomic64_fetch_or_relaxed */
-
-#ifndef arch_atomic64_fetch_xor_relaxed
-#define arch_atomic64_fetch_xor_acquire arch_atomic64_fetch_xor
-#define arch_atomic64_fetch_xor_release arch_atomic64_fetch_xor
-#define arch_atomic64_fetch_xor_relaxed arch_atomic64_fetch_xor
-#else /* arch_atomic64_fetch_xor_relaxed */
-
-#ifndef arch_atomic64_fetch_xor_acquire
+#if defined(arch_atomic64_fetch_xor_acquire)
+#define raw_atomic64_fetch_xor_acquire arch_atomic64_fetch_xor_acquire
+#elif defined(arch_atomic64_fetch_xor_relaxed)
static __always_inline s64
-arch_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
+raw_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
{
s64 ret = arch_atomic64_fetch_xor_relaxed(i, v);
__atomic_acquire_fence();
return ret;
}
-#define arch_atomic64_fetch_xor_acquire arch_atomic64_fetch_xor_acquire
+#elif defined(arch_atomic64_fetch_xor)
+#define raw_atomic64_fetch_xor_acquire arch_atomic64_fetch_xor
+#else
+#error "Unable to define raw_atomic64_fetch_xor_acquire"
#endif
-#ifndef arch_atomic64_fetch_xor_release
+#if defined(arch_atomic64_fetch_xor_release)
+#define raw_atomic64_fetch_xor_release arch_atomic64_fetch_xor_release
+#elif defined(arch_atomic64_fetch_xor_relaxed)
static __always_inline s64
-arch_atomic64_fetch_xor_release(s64 i, atomic64_t *v)
+raw_atomic64_fetch_xor_release(s64 i, atomic64_t *v)
{
__atomic_release_fence();
return arch_atomic64_fetch_xor_relaxed(i, v);
}
-#define arch_atomic64_fetch_xor_release arch_atomic64_fetch_xor_release
+#elif defined(arch_atomic64_fetch_xor)
+#define raw_atomic64_fetch_xor_release arch_atomic64_fetch_xor
+#else
+#error "Unable to define raw_atomic64_fetch_xor_release"
+#endif
+
+#if defined(arch_atomic64_fetch_xor_relaxed)
+#define raw_atomic64_fetch_xor_relaxed arch_atomic64_fetch_xor_relaxed
+#elif defined(arch_atomic64_fetch_xor)
+#define raw_atomic64_fetch_xor_relaxed arch_atomic64_fetch_xor
+#else
+#error "Unable to define raw_atomic64_fetch_xor_relaxed"
#endif
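+/*
+ * raw_atomic64_xchg*() and raw_atomic64_cmpxchg*() prefer native arch
+ * forms (with fences around relaxed variants as needed) and ultimately
+ * fall back to raw_xchg*() / raw_cmpxchg*() on the counter field.
+ */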
-#ifndef arch_atomic64_fetch_xor
+#if defined(arch_atomic64_xchg)
+#define raw_atomic64_xchg arch_atomic64_xchg
+#elif defined(arch_atomic64_xchg_relaxed)
static __always_inline s64
-arch_atomic64_fetch_xor(s64 i, atomic64_t *v)
+raw_atomic64_xchg(atomic64_t *v, s64 i)
{
s64 ret;
__atomic_pre_full_fence();
- ret = arch_atomic64_fetch_xor_relaxed(i, v);
+ ret = arch_atomic64_xchg_relaxed(v, i);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic64_fetch_xor arch_atomic64_fetch_xor
-#endif
-
-#endif /* arch_atomic64_fetch_xor_relaxed */
-
-#ifndef arch_atomic64_xchg_relaxed
-#ifdef arch_atomic64_xchg
-#define arch_atomic64_xchg_acquire arch_atomic64_xchg
-#define arch_atomic64_xchg_release arch_atomic64_xchg
-#define arch_atomic64_xchg_relaxed arch_atomic64_xchg
-#endif /* arch_atomic64_xchg */
-
-#ifndef arch_atomic64_xchg
+#else
static __always_inline s64
-arch_atomic64_xchg(atomic64_t *v, s64 new)
+raw_atomic64_xchg(atomic64_t *v, s64 new)
{
- return arch_xchg(&v->counter, new);
+ return raw_xchg(&v->counter, new);
}
-#define arch_atomic64_xchg arch_atomic64_xchg
#endif
-#ifndef arch_atomic64_xchg_acquire
+#if defined(arch_atomic64_xchg_acquire)
+#define raw_atomic64_xchg_acquire arch_atomic64_xchg_acquire
+#elif defined(arch_atomic64_xchg_relaxed)
static __always_inline s64
-arch_atomic64_xchg_acquire(atomic64_t *v, s64 new)
+raw_atomic64_xchg_acquire(atomic64_t *v, s64 i)
{
- return arch_xchg_acquire(&v->counter, new);
+ s64 ret = arch_atomic64_xchg_relaxed(v, i);
+ __atomic_acquire_fence();
+ return ret;
}
-#define arch_atomic64_xchg_acquire arch_atomic64_xchg_acquire
-#endif
-
-#ifndef arch_atomic64_xchg_release
+#elif defined(arch_atomic64_xchg)
+#define raw_atomic64_xchg_acquire arch_atomic64_xchg
+#else
static __always_inline s64
-arch_atomic64_xchg_release(atomic64_t *v, s64 new)
+raw_atomic64_xchg_acquire(atomic64_t *v, s64 new)
{
- return arch_xchg_release(&v->counter, new);
+ return raw_xchg_acquire(&v->counter, new);
}
-#define arch_atomic64_xchg_release arch_atomic64_xchg_release
#endif
-#ifndef arch_atomic64_xchg_relaxed
+#if defined(arch_atomic64_xchg_release)
+#define raw_atomic64_xchg_release arch_atomic64_xchg_release
+#elif defined(arch_atomic64_xchg_relaxed)
static __always_inline s64
-arch_atomic64_xchg_relaxed(atomic64_t *v, s64 new)
+raw_atomic64_xchg_release(atomic64_t *v, s64 i)
{
- return arch_xchg_relaxed(&v->counter, new);
+ __atomic_release_fence();
+ return arch_atomic64_xchg_relaxed(v, i);
}
-#define arch_atomic64_xchg_relaxed arch_atomic64_xchg_relaxed
-#endif
-
-#else /* arch_atomic64_xchg_relaxed */
-
-#ifndef arch_atomic64_xchg_acquire
+#elif defined(arch_atomic64_xchg)
+#define raw_atomic64_xchg_release arch_atomic64_xchg
+#else
static __always_inline s64
-arch_atomic64_xchg_acquire(atomic64_t *v, s64 i)
+raw_atomic64_xchg_release(atomic64_t *v, s64 new)
{
- s64 ret = arch_atomic64_xchg_relaxed(v, i);
- __atomic_acquire_fence();
- return ret;
+ return raw_xchg_release(&v->counter, new);
}
-#define arch_atomic64_xchg_acquire arch_atomic64_xchg_acquire
#endif
-#ifndef arch_atomic64_xchg_release
+#if defined(arch_atomic64_xchg_relaxed)
+#define raw_atomic64_xchg_relaxed arch_atomic64_xchg_relaxed
+#elif defined(arch_atomic64_xchg)
+#define raw_atomic64_xchg_relaxed arch_atomic64_xchg
+#else
static __always_inline s64
-arch_atomic64_xchg_release(atomic64_t *v, s64 i)
+raw_atomic64_xchg_relaxed(atomic64_t *v, s64 new)
{
- __atomic_release_fence();
- return arch_atomic64_xchg_relaxed(v, i);
+ return raw_xchg_relaxed(&v->counter, new);
}
-#define arch_atomic64_xchg_release arch_atomic64_xchg_release
#endif
-#ifndef arch_atomic64_xchg
+#if defined(arch_atomic64_cmpxchg)
+#define raw_atomic64_cmpxchg arch_atomic64_cmpxchg
+#elif defined(arch_atomic64_cmpxchg_relaxed)
static __always_inline s64
-arch_atomic64_xchg(atomic64_t *v, s64 i)
+raw_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
{
s64 ret;
__atomic_pre_full_fence();
- ret = arch_atomic64_xchg_relaxed(v, i);
+ ret = arch_atomic64_cmpxchg_relaxed(v, old, new);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic64_xchg arch_atomic64_xchg
-#endif
-
-#endif /* arch_atomic64_xchg_relaxed */
-
-#ifndef arch_atomic64_cmpxchg_relaxed
-#ifdef arch_atomic64_cmpxchg
-#define arch_atomic64_cmpxchg_acquire arch_atomic64_cmpxchg
-#define arch_atomic64_cmpxchg_release arch_atomic64_cmpxchg
-#define arch_atomic64_cmpxchg_relaxed arch_atomic64_cmpxchg
-#endif /* arch_atomic64_cmpxchg */
-
-#ifndef arch_atomic64_cmpxchg
+#else
static __always_inline s64
-arch_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
+raw_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
{
- return arch_cmpxchg(&v->counter, old, new);
+ return raw_cmpxchg(&v->counter, old, new);
}
-#define arch_atomic64_cmpxchg arch_atomic64_cmpxchg
#endif
-#ifndef arch_atomic64_cmpxchg_acquire
+#if defined(arch_atomic64_cmpxchg_acquire)
+#define raw_atomic64_cmpxchg_acquire arch_atomic64_cmpxchg_acquire
+#elif defined(arch_atomic64_cmpxchg_relaxed)
static __always_inline s64
-arch_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
+raw_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
{
- return arch_cmpxchg_acquire(&v->counter, old, new);
+ s64 ret = arch_atomic64_cmpxchg_relaxed(v, old, new);
+ __atomic_acquire_fence();
+ return ret;
}
-#define arch_atomic64_cmpxchg_acquire arch_atomic64_cmpxchg_acquire
-#endif
-
-#ifndef arch_atomic64_cmpxchg_release
+#elif defined(arch_atomic64_cmpxchg)
+#define raw_atomic64_cmpxchg_acquire arch_atomic64_cmpxchg
+#else
static __always_inline s64
-arch_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
+raw_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
{
- return arch_cmpxchg_release(&v->counter, old, new);
+ return raw_cmpxchg_acquire(&v->counter, old, new);
}
-#define arch_atomic64_cmpxchg_release arch_atomic64_cmpxchg_release
#endif
-#ifndef arch_atomic64_cmpxchg_relaxed
+#if defined(arch_atomic64_cmpxchg_release)
+#define raw_atomic64_cmpxchg_release arch_atomic64_cmpxchg_release
+#elif defined(arch_atomic64_cmpxchg_relaxed)
static __always_inline s64
-arch_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
+raw_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
{
- return arch_cmpxchg_relaxed(&v->counter, old, new);
+ __atomic_release_fence();
+ return arch_atomic64_cmpxchg_relaxed(v, old, new);
}
-#define arch_atomic64_cmpxchg_relaxed arch_atomic64_cmpxchg_relaxed
-#endif
-
-#else /* arch_atomic64_cmpxchg_relaxed */
-
-#ifndef arch_atomic64_cmpxchg_acquire
+#elif defined(arch_atomic64_cmpxchg)
+#define raw_atomic64_cmpxchg_release arch_atomic64_cmpxchg
+#else
static __always_inline s64
-arch_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
+raw_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
{
- s64 ret = arch_atomic64_cmpxchg_relaxed(v, old, new);
- __atomic_acquire_fence();
- return ret;
+ return raw_cmpxchg_release(&v->counter, old, new);
}
-#define arch_atomic64_cmpxchg_acquire arch_atomic64_cmpxchg_acquire
#endif
-#ifndef arch_atomic64_cmpxchg_release
+#if defined(arch_atomic64_cmpxchg_relaxed)
+#define raw_atomic64_cmpxchg_relaxed arch_atomic64_cmpxchg_relaxed
+#elif defined(arch_atomic64_cmpxchg)
+#define raw_atomic64_cmpxchg_relaxed arch_atomic64_cmpxchg
+#else
static __always_inline s64
-arch_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
+raw_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
{
- __atomic_release_fence();
- return arch_atomic64_cmpxchg_relaxed(v, old, new);
+ return raw_cmpxchg_relaxed(&v->counter, old, new);
}
-#define arch_atomic64_cmpxchg_release arch_atomic64_cmpxchg_release
#endif
-#ifndef arch_atomic64_cmpxchg
-static __always_inline s64
-arch_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
+#if defined(arch_atomic64_try_cmpxchg)
+#define raw_atomic64_try_cmpxchg arch_atomic64_try_cmpxchg
+#elif defined(arch_atomic64_try_cmpxchg_relaxed)
+static __always_inline bool
+raw_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
{
- s64 ret;
+ bool ret;
__atomic_pre_full_fence();
- ret = arch_atomic64_cmpxchg_relaxed(v, old, new);
+ ret = arch_atomic64_try_cmpxchg_relaxed(v, old, new);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic64_cmpxchg arch_atomic64_cmpxchg
-#endif
-
-#endif /* arch_atomic64_cmpxchg_relaxed */
-
-#ifndef arch_atomic64_try_cmpxchg_relaxed
-#ifdef arch_atomic64_try_cmpxchg
-#define arch_atomic64_try_cmpxchg_acquire arch_atomic64_try_cmpxchg
-#define arch_atomic64_try_cmpxchg_release arch_atomic64_try_cmpxchg
-#define arch_atomic64_try_cmpxchg_relaxed arch_atomic64_try_cmpxchg
-#endif /* arch_atomic64_try_cmpxchg */
-
-#ifndef arch_atomic64_try_cmpxchg
+#else
static __always_inline bool
-arch_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
+raw_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
{
s64 r, o = *old;
- r = arch_atomic64_cmpxchg(v, o, new);
+ r = raw_atomic64_cmpxchg(v, o, new);
if (unlikely(r != o))
*old = r;
return likely(r == o);
}
-#define arch_atomic64_try_cmpxchg arch_atomic64_try_cmpxchg
#endif
-#ifndef arch_atomic64_try_cmpxchg_acquire
+#if defined(arch_atomic64_try_cmpxchg_acquire)
+#define raw_atomic64_try_cmpxchg_acquire arch_atomic64_try_cmpxchg_acquire
+#elif defined(arch_atomic64_try_cmpxchg_relaxed)
+static __always_inline bool
+raw_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
+{
+ bool ret = arch_atomic64_try_cmpxchg_relaxed(v, old, new);
+ __atomic_acquire_fence();
+ return ret;
+}
+#elif defined(arch_atomic64_try_cmpxchg)
+#define raw_atomic64_try_cmpxchg_acquire arch_atomic64_try_cmpxchg
+#else
static __always_inline bool
-arch_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
+raw_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
{
s64 r, o = *old;
- r = arch_atomic64_cmpxchg_acquire(v, o, new);
+ r = raw_atomic64_cmpxchg_acquire(v, o, new);
if (unlikely(r != o))
*old = r;
return likely(r == o);
}
-#define arch_atomic64_try_cmpxchg_acquire arch_atomic64_try_cmpxchg_acquire
#endif
-#ifndef arch_atomic64_try_cmpxchg_release
+#if defined(arch_atomic64_try_cmpxchg_release)
+#define raw_atomic64_try_cmpxchg_release arch_atomic64_try_cmpxchg_release
+#elif defined(arch_atomic64_try_cmpxchg_relaxed)
+static __always_inline bool
+raw_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
+{
+ __atomic_release_fence();
+ return arch_atomic64_try_cmpxchg_relaxed(v, old, new);
+}
+#elif defined(arch_atomic64_try_cmpxchg)
+#define raw_atomic64_try_cmpxchg_release arch_atomic64_try_cmpxchg
+#else
static __always_inline bool
-arch_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
+raw_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
{
s64 r, o = *old;
- r = arch_atomic64_cmpxchg_release(v, o, new);
+ r = raw_atomic64_cmpxchg_release(v, o, new);
if (unlikely(r != o))
*old = r;
return likely(r == o);
}
-#define arch_atomic64_try_cmpxchg_release arch_atomic64_try_cmpxchg_release
#endif
-#ifndef arch_atomic64_try_cmpxchg_relaxed
+#if defined(arch_atomic64_try_cmpxchg_relaxed)
+#define raw_atomic64_try_cmpxchg_relaxed arch_atomic64_try_cmpxchg_relaxed
+#elif defined(arch_atomic64_try_cmpxchg)
+#define raw_atomic64_try_cmpxchg_relaxed arch_atomic64_try_cmpxchg
+#else
static __always_inline bool
-arch_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
+raw_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
{
s64 r, o = *old;
- r = arch_atomic64_cmpxchg_relaxed(v, o, new);
+ r = raw_atomic64_cmpxchg_relaxed(v, o, new);
if (unlikely(r != o))
*old = r;
return likely(r == o);
}
-#define arch_atomic64_try_cmpxchg_relaxed arch_atomic64_try_cmpxchg_relaxed
#endif
-#else /* arch_atomic64_try_cmpxchg_relaxed */
-
-#ifndef arch_atomic64_try_cmpxchg_acquire
-static __always_inline bool
-arch_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
-{
- bool ret = arch_atomic64_try_cmpxchg_relaxed(v, old, new);
- __atomic_acquire_fence();
- return ret;
-}
-#define arch_atomic64_try_cmpxchg_acquire arch_atomic64_try_cmpxchg_acquire
-#endif
-
-#ifndef arch_atomic64_try_cmpxchg_release
-static __always_inline bool
-arch_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
-{
- __atomic_release_fence();
- return arch_atomic64_try_cmpxchg_relaxed(v, old, new);
-}
-#define arch_atomic64_try_cmpxchg_release arch_atomic64_try_cmpxchg_release
-#endif
-
-#ifndef arch_atomic64_try_cmpxchg
-static __always_inline bool
-arch_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
-{
- bool ret;
- __atomic_pre_full_fence();
- ret = arch_atomic64_try_cmpxchg_relaxed(v, old, new);
- __atomic_post_full_fence();
- return ret;
-}
-#define arch_atomic64_try_cmpxchg arch_atomic64_try_cmpxchg
-#endif
-
-#endif /* arch_atomic64_try_cmpxchg_relaxed */
-
-#ifndef arch_atomic64_sub_and_test
+#if defined(arch_atomic64_sub_and_test)
+#define raw_atomic64_sub_and_test arch_atomic64_sub_and_test
+#else
static __always_inline bool
-arch_atomic64_sub_and_test(s64 i, atomic64_t *v)
+raw_atomic64_sub_and_test(s64 i, atomic64_t *v)
{
- return arch_atomic64_sub_return(i, v) == 0;
+ return raw_atomic64_sub_return(i, v) == 0;
}
-#define arch_atomic64_sub_and_test arch_atomic64_sub_and_test
#endif
-#ifndef arch_atomic64_dec_and_test
+#if defined(arch_atomic64_dec_and_test)
+#define raw_atomic64_dec_and_test arch_atomic64_dec_and_test
+#else
static __always_inline bool
-arch_atomic64_dec_and_test(atomic64_t *v)
+raw_atomic64_dec_and_test(atomic64_t *v)
{
- return arch_atomic64_dec_return(v) == 0;
+ return raw_atomic64_dec_return(v) == 0;
}
-#define arch_atomic64_dec_and_test arch_atomic64_dec_and_test
#endif
-#ifndef arch_atomic64_inc_and_test
+#if defined(arch_atomic64_inc_and_test)
+#define raw_atomic64_inc_and_test arch_atomic64_inc_and_test
+#else
static __always_inline bool
-arch_atomic64_inc_and_test(atomic64_t *v)
+raw_atomic64_inc_and_test(atomic64_t *v)
{
- return arch_atomic64_inc_return(v) == 0;
+ return raw_atomic64_inc_return(v) == 0;
}
-#define arch_atomic64_inc_and_test arch_atomic64_inc_and_test
#endif
-#ifndef arch_atomic64_add_negative_relaxed
-#ifdef arch_atomic64_add_negative
-#define arch_atomic64_add_negative_acquire arch_atomic64_add_negative
-#define arch_atomic64_add_negative_release arch_atomic64_add_negative
-#define arch_atomic64_add_negative_relaxed arch_atomic64_add_negative
-#endif /* arch_atomic64_add_negative */
-
-#ifndef arch_atomic64_add_negative
+#if defined(arch_atomic64_add_negative)
+#define raw_atomic64_add_negative arch_atomic64_add_negative
+#elif defined(arch_atomic64_add_negative_relaxed)
static __always_inline bool
-arch_atomic64_add_negative(s64 i, atomic64_t *v)
+raw_atomic64_add_negative(s64 i, atomic64_t *v)
{
- return arch_atomic64_add_return(i, v) < 0;
+ bool ret;
+ __atomic_pre_full_fence();
+ ret = arch_atomic64_add_negative_relaxed(i, v);
+ __atomic_post_full_fence();
+ return ret;
}
-#define arch_atomic64_add_negative arch_atomic64_add_negative
-#endif
-
-#ifndef arch_atomic64_add_negative_acquire
+#else
static __always_inline bool
-arch_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
+raw_atomic64_add_negative(s64 i, atomic64_t *v)
{
- return arch_atomic64_add_return_acquire(i, v) < 0;
+ return raw_atomic64_add_return(i, v) < 0;
}
-#define arch_atomic64_add_negative_acquire arch_atomic64_add_negative_acquire
#endif
-#ifndef arch_atomic64_add_negative_release
+#if defined(arch_atomic64_add_negative_acquire)
+#define raw_atomic64_add_negative_acquire arch_atomic64_add_negative_acquire
+#elif defined(arch_atomic64_add_negative_relaxed)
static __always_inline bool
-arch_atomic64_add_negative_release(s64 i, atomic64_t *v)
+raw_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
{
- return arch_atomic64_add_return_release(i, v) < 0;
+ bool ret = arch_atomic64_add_negative_relaxed(i, v);
+ __atomic_acquire_fence();
+ return ret;
}
-#define arch_atomic64_add_negative_release arch_atomic64_add_negative_release
-#endif
-
-#ifndef arch_atomic64_add_negative_relaxed
+#elif defined(arch_atomic64_add_negative)
+#define raw_atomic64_add_negative_acquire arch_atomic64_add_negative
+#else
static __always_inline bool
-arch_atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
+raw_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
{
- return arch_atomic64_add_return_relaxed(i, v) < 0;
+ return raw_atomic64_add_return_acquire(i, v) < 0;
}
-#define arch_atomic64_add_negative_relaxed arch_atomic64_add_negative_relaxed
#endif
-#else /* arch_atomic64_add_negative_relaxed */
-
-#ifndef arch_atomic64_add_negative_acquire
+#if defined(arch_atomic64_add_negative_release)
+#define raw_atomic64_add_negative_release arch_atomic64_add_negative_release
+#elif defined(arch_atomic64_add_negative_relaxed)
static __always_inline bool
-arch_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
+raw_atomic64_add_negative_release(s64 i, atomic64_t *v)
{
- bool ret = arch_atomic64_add_negative_relaxed(i, v);
- __atomic_acquire_fence();
- return ret;
+ __atomic_release_fence();
+ return arch_atomic64_add_negative_relaxed(i, v);
}
-#define arch_atomic64_add_negative_acquire arch_atomic64_add_negative_acquire
-#endif
-
-#ifndef arch_atomic64_add_negative_release
+#elif defined(arch_atomic64_add_negative)
+#define raw_atomic64_add_negative_release arch_atomic64_add_negative
+#else
static __always_inline bool
-arch_atomic64_add_negative_release(s64 i, atomic64_t *v)
+raw_atomic64_add_negative_release(s64 i, atomic64_t *v)
{
- __atomic_release_fence();
- return arch_atomic64_add_negative_relaxed(i, v);
+ return raw_atomic64_add_return_release(i, v) < 0;
}
-#define arch_atomic64_add_negative_release arch_atomic64_add_negative_release
#endif
-#ifndef arch_atomic64_add_negative
+#if defined(arch_atomic64_add_negative_relaxed)
+#define raw_atomic64_add_negative_relaxed arch_atomic64_add_negative_relaxed
+#elif defined(arch_atomic64_add_negative)
+#define raw_atomic64_add_negative_relaxed arch_atomic64_add_negative
+#else
static __always_inline bool
-arch_atomic64_add_negative(s64 i, atomic64_t *v)
+raw_atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
{
- bool ret;
- __atomic_pre_full_fence();
- ret = arch_atomic64_add_negative_relaxed(i, v);
- __atomic_post_full_fence();
- return ret;
+ return raw_atomic64_add_return_relaxed(i, v) < 0;
}
-#define arch_atomic64_add_negative arch_atomic64_add_negative
#endif
-#endif /* arch_atomic64_add_negative_relaxed */
-
-#ifndef arch_atomic64_fetch_add_unless
+#if defined(arch_atomic64_fetch_add_unless)
+#define raw_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless
+#else
static __always_inline s64
-arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
+raw_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
{
- s64 c = arch_atomic64_read(v);
+ s64 c = raw_atomic64_read(v);
do {
if (unlikely(c == u))
break;
- } while (!arch_atomic64_try_cmpxchg(v, &c, c + a));
+ } while (!raw_atomic64_try_cmpxchg(v, &c, c + a));
return c;
}
-#define arch_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless
#endif
-#ifndef arch_atomic64_add_unless
+#if defined(arch_atomic64_add_unless)
+#define raw_atomic64_add_unless arch_atomic64_add_unless
+#else
static __always_inline bool
-arch_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
+raw_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
{
- return arch_atomic64_fetch_add_unless(v, a, u) != u;
+ return raw_atomic64_fetch_add_unless(v, a, u) != u;
}
-#define arch_atomic64_add_unless arch_atomic64_add_unless
#endif
-#ifndef arch_atomic64_inc_not_zero
+#if defined(arch_atomic64_inc_not_zero)
+#define raw_atomic64_inc_not_zero arch_atomic64_inc_not_zero
+#else
static __always_inline bool
-arch_atomic64_inc_not_zero(atomic64_t *v)
+raw_atomic64_inc_not_zero(atomic64_t *v)
{
- return arch_atomic64_add_unless(v, 1, 0);
+ return raw_atomic64_add_unless(v, 1, 0);
}
-#define arch_atomic64_inc_not_zero arch_atomic64_inc_not_zero
#endif
-#ifndef arch_atomic64_inc_unless_negative
+#if defined(arch_atomic64_inc_unless_negative)
+#define raw_atomic64_inc_unless_negative arch_atomic64_inc_unless_negative
+#else
static __always_inline bool
-arch_atomic64_inc_unless_negative(atomic64_t *v)
+raw_atomic64_inc_unless_negative(atomic64_t *v)
{
- s64 c = arch_atomic64_read(v);
+ s64 c = raw_atomic64_read(v);
do {
if (unlikely(c < 0))
return false;
- } while (!arch_atomic64_try_cmpxchg(v, &c, c + 1));
+ } while (!raw_atomic64_try_cmpxchg(v, &c, c + 1));
return true;
}
-#define arch_atomic64_inc_unless_negative arch_atomic64_inc_unless_negative
#endif
-#ifndef arch_atomic64_dec_unless_positive
+#if defined(arch_atomic64_dec_unless_positive)
+#define raw_atomic64_dec_unless_positive arch_atomic64_dec_unless_positive
+#else
static __always_inline bool
-arch_atomic64_dec_unless_positive(atomic64_t *v)
+raw_atomic64_dec_unless_positive(atomic64_t *v)
{
- s64 c = arch_atomic64_read(v);
+ s64 c = raw_atomic64_read(v);
do {
if (unlikely(c > 0))
return false;
- } while (!arch_atomic64_try_cmpxchg(v, &c, c - 1));
+ } while (!raw_atomic64_try_cmpxchg(v, &c, c - 1));
return true;
}
-#define arch_atomic64_dec_unless_positive arch_atomic64_dec_unless_positive
#endif
-#ifndef arch_atomic64_dec_if_positive
+#if defined(arch_atomic64_dec_if_positive)
+#define raw_atomic64_dec_if_positive arch_atomic64_dec_if_positive
+#else
static __always_inline s64
-arch_atomic64_dec_if_positive(atomic64_t *v)
+raw_atomic64_dec_if_positive(atomic64_t *v)
{
- s64 dec, c = arch_atomic64_read(v);
+ s64 dec, c = raw_atomic64_read(v);
do {
dec = c - 1;
if (unlikely(dec < 0))
break;
- } while (!arch_atomic64_try_cmpxchg(v, &c, dec));
+ } while (!raw_atomic64_try_cmpxchg(v, &c, dec));
return dec;
}
-#define arch_atomic64_dec_if_positive arch_atomic64_dec_if_positive
#endif
#endif /* _LINUX_ATOMIC_FALLBACK_H */
-// e1cee558cc61cae887890db30fcdf93baca9f498
+// c2048fccede6fac923252290e2b303949d5dec83
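
For a hypothetical architecture that provides only arch_atomic64_xchg (and none
of the _acquire/_release/_relaxed variants), the generated ifdeffery above
resolves every ordering variant of the raw op to that single implementation,
roughly:

| #define raw_atomic64_xchg         arch_atomic64_xchg
| #define raw_atomic64_xchg_acquire arch_atomic64_xchg
| #define raw_atomic64_xchg_release arch_atomic64_xchg
| #define raw_atomic64_xchg_relaxed arch_atomic64_xchg
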
diff --git a/include/linux/atomic/atomic-raw.h b/include/linux/atomic/atomic-raw.h
deleted file mode 100644
index 8b2fc04cf8c54..0000000000000
--- a/include/linux/atomic/atomic-raw.h
+++ /dev/null
@@ -1,1135 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-
-// Generated by scripts/atomic/gen-atomic-raw.sh
-// DO NOT MODIFY THIS FILE DIRECTLY
-
-#ifndef _LINUX_ATOMIC_RAW_H
-#define _LINUX_ATOMIC_RAW_H
-
-static __always_inline int
-raw_atomic_read(const atomic_t *v)
-{
- return arch_atomic_read(v);
-}
-
-static __always_inline int
-raw_atomic_read_acquire(const atomic_t *v)
-{
- return arch_atomic_read_acquire(v);
-}
-
-static __always_inline void
-raw_atomic_set(atomic_t *v, int i)
-{
- arch_atomic_set(v, i);
-}
-
-static __always_inline void
-raw_atomic_set_release(atomic_t *v, int i)
-{
- arch_atomic_set_release(v, i);
-}
-
-static __always_inline void
-raw_atomic_add(int i, atomic_t *v)
-{
- arch_atomic_add(i, v);
-}
-
-static __always_inline int
-raw_atomic_add_return(int i, atomic_t *v)
-{
- return arch_atomic_add_return(i, v);
-}
-
-static __always_inline int
-raw_atomic_add_return_acquire(int i, atomic_t *v)
-{
- return arch_atomic_add_return_acquire(i, v);
-}
-
-static __always_inline int
-raw_atomic_add_return_release(int i, atomic_t *v)
-{
- return arch_atomic_add_return_release(i, v);
-}
-
-static __always_inline int
-raw_atomic_add_return_relaxed(int i, atomic_t *v)
-{
- return arch_atomic_add_return_relaxed(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_add(int i, atomic_t *v)
-{
- return arch_atomic_fetch_add(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_add_acquire(int i, atomic_t *v)
-{
- return arch_atomic_fetch_add_acquire(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_add_release(int i, atomic_t *v)
-{
- return arch_atomic_fetch_add_release(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_add_relaxed(int i, atomic_t *v)
-{
- return arch_atomic_fetch_add_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic_sub(int i, atomic_t *v)
-{
- arch_atomic_sub(i, v);
-}
-
-static __always_inline int
-raw_atomic_sub_return(int i, atomic_t *v)
-{
- return arch_atomic_sub_return(i, v);
-}
-
-static __always_inline int
-raw_atomic_sub_return_acquire(int i, atomic_t *v)
-{
- return arch_atomic_sub_return_acquire(i, v);
-}
-
-static __always_inline int
-raw_atomic_sub_return_release(int i, atomic_t *v)
-{
- return arch_atomic_sub_return_release(i, v);
-}
-
-static __always_inline int
-raw_atomic_sub_return_relaxed(int i, atomic_t *v)
-{
- return arch_atomic_sub_return_relaxed(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_sub(int i, atomic_t *v)
-{
- return arch_atomic_fetch_sub(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_sub_acquire(int i, atomic_t *v)
-{
- return arch_atomic_fetch_sub_acquire(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_sub_release(int i, atomic_t *v)
-{
- return arch_atomic_fetch_sub_release(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_sub_relaxed(int i, atomic_t *v)
-{
- return arch_atomic_fetch_sub_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic_inc(atomic_t *v)
-{
- arch_atomic_inc(v);
-}
-
-static __always_inline int
-raw_atomic_inc_return(atomic_t *v)
-{
- return arch_atomic_inc_return(v);
-}
-
-static __always_inline int
-raw_atomic_inc_return_acquire(atomic_t *v)
-{
- return arch_atomic_inc_return_acquire(v);
-}
-
-static __always_inline int
-raw_atomic_inc_return_release(atomic_t *v)
-{
- return arch_atomic_inc_return_release(v);
-}
-
-static __always_inline int
-raw_atomic_inc_return_relaxed(atomic_t *v)
-{
- return arch_atomic_inc_return_relaxed(v);
-}
-
-static __always_inline int
-raw_atomic_fetch_inc(atomic_t *v)
-{
- return arch_atomic_fetch_inc(v);
-}
-
-static __always_inline int
-raw_atomic_fetch_inc_acquire(atomic_t *v)
-{
- return arch_atomic_fetch_inc_acquire(v);
-}
-
-static __always_inline int
-raw_atomic_fetch_inc_release(atomic_t *v)
-{
- return arch_atomic_fetch_inc_release(v);
-}
-
-static __always_inline int
-raw_atomic_fetch_inc_relaxed(atomic_t *v)
-{
- return arch_atomic_fetch_inc_relaxed(v);
-}
-
-static __always_inline void
-raw_atomic_dec(atomic_t *v)
-{
- arch_atomic_dec(v);
-}
-
-static __always_inline int
-raw_atomic_dec_return(atomic_t *v)
-{
- return arch_atomic_dec_return(v);
-}
-
-static __always_inline int
-raw_atomic_dec_return_acquire(atomic_t *v)
-{
- return arch_atomic_dec_return_acquire(v);
-}
-
-static __always_inline int
-raw_atomic_dec_return_release(atomic_t *v)
-{
- return arch_atomic_dec_return_release(v);
-}
-
-static __always_inline int
-raw_atomic_dec_return_relaxed(atomic_t *v)
-{
- return arch_atomic_dec_return_relaxed(v);
-}
-
-static __always_inline int
-raw_atomic_fetch_dec(atomic_t *v)
-{
- return arch_atomic_fetch_dec(v);
-}
-
-static __always_inline int
-raw_atomic_fetch_dec_acquire(atomic_t *v)
-{
- return arch_atomic_fetch_dec_acquire(v);
-}
-
-static __always_inline int
-raw_atomic_fetch_dec_release(atomic_t *v)
-{
- return arch_atomic_fetch_dec_release(v);
-}
-
-static __always_inline int
-raw_atomic_fetch_dec_relaxed(atomic_t *v)
-{
- return arch_atomic_fetch_dec_relaxed(v);
-}
-
-static __always_inline void
-raw_atomic_and(int i, atomic_t *v)
-{
- arch_atomic_and(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_and(int i, atomic_t *v)
-{
- return arch_atomic_fetch_and(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_and_acquire(int i, atomic_t *v)
-{
- return arch_atomic_fetch_and_acquire(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_and_release(int i, atomic_t *v)
-{
- return arch_atomic_fetch_and_release(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_and_relaxed(int i, atomic_t *v)
-{
- return arch_atomic_fetch_and_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic_andnot(int i, atomic_t *v)
-{
- arch_atomic_andnot(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_andnot(int i, atomic_t *v)
-{
- return arch_atomic_fetch_andnot(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
-{
- return arch_atomic_fetch_andnot_acquire(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_andnot_release(int i, atomic_t *v)
-{
- return arch_atomic_fetch_andnot_release(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_andnot_relaxed(int i, atomic_t *v)
-{
- return arch_atomic_fetch_andnot_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic_or(int i, atomic_t *v)
-{
- arch_atomic_or(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_or(int i, atomic_t *v)
-{
- return arch_atomic_fetch_or(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_or_acquire(int i, atomic_t *v)
-{
- return arch_atomic_fetch_or_acquire(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_or_release(int i, atomic_t *v)
-{
- return arch_atomic_fetch_or_release(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_or_relaxed(int i, atomic_t *v)
-{
- return arch_atomic_fetch_or_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic_xor(int i, atomic_t *v)
-{
- arch_atomic_xor(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_xor(int i, atomic_t *v)
-{
- return arch_atomic_fetch_xor(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_xor_acquire(int i, atomic_t *v)
-{
- return arch_atomic_fetch_xor_acquire(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_xor_release(int i, atomic_t *v)
-{
- return arch_atomic_fetch_xor_release(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_xor_relaxed(int i, atomic_t *v)
-{
- return arch_atomic_fetch_xor_relaxed(i, v);
-}
-
-static __always_inline int
-raw_atomic_xchg(atomic_t *v, int i)
-{
- return arch_atomic_xchg(v, i);
-}
-
-static __always_inline int
-raw_atomic_xchg_acquire(atomic_t *v, int i)
-{
- return arch_atomic_xchg_acquire(v, i);
-}
-
-static __always_inline int
-raw_atomic_xchg_release(atomic_t *v, int i)
-{
- return arch_atomic_xchg_release(v, i);
-}
-
-static __always_inline int
-raw_atomic_xchg_relaxed(atomic_t *v, int i)
-{
- return arch_atomic_xchg_relaxed(v, i);
-}
-
-static __always_inline int
-raw_atomic_cmpxchg(atomic_t *v, int old, int new)
-{
- return arch_atomic_cmpxchg(v, old, new);
-}
-
-static __always_inline int
-raw_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
-{
- return arch_atomic_cmpxchg_acquire(v, old, new);
-}
-
-static __always_inline int
-raw_atomic_cmpxchg_release(atomic_t *v, int old, int new)
-{
- return arch_atomic_cmpxchg_release(v, old, new);
-}
-
-static __always_inline int
-raw_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
-{
- return arch_atomic_cmpxchg_relaxed(v, old, new);
-}
-
-static __always_inline bool
-raw_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
-{
- return arch_atomic_try_cmpxchg(v, old, new);
-}
-
-static __always_inline bool
-raw_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
-{
- return arch_atomic_try_cmpxchg_acquire(v, old, new);
-}
-
-static __always_inline bool
-raw_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
-{
- return arch_atomic_try_cmpxchg_release(v, old, new);
-}
-
-static __always_inline bool
-raw_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
-{
- return arch_atomic_try_cmpxchg_relaxed(v, old, new);
-}
-
-static __always_inline bool
-raw_atomic_sub_and_test(int i, atomic_t *v)
-{
- return arch_atomic_sub_and_test(i, v);
-}
-
-static __always_inline bool
-raw_atomic_dec_and_test(atomic_t *v)
-{
- return arch_atomic_dec_and_test(v);
-}
-
-static __always_inline bool
-raw_atomic_inc_and_test(atomic_t *v)
-{
- return arch_atomic_inc_and_test(v);
-}
-
-static __always_inline bool
-raw_atomic_add_negative(int i, atomic_t *v)
-{
- return arch_atomic_add_negative(i, v);
-}
-
-static __always_inline bool
-raw_atomic_add_negative_acquire(int i, atomic_t *v)
-{
- return arch_atomic_add_negative_acquire(i, v);
-}
-
-static __always_inline bool
-raw_atomic_add_negative_release(int i, atomic_t *v)
-{
- return arch_atomic_add_negative_release(i, v);
-}
-
-static __always_inline bool
-raw_atomic_add_negative_relaxed(int i, atomic_t *v)
-{
- return arch_atomic_add_negative_relaxed(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_add_unless(atomic_t *v, int a, int u)
-{
- return arch_atomic_fetch_add_unless(v, a, u);
-}
-
-static __always_inline bool
-raw_atomic_add_unless(atomic_t *v, int a, int u)
-{
- return arch_atomic_add_unless(v, a, u);
-}
-
-static __always_inline bool
-raw_atomic_inc_not_zero(atomic_t *v)
-{
- return arch_atomic_inc_not_zero(v);
-}
-
-static __always_inline bool
-raw_atomic_inc_unless_negative(atomic_t *v)
-{
- return arch_atomic_inc_unless_negative(v);
-}
-
-static __always_inline bool
-raw_atomic_dec_unless_positive(atomic_t *v)
-{
- return arch_atomic_dec_unless_positive(v);
-}
-
-static __always_inline int
-raw_atomic_dec_if_positive(atomic_t *v)
-{
- return arch_atomic_dec_if_positive(v);
-}
-
-static __always_inline s64
-raw_atomic64_read(const atomic64_t *v)
-{
- return arch_atomic64_read(v);
-}
-
-static __always_inline s64
-raw_atomic64_read_acquire(const atomic64_t *v)
-{
- return arch_atomic64_read_acquire(v);
-}
-
-static __always_inline void
-raw_atomic64_set(atomic64_t *v, s64 i)
-{
- arch_atomic64_set(v, i);
-}
-
-static __always_inline void
-raw_atomic64_set_release(atomic64_t *v, s64 i)
-{
- arch_atomic64_set_release(v, i);
-}
-
-static __always_inline void
-raw_atomic64_add(s64 i, atomic64_t *v)
-{
- arch_atomic64_add(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_add_return(s64 i, atomic64_t *v)
-{
- return arch_atomic64_add_return(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_add_return_acquire(s64 i, atomic64_t *v)
-{
- return arch_atomic64_add_return_acquire(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_add_return_release(s64 i, atomic64_t *v)
-{
- return arch_atomic64_add_return_release(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_add_return_relaxed(s64 i, atomic64_t *v)
-{
- return arch_atomic64_add_return_relaxed(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_add(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_add(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_add_acquire(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_add_release(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_add_release(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_add_relaxed(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_add_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic64_sub(s64 i, atomic64_t *v)
-{
- arch_atomic64_sub(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_sub_return(s64 i, atomic64_t *v)
-{
- return arch_atomic64_sub_return(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_sub_return_acquire(s64 i, atomic64_t *v)
-{
- return arch_atomic64_sub_return_acquire(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_sub_return_release(s64 i, atomic64_t *v)
-{
- return arch_atomic64_sub_return_release(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_sub_return_relaxed(s64 i, atomic64_t *v)
-{
- return arch_atomic64_sub_return_relaxed(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_sub(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_sub(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_sub_acquire(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_sub_release(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_sub_release(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_sub_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic64_inc(atomic64_t *v)
-{
- arch_atomic64_inc(v);
-}
-
-static __always_inline s64
-raw_atomic64_inc_return(atomic64_t *v)
-{
- return arch_atomic64_inc_return(v);
-}
-
-static __always_inline s64
-raw_atomic64_inc_return_acquire(atomic64_t *v)
-{
- return arch_atomic64_inc_return_acquire(v);
-}
-
-static __always_inline s64
-raw_atomic64_inc_return_release(atomic64_t *v)
-{
- return arch_atomic64_inc_return_release(v);
-}
-
-static __always_inline s64
-raw_atomic64_inc_return_relaxed(atomic64_t *v)
-{
- return arch_atomic64_inc_return_relaxed(v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_inc(atomic64_t *v)
-{
- return arch_atomic64_fetch_inc(v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_inc_acquire(atomic64_t *v)
-{
- return arch_atomic64_fetch_inc_acquire(v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_inc_release(atomic64_t *v)
-{
- return arch_atomic64_fetch_inc_release(v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_inc_relaxed(atomic64_t *v)
-{
- return arch_atomic64_fetch_inc_relaxed(v);
-}
-
-static __always_inline void
-raw_atomic64_dec(atomic64_t *v)
-{
- arch_atomic64_dec(v);
-}
-
-static __always_inline s64
-raw_atomic64_dec_return(atomic64_t *v)
-{
- return arch_atomic64_dec_return(v);
-}
-
-static __always_inline s64
-raw_atomic64_dec_return_acquire(atomic64_t *v)
-{
- return arch_atomic64_dec_return_acquire(v);
-}
-
-static __always_inline s64
-raw_atomic64_dec_return_release(atomic64_t *v)
-{
- return arch_atomic64_dec_return_release(v);
-}
-
-static __always_inline s64
-raw_atomic64_dec_return_relaxed(atomic64_t *v)
-{
- return arch_atomic64_dec_return_relaxed(v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_dec(atomic64_t *v)
-{
- return arch_atomic64_fetch_dec(v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_dec_acquire(atomic64_t *v)
-{
- return arch_atomic64_fetch_dec_acquire(v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_dec_release(atomic64_t *v)
-{
- return arch_atomic64_fetch_dec_release(v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_dec_relaxed(atomic64_t *v)
-{
- return arch_atomic64_fetch_dec_relaxed(v);
-}
-
-static __always_inline void
-raw_atomic64_and(s64 i, atomic64_t *v)
-{
- arch_atomic64_and(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_and(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_and(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_and_acquire(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_and_release(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_and_release(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_and_relaxed(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_and_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic64_andnot(s64 i, atomic64_t *v)
-{
- arch_atomic64_andnot(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_andnot(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_andnot(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_andnot_acquire(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_andnot_release(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_andnot_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic64_or(s64 i, atomic64_t *v)
-{
- arch_atomic64_or(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_or(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_or(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_or_acquire(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_or_release(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_or_release(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_or_relaxed(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_or_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic64_xor(s64 i, atomic64_t *v)
-{
- arch_atomic64_xor(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_xor(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_xor(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_xor_acquire(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_xor_release(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_xor_release(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_xor_relaxed(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_xchg(atomic64_t *v, s64 i)
-{
- return arch_atomic64_xchg(v, i);
-}
-
-static __always_inline s64
-raw_atomic64_xchg_acquire(atomic64_t *v, s64 i)
-{
- return arch_atomic64_xchg_acquire(v, i);
-}
-
-static __always_inline s64
-raw_atomic64_xchg_release(atomic64_t *v, s64 i)
-{
- return arch_atomic64_xchg_release(v, i);
-}
-
-static __always_inline s64
-raw_atomic64_xchg_relaxed(atomic64_t *v, s64 i)
-{
- return arch_atomic64_xchg_relaxed(v, i);
-}
-
-static __always_inline s64
-raw_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
-{
- return arch_atomic64_cmpxchg(v, old, new);
-}
-
-static __always_inline s64
-raw_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
-{
- return arch_atomic64_cmpxchg_acquire(v, old, new);
-}
-
-static __always_inline s64
-raw_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
-{
- return arch_atomic64_cmpxchg_release(v, old, new);
-}
-
-static __always_inline s64
-raw_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
-{
- return arch_atomic64_cmpxchg_relaxed(v, old, new);
-}
-
-static __always_inline bool
-raw_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
-{
- return arch_atomic64_try_cmpxchg(v, old, new);
-}
-
-static __always_inline bool
-raw_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
-{
- return arch_atomic64_try_cmpxchg_acquire(v, old, new);
-}
-
-static __always_inline bool
-raw_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
-{
- return arch_atomic64_try_cmpxchg_release(v, old, new);
-}
-
-static __always_inline bool
-raw_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
-{
- return arch_atomic64_try_cmpxchg_relaxed(v, old, new);
-}
-
-static __always_inline bool
-raw_atomic64_sub_and_test(s64 i, atomic64_t *v)
-{
- return arch_atomic64_sub_and_test(i, v);
-}
-
-static __always_inline bool
-raw_atomic64_dec_and_test(atomic64_t *v)
-{
- return arch_atomic64_dec_and_test(v);
-}
-
-static __always_inline bool
-raw_atomic64_inc_and_test(atomic64_t *v)
-{
- return arch_atomic64_inc_and_test(v);
-}
-
-static __always_inline bool
-raw_atomic64_add_negative(s64 i, atomic64_t *v)
-{
- return arch_atomic64_add_negative(i, v);
-}
-
-static __always_inline bool
-raw_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
-{
- return arch_atomic64_add_negative_acquire(i, v);
-}
-
-static __always_inline bool
-raw_atomic64_add_negative_release(s64 i, atomic64_t *v)
-{
- return arch_atomic64_add_negative_release(i, v);
-}
-
-static __always_inline bool
-raw_atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
-{
- return arch_atomic64_add_negative_relaxed(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
-{
- return arch_atomic64_fetch_add_unless(v, a, u);
-}
-
-static __always_inline bool
-raw_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
-{
- return arch_atomic64_add_unless(v, a, u);
-}
-
-static __always_inline bool
-raw_atomic64_inc_not_zero(atomic64_t *v)
-{
- return arch_atomic64_inc_not_zero(v);
-}
-
-static __always_inline bool
-raw_atomic64_inc_unless_negative(atomic64_t *v)
-{
- return arch_atomic64_inc_unless_negative(v);
-}
-
-static __always_inline bool
-raw_atomic64_dec_unless_positive(atomic64_t *v)
-{
- return arch_atomic64_dec_unless_positive(v);
-}
-
-static __always_inline s64
-raw_atomic64_dec_if_positive(atomic64_t *v)
-{
- return arch_atomic64_dec_if_positive(v);
-}
-
-#define raw_xchg(...) \
- arch_xchg(__VA_ARGS__)
-
-#define raw_xchg_acquire(...) \
- arch_xchg_acquire(__VA_ARGS__)
-
-#define raw_xchg_release(...) \
- arch_xchg_release(__VA_ARGS__)
-
-#define raw_xchg_relaxed(...) \
- arch_xchg_relaxed(__VA_ARGS__)
-
-#define raw_cmpxchg(...) \
- arch_cmpxchg(__VA_ARGS__)
-
-#define raw_cmpxchg_acquire(...) \
- arch_cmpxchg_acquire(__VA_ARGS__)
-
-#define raw_cmpxchg_release(...) \
- arch_cmpxchg_release(__VA_ARGS__)
-
-#define raw_cmpxchg_relaxed(...) \
- arch_cmpxchg_relaxed(__VA_ARGS__)
-
-#define raw_cmpxchg64(...) \
- arch_cmpxchg64(__VA_ARGS__)
-
-#define raw_cmpxchg64_acquire(...) \
- arch_cmpxchg64_acquire(__VA_ARGS__)
-
-#define raw_cmpxchg64_release(...) \
- arch_cmpxchg64_release(__VA_ARGS__)
-
-#define raw_cmpxchg64_relaxed(...) \
- arch_cmpxchg64_relaxed(__VA_ARGS__)
-
-#define raw_cmpxchg128(...) \
- arch_cmpxchg128(__VA_ARGS__)
-
-#define raw_cmpxchg128_acquire(...) \
- arch_cmpxchg128_acquire(__VA_ARGS__)
-
-#define raw_cmpxchg128_release(...) \
- arch_cmpxchg128_release(__VA_ARGS__)
-
-#define raw_cmpxchg128_relaxed(...) \
- arch_cmpxchg128_relaxed(__VA_ARGS__)
-
-#define raw_try_cmpxchg(...) \
- arch_try_cmpxchg(__VA_ARGS__)
-
-#define raw_try_cmpxchg_acquire(...) \
- arch_try_cmpxchg_acquire(__VA_ARGS__)
-
-#define raw_try_cmpxchg_release(...) \
- arch_try_cmpxchg_release(__VA_ARGS__)
-
-#define raw_try_cmpxchg_relaxed(...) \
- arch_try_cmpxchg_relaxed(__VA_ARGS__)
-
-#define raw_try_cmpxchg64(...) \
- arch_try_cmpxchg64(__VA_ARGS__)
-
-#define raw_try_cmpxchg64_acquire(...) \
- arch_try_cmpxchg64_acquire(__VA_ARGS__)
-
-#define raw_try_cmpxchg64_release(...) \
- arch_try_cmpxchg64_release(__VA_ARGS__)
-
-#define raw_try_cmpxchg64_relaxed(...) \
- arch_try_cmpxchg64_relaxed(__VA_ARGS__)
-
-#define raw_try_cmpxchg128(...) \
- arch_try_cmpxchg128(__VA_ARGS__)
-
-#define raw_try_cmpxchg128_acquire(...) \
- arch_try_cmpxchg128_acquire(__VA_ARGS__)
-
-#define raw_try_cmpxchg128_release(...) \
- arch_try_cmpxchg128_release(__VA_ARGS__)
-
-#define raw_try_cmpxchg128_relaxed(...) \
- arch_try_cmpxchg128_relaxed(__VA_ARGS__)
-
-#define raw_cmpxchg_local(...) \
- arch_cmpxchg_local(__VA_ARGS__)
-
-#define raw_cmpxchg64_local(...) \
- arch_cmpxchg64_local(__VA_ARGS__)
-
-#define raw_cmpxchg128_local(...) \
- arch_cmpxchg128_local(__VA_ARGS__)
-
-#define raw_sync_cmpxchg(...) \
- arch_sync_cmpxchg(__VA_ARGS__)
-
-#define raw_try_cmpxchg_local(...) \
- arch_try_cmpxchg_local(__VA_ARGS__)
-
-#define raw_try_cmpxchg64_local(...) \
- arch_try_cmpxchg64_local(__VA_ARGS__)
-
-#define raw_try_cmpxchg128_local(...) \
- arch_try_cmpxchg128_local(__VA_ARGS__)
-
-#endif /* _LINUX_ATOMIC_RAW_H */
-// b23ed4424e85200e200ded094522e1d743b3a5b1
diff --git a/scripts/atomic/fallbacks/acquire b/scripts/atomic/fallbacks/acquire
index ef764085c79aa..b0f732a5c46ef 100755
--- a/scripts/atomic/fallbacks/acquire
+++ b/scripts/atomic/fallbacks/acquire
@@ -1,6 +1,6 @@
cat <<EOF
static __always_inline ${ret}
-arch_${atomic}_${pfx}${name}${sfx}_acquire(${params})
+raw_${atomic}_${pfx}${name}${sfx}_acquire(${params})
{
${ret} ret = arch_${atomic}_${pfx}${name}${sfx}_relaxed(${args});
__atomic_acquire_fence();
diff --git a/scripts/atomic/fallbacks/add_negative b/scripts/atomic/fallbacks/add_negative
index d0bd2dfbb244c..16876118019ec 100755
--- a/scripts/atomic/fallbacks/add_negative
+++ b/scripts/atomic/fallbacks/add_negative
@@ -1,7 +1,7 @@
cat <<EOF
static __always_inline bool
-arch_${atomic}_add_negative${order}(${int} i, ${atomic}_t *v)
+raw_${atomic}_add_negative${order}(${int} i, ${atomic}_t *v)
{
- return arch_${atomic}_add_return${order}(i, v) < 0;
+ return raw_${atomic}_add_return${order}(i, v) < 0;
}
EOF
diff --git a/scripts/atomic/fallbacks/add_unless b/scripts/atomic/fallbacks/add_unless
index cf79b9da38dbb..88593e28b1637 100755
--- a/scripts/atomic/fallbacks/add_unless
+++ b/scripts/atomic/fallbacks/add_unless
@@ -1,7 +1,7 @@
cat << EOF
static __always_inline bool
-arch_${atomic}_add_unless(${atomic}_t *v, ${int} a, ${int} u)
+raw_${atomic}_add_unless(${atomic}_t *v, ${int} a, ${int} u)
{
- return arch_${atomic}_fetch_add_unless(v, a, u) != u;
+ return raw_${atomic}_fetch_add_unless(v, a, u) != u;
}
EOF
diff --git a/scripts/atomic/fallbacks/andnot b/scripts/atomic/fallbacks/andnot
index 5a42f54a35950..5b83bb63f7284 100755
--- a/scripts/atomic/fallbacks/andnot
+++ b/scripts/atomic/fallbacks/andnot
@@ -1,7 +1,7 @@
cat <<EOF
static __always_inline ${ret}
-arch_${atomic}_${pfx}andnot${sfx}${order}(${int} i, ${atomic}_t *v)
+raw_${atomic}_${pfx}andnot${sfx}${order}(${int} i, ${atomic}_t *v)
{
- ${retstmt}arch_${atomic}_${pfx}and${sfx}${order}(~i, v);
+ ${retstmt}raw_${atomic}_${pfx}and${sfx}${order}(~i, v);
}
EOF
diff --git a/scripts/atomic/fallbacks/cmpxchg b/scripts/atomic/fallbacks/cmpxchg
index 87cd010f98d58..312ee67f1743e 100755
--- a/scripts/atomic/fallbacks/cmpxchg
+++ b/scripts/atomic/fallbacks/cmpxchg
@@ -1,7 +1,7 @@
cat <<EOF
static __always_inline ${int}
-arch_${atomic}_cmpxchg${order}(${atomic}_t *v, ${int} old, ${int} new)
+raw_${atomic}_cmpxchg${order}(${atomic}_t *v, ${int} old, ${int} new)
{
- return arch_cmpxchg${order}(&v->counter, old, new);
+ return raw_cmpxchg${order}(&v->counter, old, new);
}
EOF
diff --git a/scripts/atomic/fallbacks/dec b/scripts/atomic/fallbacks/dec
index 8c144c818e9ed..a660ac65994bd 100755
--- a/scripts/atomic/fallbacks/dec
+++ b/scripts/atomic/fallbacks/dec
@@ -1,7 +1,7 @@
cat <<EOF
static __always_inline ${ret}
-arch_${atomic}_${pfx}dec${sfx}${order}(${atomic}_t *v)
+raw_${atomic}_${pfx}dec${sfx}${order}(${atomic}_t *v)
{
- ${retstmt}arch_${atomic}_${pfx}sub${sfx}${order}(1, v);
+ ${retstmt}raw_${atomic}_${pfx}sub${sfx}${order}(1, v);
}
EOF
diff --git a/scripts/atomic/fallbacks/dec_and_test b/scripts/atomic/fallbacks/dec_and_test
index 3f6b6a8b47733..521dfcae03f24 100755
--- a/scripts/atomic/fallbacks/dec_and_test
+++ b/scripts/atomic/fallbacks/dec_and_test
@@ -1,7 +1,7 @@
cat <<EOF
static __always_inline bool
-arch_${atomic}_dec_and_test(${atomic}_t *v)
+raw_${atomic}_dec_and_test(${atomic}_t *v)
{
- return arch_${atomic}_dec_return(v) == 0;
+ return raw_${atomic}_dec_return(v) == 0;
}
EOF
diff --git a/scripts/atomic/fallbacks/dec_if_positive b/scripts/atomic/fallbacks/dec_if_positive
index 86bdced3428d6..7acb205e6ce35 100755
--- a/scripts/atomic/fallbacks/dec_if_positive
+++ b/scripts/atomic/fallbacks/dec_if_positive
@@ -1,14 +1,14 @@
cat <<EOF
static __always_inline ${ret}
-arch_${atomic}_dec_if_positive(${atomic}_t *v)
+raw_${atomic}_dec_if_positive(${atomic}_t *v)
{
- ${int} dec, c = arch_${atomic}_read(v);
+ ${int} dec, c = raw_${atomic}_read(v);
do {
dec = c - 1;
if (unlikely(dec < 0))
break;
- } while (!arch_${atomic}_try_cmpxchg(v, &c, dec));
+ } while (!raw_${atomic}_try_cmpxchg(v, &c, dec));
return dec;
}
diff --git a/scripts/atomic/fallbacks/dec_unless_positive b/scripts/atomic/fallbacks/dec_unless_positive
index c531d5afecc47..bcb4f27945eaa 100755
--- a/scripts/atomic/fallbacks/dec_unless_positive
+++ b/scripts/atomic/fallbacks/dec_unless_positive
@@ -1,13 +1,13 @@
cat <<EOF
static __always_inline bool
-arch_${atomic}_dec_unless_positive(${atomic}_t *v)
+raw_${atomic}_dec_unless_positive(${atomic}_t *v)
{
- ${int} c = arch_${atomic}_read(v);
+ ${int} c = raw_${atomic}_read(v);
do {
if (unlikely(c > 0))
return false;
- } while (!arch_${atomic}_try_cmpxchg(v, &c, c - 1));
+ } while (!raw_${atomic}_try_cmpxchg(v, &c, c - 1));
return true;
}
diff --git a/scripts/atomic/fallbacks/fence b/scripts/atomic/fallbacks/fence
index 07757d8e338ef..067eea553f5e0 100755
--- a/scripts/atomic/fallbacks/fence
+++ b/scripts/atomic/fallbacks/fence
@@ -1,6 +1,6 @@
cat <<EOF
static __always_inline ${ret}
-arch_${atomic}_${pfx}${name}${sfx}(${params})
+raw_${atomic}_${pfx}${name}${sfx}(${params})
{
${ret} ret;
__atomic_pre_full_fence();
diff --git a/scripts/atomic/fallbacks/fetch_add_unless b/scripts/atomic/fallbacks/fetch_add_unless
index 81d2834f03d23..c18b940153dfd 100755
--- a/scripts/atomic/fallbacks/fetch_add_unless
+++ b/scripts/atomic/fallbacks/fetch_add_unless
@@ -1,13 +1,13 @@
cat << EOF
static __always_inline ${int}
-arch_${atomic}_fetch_add_unless(${atomic}_t *v, ${int} a, ${int} u)
+raw_${atomic}_fetch_add_unless(${atomic}_t *v, ${int} a, ${int} u)
{
- ${int} c = arch_${atomic}_read(v);
+ ${int} c = raw_${atomic}_read(v);
do {
if (unlikely(c == u))
break;
- } while (!arch_${atomic}_try_cmpxchg(v, &c, c + a));
+ } while (!raw_${atomic}_try_cmpxchg(v, &c, c + a));
return c;
}
diff --git a/scripts/atomic/fallbacks/inc b/scripts/atomic/fallbacks/inc
index 3c2c3739169e5..7d838f0b66391 100755
--- a/scripts/atomic/fallbacks/inc
+++ b/scripts/atomic/fallbacks/inc
@@ -1,7 +1,7 @@
cat <<EOF
static __always_inline ${ret}
-arch_${atomic}_${pfx}inc${sfx}${order}(${atomic}_t *v)
+raw_${atomic}_${pfx}inc${sfx}${order}(${atomic}_t *v)
{
- ${retstmt}arch_${atomic}_${pfx}add${sfx}${order}(1, v);
+ ${retstmt}raw_${atomic}_${pfx}add${sfx}${order}(1, v);
}
EOF
diff --git a/scripts/atomic/fallbacks/inc_and_test b/scripts/atomic/fallbacks/inc_and_test
index c726a6d0634d3..de25aebee715d 100755
--- a/scripts/atomic/fallbacks/inc_and_test
+++ b/scripts/atomic/fallbacks/inc_and_test
@@ -1,7 +1,7 @@
cat <<EOF
static __always_inline bool
-arch_${atomic}_inc_and_test(${atomic}_t *v)
+raw_${atomic}_inc_and_test(${atomic}_t *v)
{
- return arch_${atomic}_inc_return(v) == 0;
+ return raw_${atomic}_inc_return(v) == 0;
}
EOF
diff --git a/scripts/atomic/fallbacks/inc_not_zero b/scripts/atomic/fallbacks/inc_not_zero
index 97603591aac2a..e02206d017f62 100755
--- a/scripts/atomic/fallbacks/inc_not_zero
+++ b/scripts/atomic/fallbacks/inc_not_zero
@@ -1,7 +1,7 @@
cat <<EOF
static __always_inline bool
-arch_${atomic}_inc_not_zero(${atomic}_t *v)
+raw_${atomic}_inc_not_zero(${atomic}_t *v)
{
- return arch_${atomic}_add_unless(v, 1, 0);
+ return raw_${atomic}_add_unless(v, 1, 0);
}
EOF
diff --git a/scripts/atomic/fallbacks/inc_unless_negative b/scripts/atomic/fallbacks/inc_unless_negative
index 95d8ce48233ff..7b85cc5b00d2b 100755
--- a/scripts/atomic/fallbacks/inc_unless_negative
+++ b/scripts/atomic/fallbacks/inc_unless_negative
@@ -1,13 +1,13 @@
cat <<EOF
static __always_inline bool
-arch_${atomic}_inc_unless_negative(${atomic}_t *v)
+raw_${atomic}_inc_unless_negative(${atomic}_t *v)
{
- ${int} c = arch_${atomic}_read(v);
+ ${int} c = raw_${atomic}_read(v);
do {
if (unlikely(c < 0))
return false;
- } while (!arch_${atomic}_try_cmpxchg(v, &c, c + 1));
+ } while (!raw_${atomic}_try_cmpxchg(v, &c, c + 1));
return true;
}
diff --git a/scripts/atomic/fallbacks/read_acquire b/scripts/atomic/fallbacks/read_acquire
index a0ea1d26e6b2e..26d15ad92d043 100755
--- a/scripts/atomic/fallbacks/read_acquire
+++ b/scripts/atomic/fallbacks/read_acquire
@@ -1,13 +1,13 @@
cat <<EOF
static __always_inline ${ret}
-arch_${atomic}_read_acquire(const ${atomic}_t *v)
+raw_${atomic}_read_acquire(const ${atomic}_t *v)
{
${int} ret;
if (__native_word(${atomic}_t)) {
ret = smp_load_acquire(&(v)->counter);
} else {
- ret = arch_${atomic}_read(v);
+ ret = raw_${atomic}_read(v);
__atomic_acquire_fence();
}
diff --git a/scripts/atomic/fallbacks/release b/scripts/atomic/fallbacks/release
index b46feb56d69ca..cbbff708129b8 100755
--- a/scripts/atomic/fallbacks/release
+++ b/scripts/atomic/fallbacks/release
@@ -1,6 +1,6 @@
cat <<EOF
static __always_inline ${ret}
-arch_${atomic}_${pfx}${name}${sfx}_release(${params})
+raw_${atomic}_${pfx}${name}${sfx}_release(${params})
{
__atomic_release_fence();
${retstmt}arch_${atomic}_${pfx}${name}${sfx}_relaxed(${args});
diff --git a/scripts/atomic/fallbacks/set_release b/scripts/atomic/fallbacks/set_release
index 05cdb7f42477a..104693bc3c660 100755
--- a/scripts/atomic/fallbacks/set_release
+++ b/scripts/atomic/fallbacks/set_release
@@ -1,12 +1,12 @@
cat <<EOF
static __always_inline void
-arch_${atomic}_set_release(${atomic}_t *v, ${int} i)
+raw_${atomic}_set_release(${atomic}_t *v, ${int} i)
{
if (__native_word(${atomic}_t)) {
smp_store_release(&(v)->counter, i);
} else {
__atomic_release_fence();
- arch_${atomic}_set(v, i);
+ raw_${atomic}_set(v, i);
}
}
EOF
diff --git a/scripts/atomic/fallbacks/sub_and_test b/scripts/atomic/fallbacks/sub_and_test
index da8a049c9b02b..8975a496d495c 100755
--- a/scripts/atomic/fallbacks/sub_and_test
+++ b/scripts/atomic/fallbacks/sub_and_test
@@ -1,7 +1,7 @@
cat <<EOF
static __always_inline bool
-arch_${atomic}_sub_and_test(${int} i, ${atomic}_t *v)
+raw_${atomic}_sub_and_test(${int} i, ${atomic}_t *v)
{
- return arch_${atomic}_sub_return(i, v) == 0;
+ return raw_${atomic}_sub_return(i, v) == 0;
}
EOF
diff --git a/scripts/atomic/fallbacks/try_cmpxchg b/scripts/atomic/fallbacks/try_cmpxchg
index 890f850ede378..4c911a6cced94 100755
--- a/scripts/atomic/fallbacks/try_cmpxchg
+++ b/scripts/atomic/fallbacks/try_cmpxchg
@@ -1,9 +1,9 @@
cat <<EOF
static __always_inline bool
-arch_${atomic}_try_cmpxchg${order}(${atomic}_t *v, ${int} *old, ${int} new)
+raw_${atomic}_try_cmpxchg${order}(${atomic}_t *v, ${int} *old, ${int} new)
{
${int} r, o = *old;
- r = arch_${atomic}_cmpxchg${order}(v, o, new);
+ r = raw_${atomic}_cmpxchg${order}(v, o, new);
if (unlikely(r != o))
*old = r;
return likely(r == o);
diff --git a/scripts/atomic/fallbacks/xchg b/scripts/atomic/fallbacks/xchg
index 733b8980b2f3b..bdd788aa575ff 100755
--- a/scripts/atomic/fallbacks/xchg
+++ b/scripts/atomic/fallbacks/xchg
@@ -1,7 +1,7 @@
cat <<EOF
static __always_inline ${int}
-arch_${atomic}_xchg${order}(${atomic}_t *v, ${int} new)
+raw_${atomic}_xchg${order}(${atomic}_t *v, ${int} new)
{
- return arch_xchg${order}(&v->counter, new);
+ return raw_xchg${order}(&v->counter, new);
}
EOF
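
For illustration, substituting ${atomic}=atomic, ${int}=int and ${order}=_release
into the xchg template above yields the fallback emitted when no suitable
arch_atomic_xchg*() variant is available:

| static __always_inline int
| raw_atomic_xchg_release(atomic_t *v, int new)
| {
| 	return raw_xchg_release(&v->counter, new);
| }
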
diff --git a/scripts/atomic/gen-atomic-fallback.sh b/scripts/atomic/gen-atomic-fallback.sh
index 337330865fa2e..86aca4f9f315a 100755
--- a/scripts/atomic/gen-atomic-fallback.sh
+++ b/scripts/atomic/gen-atomic-fallback.sh
@@ -17,19 +17,12 @@ gen_template_fallback()
local atomic="$1"; shift
local int="$1"; shift
- local atomicname="arch_${atomic}_${pfx}${name}${sfx}${order}"
-
local ret="$(gen_ret_type "${meta}" "${int}")"
local retstmt="$(gen_ret_stmt "${meta}")"
local params="$(gen_params "${int}" "${atomic}" "$@")"
local args="$(gen_args "$@")"
- if [ ! -z "${template}" ]; then
- printf "#ifndef ${atomicname}\n"
- . ${template}
- printf "#define ${atomicname} ${atomicname}\n"
- printf "#endif\n\n"
- fi
+ . ${template}
}
#gen_order_fallback(meta, pfx, name, sfx, order, atomic, int, args...)
@@ -59,69 +52,92 @@ gen_proto_fallback()
gen_template_fallback "${tmpl}" "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "$@"
}
-#gen_basic_fallbacks(basename)
-gen_basic_fallbacks()
-{
- local basename="$1"; shift
-cat << EOF
-#define ${basename}_acquire ${basename}
-#define ${basename}_release ${basename}
-#define ${basename}_relaxed ${basename}
-EOF
-}
-
-#gen_proto_order_variants(meta, pfx, name, sfx, atomic, int, args...)
-gen_proto_order_variants()
+#gen_proto_order_variant(meta, pfx, name, sfx, order, atomic, int, args...)
+gen_proto_order_variant()
{
local meta="$1"; shift
local pfx="$1"; shift
local name="$1"; shift
local sfx="$1"; shift
+ local order="$1"; shift
local atomic="$1"
- local basename="arch_${atomic}_${pfx}${name}${sfx}"
-
- local template="$(find_fallback_template "${pfx}" "${name}" "${sfx}" "")"
+ local atomicname="${atomic}_${pfx}${name}${sfx}${order}"
+ local basename="${atomic}_${pfx}${name}${sfx}"
- # If we don't have relaxed atomics, then we don't bother with ordering fallbacks
- # read_acquire and set_release need to be templated, though
- if ! meta_has_relaxed "${meta}"; then
- gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "" "$@"
+ local template="$(find_fallback_template "${pfx}" "${name}" "${sfx}" "${order}")"
- if meta_has_acquire "${meta}"; then
- gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_acquire" "$@"
- fi
+ # Where there is no possible fallback, this order variant is mandatory
+ # and must be provided by arch code. Add a comment to the header to
+ # make this obvious.
+ #
+ # Ideally we'd error on a missing definition, but arch code might
+ # define this order variant as a C function without a preprocessor
+ # symbol.
+ if [ -z ${template} ] && [ -z "${order}" ] && ! meta_has_relaxed "${meta}"; then
+ printf "#define raw_${atomicname} arch_${atomicname}\n\n"
+ return
+ fi
- if meta_has_release "${meta}"; then
- gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_release" "$@"
- fi
+ printf "#if defined(arch_${atomicname})\n"
+ printf "#define raw_${atomicname} arch_${atomicname}\n"
- return
+ # Allow FULL/ACQUIRE/RELEASE ops to be defined in terms of RELAXED ops
+ if [ "${order}" != "_relaxed" ] && meta_has_relaxed "${meta}"; then
+ printf "#elif defined(arch_${basename}_relaxed)\n"
+ gen_order_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "$@"
fi
- printf "#ifndef ${basename}_relaxed\n"
+ # Allow ACQUIRE/RELEASE/RELAXED ops to be defined in terms of FULL ops
+ if [ ! -z "${order}" ]; then
+ printf "#elif defined(arch_${basename})\n"
+ printf "#define raw_${atomicname} arch_${basename}\n"
+ fi
+ printf "#else\n"
if [ ! -z "${template}" ]; then
- printf "#ifdef ${basename}\n"
+ gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "$@"
+ else
+ printf "#error \"Unable to define raw_${atomicname}\"\n"
fi
- gen_basic_fallbacks "${basename}"
+ printf "#endif\n\n"
+}
- if [ ! -z "${template}" ]; then
- printf "#endif /* ${basename} */\n\n"
- gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "" "$@"
- gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_acquire" "$@"
- gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_release" "$@"
- gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_relaxed" "$@"
+
+#gen_proto_order_variants(meta, pfx, name, sfx, atomic, int, args...)
+gen_proto_order_variants()
+{
+ local meta="$1"; shift
+ local pfx="$1"; shift
+ local name="$1"; shift
+ local sfx="$1"; shift
+ local atomic="$1"
+
+ gen_proto_order_variant "${meta}" "${pfx}" "${name}" "${sfx}" "" "$@"
+
+ if meta_has_acquire "${meta}"; then
+ gen_proto_order_variant "${meta}" "${pfx}" "${name}" "${sfx}" "_acquire" "$@"
fi
- printf "#else /* ${basename}_relaxed */\n\n"
+ if meta_has_release "${meta}"; then
+ gen_proto_order_variant "${meta}" "${pfx}" "${name}" "${sfx}" "_release" "$@"
+ fi
- gen_order_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_acquire" "$@"
- gen_order_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_release" "$@"
- gen_order_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "" "$@"
+ if meta_has_relaxed "${meta}"; then
+ gen_proto_order_variant "${meta}" "${pfx}" "${name}" "${sfx}" "_relaxed" "$@"
+ fi
+}
- printf "#endif /* ${basename}_relaxed */\n\n"
+#gen_basic_fallbacks(basename)
+gen_basic_fallbacks()
+{
+ local basename="$1"; shift
+cat << EOF
+#define raw_${basename}_acquire arch_${basename}
+#define raw_${basename}_release arch_${basename}
+#define raw_${basename}_relaxed arch_${basename}
+EOF
}
gen_order_fallbacks()
@@ -130,36 +146,65 @@ gen_order_fallbacks()
cat <<EOF
-#ifndef ${xchg}_acquire
-#define ${xchg}_acquire(...) \\
- __atomic_op_acquire(${xchg}, __VA_ARGS__)
+#define raw_${xchg}_relaxed arch_${xchg}_relaxed
+
+#ifdef arch_${xchg}_acquire
+#define raw_${xchg}_acquire arch_${xchg}_acquire
+#else
+#define raw_${xchg}_acquire(...) \\
+ __atomic_op_acquire(arch_${xchg}, __VA_ARGS__)
#endif
-#ifndef ${xchg}_release
-#define ${xchg}_release(...) \\
- __atomic_op_release(${xchg}, __VA_ARGS__)
+#ifdef arch_${xchg}_release
+#define raw_${xchg}_release arch_${xchg}_release
+#else
+#define raw_${xchg}_release(...) \\
+ __atomic_op_release(arch_${xchg}, __VA_ARGS__)
#endif
-#ifndef ${xchg}
-#define ${xchg}(...) \\
- __atomic_op_fence(${xchg}, __VA_ARGS__)
+#ifdef arch_${xchg}
+#define raw_${xchg} arch_${xchg}
+#else
+#define raw_${xchg}(...) \\
+ __atomic_op_fence(arch_${xchg}, __VA_ARGS__)
#endif
EOF
}
-gen_xchg_fallbacks()
+gen_xchg_order_fallback()
{
local xchg="$1"; shift
- printf "#ifndef ${xchg}_relaxed\n"
+ local order="$1"; shift
+ local forder="${order:-_fence}"
- gen_basic_fallbacks ${xchg}
+ printf "#if defined(arch_${xchg}${order})\n"
+ printf "#define raw_${xchg}${order} arch_${xchg}${order}\n"
- printf "#else /* ${xchg}_relaxed */\n"
+ if [ "${order}" != "_relaxed" ]; then
+ printf "#elif defined(arch_${xchg}_relaxed)\n"
+ printf "#define raw_${xchg}${order}(...) \\\\\n"
+ printf " __atomic_op${forder}(arch_${xchg}, __VA_ARGS__)\n"
+ fi
- gen_order_fallbacks ${xchg}
+ if [ ! -z "${order}" ]; then
+ printf "#elif defined(arch_${xchg})\n"
+ printf "#define raw_${xchg}${order} arch_${xchg}\n"
+ fi
- printf "#endif /* ${xchg}_relaxed */\n\n"
+ printf "#else\n"
+ printf "extern void raw_${xchg}${order}_not_implemented(void);\n"
+ printf "#define raw_${xchg}${order}(...) raw_${xchg}${order}_not_implemented()\n"
+ printf "#endif\n\n"
+}
+
+gen_xchg_fallbacks()
+{
+ local xchg="$1"; shift
+
+ for order in "" "_acquire" "_release" "_relaxed"; do
+ gen_xchg_order_fallback "${xchg}" "${order}"
+ done
}
gen_try_cmpxchg_fallback()
@@ -168,40 +213,61 @@ gen_try_cmpxchg_fallback()
local order="$1"; shift;
cat <<EOF
-#ifndef arch_try_${cmpxchg}${order}
-#define arch_try_${cmpxchg}${order}(_ptr, _oldp, _new) \\
+#define raw_try_${cmpxchg}${order}(_ptr, _oldp, _new) \\
({ \\
typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \\
- ___r = arch_${cmpxchg}${order}((_ptr), ___o, (_new)); \\
+ ___r = raw_${cmpxchg}${order}((_ptr), ___o, (_new)); \\
if (unlikely(___r != ___o)) \\
*___op = ___r; \\
likely(___r == ___o); \\
})
-#endif /* arch_try_${cmpxchg}${order} */
-
EOF
}
-gen_try_cmpxchg_fallbacks()
+gen_try_cmpxchg_order_fallback()
{
- local cmpxchg="$1"; shift;
+ local cmpxchg="$1"; shift
+ local order="$1"; shift
+ local forder="${order:-_fence}"
- printf "#ifndef arch_try_${cmpxchg}_relaxed\n"
- printf "#ifdef arch_try_${cmpxchg}\n"
+ printf "#if defined(arch_try_${cmpxchg}${order})\n"
+ printf "#define raw_try_${cmpxchg}${order} arch_try_${cmpxchg}${order}\n"
- gen_basic_fallbacks "arch_try_${cmpxchg}"
+ if [ "${order}" != "_relaxed" ]; then
+ printf "#elif defined(arch_try_${cmpxchg}_relaxed)\n"
+ printf "#define raw_try_${cmpxchg}${order}(...) \\\\\n"
+ printf " __atomic_op${forder}(arch_try_${cmpxchg}, __VA_ARGS__)\n"
+ fi
+
+ if [ ! -z "${order}" ]; then
+ printf "#elif defined(arch_try_${cmpxchg})\n"
+ printf "#define raw_try_${cmpxchg}${order} arch_try_${cmpxchg}\n"
+ fi
- printf "#endif /* arch_try_${cmpxchg} */\n\n"
+ printf "#else\n"
+ gen_try_cmpxchg_fallback "${cmpxchg}" "${order}"
+ printf "#endif\n\n"
+}
+
+gen_try_cmpxchg_fallbacks()
+{
+ local cmpxchg="$1"; shift;
for order in "" "_acquire" "_release" "_relaxed"; do
- gen_try_cmpxchg_fallback "${cmpxchg}" "${order}"
+ gen_try_cmpxchg_order_fallback "${cmpxchg}" "${order}"
done
+}
- printf "#else /* arch_try_${cmpxchg}_relaxed */\n"
-
- gen_order_fallbacks "arch_try_${cmpxchg}"
+gen_cmpxchg_local_fallbacks()
+{
+ local cmpxchg="$1"; shift
- printf "#endif /* arch_try_${cmpxchg}_relaxed */\n\n"
+ printf "#define raw_${cmpxchg} arch_${cmpxchg}\n\n"
+ printf "#ifdef arch_try_${cmpxchg}\n"
+ printf "#define raw_try_${cmpxchg} arch_try_${cmpxchg}\n"
+ printf "#else\n"
+ gen_try_cmpxchg_fallback "${cmpxchg}" ""
+ printf "#endif\n\n"
}
cat << EOF
@@ -217,7 +283,7 @@ cat << EOF
EOF
-for xchg in "arch_xchg" "arch_cmpxchg" "arch_cmpxchg64" "arch_cmpxchg128"; do
+for xchg in "xchg" "cmpxchg" "cmpxchg64" "cmpxchg128"; do
gen_xchg_fallbacks "${xchg}"
done
@@ -225,8 +291,12 @@ for cmpxchg in "cmpxchg" "cmpxchg64" "cmpxchg128"; do
gen_try_cmpxchg_fallbacks "${cmpxchg}"
done
-for cmpxchg in "cmpxchg_local" "cmpxchg64_local"; do
- gen_try_cmpxchg_fallback "${cmpxchg}" ""
+for cmpxchg in "cmpxchg_local" "cmpxchg64_local" "cmpxchg128_local"; do
+ gen_cmpxchg_local_fallbacks "${cmpxchg}" ""
+done
+
+for cmpxchg in "sync_cmpxchg"; do
+ printf "#define raw_${cmpxchg} arch_${cmpxchg}\n\n"
done
grep '^[a-z]' "$1" | while read name meta args; do
diff --git a/scripts/atomic/gen-atomic-raw.sh b/scripts/atomic/gen-atomic-raw.sh
deleted file mode 100755
index c7e3c52b49279..0000000000000
--- a/scripts/atomic/gen-atomic-raw.sh
+++ /dev/null
@@ -1,80 +0,0 @@
-#!/bin/sh
-# SPDX-License-Identifier: GPL-2.0
-
-ATOMICDIR=$(dirname $0)
-
-. ${ATOMICDIR}/atomic-tbl.sh
-
-#gen_proto_order_variant(meta, pfx, name, sfx, order, atomic, int, arg...)
-gen_proto_order_variant()
-{
- local meta="$1"; shift
- local pfx="$1"; shift
- local name="$1"; shift
- local sfx="$1"; shift
- local order="$1"; shift
- local atomic="$1"; shift
- local int="$1"; shift
-
- local atomicname="${atomic}_${pfx}${name}${sfx}${order}"
-
- local ret="$(gen_ret_type "${meta}" "${int}")"
- local params="$(gen_params "${int}" "${atomic}" "$@")"
- local args="$(gen_args "$@")"
- local retstmt="$(gen_ret_stmt "${meta}")"
-
-cat <<EOF
-static __always_inline ${ret}
-raw_${atomicname}(${params})
-{
- ${retstmt}arch_${atomicname}(${args});
-}
-
-EOF
-}
-
-gen_xchg()
-{
- local xchg="$1"; shift
- local order="$1"; shift
-
-cat <<EOF
-#define raw_${xchg}${order}(...) \\
- arch_${xchg}${order}(__VA_ARGS__)
-EOF
-}
-
-cat << EOF
-// SPDX-License-Identifier: GPL-2.0
-
-// Generated by $0
-// DO NOT MODIFY THIS FILE DIRECTLY
-
-#ifndef _LINUX_ATOMIC_RAW_H
-#define _LINUX_ATOMIC_RAW_H
-
-EOF
-
-grep '^[a-z]' "$1" | while read name meta args; do
- gen_proto "${meta}" "${name}" "atomic" "int" ${args}
-done
-
-grep '^[a-z]' "$1" | while read name meta args; do
- gen_proto "${meta}" "${name}" "atomic64" "s64" ${args}
-done
-
-for xchg in "xchg" "cmpxchg" "cmpxchg64" "cmpxchg128" "try_cmpxchg" "try_cmpxchg64" "try_cmpxchg128"; do
- for order in "" "_acquire" "_release" "_relaxed"; do
- gen_xchg "${xchg}" "${order}"
- printf "\n"
- done
-done
-
-for xchg in "cmpxchg_local" "cmpxchg64_local" "cmpxchg128_local" "sync_cmpxchg" "try_cmpxchg_local" "try_cmpxchg64_local" "try_cmpxchg128_local"; do
- gen_xchg "${xchg}" ""
- printf "\n"
-done
-
-cat <<EOF
-#endif /* _LINUX_ATOMIC_RAW_H */
-EOF
diff --git a/scripts/atomic/gen-atomics.sh b/scripts/atomic/gen-atomics.sh
index 631d351f9f1f3..5b98a83076932 100755
--- a/scripts/atomic/gen-atomics.sh
+++ b/scripts/atomic/gen-atomics.sh
@@ -11,7 +11,6 @@ cat <<EOF |
gen-atomic-instrumented.sh linux/atomic/atomic-instrumented.h
gen-atomic-long.sh linux/atomic/atomic-long.h
gen-atomic-fallback.sh linux/atomic/atomic-arch-fallback.h
-gen-atomic-raw.sh linux/atomic/atomic-raw.h
EOF
while read script header args; do
/bin/sh ${ATOMICDIR}/${script} ${ATOMICTBL} ${args} > ${LINUXDIR}/include/${header}
--
2.30.2
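For reference, the restructured gen_xchg_order_fallback() in the patch
above emits preprocessor blocks of roughly the following shape, shown
here for xchg with acquire ordering:

| #if defined(arch_xchg_acquire)
| #define raw_xchg_acquire arch_xchg_acquire
| #elif defined(arch_xchg_relaxed)
| #define raw_xchg_acquire(...) \
| 	__atomic_op_acquire(arch_xchg, __VA_ARGS__)
| #elif defined(arch_xchg)
| #define raw_xchg_acquire arch_xchg
| #else
| extern void raw_xchg_acquire_not_implemented(void);
| #define raw_xchg_acquire(...) raw_xchg_acquire_not_implemented()
| #endif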
Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of
RELAXED atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in terms
of FULL ordered atomics. Other atomics are optional, and don't exist in
some configurations (e.g. not all architectures implement the 128-bit
cmpxchg ops).
Subsequent patches will require that architectures define a preprocessor
symbol for any atomic (or ordering variant) which is optional. This will
make the fallback ifdeffery more robust, and simplify future changes.
Add the required definitions to arch/m68k.
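As a rough illustration of the resulting convention, an op that an
architecture implements as a C function gains a like-named preprocessor
symbol, which the generated headers can then test with ordinary
ifdeffery, e.g.:

| /* arch code: C implementation, plus a symbol advertising it */
| #define arch_atomic_fetch_add arch_atomic_fetch_add
|
| /* generated header: use the arch op when present, else fall back */
| #if defined(arch_atomic_fetch_add)
| #define raw_atomic_fetch_add arch_atomic_fetch_add
| #else
| /* ... generic fallback built from other arch_atomic_*() ops ... */
| #endif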
Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/m68k/include/asm/atomic.h | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/arch/m68k/include/asm/atomic.h b/arch/m68k/include/asm/atomic.h
index 190a032f19be7..4bfbc25f6ecf4 100644
--- a/arch/m68k/include/asm/atomic.h
+++ b/arch/m68k/include/asm/atomic.h
@@ -106,6 +106,11 @@ static inline int arch_atomic_fetch_##op(int i, atomic_t * v) \
ATOMIC_OPS(add, +=, add)
ATOMIC_OPS(sub, -=, sub)
+#define arch_atomic_add_return arch_atomic_add_return
+#define arch_atomic_sub_return arch_atomic_sub_return
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+
#undef ATOMIC_OPS
#define ATOMIC_OPS(op, c_op, asm_op) \
ATOMIC_OP(op, c_op, asm_op) \
@@ -115,6 +120,10 @@ ATOMIC_OPS(and, &=, and)
ATOMIC_OPS(or, |=, or)
ATOMIC_OPS(xor, ^=, eor)
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
#undef ATOMIC_OPS
#undef ATOMIC_FETCH_OP
#undef ATOMIC_OP_RETURN
--
2.30.2
Now that arch_atomic*() usage is limited to the atomic headers, there are
no longer any users of arch_atomic_long_*(), and the atomic_long_*() layer
can be built directly on top of raw_atomic*() and raw_atomic64*().
Generate the raw_atomic_long_*() ops directly, and remove the now-redundant
raw_atomic_long_*() wrappers from atomic-raw.h.
There should be no functional change as a result of this patch.
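For reference, each generated raw_atomic_long_*() op is now a trivial
wrapper around the corresponding raw_atomic64_*() or raw_atomic_*() op,
selected by CONFIG_64BIT. Condensed to a single op (the generated header
below groups all ops under one CONFIG_64BIT conditional), this looks
like:

| #ifdef CONFIG_64BIT
| static __always_inline long
| raw_atomic_long_read(const atomic_long_t *v)
| {
| 	return raw_atomic64_read(v);
| }
| #else
| static __always_inline long
| raw_atomic_long_read(const atomic_long_t *v)
| {
| 	return raw_atomic_read(v);
| }
| #endif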
Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
---
include/linux/atomic.h | 2 +-
include/linux/atomic/atomic-long.h | 682 ++++++++++++++---------------
include/linux/atomic/atomic-raw.h | 512 +---------------------
scripts/atomic/gen-atomic-long.sh | 4 +-
scripts/atomic/gen-atomic-raw.sh | 4 -
5 files changed, 345 insertions(+), 859 deletions(-)
diff --git a/include/linux/atomic.h b/include/linux/atomic.h
index 127f5dc63a7df..296cfae0389fe 100644
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -78,8 +78,8 @@
})
#include <linux/atomic/atomic-arch-fallback.h>
-#include <linux/atomic/atomic-long.h>
#include <linux/atomic/atomic-raw.h>
+#include <linux/atomic/atomic-long.h>
#include <linux/atomic/atomic-instrumented.h>
#endif /* _LINUX_ATOMIC_H */
diff --git a/include/linux/atomic/atomic-long.h b/include/linux/atomic/atomic-long.h
index 2fc51ba66bebd..92dc82ce1ce6d 100644
--- a/include/linux/atomic/atomic-long.h
+++ b/include/linux/atomic/atomic-long.h
@@ -24,1027 +24,1027 @@ typedef atomic_t atomic_long_t;
#ifdef CONFIG_64BIT
static __always_inline long
-arch_atomic_long_read(const atomic_long_t *v)
+raw_atomic_long_read(const atomic_long_t *v)
{
- return arch_atomic64_read(v);
+ return raw_atomic64_read(v);
}
static __always_inline long
-arch_atomic_long_read_acquire(const atomic_long_t *v)
+raw_atomic_long_read_acquire(const atomic_long_t *v)
{
- return arch_atomic64_read_acquire(v);
+ return raw_atomic64_read_acquire(v);
}
static __always_inline void
-arch_atomic_long_set(atomic_long_t *v, long i)
+raw_atomic_long_set(atomic_long_t *v, long i)
{
- arch_atomic64_set(v, i);
+ raw_atomic64_set(v, i);
}
static __always_inline void
-arch_atomic_long_set_release(atomic_long_t *v, long i)
+raw_atomic_long_set_release(atomic_long_t *v, long i)
{
- arch_atomic64_set_release(v, i);
+ raw_atomic64_set_release(v, i);
}
static __always_inline void
-arch_atomic_long_add(long i, atomic_long_t *v)
+raw_atomic_long_add(long i, atomic_long_t *v)
{
- arch_atomic64_add(i, v);
+ raw_atomic64_add(i, v);
}
static __always_inline long
-arch_atomic_long_add_return(long i, atomic_long_t *v)
+raw_atomic_long_add_return(long i, atomic_long_t *v)
{
- return arch_atomic64_add_return(i, v);
+ return raw_atomic64_add_return(i, v);
}
static __always_inline long
-arch_atomic_long_add_return_acquire(long i, atomic_long_t *v)
+raw_atomic_long_add_return_acquire(long i, atomic_long_t *v)
{
- return arch_atomic64_add_return_acquire(i, v);
+ return raw_atomic64_add_return_acquire(i, v);
}
static __always_inline long
-arch_atomic_long_add_return_release(long i, atomic_long_t *v)
+raw_atomic_long_add_return_release(long i, atomic_long_t *v)
{
- return arch_atomic64_add_return_release(i, v);
+ return raw_atomic64_add_return_release(i, v);
}
static __always_inline long
-arch_atomic_long_add_return_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v)
{
- return arch_atomic64_add_return_relaxed(i, v);
+ return raw_atomic64_add_return_relaxed(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_add(long i, atomic_long_t *v)
+raw_atomic_long_fetch_add(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_add(i, v);
+ return raw_atomic64_fetch_add(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
+raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_add_acquire(i, v);
+ return raw_atomic64_fetch_add_acquire(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_add_release(long i, atomic_long_t *v)
+raw_atomic_long_fetch_add_release(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_add_release(i, v);
+ return raw_atomic64_fetch_add_release(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_add_relaxed(i, v);
+ return raw_atomic64_fetch_add_relaxed(i, v);
}
static __always_inline void
-arch_atomic_long_sub(long i, atomic_long_t *v)
+raw_atomic_long_sub(long i, atomic_long_t *v)
{
- arch_atomic64_sub(i, v);
+ raw_atomic64_sub(i, v);
}
static __always_inline long
-arch_atomic_long_sub_return(long i, atomic_long_t *v)
+raw_atomic_long_sub_return(long i, atomic_long_t *v)
{
- return arch_atomic64_sub_return(i, v);
+ return raw_atomic64_sub_return(i, v);
}
static __always_inline long
-arch_atomic_long_sub_return_acquire(long i, atomic_long_t *v)
+raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v)
{
- return arch_atomic64_sub_return_acquire(i, v);
+ return raw_atomic64_sub_return_acquire(i, v);
}
static __always_inline long
-arch_atomic_long_sub_return_release(long i, atomic_long_t *v)
+raw_atomic_long_sub_return_release(long i, atomic_long_t *v)
{
- return arch_atomic64_sub_return_release(i, v);
+ return raw_atomic64_sub_return_release(i, v);
}
static __always_inline long
-arch_atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
{
- return arch_atomic64_sub_return_relaxed(i, v);
+ return raw_atomic64_sub_return_relaxed(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_sub(long i, atomic_long_t *v)
+raw_atomic_long_fetch_sub(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_sub(i, v);
+ return raw_atomic64_fetch_sub(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
+raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_sub_acquire(i, v);
+ return raw_atomic64_fetch_sub_acquire(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_sub_release(long i, atomic_long_t *v)
+raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_sub_release(i, v);
+ return raw_atomic64_fetch_sub_release(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_sub_relaxed(i, v);
+ return raw_atomic64_fetch_sub_relaxed(i, v);
}
static __always_inline void
-arch_atomic_long_inc(atomic_long_t *v)
+raw_atomic_long_inc(atomic_long_t *v)
{
- arch_atomic64_inc(v);
+ raw_atomic64_inc(v);
}
static __always_inline long
-arch_atomic_long_inc_return(atomic_long_t *v)
+raw_atomic_long_inc_return(atomic_long_t *v)
{
- return arch_atomic64_inc_return(v);
+ return raw_atomic64_inc_return(v);
}
static __always_inline long
-arch_atomic_long_inc_return_acquire(atomic_long_t *v)
+raw_atomic_long_inc_return_acquire(atomic_long_t *v)
{
- return arch_atomic64_inc_return_acquire(v);
+ return raw_atomic64_inc_return_acquire(v);
}
static __always_inline long
-arch_atomic_long_inc_return_release(atomic_long_t *v)
+raw_atomic_long_inc_return_release(atomic_long_t *v)
{
- return arch_atomic64_inc_return_release(v);
+ return raw_atomic64_inc_return_release(v);
}
static __always_inline long
-arch_atomic_long_inc_return_relaxed(atomic_long_t *v)
+raw_atomic_long_inc_return_relaxed(atomic_long_t *v)
{
- return arch_atomic64_inc_return_relaxed(v);
+ return raw_atomic64_inc_return_relaxed(v);
}
static __always_inline long
-arch_atomic_long_fetch_inc(atomic_long_t *v)
+raw_atomic_long_fetch_inc(atomic_long_t *v)
{
- return arch_atomic64_fetch_inc(v);
+ return raw_atomic64_fetch_inc(v);
}
static __always_inline long
-arch_atomic_long_fetch_inc_acquire(atomic_long_t *v)
+raw_atomic_long_fetch_inc_acquire(atomic_long_t *v)
{
- return arch_atomic64_fetch_inc_acquire(v);
+ return raw_atomic64_fetch_inc_acquire(v);
}
static __always_inline long
-arch_atomic_long_fetch_inc_release(atomic_long_t *v)
+raw_atomic_long_fetch_inc_release(atomic_long_t *v)
{
- return arch_atomic64_fetch_inc_release(v);
+ return raw_atomic64_fetch_inc_release(v);
}
static __always_inline long
-arch_atomic_long_fetch_inc_relaxed(atomic_long_t *v)
+raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v)
{
- return arch_atomic64_fetch_inc_relaxed(v);
+ return raw_atomic64_fetch_inc_relaxed(v);
}
static __always_inline void
-arch_atomic_long_dec(atomic_long_t *v)
+raw_atomic_long_dec(atomic_long_t *v)
{
- arch_atomic64_dec(v);
+ raw_atomic64_dec(v);
}
static __always_inline long
-arch_atomic_long_dec_return(atomic_long_t *v)
+raw_atomic_long_dec_return(atomic_long_t *v)
{
- return arch_atomic64_dec_return(v);
+ return raw_atomic64_dec_return(v);
}
static __always_inline long
-arch_atomic_long_dec_return_acquire(atomic_long_t *v)
+raw_atomic_long_dec_return_acquire(atomic_long_t *v)
{
- return arch_atomic64_dec_return_acquire(v);
+ return raw_atomic64_dec_return_acquire(v);
}
static __always_inline long
-arch_atomic_long_dec_return_release(atomic_long_t *v)
+raw_atomic_long_dec_return_release(atomic_long_t *v)
{
- return arch_atomic64_dec_return_release(v);
+ return raw_atomic64_dec_return_release(v);
}
static __always_inline long
-arch_atomic_long_dec_return_relaxed(atomic_long_t *v)
+raw_atomic_long_dec_return_relaxed(atomic_long_t *v)
{
- return arch_atomic64_dec_return_relaxed(v);
+ return raw_atomic64_dec_return_relaxed(v);
}
static __always_inline long
-arch_atomic_long_fetch_dec(atomic_long_t *v)
+raw_atomic_long_fetch_dec(atomic_long_t *v)
{
- return arch_atomic64_fetch_dec(v);
+ return raw_atomic64_fetch_dec(v);
}
static __always_inline long
-arch_atomic_long_fetch_dec_acquire(atomic_long_t *v)
+raw_atomic_long_fetch_dec_acquire(atomic_long_t *v)
{
- return arch_atomic64_fetch_dec_acquire(v);
+ return raw_atomic64_fetch_dec_acquire(v);
}
static __always_inline long
-arch_atomic_long_fetch_dec_release(atomic_long_t *v)
+raw_atomic_long_fetch_dec_release(atomic_long_t *v)
{
- return arch_atomic64_fetch_dec_release(v);
+ return raw_atomic64_fetch_dec_release(v);
}
static __always_inline long
-arch_atomic_long_fetch_dec_relaxed(atomic_long_t *v)
+raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v)
{
- return arch_atomic64_fetch_dec_relaxed(v);
+ return raw_atomic64_fetch_dec_relaxed(v);
}
static __always_inline void
-arch_atomic_long_and(long i, atomic_long_t *v)
+raw_atomic_long_and(long i, atomic_long_t *v)
{
- arch_atomic64_and(i, v);
+ raw_atomic64_and(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_and(long i, atomic_long_t *v)
+raw_atomic_long_fetch_and(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_and(i, v);
+ return raw_atomic64_fetch_and(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
+raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_and_acquire(i, v);
+ return raw_atomic64_fetch_and_acquire(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_and_release(long i, atomic_long_t *v)
+raw_atomic_long_fetch_and_release(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_and_release(i, v);
+ return raw_atomic64_fetch_and_release(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_and_relaxed(i, v);
+ return raw_atomic64_fetch_and_relaxed(i, v);
}
static __always_inline void
-arch_atomic_long_andnot(long i, atomic_long_t *v)
+raw_atomic_long_andnot(long i, atomic_long_t *v)
{
- arch_atomic64_andnot(i, v);
+ raw_atomic64_andnot(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_andnot(long i, atomic_long_t *v)
+raw_atomic_long_fetch_andnot(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_andnot(i, v);
+ return raw_atomic64_fetch_andnot(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
+raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_andnot_acquire(i, v);
+ return raw_atomic64_fetch_andnot_acquire(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
+raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_andnot_release(i, v);
+ return raw_atomic64_fetch_andnot_release(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_andnot_relaxed(i, v);
+ return raw_atomic64_fetch_andnot_relaxed(i, v);
}
static __always_inline void
-arch_atomic_long_or(long i, atomic_long_t *v)
+raw_atomic_long_or(long i, atomic_long_t *v)
{
- arch_atomic64_or(i, v);
+ raw_atomic64_or(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_or(long i, atomic_long_t *v)
+raw_atomic_long_fetch_or(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_or(i, v);
+ return raw_atomic64_fetch_or(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
+raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_or_acquire(i, v);
+ return raw_atomic64_fetch_or_acquire(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_or_release(long i, atomic_long_t *v)
+raw_atomic_long_fetch_or_release(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_or_release(i, v);
+ return raw_atomic64_fetch_or_release(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_or_relaxed(i, v);
+ return raw_atomic64_fetch_or_relaxed(i, v);
}
static __always_inline void
-arch_atomic_long_xor(long i, atomic_long_t *v)
+raw_atomic_long_xor(long i, atomic_long_t *v)
{
- arch_atomic64_xor(i, v);
+ raw_atomic64_xor(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_xor(long i, atomic_long_t *v)
+raw_atomic_long_fetch_xor(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_xor(i, v);
+ return raw_atomic64_fetch_xor(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
+raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_xor_acquire(i, v);
+ return raw_atomic64_fetch_xor_acquire(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_xor_release(long i, atomic_long_t *v)
+raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_xor_release(i, v);
+ return raw_atomic64_fetch_xor_release(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_xor_relaxed(i, v);
+ return raw_atomic64_fetch_xor_relaxed(i, v);
}
static __always_inline long
-arch_atomic_long_xchg(atomic_long_t *v, long i)
+raw_atomic_long_xchg(atomic_long_t *v, long i)
{
- return arch_atomic64_xchg(v, i);
+ return raw_atomic64_xchg(v, i);
}
static __always_inline long
-arch_atomic_long_xchg_acquire(atomic_long_t *v, long i)
+raw_atomic_long_xchg_acquire(atomic_long_t *v, long i)
{
- return arch_atomic64_xchg_acquire(v, i);
+ return raw_atomic64_xchg_acquire(v, i);
}
static __always_inline long
-arch_atomic_long_xchg_release(atomic_long_t *v, long i)
+raw_atomic_long_xchg_release(atomic_long_t *v, long i)
{
- return arch_atomic64_xchg_release(v, i);
+ return raw_atomic64_xchg_release(v, i);
}
static __always_inline long
-arch_atomic_long_xchg_relaxed(atomic_long_t *v, long i)
+raw_atomic_long_xchg_relaxed(atomic_long_t *v, long i)
{
- return arch_atomic64_xchg_relaxed(v, i);
+ return raw_atomic64_xchg_relaxed(v, i);
}
static __always_inline long
-arch_atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
+raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
{
- return arch_atomic64_cmpxchg(v, old, new);
+ return raw_atomic64_cmpxchg(v, old, new);
}
static __always_inline long
-arch_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
+raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
{
- return arch_atomic64_cmpxchg_acquire(v, old, new);
+ return raw_atomic64_cmpxchg_acquire(v, old, new);
}
static __always_inline long
-arch_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
+raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
{
- return arch_atomic64_cmpxchg_release(v, old, new);
+ return raw_atomic64_cmpxchg_release(v, old, new);
}
static __always_inline long
-arch_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
+raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
{
- return arch_atomic64_cmpxchg_relaxed(v, old, new);
+ return raw_atomic64_cmpxchg_relaxed(v, old, new);
}
static __always_inline bool
-arch_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
+raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
{
- return arch_atomic64_try_cmpxchg(v, (s64 *)old, new);
+ return raw_atomic64_try_cmpxchg(v, (s64 *)old, new);
}
static __always_inline bool
-arch_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
+raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
{
- return arch_atomic64_try_cmpxchg_acquire(v, (s64 *)old, new);
+ return raw_atomic64_try_cmpxchg_acquire(v, (s64 *)old, new);
}
static __always_inline bool
-arch_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
+raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
{
- return arch_atomic64_try_cmpxchg_release(v, (s64 *)old, new);
+ return raw_atomic64_try_cmpxchg_release(v, (s64 *)old, new);
}
static __always_inline bool
-arch_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
+raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
{
- return arch_atomic64_try_cmpxchg_relaxed(v, (s64 *)old, new);
+ return raw_atomic64_try_cmpxchg_relaxed(v, (s64 *)old, new);
}
static __always_inline bool
-arch_atomic_long_sub_and_test(long i, atomic_long_t *v)
+raw_atomic_long_sub_and_test(long i, atomic_long_t *v)
{
- return arch_atomic64_sub_and_test(i, v);
+ return raw_atomic64_sub_and_test(i, v);
}
static __always_inline bool
-arch_atomic_long_dec_and_test(atomic_long_t *v)
+raw_atomic_long_dec_and_test(atomic_long_t *v)
{
- return arch_atomic64_dec_and_test(v);
+ return raw_atomic64_dec_and_test(v);
}
static __always_inline bool
-arch_atomic_long_inc_and_test(atomic_long_t *v)
+raw_atomic_long_inc_and_test(atomic_long_t *v)
{
- return arch_atomic64_inc_and_test(v);
+ return raw_atomic64_inc_and_test(v);
}
static __always_inline bool
-arch_atomic_long_add_negative(long i, atomic_long_t *v)
+raw_atomic_long_add_negative(long i, atomic_long_t *v)
{
- return arch_atomic64_add_negative(i, v);
+ return raw_atomic64_add_negative(i, v);
}
static __always_inline bool
-arch_atomic_long_add_negative_acquire(long i, atomic_long_t *v)
+raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v)
{
- return arch_atomic64_add_negative_acquire(i, v);
+ return raw_atomic64_add_negative_acquire(i, v);
}
static __always_inline bool
-arch_atomic_long_add_negative_release(long i, atomic_long_t *v)
+raw_atomic_long_add_negative_release(long i, atomic_long_t *v)
{
- return arch_atomic64_add_negative_release(i, v);
+ return raw_atomic64_add_negative_release(i, v);
}
static __always_inline bool
-arch_atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
{
- return arch_atomic64_add_negative_relaxed(i, v);
+ return raw_atomic64_add_negative_relaxed(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
+raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
{
- return arch_atomic64_fetch_add_unless(v, a, u);
+ return raw_atomic64_fetch_add_unless(v, a, u);
}
static __always_inline bool
-arch_atomic_long_add_unless(atomic_long_t *v, long a, long u)
+raw_atomic_long_add_unless(atomic_long_t *v, long a, long u)
{
- return arch_atomic64_add_unless(v, a, u);
+ return raw_atomic64_add_unless(v, a, u);
}
static __always_inline bool
-arch_atomic_long_inc_not_zero(atomic_long_t *v)
+raw_atomic_long_inc_not_zero(atomic_long_t *v)
{
- return arch_atomic64_inc_not_zero(v);
+ return raw_atomic64_inc_not_zero(v);
}
static __always_inline bool
-arch_atomic_long_inc_unless_negative(atomic_long_t *v)
+raw_atomic_long_inc_unless_negative(atomic_long_t *v)
{
- return arch_atomic64_inc_unless_negative(v);
+ return raw_atomic64_inc_unless_negative(v);
}
static __always_inline bool
-arch_atomic_long_dec_unless_positive(atomic_long_t *v)
+raw_atomic_long_dec_unless_positive(atomic_long_t *v)
{
- return arch_atomic64_dec_unless_positive(v);
+ return raw_atomic64_dec_unless_positive(v);
}
static __always_inline long
-arch_atomic_long_dec_if_positive(atomic_long_t *v)
+raw_atomic_long_dec_if_positive(atomic_long_t *v)
{
- return arch_atomic64_dec_if_positive(v);
+ return raw_atomic64_dec_if_positive(v);
}
#else /* CONFIG_64BIT */
static __always_inline long
-arch_atomic_long_read(const atomic_long_t *v)
+raw_atomic_long_read(const atomic_long_t *v)
{
- return arch_atomic_read(v);
+ return raw_atomic_read(v);
}
static __always_inline long
-arch_atomic_long_read_acquire(const atomic_long_t *v)
+raw_atomic_long_read_acquire(const atomic_long_t *v)
{
- return arch_atomic_read_acquire(v);
+ return raw_atomic_read_acquire(v);
}
static __always_inline void
-arch_atomic_long_set(atomic_long_t *v, long i)
+raw_atomic_long_set(atomic_long_t *v, long i)
{
- arch_atomic_set(v, i);
+ raw_atomic_set(v, i);
}
static __always_inline void
-arch_atomic_long_set_release(atomic_long_t *v, long i)
+raw_atomic_long_set_release(atomic_long_t *v, long i)
{
- arch_atomic_set_release(v, i);
+ raw_atomic_set_release(v, i);
}
static __always_inline void
-arch_atomic_long_add(long i, atomic_long_t *v)
+raw_atomic_long_add(long i, atomic_long_t *v)
{
- arch_atomic_add(i, v);
+ raw_atomic_add(i, v);
}
static __always_inline long
-arch_atomic_long_add_return(long i, atomic_long_t *v)
+raw_atomic_long_add_return(long i, atomic_long_t *v)
{
- return arch_atomic_add_return(i, v);
+ return raw_atomic_add_return(i, v);
}
static __always_inline long
-arch_atomic_long_add_return_acquire(long i, atomic_long_t *v)
+raw_atomic_long_add_return_acquire(long i, atomic_long_t *v)
{
- return arch_atomic_add_return_acquire(i, v);
+ return raw_atomic_add_return_acquire(i, v);
}
static __always_inline long
-arch_atomic_long_add_return_release(long i, atomic_long_t *v)
+raw_atomic_long_add_return_release(long i, atomic_long_t *v)
{
- return arch_atomic_add_return_release(i, v);
+ return raw_atomic_add_return_release(i, v);
}
static __always_inline long
-arch_atomic_long_add_return_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v)
{
- return arch_atomic_add_return_relaxed(i, v);
+ return raw_atomic_add_return_relaxed(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_add(long i, atomic_long_t *v)
+raw_atomic_long_fetch_add(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_add(i, v);
+ return raw_atomic_fetch_add(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
+raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_add_acquire(i, v);
+ return raw_atomic_fetch_add_acquire(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_add_release(long i, atomic_long_t *v)
+raw_atomic_long_fetch_add_release(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_add_release(i, v);
+ return raw_atomic_fetch_add_release(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_add_relaxed(i, v);
+ return raw_atomic_fetch_add_relaxed(i, v);
}
static __always_inline void
-arch_atomic_long_sub(long i, atomic_long_t *v)
+raw_atomic_long_sub(long i, atomic_long_t *v)
{
- arch_atomic_sub(i, v);
+ raw_atomic_sub(i, v);
}
static __always_inline long
-arch_atomic_long_sub_return(long i, atomic_long_t *v)
+raw_atomic_long_sub_return(long i, atomic_long_t *v)
{
- return arch_atomic_sub_return(i, v);
+ return raw_atomic_sub_return(i, v);
}
static __always_inline long
-arch_atomic_long_sub_return_acquire(long i, atomic_long_t *v)
+raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v)
{
- return arch_atomic_sub_return_acquire(i, v);
+ return raw_atomic_sub_return_acquire(i, v);
}
static __always_inline long
-arch_atomic_long_sub_return_release(long i, atomic_long_t *v)
+raw_atomic_long_sub_return_release(long i, atomic_long_t *v)
{
- return arch_atomic_sub_return_release(i, v);
+ return raw_atomic_sub_return_release(i, v);
}
static __always_inline long
-arch_atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
{
- return arch_atomic_sub_return_relaxed(i, v);
+ return raw_atomic_sub_return_relaxed(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_sub(long i, atomic_long_t *v)
+raw_atomic_long_fetch_sub(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_sub(i, v);
+ return raw_atomic_fetch_sub(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
+raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_sub_acquire(i, v);
+ return raw_atomic_fetch_sub_acquire(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_sub_release(long i, atomic_long_t *v)
+raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_sub_release(i, v);
+ return raw_atomic_fetch_sub_release(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_sub_relaxed(i, v);
+ return raw_atomic_fetch_sub_relaxed(i, v);
}
static __always_inline void
-arch_atomic_long_inc(atomic_long_t *v)
+raw_atomic_long_inc(atomic_long_t *v)
{
- arch_atomic_inc(v);
+ raw_atomic_inc(v);
}
static __always_inline long
-arch_atomic_long_inc_return(atomic_long_t *v)
+raw_atomic_long_inc_return(atomic_long_t *v)
{
- return arch_atomic_inc_return(v);
+ return raw_atomic_inc_return(v);
}
static __always_inline long
-arch_atomic_long_inc_return_acquire(atomic_long_t *v)
+raw_atomic_long_inc_return_acquire(atomic_long_t *v)
{
- return arch_atomic_inc_return_acquire(v);
+ return raw_atomic_inc_return_acquire(v);
}
static __always_inline long
-arch_atomic_long_inc_return_release(atomic_long_t *v)
+raw_atomic_long_inc_return_release(atomic_long_t *v)
{
- return arch_atomic_inc_return_release(v);
+ return raw_atomic_inc_return_release(v);
}
static __always_inline long
-arch_atomic_long_inc_return_relaxed(atomic_long_t *v)
+raw_atomic_long_inc_return_relaxed(atomic_long_t *v)
{
- return arch_atomic_inc_return_relaxed(v);
+ return raw_atomic_inc_return_relaxed(v);
}
static __always_inline long
-arch_atomic_long_fetch_inc(atomic_long_t *v)
+raw_atomic_long_fetch_inc(atomic_long_t *v)
{
- return arch_atomic_fetch_inc(v);
+ return raw_atomic_fetch_inc(v);
}
static __always_inline long
-arch_atomic_long_fetch_inc_acquire(atomic_long_t *v)
+raw_atomic_long_fetch_inc_acquire(atomic_long_t *v)
{
- return arch_atomic_fetch_inc_acquire(v);
+ return raw_atomic_fetch_inc_acquire(v);
}
static __always_inline long
-arch_atomic_long_fetch_inc_release(atomic_long_t *v)
+raw_atomic_long_fetch_inc_release(atomic_long_t *v)
{
- return arch_atomic_fetch_inc_release(v);
+ return raw_atomic_fetch_inc_release(v);
}
static __always_inline long
-arch_atomic_long_fetch_inc_relaxed(atomic_long_t *v)
+raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v)
{
- return arch_atomic_fetch_inc_relaxed(v);
+ return raw_atomic_fetch_inc_relaxed(v);
}
static __always_inline void
-arch_atomic_long_dec(atomic_long_t *v)
+raw_atomic_long_dec(atomic_long_t *v)
{
- arch_atomic_dec(v);
+ raw_atomic_dec(v);
}
static __always_inline long
-arch_atomic_long_dec_return(atomic_long_t *v)
+raw_atomic_long_dec_return(atomic_long_t *v)
{
- return arch_atomic_dec_return(v);
+ return raw_atomic_dec_return(v);
}
static __always_inline long
-arch_atomic_long_dec_return_acquire(atomic_long_t *v)
+raw_atomic_long_dec_return_acquire(atomic_long_t *v)
{
- return arch_atomic_dec_return_acquire(v);
+ return raw_atomic_dec_return_acquire(v);
}
static __always_inline long
-arch_atomic_long_dec_return_release(atomic_long_t *v)
+raw_atomic_long_dec_return_release(atomic_long_t *v)
{
- return arch_atomic_dec_return_release(v);
+ return raw_atomic_dec_return_release(v);
}
static __always_inline long
-arch_atomic_long_dec_return_relaxed(atomic_long_t *v)
+raw_atomic_long_dec_return_relaxed(atomic_long_t *v)
{
- return arch_atomic_dec_return_relaxed(v);
+ return raw_atomic_dec_return_relaxed(v);
}
static __always_inline long
-arch_atomic_long_fetch_dec(atomic_long_t *v)
+raw_atomic_long_fetch_dec(atomic_long_t *v)
{
- return arch_atomic_fetch_dec(v);
+ return raw_atomic_fetch_dec(v);
}
static __always_inline long
-arch_atomic_long_fetch_dec_acquire(atomic_long_t *v)
+raw_atomic_long_fetch_dec_acquire(atomic_long_t *v)
{
- return arch_atomic_fetch_dec_acquire(v);
+ return raw_atomic_fetch_dec_acquire(v);
}
static __always_inline long
-arch_atomic_long_fetch_dec_release(atomic_long_t *v)
+raw_atomic_long_fetch_dec_release(atomic_long_t *v)
{
- return arch_atomic_fetch_dec_release(v);
+ return raw_atomic_fetch_dec_release(v);
}
static __always_inline long
-arch_atomic_long_fetch_dec_relaxed(atomic_long_t *v)
+raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v)
{
- return arch_atomic_fetch_dec_relaxed(v);
+ return raw_atomic_fetch_dec_relaxed(v);
}
static __always_inline void
-arch_atomic_long_and(long i, atomic_long_t *v)
+raw_atomic_long_and(long i, atomic_long_t *v)
{
- arch_atomic_and(i, v);
+ raw_atomic_and(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_and(long i, atomic_long_t *v)
+raw_atomic_long_fetch_and(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_and(i, v);
+ return raw_atomic_fetch_and(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
+raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_and_acquire(i, v);
+ return raw_atomic_fetch_and_acquire(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_and_release(long i, atomic_long_t *v)
+raw_atomic_long_fetch_and_release(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_and_release(i, v);
+ return raw_atomic_fetch_and_release(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_and_relaxed(i, v);
+ return raw_atomic_fetch_and_relaxed(i, v);
}
static __always_inline void
-arch_atomic_long_andnot(long i, atomic_long_t *v)
+raw_atomic_long_andnot(long i, atomic_long_t *v)
{
- arch_atomic_andnot(i, v);
+ raw_atomic_andnot(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_andnot(long i, atomic_long_t *v)
+raw_atomic_long_fetch_andnot(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_andnot(i, v);
+ return raw_atomic_fetch_andnot(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
+raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_andnot_acquire(i, v);
+ return raw_atomic_fetch_andnot_acquire(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
+raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_andnot_release(i, v);
+ return raw_atomic_fetch_andnot_release(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_andnot_relaxed(i, v);
+ return raw_atomic_fetch_andnot_relaxed(i, v);
}
static __always_inline void
-arch_atomic_long_or(long i, atomic_long_t *v)
+raw_atomic_long_or(long i, atomic_long_t *v)
{
- arch_atomic_or(i, v);
+ raw_atomic_or(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_or(long i, atomic_long_t *v)
+raw_atomic_long_fetch_or(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_or(i, v);
+ return raw_atomic_fetch_or(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
+raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_or_acquire(i, v);
+ return raw_atomic_fetch_or_acquire(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_or_release(long i, atomic_long_t *v)
+raw_atomic_long_fetch_or_release(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_or_release(i, v);
+ return raw_atomic_fetch_or_release(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_or_relaxed(i, v);
+ return raw_atomic_fetch_or_relaxed(i, v);
}
static __always_inline void
-arch_atomic_long_xor(long i, atomic_long_t *v)
+raw_atomic_long_xor(long i, atomic_long_t *v)
{
- arch_atomic_xor(i, v);
+ raw_atomic_xor(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_xor(long i, atomic_long_t *v)
+raw_atomic_long_fetch_xor(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_xor(i, v);
+ return raw_atomic_fetch_xor(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
+raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_xor_acquire(i, v);
+ return raw_atomic_fetch_xor_acquire(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_xor_release(long i, atomic_long_t *v)
+raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_xor_release(i, v);
+ return raw_atomic_fetch_xor_release(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_xor_relaxed(i, v);
+ return raw_atomic_fetch_xor_relaxed(i, v);
}
static __always_inline long
-arch_atomic_long_xchg(atomic_long_t *v, long i)
+raw_atomic_long_xchg(atomic_long_t *v, long i)
{
- return arch_atomic_xchg(v, i);
+ return raw_atomic_xchg(v, i);
}
static __always_inline long
-arch_atomic_long_xchg_acquire(atomic_long_t *v, long i)
+raw_atomic_long_xchg_acquire(atomic_long_t *v, long i)
{
- return arch_atomic_xchg_acquire(v, i);
+ return raw_atomic_xchg_acquire(v, i);
}
static __always_inline long
-arch_atomic_long_xchg_release(atomic_long_t *v, long i)
+raw_atomic_long_xchg_release(atomic_long_t *v, long i)
{
- return arch_atomic_xchg_release(v, i);
+ return raw_atomic_xchg_release(v, i);
}
static __always_inline long
-arch_atomic_long_xchg_relaxed(atomic_long_t *v, long i)
+raw_atomic_long_xchg_relaxed(atomic_long_t *v, long i)
{
- return arch_atomic_xchg_relaxed(v, i);
+ return raw_atomic_xchg_relaxed(v, i);
}
static __always_inline long
-arch_atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
+raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
{
- return arch_atomic_cmpxchg(v, old, new);
+ return raw_atomic_cmpxchg(v, old, new);
}
static __always_inline long
-arch_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
+raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
{
- return arch_atomic_cmpxchg_acquire(v, old, new);
+ return raw_atomic_cmpxchg_acquire(v, old, new);
}
static __always_inline long
-arch_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
+raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
{
- return arch_atomic_cmpxchg_release(v, old, new);
+ return raw_atomic_cmpxchg_release(v, old, new);
}
static __always_inline long
-arch_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
+raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
{
- return arch_atomic_cmpxchg_relaxed(v, old, new);
+ return raw_atomic_cmpxchg_relaxed(v, old, new);
}
static __always_inline bool
-arch_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
+raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
{
- return arch_atomic_try_cmpxchg(v, (int *)old, new);
+ return raw_atomic_try_cmpxchg(v, (int *)old, new);
}
static __always_inline bool
-arch_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
+raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
{
- return arch_atomic_try_cmpxchg_acquire(v, (int *)old, new);
+ return raw_atomic_try_cmpxchg_acquire(v, (int *)old, new);
}
static __always_inline bool
-arch_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
+raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
{
- return arch_atomic_try_cmpxchg_release(v, (int *)old, new);
+ return raw_atomic_try_cmpxchg_release(v, (int *)old, new);
}
static __always_inline bool
-arch_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
+raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
{
- return arch_atomic_try_cmpxchg_relaxed(v, (int *)old, new);
+ return raw_atomic_try_cmpxchg_relaxed(v, (int *)old, new);
}
static __always_inline bool
-arch_atomic_long_sub_and_test(long i, atomic_long_t *v)
+raw_atomic_long_sub_and_test(long i, atomic_long_t *v)
{
- return arch_atomic_sub_and_test(i, v);
+ return raw_atomic_sub_and_test(i, v);
}
static __always_inline bool
-arch_atomic_long_dec_and_test(atomic_long_t *v)
+raw_atomic_long_dec_and_test(atomic_long_t *v)
{
- return arch_atomic_dec_and_test(v);
+ return raw_atomic_dec_and_test(v);
}
static __always_inline bool
-arch_atomic_long_inc_and_test(atomic_long_t *v)
+raw_atomic_long_inc_and_test(atomic_long_t *v)
{
- return arch_atomic_inc_and_test(v);
+ return raw_atomic_inc_and_test(v);
}
static __always_inline bool
-arch_atomic_long_add_negative(long i, atomic_long_t *v)
+raw_atomic_long_add_negative(long i, atomic_long_t *v)
{
- return arch_atomic_add_negative(i, v);
+ return raw_atomic_add_negative(i, v);
}
static __always_inline bool
-arch_atomic_long_add_negative_acquire(long i, atomic_long_t *v)
+raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v)
{
- return arch_atomic_add_negative_acquire(i, v);
+ return raw_atomic_add_negative_acquire(i, v);
}
static __always_inline bool
-arch_atomic_long_add_negative_release(long i, atomic_long_t *v)
+raw_atomic_long_add_negative_release(long i, atomic_long_t *v)
{
- return arch_atomic_add_negative_release(i, v);
+ return raw_atomic_add_negative_release(i, v);
}
static __always_inline bool
-arch_atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
{
- return arch_atomic_add_negative_relaxed(i, v);
+ return raw_atomic_add_negative_relaxed(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
+raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
{
- return arch_atomic_fetch_add_unless(v, a, u);
+ return raw_atomic_fetch_add_unless(v, a, u);
}
static __always_inline bool
-arch_atomic_long_add_unless(atomic_long_t *v, long a, long u)
+raw_atomic_long_add_unless(atomic_long_t *v, long a, long u)
{
- return arch_atomic_add_unless(v, a, u);
+ return raw_atomic_add_unless(v, a, u);
}
static __always_inline bool
-arch_atomic_long_inc_not_zero(atomic_long_t *v)
+raw_atomic_long_inc_not_zero(atomic_long_t *v)
{
- return arch_atomic_inc_not_zero(v);
+ return raw_atomic_inc_not_zero(v);
}
static __always_inline bool
-arch_atomic_long_inc_unless_negative(atomic_long_t *v)
+raw_atomic_long_inc_unless_negative(atomic_long_t *v)
{
- return arch_atomic_inc_unless_negative(v);
+ return raw_atomic_inc_unless_negative(v);
}
static __always_inline bool
-arch_atomic_long_dec_unless_positive(atomic_long_t *v)
+raw_atomic_long_dec_unless_positive(atomic_long_t *v)
{
- return arch_atomic_dec_unless_positive(v);
+ return raw_atomic_dec_unless_positive(v);
}
static __always_inline long
-arch_atomic_long_dec_if_positive(atomic_long_t *v)
+raw_atomic_long_dec_if_positive(atomic_long_t *v)
{
- return arch_atomic_dec_if_positive(v);
+ return raw_atomic_dec_if_positive(v);
}
#endif /* CONFIG_64BIT */
#endif /* _LINUX_ATOMIC_LONG_H */
-// a194c07d7d2f4b0e178d3c118c919775d5d65f50
+// 108784846d3bbbb201b8dabe621c5dc30b216206
diff --git a/include/linux/atomic/atomic-raw.h b/include/linux/atomic/atomic-raw.h
index 83ff0269657e7..8b2fc04cf8c54 100644
--- a/include/linux/atomic/atomic-raw.h
+++ b/include/linux/atomic/atomic-raw.h
@@ -1026,516 +1026,6 @@ raw_atomic64_dec_if_positive(atomic64_t *v)
return arch_atomic64_dec_if_positive(v);
}
-static __always_inline long
-raw_atomic_long_read(const atomic_long_t *v)
-{
- return arch_atomic_long_read(v);
-}
-
-static __always_inline long
-raw_atomic_long_read_acquire(const atomic_long_t *v)
-{
- return arch_atomic_long_read_acquire(v);
-}
-
-static __always_inline void
-raw_atomic_long_set(atomic_long_t *v, long i)
-{
- arch_atomic_long_set(v, i);
-}
-
-static __always_inline void
-raw_atomic_long_set_release(atomic_long_t *v, long i)
-{
- arch_atomic_long_set_release(v, i);
-}
-
-static __always_inline void
-raw_atomic_long_add(long i, atomic_long_t *v)
-{
- arch_atomic_long_add(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_add_return(long i, atomic_long_t *v)
-{
- return arch_atomic_long_add_return(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_add_return_acquire(long i, atomic_long_t *v)
-{
- return arch_atomic_long_add_return_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_add_return_release(long i, atomic_long_t *v)
-{
- return arch_atomic_long_add_return_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v)
-{
- return arch_atomic_long_add_return_relaxed(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_add(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_add(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_add_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_add_release(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_add_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_add_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic_long_sub(long i, atomic_long_t *v)
-{
- arch_atomic_long_sub(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_sub_return(long i, atomic_long_t *v)
-{
- return arch_atomic_long_sub_return(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v)
-{
- return arch_atomic_long_sub_return_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_sub_return_release(long i, atomic_long_t *v)
-{
- return arch_atomic_long_sub_return_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
-{
- return arch_atomic_long_sub_return_relaxed(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_sub(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_sub(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_sub_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_sub_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_sub_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic_long_inc(atomic_long_t *v)
-{
- arch_atomic_long_inc(v);
-}
-
-static __always_inline long
-raw_atomic_long_inc_return(atomic_long_t *v)
-{
- return arch_atomic_long_inc_return(v);
-}
-
-static __always_inline long
-raw_atomic_long_inc_return_acquire(atomic_long_t *v)
-{
- return arch_atomic_long_inc_return_acquire(v);
-}
-
-static __always_inline long
-raw_atomic_long_inc_return_release(atomic_long_t *v)
-{
- return arch_atomic_long_inc_return_release(v);
-}
-
-static __always_inline long
-raw_atomic_long_inc_return_relaxed(atomic_long_t *v)
-{
- return arch_atomic_long_inc_return_relaxed(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_inc(atomic_long_t *v)
-{
- return arch_atomic_long_fetch_inc(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_inc_acquire(atomic_long_t *v)
-{
- return arch_atomic_long_fetch_inc_acquire(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_inc_release(atomic_long_t *v)
-{
- return arch_atomic_long_fetch_inc_release(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v)
-{
- return arch_atomic_long_fetch_inc_relaxed(v);
-}
-
-static __always_inline void
-raw_atomic_long_dec(atomic_long_t *v)
-{
- arch_atomic_long_dec(v);
-}
-
-static __always_inline long
-raw_atomic_long_dec_return(atomic_long_t *v)
-{
- return arch_atomic_long_dec_return(v);
-}
-
-static __always_inline long
-raw_atomic_long_dec_return_acquire(atomic_long_t *v)
-{
- return arch_atomic_long_dec_return_acquire(v);
-}
-
-static __always_inline long
-raw_atomic_long_dec_return_release(atomic_long_t *v)
-{
- return arch_atomic_long_dec_return_release(v);
-}
-
-static __always_inline long
-raw_atomic_long_dec_return_relaxed(atomic_long_t *v)
-{
- return arch_atomic_long_dec_return_relaxed(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_dec(atomic_long_t *v)
-{
- return arch_atomic_long_fetch_dec(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_dec_acquire(atomic_long_t *v)
-{
- return arch_atomic_long_fetch_dec_acquire(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_dec_release(atomic_long_t *v)
-{
- return arch_atomic_long_fetch_dec_release(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v)
-{
- return arch_atomic_long_fetch_dec_relaxed(v);
-}
-
-static __always_inline void
-raw_atomic_long_and(long i, atomic_long_t *v)
-{
- arch_atomic_long_and(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_and(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_and(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_and_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_and_release(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_and_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_and_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic_long_andnot(long i, atomic_long_t *v)
-{
- arch_atomic_long_andnot(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_andnot(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_andnot(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_andnot_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_andnot_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_andnot_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic_long_or(long i, atomic_long_t *v)
-{
- arch_atomic_long_or(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_or(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_or(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_or_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_or_release(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_or_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_or_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic_long_xor(long i, atomic_long_t *v)
-{
- arch_atomic_long_xor(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_xor(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_xor(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_xor_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_xor_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_xor_relaxed(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_xchg(atomic_long_t *v, long i)
-{
- return arch_atomic_long_xchg(v, i);
-}
-
-static __always_inline long
-raw_atomic_long_xchg_acquire(atomic_long_t *v, long i)
-{
- return arch_atomic_long_xchg_acquire(v, i);
-}
-
-static __always_inline long
-raw_atomic_long_xchg_release(atomic_long_t *v, long i)
-{
- return arch_atomic_long_xchg_release(v, i);
-}
-
-static __always_inline long
-raw_atomic_long_xchg_relaxed(atomic_long_t *v, long i)
-{
- return arch_atomic_long_xchg_relaxed(v, i);
-}
-
-static __always_inline long
-raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
-{
- return arch_atomic_long_cmpxchg(v, old, new);
-}
-
-static __always_inline long
-raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
-{
- return arch_atomic_long_cmpxchg_acquire(v, old, new);
-}
-
-static __always_inline long
-raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
-{
- return arch_atomic_long_cmpxchg_release(v, old, new);
-}
-
-static __always_inline long
-raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
-{
- return arch_atomic_long_cmpxchg_relaxed(v, old, new);
-}
-
-static __always_inline bool
-raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
-{
- return arch_atomic_long_try_cmpxchg(v, old, new);
-}
-
-static __always_inline bool
-raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
-{
- return arch_atomic_long_try_cmpxchg_acquire(v, old, new);
-}
-
-static __always_inline bool
-raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
-{
- return arch_atomic_long_try_cmpxchg_release(v, old, new);
-}
-
-static __always_inline bool
-raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
-{
- return arch_atomic_long_try_cmpxchg_relaxed(v, old, new);
-}
-
-static __always_inline bool
-raw_atomic_long_sub_and_test(long i, atomic_long_t *v)
-{
- return arch_atomic_long_sub_and_test(i, v);
-}
-
-static __always_inline bool
-raw_atomic_long_dec_and_test(atomic_long_t *v)
-{
- return arch_atomic_long_dec_and_test(v);
-}
-
-static __always_inline bool
-raw_atomic_long_inc_and_test(atomic_long_t *v)
-{
- return arch_atomic_long_inc_and_test(v);
-}
-
-static __always_inline bool
-raw_atomic_long_add_negative(long i, atomic_long_t *v)
-{
- return arch_atomic_long_add_negative(i, v);
-}
-
-static __always_inline bool
-raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v)
-{
- return arch_atomic_long_add_negative_acquire(i, v);
-}
-
-static __always_inline bool
-raw_atomic_long_add_negative_release(long i, atomic_long_t *v)
-{
- return arch_atomic_long_add_negative_release(i, v);
-}
-
-static __always_inline bool
-raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
-{
- return arch_atomic_long_add_negative_relaxed(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
-{
- return arch_atomic_long_fetch_add_unless(v, a, u);
-}
-
-static __always_inline bool
-raw_atomic_long_add_unless(atomic_long_t *v, long a, long u)
-{
- return arch_atomic_long_add_unless(v, a, u);
-}
-
-static __always_inline bool
-raw_atomic_long_inc_not_zero(atomic_long_t *v)
-{
- return arch_atomic_long_inc_not_zero(v);
-}
-
-static __always_inline bool
-raw_atomic_long_inc_unless_negative(atomic_long_t *v)
-{
- return arch_atomic_long_inc_unless_negative(v);
-}
-
-static __always_inline bool
-raw_atomic_long_dec_unless_positive(atomic_long_t *v)
-{
- return arch_atomic_long_dec_unless_positive(v);
-}
-
-static __always_inline long
-raw_atomic_long_dec_if_positive(atomic_long_t *v)
-{
- return arch_atomic_long_dec_if_positive(v);
-}
-
#define raw_xchg(...) \
arch_xchg(__VA_ARGS__)
@@ -1642,4 +1132,4 @@ raw_atomic_long_dec_if_positive(atomic_long_t *v)
arch_try_cmpxchg128_local(__VA_ARGS__)
#endif /* _LINUX_ATOMIC_RAW_H */
-// 01d54200571b3857755a07c10074a4fd58cef6b1
+// b23ed4424e85200e200ded094522e1d743b3a5b1
diff --git a/scripts/atomic/gen-atomic-long.sh b/scripts/atomic/gen-atomic-long.sh
index eda89cea6e1d1..75e91d6da30d3 100755
--- a/scripts/atomic/gen-atomic-long.sh
+++ b/scripts/atomic/gen-atomic-long.sh
@@ -47,9 +47,9 @@ gen_proto_order_variant()
cat <<EOF
static __always_inline ${ret}
-arch_atomic_long_${name}(${params})
+raw_atomic_long_${name}(${params})
{
- ${retstmt}arch_${atomic}_${name}(${argscast});
+ ${retstmt}raw_${atomic}_${name}(${argscast});
}
EOF
diff --git a/scripts/atomic/gen-atomic-raw.sh b/scripts/atomic/gen-atomic-raw.sh
index ba8d136f30e4c..c7e3c52b49279 100755
--- a/scripts/atomic/gen-atomic-raw.sh
+++ b/scripts/atomic/gen-atomic-raw.sh
@@ -63,10 +63,6 @@ grep '^[a-z]' "$1" | while read name meta args; do
gen_proto "${meta}" "${name}" "atomic64" "s64" ${args}
done
-grep '^[a-z]' "$1" | while read name meta args; do
- gen_proto "${meta}" "${name}" "atomic_long" "long" ${args}
-done
-
for xchg in "xchg" "cmpxchg" "cmpxchg64" "cmpxchg128" "try_cmpxchg" "try_cmpxchg64" "try_cmpxchg128"; do
for order in "" "_acquire" "_release" "_relaxed"; do
gen_xchg "${xchg}" "${order}"
--
2.30.2
From: "Paul E. McKenney" <[email protected]>
Add the generated atomic headers to driver-api/basics.rst in order to
provide documentation for the Linux kernel's atomic operations.
At the same time, drop the x86 atomic header, which provides kerneldoc
comments for some arch_atomic*_*() operations. The arch_atomic*_*()
operations are now purely an implementation detail of the
raw_atomic*_*() ops, and outside of implementing the atomics, code
should use the raw_atomic*_*() forms.
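For example (an illustrative sketch, not code from this patch), noinstr
code that might previously have reached for an arch_atomic*_*() op would
call the raw_ form instead:
| /* illustrative only: call the raw_ form; arch_atomic_inc() is an
|  * implementation detail and should not be used directly.
|  */
| static noinstr void example_noinstr_count(atomic_t *v)
| {
| 	raw_atomic_inc(v);
| }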
Signed-off-by: Paul E. McKenney <[email protected]>
[Mark: add atomic-{instrumented,long}.h, update commit message]
Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Cc: <[email protected]>
Cc: Akira Yokosawa <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
---
Documentation/driver-api/basics.rst | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/Documentation/driver-api/basics.rst b/Documentation/driver-api/basics.rst
index 4b4d8e28d3be4..7671b531ba1a8 100644
--- a/Documentation/driver-api/basics.rst
+++ b/Documentation/driver-api/basics.rst
@@ -84,7 +84,13 @@ Reference counting
Atomics
-------
-.. kernel-doc:: arch/x86/include/asm/atomic.h
+.. kernel-doc:: include/linux/atomic/atomic-instrumented.h
+ :internal:
+
+.. kernel-doc:: include/linux/atomic/atomic-arch-fallback.h
+ :internal:
+
+.. kernel-doc:: include/linux/atomic/atomic-long.h
:internal:
Kernel objects manipulation
--
2.30.2
Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of
RELAXED atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in terms
of FULL ordered atomics. Other atomics are optional, and don't exist in
some configurations (e.g. not all architectures implement the 128-bit
cmpxchg ops).
Subsequent patches will require that architectures define a preprocessor
symbol for any atomic (or ordering variant) which is optional. This will
make the fallback ifdeffery more robust, and simplify future changes.
Add the required definitions to arch/arc.
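For illustration, the convention is to define a preprocessor symbol with
the same name as the op, which the generated ifdeffery can then test for;
a minimal sketch (the real arc definitions follow in the diff below):
| /* advertise that this arch implements arch_atomic_fetch_add() */
| #define arch_atomic_fetch_add arch_atomic_fetch_add
|
| /* ...which generated code can later detect, e.g.: */
| #ifdef arch_atomic_fetch_add
| 	/* use the architecture's implementation */
| #else
| 	/* use a generic fallback */
| #endif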
Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/arc/include/asm/atomic-spinlock.h | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/arch/arc/include/asm/atomic-spinlock.h b/arch/arc/include/asm/atomic-spinlock.h
index 2c830347bfb4e..89d12a60f84c0 100644
--- a/arch/arc/include/asm/atomic-spinlock.h
+++ b/arch/arc/include/asm/atomic-spinlock.h
@@ -81,6 +81,11 @@ static inline int arch_atomic_fetch_##op(int i, atomic_t *v) \
ATOMIC_OPS(add, +=, add)
ATOMIC_OPS(sub, -=, sub)
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+#define arch_atomic_add_return arch_atomic_add_return
+#define arch_atomic_sub_return arch_atomic_sub_return
+
#undef ATOMIC_OPS
#define ATOMIC_OPS(op, c_op, asm_op) \
ATOMIC_OP(op, c_op, asm_op) \
@@ -92,7 +97,11 @@ ATOMIC_OPS(or, |=, or)
ATOMIC_OPS(xor, ^=, xor)
#define arch_atomic_andnot arch_atomic_andnot
+
+#define arch_atomic_fetch_and arch_atomic_fetch_and
#define arch_atomic_fetch_andnot arch_atomic_fetch_andnot
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
#undef ATOMIC_OPS
#undef ATOMIC_FETCH_OP
--
2.30.2
We removed cmpxchg_double() and variants in commit:
b4cf83b2d1da40b2 ("arch: Remove cmpxchg_double")
Which removed the need for "${mult}" in the instrumentation logic.
Unfortunately we missed an instance of "${mult}".
There is no change to the generated header.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
---
scripts/atomic/gen-atomic-instrumented.sh | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/scripts/atomic/gen-atomic-instrumented.sh b/scripts/atomic/gen-atomic-instrumented.sh
index a2ef735be8ca9..68557bfbbdc5e 100755
--- a/scripts/atomic/gen-atomic-instrumented.sh
+++ b/scripts/atomic/gen-atomic-instrumented.sh
@@ -118,7 +118,7 @@ cat <<EOF
EOF
[ -n "$kcsan_barrier" ] && printf "\t${kcsan_barrier}; \\\\\n"
cat <<EOF
- instrument_atomic_read_write(__ai_ptr, ${mult}sizeof(*__ai_ptr)); \\
+ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \\
arch_${xchg}${order}(__ai_ptr, __VA_ARGS__); \\
})
EOF
--
2.30.2
Currently the atomics are documented in Documentation/atomic_t.txt, and
have no kerneldoc comments. There are a sufficient number of gotchas
(e.g. semantics, noinstr-safety) that it would be nice to have comments
to call these out, and it would be nice to have kerneldoc comments such
that these can be collated.
While it's possible to derive the semantics from the code, this can be
painful given the amount of indirection we currently have (e.g. fallback
paths), and it's easy to be misled by naming, e.g.
* The unconditional void-returning ops *only* have relaxed variants
without a _relaxed suffix, and can easily be mistaken for being fully
ordered.
It would be nice to give these a _relaxed() suffix, but this would
result in significant churn throughout the kernel.
* Our naming of conditional and unconditional+test ops is rather
inconsistent, and it can be difficult to derive the name of an
operation, or to identify whether an op is conditional or
unconditional+test.
Some ops are clearly conditional:
- dec_if_positive
- add_unless
- dec_unless_positive
- inc_unless_negative
Some ops are clearly unconditional+test:
- sub_and_test
- dec_and_test
- inc_and_test
However, what exactly those ops test is not obvious. A _test_zero suffix
might be clearer.
Others could be read ambiguously:
- inc_not_zero // conditional
- add_negative // unconditional+test
It would probably be worth renaming these, e.g. to inc_unless_zero and
add_test_negative; the sketch below contrasts the two behaviours.
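As a rough illustration of the distinction (plain, non-atomic pseudo-C
that ignores ordering entirely; not code from this series):
| /* conditional: @v is only updated when the condition holds */
| static bool example_inc_not_zero(int *v)
| {
| 	if (*v == 0)
| 		return false;
| 	*v += 1;
| 	return true;
| }
|
| /* unconditional+test: @v is always updated, then the result is tested */
| static bool example_add_negative(int i, int *v)
| {
| 	*v += i;
| 	return *v < 0;
| }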
As a step towards making this more consistent and easier to understand,
this patch adds kerneldoc comments for all generated *atomic*_*()
functions. These are generated from templates, with some common text
shared, making it easy to extend these in future if necessary.
I've tried to make these as consistent and clear as possible, and I've
deliberately ensured:
* All ops have their ordering explicitly mentioned in the short and long
description.
* All test ops have "test" in their short description.
* All ops are described as an expression using their usual C operator.
For example:
andnot: "Atomically updates @v to (@v & ~@i)"
inc: "Atomically updates @v to (@v + 1)"
Which may be clearer to non-native English speakers, and allows all
the operations to be described in the same style.
* All conditional ops have their condition described as an expression
using the usual C operators. For example:
add_unless: "If (@v != @u), atomically updates @v to (@v + @i)"
cmpxchg: "If (@v == @old), atomically updates @v to @new"
Which may be clearer to non-native English speakers, and allows all
the operations to be described in the same style.
* All bitwise ops (and,andnot,or,xor) explicitly mention that they are
bitwise in their short description, so that they are not mistaken for
performing their logical equivalents.
* The noinstr safety of each op is explicitly described, with a
description of whether or not to use the raw_ form of the op.
There should be no functional change as a result of this patch.
Reported-by: Paul E. McKenney <[email protected]>
Signed-off-by: Mark Rutland <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Cc: Boqun Feng <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
---
include/linux/atomic/atomic-arch-fallback.h | 1848 +++++++++++-
include/linux/atomic/atomic-instrumented.h | 2771 +++++++++++++++++-
include/linux/atomic/atomic-long.h | 925 +++++-
scripts/atomic/atomic-tbl.sh | 112 +-
scripts/atomic/gen-atomic-fallback.sh | 2 +
scripts/atomic/gen-atomic-instrumented.sh | 2 +
scripts/atomic/gen-atomic-long.sh | 2 +
scripts/atomic/kerneldoc/add | 13 +
scripts/atomic/kerneldoc/add_negative | 13 +
scripts/atomic/kerneldoc/add_unless | 18 +
scripts/atomic/kerneldoc/and | 13 +
scripts/atomic/kerneldoc/andnot | 13 +
scripts/atomic/kerneldoc/cmpxchg | 14 +
scripts/atomic/kerneldoc/dec | 12 +
scripts/atomic/kerneldoc/dec_and_test | 12 +
scripts/atomic/kerneldoc/dec_if_positive | 12 +
scripts/atomic/kerneldoc/dec_unless_positive | 12 +
scripts/atomic/kerneldoc/inc | 12 +
scripts/atomic/kerneldoc/inc_and_test | 12 +
scripts/atomic/kerneldoc/inc_not_zero | 12 +
scripts/atomic/kerneldoc/inc_unless_negative | 12 +
scripts/atomic/kerneldoc/or | 13 +
scripts/atomic/kerneldoc/read | 12 +
scripts/atomic/kerneldoc/set | 13 +
scripts/atomic/kerneldoc/sub | 13 +
scripts/atomic/kerneldoc/sub_and_test | 13 +
scripts/atomic/kerneldoc/try_cmpxchg | 15 +
scripts/atomic/kerneldoc/xchg | 13 +
scripts/atomic/kerneldoc/xor | 13 +
29 files changed, 5940 insertions(+), 7 deletions(-)
create mode 100644 scripts/atomic/kerneldoc/add
create mode 100644 scripts/atomic/kerneldoc/add_negative
create mode 100644 scripts/atomic/kerneldoc/add_unless
create mode 100644 scripts/atomic/kerneldoc/and
create mode 100644 scripts/atomic/kerneldoc/andnot
create mode 100644 scripts/atomic/kerneldoc/cmpxchg
create mode 100644 scripts/atomic/kerneldoc/dec
create mode 100644 scripts/atomic/kerneldoc/dec_and_test
create mode 100644 scripts/atomic/kerneldoc/dec_if_positive
create mode 100644 scripts/atomic/kerneldoc/dec_unless_positive
create mode 100644 scripts/atomic/kerneldoc/inc
create mode 100644 scripts/atomic/kerneldoc/inc_and_test
create mode 100644 scripts/atomic/kerneldoc/inc_not_zero
create mode 100644 scripts/atomic/kerneldoc/inc_unless_negative
create mode 100644 scripts/atomic/kerneldoc/or
create mode 100644 scripts/atomic/kerneldoc/read
create mode 100644 scripts/atomic/kerneldoc/set
create mode 100644 scripts/atomic/kerneldoc/sub
create mode 100644 scripts/atomic/kerneldoc/sub_and_test
create mode 100644 scripts/atomic/kerneldoc/try_cmpxchg
create mode 100644 scripts/atomic/kerneldoc/xchg
create mode 100644 scripts/atomic/kerneldoc/xor
diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index 470c2890ab8d6..8cded57dd7a6f 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -428,12 +428,32 @@ extern void raw_cmpxchg128_relaxed_not_implemented(void);
#define raw_sync_cmpxchg arch_sync_cmpxchg
+/**
+ * raw_atomic_read() - atomic load with relaxed ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically loads the value of @v with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_read() elsewhere.
+ *
+ * Return: The value loaded from @v.
+ */
static __always_inline int
raw_atomic_read(const atomic_t *v)
{
return arch_atomic_read(v);
}
+/**
+ * raw_atomic_read_acquire() - atomic load with acquire ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically loads the value of @v with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_read_acquire() elsewhere.
+ *
+ * Return: The value loaded from @v.
+ */
static __always_inline int
raw_atomic_read_acquire(const atomic_t *v)
{
@@ -455,12 +475,34 @@ raw_atomic_read_acquire(const atomic_t *v)
#endif
}
+/**
+ * raw_atomic_set() - atomic set with relaxed ordering
+ * @v: pointer to atomic_t
+ * @i: int value to assign
+ *
+ * Atomically sets @v to @i with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_set() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic_set(atomic_t *v, int i)
{
arch_atomic_set(v, i);
}
+/**
+ * raw_atomic_set_release() - atomic set with release ordering
+ * @v: pointer to atomic_t
+ * @i: int value to assign
+ *
+ * Atomically sets @v to @i with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_set_release() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic_set_release(atomic_t *v, int i)
{
@@ -478,12 +520,34 @@ raw_atomic_set_release(atomic_t *v, int i)
#endif
}
+/**
+ * raw_atomic_add() - atomic add with relaxed ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_add() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic_add(int i, atomic_t *v)
{
arch_atomic_add(i, v);
}
+/**
+ * raw_atomic_add_return() - atomic add with full ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_add_return() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
raw_atomic_add_return(int i, atomic_t *v)
{
@@ -500,6 +564,17 @@ raw_atomic_add_return(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_add_return_acquire() - atomic add with acquire ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_add_return_acquire() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
raw_atomic_add_return_acquire(int i, atomic_t *v)
{
@@ -516,6 +591,17 @@ raw_atomic_add_return_acquire(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_add_return_release() - atomic add with release ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_add_return_release() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
raw_atomic_add_return_release(int i, atomic_t *v)
{
@@ -531,6 +617,17 @@ raw_atomic_add_return_release(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_add_return_relaxed() - atomic add with relaxed ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_add_return_relaxed() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
raw_atomic_add_return_relaxed(int i, atomic_t *v)
{
@@ -543,6 +640,17 @@ raw_atomic_add_return_relaxed(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_add() - atomic add with full ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_add() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_add(int i, atomic_t *v)
{
@@ -559,6 +667,17 @@ raw_atomic_fetch_add(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_add_acquire() - atomic add with acquire ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_add_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_add_acquire(int i, atomic_t *v)
{
@@ -575,6 +694,17 @@ raw_atomic_fetch_add_acquire(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_add_release() - atomic add with release ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_add_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_add_release(int i, atomic_t *v)
{
@@ -590,6 +720,17 @@ raw_atomic_fetch_add_release(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_add_relaxed() - atomic add with relaxed ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_add_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_add_relaxed(int i, atomic_t *v)
{
@@ -602,12 +743,34 @@ raw_atomic_fetch_add_relaxed(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_sub() - atomic subtract with relaxed ordering
+ * @i: int value to subtract
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_sub() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic_sub(int i, atomic_t *v)
{
arch_atomic_sub(i, v);
}
+/**
+ * raw_atomic_sub_return() - atomic subtract with full ordering
+ * @i: int value to subtract
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_sub_return() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
raw_atomic_sub_return(int i, atomic_t *v)
{
@@ -624,6 +787,17 @@ raw_atomic_sub_return(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_sub_return_acquire() - atomic subtract with acquire ordering
+ * @i: int value to subtract
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_sub_return_acquire() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
raw_atomic_sub_return_acquire(int i, atomic_t *v)
{
@@ -640,6 +814,17 @@ raw_atomic_sub_return_acquire(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_sub_return_release() - atomic subtract with release ordering
+ * @i: int value to subtract
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_sub_return_release() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
raw_atomic_sub_return_release(int i, atomic_t *v)
{
@@ -655,6 +840,17 @@ raw_atomic_sub_return_release(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_sub_return_relaxed() - atomic subtract with relaxed ordering
+ * @i: int value to subtract
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_sub_return_relaxed() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
raw_atomic_sub_return_relaxed(int i, atomic_t *v)
{
@@ -667,6 +863,17 @@ raw_atomic_sub_return_relaxed(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_sub() - atomic subtract with full ordering
+ * @i: int value to subtract
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_sub() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_sub(int i, atomic_t *v)
{
@@ -683,6 +890,17 @@ raw_atomic_fetch_sub(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_sub_acquire() - atomic subtract with acquire ordering
+ * @i: int value to subtract
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_sub_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_sub_acquire(int i, atomic_t *v)
{
@@ -699,6 +917,17 @@ raw_atomic_fetch_sub_acquire(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_sub_release() - atomic subtract with release ordering
+ * @i: int value to subtract
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_sub_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_sub_release(int i, atomic_t *v)
{
@@ -714,6 +943,17 @@ raw_atomic_fetch_sub_release(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_sub_relaxed() - atomic subtract with relaxed ordering
+ * @i: int value to subtract
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_sub_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_sub_relaxed(int i, atomic_t *v)
{
@@ -726,6 +966,16 @@ raw_atomic_fetch_sub_relaxed(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_inc() - atomic increment with relaxed ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + 1) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_inc() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic_inc(atomic_t *v)
{
@@ -736,6 +986,16 @@ raw_atomic_inc(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_inc_return() - atomic increment with full ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_inc_return() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
raw_atomic_inc_return(atomic_t *v)
{
@@ -752,6 +1012,16 @@ raw_atomic_inc_return(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_inc_return_acquire() - atomic increment with acquire ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + 1) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_inc_return_acquire() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
raw_atomic_inc_return_acquire(atomic_t *v)
{
@@ -768,6 +1038,16 @@ raw_atomic_inc_return_acquire(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_inc_return_release() - atomic increment with release ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + 1) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_inc_return_release() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
raw_atomic_inc_return_release(atomic_t *v)
{
@@ -783,6 +1063,16 @@ raw_atomic_inc_return_release(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_inc_return_relaxed() - atomic increment with relaxed ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + 1) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_inc_return_relaxed() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
raw_atomic_inc_return_relaxed(atomic_t *v)
{
@@ -795,6 +1085,16 @@ raw_atomic_inc_return_relaxed(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_inc() - atomic increment with full ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_inc() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_inc(atomic_t *v)
{
@@ -811,6 +1111,16 @@ raw_atomic_fetch_inc(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_inc_acquire() - atomic increment with acquire ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + 1) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_inc_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_inc_acquire(atomic_t *v)
{
@@ -827,6 +1137,16 @@ raw_atomic_fetch_inc_acquire(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_inc_release() - atomic increment with release ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + 1) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_inc_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_inc_release(atomic_t *v)
{
@@ -842,6 +1162,16 @@ raw_atomic_fetch_inc_release(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_inc_relaxed() - atomic increment with relaxed ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + 1) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_inc_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_inc_relaxed(atomic_t *v)
{
@@ -854,6 +1184,16 @@ raw_atomic_fetch_inc_relaxed(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_dec() - atomic decrement with relaxed ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - 1) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_dec() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic_dec(atomic_t *v)
{
@@ -864,6 +1204,16 @@ raw_atomic_dec(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_dec_return() - atomic decrement with full ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_dec_return() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
raw_atomic_dec_return(atomic_t *v)
{
@@ -880,6 +1230,16 @@ raw_atomic_dec_return(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_dec_return_acquire() - atomic decrement with acquire ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - 1) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_dec_return_acquire() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
raw_atomic_dec_return_acquire(atomic_t *v)
{
@@ -896,6 +1256,16 @@ raw_atomic_dec_return_acquire(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_dec_return_release() - atomic decrement with release ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - 1) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_dec_return_release() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
raw_atomic_dec_return_release(atomic_t *v)
{
@@ -911,6 +1281,16 @@ raw_atomic_dec_return_release(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_dec_return_relaxed() - atomic decrement with relaxed ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - 1) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_dec_return_relaxed() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
raw_atomic_dec_return_relaxed(atomic_t *v)
{
@@ -923,6 +1303,16 @@ raw_atomic_dec_return_relaxed(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_dec() - atomic decrement with full ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_dec() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_dec(atomic_t *v)
{
@@ -939,6 +1329,16 @@ raw_atomic_fetch_dec(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_dec_acquire() - atomic decrement with acquire ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - 1) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_dec_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_dec_acquire(atomic_t *v)
{
@@ -955,6 +1355,16 @@ raw_atomic_fetch_dec_acquire(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_dec_release() - atomic decrement with release ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - 1) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_dec_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_dec_release(atomic_t *v)
{
@@ -970,6 +1380,16 @@ raw_atomic_fetch_dec_release(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_dec_relaxed() - atomic decrement with relaxed ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - 1) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_dec_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_dec_relaxed(atomic_t *v)
{
@@ -982,12 +1402,34 @@ raw_atomic_fetch_dec_relaxed(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_and() - atomic bitwise AND with relaxed ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v & @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_and() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic_and(int i, atomic_t *v)
{
arch_atomic_and(i, v);
}
+/**
+ * raw_atomic_fetch_and() - atomic bitwise AND with full ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v & @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_and() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_and(int i, atomic_t *v)
{
@@ -1004,6 +1446,17 @@ raw_atomic_fetch_and(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_and_acquire() - atomic bitwise AND with acquire ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v & @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_and_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_and_acquire(int i, atomic_t *v)
{
@@ -1020,6 +1473,17 @@ raw_atomic_fetch_and_acquire(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_and_release() - atomic bitwise AND with release ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v & @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_and_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_and_release(int i, atomic_t *v)
{
@@ -1035,6 +1499,17 @@ raw_atomic_fetch_and_release(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_and_relaxed() - atomic bitwise AND with relaxed ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v & @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_and_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_and_relaxed(int i, atomic_t *v)
{
@@ -1047,6 +1522,17 @@ raw_atomic_fetch_and_relaxed(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_andnot() - atomic bitwise AND NOT with relaxed ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v & ~@i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_andnot() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic_andnot(int i, atomic_t *v)
{
@@ -1057,6 +1543,17 @@ raw_atomic_andnot(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_andnot() - atomic bitwise AND NOT with full ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v & ~@i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_andnot() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_andnot(int i, atomic_t *v)
{
@@ -1073,6 +1570,17 @@ raw_atomic_fetch_andnot(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_andnot_acquire() - atomic bitwise AND NOT with acquire ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v & ~@i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_andnot_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
{
@@ -1089,6 +1597,17 @@ raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_andnot_release() - atomic bitwise AND NOT with release ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v & ~@i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_andnot_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_andnot_release(int i, atomic_t *v)
{
@@ -1104,6 +1623,17 @@ raw_atomic_fetch_andnot_release(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_andnot_relaxed() - atomic bitwise AND NOT with relaxed ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v & ~@i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_andnot_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_andnot_relaxed(int i, atomic_t *v)
{
@@ -1116,12 +1646,34 @@ raw_atomic_fetch_andnot_relaxed(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_or() - atomic bitwise OR with relaxed ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v | @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_or() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic_or(int i, atomic_t *v)
{
arch_atomic_or(i, v);
}
+/**
+ * raw_atomic_fetch_or() - atomic bitwise OR with full ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v | @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_or() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_or(int i, atomic_t *v)
{
@@ -1138,6 +1690,17 @@ raw_atomic_fetch_or(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_or_acquire() - atomic bitwise OR with acquire ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v | @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_or_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_or_acquire(int i, atomic_t *v)
{
@@ -1154,6 +1717,17 @@ raw_atomic_fetch_or_acquire(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_or_release() - atomic bitwise OR with release ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v | @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_or_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_or_release(int i, atomic_t *v)
{
@@ -1169,6 +1743,17 @@ raw_atomic_fetch_or_release(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_or_relaxed() - atomic bitwise OR with relaxed ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v | @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_or_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_or_relaxed(int i, atomic_t *v)
{
@@ -1181,12 +1766,34 @@ raw_atomic_fetch_or_relaxed(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_xor() - atomic bitwise XOR with relaxed ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v ^ @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_xor() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic_xor(int i, atomic_t *v)
{
arch_atomic_xor(i, v);
}
+/**
+ * raw_atomic_fetch_xor() - atomic bitwise XOR with full ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v ^ @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_xor() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_xor(int i, atomic_t *v)
{
@@ -1203,6 +1810,17 @@ raw_atomic_fetch_xor(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_xor_acquire() - atomic bitwise XOR with acquire ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v ^ @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_xor_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_xor_acquire(int i, atomic_t *v)
{
@@ -1219,6 +1837,17 @@ raw_atomic_fetch_xor_acquire(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_xor_release() - atomic bitwise XOR with release ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v ^ @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_xor_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_xor_release(int i, atomic_t *v)
{
@@ -1234,6 +1863,17 @@ raw_atomic_fetch_xor_release(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_xor_relaxed() - atomic bitwise XOR with relaxed ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v ^ @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_xor_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_xor_relaxed(int i, atomic_t *v)
{
@@ -1246,6 +1886,17 @@ raw_atomic_fetch_xor_relaxed(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_xchg() - atomic exchange with full ordering
+ * @v: pointer to atomic_t
+ * @new: int value to assign
+ *
+ * Atomically updates @v to @new with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_xchg() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_xchg(atomic_t *v, int new)
{
@@ -1262,6 +1913,17 @@ raw_atomic_xchg(atomic_t *v, int new)
#endif
}
+/**
+ * raw_atomic_xchg_acquire() - atomic exchange with acquire ordering
+ * @v: pointer to atomic_t
+ * @new: int value to assign
+ *
+ * Atomically updates @v to @new with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_xchg_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_xchg_acquire(atomic_t *v, int new)
{
@@ -1278,6 +1940,17 @@ raw_atomic_xchg_acquire(atomic_t *v, int new)
#endif
}
+/**
+ * raw_atomic_xchg_release() - atomic exchange with release ordering
+ * @v: pointer to atomic_t
+ * @new: int value to assign
+ *
+ * Atomically updates @v to @new with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_xchg_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_xchg_release(atomic_t *v, int new)
{
@@ -1293,6 +1966,17 @@ raw_atomic_xchg_release(atomic_t *v, int new)
#endif
}
+/**
+ * raw_atomic_xchg_relaxed() - atomic exchange with relaxed ordering
+ * @v: pointer to atomic_t
+ * @new: int value to assign
+ *
+ * Atomically updates @v to @new with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_xchg_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_xchg_relaxed(atomic_t *v, int new)
{
@@ -1305,6 +1989,18 @@ raw_atomic_xchg_relaxed(atomic_t *v, int new)
#endif
}
+/**
+ * raw_atomic_cmpxchg() - atomic compare and exchange with full ordering
+ * @v: pointer to atomic_t
+ * @old: int value to compare with
+ * @new: int value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_cmpxchg() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_cmpxchg(atomic_t *v, int old, int new)
{
@@ -1321,6 +2017,18 @@ raw_atomic_cmpxchg(atomic_t *v, int old, int new)
#endif
}
+/**
+ * raw_atomic_cmpxchg_acquire() - atomic compare and exchange with acquire ordering
+ * @v: pointer to atomic_t
+ * @old: int value to compare with
+ * @new: int value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_cmpxchg_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
{
@@ -1337,6 +2045,18 @@ raw_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
#endif
}
+/**
+ * raw_atomic_cmpxchg_release() - atomic compare and exchange with release ordering
+ * @v: pointer to atomic_t
+ * @old: int value to compare with
+ * @new: int value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_cmpxchg_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_cmpxchg_release(atomic_t *v, int old, int new)
{
@@ -1352,6 +2072,18 @@ raw_atomic_cmpxchg_release(atomic_t *v, int old, int new)
#endif
}
+/**
+ * raw_atomic_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering
+ * @v: pointer to atomic_t
+ * @old: int value to compare with
+ * @new: int value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_cmpxchg_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
{
@@ -1364,6 +2096,19 @@ raw_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
#endif
}
+/**
+ * raw_atomic_try_cmpxchg() - atomic compare and exchange with full ordering
+ * @v: pointer to atomic_t
+ * @old: pointer to int value to compare with
+ * @new: int value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with full ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Safe to use in noinstr code; prefer atomic_try_cmpxchg() elsewhere.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
raw_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
{
@@ -1384,6 +2129,19 @@ raw_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
#endif
}
+/**
+ * raw_atomic_try_cmpxchg_acquire() - atomic compare and exchange with acquire ordering
+ * @v: pointer to atomic_t
+ * @old: pointer to int value to compare with
+ * @new: int value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with acquire ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Safe to use in noinstr code; prefer atomic_try_cmpxchg_acquire() elsewhere.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
raw_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
{
@@ -1404,6 +2162,19 @@ raw_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
#endif
}
+/**
+ * raw_atomic_try_cmpxchg_release() - atomic compare and exchange with release ordering
+ * @v: pointer to atomic_t
+ * @old: pointer to int value to compare with
+ * @new: int value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with release ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Safe to use in noinstr code; prefer atomic_try_cmpxchg_release() elsewhere.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
raw_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
{
@@ -1423,6 +2194,19 @@ raw_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
#endif
}
+/**
+ * raw_atomic_try_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering
+ * @v: pointer to atomic_t
+ * @old: pointer to int value to compare with
+ * @new: int value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with relaxed ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Safe to use in noinstr code; prefer atomic_try_cmpxchg_relaxed() elsewhere.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
raw_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
{
@@ -1439,6 +2223,17 @@ raw_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
#endif
}
+/**
+ * raw_atomic_sub_and_test() - atomic subtract and test if zero with full ordering
+ * @i: int value to subtract
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_sub_and_test() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
static __always_inline bool
raw_atomic_sub_and_test(int i, atomic_t *v)
{
@@ -1449,6 +2244,16 @@ raw_atomic_sub_and_test(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_dec_and_test() - atomic decrement and test if zero with full ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_dec_and_test() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
static __always_inline bool
raw_atomic_dec_and_test(atomic_t *v)
{
@@ -1459,6 +2264,16 @@ raw_atomic_dec_and_test(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_inc_and_test() - atomic increment and test if zero with full ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_inc_and_test() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
static __always_inline bool
raw_atomic_inc_and_test(atomic_t *v)
{
@@ -1469,6 +2284,17 @@ raw_atomic_inc_and_test(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_add_negative() - atomic add and test if negative with full ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_add_negative() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
raw_atomic_add_negative(int i, atomic_t *v)
{
@@ -1485,6 +2311,17 @@ raw_atomic_add_negative(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_add_negative_acquire() - atomic add and test if negative with acquire ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_add_negative_acquire() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
raw_atomic_add_negative_acquire(int i, atomic_t *v)
{
@@ -1501,6 +2338,17 @@ raw_atomic_add_negative_acquire(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_add_negative_release() - atomic add and test if negative with release ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_add_negative_release() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
raw_atomic_add_negative_release(int i, atomic_t *v)
{
@@ -1516,6 +2364,17 @@ raw_atomic_add_negative_release(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_add_negative_relaxed() - atomic add and test if negative with relaxed ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_add_negative_relaxed() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
raw_atomic_add_negative_relaxed(int i, atomic_t *v)
{
@@ -1528,6 +2387,18 @@ raw_atomic_add_negative_relaxed(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_add_unless() - atomic add unless value with full ordering
+ * @v: pointer to atomic_t
+ * @a: int value to add
+ * @u: int value to compare with
+ *
+ * If (@v != @u), atomically updates @v to (@v + @a) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_add_unless() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_add_unless(atomic_t *v, int a, int u)
{
@@ -1545,6 +2416,18 @@ raw_atomic_fetch_add_unless(atomic_t *v, int a, int u)
#endif
}
+/**
+ * raw_atomic_add_unless() - atomic add unless value with full ordering
+ * @v: pointer to atomic_t
+ * @a: int value to add
+ * @u: int value to compare with
+ *
+ * If (@v != @u), atomically updates @v to (@v + @a) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_add_unless() elsewhere.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
raw_atomic_add_unless(atomic_t *v, int a, int u)
{
@@ -1555,6 +2438,16 @@ raw_atomic_add_unless(atomic_t *v, int a, int u)
#endif
}
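The *_add_unless() forms are the usual building block for conditionally taking a reference; a rough sketch of that pattern (illustrative only):

| /* Hypothetical helper: take a reference unless the count already hit zero. */
| static bool example_get_unless_zero(atomic_t *refcount)
| {
|         return raw_atomic_add_unless(refcount, 1, 0);
| }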
+/**
+ * raw_atomic_inc_not_zero() - atomic increment unless zero with full ordering
+ * @v: pointer to atomic_t
+ *
+ * If (@v != 0), atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_inc_not_zero() elsewhere.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
raw_atomic_inc_not_zero(atomic_t *v)
{
@@ -1565,6 +2458,16 @@ raw_atomic_inc_not_zero(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_inc_unless_negative() - atomic increment unless negative with full ordering
+ * @v: pointer to atomic_t
+ *
+ * If (@v >= 0), atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_inc_unless_negative() elsewhere.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
raw_atomic_inc_unless_negative(atomic_t *v)
{
@@ -1582,6 +2485,16 @@ raw_atomic_inc_unless_negative(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_dec_unless_positive() - atomic decrement unless positive with full ordering
+ * @v: pointer to atomic_t
+ *
+ * If (@v <= 0), atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_dec_unless_positive() elsewhere.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
raw_atomic_dec_unless_positive(atomic_t *v)
{
@@ -1599,6 +2512,16 @@ raw_atomic_dec_unless_positive(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_dec_if_positive() - atomic decrement if positive with full ordering
+ * @v: pointer to atomic_t
+ *
+ * If (@v > 0), atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_dec_if_positive() elsewhere.
+ *
+ * Return: The original value of @v minus one; a negative result means @v was not updated.
+ */
static __always_inline int
raw_atomic_dec_if_positive(atomic_t *v)
{
@@ -1621,12 +2544,32 @@ raw_atomic_dec_if_positive(atomic_t *v)
#include <asm-generic/atomic64.h>
#endif
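One more note before the atomic64_t ops: unlike the boolean conditional ops above, dec_if_positive() returns a value, and a negative result indicates that @v was left unmodified. A hypothetical sketch of how callers are expected to test it (not part of the patch):

| /* Hypothetical helper: take one unit from a counting resource, if available. */
| static bool example_try_down(atomic_t *count)
| {
|         return raw_atomic_dec_if_positive(count) >= 0;
| }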
+/**
+ * raw_atomic64_read() - atomic load with relaxed ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically loads the value of @v with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_read() elsewhere.
+ *
+ * Return: The value loaded from @v.
+ */
static __always_inline s64
raw_atomic64_read(const atomic64_t *v)
{
return arch_atomic64_read(v);
}
+/**
+ * raw_atomic64_read_acquire() - atomic load with acquire ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically loads the value of @v with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_read_acquire() elsewhere.
+ *
+ * Return: The value loaded from @v.
+ */
static __always_inline s64
raw_atomic64_read_acquire(const atomic64_t *v)
{
@@ -1648,12 +2591,34 @@ raw_atomic64_read_acquire(const atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_set() - atomic set with relaxed ordering
+ * @v: pointer to atomic64_t
+ * @i: s64 value to assign
+ *
+ * Atomically sets @v to @i with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_set() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic64_set(atomic64_t *v, s64 i)
{
arch_atomic64_set(v, i);
}
+/**
+ * raw_atomic64_set_release() - atomic set with release ordering
+ * @v: pointer to atomic64_t
+ * @i: s64 value to assign
+ *
+ * Atomically sets @v to @i with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_set_release() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic64_set_release(atomic64_t *v, s64 i)
{
@@ -1671,12 +2636,34 @@ raw_atomic64_set_release(atomic64_t *v, s64 i)
#endif
}
+/**
+ * raw_atomic64_add() - atomic add with relaxed ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_add() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic64_add(s64 i, atomic64_t *v)
{
arch_atomic64_add(i, v);
}
+/**
+ * raw_atomic64_add_return() - atomic add with full ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_add_return() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
raw_atomic64_add_return(s64 i, atomic64_t *v)
{
@@ -1693,6 +2680,17 @@ raw_atomic64_add_return(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_add_return_acquire() - atomic add with acquire ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_add_return_acquire() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
raw_atomic64_add_return_acquire(s64 i, atomic64_t *v)
{
@@ -1709,6 +2707,17 @@ raw_atomic64_add_return_acquire(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_add_return_release() - atomic add with release ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_add_return_release() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
raw_atomic64_add_return_release(s64 i, atomic64_t *v)
{
@@ -1724,6 +2733,17 @@ raw_atomic64_add_return_release(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_add_return_relaxed() - atomic add with relaxed ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_add_return_relaxed() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
raw_atomic64_add_return_relaxed(s64 i, atomic64_t *v)
{
@@ -1736,6 +2756,17 @@ raw_atomic64_add_return_relaxed(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_add() - atomic add with full ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_add() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_add(s64 i, atomic64_t *v)
{
@@ -1752,6 +2783,17 @@ raw_atomic64_fetch_add(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_add_acquire() - atomic add with acquire ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_add_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
{
@@ -1768,6 +2810,17 @@ raw_atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_add_release() - atomic add with release ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_add_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_add_release(s64 i, atomic64_t *v)
{
@@ -1783,6 +2836,17 @@ raw_atomic64_fetch_add_release(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_add_relaxed() - atomic add with relaxed ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_add_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_add_relaxed(s64 i, atomic64_t *v)
{
@@ -1795,12 +2859,34 @@ raw_atomic64_fetch_add_relaxed(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_sub() - atomic subtract with relaxed ordering
+ * @i: s64 value to subtract
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_sub() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic64_sub(s64 i, atomic64_t *v)
{
arch_atomic64_sub(i, v);
}
+/**
+ * raw_atomic64_sub_return() - atomic subtract with full ordering
+ * @i: s64 value to subtract
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_sub_return() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
raw_atomic64_sub_return(s64 i, atomic64_t *v)
{
@@ -1817,6 +2903,17 @@ raw_atomic64_sub_return(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_sub_return_acquire() - atomic subtract with acquire ordering
+ * @i: s64 value to subtract
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_sub_return_acquire() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
raw_atomic64_sub_return_acquire(s64 i, atomic64_t *v)
{
@@ -1833,6 +2930,17 @@ raw_atomic64_sub_return_acquire(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_sub_return_release() - atomic subtract with release ordering
+ * @i: s64 value to subtract
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_sub_return_release() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
raw_atomic64_sub_return_release(s64 i, atomic64_t *v)
{
@@ -1848,6 +2956,17 @@ raw_atomic64_sub_return_release(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_sub_return_relaxed() - atomic subtract with relaxed ordering
+ * @i: s64 value to subtract
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_sub_return_relaxed() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
raw_atomic64_sub_return_relaxed(s64 i, atomic64_t *v)
{
@@ -1860,6 +2979,17 @@ raw_atomic64_sub_return_relaxed(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_sub() - atomic subtract with full ordering
+ * @i: s64 value to subtract
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_sub() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_sub(s64 i, atomic64_t *v)
{
@@ -1876,6 +3006,17 @@ raw_atomic64_fetch_sub(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_sub_acquire() - atomic subtract with acquire ordering
+ * @i: s64 value to subtract
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_sub_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
{
@@ -1892,6 +3033,17 @@ raw_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_sub_release() - atomic subtract with release ordering
+ * @i: s64 value to subtract
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_sub_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_sub_release(s64 i, atomic64_t *v)
{
@@ -1907,6 +3059,17 @@ raw_atomic64_fetch_sub_release(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_sub_relaxed() - atomic subtract with relaxed ordering
+ * @i: s64 value to subtract
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_sub_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v)
{
@@ -1919,6 +3082,16 @@ raw_atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_inc() - atomic increment with relaxed ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + 1) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_inc() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic64_inc(atomic64_t *v)
{
@@ -1929,6 +3102,16 @@ raw_atomic64_inc(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_inc_return() - atomic increment with full ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_inc_return() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
raw_atomic64_inc_return(atomic64_t *v)
{
@@ -1945,6 +3128,16 @@ raw_atomic64_inc_return(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_inc_return_acquire() - atomic increment with acquire ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + 1) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_inc_return_acquire() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
raw_atomic64_inc_return_acquire(atomic64_t *v)
{
@@ -1961,6 +3154,16 @@ raw_atomic64_inc_return_acquire(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_inc_return_release() - atomic increment with release ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + 1) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_inc_return_release() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
raw_atomic64_inc_return_release(atomic64_t *v)
{
@@ -1976,6 +3179,16 @@ raw_atomic64_inc_return_release(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_inc_return_relaxed() - atomic increment with relaxed ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + 1) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_inc_return_relaxed() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
raw_atomic64_inc_return_relaxed(atomic64_t *v)
{
@@ -1988,6 +3201,16 @@ raw_atomic64_inc_return_relaxed(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_inc() - atomic increment with full ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_inc() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_inc(atomic64_t *v)
{
@@ -2004,6 +3227,16 @@ raw_atomic64_fetch_inc(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_inc_acquire() - atomic increment with acquire ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + 1) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_inc_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_inc_acquire(atomic64_t *v)
{
@@ -2020,6 +3253,16 @@ raw_atomic64_fetch_inc_acquire(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_inc_release() - atomic increment with release ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + 1) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_inc_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_inc_release(atomic64_t *v)
{
@@ -2035,6 +3278,16 @@ raw_atomic64_fetch_inc_release(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_inc_relaxed() - atomic increment with relaxed ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + 1) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_inc_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_inc_relaxed(atomic64_t *v)
{
@@ -2047,6 +3300,16 @@ raw_atomic64_fetch_inc_relaxed(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_dec() - atomic decrement with relaxed ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - 1) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_dec() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic64_dec(atomic64_t *v)
{
@@ -2057,6 +3320,16 @@ raw_atomic64_dec(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_dec_return() - atomic decrement with full ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_dec_return() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
raw_atomic64_dec_return(atomic64_t *v)
{
@@ -2073,6 +3346,16 @@ raw_atomic64_dec_return(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_dec_return_acquire() - atomic decrement with acquire ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - 1) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_dec_return_acquire() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
raw_atomic64_dec_return_acquire(atomic64_t *v)
{
@@ -2089,6 +3372,16 @@ raw_atomic64_dec_return_acquire(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_dec_return_release() - atomic decrement with release ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - 1) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_dec_return_release() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
raw_atomic64_dec_return_release(atomic64_t *v)
{
@@ -2104,6 +3397,16 @@ raw_atomic64_dec_return_release(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_dec_return_relaxed() - atomic decrement with relaxed ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - 1) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_dec_return_relaxed() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
raw_atomic64_dec_return_relaxed(atomic64_t *v)
{
@@ -2116,6 +3419,16 @@ raw_atomic64_dec_return_relaxed(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_dec() - atomic decrement with full ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_dec() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_dec(atomic64_t *v)
{
@@ -2132,6 +3445,16 @@ raw_atomic64_fetch_dec(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_dec_acquire() - atomic decrement with acquire ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - 1) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_dec_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_dec_acquire(atomic64_t *v)
{
@@ -2148,6 +3471,16 @@ raw_atomic64_fetch_dec_acquire(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_dec_release() - atomic decrement with release ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - 1) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_dec_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_dec_release(atomic64_t *v)
{
@@ -2163,6 +3496,16 @@ raw_atomic64_fetch_dec_release(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_dec_relaxed() - atomic decrement with relaxed ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - 1) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_dec_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_dec_relaxed(atomic64_t *v)
{
@@ -2175,12 +3518,34 @@ raw_atomic64_fetch_dec_relaxed(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_and() - atomic bitwise AND with relaxed ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v & @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_and() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic64_and(s64 i, atomic64_t *v)
{
arch_atomic64_and(i, v);
}
+/**
+ * raw_atomic64_fetch_and() - atomic bitwise AND with full ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v & @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_and() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_and(s64 i, atomic64_t *v)
{
@@ -2197,6 +3562,17 @@ raw_atomic64_fetch_and(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_and_acquire() - atomic bitwise AND with acquire ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v & @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_and_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
{
@@ -2213,6 +3589,17 @@ raw_atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_and_release() - atomic bitwise AND with release ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v & @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_and_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_and_release(s64 i, atomic64_t *v)
{
@@ -2228,6 +3615,17 @@ raw_atomic64_fetch_and_release(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_and_relaxed() - atomic bitwise AND with relaxed ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v & @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_and_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_and_relaxed(s64 i, atomic64_t *v)
{
@@ -2240,6 +3638,17 @@ raw_atomic64_fetch_and_relaxed(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_andnot() - atomic bitwise AND NOT with relaxed ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v & ~@i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_andnot() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic64_andnot(s64 i, atomic64_t *v)
{
@@ -2250,6 +3659,17 @@ raw_atomic64_andnot(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_andnot() - atomic bitwise AND NOT with full ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v & ~@i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_andnot() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_andnot(s64 i, atomic64_t *v)
{
@@ -2266,6 +3686,17 @@ raw_atomic64_fetch_andnot(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_andnot_acquire() - atomic bitwise AND NOT with acquire ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v & ~@i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_andnot_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
{
@@ -2282,6 +3713,17 @@ raw_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_andnot_release() - atomic bitwise AND NOT with release ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v & ~@i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_andnot_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
{
@@ -2297,6 +3739,17 @@ raw_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_andnot_relaxed() - atomic bitwise AND NOT with relaxed ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v & ~@i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_andnot_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
{
@@ -2309,12 +3762,34 @@ raw_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
#endif
}
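The fetch_andnot() forms make it possible to clear a set of flag bits and learn, in the same atomic step, which of them had previously been set; a hedged sketch (illustrative names only):

| /* Hypothetical helper: clear @mask in @flags, report whether any bit was set. */
| static bool example_test_and_clear_flags(atomic64_t *flags, s64 mask)
| {
|         return (raw_atomic64_fetch_andnot(mask, flags) & mask) != 0;
| }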
+/**
+ * raw_atomic64_or() - atomic bitwise OR with relaxed ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v | @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_or() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic64_or(s64 i, atomic64_t *v)
{
arch_atomic64_or(i, v);
}
+/**
+ * raw_atomic64_fetch_or() - atomic bitwise OR with full ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v | @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_or() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_or(s64 i, atomic64_t *v)
{
@@ -2331,6 +3806,17 @@ raw_atomic64_fetch_or(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_or_acquire() - atomic bitwise OR with acquire ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v | @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_or_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
{
@@ -2347,6 +3833,17 @@ raw_atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_or_release() - atomic bitwise OR with release ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v | @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_or_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_or_release(s64 i, atomic64_t *v)
{
@@ -2362,6 +3859,17 @@ raw_atomic64_fetch_or_release(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_or_relaxed() - atomic bitwise OR with relaxed ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v | @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_or_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_or_relaxed(s64 i, atomic64_t *v)
{
@@ -2374,12 +3882,34 @@ raw_atomic64_fetch_or_relaxed(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_xor() - atomic bitwise XOR with relaxed ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v ^ @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_xor() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic64_xor(s64 i, atomic64_t *v)
{
arch_atomic64_xor(i, v);
}
+/**
+ * raw_atomic64_fetch_xor() - atomic bitwise XOR with full ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v ^ @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_xor() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_xor(s64 i, atomic64_t *v)
{
@@ -2396,6 +3926,17 @@ raw_atomic64_fetch_xor(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_xor_acquire() - atomic bitwise XOR with acquire ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v ^ @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_xor_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
{
@@ -2412,6 +3953,17 @@ raw_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_xor_release() - atomic bitwise XOR with release ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v ^ @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_xor_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_xor_release(s64 i, atomic64_t *v)
{
@@ -2427,6 +3979,17 @@ raw_atomic64_fetch_xor_release(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_xor_relaxed() - atomic bitwise XOR with relaxed ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v ^ @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_xor_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v)
{
@@ -2439,6 +4002,17 @@ raw_atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_xchg() - atomic exchange with full ordering
+ * @v: pointer to atomic64_t
+ * @new: s64 value to assign
+ *
+ * Atomically updates @v to @new with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_xchg() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_xchg(atomic64_t *v, s64 new)
{
@@ -2455,6 +4029,17 @@ raw_atomic64_xchg(atomic64_t *v, s64 new)
#endif
}
+/**
+ * raw_atomic64_xchg_acquire() - atomic exchange with acquire ordering
+ * @v: pointer to atomic64_t
+ * @new: s64 value to assign
+ *
+ * Atomically updates @v to @new with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_xchg_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_xchg_acquire(atomic64_t *v, s64 new)
{
@@ -2471,6 +4056,17 @@ raw_atomic64_xchg_acquire(atomic64_t *v, s64 new)
#endif
}
+/**
+ * raw_atomic64_xchg_release() - atomic exchange with release ordering
+ * @v: pointer to atomic64_t
+ * @new: s64 value to assign
+ *
+ * Atomically updates @v to @new with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_xchg_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_xchg_release(atomic64_t *v, s64 new)
{
@@ -2486,6 +4082,17 @@ raw_atomic64_xchg_release(atomic64_t *v, s64 new)
#endif
}
+/**
+ * raw_atomic64_xchg_relaxed() - atomic exchange with relaxed ordering
+ * @v: pointer to atomic64_t
+ * @new: s64 value to assign
+ *
+ * Atomically updates @v to @new with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_xchg_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_xchg_relaxed(atomic64_t *v, s64 new)
{
@@ -2498,6 +4105,18 @@ raw_atomic64_xchg_relaxed(atomic64_t *v, s64 new)
#endif
}
+/**
+ * raw_atomic64_cmpxchg() - atomic compare and exchange with full ordering
+ * @v: pointer to atomic64_t
+ * @old: s64 value to compare with
+ * @new: s64 value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_cmpxchg() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
{
@@ -2514,6 +4133,18 @@ raw_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
#endif
}
+/**
+ * raw_atomic64_cmpxchg_acquire() - atomic compare and exchange with acquire ordering
+ * @v: pointer to atomic64_t
+ * @old: s64 value to compare with
+ * @new: s64 value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_cmpxchg_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
{
@@ -2530,6 +4161,18 @@ raw_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
#endif
}
+/**
+ * raw_atomic64_cmpxchg_release() - atomic compare and exchange with release ordering
+ * @v: pointer to atomic64_t
+ * @old: s64 value to compare with
+ * @new: s64 value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_cmpxchg_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
{
@@ -2545,6 +4188,18 @@ raw_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
#endif
}
+/**
+ * raw_atomic64_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering
+ * @v: pointer to atomic64_t
+ * @old: s64 value to compare with
+ * @new: s64 value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_cmpxchg_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
{
@@ -2557,6 +4212,19 @@ raw_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
#endif
}
+/**
+ * raw_atomic64_try_cmpxchg() - atomic compare and exchange with full ordering
+ * @v: pointer to atomic64_t
+ * @old: pointer to s64 value to compare with
+ * @new: s64 value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with full ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Safe to use in noinstr code; prefer atomic64_try_cmpxchg() elsewhere.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
raw_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
{
@@ -2577,6 +4245,19 @@ raw_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
#endif
}
+/**
+ * raw_atomic64_try_cmpxchg_acquire() - atomic compare and exchange with acquire ordering
+ * @v: pointer to atomic64_t
+ * @old: pointer to s64 value to compare with
+ * @new: s64 value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with acquire ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Safe to use in noinstr code; prefer atomic64_try_cmpxchg_acquire() elsewhere.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
raw_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
{
@@ -2597,6 +4278,19 @@ raw_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
#endif
}
+/**
+ * raw_atomic64_try_cmpxchg_release() - atomic compare and exchange with release ordering
+ * @v: pointer to atomic64_t
+ * @old: pointer to s64 value to compare with
+ * @new: s64 value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with release ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Safe to use in noinstr code; prefer atomic64_try_cmpxchg_release() elsewhere.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
raw_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
{
@@ -2616,6 +4310,19 @@ raw_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
#endif
}
+/**
+ * raw_atomic64_try_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering
+ * @v: pointer to atomic64_t
+ * @old: pointer to s64 value to compare with
+ * @new: s64 value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with relaxed ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Safe to use in noinstr code; prefer atomic64_try_cmpxchg_relaxed() elsewhere.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
raw_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
{
@@ -2632,6 +4339,17 @@ raw_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
#endif
}
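For reviewers comparing the two compare-and-exchange forms: try_cmpxchg() behaves like the cmpxchg()-based sketch below, but also lets the architecture report success directly (e.g. via condition flags) rather than requiring a separate comparison. This is a paraphrase of the fallback logic, not the generated text:

| static __always_inline bool
| example_try_cmpxchg64(atomic64_t *v, s64 *old, s64 new)
| {
|         s64 r = raw_atomic64_cmpxchg(v, *old, new);
|
|         if (r != *old) {
|                 *old = r;       /* report the value actually observed */
|                 return false;
|         }
|         return true;
| }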
+/**
+ * raw_atomic64_sub_and_test() - atomic subtract and test if zero with full ordering
+ * @i: s64 value to subtract
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_sub_and_test() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
static __always_inline bool
raw_atomic64_sub_and_test(s64 i, atomic64_t *v)
{
@@ -2642,6 +4360,16 @@ raw_atomic64_sub_and_test(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_dec_and_test() - atomic decrement and test if zero with full ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_dec_and_test() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
static __always_inline bool
raw_atomic64_dec_and_test(atomic64_t *v)
{
@@ -2652,6 +4380,16 @@ raw_atomic64_dec_and_test(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_inc_and_test() - atomic increment and test if zero with full ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_inc_and_test() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
static __always_inline bool
raw_atomic64_inc_and_test(atomic64_t *v)
{
@@ -2662,6 +4400,17 @@ raw_atomic64_inc_and_test(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_add_negative() - atomic add and test if negative with full ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_add_negative() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
raw_atomic64_add_negative(s64 i, atomic64_t *v)
{
@@ -2678,6 +4427,17 @@ raw_atomic64_add_negative(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_add_negative_acquire() - atomic add and test if negative with acquire ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_add_negative_acquire() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
raw_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
{
@@ -2694,6 +4454,17 @@ raw_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_add_negative_release() - atomic add and test if negative with release ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_add_negative_release() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
raw_atomic64_add_negative_release(s64 i, atomic64_t *v)
{
@@ -2709,6 +4480,17 @@ raw_atomic64_add_negative_release(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_add_negative_relaxed() - atomic add and test if negative with relaxed ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_add_negative_relaxed() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
raw_atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
{
@@ -2721,6 +4503,18 @@ raw_atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_add_unless() - atomic add unless value with full ordering
+ * @v: pointer to atomic64_t
+ * @a: s64 value to add
+ * @u: s64 value to compare with
+ *
+ * If (@v != @u), atomically updates @v to (@v + @a) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_add_unless() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
{
@@ -2738,6 +4532,18 @@ raw_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
#endif
}
+/**
+ * raw_atomic64_add_unless() - atomic add unless value with full ordering
+ * @v: pointer to atomic64_t
+ * @a: s64 value to add
+ * @u: s64 value to compare with
+ *
+ * If (@v != @u), atomically updates @v to (@v + @a) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_add_unless() elsewhere.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
raw_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
{
@@ -2748,6 +4554,16 @@ raw_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
#endif
}
+/**
+ * raw_atomic64_inc_not_zero() - atomic increment unless zero with full ordering
+ * @v: pointer to atomic64_t
+ *
+ * If (@v != 0), atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_inc_not_zero() elsewhere.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
raw_atomic64_inc_not_zero(atomic64_t *v)
{
@@ -2758,6 +4574,16 @@ raw_atomic64_inc_not_zero(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_inc_unless_negative() - atomic increment unless negative with full ordering
+ * @v: pointer to atomic64_t
+ *
+ * If (@v >= 0), atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_inc_unless_negative() elsewhere.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
raw_atomic64_inc_unless_negative(atomic64_t *v)
{
@@ -2775,6 +4601,16 @@ raw_atomic64_inc_unless_negative(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_dec_unless_positive() - atomic decrement unless positive with full ordering
+ * @v: pointer to atomic64_t
+ *
+ * If (@v <= 0), atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_dec_unless_positive() elsewhere.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
raw_atomic64_dec_unless_positive(atomic64_t *v)
{
@@ -2792,6 +4628,16 @@ raw_atomic64_dec_unless_positive(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_dec_if_positive() - atomic decrement if positive with full ordering
+ * @v: pointer to atomic64_t
+ *
+ * If (@v > 0), atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_dec_if_positive() elsewhere.
+ *
+ * Return: The original value of @v minus one; a negative result means @v was not updated.
+ */
static __always_inline s64
raw_atomic64_dec_if_positive(atomic64_t *v)
{
@@ -2811,4 +4657,4 @@ raw_atomic64_dec_if_positive(atomic64_t *v)
}
#endif /* _LINUX_ATOMIC_FALLBACK_H */
-// 205e090382132f1fc85e48b46e722865f9c81309
+// 3916f02c038baa3f5190d275f68b9211667fcc9d
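The atomic-instrumented.h changes below add the matching kerneldoc to the instrumented wrappers, which pair an instrument_*() hook with the corresponding raw_atomic*() op. The intended split between the two families is roughly as follows (hypothetical functions, not taken from the patch):

| /* noinstr code must not call into the KASAN/KCSAN instrumentation hooks. */
| static noinstr bool example_enter_state(atomic_t *state)
| {
|         return raw_atomic_cmpxchg(state, 0, 1) == 0;
| }
|
| /* Ordinary kernel code should use the instrumented wrappers instead. */
| static bool example_enter_state_instrumented(atomic_t *state)
| {
|         return atomic_cmpxchg(state, 0, 1) == 0;
| }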
diff --git a/include/linux/atomic/atomic-instrumented.h b/include/linux/atomic/atomic-instrumented.h
index 5491c89dc03a0..ebfc795f921b9 100644
--- a/include/linux/atomic/atomic-instrumented.h
+++ b/include/linux/atomic/atomic-instrumented.h
@@ -16,6 +16,16 @@
#include <linux/compiler.h>
#include <linux/instrumented.h>
+/**
+ * atomic_read() - atomic load with relaxed ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically loads the value of @v with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_read() there.
+ *
+ * Return: The value loaded from @v.
+ */
static __always_inline int
atomic_read(const atomic_t *v)
{
@@ -23,6 +33,16 @@ atomic_read(const atomic_t *v)
return raw_atomic_read(v);
}
+/**
+ * atomic_read_acquire() - atomic load with acquire ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically loads the value of @v with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_read_acquire() there.
+ *
+ * Return: The value loaded from @v.
+ */
static __always_inline int
atomic_read_acquire(const atomic_t *v)
{
@@ -30,6 +50,17 @@ atomic_read_acquire(const atomic_t *v)
return raw_atomic_read_acquire(v);
}
+/**
+ * atomic_set() - atomic set with relaxed ordering
+ * @v: pointer to atomic_t
+ * @i: int value to assign
+ *
+ * Atomically sets @v to @i with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_set() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic_set(atomic_t *v, int i)
{
@@ -37,6 +68,17 @@ atomic_set(atomic_t *v, int i)
raw_atomic_set(v, i);
}
+/**
+ * atomic_set_release() - atomic set with release ordering
+ * @v: pointer to atomic_t
+ * @i: int value to assign
+ *
+ * Atomically sets @v to @i with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_set_release() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic_set_release(atomic_t *v, int i)
{
@@ -45,6 +87,17 @@ atomic_set_release(atomic_t *v, int i)
raw_atomic_set_release(v, i);
}
+/**
+ * atomic_add() - atomic add with relaxed ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_add() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic_add(int i, atomic_t *v)
{
@@ -52,6 +105,17 @@ atomic_add(int i, atomic_t *v)
raw_atomic_add(i, v);
}
+/**
+ * atomic_add_return() - atomic add with full ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_add_return() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
atomic_add_return(int i, atomic_t *v)
{
@@ -60,6 +124,17 @@ atomic_add_return(int i, atomic_t *v)
return raw_atomic_add_return(i, v);
}
+/**
+ * atomic_add_return_acquire() - atomic add with acquire ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_add_return_acquire() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
atomic_add_return_acquire(int i, atomic_t *v)
{
@@ -67,6 +142,17 @@ atomic_add_return_acquire(int i, atomic_t *v)
return raw_atomic_add_return_acquire(i, v);
}
+/**
+ * atomic_add_return_release() - atomic add with release ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_add_return_release() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
atomic_add_return_release(int i, atomic_t *v)
{
@@ -75,6 +161,17 @@ atomic_add_return_release(int i, atomic_t *v)
return raw_atomic_add_return_release(i, v);
}
+/**
+ * atomic_add_return_relaxed() - atomic add with relaxed ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_add_return_relaxed() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
atomic_add_return_relaxed(int i, atomic_t *v)
{
@@ -82,6 +179,17 @@ atomic_add_return_relaxed(int i, atomic_t *v)
return raw_atomic_add_return_relaxed(i, v);
}
+/**
+ * atomic_fetch_add() - atomic add with full ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_add() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_add(int i, atomic_t *v)
{
@@ -90,6 +198,17 @@ atomic_fetch_add(int i, atomic_t *v)
return raw_atomic_fetch_add(i, v);
}
+/**
+ * atomic_fetch_add_acquire() - atomic add with acquire ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_add_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_add_acquire(int i, atomic_t *v)
{
@@ -97,6 +216,17 @@ atomic_fetch_add_acquire(int i, atomic_t *v)
return raw_atomic_fetch_add_acquire(i, v);
}
+/**
+ * atomic_fetch_add_release() - atomic add with release ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_add_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_add_release(int i, atomic_t *v)
{
@@ -105,6 +235,17 @@ atomic_fetch_add_release(int i, atomic_t *v)
return raw_atomic_fetch_add_release(i, v);
}
+/**
+ * atomic_fetch_add_relaxed() - atomic add with relaxed ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_add_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_add_relaxed(int i, atomic_t *v)
{
@@ -112,6 +253,17 @@ atomic_fetch_add_relaxed(int i, atomic_t *v)
return raw_atomic_fetch_add_relaxed(i, v);
}
+/**
+ * atomic_sub() - atomic subtract with relaxed ordering
+ * @i: int value to subtract
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_sub() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic_sub(int i, atomic_t *v)
{
@@ -119,6 +271,17 @@ atomic_sub(int i, atomic_t *v)
raw_atomic_sub(i, v);
}
+/**
+ * atomic_sub_return() - atomic subtract with full ordering
+ * @i: int value to subtract
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_sub_return() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
atomic_sub_return(int i, atomic_t *v)
{
@@ -127,6 +290,17 @@ atomic_sub_return(int i, atomic_t *v)
return raw_atomic_sub_return(i, v);
}
+/**
+ * atomic_sub_return_acquire() - atomic subtract with acquire ordering
+ * @i: int value to subtract
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_sub_return_acquire() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
atomic_sub_return_acquire(int i, atomic_t *v)
{
@@ -134,6 +308,17 @@ atomic_sub_return_acquire(int i, atomic_t *v)
return raw_atomic_sub_return_acquire(i, v);
}
+/**
+ * atomic_sub_return_release() - atomic subtract with release ordering
+ * @i: int value to subtract
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_sub_return_release() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
atomic_sub_return_release(int i, atomic_t *v)
{
@@ -142,6 +327,17 @@ atomic_sub_return_release(int i, atomic_t *v)
return raw_atomic_sub_return_release(i, v);
}
+/**
+ * atomic_sub_return_relaxed() - atomic subtract with relaxed ordering
+ * @i: int value to subtract
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_sub_return_relaxed() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
atomic_sub_return_relaxed(int i, atomic_t *v)
{
@@ -149,6 +345,17 @@ atomic_sub_return_relaxed(int i, atomic_t *v)
return raw_atomic_sub_return_relaxed(i, v);
}
+/**
+ * atomic_fetch_sub() - atomic subtract with full ordering
+ * @i: int value to subtract
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_sub() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_sub(int i, atomic_t *v)
{
@@ -157,6 +364,17 @@ atomic_fetch_sub(int i, atomic_t *v)
return raw_atomic_fetch_sub(i, v);
}
+/**
+ * atomic_fetch_sub_acquire() - atomic subtract with acquire ordering
+ * @i: int value to subtract
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_sub_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_sub_acquire(int i, atomic_t *v)
{
@@ -164,6 +382,17 @@ atomic_fetch_sub_acquire(int i, atomic_t *v)
return raw_atomic_fetch_sub_acquire(i, v);
}
+/**
+ * atomic_fetch_sub_release() - atomic subtract with release ordering
+ * @i: int value to subtract
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_sub_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_sub_release(int i, atomic_t *v)
{
@@ -172,6 +401,17 @@ atomic_fetch_sub_release(int i, atomic_t *v)
return raw_atomic_fetch_sub_release(i, v);
}
+/**
+ * atomic_fetch_sub_relaxed() - atomic subtract with relaxed ordering
+ * @i: int value to subtract
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_sub_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_sub_relaxed(int i, atomic_t *v)
{
@@ -179,6 +419,16 @@ atomic_fetch_sub_relaxed(int i, atomic_t *v)
return raw_atomic_fetch_sub_relaxed(i, v);
}
+/**
+ * atomic_inc() - atomic increment with relaxed ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + 1) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_inc() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic_inc(atomic_t *v)
{
@@ -186,6 +436,16 @@ atomic_inc(atomic_t *v)
raw_atomic_inc(v);
}
+/**
+ * atomic_inc_return() - atomic increment with full ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_inc_return() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
atomic_inc_return(atomic_t *v)
{
@@ -194,6 +454,16 @@ atomic_inc_return(atomic_t *v)
return raw_atomic_inc_return(v);
}
+/**
+ * atomic_inc_return_acquire() - atomic increment with acquire ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + 1) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_inc_return_acquire() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
atomic_inc_return_acquire(atomic_t *v)
{
@@ -201,6 +471,16 @@ atomic_inc_return_acquire(atomic_t *v)
return raw_atomic_inc_return_acquire(v);
}
+/**
+ * atomic_inc_return_release() - atomic increment with release ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + 1) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_inc_return_release() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
atomic_inc_return_release(atomic_t *v)
{
@@ -209,6 +489,16 @@ atomic_inc_return_release(atomic_t *v)
return raw_atomic_inc_return_release(v);
}
+/**
+ * atomic_inc_return_relaxed() - atomic increment with relaxed ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + 1) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_inc_return_relaxed() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
atomic_inc_return_relaxed(atomic_t *v)
{
@@ -216,6 +506,16 @@ atomic_inc_return_relaxed(atomic_t *v)
return raw_atomic_inc_return_relaxed(v);
}
+/**
+ * atomic_fetch_inc() - atomic increment with full ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_inc() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_inc(atomic_t *v)
{
@@ -224,6 +524,16 @@ atomic_fetch_inc(atomic_t *v)
return raw_atomic_fetch_inc(v);
}
+/**
+ * atomic_fetch_inc_acquire() - atomic increment with acquire ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + 1) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_inc_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_inc_acquire(atomic_t *v)
{
@@ -231,6 +541,16 @@ atomic_fetch_inc_acquire(atomic_t *v)
return raw_atomic_fetch_inc_acquire(v);
}
+/**
+ * atomic_fetch_inc_release() - atomic increment with release ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + 1) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_inc_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_inc_release(atomic_t *v)
{
@@ -239,6 +559,16 @@ atomic_fetch_inc_release(atomic_t *v)
return raw_atomic_fetch_inc_release(v);
}
+/**
+ * atomic_fetch_inc_relaxed() - atomic increment with relaxed ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + 1) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_inc_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_inc_relaxed(atomic_t *v)
{
@@ -246,6 +576,16 @@ atomic_fetch_inc_relaxed(atomic_t *v)
return raw_atomic_fetch_inc_relaxed(v);
}
+/**
+ * atomic_dec() - atomic decrement with relaxed ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - 1) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_dec() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic_dec(atomic_t *v)
{
@@ -253,6 +593,16 @@ atomic_dec(atomic_t *v)
raw_atomic_dec(v);
}
+/**
+ * atomic_dec_return() - atomic decrement with full ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_dec_return() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
atomic_dec_return(atomic_t *v)
{
@@ -261,6 +611,16 @@ atomic_dec_return(atomic_t *v)
return raw_atomic_dec_return(v);
}
+/**
+ * atomic_dec_return_acquire() - atomic decrement with acquire ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - 1) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_dec_return_acquire() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
atomic_dec_return_acquire(atomic_t *v)
{
@@ -268,6 +628,16 @@ atomic_dec_return_acquire(atomic_t *v)
return raw_atomic_dec_return_acquire(v);
}
+/**
+ * atomic_dec_return_release() - atomic decrement with release ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - 1) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_dec_return_release() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
atomic_dec_return_release(atomic_t *v)
{
@@ -276,6 +646,16 @@ atomic_dec_return_release(atomic_t *v)
return raw_atomic_dec_return_release(v);
}
+/**
+ * atomic_dec_return_relaxed() - atomic decrement with relaxed ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - 1) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_dec_return_relaxed() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
atomic_dec_return_relaxed(atomic_t *v)
{
@@ -283,6 +663,16 @@ atomic_dec_return_relaxed(atomic_t *v)
return raw_atomic_dec_return_relaxed(v);
}
+/**
+ * atomic_fetch_dec() - atomic decrement with full ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_dec() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_dec(atomic_t *v)
{
@@ -291,6 +681,16 @@ atomic_fetch_dec(atomic_t *v)
return raw_atomic_fetch_dec(v);
}
+/**
+ * atomic_fetch_dec_acquire() - atomic decrement with acquire ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - 1) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_dec_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_dec_acquire(atomic_t *v)
{
@@ -298,6 +698,16 @@ atomic_fetch_dec_acquire(atomic_t *v)
return raw_atomic_fetch_dec_acquire(v);
}
+/**
+ * atomic_fetch_dec_release() - atomic decrement with release ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - 1) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_dec_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_dec_release(atomic_t *v)
{
@@ -306,6 +716,16 @@ atomic_fetch_dec_release(atomic_t *v)
return raw_atomic_fetch_dec_release(v);
}
+/**
+ * atomic_fetch_dec_relaxed() - atomic decrement with relaxed ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - 1) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_dec_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_dec_relaxed(atomic_t *v)
{
@@ -313,6 +733,17 @@ atomic_fetch_dec_relaxed(atomic_t *v)
return raw_atomic_fetch_dec_relaxed(v);
}
+/**
+ * atomic_and() - atomic bitwise AND with relaxed ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v & @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_and() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic_and(int i, atomic_t *v)
{
@@ -320,6 +751,17 @@ atomic_and(int i, atomic_t *v)
raw_atomic_and(i, v);
}
+/**
+ * atomic_fetch_and() - atomic bitwise AND with full ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v & @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_and() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_and(int i, atomic_t *v)
{
@@ -328,6 +770,17 @@ atomic_fetch_and(int i, atomic_t *v)
return raw_atomic_fetch_and(i, v);
}
+/**
+ * atomic_fetch_and_acquire() - atomic bitwise AND with acquire ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v & @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_and_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_and_acquire(int i, atomic_t *v)
{
@@ -335,6 +788,17 @@ atomic_fetch_and_acquire(int i, atomic_t *v)
return raw_atomic_fetch_and_acquire(i, v);
}
+/**
+ * atomic_fetch_and_release() - atomic bitwise AND with release ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v & @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_and_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_and_release(int i, atomic_t *v)
{
@@ -343,6 +807,17 @@ atomic_fetch_and_release(int i, atomic_t *v)
return raw_atomic_fetch_and_release(i, v);
}
+/**
+ * atomic_fetch_and_relaxed() - atomic bitwise AND with relaxed ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v & @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_and_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_and_relaxed(int i, atomic_t *v)
{
@@ -350,6 +825,17 @@ atomic_fetch_and_relaxed(int i, atomic_t *v)
return raw_atomic_fetch_and_relaxed(i, v);
}
+/**
+ * atomic_andnot() - atomic bitwise AND NOT with relaxed ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v & ~@i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_andnot() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic_andnot(int i, atomic_t *v)
{
@@ -357,6 +843,17 @@ atomic_andnot(int i, atomic_t *v)
raw_atomic_andnot(i, v);
}
+/**
+ * atomic_fetch_andnot() - atomic bitwise AND NOT with full ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v & ~@i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_andnot() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_andnot(int i, atomic_t *v)
{
@@ -365,6 +862,17 @@ atomic_fetch_andnot(int i, atomic_t *v)
return raw_atomic_fetch_andnot(i, v);
}
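For illustration only (not part of the generated header), atomic_fetch_andnot() is handy for clearing flag bits while learning whether they were previously set; FLAG_PENDING, the 'state' field and example_process() are hypothetical names for this sketch:

| if (atomic_fetch_andnot(FLAG_PENDING, &work->state) & FLAG_PENDING)
|         example_process(work); /* we observed and cleared the pending bit */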
+/**
+ * atomic_fetch_andnot_acquire() - atomic bitwise AND NOT with acquire ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v & ~@i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_andnot_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_andnot_acquire(int i, atomic_t *v)
{
@@ -372,6 +880,17 @@ atomic_fetch_andnot_acquire(int i, atomic_t *v)
return raw_atomic_fetch_andnot_acquire(i, v);
}
+/**
+ * atomic_fetch_andnot_release() - atomic bitwise AND NOT with release ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v & ~@i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_andnot_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_andnot_release(int i, atomic_t *v)
{
@@ -380,6 +899,17 @@ atomic_fetch_andnot_release(int i, atomic_t *v)
return raw_atomic_fetch_andnot_release(i, v);
}
+/**
+ * atomic_fetch_andnot_relaxed() - atomic bitwise AND NOT with relaxed ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v & ~@i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_andnot_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_andnot_relaxed(int i, atomic_t *v)
{
@@ -387,6 +917,17 @@ atomic_fetch_andnot_relaxed(int i, atomic_t *v)
return raw_atomic_fetch_andnot_relaxed(i, v);
}
+/**
+ * atomic_or() - atomic bitwise OR with relaxed ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v | @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_or() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic_or(int i, atomic_t *v)
{
@@ -394,6 +935,17 @@ atomic_or(int i, atomic_t *v)
raw_atomic_or(i, v);
}
+/**
+ * atomic_fetch_or() - atomic bitwise OR with full ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v | @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_or() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_or(int i, atomic_t *v)
{
@@ -402,6 +954,17 @@ atomic_fetch_or(int i, atomic_t *v)
return raw_atomic_fetch_or(i, v);
}
+/**
+ * atomic_fetch_or_acquire() - atomic bitwise OR with acquire ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v | @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_or_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_or_acquire(int i, atomic_t *v)
{
@@ -409,6 +972,17 @@ atomic_fetch_or_acquire(int i, atomic_t *v)
return raw_atomic_fetch_or_acquire(i, v);
}
+/**
+ * atomic_fetch_or_release() - atomic bitwise OR with release ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v | @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_or_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_or_release(int i, atomic_t *v)
{
@@ -417,6 +991,17 @@ atomic_fetch_or_release(int i, atomic_t *v)
return raw_atomic_fetch_or_release(i, v);
}
+/**
+ * atomic_fetch_or_relaxed() - atomic bitwise OR with relaxed ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v | @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_or_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_or_relaxed(int i, atomic_t *v)
{
@@ -424,6 +1009,17 @@ atomic_fetch_or_relaxed(int i, atomic_t *v)
return raw_atomic_fetch_or_relaxed(i, v);
}
+/**
+ * atomic_xor() - atomic bitwise XOR with relaxed ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v ^ @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_xor() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic_xor(int i, atomic_t *v)
{
@@ -431,6 +1027,17 @@ atomic_xor(int i, atomic_t *v)
raw_atomic_xor(i, v);
}
+/**
+ * atomic_fetch_xor() - atomic bitwise XOR with full ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v ^ @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_xor() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_xor(int i, atomic_t *v)
{
@@ -439,6 +1046,17 @@ atomic_fetch_xor(int i, atomic_t *v)
return raw_atomic_fetch_xor(i, v);
}
+/**
+ * atomic_fetch_xor_acquire() - atomic bitwise XOR with acquire ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v ^ @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_xor_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_xor_acquire(int i, atomic_t *v)
{
@@ -446,6 +1064,17 @@ atomic_fetch_xor_acquire(int i, atomic_t *v)
return raw_atomic_fetch_xor_acquire(i, v);
}
+/**
+ * atomic_fetch_xor_release() - atomic bitwise XOR with release ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v ^ @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_xor_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_xor_release(int i, atomic_t *v)
{
@@ -454,6 +1083,17 @@ atomic_fetch_xor_release(int i, atomic_t *v)
return raw_atomic_fetch_xor_release(i, v);
}
+/**
+ * atomic_fetch_xor_relaxed() - atomic bitwise XOR with relaxed ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v ^ @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_xor_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_xor_relaxed(int i, atomic_t *v)
{
@@ -461,6 +1101,17 @@ atomic_fetch_xor_relaxed(int i, atomic_t *v)
return raw_atomic_fetch_xor_relaxed(i, v);
}
+/**
+ * atomic_xchg() - atomic exchange with full ordering
+ * @v: pointer to atomic_t
+ * @new: int value to assign
+ *
+ * Atomically updates @v to @new with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_xchg() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_xchg(atomic_t *v, int new)
{
@@ -469,6 +1120,17 @@ atomic_xchg(atomic_t *v, int new)
return raw_atomic_xchg(v, new);
}
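For illustration only, atomic_xchg() can atomically take ownership of a pending value, leaving a sentinel behind; the 0 sentinel, the 'pending' field and example_handle() are hypothetical:

| int token = atomic_xchg(&dev->pending, 0);
|
| if (token)
|         example_handle(dev, token);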
+/**
+ * atomic_xchg_acquire() - atomic exchange with acquire ordering
+ * @v: pointer to atomic_t
+ * @new: int value to assign
+ *
+ * Atomically updates @v to @new with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_xchg_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_xchg_acquire(atomic_t *v, int new)
{
@@ -476,6 +1138,17 @@ atomic_xchg_acquire(atomic_t *v, int new)
return raw_atomic_xchg_acquire(v, new);
}
+/**
+ * atomic_xchg_release() - atomic exchange with release ordering
+ * @v: pointer to atomic_t
+ * @new: int value to assign
+ *
+ * Atomically updates @v to @new with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_xchg_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_xchg_release(atomic_t *v, int new)
{
@@ -484,6 +1157,17 @@ atomic_xchg_release(atomic_t *v, int new)
return raw_atomic_xchg_release(v, new);
}
+/**
+ * atomic_xchg_relaxed() - atomic exchange with relaxed ordering
+ * @v: pointer to atomic_t
+ * @new: int value to assign
+ *
+ * Atomically updates @v to @new with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_xchg_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_xchg_relaxed(atomic_t *v, int new)
{
@@ -491,6 +1175,18 @@ atomic_xchg_relaxed(atomic_t *v, int new)
return raw_atomic_xchg_relaxed(v, new);
}
+/**
+ * atomic_cmpxchg() - atomic compare and exchange with full ordering
+ * @v: pointer to atomic_t
+ * @old: int value to compare with
+ * @new: int value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_cmpxchg() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_cmpxchg(atomic_t *v, int old, int new)
{
@@ -499,6 +1195,18 @@ atomic_cmpxchg(atomic_t *v, int old, int new)
return raw_atomic_cmpxchg(v, old, new);
}
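For illustration only, a sketch of the value-returning cmpxchg() style, where the caller compares the returned value against the expected one; 'state', 'expected' and 'locked' are hypothetical:

| int old = atomic_cmpxchg(&lock->state, expected, locked);
|
| if (old != expected)
|         return old; /* lost the race; 'old' is the value observed in @v */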
+/**
+ * atomic_cmpxchg_acquire() - atomic compare and exchange with acquire ordering
+ * @v: pointer to atomic_t
+ * @old: int value to compare with
+ * @new: int value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_cmpxchg_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
{
@@ -506,6 +1214,18 @@ atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
return raw_atomic_cmpxchg_acquire(v, old, new);
}
+/**
+ * atomic_cmpxchg_release() - atomic compare and exchange with release ordering
+ * @v: pointer to atomic_t
+ * @old: int value to compare with
+ * @new: int value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_cmpxchg_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_cmpxchg_release(atomic_t *v, int old, int new)
{
@@ -514,6 +1234,18 @@ atomic_cmpxchg_release(atomic_t *v, int old, int new)
return raw_atomic_cmpxchg_release(v, old, new);
}
+/**
+ * atomic_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering
+ * @v: pointer to atomic_t
+ * @old: int value to compare with
+ * @new: int value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_cmpxchg_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
{
@@ -521,6 +1253,19 @@ atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
return raw_atomic_cmpxchg_relaxed(v, old, new);
}
+/**
+ * atomic_try_cmpxchg() - atomic compare and exchange with full ordering
+ * @v: pointer to atomic_t
+ * @old: pointer to int value to compare with
+ * @new: int value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with full ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_try_cmpxchg() there.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
atomic_try_cmpxchg(atomic_t *v, int *old, int new)
{
@@ -530,6 +1275,19 @@ atomic_try_cmpxchg(atomic_t *v, int *old, int new)
return raw_atomic_try_cmpxchg(v, old, new);
}
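For illustration only, a typical atomic_try_cmpxchg() retry loop might look like the sketch below; the saturating-increment behaviour and the function name are made up for the example:

| static __always_inline void example_inc_saturating(atomic_t *v)
| {
|         int old = atomic_read(v);
|
|         do {
|                 if (old == INT_MAX)
|                         return;
|         } while (!atomic_try_cmpxchg(v, &old, old + 1));
| }

On failure, atomic_try_cmpxchg() updates 'old' to the value observed in @v, so the loop does not need to re-read @v each iteration.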
+/**
+ * atomic_try_cmpxchg_acquire() - atomic compare and exchange with acquire ordering
+ * @v: pointer to atomic_t
+ * @old: pointer to int value to compare with
+ * @new: int value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with acquire ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_try_cmpxchg_acquire() there.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
{
@@ -538,6 +1296,19 @@ atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
return raw_atomic_try_cmpxchg_acquire(v, old, new);
}
+/**
+ * atomic_try_cmpxchg_release() - atomic compare and exchange with release ordering
+ * @v: pointer to atomic_t
+ * @old: pointer to int value to compare with
+ * @new: int value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with release ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_try_cmpxchg_release() there.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
{
@@ -547,6 +1318,19 @@ atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
return raw_atomic_try_cmpxchg_release(v, old, new);
}
+/**
+ * atomic_try_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering
+ * @v: pointer to atomic_t
+ * @old: pointer to int value to compare with
+ * @new: int value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with relaxed ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_try_cmpxchg_relaxed() there.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
{
@@ -555,6 +1339,17 @@ atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
return raw_atomic_try_cmpxchg_relaxed(v, old, new);
}
+/**
+ * atomic_sub_and_test() - atomic subtract and test if zero with full ordering
+ * @i: int value to subtract
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_sub_and_test() there.
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
static __always_inline bool
atomic_sub_and_test(int i, atomic_t *v)
{
@@ -563,6 +1358,16 @@ atomic_sub_and_test(int i, atomic_t *v)
return raw_atomic_sub_and_test(i, v);
}
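For illustration only, atomic_sub_and_test() is typically used to drop several references at once and free on the final put; the struct, field and helper names below are hypothetical:

| static void example_put_many(struct example *e, int nr)
| {
|         if (atomic_sub_and_test(nr, &e->refs))
|                 example_free(e);
| }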
+/**
+ * atomic_dec_and_test() - atomic decrement and test if zero with full ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_dec_and_test() there.
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
static __always_inline bool
atomic_dec_and_test(atomic_t *v)
{
@@ -571,6 +1376,16 @@ atomic_dec_and_test(atomic_t *v)
return raw_atomic_dec_and_test(v);
}
+/**
+ * atomic_inc_and_test() - atomic increment and test if zero with full ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_inc_and_test() there.
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
static __always_inline bool
atomic_inc_and_test(atomic_t *v)
{
@@ -579,6 +1394,17 @@ atomic_inc_and_test(atomic_t *v)
return raw_atomic_inc_and_test(v);
}
+/**
+ * atomic_add_negative() - atomic add and test if negative with full ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_add_negative() there.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
atomic_add_negative(int i, atomic_t *v)
{
@@ -587,6 +1413,17 @@ atomic_add_negative(int i, atomic_t *v)
return raw_atomic_add_negative(i, v);
}
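For illustration only, atomic_add_negative() lets a caller apply a signed delta and branch on whether the result dropped below zero; the 'balance' counter and example_warn_overdrawn() are hypothetical:

| if (atomic_add_negative(delta, &acct->balance))
|         example_warn_overdrawn(acct);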
+/**
+ * atomic_add_negative_acquire() - atomic add and test if negative with acquire ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_add_negative_acquire() there.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
atomic_add_negative_acquire(int i, atomic_t *v)
{
@@ -594,6 +1431,17 @@ atomic_add_negative_acquire(int i, atomic_t *v)
return raw_atomic_add_negative_acquire(i, v);
}
+/**
+ * atomic_add_negative_release() - atomic add and test if negative with release ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_add_negative_release() there.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
atomic_add_negative_release(int i, atomic_t *v)
{
@@ -602,6 +1450,17 @@ atomic_add_negative_release(int i, atomic_t *v)
return raw_atomic_add_negative_release(i, v);
}
+/**
+ * atomic_add_negative_relaxed() - atomic add and test if negative with relaxed ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_add_negative_relaxed() there.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
atomic_add_negative_relaxed(int i, atomic_t *v)
{
@@ -609,6 +1468,18 @@ atomic_add_negative_relaxed(int i, atomic_t *v)
return raw_atomic_add_negative_relaxed(i, v);
}
+/**
+ * atomic_fetch_add_unless() - atomic add unless value with full ordering
+ * @v: pointer to atomic_t
+ * @a: int value to add
+ * @u: int value to compare with
+ *
+ * If (@v != @u), atomically updates @v to (@v + @a) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_add_unless() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_add_unless(atomic_t *v, int a, int u)
{
@@ -617,6 +1488,18 @@ atomic_fetch_add_unless(atomic_t *v, int a, int u)
return raw_atomic_fetch_add_unless(v, a, u);
}
+/**
+ * atomic_add_unless() - atomic add unless value with full ordering
+ * @v: pointer to atomic_t
+ * @a: int value to add
+ * @u: int value to compare with
+ *
+ * If (@v != @u), atomically updates @v to (@v + @a) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_add_unless() there.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
atomic_add_unless(atomic_t *v, int a, int u)
{
@@ -625,6 +1508,16 @@ atomic_add_unless(atomic_t *v, int a, int u)
return raw_atomic_add_unless(v, a, u);
}
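For illustration only, a sketch of atomic_add_unless() guarding a sentinel value; the 'users' field and the "-1 means frozen" convention are hypothetical:

| /* Take a reference unless the counter has been frozen at -1. */
| if (!atomic_add_unless(&obj->users, 1, -1))
|         return -EBUSY;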
+/**
+ * atomic_inc_not_zero() - atomic increment unless zero with full ordering
+ * @v: pointer to atomic_t
+ *
+ * If (@v != 0), atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_inc_not_zero() there.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
atomic_inc_not_zero(atomic_t *v)
{
@@ -633,6 +1526,16 @@ atomic_inc_not_zero(atomic_t *v)
return raw_atomic_inc_not_zero(v);
}
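For illustration only, atomic_inc_not_zero() is the usual building block for "take a reference only if the object is still live"; the struct and field names are hypothetical:

| static bool example_tryget(struct example *e)
| {
|         return atomic_inc_not_zero(&e->refs);
| }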
+/**
+ * atomic_inc_unless_negative() - atomic increment unless negative with full ordering
+ * @v: pointer to atomic_t
+ *
+ * If (@v >= 0), atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_inc_unless_negative() there.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
atomic_inc_unless_negative(atomic_t *v)
{
@@ -641,6 +1544,16 @@ atomic_inc_unless_negative(atomic_t *v)
return raw_atomic_inc_unless_negative(v);
}
+/**
+ * atomic_dec_unless_positive() - atomic decrement unless positive with full ordering
+ * @v: pointer to atomic_t
+ *
+ * If (@v <= 0), atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_dec_unless_positive() there.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
atomic_dec_unless_positive(atomic_t *v)
{
@@ -649,6 +1562,16 @@ atomic_dec_unless_positive(atomic_t *v)
return raw_atomic_dec_unless_positive(v);
}
+/**
+ * atomic_dec_if_positive() - atomic decrement if positive with full ordering
+ * @v: pointer to atomic_t
+ *
+ * If (@v > 0), atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_dec_if_positive() there.
+ *
+ * Return: The old value of (@v - 1), regardless of whether @v was updated.
+ */
static __always_inline int
atomic_dec_if_positive(atomic_t *v)
{
@@ -657,6 +1580,16 @@ atomic_dec_if_positive(atomic_t *v)
return raw_atomic_dec_if_positive(v);
}
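For illustration only, a sketch using atomic_dec_if_positive() to consume a credit only when one is available; the 'credits' counter is hypothetical:

| if (atomic_dec_if_positive(&pool->credits) < 0)
|         return -EAGAIN; /* no credit available; @v was left untouched */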
+/**
+ * atomic64_read() - atomic load with relaxed ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically loads the value of @v with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_read() there.
+ *
+ * Return: The value loaded from @v.
+ */
static __always_inline s64
atomic64_read(const atomic64_t *v)
{
@@ -664,6 +1597,16 @@ atomic64_read(const atomic64_t *v)
return raw_atomic64_read(v);
}
+/**
+ * atomic64_read_acquire() - atomic load with acquire ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically loads the value of @v with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_read_acquire() there.
+ *
+ * Return: The value loaded from @v.
+ */
static __always_inline s64
atomic64_read_acquire(const atomic64_t *v)
{
@@ -671,6 +1614,17 @@ atomic64_read_acquire(const atomic64_t *v)
return raw_atomic64_read_acquire(v);
}
+/**
+ * atomic64_set() - atomic set with relaxed ordering
+ * @v: pointer to atomic64_t
+ * @i: s64 value to assign
+ *
+ * Atomically sets @v to @i with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_set() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic64_set(atomic64_t *v, s64 i)
{
@@ -678,6 +1632,17 @@ atomic64_set(atomic64_t *v, s64 i)
raw_atomic64_set(v, i);
}
+/**
+ * atomic64_set_release() - atomic set with release ordering
+ * @v: pointer to atomic64_t
+ * @i: s64 value to assign
+ *
+ * Atomically sets @v to @i with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_set_release() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic64_set_release(atomic64_t *v, s64 i)
{
@@ -686,6 +1651,17 @@ atomic64_set_release(atomic64_t *v, s64 i)
raw_atomic64_set_release(v, i);
}
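For illustration only, atomic64_set_release() is normally paired with atomic64_read_acquire() on the reader side, so that stores made before the release are visible after the acquire load; the 'stamp' field is hypothetical:

| /* writer */
| atomic64_set_release(&shared->stamp, now);
|
| /* reader */
| s64 stamp = atomic64_read_acquire(&shared->stamp);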
+/**
+ * atomic64_add() - atomic add with relaxed ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_add() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic64_add(s64 i, atomic64_t *v)
{
@@ -693,6 +1669,17 @@ atomic64_add(s64 i, atomic64_t *v)
raw_atomic64_add(i, v);
}
+/**
+ * atomic64_add_return() - atomic add with full ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_add_return() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
atomic64_add_return(s64 i, atomic64_t *v)
{
@@ -701,6 +1688,17 @@ atomic64_add_return(s64 i, atomic64_t *v)
return raw_atomic64_add_return(i, v);
}
+/**
+ * atomic64_add_return_acquire() - atomic add with acquire ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_add_return_acquire() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
atomic64_add_return_acquire(s64 i, atomic64_t *v)
{
@@ -708,6 +1706,17 @@ atomic64_add_return_acquire(s64 i, atomic64_t *v)
return raw_atomic64_add_return_acquire(i, v);
}
+/**
+ * atomic64_add_return_release() - atomic add with release ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_add_return_release() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
atomic64_add_return_release(s64 i, atomic64_t *v)
{
@@ -716,6 +1725,17 @@ atomic64_add_return_release(s64 i, atomic64_t *v)
return raw_atomic64_add_return_release(i, v);
}
+/**
+ * atomic64_add_return_relaxed() - atomic add with relaxed ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_add_return_relaxed() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
atomic64_add_return_relaxed(s64 i, atomic64_t *v)
{
@@ -723,6 +1743,17 @@ atomic64_add_return_relaxed(s64 i, atomic64_t *v)
return raw_atomic64_add_return_relaxed(i, v);
}
+/**
+ * atomic64_fetch_add() - atomic add with full ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_add() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_add(s64 i, atomic64_t *v)
{
@@ -731,6 +1762,17 @@ atomic64_fetch_add(s64 i, atomic64_t *v)
return raw_atomic64_fetch_add(i, v);
}
+/**
+ * atomic64_fetch_add_acquire() - atomic add with acquire ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_add_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
{
@@ -738,6 +1780,17 @@ atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
return raw_atomic64_fetch_add_acquire(i, v);
}
+/**
+ * atomic64_fetch_add_release() - atomic add with release ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_add_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_add_release(s64 i, atomic64_t *v)
{
@@ -746,6 +1799,17 @@ atomic64_fetch_add_release(s64 i, atomic64_t *v)
return raw_atomic64_fetch_add_release(i, v);
}
+/**
+ * atomic64_fetch_add_relaxed() - atomic add with relaxed ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_add_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_add_relaxed(s64 i, atomic64_t *v)
{
@@ -753,6 +1817,17 @@ atomic64_fetch_add_relaxed(s64 i, atomic64_t *v)
return raw_atomic64_fetch_add_relaxed(i, v);
}
+/**
+ * atomic64_sub() - atomic subtract with relaxed ordering
+ * @i: s64 value to subtract
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_sub() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic64_sub(s64 i, atomic64_t *v)
{
@@ -760,6 +1835,17 @@ atomic64_sub(s64 i, atomic64_t *v)
raw_atomic64_sub(i, v);
}
+/**
+ * atomic64_sub_return() - atomic subtract with full ordering
+ * @i: s64 value to subtract
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_sub_return() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
atomic64_sub_return(s64 i, atomic64_t *v)
{
@@ -768,6 +1854,17 @@ atomic64_sub_return(s64 i, atomic64_t *v)
return raw_atomic64_sub_return(i, v);
}
+/**
+ * atomic64_sub_return_acquire() - atomic subtract with acquire ordering
+ * @i: s64 value to subtract
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_sub_return_acquire() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
atomic64_sub_return_acquire(s64 i, atomic64_t *v)
{
@@ -775,6 +1872,17 @@ atomic64_sub_return_acquire(s64 i, atomic64_t *v)
return raw_atomic64_sub_return_acquire(i, v);
}
+/**
+ * atomic64_sub_return_release() - atomic subtract with release ordering
+ * @i: s64 value to subtract
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_sub_return_release() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
atomic64_sub_return_release(s64 i, atomic64_t *v)
{
@@ -783,6 +1891,17 @@ atomic64_sub_return_release(s64 i, atomic64_t *v)
return raw_atomic64_sub_return_release(i, v);
}
+/**
+ * atomic64_sub_return_relaxed() - atomic subtract with relaxed ordering
+ * @i: s64 value to subtract
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_sub_return_relaxed() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
atomic64_sub_return_relaxed(s64 i, atomic64_t *v)
{
@@ -790,6 +1909,17 @@ atomic64_sub_return_relaxed(s64 i, atomic64_t *v)
return raw_atomic64_sub_return_relaxed(i, v);
}
+/**
+ * atomic64_fetch_sub() - atomic subtract with full ordering
+ * @i: s64 value to subtract
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_sub() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_sub(s64 i, atomic64_t *v)
{
@@ -798,6 +1928,17 @@ atomic64_fetch_sub(s64 i, atomic64_t *v)
return raw_atomic64_fetch_sub(i, v);
}
+/**
+ * atomic64_fetch_sub_acquire() - atomic subtract with acquire ordering
+ * @i: s64 value to subtract
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_sub_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
{
@@ -805,6 +1946,17 @@ atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
return raw_atomic64_fetch_sub_acquire(i, v);
}
+/**
+ * atomic64_fetch_sub_release() - atomic subtract with release ordering
+ * @i: s64 value to subtract
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_sub_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_sub_release(s64 i, atomic64_t *v)
{
@@ -813,6 +1965,17 @@ atomic64_fetch_sub_release(s64 i, atomic64_t *v)
return raw_atomic64_fetch_sub_release(i, v);
}
+/**
+ * atomic64_fetch_sub_relaxed() - atomic subtract with relaxed ordering
+ * @i: s64 value to subtract
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_sub_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v)
{
@@ -820,6 +1983,16 @@ atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v)
return raw_atomic64_fetch_sub_relaxed(i, v);
}
+/**
+ * atomic64_inc() - atomic increment with relaxed ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + 1) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_inc() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic64_inc(atomic64_t *v)
{
@@ -827,6 +2000,16 @@ atomic64_inc(atomic64_t *v)
raw_atomic64_inc(v);
}
+/**
+ * atomic64_inc_return() - atomic increment with full ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_inc_return() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
atomic64_inc_return(atomic64_t *v)
{
@@ -835,6 +2018,16 @@ atomic64_inc_return(atomic64_t *v)
return raw_atomic64_inc_return(v);
}
+/**
+ * atomic64_inc_return_acquire() - atomic increment with acquire ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + 1) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_inc_return_acquire() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
atomic64_inc_return_acquire(atomic64_t *v)
{
@@ -842,6 +2035,16 @@ atomic64_inc_return_acquire(atomic64_t *v)
return raw_atomic64_inc_return_acquire(v);
}
+/**
+ * atomic64_inc_return_release() - atomic increment with release ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + 1) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_inc_return_release() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
atomic64_inc_return_release(atomic64_t *v)
{
@@ -850,6 +2053,16 @@ atomic64_inc_return_release(atomic64_t *v)
return raw_atomic64_inc_return_release(v);
}
+/**
+ * atomic64_inc_return_relaxed() - atomic increment with relaxed ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + 1) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_inc_return_relaxed() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
atomic64_inc_return_relaxed(atomic64_t *v)
{
@@ -857,6 +2070,16 @@ atomic64_inc_return_relaxed(atomic64_t *v)
return raw_atomic64_inc_return_relaxed(v);
}
+/**
+ * atomic64_fetch_inc() - atomic increment with full ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_inc() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_inc(atomic64_t *v)
{
@@ -865,6 +2088,16 @@ atomic64_fetch_inc(atomic64_t *v)
return raw_atomic64_fetch_inc(v);
}
+/**
+ * atomic64_fetch_inc_acquire() - atomic increment with acquire ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + 1) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_inc_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_inc_acquire(atomic64_t *v)
{
@@ -872,6 +2105,16 @@ atomic64_fetch_inc_acquire(atomic64_t *v)
return raw_atomic64_fetch_inc_acquire(v);
}
+/**
+ * atomic64_fetch_inc_release() - atomic increment with release ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + 1) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_inc_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_inc_release(atomic64_t *v)
{
@@ -880,6 +2123,16 @@ atomic64_fetch_inc_release(atomic64_t *v)
return raw_atomic64_fetch_inc_release(v);
}
+/**
+ * atomic64_fetch_inc_relaxed() - atomic increment with relaxed ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + 1) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_inc_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_inc_relaxed(atomic64_t *v)
{
@@ -887,6 +2140,16 @@ atomic64_fetch_inc_relaxed(atomic64_t *v)
return raw_atomic64_fetch_inc_relaxed(v);
}
+/**
+ * atomic64_dec() - atomic decrement with relaxed ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - 1) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_dec() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic64_dec(atomic64_t *v)
{
@@ -894,6 +2157,16 @@ atomic64_dec(atomic64_t *v)
raw_atomic64_dec(v);
}
+/**
+ * atomic64_dec_return() - atomic decrement with full ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_dec_return() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
atomic64_dec_return(atomic64_t *v)
{
@@ -902,6 +2175,16 @@ atomic64_dec_return(atomic64_t *v)
return raw_atomic64_dec_return(v);
}
+/**
+ * atomic64_dec_return_acquire() - atomic decrement with acquire ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - 1) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_dec_return_acquire() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
atomic64_dec_return_acquire(atomic64_t *v)
{
@@ -909,6 +2192,16 @@ atomic64_dec_return_acquire(atomic64_t *v)
return raw_atomic64_dec_return_acquire(v);
}
+/**
+ * atomic64_dec_return_release() - atomic decrement with release ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - 1) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_dec_return_release() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
atomic64_dec_return_release(atomic64_t *v)
{
@@ -917,6 +2210,16 @@ atomic64_dec_return_release(atomic64_t *v)
return raw_atomic64_dec_return_release(v);
}
+/**
+ * atomic64_dec_return_relaxed() - atomic decrement with relaxed ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - 1) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_dec_return_relaxed() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
atomic64_dec_return_relaxed(atomic64_t *v)
{
@@ -924,6 +2227,16 @@ atomic64_dec_return_relaxed(atomic64_t *v)
return raw_atomic64_dec_return_relaxed(v);
}
+/**
+ * atomic64_fetch_dec() - atomic decrement with full ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_dec() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_dec(atomic64_t *v)
{
@@ -932,6 +2245,16 @@ atomic64_fetch_dec(atomic64_t *v)
return raw_atomic64_fetch_dec(v);
}
+/**
+ * atomic64_fetch_dec_acquire() - atomic decrement with acquire ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - 1) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_dec_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_dec_acquire(atomic64_t *v)
{
@@ -939,6 +2262,16 @@ atomic64_fetch_dec_acquire(atomic64_t *v)
return raw_atomic64_fetch_dec_acquire(v);
}
+/**
+ * atomic64_fetch_dec_release() - atomic decrement with release ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - 1) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_dec_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_dec_release(atomic64_t *v)
{
@@ -947,6 +2280,16 @@ atomic64_fetch_dec_release(atomic64_t *v)
return raw_atomic64_fetch_dec_release(v);
}
+/**
+ * atomic64_fetch_dec_relaxed() - atomic decrement with relaxed ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - 1) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_dec_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_dec_relaxed(atomic64_t *v)
{
@@ -954,6 +2297,17 @@ atomic64_fetch_dec_relaxed(atomic64_t *v)
return raw_atomic64_fetch_dec_relaxed(v);
}
+/**
+ * atomic64_and() - atomic bitwise AND with relaxed ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v & @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_and() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic64_and(s64 i, atomic64_t *v)
{
@@ -961,6 +2315,17 @@ atomic64_and(s64 i, atomic64_t *v)
raw_atomic64_and(i, v);
}
+/**
+ * atomic64_fetch_and() - atomic bitwise AND with full ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v & @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_and() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_and(s64 i, atomic64_t *v)
{
@@ -969,6 +2334,17 @@ atomic64_fetch_and(s64 i, atomic64_t *v)
return raw_atomic64_fetch_and(i, v);
}
+/**
+ * atomic64_fetch_and_acquire() - atomic bitwise AND with acquire ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v & @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_and_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
{
@@ -976,6 +2352,17 @@ atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
return raw_atomic64_fetch_and_acquire(i, v);
}
+/**
+ * atomic64_fetch_and_release() - atomic bitwise AND with release ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v & @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_and_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_and_release(s64 i, atomic64_t *v)
{
@@ -984,6 +2371,17 @@ atomic64_fetch_and_release(s64 i, atomic64_t *v)
return raw_atomic64_fetch_and_release(i, v);
}
+/**
+ * atomic64_fetch_and_relaxed() - atomic bitwise AND with relaxed ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v & @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_and_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_and_relaxed(s64 i, atomic64_t *v)
{
@@ -991,6 +2389,17 @@ atomic64_fetch_and_relaxed(s64 i, atomic64_t *v)
return raw_atomic64_fetch_and_relaxed(i, v);
}
+/**
+ * atomic64_andnot() - atomic bitwise AND NOT with relaxed ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v & ~@i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_andnot() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic64_andnot(s64 i, atomic64_t *v)
{
@@ -998,6 +2407,17 @@ atomic64_andnot(s64 i, atomic64_t *v)
raw_atomic64_andnot(i, v);
}
+/**
+ * atomic64_fetch_andnot() - atomic bitwise AND NOT with full ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v & ~@i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_andnot() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_andnot(s64 i, atomic64_t *v)
{
@@ -1006,6 +2426,17 @@ atomic64_fetch_andnot(s64 i, atomic64_t *v)
return raw_atomic64_fetch_andnot(i, v);
}
+/**
+ * atomic64_fetch_andnot_acquire() - atomic bitwise AND NOT with acquire ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v & ~@i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_andnot_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
{
@@ -1013,6 +2444,17 @@ atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
return raw_atomic64_fetch_andnot_acquire(i, v);
}
+/**
+ * atomic64_fetch_andnot_release() - atomic bitwise AND NOT with release ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v & ~@i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_andnot_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
{
@@ -1021,6 +2463,17 @@ atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
return raw_atomic64_fetch_andnot_release(i, v);
}
+/**
+ * atomic64_fetch_andnot_relaxed() - atomic bitwise AND NOT with relaxed ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v & ~@i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_andnot_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
{
@@ -1028,6 +2481,17 @@ atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
return raw_atomic64_fetch_andnot_relaxed(i, v);
}
+/**
+ * atomic64_or() - atomic bitwise OR with relaxed ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v | @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_or() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic64_or(s64 i, atomic64_t *v)
{
@@ -1035,6 +2499,17 @@ atomic64_or(s64 i, atomic64_t *v)
raw_atomic64_or(i, v);
}
+/**
+ * atomic64_fetch_or() - atomic bitwise OR with full ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v | @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_or() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_or(s64 i, atomic64_t *v)
{
@@ -1043,6 +2518,17 @@ atomic64_fetch_or(s64 i, atomic64_t *v)
return raw_atomic64_fetch_or(i, v);
}
+/**
+ * atomic64_fetch_or_acquire() - atomic bitwise OR with acquire ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v | @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_or_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
{
@@ -1050,6 +2536,17 @@ atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
return raw_atomic64_fetch_or_acquire(i, v);
}
+/**
+ * atomic64_fetch_or_release() - atomic bitwise OR with release ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v | @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_or_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_or_release(s64 i, atomic64_t *v)
{
@@ -1058,6 +2555,17 @@ atomic64_fetch_or_release(s64 i, atomic64_t *v)
return raw_atomic64_fetch_or_release(i, v);
}
+/**
+ * atomic64_fetch_or_relaxed() - atomic bitwise OR with relaxed ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v | @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_or_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_or_relaxed(s64 i, atomic64_t *v)
{
@@ -1065,6 +2573,17 @@ atomic64_fetch_or_relaxed(s64 i, atomic64_t *v)
return raw_atomic64_fetch_or_relaxed(i, v);
}
+/**
+ * atomic64_xor() - atomic bitwise XOR with relaxed ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v ^ @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_xor() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic64_xor(s64 i, atomic64_t *v)
{
@@ -1072,6 +2591,17 @@ atomic64_xor(s64 i, atomic64_t *v)
raw_atomic64_xor(i, v);
}
+/**
+ * atomic64_fetch_xor() - atomic bitwise XOR with full ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v ^ @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_xor() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_xor(s64 i, atomic64_t *v)
{
@@ -1080,6 +2610,17 @@ atomic64_fetch_xor(s64 i, atomic64_t *v)
return raw_atomic64_fetch_xor(i, v);
}
+/**
+ * atomic64_fetch_xor_acquire() - atomic bitwise XOR with acquire ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v ^ @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_xor_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
{
@@ -1087,6 +2628,17 @@ atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
return raw_atomic64_fetch_xor_acquire(i, v);
}
+/**
+ * atomic64_fetch_xor_release() - atomic bitwise XOR with release ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v ^ @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_xor_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_xor_release(s64 i, atomic64_t *v)
{
@@ -1095,6 +2647,17 @@ atomic64_fetch_xor_release(s64 i, atomic64_t *v)
return raw_atomic64_fetch_xor_release(i, v);
}
+/**
+ * atomic64_fetch_xor_relaxed() - atomic bitwise XOR with relaxed ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v ^ @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_xor_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v)
{
@@ -1102,6 +2665,17 @@ atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v)
return raw_atomic64_fetch_xor_relaxed(i, v);
}
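As an aside, a minimal usage sketch (not part of the generated header) for the bitwise fetch ops, assuming a hypothetical @flags word with a FLAG_PENDING bit:
| #define FLAG_PENDING	BIT_ULL(0)
|
| static atomic64_t flags = ATOMIC64_INIT(0);
|
| /* true only for the caller that actually set the bit */
| static bool mark_pending(void)
| {
|         return !(atomic64_fetch_or(FLAG_PENDING, &flags) & FLAG_PENDING);
| }
|
| /* true only if the bit was previously set */
| static bool clear_pending(void)
| {
|         return atomic64_fetch_andnot(FLAG_PENDING, &flags) & FLAG_PENDING;
| }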
+/**
+ * atomic64_xchg() - atomic exchange with full ordering
+ * @v: pointer to atomic64_t
+ * @new: s64 value to assign
+ *
+ * Atomically updates @v to @new with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_xchg() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_xchg(atomic64_t *v, s64 new)
{
@@ -1110,6 +2684,17 @@ atomic64_xchg(atomic64_t *v, s64 new)
return raw_atomic64_xchg(v, new);
}
+/**
+ * atomic64_xchg_acquire() - atomic exchange with acquire ordering
+ * @v: pointer to atomic64_t
+ * @new: s64 value to assign
+ *
+ * Atomically updates @v to @new with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_xchg_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_xchg_acquire(atomic64_t *v, s64 new)
{
@@ -1117,6 +2702,17 @@ atomic64_xchg_acquire(atomic64_t *v, s64 new)
return raw_atomic64_xchg_acquire(v, new);
}
+/**
+ * atomic64_xchg_release() - atomic exchange with release ordering
+ * @v: pointer to atomic64_t
+ * @new: s64 value to assign
+ *
+ * Atomically updates @v to @new with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_xchg_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_xchg_release(atomic64_t *v, s64 new)
{
@@ -1125,6 +2721,17 @@ atomic64_xchg_release(atomic64_t *v, s64 new)
return raw_atomic64_xchg_release(v, new);
}
+/**
+ * atomic64_xchg_relaxed() - atomic exchange with relaxed ordering
+ * @v: pointer to atomic64_t
+ * @new: s64 value to assign
+ *
+ * Atomically updates @v to @new with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_xchg_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_xchg_relaxed(atomic64_t *v, s64 new)
{
@@ -1132,6 +2739,18 @@ atomic64_xchg_relaxed(atomic64_t *v, s64 new)
return raw_atomic64_xchg_relaxed(v, new);
}
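For illustration, xchg() is the natural way to atomically take a value and leave a sentinel behind; a sketch with a hypothetical @pending counter:
| static atomic64_t pending = ATOMIC64_INIT(0);
|
| /* consume everything accumulated so far, leaving zero behind */
| static s64 take_pending(void)
| {
|         return atomic64_xchg(&pending, 0);
| }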
+/**
+ * atomic64_cmpxchg() - atomic compare and exchange with full ordering
+ * @v: pointer to atomic64_t
+ * @old: s64 value to compare with
+ * @new: s64 value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_cmpxchg() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
{
@@ -1140,6 +2759,18 @@ atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
return raw_atomic64_cmpxchg(v, old, new);
}
+/**
+ * atomic64_cmpxchg_acquire() - atomic compare and exchange with acquire ordering
+ * @v: pointer to atomic64_t
+ * @old: s64 value to compare with
+ * @new: s64 value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_cmpxchg_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
{
@@ -1147,6 +2778,18 @@ atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
return raw_atomic64_cmpxchg_acquire(v, old, new);
}
+/**
+ * atomic64_cmpxchg_release() - atomic compare and exchange with release ordering
+ * @v: pointer to atomic64_t
+ * @old: s64 value to compare with
+ * @new: s64 value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_cmpxchg_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
{
@@ -1155,6 +2798,18 @@ atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
return raw_atomic64_cmpxchg_release(v, old, new);
}
+/**
+ * atomic64_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering
+ * @v: pointer to atomic64_t
+ * @old: s64 value to compare with
+ * @new: s64 value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_cmpxchg_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
{
@@ -1162,6 +2817,19 @@ atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
return raw_atomic64_cmpxchg_relaxed(v, old, new);
}
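A sketch of the traditional cmpxchg() retry loop, using a hypothetical inc_below_limit() helper (the try_cmpxchg() forms documented next usually read better):
| static bool inc_below_limit(atomic64_t *v, s64 limit)
| {
|         s64 old = atomic64_read(v);
|
|         for (;;) {
|                 s64 seen;
|
|                 if (old >= limit)
|                         return false;
|                 seen = atomic64_cmpxchg(v, old, old + 1);
|                 if (seen == old)
|                         return true;
|                 old = seen;     /* lost the race; retry against the current value */
|         }
| }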
+/**
+ * atomic64_try_cmpxchg() - atomic compare and exchange with full ordering
+ * @v: pointer to atomic64_t
+ * @old: pointer to s64 value to compare with
+ * @new: s64 value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with full ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_try_cmpxchg() there.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
{
@@ -1171,6 +2839,19 @@ atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
return raw_atomic64_try_cmpxchg(v, old, new);
}
+/**
+ * atomic64_try_cmpxchg_acquire() - atomic compare and exchange with acquire ordering
+ * @v: pointer to atomic64_t
+ * @old: pointer to s64 value to compare with
+ * @new: s64 value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with acquire ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_try_cmpxchg_acquire() there.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
{
@@ -1179,6 +2860,19 @@ atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
return raw_atomic64_try_cmpxchg_acquire(v, old, new);
}
+/**
+ * atomic64_try_cmpxchg_release() - atomic compare and exchange with release ordering
+ * @v: pointer to atomic64_t
+ * @old: pointer to s64 value to compare with
+ * @new: s64 value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with release ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_try_cmpxchg_release() there.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
{
@@ -1188,6 +2882,19 @@ atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
return raw_atomic64_try_cmpxchg_release(v, old, new);
}
+/**
+ * atomic64_try_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering
+ * @v: pointer to atomic64_t
+ * @old: pointer to s64 value to compare with
+ * @new: s64 value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with relaxed ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_try_cmpxchg_relaxed() there.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
{
@@ -1196,6 +2903,17 @@ atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
return raw_atomic64_try_cmpxchg_relaxed(v, old, new);
}
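The same hypothetical inc_below_limit() helper, rewritten with try_cmpxchg(): on failure @old is updated to the current value, so no explicit re-read is needed:
| static bool inc_below_limit(atomic64_t *v, s64 limit)
| {
|         s64 old = atomic64_read(v);
|
|         do {
|                 if (old >= limit)
|                         return false;
|         } while (!atomic64_try_cmpxchg(v, &old, old + 1));
|
|         return true;
| }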
+/**
+ * atomic64_sub_and_test() - atomic subtract and test if zero with full ordering
+ * @i: s64 value to subtract
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_sub_and_test() there.
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
static __always_inline bool
atomic64_sub_and_test(s64 i, atomic64_t *v)
{
@@ -1204,6 +2922,16 @@ atomic64_sub_and_test(s64 i, atomic64_t *v)
return raw_atomic64_sub_and_test(i, v);
}
+/**
+ * atomic64_dec_and_test() - atomic decrement and test if zero with full ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_dec_and_test() there.
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
static __always_inline bool
atomic64_dec_and_test(atomic64_t *v)
{
@@ -1212,6 +2940,16 @@ atomic64_dec_and_test(atomic64_t *v)
return raw_atomic64_dec_and_test(v);
}
+/**
+ * atomic64_inc_and_test() - atomic increment and test if zero with full ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_inc_and_test() there.
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
static __always_inline bool
atomic64_inc_and_test(atomic64_t *v)
{
@@ -1220,6 +2958,17 @@ atomic64_inc_and_test(atomic64_t *v)
return raw_atomic64_inc_and_test(v);
}
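For illustration, dec_and_test() is the usual way to drop the last reference; a sketch assuming a hypothetical refcounted struct obj:
| struct obj {
|         atomic64_t refcnt;
|         /* ... payload ... */
| };
|
| static void obj_put(struct obj *o)
| {
|         /* only the caller that takes the count to zero frees the object */
|         if (atomic64_dec_and_test(&o->refcnt))
|                 kfree(o);
| }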
+/**
+ * atomic64_add_negative() - atomic add and test if negative with full ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_add_negative() there.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
atomic64_add_negative(s64 i, atomic64_t *v)
{
@@ -1228,6 +2977,17 @@ atomic64_add_negative(s64 i, atomic64_t *v)
return raw_atomic64_add_negative(i, v);
}
+/**
+ * atomic64_add_negative_acquire() - atomic add and test if negative with acquire ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_add_negative_acquire() there.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
atomic64_add_negative_acquire(s64 i, atomic64_t *v)
{
@@ -1235,6 +2995,17 @@ atomic64_add_negative_acquire(s64 i, atomic64_t *v)
return raw_atomic64_add_negative_acquire(i, v);
}
+/**
+ * atomic64_add_negative_release() - atomic add and test if negative with release ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_add_negative_release() there.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
atomic64_add_negative_release(s64 i, atomic64_t *v)
{
@@ -1243,6 +3014,17 @@ atomic64_add_negative_release(s64 i, atomic64_t *v)
return raw_atomic64_add_negative_release(i, v);
}
+/**
+ * atomic64_add_negative_relaxed() - atomic add and test if negative with relaxed ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_add_negative_relaxed() there.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
{
@@ -1250,6 +3032,18 @@ atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
return raw_atomic64_add_negative_relaxed(i, v);
}
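add_negative() reports the sign of the result; a sketch charging against a hypothetical @budget counter:
| static atomic64_t budget = ATOMIC64_INIT(128);
|
| static bool charge(s64 cost)
| {
|         if (atomic64_add_negative(-cost, &budget)) {
|                 /* over budget: undo (the brief dip below zero is visible to others) */
|                 atomic64_add(cost, &budget);
|                 return false;
|         }
|         return true;
| }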
+/**
+ * atomic64_fetch_add_unless() - atomic add unless value with full ordering
+ * @v: pointer to atomic64_t
+ * @a: s64 value to add
+ * @u: s64 value to compare with
+ *
+ * If (@v != @u), atomically updates @v to (@v + @a) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_add_unless() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
{
@@ -1258,6 +3052,18 @@ atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
return raw_atomic64_fetch_add_unless(v, a, u);
}
+/**
+ * atomic64_add_unless() - atomic add unless value with full ordering
+ * @v: pointer to atomic64_t
+ * @a: s64 value to add
+ * @u: s64 value to compare with
+ *
+ * If (@v != @u), atomically updates @v to (@v + @a) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_add_unless() there.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
{
@@ -1266,6 +3072,16 @@ atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
return raw_atomic64_add_unless(v, a, u);
}
+/**
+ * atomic64_inc_not_zero() - atomic increment unless zero with full ordering
+ * @v: pointer to atomic64_t
+ *
+ * If (@v != 0), atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_inc_not_zero() there.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
atomic64_inc_not_zero(atomic64_t *v)
{
@@ -1274,6 +3090,16 @@ atomic64_inc_not_zero(atomic64_t *v)
return raw_atomic64_inc_not_zero(v);
}
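inc_not_zero() is the usual "take a reference only while the object is still live" primitive; a sketch against the hypothetical struct obj above:
| static struct obj *obj_tryget(struct obj *o)
| {
|         /* fails once obj_put() has dropped the last reference */
|         return atomic64_inc_not_zero(&o->refcnt) ? o : NULL;
| }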
+/**
+ * atomic64_inc_unless_negative() - atomic increment unless negative with full ordering
+ * @v: pointer to atomic64_t
+ *
+ * If (@v >= 0), atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_inc_unless_negative() there.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
atomic64_inc_unless_negative(atomic64_t *v)
{
@@ -1282,6 +3108,16 @@ atomic64_inc_unless_negative(atomic64_t *v)
return raw_atomic64_inc_unless_negative(v);
}
+/**
+ * atomic64_dec_unless_positive() - atomic decrement unless positive with full ordering
+ * @v: pointer to atomic64_t
+ *
+ * If (@v <= 0), atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_dec_unless_positive() there.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
atomic64_dec_unless_positive(atomic64_t *v)
{
@@ -1290,6 +3126,16 @@ atomic64_dec_unless_positive(atomic64_t *v)
return raw_atomic64_dec_unless_positive(v);
}
+/**
+ * atomic64_dec_if_positive() - atomic decrement if positive with full ordering
+ * @v: pointer to atomic64_t
+ *
+ * If (@v > 0), atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_dec_if_positive() there.
+ *
+ * Return: The old value of (@v - 1), regardless of whether @v was updated.
+ */
static __always_inline s64
atomic64_dec_if_positive(atomic64_t *v)
{
@@ -1298,6 +3144,16 @@ atomic64_dec_if_positive(atomic64_t *v)
return raw_atomic64_dec_if_positive(v);
}
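Since dec_if_positive() returns the decremented value whether or not the update happened, a negative result means nothing was taken; a sketch with a hypothetical @slots counter:
| static atomic64_t slots = ATOMIC64_INIT(4);
|
| static bool try_take_slot(void)
| {
|         /* decrements only while @slots is positive */
|         return atomic64_dec_if_positive(&slots) >= 0;
| }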
+/**
+ * atomic_long_read() - atomic load with relaxed ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically loads the value of @v with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_read() there.
+ *
+ * Return: The value loaded from @v.
+ */
static __always_inline long
atomic_long_read(const atomic_long_t *v)
{
@@ -1305,6 +3161,16 @@ atomic_long_read(const atomic_long_t *v)
return raw_atomic_long_read(v);
}
+/**
+ * atomic_long_read_acquire() - atomic load with acquire ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically loads the value of @v with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_read_acquire() there.
+ *
+ * Return: The value loaded from @v.
+ */
static __always_inline long
atomic_long_read_acquire(const atomic_long_t *v)
{
@@ -1312,6 +3178,17 @@ atomic_long_read_acquire(const atomic_long_t *v)
return raw_atomic_long_read_acquire(v);
}
+/**
+ * atomic_long_set() - atomic set with relaxed ordering
+ * @v: pointer to atomic_long_t
+ * @i: long value to assign
+ *
+ * Atomically sets @v to @i with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_set() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic_long_set(atomic_long_t *v, long i)
{
@@ -1319,6 +3196,17 @@ atomic_long_set(atomic_long_t *v, long i)
raw_atomic_long_set(v, i);
}
+/**
+ * atomic_long_set_release() - atomic set with release ordering
+ * @v: pointer to atomic_long_t
+ * @i: long value to assign
+ *
+ * Atomically sets @v to @i with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_set_release() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic_long_set_release(atomic_long_t *v, long i)
{
@@ -1327,6 +3215,17 @@ atomic_long_set_release(atomic_long_t *v, long i)
raw_atomic_long_set_release(v, i);
}
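For illustration, set_release() pairs with read_acquire() to publish data; a sketch using hypothetical @payload/@published globals:
| static long payload;
| static atomic_long_t published = ATOMIC_LONG_INIT(0);
|
| static void publish(long val)
| {
|         payload = val;
|         /* release: orders the @payload store before the flag store */
|         atomic_long_set_release(&published, 1);
| }
|
| static bool consume(long *val)
| {
|         /* acquire: a reader that observes 1 also observes @payload */
|         if (!atomic_long_read_acquire(&published))
|                 return false;
|         *val = payload;
|         return true;
| }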
+/**
+ * atomic_long_add() - atomic add with relaxed ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_add() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic_long_add(long i, atomic_long_t *v)
{
@@ -1334,6 +3233,17 @@ atomic_long_add(long i, atomic_long_t *v)
raw_atomic_long_add(i, v);
}
+/**
+ * atomic_long_add_return() - atomic add with full ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_add_return() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
atomic_long_add_return(long i, atomic_long_t *v)
{
@@ -1342,6 +3252,17 @@ atomic_long_add_return(long i, atomic_long_t *v)
return raw_atomic_long_add_return(i, v);
}
+/**
+ * atomic_long_add_return_acquire() - atomic add with acquire ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_add_return_acquire() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
atomic_long_add_return_acquire(long i, atomic_long_t *v)
{
@@ -1349,6 +3270,17 @@ atomic_long_add_return_acquire(long i, atomic_long_t *v)
return raw_atomic_long_add_return_acquire(i, v);
}
+/**
+ * atomic_long_add_return_release() - atomic add with release ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_add_return_release() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
atomic_long_add_return_release(long i, atomic_long_t *v)
{
@@ -1357,6 +3289,17 @@ atomic_long_add_return_release(long i, atomic_long_t *v)
return raw_atomic_long_add_return_release(i, v);
}
+/**
+ * atomic_long_add_return_relaxed() - atomic add with relaxed ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_add_return_relaxed() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
atomic_long_add_return_relaxed(long i, atomic_long_t *v)
{
@@ -1364,6 +3307,17 @@ atomic_long_add_return_relaxed(long i, atomic_long_t *v)
return raw_atomic_long_add_return_relaxed(i, v);
}
+/**
+ * atomic_long_fetch_add() - atomic add with full ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_add() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_add(long i, atomic_long_t *v)
{
@@ -1372,6 +3326,17 @@ atomic_long_fetch_add(long i, atomic_long_t *v)
return raw_atomic_long_fetch_add(i, v);
}
+/**
+ * atomic_long_fetch_add_acquire() - atomic add with acquire ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_add_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
{
@@ -1379,6 +3344,17 @@ atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
return raw_atomic_long_fetch_add_acquire(i, v);
}
+/**
+ * atomic_long_fetch_add_release() - atomic add with release ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_add_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_add_release(long i, atomic_long_t *v)
{
@@ -1387,6 +3363,17 @@ atomic_long_fetch_add_release(long i, atomic_long_t *v)
return raw_atomic_long_fetch_add_release(i, v);
}
+/**
+ * atomic_long_fetch_add_relaxed() - atomic add with relaxed ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_add_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
{
@@ -1394,6 +3381,17 @@ atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
return raw_atomic_long_fetch_add_relaxed(i, v);
}
+/**
+ * atomic_long_sub() - atomic subtract with relaxed ordering
+ * @i: long value to subtract
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_sub() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic_long_sub(long i, atomic_long_t *v)
{
@@ -1401,6 +3399,17 @@ atomic_long_sub(long i, atomic_long_t *v)
raw_atomic_long_sub(i, v);
}
+/**
+ * atomic_long_sub_return() - atomic subtract with full ordering
+ * @i: long value to subtract
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_sub_return() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
atomic_long_sub_return(long i, atomic_long_t *v)
{
@@ -1409,6 +3418,17 @@ atomic_long_sub_return(long i, atomic_long_t *v)
return raw_atomic_long_sub_return(i, v);
}
+/**
+ * atomic_long_sub_return_acquire() - atomic subtract with acquire ordering
+ * @i: long value to subtract
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_sub_return_acquire() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
atomic_long_sub_return_acquire(long i, atomic_long_t *v)
{
@@ -1416,6 +3436,17 @@ atomic_long_sub_return_acquire(long i, atomic_long_t *v)
return raw_atomic_long_sub_return_acquire(i, v);
}
+/**
+ * atomic_long_sub_return_release() - atomic subtract with release ordering
+ * @i: long value to subtract
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_sub_return_release() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
atomic_long_sub_return_release(long i, atomic_long_t *v)
{
@@ -1424,6 +3455,17 @@ atomic_long_sub_return_release(long i, atomic_long_t *v)
return raw_atomic_long_sub_return_release(i, v);
}
+/**
+ * atomic_long_sub_return_relaxed() - atomic subtract with relaxed ordering
+ * @i: long value to subtract
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_sub_return_relaxed() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
{
@@ -1431,6 +3473,17 @@ atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
return raw_atomic_long_sub_return_relaxed(i, v);
}
+/**
+ * atomic_long_fetch_sub() - atomic subtract with full ordering
+ * @i: long value to subtract
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_sub() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_sub(long i, atomic_long_t *v)
{
@@ -1439,6 +3492,17 @@ atomic_long_fetch_sub(long i, atomic_long_t *v)
return raw_atomic_long_fetch_sub(i, v);
}
+/**
+ * atomic_long_fetch_sub_acquire() - atomic subtract with acquire ordering
+ * @i: long value to subtract
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_sub_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
{
@@ -1446,6 +3510,17 @@ atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
return raw_atomic_long_fetch_sub_acquire(i, v);
}
+/**
+ * atomic_long_fetch_sub_release() - atomic subtract with release ordering
+ * @i: long value to subtract
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_sub_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_sub_release(long i, atomic_long_t *v)
{
@@ -1454,6 +3529,17 @@ atomic_long_fetch_sub_release(long i, atomic_long_t *v)
return raw_atomic_long_fetch_sub_release(i, v);
}
+/**
+ * atomic_long_fetch_sub_relaxed() - atomic subtract with relaxed ordering
+ * @i: long value to subtract
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_sub_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
{
@@ -1461,6 +3547,16 @@ atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
return raw_atomic_long_fetch_sub_relaxed(i, v);
}
+/**
+ * atomic_long_inc() - atomic increment with relaxed ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + 1) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_inc() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic_long_inc(atomic_long_t *v)
{
@@ -1468,6 +3564,16 @@ atomic_long_inc(atomic_long_t *v)
raw_atomic_long_inc(v);
}
+/**
+ * atomic_long_inc_return() - atomic increment with full ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_inc_return() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
atomic_long_inc_return(atomic_long_t *v)
{
@@ -1476,6 +3582,16 @@ atomic_long_inc_return(atomic_long_t *v)
return raw_atomic_long_inc_return(v);
}
+/**
+ * atomic_long_inc_return_acquire() - atomic increment with acquire ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + 1) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_inc_return_acquire() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
atomic_long_inc_return_acquire(atomic_long_t *v)
{
@@ -1483,6 +3599,16 @@ atomic_long_inc_return_acquire(atomic_long_t *v)
return raw_atomic_long_inc_return_acquire(v);
}
+/**
+ * atomic_long_inc_return_release() - atomic increment with release ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + 1) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_inc_return_release() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
atomic_long_inc_return_release(atomic_long_t *v)
{
@@ -1491,6 +3617,16 @@ atomic_long_inc_return_release(atomic_long_t *v)
return raw_atomic_long_inc_return_release(v);
}
+/**
+ * atomic_long_inc_return_relaxed() - atomic increment with relaxed ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + 1) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_inc_return_relaxed() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
atomic_long_inc_return_relaxed(atomic_long_t *v)
{
@@ -1498,6 +3634,16 @@ atomic_long_inc_return_relaxed(atomic_long_t *v)
return raw_atomic_long_inc_return_relaxed(v);
}
+/**
+ * atomic_long_fetch_inc() - atomic increment with full ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_inc() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_inc(atomic_long_t *v)
{
@@ -1506,6 +3652,16 @@ atomic_long_fetch_inc(atomic_long_t *v)
return raw_atomic_long_fetch_inc(v);
}
+/**
+ * atomic_long_fetch_inc_acquire() - atomic increment with acquire ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + 1) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_inc_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_inc_acquire(atomic_long_t *v)
{
@@ -1513,6 +3669,16 @@ atomic_long_fetch_inc_acquire(atomic_long_t *v)
return raw_atomic_long_fetch_inc_acquire(v);
}
+/**
+ * atomic_long_fetch_inc_release() - atomic increment with release ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + 1) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_inc_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_inc_release(atomic_long_t *v)
{
@@ -1521,6 +3687,16 @@ atomic_long_fetch_inc_release(atomic_long_t *v)
return raw_atomic_long_fetch_inc_release(v);
}
+/**
+ * atomic_long_fetch_inc_relaxed() - atomic increment with relaxed ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + 1) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_inc_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_inc_relaxed(atomic_long_t *v)
{
@@ -1528,6 +3704,16 @@ atomic_long_fetch_inc_relaxed(atomic_long_t *v)
return raw_atomic_long_fetch_inc_relaxed(v);
}
+/**
+ * atomic_long_dec() - atomic decrement with relaxed ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - 1) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_dec() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic_long_dec(atomic_long_t *v)
{
@@ -1535,6 +3721,16 @@ atomic_long_dec(atomic_long_t *v)
raw_atomic_long_dec(v);
}
+/**
+ * atomic_long_dec_return() - atomic decrement with full ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_dec_return() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
atomic_long_dec_return(atomic_long_t *v)
{
@@ -1543,6 +3739,16 @@ atomic_long_dec_return(atomic_long_t *v)
return raw_atomic_long_dec_return(v);
}
+/**
+ * atomic_long_dec_return_acquire() - atomic decrement with acquire ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - 1) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_dec_return_acquire() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
atomic_long_dec_return_acquire(atomic_long_t *v)
{
@@ -1550,6 +3756,16 @@ atomic_long_dec_return_acquire(atomic_long_t *v)
return raw_atomic_long_dec_return_acquire(v);
}
+/**
+ * atomic_long_dec_return_release() - atomic decrement with release ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - 1) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_dec_return_release() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
atomic_long_dec_return_release(atomic_long_t *v)
{
@@ -1558,6 +3774,16 @@ atomic_long_dec_return_release(atomic_long_t *v)
return raw_atomic_long_dec_return_release(v);
}
+/**
+ * atomic_long_dec_return_relaxed() - atomic decrement with relaxed ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - 1) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_dec_return_relaxed() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
atomic_long_dec_return_relaxed(atomic_long_t *v)
{
@@ -1565,6 +3791,16 @@ atomic_long_dec_return_relaxed(atomic_long_t *v)
return raw_atomic_long_dec_return_relaxed(v);
}
+/**
+ * atomic_long_fetch_dec() - atomic decrement with full ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_dec() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_dec(atomic_long_t *v)
{
@@ -1573,6 +3809,16 @@ atomic_long_fetch_dec(atomic_long_t *v)
return raw_atomic_long_fetch_dec(v);
}
+/**
+ * atomic_long_fetch_dec_acquire() - atomic decrement with acquire ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - 1) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_dec_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_dec_acquire(atomic_long_t *v)
{
@@ -1580,6 +3826,16 @@ atomic_long_fetch_dec_acquire(atomic_long_t *v)
return raw_atomic_long_fetch_dec_acquire(v);
}
+/**
+ * atomic_long_fetch_dec_release() - atomic decrement with release ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - 1) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_dec_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_dec_release(atomic_long_t *v)
{
@@ -1588,6 +3844,16 @@ atomic_long_fetch_dec_release(atomic_long_t *v)
return raw_atomic_long_fetch_dec_release(v);
}
+/**
+ * atomic_long_fetch_dec_relaxed() - atomic decrement with relaxed ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - 1) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_dec_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_dec_relaxed(atomic_long_t *v)
{
@@ -1595,6 +3861,17 @@ atomic_long_fetch_dec_relaxed(atomic_long_t *v)
return raw_atomic_long_fetch_dec_relaxed(v);
}
+/**
+ * atomic_long_and() - atomic bitwise AND with relaxed ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v & @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_and() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic_long_and(long i, atomic_long_t *v)
{
@@ -1602,6 +3879,17 @@ atomic_long_and(long i, atomic_long_t *v)
raw_atomic_long_and(i, v);
}
+/**
+ * atomic_long_fetch_and() - atomic bitwise AND with full ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v & @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_and() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_and(long i, atomic_long_t *v)
{
@@ -1610,6 +3898,17 @@ atomic_long_fetch_and(long i, atomic_long_t *v)
return raw_atomic_long_fetch_and(i, v);
}
+/**
+ * atomic_long_fetch_and_acquire() - atomic bitwise AND with acquire ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v & @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_and_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
{
@@ -1617,6 +3916,17 @@ atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
return raw_atomic_long_fetch_and_acquire(i, v);
}
+/**
+ * atomic_long_fetch_and_release() - atomic bitwise AND with release ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v & @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_and_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_and_release(long i, atomic_long_t *v)
{
@@ -1625,6 +3935,17 @@ atomic_long_fetch_and_release(long i, atomic_long_t *v)
return raw_atomic_long_fetch_and_release(i, v);
}
+/**
+ * atomic_long_fetch_and_relaxed() - atomic bitwise AND with relaxed ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v & @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_and_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
{
@@ -1632,6 +3953,17 @@ atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
return raw_atomic_long_fetch_and_relaxed(i, v);
}
+/**
+ * atomic_long_andnot() - atomic bitwise AND NOT with relaxed ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v & ~@i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_andnot() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic_long_andnot(long i, atomic_long_t *v)
{
@@ -1639,6 +3971,17 @@ atomic_long_andnot(long i, atomic_long_t *v)
raw_atomic_long_andnot(i, v);
}
+/**
+ * atomic_long_fetch_andnot() - atomic bitwise AND NOT with full ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v & ~@i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_andnot() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_andnot(long i, atomic_long_t *v)
{
@@ -1647,6 +3990,17 @@ atomic_long_fetch_andnot(long i, atomic_long_t *v)
return raw_atomic_long_fetch_andnot(i, v);
}
+/**
+ * atomic_long_fetch_andnot_acquire() - atomic bitwise AND NOT with acquire ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v & ~@i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_andnot_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
{
@@ -1654,6 +4008,17 @@ atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
return raw_atomic_long_fetch_andnot_acquire(i, v);
}
+/**
+ * atomic_long_fetch_andnot_release() - atomic bitwise AND NOT with release ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v & ~@i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_andnot_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
{
@@ -1662,6 +4027,17 @@ atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
return raw_atomic_long_fetch_andnot_release(i, v);
}
+/**
+ * atomic_long_fetch_andnot_relaxed() - atomic bitwise AND NOT with relaxed ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v & ~@i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_andnot_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
{
@@ -1669,6 +4045,17 @@ atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
return raw_atomic_long_fetch_andnot_relaxed(i, v);
}
+/**
+ * atomic_long_or() - atomic bitwise OR with relaxed ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v | @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_or() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic_long_or(long i, atomic_long_t *v)
{
@@ -1676,6 +4063,17 @@ atomic_long_or(long i, atomic_long_t *v)
raw_atomic_long_or(i, v);
}
+/**
+ * atomic_long_fetch_or() - atomic bitwise OR with full ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v | @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_or() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_or(long i, atomic_long_t *v)
{
@@ -1684,6 +4082,17 @@ atomic_long_fetch_or(long i, atomic_long_t *v)
return raw_atomic_long_fetch_or(i, v);
}
+/**
+ * atomic_long_fetch_or_acquire() - atomic bitwise OR with acquire ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v | @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_or_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
{
@@ -1691,6 +4100,17 @@ atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
return raw_atomic_long_fetch_or_acquire(i, v);
}
+/**
+ * atomic_long_fetch_or_release() - atomic bitwise OR with release ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v | @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_or_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_or_release(long i, atomic_long_t *v)
{
@@ -1699,6 +4119,17 @@ atomic_long_fetch_or_release(long i, atomic_long_t *v)
return raw_atomic_long_fetch_or_release(i, v);
}
+/**
+ * atomic_long_fetch_or_relaxed() - atomic bitwise OR with relaxed ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v | @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_or_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
{
@@ -1706,6 +4137,17 @@ atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
return raw_atomic_long_fetch_or_relaxed(i, v);
}
+/**
+ * atomic_long_xor() - atomic bitwise XOR with relaxed ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v ^ @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_xor() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic_long_xor(long i, atomic_long_t *v)
{
@@ -1713,6 +4155,17 @@ atomic_long_xor(long i, atomic_long_t *v)
raw_atomic_long_xor(i, v);
}
+/**
+ * atomic_long_fetch_xor() - atomic bitwise XOR with full ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v ^ @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_xor() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_xor(long i, atomic_long_t *v)
{
@@ -1721,6 +4174,17 @@ atomic_long_fetch_xor(long i, atomic_long_t *v)
return raw_atomic_long_fetch_xor(i, v);
}
+/**
+ * atomic_long_fetch_xor_acquire() - atomic bitwise XOR with acquire ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v ^ @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_xor_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
{
@@ -1728,6 +4192,17 @@ atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
return raw_atomic_long_fetch_xor_acquire(i, v);
}
+/**
+ * atomic_long_fetch_xor_release() - atomic bitwise XOR with release ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v ^ @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_xor_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_xor_release(long i, atomic_long_t *v)
{
@@ -1736,6 +4211,17 @@ atomic_long_fetch_xor_release(long i, atomic_long_t *v)
return raw_atomic_long_fetch_xor_release(i, v);
}
+/**
+ * atomic_long_fetch_xor_relaxed() - atomic bitwise XOR with relaxed ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v ^ @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_xor_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
{
@@ -1743,6 +4229,17 @@ atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
return raw_atomic_long_fetch_xor_relaxed(i, v);
}
+/**
+ * atomic_long_xchg() - atomic exchange with full ordering
+ * @v: pointer to atomic_long_t
+ * @new: long value to assign
+ *
+ * Atomically updates @v to @new with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_xchg() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_xchg(atomic_long_t *v, long new)
{
@@ -1751,6 +4248,17 @@ atomic_long_xchg(atomic_long_t *v, long new)
return raw_atomic_long_xchg(v, new);
}
+/**
+ * atomic_long_xchg_acquire() - atomic exchange with acquire ordering
+ * @v: pointer to atomic_long_t
+ * @new: long value to assign
+ *
+ * Atomically updates @v to @new with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_xchg_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_xchg_acquire(atomic_long_t *v, long new)
{
@@ -1758,6 +4266,17 @@ atomic_long_xchg_acquire(atomic_long_t *v, long new)
return raw_atomic_long_xchg_acquire(v, new);
}
+/**
+ * atomic_long_xchg_release() - atomic exchange with release ordering
+ * @v: pointer to atomic_long_t
+ * @new: long value to assign
+ *
+ * Atomically updates @v to @new with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_xchg_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_xchg_release(atomic_long_t *v, long new)
{
@@ -1766,6 +4285,17 @@ atomic_long_xchg_release(atomic_long_t *v, long new)
return raw_atomic_long_xchg_release(v, new);
}
+/**
+ * atomic_long_xchg_relaxed() - atomic exchange with relaxed ordering
+ * @v: pointer to atomic_long_t
+ * @new: long value to assign
+ *
+ * Atomically updates @v to @new with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_xchg_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_xchg_relaxed(atomic_long_t *v, long new)
{
@@ -1773,6 +4303,18 @@ atomic_long_xchg_relaxed(atomic_long_t *v, long new)
return raw_atomic_long_xchg_relaxed(v, new);
}
+/**
+ * atomic_long_cmpxchg() - atomic compare and exchange with full ordering
+ * @v: pointer to atomic_long_t
+ * @old: long value to compare with
+ * @new: long value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_cmpxchg() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
{
@@ -1781,6 +4323,18 @@ atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
return raw_atomic_long_cmpxchg(v, old, new);
}
+/**
+ * atomic_long_cmpxchg_acquire() - atomic compare and exchange with acquire ordering
+ * @v: pointer to atomic_long_t
+ * @old: long value to compare with
+ * @new: long value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_cmpxchg_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
{
@@ -1788,6 +4342,18 @@ atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
return raw_atomic_long_cmpxchg_acquire(v, old, new);
}
+/**
+ * atomic_long_cmpxchg_release() - atomic compare and exchange with release ordering
+ * @v: pointer to atomic_long_t
+ * @old: long value to compare with
+ * @new: long value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_cmpxchg_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
{
@@ -1796,6 +4362,18 @@ atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
return raw_atomic_long_cmpxchg_release(v, old, new);
}
+/**
+ * atomic_long_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering
+ * @v: pointer to atomic_long_t
+ * @old: long value to compare with
+ * @new: long value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_cmpxchg_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
{
@@ -1803,6 +4381,19 @@ atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
return raw_atomic_long_cmpxchg_relaxed(v, old, new);
}
+/**
+ * atomic_long_try_cmpxchg() - atomic compare and exchange with full ordering
+ * @v: pointer to atomic_long_t
+ * @old: pointer to long value to compare with
+ * @new: long value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with full ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_try_cmpxchg() there.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
{
@@ -1812,6 +4403,19 @@ atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
return raw_atomic_long_try_cmpxchg(v, old, new);
}
+/**
+ * atomic_long_try_cmpxchg_acquire() - atomic compare and exchange with acquire ordering
+ * @v: pointer to atomic_long_t
+ * @old: pointer to long value to compare with
+ * @new: long value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with acquire ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_try_cmpxchg_acquire() there.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
{
@@ -1820,6 +4424,19 @@ atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
return raw_atomic_long_try_cmpxchg_acquire(v, old, new);
}
+/**
+ * atomic_long_try_cmpxchg_release() - atomic compare and exchange with release ordering
+ * @v: pointer to atomic_long_t
+ * @old: pointer to long value to compare with
+ * @new: long value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with release ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_try_cmpxchg_release() there.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
{
@@ -1829,6 +4446,19 @@ atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
return raw_atomic_long_try_cmpxchg_release(v, old, new);
}
+/**
+ * atomic_long_try_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering
+ * @v: pointer to atomic_long_t
+ * @old: pointer to long value to compare with
+ * @new: long value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with relaxed ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_try_cmpxchg_relaxed() there.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
{
@@ -1837,6 +4467,17 @@ atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
return raw_atomic_long_try_cmpxchg_relaxed(v, old, new);
}
+/**
+ * atomic_long_sub_and_test() - atomic subtract and test if zero with full ordering
+ * @i: long value to subtract
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_sub_and_test() there.
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
static __always_inline bool
atomic_long_sub_and_test(long i, atomic_long_t *v)
{
@@ -1845,6 +4486,16 @@ atomic_long_sub_and_test(long i, atomic_long_t *v)
return raw_atomic_long_sub_and_test(i, v);
}
+/**
+ * atomic_long_dec_and_test() - atomic decrement and test if zero with full ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_dec_and_test() there.
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
static __always_inline bool
atomic_long_dec_and_test(atomic_long_t *v)
{
@@ -1853,6 +4504,16 @@ atomic_long_dec_and_test(atomic_long_t *v)
return raw_atomic_long_dec_and_test(v);
}
+/**
+ * atomic_long_inc_and_test() - atomic increment and test if zero with full ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_inc_and_test() there.
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
static __always_inline bool
atomic_long_inc_and_test(atomic_long_t *v)
{
@@ -1861,6 +4522,17 @@ atomic_long_inc_and_test(atomic_long_t *v)
return raw_atomic_long_inc_and_test(v);
}
+/**
+ * atomic_long_add_negative() - atomic add and test if negative with full ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_add_negative() there.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
atomic_long_add_negative(long i, atomic_long_t *v)
{
@@ -1869,6 +4541,17 @@ atomic_long_add_negative(long i, atomic_long_t *v)
return raw_atomic_long_add_negative(i, v);
}
+/**
+ * atomic_long_add_negative_acquire() - atomic add and test if negative with acquire ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_add_negative_acquire() there.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
atomic_long_add_negative_acquire(long i, atomic_long_t *v)
{
@@ -1876,6 +4559,17 @@ atomic_long_add_negative_acquire(long i, atomic_long_t *v)
return raw_atomic_long_add_negative_acquire(i, v);
}
+/**
+ * atomic_long_add_negative_release() - atomic add and test if negative with release ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_add_negative_release() there.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
atomic_long_add_negative_release(long i, atomic_long_t *v)
{
@@ -1884,6 +4578,17 @@ atomic_long_add_negative_release(long i, atomic_long_t *v)
return raw_atomic_long_add_negative_release(i, v);
}
+/**
+ * atomic_long_add_negative_relaxed() - atomic add and test if negative with relaxed ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_add_negative_relaxed() there.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
{
@@ -1891,6 +4596,18 @@ atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
return raw_atomic_long_add_negative_relaxed(i, v);
}
+/**
+ * atomic_long_fetch_add_unless() - atomic add unless value with full ordering
+ * @v: pointer to atomic_long_t
+ * @a: long value to add
+ * @u: long value to compare with
+ *
+ * If (@v != @u), atomically updates @v to (@v + @a) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_add_unless() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
{
@@ -1899,6 +4616,18 @@ atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
return raw_atomic_long_fetch_add_unless(v, a, u);
}
+/**
+ * atomic_long_add_unless() - atomic add unless value with full ordering
+ * @v: pointer to atomic_long_t
+ * @a: long value to add
+ * @u: long value to compare with
+ *
+ * If (@v != @u), atomically updates @v to (@v + @a) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_add_unless() there.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
atomic_long_add_unless(atomic_long_t *v, long a, long u)
{
@@ -1907,6 +4636,16 @@ atomic_long_add_unless(atomic_long_t *v, long a, long u)
return raw_atomic_long_add_unless(v, a, u);
}
+/**
+ * atomic_long_inc_not_zero() - atomic increment unless zero with full ordering
+ * @v: pointer to atomic_long_t
+ *
+ * If (@v != 0), atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_inc_not_zero() there.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
atomic_long_inc_not_zero(atomic_long_t *v)
{
@@ -1915,6 +4654,16 @@ atomic_long_inc_not_zero(atomic_long_t *v)
return raw_atomic_long_inc_not_zero(v);
}
+/**
+ * atomic_long_inc_unless_negative() - atomic increment unless negative with full ordering
+ * @v: pointer to atomic_long_t
+ *
+ * If (@v >= 0), atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_inc_unless_negative() there.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
atomic_long_inc_unless_negative(atomic_long_t *v)
{
@@ -1923,6 +4672,16 @@ atomic_long_inc_unless_negative(atomic_long_t *v)
return raw_atomic_long_inc_unless_negative(v);
}
+/**
+ * atomic_long_dec_unless_positive() - atomic decrement unless positive with full ordering
+ * @v: pointer to atomic_long_t
+ *
+ * If (@v <= 0), atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_dec_unless_positive() there.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
atomic_long_dec_unless_positive(atomic_long_t *v)
{
@@ -1931,6 +4690,16 @@ atomic_long_dec_unless_positive(atomic_long_t *v)
return raw_atomic_long_dec_unless_positive(v);
}
+/**
+ * atomic_long_dec_if_positive() - atomic decrement if positive with full ordering
+ * @v: pointer to atomic_long_t
+ *
+ * If (@v > 0), atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_dec_if_positive() there.
+ *
+ * Return: The old value of (@v - 1), regardless of whether @v was updated.
+ */
static __always_inline long
atomic_long_dec_if_positive(atomic_long_t *v)
{
@@ -2231,4 +5000,4 @@ atomic_long_dec_if_positive(atomic_long_t *v)
#endif /* _LINUX_ATOMIC_INSTRUMENTED_H */
-// a4c3d2b229f907654cc53cb5d40e80f7fed1ec9c
+// 06cec02e676a484857aee38b0071a1d846ec9457
diff --git a/include/linux/atomic/atomic-long.h b/include/linux/atomic/atomic-long.h
index f564f71ff8afc..f6df2adadf997 100644
--- a/include/linux/atomic/atomic-long.h
+++ b/include/linux/atomic/atomic-long.h
@@ -21,6 +21,16 @@ typedef atomic_t atomic_long_t;
#define atomic_long_cond_read_relaxed atomic_cond_read_relaxed
#endif
+/**
+ * raw_atomic_long_read() - atomic load with relaxed ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically loads the value of @v with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_read() elsewhere.
+ *
+ * Return: The value loaded from @v.
+ */
static __always_inline long
raw_atomic_long_read(const atomic_long_t *v)
{
@@ -31,6 +41,16 @@ raw_atomic_long_read(const atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_read_acquire() - atomic load with acquire ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically loads the value of @v with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_read_acquire() elsewhere.
+ *
+ * Return: The value loaded from @v.
+ */
static __always_inline long
raw_atomic_long_read_acquire(const atomic_long_t *v)
{
@@ -41,6 +61,17 @@ raw_atomic_long_read_acquire(const atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_set() - atomic set with relaxed ordering
+ * @v: pointer to atomic_long_t
+ * @i: long value to assign
+ *
+ * Atomically sets @v to @i with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_set() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic_long_set(atomic_long_t *v, long i)
{
@@ -51,6 +82,17 @@ raw_atomic_long_set(atomic_long_t *v, long i)
#endif
}
+/**
+ * raw_atomic_long_set_release() - atomic set with release ordering
+ * @v: pointer to atomic_long_t
+ * @i: long value to assign
+ *
+ * Atomically sets @v to @i with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_set_release() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic_long_set_release(atomic_long_t *v, long i)
{
@@ -61,6 +103,17 @@ raw_atomic_long_set_release(atomic_long_t *v, long i)
#endif
}
+/**
+ * raw_atomic_long_add() - atomic add with relaxed ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_add() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic_long_add(long i, atomic_long_t *v)
{
@@ -71,6 +124,17 @@ raw_atomic_long_add(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_add_return() - atomic add with full ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_add_return() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
raw_atomic_long_add_return(long i, atomic_long_t *v)
{
@@ -81,6 +145,17 @@ raw_atomic_long_add_return(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_add_return_acquire() - atomic add with acquire ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_add_return_acquire() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
raw_atomic_long_add_return_acquire(long i, atomic_long_t *v)
{
@@ -91,6 +166,17 @@ raw_atomic_long_add_return_acquire(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_add_return_release() - atomic add with release ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_add_return_release() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
raw_atomic_long_add_return_release(long i, atomic_long_t *v)
{
@@ -101,6 +187,17 @@ raw_atomic_long_add_return_release(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_add_return_relaxed() - atomic add with relaxed ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_add_return_relaxed() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v)
{
@@ -111,6 +208,17 @@ raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_add() - atomic add with full ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_add() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_add(long i, atomic_long_t *v)
{
@@ -121,6 +229,17 @@ raw_atomic_long_fetch_add(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_add_acquire() - atomic add with acquire ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_add_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
{
@@ -131,6 +250,17 @@ raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_add_release() - atomic add with release ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_add_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_add_release(long i, atomic_long_t *v)
{
@@ -141,6 +271,17 @@ raw_atomic_long_fetch_add_release(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_add_relaxed() - atomic add with relaxed ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_add_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
{
@@ -151,6 +292,17 @@ raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_sub() - atomic subtract with relaxed ordering
+ * @i: long value to subtract
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_sub() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic_long_sub(long i, atomic_long_t *v)
{
@@ -161,6 +313,17 @@ raw_atomic_long_sub(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_sub_return() - atomic subtract with full ordering
+ * @i: long value to subtract
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_sub_return() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
raw_atomic_long_sub_return(long i, atomic_long_t *v)
{
@@ -171,6 +334,17 @@ raw_atomic_long_sub_return(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_sub_return_acquire() - atomic subtract with acquire ordering
+ * @i: long value to subtract
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_sub_return_acquire() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v)
{
@@ -181,6 +355,17 @@ raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_sub_return_release() - atomic subtract with release ordering
+ * @i: long value to subtract
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_sub_return_release() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
raw_atomic_long_sub_return_release(long i, atomic_long_t *v)
{
@@ -191,6 +376,17 @@ raw_atomic_long_sub_return_release(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_sub_return_relaxed() - atomic subtract with relaxed ordering
+ * @i: long value to subtract
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_sub_return_relaxed() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
{
@@ -201,6 +397,17 @@ raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_sub() - atomic subtract with full ordering
+ * @i: long value to subtract
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_sub() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_sub(long i, atomic_long_t *v)
{
@@ -211,6 +418,17 @@ raw_atomic_long_fetch_sub(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_sub_acquire() - atomic subtract with acquire ordering
+ * @i: long value to subtract
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_sub_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
{
@@ -221,6 +439,17 @@ raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_sub_release() - atomic subtract with release ordering
+ * @i: long value to subtract
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_sub_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v)
{
@@ -231,6 +460,17 @@ raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_sub_relaxed() - atomic subtract with relaxed ordering
+ * @i: long value to subtract
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_sub_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
{
@@ -241,6 +481,16 @@ raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_inc() - atomic increment with relaxed ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + 1) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_inc() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic_long_inc(atomic_long_t *v)
{
@@ -251,6 +501,16 @@ raw_atomic_long_inc(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_inc_return() - atomic increment with full ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_inc_return() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
raw_atomic_long_inc_return(atomic_long_t *v)
{
@@ -261,6 +521,16 @@ raw_atomic_long_inc_return(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_inc_return_acquire() - atomic increment with acquire ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + 1) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_inc_return_acquire() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
raw_atomic_long_inc_return_acquire(atomic_long_t *v)
{
@@ -271,6 +541,16 @@ raw_atomic_long_inc_return_acquire(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_inc_return_release() - atomic increment with release ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + 1) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_inc_return_release() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
raw_atomic_long_inc_return_release(atomic_long_t *v)
{
@@ -281,6 +561,16 @@ raw_atomic_long_inc_return_release(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_inc_return_relaxed() - atomic increment with relaxed ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + 1) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_inc_return_relaxed() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
raw_atomic_long_inc_return_relaxed(atomic_long_t *v)
{
@@ -291,6 +581,16 @@ raw_atomic_long_inc_return_relaxed(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_inc() - atomic increment with full ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_inc() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_inc(atomic_long_t *v)
{
@@ -301,6 +601,16 @@ raw_atomic_long_fetch_inc(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_inc_acquire() - atomic increment with acquire ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + 1) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_inc_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_inc_acquire(atomic_long_t *v)
{
@@ -311,6 +621,16 @@ raw_atomic_long_fetch_inc_acquire(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_inc_release() - atomic increment with release ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + 1) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_inc_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_inc_release(atomic_long_t *v)
{
@@ -321,6 +641,16 @@ raw_atomic_long_fetch_inc_release(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_inc_relaxed() - atomic increment with relaxed ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + 1) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_inc_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v)
{
@@ -331,6 +661,16 @@ raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_dec() - atomic decrement with relaxed ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - 1) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_dec() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic_long_dec(atomic_long_t *v)
{
@@ -341,6 +681,16 @@ raw_atomic_long_dec(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_dec_return() - atomic decrement with full ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_dec_return() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
raw_atomic_long_dec_return(atomic_long_t *v)
{
@@ -351,6 +701,16 @@ raw_atomic_long_dec_return(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_dec_return_acquire() - atomic decrement with acquire ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - 1) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_dec_return_acquire() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
raw_atomic_long_dec_return_acquire(atomic_long_t *v)
{
@@ -361,6 +721,16 @@ raw_atomic_long_dec_return_acquire(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_dec_return_release() - atomic decrement with release ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - 1) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_dec_return_release() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
raw_atomic_long_dec_return_release(atomic_long_t *v)
{
@@ -371,6 +741,16 @@ raw_atomic_long_dec_return_release(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_dec_return_relaxed() - atomic decrement with relaxed ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - 1) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_dec_return_relaxed() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
raw_atomic_long_dec_return_relaxed(atomic_long_t *v)
{
@@ -381,6 +761,16 @@ raw_atomic_long_dec_return_relaxed(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_dec() - atomic decrement with full ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_dec() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_dec(atomic_long_t *v)
{
@@ -391,6 +781,16 @@ raw_atomic_long_fetch_dec(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_dec_acquire() - atomic decrement with acquire ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - 1) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_dec_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_dec_acquire(atomic_long_t *v)
{
@@ -401,6 +801,16 @@ raw_atomic_long_fetch_dec_acquire(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_dec_release() - atomic decrement with release ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - 1) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_dec_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_dec_release(atomic_long_t *v)
{
@@ -411,6 +821,16 @@ raw_atomic_long_fetch_dec_release(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_dec_relaxed() - atomic decrement with relaxed ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - 1) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_dec_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v)
{
@@ -421,6 +841,17 @@ raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_and() - atomic bitwise AND with relaxed ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v & @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_and() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic_long_and(long i, atomic_long_t *v)
{
@@ -431,6 +862,17 @@ raw_atomic_long_and(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_and() - atomic bitwise AND with full ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v & @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_and() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_and(long i, atomic_long_t *v)
{
@@ -441,6 +883,17 @@ raw_atomic_long_fetch_and(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_and_acquire() - atomic bitwise AND with acquire ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v & @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_and_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
{
@@ -451,6 +904,17 @@ raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_and_release() - atomic bitwise AND with release ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v & @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_and_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_and_release(long i, atomic_long_t *v)
{
@@ -461,6 +925,17 @@ raw_atomic_long_fetch_and_release(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_and_relaxed() - atomic bitwise AND with relaxed ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v & @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_and_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
{
@@ -471,6 +946,17 @@ raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_andnot() - atomic bitwise AND NOT with relaxed ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v & ~@i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_andnot() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic_long_andnot(long i, atomic_long_t *v)
{
@@ -481,6 +967,17 @@ raw_atomic_long_andnot(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_andnot() - atomic bitwise AND NOT with full ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v & ~@i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_andnot() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_andnot(long i, atomic_long_t *v)
{
@@ -491,6 +988,17 @@ raw_atomic_long_fetch_andnot(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_andnot_acquire() - atomic bitwise AND NOT with acquire ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v & ~@i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_andnot_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
{
@@ -501,6 +1009,17 @@ raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_andnot_release() - atomic bitwise AND NOT with release ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v & ~@i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_andnot_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
{
@@ -511,6 +1030,17 @@ raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_andnot_relaxed() - atomic bitwise AND NOT with relaxed ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v & ~@i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_andnot_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
{
@@ -521,6 +1051,17 @@ raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_or() - atomic bitwise OR with relaxed ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v | @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_or() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic_long_or(long i, atomic_long_t *v)
{
@@ -531,6 +1072,17 @@ raw_atomic_long_or(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_or() - atomic bitwise OR with full ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v | @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_or() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_or(long i, atomic_long_t *v)
{
@@ -541,6 +1093,17 @@ raw_atomic_long_fetch_or(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_or_acquire() - atomic bitwise OR with acquire ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v | @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_or_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
{
@@ -551,6 +1114,17 @@ raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_or_release() - atomic bitwise OR with release ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v | @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_or_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_or_release(long i, atomic_long_t *v)
{
@@ -561,6 +1135,17 @@ raw_atomic_long_fetch_or_release(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_or_relaxed() - atomic bitwise OR with relaxed ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v | @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_or_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
{
@@ -571,6 +1156,17 @@ raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_xor() - atomic bitwise XOR with relaxed ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v ^ @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_xor() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic_long_xor(long i, atomic_long_t *v)
{
@@ -581,6 +1177,17 @@ raw_atomic_long_xor(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_xor() - atomic bitwise XOR with full ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v ^ @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_xor() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_xor(long i, atomic_long_t *v)
{
@@ -591,6 +1198,17 @@ raw_atomic_long_fetch_xor(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_xor_acquire() - atomic bitwise XOR with acquire ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v ^ @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_xor_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
{
@@ -601,6 +1219,17 @@ raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_xor_release() - atomic bitwise XOR with release ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v ^ @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_xor_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v)
{
@@ -611,6 +1240,17 @@ raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_xor_relaxed() - atomic bitwise XOR with relaxed ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v ^ @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_xor_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
{
@@ -621,6 +1261,17 @@ raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_xchg() - atomic exchange with full ordering
+ * @v: pointer to atomic_long_t
+ * @new: long value to assign
+ *
+ * Atomically updates @v to @new with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_xchg() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_xchg(atomic_long_t *v, long new)
{
@@ -631,6 +1282,17 @@ raw_atomic_long_xchg(atomic_long_t *v, long new)
#endif
}
+/**
+ * raw_atomic_long_xchg_acquire() - atomic exchange with acquire ordering
+ * @v: pointer to atomic_long_t
+ * @new: long value to assign
+ *
+ * Atomically updates @v to @new with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_xchg_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_xchg_acquire(atomic_long_t *v, long new)
{
@@ -641,6 +1303,17 @@ raw_atomic_long_xchg_acquire(atomic_long_t *v, long new)
#endif
}
+/**
+ * raw_atomic_long_xchg_release() - atomic exchange with release ordering
+ * @v: pointer to atomic_long_t
+ * @new: long value to assign
+ *
+ * Atomically updates @v to @new with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_xchg_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_xchg_release(atomic_long_t *v, long new)
{
@@ -651,6 +1324,17 @@ raw_atomic_long_xchg_release(atomic_long_t *v, long new)
#endif
}
+/**
+ * raw_atomic_long_xchg_relaxed() - atomic exchange with relaxed ordering
+ * @v: pointer to atomic_long_t
+ * @new: long value to assign
+ *
+ * Atomically updates @v to @new with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_xchg_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_xchg_relaxed(atomic_long_t *v, long new)
{
@@ -661,6 +1345,18 @@ raw_atomic_long_xchg_relaxed(atomic_long_t *v, long new)
#endif
}
+/**
+ * raw_atomic_long_cmpxchg() - atomic compare and exchange with full ordering
+ * @v: pointer to atomic_long_t
+ * @old: long value to compare with
+ * @new: long value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_cmpxchg() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
{
@@ -671,6 +1367,18 @@ raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
#endif
}
+/**
+ * raw_atomic_long_cmpxchg_acquire() - atomic compare and exchange with acquire ordering
+ * @v: pointer to atomic_long_t
+ * @old: long value to compare with
+ * @new: long value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_cmpxchg_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
{
@@ -681,6 +1389,18 @@ raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
#endif
}
+/**
+ * raw_atomic_long_cmpxchg_release() - atomic compare and exchange with release ordering
+ * @v: pointer to atomic_long_t
+ * @old: long value to compare with
+ * @new: long value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_cmpxchg_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
{
@@ -691,6 +1411,18 @@ raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
#endif
}
+/**
+ * raw_atomic_long_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering
+ * @v: pointer to atomic_long_t
+ * @old: long value to compare with
+ * @new: long value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_cmpxchg_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
{
@@ -701,6 +1433,19 @@ raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
#endif
}
+/**
+ * raw_atomic_long_try_cmpxchg() - atomic compare and exchange with full ordering
+ * @v: pointer to atomic_long_t
+ * @old: pointer to long value to compare with
+ * @new: long value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with full ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_try_cmpxchg() elsewhere.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
{
@@ -711,6 +1456,19 @@ raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
#endif
}
+/**
+ * raw_atomic_long_try_cmpxchg_acquire() - atomic compare and exchange with acquire ordering
+ * @v: pointer to atomic_long_t
+ * @old: pointer to long value to compare with
+ * @new: long value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with acquire ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_try_cmpxchg_acquire() elsewhere.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
{
@@ -721,6 +1479,19 @@ raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
#endif
}
+/**
+ * raw_atomic_long_try_cmpxchg_release() - atomic compare and exchange with release ordering
+ * @v: pointer to atomic_long_t
+ * @old: pointer to long value to compare with
+ * @new: long value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with release ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_try_cmpxchg_release() elsewhere.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
{
@@ -731,6 +1502,19 @@ raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
#endif
}
+/**
+ * raw_atomic_long_try_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering
+ * @v: pointer to atomic_long_t
+ * @old: pointer to long value to compare with
+ * @new: long value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with relaxed ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_try_cmpxchg_relaxed() elsewhere.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
{
@@ -741,6 +1525,17 @@ raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
#endif
}
+/**
+ * raw_atomic_long_sub_and_test() - atomic subtract and test if zero with full ordering
+ * @i: long value to subtract
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_sub_and_test() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
static __always_inline bool
raw_atomic_long_sub_and_test(long i, atomic_long_t *v)
{
@@ -751,6 +1546,16 @@ raw_atomic_long_sub_and_test(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_dec_and_test() - atomic decrement and test if zero with full ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_dec_and_test() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
static __always_inline bool
raw_atomic_long_dec_and_test(atomic_long_t *v)
{
@@ -761,6 +1566,16 @@ raw_atomic_long_dec_and_test(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_inc_and_test() - atomic increment and test if zero with full ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_inc_and_test() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
static __always_inline bool
raw_atomic_long_inc_and_test(atomic_long_t *v)
{
@@ -771,6 +1586,17 @@ raw_atomic_long_inc_and_test(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_add_negative() - atomic add and test if negative with full ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_add_negative() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
raw_atomic_long_add_negative(long i, atomic_long_t *v)
{
@@ -781,6 +1607,17 @@ raw_atomic_long_add_negative(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_add_negative_acquire() - atomic add and test if negative with acquire ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_add_negative_acquire() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v)
{
@@ -791,6 +1628,17 @@ raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_add_negative_release() - atomic add and test if negative with release ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_add_negative_release() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
raw_atomic_long_add_negative_release(long i, atomic_long_t *v)
{
@@ -801,6 +1649,17 @@ raw_atomic_long_add_negative_release(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_add_negative_relaxed() - atomic add and test if negative with relaxed ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_add_negative_relaxed() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
{
@@ -811,6 +1670,18 @@ raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_add_unless() - atomic add unless value with full ordering
+ * @v: pointer to atomic_long_t
+ * @a: long value to add
+ * @u: long value to compare with
+ *
+ * If (@v != @u), atomically updates @v to (@v + @a) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_add_unless() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
{
@@ -821,6 +1692,18 @@ raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
#endif
}
+/**
+ * raw_atomic_long_add_unless() - atomic add unless value with full ordering
+ * @v: pointer to atomic_long_t
+ * @a: long value to add
+ * @u: long value to compare with
+ *
+ * If (@v != @u), atomically updates @v to (@v + @a) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_add_unless() elsewhere.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
raw_atomic_long_add_unless(atomic_long_t *v, long a, long u)
{
@@ -831,6 +1714,16 @@ raw_atomic_long_add_unless(atomic_long_t *v, long a, long u)
#endif
}
+/**
+ * raw_atomic_long_inc_not_zero() - atomic increment unless zero with full ordering
+ * @v: pointer to atomic_long_t
+ *
+ * If (@v != 0), atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_inc_not_zero() elsewhere.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
raw_atomic_long_inc_not_zero(atomic_long_t *v)
{
@@ -841,6 +1734,16 @@ raw_atomic_long_inc_not_zero(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_inc_unless_negative() - atomic increment unless negative with full ordering
+ * @v: pointer to atomic_long_t
+ *
+ * If (@v >= 0), atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_inc_unless_negative() elsewhere.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
raw_atomic_long_inc_unless_negative(atomic_long_t *v)
{
@@ -851,6 +1754,16 @@ raw_atomic_long_inc_unless_negative(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_dec_unless_positive() - atomic decrement unless positive with full ordering
+ * @v: pointer to atomic_long_t
+ *
+ * If (@v <= 0), atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_dec_unless_positive() elsewhere.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
raw_atomic_long_dec_unless_positive(atomic_long_t *v)
{
@@ -861,6 +1774,16 @@ raw_atomic_long_dec_unless_positive(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_dec_if_positive() - atomic decrement if positive with full ordering
+ * @v: pointer to atomic_long_t
+ *
+ * If (@v > 0), atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_dec_if_positive() elsewhere.
+ *
+ * Return: The old value of (@v - 1), regardless of whether @v was updated.
+ */
static __always_inline long
raw_atomic_long_dec_if_positive(atomic_long_t *v)
{
@@ -872,4 +1795,4 @@ raw_atomic_long_dec_if_positive(atomic_long_t *v)
}
#endif /* _LINUX_ATOMIC_LONG_H */
-// e785d25cc3f220b7d473d36aac9da85dd7eb13a8
+// 029d2e3a493086671e874a4c2e0e42084be42403
diff --git a/scripts/atomic/atomic-tbl.sh b/scripts/atomic/atomic-tbl.sh
index 81d5c32039dd4..d4d4b474e8d56 100755
--- a/scripts/atomic/atomic-tbl.sh
+++ b/scripts/atomic/atomic-tbl.sh
@@ -36,9 +36,16 @@ meta_has_relaxed()
meta_in "$1" "BFIR"
}
-#find_fallback_template(pfx, name, sfx, order)
-find_fallback_template()
+#meta_is_implicitly_relaxed(meta)
+meta_is_implicitly_relaxed()
+{
+ meta_in "$1" "vls"
+}
+
+#find_template(tmpltype, pfx, name, sfx, order)
+find_template()
{
+ local tmpltype="$1"; shift
local pfx="$1"; shift
local name="$1"; shift
local sfx="$1"; shift
@@ -52,8 +59,8 @@ find_fallback_template()
#
# Start at the most specific, and fall back to the most general. Once
# we find a specific fallback, don't bother looking for more.
- for base in "${pfx}${name}${sfx}${order}" "${name}"; do
- file="${ATOMICDIR}/fallbacks/${base}"
+ for base in "${pfx}${name}${sfx}${order}" "${pfx}${name}${sfx}" "${name}"; do
+ file="${ATOMICDIR}/${tmpltype}/${base}"
if [ -f "${file}" ]; then
printf "${file}"
@@ -62,6 +69,18 @@ find_fallback_template()
done
}
+#find_fallback_template(pfx, name, sfx, order)
+find_fallback_template()
+{
+ find_template "fallbacks" "$@"
+}
+
+#find_kerneldoc_template(pfx, name, sfx, order)
+find_kerneldoc_template()
+{
+ find_template "kerneldoc" "$@"
+}
+
#gen_ret_type(meta, int)
gen_ret_type() {
local meta="$1"; shift
@@ -142,6 +161,91 @@ gen_args()
done
}
+#gen_desc_return(meta)
+gen_desc_return()
+{
+ local meta="$1"; shift
+
+ case "${meta}" in
+ [v])
+ printf "Return: Nothing."
+ ;;
+ [Ff])
+ printf "Return: The original value of @v."
+ ;;
+ [R])
+ printf "Return: The updated value of @v."
+ ;;
+ [l])
+ printf "Return: The value of @v."
+ ;;
+ esac
+}
+
+#gen_template_kerneldoc(template, class, meta, pfx, name, sfx, order, atomic, int, args...)
+gen_template_kerneldoc()
+{
+ local template="$1"; shift
+ local class="$1"; shift
+ local meta="$1"; shift
+ local pfx="$1"; shift
+ local name="$1"; shift
+ local sfx="$1"; shift
+ local order="$1"; shift
+ local atomic="$1"; shift
+ local int="$1"; shift
+
+ local atomicname="${atomic}_${pfx}${name}${sfx}${order}"
+
+ local ret="$(gen_ret_type "${meta}" "${int}")"
+ local retstmt="$(gen_ret_stmt "${meta}")"
+ local params="$(gen_params "${int}" "${atomic}" "$@")"
+ local args="$(gen_args "$@")"
+ local desc_order=""
+	local desc_noinstr=""
+ local desc_return=""
+
+ if [ ! -z "${order}" ]; then
+ desc_order="${order##_}"
+ elif meta_is_implicitly_relaxed "${meta}"; then
+ desc_order="relaxed"
+ else
+ desc_order="full"
+ fi
+
+ if [ -z "${class}" ]; then
+ desc_noinstr="Unsafe to use in noinstr code; use raw_${atomicname}() there."
+ else
+ desc_noinstr="Safe to use in noinstr code; prefer ${atomicname}() elsewhere."
+ fi
+
+ desc_return="$(gen_desc_return "${meta}")"
+
+ . ${template}
+}
+
+#gen_kerneldoc(class, meta, pfx, name, sfx, order, atomic, int, args...)
+gen_kerneldoc()
+{
+ local class="$1"; shift
+ local meta="$1"; shift
+ local pfx="$1"; shift
+ local name="$1"; shift
+ local sfx="$1"; shift
+ local order="$1"; shift
+
+ local atomicname="${atomic}_${pfx}${name}${sfx}${order}"
+
+ local tmpl="$(find_kerneldoc_template "${pfx}" "${name}" "${sfx}" "${order}")"
+ if [ -z "${tmpl}" ]; then
+ printf "/*\n"
+ printf " * No kerneldoc available for ${class}${atomicname}\n"
+ printf " */\n"
+ else
+ gen_template_kerneldoc "${tmpl}" "${class}" "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "$@"
+ fi
+}
+
#gen_proto_order_variants(meta, pfx, name, sfx, ...)
gen_proto_order_variants()
{
diff --git a/scripts/atomic/gen-atomic-fallback.sh b/scripts/atomic/gen-atomic-fallback.sh
index 2b470d31e3539..c0c8a85d7c81b 100755
--- a/scripts/atomic/gen-atomic-fallback.sh
+++ b/scripts/atomic/gen-atomic-fallback.sh
@@ -73,6 +73,8 @@ gen_proto_order_variant()
local params="$(gen_params "${int}" "${atomic}" "$@")"
local args="$(gen_args "$@")"
+ gen_kerneldoc "raw_" "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "${atomic}" "${int}" "$@"
+
printf "static __always_inline ${ret}\n"
printf "raw_${atomicname}(${params})\n"
printf "{\n"
diff --git a/scripts/atomic/gen-atomic-instrumented.sh b/scripts/atomic/gen-atomic-instrumented.sh
index 93c949aa9e544..9d3863ceb4d48 100755
--- a/scripts/atomic/gen-atomic-instrumented.sh
+++ b/scripts/atomic/gen-atomic-instrumented.sh
@@ -67,6 +67,8 @@ gen_proto_order_variant()
local checks="$(gen_params_checks "${meta}" "${order}" "$@")"
local args="$(gen_args "$@")"
local retstmt="$(gen_ret_stmt "${meta}")"
+
+ gen_kerneldoc "" "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "${atomic}" "${int}" "$@"
cat <<EOF
static __always_inline ${ret}
diff --git a/scripts/atomic/gen-atomic-long.sh b/scripts/atomic/gen-atomic-long.sh
index af27a71b37ef1..9826be3ba9862 100755
--- a/scripts/atomic/gen-atomic-long.sh
+++ b/scripts/atomic/gen-atomic-long.sh
@@ -49,6 +49,8 @@ gen_proto_order_variant()
local argscast_64="$(gen_args_cast "s64" "atomic64" "$@")"
local retstmt="$(gen_ret_stmt "${meta}")"
+ gen_kerneldoc "raw_" "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "atomic_long" "long" "$@"
+
cat <<EOF
static __always_inline ${ret}
raw_atomic_long_${atomicname}(${params})
diff --git a/scripts/atomic/kerneldoc/add b/scripts/atomic/kerneldoc/add
new file mode 100644
index 0000000000000..991f3dafceea3
--- /dev/null
+++ b/scripts/atomic/kerneldoc/add
@@ -0,0 +1,13 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic add with ${desc_order} ordering
+ * @i: ${int} value to add
+ * @v: pointer to ${atomic}_t
+ *
+ * Atomically updates @v to (@v + @i) with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * ${desc_return}
+ */
+EOF
diff --git a/scripts/atomic/kerneldoc/add_negative b/scripts/atomic/kerneldoc/add_negative
new file mode 100644
index 0000000000000..f4ca1f05d1d81
--- /dev/null
+++ b/scripts/atomic/kerneldoc/add_negative
@@ -0,0 +1,13 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic add and test if negative with ${desc_order} ordering
+ * @i: ${int} value to add
+ * @v: pointer to ${atomic}_t
+ *
+ * Atomically updates @v to (@v + @i) with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
+EOF
diff --git a/scripts/atomic/kerneldoc/add_unless b/scripts/atomic/kerneldoc/add_unless
new file mode 100644
index 0000000000000..f828e5f6750c2
--- /dev/null
+++ b/scripts/atomic/kerneldoc/add_unless
@@ -0,0 +1,18 @@
+if [ -z "${pfx}" ]; then
+ desc_return="Return: @true if @v was updated, @false otherwise."
+fi
+
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic add unless value with ${desc_order} ordering
+ * @v: pointer to ${atomic}_t
+ * @a: ${int} value to add
+ * @u: ${int} value to compare with
+ *
+ * If (@v != @u), atomically updates @v to (@v + @a) with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * ${desc_return}
+ */
+EOF
diff --git a/scripts/atomic/kerneldoc/and b/scripts/atomic/kerneldoc/and
new file mode 100644
index 0000000000000..a923574351fc2
--- /dev/null
+++ b/scripts/atomic/kerneldoc/and
@@ -0,0 +1,13 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic bitwise AND with ${desc_order} ordering
+ * @i: ${int} value
+ * @v: pointer to ${atomic}_t
+ *
+ * Atomically updates @v to (@v & @i) with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * ${desc_return}
+ */
+EOF
diff --git a/scripts/atomic/kerneldoc/andnot b/scripts/atomic/kerneldoc/andnot
new file mode 100644
index 0000000000000..64bb509f866bf
--- /dev/null
+++ b/scripts/atomic/kerneldoc/andnot
@@ -0,0 +1,13 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic bitwise AND NOT with ${desc_order} ordering
+ * @i: ${int} value
+ * @v: pointer to ${atomic}_t
+ *
+ * Atomically updates @v to (@v & ~@i) with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * ${desc_return}
+ */
+EOF
diff --git a/scripts/atomic/kerneldoc/cmpxchg b/scripts/atomic/kerneldoc/cmpxchg
new file mode 100644
index 0000000000000..3bce328f50cff
--- /dev/null
+++ b/scripts/atomic/kerneldoc/cmpxchg
@@ -0,0 +1,14 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic compare and exchange with ${desc_order} ordering
+ * @v: pointer to ${atomic}_t
+ * @old: ${int} value to compare with
+ * @new: ${int} value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * Return: The original value of @v.
+ */
+EOF
diff --git a/scripts/atomic/kerneldoc/dec b/scripts/atomic/kerneldoc/dec
new file mode 100644
index 0000000000000..bbeecbc4c20a4
--- /dev/null
+++ b/scripts/atomic/kerneldoc/dec
@@ -0,0 +1,12 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic decrement with ${desc_order} ordering
+ * @v: pointer to ${atomic}_t
+ *
+ * Atomically updates @v to (@v - 1) with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * ${desc_return}
+ */
+EOF
diff --git a/scripts/atomic/kerneldoc/dec_and_test b/scripts/atomic/kerneldoc/dec_and_test
new file mode 100644
index 0000000000000..71bbd23ce4bca
--- /dev/null
+++ b/scripts/atomic/kerneldoc/dec_and_test
@@ -0,0 +1,12 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic decrement and test if zero with ${desc_order} ordering
+ * @v: pointer to ${atomic}_t
+ *
+ * Atomically updates @v to (@v - 1) with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
+EOF
diff --git a/scripts/atomic/kerneldoc/dec_if_positive b/scripts/atomic/kerneldoc/dec_if_positive
new file mode 100644
index 0000000000000..7c742866fb6b6
--- /dev/null
+++ b/scripts/atomic/kerneldoc/dec_if_positive
@@ -0,0 +1,12 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic decrement if positive with ${desc_order} ordering
+ * @v: pointer to ${atomic}_t
+ *
+ * If (@v > 0), atomically updates @v to (@v - 1) with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
+EOF
diff --git a/scripts/atomic/kerneldoc/dec_unless_positive b/scripts/atomic/kerneldoc/dec_unless_positive
new file mode 100644
index 0000000000000..ee73612f03547
--- /dev/null
+++ b/scripts/atomic/kerneldoc/dec_unless_positive
@@ -0,0 +1,12 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic decrement unless positive with ${desc_order} ordering
+ * @v: pointer to ${atomic}_t
+ *
+ * If (@v <= 0), atomically updates @v to (@v - 1) with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
+EOF
diff --git a/scripts/atomic/kerneldoc/inc b/scripts/atomic/kerneldoc/inc
new file mode 100644
index 0000000000000..9f14f1b3d2ef2
--- /dev/null
+++ b/scripts/atomic/kerneldoc/inc
@@ -0,0 +1,12 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic increment with ${desc_order} ordering
+ * @v: pointer to ${atomic}_t
+ *
+ * Atomically updates @v to (@v + 1) with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * ${desc_return}
+ */
+EOF
diff --git a/scripts/atomic/kerneldoc/inc_and_test b/scripts/atomic/kerneldoc/inc_and_test
new file mode 100644
index 0000000000000..971694d59bbd1
--- /dev/null
+++ b/scripts/atomic/kerneldoc/inc_and_test
@@ -0,0 +1,12 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic increment and test if zero with ${desc_order} ordering
+ * @v: pointer to ${atomic}_t
+ *
+ * Atomically updates @v to (@v + 1) with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
+EOF
diff --git a/scripts/atomic/kerneldoc/inc_not_zero b/scripts/atomic/kerneldoc/inc_not_zero
new file mode 100644
index 0000000000000..618be08e653e5
--- /dev/null
+++ b/scripts/atomic/kerneldoc/inc_not_zero
@@ -0,0 +1,12 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic increment unless zero with ${desc_order} ordering
+ * @v: pointer to ${atomic}_t
+ *
+ * If (@v != 0), atomically updates @v to (@v + 1) with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
+EOF
diff --git a/scripts/atomic/kerneldoc/inc_unless_negative b/scripts/atomic/kerneldoc/inc_unless_negative
new file mode 100644
index 0000000000000..597f23d4dc8dc
--- /dev/null
+++ b/scripts/atomic/kerneldoc/inc_unless_negative
@@ -0,0 +1,12 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic increment unless negative with ${desc_order} ordering
+ * @v: pointer to ${atomic}_t
+ *
+ * If (@v >= 0), atomically updates @v to (@v + 1) with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
+EOF
diff --git a/scripts/atomic/kerneldoc/or b/scripts/atomic/kerneldoc/or
new file mode 100644
index 0000000000000..55b33de504165
--- /dev/null
+++ b/scripts/atomic/kerneldoc/or
@@ -0,0 +1,13 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic bitwise OR with ${desc_order} ordering
+ * @i: ${int} value
+ * @v: pointer to ${atomic}_t
+ *
+ * Atomically updates @v to (@v | @i) with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * ${desc_return}
+ */
+EOF
diff --git a/scripts/atomic/kerneldoc/read b/scripts/atomic/kerneldoc/read
new file mode 100644
index 0000000000000..89fe6147c9643
--- /dev/null
+++ b/scripts/atomic/kerneldoc/read
@@ -0,0 +1,12 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic load with ${desc_order} ordering
+ * @v: pointer to ${atomic}_t
+ *
+ * Atomically loads the value of @v with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * Return: The value loaded from @v.
+ */
+EOF
diff --git a/scripts/atomic/kerneldoc/set b/scripts/atomic/kerneldoc/set
new file mode 100644
index 0000000000000..e82cb9ebbc423
--- /dev/null
+++ b/scripts/atomic/kerneldoc/set
@@ -0,0 +1,13 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic set with ${desc_order} ordering
+ * @v: pointer to ${atomic}_t
+ * @i: ${int} value to assign
+ *
+ * Atomically sets @v to @i with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * Return: Nothing.
+ */
+EOF
diff --git a/scripts/atomic/kerneldoc/sub b/scripts/atomic/kerneldoc/sub
new file mode 100644
index 0000000000000..3ba642d04407a
--- /dev/null
+++ b/scripts/atomic/kerneldoc/sub
@@ -0,0 +1,13 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic subtract with ${desc_order} ordering
+ * @i: ${int} value to subtract
+ * @v: pointer to ${atomic}_t
+ *
+ * Atomically updates @v to (@v - @i) with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * ${desc_return}
+ */
+EOF
diff --git a/scripts/atomic/kerneldoc/sub_and_test b/scripts/atomic/kerneldoc/sub_and_test
new file mode 100644
index 0000000000000..d3760f7749d4e
--- /dev/null
+++ b/scripts/atomic/kerneldoc/sub_and_test
@@ -0,0 +1,13 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic subtract and test if zero with ${desc_order} ordering
+ * @i: ${int} value to subtract
+ * @v: pointer to ${atomic}_t
+ *
+ * Atomically updates @v to (@v - @i) with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
+EOF
diff --git a/scripts/atomic/kerneldoc/try_cmpxchg b/scripts/atomic/kerneldoc/try_cmpxchg
new file mode 100644
index 0000000000000..296553206c06e
--- /dev/null
+++ b/scripts/atomic/kerneldoc/try_cmpxchg
@@ -0,0 +1,15 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic compare and exchange with ${desc_order} ordering
+ * @v: pointer to ${atomic}_t
+ * @old: pointer to ${int} value to compare with
+ * @new: ${int} value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with ${desc_order} ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * ${desc_noinstr}
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
+EOF
diff --git a/scripts/atomic/kerneldoc/xchg b/scripts/atomic/kerneldoc/xchg
new file mode 100644
index 0000000000000..75f04c085f252
--- /dev/null
+++ b/scripts/atomic/kerneldoc/xchg
@@ -0,0 +1,13 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic exchange with ${desc_order} ordering
+ * @v: pointer to ${atomic}_t
+ * @new: ${int} value to assign
+ *
+ * Atomically updates @v to @new with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * Return: The original value of @v.
+ */
+EOF
diff --git a/scripts/atomic/kerneldoc/xor b/scripts/atomic/kerneldoc/xor
new file mode 100644
index 0000000000000..8837270f2806d
--- /dev/null
+++ b/scripts/atomic/kerneldoc/xor
@@ -0,0 +1,13 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic bitwise XOR with ${desc_order} ordering
+ * @i: ${int} value
+ * @v: pointer to ${atomic}_t
+ *
+ * Atomically updates @v to (@v ^ @i) with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * ${desc_return}
+ */
+EOF
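For reference, gen_kerneldoc() simply fills in the shell variables when it
sources one of these templates; e.g. expanding the cmpxchg template for the
instrumented atomic_cmpxchg() (empty class, atomic "atomic", int "int", no
order suffix) yields roughly:
| /**
|  * atomic_cmpxchg() - atomic compare and exchange with full ordering
|  * @v: pointer to atomic_t
|  * @old: int value to compare with
|  * @new: int value to assign
|  *
|  * If (@v == @old), atomically updates @v to @new with full ordering.
|  *
|  * Unsafe to use in noinstr code; use raw_atomic_cmpxchg() there.
|  *
|  * Return: The original value of @v.
|  */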
--
2.30.2
The following commit has been merged into the locking/core branch of tip:
Commit-ID: 630399469ffcb937936644fbaa5daf61e700a329
Gitweb: https://git.kernel.org/tip/630399469ffcb937936644fbaa5daf61e700a329
Author: Mark Rutland <[email protected]>
AuthorDate: Mon, 05 Jun 2023 08:01:19 +01:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Mon, 05 Jun 2023 09:57:22 +02:00
locking/atomic: scripts: simplify raw_atomic_long*() definitions
Currently, atomic-long is split into two sections, one defining the
raw_atomic_long_*() ops for CONFIG_64BIT, and one defining the
raw_atomic_long_*() ops for !CONFIG_64BIT.
With many lines elided, this looks like:
| #ifdef CONFIG_64BIT
| ...
| static __always_inline bool
| raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
| {
| return raw_atomic64_try_cmpxchg(v, (s64 *)old, new);
| }
| ...
| #else /* CONFIG_64BIT */
| ...
| static __always_inline bool
| raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
| {
| return raw_atomic_try_cmpxchg(v, (int *)old, new);
| }
| ...
| #endif
The two definitions are spread far apart in the file, and duplicate the
prototype, making it hard to have a legible set of kerneldoc comments.
Make this simpler by defining the C prototype once, and writing the two
definitions inline. For example, the above becomes:
| static __always_inline bool
| raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
| {
| #ifdef CONFIG_64BIT
| return raw_atomic64_try_cmpxchg(v, (s64 *)old, new);
| #else
| return raw_atomic_try_cmpxchg(v, (int *)old, new);
| #endif
| }
As we now always have a single copy of the C prototype wrapping all the
potential definitions, there is an obvious single location for kerneldoc
comments. As a bonus, both the script and the generated file are
somewhat shorter.
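For illustration (the exact wording comes from the kerneldoc templates added
elsewhere in this series, not from this commit), a comment can then be emitted
once, directly above the single definition:
| /**
|  * raw_atomic_long_try_cmpxchg() - atomic compare and exchange with full ordering
|  * @v: pointer to atomic_long_t
|  * @old: pointer to long value to compare with
|  * @new: long value to assign
|  *
|  * If (@v == @old), atomically updates @v to @new with full ordering.
|  * Otherwise, updates @old to the current value of @v.
|  *
|  * Safe to use in noinstr code; prefer atomic_long_try_cmpxchg() elsewhere.
|  *
|  * Return: @true if the exchange occurred, @false otherwise.
|  */
| static __always_inline bool
| raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
| ...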
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
include/linux/atomic/atomic-long.h | 857 +++++++++++-----------------
scripts/atomic/gen-atomic-long.sh | 27 +-
2 files changed, 350 insertions(+), 534 deletions(-)
diff --git a/include/linux/atomic/atomic-long.h b/include/linux/atomic/atomic-long.h
index 92dc82c..63e0b40 100644
--- a/include/linux/atomic/atomic-long.h
+++ b/include/linux/atomic/atomic-long.h
@@ -21,1030 +21,855 @@ typedef atomic_t atomic_long_t;
#define atomic_long_cond_read_relaxed atomic_cond_read_relaxed
#endif
-#ifdef CONFIG_64BIT
-
-static __always_inline long
-raw_atomic_long_read(const atomic_long_t *v)
-{
- return raw_atomic64_read(v);
-}
-
-static __always_inline long
-raw_atomic_long_read_acquire(const atomic_long_t *v)
-{
- return raw_atomic64_read_acquire(v);
-}
-
-static __always_inline void
-raw_atomic_long_set(atomic_long_t *v, long i)
-{
- raw_atomic64_set(v, i);
-}
-
-static __always_inline void
-raw_atomic_long_set_release(atomic_long_t *v, long i)
-{
- raw_atomic64_set_release(v, i);
-}
-
-static __always_inline void
-raw_atomic_long_add(long i, atomic_long_t *v)
-{
- raw_atomic64_add(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_add_return(long i, atomic_long_t *v)
-{
- return raw_atomic64_add_return(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_add_return_acquire(long i, atomic_long_t *v)
-{
- return raw_atomic64_add_return_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_add_return_release(long i, atomic_long_t *v)
-{
- return raw_atomic64_add_return_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v)
-{
- return raw_atomic64_add_return_relaxed(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_add(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_add(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_add_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_add_release(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_add_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_add_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic_long_sub(long i, atomic_long_t *v)
-{
- raw_atomic64_sub(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_sub_return(long i, atomic_long_t *v)
-{
- return raw_atomic64_sub_return(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v)
-{
- return raw_atomic64_sub_return_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_sub_return_release(long i, atomic_long_t *v)
-{
- return raw_atomic64_sub_return_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
-{
- return raw_atomic64_sub_return_relaxed(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_sub(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_sub(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_sub_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_sub_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_sub_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic_long_inc(atomic_long_t *v)
-{
- raw_atomic64_inc(v);
-}
-
-static __always_inline long
-raw_atomic_long_inc_return(atomic_long_t *v)
-{
- return raw_atomic64_inc_return(v);
-}
-
-static __always_inline long
-raw_atomic_long_inc_return_acquire(atomic_long_t *v)
-{
- return raw_atomic64_inc_return_acquire(v);
-}
-
-static __always_inline long
-raw_atomic_long_inc_return_release(atomic_long_t *v)
-{
- return raw_atomic64_inc_return_release(v);
-}
-
-static __always_inline long
-raw_atomic_long_inc_return_relaxed(atomic_long_t *v)
-{
- return raw_atomic64_inc_return_relaxed(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_inc(atomic_long_t *v)
-{
- return raw_atomic64_fetch_inc(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_inc_acquire(atomic_long_t *v)
-{
- return raw_atomic64_fetch_inc_acquire(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_inc_release(atomic_long_t *v)
-{
- return raw_atomic64_fetch_inc_release(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v)
-{
- return raw_atomic64_fetch_inc_relaxed(v);
-}
-
-static __always_inline void
-raw_atomic_long_dec(atomic_long_t *v)
-{
- raw_atomic64_dec(v);
-}
-
-static __always_inline long
-raw_atomic_long_dec_return(atomic_long_t *v)
-{
- return raw_atomic64_dec_return(v);
-}
-
-static __always_inline long
-raw_atomic_long_dec_return_acquire(atomic_long_t *v)
-{
- return raw_atomic64_dec_return_acquire(v);
-}
-
-static __always_inline long
-raw_atomic_long_dec_return_release(atomic_long_t *v)
-{
- return raw_atomic64_dec_return_release(v);
-}
-
-static __always_inline long
-raw_atomic_long_dec_return_relaxed(atomic_long_t *v)
-{
- return raw_atomic64_dec_return_relaxed(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_dec(atomic_long_t *v)
-{
- return raw_atomic64_fetch_dec(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_dec_acquire(atomic_long_t *v)
-{
- return raw_atomic64_fetch_dec_acquire(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_dec_release(atomic_long_t *v)
-{
- return raw_atomic64_fetch_dec_release(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v)
-{
- return raw_atomic64_fetch_dec_relaxed(v);
-}
-
-static __always_inline void
-raw_atomic_long_and(long i, atomic_long_t *v)
-{
- raw_atomic64_and(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_and(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_and(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_and_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_and_release(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_and_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_and_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic_long_andnot(long i, atomic_long_t *v)
-{
- raw_atomic64_andnot(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_andnot(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_andnot(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_andnot_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_andnot_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_andnot_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic_long_or(long i, atomic_long_t *v)
-{
- raw_atomic64_or(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_or(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_or(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_or_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_or_release(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_or_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_or_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic_long_xor(long i, atomic_long_t *v)
-{
- raw_atomic64_xor(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_xor(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_xor(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_xor_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_xor_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
-{
- return raw_atomic64_fetch_xor_relaxed(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_xchg(atomic_long_t *v, long i)
-{
- return raw_atomic64_xchg(v, i);
-}
-
-static __always_inline long
-raw_atomic_long_xchg_acquire(atomic_long_t *v, long i)
-{
- return raw_atomic64_xchg_acquire(v, i);
-}
-
-static __always_inline long
-raw_atomic_long_xchg_release(atomic_long_t *v, long i)
-{
- return raw_atomic64_xchg_release(v, i);
-}
-
-static __always_inline long
-raw_atomic_long_xchg_relaxed(atomic_long_t *v, long i)
-{
- return raw_atomic64_xchg_relaxed(v, i);
-}
-
-static __always_inline long
-raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
-{
- return raw_atomic64_cmpxchg(v, old, new);
-}
-
-static __always_inline long
-raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
-{
- return raw_atomic64_cmpxchg_acquire(v, old, new);
-}
-
-static __always_inline long
-raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
-{
- return raw_atomic64_cmpxchg_release(v, old, new);
-}
-
-static __always_inline long
-raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
-{
- return raw_atomic64_cmpxchg_relaxed(v, old, new);
-}
-
-static __always_inline bool
-raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
-{
- return raw_atomic64_try_cmpxchg(v, (s64 *)old, new);
-}
-
-static __always_inline bool
-raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
-{
- return raw_atomic64_try_cmpxchg_acquire(v, (s64 *)old, new);
-}
-
-static __always_inline bool
-raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
-{
- return raw_atomic64_try_cmpxchg_release(v, (s64 *)old, new);
-}
-
-static __always_inline bool
-raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
-{
- return raw_atomic64_try_cmpxchg_relaxed(v, (s64 *)old, new);
-}
-
-static __always_inline bool
-raw_atomic_long_sub_and_test(long i, atomic_long_t *v)
-{
- return raw_atomic64_sub_and_test(i, v);
-}
-
-static __always_inline bool
-raw_atomic_long_dec_and_test(atomic_long_t *v)
-{
- return raw_atomic64_dec_and_test(v);
-}
-
-static __always_inline bool
-raw_atomic_long_inc_and_test(atomic_long_t *v)
-{
- return raw_atomic64_inc_and_test(v);
-}
-
-static __always_inline bool
-raw_atomic_long_add_negative(long i, atomic_long_t *v)
-{
- return raw_atomic64_add_negative(i, v);
-}
-
-static __always_inline bool
-raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v)
-{
- return raw_atomic64_add_negative_acquire(i, v);
-}
-
-static __always_inline bool
-raw_atomic_long_add_negative_release(long i, atomic_long_t *v)
-{
- return raw_atomic64_add_negative_release(i, v);
-}
-
-static __always_inline bool
-raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
-{
- return raw_atomic64_add_negative_relaxed(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
-{
- return raw_atomic64_fetch_add_unless(v, a, u);
-}
-
-static __always_inline bool
-raw_atomic_long_add_unless(atomic_long_t *v, long a, long u)
-{
- return raw_atomic64_add_unless(v, a, u);
-}
-
-static __always_inline bool
-raw_atomic_long_inc_not_zero(atomic_long_t *v)
-{
- return raw_atomic64_inc_not_zero(v);
-}
-
-static __always_inline bool
-raw_atomic_long_inc_unless_negative(atomic_long_t *v)
-{
- return raw_atomic64_inc_unless_negative(v);
-}
-
-static __always_inline bool
-raw_atomic_long_dec_unless_positive(atomic_long_t *v)
-{
- return raw_atomic64_dec_unless_positive(v);
-}
-
-static __always_inline long
-raw_atomic_long_dec_if_positive(atomic_long_t *v)
-{
- return raw_atomic64_dec_if_positive(v);
-}
-
-#else /* CONFIG_64BIT */
-
static __always_inline long
raw_atomic_long_read(const atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_read(v);
+#else
return raw_atomic_read(v);
+#endif
}
static __always_inline long
raw_atomic_long_read_acquire(const atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_read_acquire(v);
+#else
return raw_atomic_read_acquire(v);
+#endif
}
static __always_inline void
raw_atomic_long_set(atomic_long_t *v, long i)
{
+#ifdef CONFIG_64BIT
+ raw_atomic64_set(v, i);
+#else
raw_atomic_set(v, i);
+#endif
}
static __always_inline void
raw_atomic_long_set_release(atomic_long_t *v, long i)
{
+#ifdef CONFIG_64BIT
+ raw_atomic64_set_release(v, i);
+#else
raw_atomic_set_release(v, i);
+#endif
}
static __always_inline void
raw_atomic_long_add(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ raw_atomic64_add(i, v);
+#else
raw_atomic_add(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_add_return(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_add_return(i, v);
+#else
return raw_atomic_add_return(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_add_return_acquire(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_add_return_acquire(i, v);
+#else
return raw_atomic_add_return_acquire(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_add_return_release(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_add_return_release(i, v);
+#else
return raw_atomic_add_return_release(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_add_return_relaxed(i, v);
+#else
return raw_atomic_add_return_relaxed(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_add(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_add(i, v);
+#else
return raw_atomic_fetch_add(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_add_acquire(i, v);
+#else
return raw_atomic_fetch_add_acquire(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_add_release(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_add_release(i, v);
+#else
return raw_atomic_fetch_add_release(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_add_relaxed(i, v);
+#else
return raw_atomic_fetch_add_relaxed(i, v);
+#endif
}
static __always_inline void
raw_atomic_long_sub(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ raw_atomic64_sub(i, v);
+#else
raw_atomic_sub(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_sub_return(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_sub_return(i, v);
+#else
return raw_atomic_sub_return(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_sub_return_acquire(i, v);
+#else
return raw_atomic_sub_return_acquire(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_sub_return_release(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_sub_return_release(i, v);
+#else
return raw_atomic_sub_return_release(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_sub_return_relaxed(i, v);
+#else
return raw_atomic_sub_return_relaxed(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_sub(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_sub(i, v);
+#else
return raw_atomic_fetch_sub(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_sub_acquire(i, v);
+#else
return raw_atomic_fetch_sub_acquire(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_sub_release(i, v);
+#else
return raw_atomic_fetch_sub_release(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_sub_relaxed(i, v);
+#else
return raw_atomic_fetch_sub_relaxed(i, v);
+#endif
}
static __always_inline void
raw_atomic_long_inc(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ raw_atomic64_inc(v);
+#else
raw_atomic_inc(v);
+#endif
}
static __always_inline long
raw_atomic_long_inc_return(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_inc_return(v);
+#else
return raw_atomic_inc_return(v);
+#endif
}
static __always_inline long
raw_atomic_long_inc_return_acquire(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_inc_return_acquire(v);
+#else
return raw_atomic_inc_return_acquire(v);
+#endif
}
static __always_inline long
raw_atomic_long_inc_return_release(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_inc_return_release(v);
+#else
return raw_atomic_inc_return_release(v);
+#endif
}
static __always_inline long
raw_atomic_long_inc_return_relaxed(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_inc_return_relaxed(v);
+#else
return raw_atomic_inc_return_relaxed(v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_inc(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_inc(v);
+#else
return raw_atomic_fetch_inc(v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_inc_acquire(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_inc_acquire(v);
+#else
return raw_atomic_fetch_inc_acquire(v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_inc_release(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_inc_release(v);
+#else
return raw_atomic_fetch_inc_release(v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_inc_relaxed(v);
+#else
return raw_atomic_fetch_inc_relaxed(v);
+#endif
}
static __always_inline void
raw_atomic_long_dec(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ raw_atomic64_dec(v);
+#else
raw_atomic_dec(v);
+#endif
}
static __always_inline long
raw_atomic_long_dec_return(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_dec_return(v);
+#else
return raw_atomic_dec_return(v);
+#endif
}
static __always_inline long
raw_atomic_long_dec_return_acquire(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_dec_return_acquire(v);
+#else
return raw_atomic_dec_return_acquire(v);
+#endif
}
static __always_inline long
raw_atomic_long_dec_return_release(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_dec_return_release(v);
+#else
return raw_atomic_dec_return_release(v);
+#endif
}
static __always_inline long
raw_atomic_long_dec_return_relaxed(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_dec_return_relaxed(v);
+#else
return raw_atomic_dec_return_relaxed(v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_dec(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_dec(v);
+#else
return raw_atomic_fetch_dec(v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_dec_acquire(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_dec_acquire(v);
+#else
return raw_atomic_fetch_dec_acquire(v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_dec_release(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_dec_release(v);
+#else
return raw_atomic_fetch_dec_release(v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_dec_relaxed(v);
+#else
return raw_atomic_fetch_dec_relaxed(v);
+#endif
}
static __always_inline void
raw_atomic_long_and(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ raw_atomic64_and(i, v);
+#else
raw_atomic_and(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_and(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_and(i, v);
+#else
return raw_atomic_fetch_and(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_and_acquire(i, v);
+#else
return raw_atomic_fetch_and_acquire(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_and_release(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_and_release(i, v);
+#else
return raw_atomic_fetch_and_release(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_and_relaxed(i, v);
+#else
return raw_atomic_fetch_and_relaxed(i, v);
+#endif
}
static __always_inline void
raw_atomic_long_andnot(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ raw_atomic64_andnot(i, v);
+#else
raw_atomic_andnot(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_andnot(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_andnot(i, v);
+#else
return raw_atomic_fetch_andnot(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_andnot_acquire(i, v);
+#else
return raw_atomic_fetch_andnot_acquire(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_andnot_release(i, v);
+#else
return raw_atomic_fetch_andnot_release(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_andnot_relaxed(i, v);
+#else
return raw_atomic_fetch_andnot_relaxed(i, v);
+#endif
}
static __always_inline void
raw_atomic_long_or(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ raw_atomic64_or(i, v);
+#else
raw_atomic_or(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_or(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_or(i, v);
+#else
return raw_atomic_fetch_or(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_or_acquire(i, v);
+#else
return raw_atomic_fetch_or_acquire(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_or_release(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_or_release(i, v);
+#else
return raw_atomic_fetch_or_release(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_or_relaxed(i, v);
+#else
return raw_atomic_fetch_or_relaxed(i, v);
+#endif
}
static __always_inline void
raw_atomic_long_xor(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ raw_atomic64_xor(i, v);
+#else
raw_atomic_xor(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_xor(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_xor(i, v);
+#else
return raw_atomic_fetch_xor(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_xor_acquire(i, v);
+#else
return raw_atomic_fetch_xor_acquire(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_xor_release(i, v);
+#else
return raw_atomic_fetch_xor_release(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_xor_relaxed(i, v);
+#else
return raw_atomic_fetch_xor_relaxed(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_xchg(atomic_long_t *v, long i)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_xchg(v, i);
+#else
return raw_atomic_xchg(v, i);
+#endif
}
static __always_inline long
raw_atomic_long_xchg_acquire(atomic_long_t *v, long i)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_xchg_acquire(v, i);
+#else
return raw_atomic_xchg_acquire(v, i);
+#endif
}
static __always_inline long
raw_atomic_long_xchg_release(atomic_long_t *v, long i)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_xchg_release(v, i);
+#else
return raw_atomic_xchg_release(v, i);
+#endif
}
static __always_inline long
raw_atomic_long_xchg_relaxed(atomic_long_t *v, long i)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_xchg_relaxed(v, i);
+#else
return raw_atomic_xchg_relaxed(v, i);
+#endif
}
static __always_inline long
raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_cmpxchg(v, old, new);
+#else
return raw_atomic_cmpxchg(v, old, new);
+#endif
}
static __always_inline long
raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_cmpxchg_acquire(v, old, new);
+#else
return raw_atomic_cmpxchg_acquire(v, old, new);
+#endif
}
static __always_inline long
raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_cmpxchg_release(v, old, new);
+#else
return raw_atomic_cmpxchg_release(v, old, new);
+#endif
}
static __always_inline long
raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_cmpxchg_relaxed(v, old, new);
+#else
return raw_atomic_cmpxchg_relaxed(v, old, new);
+#endif
}
static __always_inline bool
raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_try_cmpxchg(v, (s64 *)old, new);
+#else
return raw_atomic_try_cmpxchg(v, (int *)old, new);
+#endif
}
static __always_inline bool
raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_try_cmpxchg_acquire(v, (s64 *)old, new);
+#else
return raw_atomic_try_cmpxchg_acquire(v, (int *)old, new);
+#endif
}
static __always_inline bool
raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_try_cmpxchg_release(v, (s64 *)old, new);
+#else
return raw_atomic_try_cmpxchg_release(v, (int *)old, new);
+#endif
}
static __always_inline bool
raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_try_cmpxchg_relaxed(v, (s64 *)old, new);
+#else
return raw_atomic_try_cmpxchg_relaxed(v, (int *)old, new);
+#endif
}
static __always_inline bool
raw_atomic_long_sub_and_test(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_sub_and_test(i, v);
+#else
return raw_atomic_sub_and_test(i, v);
+#endif
}
static __always_inline bool
raw_atomic_long_dec_and_test(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_dec_and_test(v);
+#else
return raw_atomic_dec_and_test(v);
+#endif
}
static __always_inline bool
raw_atomic_long_inc_and_test(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_inc_and_test(v);
+#else
return raw_atomic_inc_and_test(v);
+#endif
}
static __always_inline bool
raw_atomic_long_add_negative(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_add_negative(i, v);
+#else
return raw_atomic_add_negative(i, v);
+#endif
}
static __always_inline bool
raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_add_negative_acquire(i, v);
+#else
return raw_atomic_add_negative_acquire(i, v);
+#endif
}
static __always_inline bool
raw_atomic_long_add_negative_release(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_add_negative_release(i, v);
+#else
return raw_atomic_add_negative_release(i, v);
+#endif
}
static __always_inline bool
raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_add_negative_relaxed(i, v);
+#else
return raw_atomic_add_negative_relaxed(i, v);
+#endif
}
static __always_inline long
raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_fetch_add_unless(v, a, u);
+#else
return raw_atomic_fetch_add_unless(v, a, u);
+#endif
}
static __always_inline bool
raw_atomic_long_add_unless(atomic_long_t *v, long a, long u)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_add_unless(v, a, u);
+#else
return raw_atomic_add_unless(v, a, u);
+#endif
}
static __always_inline bool
raw_atomic_long_inc_not_zero(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_inc_not_zero(v);
+#else
return raw_atomic_inc_not_zero(v);
+#endif
}
static __always_inline bool
raw_atomic_long_inc_unless_negative(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_inc_unless_negative(v);
+#else
return raw_atomic_inc_unless_negative(v);
+#endif
}
static __always_inline bool
raw_atomic_long_dec_unless_positive(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_dec_unless_positive(v);
+#else
return raw_atomic_dec_unless_positive(v);
+#endif
}
static __always_inline long
raw_atomic_long_dec_if_positive(atomic_long_t *v)
{
+#ifdef CONFIG_64BIT
+ return raw_atomic64_dec_if_positive(v);
+#else
return raw_atomic_dec_if_positive(v);
+#endif
}
-#endif /* CONFIG_64BIT */
#endif /* _LINUX_ATOMIC_LONG_H */
-// 108784846d3bbbb201b8dabe621c5dc30b216206
+// ad09f849db0db5b30c82e497eeb9056a394c5f22
diff --git a/scripts/atomic/gen-atomic-long.sh b/scripts/atomic/gen-atomic-long.sh
index 1383217..af27a71 100755
--- a/scripts/atomic/gen-atomic-long.sh
+++ b/scripts/atomic/gen-atomic-long.sh
@@ -32,7 +32,7 @@ gen_args_cast()
done
}
-#gen_proto_order_variant(meta, pfx, name, sfx, order, atomic, int, arg...)
+#gen_proto_order_variant(meta, pfx, name, sfx, order, arg...)
gen_proto_order_variant()
{
local meta="$1"; shift
@@ -40,21 +40,24 @@ gen_proto_order_variant()
local name="$1"; shift
local sfx="$1"; shift
local order="$1"; shift
- local atomic="$1"; shift
- local int="$1"; shift
local atomicname="${pfx}${name}${sfx}${order}"
local ret="$(gen_ret_type "${meta}" "long")"
local params="$(gen_params "long" "atomic_long" "$@")"
- local argscast="$(gen_args_cast "${int}" "${atomic}" "$@")"
+ local argscast_32="$(gen_args_cast "int" "atomic" "$@")"
+ local argscast_64="$(gen_args_cast "s64" "atomic64" "$@")"
local retstmt="$(gen_ret_stmt "${meta}")"
cat <<EOF
static __always_inline ${ret}
raw_atomic_long_${atomicname}(${params})
{
- ${retstmt}raw_${atomic}_${atomicname}(${argscast});
+#ifdef CONFIG_64BIT
+ ${retstmt}raw_atomic64_${atomicname}(${argscast_64});
+#else
+ ${retstmt}raw_atomic_${atomicname}(${argscast_32});
+#endif
}
EOF
@@ -84,24 +87,12 @@ typedef atomic_t atomic_long_t;
#define atomic_long_cond_read_relaxed atomic_cond_read_relaxed
#endif
-#ifdef CONFIG_64BIT
-
-EOF
-
-grep '^[a-z]' "$1" | while read name meta args; do
- gen_proto "${meta}" "${name}" "atomic64" "s64" ${args}
-done
-
-cat <<EOF
-#else /* CONFIG_64BIT */
-
EOF
grep '^[a-z]' "$1" | while read name meta args; do
- gen_proto "${meta}" "${name}" "atomic" "int" ${args}
+ gen_proto "${meta}" "${name}" ${args}
done
cat <<EOF
-#endif /* CONFIG_64BIT */
#endif /* _LINUX_ATOMIC_LONG_H */
EOF
The following commit has been merged into the locking/core branch of tip:
Commit-ID: 770345adc38485c688e5d832d82306a4c2da828c
Gitweb: https://git.kernel.org/tip/770345adc38485c688e5d832d82306a4c2da828c
Author: Mark Rutland <[email protected]>
AuthorDate: Mon, 05 Jun 2023 08:01:07 +01:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Mon, 05 Jun 2023 09:57:16 +02:00
locking/atomic: sh: add preprocessor symbols
Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of
RELAXED atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in terms
of FULL ordered atomics. Other atomics are optional, and don't exist in
some configurations (e.g. not all architectures implement the 128-bit
cmpxchg ops).
Subsequent patches will require that architectures define a preprocessor
symbol for any atomic (or ordering variant) which is optional. This will
make the fallback ifdeffery more robust, and simplify future changes.
Add the required definitions to arch/sh.
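As a minimal sketch (not part of this patch), the point of e.g.
"#define arch_atomic_fetch_add arch_atomic_fetch_add" is that the generic
headers can then test for the op reliably, along the lines of:
| #if defined(arch_atomic_fetch_add)
| 	return arch_atomic_fetch_add(i, v);
| #else
| 	/* ... fallback built from other arch_atomic_*() ops ... */
| #endif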
Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
arch/sh/include/asm/atomic-grb.h | 9 +++++++++
arch/sh/include/asm/atomic-irq.h | 9 +++++++++
arch/sh/include/asm/atomic-llsc.h | 9 +++++++++
3 files changed, 27 insertions(+)
diff --git a/arch/sh/include/asm/atomic-grb.h b/arch/sh/include/asm/atomic-grb.h
index 059791f..cf1c10f 100644
--- a/arch/sh/include/asm/atomic-grb.h
+++ b/arch/sh/include/asm/atomic-grb.h
@@ -71,6 +71,11 @@ static inline int arch_atomic_fetch_##op(int i, atomic_t *v) \
ATOMIC_OPS(add)
ATOMIC_OPS(sub)
+#define arch_atomic_add_return arch_atomic_add_return
+#define arch_atomic_sub_return arch_atomic_sub_return
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+
#undef ATOMIC_OPS
#define ATOMIC_OPS(op) ATOMIC_OP(op) ATOMIC_FETCH_OP(op)
@@ -78,6 +83,10 @@ ATOMIC_OPS(and)
ATOMIC_OPS(or)
ATOMIC_OPS(xor)
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
#undef ATOMIC_OPS
#undef ATOMIC_FETCH_OP
#undef ATOMIC_OP_RETURN
diff --git a/arch/sh/include/asm/atomic-irq.h b/arch/sh/include/asm/atomic-irq.h
index 7665de9..b4090cc 100644
--- a/arch/sh/include/asm/atomic-irq.h
+++ b/arch/sh/include/asm/atomic-irq.h
@@ -55,6 +55,11 @@ static inline int arch_atomic_fetch_##op(int i, atomic_t *v) \
ATOMIC_OPS(add, +=)
ATOMIC_OPS(sub, -=)
+#define arch_atomic_add_return arch_atomic_add_return
+#define arch_atomic_sub_return arch_atomic_sub_return
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+
#undef ATOMIC_OPS
#define ATOMIC_OPS(op, c_op) \
ATOMIC_OP(op, c_op) \
@@ -64,6 +69,10 @@ ATOMIC_OPS(and, &=)
ATOMIC_OPS(or, |=)
ATOMIC_OPS(xor, ^=)
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
#undef ATOMIC_OPS
#undef ATOMIC_FETCH_OP
#undef ATOMIC_OP_RETURN
diff --git a/arch/sh/include/asm/atomic-llsc.h b/arch/sh/include/asm/atomic-llsc.h
index b63dcfb..9ef1fb1 100644
--- a/arch/sh/include/asm/atomic-llsc.h
+++ b/arch/sh/include/asm/atomic-llsc.h
@@ -73,6 +73,11 @@ static inline int arch_atomic_fetch_##op(int i, atomic_t *v) \
ATOMIC_OPS(add)
ATOMIC_OPS(sub)
+#define arch_atomic_add_return arch_atomic_add_return
+#define arch_atomic_sub_return arch_atomic_sub_return
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+
#undef ATOMIC_OPS
#define ATOMIC_OPS(op) ATOMIC_OP(op) ATOMIC_FETCH_OP(op)
@@ -80,6 +85,10 @@ ATOMIC_OPS(and)
ATOMIC_OPS(or)
ATOMIC_OPS(xor)
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
#undef ATOMIC_OPS
#undef ATOMIC_FETCH_OP
#undef ATOMIC_OP_RETURN
The following commit has been merged into the locking/core branch of tip:
Commit-ID: 1815da1718aa4c062b94cf3fc09432f552e25768
Gitweb: https://git.kernel.org/tip/1815da1718aa4c062b94cf3fc09432f552e25768
Author: Mark Rutland <[email protected]>
AuthorDate: Mon, 05 Jun 2023 08:01:16 +01:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Mon, 05 Jun 2023 09:57:20 +02:00
locking/atomic: scripts: build raw_atomic_long*() directly
Now that arch_atomic*() usage is limited to the atomic headers, we no
longer have any users of arch_atomic_long_*(), and can generate
raw_atomic_long_*() directly.
Generate the raw_atomic_long_*() ops directly.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
include/linux/atomic.h | 2 +-
include/linux/atomic/atomic-long.h | 682 ++++++++++++++--------------
include/linux/atomic/atomic-raw.h | 512 +---------------------
scripts/atomic/gen-atomic-long.sh | 4 +-
scripts/atomic/gen-atomic-raw.sh | 4 +-
5 files changed, 345 insertions(+), 859 deletions(-)
diff --git a/include/linux/atomic.h b/include/linux/atomic.h
index 127f5dc..296cfae 100644
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -78,8 +78,8 @@
})
#include <linux/atomic/atomic-arch-fallback.h>
-#include <linux/atomic/atomic-long.h>
#include <linux/atomic/atomic-raw.h>
+#include <linux/atomic/atomic-long.h>
#include <linux/atomic/atomic-instrumented.h>
#endif /* _LINUX_ATOMIC_H */
diff --git a/include/linux/atomic/atomic-long.h b/include/linux/atomic/atomic-long.h
index 2fc51ba..92dc82c 100644
--- a/include/linux/atomic/atomic-long.h
+++ b/include/linux/atomic/atomic-long.h
@@ -24,1027 +24,1027 @@ typedef atomic_t atomic_long_t;
#ifdef CONFIG_64BIT
static __always_inline long
-arch_atomic_long_read(const atomic_long_t *v)
+raw_atomic_long_read(const atomic_long_t *v)
{
- return arch_atomic64_read(v);
+ return raw_atomic64_read(v);
}
static __always_inline long
-arch_atomic_long_read_acquire(const atomic_long_t *v)
+raw_atomic_long_read_acquire(const atomic_long_t *v)
{
- return arch_atomic64_read_acquire(v);
+ return raw_atomic64_read_acquire(v);
}
static __always_inline void
-arch_atomic_long_set(atomic_long_t *v, long i)
+raw_atomic_long_set(atomic_long_t *v, long i)
{
- arch_atomic64_set(v, i);
+ raw_atomic64_set(v, i);
}
static __always_inline void
-arch_atomic_long_set_release(atomic_long_t *v, long i)
+raw_atomic_long_set_release(atomic_long_t *v, long i)
{
- arch_atomic64_set_release(v, i);
+ raw_atomic64_set_release(v, i);
}
static __always_inline void
-arch_atomic_long_add(long i, atomic_long_t *v)
+raw_atomic_long_add(long i, atomic_long_t *v)
{
- arch_atomic64_add(i, v);
+ raw_atomic64_add(i, v);
}
static __always_inline long
-arch_atomic_long_add_return(long i, atomic_long_t *v)
+raw_atomic_long_add_return(long i, atomic_long_t *v)
{
- return arch_atomic64_add_return(i, v);
+ return raw_atomic64_add_return(i, v);
}
static __always_inline long
-arch_atomic_long_add_return_acquire(long i, atomic_long_t *v)
+raw_atomic_long_add_return_acquire(long i, atomic_long_t *v)
{
- return arch_atomic64_add_return_acquire(i, v);
+ return raw_atomic64_add_return_acquire(i, v);
}
static __always_inline long
-arch_atomic_long_add_return_release(long i, atomic_long_t *v)
+raw_atomic_long_add_return_release(long i, atomic_long_t *v)
{
- return arch_atomic64_add_return_release(i, v);
+ return raw_atomic64_add_return_release(i, v);
}
static __always_inline long
-arch_atomic_long_add_return_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v)
{
- return arch_atomic64_add_return_relaxed(i, v);
+ return raw_atomic64_add_return_relaxed(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_add(long i, atomic_long_t *v)
+raw_atomic_long_fetch_add(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_add(i, v);
+ return raw_atomic64_fetch_add(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
+raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_add_acquire(i, v);
+ return raw_atomic64_fetch_add_acquire(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_add_release(long i, atomic_long_t *v)
+raw_atomic_long_fetch_add_release(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_add_release(i, v);
+ return raw_atomic64_fetch_add_release(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_add_relaxed(i, v);
+ return raw_atomic64_fetch_add_relaxed(i, v);
}
static __always_inline void
-arch_atomic_long_sub(long i, atomic_long_t *v)
+raw_atomic_long_sub(long i, atomic_long_t *v)
{
- arch_atomic64_sub(i, v);
+ raw_atomic64_sub(i, v);
}
static __always_inline long
-arch_atomic_long_sub_return(long i, atomic_long_t *v)
+raw_atomic_long_sub_return(long i, atomic_long_t *v)
{
- return arch_atomic64_sub_return(i, v);
+ return raw_atomic64_sub_return(i, v);
}
static __always_inline long
-arch_atomic_long_sub_return_acquire(long i, atomic_long_t *v)
+raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v)
{
- return arch_atomic64_sub_return_acquire(i, v);
+ return raw_atomic64_sub_return_acquire(i, v);
}
static __always_inline long
-arch_atomic_long_sub_return_release(long i, atomic_long_t *v)
+raw_atomic_long_sub_return_release(long i, atomic_long_t *v)
{
- return arch_atomic64_sub_return_release(i, v);
+ return raw_atomic64_sub_return_release(i, v);
}
static __always_inline long
-arch_atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
{
- return arch_atomic64_sub_return_relaxed(i, v);
+ return raw_atomic64_sub_return_relaxed(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_sub(long i, atomic_long_t *v)
+raw_atomic_long_fetch_sub(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_sub(i, v);
+ return raw_atomic64_fetch_sub(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
+raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_sub_acquire(i, v);
+ return raw_atomic64_fetch_sub_acquire(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_sub_release(long i, atomic_long_t *v)
+raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_sub_release(i, v);
+ return raw_atomic64_fetch_sub_release(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_sub_relaxed(i, v);
+ return raw_atomic64_fetch_sub_relaxed(i, v);
}
static __always_inline void
-arch_atomic_long_inc(atomic_long_t *v)
+raw_atomic_long_inc(atomic_long_t *v)
{
- arch_atomic64_inc(v);
+ raw_atomic64_inc(v);
}
static __always_inline long
-arch_atomic_long_inc_return(atomic_long_t *v)
+raw_atomic_long_inc_return(atomic_long_t *v)
{
- return arch_atomic64_inc_return(v);
+ return raw_atomic64_inc_return(v);
}
static __always_inline long
-arch_atomic_long_inc_return_acquire(atomic_long_t *v)
+raw_atomic_long_inc_return_acquire(atomic_long_t *v)
{
- return arch_atomic64_inc_return_acquire(v);
+ return raw_atomic64_inc_return_acquire(v);
}
static __always_inline long
-arch_atomic_long_inc_return_release(atomic_long_t *v)
+raw_atomic_long_inc_return_release(atomic_long_t *v)
{
- return arch_atomic64_inc_return_release(v);
+ return raw_atomic64_inc_return_release(v);
}
static __always_inline long
-arch_atomic_long_inc_return_relaxed(atomic_long_t *v)
+raw_atomic_long_inc_return_relaxed(atomic_long_t *v)
{
- return arch_atomic64_inc_return_relaxed(v);
+ return raw_atomic64_inc_return_relaxed(v);
}
static __always_inline long
-arch_atomic_long_fetch_inc(atomic_long_t *v)
+raw_atomic_long_fetch_inc(atomic_long_t *v)
{
- return arch_atomic64_fetch_inc(v);
+ return raw_atomic64_fetch_inc(v);
}
static __always_inline long
-arch_atomic_long_fetch_inc_acquire(atomic_long_t *v)
+raw_atomic_long_fetch_inc_acquire(atomic_long_t *v)
{
- return arch_atomic64_fetch_inc_acquire(v);
+ return raw_atomic64_fetch_inc_acquire(v);
}
static __always_inline long
-arch_atomic_long_fetch_inc_release(atomic_long_t *v)
+raw_atomic_long_fetch_inc_release(atomic_long_t *v)
{
- return arch_atomic64_fetch_inc_release(v);
+ return raw_atomic64_fetch_inc_release(v);
}
static __always_inline long
-arch_atomic_long_fetch_inc_relaxed(atomic_long_t *v)
+raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v)
{
- return arch_atomic64_fetch_inc_relaxed(v);
+ return raw_atomic64_fetch_inc_relaxed(v);
}
static __always_inline void
-arch_atomic_long_dec(atomic_long_t *v)
+raw_atomic_long_dec(atomic_long_t *v)
{
- arch_atomic64_dec(v);
+ raw_atomic64_dec(v);
}
static __always_inline long
-arch_atomic_long_dec_return(atomic_long_t *v)
+raw_atomic_long_dec_return(atomic_long_t *v)
{
- return arch_atomic64_dec_return(v);
+ return raw_atomic64_dec_return(v);
}
static __always_inline long
-arch_atomic_long_dec_return_acquire(atomic_long_t *v)
+raw_atomic_long_dec_return_acquire(atomic_long_t *v)
{
- return arch_atomic64_dec_return_acquire(v);
+ return raw_atomic64_dec_return_acquire(v);
}
static __always_inline long
-arch_atomic_long_dec_return_release(atomic_long_t *v)
+raw_atomic_long_dec_return_release(atomic_long_t *v)
{
- return arch_atomic64_dec_return_release(v);
+ return raw_atomic64_dec_return_release(v);
}
static __always_inline long
-arch_atomic_long_dec_return_relaxed(atomic_long_t *v)
+raw_atomic_long_dec_return_relaxed(atomic_long_t *v)
{
- return arch_atomic64_dec_return_relaxed(v);
+ return raw_atomic64_dec_return_relaxed(v);
}
static __always_inline long
-arch_atomic_long_fetch_dec(atomic_long_t *v)
+raw_atomic_long_fetch_dec(atomic_long_t *v)
{
- return arch_atomic64_fetch_dec(v);
+ return raw_atomic64_fetch_dec(v);
}
static __always_inline long
-arch_atomic_long_fetch_dec_acquire(atomic_long_t *v)
+raw_atomic_long_fetch_dec_acquire(atomic_long_t *v)
{
- return arch_atomic64_fetch_dec_acquire(v);
+ return raw_atomic64_fetch_dec_acquire(v);
}
static __always_inline long
-arch_atomic_long_fetch_dec_release(atomic_long_t *v)
+raw_atomic_long_fetch_dec_release(atomic_long_t *v)
{
- return arch_atomic64_fetch_dec_release(v);
+ return raw_atomic64_fetch_dec_release(v);
}
static __always_inline long
-arch_atomic_long_fetch_dec_relaxed(atomic_long_t *v)
+raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v)
{
- return arch_atomic64_fetch_dec_relaxed(v);
+ return raw_atomic64_fetch_dec_relaxed(v);
}
static __always_inline void
-arch_atomic_long_and(long i, atomic_long_t *v)
+raw_atomic_long_and(long i, atomic_long_t *v)
{
- arch_atomic64_and(i, v);
+ raw_atomic64_and(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_and(long i, atomic_long_t *v)
+raw_atomic_long_fetch_and(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_and(i, v);
+ return raw_atomic64_fetch_and(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
+raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_and_acquire(i, v);
+ return raw_atomic64_fetch_and_acquire(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_and_release(long i, atomic_long_t *v)
+raw_atomic_long_fetch_and_release(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_and_release(i, v);
+ return raw_atomic64_fetch_and_release(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_and_relaxed(i, v);
+ return raw_atomic64_fetch_and_relaxed(i, v);
}
static __always_inline void
-arch_atomic_long_andnot(long i, atomic_long_t *v)
+raw_atomic_long_andnot(long i, atomic_long_t *v)
{
- arch_atomic64_andnot(i, v);
+ raw_atomic64_andnot(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_andnot(long i, atomic_long_t *v)
+raw_atomic_long_fetch_andnot(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_andnot(i, v);
+ return raw_atomic64_fetch_andnot(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
+raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_andnot_acquire(i, v);
+ return raw_atomic64_fetch_andnot_acquire(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
+raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_andnot_release(i, v);
+ return raw_atomic64_fetch_andnot_release(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_andnot_relaxed(i, v);
+ return raw_atomic64_fetch_andnot_relaxed(i, v);
}
static __always_inline void
-arch_atomic_long_or(long i, atomic_long_t *v)
+raw_atomic_long_or(long i, atomic_long_t *v)
{
- arch_atomic64_or(i, v);
+ raw_atomic64_or(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_or(long i, atomic_long_t *v)
+raw_atomic_long_fetch_or(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_or(i, v);
+ return raw_atomic64_fetch_or(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
+raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_or_acquire(i, v);
+ return raw_atomic64_fetch_or_acquire(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_or_release(long i, atomic_long_t *v)
+raw_atomic_long_fetch_or_release(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_or_release(i, v);
+ return raw_atomic64_fetch_or_release(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_or_relaxed(i, v);
+ return raw_atomic64_fetch_or_relaxed(i, v);
}
static __always_inline void
-arch_atomic_long_xor(long i, atomic_long_t *v)
+raw_atomic_long_xor(long i, atomic_long_t *v)
{
- arch_atomic64_xor(i, v);
+ raw_atomic64_xor(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_xor(long i, atomic_long_t *v)
+raw_atomic_long_fetch_xor(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_xor(i, v);
+ return raw_atomic64_fetch_xor(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
+raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_xor_acquire(i, v);
+ return raw_atomic64_fetch_xor_acquire(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_xor_release(long i, atomic_long_t *v)
+raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_xor_release(i, v);
+ return raw_atomic64_fetch_xor_release(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
{
- return arch_atomic64_fetch_xor_relaxed(i, v);
+ return raw_atomic64_fetch_xor_relaxed(i, v);
}
static __always_inline long
-arch_atomic_long_xchg(atomic_long_t *v, long i)
+raw_atomic_long_xchg(atomic_long_t *v, long i)
{
- return arch_atomic64_xchg(v, i);
+ return raw_atomic64_xchg(v, i);
}
static __always_inline long
-arch_atomic_long_xchg_acquire(atomic_long_t *v, long i)
+raw_atomic_long_xchg_acquire(atomic_long_t *v, long i)
{
- return arch_atomic64_xchg_acquire(v, i);
+ return raw_atomic64_xchg_acquire(v, i);
}
static __always_inline long
-arch_atomic_long_xchg_release(atomic_long_t *v, long i)
+raw_atomic_long_xchg_release(atomic_long_t *v, long i)
{
- return arch_atomic64_xchg_release(v, i);
+ return raw_atomic64_xchg_release(v, i);
}
static __always_inline long
-arch_atomic_long_xchg_relaxed(atomic_long_t *v, long i)
+raw_atomic_long_xchg_relaxed(atomic_long_t *v, long i)
{
- return arch_atomic64_xchg_relaxed(v, i);
+ return raw_atomic64_xchg_relaxed(v, i);
}
static __always_inline long
-arch_atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
+raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
{
- return arch_atomic64_cmpxchg(v, old, new);
+ return raw_atomic64_cmpxchg(v, old, new);
}
static __always_inline long
-arch_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
+raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
{
- return arch_atomic64_cmpxchg_acquire(v, old, new);
+ return raw_atomic64_cmpxchg_acquire(v, old, new);
}
static __always_inline long
-arch_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
+raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
{
- return arch_atomic64_cmpxchg_release(v, old, new);
+ return raw_atomic64_cmpxchg_release(v, old, new);
}
static __always_inline long
-arch_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
+raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
{
- return arch_atomic64_cmpxchg_relaxed(v, old, new);
+ return raw_atomic64_cmpxchg_relaxed(v, old, new);
}
static __always_inline bool
-arch_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
+raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
{
- return arch_atomic64_try_cmpxchg(v, (s64 *)old, new);
+ return raw_atomic64_try_cmpxchg(v, (s64 *)old, new);
}
static __always_inline bool
-arch_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
+raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
{
- return arch_atomic64_try_cmpxchg_acquire(v, (s64 *)old, new);
+ return raw_atomic64_try_cmpxchg_acquire(v, (s64 *)old, new);
}
static __always_inline bool
-arch_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
+raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
{
- return arch_atomic64_try_cmpxchg_release(v, (s64 *)old, new);
+ return raw_atomic64_try_cmpxchg_release(v, (s64 *)old, new);
}
static __always_inline bool
-arch_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
+raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
{
- return arch_atomic64_try_cmpxchg_relaxed(v, (s64 *)old, new);
+ return raw_atomic64_try_cmpxchg_relaxed(v, (s64 *)old, new);
}
static __always_inline bool
-arch_atomic_long_sub_and_test(long i, atomic_long_t *v)
+raw_atomic_long_sub_and_test(long i, atomic_long_t *v)
{
- return arch_atomic64_sub_and_test(i, v);
+ return raw_atomic64_sub_and_test(i, v);
}
static __always_inline bool
-arch_atomic_long_dec_and_test(atomic_long_t *v)
+raw_atomic_long_dec_and_test(atomic_long_t *v)
{
- return arch_atomic64_dec_and_test(v);
+ return raw_atomic64_dec_and_test(v);
}
static __always_inline bool
-arch_atomic_long_inc_and_test(atomic_long_t *v)
+raw_atomic_long_inc_and_test(atomic_long_t *v)
{
- return arch_atomic64_inc_and_test(v);
+ return raw_atomic64_inc_and_test(v);
}
static __always_inline bool
-arch_atomic_long_add_negative(long i, atomic_long_t *v)
+raw_atomic_long_add_negative(long i, atomic_long_t *v)
{
- return arch_atomic64_add_negative(i, v);
+ return raw_atomic64_add_negative(i, v);
}
static __always_inline bool
-arch_atomic_long_add_negative_acquire(long i, atomic_long_t *v)
+raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v)
{
- return arch_atomic64_add_negative_acquire(i, v);
+ return raw_atomic64_add_negative_acquire(i, v);
}
static __always_inline bool
-arch_atomic_long_add_negative_release(long i, atomic_long_t *v)
+raw_atomic_long_add_negative_release(long i, atomic_long_t *v)
{
- return arch_atomic64_add_negative_release(i, v);
+ return raw_atomic64_add_negative_release(i, v);
}
static __always_inline bool
-arch_atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
{
- return arch_atomic64_add_negative_relaxed(i, v);
+ return raw_atomic64_add_negative_relaxed(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
+raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
{
- return arch_atomic64_fetch_add_unless(v, a, u);
+ return raw_atomic64_fetch_add_unless(v, a, u);
}
static __always_inline bool
-arch_atomic_long_add_unless(atomic_long_t *v, long a, long u)
+raw_atomic_long_add_unless(atomic_long_t *v, long a, long u)
{
- return arch_atomic64_add_unless(v, a, u);
+ return raw_atomic64_add_unless(v, a, u);
}
static __always_inline bool
-arch_atomic_long_inc_not_zero(atomic_long_t *v)
+raw_atomic_long_inc_not_zero(atomic_long_t *v)
{
- return arch_atomic64_inc_not_zero(v);
+ return raw_atomic64_inc_not_zero(v);
}
static __always_inline bool
-arch_atomic_long_inc_unless_negative(atomic_long_t *v)
+raw_atomic_long_inc_unless_negative(atomic_long_t *v)
{
- return arch_atomic64_inc_unless_negative(v);
+ return raw_atomic64_inc_unless_negative(v);
}
static __always_inline bool
-arch_atomic_long_dec_unless_positive(atomic_long_t *v)
+raw_atomic_long_dec_unless_positive(atomic_long_t *v)
{
- return arch_atomic64_dec_unless_positive(v);
+ return raw_atomic64_dec_unless_positive(v);
}
static __always_inline long
-arch_atomic_long_dec_if_positive(atomic_long_t *v)
+raw_atomic_long_dec_if_positive(atomic_long_t *v)
{
- return arch_atomic64_dec_if_positive(v);
+ return raw_atomic64_dec_if_positive(v);
}
#else /* CONFIG_64BIT */
static __always_inline long
-arch_atomic_long_read(const atomic_long_t *v)
+raw_atomic_long_read(const atomic_long_t *v)
{
- return arch_atomic_read(v);
+ return raw_atomic_read(v);
}
static __always_inline long
-arch_atomic_long_read_acquire(const atomic_long_t *v)
+raw_atomic_long_read_acquire(const atomic_long_t *v)
{
- return arch_atomic_read_acquire(v);
+ return raw_atomic_read_acquire(v);
}
static __always_inline void
-arch_atomic_long_set(atomic_long_t *v, long i)
+raw_atomic_long_set(atomic_long_t *v, long i)
{
- arch_atomic_set(v, i);
+ raw_atomic_set(v, i);
}
static __always_inline void
-arch_atomic_long_set_release(atomic_long_t *v, long i)
+raw_atomic_long_set_release(atomic_long_t *v, long i)
{
- arch_atomic_set_release(v, i);
+ raw_atomic_set_release(v, i);
}
static __always_inline void
-arch_atomic_long_add(long i, atomic_long_t *v)
+raw_atomic_long_add(long i, atomic_long_t *v)
{
- arch_atomic_add(i, v);
+ raw_atomic_add(i, v);
}
static __always_inline long
-arch_atomic_long_add_return(long i, atomic_long_t *v)
+raw_atomic_long_add_return(long i, atomic_long_t *v)
{
- return arch_atomic_add_return(i, v);
+ return raw_atomic_add_return(i, v);
}
static __always_inline long
-arch_atomic_long_add_return_acquire(long i, atomic_long_t *v)
+raw_atomic_long_add_return_acquire(long i, atomic_long_t *v)
{
- return arch_atomic_add_return_acquire(i, v);
+ return raw_atomic_add_return_acquire(i, v);
}
static __always_inline long
-arch_atomic_long_add_return_release(long i, atomic_long_t *v)
+raw_atomic_long_add_return_release(long i, atomic_long_t *v)
{
- return arch_atomic_add_return_release(i, v);
+ return raw_atomic_add_return_release(i, v);
}
static __always_inline long
-arch_atomic_long_add_return_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v)
{
- return arch_atomic_add_return_relaxed(i, v);
+ return raw_atomic_add_return_relaxed(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_add(long i, atomic_long_t *v)
+raw_atomic_long_fetch_add(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_add(i, v);
+ return raw_atomic_fetch_add(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
+raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_add_acquire(i, v);
+ return raw_atomic_fetch_add_acquire(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_add_release(long i, atomic_long_t *v)
+raw_atomic_long_fetch_add_release(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_add_release(i, v);
+ return raw_atomic_fetch_add_release(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_add_relaxed(i, v);
+ return raw_atomic_fetch_add_relaxed(i, v);
}
static __always_inline void
-arch_atomic_long_sub(long i, atomic_long_t *v)
+raw_atomic_long_sub(long i, atomic_long_t *v)
{
- arch_atomic_sub(i, v);
+ raw_atomic_sub(i, v);
}
static __always_inline long
-arch_atomic_long_sub_return(long i, atomic_long_t *v)
+raw_atomic_long_sub_return(long i, atomic_long_t *v)
{
- return arch_atomic_sub_return(i, v);
+ return raw_atomic_sub_return(i, v);
}
static __always_inline long
-arch_atomic_long_sub_return_acquire(long i, atomic_long_t *v)
+raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v)
{
- return arch_atomic_sub_return_acquire(i, v);
+ return raw_atomic_sub_return_acquire(i, v);
}
static __always_inline long
-arch_atomic_long_sub_return_release(long i, atomic_long_t *v)
+raw_atomic_long_sub_return_release(long i, atomic_long_t *v)
{
- return arch_atomic_sub_return_release(i, v);
+ return raw_atomic_sub_return_release(i, v);
}
static __always_inline long
-arch_atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
{
- return arch_atomic_sub_return_relaxed(i, v);
+ return raw_atomic_sub_return_relaxed(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_sub(long i, atomic_long_t *v)
+raw_atomic_long_fetch_sub(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_sub(i, v);
+ return raw_atomic_fetch_sub(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
+raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_sub_acquire(i, v);
+ return raw_atomic_fetch_sub_acquire(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_sub_release(long i, atomic_long_t *v)
+raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_sub_release(i, v);
+ return raw_atomic_fetch_sub_release(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_sub_relaxed(i, v);
+ return raw_atomic_fetch_sub_relaxed(i, v);
}
static __always_inline void
-arch_atomic_long_inc(atomic_long_t *v)
+raw_atomic_long_inc(atomic_long_t *v)
{
- arch_atomic_inc(v);
+ raw_atomic_inc(v);
}
static __always_inline long
-arch_atomic_long_inc_return(atomic_long_t *v)
+raw_atomic_long_inc_return(atomic_long_t *v)
{
- return arch_atomic_inc_return(v);
+ return raw_atomic_inc_return(v);
}
static __always_inline long
-arch_atomic_long_inc_return_acquire(atomic_long_t *v)
+raw_atomic_long_inc_return_acquire(atomic_long_t *v)
{
- return arch_atomic_inc_return_acquire(v);
+ return raw_atomic_inc_return_acquire(v);
}
static __always_inline long
-arch_atomic_long_inc_return_release(atomic_long_t *v)
+raw_atomic_long_inc_return_release(atomic_long_t *v)
{
- return arch_atomic_inc_return_release(v);
+ return raw_atomic_inc_return_release(v);
}
static __always_inline long
-arch_atomic_long_inc_return_relaxed(atomic_long_t *v)
+raw_atomic_long_inc_return_relaxed(atomic_long_t *v)
{
- return arch_atomic_inc_return_relaxed(v);
+ return raw_atomic_inc_return_relaxed(v);
}
static __always_inline long
-arch_atomic_long_fetch_inc(atomic_long_t *v)
+raw_atomic_long_fetch_inc(atomic_long_t *v)
{
- return arch_atomic_fetch_inc(v);
+ return raw_atomic_fetch_inc(v);
}
static __always_inline long
-arch_atomic_long_fetch_inc_acquire(atomic_long_t *v)
+raw_atomic_long_fetch_inc_acquire(atomic_long_t *v)
{
- return arch_atomic_fetch_inc_acquire(v);
+ return raw_atomic_fetch_inc_acquire(v);
}
static __always_inline long
-arch_atomic_long_fetch_inc_release(atomic_long_t *v)
+raw_atomic_long_fetch_inc_release(atomic_long_t *v)
{
- return arch_atomic_fetch_inc_release(v);
+ return raw_atomic_fetch_inc_release(v);
}
static __always_inline long
-arch_atomic_long_fetch_inc_relaxed(atomic_long_t *v)
+raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v)
{
- return arch_atomic_fetch_inc_relaxed(v);
+ return raw_atomic_fetch_inc_relaxed(v);
}
static __always_inline void
-arch_atomic_long_dec(atomic_long_t *v)
+raw_atomic_long_dec(atomic_long_t *v)
{
- arch_atomic_dec(v);
+ raw_atomic_dec(v);
}
static __always_inline long
-arch_atomic_long_dec_return(atomic_long_t *v)
+raw_atomic_long_dec_return(atomic_long_t *v)
{
- return arch_atomic_dec_return(v);
+ return raw_atomic_dec_return(v);
}
static __always_inline long
-arch_atomic_long_dec_return_acquire(atomic_long_t *v)
+raw_atomic_long_dec_return_acquire(atomic_long_t *v)
{
- return arch_atomic_dec_return_acquire(v);
+ return raw_atomic_dec_return_acquire(v);
}
static __always_inline long
-arch_atomic_long_dec_return_release(atomic_long_t *v)
+raw_atomic_long_dec_return_release(atomic_long_t *v)
{
- return arch_atomic_dec_return_release(v);
+ return raw_atomic_dec_return_release(v);
}
static __always_inline long
-arch_atomic_long_dec_return_relaxed(atomic_long_t *v)
+raw_atomic_long_dec_return_relaxed(atomic_long_t *v)
{
- return arch_atomic_dec_return_relaxed(v);
+ return raw_atomic_dec_return_relaxed(v);
}
static __always_inline long
-arch_atomic_long_fetch_dec(atomic_long_t *v)
+raw_atomic_long_fetch_dec(atomic_long_t *v)
{
- return arch_atomic_fetch_dec(v);
+ return raw_atomic_fetch_dec(v);
}
static __always_inline long
-arch_atomic_long_fetch_dec_acquire(atomic_long_t *v)
+raw_atomic_long_fetch_dec_acquire(atomic_long_t *v)
{
- return arch_atomic_fetch_dec_acquire(v);
+ return raw_atomic_fetch_dec_acquire(v);
}
static __always_inline long
-arch_atomic_long_fetch_dec_release(atomic_long_t *v)
+raw_atomic_long_fetch_dec_release(atomic_long_t *v)
{
- return arch_atomic_fetch_dec_release(v);
+ return raw_atomic_fetch_dec_release(v);
}
static __always_inline long
-arch_atomic_long_fetch_dec_relaxed(atomic_long_t *v)
+raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v)
{
- return arch_atomic_fetch_dec_relaxed(v);
+ return raw_atomic_fetch_dec_relaxed(v);
}
static __always_inline void
-arch_atomic_long_and(long i, atomic_long_t *v)
+raw_atomic_long_and(long i, atomic_long_t *v)
{
- arch_atomic_and(i, v);
+ raw_atomic_and(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_and(long i, atomic_long_t *v)
+raw_atomic_long_fetch_and(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_and(i, v);
+ return raw_atomic_fetch_and(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
+raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_and_acquire(i, v);
+ return raw_atomic_fetch_and_acquire(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_and_release(long i, atomic_long_t *v)
+raw_atomic_long_fetch_and_release(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_and_release(i, v);
+ return raw_atomic_fetch_and_release(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_and_relaxed(i, v);
+ return raw_atomic_fetch_and_relaxed(i, v);
}
static __always_inline void
-arch_atomic_long_andnot(long i, atomic_long_t *v)
+raw_atomic_long_andnot(long i, atomic_long_t *v)
{
- arch_atomic_andnot(i, v);
+ raw_atomic_andnot(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_andnot(long i, atomic_long_t *v)
+raw_atomic_long_fetch_andnot(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_andnot(i, v);
+ return raw_atomic_fetch_andnot(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
+raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_andnot_acquire(i, v);
+ return raw_atomic_fetch_andnot_acquire(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
+raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_andnot_release(i, v);
+ return raw_atomic_fetch_andnot_release(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_andnot_relaxed(i, v);
+ return raw_atomic_fetch_andnot_relaxed(i, v);
}
static __always_inline void
-arch_atomic_long_or(long i, atomic_long_t *v)
+raw_atomic_long_or(long i, atomic_long_t *v)
{
- arch_atomic_or(i, v);
+ raw_atomic_or(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_or(long i, atomic_long_t *v)
+raw_atomic_long_fetch_or(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_or(i, v);
+ return raw_atomic_fetch_or(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
+raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_or_acquire(i, v);
+ return raw_atomic_fetch_or_acquire(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_or_release(long i, atomic_long_t *v)
+raw_atomic_long_fetch_or_release(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_or_release(i, v);
+ return raw_atomic_fetch_or_release(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_or_relaxed(i, v);
+ return raw_atomic_fetch_or_relaxed(i, v);
}
static __always_inline void
-arch_atomic_long_xor(long i, atomic_long_t *v)
+raw_atomic_long_xor(long i, atomic_long_t *v)
{
- arch_atomic_xor(i, v);
+ raw_atomic_xor(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_xor(long i, atomic_long_t *v)
+raw_atomic_long_fetch_xor(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_xor(i, v);
+ return raw_atomic_fetch_xor(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
+raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_xor_acquire(i, v);
+ return raw_atomic_fetch_xor_acquire(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_xor_release(long i, atomic_long_t *v)
+raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_xor_release(i, v);
+ return raw_atomic_fetch_xor_release(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
{
- return arch_atomic_fetch_xor_relaxed(i, v);
+ return raw_atomic_fetch_xor_relaxed(i, v);
}
static __always_inline long
-arch_atomic_long_xchg(atomic_long_t *v, long i)
+raw_atomic_long_xchg(atomic_long_t *v, long i)
{
- return arch_atomic_xchg(v, i);
+ return raw_atomic_xchg(v, i);
}
static __always_inline long
-arch_atomic_long_xchg_acquire(atomic_long_t *v, long i)
+raw_atomic_long_xchg_acquire(atomic_long_t *v, long i)
{
- return arch_atomic_xchg_acquire(v, i);
+ return raw_atomic_xchg_acquire(v, i);
}
static __always_inline long
-arch_atomic_long_xchg_release(atomic_long_t *v, long i)
+raw_atomic_long_xchg_release(atomic_long_t *v, long i)
{
- return arch_atomic_xchg_release(v, i);
+ return raw_atomic_xchg_release(v, i);
}
static __always_inline long
-arch_atomic_long_xchg_relaxed(atomic_long_t *v, long i)
+raw_atomic_long_xchg_relaxed(atomic_long_t *v, long i)
{
- return arch_atomic_xchg_relaxed(v, i);
+ return raw_atomic_xchg_relaxed(v, i);
}
static __always_inline long
-arch_atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
+raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
{
- return arch_atomic_cmpxchg(v, old, new);
+ return raw_atomic_cmpxchg(v, old, new);
}
static __always_inline long
-arch_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
+raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
{
- return arch_atomic_cmpxchg_acquire(v, old, new);
+ return raw_atomic_cmpxchg_acquire(v, old, new);
}
static __always_inline long
-arch_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
+raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
{
- return arch_atomic_cmpxchg_release(v, old, new);
+ return raw_atomic_cmpxchg_release(v, old, new);
}
static __always_inline long
-arch_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
+raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
{
- return arch_atomic_cmpxchg_relaxed(v, old, new);
+ return raw_atomic_cmpxchg_relaxed(v, old, new);
}
static __always_inline bool
-arch_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
+raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
{
- return arch_atomic_try_cmpxchg(v, (int *)old, new);
+ return raw_atomic_try_cmpxchg(v, (int *)old, new);
}
static __always_inline bool
-arch_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
+raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
{
- return arch_atomic_try_cmpxchg_acquire(v, (int *)old, new);
+ return raw_atomic_try_cmpxchg_acquire(v, (int *)old, new);
}
static __always_inline bool
-arch_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
+raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
{
- return arch_atomic_try_cmpxchg_release(v, (int *)old, new);
+ return raw_atomic_try_cmpxchg_release(v, (int *)old, new);
}
static __always_inline bool
-arch_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
+raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
{
- return arch_atomic_try_cmpxchg_relaxed(v, (int *)old, new);
+ return raw_atomic_try_cmpxchg_relaxed(v, (int *)old, new);
}
static __always_inline bool
-arch_atomic_long_sub_and_test(long i, atomic_long_t *v)
+raw_atomic_long_sub_and_test(long i, atomic_long_t *v)
{
- return arch_atomic_sub_and_test(i, v);
+ return raw_atomic_sub_and_test(i, v);
}
static __always_inline bool
-arch_atomic_long_dec_and_test(atomic_long_t *v)
+raw_atomic_long_dec_and_test(atomic_long_t *v)
{
- return arch_atomic_dec_and_test(v);
+ return raw_atomic_dec_and_test(v);
}
static __always_inline bool
-arch_atomic_long_inc_and_test(atomic_long_t *v)
+raw_atomic_long_inc_and_test(atomic_long_t *v)
{
- return arch_atomic_inc_and_test(v);
+ return raw_atomic_inc_and_test(v);
}
static __always_inline bool
-arch_atomic_long_add_negative(long i, atomic_long_t *v)
+raw_atomic_long_add_negative(long i, atomic_long_t *v)
{
- return arch_atomic_add_negative(i, v);
+ return raw_atomic_add_negative(i, v);
}
static __always_inline bool
-arch_atomic_long_add_negative_acquire(long i, atomic_long_t *v)
+raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v)
{
- return arch_atomic_add_negative_acquire(i, v);
+ return raw_atomic_add_negative_acquire(i, v);
}
static __always_inline bool
-arch_atomic_long_add_negative_release(long i, atomic_long_t *v)
+raw_atomic_long_add_negative_release(long i, atomic_long_t *v)
{
- return arch_atomic_add_negative_release(i, v);
+ return raw_atomic_add_negative_release(i, v);
}
static __always_inline bool
-arch_atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
+raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
{
- return arch_atomic_add_negative_relaxed(i, v);
+ return raw_atomic_add_negative_relaxed(i, v);
}
static __always_inline long
-arch_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
+raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
{
- return arch_atomic_fetch_add_unless(v, a, u);
+ return raw_atomic_fetch_add_unless(v, a, u);
}
static __always_inline bool
-arch_atomic_long_add_unless(atomic_long_t *v, long a, long u)
+raw_atomic_long_add_unless(atomic_long_t *v, long a, long u)
{
- return arch_atomic_add_unless(v, a, u);
+ return raw_atomic_add_unless(v, a, u);
}
static __always_inline bool
-arch_atomic_long_inc_not_zero(atomic_long_t *v)
+raw_atomic_long_inc_not_zero(atomic_long_t *v)
{
- return arch_atomic_inc_not_zero(v);
+ return raw_atomic_inc_not_zero(v);
}
static __always_inline bool
-arch_atomic_long_inc_unless_negative(atomic_long_t *v)
+raw_atomic_long_inc_unless_negative(atomic_long_t *v)
{
- return arch_atomic_inc_unless_negative(v);
+ return raw_atomic_inc_unless_negative(v);
}
static __always_inline bool
-arch_atomic_long_dec_unless_positive(atomic_long_t *v)
+raw_atomic_long_dec_unless_positive(atomic_long_t *v)
{
- return arch_atomic_dec_unless_positive(v);
+ return raw_atomic_dec_unless_positive(v);
}
static __always_inline long
-arch_atomic_long_dec_if_positive(atomic_long_t *v)
+raw_atomic_long_dec_if_positive(atomic_long_t *v)
{
- return arch_atomic_dec_if_positive(v);
+ return raw_atomic_dec_if_positive(v);
}
#endif /* CONFIG_64BIT */
#endif /* _LINUX_ATOMIC_LONG_H */
-// a194c07d7d2f4b0e178d3c118c919775d5d65f50
+// 108784846d3bbbb201b8dabe621c5dc30b216206
diff --git a/include/linux/atomic/atomic-raw.h b/include/linux/atomic/atomic-raw.h
index 83ff026..8b2fc04 100644
--- a/include/linux/atomic/atomic-raw.h
+++ b/include/linux/atomic/atomic-raw.h
@@ -1026,516 +1026,6 @@ raw_atomic64_dec_if_positive(atomic64_t *v)
return arch_atomic64_dec_if_positive(v);
}
-static __always_inline long
-raw_atomic_long_read(const atomic_long_t *v)
-{
- return arch_atomic_long_read(v);
-}
-
-static __always_inline long
-raw_atomic_long_read_acquire(const atomic_long_t *v)
-{
- return arch_atomic_long_read_acquire(v);
-}
-
-static __always_inline void
-raw_atomic_long_set(atomic_long_t *v, long i)
-{
- arch_atomic_long_set(v, i);
-}
-
-static __always_inline void
-raw_atomic_long_set_release(atomic_long_t *v, long i)
-{
- arch_atomic_long_set_release(v, i);
-}
-
-static __always_inline void
-raw_atomic_long_add(long i, atomic_long_t *v)
-{
- arch_atomic_long_add(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_add_return(long i, atomic_long_t *v)
-{
- return arch_atomic_long_add_return(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_add_return_acquire(long i, atomic_long_t *v)
-{
- return arch_atomic_long_add_return_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_add_return_release(long i, atomic_long_t *v)
-{
- return arch_atomic_long_add_return_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v)
-{
- return arch_atomic_long_add_return_relaxed(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_add(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_add(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_add_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_add_release(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_add_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_add_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic_long_sub(long i, atomic_long_t *v)
-{
- arch_atomic_long_sub(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_sub_return(long i, atomic_long_t *v)
-{
- return arch_atomic_long_sub_return(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v)
-{
- return arch_atomic_long_sub_return_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_sub_return_release(long i, atomic_long_t *v)
-{
- return arch_atomic_long_sub_return_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
-{
- return arch_atomic_long_sub_return_relaxed(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_sub(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_sub(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_sub_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_sub_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_sub_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic_long_inc(atomic_long_t *v)
-{
- arch_atomic_long_inc(v);
-}
-
-static __always_inline long
-raw_atomic_long_inc_return(atomic_long_t *v)
-{
- return arch_atomic_long_inc_return(v);
-}
-
-static __always_inline long
-raw_atomic_long_inc_return_acquire(atomic_long_t *v)
-{
- return arch_atomic_long_inc_return_acquire(v);
-}
-
-static __always_inline long
-raw_atomic_long_inc_return_release(atomic_long_t *v)
-{
- return arch_atomic_long_inc_return_release(v);
-}
-
-static __always_inline long
-raw_atomic_long_inc_return_relaxed(atomic_long_t *v)
-{
- return arch_atomic_long_inc_return_relaxed(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_inc(atomic_long_t *v)
-{
- return arch_atomic_long_fetch_inc(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_inc_acquire(atomic_long_t *v)
-{
- return arch_atomic_long_fetch_inc_acquire(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_inc_release(atomic_long_t *v)
-{
- return arch_atomic_long_fetch_inc_release(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v)
-{
- return arch_atomic_long_fetch_inc_relaxed(v);
-}
-
-static __always_inline void
-raw_atomic_long_dec(atomic_long_t *v)
-{
- arch_atomic_long_dec(v);
-}
-
-static __always_inline long
-raw_atomic_long_dec_return(atomic_long_t *v)
-{
- return arch_atomic_long_dec_return(v);
-}
-
-static __always_inline long
-raw_atomic_long_dec_return_acquire(atomic_long_t *v)
-{
- return arch_atomic_long_dec_return_acquire(v);
-}
-
-static __always_inline long
-raw_atomic_long_dec_return_release(atomic_long_t *v)
-{
- return arch_atomic_long_dec_return_release(v);
-}
-
-static __always_inline long
-raw_atomic_long_dec_return_relaxed(atomic_long_t *v)
-{
- return arch_atomic_long_dec_return_relaxed(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_dec(atomic_long_t *v)
-{
- return arch_atomic_long_fetch_dec(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_dec_acquire(atomic_long_t *v)
-{
- return arch_atomic_long_fetch_dec_acquire(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_dec_release(atomic_long_t *v)
-{
- return arch_atomic_long_fetch_dec_release(v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v)
-{
- return arch_atomic_long_fetch_dec_relaxed(v);
-}
-
-static __always_inline void
-raw_atomic_long_and(long i, atomic_long_t *v)
-{
- arch_atomic_long_and(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_and(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_and(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_and_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_and_release(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_and_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_and_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic_long_andnot(long i, atomic_long_t *v)
-{
- arch_atomic_long_andnot(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_andnot(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_andnot(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_andnot_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_andnot_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_andnot_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic_long_or(long i, atomic_long_t *v)
-{
- arch_atomic_long_or(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_or(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_or(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_or_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_or_release(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_or_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_or_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic_long_xor(long i, atomic_long_t *v)
-{
- arch_atomic_long_xor(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_xor(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_xor(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_xor_acquire(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_xor_release(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
-{
- return arch_atomic_long_fetch_xor_relaxed(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_xchg(atomic_long_t *v, long i)
-{
- return arch_atomic_long_xchg(v, i);
-}
-
-static __always_inline long
-raw_atomic_long_xchg_acquire(atomic_long_t *v, long i)
-{
- return arch_atomic_long_xchg_acquire(v, i);
-}
-
-static __always_inline long
-raw_atomic_long_xchg_release(atomic_long_t *v, long i)
-{
- return arch_atomic_long_xchg_release(v, i);
-}
-
-static __always_inline long
-raw_atomic_long_xchg_relaxed(atomic_long_t *v, long i)
-{
- return arch_atomic_long_xchg_relaxed(v, i);
-}
-
-static __always_inline long
-raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
-{
- return arch_atomic_long_cmpxchg(v, old, new);
-}
-
-static __always_inline long
-raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
-{
- return arch_atomic_long_cmpxchg_acquire(v, old, new);
-}
-
-static __always_inline long
-raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
-{
- return arch_atomic_long_cmpxchg_release(v, old, new);
-}
-
-static __always_inline long
-raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
-{
- return arch_atomic_long_cmpxchg_relaxed(v, old, new);
-}
-
-static __always_inline bool
-raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
-{
- return arch_atomic_long_try_cmpxchg(v, old, new);
-}
-
-static __always_inline bool
-raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
-{
- return arch_atomic_long_try_cmpxchg_acquire(v, old, new);
-}
-
-static __always_inline bool
-raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
-{
- return arch_atomic_long_try_cmpxchg_release(v, old, new);
-}
-
-static __always_inline bool
-raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
-{
- return arch_atomic_long_try_cmpxchg_relaxed(v, old, new);
-}
-
-static __always_inline bool
-raw_atomic_long_sub_and_test(long i, atomic_long_t *v)
-{
- return arch_atomic_long_sub_and_test(i, v);
-}
-
-static __always_inline bool
-raw_atomic_long_dec_and_test(atomic_long_t *v)
-{
- return arch_atomic_long_dec_and_test(v);
-}
-
-static __always_inline bool
-raw_atomic_long_inc_and_test(atomic_long_t *v)
-{
- return arch_atomic_long_inc_and_test(v);
-}
-
-static __always_inline bool
-raw_atomic_long_add_negative(long i, atomic_long_t *v)
-{
- return arch_atomic_long_add_negative(i, v);
-}
-
-static __always_inline bool
-raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v)
-{
- return arch_atomic_long_add_negative_acquire(i, v);
-}
-
-static __always_inline bool
-raw_atomic_long_add_negative_release(long i, atomic_long_t *v)
-{
- return arch_atomic_long_add_negative_release(i, v);
-}
-
-static __always_inline bool
-raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
-{
- return arch_atomic_long_add_negative_relaxed(i, v);
-}
-
-static __always_inline long
-raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
-{
- return arch_atomic_long_fetch_add_unless(v, a, u);
-}
-
-static __always_inline bool
-raw_atomic_long_add_unless(atomic_long_t *v, long a, long u)
-{
- return arch_atomic_long_add_unless(v, a, u);
-}
-
-static __always_inline bool
-raw_atomic_long_inc_not_zero(atomic_long_t *v)
-{
- return arch_atomic_long_inc_not_zero(v);
-}
-
-static __always_inline bool
-raw_atomic_long_inc_unless_negative(atomic_long_t *v)
-{
- return arch_atomic_long_inc_unless_negative(v);
-}
-
-static __always_inline bool
-raw_atomic_long_dec_unless_positive(atomic_long_t *v)
-{
- return arch_atomic_long_dec_unless_positive(v);
-}
-
-static __always_inline long
-raw_atomic_long_dec_if_positive(atomic_long_t *v)
-{
- return arch_atomic_long_dec_if_positive(v);
-}
-
#define raw_xchg(...) \
arch_xchg(__VA_ARGS__)
@@ -1642,4 +1132,4 @@ raw_atomic_long_dec_if_positive(atomic_long_t *v)
arch_try_cmpxchg128_local(__VA_ARGS__)
#endif /* _LINUX_ATOMIC_RAW_H */
-// 01d54200571b3857755a07c10074a4fd58cef6b1
+// b23ed4424e85200e200ded094522e1d743b3a5b1
diff --git a/scripts/atomic/gen-atomic-long.sh b/scripts/atomic/gen-atomic-long.sh
index eda89ce..75e91d6 100755
--- a/scripts/atomic/gen-atomic-long.sh
+++ b/scripts/atomic/gen-atomic-long.sh
@@ -47,9 +47,9 @@ gen_proto_order_variant()
cat <<EOF
static __always_inline ${ret}
-arch_atomic_long_${name}(${params})
+raw_atomic_long_${name}(${params})
{
- ${retstmt}arch_${atomic}_${name}(${argscast});
+ ${retstmt}raw_${atomic}_${name}(${argscast});
}
EOF
diff --git a/scripts/atomic/gen-atomic-raw.sh b/scripts/atomic/gen-atomic-raw.sh
index ba8d136..c7e3c52 100644
--- a/scripts/atomic/gen-atomic-raw.sh
+++ b/scripts/atomic/gen-atomic-raw.sh
@@ -63,10 +63,6 @@ grep '^[a-z]' "$1" | while read name meta args; do
gen_proto "${meta}" "${name}" "atomic64" "s64" ${args}
done
-grep '^[a-z]' "$1" | while read name meta args; do
- gen_proto "${meta}" "${name}" "atomic_long" "long" ${args}
-done
-
for xchg in "xchg" "cmpxchg" "cmpxchg64" "cmpxchg128" "try_cmpxchg" "try_cmpxchg64" "try_cmpxchg128"; do
for order in "" "_acquire" "_release" "_relaxed"; do
gen_xchg "${xchg}" "${order}"
The following commit has been merged into the locking/core branch of tip:
Commit-ID: 7ed7a1564090fdd265f49d1ad94ee92845b14c76
Gitweb: https://git.kernel.org/tip/7ed7a1564090fdd265f49d1ad94ee92845b14c76
Author: Mark Rutland <[email protected]>
AuthorDate: Mon, 05 Jun 2023 08:01:13 +01:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Mon, 05 Jun 2023 09:57:19 +02:00
locking/atomic: scripts: factor out order template generation
Currently gen_proto_order_variants() hard-codes the paths of the templates
used for the order fallbacks. Factor this out into a helper so that it can
be reused elsewhere.
This results in no change to the generated headers, so there should be
no functional change as a result of this patch.
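For illustration, the new helper simply maps an ordering suffix onto a
template file, with the empty (fully-ordered) suffix defaulting to the
"fence" template. A minimal standalone sketch of that mapping (the
order_template name is illustrative, not part of the patch; ATOMICDIR is
assumed to point at scripts/atomic):
| #!/bin/sh
| # Sketch of the order -> template mapping done by gen_order_fallback().
| order_template()
| {
| 	local order="$1"
| 	# strip the leading '_' from e.g. "_acquire"; empty means full ordering
| 	local tmpl_order=${order#_}
| 	echo "${ATOMICDIR}/fallbacks/${tmpl_order:-fence}"
| }
| # order_template "_acquire" -> scripts/atomic/fallbacks/acquire
| # order_template ""         -> scripts/atomic/fallbacks/fence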
Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
scripts/atomic/gen-atomic-fallback.sh | 34 +++++++++++++-------------
1 file changed, 17 insertions(+), 17 deletions(-)
diff --git a/scripts/atomic/gen-atomic-fallback.sh b/scripts/atomic/gen-atomic-fallback.sh
index 7a6bcea..3373308 100755
--- a/scripts/atomic/gen-atomic-fallback.sh
+++ b/scripts/atomic/gen-atomic-fallback.sh
@@ -32,6 +32,20 @@ gen_template_fallback()
fi
}
+#gen_order_fallback(meta, pfx, name, sfx, order, atomic, int, args...)
+gen_order_fallback()
+{
+ local meta="$1"; shift
+ local pfx="$1"; shift
+ local name="$1"; shift
+ local sfx="$1"; shift
+ local order="$1"; shift
+
+ local tmpl_order=${order#_}
+ local tmpl="${ATOMICDIR}/fallbacks/${tmpl_order:-fence}"
+ gen_template_fallback "${tmpl}" "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "$@"
+}
+
#gen_proto_fallback(meta, pfx, name, sfx, order, atomic, int, args...)
gen_proto_fallback()
{
@@ -56,20 +70,6 @@ cat << EOF
EOF
}
-gen_proto_order_variant()
-{
- local meta="$1"; shift
- local pfx="$1"; shift
- local name="$1"; shift
- local sfx="$1"; shift
- local order="$1"; shift
- local atomic="$1"
-
- local basename="arch_${atomic}_${pfx}${name}${sfx}"
-
- printf "#define ${basename}${order} ${basename}${order}\n"
-}
-
#gen_proto_order_variants(meta, pfx, name, sfx, atomic, int, args...)
gen_proto_order_variants()
{
@@ -117,9 +117,9 @@ gen_proto_order_variants()
printf "#else /* ${basename}_relaxed */\n\n"
- gen_template_fallback "${ATOMICDIR}/fallbacks/acquire" "${meta}" "${pfx}" "${name}" "${sfx}" "_acquire" "$@"
- gen_template_fallback "${ATOMICDIR}/fallbacks/release" "${meta}" "${pfx}" "${name}" "${sfx}" "_release" "$@"
- gen_template_fallback "${ATOMICDIR}/fallbacks/fence" "${meta}" "${pfx}" "${name}" "${sfx}" "" "$@"
+ gen_order_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_acquire" "$@"
+ gen_order_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_release" "$@"
+ gen_order_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "" "$@"
printf "#endif /* ${basename}_relaxed */\n\n"
}
The following commit has been merged into the locking/core branch of tip:
Commit-ID: 7c7084f3ba4031a9c2858afed696a577fcfe41d2
Gitweb: https://git.kernel.org/tip/7c7084f3ba4031a9c2858afed696a577fcfe41d2
Author: Mark Rutland <[email protected]>
AuthorDate: Mon, 05 Jun 2023 08:01:10 +01:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Mon, 05 Jun 2023 09:57:18 +02:00
locking/atomic: xtensa: add preprocessor symbols
Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of
RELAXED atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in terms
of FULL ordered atomics. Other atomics are optional, and don't exist in
some configurations (e.g. not all architectures implement the 128-bit
cmpxchg ops).
Subsequent patches will require that architectures define a preprocessor
symbol for any atomic (or ordering variant) which is optional. This will
make the fallback ifdeffery more robust, and simplify future changes.
Add the required definitions to arch/xtensa.
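Each op the architecture implements is accompanied by a define of the
symbol to itself, so that the generated ifdeffery can test for it with
#if defined(). A minimal sketch of the pattern (the surrounding ifdeffery
here is a rough shape for illustration, not the exact generated code):
| /* xtensa implements arch_atomic_add_return(), so advertise it: */
| #define arch_atomic_add_return arch_atomic_add_return
|
| /* ...allowing generated fallback ifdeffery of roughly this shape: */
| #if defined(arch_atomic_add_return)
| /* use the architecture's implementation directly */
| #else
| /* provide a fallback built from other ops/orderings */
| #endif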
Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
arch/xtensa/include/asm/atomic.h | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/arch/xtensa/include/asm/atomic.h b/arch/xtensa/include/asm/atomic.h
index 1d323a8..7308b7f 100644
--- a/arch/xtensa/include/asm/atomic.h
+++ b/arch/xtensa/include/asm/atomic.h
@@ -245,6 +245,11 @@ static inline int arch_atomic_fetch_##op(int i, atomic_t * v) \
ATOMIC_OPS(add)
ATOMIC_OPS(sub)
+#define arch_atomic_add_return arch_atomic_add_return
+#define arch_atomic_sub_return arch_atomic_sub_return
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+
#undef ATOMIC_OPS
#define ATOMIC_OPS(op) ATOMIC_OP(op) ATOMIC_FETCH_OP(op)
@@ -252,6 +257,10 @@ ATOMIC_OPS(and)
ATOMIC_OPS(or)
ATOMIC_OPS(xor)
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
#undef ATOMIC_OPS
#undef ATOMIC_FETCH_OP
#undef ATOMIC_OP_RETURN
The following commit has been merged into the locking/core branch of tip:
Commit-ID: 9257959a6e5b4fca6fc8e985790bff62c2046f20
Gitweb: https://git.kernel.org/tip/9257959a6e5b4fca6fc8e985790bff62c2046f20
Author: Mark Rutland <[email protected]>
AuthorDate: Mon, 05 Jun 2023 08:01:17 +01:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Mon, 05 Jun 2023 09:57:21 +02:00
locking/atomic: scripts: restructure fallback ifdeffery
Currently the ordering variants of an atomic operation are defined
together in groups of full/acquire/release/relaxed variants which share
ifdeffery, and each variant may have several potential definitions in
different branches of that shared ifdeffery.
Because a given ordering variant can be defined down several different
branches, it can be painful for a human to find the relevant definition,
and there is no good single location for anything common to all
definitions of an ordering variant (e.g. kerneldoc).
Historically the grouping of full/acquire/release/relaxed ordering
variants was necessary as we filled in the missing atomics in the same
namespace as the architecture used. It would be easy to accidentally
define one ordering fallback in terms of another ordering fallback with
redundant barriers, and avoiding that would otherwise require a lot of
baroque ifdeffery.
With recent changes we no longer need to fill in the missing atomics in
the arch_atomic*_<op>() namespace, and only need to fill in the
raw_atomic*_<op>() namespace. Due to this, there's no risk of a
namespace collision, and we can define each raw_atomic*_<op>() ordering
variant with its own ifdeffery checking for the arch_atomic*_<op>()
ordering variants.
Restructure the fallbacks in this way, with each ordering variant having
its own ifdeffery of the form:
| #if defined(arch_atomic_fetch_andnot_acquire)
| #define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot_acquire
| #elif defined(arch_atomic_fetch_andnot_relaxed)
| static __always_inline int
| raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
| {
| int ret = arch_atomic_fetch_andnot_relaxed(i, v);
| __atomic_acquire_fence();
| return ret;
| }
| #elif defined(arch_atomic_fetch_andnot)
| #define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot
| #else
| static __always_inline int
| raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
| {
| return raw_atomic_fetch_and_acquire(~i, v);
| }
| #endif
Note that where there's no relevant arch_atomic*_<op>() ordering
variant, we'll define the operation in terms of a distinct
raw_atomic*_<otherop>(), as this itself might have been filled in with a
fallback.
As we now generate the raw_atomic*_<op>() implementations directly, we
no longer need the trivial wrappers, so they are removed.
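For reference, the removed wrappers were trivial one-to-one forwards, of
the shape visible in the atomic-raw.h hunks earlier in this series, e.g.:
| static __always_inline s64
| raw_atomic64_dec_if_positive(atomic64_t *v)
| {
| 	return arch_atomic64_dec_if_positive(v);
| }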
This makes the ifdeffery easier to follow, and will allow for further
improvements in subsequent patches.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
include/linux/atomic.h | 1 +-
include/linux/atomic/atomic-arch-fallback.h | 3178 ++++++++---------
include/linux/atomic/atomic-raw.h | 1135 +------
scripts/atomic/fallbacks/acquire | 2 +-
scripts/atomic/fallbacks/add_negative | 4 +-
scripts/atomic/fallbacks/add_unless | 4 +-
scripts/atomic/fallbacks/andnot | 4 +-
scripts/atomic/fallbacks/cmpxchg | 4 +-
scripts/atomic/fallbacks/dec | 4 +-
scripts/atomic/fallbacks/dec_and_test | 4 +-
scripts/atomic/fallbacks/dec_if_positive | 6 +-
scripts/atomic/fallbacks/dec_unless_positive | 6 +-
scripts/atomic/fallbacks/fence | 2 +-
scripts/atomic/fallbacks/fetch_add_unless | 6 +-
scripts/atomic/fallbacks/inc | 4 +-
scripts/atomic/fallbacks/inc_and_test | 4 +-
scripts/atomic/fallbacks/inc_not_zero | 4 +-
scripts/atomic/fallbacks/inc_unless_negative | 6 +-
scripts/atomic/fallbacks/read_acquire | 4 +-
scripts/atomic/fallbacks/release | 2 +-
scripts/atomic/fallbacks/set_release | 4 +-
scripts/atomic/fallbacks/sub_and_test | 4 +-
scripts/atomic/fallbacks/try_cmpxchg | 4 +-
scripts/atomic/fallbacks/xchg | 4 +-
scripts/atomic/gen-atomic-fallback.sh | 236 +-
scripts/atomic/gen-atomic-raw.sh | 80 +-
scripts/atomic/gen-atomics.sh | 1 +-
27 files changed, 1866 insertions(+), 2851 deletions(-)
delete mode 100644 include/linux/atomic/atomic-raw.h
delete mode 100644 scripts/atomic/gen-atomic-raw.sh
diff --git a/include/linux/atomic.h b/include/linux/atomic.h
index 296cfae..8dd57c3 100644
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -78,7 +78,6 @@
})
#include <linux/atomic/atomic-arch-fallback.h>
-#include <linux/atomic/atomic-raw.h>
#include <linux/atomic/atomic-long.h>
#include <linux/atomic/atomic-instrumented.h>
diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index 1a2d81d..99bc1a8 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -8,2749 +8,2911 @@
#include <linux/compiler.h>
-#ifndef arch_xchg_relaxed
-#define arch_xchg_acquire arch_xchg
-#define arch_xchg_release arch_xchg
-#define arch_xchg_relaxed arch_xchg
-#else /* arch_xchg_relaxed */
-
-#ifndef arch_xchg_acquire
-#define arch_xchg_acquire(...) \
- __atomic_op_acquire(arch_xchg, __VA_ARGS__)
+#if defined(arch_xchg)
+#define raw_xchg arch_xchg
+#elif defined(arch_xchg_relaxed)
+#define raw_xchg(...) \
+ __atomic_op_fence(arch_xchg, __VA_ARGS__)
+#else
+extern void raw_xchg_not_implemented(void);
+#define raw_xchg(...) raw_xchg_not_implemented()
#endif
-#ifndef arch_xchg_release
-#define arch_xchg_release(...) \
- __atomic_op_release(arch_xchg, __VA_ARGS__)
+#if defined(arch_xchg_acquire)
+#define raw_xchg_acquire arch_xchg_acquire
+#elif defined(arch_xchg_relaxed)
+#define raw_xchg_acquire(...) \
+ __atomic_op_acquire(arch_xchg, __VA_ARGS__)
+#elif defined(arch_xchg)
+#define raw_xchg_acquire arch_xchg
+#else
+extern void raw_xchg_acquire_not_implemented(void);
+#define raw_xchg_acquire(...) raw_xchg_acquire_not_implemented()
#endif
-#ifndef arch_xchg
-#define arch_xchg(...) \
- __atomic_op_fence(arch_xchg, __VA_ARGS__)
+#if defined(arch_xchg_release)
+#define raw_xchg_release arch_xchg_release
+#elif defined(arch_xchg_relaxed)
+#define raw_xchg_release(...) \
+ __atomic_op_release(arch_xchg, __VA_ARGS__)
+#elif defined(arch_xchg)
+#define raw_xchg_release arch_xchg
+#else
+extern void raw_xchg_release_not_implemented(void);
+#define raw_xchg_release(...) raw_xchg_release_not_implemented()
+#endif
+
+#if defined(arch_xchg_relaxed)
+#define raw_xchg_relaxed arch_xchg_relaxed
+#elif defined(arch_xchg)
+#define raw_xchg_relaxed arch_xchg
+#else
+extern void raw_xchg_relaxed_not_implemented(void);
+#define raw_xchg_relaxed(...) raw_xchg_relaxed_not_implemented()
+#endif
+
+#if defined(arch_cmpxchg)
+#define raw_cmpxchg arch_cmpxchg
+#elif defined(arch_cmpxchg_relaxed)
+#define raw_cmpxchg(...) \
+ __atomic_op_fence(arch_cmpxchg, __VA_ARGS__)
+#else
+extern void raw_cmpxchg_not_implemented(void);
+#define raw_cmpxchg(...) raw_cmpxchg_not_implemented()
#endif
-#endif /* arch_xchg_relaxed */
-
-#ifndef arch_cmpxchg_relaxed
-#define arch_cmpxchg_acquire arch_cmpxchg
-#define arch_cmpxchg_release arch_cmpxchg
-#define arch_cmpxchg_relaxed arch_cmpxchg
-#else /* arch_cmpxchg_relaxed */
-
-#ifndef arch_cmpxchg_acquire
-#define arch_cmpxchg_acquire(...) \
+#if defined(arch_cmpxchg_acquire)
+#define raw_cmpxchg_acquire arch_cmpxchg_acquire
+#elif defined(arch_cmpxchg_relaxed)
+#define raw_cmpxchg_acquire(...) \
__atomic_op_acquire(arch_cmpxchg, __VA_ARGS__)
+#elif defined(arch_cmpxchg)
+#define raw_cmpxchg_acquire arch_cmpxchg
+#else
+extern void raw_cmpxchg_acquire_not_implemented(void);
+#define raw_cmpxchg_acquire(...) raw_cmpxchg_acquire_not_implemented()
#endif
-#ifndef arch_cmpxchg_release
-#define arch_cmpxchg_release(...) \
+#if defined(arch_cmpxchg_release)
+#define raw_cmpxchg_release arch_cmpxchg_release
+#elif defined(arch_cmpxchg_relaxed)
+#define raw_cmpxchg_release(...) \
__atomic_op_release(arch_cmpxchg, __VA_ARGS__)
+#elif defined(arch_cmpxchg)
+#define raw_cmpxchg_release arch_cmpxchg
+#else
+extern void raw_cmpxchg_release_not_implemented(void);
+#define raw_cmpxchg_release(...) raw_cmpxchg_release_not_implemented()
+#endif
+
+#if defined(arch_cmpxchg_relaxed)
+#define raw_cmpxchg_relaxed arch_cmpxchg_relaxed
+#elif defined(arch_cmpxchg)
+#define raw_cmpxchg_relaxed arch_cmpxchg
+#else
+extern void raw_cmpxchg_relaxed_not_implemented(void);
+#define raw_cmpxchg_relaxed(...) raw_cmpxchg_relaxed_not_implemented()
+#endif
+
+#if defined(arch_cmpxchg64)
+#define raw_cmpxchg64 arch_cmpxchg64
+#elif defined(arch_cmpxchg64_relaxed)
+#define raw_cmpxchg64(...) \
+ __atomic_op_fence(arch_cmpxchg64, __VA_ARGS__)
+#else
+extern void raw_cmpxchg64_not_implemented(void);
+#define raw_cmpxchg64(...) raw_cmpxchg64_not_implemented()
#endif
-#ifndef arch_cmpxchg
-#define arch_cmpxchg(...) \
- __atomic_op_fence(arch_cmpxchg, __VA_ARGS__)
-#endif
-
-#endif /* arch_cmpxchg_relaxed */
-
-#ifndef arch_cmpxchg64_relaxed
-#define arch_cmpxchg64_acquire arch_cmpxchg64
-#define arch_cmpxchg64_release arch_cmpxchg64
-#define arch_cmpxchg64_relaxed arch_cmpxchg64
-#else /* arch_cmpxchg64_relaxed */
-
-#ifndef arch_cmpxchg64_acquire
-#define arch_cmpxchg64_acquire(...) \
+#if defined(arch_cmpxchg64_acquire)
+#define raw_cmpxchg64_acquire arch_cmpxchg64_acquire
+#elif defined(arch_cmpxchg64_relaxed)
+#define raw_cmpxchg64_acquire(...) \
__atomic_op_acquire(arch_cmpxchg64, __VA_ARGS__)
+#elif defined(arch_cmpxchg64)
+#define raw_cmpxchg64_acquire arch_cmpxchg64
+#else
+extern void raw_cmpxchg64_acquire_not_implemented(void);
+#define raw_cmpxchg64_acquire(...) raw_cmpxchg64_acquire_not_implemented()
#endif
-#ifndef arch_cmpxchg64_release
-#define arch_cmpxchg64_release(...) \
+#if defined(arch_cmpxchg64_release)
+#define raw_cmpxchg64_release arch_cmpxchg64_release
+#elif defined(arch_cmpxchg64_relaxed)
+#define raw_cmpxchg64_release(...) \
__atomic_op_release(arch_cmpxchg64, __VA_ARGS__)
+#elif defined(arch_cmpxchg64)
+#define raw_cmpxchg64_release arch_cmpxchg64
+#else
+extern void raw_cmpxchg64_release_not_implemented(void);
+#define raw_cmpxchg64_release(...) raw_cmpxchg64_release_not_implemented()
+#endif
+
+#if defined(arch_cmpxchg64_relaxed)
+#define raw_cmpxchg64_relaxed arch_cmpxchg64_relaxed
+#elif defined(arch_cmpxchg64)
+#define raw_cmpxchg64_relaxed arch_cmpxchg64
+#else
+extern void raw_cmpxchg64_relaxed_not_implemented(void);
+#define raw_cmpxchg64_relaxed(...) raw_cmpxchg64_relaxed_not_implemented()
+#endif
+
+#if defined(arch_cmpxchg128)
+#define raw_cmpxchg128 arch_cmpxchg128
+#elif defined(arch_cmpxchg128_relaxed)
+#define raw_cmpxchg128(...) \
+ __atomic_op_fence(arch_cmpxchg128, __VA_ARGS__)
+#else
+extern void raw_cmpxchg128_not_implemented(void);
+#define raw_cmpxchg128(...) raw_cmpxchg128_not_implemented()
#endif
-#ifndef arch_cmpxchg64
-#define arch_cmpxchg64(...) \
- __atomic_op_fence(arch_cmpxchg64, __VA_ARGS__)
-#endif
-
-#endif /* arch_cmpxchg64_relaxed */
-
-#ifndef arch_cmpxchg128_relaxed
-#define arch_cmpxchg128_acquire arch_cmpxchg128
-#define arch_cmpxchg128_release arch_cmpxchg128
-#define arch_cmpxchg128_relaxed arch_cmpxchg128
-#else /* arch_cmpxchg128_relaxed */
-
-#ifndef arch_cmpxchg128_acquire
-#define arch_cmpxchg128_acquire(...) \
+#if defined(arch_cmpxchg128_acquire)
+#define raw_cmpxchg128_acquire arch_cmpxchg128_acquire
+#elif defined(arch_cmpxchg128_relaxed)
+#define raw_cmpxchg128_acquire(...) \
__atomic_op_acquire(arch_cmpxchg128, __VA_ARGS__)
+#elif defined(arch_cmpxchg128)
+#define raw_cmpxchg128_acquire arch_cmpxchg128
+#else
+extern void raw_cmpxchg128_acquire_not_implemented(void);
+#define raw_cmpxchg128_acquire(...) raw_cmpxchg128_acquire_not_implemented()
#endif
-#ifndef arch_cmpxchg128_release
-#define arch_cmpxchg128_release(...) \
+#if defined(arch_cmpxchg128_release)
+#define raw_cmpxchg128_release arch_cmpxchg128_release
+#elif defined(arch_cmpxchg128_relaxed)
+#define raw_cmpxchg128_release(...) \
__atomic_op_release(arch_cmpxchg128, __VA_ARGS__)
-#endif
-
-#ifndef arch_cmpxchg128
-#define arch_cmpxchg128(...) \
- __atomic_op_fence(arch_cmpxchg128, __VA_ARGS__)
-#endif
-
-#endif /* arch_cmpxchg128_relaxed */
-
-#ifndef arch_try_cmpxchg_relaxed
-#ifdef arch_try_cmpxchg
-#define arch_try_cmpxchg_acquire arch_try_cmpxchg
-#define arch_try_cmpxchg_release arch_try_cmpxchg
-#define arch_try_cmpxchg_relaxed arch_try_cmpxchg
-#endif /* arch_try_cmpxchg */
-
-#ifndef arch_try_cmpxchg
-#define arch_try_cmpxchg(_ptr, _oldp, _new) \
+#elif defined(arch_cmpxchg128)
+#define raw_cmpxchg128_release arch_cmpxchg128
+#else
+extern void raw_cmpxchg128_release_not_implemented(void);
+#define raw_cmpxchg128_release(...) raw_cmpxchg128_release_not_implemented()
+#endif
+
+#if defined(arch_cmpxchg128_relaxed)
+#define raw_cmpxchg128_relaxed arch_cmpxchg128_relaxed
+#elif defined(arch_cmpxchg128)
+#define raw_cmpxchg128_relaxed arch_cmpxchg128
+#else
+extern void raw_cmpxchg128_relaxed_not_implemented(void);
+#define raw_cmpxchg128_relaxed(...) raw_cmpxchg128_relaxed_not_implemented()
+#endif
+
+#if defined(arch_try_cmpxchg)
+#define raw_try_cmpxchg arch_try_cmpxchg
+#elif defined(arch_try_cmpxchg_relaxed)
+#define raw_try_cmpxchg(...) \
+ __atomic_op_fence(arch_try_cmpxchg, __VA_ARGS__)
+#else
+#define raw_try_cmpxchg(_ptr, _oldp, _new) \
({ \
typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
- ___r = arch_cmpxchg((_ptr), ___o, (_new)); \
+ ___r = raw_cmpxchg((_ptr), ___o, (_new)); \
if (unlikely(___r != ___o)) \
*___op = ___r; \
likely(___r == ___o); \
})
-#endif /* arch_try_cmpxchg */
+#endif
-#ifndef arch_try_cmpxchg_acquire
-#define arch_try_cmpxchg_acquire(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg_acquire)
+#define raw_try_cmpxchg_acquire arch_try_cmpxchg_acquire
+#elif defined(arch_try_cmpxchg_relaxed)
+#define raw_try_cmpxchg_acquire(...) \
+ __atomic_op_acquire(arch_try_cmpxchg, __VA_ARGS__)
+#elif defined(arch_try_cmpxchg)
+#define raw_try_cmpxchg_acquire arch_try_cmpxchg
+#else
+#define raw_try_cmpxchg_acquire(_ptr, _oldp, _new) \
({ \
typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
- ___r = arch_cmpxchg_acquire((_ptr), ___o, (_new)); \
+ ___r = raw_cmpxchg_acquire((_ptr), ___o, (_new)); \
if (unlikely(___r != ___o)) \
*___op = ___r; \
likely(___r == ___o); \
})
-#endif /* arch_try_cmpxchg_acquire */
+#endif
-#ifndef arch_try_cmpxchg_release
-#define arch_try_cmpxchg_release(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg_release)
+#define raw_try_cmpxchg_release arch_try_cmpxchg_release
+#elif defined(arch_try_cmpxchg_relaxed)
+#define raw_try_cmpxchg_release(...) \
+ __atomic_op_release(arch_try_cmpxchg, __VA_ARGS__)
+#elif defined(arch_try_cmpxchg)
+#define raw_try_cmpxchg_release arch_try_cmpxchg
+#else
+#define raw_try_cmpxchg_release(_ptr, _oldp, _new) \
({ \
typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
- ___r = arch_cmpxchg_release((_ptr), ___o, (_new)); \
+ ___r = raw_cmpxchg_release((_ptr), ___o, (_new)); \
if (unlikely(___r != ___o)) \
*___op = ___r; \
likely(___r == ___o); \
})
-#endif /* arch_try_cmpxchg_release */
+#endif
-#ifndef arch_try_cmpxchg_relaxed
-#define arch_try_cmpxchg_relaxed(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg_relaxed)
+#define raw_try_cmpxchg_relaxed arch_try_cmpxchg_relaxed
+#elif defined(arch_try_cmpxchg)
+#define raw_try_cmpxchg_relaxed arch_try_cmpxchg
+#else
+#define raw_try_cmpxchg_relaxed(_ptr, _oldp, _new) \
({ \
typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
- ___r = arch_cmpxchg_relaxed((_ptr), ___o, (_new)); \
+ ___r = raw_cmpxchg_relaxed((_ptr), ___o, (_new)); \
if (unlikely(___r != ___o)) \
*___op = ___r; \
likely(___r == ___o); \
})
-#endif /* arch_try_cmpxchg_relaxed */
-
-#else /* arch_try_cmpxchg_relaxed */
-
-#ifndef arch_try_cmpxchg_acquire
-#define arch_try_cmpxchg_acquire(...) \
- __atomic_op_acquire(arch_try_cmpxchg, __VA_ARGS__)
-#endif
-
-#ifndef arch_try_cmpxchg_release
-#define arch_try_cmpxchg_release(...) \
- __atomic_op_release(arch_try_cmpxchg, __VA_ARGS__)
#endif
-#ifndef arch_try_cmpxchg
-#define arch_try_cmpxchg(...) \
- __atomic_op_fence(arch_try_cmpxchg, __VA_ARGS__)
-#endif
-
-#endif /* arch_try_cmpxchg_relaxed */
-
-#ifndef arch_try_cmpxchg64_relaxed
-#ifdef arch_try_cmpxchg64
-#define arch_try_cmpxchg64_acquire arch_try_cmpxchg64
-#define arch_try_cmpxchg64_release arch_try_cmpxchg64
-#define arch_try_cmpxchg64_relaxed arch_try_cmpxchg64
-#endif /* arch_try_cmpxchg64 */
-
-#ifndef arch_try_cmpxchg64
-#define arch_try_cmpxchg64(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg64)
+#define raw_try_cmpxchg64 arch_try_cmpxchg64
+#elif defined(arch_try_cmpxchg64_relaxed)
+#define raw_try_cmpxchg64(...) \
+ __atomic_op_fence(arch_try_cmpxchg64, __VA_ARGS__)
+#else
+#define raw_try_cmpxchg64(_ptr, _oldp, _new) \
({ \
typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
- ___r = arch_cmpxchg64((_ptr), ___o, (_new)); \
+ ___r = raw_cmpxchg64((_ptr), ___o, (_new)); \
if (unlikely(___r != ___o)) \
*___op = ___r; \
likely(___r == ___o); \
})
-#endif /* arch_try_cmpxchg64 */
+#endif
-#ifndef arch_try_cmpxchg64_acquire
-#define arch_try_cmpxchg64_acquire(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg64_acquire)
+#define raw_try_cmpxchg64_acquire arch_try_cmpxchg64_acquire
+#elif defined(arch_try_cmpxchg64_relaxed)
+#define raw_try_cmpxchg64_acquire(...) \
+ __atomic_op_acquire(arch_try_cmpxchg64, __VA_ARGS__)
+#elif defined(arch_try_cmpxchg64)
+#define raw_try_cmpxchg64_acquire arch_try_cmpxchg64
+#else
+#define raw_try_cmpxchg64_acquire(_ptr, _oldp, _new) \
({ \
typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
- ___r = arch_cmpxchg64_acquire((_ptr), ___o, (_new)); \
+ ___r = raw_cmpxchg64_acquire((_ptr), ___o, (_new)); \
if (unlikely(___r != ___o)) \
*___op = ___r; \
likely(___r == ___o); \
})
-#endif /* arch_try_cmpxchg64_acquire */
+#endif
-#ifndef arch_try_cmpxchg64_release
-#define arch_try_cmpxchg64_release(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg64_release)
+#define raw_try_cmpxchg64_release arch_try_cmpxchg64_release
+#elif defined(arch_try_cmpxchg64_relaxed)
+#define raw_try_cmpxchg64_release(...) \
+ __atomic_op_release(arch_try_cmpxchg64, __VA_ARGS__)
+#elif defined(arch_try_cmpxchg64)
+#define raw_try_cmpxchg64_release arch_try_cmpxchg64
+#else
+#define raw_try_cmpxchg64_release(_ptr, _oldp, _new) \
({ \
typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
- ___r = arch_cmpxchg64_release((_ptr), ___o, (_new)); \
+ ___r = raw_cmpxchg64_release((_ptr), ___o, (_new)); \
if (unlikely(___r != ___o)) \
*___op = ___r; \
likely(___r == ___o); \
})
-#endif /* arch_try_cmpxchg64_release */
+#endif
-#ifndef arch_try_cmpxchg64_relaxed
-#define arch_try_cmpxchg64_relaxed(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg64_relaxed)
+#define raw_try_cmpxchg64_relaxed arch_try_cmpxchg64_relaxed
+#elif defined(arch_try_cmpxchg64)
+#define raw_try_cmpxchg64_relaxed arch_try_cmpxchg64
+#else
+#define raw_try_cmpxchg64_relaxed(_ptr, _oldp, _new) \
({ \
typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
- ___r = arch_cmpxchg64_relaxed((_ptr), ___o, (_new)); \
+ ___r = raw_cmpxchg64_relaxed((_ptr), ___o, (_new)); \
if (unlikely(___r != ___o)) \
*___op = ___r; \
likely(___r == ___o); \
})
-#endif /* arch_try_cmpxchg64_relaxed */
-
-#else /* arch_try_cmpxchg64_relaxed */
-
-#ifndef arch_try_cmpxchg64_acquire
-#define arch_try_cmpxchg64_acquire(...) \
- __atomic_op_acquire(arch_try_cmpxchg64, __VA_ARGS__)
-#endif
-
-#ifndef arch_try_cmpxchg64_release
-#define arch_try_cmpxchg64_release(...) \
- __atomic_op_release(arch_try_cmpxchg64, __VA_ARGS__)
-#endif
-
-#ifndef arch_try_cmpxchg64
-#define arch_try_cmpxchg64(...) \
- __atomic_op_fence(arch_try_cmpxchg64, __VA_ARGS__)
#endif
-#endif /* arch_try_cmpxchg64_relaxed */
-
-#ifndef arch_try_cmpxchg128_relaxed
-#ifdef arch_try_cmpxchg128
-#define arch_try_cmpxchg128_acquire arch_try_cmpxchg128
-#define arch_try_cmpxchg128_release arch_try_cmpxchg128
-#define arch_try_cmpxchg128_relaxed arch_try_cmpxchg128
-#endif /* arch_try_cmpxchg128 */
-
-#ifndef arch_try_cmpxchg128
-#define arch_try_cmpxchg128(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg128)
+#define raw_try_cmpxchg128 arch_try_cmpxchg128
+#elif defined(arch_try_cmpxchg128_relaxed)
+#define raw_try_cmpxchg128(...) \
+ __atomic_op_fence(arch_try_cmpxchg128, __VA_ARGS__)
+#else
+#define raw_try_cmpxchg128(_ptr, _oldp, _new) \
({ \
typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
- ___r = arch_cmpxchg128((_ptr), ___o, (_new)); \
+ ___r = raw_cmpxchg128((_ptr), ___o, (_new)); \
if (unlikely(___r != ___o)) \
*___op = ___r; \
likely(___r == ___o); \
})
-#endif /* arch_try_cmpxchg128 */
+#endif
-#ifndef arch_try_cmpxchg128_acquire
-#define arch_try_cmpxchg128_acquire(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg128_acquire)
+#define raw_try_cmpxchg128_acquire arch_try_cmpxchg128_acquire
+#elif defined(arch_try_cmpxchg128_relaxed)
+#define raw_try_cmpxchg128_acquire(...) \
+ __atomic_op_acquire(arch_try_cmpxchg128, __VA_ARGS__)
+#elif defined(arch_try_cmpxchg128)
+#define raw_try_cmpxchg128_acquire arch_try_cmpxchg128
+#else
+#define raw_try_cmpxchg128_acquire(_ptr, _oldp, _new) \
({ \
typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
- ___r = arch_cmpxchg128_acquire((_ptr), ___o, (_new)); \
+ ___r = raw_cmpxchg128_acquire((_ptr), ___o, (_new)); \
if (unlikely(___r != ___o)) \
*___op = ___r; \
likely(___r == ___o); \
})
-#endif /* arch_try_cmpxchg128_acquire */
+#endif
-#ifndef arch_try_cmpxchg128_release
-#define arch_try_cmpxchg128_release(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg128_release)
+#define raw_try_cmpxchg128_release arch_try_cmpxchg128_release
+#elif defined(arch_try_cmpxchg128_relaxed)
+#define raw_try_cmpxchg128_release(...) \
+ __atomic_op_release(arch_try_cmpxchg128, __VA_ARGS__)
+#elif defined(arch_try_cmpxchg128)
+#define raw_try_cmpxchg128_release arch_try_cmpxchg128
+#else
+#define raw_try_cmpxchg128_release(_ptr, _oldp, _new) \
({ \
typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
- ___r = arch_cmpxchg128_release((_ptr), ___o, (_new)); \
+ ___r = raw_cmpxchg128_release((_ptr), ___o, (_new)); \
if (unlikely(___r != ___o)) \
*___op = ___r; \
likely(___r == ___o); \
})
-#endif /* arch_try_cmpxchg128_release */
+#endif
-#ifndef arch_try_cmpxchg128_relaxed
-#define arch_try_cmpxchg128_relaxed(_ptr, _oldp, _new) \
+#if defined(arch_try_cmpxchg128_relaxed)
+#define raw_try_cmpxchg128_relaxed arch_try_cmpxchg128_relaxed
+#elif defined(arch_try_cmpxchg128)
+#define raw_try_cmpxchg128_relaxed arch_try_cmpxchg128
+#else
+#define raw_try_cmpxchg128_relaxed(_ptr, _oldp, _new) \
({ \
typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
- ___r = arch_cmpxchg128_relaxed((_ptr), ___o, (_new)); \
+ ___r = raw_cmpxchg128_relaxed((_ptr), ___o, (_new)); \
if (unlikely(___r != ___o)) \
*___op = ___r; \
likely(___r == ___o); \
})
-#endif /* arch_try_cmpxchg128_relaxed */
-
-#else /* arch_try_cmpxchg128_relaxed */
-
-#ifndef arch_try_cmpxchg128_acquire
-#define arch_try_cmpxchg128_acquire(...) \
- __atomic_op_acquire(arch_try_cmpxchg128, __VA_ARGS__)
#endif
-#ifndef arch_try_cmpxchg128_release
-#define arch_try_cmpxchg128_release(...) \
- __atomic_op_release(arch_try_cmpxchg128, __VA_ARGS__)
-#endif
+#define raw_cmpxchg_local arch_cmpxchg_local
-#ifndef arch_try_cmpxchg128
-#define arch_try_cmpxchg128(...) \
- __atomic_op_fence(arch_try_cmpxchg128, __VA_ARGS__)
+#ifdef arch_try_cmpxchg_local
+#define raw_try_cmpxchg_local arch_try_cmpxchg_local
+#else
+#define raw_try_cmpxchg_local(_ptr, _oldp, _new) \
+({ \
+ typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
+ ___r = raw_cmpxchg_local((_ptr), ___o, (_new)); \
+ if (unlikely(___r != ___o)) \
+ *___op = ___r; \
+ likely(___r == ___o); \
+})
#endif
-#endif /* arch_try_cmpxchg128_relaxed */
+#define raw_cmpxchg64_local arch_cmpxchg64_local
-#ifndef arch_try_cmpxchg_local
-#define arch_try_cmpxchg_local(_ptr, _oldp, _new) \
+#ifdef arch_try_cmpxchg64_local
+#define raw_try_cmpxchg64_local arch_try_cmpxchg64_local
+#else
+#define raw_try_cmpxchg64_local(_ptr, _oldp, _new) \
({ \
typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
- ___r = arch_cmpxchg_local((_ptr), ___o, (_new)); \
+ ___r = raw_cmpxchg64_local((_ptr), ___o, (_new)); \
if (unlikely(___r != ___o)) \
*___op = ___r; \
likely(___r == ___o); \
})
-#endif /* arch_try_cmpxchg_local */
+#endif
+
+#define raw_cmpxchg128_local arch_cmpxchg128_local
-#ifndef arch_try_cmpxchg64_local
-#define arch_try_cmpxchg64_local(_ptr, _oldp, _new) \
+#ifdef arch_try_cmpxchg128_local
+#define raw_try_cmpxchg128_local arch_try_cmpxchg128_local
+#else
+#define raw_try_cmpxchg128_local(_ptr, _oldp, _new) \
({ \
typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
- ___r = arch_cmpxchg64_local((_ptr), ___o, (_new)); \
+ ___r = raw_cmpxchg128_local((_ptr), ___o, (_new)); \
if (unlikely(___r != ___o)) \
*___op = ___r; \
likely(___r == ___o); \
})
-#endif /* arch_try_cmpxchg64_local */
+#endif
+
+#define raw_sync_cmpxchg arch_sync_cmpxchg
-#ifndef arch_atomic_read_acquire
+#define raw_atomic_read arch_atomic_read
+
+#if defined(arch_atomic_read_acquire)
+#define raw_atomic_read_acquire arch_atomic_read_acquire
+#elif defined(arch_atomic_read)
+#define raw_atomic_read_acquire arch_atomic_read
+#else
static __always_inline int
-arch_atomic_read_acquire(const atomic_t *v)
+raw_atomic_read_acquire(const atomic_t *v)
{
int ret;
if (__native_word(atomic_t)) {
ret = smp_load_acquire(&(v)->counter);
} else {
- ret = arch_atomic_read(v);
+ ret = raw_atomic_read(v);
__atomic_acquire_fence();
}
return ret;
}
-#define arch_atomic_read_acquire arch_atomic_read_acquire
#endif
-#ifndef arch_atomic_set_release
+#define raw_atomic_set arch_atomic_set
+
+#if defined(arch_atomic_set_release)
+#define raw_atomic_set_release arch_atomic_set_release
+#elif defined(arch_atomic_set)
+#define raw_atomic_set_release arch_atomic_set
+#else
static __always_inline void
-arch_atomic_set_release(atomic_t *v, int i)
+raw_atomic_set_release(atomic_t *v, int i)
{
if (__native_word(atomic_t)) {
smp_store_release(&(v)->counter, i);
} else {
__atomic_release_fence();
- arch_atomic_set(v, i);
+ raw_atomic_set(v, i);
}
}
-#define arch_atomic_set_release arch_atomic_set_release
#endif
-#ifndef arch_atomic_add_return_relaxed
-#define arch_atomic_add_return_acquire arch_atomic_add_return
-#define arch_atomic_add_return_release arch_atomic_add_return
-#define arch_atomic_add_return_relaxed arch_atomic_add_return
-#else /* arch_atomic_add_return_relaxed */
+#define raw_atomic_add arch_atomic_add
+
+#if defined(arch_atomic_add_return)
+#define raw_atomic_add_return arch_atomic_add_return
+#elif defined(arch_atomic_add_return_relaxed)
+static __always_inline int
+raw_atomic_add_return(int i, atomic_t *v)
+{
+ int ret;
+ __atomic_pre_full_fence();
+ ret = arch_atomic_add_return_relaxed(i, v);
+ __atomic_post_full_fence();
+ return ret;
+}
+#else
+#error "Unable to define raw_atomic_add_return"
+#endif
-#ifndef arch_atomic_add_return_acquire
+#if defined(arch_atomic_add_return_acquire)
+#define raw_atomic_add_return_acquire arch_atomic_add_return_acquire
+#elif defined(arch_atomic_add_return_relaxed)
static __always_inline int
-arch_atomic_add_return_acquire(int i, atomic_t *v)
+raw_atomic_add_return_acquire(int i, atomic_t *v)
{
int ret = arch_atomic_add_return_relaxed(i, v);
__atomic_acquire_fence();
return ret;
}
-#define arch_atomic_add_return_acquire arch_atomic_add_return_acquire
+#elif defined(arch_atomic_add_return)
+#define raw_atomic_add_return_acquire arch_atomic_add_return
+#else
+#error "Unable to define raw_atomic_add_return_acquire"
#endif
-#ifndef arch_atomic_add_return_release
+#if defined(arch_atomic_add_return_release)
+#define raw_atomic_add_return_release arch_atomic_add_return_release
+#elif defined(arch_atomic_add_return_relaxed)
static __always_inline int
-arch_atomic_add_return_release(int i, atomic_t *v)
+raw_atomic_add_return_release(int i, atomic_t *v)
{
__atomic_release_fence();
return arch_atomic_add_return_relaxed(i, v);
}
-#define arch_atomic_add_return_release arch_atomic_add_return_release
+#elif defined(arch_atomic_add_return)
+#define raw_atomic_add_return_release arch_atomic_add_return
+#else
+#error "Unable to define raw_atomic_add_return_release"
#endif
-#ifndef arch_atomic_add_return
+#if defined(arch_atomic_add_return_relaxed)
+#define raw_atomic_add_return_relaxed arch_atomic_add_return_relaxed
+#elif defined(arch_atomic_add_return)
+#define raw_atomic_add_return_relaxed arch_atomic_add_return
+#else
+#error "Unable to define raw_atomic_add_return_relaxed"
+#endif
+
+#if defined(arch_atomic_fetch_add)
+#define raw_atomic_fetch_add arch_atomic_fetch_add
+#elif defined(arch_atomic_fetch_add_relaxed)
static __always_inline int
-arch_atomic_add_return(int i, atomic_t *v)
+raw_atomic_fetch_add(int i, atomic_t *v)
{
int ret;
__atomic_pre_full_fence();
- ret = arch_atomic_add_return_relaxed(i, v);
+ ret = arch_atomic_fetch_add_relaxed(i, v);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic_add_return arch_atomic_add_return
+#else
+#error "Unable to define raw_atomic_fetch_add"
#endif
-#endif /* arch_atomic_add_return_relaxed */
-
-#ifndef arch_atomic_fetch_add_relaxed
-#define arch_atomic_fetch_add_acquire arch_atomic_fetch_add
-#define arch_atomic_fetch_add_release arch_atomic_fetch_add
-#define arch_atomic_fetch_add_relaxed arch_atomic_fetch_add
-#else /* arch_atomic_fetch_add_relaxed */
-
-#ifndef arch_atomic_fetch_add_acquire
+#if defined(arch_atomic_fetch_add_acquire)
+#define raw_atomic_fetch_add_acquire arch_atomic_fetch_add_acquire
+#elif defined(arch_atomic_fetch_add_relaxed)
static __always_inline int
-arch_atomic_fetch_add_acquire(int i, atomic_t *v)
+raw_atomic_fetch_add_acquire(int i, atomic_t *v)
{
int ret = arch_atomic_fetch_add_relaxed(i, v);
__atomic_acquire_fence();
return ret;
}
-#define arch_atomic_fetch_add_acquire arch_atomic_fetch_add_acquire
+#elif defined(arch_atomic_fetch_add)
+#define raw_atomic_fetch_add_acquire arch_atomic_fetch_add
+#else
+#error "Unable to define raw_atomic_fetch_add_acquire"
#endif
-#ifndef arch_atomic_fetch_add_release
+#if defined(arch_atomic_fetch_add_release)
+#define raw_atomic_fetch_add_release arch_atomic_fetch_add_release
+#elif defined(arch_atomic_fetch_add_relaxed)
static __always_inline int
-arch_atomic_fetch_add_release(int i, atomic_t *v)
+raw_atomic_fetch_add_release(int i, atomic_t *v)
{
__atomic_release_fence();
return arch_atomic_fetch_add_relaxed(i, v);
}
-#define arch_atomic_fetch_add_release arch_atomic_fetch_add_release
+#elif defined(arch_atomic_fetch_add)
+#define raw_atomic_fetch_add_release arch_atomic_fetch_add
+#else
+#error "Unable to define raw_atomic_fetch_add_release"
+#endif
+
+#if defined(arch_atomic_fetch_add_relaxed)
+#define raw_atomic_fetch_add_relaxed arch_atomic_fetch_add_relaxed
+#elif defined(arch_atomic_fetch_add)
+#define raw_atomic_fetch_add_relaxed arch_atomic_fetch_add
+#else
+#error "Unable to define raw_atomic_fetch_add_relaxed"
#endif
-#ifndef arch_atomic_fetch_add
+#define raw_atomic_sub arch_atomic_sub
+
+#if defined(arch_atomic_sub_return)
+#define raw_atomic_sub_return arch_atomic_sub_return
+#elif defined(arch_atomic_sub_return_relaxed)
static __always_inline int
-arch_atomic_fetch_add(int i, atomic_t *v)
+raw_atomic_sub_return(int i, atomic_t *v)
{
int ret;
__atomic_pre_full_fence();
- ret = arch_atomic_fetch_add_relaxed(i, v);
+ ret = arch_atomic_sub_return_relaxed(i, v);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic_fetch_add arch_atomic_fetch_add
+#else
+#error "Unable to define raw_atomic_sub_return"
#endif
-#endif /* arch_atomic_fetch_add_relaxed */
-
-#ifndef arch_atomic_sub_return_relaxed
-#define arch_atomic_sub_return_acquire arch_atomic_sub_return
-#define arch_atomic_sub_return_release arch_atomic_sub_return
-#define arch_atomic_sub_return_relaxed arch_atomic_sub_return
-#else /* arch_atomic_sub_return_relaxed */
-
-#ifndef arch_atomic_sub_return_acquire
+#if defined(arch_atomic_sub_return_acquire)
+#define raw_atomic_sub_return_acquire arch_atomic_sub_return_acquire
+#elif defined(arch_atomic_sub_return_relaxed)
static __always_inline int
-arch_atomic_sub_return_acquire(int i, atomic_t *v)
+raw_atomic_sub_return_acquire(int i, atomic_t *v)
{
int ret = arch_atomic_sub_return_relaxed(i, v);
__atomic_acquire_fence();
return ret;
}
-#define arch_atomic_sub_return_acquire arch_atomic_sub_return_acquire
+#elif defined(arch_atomic_sub_return)
+#define raw_atomic_sub_return_acquire arch_atomic_sub_return
+#else
+#error "Unable to define raw_atomic_sub_return_acquire"
#endif
-#ifndef arch_atomic_sub_return_release
+#if defined(arch_atomic_sub_return_release)
+#define raw_atomic_sub_return_release arch_atomic_sub_return_release
+#elif defined(arch_atomic_sub_return_relaxed)
static __always_inline int
-arch_atomic_sub_return_release(int i, atomic_t *v)
+raw_atomic_sub_return_release(int i, atomic_t *v)
{
__atomic_release_fence();
return arch_atomic_sub_return_relaxed(i, v);
}
-#define arch_atomic_sub_return_release arch_atomic_sub_return_release
+#elif defined(arch_atomic_sub_return)
+#define raw_atomic_sub_return_release arch_atomic_sub_return
+#else
+#error "Unable to define raw_atomic_sub_return_release"
+#endif
+
+#if defined(arch_atomic_sub_return_relaxed)
+#define raw_atomic_sub_return_relaxed arch_atomic_sub_return_relaxed
+#elif defined(arch_atomic_sub_return)
+#define raw_atomic_sub_return_relaxed arch_atomic_sub_return
+#else
+#error "Unable to define raw_atomic_sub_return_relaxed"
#endif
-#ifndef arch_atomic_sub_return
+#if defined(arch_atomic_fetch_sub)
+#define raw_atomic_fetch_sub arch_atomic_fetch_sub
+#elif defined(arch_atomic_fetch_sub_relaxed)
static __always_inline int
-arch_atomic_sub_return(int i, atomic_t *v)
+raw_atomic_fetch_sub(int i, atomic_t *v)
{
int ret;
__atomic_pre_full_fence();
- ret = arch_atomic_sub_return_relaxed(i, v);
+ ret = arch_atomic_fetch_sub_relaxed(i, v);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic_sub_return arch_atomic_sub_return
+#else
+#error "Unable to define raw_atomic_fetch_sub"
#endif
-#endif /* arch_atomic_sub_return_relaxed */
-
-#ifndef arch_atomic_fetch_sub_relaxed
-#define arch_atomic_fetch_sub_acquire arch_atomic_fetch_sub
-#define arch_atomic_fetch_sub_release arch_atomic_fetch_sub
-#define arch_atomic_fetch_sub_relaxed arch_atomic_fetch_sub
-#else /* arch_atomic_fetch_sub_relaxed */
-
-#ifndef arch_atomic_fetch_sub_acquire
+#if defined(arch_atomic_fetch_sub_acquire)
+#define raw_atomic_fetch_sub_acquire arch_atomic_fetch_sub_acquire
+#elif defined(arch_atomic_fetch_sub_relaxed)
static __always_inline int
-arch_atomic_fetch_sub_acquire(int i, atomic_t *v)
+raw_atomic_fetch_sub_acquire(int i, atomic_t *v)
{
int ret = arch_atomic_fetch_sub_relaxed(i, v);
__atomic_acquire_fence();
return ret;
}
-#define arch_atomic_fetch_sub_acquire arch_atomic_fetch_sub_acquire
+#elif defined(arch_atomic_fetch_sub)
+#define raw_atomic_fetch_sub_acquire arch_atomic_fetch_sub
+#else
+#error "Unable to define raw_atomic_fetch_sub_acquire"
#endif
-#ifndef arch_atomic_fetch_sub_release
+#if defined(arch_atomic_fetch_sub_release)
+#define raw_atomic_fetch_sub_release arch_atomic_fetch_sub_release
+#elif defined(arch_atomic_fetch_sub_relaxed)
static __always_inline int
-arch_atomic_fetch_sub_release(int i, atomic_t *v)
+raw_atomic_fetch_sub_release(int i, atomic_t *v)
{
__atomic_release_fence();
return arch_atomic_fetch_sub_relaxed(i, v);
}
-#define arch_atomic_fetch_sub_release arch_atomic_fetch_sub_release
+#elif defined(arch_atomic_fetch_sub)
+#define raw_atomic_fetch_sub_release arch_atomic_fetch_sub
+#else
+#error "Unable to define raw_atomic_fetch_sub_release"
#endif
-#ifndef arch_atomic_fetch_sub
-static __always_inline int
-arch_atomic_fetch_sub(int i, atomic_t *v)
-{
- int ret;
- __atomic_pre_full_fence();
- ret = arch_atomic_fetch_sub_relaxed(i, v);
- __atomic_post_full_fence();
- return ret;
-}
-#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+#if defined(arch_atomic_fetch_sub_relaxed)
+#define raw_atomic_fetch_sub_relaxed arch_atomic_fetch_sub_relaxed
+#elif defined(arch_atomic_fetch_sub)
+#define raw_atomic_fetch_sub_relaxed arch_atomic_fetch_sub
+#else
+#error "Unable to define raw_atomic_fetch_sub_relaxed"
#endif
-#endif /* arch_atomic_fetch_sub_relaxed */
-
-#ifndef arch_atomic_inc
+#if defined(arch_atomic_inc)
+#define raw_atomic_inc arch_atomic_inc
+#else
static __always_inline void
-arch_atomic_inc(atomic_t *v)
+raw_atomic_inc(atomic_t *v)
{
- arch_atomic_add(1, v);
+ raw_atomic_add(1, v);
}
-#define arch_atomic_inc arch_atomic_inc
#endif
-#ifndef arch_atomic_inc_return_relaxed
-#ifdef arch_atomic_inc_return
-#define arch_atomic_inc_return_acquire arch_atomic_inc_return
-#define arch_atomic_inc_return_release arch_atomic_inc_return
-#define arch_atomic_inc_return_relaxed arch_atomic_inc_return
-#endif /* arch_atomic_inc_return */
-
-#ifndef arch_atomic_inc_return
+#if defined(arch_atomic_inc_return)
+#define raw_atomic_inc_return arch_atomic_inc_return
+#elif defined(arch_atomic_inc_return_relaxed)
+static __always_inline int
+raw_atomic_inc_return(atomic_t *v)
+{
+ int ret;
+ __atomic_pre_full_fence();
+ ret = arch_atomic_inc_return_relaxed(v);
+ __atomic_post_full_fence();
+ return ret;
+}
+#else
static __always_inline int
-arch_atomic_inc_return(atomic_t *v)
+raw_atomic_inc_return(atomic_t *v)
{
- return arch_atomic_add_return(1, v);
+ return raw_atomic_add_return(1, v);
}
-#define arch_atomic_inc_return arch_atomic_inc_return
#endif
-#ifndef arch_atomic_inc_return_acquire
+#if defined(arch_atomic_inc_return_acquire)
+#define raw_atomic_inc_return_acquire arch_atomic_inc_return_acquire
+#elif defined(arch_atomic_inc_return_relaxed)
static __always_inline int
-arch_atomic_inc_return_acquire(atomic_t *v)
+raw_atomic_inc_return_acquire(atomic_t *v)
{
- return arch_atomic_add_return_acquire(1, v);
+ int ret = arch_atomic_inc_return_relaxed(v);
+ __atomic_acquire_fence();
+ return ret;
}
-#define arch_atomic_inc_return_acquire arch_atomic_inc_return_acquire
-#endif
-
-#ifndef arch_atomic_inc_return_release
+#elif defined(arch_atomic_inc_return)
+#define raw_atomic_inc_return_acquire arch_atomic_inc_return
+#else
static __always_inline int
-arch_atomic_inc_return_release(atomic_t *v)
+raw_atomic_inc_return_acquire(atomic_t *v)
{
- return arch_atomic_add_return_release(1, v);
+ return raw_atomic_add_return_acquire(1, v);
}
-#define arch_atomic_inc_return_release arch_atomic_inc_return_release
#endif
-#ifndef arch_atomic_inc_return_relaxed
+#if defined(arch_atomic_inc_return_release)
+#define raw_atomic_inc_return_release arch_atomic_inc_return_release
+#elif defined(arch_atomic_inc_return_relaxed)
static __always_inline int
-arch_atomic_inc_return_relaxed(atomic_t *v)
+raw_atomic_inc_return_release(atomic_t *v)
{
- return arch_atomic_add_return_relaxed(1, v);
+ __atomic_release_fence();
+ return arch_atomic_inc_return_relaxed(v);
}
-#define arch_atomic_inc_return_relaxed arch_atomic_inc_return_relaxed
-#endif
-
-#else /* arch_atomic_inc_return_relaxed */
-
-#ifndef arch_atomic_inc_return_acquire
+#elif defined(arch_atomic_inc_return)
+#define raw_atomic_inc_return_release arch_atomic_inc_return
+#else
static __always_inline int
-arch_atomic_inc_return_acquire(atomic_t *v)
+raw_atomic_inc_return_release(atomic_t *v)
{
- int ret = arch_atomic_inc_return_relaxed(v);
- __atomic_acquire_fence();
- return ret;
+ return raw_atomic_add_return_release(1, v);
}
-#define arch_atomic_inc_return_acquire arch_atomic_inc_return_acquire
#endif
-#ifndef arch_atomic_inc_return_release
+#if defined(arch_atomic_inc_return_relaxed)
+#define raw_atomic_inc_return_relaxed arch_atomic_inc_return_relaxed
+#elif defined(arch_atomic_inc_return)
+#define raw_atomic_inc_return_relaxed arch_atomic_inc_return
+#else
static __always_inline int
-arch_atomic_inc_return_release(atomic_t *v)
+raw_atomic_inc_return_relaxed(atomic_t *v)
{
- __atomic_release_fence();
- return arch_atomic_inc_return_relaxed(v);
+ return raw_atomic_add_return_relaxed(1, v);
}
-#define arch_atomic_inc_return_release arch_atomic_inc_return_release
#endif
-#ifndef arch_atomic_inc_return
+#if defined(arch_atomic_fetch_inc)
+#define raw_atomic_fetch_inc arch_atomic_fetch_inc
+#elif defined(arch_atomic_fetch_inc_relaxed)
static __always_inline int
-arch_atomic_inc_return(atomic_t *v)
+raw_atomic_fetch_inc(atomic_t *v)
{
int ret;
__atomic_pre_full_fence();
- ret = arch_atomic_inc_return_relaxed(v);
+ ret = arch_atomic_fetch_inc_relaxed(v);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic_inc_return arch_atomic_inc_return
-#endif
-
-#endif /* arch_atomic_inc_return_relaxed */
-
-#ifndef arch_atomic_fetch_inc_relaxed
-#ifdef arch_atomic_fetch_inc
-#define arch_atomic_fetch_inc_acquire arch_atomic_fetch_inc
-#define arch_atomic_fetch_inc_release arch_atomic_fetch_inc
-#define arch_atomic_fetch_inc_relaxed arch_atomic_fetch_inc
-#endif /* arch_atomic_fetch_inc */
-
-#ifndef arch_atomic_fetch_inc
+#else
static __always_inline int
-arch_atomic_fetch_inc(atomic_t *v)
+raw_atomic_fetch_inc(atomic_t *v)
{
- return arch_atomic_fetch_add(1, v);
+ return raw_atomic_fetch_add(1, v);
}
-#define arch_atomic_fetch_inc arch_atomic_fetch_inc
#endif
-#ifndef arch_atomic_fetch_inc_acquire
+#if defined(arch_atomic_fetch_inc_acquire)
+#define raw_atomic_fetch_inc_acquire arch_atomic_fetch_inc_acquire
+#elif defined(arch_atomic_fetch_inc_relaxed)
static __always_inline int
-arch_atomic_fetch_inc_acquire(atomic_t *v)
+raw_atomic_fetch_inc_acquire(atomic_t *v)
{
- return arch_atomic_fetch_add_acquire(1, v);
+ int ret = arch_atomic_fetch_inc_relaxed(v);
+ __atomic_acquire_fence();
+ return ret;
}
-#define arch_atomic_fetch_inc_acquire arch_atomic_fetch_inc_acquire
-#endif
-
-#ifndef arch_atomic_fetch_inc_release
+#elif defined(arch_atomic_fetch_inc)
+#define raw_atomic_fetch_inc_acquire arch_atomic_fetch_inc
+#else
static __always_inline int
-arch_atomic_fetch_inc_release(atomic_t *v)
+raw_atomic_fetch_inc_acquire(atomic_t *v)
{
- return arch_atomic_fetch_add_release(1, v);
+ return raw_atomic_fetch_add_acquire(1, v);
}
-#define arch_atomic_fetch_inc_release arch_atomic_fetch_inc_release
#endif
-#ifndef arch_atomic_fetch_inc_relaxed
+#if defined(arch_atomic_fetch_inc_release)
+#define raw_atomic_fetch_inc_release arch_atomic_fetch_inc_release
+#elif defined(arch_atomic_fetch_inc_relaxed)
+static __always_inline int
+raw_atomic_fetch_inc_release(atomic_t *v)
+{
+ __atomic_release_fence();
+ return arch_atomic_fetch_inc_relaxed(v);
+}
+#elif defined(arch_atomic_fetch_inc)
+#define raw_atomic_fetch_inc_release arch_atomic_fetch_inc
+#else
static __always_inline int
-arch_atomic_fetch_inc_relaxed(atomic_t *v)
+raw_atomic_fetch_inc_release(atomic_t *v)
{
- return arch_atomic_fetch_add_relaxed(1, v);
+ return raw_atomic_fetch_add_release(1, v);
}
-#define arch_atomic_fetch_inc_relaxed arch_atomic_fetch_inc_relaxed
#endif
-#else /* arch_atomic_fetch_inc_relaxed */
-
-#ifndef arch_atomic_fetch_inc_acquire
+#if defined(arch_atomic_fetch_inc_relaxed)
+#define raw_atomic_fetch_inc_relaxed arch_atomic_fetch_inc_relaxed
+#elif defined(arch_atomic_fetch_inc)
+#define raw_atomic_fetch_inc_relaxed arch_atomic_fetch_inc
+#else
static __always_inline int
-arch_atomic_fetch_inc_acquire(atomic_t *v)
+raw_atomic_fetch_inc_relaxed(atomic_t *v)
{
- int ret = arch_atomic_fetch_inc_relaxed(v);
- __atomic_acquire_fence();
- return ret;
+ return raw_atomic_fetch_add_relaxed(1, v);
}
-#define arch_atomic_fetch_inc_acquire arch_atomic_fetch_inc_acquire
#endif
-#ifndef arch_atomic_fetch_inc_release
-static __always_inline int
-arch_atomic_fetch_inc_release(atomic_t *v)
+#if defined(arch_atomic_dec)
+#define raw_atomic_dec arch_atomic_dec
+#else
+static __always_inline void
+raw_atomic_dec(atomic_t *v)
{
- __atomic_release_fence();
- return arch_atomic_fetch_inc_relaxed(v);
+ raw_atomic_sub(1, v);
}
-#define arch_atomic_fetch_inc_release arch_atomic_fetch_inc_release
#endif
-#ifndef arch_atomic_fetch_inc
+#if defined(arch_atomic_dec_return)
+#define raw_atomic_dec_return arch_atomic_dec_return
+#elif defined(arch_atomic_dec_return_relaxed)
static __always_inline int
-arch_atomic_fetch_inc(atomic_t *v)
+raw_atomic_dec_return(atomic_t *v)
{
int ret;
__atomic_pre_full_fence();
- ret = arch_atomic_fetch_inc_relaxed(v);
+ ret = arch_atomic_dec_return_relaxed(v);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic_fetch_inc arch_atomic_fetch_inc
-#endif
-
-#endif /* arch_atomic_fetch_inc_relaxed */
-
-#ifndef arch_atomic_dec
-static __always_inline void
-arch_atomic_dec(atomic_t *v)
-{
- arch_atomic_sub(1, v);
-}
-#define arch_atomic_dec arch_atomic_dec
-#endif
-
-#ifndef arch_atomic_dec_return_relaxed
-#ifdef arch_atomic_dec_return
-#define arch_atomic_dec_return_acquire arch_atomic_dec_return
-#define arch_atomic_dec_return_release arch_atomic_dec_return
-#define arch_atomic_dec_return_relaxed arch_atomic_dec_return
-#endif /* arch_atomic_dec_return */
-
-#ifndef arch_atomic_dec_return
+#else
static __always_inline int
-arch_atomic_dec_return(atomic_t *v)
+raw_atomic_dec_return(atomic_t *v)
{
- return arch_atomic_sub_return(1, v);
+ return raw_atomic_sub_return(1, v);
}
-#define arch_atomic_dec_return arch_atomic_dec_return
#endif
-#ifndef arch_atomic_dec_return_acquire
+#if defined(arch_atomic_dec_return_acquire)
+#define raw_atomic_dec_return_acquire arch_atomic_dec_return_acquire
+#elif defined(arch_atomic_dec_return_relaxed)
static __always_inline int
-arch_atomic_dec_return_acquire(atomic_t *v)
+raw_atomic_dec_return_acquire(atomic_t *v)
{
- return arch_atomic_sub_return_acquire(1, v);
+ int ret = arch_atomic_dec_return_relaxed(v);
+ __atomic_acquire_fence();
+ return ret;
}
-#define arch_atomic_dec_return_acquire arch_atomic_dec_return_acquire
-#endif
-
-#ifndef arch_atomic_dec_return_release
+#elif defined(arch_atomic_dec_return)
+#define raw_atomic_dec_return_acquire arch_atomic_dec_return
+#else
static __always_inline int
-arch_atomic_dec_return_release(atomic_t *v)
+raw_atomic_dec_return_acquire(atomic_t *v)
{
- return arch_atomic_sub_return_release(1, v);
+ return raw_atomic_sub_return_acquire(1, v);
}
-#define arch_atomic_dec_return_release arch_atomic_dec_return_release
#endif
-#ifndef arch_atomic_dec_return_relaxed
+#if defined(arch_atomic_dec_return_release)
+#define raw_atomic_dec_return_release arch_atomic_dec_return_release
+#elif defined(arch_atomic_dec_return_relaxed)
static __always_inline int
-arch_atomic_dec_return_relaxed(atomic_t *v)
+raw_atomic_dec_return_release(atomic_t *v)
{
- return arch_atomic_sub_return_relaxed(1, v);
+ __atomic_release_fence();
+ return arch_atomic_dec_return_relaxed(v);
}
-#define arch_atomic_dec_return_relaxed arch_atomic_dec_return_relaxed
-#endif
-
-#else /* arch_atomic_dec_return_relaxed */
-
-#ifndef arch_atomic_dec_return_acquire
+#elif defined(arch_atomic_dec_return)
+#define raw_atomic_dec_return_release arch_atomic_dec_return
+#else
static __always_inline int
-arch_atomic_dec_return_acquire(atomic_t *v)
+raw_atomic_dec_return_release(atomic_t *v)
{
- int ret = arch_atomic_dec_return_relaxed(v);
- __atomic_acquire_fence();
- return ret;
+ return raw_atomic_sub_return_release(1, v);
}
-#define arch_atomic_dec_return_acquire arch_atomic_dec_return_acquire
#endif
-#ifndef arch_atomic_dec_return_release
+#if defined(arch_atomic_dec_return_relaxed)
+#define raw_atomic_dec_return_relaxed arch_atomic_dec_return_relaxed
+#elif defined(arch_atomic_dec_return)
+#define raw_atomic_dec_return_relaxed arch_atomic_dec_return
+#else
static __always_inline int
-arch_atomic_dec_return_release(atomic_t *v)
+raw_atomic_dec_return_relaxed(atomic_t *v)
{
- __atomic_release_fence();
- return arch_atomic_dec_return_relaxed(v);
+ return raw_atomic_sub_return_relaxed(1, v);
}
-#define arch_atomic_dec_return_release arch_atomic_dec_return_release
#endif
-#ifndef arch_atomic_dec_return
+#if defined(arch_atomic_fetch_dec)
+#define raw_atomic_fetch_dec arch_atomic_fetch_dec
+#elif defined(arch_atomic_fetch_dec_relaxed)
static __always_inline int
-arch_atomic_dec_return(atomic_t *v)
+raw_atomic_fetch_dec(atomic_t *v)
{
int ret;
__atomic_pre_full_fence();
- ret = arch_atomic_dec_return_relaxed(v);
+ ret = arch_atomic_fetch_dec_relaxed(v);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic_dec_return arch_atomic_dec_return
-#endif
-
-#endif /* arch_atomic_dec_return_relaxed */
-
-#ifndef arch_atomic_fetch_dec_relaxed
-#ifdef arch_atomic_fetch_dec
-#define arch_atomic_fetch_dec_acquire arch_atomic_fetch_dec
-#define arch_atomic_fetch_dec_release arch_atomic_fetch_dec
-#define arch_atomic_fetch_dec_relaxed arch_atomic_fetch_dec
-#endif /* arch_atomic_fetch_dec */
-
-#ifndef arch_atomic_fetch_dec
+#else
static __always_inline int
-arch_atomic_fetch_dec(atomic_t *v)
+raw_atomic_fetch_dec(atomic_t *v)
{
- return arch_atomic_fetch_sub(1, v);
+ return raw_atomic_fetch_sub(1, v);
}
-#define arch_atomic_fetch_dec arch_atomic_fetch_dec
#endif
-#ifndef arch_atomic_fetch_dec_acquire
+#if defined(arch_atomic_fetch_dec_acquire)
+#define raw_atomic_fetch_dec_acquire arch_atomic_fetch_dec_acquire
+#elif defined(arch_atomic_fetch_dec_relaxed)
static __always_inline int
-arch_atomic_fetch_dec_acquire(atomic_t *v)
+raw_atomic_fetch_dec_acquire(atomic_t *v)
{
- return arch_atomic_fetch_sub_acquire(1, v);
+ int ret = arch_atomic_fetch_dec_relaxed(v);
+ __atomic_acquire_fence();
+ return ret;
}
-#define arch_atomic_fetch_dec_acquire arch_atomic_fetch_dec_acquire
-#endif
-
-#ifndef arch_atomic_fetch_dec_release
+#elif defined(arch_atomic_fetch_dec)
+#define raw_atomic_fetch_dec_acquire arch_atomic_fetch_dec
+#else
static __always_inline int
-arch_atomic_fetch_dec_release(atomic_t *v)
+raw_atomic_fetch_dec_acquire(atomic_t *v)
{
- return arch_atomic_fetch_sub_release(1, v);
+ return raw_atomic_fetch_sub_acquire(1, v);
}
-#define arch_atomic_fetch_dec_release arch_atomic_fetch_dec_release
#endif
-#ifndef arch_atomic_fetch_dec_relaxed
+#if defined(arch_atomic_fetch_dec_release)
+#define raw_atomic_fetch_dec_release arch_atomic_fetch_dec_release
+#elif defined(arch_atomic_fetch_dec_relaxed)
static __always_inline int
-arch_atomic_fetch_dec_relaxed(atomic_t *v)
+raw_atomic_fetch_dec_release(atomic_t *v)
{
- return arch_atomic_fetch_sub_relaxed(1, v);
+ __atomic_release_fence();
+ return arch_atomic_fetch_dec_relaxed(v);
}
-#define arch_atomic_fetch_dec_relaxed arch_atomic_fetch_dec_relaxed
-#endif
-
-#else /* arch_atomic_fetch_dec_relaxed */
-
-#ifndef arch_atomic_fetch_dec_acquire
+#elif defined(arch_atomic_fetch_dec)
+#define raw_atomic_fetch_dec_release arch_atomic_fetch_dec
+#else
static __always_inline int
-arch_atomic_fetch_dec_acquire(atomic_t *v)
+raw_atomic_fetch_dec_release(atomic_t *v)
{
- int ret = arch_atomic_fetch_dec_relaxed(v);
- __atomic_acquire_fence();
- return ret;
+ return raw_atomic_fetch_sub_release(1, v);
}
-#define arch_atomic_fetch_dec_acquire arch_atomic_fetch_dec_acquire
#endif
-#ifndef arch_atomic_fetch_dec_release
+#if defined(arch_atomic_fetch_dec_relaxed)
+#define raw_atomic_fetch_dec_relaxed arch_atomic_fetch_dec_relaxed
+#elif defined(arch_atomic_fetch_dec)
+#define raw_atomic_fetch_dec_relaxed arch_atomic_fetch_dec
+#else
static __always_inline int
-arch_atomic_fetch_dec_release(atomic_t *v)
+raw_atomic_fetch_dec_relaxed(atomic_t *v)
{
- __atomic_release_fence();
- return arch_atomic_fetch_dec_relaxed(v);
+ return raw_atomic_fetch_sub_relaxed(1, v);
}
-#define arch_atomic_fetch_dec_release arch_atomic_fetch_dec_release
#endif
-#ifndef arch_atomic_fetch_dec
+#define raw_atomic_and arch_atomic_and
+
+#if defined(arch_atomic_fetch_and)
+#define raw_atomic_fetch_and arch_atomic_fetch_and
+#elif defined(arch_atomic_fetch_and_relaxed)
static __always_inline int
-arch_atomic_fetch_dec(atomic_t *v)
+raw_atomic_fetch_and(int i, atomic_t *v)
{
int ret;
__atomic_pre_full_fence();
- ret = arch_atomic_fetch_dec_relaxed(v);
+ ret = arch_atomic_fetch_and_relaxed(i, v);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic_fetch_dec arch_atomic_fetch_dec
+#else
+#error "Unable to define raw_atomic_fetch_and"
#endif
-#endif /* arch_atomic_fetch_dec_relaxed */
-
-#ifndef arch_atomic_fetch_and_relaxed
-#define arch_atomic_fetch_and_acquire arch_atomic_fetch_and
-#define arch_atomic_fetch_and_release arch_atomic_fetch_and
-#define arch_atomic_fetch_and_relaxed arch_atomic_fetch_and
-#else /* arch_atomic_fetch_and_relaxed */
-
-#ifndef arch_atomic_fetch_and_acquire
+#if defined(arch_atomic_fetch_and_acquire)
+#define raw_atomic_fetch_and_acquire arch_atomic_fetch_and_acquire
+#elif defined(arch_atomic_fetch_and_relaxed)
static __always_inline int
-arch_atomic_fetch_and_acquire(int i, atomic_t *v)
+raw_atomic_fetch_and_acquire(int i, atomic_t *v)
{
int ret = arch_atomic_fetch_and_relaxed(i, v);
__atomic_acquire_fence();
return ret;
}
-#define arch_atomic_fetch_and_acquire arch_atomic_fetch_and_acquire
+#elif defined(arch_atomic_fetch_and)
+#define raw_atomic_fetch_and_acquire arch_atomic_fetch_and
+#else
+#error "Unable to define raw_atomic_fetch_and_acquire"
#endif
-#ifndef arch_atomic_fetch_and_release
+#if defined(arch_atomic_fetch_and_release)
+#define raw_atomic_fetch_and_release arch_atomic_fetch_and_release
+#elif defined(arch_atomic_fetch_and_relaxed)
static __always_inline int
-arch_atomic_fetch_and_release(int i, atomic_t *v)
+raw_atomic_fetch_and_release(int i, atomic_t *v)
{
__atomic_release_fence();
return arch_atomic_fetch_and_relaxed(i, v);
}
-#define arch_atomic_fetch_and_release arch_atomic_fetch_and_release
+#elif defined(arch_atomic_fetch_and)
+#define raw_atomic_fetch_and_release arch_atomic_fetch_and
+#else
+#error "Unable to define raw_atomic_fetch_and_release"
#endif
-#ifndef arch_atomic_fetch_and
+#if defined(arch_atomic_fetch_and_relaxed)
+#define raw_atomic_fetch_and_relaxed arch_atomic_fetch_and_relaxed
+#elif defined(arch_atomic_fetch_and)
+#define raw_atomic_fetch_and_relaxed arch_atomic_fetch_and
+#else
+#error "Unable to define raw_atomic_fetch_and_relaxed"
+#endif
+
+#if defined(arch_atomic_andnot)
+#define raw_atomic_andnot arch_atomic_andnot
+#else
+static __always_inline void
+raw_atomic_andnot(int i, atomic_t *v)
+{
+ raw_atomic_and(~i, v);
+}
+#endif
+
+#if defined(arch_atomic_fetch_andnot)
+#define raw_atomic_fetch_andnot arch_atomic_fetch_andnot
+#elif defined(arch_atomic_fetch_andnot_relaxed)
static __always_inline int
-arch_atomic_fetch_and(int i, atomic_t *v)
+raw_atomic_fetch_andnot(int i, atomic_t *v)
{
int ret;
__atomic_pre_full_fence();
- ret = arch_atomic_fetch_and_relaxed(i, v);
+ ret = arch_atomic_fetch_andnot_relaxed(i, v);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic_fetch_and arch_atomic_fetch_and
-#endif
-
-#endif /* arch_atomic_fetch_and_relaxed */
-
-#ifndef arch_atomic_andnot
-static __always_inline void
-arch_atomic_andnot(int i, atomic_t *v)
+#else
+static __always_inline int
+raw_atomic_fetch_andnot(int i, atomic_t *v)
{
- arch_atomic_and(~i, v);
+ return raw_atomic_fetch_and(~i, v);
}
-#define arch_atomic_andnot arch_atomic_andnot
#endif
-#ifndef arch_atomic_fetch_andnot_relaxed
-#ifdef arch_atomic_fetch_andnot
-#define arch_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot
-#define arch_atomic_fetch_andnot_release arch_atomic_fetch_andnot
-#define arch_atomic_fetch_andnot_relaxed arch_atomic_fetch_andnot
-#endif /* arch_atomic_fetch_andnot */
-
-#ifndef arch_atomic_fetch_andnot
+#if defined(arch_atomic_fetch_andnot_acquire)
+#define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot_acquire
+#elif defined(arch_atomic_fetch_andnot_relaxed)
+static __always_inline int
+raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
+{
+ int ret = arch_atomic_fetch_andnot_relaxed(i, v);
+ __atomic_acquire_fence();
+ return ret;
+}
+#elif defined(arch_atomic_fetch_andnot)
+#define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot
+#else
static __always_inline int
-arch_atomic_fetch_andnot(int i, atomic_t *v)
+raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
{
- return arch_atomic_fetch_and(~i, v);
+ return raw_atomic_fetch_and_acquire(~i, v);
}
-#define arch_atomic_fetch_andnot arch_atomic_fetch_andnot
#endif
-#ifndef arch_atomic_fetch_andnot_acquire
+#if defined(arch_atomic_fetch_andnot_release)
+#define raw_atomic_fetch_andnot_release arch_atomic_fetch_andnot_release
+#elif defined(arch_atomic_fetch_andnot_relaxed)
static __always_inline int
-arch_atomic_fetch_andnot_acquire(int i, atomic_t *v)
+raw_atomic_fetch_andnot_release(int i, atomic_t *v)
{
- return arch_atomic_fetch_and_acquire(~i, v);
+ __atomic_release_fence();
+ return arch_atomic_fetch_andnot_relaxed(i, v);
}
-#define arch_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot_acquire
-#endif
-
-#ifndef arch_atomic_fetch_andnot_release
+#elif defined(arch_atomic_fetch_andnot)
+#define raw_atomic_fetch_andnot_release arch_atomic_fetch_andnot
+#else
static __always_inline int
-arch_atomic_fetch_andnot_release(int i, atomic_t *v)
+raw_atomic_fetch_andnot_release(int i, atomic_t *v)
{
- return arch_atomic_fetch_and_release(~i, v);
+ return raw_atomic_fetch_and_release(~i, v);
}
-#define arch_atomic_fetch_andnot_release arch_atomic_fetch_andnot_release
#endif
-#ifndef arch_atomic_fetch_andnot_relaxed
+#if defined(arch_atomic_fetch_andnot_relaxed)
+#define raw_atomic_fetch_andnot_relaxed arch_atomic_fetch_andnot_relaxed
+#elif defined(arch_atomic_fetch_andnot)
+#define raw_atomic_fetch_andnot_relaxed arch_atomic_fetch_andnot
+#else
static __always_inline int
-arch_atomic_fetch_andnot_relaxed(int i, atomic_t *v)
+raw_atomic_fetch_andnot_relaxed(int i, atomic_t *v)
{
- return arch_atomic_fetch_and_relaxed(~i, v);
+ return raw_atomic_fetch_and_relaxed(~i, v);
}
-#define arch_atomic_fetch_andnot_relaxed arch_atomic_fetch_andnot_relaxed
#endif
-#else /* arch_atomic_fetch_andnot_relaxed */
+#define raw_atomic_or arch_atomic_or
-#ifndef arch_atomic_fetch_andnot_acquire
+#if defined(arch_atomic_fetch_or)
+#define raw_atomic_fetch_or arch_atomic_fetch_or
+#elif defined(arch_atomic_fetch_or_relaxed)
static __always_inline int
-arch_atomic_fetch_andnot_acquire(int i, atomic_t *v)
+raw_atomic_fetch_or(int i, atomic_t *v)
{
- int ret = arch_atomic_fetch_andnot_relaxed(i, v);
- __atomic_acquire_fence();
+ int ret;
+ __atomic_pre_full_fence();
+ ret = arch_atomic_fetch_or_relaxed(i, v);
+ __atomic_post_full_fence();
return ret;
}
-#define arch_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot_acquire
+#else
+#error "Unable to define raw_atomic_fetch_or"
#endif
-#ifndef arch_atomic_fetch_andnot_release
+#if defined(arch_atomic_fetch_or_acquire)
+#define raw_atomic_fetch_or_acquire arch_atomic_fetch_or_acquire
+#elif defined(arch_atomic_fetch_or_relaxed)
static __always_inline int
-arch_atomic_fetch_andnot_release(int i, atomic_t *v)
-{
- __atomic_release_fence();
- return arch_atomic_fetch_andnot_relaxed(i, v);
-}
-#define arch_atomic_fetch_andnot_release arch_atomic_fetch_andnot_release
-#endif
-
-#ifndef arch_atomic_fetch_andnot
-static __always_inline int
-arch_atomic_fetch_andnot(int i, atomic_t *v)
-{
- int ret;
- __atomic_pre_full_fence();
- ret = arch_atomic_fetch_andnot_relaxed(i, v);
- __atomic_post_full_fence();
- return ret;
-}
-#define arch_atomic_fetch_andnot arch_atomic_fetch_andnot
-#endif
-
-#endif /* arch_atomic_fetch_andnot_relaxed */
-
-#ifndef arch_atomic_fetch_or_relaxed
-#define arch_atomic_fetch_or_acquire arch_atomic_fetch_or
-#define arch_atomic_fetch_or_release arch_atomic_fetch_or
-#define arch_atomic_fetch_or_relaxed arch_atomic_fetch_or
-#else /* arch_atomic_fetch_or_relaxed */
-
-#ifndef arch_atomic_fetch_or_acquire
-static __always_inline int
-arch_atomic_fetch_or_acquire(int i, atomic_t *v)
+raw_atomic_fetch_or_acquire(int i, atomic_t *v)
{
int ret = arch_atomic_fetch_or_relaxed(i, v);
__atomic_acquire_fence();
return ret;
}
-#define arch_atomic_fetch_or_acquire arch_atomic_fetch_or_acquire
+#elif defined(arch_atomic_fetch_or)
+#define raw_atomic_fetch_or_acquire arch_atomic_fetch_or
+#else
+#error "Unable to define raw_atomic_fetch_or_acquire"
#endif
-#ifndef arch_atomic_fetch_or_release
+#if defined(arch_atomic_fetch_or_release)
+#define raw_atomic_fetch_or_release arch_atomic_fetch_or_release
+#elif defined(arch_atomic_fetch_or_relaxed)
static __always_inline int
-arch_atomic_fetch_or_release(int i, atomic_t *v)
+raw_atomic_fetch_or_release(int i, atomic_t *v)
{
__atomic_release_fence();
return arch_atomic_fetch_or_relaxed(i, v);
}
-#define arch_atomic_fetch_or_release arch_atomic_fetch_or_release
+#elif defined(arch_atomic_fetch_or)
+#define raw_atomic_fetch_or_release arch_atomic_fetch_or
+#else
+#error "Unable to define raw_atomic_fetch_or_release"
#endif
-#ifndef arch_atomic_fetch_or
+#if defined(arch_atomic_fetch_or_relaxed)
+#define raw_atomic_fetch_or_relaxed arch_atomic_fetch_or_relaxed
+#elif defined(arch_atomic_fetch_or)
+#define raw_atomic_fetch_or_relaxed arch_atomic_fetch_or
+#else
+#error "Unable to define raw_atomic_fetch_or_relaxed"
+#endif
+
+#define raw_atomic_xor arch_atomic_xor
+
+#if defined(arch_atomic_fetch_xor)
+#define raw_atomic_fetch_xor arch_atomic_fetch_xor
+#elif defined(arch_atomic_fetch_xor_relaxed)
static __always_inline int
-arch_atomic_fetch_or(int i, atomic_t *v)
+raw_atomic_fetch_xor(int i, atomic_t *v)
{
int ret;
__atomic_pre_full_fence();
- ret = arch_atomic_fetch_or_relaxed(i, v);
+ ret = arch_atomic_fetch_xor_relaxed(i, v);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic_fetch_or arch_atomic_fetch_or
+#else
+#error "Unable to define raw_atomic_fetch_xor"
#endif
-#endif /* arch_atomic_fetch_or_relaxed */
-
-#ifndef arch_atomic_fetch_xor_relaxed
-#define arch_atomic_fetch_xor_acquire arch_atomic_fetch_xor
-#define arch_atomic_fetch_xor_release arch_atomic_fetch_xor
-#define arch_atomic_fetch_xor_relaxed arch_atomic_fetch_xor
-#else /* arch_atomic_fetch_xor_relaxed */
-
-#ifndef arch_atomic_fetch_xor_acquire
+#if defined(arch_atomic_fetch_xor_acquire)
+#define raw_atomic_fetch_xor_acquire arch_atomic_fetch_xor_acquire
+#elif defined(arch_atomic_fetch_xor_relaxed)
static __always_inline int
-arch_atomic_fetch_xor_acquire(int i, atomic_t *v)
+raw_atomic_fetch_xor_acquire(int i, atomic_t *v)
{
int ret = arch_atomic_fetch_xor_relaxed(i, v);
__atomic_acquire_fence();
return ret;
}
-#define arch_atomic_fetch_xor_acquire arch_atomic_fetch_xor_acquire
+#elif defined(arch_atomic_fetch_xor)
+#define raw_atomic_fetch_xor_acquire arch_atomic_fetch_xor
+#else
+#error "Unable to define raw_atomic_fetch_xor_acquire"
#endif
-#ifndef arch_atomic_fetch_xor_release
+#if defined(arch_atomic_fetch_xor_release)
+#define raw_atomic_fetch_xor_release arch_atomic_fetch_xor_release
+#elif defined(arch_atomic_fetch_xor_relaxed)
static __always_inline int
-arch_atomic_fetch_xor_release(int i, atomic_t *v)
+raw_atomic_fetch_xor_release(int i, atomic_t *v)
{
__atomic_release_fence();
return arch_atomic_fetch_xor_relaxed(i, v);
}
-#define arch_atomic_fetch_xor_release arch_atomic_fetch_xor_release
+#elif defined(arch_atomic_fetch_xor)
+#define raw_atomic_fetch_xor_release arch_atomic_fetch_xor
+#else
+#error "Unable to define raw_atomic_fetch_xor_release"
#endif
-#ifndef arch_atomic_fetch_xor
+#if defined(arch_atomic_fetch_xor_relaxed)
+#define raw_atomic_fetch_xor_relaxed arch_atomic_fetch_xor_relaxed
+#elif defined(arch_atomic_fetch_xor)
+#define raw_atomic_fetch_xor_relaxed arch_atomic_fetch_xor
+#else
+#error "Unable to define raw_atomic_fetch_xor_relaxed"
+#endif
+
+#if defined(arch_atomic_xchg)
+#define raw_atomic_xchg arch_atomic_xchg
+#elif defined(arch_atomic_xchg_relaxed)
static __always_inline int
-arch_atomic_fetch_xor(int i, atomic_t *v)
+raw_atomic_xchg(atomic_t *v, int i)
{
int ret;
__atomic_pre_full_fence();
- ret = arch_atomic_fetch_xor_relaxed(i, v);
+ ret = arch_atomic_xchg_relaxed(v, i);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic_fetch_xor arch_atomic_fetch_xor
-#endif
-
-#endif /* arch_atomic_fetch_xor_relaxed */
-
-#ifndef arch_atomic_xchg_relaxed
-#ifdef arch_atomic_xchg
-#define arch_atomic_xchg_acquire arch_atomic_xchg
-#define arch_atomic_xchg_release arch_atomic_xchg
-#define arch_atomic_xchg_relaxed arch_atomic_xchg
-#endif /* arch_atomic_xchg */
-
-#ifndef arch_atomic_xchg
+#else
static __always_inline int
-arch_atomic_xchg(atomic_t *v, int new)
+raw_atomic_xchg(atomic_t *v, int new)
{
- return arch_xchg(&v->counter, new);
+ return raw_xchg(&v->counter, new);
}
-#define arch_atomic_xchg arch_atomic_xchg
#endif
-#ifndef arch_atomic_xchg_acquire
+#if defined(arch_atomic_xchg_acquire)
+#define raw_atomic_xchg_acquire arch_atomic_xchg_acquire
+#elif defined(arch_atomic_xchg_relaxed)
static __always_inline int
-arch_atomic_xchg_acquire(atomic_t *v, int new)
+raw_atomic_xchg_acquire(atomic_t *v, int i)
{
- return arch_xchg_acquire(&v->counter, new);
+ int ret = arch_atomic_xchg_relaxed(v, i);
+ __atomic_acquire_fence();
+ return ret;
}
-#define arch_atomic_xchg_acquire arch_atomic_xchg_acquire
-#endif
-
-#ifndef arch_atomic_xchg_release
+#elif defined(arch_atomic_xchg)
+#define raw_atomic_xchg_acquire arch_atomic_xchg
+#else
static __always_inline int
-arch_atomic_xchg_release(atomic_t *v, int new)
+raw_atomic_xchg_acquire(atomic_t *v, int new)
{
- return arch_xchg_release(&v->counter, new);
+ return raw_xchg_acquire(&v->counter, new);
}
-#define arch_atomic_xchg_release arch_atomic_xchg_release
#endif
-#ifndef arch_atomic_xchg_relaxed
+#if defined(arch_atomic_xchg_release)
+#define raw_atomic_xchg_release arch_atomic_xchg_release
+#elif defined(arch_atomic_xchg_relaxed)
static __always_inline int
-arch_atomic_xchg_relaxed(atomic_t *v, int new)
+raw_atomic_xchg_release(atomic_t *v, int i)
{
- return arch_xchg_relaxed(&v->counter, new);
+ __atomic_release_fence();
+ return arch_atomic_xchg_relaxed(v, i);
}
-#define arch_atomic_xchg_relaxed arch_atomic_xchg_relaxed
-#endif
-
-#else /* arch_atomic_xchg_relaxed */
-
-#ifndef arch_atomic_xchg_acquire
+#elif defined(arch_atomic_xchg)
+#define raw_atomic_xchg_release arch_atomic_xchg
+#else
static __always_inline int
-arch_atomic_xchg_acquire(atomic_t *v, int i)
+raw_atomic_xchg_release(atomic_t *v, int new)
{
- int ret = arch_atomic_xchg_relaxed(v, i);
- __atomic_acquire_fence();
- return ret;
+ return raw_xchg_release(&v->counter, new);
}
-#define arch_atomic_xchg_acquire arch_atomic_xchg_acquire
#endif
-#ifndef arch_atomic_xchg_release
+#if defined(arch_atomic_xchg_relaxed)
+#define raw_atomic_xchg_relaxed arch_atomic_xchg_relaxed
+#elif defined(arch_atomic_xchg)
+#define raw_atomic_xchg_relaxed arch_atomic_xchg
+#else
static __always_inline int
-arch_atomic_xchg_release(atomic_t *v, int i)
+raw_atomic_xchg_relaxed(atomic_t *v, int new)
{
- __atomic_release_fence();
- return arch_atomic_xchg_relaxed(v, i);
+ return raw_xchg_relaxed(&v->counter, new);
}
-#define arch_atomic_xchg_release arch_atomic_xchg_release
#endif
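
To make the #elif cascade concrete: for a hypothetical architecture that only
implements arch_atomic_xchg_relaxed(), the block above resolves to:

|   raw_atomic_xchg(v, i)         -> relaxed op bracketed by the full fences
|   raw_atomic_xchg_acquire(v, i) -> relaxed op, then __atomic_acquire_fence()
|   raw_atomic_xchg_release(v, i) -> __atomic_release_fence(), then relaxed op
|   raw_atomic_xchg_relaxed(v, i) -> arch_atomic_xchg_relaxed(v, i)

Only when no arch_atomic_xchg*() op exists at all do we fall back to the
raw_xchg*() ops on the counter member, as in the final #else branches.
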
-#ifndef arch_atomic_xchg
+#if defined(arch_atomic_cmpxchg)
+#define raw_atomic_cmpxchg arch_atomic_cmpxchg
+#elif defined(arch_atomic_cmpxchg_relaxed)
static __always_inline int
-arch_atomic_xchg(atomic_t *v, int i)
+raw_atomic_cmpxchg(atomic_t *v, int old, int new)
{
int ret;
__atomic_pre_full_fence();
- ret = arch_atomic_xchg_relaxed(v, i);
+ ret = arch_atomic_cmpxchg_relaxed(v, old, new);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic_xchg arch_atomic_xchg
-#endif
-
-#endif /* arch_atomic_xchg_relaxed */
-
-#ifndef arch_atomic_cmpxchg_relaxed
-#ifdef arch_atomic_cmpxchg
-#define arch_atomic_cmpxchg_acquire arch_atomic_cmpxchg
-#define arch_atomic_cmpxchg_release arch_atomic_cmpxchg
-#define arch_atomic_cmpxchg_relaxed arch_atomic_cmpxchg
-#endif /* arch_atomic_cmpxchg */
-
-#ifndef arch_atomic_cmpxchg
+#else
static __always_inline int
-arch_atomic_cmpxchg(atomic_t *v, int old, int new)
+raw_atomic_cmpxchg(atomic_t *v, int old, int new)
{
- return arch_cmpxchg(&v->counter, old, new);
+ return raw_cmpxchg(&v->counter, old, new);
}
-#define arch_atomic_cmpxchg arch_atomic_cmpxchg
#endif
-#ifndef arch_atomic_cmpxchg_acquire
+#if defined(arch_atomic_cmpxchg_acquire)
+#define raw_atomic_cmpxchg_acquire arch_atomic_cmpxchg_acquire
+#elif defined(arch_atomic_cmpxchg_relaxed)
static __always_inline int
-arch_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
+raw_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
{
- return arch_cmpxchg_acquire(&v->counter, old, new);
+ int ret = arch_atomic_cmpxchg_relaxed(v, old, new);
+ __atomic_acquire_fence();
+ return ret;
}
-#define arch_atomic_cmpxchg_acquire arch_atomic_cmpxchg_acquire
-#endif
-
-#ifndef arch_atomic_cmpxchg_release
+#elif defined(arch_atomic_cmpxchg)
+#define raw_atomic_cmpxchg_acquire arch_atomic_cmpxchg
+#else
static __always_inline int
-arch_atomic_cmpxchg_release(atomic_t *v, int old, int new)
+raw_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
{
- return arch_cmpxchg_release(&v->counter, old, new);
+ return raw_cmpxchg_acquire(&v->counter, old, new);
}
-#define arch_atomic_cmpxchg_release arch_atomic_cmpxchg_release
#endif
-#ifndef arch_atomic_cmpxchg_relaxed
+#if defined(arch_atomic_cmpxchg_release)
+#define raw_atomic_cmpxchg_release arch_atomic_cmpxchg_release
+#elif defined(arch_atomic_cmpxchg_relaxed)
static __always_inline int
-arch_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
+raw_atomic_cmpxchg_release(atomic_t *v, int old, int new)
{
- return arch_cmpxchg_relaxed(&v->counter, old, new);
+ __atomic_release_fence();
+ return arch_atomic_cmpxchg_relaxed(v, old, new);
}
-#define arch_atomic_cmpxchg_relaxed arch_atomic_cmpxchg_relaxed
-#endif
-
-#else /* arch_atomic_cmpxchg_relaxed */
-
-#ifndef arch_atomic_cmpxchg_acquire
+#elif defined(arch_atomic_cmpxchg)
+#define raw_atomic_cmpxchg_release arch_atomic_cmpxchg
+#else
static __always_inline int
-arch_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
+raw_atomic_cmpxchg_release(atomic_t *v, int old, int new)
{
- int ret = arch_atomic_cmpxchg_relaxed(v, old, new);
- __atomic_acquire_fence();
- return ret;
+ return raw_cmpxchg_release(&v->counter, old, new);
}
-#define arch_atomic_cmpxchg_acquire arch_atomic_cmpxchg_acquire
#endif
-#ifndef arch_atomic_cmpxchg_release
+#if defined(arch_atomic_cmpxchg_relaxed)
+#define raw_atomic_cmpxchg_relaxed arch_atomic_cmpxchg_relaxed
+#elif defined(arch_atomic_cmpxchg)
+#define raw_atomic_cmpxchg_relaxed arch_atomic_cmpxchg
+#else
static __always_inline int
-arch_atomic_cmpxchg_release(atomic_t *v, int old, int new)
+raw_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
{
- __atomic_release_fence();
- return arch_atomic_cmpxchg_relaxed(v, old, new);
+ return raw_cmpxchg_relaxed(&v->counter, old, new);
}
-#define arch_atomic_cmpxchg_release arch_atomic_cmpxchg_release
#endif
-#ifndef arch_atomic_cmpxchg
-static __always_inline int
-arch_atomic_cmpxchg(atomic_t *v, int old, int new)
+#if defined(arch_atomic_try_cmpxchg)
+#define raw_atomic_try_cmpxchg arch_atomic_try_cmpxchg
+#elif defined(arch_atomic_try_cmpxchg_relaxed)
+static __always_inline bool
+raw_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
{
- int ret;
+ bool ret;
__atomic_pre_full_fence();
- ret = arch_atomic_cmpxchg_relaxed(v, old, new);
+ ret = arch_atomic_try_cmpxchg_relaxed(v, old, new);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic_cmpxchg arch_atomic_cmpxchg
-#endif
-
-#endif /* arch_atomic_cmpxchg_relaxed */
-
-#ifndef arch_atomic_try_cmpxchg_relaxed
-#ifdef arch_atomic_try_cmpxchg
-#define arch_atomic_try_cmpxchg_acquire arch_atomic_try_cmpxchg
-#define arch_atomic_try_cmpxchg_release arch_atomic_try_cmpxchg
-#define arch_atomic_try_cmpxchg_relaxed arch_atomic_try_cmpxchg
-#endif /* arch_atomic_try_cmpxchg */
-
-#ifndef arch_atomic_try_cmpxchg
+#else
static __always_inline bool
-arch_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
+raw_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
{
int r, o = *old;
- r = arch_atomic_cmpxchg(v, o, new);
+ r = raw_atomic_cmpxchg(v, o, new);
if (unlikely(r != o))
*old = r;
return likely(r == o);
}
-#define arch_atomic_try_cmpxchg arch_atomic_try_cmpxchg
#endif
-#ifndef arch_atomic_try_cmpxchg_acquire
+#if defined(arch_atomic_try_cmpxchg_acquire)
+#define raw_atomic_try_cmpxchg_acquire arch_atomic_try_cmpxchg_acquire
+#elif defined(arch_atomic_try_cmpxchg_relaxed)
+static __always_inline bool
+raw_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
+{
+ bool ret = arch_atomic_try_cmpxchg_relaxed(v, old, new);
+ __atomic_acquire_fence();
+ return ret;
+}
+#elif defined(arch_atomic_try_cmpxchg)
+#define raw_atomic_try_cmpxchg_acquire arch_atomic_try_cmpxchg
+#else
static __always_inline bool
-arch_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
+raw_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
{
int r, o = *old;
- r = arch_atomic_cmpxchg_acquire(v, o, new);
+ r = raw_atomic_cmpxchg_acquire(v, o, new);
if (unlikely(r != o))
*old = r;
return likely(r == o);
}
-#define arch_atomic_try_cmpxchg_acquire arch_atomic_try_cmpxchg_acquire
#endif
-#ifndef arch_atomic_try_cmpxchg_release
+#if defined(arch_atomic_try_cmpxchg_release)
+#define raw_atomic_try_cmpxchg_release arch_atomic_try_cmpxchg_release
+#elif defined(arch_atomic_try_cmpxchg_relaxed)
static __always_inline bool
-arch_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
+raw_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
+{
+ __atomic_release_fence();
+ return arch_atomic_try_cmpxchg_relaxed(v, old, new);
+}
+#elif defined(arch_atomic_try_cmpxchg)
+#define raw_atomic_try_cmpxchg_release arch_atomic_try_cmpxchg
+#else
+static __always_inline bool
+raw_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
{
int r, o = *old;
- r = arch_atomic_cmpxchg_release(v, o, new);
+ r = raw_atomic_cmpxchg_release(v, o, new);
if (unlikely(r != o))
*old = r;
return likely(r == o);
}
-#define arch_atomic_try_cmpxchg_release arch_atomic_try_cmpxchg_release
#endif
-#ifndef arch_atomic_try_cmpxchg_relaxed
+#if defined(arch_atomic_try_cmpxchg_relaxed)
+#define raw_atomic_try_cmpxchg_relaxed arch_atomic_try_cmpxchg_relaxed
+#elif defined(arch_atomic_try_cmpxchg)
+#define raw_atomic_try_cmpxchg_relaxed arch_atomic_try_cmpxchg
+#else
static __always_inline bool
-arch_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
+raw_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
{
int r, o = *old;
- r = arch_atomic_cmpxchg_relaxed(v, o, new);
+ r = raw_atomic_cmpxchg_relaxed(v, o, new);
if (unlikely(r != o))
*old = r;
return likely(r == o);
}
-#define arch_atomic_try_cmpxchg_relaxed arch_atomic_try_cmpxchg_relaxed
-#endif
-
-#else /* arch_atomic_try_cmpxchg_relaxed */
-
-#ifndef arch_atomic_try_cmpxchg_acquire
-static __always_inline bool
-arch_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
-{
- bool ret = arch_atomic_try_cmpxchg_relaxed(v, old, new);
- __atomic_acquire_fence();
- return ret;
-}
-#define arch_atomic_try_cmpxchg_acquire arch_atomic_try_cmpxchg_acquire
-#endif
-
-#ifndef arch_atomic_try_cmpxchg_release
-static __always_inline bool
-arch_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
-{
- __atomic_release_fence();
- return arch_atomic_try_cmpxchg_relaxed(v, old, new);
-}
-#define arch_atomic_try_cmpxchg_release arch_atomic_try_cmpxchg_release
-#endif
-
-#ifndef arch_atomic_try_cmpxchg
-static __always_inline bool
-arch_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
-{
- bool ret;
- __atomic_pre_full_fence();
- ret = arch_atomic_try_cmpxchg_relaxed(v, old, new);
- __atomic_post_full_fence();
- return ret;
-}
-#define arch_atomic_try_cmpxchg arch_atomic_try_cmpxchg
#endif
-#endif /* arch_atomic_try_cmpxchg_relaxed */
-
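
As with the instrumented form, raw_atomic_try_cmpxchg() updates *old on
failure, so it can drive a retry loop without an explicit re-read. A
hypothetical noinstr-safe helper (names invented for illustration only):

| static __always_inline void my_noinstr_inc_saturated(atomic_t *v)
| {
|         int old = raw_atomic_read(v);
|
|         do {
|                 if (old == INT_MAX)
|                         return;
|         } while (!raw_atomic_try_cmpxchg(v, &old, old + 1));
| }
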
-#ifndef arch_atomic_sub_and_test
+#if defined(arch_atomic_sub_and_test)
+#define raw_atomic_sub_and_test arch_atomic_sub_and_test
+#else
static __always_inline bool
-arch_atomic_sub_and_test(int i, atomic_t *v)
+raw_atomic_sub_and_test(int i, atomic_t *v)
{
- return arch_atomic_sub_return(i, v) == 0;
+ return raw_atomic_sub_return(i, v) == 0;
}
-#define arch_atomic_sub_and_test arch_atomic_sub_and_test
#endif
-#ifndef arch_atomic_dec_and_test
+#if defined(arch_atomic_dec_and_test)
+#define raw_atomic_dec_and_test arch_atomic_dec_and_test
+#else
static __always_inline bool
-arch_atomic_dec_and_test(atomic_t *v)
+raw_atomic_dec_and_test(atomic_t *v)
{
- return arch_atomic_dec_return(v) == 0;
+ return raw_atomic_dec_return(v) == 0;
}
-#define arch_atomic_dec_and_test arch_atomic_dec_and_test
#endif
-#ifndef arch_atomic_inc_and_test
+#if defined(arch_atomic_inc_and_test)
+#define raw_atomic_inc_and_test arch_atomic_inc_and_test
+#else
static __always_inline bool
-arch_atomic_inc_and_test(atomic_t *v)
+raw_atomic_inc_and_test(atomic_t *v)
{
- return arch_atomic_inc_return(v) == 0;
+ return raw_atomic_inc_return(v) == 0;
}
-#define arch_atomic_inc_and_test arch_atomic_inc_and_test
#endif
-#ifndef arch_atomic_add_negative_relaxed
-#ifdef arch_atomic_add_negative
-#define arch_atomic_add_negative_acquire arch_atomic_add_negative
-#define arch_atomic_add_negative_release arch_atomic_add_negative
-#define arch_atomic_add_negative_relaxed arch_atomic_add_negative
-#endif /* arch_atomic_add_negative */
-
-#ifndef arch_atomic_add_negative
+#if defined(arch_atomic_add_negative)
+#define raw_atomic_add_negative arch_atomic_add_negative
+#elif defined(arch_atomic_add_negative_relaxed)
static __always_inline bool
-arch_atomic_add_negative(int i, atomic_t *v)
+raw_atomic_add_negative(int i, atomic_t *v)
{
- return arch_atomic_add_return(i, v) < 0;
+ bool ret;
+ __atomic_pre_full_fence();
+ ret = arch_atomic_add_negative_relaxed(i, v);
+ __atomic_post_full_fence();
+ return ret;
}
-#define arch_atomic_add_negative arch_atomic_add_negative
-#endif
-
-#ifndef arch_atomic_add_negative_acquire
+#else
static __always_inline bool
-arch_atomic_add_negative_acquire(int i, atomic_t *v)
+raw_atomic_add_negative(int i, atomic_t *v)
{
- return arch_atomic_add_return_acquire(i, v) < 0;
+ return raw_atomic_add_return(i, v) < 0;
}
-#define arch_atomic_add_negative_acquire arch_atomic_add_negative_acquire
#endif
-#ifndef arch_atomic_add_negative_release
+#if defined(arch_atomic_add_negative_acquire)
+#define raw_atomic_add_negative_acquire arch_atomic_add_negative_acquire
+#elif defined(arch_atomic_add_negative_relaxed)
static __always_inline bool
-arch_atomic_add_negative_release(int i, atomic_t *v)
+raw_atomic_add_negative_acquire(int i, atomic_t *v)
{
- return arch_atomic_add_return_release(i, v) < 0;
+ bool ret = arch_atomic_add_negative_relaxed(i, v);
+ __atomic_acquire_fence();
+ return ret;
}
-#define arch_atomic_add_negative_release arch_atomic_add_negative_release
-#endif
-
-#ifndef arch_atomic_add_negative_relaxed
+#elif defined(arch_atomic_add_negative)
+#define raw_atomic_add_negative_acquire arch_atomic_add_negative
+#else
static __always_inline bool
-arch_atomic_add_negative_relaxed(int i, atomic_t *v)
+raw_atomic_add_negative_acquire(int i, atomic_t *v)
{
- return arch_atomic_add_return_relaxed(i, v) < 0;
+ return raw_atomic_add_return_acquire(i, v) < 0;
}
-#define arch_atomic_add_negative_relaxed arch_atomic_add_negative_relaxed
#endif
-#else /* arch_atomic_add_negative_relaxed */
-
-#ifndef arch_atomic_add_negative_acquire
+#if defined(arch_atomic_add_negative_release)
+#define raw_atomic_add_negative_release arch_atomic_add_negative_release
+#elif defined(arch_atomic_add_negative_relaxed)
static __always_inline bool
-arch_atomic_add_negative_acquire(int i, atomic_t *v)
+raw_atomic_add_negative_release(int i, atomic_t *v)
{
- bool ret = arch_atomic_add_negative_relaxed(i, v);
- __atomic_acquire_fence();
- return ret;
+ __atomic_release_fence();
+ return arch_atomic_add_negative_relaxed(i, v);
}
-#define arch_atomic_add_negative_acquire arch_atomic_add_negative_acquire
-#endif
-
-#ifndef arch_atomic_add_negative_release
+#elif defined(arch_atomic_add_negative)
+#define raw_atomic_add_negative_release arch_atomic_add_negative
+#else
static __always_inline bool
-arch_atomic_add_negative_release(int i, atomic_t *v)
+raw_atomic_add_negative_release(int i, atomic_t *v)
{
- __atomic_release_fence();
- return arch_atomic_add_negative_relaxed(i, v);
+ return raw_atomic_add_return_release(i, v) < 0;
}
-#define arch_atomic_add_negative_release arch_atomic_add_negative_release
#endif
-#ifndef arch_atomic_add_negative
+#if defined(arch_atomic_add_negative_relaxed)
+#define raw_atomic_add_negative_relaxed arch_atomic_add_negative_relaxed
+#elif defined(arch_atomic_add_negative)
+#define raw_atomic_add_negative_relaxed arch_atomic_add_negative
+#else
static __always_inline bool
-arch_atomic_add_negative(int i, atomic_t *v)
+raw_atomic_add_negative_relaxed(int i, atomic_t *v)
{
- bool ret;
- __atomic_pre_full_fence();
- ret = arch_atomic_add_negative_relaxed(i, v);
- __atomic_post_full_fence();
- return ret;
+ return raw_atomic_add_return_relaxed(i, v) < 0;
}
-#define arch_atomic_add_negative arch_atomic_add_negative
#endif
-#endif /* arch_atomic_add_negative_relaxed */
-
-#ifndef arch_atomic_fetch_add_unless
+#if defined(arch_atomic_fetch_add_unless)
+#define raw_atomic_fetch_add_unless arch_atomic_fetch_add_unless
+#else
static __always_inline int
-arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
+raw_atomic_fetch_add_unless(atomic_t *v, int a, int u)
{
- int c = arch_atomic_read(v);
+ int c = raw_atomic_read(v);
do {
if (unlikely(c == u))
break;
- } while (!arch_atomic_try_cmpxchg(v, &c, c + a));
+ } while (!raw_atomic_try_cmpxchg(v, &c, c + a));
return c;
}
-#define arch_atomic_fetch_add_unless arch_atomic_fetch_add_unless
#endif
-#ifndef arch_atomic_add_unless
+#if defined(arch_atomic_add_unless)
+#define raw_atomic_add_unless arch_atomic_add_unless
+#else
static __always_inline bool
-arch_atomic_add_unless(atomic_t *v, int a, int u)
+raw_atomic_add_unless(atomic_t *v, int a, int u)
{
- return arch_atomic_fetch_add_unless(v, a, u) != u;
+ return raw_atomic_fetch_add_unless(v, a, u) != u;
}
-#define arch_atomic_add_unless arch_atomic_add_unless
#endif
-#ifndef arch_atomic_inc_not_zero
+#if defined(arch_atomic_inc_not_zero)
+#define raw_atomic_inc_not_zero arch_atomic_inc_not_zero
+#else
static __always_inline bool
-arch_atomic_inc_not_zero(atomic_t *v)
+raw_atomic_inc_not_zero(atomic_t *v)
{
- return arch_atomic_add_unless(v, 1, 0);
+ return raw_atomic_add_unless(v, 1, 0);
}
-#define arch_atomic_inc_not_zero arch_atomic_inc_not_zero
#endif
-#ifndef arch_atomic_inc_unless_negative
+#if defined(arch_atomic_inc_unless_negative)
+#define raw_atomic_inc_unless_negative arch_atomic_inc_unless_negative
+#else
static __always_inline bool
-arch_atomic_inc_unless_negative(atomic_t *v)
+raw_atomic_inc_unless_negative(atomic_t *v)
{
- int c = arch_atomic_read(v);
+ int c = raw_atomic_read(v);
do {
if (unlikely(c < 0))
return false;
- } while (!arch_atomic_try_cmpxchg(v, &c, c + 1));
+ } while (!raw_atomic_try_cmpxchg(v, &c, c + 1));
return true;
}
-#define arch_atomic_inc_unless_negative arch_atomic_inc_unless_negative
#endif
-#ifndef arch_atomic_dec_unless_positive
+#if defined(arch_atomic_dec_unless_positive)
+#define raw_atomic_dec_unless_positive arch_atomic_dec_unless_positive
+#else
static __always_inline bool
-arch_atomic_dec_unless_positive(atomic_t *v)
+raw_atomic_dec_unless_positive(atomic_t *v)
{
- int c = arch_atomic_read(v);
+ int c = raw_atomic_read(v);
do {
if (unlikely(c > 0))
return false;
- } while (!arch_atomic_try_cmpxchg(v, &c, c - 1));
+ } while (!raw_atomic_try_cmpxchg(v, &c, c - 1));
return true;
}
-#define arch_atomic_dec_unless_positive arch_atomic_dec_unless_positive
#endif
-#ifndef arch_atomic_dec_if_positive
+#if defined(arch_atomic_dec_if_positive)
+#define raw_atomic_dec_if_positive arch_atomic_dec_if_positive
+#else
static __always_inline int
-arch_atomic_dec_if_positive(atomic_t *v)
+raw_atomic_dec_if_positive(atomic_t *v)
{
- int dec, c = arch_atomic_read(v);
+ int dec, c = raw_atomic_read(v);
do {
dec = c - 1;
if (unlikely(dec < 0))
break;
- } while (!arch_atomic_try_cmpxchg(v, &c, dec));
+ } while (!raw_atomic_try_cmpxchg(v, &c, dec));
return dec;
}
-#define arch_atomic_dec_if_positive arch_atomic_dec_if_positive
#endif
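
Note that the dec_if_positive() fallback returns the decremented value, and
returns a negative value (without modifying @v) when the counter was already
zero or below; a hypothetical caller (names invented for illustration only):

| /* drop one user; free on the 1 -> 0 transition */
| if (raw_atomic_dec_if_positive(&obj->users) == 0)
|         my_free_object(obj);
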
#ifdef CONFIG_GENERIC_ATOMIC64
#include <asm-generic/atomic64.h>
#endif
-#ifndef arch_atomic64_read_acquire
+#define raw_atomic64_read arch_atomic64_read
+
+#if defined(arch_atomic64_read_acquire)
+#define raw_atomic64_read_acquire arch_atomic64_read_acquire
+#elif defined(arch_atomic64_read)
+#define raw_atomic64_read_acquire arch_atomic64_read
+#else
static __always_inline s64
-arch_atomic64_read_acquire(const atomic64_t *v)
+raw_atomic64_read_acquire(const atomic64_t *v)
{
s64 ret;
if (__native_word(atomic64_t)) {
ret = smp_load_acquire(&(v)->counter);
} else {
- ret = arch_atomic64_read(v);
+ ret = raw_atomic64_read(v);
__atomic_acquire_fence();
}
return ret;
}
-#define arch_atomic64_read_acquire arch_atomic64_read_acquire
#endif
-#ifndef arch_atomic64_set_release
+#define raw_atomic64_set arch_atomic64_set
+
+#if defined(arch_atomic64_set_release)
+#define raw_atomic64_set_release arch_atomic64_set_release
+#elif defined(arch_atomic64_set)
+#define raw_atomic64_set_release arch_atomic64_set
+#else
static __always_inline void
-arch_atomic64_set_release(atomic64_t *v, s64 i)
+raw_atomic64_set_release(atomic64_t *v, s64 i)
{
if (__native_word(atomic64_t)) {
smp_store_release(&(v)->counter, i);
} else {
__atomic_release_fence();
- arch_atomic64_set(v, i);
+ raw_atomic64_set(v, i);
}
}
-#define arch_atomic64_set_release arch_atomic64_set_release
#endif
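
The read_acquire()/set_release() fallbacks only use smp_load_acquire() and
smp_store_release() when atomic64_t is a native machine word; on 32-bit
architectures it is wider than a long, so the explicit-fence path is used
there. For reference, __native_word() is (roughly) the size check from
<linux/compiler_types.h>:

| #define __native_word(t) \
|         (sizeof(t) == sizeof(char) || sizeof(t) == sizeof(short) || \
|          sizeof(t) == sizeof(int) || sizeof(t) == sizeof(long))
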
-#ifndef arch_atomic64_add_return_relaxed
-#define arch_atomic64_add_return_acquire arch_atomic64_add_return
-#define arch_atomic64_add_return_release arch_atomic64_add_return
-#define arch_atomic64_add_return_relaxed arch_atomic64_add_return
-#else /* arch_atomic64_add_return_relaxed */
+#define raw_atomic64_add arch_atomic64_add
+
+#if defined(arch_atomic64_add_return)
+#define raw_atomic64_add_return arch_atomic64_add_return
+#elif defined(arch_atomic64_add_return_relaxed)
+static __always_inline s64
+raw_atomic64_add_return(s64 i, atomic64_t *v)
+{
+ s64 ret;
+ __atomic_pre_full_fence();
+ ret = arch_atomic64_add_return_relaxed(i, v);
+ __atomic_post_full_fence();
+ return ret;
+}
+#else
+#error "Unable to define raw_atomic64_add_return"
+#endif
-#ifndef arch_atomic64_add_return_acquire
+#if defined(arch_atomic64_add_return_acquire)
+#define raw_atomic64_add_return_acquire arch_atomic64_add_return_acquire
+#elif defined(arch_atomic64_add_return_relaxed)
static __always_inline s64
-arch_atomic64_add_return_acquire(s64 i, atomic64_t *v)
+raw_atomic64_add_return_acquire(s64 i, atomic64_t *v)
{
s64 ret = arch_atomic64_add_return_relaxed(i, v);
__atomic_acquire_fence();
return ret;
}
-#define arch_atomic64_add_return_acquire arch_atomic64_add_return_acquire
+#elif defined(arch_atomic64_add_return)
+#define raw_atomic64_add_return_acquire arch_atomic64_add_return
+#else
+#error "Unable to define raw_atomic64_add_return_acquire"
#endif
-#ifndef arch_atomic64_add_return_release
+#if defined(arch_atomic64_add_return_release)
+#define raw_atomic64_add_return_release arch_atomic64_add_return_release
+#elif defined(arch_atomic64_add_return_relaxed)
static __always_inline s64
-arch_atomic64_add_return_release(s64 i, atomic64_t *v)
+raw_atomic64_add_return_release(s64 i, atomic64_t *v)
{
__atomic_release_fence();
return arch_atomic64_add_return_relaxed(i, v);
}
-#define arch_atomic64_add_return_release arch_atomic64_add_return_release
+#elif defined(arch_atomic64_add_return)
+#define raw_atomic64_add_return_release arch_atomic64_add_return
+#else
+#error "Unable to define raw_atomic64_add_return_release"
+#endif
+
+#if defined(arch_atomic64_add_return_relaxed)
+#define raw_atomic64_add_return_relaxed arch_atomic64_add_return_relaxed
+#elif defined(arch_atomic64_add_return)
+#define raw_atomic64_add_return_relaxed arch_atomic64_add_return
+#else
+#error "Unable to define raw_atomic64_add_return_relaxed"
#endif
-#ifndef arch_atomic64_add_return
+#if defined(arch_atomic64_fetch_add)
+#define raw_atomic64_fetch_add arch_atomic64_fetch_add
+#elif defined(arch_atomic64_fetch_add_relaxed)
static __always_inline s64
-arch_atomic64_add_return(s64 i, atomic64_t *v)
+raw_atomic64_fetch_add(s64 i, atomic64_t *v)
{
s64 ret;
__atomic_pre_full_fence();
- ret = arch_atomic64_add_return_relaxed(i, v);
+ ret = arch_atomic64_fetch_add_relaxed(i, v);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic64_add_return arch_atomic64_add_return
+#else
+#error "Unable to define raw_atomic64_fetch_add"
#endif
-#endif /* arch_atomic64_add_return_relaxed */
-
-#ifndef arch_atomic64_fetch_add_relaxed
-#define arch_atomic64_fetch_add_acquire arch_atomic64_fetch_add
-#define arch_atomic64_fetch_add_release arch_atomic64_fetch_add
-#define arch_atomic64_fetch_add_relaxed arch_atomic64_fetch_add
-#else /* arch_atomic64_fetch_add_relaxed */
-
-#ifndef arch_atomic64_fetch_add_acquire
+#if defined(arch_atomic64_fetch_add_acquire)
+#define raw_atomic64_fetch_add_acquire arch_atomic64_fetch_add_acquire
+#elif defined(arch_atomic64_fetch_add_relaxed)
static __always_inline s64
-arch_atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
+raw_atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
{
s64 ret = arch_atomic64_fetch_add_relaxed(i, v);
__atomic_acquire_fence();
return ret;
}
-#define arch_atomic64_fetch_add_acquire arch_atomic64_fetch_add_acquire
+#elif defined(arch_atomic64_fetch_add)
+#define raw_atomic64_fetch_add_acquire arch_atomic64_fetch_add
+#else
+#error "Unable to define raw_atomic64_fetch_add_acquire"
#endif
-#ifndef arch_atomic64_fetch_add_release
+#if defined(arch_atomic64_fetch_add_release)
+#define raw_atomic64_fetch_add_release arch_atomic64_fetch_add_release
+#elif defined(arch_atomic64_fetch_add_relaxed)
static __always_inline s64
-arch_atomic64_fetch_add_release(s64 i, atomic64_t *v)
+raw_atomic64_fetch_add_release(s64 i, atomic64_t *v)
{
__atomic_release_fence();
return arch_atomic64_fetch_add_relaxed(i, v);
}
-#define arch_atomic64_fetch_add_release arch_atomic64_fetch_add_release
+#elif defined(arch_atomic64_fetch_add)
+#define raw_atomic64_fetch_add_release arch_atomic64_fetch_add
+#else
+#error "Unable to define raw_atomic64_fetch_add_release"
+#endif
+
+#if defined(arch_atomic64_fetch_add_relaxed)
+#define raw_atomic64_fetch_add_relaxed arch_atomic64_fetch_add_relaxed
+#elif defined(arch_atomic64_fetch_add)
+#define raw_atomic64_fetch_add_relaxed arch_atomic64_fetch_add
+#else
+#error "Unable to define raw_atomic64_fetch_add_relaxed"
#endif
-#ifndef arch_atomic64_fetch_add
+#define raw_atomic64_sub arch_atomic64_sub
+
+#if defined(arch_atomic64_sub_return)
+#define raw_atomic64_sub_return arch_atomic64_sub_return
+#elif defined(arch_atomic64_sub_return_relaxed)
static __always_inline s64
-arch_atomic64_fetch_add(s64 i, atomic64_t *v)
+raw_atomic64_sub_return(s64 i, atomic64_t *v)
{
s64 ret;
__atomic_pre_full_fence();
- ret = arch_atomic64_fetch_add_relaxed(i, v);
+ ret = arch_atomic64_sub_return_relaxed(i, v);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic64_fetch_add arch_atomic64_fetch_add
+#else
+#error "Unable to define raw_atomic64_sub_return"
#endif
-#endif /* arch_atomic64_fetch_add_relaxed */
-
-#ifndef arch_atomic64_sub_return_relaxed
-#define arch_atomic64_sub_return_acquire arch_atomic64_sub_return
-#define arch_atomic64_sub_return_release arch_atomic64_sub_return
-#define arch_atomic64_sub_return_relaxed arch_atomic64_sub_return
-#else /* arch_atomic64_sub_return_relaxed */
-
-#ifndef arch_atomic64_sub_return_acquire
+#if defined(arch_atomic64_sub_return_acquire)
+#define raw_atomic64_sub_return_acquire arch_atomic64_sub_return_acquire
+#elif defined(arch_atomic64_sub_return_relaxed)
static __always_inline s64
-arch_atomic64_sub_return_acquire(s64 i, atomic64_t *v)
+raw_atomic64_sub_return_acquire(s64 i, atomic64_t *v)
{
s64 ret = arch_atomic64_sub_return_relaxed(i, v);
__atomic_acquire_fence();
return ret;
}
-#define arch_atomic64_sub_return_acquire arch_atomic64_sub_return_acquire
+#elif defined(arch_atomic64_sub_return)
+#define raw_atomic64_sub_return_acquire arch_atomic64_sub_return
+#else
+#error "Unable to define raw_atomic64_sub_return_acquire"
#endif
-#ifndef arch_atomic64_sub_return_release
+#if defined(arch_atomic64_sub_return_release)
+#define raw_atomic64_sub_return_release arch_atomic64_sub_return_release
+#elif defined(arch_atomic64_sub_return_relaxed)
static __always_inline s64
-arch_atomic64_sub_return_release(s64 i, atomic64_t *v)
+raw_atomic64_sub_return_release(s64 i, atomic64_t *v)
{
__atomic_release_fence();
return arch_atomic64_sub_return_relaxed(i, v);
}
-#define arch_atomic64_sub_return_release arch_atomic64_sub_return_release
+#elif defined(arch_atomic64_sub_return)
+#define raw_atomic64_sub_return_release arch_atomic64_sub_return
+#else
+#error "Unable to define raw_atomic64_sub_return_release"
+#endif
+
+#if defined(arch_atomic64_sub_return_relaxed)
+#define raw_atomic64_sub_return_relaxed arch_atomic64_sub_return_relaxed
+#elif defined(arch_atomic64_sub_return)
+#define raw_atomic64_sub_return_relaxed arch_atomic64_sub_return
+#else
+#error "Unable to define raw_atomic64_sub_return_relaxed"
#endif
-#ifndef arch_atomic64_sub_return
+#if defined(arch_atomic64_fetch_sub)
+#define raw_atomic64_fetch_sub arch_atomic64_fetch_sub
+#elif defined(arch_atomic64_fetch_sub_relaxed)
static __always_inline s64
-arch_atomic64_sub_return(s64 i, atomic64_t *v)
+raw_atomic64_fetch_sub(s64 i, atomic64_t *v)
{
s64 ret;
__atomic_pre_full_fence();
- ret = arch_atomic64_sub_return_relaxed(i, v);
+ ret = arch_atomic64_fetch_sub_relaxed(i, v);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic64_sub_return arch_atomic64_sub_return
+#else
+#error "Unable to define raw_atomic64_fetch_sub"
#endif
-#endif /* arch_atomic64_sub_return_relaxed */
-
-#ifndef arch_atomic64_fetch_sub_relaxed
-#define arch_atomic64_fetch_sub_acquire arch_atomic64_fetch_sub
-#define arch_atomic64_fetch_sub_release arch_atomic64_fetch_sub
-#define arch_atomic64_fetch_sub_relaxed arch_atomic64_fetch_sub
-#else /* arch_atomic64_fetch_sub_relaxed */
-
-#ifndef arch_atomic64_fetch_sub_acquire
+#if defined(arch_atomic64_fetch_sub_acquire)
+#define raw_atomic64_fetch_sub_acquire arch_atomic64_fetch_sub_acquire
+#elif defined(arch_atomic64_fetch_sub_relaxed)
static __always_inline s64
-arch_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
+raw_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
{
s64 ret = arch_atomic64_fetch_sub_relaxed(i, v);
__atomic_acquire_fence();
return ret;
}
-#define arch_atomic64_fetch_sub_acquire arch_atomic64_fetch_sub_acquire
+#elif defined(arch_atomic64_fetch_sub)
+#define raw_atomic64_fetch_sub_acquire arch_atomic64_fetch_sub
+#else
+#error "Unable to define raw_atomic64_fetch_sub_acquire"
#endif
-#ifndef arch_atomic64_fetch_sub_release
+#if defined(arch_atomic64_fetch_sub_release)
+#define raw_atomic64_fetch_sub_release arch_atomic64_fetch_sub_release
+#elif defined(arch_atomic64_fetch_sub_relaxed)
static __always_inline s64
-arch_atomic64_fetch_sub_release(s64 i, atomic64_t *v)
+raw_atomic64_fetch_sub_release(s64 i, atomic64_t *v)
{
__atomic_release_fence();
return arch_atomic64_fetch_sub_relaxed(i, v);
}
-#define arch_atomic64_fetch_sub_release arch_atomic64_fetch_sub_release
+#elif defined(arch_atomic64_fetch_sub)
+#define raw_atomic64_fetch_sub_release arch_atomic64_fetch_sub
+#else
+#error "Unable to define raw_atomic64_fetch_sub_release"
#endif
-#ifndef arch_atomic64_fetch_sub
-static __always_inline s64
-arch_atomic64_fetch_sub(s64 i, atomic64_t *v)
-{
- s64 ret;
- __atomic_pre_full_fence();
- ret = arch_atomic64_fetch_sub_relaxed(i, v);
- __atomic_post_full_fence();
- return ret;
-}
-#define arch_atomic64_fetch_sub arch_atomic64_fetch_sub
+#if defined(arch_atomic64_fetch_sub_relaxed)
+#define raw_atomic64_fetch_sub_relaxed arch_atomic64_fetch_sub_relaxed
+#elif defined(arch_atomic64_fetch_sub)
+#define raw_atomic64_fetch_sub_relaxed arch_atomic64_fetch_sub
+#else
+#error "Unable to define raw_atomic64_fetch_sub_relaxed"
#endif
-#endif /* arch_atomic64_fetch_sub_relaxed */
-
-#ifndef arch_atomic64_inc
+#if defined(arch_atomic64_inc)
+#define raw_atomic64_inc arch_atomic64_inc
+#else
static __always_inline void
-arch_atomic64_inc(atomic64_t *v)
+raw_atomic64_inc(atomic64_t *v)
{
- arch_atomic64_add(1, v);
+ raw_atomic64_add(1, v);
}
-#define arch_atomic64_inc arch_atomic64_inc
#endif
-#ifndef arch_atomic64_inc_return_relaxed
-#ifdef arch_atomic64_inc_return
-#define arch_atomic64_inc_return_acquire arch_atomic64_inc_return
-#define arch_atomic64_inc_return_release arch_atomic64_inc_return
-#define arch_atomic64_inc_return_relaxed arch_atomic64_inc_return
-#endif /* arch_atomic64_inc_return */
-
-#ifndef arch_atomic64_inc_return
+#if defined(arch_atomic64_inc_return)
+#define raw_atomic64_inc_return arch_atomic64_inc_return
+#elif defined(arch_atomic64_inc_return_relaxed)
static __always_inline s64
-arch_atomic64_inc_return(atomic64_t *v)
+raw_atomic64_inc_return(atomic64_t *v)
{
- return arch_atomic64_add_return(1, v);
+ s64 ret;
+ __atomic_pre_full_fence();
+ ret = arch_atomic64_inc_return_relaxed(v);
+ __atomic_post_full_fence();
+ return ret;
}
-#define arch_atomic64_inc_return arch_atomic64_inc_return
-#endif
-
-#ifndef arch_atomic64_inc_return_acquire
+#else
static __always_inline s64
-arch_atomic64_inc_return_acquire(atomic64_t *v)
+raw_atomic64_inc_return(atomic64_t *v)
{
- return arch_atomic64_add_return_acquire(1, v);
+ return raw_atomic64_add_return(1, v);
}
-#define arch_atomic64_inc_return_acquire arch_atomic64_inc_return_acquire
#endif
-#ifndef arch_atomic64_inc_return_release
+#if defined(arch_atomic64_inc_return_acquire)
+#define raw_atomic64_inc_return_acquire arch_atomic64_inc_return_acquire
+#elif defined(arch_atomic64_inc_return_relaxed)
static __always_inline s64
-arch_atomic64_inc_return_release(atomic64_t *v)
+raw_atomic64_inc_return_acquire(atomic64_t *v)
{
- return arch_atomic64_add_return_release(1, v);
+ s64 ret = arch_atomic64_inc_return_relaxed(v);
+ __atomic_acquire_fence();
+ return ret;
}
-#define arch_atomic64_inc_return_release arch_atomic64_inc_return_release
-#endif
-
-#ifndef arch_atomic64_inc_return_relaxed
+#elif defined(arch_atomic64_inc_return)
+#define raw_atomic64_inc_return_acquire arch_atomic64_inc_return
+#else
static __always_inline s64
-arch_atomic64_inc_return_relaxed(atomic64_t *v)
+raw_atomic64_inc_return_acquire(atomic64_t *v)
{
- return arch_atomic64_add_return_relaxed(1, v);
+ return raw_atomic64_add_return_acquire(1, v);
}
-#define arch_atomic64_inc_return_relaxed arch_atomic64_inc_return_relaxed
#endif
-#else /* arch_atomic64_inc_return_relaxed */
-
-#ifndef arch_atomic64_inc_return_acquire
+#if defined(arch_atomic64_inc_return_release)
+#define raw_atomic64_inc_return_release arch_atomic64_inc_return_release
+#elif defined(arch_atomic64_inc_return_relaxed)
static __always_inline s64
-arch_atomic64_inc_return_acquire(atomic64_t *v)
+raw_atomic64_inc_return_release(atomic64_t *v)
{
- s64 ret = arch_atomic64_inc_return_relaxed(v);
- __atomic_acquire_fence();
- return ret;
+ __atomic_release_fence();
+ return arch_atomic64_inc_return_relaxed(v);
+}
+#elif defined(arch_atomic64_inc_return)
+#define raw_atomic64_inc_return_release arch_atomic64_inc_return
+#else
+static __always_inline s64
+raw_atomic64_inc_return_release(atomic64_t *v)
+{
+ return raw_atomic64_add_return_release(1, v);
}
-#define arch_atomic64_inc_return_acquire arch_atomic64_inc_return_acquire
#endif
-#ifndef arch_atomic64_inc_return_release
+#if defined(arch_atomic64_inc_return_relaxed)
+#define raw_atomic64_inc_return_relaxed arch_atomic64_inc_return_relaxed
+#elif defined(arch_atomic64_inc_return)
+#define raw_atomic64_inc_return_relaxed arch_atomic64_inc_return
+#else
static __always_inline s64
-arch_atomic64_inc_return_release(atomic64_t *v)
+raw_atomic64_inc_return_relaxed(atomic64_t *v)
{
- __atomic_release_fence();
- return arch_atomic64_inc_return_relaxed(v);
+ return raw_atomic64_add_return_relaxed(1, v);
}
-#define arch_atomic64_inc_return_release arch_atomic64_inc_return_release
#endif
-#ifndef arch_atomic64_inc_return
+#if defined(arch_atomic64_fetch_inc)
+#define raw_atomic64_fetch_inc arch_atomic64_fetch_inc
+#elif defined(arch_atomic64_fetch_inc_relaxed)
static __always_inline s64
-arch_atomic64_inc_return(atomic64_t *v)
+raw_atomic64_fetch_inc(atomic64_t *v)
{
s64 ret;
__atomic_pre_full_fence();
- ret = arch_atomic64_inc_return_relaxed(v);
+ ret = arch_atomic64_fetch_inc_relaxed(v);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic64_inc_return arch_atomic64_inc_return
-#endif
-
-#endif /* arch_atomic64_inc_return_relaxed */
-
-#ifndef arch_atomic64_fetch_inc_relaxed
-#ifdef arch_atomic64_fetch_inc
-#define arch_atomic64_fetch_inc_acquire arch_atomic64_fetch_inc
-#define arch_atomic64_fetch_inc_release arch_atomic64_fetch_inc
-#define arch_atomic64_fetch_inc_relaxed arch_atomic64_fetch_inc
-#endif /* arch_atomic64_fetch_inc */
-
-#ifndef arch_atomic64_fetch_inc
+#else
static __always_inline s64
-arch_atomic64_fetch_inc(atomic64_t *v)
+raw_atomic64_fetch_inc(atomic64_t *v)
{
- return arch_atomic64_fetch_add(1, v);
+ return raw_atomic64_fetch_add(1, v);
}
-#define arch_atomic64_fetch_inc arch_atomic64_fetch_inc
#endif
-#ifndef arch_atomic64_fetch_inc_acquire
+#if defined(arch_atomic64_fetch_inc_acquire)
+#define raw_atomic64_fetch_inc_acquire arch_atomic64_fetch_inc_acquire
+#elif defined(arch_atomic64_fetch_inc_relaxed)
+static __always_inline s64
+raw_atomic64_fetch_inc_acquire(atomic64_t *v)
+{
+ s64 ret = arch_atomic64_fetch_inc_relaxed(v);
+ __atomic_acquire_fence();
+ return ret;
+}
+#elif defined(arch_atomic64_fetch_inc)
+#define raw_atomic64_fetch_inc_acquire arch_atomic64_fetch_inc
+#else
static __always_inline s64
-arch_atomic64_fetch_inc_acquire(atomic64_t *v)
+raw_atomic64_fetch_inc_acquire(atomic64_t *v)
{
- return arch_atomic64_fetch_add_acquire(1, v);
+ return raw_atomic64_fetch_add_acquire(1, v);
}
-#define arch_atomic64_fetch_inc_acquire arch_atomic64_fetch_inc_acquire
#endif
-#ifndef arch_atomic64_fetch_inc_release
+#if defined(arch_atomic64_fetch_inc_release)
+#define raw_atomic64_fetch_inc_release arch_atomic64_fetch_inc_release
+#elif defined(arch_atomic64_fetch_inc_relaxed)
static __always_inline s64
-arch_atomic64_fetch_inc_release(atomic64_t *v)
+raw_atomic64_fetch_inc_release(atomic64_t *v)
{
- return arch_atomic64_fetch_add_release(1, v);
+ __atomic_release_fence();
+ return arch_atomic64_fetch_inc_relaxed(v);
}
-#define arch_atomic64_fetch_inc_release arch_atomic64_fetch_inc_release
-#endif
-
-#ifndef arch_atomic64_fetch_inc_relaxed
+#elif defined(arch_atomic64_fetch_inc)
+#define raw_atomic64_fetch_inc_release arch_atomic64_fetch_inc
+#else
static __always_inline s64
-arch_atomic64_fetch_inc_relaxed(atomic64_t *v)
+raw_atomic64_fetch_inc_release(atomic64_t *v)
{
- return arch_atomic64_fetch_add_relaxed(1, v);
+ return raw_atomic64_fetch_add_release(1, v);
}
-#define arch_atomic64_fetch_inc_relaxed arch_atomic64_fetch_inc_relaxed
#endif
-#else /* arch_atomic64_fetch_inc_relaxed */
-
-#ifndef arch_atomic64_fetch_inc_acquire
+#if defined(arch_atomic64_fetch_inc_relaxed)
+#define raw_atomic64_fetch_inc_relaxed arch_atomic64_fetch_inc_relaxed
+#elif defined(arch_atomic64_fetch_inc)
+#define raw_atomic64_fetch_inc_relaxed arch_atomic64_fetch_inc
+#else
static __always_inline s64
-arch_atomic64_fetch_inc_acquire(atomic64_t *v)
+raw_atomic64_fetch_inc_relaxed(atomic64_t *v)
{
- s64 ret = arch_atomic64_fetch_inc_relaxed(v);
- __atomic_acquire_fence();
- return ret;
+ return raw_atomic64_fetch_add_relaxed(1, v);
}
-#define arch_atomic64_fetch_inc_acquire arch_atomic64_fetch_inc_acquire
#endif
-#ifndef arch_atomic64_fetch_inc_release
-static __always_inline s64
-arch_atomic64_fetch_inc_release(atomic64_t *v)
+#if defined(arch_atomic64_dec)
+#define raw_atomic64_dec arch_atomic64_dec
+#else
+static __always_inline void
+raw_atomic64_dec(atomic64_t *v)
{
- __atomic_release_fence();
- return arch_atomic64_fetch_inc_relaxed(v);
+ raw_atomic64_sub(1, v);
}
-#define arch_atomic64_fetch_inc_release arch_atomic64_fetch_inc_release
#endif
-#ifndef arch_atomic64_fetch_inc
+#if defined(arch_atomic64_dec_return)
+#define raw_atomic64_dec_return arch_atomic64_dec_return
+#elif defined(arch_atomic64_dec_return_relaxed)
static __always_inline s64
-arch_atomic64_fetch_inc(atomic64_t *v)
+raw_atomic64_dec_return(atomic64_t *v)
{
s64 ret;
__atomic_pre_full_fence();
- ret = arch_atomic64_fetch_inc_relaxed(v);
+ ret = arch_atomic64_dec_return_relaxed(v);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic64_fetch_inc arch_atomic64_fetch_inc
-#endif
-
-#endif /* arch_atomic64_fetch_inc_relaxed */
-
-#ifndef arch_atomic64_dec
-static __always_inline void
-arch_atomic64_dec(atomic64_t *v)
-{
- arch_atomic64_sub(1, v);
-}
-#define arch_atomic64_dec arch_atomic64_dec
-#endif
-
-#ifndef arch_atomic64_dec_return_relaxed
-#ifdef arch_atomic64_dec_return
-#define arch_atomic64_dec_return_acquire arch_atomic64_dec_return
-#define arch_atomic64_dec_return_release arch_atomic64_dec_return
-#define arch_atomic64_dec_return_relaxed arch_atomic64_dec_return
-#endif /* arch_atomic64_dec_return */
-
-#ifndef arch_atomic64_dec_return
+#else
static __always_inline s64
-arch_atomic64_dec_return(atomic64_t *v)
+raw_atomic64_dec_return(atomic64_t *v)
{
- return arch_atomic64_sub_return(1, v);
+ return raw_atomic64_sub_return(1, v);
}
-#define arch_atomic64_dec_return arch_atomic64_dec_return
#endif
-#ifndef arch_atomic64_dec_return_acquire
+#if defined(arch_atomic64_dec_return_acquire)
+#define raw_atomic64_dec_return_acquire arch_atomic64_dec_return_acquire
+#elif defined(arch_atomic64_dec_return_relaxed)
static __always_inline s64
-arch_atomic64_dec_return_acquire(atomic64_t *v)
+raw_atomic64_dec_return_acquire(atomic64_t *v)
{
- return arch_atomic64_sub_return_acquire(1, v);
+ s64 ret = arch_atomic64_dec_return_relaxed(v);
+ __atomic_acquire_fence();
+ return ret;
}
-#define arch_atomic64_dec_return_acquire arch_atomic64_dec_return_acquire
-#endif
-
-#ifndef arch_atomic64_dec_return_release
+#elif defined(arch_atomic64_dec_return)
+#define raw_atomic64_dec_return_acquire arch_atomic64_dec_return
+#else
static __always_inline s64
-arch_atomic64_dec_return_release(atomic64_t *v)
+raw_atomic64_dec_return_acquire(atomic64_t *v)
{
- return arch_atomic64_sub_return_release(1, v);
+ return raw_atomic64_sub_return_acquire(1, v);
}
-#define arch_atomic64_dec_return_release arch_atomic64_dec_return_release
#endif
-#ifndef arch_atomic64_dec_return_relaxed
+#if defined(arch_atomic64_dec_return_release)
+#define raw_atomic64_dec_return_release arch_atomic64_dec_return_release
+#elif defined(arch_atomic64_dec_return_relaxed)
static __always_inline s64
-arch_atomic64_dec_return_relaxed(atomic64_t *v)
+raw_atomic64_dec_return_release(atomic64_t *v)
{
- return arch_atomic64_sub_return_relaxed(1, v);
+ __atomic_release_fence();
+ return arch_atomic64_dec_return_relaxed(v);
}
-#define arch_atomic64_dec_return_relaxed arch_atomic64_dec_return_relaxed
-#endif
-
-#else /* arch_atomic64_dec_return_relaxed */
-
-#ifndef arch_atomic64_dec_return_acquire
+#elif defined(arch_atomic64_dec_return)
+#define raw_atomic64_dec_return_release arch_atomic64_dec_return
+#else
static __always_inline s64
-arch_atomic64_dec_return_acquire(atomic64_t *v)
+raw_atomic64_dec_return_release(atomic64_t *v)
{
- s64 ret = arch_atomic64_dec_return_relaxed(v);
- __atomic_acquire_fence();
- return ret;
+ return raw_atomic64_sub_return_release(1, v);
}
-#define arch_atomic64_dec_return_acquire arch_atomic64_dec_return_acquire
#endif
-#ifndef arch_atomic64_dec_return_release
+#if defined(arch_atomic64_dec_return_relaxed)
+#define raw_atomic64_dec_return_relaxed arch_atomic64_dec_return_relaxed
+#elif defined(arch_atomic64_dec_return)
+#define raw_atomic64_dec_return_relaxed arch_atomic64_dec_return
+#else
static __always_inline s64
-arch_atomic64_dec_return_release(atomic64_t *v)
+raw_atomic64_dec_return_relaxed(atomic64_t *v)
{
- __atomic_release_fence();
- return arch_atomic64_dec_return_relaxed(v);
+ return raw_atomic64_sub_return_relaxed(1, v);
}
-#define arch_atomic64_dec_return_release arch_atomic64_dec_return_release
#endif
-#ifndef arch_atomic64_dec_return
+#if defined(arch_atomic64_fetch_dec)
+#define raw_atomic64_fetch_dec arch_atomic64_fetch_dec
+#elif defined(arch_atomic64_fetch_dec_relaxed)
static __always_inline s64
-arch_atomic64_dec_return(atomic64_t *v)
+raw_atomic64_fetch_dec(atomic64_t *v)
{
s64 ret;
__atomic_pre_full_fence();
- ret = arch_atomic64_dec_return_relaxed(v);
+ ret = arch_atomic64_fetch_dec_relaxed(v);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic64_dec_return arch_atomic64_dec_return
-#endif
-
-#endif /* arch_atomic64_dec_return_relaxed */
-
-#ifndef arch_atomic64_fetch_dec_relaxed
-#ifdef arch_atomic64_fetch_dec
-#define arch_atomic64_fetch_dec_acquire arch_atomic64_fetch_dec
-#define arch_atomic64_fetch_dec_release arch_atomic64_fetch_dec
-#define arch_atomic64_fetch_dec_relaxed arch_atomic64_fetch_dec
-#endif /* arch_atomic64_fetch_dec */
-
-#ifndef arch_atomic64_fetch_dec
+#else
static __always_inline s64
-arch_atomic64_fetch_dec(atomic64_t *v)
+raw_atomic64_fetch_dec(atomic64_t *v)
{
- return arch_atomic64_fetch_sub(1, v);
+ return raw_atomic64_fetch_sub(1, v);
}
-#define arch_atomic64_fetch_dec arch_atomic64_fetch_dec
#endif
-#ifndef arch_atomic64_fetch_dec_acquire
+#if defined(arch_atomic64_fetch_dec_acquire)
+#define raw_atomic64_fetch_dec_acquire arch_atomic64_fetch_dec_acquire
+#elif defined(arch_atomic64_fetch_dec_relaxed)
static __always_inline s64
-arch_atomic64_fetch_dec_acquire(atomic64_t *v)
+raw_atomic64_fetch_dec_acquire(atomic64_t *v)
{
- return arch_atomic64_fetch_sub_acquire(1, v);
+ s64 ret = arch_atomic64_fetch_dec_relaxed(v);
+ __atomic_acquire_fence();
+ return ret;
}
-#define arch_atomic64_fetch_dec_acquire arch_atomic64_fetch_dec_acquire
-#endif
-
-#ifndef arch_atomic64_fetch_dec_release
+#elif defined(arch_atomic64_fetch_dec)
+#define raw_atomic64_fetch_dec_acquire arch_atomic64_fetch_dec
+#else
static __always_inline s64
-arch_atomic64_fetch_dec_release(atomic64_t *v)
+raw_atomic64_fetch_dec_acquire(atomic64_t *v)
{
- return arch_atomic64_fetch_sub_release(1, v);
+ return raw_atomic64_fetch_sub_acquire(1, v);
}
-#define arch_atomic64_fetch_dec_release arch_atomic64_fetch_dec_release
#endif
-#ifndef arch_atomic64_fetch_dec_relaxed
+#if defined(arch_atomic64_fetch_dec_release)
+#define raw_atomic64_fetch_dec_release arch_atomic64_fetch_dec_release
+#elif defined(arch_atomic64_fetch_dec_relaxed)
static __always_inline s64
-arch_atomic64_fetch_dec_relaxed(atomic64_t *v)
+raw_atomic64_fetch_dec_release(atomic64_t *v)
{
- return arch_atomic64_fetch_sub_relaxed(1, v);
+ __atomic_release_fence();
+ return arch_atomic64_fetch_dec_relaxed(v);
}
-#define arch_atomic64_fetch_dec_relaxed arch_atomic64_fetch_dec_relaxed
-#endif
-
-#else /* arch_atomic64_fetch_dec_relaxed */
-
-#ifndef arch_atomic64_fetch_dec_acquire
+#elif defined(arch_atomic64_fetch_dec)
+#define raw_atomic64_fetch_dec_release arch_atomic64_fetch_dec
+#else
static __always_inline s64
-arch_atomic64_fetch_dec_acquire(atomic64_t *v)
+raw_atomic64_fetch_dec_release(atomic64_t *v)
{
- s64 ret = arch_atomic64_fetch_dec_relaxed(v);
- __atomic_acquire_fence();
- return ret;
+ return raw_atomic64_fetch_sub_release(1, v);
}
-#define arch_atomic64_fetch_dec_acquire arch_atomic64_fetch_dec_acquire
#endif
-#ifndef arch_atomic64_fetch_dec_release
+#if defined(arch_atomic64_fetch_dec_relaxed)
+#define raw_atomic64_fetch_dec_relaxed arch_atomic64_fetch_dec_relaxed
+#elif defined(arch_atomic64_fetch_dec)
+#define raw_atomic64_fetch_dec_relaxed arch_atomic64_fetch_dec
+#else
static __always_inline s64
-arch_atomic64_fetch_dec_release(atomic64_t *v)
+raw_atomic64_fetch_dec_relaxed(atomic64_t *v)
{
- __atomic_release_fence();
- return arch_atomic64_fetch_dec_relaxed(v);
+ return raw_atomic64_fetch_sub_relaxed(1, v);
}
-#define arch_atomic64_fetch_dec_release arch_atomic64_fetch_dec_release
#endif
-#ifndef arch_atomic64_fetch_dec
+#define raw_atomic64_and arch_atomic64_and
+
+#if defined(arch_atomic64_fetch_and)
+#define raw_atomic64_fetch_and arch_atomic64_fetch_and
+#elif defined(arch_atomic64_fetch_and_relaxed)
static __always_inline s64
-arch_atomic64_fetch_dec(atomic64_t *v)
+raw_atomic64_fetch_and(s64 i, atomic64_t *v)
{
s64 ret;
__atomic_pre_full_fence();
- ret = arch_atomic64_fetch_dec_relaxed(v);
+ ret = arch_atomic64_fetch_and_relaxed(i, v);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic64_fetch_dec arch_atomic64_fetch_dec
+#else
+#error "Unable to define raw_atomic64_fetch_and"
#endif
-#endif /* arch_atomic64_fetch_dec_relaxed */
-
-#ifndef arch_atomic64_fetch_and_relaxed
-#define arch_atomic64_fetch_and_acquire arch_atomic64_fetch_and
-#define arch_atomic64_fetch_and_release arch_atomic64_fetch_and
-#define arch_atomic64_fetch_and_relaxed arch_atomic64_fetch_and
-#else /* arch_atomic64_fetch_and_relaxed */
-
-#ifndef arch_atomic64_fetch_and_acquire
+#if defined(arch_atomic64_fetch_and_acquire)
+#define raw_atomic64_fetch_and_acquire arch_atomic64_fetch_and_acquire
+#elif defined(arch_atomic64_fetch_and_relaxed)
static __always_inline s64
-arch_atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
+raw_atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
{
s64 ret = arch_atomic64_fetch_and_relaxed(i, v);
__atomic_acquire_fence();
return ret;
}
-#define arch_atomic64_fetch_and_acquire arch_atomic64_fetch_and_acquire
+#elif defined(arch_atomic64_fetch_and)
+#define raw_atomic64_fetch_and_acquire arch_atomic64_fetch_and
+#else
+#error "Unable to define raw_atomic64_fetch_and_acquire"
#endif
-#ifndef arch_atomic64_fetch_and_release
+#if defined(arch_atomic64_fetch_and_release)
+#define raw_atomic64_fetch_and_release arch_atomic64_fetch_and_release
+#elif defined(arch_atomic64_fetch_and_relaxed)
static __always_inline s64
-arch_atomic64_fetch_and_release(s64 i, atomic64_t *v)
+raw_atomic64_fetch_and_release(s64 i, atomic64_t *v)
{
__atomic_release_fence();
return arch_atomic64_fetch_and_relaxed(i, v);
}
-#define arch_atomic64_fetch_and_release arch_atomic64_fetch_and_release
+#elif defined(arch_atomic64_fetch_and)
+#define raw_atomic64_fetch_and_release arch_atomic64_fetch_and
+#else
+#error "Unable to define raw_atomic64_fetch_and_release"
#endif
-#ifndef arch_atomic64_fetch_and
-static __always_inline s64
-arch_atomic64_fetch_and(s64 i, atomic64_t *v)
-{
- s64 ret;
- __atomic_pre_full_fence();
- ret = arch_atomic64_fetch_and_relaxed(i, v);
- __atomic_post_full_fence();
- return ret;
-}
-#define arch_atomic64_fetch_and arch_atomic64_fetch_and
+#if defined(arch_atomic64_fetch_and_relaxed)
+#define raw_atomic64_fetch_and_relaxed arch_atomic64_fetch_and_relaxed
+#elif defined(arch_atomic64_fetch_and)
+#define raw_atomic64_fetch_and_relaxed arch_atomic64_fetch_and
+#else
+#error "Unable to define raw_atomic64_fetch_and_relaxed"
#endif
-#endif /* arch_atomic64_fetch_and_relaxed */
-
-#ifndef arch_atomic64_andnot
+#if defined(arch_atomic64_andnot)
+#define raw_atomic64_andnot arch_atomic64_andnot
+#else
static __always_inline void
-arch_atomic64_andnot(s64 i, atomic64_t *v)
+raw_atomic64_andnot(s64 i, atomic64_t *v)
{
- arch_atomic64_and(~i, v);
+ raw_atomic64_and(~i, v);
}
-#define arch_atomic64_andnot arch_atomic64_andnot
#endif
-#ifndef arch_atomic64_fetch_andnot_relaxed
-#ifdef arch_atomic64_fetch_andnot
-#define arch_atomic64_fetch_andnot_acquire arch_atomic64_fetch_andnot
-#define arch_atomic64_fetch_andnot_release arch_atomic64_fetch_andnot
-#define arch_atomic64_fetch_andnot_relaxed arch_atomic64_fetch_andnot
-#endif /* arch_atomic64_fetch_andnot */
-
-#ifndef arch_atomic64_fetch_andnot
+#if defined(arch_atomic64_fetch_andnot)
+#define raw_atomic64_fetch_andnot arch_atomic64_fetch_andnot
+#elif defined(arch_atomic64_fetch_andnot_relaxed)
static __always_inline s64
-arch_atomic64_fetch_andnot(s64 i, atomic64_t *v)
+raw_atomic64_fetch_andnot(s64 i, atomic64_t *v)
{
- return arch_atomic64_fetch_and(~i, v);
+ s64 ret;
+ __atomic_pre_full_fence();
+ ret = arch_atomic64_fetch_andnot_relaxed(i, v);
+ __atomic_post_full_fence();
+ return ret;
}
-#define arch_atomic64_fetch_andnot arch_atomic64_fetch_andnot
-#endif
-
-#ifndef arch_atomic64_fetch_andnot_acquire
+#else
static __always_inline s64
-arch_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
+raw_atomic64_fetch_andnot(s64 i, atomic64_t *v)
{
- return arch_atomic64_fetch_and_acquire(~i, v);
+ return raw_atomic64_fetch_and(~i, v);
}
-#define arch_atomic64_fetch_andnot_acquire arch_atomic64_fetch_andnot_acquire
#endif
-#ifndef arch_atomic64_fetch_andnot_release
+#if defined(arch_atomic64_fetch_andnot_acquire)
+#define raw_atomic64_fetch_andnot_acquire arch_atomic64_fetch_andnot_acquire
+#elif defined(arch_atomic64_fetch_andnot_relaxed)
static __always_inline s64
-arch_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
+raw_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
{
- return arch_atomic64_fetch_and_release(~i, v);
+ s64 ret = arch_atomic64_fetch_andnot_relaxed(i, v);
+ __atomic_acquire_fence();
+ return ret;
}
-#define arch_atomic64_fetch_andnot_release arch_atomic64_fetch_andnot_release
-#endif
-
-#ifndef arch_atomic64_fetch_andnot_relaxed
+#elif defined(arch_atomic64_fetch_andnot)
+#define raw_atomic64_fetch_andnot_acquire arch_atomic64_fetch_andnot
+#else
static __always_inline s64
-arch_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
+raw_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
{
- return arch_atomic64_fetch_and_relaxed(~i, v);
+ return raw_atomic64_fetch_and_acquire(~i, v);
}
-#define arch_atomic64_fetch_andnot_relaxed arch_atomic64_fetch_andnot_relaxed
#endif
-#else /* arch_atomic64_fetch_andnot_relaxed */
-
-#ifndef arch_atomic64_fetch_andnot_acquire
+#if defined(arch_atomic64_fetch_andnot_release)
+#define raw_atomic64_fetch_andnot_release arch_atomic64_fetch_andnot_release
+#elif defined(arch_atomic64_fetch_andnot_relaxed)
static __always_inline s64
-arch_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
+raw_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
{
- s64 ret = arch_atomic64_fetch_andnot_relaxed(i, v);
- __atomic_acquire_fence();
- return ret;
+ __atomic_release_fence();
+ return arch_atomic64_fetch_andnot_relaxed(i, v);
+}
+#elif defined(arch_atomic64_fetch_andnot)
+#define raw_atomic64_fetch_andnot_release arch_atomic64_fetch_andnot
+#else
+static __always_inline s64
+raw_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
+{
+ return raw_atomic64_fetch_and_release(~i, v);
}
-#define arch_atomic64_fetch_andnot_acquire arch_atomic64_fetch_andnot_acquire
#endif
-#ifndef arch_atomic64_fetch_andnot_release
+#if defined(arch_atomic64_fetch_andnot_relaxed)
+#define raw_atomic64_fetch_andnot_relaxed arch_atomic64_fetch_andnot_relaxed
+#elif defined(arch_atomic64_fetch_andnot)
+#define raw_atomic64_fetch_andnot_relaxed arch_atomic64_fetch_andnot
+#else
static __always_inline s64
-arch_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
+raw_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
{
- __atomic_release_fence();
- return arch_atomic64_fetch_andnot_relaxed(i, v);
+ return raw_atomic64_fetch_and_relaxed(~i, v);
}
-#define arch_atomic64_fetch_andnot_release arch_atomic64_fetch_andnot_release
#endif
-#ifndef arch_atomic64_fetch_andnot
+#define raw_atomic64_or arch_atomic64_or
+
+#if defined(arch_atomic64_fetch_or)
+#define raw_atomic64_fetch_or arch_atomic64_fetch_or
+#elif defined(arch_atomic64_fetch_or_relaxed)
static __always_inline s64
-arch_atomic64_fetch_andnot(s64 i, atomic64_t *v)
+raw_atomic64_fetch_or(s64 i, atomic64_t *v)
{
s64 ret;
__atomic_pre_full_fence();
- ret = arch_atomic64_fetch_andnot_relaxed(i, v);
+ ret = arch_atomic64_fetch_or_relaxed(i, v);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic64_fetch_andnot arch_atomic64_fetch_andnot
+#else
+#error "Unable to define raw_atomic64_fetch_or"
#endif
-#endif /* arch_atomic64_fetch_andnot_relaxed */
-
-#ifndef arch_atomic64_fetch_or_relaxed
-#define arch_atomic64_fetch_or_acquire arch_atomic64_fetch_or
-#define arch_atomic64_fetch_or_release arch_atomic64_fetch_or
-#define arch_atomic64_fetch_or_relaxed arch_atomic64_fetch_or
-#else /* arch_atomic64_fetch_or_relaxed */
-
-#ifndef arch_atomic64_fetch_or_acquire
+#if defined(arch_atomic64_fetch_or_acquire)
+#define raw_atomic64_fetch_or_acquire arch_atomic64_fetch_or_acquire
+#elif defined(arch_atomic64_fetch_or_relaxed)
static __always_inline s64
-arch_atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
+raw_atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
{
s64 ret = arch_atomic64_fetch_or_relaxed(i, v);
__atomic_acquire_fence();
return ret;
}
-#define arch_atomic64_fetch_or_acquire arch_atomic64_fetch_or_acquire
+#elif defined(arch_atomic64_fetch_or)
+#define raw_atomic64_fetch_or_acquire arch_atomic64_fetch_or
+#else
+#error "Unable to define raw_atomic64_fetch_or_acquire"
#endif
-#ifndef arch_atomic64_fetch_or_release
+#if defined(arch_atomic64_fetch_or_release)
+#define raw_atomic64_fetch_or_release arch_atomic64_fetch_or_release
+#elif defined(arch_atomic64_fetch_or_relaxed)
static __always_inline s64
-arch_atomic64_fetch_or_release(s64 i, atomic64_t *v)
+raw_atomic64_fetch_or_release(s64 i, atomic64_t *v)
{
__atomic_release_fence();
return arch_atomic64_fetch_or_relaxed(i, v);
}
-#define arch_atomic64_fetch_or_release arch_atomic64_fetch_or_release
+#elif defined(arch_atomic64_fetch_or)
+#define raw_atomic64_fetch_or_release arch_atomic64_fetch_or
+#else
+#error "Unable to define raw_atomic64_fetch_or_release"
+#endif
+
+#if defined(arch_atomic64_fetch_or_relaxed)
+#define raw_atomic64_fetch_or_relaxed arch_atomic64_fetch_or_relaxed
+#elif defined(arch_atomic64_fetch_or)
+#define raw_atomic64_fetch_or_relaxed arch_atomic64_fetch_or
+#else
+#error "Unable to define raw_atomic64_fetch_or_relaxed"
#endif
-#ifndef arch_atomic64_fetch_or
+#define raw_atomic64_xor arch_atomic64_xor
+
+#if defined(arch_atomic64_fetch_xor)
+#define raw_atomic64_fetch_xor arch_atomic64_fetch_xor
+#elif defined(arch_atomic64_fetch_xor_relaxed)
static __always_inline s64
-arch_atomic64_fetch_or(s64 i, atomic64_t *v)
+raw_atomic64_fetch_xor(s64 i, atomic64_t *v)
{
s64 ret;
__atomic_pre_full_fence();
- ret = arch_atomic64_fetch_or_relaxed(i, v);
+ ret = arch_atomic64_fetch_xor_relaxed(i, v);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic64_fetch_or arch_atomic64_fetch_or
+#else
+#error "Unable to define raw_atomic64_fetch_xor"
#endif
-#endif /* arch_atomic64_fetch_or_relaxed */
-
-#ifndef arch_atomic64_fetch_xor_relaxed
-#define arch_atomic64_fetch_xor_acquire arch_atomic64_fetch_xor
-#define arch_atomic64_fetch_xor_release arch_atomic64_fetch_xor
-#define arch_atomic64_fetch_xor_relaxed arch_atomic64_fetch_xor
-#else /* arch_atomic64_fetch_xor_relaxed */
-
-#ifndef arch_atomic64_fetch_xor_acquire
+#if defined(arch_atomic64_fetch_xor_acquire)
+#define raw_atomic64_fetch_xor_acquire arch_atomic64_fetch_xor_acquire
+#elif defined(arch_atomic64_fetch_xor_relaxed)
static __always_inline s64
-arch_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
+raw_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
{
s64 ret = arch_atomic64_fetch_xor_relaxed(i, v);
__atomic_acquire_fence();
return ret;
}
-#define arch_atomic64_fetch_xor_acquire arch_atomic64_fetch_xor_acquire
+#elif defined(arch_atomic64_fetch_xor)
+#define raw_atomic64_fetch_xor_acquire arch_atomic64_fetch_xor
+#else
+#error "Unable to define raw_atomic64_fetch_xor_acquire"
#endif
-#ifndef arch_atomic64_fetch_xor_release
+#if defined(arch_atomic64_fetch_xor_release)
+#define raw_atomic64_fetch_xor_release arch_atomic64_fetch_xor_release
+#elif defined(arch_atomic64_fetch_xor_relaxed)
static __always_inline s64
-arch_atomic64_fetch_xor_release(s64 i, atomic64_t *v)
+raw_atomic64_fetch_xor_release(s64 i, atomic64_t *v)
{
__atomic_release_fence();
return arch_atomic64_fetch_xor_relaxed(i, v);
}
-#define arch_atomic64_fetch_xor_release arch_atomic64_fetch_xor_release
+#elif defined(arch_atomic64_fetch_xor)
+#define raw_atomic64_fetch_xor_release arch_atomic64_fetch_xor
+#else
+#error "Unable to define raw_atomic64_fetch_xor_release"
+#endif
+
+#if defined(arch_atomic64_fetch_xor_relaxed)
+#define raw_atomic64_fetch_xor_relaxed arch_atomic64_fetch_xor_relaxed
+#elif defined(arch_atomic64_fetch_xor)
+#define raw_atomic64_fetch_xor_relaxed arch_atomic64_fetch_xor
+#else
+#error "Unable to define raw_atomic64_fetch_xor_relaxed"
#endif
-#ifndef arch_atomic64_fetch_xor
+#if defined(arch_atomic64_xchg)
+#define raw_atomic64_xchg arch_atomic64_xchg
+#elif defined(arch_atomic64_xchg_relaxed)
static __always_inline s64
-arch_atomic64_fetch_xor(s64 i, atomic64_t *v)
+raw_atomic64_xchg(atomic64_t *v, s64 i)
{
s64 ret;
__atomic_pre_full_fence();
- ret = arch_atomic64_fetch_xor_relaxed(i, v);
+ ret = arch_atomic64_xchg_relaxed(v, i);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic64_fetch_xor arch_atomic64_fetch_xor
-#endif
-
-#endif /* arch_atomic64_fetch_xor_relaxed */
-
-#ifndef arch_atomic64_xchg_relaxed
-#ifdef arch_atomic64_xchg
-#define arch_atomic64_xchg_acquire arch_atomic64_xchg
-#define arch_atomic64_xchg_release arch_atomic64_xchg
-#define arch_atomic64_xchg_relaxed arch_atomic64_xchg
-#endif /* arch_atomic64_xchg */
-
-#ifndef arch_atomic64_xchg
+#else
static __always_inline s64
-arch_atomic64_xchg(atomic64_t *v, s64 new)
+raw_atomic64_xchg(atomic64_t *v, s64 new)
{
- return arch_xchg(&v->counter, new);
+ return raw_xchg(&v->counter, new);
}
-#define arch_atomic64_xchg arch_atomic64_xchg
#endif
-#ifndef arch_atomic64_xchg_acquire
+#if defined(arch_atomic64_xchg_acquire)
+#define raw_atomic64_xchg_acquire arch_atomic64_xchg_acquire
+#elif defined(arch_atomic64_xchg_relaxed)
static __always_inline s64
-arch_atomic64_xchg_acquire(atomic64_t *v, s64 new)
+raw_atomic64_xchg_acquire(atomic64_t *v, s64 i)
{
- return arch_xchg_acquire(&v->counter, new);
+ s64 ret = arch_atomic64_xchg_relaxed(v, i);
+ __atomic_acquire_fence();
+ return ret;
}
-#define arch_atomic64_xchg_acquire arch_atomic64_xchg_acquire
-#endif
-
-#ifndef arch_atomic64_xchg_release
+#elif defined(arch_atomic64_xchg)
+#define raw_atomic64_xchg_acquire arch_atomic64_xchg
+#else
static __always_inline s64
-arch_atomic64_xchg_release(atomic64_t *v, s64 new)
+raw_atomic64_xchg_acquire(atomic64_t *v, s64 new)
{
- return arch_xchg_release(&v->counter, new);
+ return raw_xchg_acquire(&v->counter, new);
}
-#define arch_atomic64_xchg_release arch_atomic64_xchg_release
#endif
-#ifndef arch_atomic64_xchg_relaxed
+#if defined(arch_atomic64_xchg_release)
+#define raw_atomic64_xchg_release arch_atomic64_xchg_release
+#elif defined(arch_atomic64_xchg_relaxed)
static __always_inline s64
-arch_atomic64_xchg_relaxed(atomic64_t *v, s64 new)
+raw_atomic64_xchg_release(atomic64_t *v, s64 i)
{
- return arch_xchg_relaxed(&v->counter, new);
+ __atomic_release_fence();
+ return arch_atomic64_xchg_relaxed(v, i);
}
-#define arch_atomic64_xchg_relaxed arch_atomic64_xchg_relaxed
-#endif
-
-#else /* arch_atomic64_xchg_relaxed */
-
-#ifndef arch_atomic64_xchg_acquire
+#elif defined(arch_atomic64_xchg)
+#define raw_atomic64_xchg_release arch_atomic64_xchg
+#else
static __always_inline s64
-arch_atomic64_xchg_acquire(atomic64_t *v, s64 i)
+raw_atomic64_xchg_release(atomic64_t *v, s64 new)
{
- s64 ret = arch_atomic64_xchg_relaxed(v, i);
- __atomic_acquire_fence();
- return ret;
+ return raw_xchg_release(&v->counter, new);
}
-#define arch_atomic64_xchg_acquire arch_atomic64_xchg_acquire
#endif
-#ifndef arch_atomic64_xchg_release
+#if defined(arch_atomic64_xchg_relaxed)
+#define raw_atomic64_xchg_relaxed arch_atomic64_xchg_relaxed
+#elif defined(arch_atomic64_xchg)
+#define raw_atomic64_xchg_relaxed arch_atomic64_xchg
+#else
static __always_inline s64
-arch_atomic64_xchg_release(atomic64_t *v, s64 i)
+raw_atomic64_xchg_relaxed(atomic64_t *v, s64 new)
{
- __atomic_release_fence();
- return arch_atomic64_xchg_relaxed(v, i);
+ return raw_xchg_relaxed(&v->counter, new);
}
-#define arch_atomic64_xchg_release arch_atomic64_xchg_release
#endif
-#ifndef arch_atomic64_xchg
+#if defined(arch_atomic64_cmpxchg)
+#define raw_atomic64_cmpxchg arch_atomic64_cmpxchg
+#elif defined(arch_atomic64_cmpxchg_relaxed)
static __always_inline s64
-arch_atomic64_xchg(atomic64_t *v, s64 i)
+raw_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
{
s64 ret;
__atomic_pre_full_fence();
- ret = arch_atomic64_xchg_relaxed(v, i);
+ ret = arch_atomic64_cmpxchg_relaxed(v, old, new);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic64_xchg arch_atomic64_xchg
-#endif
-
-#endif /* arch_atomic64_xchg_relaxed */
-
-#ifndef arch_atomic64_cmpxchg_relaxed
-#ifdef arch_atomic64_cmpxchg
-#define arch_atomic64_cmpxchg_acquire arch_atomic64_cmpxchg
-#define arch_atomic64_cmpxchg_release arch_atomic64_cmpxchg
-#define arch_atomic64_cmpxchg_relaxed arch_atomic64_cmpxchg
-#endif /* arch_atomic64_cmpxchg */
-
-#ifndef arch_atomic64_cmpxchg
+#else
static __always_inline s64
-arch_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
+raw_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
{
- return arch_cmpxchg(&v->counter, old, new);
+ return raw_cmpxchg(&v->counter, old, new);
}
-#define arch_atomic64_cmpxchg arch_atomic64_cmpxchg
#endif
-#ifndef arch_atomic64_cmpxchg_acquire
+#if defined(arch_atomic64_cmpxchg_acquire)
+#define raw_atomic64_cmpxchg_acquire arch_atomic64_cmpxchg_acquire
+#elif defined(arch_atomic64_cmpxchg_relaxed)
static __always_inline s64
-arch_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
+raw_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
{
- return arch_cmpxchg_acquire(&v->counter, old, new);
+ s64 ret = arch_atomic64_cmpxchg_relaxed(v, old, new);
+ __atomic_acquire_fence();
+ return ret;
}
-#define arch_atomic64_cmpxchg_acquire arch_atomic64_cmpxchg_acquire
-#endif
-
-#ifndef arch_atomic64_cmpxchg_release
+#elif defined(arch_atomic64_cmpxchg)
+#define raw_atomic64_cmpxchg_acquire arch_atomic64_cmpxchg
+#else
static __always_inline s64
-arch_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
+raw_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
{
- return arch_cmpxchg_release(&v->counter, old, new);
+ return raw_cmpxchg_acquire(&v->counter, old, new);
}
-#define arch_atomic64_cmpxchg_release arch_atomic64_cmpxchg_release
#endif
-#ifndef arch_atomic64_cmpxchg_relaxed
+#if defined(arch_atomic64_cmpxchg_release)
+#define raw_atomic64_cmpxchg_release arch_atomic64_cmpxchg_release
+#elif defined(arch_atomic64_cmpxchg_relaxed)
static __always_inline s64
-arch_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
+raw_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
{
- return arch_cmpxchg_relaxed(&v->counter, old, new);
+ __atomic_release_fence();
+ return arch_atomic64_cmpxchg_relaxed(v, old, new);
}
-#define arch_atomic64_cmpxchg_relaxed arch_atomic64_cmpxchg_relaxed
-#endif
-
-#else /* arch_atomic64_cmpxchg_relaxed */
-
-#ifndef arch_atomic64_cmpxchg_acquire
+#elif defined(arch_atomic64_cmpxchg)
+#define raw_atomic64_cmpxchg_release arch_atomic64_cmpxchg
+#else
static __always_inline s64
-arch_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
+raw_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
{
- s64 ret = arch_atomic64_cmpxchg_relaxed(v, old, new);
- __atomic_acquire_fence();
- return ret;
+ return raw_cmpxchg_release(&v->counter, old, new);
}
-#define arch_atomic64_cmpxchg_acquire arch_atomic64_cmpxchg_acquire
#endif
-#ifndef arch_atomic64_cmpxchg_release
+#if defined(arch_atomic64_cmpxchg_relaxed)
+#define raw_atomic64_cmpxchg_relaxed arch_atomic64_cmpxchg_relaxed
+#elif defined(arch_atomic64_cmpxchg)
+#define raw_atomic64_cmpxchg_relaxed arch_atomic64_cmpxchg
+#else
static __always_inline s64
-arch_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
+raw_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
{
- __atomic_release_fence();
- return arch_atomic64_cmpxchg_relaxed(v, old, new);
+ return raw_cmpxchg_relaxed(&v->counter, old, new);
}
-#define arch_atomic64_cmpxchg_release arch_atomic64_cmpxchg_release
#endif
-#ifndef arch_atomic64_cmpxchg
-static __always_inline s64
-arch_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
+#if defined(arch_atomic64_try_cmpxchg)
+#define raw_atomic64_try_cmpxchg arch_atomic64_try_cmpxchg
+#elif defined(arch_atomic64_try_cmpxchg_relaxed)
+static __always_inline bool
+raw_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
{
- s64 ret;
+ bool ret;
__atomic_pre_full_fence();
- ret = arch_atomic64_cmpxchg_relaxed(v, old, new);
+ ret = arch_atomic64_try_cmpxchg_relaxed(v, old, new);
__atomic_post_full_fence();
return ret;
}
-#define arch_atomic64_cmpxchg arch_atomic64_cmpxchg
-#endif
-
-#endif /* arch_atomic64_cmpxchg_relaxed */
-
-#ifndef arch_atomic64_try_cmpxchg_relaxed
-#ifdef arch_atomic64_try_cmpxchg
-#define arch_atomic64_try_cmpxchg_acquire arch_atomic64_try_cmpxchg
-#define arch_atomic64_try_cmpxchg_release arch_atomic64_try_cmpxchg
-#define arch_atomic64_try_cmpxchg_relaxed arch_atomic64_try_cmpxchg
-#endif /* arch_atomic64_try_cmpxchg */
-
-#ifndef arch_atomic64_try_cmpxchg
+#else
static __always_inline bool
-arch_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
+raw_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
{
s64 r, o = *old;
- r = arch_atomic64_cmpxchg(v, o, new);
+ r = raw_atomic64_cmpxchg(v, o, new);
if (unlikely(r != o))
*old = r;
return likely(r == o);
}
-#define arch_atomic64_try_cmpxchg arch_atomic64_try_cmpxchg
#endif
-#ifndef arch_atomic64_try_cmpxchg_acquire
+#if defined(arch_atomic64_try_cmpxchg_acquire)
+#define raw_atomic64_try_cmpxchg_acquire arch_atomic64_try_cmpxchg_acquire
+#elif defined(arch_atomic64_try_cmpxchg_relaxed)
+static __always_inline bool
+raw_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
+{
+ bool ret = arch_atomic64_try_cmpxchg_relaxed(v, old, new);
+ __atomic_acquire_fence();
+ return ret;
+}
+#elif defined(arch_atomic64_try_cmpxchg)
+#define raw_atomic64_try_cmpxchg_acquire arch_atomic64_try_cmpxchg
+#else
static __always_inline bool
-arch_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
+raw_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
{
s64 r, o = *old;
- r = arch_atomic64_cmpxchg_acquire(v, o, new);
+ r = raw_atomic64_cmpxchg_acquire(v, o, new);
if (unlikely(r != o))
*old = r;
return likely(r == o);
}
-#define arch_atomic64_try_cmpxchg_acquire arch_atomic64_try_cmpxchg_acquire
#endif
-#ifndef arch_atomic64_try_cmpxchg_release
+#if defined(arch_atomic64_try_cmpxchg_release)
+#define raw_atomic64_try_cmpxchg_release arch_atomic64_try_cmpxchg_release
+#elif defined(arch_atomic64_try_cmpxchg_relaxed)
+static __always_inline bool
+raw_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
+{
+ __atomic_release_fence();
+ return arch_atomic64_try_cmpxchg_relaxed(v, old, new);
+}
+#elif defined(arch_atomic64_try_cmpxchg)
+#define raw_atomic64_try_cmpxchg_release arch_atomic64_try_cmpxchg
+#else
static __always_inline bool
-arch_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
+raw_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
{
s64 r, o = *old;
- r = arch_atomic64_cmpxchg_release(v, o, new);
+ r = raw_atomic64_cmpxchg_release(v, o, new);
if (unlikely(r != o))
*old = r;
return likely(r == o);
}
-#define arch_atomic64_try_cmpxchg_release arch_atomic64_try_cmpxchg_release
#endif
-#ifndef arch_atomic64_try_cmpxchg_relaxed
+#if defined(arch_atomic64_try_cmpxchg_relaxed)
+#define raw_atomic64_try_cmpxchg_relaxed arch_atomic64_try_cmpxchg_relaxed
+#elif defined(arch_atomic64_try_cmpxchg)
+#define raw_atomic64_try_cmpxchg_relaxed arch_atomic64_try_cmpxchg
+#else
static __always_inline bool
-arch_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
+raw_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
{
s64 r, o = *old;
- r = arch_atomic64_cmpxchg_relaxed(v, o, new);
+ r = raw_atomic64_cmpxchg_relaxed(v, o, new);
if (unlikely(r != o))
*old = r;
return likely(r == o);
}
-#define arch_atomic64_try_cmpxchg_relaxed arch_atomic64_try_cmpxchg_relaxed
#endif
-#else /* arch_atomic64_try_cmpxchg_relaxed */
-
-#ifndef arch_atomic64_try_cmpxchg_acquire
-static __always_inline bool
-arch_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
-{
- bool ret = arch_atomic64_try_cmpxchg_relaxed(v, old, new);
- __atomic_acquire_fence();
- return ret;
-}
-#define arch_atomic64_try_cmpxchg_acquire arch_atomic64_try_cmpxchg_acquire
-#endif
-
-#ifndef arch_atomic64_try_cmpxchg_release
-static __always_inline bool
-arch_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
-{
- __atomic_release_fence();
- return arch_atomic64_try_cmpxchg_relaxed(v, old, new);
-}
-#define arch_atomic64_try_cmpxchg_release arch_atomic64_try_cmpxchg_release
-#endif
-
-#ifndef arch_atomic64_try_cmpxchg
-static __always_inline bool
-arch_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
-{
- bool ret;
- __atomic_pre_full_fence();
- ret = arch_atomic64_try_cmpxchg_relaxed(v, old, new);
- __atomic_post_full_fence();
- return ret;
-}
-#define arch_atomic64_try_cmpxchg arch_atomic64_try_cmpxchg
-#endif
-
-#endif /* arch_atomic64_try_cmpxchg_relaxed */
-
-#ifndef arch_atomic64_sub_and_test
+#if defined(arch_atomic64_sub_and_test)
+#define raw_atomic64_sub_and_test arch_atomic64_sub_and_test
+#else
static __always_inline bool
-arch_atomic64_sub_and_test(s64 i, atomic64_t *v)
+raw_atomic64_sub_and_test(s64 i, atomic64_t *v)
{
- return arch_atomic64_sub_return(i, v) == 0;
+ return raw_atomic64_sub_return(i, v) == 0;
}
-#define arch_atomic64_sub_and_test arch_atomic64_sub_and_test
#endif
-#ifndef arch_atomic64_dec_and_test
+#if defined(arch_atomic64_dec_and_test)
+#define raw_atomic64_dec_and_test arch_atomic64_dec_and_test
+#else
static __always_inline bool
-arch_atomic64_dec_and_test(atomic64_t *v)
+raw_atomic64_dec_and_test(atomic64_t *v)
{
- return arch_atomic64_dec_return(v) == 0;
+ return raw_atomic64_dec_return(v) == 0;
}
-#define arch_atomic64_dec_and_test arch_atomic64_dec_and_test
#endif
-#ifndef arch_atomic64_inc_and_test
+#if defined(arch_atomic64_inc_and_test)
+#define raw_atomic64_inc_and_test arch_atomic64_inc_and_test
+#else
static __always_inline bool
-arch_atomic64_inc_and_test(atomic64_t *v)
+raw_atomic64_inc_and_test(atomic64_t *v)
{
- return arch_atomic64_inc_return(v) == 0;
+ return raw_atomic64_inc_return(v) == 0;
}
-#define arch_atomic64_inc_and_test arch_atomic64_inc_and_test
#endif
-#ifndef arch_atomic64_add_negative_relaxed
-#ifdef arch_atomic64_add_negative
-#define arch_atomic64_add_negative_acquire arch_atomic64_add_negative
-#define arch_atomic64_add_negative_release arch_atomic64_add_negative
-#define arch_atomic64_add_negative_relaxed arch_atomic64_add_negative
-#endif /* arch_atomic64_add_negative */
-
-#ifndef arch_atomic64_add_negative
+#if defined(arch_atomic64_add_negative)
+#define raw_atomic64_add_negative arch_atomic64_add_negative
+#elif defined(arch_atomic64_add_negative_relaxed)
static __always_inline bool
-arch_atomic64_add_negative(s64 i, atomic64_t *v)
+raw_atomic64_add_negative(s64 i, atomic64_t *v)
{
- return arch_atomic64_add_return(i, v) < 0;
+ bool ret;
+ __atomic_pre_full_fence();
+ ret = arch_atomic64_add_negative_relaxed(i, v);
+ __atomic_post_full_fence();
+ return ret;
}
-#define arch_atomic64_add_negative arch_atomic64_add_negative
-#endif
-
-#ifndef arch_atomic64_add_negative_acquire
+#else
static __always_inline bool
-arch_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
+raw_atomic64_add_negative(s64 i, atomic64_t *v)
{
- return arch_atomic64_add_return_acquire(i, v) < 0;
+ return raw_atomic64_add_return(i, v) < 0;
}
-#define arch_atomic64_add_negative_acquire arch_atomic64_add_negative_acquire
#endif
-#ifndef arch_atomic64_add_negative_release
+#if defined(arch_atomic64_add_negative_acquire)
+#define raw_atomic64_add_negative_acquire arch_atomic64_add_negative_acquire
+#elif defined(arch_atomic64_add_negative_relaxed)
static __always_inline bool
-arch_atomic64_add_negative_release(s64 i, atomic64_t *v)
+raw_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
{
- return arch_atomic64_add_return_release(i, v) < 0;
+ bool ret = arch_atomic64_add_negative_relaxed(i, v);
+ __atomic_acquire_fence();
+ return ret;
}
-#define arch_atomic64_add_negative_release arch_atomic64_add_negative_release
-#endif
-
-#ifndef arch_atomic64_add_negative_relaxed
+#elif defined(arch_atomic64_add_negative)
+#define raw_atomic64_add_negative_acquire arch_atomic64_add_negative
+#else
static __always_inline bool
-arch_atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
+raw_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
{
- return arch_atomic64_add_return_relaxed(i, v) < 0;
+ return raw_atomic64_add_return_acquire(i, v) < 0;
}
-#define arch_atomic64_add_negative_relaxed arch_atomic64_add_negative_relaxed
#endif
-#else /* arch_atomic64_add_negative_relaxed */
-
-#ifndef arch_atomic64_add_negative_acquire
+#if defined(arch_atomic64_add_negative_release)
+#define raw_atomic64_add_negative_release arch_atomic64_add_negative_release
+#elif defined(arch_atomic64_add_negative_relaxed)
static __always_inline bool
-arch_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
+raw_atomic64_add_negative_release(s64 i, atomic64_t *v)
{
- bool ret = arch_atomic64_add_negative_relaxed(i, v);
- __atomic_acquire_fence();
- return ret;
+ __atomic_release_fence();
+ return arch_atomic64_add_negative_relaxed(i, v);
}
-#define arch_atomic64_add_negative_acquire arch_atomic64_add_negative_acquire
-#endif
-
-#ifndef arch_atomic64_add_negative_release
+#elif defined(arch_atomic64_add_negative)
+#define raw_atomic64_add_negative_release arch_atomic64_add_negative
+#else
static __always_inline bool
-arch_atomic64_add_negative_release(s64 i, atomic64_t *v)
+raw_atomic64_add_negative_release(s64 i, atomic64_t *v)
{
- __atomic_release_fence();
- return arch_atomic64_add_negative_relaxed(i, v);
+ return raw_atomic64_add_return_release(i, v) < 0;
}
-#define arch_atomic64_add_negative_release arch_atomic64_add_negative_release
#endif
-#ifndef arch_atomic64_add_negative
+#if defined(arch_atomic64_add_negative_relaxed)
+#define raw_atomic64_add_negative_relaxed arch_atomic64_add_negative_relaxed
+#elif defined(arch_atomic64_add_negative)
+#define raw_atomic64_add_negative_relaxed arch_atomic64_add_negative
+#else
static __always_inline bool
-arch_atomic64_add_negative(s64 i, atomic64_t *v)
+raw_atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
{
- bool ret;
- __atomic_pre_full_fence();
- ret = arch_atomic64_add_negative_relaxed(i, v);
- __atomic_post_full_fence();
- return ret;
+ return raw_atomic64_add_return_relaxed(i, v) < 0;
}
-#define arch_atomic64_add_negative arch_atomic64_add_negative
#endif
-#endif /* arch_atomic64_add_negative_relaxed */
-
-#ifndef arch_atomic64_fetch_add_unless
+#if defined(arch_atomic64_fetch_add_unless)
+#define raw_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless
+#else
static __always_inline s64
-arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
+raw_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
{
- s64 c = arch_atomic64_read(v);
+ s64 c = raw_atomic64_read(v);
do {
if (unlikely(c == u))
break;
- } while (!arch_atomic64_try_cmpxchg(v, &c, c + a));
+ } while (!raw_atomic64_try_cmpxchg(v, &c, c + a));
return c;
}
-#define arch_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless
#endif
-#ifndef arch_atomic64_add_unless
+#if defined(arch_atomic64_add_unless)
+#define raw_atomic64_add_unless arch_atomic64_add_unless
+#else
static __always_inline bool
-arch_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
+raw_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
{
- return arch_atomic64_fetch_add_unless(v, a, u) != u;
+ return raw_atomic64_fetch_add_unless(v, a, u) != u;
}
-#define arch_atomic64_add_unless arch_atomic64_add_unless
#endif
-#ifndef arch_atomic64_inc_not_zero
+#if defined(arch_atomic64_inc_not_zero)
+#define raw_atomic64_inc_not_zero arch_atomic64_inc_not_zero
+#else
static __always_inline bool
-arch_atomic64_inc_not_zero(atomic64_t *v)
+raw_atomic64_inc_not_zero(atomic64_t *v)
{
- return arch_atomic64_add_unless(v, 1, 0);
+ return raw_atomic64_add_unless(v, 1, 0);
}
-#define arch_atomic64_inc_not_zero arch_atomic64_inc_not_zero
#endif
-#ifndef arch_atomic64_inc_unless_negative
+#if defined(arch_atomic64_inc_unless_negative)
+#define raw_atomic64_inc_unless_negative arch_atomic64_inc_unless_negative
+#else
static __always_inline bool
-arch_atomic64_inc_unless_negative(atomic64_t *v)
+raw_atomic64_inc_unless_negative(atomic64_t *v)
{
- s64 c = arch_atomic64_read(v);
+ s64 c = raw_atomic64_read(v);
do {
if (unlikely(c < 0))
return false;
- } while (!arch_atomic64_try_cmpxchg(v, &c, c + 1));
+ } while (!raw_atomic64_try_cmpxchg(v, &c, c + 1));
return true;
}
-#define arch_atomic64_inc_unless_negative arch_atomic64_inc_unless_negative
#endif
-#ifndef arch_atomic64_dec_unless_positive
+#if defined(arch_atomic64_dec_unless_positive)
+#define raw_atomic64_dec_unless_positive arch_atomic64_dec_unless_positive
+#else
static __always_inline bool
-arch_atomic64_dec_unless_positive(atomic64_t *v)
+raw_atomic64_dec_unless_positive(atomic64_t *v)
{
- s64 c = arch_atomic64_read(v);
+ s64 c = raw_atomic64_read(v);
do {
if (unlikely(c > 0))
return false;
- } while (!arch_atomic64_try_cmpxchg(v, &c, c - 1));
+ } while (!raw_atomic64_try_cmpxchg(v, &c, c - 1));
return true;
}
-#define arch_atomic64_dec_unless_positive arch_atomic64_dec_unless_positive
#endif
-#ifndef arch_atomic64_dec_if_positive
+#if defined(arch_atomic64_dec_if_positive)
+#define raw_atomic64_dec_if_positive arch_atomic64_dec_if_positive
+#else
static __always_inline s64
-arch_atomic64_dec_if_positive(atomic64_t *v)
+raw_atomic64_dec_if_positive(atomic64_t *v)
{
- s64 dec, c = arch_atomic64_read(v);
+ s64 dec, c = raw_atomic64_read(v);
do {
dec = c - 1;
if (unlikely(dec < 0))
break;
- } while (!arch_atomic64_try_cmpxchg(v, &c, dec));
+ } while (!raw_atomic64_try_cmpxchg(v, &c, dec));
return dec;
}
-#define arch_atomic64_dec_if_positive arch_atomic64_dec_if_positive
#endif
#endif /* _LINUX_ATOMIC_FALLBACK_H */
-// e1cee558cc61cae887890db30fcdf93baca9f498
+// c2048fccede6fac923252290e2b303949d5dec83
diff --git a/include/linux/atomic/atomic-raw.h b/include/linux/atomic/atomic-raw.h
deleted file mode 100644
index 8b2fc04..0000000
--- a/include/linux/atomic/atomic-raw.h
+++ /dev/null
@@ -1,1135 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-
-// Generated by scripts/atomic/gen-atomic-raw.sh
-// DO NOT MODIFY THIS FILE DIRECTLY
-
-#ifndef _LINUX_ATOMIC_RAW_H
-#define _LINUX_ATOMIC_RAW_H
-
-static __always_inline int
-raw_atomic_read(const atomic_t *v)
-{
- return arch_atomic_read(v);
-}
-
-static __always_inline int
-raw_atomic_read_acquire(const atomic_t *v)
-{
- return arch_atomic_read_acquire(v);
-}
-
-static __always_inline void
-raw_atomic_set(atomic_t *v, int i)
-{
- arch_atomic_set(v, i);
-}
-
-static __always_inline void
-raw_atomic_set_release(atomic_t *v, int i)
-{
- arch_atomic_set_release(v, i);
-}
-
-static __always_inline void
-raw_atomic_add(int i, atomic_t *v)
-{
- arch_atomic_add(i, v);
-}
-
-static __always_inline int
-raw_atomic_add_return(int i, atomic_t *v)
-{
- return arch_atomic_add_return(i, v);
-}
-
-static __always_inline int
-raw_atomic_add_return_acquire(int i, atomic_t *v)
-{
- return arch_atomic_add_return_acquire(i, v);
-}
-
-static __always_inline int
-raw_atomic_add_return_release(int i, atomic_t *v)
-{
- return arch_atomic_add_return_release(i, v);
-}
-
-static __always_inline int
-raw_atomic_add_return_relaxed(int i, atomic_t *v)
-{
- return arch_atomic_add_return_relaxed(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_add(int i, atomic_t *v)
-{
- return arch_atomic_fetch_add(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_add_acquire(int i, atomic_t *v)
-{
- return arch_atomic_fetch_add_acquire(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_add_release(int i, atomic_t *v)
-{
- return arch_atomic_fetch_add_release(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_add_relaxed(int i, atomic_t *v)
-{
- return arch_atomic_fetch_add_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic_sub(int i, atomic_t *v)
-{
- arch_atomic_sub(i, v);
-}
-
-static __always_inline int
-raw_atomic_sub_return(int i, atomic_t *v)
-{
- return arch_atomic_sub_return(i, v);
-}
-
-static __always_inline int
-raw_atomic_sub_return_acquire(int i, atomic_t *v)
-{
- return arch_atomic_sub_return_acquire(i, v);
-}
-
-static __always_inline int
-raw_atomic_sub_return_release(int i, atomic_t *v)
-{
- return arch_atomic_sub_return_release(i, v);
-}
-
-static __always_inline int
-raw_atomic_sub_return_relaxed(int i, atomic_t *v)
-{
- return arch_atomic_sub_return_relaxed(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_sub(int i, atomic_t *v)
-{
- return arch_atomic_fetch_sub(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_sub_acquire(int i, atomic_t *v)
-{
- return arch_atomic_fetch_sub_acquire(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_sub_release(int i, atomic_t *v)
-{
- return arch_atomic_fetch_sub_release(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_sub_relaxed(int i, atomic_t *v)
-{
- return arch_atomic_fetch_sub_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic_inc(atomic_t *v)
-{
- arch_atomic_inc(v);
-}
-
-static __always_inline int
-raw_atomic_inc_return(atomic_t *v)
-{
- return arch_atomic_inc_return(v);
-}
-
-static __always_inline int
-raw_atomic_inc_return_acquire(atomic_t *v)
-{
- return arch_atomic_inc_return_acquire(v);
-}
-
-static __always_inline int
-raw_atomic_inc_return_release(atomic_t *v)
-{
- return arch_atomic_inc_return_release(v);
-}
-
-static __always_inline int
-raw_atomic_inc_return_relaxed(atomic_t *v)
-{
- return arch_atomic_inc_return_relaxed(v);
-}
-
-static __always_inline int
-raw_atomic_fetch_inc(atomic_t *v)
-{
- return arch_atomic_fetch_inc(v);
-}
-
-static __always_inline int
-raw_atomic_fetch_inc_acquire(atomic_t *v)
-{
- return arch_atomic_fetch_inc_acquire(v);
-}
-
-static __always_inline int
-raw_atomic_fetch_inc_release(atomic_t *v)
-{
- return arch_atomic_fetch_inc_release(v);
-}
-
-static __always_inline int
-raw_atomic_fetch_inc_relaxed(atomic_t *v)
-{
- return arch_atomic_fetch_inc_relaxed(v);
-}
-
-static __always_inline void
-raw_atomic_dec(atomic_t *v)
-{
- arch_atomic_dec(v);
-}
-
-static __always_inline int
-raw_atomic_dec_return(atomic_t *v)
-{
- return arch_atomic_dec_return(v);
-}
-
-static __always_inline int
-raw_atomic_dec_return_acquire(atomic_t *v)
-{
- return arch_atomic_dec_return_acquire(v);
-}
-
-static __always_inline int
-raw_atomic_dec_return_release(atomic_t *v)
-{
- return arch_atomic_dec_return_release(v);
-}
-
-static __always_inline int
-raw_atomic_dec_return_relaxed(atomic_t *v)
-{
- return arch_atomic_dec_return_relaxed(v);
-}
-
-static __always_inline int
-raw_atomic_fetch_dec(atomic_t *v)
-{
- return arch_atomic_fetch_dec(v);
-}
-
-static __always_inline int
-raw_atomic_fetch_dec_acquire(atomic_t *v)
-{
- return arch_atomic_fetch_dec_acquire(v);
-}
-
-static __always_inline int
-raw_atomic_fetch_dec_release(atomic_t *v)
-{
- return arch_atomic_fetch_dec_release(v);
-}
-
-static __always_inline int
-raw_atomic_fetch_dec_relaxed(atomic_t *v)
-{
- return arch_atomic_fetch_dec_relaxed(v);
-}
-
-static __always_inline void
-raw_atomic_and(int i, atomic_t *v)
-{
- arch_atomic_and(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_and(int i, atomic_t *v)
-{
- return arch_atomic_fetch_and(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_and_acquire(int i, atomic_t *v)
-{
- return arch_atomic_fetch_and_acquire(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_and_release(int i, atomic_t *v)
-{
- return arch_atomic_fetch_and_release(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_and_relaxed(int i, atomic_t *v)
-{
- return arch_atomic_fetch_and_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic_andnot(int i, atomic_t *v)
-{
- arch_atomic_andnot(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_andnot(int i, atomic_t *v)
-{
- return arch_atomic_fetch_andnot(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
-{
- return arch_atomic_fetch_andnot_acquire(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_andnot_release(int i, atomic_t *v)
-{
- return arch_atomic_fetch_andnot_release(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_andnot_relaxed(int i, atomic_t *v)
-{
- return arch_atomic_fetch_andnot_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic_or(int i, atomic_t *v)
-{
- arch_atomic_or(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_or(int i, atomic_t *v)
-{
- return arch_atomic_fetch_or(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_or_acquire(int i, atomic_t *v)
-{
- return arch_atomic_fetch_or_acquire(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_or_release(int i, atomic_t *v)
-{
- return arch_atomic_fetch_or_release(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_or_relaxed(int i, atomic_t *v)
-{
- return arch_atomic_fetch_or_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic_xor(int i, atomic_t *v)
-{
- arch_atomic_xor(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_xor(int i, atomic_t *v)
-{
- return arch_atomic_fetch_xor(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_xor_acquire(int i, atomic_t *v)
-{
- return arch_atomic_fetch_xor_acquire(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_xor_release(int i, atomic_t *v)
-{
- return arch_atomic_fetch_xor_release(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_xor_relaxed(int i, atomic_t *v)
-{
- return arch_atomic_fetch_xor_relaxed(i, v);
-}
-
-static __always_inline int
-raw_atomic_xchg(atomic_t *v, int i)
-{
- return arch_atomic_xchg(v, i);
-}
-
-static __always_inline int
-raw_atomic_xchg_acquire(atomic_t *v, int i)
-{
- return arch_atomic_xchg_acquire(v, i);
-}
-
-static __always_inline int
-raw_atomic_xchg_release(atomic_t *v, int i)
-{
- return arch_atomic_xchg_release(v, i);
-}
-
-static __always_inline int
-raw_atomic_xchg_relaxed(atomic_t *v, int i)
-{
- return arch_atomic_xchg_relaxed(v, i);
-}
-
-static __always_inline int
-raw_atomic_cmpxchg(atomic_t *v, int old, int new)
-{
- return arch_atomic_cmpxchg(v, old, new);
-}
-
-static __always_inline int
-raw_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
-{
- return arch_atomic_cmpxchg_acquire(v, old, new);
-}
-
-static __always_inline int
-raw_atomic_cmpxchg_release(atomic_t *v, int old, int new)
-{
- return arch_atomic_cmpxchg_release(v, old, new);
-}
-
-static __always_inline int
-raw_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
-{
- return arch_atomic_cmpxchg_relaxed(v, old, new);
-}
-
-static __always_inline bool
-raw_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
-{
- return arch_atomic_try_cmpxchg(v, old, new);
-}
-
-static __always_inline bool
-raw_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
-{
- return arch_atomic_try_cmpxchg_acquire(v, old, new);
-}
-
-static __always_inline bool
-raw_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
-{
- return arch_atomic_try_cmpxchg_release(v, old, new);
-}
-
-static __always_inline bool
-raw_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
-{
- return arch_atomic_try_cmpxchg_relaxed(v, old, new);
-}
-
-static __always_inline bool
-raw_atomic_sub_and_test(int i, atomic_t *v)
-{
- return arch_atomic_sub_and_test(i, v);
-}
-
-static __always_inline bool
-raw_atomic_dec_and_test(atomic_t *v)
-{
- return arch_atomic_dec_and_test(v);
-}
-
-static __always_inline bool
-raw_atomic_inc_and_test(atomic_t *v)
-{
- return arch_atomic_inc_and_test(v);
-}
-
-static __always_inline bool
-raw_atomic_add_negative(int i, atomic_t *v)
-{
- return arch_atomic_add_negative(i, v);
-}
-
-static __always_inline bool
-raw_atomic_add_negative_acquire(int i, atomic_t *v)
-{
- return arch_atomic_add_negative_acquire(i, v);
-}
-
-static __always_inline bool
-raw_atomic_add_negative_release(int i, atomic_t *v)
-{
- return arch_atomic_add_negative_release(i, v);
-}
-
-static __always_inline bool
-raw_atomic_add_negative_relaxed(int i, atomic_t *v)
-{
- return arch_atomic_add_negative_relaxed(i, v);
-}
-
-static __always_inline int
-raw_atomic_fetch_add_unless(atomic_t *v, int a, int u)
-{
- return arch_atomic_fetch_add_unless(v, a, u);
-}
-
-static __always_inline bool
-raw_atomic_add_unless(atomic_t *v, int a, int u)
-{
- return arch_atomic_add_unless(v, a, u);
-}
-
-static __always_inline bool
-raw_atomic_inc_not_zero(atomic_t *v)
-{
- return arch_atomic_inc_not_zero(v);
-}
-
-static __always_inline bool
-raw_atomic_inc_unless_negative(atomic_t *v)
-{
- return arch_atomic_inc_unless_negative(v);
-}
-
-static __always_inline bool
-raw_atomic_dec_unless_positive(atomic_t *v)
-{
- return arch_atomic_dec_unless_positive(v);
-}
-
-static __always_inline int
-raw_atomic_dec_if_positive(atomic_t *v)
-{
- return arch_atomic_dec_if_positive(v);
-}
-
-static __always_inline s64
-raw_atomic64_read(const atomic64_t *v)
-{
- return arch_atomic64_read(v);
-}
-
-static __always_inline s64
-raw_atomic64_read_acquire(const atomic64_t *v)
-{
- return arch_atomic64_read_acquire(v);
-}
-
-static __always_inline void
-raw_atomic64_set(atomic64_t *v, s64 i)
-{
- arch_atomic64_set(v, i);
-}
-
-static __always_inline void
-raw_atomic64_set_release(atomic64_t *v, s64 i)
-{
- arch_atomic64_set_release(v, i);
-}
-
-static __always_inline void
-raw_atomic64_add(s64 i, atomic64_t *v)
-{
- arch_atomic64_add(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_add_return(s64 i, atomic64_t *v)
-{
- return arch_atomic64_add_return(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_add_return_acquire(s64 i, atomic64_t *v)
-{
- return arch_atomic64_add_return_acquire(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_add_return_release(s64 i, atomic64_t *v)
-{
- return arch_atomic64_add_return_release(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_add_return_relaxed(s64 i, atomic64_t *v)
-{
- return arch_atomic64_add_return_relaxed(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_add(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_add(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_add_acquire(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_add_release(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_add_release(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_add_relaxed(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_add_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic64_sub(s64 i, atomic64_t *v)
-{
- arch_atomic64_sub(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_sub_return(s64 i, atomic64_t *v)
-{
- return arch_atomic64_sub_return(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_sub_return_acquire(s64 i, atomic64_t *v)
-{
- return arch_atomic64_sub_return_acquire(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_sub_return_release(s64 i, atomic64_t *v)
-{
- return arch_atomic64_sub_return_release(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_sub_return_relaxed(s64 i, atomic64_t *v)
-{
- return arch_atomic64_sub_return_relaxed(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_sub(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_sub(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_sub_acquire(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_sub_release(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_sub_release(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_sub_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic64_inc(atomic64_t *v)
-{
- arch_atomic64_inc(v);
-}
-
-static __always_inline s64
-raw_atomic64_inc_return(atomic64_t *v)
-{
- return arch_atomic64_inc_return(v);
-}
-
-static __always_inline s64
-raw_atomic64_inc_return_acquire(atomic64_t *v)
-{
- return arch_atomic64_inc_return_acquire(v);
-}
-
-static __always_inline s64
-raw_atomic64_inc_return_release(atomic64_t *v)
-{
- return arch_atomic64_inc_return_release(v);
-}
-
-static __always_inline s64
-raw_atomic64_inc_return_relaxed(atomic64_t *v)
-{
- return arch_atomic64_inc_return_relaxed(v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_inc(atomic64_t *v)
-{
- return arch_atomic64_fetch_inc(v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_inc_acquire(atomic64_t *v)
-{
- return arch_atomic64_fetch_inc_acquire(v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_inc_release(atomic64_t *v)
-{
- return arch_atomic64_fetch_inc_release(v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_inc_relaxed(atomic64_t *v)
-{
- return arch_atomic64_fetch_inc_relaxed(v);
-}
-
-static __always_inline void
-raw_atomic64_dec(atomic64_t *v)
-{
- arch_atomic64_dec(v);
-}
-
-static __always_inline s64
-raw_atomic64_dec_return(atomic64_t *v)
-{
- return arch_atomic64_dec_return(v);
-}
-
-static __always_inline s64
-raw_atomic64_dec_return_acquire(atomic64_t *v)
-{
- return arch_atomic64_dec_return_acquire(v);
-}
-
-static __always_inline s64
-raw_atomic64_dec_return_release(atomic64_t *v)
-{
- return arch_atomic64_dec_return_release(v);
-}
-
-static __always_inline s64
-raw_atomic64_dec_return_relaxed(atomic64_t *v)
-{
- return arch_atomic64_dec_return_relaxed(v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_dec(atomic64_t *v)
-{
- return arch_atomic64_fetch_dec(v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_dec_acquire(atomic64_t *v)
-{
- return arch_atomic64_fetch_dec_acquire(v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_dec_release(atomic64_t *v)
-{
- return arch_atomic64_fetch_dec_release(v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_dec_relaxed(atomic64_t *v)
-{
- return arch_atomic64_fetch_dec_relaxed(v);
-}
-
-static __always_inline void
-raw_atomic64_and(s64 i, atomic64_t *v)
-{
- arch_atomic64_and(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_and(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_and(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_and_acquire(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_and_release(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_and_release(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_and_relaxed(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_and_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic64_andnot(s64 i, atomic64_t *v)
-{
- arch_atomic64_andnot(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_andnot(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_andnot(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_andnot_acquire(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_andnot_release(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_andnot_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic64_or(s64 i, atomic64_t *v)
-{
- arch_atomic64_or(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_or(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_or(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_or_acquire(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_or_release(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_or_release(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_or_relaxed(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_or_relaxed(i, v);
-}
-
-static __always_inline void
-raw_atomic64_xor(s64 i, atomic64_t *v)
-{
- arch_atomic64_xor(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_xor(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_xor(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_xor_acquire(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_xor_release(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_xor_release(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v)
-{
- return arch_atomic64_fetch_xor_relaxed(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_xchg(atomic64_t *v, s64 i)
-{
- return arch_atomic64_xchg(v, i);
-}
-
-static __always_inline s64
-raw_atomic64_xchg_acquire(atomic64_t *v, s64 i)
-{
- return arch_atomic64_xchg_acquire(v, i);
-}
-
-static __always_inline s64
-raw_atomic64_xchg_release(atomic64_t *v, s64 i)
-{
- return arch_atomic64_xchg_release(v, i);
-}
-
-static __always_inline s64
-raw_atomic64_xchg_relaxed(atomic64_t *v, s64 i)
-{
- return arch_atomic64_xchg_relaxed(v, i);
-}
-
-static __always_inline s64
-raw_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
-{
- return arch_atomic64_cmpxchg(v, old, new);
-}
-
-static __always_inline s64
-raw_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
-{
- return arch_atomic64_cmpxchg_acquire(v, old, new);
-}
-
-static __always_inline s64
-raw_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
-{
- return arch_atomic64_cmpxchg_release(v, old, new);
-}
-
-static __always_inline s64
-raw_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
-{
- return arch_atomic64_cmpxchg_relaxed(v, old, new);
-}
-
-static __always_inline bool
-raw_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
-{
- return arch_atomic64_try_cmpxchg(v, old, new);
-}
-
-static __always_inline bool
-raw_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
-{
- return arch_atomic64_try_cmpxchg_acquire(v, old, new);
-}
-
-static __always_inline bool
-raw_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
-{
- return arch_atomic64_try_cmpxchg_release(v, old, new);
-}
-
-static __always_inline bool
-raw_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
-{
- return arch_atomic64_try_cmpxchg_relaxed(v, old, new);
-}
-
-static __always_inline bool
-raw_atomic64_sub_and_test(s64 i, atomic64_t *v)
-{
- return arch_atomic64_sub_and_test(i, v);
-}
-
-static __always_inline bool
-raw_atomic64_dec_and_test(atomic64_t *v)
-{
- return arch_atomic64_dec_and_test(v);
-}
-
-static __always_inline bool
-raw_atomic64_inc_and_test(atomic64_t *v)
-{
- return arch_atomic64_inc_and_test(v);
-}
-
-static __always_inline bool
-raw_atomic64_add_negative(s64 i, atomic64_t *v)
-{
- return arch_atomic64_add_negative(i, v);
-}
-
-static __always_inline bool
-raw_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
-{
- return arch_atomic64_add_negative_acquire(i, v);
-}
-
-static __always_inline bool
-raw_atomic64_add_negative_release(s64 i, atomic64_t *v)
-{
- return arch_atomic64_add_negative_release(i, v);
-}
-
-static __always_inline bool
-raw_atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
-{
- return arch_atomic64_add_negative_relaxed(i, v);
-}
-
-static __always_inline s64
-raw_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
-{
- return arch_atomic64_fetch_add_unless(v, a, u);
-}
-
-static __always_inline bool
-raw_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
-{
- return arch_atomic64_add_unless(v, a, u);
-}
-
-static __always_inline bool
-raw_atomic64_inc_not_zero(atomic64_t *v)
-{
- return arch_atomic64_inc_not_zero(v);
-}
-
-static __always_inline bool
-raw_atomic64_inc_unless_negative(atomic64_t *v)
-{
- return arch_atomic64_inc_unless_negative(v);
-}
-
-static __always_inline bool
-raw_atomic64_dec_unless_positive(atomic64_t *v)
-{
- return arch_atomic64_dec_unless_positive(v);
-}
-
-static __always_inline s64
-raw_atomic64_dec_if_positive(atomic64_t *v)
-{
- return arch_atomic64_dec_if_positive(v);
-}
-
-#define raw_xchg(...) \
- arch_xchg(__VA_ARGS__)
-
-#define raw_xchg_acquire(...) \
- arch_xchg_acquire(__VA_ARGS__)
-
-#define raw_xchg_release(...) \
- arch_xchg_release(__VA_ARGS__)
-
-#define raw_xchg_relaxed(...) \
- arch_xchg_relaxed(__VA_ARGS__)
-
-#define raw_cmpxchg(...) \
- arch_cmpxchg(__VA_ARGS__)
-
-#define raw_cmpxchg_acquire(...) \
- arch_cmpxchg_acquire(__VA_ARGS__)
-
-#define raw_cmpxchg_release(...) \
- arch_cmpxchg_release(__VA_ARGS__)
-
-#define raw_cmpxchg_relaxed(...) \
- arch_cmpxchg_relaxed(__VA_ARGS__)
-
-#define raw_cmpxchg64(...) \
- arch_cmpxchg64(__VA_ARGS__)
-
-#define raw_cmpxchg64_acquire(...) \
- arch_cmpxchg64_acquire(__VA_ARGS__)
-
-#define raw_cmpxchg64_release(...) \
- arch_cmpxchg64_release(__VA_ARGS__)
-
-#define raw_cmpxchg64_relaxed(...) \
- arch_cmpxchg64_relaxed(__VA_ARGS__)
-
-#define raw_cmpxchg128(...) \
- arch_cmpxchg128(__VA_ARGS__)
-
-#define raw_cmpxchg128_acquire(...) \
- arch_cmpxchg128_acquire(__VA_ARGS__)
-
-#define raw_cmpxchg128_release(...) \
- arch_cmpxchg128_release(__VA_ARGS__)
-
-#define raw_cmpxchg128_relaxed(...) \
- arch_cmpxchg128_relaxed(__VA_ARGS__)
-
-#define raw_try_cmpxchg(...) \
- arch_try_cmpxchg(__VA_ARGS__)
-
-#define raw_try_cmpxchg_acquire(...) \
- arch_try_cmpxchg_acquire(__VA_ARGS__)
-
-#define raw_try_cmpxchg_release(...) \
- arch_try_cmpxchg_release(__VA_ARGS__)
-
-#define raw_try_cmpxchg_relaxed(...) \
- arch_try_cmpxchg_relaxed(__VA_ARGS__)
-
-#define raw_try_cmpxchg64(...) \
- arch_try_cmpxchg64(__VA_ARGS__)
-
-#define raw_try_cmpxchg64_acquire(...) \
- arch_try_cmpxchg64_acquire(__VA_ARGS__)
-
-#define raw_try_cmpxchg64_release(...) \
- arch_try_cmpxchg64_release(__VA_ARGS__)
-
-#define raw_try_cmpxchg64_relaxed(...) \
- arch_try_cmpxchg64_relaxed(__VA_ARGS__)
-
-#define raw_try_cmpxchg128(...) \
- arch_try_cmpxchg128(__VA_ARGS__)
-
-#define raw_try_cmpxchg128_acquire(...) \
- arch_try_cmpxchg128_acquire(__VA_ARGS__)
-
-#define raw_try_cmpxchg128_release(...) \
- arch_try_cmpxchg128_release(__VA_ARGS__)
-
-#define raw_try_cmpxchg128_relaxed(...) \
- arch_try_cmpxchg128_relaxed(__VA_ARGS__)
-
-#define raw_cmpxchg_local(...) \
- arch_cmpxchg_local(__VA_ARGS__)
-
-#define raw_cmpxchg64_local(...) \
- arch_cmpxchg64_local(__VA_ARGS__)
-
-#define raw_cmpxchg128_local(...) \
- arch_cmpxchg128_local(__VA_ARGS__)
-
-#define raw_sync_cmpxchg(...) \
- arch_sync_cmpxchg(__VA_ARGS__)
-
-#define raw_try_cmpxchg_local(...) \
- arch_try_cmpxchg_local(__VA_ARGS__)
-
-#define raw_try_cmpxchg64_local(...) \
- arch_try_cmpxchg64_local(__VA_ARGS__)
-
-#define raw_try_cmpxchg128_local(...) \
- arch_try_cmpxchg128_local(__VA_ARGS__)
-
-#endif /* _LINUX_ATOMIC_RAW_H */
-// b23ed4424e85200e200ded094522e1d743b3a5b1
diff --git a/scripts/atomic/fallbacks/acquire b/scripts/atomic/fallbacks/acquire
index ef76408..b0f732a 100755
--- a/scripts/atomic/fallbacks/acquire
+++ b/scripts/atomic/fallbacks/acquire
@@ -1,6 +1,6 @@
cat <<EOF
static __always_inline ${ret}
-arch_${atomic}_${pfx}${name}${sfx}_acquire(${params})
+raw_${atomic}_${pfx}${name}${sfx}_acquire(${params})
{
${ret} ret = arch_${atomic}_${pfx}${name}${sfx}_relaxed(${args});
__atomic_acquire_fence();
diff --git a/scripts/atomic/fallbacks/add_negative b/scripts/atomic/fallbacks/add_negative
index d0bd2df..1687611 100755
--- a/scripts/atomic/fallbacks/add_negative
+++ b/scripts/atomic/fallbacks/add_negative
@@ -1,7 +1,7 @@
cat <<EOF
static __always_inline bool
-arch_${atomic}_add_negative${order}(${int} i, ${atomic}_t *v)
+raw_${atomic}_add_negative${order}(${int} i, ${atomic}_t *v)
{
- return arch_${atomic}_add_return${order}(i, v) < 0;
+ return raw_${atomic}_add_return${order}(i, v) < 0;
}
EOF
diff --git a/scripts/atomic/fallbacks/add_unless b/scripts/atomic/fallbacks/add_unless
index cf79b9d..88593e2 100755
--- a/scripts/atomic/fallbacks/add_unless
+++ b/scripts/atomic/fallbacks/add_unless
@@ -1,7 +1,7 @@
cat << EOF
static __always_inline bool
-arch_${atomic}_add_unless(${atomic}_t *v, ${int} a, ${int} u)
+raw_${atomic}_add_unless(${atomic}_t *v, ${int} a, ${int} u)
{
- return arch_${atomic}_fetch_add_unless(v, a, u) != u;
+ return raw_${atomic}_fetch_add_unless(v, a, u) != u;
}
EOF
diff --git a/scripts/atomic/fallbacks/andnot b/scripts/atomic/fallbacks/andnot
index 5a42f54..5b83bb6 100755
--- a/scripts/atomic/fallbacks/andnot
+++ b/scripts/atomic/fallbacks/andnot
@@ -1,7 +1,7 @@
cat <<EOF
static __always_inline ${ret}
-arch_${atomic}_${pfx}andnot${sfx}${order}(${int} i, ${atomic}_t *v)
+raw_${atomic}_${pfx}andnot${sfx}${order}(${int} i, ${atomic}_t *v)
{
- ${retstmt}arch_${atomic}_${pfx}and${sfx}${order}(~i, v);
+ ${retstmt}raw_${atomic}_${pfx}and${sfx}${order}(~i, v);
}
EOF
diff --git a/scripts/atomic/fallbacks/cmpxchg b/scripts/atomic/fallbacks/cmpxchg
index 87cd010..312ee67 100644
--- a/scripts/atomic/fallbacks/cmpxchg
+++ b/scripts/atomic/fallbacks/cmpxchg
@@ -1,7 +1,7 @@
cat <<EOF
static __always_inline ${int}
-arch_${atomic}_cmpxchg${order}(${atomic}_t *v, ${int} old, ${int} new)
+raw_${atomic}_cmpxchg${order}(${atomic}_t *v, ${int} old, ${int} new)
{
- return arch_cmpxchg${order}(&v->counter, old, new);
+ return raw_cmpxchg${order}(&v->counter, old, new);
}
EOF
diff --git a/scripts/atomic/fallbacks/dec b/scripts/atomic/fallbacks/dec
index 8c144c8..a660ac6 100755
--- a/scripts/atomic/fallbacks/dec
+++ b/scripts/atomic/fallbacks/dec
@@ -1,7 +1,7 @@
cat <<EOF
static __always_inline ${ret}
-arch_${atomic}_${pfx}dec${sfx}${order}(${atomic}_t *v)
+raw_${atomic}_${pfx}dec${sfx}${order}(${atomic}_t *v)
{
- ${retstmt}arch_${atomic}_${pfx}sub${sfx}${order}(1, v);
+ ${retstmt}raw_${atomic}_${pfx}sub${sfx}${order}(1, v);
}
EOF
diff --git a/scripts/atomic/fallbacks/dec_and_test b/scripts/atomic/fallbacks/dec_and_test
index 3f6b6a8..521dfca 100755
--- a/scripts/atomic/fallbacks/dec_and_test
+++ b/scripts/atomic/fallbacks/dec_and_test
@@ -1,7 +1,7 @@
cat <<EOF
static __always_inline bool
-arch_${atomic}_dec_and_test(${atomic}_t *v)
+raw_${atomic}_dec_and_test(${atomic}_t *v)
{
- return arch_${atomic}_dec_return(v) == 0;
+ return raw_${atomic}_dec_return(v) == 0;
}
EOF
diff --git a/scripts/atomic/fallbacks/dec_if_positive b/scripts/atomic/fallbacks/dec_if_positive
index 86bdced..7acb205 100755
--- a/scripts/atomic/fallbacks/dec_if_positive
+++ b/scripts/atomic/fallbacks/dec_if_positive
@@ -1,14 +1,14 @@
cat <<EOF
static __always_inline ${ret}
-arch_${atomic}_dec_if_positive(${atomic}_t *v)
+raw_${atomic}_dec_if_positive(${atomic}_t *v)
{
- ${int} dec, c = arch_${atomic}_read(v);
+ ${int} dec, c = raw_${atomic}_read(v);
do {
dec = c - 1;
if (unlikely(dec < 0))
break;
- } while (!arch_${atomic}_try_cmpxchg(v, &c, dec));
+ } while (!raw_${atomic}_try_cmpxchg(v, &c, dec));
return dec;
}
diff --git a/scripts/atomic/fallbacks/dec_unless_positive b/scripts/atomic/fallbacks/dec_unless_positive
index c531d5a..bcb4f27 100755
--- a/scripts/atomic/fallbacks/dec_unless_positive
+++ b/scripts/atomic/fallbacks/dec_unless_positive
@@ -1,13 +1,13 @@
cat <<EOF
static __always_inline bool
-arch_${atomic}_dec_unless_positive(${atomic}_t *v)
+raw_${atomic}_dec_unless_positive(${atomic}_t *v)
{
- ${int} c = arch_${atomic}_read(v);
+ ${int} c = raw_${atomic}_read(v);
do {
if (unlikely(c > 0))
return false;
- } while (!arch_${atomic}_try_cmpxchg(v, &c, c - 1));
+ } while (!raw_${atomic}_try_cmpxchg(v, &c, c - 1));
return true;
}
diff --git a/scripts/atomic/fallbacks/fence b/scripts/atomic/fallbacks/fence
index 07757d8..067eea5 100755
--- a/scripts/atomic/fallbacks/fence
+++ b/scripts/atomic/fallbacks/fence
@@ -1,6 +1,6 @@
cat <<EOF
static __always_inline ${ret}
-arch_${atomic}_${pfx}${name}${sfx}(${params})
+raw_${atomic}_${pfx}${name}${sfx}(${params})
{
${ret} ret;
__atomic_pre_full_fence();
diff --git a/scripts/atomic/fallbacks/fetch_add_unless b/scripts/atomic/fallbacks/fetch_add_unless
index 81d2834..c18b940 100755
--- a/scripts/atomic/fallbacks/fetch_add_unless
+++ b/scripts/atomic/fallbacks/fetch_add_unless
@@ -1,13 +1,13 @@
cat << EOF
static __always_inline ${int}
-arch_${atomic}_fetch_add_unless(${atomic}_t *v, ${int} a, ${int} u)
+raw_${atomic}_fetch_add_unless(${atomic}_t *v, ${int} a, ${int} u)
{
- ${int} c = arch_${atomic}_read(v);
+ ${int} c = raw_${atomic}_read(v);
do {
if (unlikely(c == u))
break;
- } while (!arch_${atomic}_try_cmpxchg(v, &c, c + a));
+ } while (!raw_${atomic}_try_cmpxchg(v, &c, c + a));
return c;
}
diff --git a/scripts/atomic/fallbacks/inc b/scripts/atomic/fallbacks/inc
index 3c2c373..7d838f0 100755
--- a/scripts/atomic/fallbacks/inc
+++ b/scripts/atomic/fallbacks/inc
@@ -1,7 +1,7 @@
cat <<EOF
static __always_inline ${ret}
-arch_${atomic}_${pfx}inc${sfx}${order}(${atomic}_t *v)
+raw_${atomic}_${pfx}inc${sfx}${order}(${atomic}_t *v)
{
- ${retstmt}arch_${atomic}_${pfx}add${sfx}${order}(1, v);
+ ${retstmt}raw_${atomic}_${pfx}add${sfx}${order}(1, v);
}
EOF
diff --git a/scripts/atomic/fallbacks/inc_and_test b/scripts/atomic/fallbacks/inc_and_test
index c726a6d..de25aeb 100755
--- a/scripts/atomic/fallbacks/inc_and_test
+++ b/scripts/atomic/fallbacks/inc_and_test
@@ -1,7 +1,7 @@
cat <<EOF
static __always_inline bool
-arch_${atomic}_inc_and_test(${atomic}_t *v)
+raw_${atomic}_inc_and_test(${atomic}_t *v)
{
- return arch_${atomic}_inc_return(v) == 0;
+ return raw_${atomic}_inc_return(v) == 0;
}
EOF
diff --git a/scripts/atomic/fallbacks/inc_not_zero b/scripts/atomic/fallbacks/inc_not_zero
index 9760359..e02206d 100755
--- a/scripts/atomic/fallbacks/inc_not_zero
+++ b/scripts/atomic/fallbacks/inc_not_zero
@@ -1,7 +1,7 @@
cat <<EOF
static __always_inline bool
-arch_${atomic}_inc_not_zero(${atomic}_t *v)
+raw_${atomic}_inc_not_zero(${atomic}_t *v)
{
- return arch_${atomic}_add_unless(v, 1, 0);
+ return raw_${atomic}_add_unless(v, 1, 0);
}
EOF
diff --git a/scripts/atomic/fallbacks/inc_unless_negative b/scripts/atomic/fallbacks/inc_unless_negative
index 95d8ce4..7b85cc5 100755
--- a/scripts/atomic/fallbacks/inc_unless_negative
+++ b/scripts/atomic/fallbacks/inc_unless_negative
@@ -1,13 +1,13 @@
cat <<EOF
static __always_inline bool
-arch_${atomic}_inc_unless_negative(${atomic}_t *v)
+raw_${atomic}_inc_unless_negative(${atomic}_t *v)
{
- ${int} c = arch_${atomic}_read(v);
+ ${int} c = raw_${atomic}_read(v);
do {
if (unlikely(c < 0))
return false;
- } while (!arch_${atomic}_try_cmpxchg(v, &c, c + 1));
+ } while (!raw_${atomic}_try_cmpxchg(v, &c, c + 1));
return true;
}
diff --git a/scripts/atomic/fallbacks/read_acquire b/scripts/atomic/fallbacks/read_acquire
index a0ea1d2..26d15ad 100755
--- a/scripts/atomic/fallbacks/read_acquire
+++ b/scripts/atomic/fallbacks/read_acquire
@@ -1,13 +1,13 @@
cat <<EOF
static __always_inline ${ret}
-arch_${atomic}_read_acquire(const ${atomic}_t *v)
+raw_${atomic}_read_acquire(const ${atomic}_t *v)
{
${int} ret;
if (__native_word(${atomic}_t)) {
ret = smp_load_acquire(&(v)->counter);
} else {
- ret = arch_${atomic}_read(v);
+ ret = raw_${atomic}_read(v);
__atomic_acquire_fence();
}
diff --git a/scripts/atomic/fallbacks/release b/scripts/atomic/fallbacks/release
index b46feb5..cbbff70 100755
--- a/scripts/atomic/fallbacks/release
+++ b/scripts/atomic/fallbacks/release
@@ -1,6 +1,6 @@
cat <<EOF
static __always_inline ${ret}
-arch_${atomic}_${pfx}${name}${sfx}_release(${params})
+raw_${atomic}_${pfx}${name}${sfx}_release(${params})
{
__atomic_release_fence();
${retstmt}arch_${atomic}_${pfx}${name}${sfx}_relaxed(${args});
diff --git a/scripts/atomic/fallbacks/set_release b/scripts/atomic/fallbacks/set_release
index 05cdb7f..104693b 100755
--- a/scripts/atomic/fallbacks/set_release
+++ b/scripts/atomic/fallbacks/set_release
@@ -1,12 +1,12 @@
cat <<EOF
static __always_inline void
-arch_${atomic}_set_release(${atomic}_t *v, ${int} i)
+raw_${atomic}_set_release(${atomic}_t *v, ${int} i)
{
if (__native_word(${atomic}_t)) {
smp_store_release(&(v)->counter, i);
} else {
__atomic_release_fence();
- arch_${atomic}_set(v, i);
+ raw_${atomic}_set(v, i);
}
}
EOF
diff --git a/scripts/atomic/fallbacks/sub_and_test b/scripts/atomic/fallbacks/sub_and_test
index da8a049..8975a49 100755
--- a/scripts/atomic/fallbacks/sub_and_test
+++ b/scripts/atomic/fallbacks/sub_and_test
@@ -1,7 +1,7 @@
cat <<EOF
static __always_inline bool
-arch_${atomic}_sub_and_test(${int} i, ${atomic}_t *v)
+raw_${atomic}_sub_and_test(${int} i, ${atomic}_t *v)
{
- return arch_${atomic}_sub_return(i, v) == 0;
+ return raw_${atomic}_sub_return(i, v) == 0;
}
EOF
diff --git a/scripts/atomic/fallbacks/try_cmpxchg b/scripts/atomic/fallbacks/try_cmpxchg
index 890f850..4c911a6 100755
--- a/scripts/atomic/fallbacks/try_cmpxchg
+++ b/scripts/atomic/fallbacks/try_cmpxchg
@@ -1,9 +1,9 @@
cat <<EOF
static __always_inline bool
-arch_${atomic}_try_cmpxchg${order}(${atomic}_t *v, ${int} *old, ${int} new)
+raw_${atomic}_try_cmpxchg${order}(${atomic}_t *v, ${int} *old, ${int} new)
{
${int} r, o = *old;
- r = arch_${atomic}_cmpxchg${order}(v, o, new);
+ r = raw_${atomic}_cmpxchg${order}(v, o, new);
if (unlikely(r != o))
*old = r;
return likely(r == o);
diff --git a/scripts/atomic/fallbacks/xchg b/scripts/atomic/fallbacks/xchg
index 733b898..bdd788a 100644
--- a/scripts/atomic/fallbacks/xchg
+++ b/scripts/atomic/fallbacks/xchg
@@ -1,7 +1,7 @@
cat <<EOF
static __always_inline ${int}
-arch_${atomic}_xchg${order}(${atomic}_t *v, ${int} new)
+raw_${atomic}_xchg${order}(${atomic}_t *v, ${int} new)
{
- return arch_xchg${order}(&v->counter, new);
+ return raw_xchg${order}(&v->counter, new);
}
EOF
diff --git a/scripts/atomic/gen-atomic-fallback.sh b/scripts/atomic/gen-atomic-fallback.sh
index 3373308..86aca4f 100755
--- a/scripts/atomic/gen-atomic-fallback.sh
+++ b/scripts/atomic/gen-atomic-fallback.sh
@@ -17,19 +17,12 @@ gen_template_fallback()
local atomic="$1"; shift
local int="$1"; shift
- local atomicname="arch_${atomic}_${pfx}${name}${sfx}${order}"
-
local ret="$(gen_ret_type "${meta}" "${int}")"
local retstmt="$(gen_ret_stmt "${meta}")"
local params="$(gen_params "${int}" "${atomic}" "$@")"
local args="$(gen_args "$@")"
- if [ ! -z "${template}" ]; then
- printf "#ifndef ${atomicname}\n"
- . ${template}
- printf "#define ${atomicname} ${atomicname}\n"
- printf "#endif\n\n"
- fi
+ . ${template}
}
#gen_order_fallback(meta, pfx, name, sfx, order, atomic, int, args...)
@@ -59,69 +52,92 @@ gen_proto_fallback()
gen_template_fallback "${tmpl}" "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "$@"
}
-#gen_basic_fallbacks(basename)
-gen_basic_fallbacks()
-{
- local basename="$1"; shift
-cat << EOF
-#define ${basename}_acquire ${basename}
-#define ${basename}_release ${basename}
-#define ${basename}_relaxed ${basename}
-EOF
-}
-
-#gen_proto_order_variants(meta, pfx, name, sfx, atomic, int, args...)
-gen_proto_order_variants()
+#gen_proto_order_variant(meta, pfx, name, sfx, order, atomic, int, args...)
+gen_proto_order_variant()
{
local meta="$1"; shift
local pfx="$1"; shift
local name="$1"; shift
local sfx="$1"; shift
+ local order="$1"; shift
local atomic="$1"
- local basename="arch_${atomic}_${pfx}${name}${sfx}"
-
- local template="$(find_fallback_template "${pfx}" "${name}" "${sfx}" "")"
+ local atomicname="${atomic}_${pfx}${name}${sfx}${order}"
+ local basename="${atomic}_${pfx}${name}${sfx}"
- # If we don't have relaxed atomics, then we don't bother with ordering fallbacks
- # read_acquire and set_release need to be templated, though
- if ! meta_has_relaxed "${meta}"; then
- gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "" "$@"
+ local template="$(find_fallback_template "${pfx}" "${name}" "${sfx}" "${order}")"
- if meta_has_acquire "${meta}"; then
- gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_acquire" "$@"
- fi
+ # Where there is no possible fallback, this order variant is mandatory
+ # and must be provided by arch code. Add a comment to the header to
+ # make this obvious.
+ #
+ # Ideally we'd error on a missing definition, but arch code might
+ # define this order variant as a C function without a preprocessor
+ # symbol.
+ if [ -z ${template} ] && [ -z "${order}" ] && ! meta_has_relaxed "${meta}"; then
+ printf "#define raw_${atomicname} arch_${atomicname}\n\n"
+ return
+ fi
- if meta_has_release "${meta}"; then
- gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_release" "$@"
- fi
+ printf "#if defined(arch_${atomicname})\n"
+ printf "#define raw_${atomicname} arch_${atomicname}\n"
- return
+ # Allow FULL/ACQUIRE/RELEASE ops to be defined in terms of RELAXED ops
+ if [ "${order}" != "_relaxed" ] && meta_has_relaxed "${meta}"; then
+ printf "#elif defined(arch_${basename}_relaxed)\n"
+ gen_order_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "$@"
fi
- printf "#ifndef ${basename}_relaxed\n"
+ # Allow ACQUIRE/RELEASE/RELAXED ops to be defined in terms of FULL ops
+ if [ ! -z "${order}" ]; then
+ printf "#elif defined(arch_${basename})\n"
+ printf "#define raw_${atomicname} arch_${basename}\n"
+ fi
+ printf "#else\n"
if [ ! -z "${template}" ]; then
- printf "#ifdef ${basename}\n"
+ gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "$@"
+ else
+ printf "#error \"Unable to define raw_${atomicname}\"\n"
fi
- gen_basic_fallbacks "${basename}"
+ printf "#endif\n\n"
+}
- if [ ! -z "${template}" ]; then
- printf "#endif /* ${basename} */\n\n"
- gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "" "$@"
- gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_acquire" "$@"
- gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_release" "$@"
- gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_relaxed" "$@"
+
+#gen_proto_order_variants(meta, pfx, name, sfx, atomic, int, args...)
+gen_proto_order_variants()
+{
+ local meta="$1"; shift
+ local pfx="$1"; shift
+ local name="$1"; shift
+ local sfx="$1"; shift
+ local atomic="$1"
+
+ gen_proto_order_variant "${meta}" "${pfx}" "${name}" "${sfx}" "" "$@"
+
+ if meta_has_acquire "${meta}"; then
+ gen_proto_order_variant "${meta}" "${pfx}" "${name}" "${sfx}" "_acquire" "$@"
fi
- printf "#else /* ${basename}_relaxed */\n\n"
+ if meta_has_release "${meta}"; then
+ gen_proto_order_variant "${meta}" "${pfx}" "${name}" "${sfx}" "_release" "$@"
+ fi
- gen_order_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_acquire" "$@"
- gen_order_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "_release" "$@"
- gen_order_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "" "$@"
+ if meta_has_relaxed "${meta}"; then
+ gen_proto_order_variant "${meta}" "${pfx}" "${name}" "${sfx}" "_relaxed" "$@"
+ fi
+}
- printf "#endif /* ${basename}_relaxed */\n\n"
+#gen_basic_fallbacks(basename)
+gen_basic_fallbacks()
+{
+ local basename="$1"; shift
+cat << EOF
+#define raw_${basename}_acquire arch_${basename}
+#define raw_${basename}_release arch_${basename}
+#define raw_${basename}_relaxed arch_${basename}
+EOF
}
gen_order_fallbacks()
@@ -130,36 +146,65 @@ gen_order_fallbacks()
cat <<EOF
-#ifndef ${xchg}_acquire
-#define ${xchg}_acquire(...) \\
- __atomic_op_acquire(${xchg}, __VA_ARGS__)
+#define raw_${xchg}_relaxed arch_${xchg}_relaxed
+
+#ifdef arch_${xchg}_acquire
+#define raw_${xchg}_acquire arch_${xchg}_acquire
+#else
+#define raw_${xchg}_acquire(...) \\
+ __atomic_op_acquire(arch_${xchg}, __VA_ARGS__)
#endif
-#ifndef ${xchg}_release
-#define ${xchg}_release(...) \\
- __atomic_op_release(${xchg}, __VA_ARGS__)
+#ifdef arch_${xchg}_release
+#define raw_${xchg}_release arch_${xchg}_release
+#else
+#define raw_${xchg}_release(...) \\
+ __atomic_op_release(arch_${xchg}, __VA_ARGS__)
#endif
-#ifndef ${xchg}
-#define ${xchg}(...) \\
- __atomic_op_fence(${xchg}, __VA_ARGS__)
+#ifdef arch_${xchg}
+#define raw_${xchg} arch_${xchg}
+#else
+#define raw_${xchg}(...) \\
+ __atomic_op_fence(arch_${xchg}, __VA_ARGS__)
#endif
EOF
}
-gen_xchg_fallbacks()
+gen_xchg_order_fallback()
{
local xchg="$1"; shift
- printf "#ifndef ${xchg}_relaxed\n"
+ local order="$1"; shift
+ local forder="${order:-_fence}"
- gen_basic_fallbacks ${xchg}
+ printf "#if defined(arch_${xchg}${order})\n"
+ printf "#define raw_${xchg}${order} arch_${xchg}${order}\n"
- printf "#else /* ${xchg}_relaxed */\n"
+ if [ "${order}" != "_relaxed" ]; then
+ printf "#elif defined(arch_${xchg}_relaxed)\n"
+ printf "#define raw_${xchg}${order}(...) \\\\\n"
+ printf " __atomic_op${forder}(arch_${xchg}, __VA_ARGS__)\n"
+ fi
- gen_order_fallbacks ${xchg}
+ if [ ! -z "${order}" ]; then
+ printf "#elif defined(arch_${xchg})\n"
+ printf "#define raw_${xchg}${order} arch_${xchg}\n"
+ fi
- printf "#endif /* ${xchg}_relaxed */\n\n"
+ printf "#else\n"
+ printf "extern void raw_${xchg}${order}_not_implemented(void);\n"
+ printf "#define raw_${xchg}${order}(...) raw_${xchg}${order}_not_implemented()\n"
+ printf "#endif\n\n"
+}
+
+gen_xchg_fallbacks()
+{
+ local xchg="$1"; shift
+
+ for order in "" "_acquire" "_release" "_relaxed"; do
+ gen_xchg_order_fallback "${xchg}" "${order}"
+ done
}
gen_try_cmpxchg_fallback()
@@ -168,40 +213,61 @@ gen_try_cmpxchg_fallback()
local order="$1"; shift;
cat <<EOF
-#ifndef arch_try_${cmpxchg}${order}
-#define arch_try_${cmpxchg}${order}(_ptr, _oldp, _new) \\
+#define raw_try_${cmpxchg}${order}(_ptr, _oldp, _new) \\
({ \\
typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \\
- ___r = arch_${cmpxchg}${order}((_ptr), ___o, (_new)); \\
+ ___r = raw_${cmpxchg}${order}((_ptr), ___o, (_new)); \\
if (unlikely(___r != ___o)) \\
*___op = ___r; \\
likely(___r == ___o); \\
})
-#endif /* arch_try_${cmpxchg}${order} */
-
EOF
}
-gen_try_cmpxchg_fallbacks()
+gen_try_cmpxchg_order_fallback()
{
- local cmpxchg="$1"; shift;
+ local cmpxchg="$1"; shift
+ local order="$1"; shift
+ local forder="${order:-_fence}"
- printf "#ifndef arch_try_${cmpxchg}_relaxed\n"
- printf "#ifdef arch_try_${cmpxchg}\n"
+ printf "#if defined(arch_try_${cmpxchg}${order})\n"
+ printf "#define raw_try_${cmpxchg}${order} arch_try_${cmpxchg}${order}\n"
- gen_basic_fallbacks "arch_try_${cmpxchg}"
+ if [ "${order}" != "_relaxed" ]; then
+ printf "#elif defined(arch_try_${cmpxchg}_relaxed)\n"
+ printf "#define raw_try_${cmpxchg}${order}(...) \\\\\n"
+ printf " __atomic_op${forder}(arch_try_${cmpxchg}, __VA_ARGS__)\n"
+ fi
+
+ if [ ! -z "${order}" ]; then
+ printf "#elif defined(arch_try_${cmpxchg})\n"
+ printf "#define raw_try_${cmpxchg}${order} arch_try_${cmpxchg}\n"
+ fi
- printf "#endif /* arch_try_${cmpxchg} */\n\n"
+ printf "#else\n"
+ gen_try_cmpxchg_fallback "${cmpxchg}" "${order}"
+ printf "#endif\n\n"
+}
+
+gen_try_cmpxchg_fallbacks()
+{
+ local cmpxchg="$1"; shift;
for order in "" "_acquire" "_release" "_relaxed"; do
- gen_try_cmpxchg_fallback "${cmpxchg}" "${order}"
+ gen_try_cmpxchg_order_fallback "${cmpxchg}" "${order}"
done
+}
- printf "#else /* arch_try_${cmpxchg}_relaxed */\n"
-
- gen_order_fallbacks "arch_try_${cmpxchg}"
+gen_cmpxchg_local_fallbacks()
+{
+ local cmpxchg="$1"; shift
- printf "#endif /* arch_try_${cmpxchg}_relaxed */\n\n"
+ printf "#define raw_${cmpxchg} arch_${cmpxchg}\n\n"
+ printf "#ifdef arch_try_${cmpxchg}\n"
+ printf "#define raw_try_${cmpxchg} arch_try_${cmpxchg}\n"
+ printf "#else\n"
+ gen_try_cmpxchg_fallback "${cmpxchg}" ""
+ printf "#endif\n\n"
}
cat << EOF
@@ -217,7 +283,7 @@ cat << EOF
EOF
-for xchg in "arch_xchg" "arch_cmpxchg" "arch_cmpxchg64" "arch_cmpxchg128"; do
+for xchg in "xchg" "cmpxchg" "cmpxchg64" "cmpxchg128"; do
gen_xchg_fallbacks "${xchg}"
done
@@ -225,8 +291,12 @@ for cmpxchg in "cmpxchg" "cmpxchg64" "cmpxchg128"; do
gen_try_cmpxchg_fallbacks "${cmpxchg}"
done
-for cmpxchg in "cmpxchg_local" "cmpxchg64_local"; do
- gen_try_cmpxchg_fallback "${cmpxchg}" ""
+for cmpxchg in "cmpxchg_local" "cmpxchg64_local" "cmpxchg128_local"; do
+ gen_cmpxchg_local_fallbacks "${cmpxchg}" ""
+done
+
+for cmpxchg in "sync_cmpxchg"; do
+ printf "#define raw_${cmpxchg} arch_${cmpxchg}\n\n"
done
grep '^[a-z]' "$1" | while read name meta args; do
diff --git a/scripts/atomic/gen-atomic-raw.sh b/scripts/atomic/gen-atomic-raw.sh
deleted file mode 100644
index c7e3c52..0000000
--- a/scripts/atomic/gen-atomic-raw.sh
+++ /dev/null
@@ -1,80 +0,0 @@
-#!/bin/sh
-# SPDX-License-Identifier: GPL-2.0
-
-ATOMICDIR=$(dirname $0)
-
-. ${ATOMICDIR}/atomic-tbl.sh
-
-#gen_proto_order_variant(meta, pfx, name, sfx, order, atomic, int, arg...)
-gen_proto_order_variant()
-{
- local meta="$1"; shift
- local pfx="$1"; shift
- local name="$1"; shift
- local sfx="$1"; shift
- local order="$1"; shift
- local atomic="$1"; shift
- local int="$1"; shift
-
- local atomicname="${atomic}_${pfx}${name}${sfx}${order}"
-
- local ret="$(gen_ret_type "${meta}" "${int}")"
- local params="$(gen_params "${int}" "${atomic}" "$@")"
- local args="$(gen_args "$@")"
- local retstmt="$(gen_ret_stmt "${meta}")"
-
-cat <<EOF
-static __always_inline ${ret}
-raw_${atomicname}(${params})
-{
- ${retstmt}arch_${atomicname}(${args});
-}
-
-EOF
-}
-
-gen_xchg()
-{
- local xchg="$1"; shift
- local order="$1"; shift
-
-cat <<EOF
-#define raw_${xchg}${order}(...) \\
- arch_${xchg}${order}(__VA_ARGS__)
-EOF
-}
-
-cat << EOF
-// SPDX-License-Identifier: GPL-2.0
-
-// Generated by $0
-// DO NOT MODIFY THIS FILE DIRECTLY
-
-#ifndef _LINUX_ATOMIC_RAW_H
-#define _LINUX_ATOMIC_RAW_H
-
-EOF
-
-grep '^[a-z]' "$1" | while read name meta args; do
- gen_proto "${meta}" "${name}" "atomic" "int" ${args}
-done
-
-grep '^[a-z]' "$1" | while read name meta args; do
- gen_proto "${meta}" "${name}" "atomic64" "s64" ${args}
-done
-
-for xchg in "xchg" "cmpxchg" "cmpxchg64" "cmpxchg128" "try_cmpxchg" "try_cmpxchg64" "try_cmpxchg128"; do
- for order in "" "_acquire" "_release" "_relaxed"; do
- gen_xchg "${xchg}" "${order}"
- printf "\n"
- done
-done
-
-for xchg in "cmpxchg_local" "cmpxchg64_local" "cmpxchg128_local" "sync_cmpxchg" "try_cmpxchg_local" "try_cmpxchg64_local" "try_cmpxchg128_local"; do
- gen_xchg "${xchg}" ""
- printf "\n"
-done
-
-cat <<EOF
-#endif /* _LINUX_ATOMIC_RAW_H */
-EOF
diff --git a/scripts/atomic/gen-atomics.sh b/scripts/atomic/gen-atomics.sh
index 631d351..5b98a83 100755
--- a/scripts/atomic/gen-atomics.sh
+++ b/scripts/atomic/gen-atomics.sh
@@ -11,7 +11,6 @@ cat <<EOF |
gen-atomic-instrumented.sh linux/atomic/atomic-instrumented.h
gen-atomic-long.sh linux/atomic/atomic-long.h
gen-atomic-fallback.sh linux/atomic/atomic-arch-fallback.h
-gen-atomic-raw.sh linux/atomic/atomic-raw.h
EOF
while read script header args; do
/bin/sh ${ATOMICDIR}/${script} ${ATOMICTBL} ${args} > ${LINUXDIR}/include/${header}
The following commit has been merged into the locking/core branch of tip:
Commit-ID: a083ecc9333c62237551ad93f42e86a42a3c7cc2
Gitweb: https://git.kernel.org/tip/a083ecc9333c62237551ad93f42e86a42a3c7cc2
Author: Mark Rutland <[email protected]>
AuthorDate: Mon, 05 Jun 2023 08:01:11 +01:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Mon, 05 Jun 2023 09:57:18 +02:00
locking/atomic: scripts: remove bogus order parameter
At the start of gen_proto_order_variants(), the ${order} variable is not
yet defined, and will be substituted with an empty string.
Replace the current bogus use of ${order} with an empty string instead.
This results in no change to the generated headers.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
scripts/atomic/gen-atomic-fallback.sh | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/scripts/atomic/gen-atomic-fallback.sh b/scripts/atomic/gen-atomic-fallback.sh
index a70acd5..7a6bcea 100755
--- a/scripts/atomic/gen-atomic-fallback.sh
+++ b/scripts/atomic/gen-atomic-fallback.sh
@@ -81,7 +81,7 @@ gen_proto_order_variants()
local basename="arch_${atomic}_${pfx}${name}${sfx}"
- local template="$(find_fallback_template "${pfx}" "${name}" "${sfx}" "${order}")"
+ local template="$(find_fallback_template "${pfx}" "${name}" "${sfx}" "")"
# If we don't have relaxed atomics, then we don't bother with ordering fallbacks
# read_acquire and set_release need to be templated, though
The following commit has been merged into the locking/core branch of tip:
Commit-ID: 8ad17f2183fd7e37ceafddbdff334a3e2608cc84
Gitweb: https://git.kernel.org/tip/8ad17f2183fd7e37ceafddbdff334a3e2608cc84
Author: Mark Rutland <[email protected]>
AuthorDate: Mon, 05 Jun 2023 08:01:04 +01:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Mon, 05 Jun 2023 09:57:15 +02:00
locking/atomic: hexagon: add preprocessor symbols
Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of
RELAXED atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in terms
of FULL ordered atomics. Other atomics are optional, and don't exist in
some configurations (e.g. not all architectures implement the 128-bit
cmpxchg ops).
Subsequent patches will require that architectures define a preprocessor
symbol for any atomic (or ordering variant) which is optional. This will
make the fallback ifdeffery more robust, and simplify future changes.
Add the required definitions to arch/hexagon.
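As a minimal standalone sketch of the pattern (not taken from this patch;
the sketch_* names and the GCC __atomic builtin stand in for the real arch
code), an op is paired with a same-named preprocessor symbol so that the
generated ifdeffery can detect whether the arch provides it:
| typedef struct { int counter; } sketch_atomic_t;
|
| static inline int sketch_atomic_add_return(int i, sketch_atomic_t *v)
| {
| 	/* stand-in for the real arch implementation */
| 	return __atomic_add_fetch(&v->counter, i, __ATOMIC_SEQ_CST);
| }
| #define sketch_atomic_add_return sketch_atomic_add_return
|
| /* A generated header can now test for the op at preprocessing time: */
| #if defined(sketch_atomic_add_return)
| /* delegate to the arch-provided op */
| #else
| /* emit a fallback in terms of other ops */
| #endif
Without the #define, "#if defined(...)" cannot see a plain C function, and
the generator would wrongly emit a fallback over the arch implementation.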
Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
arch/hexagon/include/asm/atomic.h | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/arch/hexagon/include/asm/atomic.h b/arch/hexagon/include/asm/atomic.h
index ad6c111..5c84400 100644
--- a/arch/hexagon/include/asm/atomic.h
+++ b/arch/hexagon/include/asm/atomic.h
@@ -91,6 +91,11 @@ static inline int arch_atomic_fetch_##op(int i, atomic_t *v) \
ATOMIC_OPS(add)
ATOMIC_OPS(sub)
+#define arch_atomic_add_return arch_atomic_add_return
+#define arch_atomic_sub_return arch_atomic_sub_return
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+
#undef ATOMIC_OPS
#define ATOMIC_OPS(op) ATOMIC_OP(op) ATOMIC_FETCH_OP(op)
@@ -98,6 +103,10 @@ ATOMIC_OPS(and)
ATOMIC_OPS(or)
ATOMIC_OPS(xor)
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
#undef ATOMIC_OPS
#undef ATOMIC_FETCH_OP
#undef ATOMIC_OP_RETURN
The following commit has been merged into the locking/core branch of tip:
Commit-ID: ef558b4b7bbbf7e115c87e4da21ce86444d6ec3b
Gitweb: https://git.kernel.org/tip/ef558b4b7bbbf7e115c87e4da21ce86444d6ec3b
Author: Mark Rutland <[email protected]>
AuthorDate: Mon, 05 Jun 2023 08:01:24 +01:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Mon, 05 Jun 2023 09:57:24 +02:00
locking/atomic: treewide: delete arch_atomic_*() kerneldoc
Currently several architectures have kerneldoc comments for
arch_atomic_*(), which is unhelpful as these live in a shared namespace
where they clash, and the arch_atomic_*() ops are now an implementation
detail of the raw_atomic_*() ops, which no-one should use directly.
Delete the kerneldoc comments for arch_atomic_*(), along with
pseudo-kerneldoc comments which are in the correct style but are missing
the leading '/**' necessary to be true kerneldoc comments.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
arch/alpha/include/asm/atomic.h | 25 +-------
arch/arc/include/asm/atomic64-arcv2.h | 17 +-----
arch/hexagon/include/asm/atomic.h | 16 +-----
arch/loongarch/include/asm/atomic.h | 49 +---------------
arch/x86/include/asm/atomic.h | 87 +--------------------------
arch/x86/include/asm/atomic64_32.h | 76 +-----------------------
arch/x86/include/asm/atomic64_64.h | 81 +------------------------
7 files changed, 351 deletions(-)
diff --git a/arch/alpha/include/asm/atomic.h b/arch/alpha/include/asm/atomic.h
index ec8ab55..cbd9244 100644
--- a/arch/alpha/include/asm/atomic.h
+++ b/arch/alpha/include/asm/atomic.h
@@ -200,15 +200,6 @@ ATOMIC_OPS(xor, xor)
#undef ATOMIC_OP_RETURN
#undef ATOMIC_OP
-/**
- * arch_atomic_fetch_add_unless - add unless the number is a given value
- * @v: pointer of type atomic_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, so long as it was not @u.
- * Returns the old value of @v.
- */
static __inline__ int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
{
int c, new, old;
@@ -232,15 +223,6 @@ static __inline__ int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
}
#define arch_atomic_fetch_add_unless arch_atomic_fetch_add_unless
-/**
- * arch_atomic64_fetch_add_unless - add unless the number is a given value
- * @v: pointer of type atomic64_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, so long as it was not @u.
- * Returns the old value of @v.
- */
static __inline__ s64 arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
{
s64 c, new, old;
@@ -264,13 +246,6 @@ static __inline__ s64 arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u
}
#define arch_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless
-/*
- * arch_atomic64_dec_if_positive - decrement by 1 if old value positive
- * @v: pointer of type atomic_t
- *
- * The function returns the old value of *v minus 1, even if
- * the atomic variable, v, was not decremented.
- */
static inline s64 arch_atomic64_dec_if_positive(atomic64_t *v)
{
s64 old, tmp;
diff --git a/arch/arc/include/asm/atomic64-arcv2.h b/arch/arc/include/asm/atomic64-arcv2.h
index 2b7c9e6..6b6db98 100644
--- a/arch/arc/include/asm/atomic64-arcv2.h
+++ b/arch/arc/include/asm/atomic64-arcv2.h
@@ -182,14 +182,6 @@ static inline s64 arch_atomic64_xchg(atomic64_t *ptr, s64 new)
}
#define arch_atomic64_xchg arch_atomic64_xchg
-/**
- * arch_atomic64_dec_if_positive - decrement by 1 if old value positive
- * @v: pointer of type atomic64_t
- *
- * The function returns the old value of *v minus 1, even if
- * the atomic variable, v, was not decremented.
- */
-
static inline s64 arch_atomic64_dec_if_positive(atomic64_t *v)
{
s64 val;
@@ -214,15 +206,6 @@ static inline s64 arch_atomic64_dec_if_positive(atomic64_t *v)
}
#define arch_atomic64_dec_if_positive arch_atomic64_dec_if_positive
-/**
- * arch_atomic64_fetch_add_unless - add unless the number is a given value
- * @v: pointer of type atomic64_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, if it was not @u.
- * Returns the old value of @v
- */
static inline s64 arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
{
s64 old, temp;
diff --git a/arch/hexagon/include/asm/atomic.h b/arch/hexagon/include/asm/atomic.h
index 5c84400..2447d08 100644
--- a/arch/hexagon/include/asm/atomic.h
+++ b/arch/hexagon/include/asm/atomic.h
@@ -28,12 +28,6 @@ static inline void arch_atomic_set(atomic_t *v, int new)
#define arch_atomic_set_release(v, i) arch_atomic_set((v), (i))
-/**
- * arch_atomic_read - reads a word, atomically
- * @v: pointer to atomic value
- *
- * Assumes all word reads on our architecture are atomic.
- */
#define arch_atomic_read(v) READ_ONCE((v)->counter)
#define ATOMIC_OP(op) \
@@ -112,16 +106,6 @@ ATOMIC_OPS(xor)
#undef ATOMIC_OP_RETURN
#undef ATOMIC_OP
-/**
- * arch_atomic_fetch_add_unless - add unless the number is a given value
- * @v: pointer to value
- * @a: amount to add
- * @u: unless value is equal to u
- *
- * Returns old value.
- *
- */
-
static inline int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
{
int __oldval;
diff --git a/arch/loongarch/include/asm/atomic.h b/arch/loongarch/include/asm/atomic.h
index 8d73c85..e27f0c7 100644
--- a/arch/loongarch/include/asm/atomic.h
+++ b/arch/loongarch/include/asm/atomic.h
@@ -29,21 +29,7 @@
#define ATOMIC_INIT(i) { (i) }
-/*
- * arch_atomic_read - read atomic variable
- * @v: pointer of type atomic_t
- *
- * Atomically reads the value of @v.
- */
#define arch_atomic_read(v) READ_ONCE((v)->counter)
-
-/*
- * arch_atomic_set - set atomic variable
- * @v: pointer of type atomic_t
- * @i: required value
- *
- * Atomically sets the value of @v to @i.
- */
#define arch_atomic_set(v, i) WRITE_ONCE((v)->counter, (i))
#define ATOMIC_OP(op, I, asm_op) \
@@ -139,14 +125,6 @@ static inline int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
}
#define arch_atomic_fetch_add_unless arch_atomic_fetch_add_unless
-/*
- * arch_atomic_sub_if_positive - conditionally subtract integer from atomic variable
- * @i: integer value to subtract
- * @v: pointer of type atomic_t
- *
- * Atomically test @v and subtract @i if @v is greater or equal than @i.
- * The function returns the old value of @v minus @i.
- */
static inline int arch_atomic_sub_if_positive(int i, atomic_t *v)
{
int result;
@@ -181,28 +159,13 @@ static inline int arch_atomic_sub_if_positive(int i, atomic_t *v)
return result;
}
-/*
- * arch_atomic_dec_if_positive - decrement by 1 if old value positive
- * @v: pointer of type atomic_t
- */
#define arch_atomic_dec_if_positive(v) arch_atomic_sub_if_positive(1, v)
#ifdef CONFIG_64BIT
#define ATOMIC64_INIT(i) { (i) }
-/*
- * arch_atomic64_read - read atomic variable
- * @v: pointer of type atomic64_t
- *
- */
#define arch_atomic64_read(v) READ_ONCE((v)->counter)
-
-/*
- * arch_atomic64_set - set atomic variable
- * @v: pointer of type atomic64_t
- * @i: required value
- */
#define arch_atomic64_set(v, i) WRITE_ONCE((v)->counter, (i))
#define ATOMIC64_OP(op, I, asm_op) \
@@ -297,14 +260,6 @@ static inline long arch_atomic64_fetch_add_unless(atomic64_t *v, long a, long u)
}
#define arch_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless
-/*
- * arch_atomic64_sub_if_positive - conditionally subtract integer from atomic variable
- * @i: integer value to subtract
- * @v: pointer of type atomic64_t
- *
- * Atomically test @v and subtract @i if @v is greater or equal than @i.
- * The function returns the old value of @v minus @i.
- */
static inline long arch_atomic64_sub_if_positive(long i, atomic64_t *v)
{
long result;
@@ -339,10 +294,6 @@ static inline long arch_atomic64_sub_if_positive(long i, atomic64_t *v)
return result;
}
-/*
- * arch_atomic64_dec_if_positive - decrement by 1 if old value positive
- * @v: pointer of type atomic64_t
- */
#define arch_atomic64_dec_if_positive(v) arch_atomic64_sub_if_positive(1, v)
#endif /* CONFIG_64BIT */
diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h
index 5e754e8..55a55ec 100644
--- a/arch/x86/include/asm/atomic.h
+++ b/arch/x86/include/asm/atomic.h
@@ -14,12 +14,6 @@
* resource counting etc..
*/
-/**
- * arch_atomic_read - read atomic variable
- * @v: pointer of type atomic_t
- *
- * Atomically reads the value of @v.
- */
static __always_inline int arch_atomic_read(const atomic_t *v)
{
/*
@@ -29,25 +23,11 @@ static __always_inline int arch_atomic_read(const atomic_t *v)
return __READ_ONCE((v)->counter);
}
-/**
- * arch_atomic_set - set atomic variable
- * @v: pointer of type atomic_t
- * @i: required value
- *
- * Atomically sets the value of @v to @i.
- */
static __always_inline void arch_atomic_set(atomic_t *v, int i)
{
__WRITE_ONCE(v->counter, i);
}
-/**
- * arch_atomic_add - add integer to atomic variable
- * @i: integer value to add
- * @v: pointer of type atomic_t
- *
- * Atomically adds @i to @v.
- */
static __always_inline void arch_atomic_add(int i, atomic_t *v)
{
asm volatile(LOCK_PREFIX "addl %1,%0"
@@ -55,13 +35,6 @@ static __always_inline void arch_atomic_add(int i, atomic_t *v)
: "ir" (i) : "memory");
}
-/**
- * arch_atomic_sub - subtract integer from atomic variable
- * @i: integer value to subtract
- * @v: pointer of type atomic_t
- *
- * Atomically subtracts @i from @v.
- */
static __always_inline void arch_atomic_sub(int i, atomic_t *v)
{
asm volatile(LOCK_PREFIX "subl %1,%0"
@@ -69,27 +42,12 @@ static __always_inline void arch_atomic_sub(int i, atomic_t *v)
: "ir" (i) : "memory");
}
-/**
- * arch_atomic_sub_and_test - subtract value from variable and test result
- * @i: integer value to subtract
- * @v: pointer of type atomic_t
- *
- * Atomically subtracts @i from @v and returns
- * true if the result is zero, or false for all
- * other cases.
- */
static __always_inline bool arch_atomic_sub_and_test(int i, atomic_t *v)
{
return GEN_BINARY_RMWcc(LOCK_PREFIX "subl", v->counter, e, "er", i);
}
#define arch_atomic_sub_and_test arch_atomic_sub_and_test
-/**
- * arch_atomic_inc - increment atomic variable
- * @v: pointer of type atomic_t
- *
- * Atomically increments @v by 1.
- */
static __always_inline void arch_atomic_inc(atomic_t *v)
{
asm volatile(LOCK_PREFIX "incl %0"
@@ -97,12 +55,6 @@ static __always_inline void arch_atomic_inc(atomic_t *v)
}
#define arch_atomic_inc arch_atomic_inc
-/**
- * arch_atomic_dec - decrement atomic variable
- * @v: pointer of type atomic_t
- *
- * Atomically decrements @v by 1.
- */
static __always_inline void arch_atomic_dec(atomic_t *v)
{
asm volatile(LOCK_PREFIX "decl %0"
@@ -110,69 +62,30 @@ static __always_inline void arch_atomic_dec(atomic_t *v)
}
#define arch_atomic_dec arch_atomic_dec
-/**
- * arch_atomic_dec_and_test - decrement and test
- * @v: pointer of type atomic_t
- *
- * Atomically decrements @v by 1 and
- * returns true if the result is 0, or false for all other
- * cases.
- */
static __always_inline bool arch_atomic_dec_and_test(atomic_t *v)
{
return GEN_UNARY_RMWcc(LOCK_PREFIX "decl", v->counter, e);
}
#define arch_atomic_dec_and_test arch_atomic_dec_and_test
-/**
- * arch_atomic_inc_and_test - increment and test
- * @v: pointer of type atomic_t
- *
- * Atomically increments @v by 1
- * and returns true if the result is zero, or false for all
- * other cases.
- */
static __always_inline bool arch_atomic_inc_and_test(atomic_t *v)
{
return GEN_UNARY_RMWcc(LOCK_PREFIX "incl", v->counter, e);
}
#define arch_atomic_inc_and_test arch_atomic_inc_and_test
-/**
- * arch_atomic_add_negative - add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic_t
- *
- * Atomically adds @i to @v and returns true
- * if the result is negative, or false when
- * result is greater than or equal to zero.
- */
static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v)
{
return GEN_BINARY_RMWcc(LOCK_PREFIX "addl", v->counter, s, "er", i);
}
#define arch_atomic_add_negative arch_atomic_add_negative
-/**
- * arch_atomic_add_return - add integer and return
- * @i: integer value to add
- * @v: pointer of type atomic_t
- *
- * Atomically adds @i to @v and returns @i + @v
- */
static __always_inline int arch_atomic_add_return(int i, atomic_t *v)
{
return i + xadd(&v->counter, i);
}
#define arch_atomic_add_return arch_atomic_add_return
-/**
- * arch_atomic_sub_return - subtract integer and return
- * @v: pointer of type atomic_t
- * @i: integer value to subtract
- *
- * Atomically subtracts @i from @v and returns @v - @i
- */
static __always_inline int arch_atomic_sub_return(int i, atomic_t *v)
{
return arch_atomic_add_return(-i, v);
diff --git a/arch/x86/include/asm/atomic64_32.h b/arch/x86/include/asm/atomic64_32.h
index 808b4ee..3486d91 100644
--- a/arch/x86/include/asm/atomic64_32.h
+++ b/arch/x86/include/asm/atomic64_32.h
@@ -61,30 +61,12 @@ ATOMIC64_DECL(add_unless);
#undef __ATOMIC64_DECL
#undef ATOMIC64_EXPORT
-/**
- * arch_atomic64_cmpxchg - cmpxchg atomic64 variable
- * @v: pointer to type atomic64_t
- * @o: expected value
- * @n: new value
- *
- * Atomically sets @v to @n if it was equal to @o and returns
- * the old value.
- */
-
static __always_inline s64 arch_atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n)
{
return arch_cmpxchg64(&v->counter, o, n);
}
#define arch_atomic64_cmpxchg arch_atomic64_cmpxchg
-/**
- * arch_atomic64_xchg - xchg atomic64 variable
- * @v: pointer to type atomic64_t
- * @n: value to assign
- *
- * Atomically xchgs the value of @v to @n and returns
- * the old value.
- */
static __always_inline s64 arch_atomic64_xchg(atomic64_t *v, s64 n)
{
s64 o;
@@ -97,13 +79,6 @@ static __always_inline s64 arch_atomic64_xchg(atomic64_t *v, s64 n)
}
#define arch_atomic64_xchg arch_atomic64_xchg
-/**
- * arch_atomic64_set - set atomic64 variable
- * @v: pointer to type atomic64_t
- * @i: value to assign
- *
- * Atomically sets the value of @v to @n.
- */
static __always_inline void arch_atomic64_set(atomic64_t *v, s64 i)
{
unsigned high = (unsigned)(i >> 32);
@@ -113,12 +88,6 @@ static __always_inline void arch_atomic64_set(atomic64_t *v, s64 i)
: "eax", "edx", "memory");
}
-/**
- * arch_atomic64_read - read atomic64 variable
- * @v: pointer to type atomic64_t
- *
- * Atomically reads the value of @v and returns it.
- */
static __always_inline s64 arch_atomic64_read(const atomic64_t *v)
{
s64 r;
@@ -126,13 +95,6 @@ static __always_inline s64 arch_atomic64_read(const atomic64_t *v)
return r;
}
-/**
- * arch_atomic64_add_return - add and return
- * @i: integer value to add
- * @v: pointer to type atomic64_t
- *
- * Atomically adds @i to @v and returns @i + *@v
- */
static __always_inline s64 arch_atomic64_add_return(s64 i, atomic64_t *v)
{
alternative_atomic64(add_return,
@@ -142,9 +104,6 @@ static __always_inline s64 arch_atomic64_add_return(s64 i, atomic64_t *v)
}
#define arch_atomic64_add_return arch_atomic64_add_return
-/*
- * Other variants with different arithmetic operators:
- */
static __always_inline s64 arch_atomic64_sub_return(s64 i, atomic64_t *v)
{
alternative_atomic64(sub_return,
@@ -172,13 +131,6 @@ static __always_inline s64 arch_atomic64_dec_return(atomic64_t *v)
}
#define arch_atomic64_dec_return arch_atomic64_dec_return
-/**
- * arch_atomic64_add - add integer to atomic64 variable
- * @i: integer value to add
- * @v: pointer to type atomic64_t
- *
- * Atomically adds @i to @v.
- */
static __always_inline s64 arch_atomic64_add(s64 i, atomic64_t *v)
{
__alternative_atomic64(add, add_return,
@@ -187,13 +139,6 @@ static __always_inline s64 arch_atomic64_add(s64 i, atomic64_t *v)
return i;
}
-/**
- * arch_atomic64_sub - subtract the atomic64 variable
- * @i: integer value to subtract
- * @v: pointer to type atomic64_t
- *
- * Atomically subtracts @i from @v.
- */
static __always_inline s64 arch_atomic64_sub(s64 i, atomic64_t *v)
{
__alternative_atomic64(sub, sub_return,
@@ -202,12 +147,6 @@ static __always_inline s64 arch_atomic64_sub(s64 i, atomic64_t *v)
return i;
}
-/**
- * arch_atomic64_inc - increment atomic64 variable
- * @v: pointer to type atomic64_t
- *
- * Atomically increments @v by 1.
- */
static __always_inline void arch_atomic64_inc(atomic64_t *v)
{
__alternative_atomic64(inc, inc_return, /* no output */,
@@ -215,12 +154,6 @@ static __always_inline void arch_atomic64_inc(atomic64_t *v)
}
#define arch_atomic64_inc arch_atomic64_inc
-/**
- * arch_atomic64_dec - decrement atomic64 variable
- * @v: pointer to type atomic64_t
- *
- * Atomically decrements @v by 1.
- */
static __always_inline void arch_atomic64_dec(atomic64_t *v)
{
__alternative_atomic64(dec, dec_return, /* no output */,
@@ -228,15 +161,6 @@ static __always_inline void arch_atomic64_dec(atomic64_t *v)
}
#define arch_atomic64_dec arch_atomic64_dec
-/**
- * arch_atomic64_add_unless - add unless the number is a given value
- * @v: pointer of type atomic64_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, so long as it was not @u.
- * Returns non-zero if the add was done, zero otherwise.
- */
static __always_inline int arch_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
{
unsigned low = (unsigned)u;
diff --git a/arch/x86/include/asm/atomic64_64.h b/arch/x86/include/asm/atomic64_64.h
index c496595..3165c0f 100644
--- a/arch/x86/include/asm/atomic64_64.h
+++ b/arch/x86/include/asm/atomic64_64.h
@@ -10,37 +10,16 @@
#define ATOMIC64_INIT(i) { (i) }
-/**
- * arch_atomic64_read - read atomic64 variable
- * @v: pointer of type atomic64_t
- *
- * Atomically reads the value of @v.
- * Doesn't imply a read memory barrier.
- */
static __always_inline s64 arch_atomic64_read(const atomic64_t *v)
{
return __READ_ONCE((v)->counter);
}
-/**
- * arch_atomic64_set - set atomic64 variable
- * @v: pointer to type atomic64_t
- * @i: required value
- *
- * Atomically sets the value of @v to @i.
- */
static __always_inline void arch_atomic64_set(atomic64_t *v, s64 i)
{
__WRITE_ONCE(v->counter, i);
}
-/**
- * arch_atomic64_add - add integer to atomic64 variable
- * @i: integer value to add
- * @v: pointer to type atomic64_t
- *
- * Atomically adds @i to @v.
- */
static __always_inline void arch_atomic64_add(s64 i, atomic64_t *v)
{
asm volatile(LOCK_PREFIX "addq %1,%0"
@@ -48,13 +27,6 @@ static __always_inline void arch_atomic64_add(s64 i, atomic64_t *v)
: "er" (i), "m" (v->counter) : "memory");
}
-/**
- * arch_atomic64_sub - subtract the atomic64 variable
- * @i: integer value to subtract
- * @v: pointer to type atomic64_t
- *
- * Atomically subtracts @i from @v.
- */
static __always_inline void arch_atomic64_sub(s64 i, atomic64_t *v)
{
asm volatile(LOCK_PREFIX "subq %1,%0"
@@ -62,27 +34,12 @@ static __always_inline void arch_atomic64_sub(s64 i, atomic64_t *v)
: "er" (i), "m" (v->counter) : "memory");
}
-/**
- * arch_atomic64_sub_and_test - subtract value from variable and test result
- * @i: integer value to subtract
- * @v: pointer to type atomic64_t
- *
- * Atomically subtracts @i from @v and returns
- * true if the result is zero, or false for all
- * other cases.
- */
static __always_inline bool arch_atomic64_sub_and_test(s64 i, atomic64_t *v)
{
return GEN_BINARY_RMWcc(LOCK_PREFIX "subq", v->counter, e, "er", i);
}
#define arch_atomic64_sub_and_test arch_atomic64_sub_and_test
-/**
- * arch_atomic64_inc - increment atomic64 variable
- * @v: pointer to type atomic64_t
- *
- * Atomically increments @v by 1.
- */
static __always_inline void arch_atomic64_inc(atomic64_t *v)
{
asm volatile(LOCK_PREFIX "incq %0"
@@ -91,12 +48,6 @@ static __always_inline void arch_atomic64_inc(atomic64_t *v)
}
#define arch_atomic64_inc arch_atomic64_inc
-/**
- * arch_atomic64_dec - decrement atomic64 variable
- * @v: pointer to type atomic64_t
- *
- * Atomically decrements @v by 1.
- */
static __always_inline void arch_atomic64_dec(atomic64_t *v)
{
asm volatile(LOCK_PREFIX "decq %0"
@@ -105,56 +56,24 @@ static __always_inline void arch_atomic64_dec(atomic64_t *v)
}
#define arch_atomic64_dec arch_atomic64_dec
-/**
- * arch_atomic64_dec_and_test - decrement and test
- * @v: pointer to type atomic64_t
- *
- * Atomically decrements @v by 1 and
- * returns true if the result is 0, or false for all other
- * cases.
- */
static __always_inline bool arch_atomic64_dec_and_test(atomic64_t *v)
{
return GEN_UNARY_RMWcc(LOCK_PREFIX "decq", v->counter, e);
}
#define arch_atomic64_dec_and_test arch_atomic64_dec_and_test
-/**
- * arch_atomic64_inc_and_test - increment and test
- * @v: pointer to type atomic64_t
- *
- * Atomically increments @v by 1
- * and returns true if the result is zero, or false for all
- * other cases.
- */
static __always_inline bool arch_atomic64_inc_and_test(atomic64_t *v)
{
return GEN_UNARY_RMWcc(LOCK_PREFIX "incq", v->counter, e);
}
#define arch_atomic64_inc_and_test arch_atomic64_inc_and_test
-/**
- * arch_atomic64_add_negative - add and test if negative
- * @i: integer value to add
- * @v: pointer to type atomic64_t
- *
- * Atomically adds @i to @v and returns true
- * if the result is negative, or false when
- * result is greater than or equal to zero.
- */
static __always_inline bool arch_atomic64_add_negative(s64 i, atomic64_t *v)
{
return GEN_BINARY_RMWcc(LOCK_PREFIX "addq", v->counter, s, "er", i);
}
#define arch_atomic64_add_negative arch_atomic64_add_negative
-/**
- * arch_atomic64_add_return - add and return
- * @i: integer value to add
- * @v: pointer to type atomic64_t
- *
- * Atomically adds @i to @v and returns @i + @v
- */
static __always_inline s64 arch_atomic64_add_return(s64 i, atomic64_t *v)
{
return i + xadd(&v->counter, i);
The following commit has been merged into the locking/core branch of tip:
Commit-ID: b916a8c765692444388891f5b9c5b6e941e16d42
Gitweb: https://git.kernel.org/tip/b916a8c765692444388891f5b9c5b6e941e16d42
Author: Mark Rutland <[email protected]>
AuthorDate: Mon, 05 Jun 2023 08:01:18 +01:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Mon, 05 Jun 2023 09:57:21 +02:00
locking/atomic: scripts: split pfx/name/sfx/order
Currently gen-atomic-long.sh's gen_proto_order_variant() function
combines the pfx/name/sfx/order variables immediately, unlike other
functions in gen-atomic-*.sh.
This is fine today, but subsequent patches will require the individual
pfx/name/sfx/order variables within gen-atomic-long.sh's
gen_proto_order_variant() function. In preparation for this, split the
variables in the style of other gen-atomic-*.sh scripts.
This results in no change to the generated headers, so there should be
no functional change as a result of this patch.
Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
scripts/atomic/gen-atomic-long.sh | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/scripts/atomic/gen-atomic-long.sh b/scripts/atomic/gen-atomic-long.sh
index 75e91d6..1383217 100755
--- a/scripts/atomic/gen-atomic-long.sh
+++ b/scripts/atomic/gen-atomic-long.sh
@@ -36,10 +36,15 @@ gen_args_cast()
gen_proto_order_variant()
{
local meta="$1"; shift
- local name="$1$2$3$4"; shift; shift; shift; shift
+ local pfx="$1"; shift
+ local name="$1"; shift
+ local sfx="$1"; shift
+ local order="$1"; shift
local atomic="$1"; shift
local int="$1"; shift
+ local atomicname="${pfx}${name}${sfx}${order}"
+
local ret="$(gen_ret_type "${meta}" "long")"
local params="$(gen_params "long" "atomic_long" "$@")"
local argscast="$(gen_args_cast "${int}" "${atomic}" "$@")"
@@ -47,9 +52,9 @@ gen_proto_order_variant()
cat <<EOF
static __always_inline ${ret}
-raw_atomic_long_${name}(${params})
+raw_atomic_long_${atomicname}(${params})
{
- ${retstmt}raw_${atomic}_${name}(${argscast});
+ ${retstmt}raw_${atomic}_${atomicname}(${argscast});
}
EOF
The following commit has been merged into the locking/core branch of tip:
Commit-ID: a7bafa7969da1c0e9c342c792d8224078d1c491c
Gitweb: https://git.kernel.org/tip/a7bafa7969da1c0e9c342c792d8224078d1c491c
Author: Mark Rutland <[email protected]>
AuthorDate: Mon, 05 Jun 2023 08:01:00 +01:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Mon, 05 Jun 2023 09:57:13 +02:00
locking/atomic: hexagon: remove redundant arch_atomic_cmpxchg
Hexagon's implementation of arch_atomic_cmpxchg() is identical to its
implementation of arch_cmpxchg(). Have it define arch_atomic_cmpxchg()
in terms of arch_cmpxchg(), matching what it does for arch_atomic_xchg()
and arch_xchg().
At the same time, remove the kerneldoc comments for hexagon's
arch_atomic_xchg() and arch_atomic_cmpxchg(). The arch_atomic_*()
namespace is shared by all architectures and the API should be
documented centrally, and the comments aren't all that helpful as-is.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
arch/hexagon/include/asm/atomic.h | 46 ++----------------------------
1 file changed, 4 insertions(+), 42 deletions(-)
diff --git a/arch/hexagon/include/asm/atomic.h b/arch/hexagon/include/asm/atomic.h
index 6e94f8d..738857e 100644
--- a/arch/hexagon/include/asm/atomic.h
+++ b/arch/hexagon/include/asm/atomic.h
@@ -36,49 +36,11 @@ static inline void arch_atomic_set(atomic_t *v, int new)
*/
#define arch_atomic_read(v) READ_ONCE((v)->counter)
-/**
- * arch_atomic_xchg - atomic
- * @v: pointer to memory to change
- * @new: new value (technically passed in a register -- see xchg)
- */
-#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), (new)))
-
-
-/**
- * arch_atomic_cmpxchg - atomic compare-and-exchange values
- * @v: pointer to value to change
- * @old: desired old value to match
- * @new: new value to put in
- *
- * Parameters are then pointer, value-in-register, value-in-register,
- * and the output is the old value.
- *
- * Apparently this is complicated for archs that don't support
- * the memw_locked like we do (or it's broken or whatever).
- *
- * Kind of the lynchpin of the rest of the generically defined routines.
- * Remember V2 had that bug with dotnew predicate set by memw_locked.
- *
- * "old" is "expected" old val, __oldval is actual old value
- */
-static inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
-{
- int __oldval;
+#define arch_atomic_xchg(v, new) \
+ (arch_xchg(&((v)->counter), (new)))
- asm volatile(
- "1: %0 = memw_locked(%1);\n"
- " { P0 = cmp.eq(%0,%2);\n"
- " if (!P0.new) jump:nt 2f; }\n"
- " memw_locked(%1,P0) = %3;\n"
- " if (!P0) jump 1b;\n"
- "2:\n"
- : "=&r" (__oldval)
- : "r" (&v->counter), "r" (old), "r" (new)
- : "memory", "p0"
- );
-
- return __oldval;
-}
+#define arch_atomic_cmpxchg(v, old, new) \
+ (arch_cmpxchg(&((v)->counter), (old), (new)))
#define ATOMIC_OP(op) \
static inline void arch_atomic_##op(int i, atomic_t *v) \
The following commit has been merged into the locking/core branch of tip:
Commit-ID: 0f613bfa8268a89be25f2b6b58fc6fe8ccd9a2ba
Gitweb: https://git.kernel.org/tip/0f613bfa8268a89be25f2b6b58fc6fe8ccd9a2ba
Author: Mark Rutland <[email protected]>
AuthorDate: Mon, 05 Jun 2023 08:01:15 +01:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Mon, 05 Jun 2023 09:57:20 +02:00
locking/atomic: treewide: use raw_atomic*_<op>()
Now that we have raw_atomic*_<op>() definitions, there's no need to use
arch_atomic*_<op>() definitions outside of the low-level atomic
definitions.
Move treewide users of arch_atomic*_<op>() over to the equivalent
raw_atomic*_<op>().
There should be no functional change as a result of this patch.
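To illustrate the intended split (a hand-written sketch, not code from this
series; the sketch_* names are hypothetical, and it assumes <linux/atomic.h>
and the noinstr annotation are available):
| static atomic_t sketch_count = ATOMIC_INIT(0);
|
| /* noinstr code must avoid instrumentation, so it uses raw_atomic*() ... */
| noinstr void sketch_nmi_path(void)
| {
| 	raw_atomic_inc(&sketch_count);
| }
|
| /* ... while regular code keeps using the instrumented atomic*() ops. */
| void sketch_normal_path(void)
| {
| 	atomic_inc(&sketch_count);
| }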
Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
arch/powerpc/kernel/smp.c | 12 ++++++------
arch/x86/kernel/alternative.c | 4 ++--
arch/x86/kernel/cpu/mce/core.c | 16 ++++++++--------
arch/x86/kernel/nmi.c | 2 +-
arch/x86/kernel/pvclock.c | 4 ++--
arch/x86/kvm/x86.c | 2 +-
include/asm-generic/bitops/atomic.h | 12 ++++++------
include/asm-generic/bitops/lock.h | 8 ++++----
include/linux/context_tracking.h | 4 ++--
include/linux/context_tracking_state.h | 2 +-
include/linux/cpumask.h | 2 +-
include/linux/jump_label.h | 2 +-
kernel/context_tracking.c | 12 ++++++------
kernel/sched/clock.c | 2 +-
14 files changed, 42 insertions(+), 42 deletions(-)
diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 265801a..e8965f1 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -417,9 +417,9 @@ noinstr static void nmi_ipi_lock_start(unsigned long *flags)
{
raw_local_irq_save(*flags);
hard_irq_disable();
- while (arch_atomic_cmpxchg(&__nmi_ipi_lock, 0, 1) == 1) {
+ while (raw_atomic_cmpxchg(&__nmi_ipi_lock, 0, 1) == 1) {
raw_local_irq_restore(*flags);
- spin_until_cond(arch_atomic_read(&__nmi_ipi_lock) == 0);
+ spin_until_cond(raw_atomic_read(&__nmi_ipi_lock) == 0);
raw_local_irq_save(*flags);
hard_irq_disable();
}
@@ -427,15 +427,15 @@ noinstr static void nmi_ipi_lock_start(unsigned long *flags)
noinstr static void nmi_ipi_lock(void)
{
- while (arch_atomic_cmpxchg(&__nmi_ipi_lock, 0, 1) == 1)
- spin_until_cond(arch_atomic_read(&__nmi_ipi_lock) == 0);
+ while (raw_atomic_cmpxchg(&__nmi_ipi_lock, 0, 1) == 1)
+ spin_until_cond(raw_atomic_read(&__nmi_ipi_lock) == 0);
}
noinstr static void nmi_ipi_unlock(void)
{
smp_mb();
- WARN_ON(arch_atomic_read(&__nmi_ipi_lock) != 1);
- arch_atomic_set(&__nmi_ipi_lock, 0);
+ WARN_ON(raw_atomic_read(&__nmi_ipi_lock) != 1);
+ raw_atomic_set(&__nmi_ipi_lock, 0);
}
noinstr static void nmi_ipi_unlock_end(unsigned long *flags)
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index f615e0c..18f16e9 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -1799,7 +1799,7 @@ struct bp_patching_desc *try_get_desc(void)
{
struct bp_patching_desc *desc = &bp_desc;
- if (!arch_atomic_inc_not_zero(&desc->refs))
+ if (!raw_atomic_inc_not_zero(&desc->refs))
return NULL;
return desc;
@@ -1810,7 +1810,7 @@ static __always_inline void put_desc(void)
struct bp_patching_desc *desc = &bp_desc;
smp_mb__before_atomic();
- arch_atomic_dec(&desc->refs);
+ raw_atomic_dec(&desc->refs);
}
static __always_inline void *text_poke_addr(struct text_poke_loc *tp)
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 2eec60f..ab156e6 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -1022,12 +1022,12 @@ static noinstr int mce_start(int *no_way_out)
if (!timeout)
return ret;
- arch_atomic_add(*no_way_out, &global_nwo);
+ raw_atomic_add(*no_way_out, &global_nwo);
/*
* Rely on the implied barrier below, such that global_nwo
* is updated before mce_callin.
*/
- order = arch_atomic_inc_return(&mce_callin);
+ order = raw_atomic_inc_return(&mce_callin);
arch_cpumask_clear_cpu(smp_processor_id(), &mce_missing_cpus);
/* Enable instrumentation around calls to external facilities */
@@ -1036,10 +1036,10 @@ static noinstr int mce_start(int *no_way_out)
/*
* Wait for everyone.
*/
- while (arch_atomic_read(&mce_callin) != num_online_cpus()) {
+ while (raw_atomic_read(&mce_callin) != num_online_cpus()) {
if (mce_timed_out(&timeout,
"Timeout: Not all CPUs entered broadcast exception handler")) {
- arch_atomic_set(&global_nwo, 0);
+ raw_atomic_set(&global_nwo, 0);
goto out;
}
ndelay(SPINUNIT);
@@ -1054,7 +1054,7 @@ static noinstr int mce_start(int *no_way_out)
/*
* Monarch: Starts executing now, the others wait.
*/
- arch_atomic_set(&mce_executing, 1);
+ raw_atomic_set(&mce_executing, 1);
} else {
/*
* Subject: Now start the scanning loop one by one in
@@ -1062,10 +1062,10 @@ static noinstr int mce_start(int *no_way_out)
* This way when there are any shared banks it will be
* only seen by one CPU before cleared, avoiding duplicates.
*/
- while (arch_atomic_read(&mce_executing) < order) {
+ while (raw_atomic_read(&mce_executing) < order) {
if (mce_timed_out(&timeout,
"Timeout: Subject CPUs unable to finish machine check processing")) {
- arch_atomic_set(&global_nwo, 0);
+ raw_atomic_set(&global_nwo, 0);
goto out;
}
ndelay(SPINUNIT);
@@ -1075,7 +1075,7 @@ static noinstr int mce_start(int *no_way_out)
/*
* Cache the global no_way_out state.
*/
- *no_way_out = arch_atomic_read(&global_nwo);
+ *no_way_out = raw_atomic_read(&global_nwo);
ret = order;
diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c
index 776f4b1..a0c5518 100644
--- a/arch/x86/kernel/nmi.c
+++ b/arch/x86/kernel/nmi.c
@@ -496,7 +496,7 @@ DEFINE_IDTENTRY_RAW(exc_nmi)
*/
sev_es_nmi_complete();
if (IS_ENABLED(CONFIG_NMI_CHECK_CPU))
- arch_atomic_long_inc(&nsp->idt_calls);
+ raw_atomic_long_inc(&nsp->idt_calls);
if (IS_ENABLED(CONFIG_SMP) && arch_cpu_is_offline(smp_processor_id()))
return;
diff --git a/arch/x86/kernel/pvclock.c b/arch/x86/kernel/pvclock.c
index 56acf53..b3f8137 100644
--- a/arch/x86/kernel/pvclock.c
+++ b/arch/x86/kernel/pvclock.c
@@ -101,11 +101,11 @@ u64 __pvclock_clocksource_read(struct pvclock_vcpu_time_info *src, bool dowd)
* updating at the same time, and one of them could be slightly behind,
* making the assumption that last_value always go forward fail to hold.
*/
- last = arch_atomic64_read(&last_value);
+ last = raw_atomic64_read(&last_value);
do {
if (ret <= last)
return last;
- } while (!arch_atomic64_try_cmpxchg(&last_value, &last, ret));
+ } while (!raw_atomic64_try_cmpxchg(&last_value, &last, ret));
return ret;
}
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index ceb7c5e..ac6f609 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -13155,7 +13155,7 @@ EXPORT_SYMBOL_GPL(kvm_arch_end_assignment);
bool noinstr kvm_arch_has_assigned_device(struct kvm *kvm)
{
- return arch_atomic_read(&kvm->arch.assigned_device_count);
+ return raw_atomic_read(&kvm->arch.assigned_device_count);
}
EXPORT_SYMBOL_GPL(kvm_arch_has_assigned_device);
diff --git a/include/asm-generic/bitops/atomic.h b/include/asm-generic/bitops/atomic.h
index 71ab4ba..e076e07 100644
--- a/include/asm-generic/bitops/atomic.h
+++ b/include/asm-generic/bitops/atomic.h
@@ -15,21 +15,21 @@ static __always_inline void
arch_set_bit(unsigned int nr, volatile unsigned long *p)
{
p += BIT_WORD(nr);
- arch_atomic_long_or(BIT_MASK(nr), (atomic_long_t *)p);
+ raw_atomic_long_or(BIT_MASK(nr), (atomic_long_t *)p);
}
static __always_inline void
arch_clear_bit(unsigned int nr, volatile unsigned long *p)
{
p += BIT_WORD(nr);
- arch_atomic_long_andnot(BIT_MASK(nr), (atomic_long_t *)p);
+ raw_atomic_long_andnot(BIT_MASK(nr), (atomic_long_t *)p);
}
static __always_inline void
arch_change_bit(unsigned int nr, volatile unsigned long *p)
{
p += BIT_WORD(nr);
- arch_atomic_long_xor(BIT_MASK(nr), (atomic_long_t *)p);
+ raw_atomic_long_xor(BIT_MASK(nr), (atomic_long_t *)p);
}
static __always_inline int
@@ -39,7 +39,7 @@ arch_test_and_set_bit(unsigned int nr, volatile unsigned long *p)
unsigned long mask = BIT_MASK(nr);
p += BIT_WORD(nr);
- old = arch_atomic_long_fetch_or(mask, (atomic_long_t *)p);
+ old = raw_atomic_long_fetch_or(mask, (atomic_long_t *)p);
return !!(old & mask);
}
@@ -50,7 +50,7 @@ arch_test_and_clear_bit(unsigned int nr, volatile unsigned long *p)
unsigned long mask = BIT_MASK(nr);
p += BIT_WORD(nr);
- old = arch_atomic_long_fetch_andnot(mask, (atomic_long_t *)p);
+ old = raw_atomic_long_fetch_andnot(mask, (atomic_long_t *)p);
return !!(old & mask);
}
@@ -61,7 +61,7 @@ arch_test_and_change_bit(unsigned int nr, volatile unsigned long *p)
unsigned long mask = BIT_MASK(nr);
p += BIT_WORD(nr);
- old = arch_atomic_long_fetch_xor(mask, (atomic_long_t *)p);
+ old = raw_atomic_long_fetch_xor(mask, (atomic_long_t *)p);
return !!(old & mask);
}
diff --git a/include/asm-generic/bitops/lock.h b/include/asm-generic/bitops/lock.h
index 630f2f6..4091351 100644
--- a/include/asm-generic/bitops/lock.h
+++ b/include/asm-generic/bitops/lock.h
@@ -25,7 +25,7 @@ arch_test_and_set_bit_lock(unsigned int nr, volatile unsigned long *p)
if (READ_ONCE(*p) & mask)
return 1;
- old = arch_atomic_long_fetch_or_acquire(mask, (atomic_long_t *)p);
+ old = raw_atomic_long_fetch_or_acquire(mask, (atomic_long_t *)p);
return !!(old & mask);
}
@@ -41,7 +41,7 @@ static __always_inline void
arch_clear_bit_unlock(unsigned int nr, volatile unsigned long *p)
{
p += BIT_WORD(nr);
- arch_atomic_long_fetch_andnot_release(BIT_MASK(nr), (atomic_long_t *)p);
+ raw_atomic_long_fetch_andnot_release(BIT_MASK(nr), (atomic_long_t *)p);
}
/**
@@ -63,7 +63,7 @@ arch___clear_bit_unlock(unsigned int nr, volatile unsigned long *p)
p += BIT_WORD(nr);
old = READ_ONCE(*p);
old &= ~BIT_MASK(nr);
- arch_atomic_long_set_release((atomic_long_t *)p, old);
+ raw_atomic_long_set_release((atomic_long_t *)p, old);
}
/**
@@ -83,7 +83,7 @@ static inline bool arch_clear_bit_unlock_is_negative_byte(unsigned int nr,
unsigned long mask = BIT_MASK(nr);
p += BIT_WORD(nr);
- old = arch_atomic_long_fetch_andnot_release(mask, (atomic_long_t *)p);
+ old = raw_atomic_long_fetch_andnot_release(mask, (atomic_long_t *)p);
return !!(old & BIT(7));
}
#define arch_clear_bit_unlock_is_negative_byte arch_clear_bit_unlock_is_negative_byte
diff --git a/include/linux/context_tracking.h b/include/linux/context_tracking.h
index d3cbb6c..6e76b9d 100644
--- a/include/linux/context_tracking.h
+++ b/include/linux/context_tracking.h
@@ -119,7 +119,7 @@ extern void ct_idle_exit(void);
*/
static __always_inline bool rcu_dynticks_curr_cpu_in_eqs(void)
{
- return !(arch_atomic_read(this_cpu_ptr(&context_tracking.state)) & RCU_DYNTICKS_IDX);
+ return !(raw_atomic_read(this_cpu_ptr(&context_tracking.state)) & RCU_DYNTICKS_IDX);
}
/*
@@ -128,7 +128,7 @@ static __always_inline bool rcu_dynticks_curr_cpu_in_eqs(void)
*/
static __always_inline unsigned long ct_state_inc(int incby)
{
- return arch_atomic_add_return(incby, this_cpu_ptr(&context_tracking.state));
+ return raw_atomic_add_return(incby, this_cpu_ptr(&context_tracking.state));
}
static __always_inline bool warn_rcu_enter(void)
diff --git a/include/linux/context_tracking_state.h b/include/linux/context_tracking_state.h
index fdd537e..bbff5f7 100644
--- a/include/linux/context_tracking_state.h
+++ b/include/linux/context_tracking_state.h
@@ -51,7 +51,7 @@ DECLARE_PER_CPU(struct context_tracking, context_tracking);
#ifdef CONFIG_CONTEXT_TRACKING_USER
static __always_inline int __ct_state(void)
{
- return arch_atomic_read(this_cpu_ptr(&context_tracking.state)) & CT_STATE_MASK;
+ return raw_atomic_read(this_cpu_ptr(&context_tracking.state)) & CT_STATE_MASK;
}
#endif
diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h
index ca736b0..0d2e2a3 100644
--- a/include/linux/cpumask.h
+++ b/include/linux/cpumask.h
@@ -1071,7 +1071,7 @@ static inline const struct cpumask *get_cpu_mask(unsigned int cpu)
*/
static __always_inline unsigned int num_online_cpus(void)
{
- return arch_atomic_read(&__num_online_cpus);
+ return raw_atomic_read(&__num_online_cpus);
}
#define num_possible_cpus() cpumask_weight(cpu_possible_mask)
#define num_present_cpus() cpumask_weight(cpu_present_mask)
diff --git a/include/linux/jump_label.h b/include/linux/jump_label.h
index 4e968eb..f0a949b 100644
--- a/include/linux/jump_label.h
+++ b/include/linux/jump_label.h
@@ -257,7 +257,7 @@ extern enum jump_label_type jump_label_init_type(struct jump_entry *entry);
static __always_inline int static_key_count(struct static_key *key)
{
- return arch_atomic_read(&key->enabled);
+ return raw_atomic_read(&key->enabled);
}
static __always_inline void jump_label_init(void)
diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c
index a09f1c1..6ef0b35 100644
--- a/kernel/context_tracking.c
+++ b/kernel/context_tracking.c
@@ -510,7 +510,7 @@ void noinstr __ct_user_enter(enum ctx_state state)
* In this we case we don't care about any concurrency/ordering.
*/
if (!IS_ENABLED(CONFIG_CONTEXT_TRACKING_IDLE))
- arch_atomic_set(&ct->state, state);
+ raw_atomic_set(&ct->state, state);
} else {
/*
* Even if context tracking is disabled on this CPU, because it's outside
@@ -527,7 +527,7 @@ void noinstr __ct_user_enter(enum ctx_state state)
*/
if (!IS_ENABLED(CONFIG_CONTEXT_TRACKING_IDLE)) {
/* Tracking for vtime only, no concurrent RCU EQS accounting */
- arch_atomic_set(&ct->state, state);
+ raw_atomic_set(&ct->state, state);
} else {
/*
* Tracking for vtime and RCU EQS. Make sure we don't race
@@ -535,7 +535,7 @@ void noinstr __ct_user_enter(enum ctx_state state)
* RCU only requires RCU_DYNTICKS_IDX increments to be fully
* ordered.
*/
- arch_atomic_add(state, &ct->state);
+ raw_atomic_add(state, &ct->state);
}
}
}
@@ -630,12 +630,12 @@ void noinstr __ct_user_exit(enum ctx_state state)
* In this we case we don't care about any concurrency/ordering.
*/
if (!IS_ENABLED(CONFIG_CONTEXT_TRACKING_IDLE))
- arch_atomic_set(&ct->state, CONTEXT_KERNEL);
+ raw_atomic_set(&ct->state, CONTEXT_KERNEL);
} else {
if (!IS_ENABLED(CONFIG_CONTEXT_TRACKING_IDLE)) {
/* Tracking for vtime only, no concurrent RCU EQS accounting */
- arch_atomic_set(&ct->state, CONTEXT_KERNEL);
+ raw_atomic_set(&ct->state, CONTEXT_KERNEL);
} else {
/*
* Tracking for vtime and RCU EQS. Make sure we don't race
@@ -643,7 +643,7 @@ void noinstr __ct_user_exit(enum ctx_state state)
* RCU only requires RCU_DYNTICKS_IDX increments to be fully
* ordered.
*/
- arch_atomic_sub(state, &ct->state);
+ raw_atomic_sub(state, &ct->state);
}
}
}
diff --git a/kernel/sched/clock.c b/kernel/sched/clock.c
index b5cc2b5..71443cf 100644
--- a/kernel/sched/clock.c
+++ b/kernel/sched/clock.c
@@ -287,7 +287,7 @@ again:
clock = wrap_max(clock, min_clock);
clock = wrap_min(clock, max_clock);
- if (!arch_try_cmpxchg64(&scd->clock, &old_clock, clock))
+ if (!raw_try_cmpxchg64(&scd->clock, &old_clock, clock))
goto again;
return clock;
The following commit has been merged into the locking/core branch of tip:
Commit-ID: ad8110706f381170c9f9975f1cb06010fd3ca381
Gitweb: https://git.kernel.org/tip/ad8110706f381170c9f9975f1cb06010fd3ca381
Author: Mark Rutland <[email protected]>
AuthorDate: Mon, 05 Jun 2023 08:01:22 +01:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Mon, 05 Jun 2023 09:57:23 +02:00
locking/atomic: scripts: generate kerneldoc comments
Currently the atomics are documented in Documentation/atomic_t.txt, and
have no kerneldoc comments. There are enough gotchas (e.g. ordering
semantics, noinstr-safety) that it would be nice to have comments
calling these out, and to have them as kerneldoc so that they can be
collated.
While it's possible to derive the semantics from the code, this can be
painful given the amount of indirection we currently have (e.g. fallback
paths), and it's easy to be misled by naming, e.g.
* The unconditional void-returning ops *only* have relaxed variants
without a _relaxed suffix, and can easily be mistaken for being fully
ordered.
It would be nice to give these a _relaxed() suffix, but this would
result in significant churn throughout the kernel.
* Our naming of conditional and unconditional+test ops is rather
inconsistent, and it can be difficult to derive the name of an
operation, or to identify where an op is conditional or
unconditional+test.
Some ops are clearly conditional:
- dec_if_positive
- add_unless
- dec_unless_positive
- inc_unless_negative
Some ops are clearly unconditional+test:
- sub_and_test
- dec_and_test
- inc_and_test
However, what exactly these ops test is not obvious (the sketch after
this list illustrates the difference). A _test_zero suffix
might be clearer.
Others could be read ambiguously:
- inc_not_zero // conditional
- add_negative // unconditional+test
It would probably be worth renaming these, e.g. to inc_unless_zero and
add_test_negative.
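To make this concrete, a minimal sketch; the variables, the error
handling, and the surrounding logic are purely illustrative and not
taken from any real caller:

#include <linux/atomic.h>
#include <linux/errno.h>
#include <linux/printk.h>

static atomic_t example_refs = ATOMIC_INIT(1);
static atomic_t example_stat = ATOMIC_INIT(0);

static int example_get(void)
{
	/*
	 * Conditional: only increments when @example_refs is non-zero,
	 * and returns whether the increment happened (the usual
	 * "try to take a reference" pattern).
	 */
	if (!atomic_inc_not_zero(&example_refs))
		return -ENODEV;

	/*
	 * Unconditional+test: always performs the addition, and tests
	 * whether the *result* is negative, not whether the update
	 * happened.
	 */
	if (atomic_add_negative(-1, &example_stat))
		pr_debug("example_stat went negative\n");

	/*
	 * Note that the void-returning ops (e.g. atomic_inc()) only
	 * provide relaxed ordering; atomic_inc_return() is the fully
	 * ordered form.
	 */
	atomic_inc(&example_stat);

	return 0;
}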
As a step towards making this more consistent and easier to understand,
this patch adds kerneldoc comments for all generated *atomic*_*()
functions. These are generated from templates, with some common text
shared, making it easy to extend these in future if necessary.
I've tried to make these as consistent and clear as possible, and I've
deliberately ensured:
* All ops have their ordering explicitly mentioned in the short and long
description.
* All test ops have "test" in their short description.
* All ops are described as an expression using their usual C operator.
For example:
andnot: "Atomically updates @v to (@v & ~@i)"
inc: "Atomically updates @v to (@v + 1)"
Which may be clearer to non-native English speakers, and allows all
the operations to be described in the same style.
* All conditional ops have their condition described as an expression
using the usual C operators. For example:
add_unless: "If (@v != @u), atomically updates @v to (@v + @i)"
cmpxchg: "If (@v == @old), atomically updates @v to @new"
Which may be clearer to non-native English speakers, and allows all
the operations to be described in the same style.
* All bitwise ops (and,andnot,or,xor) explicitly mention that they are
bitwise in their short description, so that they are not mistaken for
performing their logical equivalents.
* The noinstr safety of each op is explicitly described, noting whether
or not the raw_ form should be used; a short usage sketch follows
this list.
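The intended split looks roughly like this; the function and variable
names are made up for illustration:

#include <linux/atomic.h>
#include <linux/compiler_types.h>

static atomic_t example_count = ATOMIC_INIT(0);

/* noinstr code must avoid the instrumented wrappers */
static noinstr void example_noinstr_entry(void)
{
	raw_atomic_inc(&example_count);
}

/* everywhere else, the instrumented form is preferred */
static void example_normal_path(void)
{
	atomic_inc(&example_count);
}

Using a raw_ op outside noinstr code is not harmful, but it does lose
the KASAN/KCSAN instrumentation that the regular wrappers provide.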
There should be no functional change as a result of this patch.
Reported-by: Paul E. McKenney <[email protected]>
Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
include/linux/atomic/atomic-arch-fallback.h | 1848 ++++++++++-
include/linux/atomic/atomic-instrumented.h | 2771 ++++++++++++++++-
include/linux/atomic/atomic-long.h | 925 +++++-
scripts/atomic/atomic-tbl.sh | 112 +-
scripts/atomic/gen-atomic-fallback.sh | 2 +-
scripts/atomic/gen-atomic-instrumented.sh | 2 +-
scripts/atomic/gen-atomic-long.sh | 2 +-
scripts/atomic/kerneldoc/add | 13 +-
scripts/atomic/kerneldoc/add_negative | 13 +-
scripts/atomic/kerneldoc/add_unless | 18 +-
scripts/atomic/kerneldoc/and | 13 +-
scripts/atomic/kerneldoc/andnot | 13 +-
scripts/atomic/kerneldoc/cmpxchg | 14 +-
scripts/atomic/kerneldoc/dec | 12 +-
scripts/atomic/kerneldoc/dec_and_test | 12 +-
scripts/atomic/kerneldoc/dec_if_positive | 12 +-
scripts/atomic/kerneldoc/dec_unless_positive | 12 +-
scripts/atomic/kerneldoc/inc | 12 +-
scripts/atomic/kerneldoc/inc_and_test | 12 +-
scripts/atomic/kerneldoc/inc_not_zero | 12 +-
scripts/atomic/kerneldoc/inc_unless_negative | 12 +-
scripts/atomic/kerneldoc/or | 13 +-
scripts/atomic/kerneldoc/read | 12 +-
scripts/atomic/kerneldoc/set | 13 +-
scripts/atomic/kerneldoc/sub | 13 +-
scripts/atomic/kerneldoc/sub_and_test | 13 +-
scripts/atomic/kerneldoc/try_cmpxchg | 15 +-
scripts/atomic/kerneldoc/xchg | 13 +-
scripts/atomic/kerneldoc/xor | 13 +-
29 files changed, 5940 insertions(+), 7 deletions(-)
create mode 100644 scripts/atomic/kerneldoc/add
create mode 100644 scripts/atomic/kerneldoc/add_negative
create mode 100644 scripts/atomic/kerneldoc/add_unless
create mode 100644 scripts/atomic/kerneldoc/and
create mode 100644 scripts/atomic/kerneldoc/andnot
create mode 100644 scripts/atomic/kerneldoc/cmpxchg
create mode 100644 scripts/atomic/kerneldoc/dec
create mode 100644 scripts/atomic/kerneldoc/dec_and_test
create mode 100644 scripts/atomic/kerneldoc/dec_if_positive
create mode 100644 scripts/atomic/kerneldoc/dec_unless_positive
create mode 100644 scripts/atomic/kerneldoc/inc
create mode 100644 scripts/atomic/kerneldoc/inc_and_test
create mode 100644 scripts/atomic/kerneldoc/inc_not_zero
create mode 100644 scripts/atomic/kerneldoc/inc_unless_negative
create mode 100644 scripts/atomic/kerneldoc/or
create mode 100644 scripts/atomic/kerneldoc/read
create mode 100644 scripts/atomic/kerneldoc/set
create mode 100644 scripts/atomic/kerneldoc/sub
create mode 100644 scripts/atomic/kerneldoc/sub_and_test
create mode 100644 scripts/atomic/kerneldoc/try_cmpxchg
create mode 100644 scripts/atomic/kerneldoc/xchg
create mode 100644 scripts/atomic/kerneldoc/xor
diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index 470c289..8cded57 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -428,12 +428,32 @@ extern void raw_cmpxchg128_relaxed_not_implemented(void);
#define raw_sync_cmpxchg arch_sync_cmpxchg
+/**
+ * raw_atomic_read() - atomic load with relaxed ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically loads the value of @v with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_read() elsewhere.
+ *
+ * Return: The value loaded from @v.
+ */
static __always_inline int
raw_atomic_read(const atomic_t *v)
{
return arch_atomic_read(v);
}
+/**
+ * raw_atomic_read_acquire() - atomic load with acquire ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically loads the value of @v with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_read_acquire() elsewhere.
+ *
+ * Return: The value loaded from @v.
+ */
static __always_inline int
raw_atomic_read_acquire(const atomic_t *v)
{
@@ -455,12 +475,34 @@ raw_atomic_read_acquire(const atomic_t *v)
#endif
}
+/**
+ * raw_atomic_set() - atomic set with relaxed ordering
+ * @v: pointer to atomic_t
+ * @i: int value to assign
+ *
+ * Atomically sets @v to @i with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_set() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic_set(atomic_t *v, int i)
{
arch_atomic_set(v, i);
}
+/**
+ * raw_atomic_set_release() - atomic set with release ordering
+ * @v: pointer to atomic_t
+ * @i: int value to assign
+ *
+ * Atomically sets @v to @i with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_set_release() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic_set_release(atomic_t *v, int i)
{
@@ -478,12 +520,34 @@ raw_atomic_set_release(atomic_t *v, int i)
#endif
}
+/**
+ * raw_atomic_add() - atomic add with relaxed ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_add() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic_add(int i, atomic_t *v)
{
arch_atomic_add(i, v);
}
+/**
+ * raw_atomic_add_return() - atomic add with full ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_add_return() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
raw_atomic_add_return(int i, atomic_t *v)
{
@@ -500,6 +564,17 @@ raw_atomic_add_return(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_add_return_acquire() - atomic add with acquire ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_add_return_acquire() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
raw_atomic_add_return_acquire(int i, atomic_t *v)
{
@@ -516,6 +591,17 @@ raw_atomic_add_return_acquire(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_add_return_release() - atomic add with release ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_add_return_release() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
raw_atomic_add_return_release(int i, atomic_t *v)
{
@@ -531,6 +617,17 @@ raw_atomic_add_return_release(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_add_return_relaxed() - atomic add with relaxed ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_add_return_relaxed() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
raw_atomic_add_return_relaxed(int i, atomic_t *v)
{
@@ -543,6 +640,17 @@ raw_atomic_add_return_relaxed(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_add() - atomic add with full ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_add() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_add(int i, atomic_t *v)
{
@@ -559,6 +667,17 @@ raw_atomic_fetch_add(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_add_acquire() - atomic add with acquire ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_add_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_add_acquire(int i, atomic_t *v)
{
@@ -575,6 +694,17 @@ raw_atomic_fetch_add_acquire(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_add_release() - atomic add with release ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_add_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_add_release(int i, atomic_t *v)
{
@@ -590,6 +720,17 @@ raw_atomic_fetch_add_release(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_add_relaxed() - atomic add with relaxed ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_add_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_add_relaxed(int i, atomic_t *v)
{
@@ -602,12 +743,34 @@ raw_atomic_fetch_add_relaxed(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_sub() - atomic subtract with relaxed ordering
+ * @i: int value to subtract
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_sub() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic_sub(int i, atomic_t *v)
{
arch_atomic_sub(i, v);
}
+/**
+ * raw_atomic_sub_return() - atomic subtract with full ordering
+ * @i: int value to subtract
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_sub_return() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
raw_atomic_sub_return(int i, atomic_t *v)
{
@@ -624,6 +787,17 @@ raw_atomic_sub_return(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_sub_return_acquire() - atomic subtract with acquire ordering
+ * @i: int value to subtract
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_sub_return_acquire() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
raw_atomic_sub_return_acquire(int i, atomic_t *v)
{
@@ -640,6 +814,17 @@ raw_atomic_sub_return_acquire(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_sub_return_release() - atomic subtract with release ordering
+ * @i: int value to subtract
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_sub_return_release() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
raw_atomic_sub_return_release(int i, atomic_t *v)
{
@@ -655,6 +840,17 @@ raw_atomic_sub_return_release(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_sub_return_relaxed() - atomic subtract with relaxed ordering
+ * @i: int value to subtract
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_sub_return_relaxed() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
raw_atomic_sub_return_relaxed(int i, atomic_t *v)
{
@@ -667,6 +863,17 @@ raw_atomic_sub_return_relaxed(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_sub() - atomic subtract with full ordering
+ * @i: int value to subtract
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_sub() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_sub(int i, atomic_t *v)
{
@@ -683,6 +890,17 @@ raw_atomic_fetch_sub(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_sub_acquire() - atomic subtract with acquire ordering
+ * @i: int value to subtract
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_sub_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_sub_acquire(int i, atomic_t *v)
{
@@ -699,6 +917,17 @@ raw_atomic_fetch_sub_acquire(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_sub_release() - atomic subtract with release ordering
+ * @i: int value to subtract
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_sub_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_sub_release(int i, atomic_t *v)
{
@@ -714,6 +943,17 @@ raw_atomic_fetch_sub_release(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_sub_relaxed() - atomic subtract with relaxed ordering
+ * @i: int value to subtract
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_sub_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_sub_relaxed(int i, atomic_t *v)
{
@@ -726,6 +966,16 @@ raw_atomic_fetch_sub_relaxed(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_inc() - atomic increment with relaxed ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + 1) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_inc() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic_inc(atomic_t *v)
{
@@ -736,6 +986,16 @@ raw_atomic_inc(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_inc_return() - atomic increment with full ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_inc_return() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
raw_atomic_inc_return(atomic_t *v)
{
@@ -752,6 +1012,16 @@ raw_atomic_inc_return(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_inc_return_acquire() - atomic increment with acquire ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + 1) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_inc_return_acquire() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
raw_atomic_inc_return_acquire(atomic_t *v)
{
@@ -768,6 +1038,16 @@ raw_atomic_inc_return_acquire(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_inc_return_release() - atomic increment with release ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + 1) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_inc_return_release() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
raw_atomic_inc_return_release(atomic_t *v)
{
@@ -783,6 +1063,16 @@ raw_atomic_inc_return_release(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_inc_return_relaxed() - atomic increment with relaxed ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + 1) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_inc_return_relaxed() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
raw_atomic_inc_return_relaxed(atomic_t *v)
{
@@ -795,6 +1085,16 @@ raw_atomic_inc_return_relaxed(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_inc() - atomic increment with full ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_inc() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_inc(atomic_t *v)
{
@@ -811,6 +1111,16 @@ raw_atomic_fetch_inc(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_inc_acquire() - atomic increment with acquire ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + 1) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_inc_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_inc_acquire(atomic_t *v)
{
@@ -827,6 +1137,16 @@ raw_atomic_fetch_inc_acquire(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_inc_release() - atomic increment with release ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + 1) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_inc_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_inc_release(atomic_t *v)
{
@@ -842,6 +1162,16 @@ raw_atomic_fetch_inc_release(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_inc_relaxed() - atomic increment with relaxed ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + 1) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_inc_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_inc_relaxed(atomic_t *v)
{
@@ -854,6 +1184,16 @@ raw_atomic_fetch_inc_relaxed(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_dec() - atomic decrement with relaxed ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - 1) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_dec() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic_dec(atomic_t *v)
{
@@ -864,6 +1204,16 @@ raw_atomic_dec(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_dec_return() - atomic decrement with full ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_dec_return() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
raw_atomic_dec_return(atomic_t *v)
{
@@ -880,6 +1230,16 @@ raw_atomic_dec_return(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_dec_return_acquire() - atomic decrement with acquire ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - 1) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_dec_return_acquire() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
raw_atomic_dec_return_acquire(atomic_t *v)
{
@@ -896,6 +1256,16 @@ raw_atomic_dec_return_acquire(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_dec_return_release() - atomic decrement with release ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - 1) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_dec_return_release() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
raw_atomic_dec_return_release(atomic_t *v)
{
@@ -911,6 +1281,16 @@ raw_atomic_dec_return_release(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_dec_return_relaxed() - atomic decrement with relaxed ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - 1) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_dec_return_relaxed() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
raw_atomic_dec_return_relaxed(atomic_t *v)
{
@@ -923,6 +1303,16 @@ raw_atomic_dec_return_relaxed(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_dec() - atomic decrement with full ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_dec() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_dec(atomic_t *v)
{
@@ -939,6 +1329,16 @@ raw_atomic_fetch_dec(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_dec_acquire() - atomic decrement with acquire ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - 1) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_dec_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_dec_acquire(atomic_t *v)
{
@@ -955,6 +1355,16 @@ raw_atomic_fetch_dec_acquire(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_dec_release() - atomic decrement with release ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - 1) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_dec_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_dec_release(atomic_t *v)
{
@@ -970,6 +1380,16 @@ raw_atomic_fetch_dec_release(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_dec_relaxed() - atomic decrement with relaxed ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - 1) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_dec_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_dec_relaxed(atomic_t *v)
{
@@ -982,12 +1402,34 @@ raw_atomic_fetch_dec_relaxed(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_and() - atomic bitwise AND with relaxed ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v & @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_and() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic_and(int i, atomic_t *v)
{
arch_atomic_and(i, v);
}
+/**
+ * raw_atomic_fetch_and() - atomic bitwise AND with full ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v & @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_and() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_and(int i, atomic_t *v)
{
@@ -1004,6 +1446,17 @@ raw_atomic_fetch_and(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_and_acquire() - atomic bitwise AND with acquire ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v & @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_and_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_and_acquire(int i, atomic_t *v)
{
@@ -1020,6 +1473,17 @@ raw_atomic_fetch_and_acquire(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_and_release() - atomic bitwise AND with release ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v & @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_and_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_and_release(int i, atomic_t *v)
{
@@ -1035,6 +1499,17 @@ raw_atomic_fetch_and_release(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_and_relaxed() - atomic bitwise AND with relaxed ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v & @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_and_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_and_relaxed(int i, atomic_t *v)
{
@@ -1047,6 +1522,17 @@ raw_atomic_fetch_and_relaxed(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_andnot() - atomic bitwise AND NOT with relaxed ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v & ~@i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_andnot() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic_andnot(int i, atomic_t *v)
{
@@ -1057,6 +1543,17 @@ raw_atomic_andnot(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_andnot() - atomic bitwise AND NOT with full ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v & ~@i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_andnot() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_andnot(int i, atomic_t *v)
{
@@ -1073,6 +1570,17 @@ raw_atomic_fetch_andnot(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_andnot_acquire() - atomic bitwise AND NOT with acquire ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v & ~@i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_andnot_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
{
@@ -1089,6 +1597,17 @@ raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_andnot_release() - atomic bitwise AND NOT with release ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v & ~@i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_andnot_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_andnot_release(int i, atomic_t *v)
{
@@ -1104,6 +1623,17 @@ raw_atomic_fetch_andnot_release(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_andnot_relaxed() - atomic bitwise AND NOT with relaxed ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v & ~@i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_andnot_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_andnot_relaxed(int i, atomic_t *v)
{
@@ -1116,12 +1646,34 @@ raw_atomic_fetch_andnot_relaxed(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_or() - atomic bitwise OR with relaxed ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v | @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_or() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic_or(int i, atomic_t *v)
{
arch_atomic_or(i, v);
}
+/**
+ * raw_atomic_fetch_or() - atomic bitwise OR with full ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v | @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_or() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_or(int i, atomic_t *v)
{
@@ -1138,6 +1690,17 @@ raw_atomic_fetch_or(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_or_acquire() - atomic bitwise OR with acquire ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v | @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_or_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_or_acquire(int i, atomic_t *v)
{
@@ -1154,6 +1717,17 @@ raw_atomic_fetch_or_acquire(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_or_release() - atomic bitwise OR with release ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v | @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_or_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_or_release(int i, atomic_t *v)
{
@@ -1169,6 +1743,17 @@ raw_atomic_fetch_or_release(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_or_relaxed() - atomic bitwise OR with relaxed ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v | @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_or_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_or_relaxed(int i, atomic_t *v)
{
@@ -1181,12 +1766,34 @@ raw_atomic_fetch_or_relaxed(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_xor() - atomic bitwise XOR with relaxed ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v ^ @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_xor() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic_xor(int i, atomic_t *v)
{
arch_atomic_xor(i, v);
}
+/**
+ * raw_atomic_fetch_xor() - atomic bitwise XOR with full ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v ^ @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_xor() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_xor(int i, atomic_t *v)
{
@@ -1203,6 +1810,17 @@ raw_atomic_fetch_xor(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_xor_acquire() - atomic bitwise XOR with acquire ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v ^ @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_xor_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_xor_acquire(int i, atomic_t *v)
{
@@ -1219,6 +1837,17 @@ raw_atomic_fetch_xor_acquire(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_xor_release() - atomic bitwise XOR with release ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v ^ @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_xor_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_xor_release(int i, atomic_t *v)
{
@@ -1234,6 +1863,17 @@ raw_atomic_fetch_xor_release(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_xor_relaxed() - atomic bitwise XOR with relaxed ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v ^ @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_xor_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_xor_relaxed(int i, atomic_t *v)
{
@@ -1246,6 +1886,17 @@ raw_atomic_fetch_xor_relaxed(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_xchg() - atomic exchange with full ordering
+ * @v: pointer to atomic_t
+ * @new: int value to assign
+ *
+ * Atomically updates @v to @new with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_xchg() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_xchg(atomic_t *v, int new)
{
@@ -1262,6 +1913,17 @@ raw_atomic_xchg(atomic_t *v, int new)
#endif
}
+/**
+ * raw_atomic_xchg_acquire() - atomic exchange with acquire ordering
+ * @v: pointer to atomic_t
+ * @new: int value to assign
+ *
+ * Atomically updates @v to @new with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_xchg_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_xchg_acquire(atomic_t *v, int new)
{
@@ -1278,6 +1940,17 @@ raw_atomic_xchg_acquire(atomic_t *v, int new)
#endif
}
+/**
+ * raw_atomic_xchg_release() - atomic exchange with release ordering
+ * @v: pointer to atomic_t
+ * @new: int value to assign
+ *
+ * Atomically updates @v to @new with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_xchg_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_xchg_release(atomic_t *v, int new)
{
@@ -1293,6 +1966,17 @@ raw_atomic_xchg_release(atomic_t *v, int new)
#endif
}
+/**
+ * raw_atomic_xchg_relaxed() - atomic exchange with relaxed ordering
+ * @v: pointer to atomic_t
+ * @new: int value to assign
+ *
+ * Atomically updates @v to @new with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_xchg_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_xchg_relaxed(atomic_t *v, int new)
{
@@ -1305,6 +1989,18 @@ raw_atomic_xchg_relaxed(atomic_t *v, int new)
#endif
}
+/**
+ * raw_atomic_cmpxchg() - atomic compare and exchange with full ordering
+ * @v: pointer to atomic_t
+ * @old: int value to compare with
+ * @new: int value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_cmpxchg() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_cmpxchg(atomic_t *v, int old, int new)
{
@@ -1321,6 +2017,18 @@ raw_atomic_cmpxchg(atomic_t *v, int old, int new)
#endif
}
+/**
+ * raw_atomic_cmpxchg_acquire() - atomic compare and exchange with acquire ordering
+ * @v: pointer to atomic_t
+ * @old: int value to compare with
+ * @new: int value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_cmpxchg_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
{
@@ -1337,6 +2045,18 @@ raw_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
#endif
}
+/**
+ * raw_atomic_cmpxchg_release() - atomic compare and exchange with release ordering
+ * @v: pointer to atomic_t
+ * @old: int value to compare with
+ * @new: int value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_cmpxchg_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_cmpxchg_release(atomic_t *v, int old, int new)
{
@@ -1352,6 +2072,18 @@ raw_atomic_cmpxchg_release(atomic_t *v, int old, int new)
#endif
}
+/**
+ * raw_atomic_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering
+ * @v: pointer to atomic_t
+ * @old: int value to compare with
+ * @new: int value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_cmpxchg_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
{
@@ -1364,6 +2096,19 @@ raw_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
#endif
}
+/**
+ * raw_atomic_try_cmpxchg() - atomic compare and exchange with full ordering
+ * @v: pointer to atomic_t
+ * @old: pointer to int value to compare with
+ * @new: int value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with full ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Safe to use in noinstr code; prefer atomic_try_cmpxchg() elsewhere.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
raw_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
{
@@ -1384,6 +2129,19 @@ raw_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
#endif
}
+/**
+ * raw_atomic_try_cmpxchg_acquire() - atomic compare and exchange with acquire ordering
+ * @v: pointer to atomic_t
+ * @old: pointer to int value to compare with
+ * @new: int value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with acquire ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Safe to use in noinstr code; prefer atomic_try_cmpxchg_acquire() elsewhere.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
raw_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
{
@@ -1404,6 +2162,19 @@ raw_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
#endif
}
+/**
+ * raw_atomic_try_cmpxchg_release() - atomic compare and exchange with release ordering
+ * @v: pointer to atomic_t
+ * @old: pointer to int value to compare with
+ * @new: int value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with release ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Safe to use in noinstr code; prefer atomic_try_cmpxchg_release() elsewhere.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
raw_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
{
@@ -1423,6 +2194,19 @@ raw_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
#endif
}
+/**
+ * raw_atomic_try_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering
+ * @v: pointer to atomic_t
+ * @old: pointer to int value to compare with
+ * @new: int value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with relaxed ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Safe to use in noinstr code; prefer atomic_try_cmpxchg_relaxed() elsewhere.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
raw_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
{
@@ -1439,6 +2223,17 @@ raw_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
#endif
}
+/**
+ * raw_atomic_sub_and_test() - atomic subtract and test if zero with full ordering
+ * @i: int value to subtract
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_sub_and_test() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
static __always_inline bool
raw_atomic_sub_and_test(int i, atomic_t *v)
{
@@ -1449,6 +2244,16 @@ raw_atomic_sub_and_test(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_dec_and_test() - atomic decrement and test if zero with full ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_dec_and_test() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
static __always_inline bool
raw_atomic_dec_and_test(atomic_t *v)
{
@@ -1459,6 +2264,16 @@ raw_atomic_dec_and_test(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_inc_and_test() - atomic increment and test if zero with full ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_inc_and_test() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
static __always_inline bool
raw_atomic_inc_and_test(atomic_t *v)
{
@@ -1469,6 +2284,17 @@ raw_atomic_inc_and_test(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_add_negative() - atomic add and test if negative with full ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_add_negative() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
raw_atomic_add_negative(int i, atomic_t *v)
{
@@ -1485,6 +2311,17 @@ raw_atomic_add_negative(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_add_negative_acquire() - atomic add and test if negative with acquire ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_add_negative_acquire() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
raw_atomic_add_negative_acquire(int i, atomic_t *v)
{
@@ -1501,6 +2338,17 @@ raw_atomic_add_negative_acquire(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_add_negative_release() - atomic add and test if negative with release ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_add_negative_release() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
raw_atomic_add_negative_release(int i, atomic_t *v)
{
@@ -1516,6 +2364,17 @@ raw_atomic_add_negative_release(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_add_negative_relaxed() - atomic add and test if negative with relaxed ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_add_negative_relaxed() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
raw_atomic_add_negative_relaxed(int i, atomic_t *v)
{
@@ -1528,6 +2387,18 @@ raw_atomic_add_negative_relaxed(int i, atomic_t *v)
#endif
}
+/**
+ * raw_atomic_fetch_add_unless() - atomic add unless value with full ordering
+ * @v: pointer to atomic_t
+ * @a: int value to add
+ * @u: int value to compare with
+ *
+ * If (@v != @u), atomically updates @v to (@v + @a) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_fetch_add_unless() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
raw_atomic_fetch_add_unless(atomic_t *v, int a, int u)
{
@@ -1545,6 +2416,18 @@ raw_atomic_fetch_add_unless(atomic_t *v, int a, int u)
#endif
}
+/**
+ * raw_atomic_add_unless() - atomic add unless value with full ordering
+ * @v: pointer to atomic_t
+ * @a: int value to add
+ * @u: int value to compare with
+ *
+ * If (@v != @u), atomically updates @v to (@v + @a) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_add_unless() elsewhere.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
raw_atomic_add_unless(atomic_t *v, int a, int u)
{
@@ -1555,6 +2438,16 @@ raw_atomic_add_unless(atomic_t *v, int a, int u)
#endif
}
+/**
+ * raw_atomic_inc_not_zero() - atomic increment unless zero with full ordering
+ * @v: pointer to atomic_t
+ *
+ * If (@v != 0), atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_inc_not_zero() elsewhere.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
raw_atomic_inc_not_zero(atomic_t *v)
{
@@ -1565,6 +2458,16 @@ raw_atomic_inc_not_zero(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_inc_unless_negative() - atomic increment unless negative with full ordering
+ * @v: pointer to atomic_t
+ *
+ * If (@v >= 0), atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_inc_unless_negative() elsewhere.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
raw_atomic_inc_unless_negative(atomic_t *v)
{
@@ -1582,6 +2485,16 @@ raw_atomic_inc_unless_negative(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_dec_unless_positive() - atomic decrement unless positive with full ordering
+ * @v: pointer to atomic_t
+ *
+ * If (@v <= 0), atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_dec_unless_positive() elsewhere.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
raw_atomic_dec_unless_positive(atomic_t *v)
{
@@ -1599,6 +2512,16 @@ raw_atomic_dec_unless_positive(atomic_t *v)
#endif
}
+/**
+ * raw_atomic_dec_if_positive() - atomic decrement if positive with full ordering
+ * @v: pointer to atomic_t
+ *
+ * If (@v > 0), atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_dec_if_positive() elsewhere.
+ *
+ * Return: The old value of (@v - 1), regardless of whether @v was updated.
+ */
static __always_inline int
raw_atomic_dec_if_positive(atomic_t *v)
{
@@ -1621,12 +2544,32 @@ raw_atomic_dec_if_positive(atomic_t *v)
#include <asm-generic/atomic64.h>
#endif
+/**
+ * raw_atomic64_read() - atomic load with relaxed ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically loads the value of @v with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_read() elsewhere.
+ *
+ * Return: The value loaded from @v.
+ */
static __always_inline s64
raw_atomic64_read(const atomic64_t *v)
{
return arch_atomic64_read(v);
}
+/**
+ * raw_atomic64_read_acquire() - atomic load with acquire ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically loads the value of @v with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_read_acquire() elsewhere.
+ *
+ * Return: The value loaded from @v.
+ */
static __always_inline s64
raw_atomic64_read_acquire(const atomic64_t *v)
{
@@ -1648,12 +2591,34 @@ raw_atomic64_read_acquire(const atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_set() - atomic set with relaxed ordering
+ * @v: pointer to atomic64_t
+ * @i: s64 value to assign
+ *
+ * Atomically sets @v to @i with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_set() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic64_set(atomic64_t *v, s64 i)
{
arch_atomic64_set(v, i);
}
+/**
+ * raw_atomic64_set_release() - atomic set with release ordering
+ * @v: pointer to atomic64_t
+ * @i: s64 value to assign
+ *
+ * Atomically sets @v to @i with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_set_release() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic64_set_release(atomic64_t *v, s64 i)
{
@@ -1671,12 +2636,34 @@ raw_atomic64_set_release(atomic64_t *v, s64 i)
#endif
}
+/**
+ * raw_atomic64_add() - atomic add with relaxed ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_add() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic64_add(s64 i, atomic64_t *v)
{
arch_atomic64_add(i, v);
}
+/**
+ * raw_atomic64_add_return() - atomic add with full ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_add_return() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
raw_atomic64_add_return(s64 i, atomic64_t *v)
{
@@ -1693,6 +2680,17 @@ raw_atomic64_add_return(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_add_return_acquire() - atomic add with acquire ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_add_return_acquire() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
raw_atomic64_add_return_acquire(s64 i, atomic64_t *v)
{
@@ -1709,6 +2707,17 @@ raw_atomic64_add_return_acquire(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_add_return_release() - atomic add with release ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_add_return_release() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
raw_atomic64_add_return_release(s64 i, atomic64_t *v)
{
@@ -1724,6 +2733,17 @@ raw_atomic64_add_return_release(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_add_return_relaxed() - atomic add with relaxed ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_add_return_relaxed() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
raw_atomic64_add_return_relaxed(s64 i, atomic64_t *v)
{
@@ -1736,6 +2756,17 @@ raw_atomic64_add_return_relaxed(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_add() - atomic add with full ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_add() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_add(s64 i, atomic64_t *v)
{
@@ -1752,6 +2783,17 @@ raw_atomic64_fetch_add(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_add_acquire() - atomic add with acquire ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_add_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
{
@@ -1768,6 +2810,17 @@ raw_atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_add_release() - atomic add with release ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_add_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_add_release(s64 i, atomic64_t *v)
{
@@ -1783,6 +2836,17 @@ raw_atomic64_fetch_add_release(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_add_relaxed() - atomic add with relaxed ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_add_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_add_relaxed(s64 i, atomic64_t *v)
{
@@ -1795,12 +2859,34 @@ raw_atomic64_fetch_add_relaxed(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_sub() - atomic subtract with relaxed ordering
+ * @i: s64 value to subtract
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_sub() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic64_sub(s64 i, atomic64_t *v)
{
arch_atomic64_sub(i, v);
}
+/**
+ * raw_atomic64_sub_return() - atomic subtract with full ordering
+ * @i: s64 value to subtract
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_sub_return() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
raw_atomic64_sub_return(s64 i, atomic64_t *v)
{
@@ -1817,6 +2903,17 @@ raw_atomic64_sub_return(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_sub_return_acquire() - atomic subtract with acquire ordering
+ * @i: s64 value to subtract
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_sub_return_acquire() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
raw_atomic64_sub_return_acquire(s64 i, atomic64_t *v)
{
@@ -1833,6 +2930,17 @@ raw_atomic64_sub_return_acquire(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_sub_return_release() - atomic subtract with release ordering
+ * @i: s64 value to subtract
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_sub_return_release() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
raw_atomic64_sub_return_release(s64 i, atomic64_t *v)
{
@@ -1848,6 +2956,17 @@ raw_atomic64_sub_return_release(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_sub_return_relaxed() - atomic subtract with relaxed ordering
+ * @i: s64 value to subtract
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_sub_return_relaxed() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
raw_atomic64_sub_return_relaxed(s64 i, atomic64_t *v)
{
@@ -1860,6 +2979,17 @@ raw_atomic64_sub_return_relaxed(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_sub() - atomic subtract with full ordering
+ * @i: s64 value to subtract
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_sub() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_sub(s64 i, atomic64_t *v)
{
@@ -1876,6 +3006,17 @@ raw_atomic64_fetch_sub(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_sub_acquire() - atomic subtract with acquire ordering
+ * @i: s64 value to subtract
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_sub_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
{
@@ -1892,6 +3033,17 @@ raw_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_sub_release() - atomic subtract with release ordering
+ * @i: s64 value to subtract
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_sub_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_sub_release(s64 i, atomic64_t *v)
{
@@ -1907,6 +3059,17 @@ raw_atomic64_fetch_sub_release(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_sub_relaxed() - atomic subtract with relaxed ordering
+ * @i: s64 value to subtract
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_sub_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v)
{
@@ -1919,6 +3082,16 @@ raw_atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_inc() - atomic increment with relaxed ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + 1) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_inc() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic64_inc(atomic64_t *v)
{
@@ -1929,6 +3102,16 @@ raw_atomic64_inc(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_inc_return() - atomic increment with full ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_inc_return() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
raw_atomic64_inc_return(atomic64_t *v)
{
@@ -1945,6 +3128,16 @@ raw_atomic64_inc_return(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_inc_return_acquire() - atomic increment with acquire ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + 1) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_inc_return_acquire() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
raw_atomic64_inc_return_acquire(atomic64_t *v)
{
@@ -1961,6 +3154,16 @@ raw_atomic64_inc_return_acquire(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_inc_return_release() - atomic increment with release ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + 1) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_inc_return_release() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
raw_atomic64_inc_return_release(atomic64_t *v)
{
@@ -1976,6 +3179,16 @@ raw_atomic64_inc_return_release(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_inc_return_relaxed() - atomic increment with relaxed ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + 1) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_inc_return_relaxed() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
raw_atomic64_inc_return_relaxed(atomic64_t *v)
{
@@ -1988,6 +3201,16 @@ raw_atomic64_inc_return_relaxed(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_inc() - atomic increment with full ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_inc() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_inc(atomic64_t *v)
{
@@ -2004,6 +3227,16 @@ raw_atomic64_fetch_inc(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_inc_acquire() - atomic increment with acquire ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + 1) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_inc_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_inc_acquire(atomic64_t *v)
{
@@ -2020,6 +3253,16 @@ raw_atomic64_fetch_inc_acquire(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_inc_release() - atomic increment with release ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + 1) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_inc_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_inc_release(atomic64_t *v)
{
@@ -2035,6 +3278,16 @@ raw_atomic64_fetch_inc_release(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_inc_relaxed() - atomic increment with relaxed ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + 1) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_inc_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_inc_relaxed(atomic64_t *v)
{
@@ -2047,6 +3300,16 @@ raw_atomic64_fetch_inc_relaxed(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_dec() - atomic decrement with relaxed ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - 1) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_dec() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic64_dec(atomic64_t *v)
{
@@ -2057,6 +3320,16 @@ raw_atomic64_dec(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_dec_return() - atomic decrement with full ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_dec_return() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
raw_atomic64_dec_return(atomic64_t *v)
{
@@ -2073,6 +3346,16 @@ raw_atomic64_dec_return(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_dec_return_acquire() - atomic decrement with acquire ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - 1) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_dec_return_acquire() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
raw_atomic64_dec_return_acquire(atomic64_t *v)
{
@@ -2089,6 +3372,16 @@ raw_atomic64_dec_return_acquire(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_dec_return_release() - atomic decrement with release ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - 1) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_dec_return_release() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
raw_atomic64_dec_return_release(atomic64_t *v)
{
@@ -2104,6 +3397,16 @@ raw_atomic64_dec_return_release(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_dec_return_relaxed() - atomic decrement with relaxed ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - 1) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_dec_return_relaxed() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
raw_atomic64_dec_return_relaxed(atomic64_t *v)
{
@@ -2116,6 +3419,16 @@ raw_atomic64_dec_return_relaxed(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_dec() - atomic decrement with full ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_dec() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_dec(atomic64_t *v)
{
@@ -2132,6 +3445,16 @@ raw_atomic64_fetch_dec(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_dec_acquire() - atomic decrement with acquire ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - 1) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_dec_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_dec_acquire(atomic64_t *v)
{
@@ -2148,6 +3471,16 @@ raw_atomic64_fetch_dec_acquire(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_dec_release() - atomic decrement with release ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - 1) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_dec_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_dec_release(atomic64_t *v)
{
@@ -2163,6 +3496,16 @@ raw_atomic64_fetch_dec_release(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_dec_relaxed() - atomic decrement with relaxed ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - 1) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_dec_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_dec_relaxed(atomic64_t *v)
{
@@ -2175,12 +3518,34 @@ raw_atomic64_fetch_dec_relaxed(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_and() - atomic bitwise AND with relaxed ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v & @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_and() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic64_and(s64 i, atomic64_t *v)
{
arch_atomic64_and(i, v);
}
+/**
+ * raw_atomic64_fetch_and() - atomic bitwise AND with full ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v & @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_and() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_and(s64 i, atomic64_t *v)
{
@@ -2197,6 +3562,17 @@ raw_atomic64_fetch_and(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_and_acquire() - atomic bitwise AND with acquire ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v & @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_and_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
{
@@ -2213,6 +3589,17 @@ raw_atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_and_release() - atomic bitwise AND with release ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v & @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_and_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_and_release(s64 i, atomic64_t *v)
{
@@ -2228,6 +3615,17 @@ raw_atomic64_fetch_and_release(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_and_relaxed() - atomic bitwise AND with relaxed ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v & @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_and_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_and_relaxed(s64 i, atomic64_t *v)
{
@@ -2240,6 +3638,17 @@ raw_atomic64_fetch_and_relaxed(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_andnot() - atomic bitwise AND NOT with relaxed ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v & ~@i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_andnot() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic64_andnot(s64 i, atomic64_t *v)
{
@@ -2250,6 +3659,17 @@ raw_atomic64_andnot(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_andnot() - atomic bitwise AND NOT with full ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v & ~@i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_andnot() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_andnot(s64 i, atomic64_t *v)
{
@@ -2266,6 +3686,17 @@ raw_atomic64_fetch_andnot(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_andnot_acquire() - atomic bitwise AND NOT with acquire ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v & ~@i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_andnot_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
{
@@ -2282,6 +3713,17 @@ raw_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_andnot_release() - atomic bitwise AND NOT with release ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v & ~@i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_andnot_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
{
@@ -2297,6 +3739,17 @@ raw_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_andnot_relaxed() - atomic bitwise AND NOT with relaxed ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v & ~@i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_andnot_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
{
@@ -2309,12 +3762,34 @@ raw_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_or() - atomic bitwise OR with relaxed ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v | @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_or() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic64_or(s64 i, atomic64_t *v)
{
arch_atomic64_or(i, v);
}
+/**
+ * raw_atomic64_fetch_or() - atomic bitwise OR with full ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v | @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_or() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_or(s64 i, atomic64_t *v)
{
@@ -2331,6 +3806,17 @@ raw_atomic64_fetch_or(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_or_acquire() - atomic bitwise OR with acquire ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v | @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_or_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
{
@@ -2347,6 +3833,17 @@ raw_atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_or_release() - atomic bitwise OR with release ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v | @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_or_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_or_release(s64 i, atomic64_t *v)
{
@@ -2362,6 +3859,17 @@ raw_atomic64_fetch_or_release(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_or_relaxed() - atomic bitwise OR with relaxed ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v | @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_or_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_or_relaxed(s64 i, atomic64_t *v)
{
@@ -2374,12 +3882,34 @@ raw_atomic64_fetch_or_relaxed(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_xor() - atomic bitwise XOR with relaxed ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v ^ @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_xor() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic64_xor(s64 i, atomic64_t *v)
{
arch_atomic64_xor(i, v);
}
+/**
+ * raw_atomic64_fetch_xor() - atomic bitwise XOR with full ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v ^ @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_xor() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_xor(s64 i, atomic64_t *v)
{
@@ -2396,6 +3926,17 @@ raw_atomic64_fetch_xor(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_xor_acquire() - atomic bitwise XOR with acquire ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v ^ @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_xor_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
{
@@ -2412,6 +3953,17 @@ raw_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_xor_release() - atomic bitwise XOR with release ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v ^ @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_xor_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_xor_release(s64 i, atomic64_t *v)
{
@@ -2427,6 +3979,17 @@ raw_atomic64_fetch_xor_release(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_xor_relaxed() - atomic bitwise XOR with relaxed ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v ^ @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_xor_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v)
{
@@ -2439,6 +4002,17 @@ raw_atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_xchg() - atomic exchange with full ordering
+ * @v: pointer to atomic64_t
+ * @new: s64 value to assign
+ *
+ * Atomically updates @v to @new with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_xchg() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_xchg(atomic64_t *v, s64 new)
{
@@ -2455,6 +4029,17 @@ raw_atomic64_xchg(atomic64_t *v, s64 new)
#endif
}
+/**
+ * raw_atomic64_xchg_acquire() - atomic exchange with acquire ordering
+ * @v: pointer to atomic64_t
+ * @new: s64 value to assign
+ *
+ * Atomically updates @v to @new with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_xchg_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_xchg_acquire(atomic64_t *v, s64 new)
{
@@ -2471,6 +4056,17 @@ raw_atomic64_xchg_acquire(atomic64_t *v, s64 new)
#endif
}
+/**
+ * raw_atomic64_xchg_release() - atomic exchange with release ordering
+ * @v: pointer to atomic64_t
+ * @new: s64 value to assign
+ *
+ * Atomically updates @v to @new with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_xchg_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_xchg_release(atomic64_t *v, s64 new)
{
@@ -2486,6 +4082,17 @@ raw_atomic64_xchg_release(atomic64_t *v, s64 new)
#endif
}
+/**
+ * raw_atomic64_xchg_relaxed() - atomic exchange with relaxed ordering
+ * @v: pointer to atomic64_t
+ * @new: s64 value to assign
+ *
+ * Atomically updates @v to @new with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_xchg_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_xchg_relaxed(atomic64_t *v, s64 new)
{
@@ -2498,6 +4105,18 @@ raw_atomic64_xchg_relaxed(atomic64_t *v, s64 new)
#endif
}
+/**
+ * raw_atomic64_cmpxchg() - atomic compare and exchange with full ordering
+ * @v: pointer to atomic64_t
+ * @old: s64 value to compare with
+ * @new: s64 value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_cmpxchg() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
{
@@ -2514,6 +4133,18 @@ raw_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
#endif
}
+/**
+ * raw_atomic64_cmpxchg_acquire() - atomic compare and exchange with acquire ordering
+ * @v: pointer to atomic64_t
+ * @old: s64 value to compare with
+ * @new: s64 value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_cmpxchg_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
{
@@ -2530,6 +4161,18 @@ raw_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
#endif
}
+/**
+ * raw_atomic64_cmpxchg_release() - atomic compare and exchange with release ordering
+ * @v: pointer to atomic64_t
+ * @old: s64 value to compare with
+ * @new: s64 value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_cmpxchg_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
{
@@ -2545,6 +4188,18 @@ raw_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
#endif
}
+/**
+ * raw_atomic64_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering
+ * @v: pointer to atomic64_t
+ * @old: s64 value to compare with
+ * @new: s64 value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_cmpxchg_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
{
@@ -2557,6 +4212,19 @@ raw_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
#endif
}
+/**
+ * raw_atomic64_try_cmpxchg() - atomic compare and exchange with full ordering
+ * @v: pointer to atomic64_t
+ * @old: pointer to s64 value to compare with
+ * @new: s64 value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with full ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Safe to use in noinstr code; prefer atomic64_try_cmpxchg() elsewhere.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
raw_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
{
@@ -2577,6 +4245,19 @@ raw_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
#endif
}
+/**
+ * raw_atomic64_try_cmpxchg_acquire() - atomic compare and exchange with acquire ordering
+ * @v: pointer to atomic64_t
+ * @old: pointer to s64 value to compare with
+ * @new: s64 value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with acquire ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Safe to use in noinstr code; prefer atomic64_try_cmpxchg_acquire() elsewhere.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
raw_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
{
@@ -2597,6 +4278,19 @@ raw_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
#endif
}
+/**
+ * raw_atomic64_try_cmpxchg_release() - atomic compare and exchange with release ordering
+ * @v: pointer to atomic64_t
+ * @old: pointer to s64 value to compare with
+ * @new: s64 value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with release ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Safe to use in noinstr code; prefer atomic64_try_cmpxchg_release() elsewhere.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
raw_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
{
@@ -2616,6 +4310,19 @@ raw_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
#endif
}
+/**
+ * raw_atomic64_try_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering
+ * @v: pointer to atomic64_t
+ * @old: pointer to s64 value to compare with
+ * @new: s64 value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with relaxed ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Safe to use in noinstr code; prefer atomic64_try_cmpxchg_relaxed() elsewhere.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
raw_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
{
@@ -2632,6 +4339,17 @@ raw_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
#endif
}
+/**
+ * raw_atomic64_sub_and_test() - atomic subtract and test if zero with full ordering
+ * @i: s64 value to subtract
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_sub_and_test() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
static __always_inline bool
raw_atomic64_sub_and_test(s64 i, atomic64_t *v)
{
@@ -2642,6 +4360,16 @@ raw_atomic64_sub_and_test(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_dec_and_test() - atomic decrement and test if zero with full ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_dec_and_test() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
static __always_inline bool
raw_atomic64_dec_and_test(atomic64_t *v)
{
@@ -2652,6 +4380,16 @@ raw_atomic64_dec_and_test(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_inc_and_test() - atomic increment and test if zero with full ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_inc_and_test() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
static __always_inline bool
raw_atomic64_inc_and_test(atomic64_t *v)
{
@@ -2662,6 +4400,17 @@ raw_atomic64_inc_and_test(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_add_negative() - atomic add and test if negative with full ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_add_negative() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
raw_atomic64_add_negative(s64 i, atomic64_t *v)
{
@@ -2678,6 +4427,17 @@ raw_atomic64_add_negative(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_add_negative_acquire() - atomic add and test if negative with acquire ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_add_negative_acquire() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
raw_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
{
@@ -2694,6 +4454,17 @@ raw_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_add_negative_release() - atomic add and test if negative with release ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_add_negative_release() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
raw_atomic64_add_negative_release(s64 i, atomic64_t *v)
{
@@ -2709,6 +4480,17 @@ raw_atomic64_add_negative_release(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_add_negative_relaxed() - atomic add and test if negative with relaxed ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_add_negative_relaxed() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
raw_atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
{
@@ -2721,6 +4503,18 @@ raw_atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_fetch_add_unless() - atomic add unless value with full ordering
+ * @v: pointer to atomic64_t
+ * @a: s64 value to add
+ * @u: s64 value to compare with
+ *
+ * If (@v != @u), atomically updates @v to (@v + @a) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_fetch_add_unless() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
raw_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
{
@@ -2738,6 +4532,18 @@ raw_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
#endif
}
+/**
+ * raw_atomic64_add_unless() - atomic add unless value with full ordering
+ * @v: pointer to atomic64_t
+ * @a: s64 value to add
+ * @u: s64 value to compare with
+ *
+ * If (@v != @u), atomically updates @v to (@v + @a) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_add_unless() elsewhere.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
raw_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
{
@@ -2748,6 +4554,16 @@ raw_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
#endif
}
+/**
+ * raw_atomic64_inc_not_zero() - atomic increment unless zero with full ordering
+ * @v: pointer to atomic64_t
+ *
+ * If (@v != 0), atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_inc_not_zero() elsewhere.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
raw_atomic64_inc_not_zero(atomic64_t *v)
{
@@ -2758,6 +4574,16 @@ raw_atomic64_inc_not_zero(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_inc_unless_negative() - atomic increment unless negative with full ordering
+ * @v: pointer to atomic64_t
+ *
+ * If (@v >= 0), atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_inc_unless_negative() elsewhere.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
raw_atomic64_inc_unless_negative(atomic64_t *v)
{
@@ -2775,6 +4601,16 @@ raw_atomic64_inc_unless_negative(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_dec_unless_positive() - atomic decrement unless positive with full ordering
+ * @v: pointer to atomic64_t
+ *
+ * If (@v <= 0), atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_dec_unless_positive() elsewhere.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
raw_atomic64_dec_unless_positive(atomic64_t *v)
{
@@ -2792,6 +4628,16 @@ raw_atomic64_dec_unless_positive(atomic64_t *v)
#endif
}
+/**
+ * raw_atomic64_dec_if_positive() - atomic decrement if positive with full ordering
+ * @v: pointer to atomic64_t
+ *
+ * If (@v > 0), atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic64_dec_if_positive() elsewhere.
+ *
+ * Return: The old value of (@v - 1), regardless of whether @v was updated.
+ */
static __always_inline s64
raw_atomic64_dec_if_positive(atomic64_t *v)
{
@@ -2811,4 +4657,4 @@ raw_atomic64_dec_if_positive(atomic64_t *v)
}
#endif /* _LINUX_ATOMIC_FALLBACK_H */
-// 205e090382132f1fc85e48b46e722865f9c81309
+// 3916f02c038baa3f5190d275f68b9211667fcc9d
diff --git a/include/linux/atomic/atomic-instrumented.h b/include/linux/atomic/atomic-instrumented.h
index 5491c89..ebfc795 100644
--- a/include/linux/atomic/atomic-instrumented.h
+++ b/include/linux/atomic/atomic-instrumented.h
@@ -16,6 +16,16 @@
#include <linux/compiler.h>
#include <linux/instrumented.h>
+/**
+ * atomic_read() - atomic load with relaxed ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically loads the value of @v with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_read() there.
+ *
+ * Return: The value loaded from @v.
+ */
static __always_inline int
atomic_read(const atomic_t *v)
{
@@ -23,6 +33,16 @@ atomic_read(const atomic_t *v)
return raw_atomic_read(v);
}
+/**
+ * atomic_read_acquire() - atomic load with acquire ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically loads the value of @v with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_read_acquire() there.
+ *
+ * Return: The value loaded from @v.
+ */
static __always_inline int
atomic_read_acquire(const atomic_t *v)
{
@@ -30,6 +50,17 @@ atomic_read_acquire(const atomic_t *v)
return raw_atomic_read_acquire(v);
}
+/**
+ * atomic_set() - atomic set with relaxed ordering
+ * @v: pointer to atomic_t
+ * @i: int value to assign
+ *
+ * Atomically sets @v to @i with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_set() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic_set(atomic_t *v, int i)
{
@@ -37,6 +68,17 @@ atomic_set(atomic_t *v, int i)
raw_atomic_set(v, i);
}
+/**
+ * atomic_set_release() - atomic set with release ordering
+ * @v: pointer to atomic_t
+ * @i: int value to assign
+ *
+ * Atomically sets @v to @i with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_set_release() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic_set_release(atomic_t *v, int i)
{
@@ -45,6 +87,17 @@ atomic_set_release(atomic_t *v, int i)
raw_atomic_set_release(v, i);
}
+/**
+ * atomic_add() - atomic add with relaxed ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_add() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic_add(int i, atomic_t *v)
{
@@ -52,6 +105,17 @@ atomic_add(int i, atomic_t *v)
raw_atomic_add(i, v);
}
+/**
+ * atomic_add_return() - atomic add with full ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_add_return() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
atomic_add_return(int i, atomic_t *v)
{
@@ -60,6 +124,17 @@ atomic_add_return(int i, atomic_t *v)
return raw_atomic_add_return(i, v);
}
+/**
+ * atomic_add_return_acquire() - atomic add with acquire ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_add_return_acquire() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
atomic_add_return_acquire(int i, atomic_t *v)
{
@@ -67,6 +142,17 @@ atomic_add_return_acquire(int i, atomic_t *v)
return raw_atomic_add_return_acquire(i, v);
}
+/**
+ * atomic_add_return_release() - atomic add with release ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_add_return_release() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
atomic_add_return_release(int i, atomic_t *v)
{
@@ -75,6 +161,17 @@ atomic_add_return_release(int i, atomic_t *v)
return raw_atomic_add_return_release(i, v);
}
+/**
+ * atomic_add_return_relaxed() - atomic add with relaxed ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_add_return_relaxed() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
atomic_add_return_relaxed(int i, atomic_t *v)
{
@@ -82,6 +179,17 @@ atomic_add_return_relaxed(int i, atomic_t *v)
return raw_atomic_add_return_relaxed(i, v);
}
+/**
+ * atomic_fetch_add() - atomic add with full ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_add() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_add(int i, atomic_t *v)
{
@@ -90,6 +198,17 @@ atomic_fetch_add(int i, atomic_t *v)
return raw_atomic_fetch_add(i, v);
}
+/**
+ * atomic_fetch_add_acquire() - atomic add with acquire ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_add_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_add_acquire(int i, atomic_t *v)
{
@@ -97,6 +216,17 @@ atomic_fetch_add_acquire(int i, atomic_t *v)
return raw_atomic_fetch_add_acquire(i, v);
}
+/**
+ * atomic_fetch_add_release() - atomic add with release ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_add_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_add_release(int i, atomic_t *v)
{
@@ -105,6 +235,17 @@ atomic_fetch_add_release(int i, atomic_t *v)
return raw_atomic_fetch_add_release(i, v);
}
+/**
+ * atomic_fetch_add_relaxed() - atomic add with relaxed ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_add_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_add_relaxed(int i, atomic_t *v)
{
@@ -112,6 +253,17 @@ atomic_fetch_add_relaxed(int i, atomic_t *v)
return raw_atomic_fetch_add_relaxed(i, v);
}
+/**
+ * atomic_sub() - atomic subtract with relaxed ordering
+ * @i: int value to subtract
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_sub() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic_sub(int i, atomic_t *v)
{
@@ -119,6 +271,17 @@ atomic_sub(int i, atomic_t *v)
raw_atomic_sub(i, v);
}
+/**
+ * atomic_sub_return() - atomic subtract with full ordering
+ * @i: int value to subtract
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_sub_return() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
atomic_sub_return(int i, atomic_t *v)
{
@@ -127,6 +290,17 @@ atomic_sub_return(int i, atomic_t *v)
return raw_atomic_sub_return(i, v);
}
+/**
+ * atomic_sub_return_acquire() - atomic subtract with acquire ordering
+ * @i: int value to subtract
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_sub_return_acquire() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
atomic_sub_return_acquire(int i, atomic_t *v)
{
@@ -134,6 +308,17 @@ atomic_sub_return_acquire(int i, atomic_t *v)
return raw_atomic_sub_return_acquire(i, v);
}
+/**
+ * atomic_sub_return_release() - atomic subtract with release ordering
+ * @i: int value to subtract
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_sub_return_release() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
atomic_sub_return_release(int i, atomic_t *v)
{
@@ -142,6 +327,17 @@ atomic_sub_return_release(int i, atomic_t *v)
return raw_atomic_sub_return_release(i, v);
}
+/**
+ * atomic_sub_return_relaxed() - atomic subtract with relaxed ordering
+ * @i: int value to subtract
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_sub_return_relaxed() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
atomic_sub_return_relaxed(int i, atomic_t *v)
{
@@ -149,6 +345,17 @@ atomic_sub_return_relaxed(int i, atomic_t *v)
return raw_atomic_sub_return_relaxed(i, v);
}
+/**
+ * atomic_fetch_sub() - atomic subtract with full ordering
+ * @i: int value to subtract
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_sub() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_sub(int i, atomic_t *v)
{
@@ -157,6 +364,17 @@ atomic_fetch_sub(int i, atomic_t *v)
return raw_atomic_fetch_sub(i, v);
}
+/**
+ * atomic_fetch_sub_acquire() - atomic subtract with acquire ordering
+ * @i: int value to subtract
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_sub_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_sub_acquire(int i, atomic_t *v)
{
@@ -164,6 +382,17 @@ atomic_fetch_sub_acquire(int i, atomic_t *v)
return raw_atomic_fetch_sub_acquire(i, v);
}
+/**
+ * atomic_fetch_sub_release() - atomic subtract with release ordering
+ * @i: int value to subtract
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_sub_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_sub_release(int i, atomic_t *v)
{
@@ -172,6 +401,17 @@ atomic_fetch_sub_release(int i, atomic_t *v)
return raw_atomic_fetch_sub_release(i, v);
}
+/**
+ * atomic_fetch_sub_relaxed() - atomic subtract with relaxed ordering
+ * @i: int value to subtract
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_sub_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_sub_relaxed(int i, atomic_t *v)
{
@@ -179,6 +419,16 @@ atomic_fetch_sub_relaxed(int i, atomic_t *v)
return raw_atomic_fetch_sub_relaxed(i, v);
}
+/**
+ * atomic_inc() - atomic increment with relaxed ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + 1) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_inc() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic_inc(atomic_t *v)
{
@@ -186,6 +436,16 @@ atomic_inc(atomic_t *v)
raw_atomic_inc(v);
}
+/**
+ * atomic_inc_return() - atomic increment with full ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_inc_return() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
atomic_inc_return(atomic_t *v)
{
@@ -194,6 +454,16 @@ atomic_inc_return(atomic_t *v)
return raw_atomic_inc_return(v);
}
+/**
+ * atomic_inc_return_acquire() - atomic increment with acquire ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + 1) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_inc_return_acquire() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
atomic_inc_return_acquire(atomic_t *v)
{
@@ -201,6 +471,16 @@ atomic_inc_return_acquire(atomic_t *v)
return raw_atomic_inc_return_acquire(v);
}
+/**
+ * atomic_inc_return_release() - atomic increment with release ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + 1) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_inc_return_release() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
atomic_inc_return_release(atomic_t *v)
{
@@ -209,6 +489,16 @@ atomic_inc_return_release(atomic_t *v)
return raw_atomic_inc_return_release(v);
}
+/**
+ * atomic_inc_return_relaxed() - atomic increment with relaxed ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + 1) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_inc_return_relaxed() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
atomic_inc_return_relaxed(atomic_t *v)
{
@@ -216,6 +506,16 @@ atomic_inc_return_relaxed(atomic_t *v)
return raw_atomic_inc_return_relaxed(v);
}
+/**
+ * atomic_fetch_inc() - atomic increment with full ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_inc() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_inc(atomic_t *v)
{
@@ -224,6 +524,16 @@ atomic_fetch_inc(atomic_t *v)
return raw_atomic_fetch_inc(v);
}
+/**
+ * atomic_fetch_inc_acquire() - atomic increment with acquire ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + 1) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_inc_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_inc_acquire(atomic_t *v)
{
@@ -231,6 +541,16 @@ atomic_fetch_inc_acquire(atomic_t *v)
return raw_atomic_fetch_inc_acquire(v);
}
+/**
+ * atomic_fetch_inc_release() - atomic increment with release ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + 1) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_inc_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_inc_release(atomic_t *v)
{
@@ -239,6 +559,16 @@ atomic_fetch_inc_release(atomic_t *v)
return raw_atomic_fetch_inc_release(v);
}
+/**
+ * atomic_fetch_inc_relaxed() - atomic increment with relaxed ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + 1) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_inc_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_inc_relaxed(atomic_t *v)
{
@@ -246,6 +576,16 @@ atomic_fetch_inc_relaxed(atomic_t *v)
return raw_atomic_fetch_inc_relaxed(v);
}
+/**
+ * atomic_dec() - atomic decrement with relaxed ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - 1) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_dec() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic_dec(atomic_t *v)
{
@@ -253,6 +593,16 @@ atomic_dec(atomic_t *v)
raw_atomic_dec(v);
}
+/**
+ * atomic_dec_return() - atomic decrement with full ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_dec_return() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
atomic_dec_return(atomic_t *v)
{
@@ -261,6 +611,16 @@ atomic_dec_return(atomic_t *v)
return raw_atomic_dec_return(v);
}
+/**
+ * atomic_dec_return_acquire() - atomic decrement with acquire ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - 1) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_dec_return_acquire() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
atomic_dec_return_acquire(atomic_t *v)
{
@@ -268,6 +628,16 @@ atomic_dec_return_acquire(atomic_t *v)
return raw_atomic_dec_return_acquire(v);
}
+/**
+ * atomic_dec_return_release() - atomic decrement with release ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - 1) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_dec_return_release() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
atomic_dec_return_release(atomic_t *v)
{
@@ -276,6 +646,16 @@ atomic_dec_return_release(atomic_t *v)
return raw_atomic_dec_return_release(v);
}
+/**
+ * atomic_dec_return_relaxed() - atomic decrement with relaxed ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - 1) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_dec_return_relaxed() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline int
atomic_dec_return_relaxed(atomic_t *v)
{
@@ -283,6 +663,16 @@ atomic_dec_return_relaxed(atomic_t *v)
return raw_atomic_dec_return_relaxed(v);
}
+/**
+ * atomic_fetch_dec() - atomic decrement with full ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_dec() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_dec(atomic_t *v)
{
@@ -291,6 +681,16 @@ atomic_fetch_dec(atomic_t *v)
return raw_atomic_fetch_dec(v);
}
+/**
+ * atomic_fetch_dec_acquire() - atomic decrement with acquire ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - 1) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_dec_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_dec_acquire(atomic_t *v)
{
@@ -298,6 +698,16 @@ atomic_fetch_dec_acquire(atomic_t *v)
return raw_atomic_fetch_dec_acquire(v);
}
+/**
+ * atomic_fetch_dec_release() - atomic decrement with release ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - 1) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_dec_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_dec_release(atomic_t *v)
{
@@ -306,6 +716,16 @@ atomic_fetch_dec_release(atomic_t *v)
return raw_atomic_fetch_dec_release(v);
}
+/**
+ * atomic_fetch_dec_relaxed() - atomic decrement with relaxed ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - 1) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_dec_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_dec_relaxed(atomic_t *v)
{
@@ -313,6 +733,17 @@ atomic_fetch_dec_relaxed(atomic_t *v)
return raw_atomic_fetch_dec_relaxed(v);
}
+/**
+ * atomic_and() - atomic bitwise AND with relaxed ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v & @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_and() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic_and(int i, atomic_t *v)
{
@@ -320,6 +751,17 @@ atomic_and(int i, atomic_t *v)
raw_atomic_and(i, v);
}
+/**
+ * atomic_fetch_and() - atomic bitwise AND with full ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v & @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_and() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_and(int i, atomic_t *v)
{
@@ -328,6 +770,17 @@ atomic_fetch_and(int i, atomic_t *v)
return raw_atomic_fetch_and(i, v);
}
+/**
+ * atomic_fetch_and_acquire() - atomic bitwise AND with acquire ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v & @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_and_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_and_acquire(int i, atomic_t *v)
{
@@ -335,6 +788,17 @@ atomic_fetch_and_acquire(int i, atomic_t *v)
return raw_atomic_fetch_and_acquire(i, v);
}
+/**
+ * atomic_fetch_and_release() - atomic bitwise AND with release ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v & @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_and_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_and_release(int i, atomic_t *v)
{
@@ -343,6 +807,17 @@ atomic_fetch_and_release(int i, atomic_t *v)
return raw_atomic_fetch_and_release(i, v);
}
+/**
+ * atomic_fetch_and_relaxed() - atomic bitwise AND with relaxed ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v & @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_and_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_and_relaxed(int i, atomic_t *v)
{
@@ -350,6 +825,17 @@ atomic_fetch_and_relaxed(int i, atomic_t *v)
return raw_atomic_fetch_and_relaxed(i, v);
}
+/**
+ * atomic_andnot() - atomic bitwise AND NOT with relaxed ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v & ~@i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_andnot() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic_andnot(int i, atomic_t *v)
{
@@ -357,6 +843,17 @@ atomic_andnot(int i, atomic_t *v)
raw_atomic_andnot(i, v);
}
+/**
+ * atomic_fetch_andnot() - atomic bitwise AND NOT with full ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v & ~@i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_andnot() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_andnot(int i, atomic_t *v)
{
@@ -365,6 +862,17 @@ atomic_fetch_andnot(int i, atomic_t *v)
return raw_atomic_fetch_andnot(i, v);
}
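As a usage sketch (hypothetical flag and helper, not from this series), the fetch_andnot ops let a caller clear a flag and learn whether it was previously set:

| 	if (atomic_fetch_andnot(MY_PENDING, &state) & MY_PENDING) {
| 		/* the flag was set, and this caller is the one that cleared it */
| 		handle_pending();
| 	}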
+/**
+ * atomic_fetch_andnot_acquire() - atomic bitwise AND NOT with acquire ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v & ~@i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_andnot_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_andnot_acquire(int i, atomic_t *v)
{
@@ -372,6 +880,17 @@ atomic_fetch_andnot_acquire(int i, atomic_t *v)
return raw_atomic_fetch_andnot_acquire(i, v);
}
+/**
+ * atomic_fetch_andnot_release() - atomic bitwise AND NOT with release ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v & ~@i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_andnot_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_andnot_release(int i, atomic_t *v)
{
@@ -380,6 +899,17 @@ atomic_fetch_andnot_release(int i, atomic_t *v)
return raw_atomic_fetch_andnot_release(i, v);
}
+/**
+ * atomic_fetch_andnot_relaxed() - atomic bitwise AND NOT with relaxed ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v & ~@i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_andnot_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_andnot_relaxed(int i, atomic_t *v)
{
@@ -387,6 +917,17 @@ atomic_fetch_andnot_relaxed(int i, atomic_t *v)
return raw_atomic_fetch_andnot_relaxed(i, v);
}
+/**
+ * atomic_or() - atomic bitwise OR with relaxed ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v | @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_or() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic_or(int i, atomic_t *v)
{
@@ -394,6 +935,17 @@ atomic_or(int i, atomic_t *v)
raw_atomic_or(i, v);
}
+/**
+ * atomic_fetch_or() - atomic bitwise OR with full ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v | @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_or() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_or(int i, atomic_t *v)
{
@@ -402,6 +954,17 @@ atomic_fetch_or(int i, atomic_t *v)
return raw_atomic_fetch_or(i, v);
}
+/**
+ * atomic_fetch_or_acquire() - atomic bitwise OR with acquire ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v | @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_or_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_or_acquire(int i, atomic_t *v)
{
@@ -409,6 +972,17 @@ atomic_fetch_or_acquire(int i, atomic_t *v)
return raw_atomic_fetch_or_acquire(i, v);
}
+/**
+ * atomic_fetch_or_release() - atomic bitwise OR with release ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v | @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_or_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_or_release(int i, atomic_t *v)
{
@@ -417,6 +991,17 @@ atomic_fetch_or_release(int i, atomic_t *v)
return raw_atomic_fetch_or_release(i, v);
}
+/**
+ * atomic_fetch_or_relaxed() - atomic bitwise OR with relaxed ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v | @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_or_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_or_relaxed(int i, atomic_t *v)
{
@@ -424,6 +1009,17 @@ atomic_fetch_or_relaxed(int i, atomic_t *v)
return raw_atomic_fetch_or_relaxed(i, v);
}
+/**
+ * atomic_xor() - atomic bitwise XOR with relaxed ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v ^ @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_xor() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic_xor(int i, atomic_t *v)
{
@@ -431,6 +1027,17 @@ atomic_xor(int i, atomic_t *v)
raw_atomic_xor(i, v);
}
+/**
+ * atomic_fetch_xor() - atomic bitwise XOR with full ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v ^ @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_xor() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_xor(int i, atomic_t *v)
{
@@ -439,6 +1046,17 @@ atomic_fetch_xor(int i, atomic_t *v)
return raw_atomic_fetch_xor(i, v);
}
+/**
+ * atomic_fetch_xor_acquire() - atomic bitwise XOR with acquire ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v ^ @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_xor_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_xor_acquire(int i, atomic_t *v)
{
@@ -446,6 +1064,17 @@ atomic_fetch_xor_acquire(int i, atomic_t *v)
return raw_atomic_fetch_xor_acquire(i, v);
}
+/**
+ * atomic_fetch_xor_release() - atomic bitwise XOR with release ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v ^ @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_xor_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_xor_release(int i, atomic_t *v)
{
@@ -454,6 +1083,17 @@ atomic_fetch_xor_release(int i, atomic_t *v)
return raw_atomic_fetch_xor_release(i, v);
}
+/**
+ * atomic_fetch_xor_relaxed() - atomic bitwise XOR with relaxed ordering
+ * @i: int value
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v ^ @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_xor_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_xor_relaxed(int i, atomic_t *v)
{
@@ -461,6 +1101,17 @@ atomic_fetch_xor_relaxed(int i, atomic_t *v)
return raw_atomic_fetch_xor_relaxed(i, v);
}
+/**
+ * atomic_xchg() - atomic exchange with full ordering
+ * @v: pointer to atomic_t
+ * @new: int value to assign
+ *
+ * Atomically updates @v to @new with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_xchg() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_xchg(atomic_t *v, int new)
{
@@ -469,6 +1120,17 @@ atomic_xchg(atomic_t *v, int new)
return raw_atomic_xchg(v, new);
}
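As a usage sketch (assuming a hypothetical atomic_t counter), atomic_xchg() is a common way to take a pending value and reset it in a single atomic step:

| 	/* drain the accumulated count; later increments start again from zero */
| 	int pending = atomic_xchg(&pending_events, 0);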
+/**
+ * atomic_xchg_acquire() - atomic exchange with acquire ordering
+ * @v: pointer to atomic_t
+ * @new: int value to assign
+ *
+ * Atomically updates @v to @new with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_xchg_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_xchg_acquire(atomic_t *v, int new)
{
@@ -476,6 +1138,17 @@ atomic_xchg_acquire(atomic_t *v, int new)
return raw_atomic_xchg_acquire(v, new);
}
+/**
+ * atomic_xchg_release() - atomic exchange with release ordering
+ * @v: pointer to atomic_t
+ * @new: int value to assign
+ *
+ * Atomically updates @v to @new with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_xchg_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_xchg_release(atomic_t *v, int new)
{
@@ -484,6 +1157,17 @@ atomic_xchg_release(atomic_t *v, int new)
return raw_atomic_xchg_release(v, new);
}
+/**
+ * atomic_xchg_relaxed() - atomic exchange with relaxed ordering
+ * @v: pointer to atomic_t
+ * @new: int value to assign
+ *
+ * Atomically updates @v to @new with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_xchg_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_xchg_relaxed(atomic_t *v, int new)
{
@@ -491,6 +1175,18 @@ atomic_xchg_relaxed(atomic_t *v, int new)
return raw_atomic_xchg_relaxed(v, new);
}
+/**
+ * atomic_cmpxchg() - atomic compare and exchange with full ordering
+ * @v: pointer to atomic_t
+ * @old: int value to compare with
+ * @new: int value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_cmpxchg() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_cmpxchg(atomic_t *v, int old, int new)
{
@@ -499,6 +1195,18 @@ atomic_cmpxchg(atomic_t *v, int old, int new)
return raw_atomic_cmpxchg(v, old, new);
}
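For illustration (minimal sketch, hypothetical atomic_t v), the classic cmpxchg() retry loop compares the returned value against the expected one:

| 	int old, prev = atomic_read(&v);
| 	do {
| 		old = prev;
| 		prev = atomic_cmpxchg(&v, old, old + 1);
| 	} while (prev != old);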
+/**
+ * atomic_cmpxchg_acquire() - atomic compare and exchange with acquire ordering
+ * @v: pointer to atomic_t
+ * @old: int value to compare with
+ * @new: int value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_cmpxchg_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
{
@@ -506,6 +1214,18 @@ atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
return raw_atomic_cmpxchg_acquire(v, old, new);
}
+/**
+ * atomic_cmpxchg_release() - atomic compare and exchange with release ordering
+ * @v: pointer to atomic_t
+ * @old: int value to compare with
+ * @new: int value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_cmpxchg_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_cmpxchg_release(atomic_t *v, int old, int new)
{
@@ -514,6 +1234,18 @@ atomic_cmpxchg_release(atomic_t *v, int old, int new)
return raw_atomic_cmpxchg_release(v, old, new);
}
+/**
+ * atomic_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering
+ * @v: pointer to atomic_t
+ * @old: int value to compare with
+ * @new: int value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_cmpxchg_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
{
@@ -521,6 +1253,19 @@ atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
return raw_atomic_cmpxchg_relaxed(v, old, new);
}
+/**
+ * atomic_try_cmpxchg() - atomic compare and exchange with full ordering
+ * @v: pointer to atomic_t
+ * @old: pointer to int value to compare with
+ * @new: int value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with full ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_try_cmpxchg() there.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
atomic_try_cmpxchg(atomic_t *v, int *old, int new)
{
@@ -530,6 +1275,19 @@ atomic_try_cmpxchg(atomic_t *v, int *old, int new)
return raw_atomic_try_cmpxchg(v, old, new);
}
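Because a failed try_cmpxchg() writes the current value back through @old, the retry loop does not need to re-read @v, unlike the cmpxchg() loop above. A minimal sketch (hypothetical atomic_t v):

| 	int new, old = atomic_read(&v);
| 	do {
| 		new = old + 1;	/* any computation based on the observed value */
| 	} while (!atomic_try_cmpxchg(&v, &old, new));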
+/**
+ * atomic_try_cmpxchg_acquire() - atomic compare and exchange with acquire ordering
+ * @v: pointer to atomic_t
+ * @old: pointer to int value to compare with
+ * @new: int value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with acquire ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_try_cmpxchg_acquire() there.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
{
@@ -538,6 +1296,19 @@ atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
return raw_atomic_try_cmpxchg_acquire(v, old, new);
}
+/**
+ * atomic_try_cmpxchg_release() - atomic compare and exchange with release ordering
+ * @v: pointer to atomic_t
+ * @old: pointer to int value to compare with
+ * @new: int value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with release ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_try_cmpxchg_release() there.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
{
@@ -547,6 +1318,19 @@ atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
return raw_atomic_try_cmpxchg_release(v, old, new);
}
+/**
+ * atomic_try_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering
+ * @v: pointer to atomic_t
+ * @old: pointer to int value to compare with
+ * @new: int value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with relaxed ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_try_cmpxchg_relaxed() there.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
{
@@ -555,6 +1339,17 @@ atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
return raw_atomic_try_cmpxchg_relaxed(v, old, new);
}
+/**
+ * atomic_sub_and_test() - atomic subtract and test if zero with full ordering
+ * @i: int value to subtract
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_sub_and_test() there.
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
static __always_inline bool
atomic_sub_and_test(int i, atomic_t *v)
{
@@ -563,6 +1358,16 @@ atomic_sub_and_test(int i, atomic_t *v)
return raw_atomic_sub_and_test(i, v);
}
+/**
+ * atomic_dec_and_test() - atomic decrement and test if zero with full ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_dec_and_test() there.
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
static __always_inline bool
atomic_dec_and_test(atomic_t *v)
{
@@ -571,6 +1376,16 @@ atomic_dec_and_test(atomic_t *v)
return raw_atomic_dec_and_test(v);
}
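As a sketch of the usual put-side pattern (hypothetical structure; real refcounts would normally use refcount_t):

| 	static void my_obj_put(struct my_obj *obj)
| 	{
| 		if (atomic_dec_and_test(&obj->refs))
| 			kfree(obj);
| 	}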
+/**
+ * atomic_inc_and_test() - atomic increment and test if zero with full ordering
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_inc_and_test() there.
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
static __always_inline bool
atomic_inc_and_test(atomic_t *v)
{
@@ -579,6 +1394,17 @@ atomic_inc_and_test(atomic_t *v)
return raw_atomic_inc_and_test(v);
}
+/**
+ * atomic_add_negative() - atomic add and test if negative with full ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_add_negative() there.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
atomic_add_negative(int i, atomic_t *v)
{
@@ -587,6 +1413,17 @@ atomic_add_negative(int i, atomic_t *v)
return raw_atomic_add_negative(i, v);
}
+/**
+ * atomic_add_negative_acquire() - atomic add and test if negative with acquire ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_add_negative_acquire() there.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
atomic_add_negative_acquire(int i, atomic_t *v)
{
@@ -594,6 +1431,17 @@ atomic_add_negative_acquire(int i, atomic_t *v)
return raw_atomic_add_negative_acquire(i, v);
}
+/**
+ * atomic_add_negative_release() - atomic add and test if negative with release ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_add_negative_release() there.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
atomic_add_negative_release(int i, atomic_t *v)
{
@@ -602,6 +1450,17 @@ atomic_add_negative_release(int i, atomic_t *v)
return raw_atomic_add_negative_release(i, v);
}
+/**
+ * atomic_add_negative_relaxed() - atomic add and test if negative with relaxed ordering
+ * @i: int value to add
+ * @v: pointer to atomic_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_add_negative_relaxed() there.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
atomic_add_negative_relaxed(int i, atomic_t *v)
{
@@ -609,6 +1468,18 @@ atomic_add_negative_relaxed(int i, atomic_t *v)
return raw_atomic_add_negative_relaxed(i, v);
}
+/**
+ * atomic_fetch_add_unless() - atomic add unless value with full ordering
+ * @v: pointer to atomic_t
+ * @a: int value to add
+ * @u: int value to compare with
+ *
+ * If (@v != @u), atomically updates @v to (@v + @a) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_fetch_add_unless() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline int
atomic_fetch_add_unless(atomic_t *v, int a, int u)
{
@@ -617,6 +1488,18 @@ atomic_fetch_add_unless(atomic_t *v, int a, int u)
return raw_atomic_fetch_add_unless(v, a, u);
}
+/**
+ * atomic_add_unless() - atomic add unless value with full ordering
+ * @v: pointer to atomic_t
+ * @a: int value to add
+ * @u: int value to compare with
+ *
+ * If (@v != @u), atomically updates @v to (@v + @a) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_add_unless() there.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
atomic_add_unless(atomic_t *v, int a, int u)
{
@@ -625,6 +1508,16 @@ atomic_add_unless(atomic_t *v, int a, int u)
return raw_atomic_add_unless(v, a, u);
}
+/**
+ * atomic_inc_not_zero() - atomic increment unless zero with full ordering
+ * @v: pointer to atomic_t
+ *
+ * If (@v != 0), atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_inc_not_zero() there.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
atomic_inc_not_zero(atomic_t *v)
{
@@ -633,6 +1526,16 @@ atomic_inc_not_zero(atomic_t *v)
return raw_atomic_inc_not_zero(v);
}
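A minimal sketch of the matching lookup-side pattern (same hypothetical structure as above):

| 	/* only take a reference if the object is still live */
| 	if (!atomic_inc_not_zero(&obj->refs))
| 		return NULL;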
+/**
+ * atomic_inc_unless_negative() - atomic increment unless negative with full ordering
+ * @v: pointer to atomic_t
+ *
+ * If (@v >= 0), atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_inc_unless_negative() there.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
atomic_inc_unless_negative(atomic_t *v)
{
@@ -641,6 +1544,16 @@ atomic_inc_unless_negative(atomic_t *v)
return raw_atomic_inc_unless_negative(v);
}
+/**
+ * atomic_dec_unless_positive() - atomic decrement unless positive with full ordering
+ * @v: pointer to atomic_t
+ *
+ * If (@v <= 0), atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_dec_unless_positive() there.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
atomic_dec_unless_positive(atomic_t *v)
{
@@ -649,6 +1562,16 @@ atomic_dec_unless_positive(atomic_t *v)
return raw_atomic_dec_unless_positive(v);
}
+/**
+ * atomic_dec_if_positive() - atomic decrement if positive with full ordering
+ * @v: pointer to atomic_t
+ *
+ * If (@v > 0), atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_dec_if_positive() there.
+ *
+ * Return: The old value of (@v - 1), regardless of whether @v was updated.
+ */
static __always_inline int
atomic_dec_if_positive(atomic_t *v)
{
@@ -657,6 +1580,16 @@ atomic_dec_if_positive(atomic_t *v)
return raw_atomic_dec_if_positive(v);
}
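Note the int return value: a usage sketch (hypothetical slot counter) checks for a negative result to detect that nothing was left to take:

| 	if (atomic_dec_if_positive(&free_slots) < 0)
| 		return -EBUSY;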
+/**
+ * atomic64_read() - atomic load with relaxed ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically loads the value of @v with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_read() there.
+ *
+ * Return: The value loaded from @v.
+ */
static __always_inline s64
atomic64_read(const atomic64_t *v)
{
@@ -664,6 +1597,16 @@ atomic64_read(const atomic64_t *v)
return raw_atomic64_read(v);
}
+/**
+ * atomic64_read_acquire() - atomic load with acquire ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically loads the value of @v with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_read_acquire() there.
+ *
+ * Return: The value loaded from @v.
+ */
static __always_inline s64
atomic64_read_acquire(const atomic64_t *v)
{
@@ -671,6 +1614,17 @@ atomic64_read_acquire(const atomic64_t *v)
return raw_atomic64_read_acquire(v);
}
+/**
+ * atomic64_set() - atomic set with relaxed ordering
+ * @v: pointer to atomic64_t
+ * @i: s64 value to assign
+ *
+ * Atomically sets @v to @i with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_set() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic64_set(atomic64_t *v, s64 i)
{
@@ -678,6 +1632,17 @@ atomic64_set(atomic64_t *v, s64 i)
raw_atomic64_set(v, i);
}
+/**
+ * atomic64_set_release() - atomic set with release ordering
+ * @v: pointer to atomic64_t
+ * @i: s64 value to assign
+ *
+ * Atomically sets @v to @i with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_set_release() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic64_set_release(atomic64_t *v, s64 i)
{
@@ -686,6 +1651,17 @@ atomic64_set_release(atomic64_t *v, s64 i)
raw_atomic64_set_release(v, i);
}
+/**
+ * atomic64_add() - atomic add with relaxed ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_add() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic64_add(s64 i, atomic64_t *v)
{
@@ -693,6 +1669,17 @@ atomic64_add(s64 i, atomic64_t *v)
raw_atomic64_add(i, v);
}
+/**
+ * atomic64_add_return() - atomic add with full ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_add_return() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
atomic64_add_return(s64 i, atomic64_t *v)
{
@@ -701,6 +1688,17 @@ atomic64_add_return(s64 i, atomic64_t *v)
return raw_atomic64_add_return(i, v);
}
+/**
+ * atomic64_add_return_acquire() - atomic add with acquire ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_add_return_acquire() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
atomic64_add_return_acquire(s64 i, atomic64_t *v)
{
@@ -708,6 +1706,17 @@ atomic64_add_return_acquire(s64 i, atomic64_t *v)
return raw_atomic64_add_return_acquire(i, v);
}
+/**
+ * atomic64_add_return_release() - atomic add with release ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_add_return_release() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
atomic64_add_return_release(s64 i, atomic64_t *v)
{
@@ -716,6 +1725,17 @@ atomic64_add_return_release(s64 i, atomic64_t *v)
return raw_atomic64_add_return_release(i, v);
}
+/**
+ * atomic64_add_return_relaxed() - atomic add with relaxed ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_add_return_relaxed() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
atomic64_add_return_relaxed(s64 i, atomic64_t *v)
{
@@ -723,6 +1743,17 @@ atomic64_add_return_relaxed(s64 i, atomic64_t *v)
return raw_atomic64_add_return_relaxed(i, v);
}
+/**
+ * atomic64_fetch_add() - atomic add with full ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_add() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_add(s64 i, atomic64_t *v)
{
@@ -731,6 +1762,17 @@ atomic64_fetch_add(s64 i, atomic64_t *v)
return raw_atomic64_fetch_add(i, v);
}
+/**
+ * atomic64_fetch_add_acquire() - atomic add with acquire ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_add_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
{
@@ -738,6 +1780,17 @@ atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
return raw_atomic64_fetch_add_acquire(i, v);
}
+/**
+ * atomic64_fetch_add_release() - atomic add with release ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_add_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_add_release(s64 i, atomic64_t *v)
{
@@ -746,6 +1799,17 @@ atomic64_fetch_add_release(s64 i, atomic64_t *v)
return raw_atomic64_fetch_add_release(i, v);
}
+/**
+ * atomic64_fetch_add_relaxed() - atomic add with relaxed ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_add_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_add_relaxed(s64 i, atomic64_t *v)
{
@@ -753,6 +1817,17 @@ atomic64_fetch_add_relaxed(s64 i, atomic64_t *v)
return raw_atomic64_fetch_add_relaxed(i, v);
}
+/**
+ * atomic64_sub() - atomic subtract with relaxed ordering
+ * @i: s64 value to subtract
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_sub() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic64_sub(s64 i, atomic64_t *v)
{
@@ -760,6 +1835,17 @@ atomic64_sub(s64 i, atomic64_t *v)
raw_atomic64_sub(i, v);
}
+/**
+ * atomic64_sub_return() - atomic subtract with full ordering
+ * @i: s64 value to subtract
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_sub_return() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
atomic64_sub_return(s64 i, atomic64_t *v)
{
@@ -768,6 +1854,17 @@ atomic64_sub_return(s64 i, atomic64_t *v)
return raw_atomic64_sub_return(i, v);
}
+/**
+ * atomic64_sub_return_acquire() - atomic subtract with acquire ordering
+ * @i: s64 value to subtract
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_sub_return_acquire() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
atomic64_sub_return_acquire(s64 i, atomic64_t *v)
{
@@ -775,6 +1872,17 @@ atomic64_sub_return_acquire(s64 i, atomic64_t *v)
return raw_atomic64_sub_return_acquire(i, v);
}
+/**
+ * atomic64_sub_return_release() - atomic subtract with release ordering
+ * @i: s64 value to subtract
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_sub_return_release() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
atomic64_sub_return_release(s64 i, atomic64_t *v)
{
@@ -783,6 +1891,17 @@ atomic64_sub_return_release(s64 i, atomic64_t *v)
return raw_atomic64_sub_return_release(i, v);
}
+/**
+ * atomic64_sub_return_relaxed() - atomic subtract with relaxed ordering
+ * @i: s64 value to subtract
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_sub_return_relaxed() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
atomic64_sub_return_relaxed(s64 i, atomic64_t *v)
{
@@ -790,6 +1909,17 @@ atomic64_sub_return_relaxed(s64 i, atomic64_t *v)
return raw_atomic64_sub_return_relaxed(i, v);
}
+/**
+ * atomic64_fetch_sub() - atomic subtract with full ordering
+ * @i: s64 value to subtract
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_sub() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_sub(s64 i, atomic64_t *v)
{
@@ -798,6 +1928,17 @@ atomic64_fetch_sub(s64 i, atomic64_t *v)
return raw_atomic64_fetch_sub(i, v);
}
+/**
+ * atomic64_fetch_sub_acquire() - atomic subtract with acquire ordering
+ * @i: s64 value to subtract
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_sub_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
{
@@ -805,6 +1946,17 @@ atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
return raw_atomic64_fetch_sub_acquire(i, v);
}
+/**
+ * atomic64_fetch_sub_release() - atomic subtract with release ordering
+ * @i: s64 value to subtract
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_sub_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_sub_release(s64 i, atomic64_t *v)
{
@@ -813,6 +1965,17 @@ atomic64_fetch_sub_release(s64 i, atomic64_t *v)
return raw_atomic64_fetch_sub_release(i, v);
}
+/**
+ * atomic64_fetch_sub_relaxed() - atomic subtract with relaxed ordering
+ * @i: s64 value to subtract
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_sub_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v)
{
@@ -820,6 +1983,16 @@ atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v)
return raw_atomic64_fetch_sub_relaxed(i, v);
}
+/**
+ * atomic64_inc() - atomic increment with relaxed ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + 1) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_inc() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic64_inc(atomic64_t *v)
{
@@ -827,6 +2000,16 @@ atomic64_inc(atomic64_t *v)
raw_atomic64_inc(v);
}
+/**
+ * atomic64_inc_return() - atomic increment with full ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_inc_return() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
atomic64_inc_return(atomic64_t *v)
{
@@ -835,6 +2018,16 @@ atomic64_inc_return(atomic64_t *v)
return raw_atomic64_inc_return(v);
}
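As a usage sketch (hypothetical id counter), the 64-bit inc_return is handy for allocating ids that will not realistically wrap:

| 	u64 id = (u64)atomic64_inc_return(&next_id);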
+/**
+ * atomic64_inc_return_acquire() - atomic increment with acquire ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + 1) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_inc_return_acquire() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
atomic64_inc_return_acquire(atomic64_t *v)
{
@@ -842,6 +2035,16 @@ atomic64_inc_return_acquire(atomic64_t *v)
return raw_atomic64_inc_return_acquire(v);
}
+/**
+ * atomic64_inc_return_release() - atomic increment with release ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + 1) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_inc_return_release() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
atomic64_inc_return_release(atomic64_t *v)
{
@@ -850,6 +2053,16 @@ atomic64_inc_return_release(atomic64_t *v)
return raw_atomic64_inc_return_release(v);
}
+/**
+ * atomic64_inc_return_relaxed() - atomic increment with relaxed ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + 1) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_inc_return_relaxed() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
atomic64_inc_return_relaxed(atomic64_t *v)
{
@@ -857,6 +2070,16 @@ atomic64_inc_return_relaxed(atomic64_t *v)
return raw_atomic64_inc_return_relaxed(v);
}
+/**
+ * atomic64_fetch_inc() - atomic increment with full ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_inc() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_inc(atomic64_t *v)
{
@@ -865,6 +2088,16 @@ atomic64_fetch_inc(atomic64_t *v)
return raw_atomic64_fetch_inc(v);
}
+/**
+ * atomic64_fetch_inc_acquire() - atomic increment with acquire ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + 1) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_inc_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_inc_acquire(atomic64_t *v)
{
@@ -872,6 +2105,16 @@ atomic64_fetch_inc_acquire(atomic64_t *v)
return raw_atomic64_fetch_inc_acquire(v);
}
+/**
+ * atomic64_fetch_inc_release() - atomic increment with release ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + 1) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_inc_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_inc_release(atomic64_t *v)
{
@@ -880,6 +2123,16 @@ atomic64_fetch_inc_release(atomic64_t *v)
return raw_atomic64_fetch_inc_release(v);
}
+/**
+ * atomic64_fetch_inc_relaxed() - atomic increment with relaxed ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + 1) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_inc_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_inc_relaxed(atomic64_t *v)
{
@@ -887,6 +2140,16 @@ atomic64_fetch_inc_relaxed(atomic64_t *v)
return raw_atomic64_fetch_inc_relaxed(v);
}
+/**
+ * atomic64_dec() - atomic decrement with relaxed ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - 1) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_dec() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic64_dec(atomic64_t *v)
{
@@ -894,6 +2157,16 @@ atomic64_dec(atomic64_t *v)
raw_atomic64_dec(v);
}
+/**
+ * atomic64_dec_return() - atomic decrement with full ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_dec_return() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
atomic64_dec_return(atomic64_t *v)
{
@@ -902,6 +2175,16 @@ atomic64_dec_return(atomic64_t *v)
return raw_atomic64_dec_return(v);
}
+/**
+ * atomic64_dec_return_acquire() - atomic decrement with acquire ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - 1) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_dec_return_acquire() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
atomic64_dec_return_acquire(atomic64_t *v)
{
@@ -909,6 +2192,16 @@ atomic64_dec_return_acquire(atomic64_t *v)
return raw_atomic64_dec_return_acquire(v);
}
+/**
+ * atomic64_dec_return_release() - atomic decrement with release ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - 1) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_dec_return_release() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
atomic64_dec_return_release(atomic64_t *v)
{
@@ -917,6 +2210,16 @@ atomic64_dec_return_release(atomic64_t *v)
return raw_atomic64_dec_return_release(v);
}
+/**
+ * atomic64_dec_return_relaxed() - atomic decrement with relaxed ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - 1) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_dec_return_relaxed() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline s64
atomic64_dec_return_relaxed(atomic64_t *v)
{
@@ -924,6 +2227,16 @@ atomic64_dec_return_relaxed(atomic64_t *v)
return raw_atomic64_dec_return_relaxed(v);
}
+/**
+ * atomic64_fetch_dec() - atomic decrement with full ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_dec() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_dec(atomic64_t *v)
{
@@ -932,6 +2245,16 @@ atomic64_fetch_dec(atomic64_t *v)
return raw_atomic64_fetch_dec(v);
}
+/**
+ * atomic64_fetch_dec_acquire() - atomic decrement with acquire ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - 1) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_dec_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_dec_acquire(atomic64_t *v)
{
@@ -939,6 +2262,16 @@ atomic64_fetch_dec_acquire(atomic64_t *v)
return raw_atomic64_fetch_dec_acquire(v);
}
+/**
+ * atomic64_fetch_dec_release() - atomic decrement with release ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - 1) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_dec_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_dec_release(atomic64_t *v)
{
@@ -947,6 +2280,16 @@ atomic64_fetch_dec_release(atomic64_t *v)
return raw_atomic64_fetch_dec_release(v);
}
+/**
+ * atomic64_fetch_dec_relaxed() - atomic decrement with relaxed ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - 1) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_dec_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_dec_relaxed(atomic64_t *v)
{
@@ -954,6 +2297,17 @@ atomic64_fetch_dec_relaxed(atomic64_t *v)
return raw_atomic64_fetch_dec_relaxed(v);
}
+/**
+ * atomic64_and() - atomic bitwise AND with relaxed ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v & @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_and() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic64_and(s64 i, atomic64_t *v)
{
@@ -961,6 +2315,17 @@ atomic64_and(s64 i, atomic64_t *v)
raw_atomic64_and(i, v);
}
+/**
+ * atomic64_fetch_and() - atomic bitwise AND with full ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v & @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_and() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_and(s64 i, atomic64_t *v)
{
@@ -969,6 +2334,17 @@ atomic64_fetch_and(s64 i, atomic64_t *v)
return raw_atomic64_fetch_and(i, v);
}
+/**
+ * atomic64_fetch_and_acquire() - atomic bitwise AND with acquire ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v & @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_and_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
{
@@ -976,6 +2352,17 @@ atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
return raw_atomic64_fetch_and_acquire(i, v);
}
+/**
+ * atomic64_fetch_and_release() - atomic bitwise AND with release ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v & @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_and_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_and_release(s64 i, atomic64_t *v)
{
@@ -984,6 +2371,17 @@ atomic64_fetch_and_release(s64 i, atomic64_t *v)
return raw_atomic64_fetch_and_release(i, v);
}
+/**
+ * atomic64_fetch_and_relaxed() - atomic bitwise AND with relaxed ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v & @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_and_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_and_relaxed(s64 i, atomic64_t *v)
{
@@ -991,6 +2389,17 @@ atomic64_fetch_and_relaxed(s64 i, atomic64_t *v)
return raw_atomic64_fetch_and_relaxed(i, v);
}
+/**
+ * atomic64_andnot() - atomic bitwise AND NOT with relaxed ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v & ~@i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_andnot() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic64_andnot(s64 i, atomic64_t *v)
{
@@ -998,6 +2407,17 @@ atomic64_andnot(s64 i, atomic64_t *v)
raw_atomic64_andnot(i, v);
}
+/**
+ * atomic64_fetch_andnot() - atomic bitwise AND NOT with full ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v & ~@i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_andnot() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_andnot(s64 i, atomic64_t *v)
{
@@ -1006,6 +2426,17 @@ atomic64_fetch_andnot(s64 i, atomic64_t *v)
return raw_atomic64_fetch_andnot(i, v);
}
+/**
+ * atomic64_fetch_andnot_acquire() - atomic bitwise AND NOT with acquire ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v & ~@i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_andnot_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
{
@@ -1013,6 +2444,17 @@ atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
return raw_atomic64_fetch_andnot_acquire(i, v);
}
+/**
+ * atomic64_fetch_andnot_release() - atomic bitwise AND NOT with release ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v & ~@i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_andnot_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
{
@@ -1021,6 +2463,17 @@ atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
return raw_atomic64_fetch_andnot_release(i, v);
}
+/**
+ * atomic64_fetch_andnot_relaxed() - atomic bitwise AND NOT with relaxed ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v & ~@i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_andnot_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
{
@@ -1028,6 +2481,17 @@ atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
return raw_atomic64_fetch_andnot_relaxed(i, v);
}
+/**
+ * atomic64_or() - atomic bitwise OR with relaxed ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v | @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_or() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic64_or(s64 i, atomic64_t *v)
{
@@ -1035,6 +2499,17 @@ atomic64_or(s64 i, atomic64_t *v)
raw_atomic64_or(i, v);
}
+/**
+ * atomic64_fetch_or() - atomic bitwise OR with full ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v | @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_or() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_or(s64 i, atomic64_t *v)
{
@@ -1043,6 +2518,17 @@ atomic64_fetch_or(s64 i, atomic64_t *v)
return raw_atomic64_fetch_or(i, v);
}
+/**
+ * atomic64_fetch_or_acquire() - atomic bitwise OR with acquire ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v | @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_or_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
{
@@ -1050,6 +2536,17 @@ atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
return raw_atomic64_fetch_or_acquire(i, v);
}
+/**
+ * atomic64_fetch_or_release() - atomic bitwise OR with release ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v | @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_or_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_or_release(s64 i, atomic64_t *v)
{
@@ -1058,6 +2555,17 @@ atomic64_fetch_or_release(s64 i, atomic64_t *v)
return raw_atomic64_fetch_or_release(i, v);
}
+/**
+ * atomic64_fetch_or_relaxed() - atomic bitwise OR with relaxed ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v | @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_or_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_or_relaxed(s64 i, atomic64_t *v)
{
@@ -1065,6 +2573,17 @@ atomic64_fetch_or_relaxed(s64 i, atomic64_t *v)
return raw_atomic64_fetch_or_relaxed(i, v);
}
+/**
+ * atomic64_xor() - atomic bitwise XOR with relaxed ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v ^ @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_xor() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic64_xor(s64 i, atomic64_t *v)
{
@@ -1072,6 +2591,17 @@ atomic64_xor(s64 i, atomic64_t *v)
raw_atomic64_xor(i, v);
}
+/**
+ * atomic64_fetch_xor() - atomic bitwise XOR with full ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v ^ @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_xor() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_xor(s64 i, atomic64_t *v)
{
@@ -1080,6 +2610,17 @@ atomic64_fetch_xor(s64 i, atomic64_t *v)
return raw_atomic64_fetch_xor(i, v);
}
+/**
+ * atomic64_fetch_xor_acquire() - atomic bitwise XOR with acquire ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v ^ @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_xor_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
{
@@ -1087,6 +2628,17 @@ atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
return raw_atomic64_fetch_xor_acquire(i, v);
}
+/**
+ * atomic64_fetch_xor_release() - atomic bitwise XOR with release ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v ^ @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_xor_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_xor_release(s64 i, atomic64_t *v)
{
@@ -1095,6 +2647,17 @@ atomic64_fetch_xor_release(s64 i, atomic64_t *v)
return raw_atomic64_fetch_xor_release(i, v);
}
+/**
+ * atomic64_fetch_xor_relaxed() - atomic bitwise XOR with relaxed ordering
+ * @i: s64 value
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v ^ @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_xor_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v)
{
@@ -1102,6 +2665,17 @@ atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v)
return raw_atomic64_fetch_xor_relaxed(i, v);
}
+/**
+ * atomic64_xchg() - atomic exchange with full ordering
+ * @v: pointer to atomic64_t
+ * @new: s64 value to assign
+ *
+ * Atomically updates @v to @new with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_xchg() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_xchg(atomic64_t *v, s64 new)
{
@@ -1110,6 +2684,17 @@ atomic64_xchg(atomic64_t *v, s64 new)
return raw_atomic64_xchg(v, new);
}
+/**
+ * atomic64_xchg_acquire() - atomic exchange with acquire ordering
+ * @v: pointer to atomic64_t
+ * @new: s64 value to assign
+ *
+ * Atomically updates @v to @new with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_xchg_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_xchg_acquire(atomic64_t *v, s64 new)
{
@@ -1117,6 +2702,17 @@ atomic64_xchg_acquire(atomic64_t *v, s64 new)
return raw_atomic64_xchg_acquire(v, new);
}
+/**
+ * atomic64_xchg_release() - atomic exchange with release ordering
+ * @v: pointer to atomic64_t
+ * @new: s64 value to assign
+ *
+ * Atomically updates @v to @new with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_xchg_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_xchg_release(atomic64_t *v, s64 new)
{
@@ -1125,6 +2721,17 @@ atomic64_xchg_release(atomic64_t *v, s64 new)
return raw_atomic64_xchg_release(v, new);
}
+/**
+ * atomic64_xchg_relaxed() - atomic exchange with relaxed ordering
+ * @v: pointer to atomic64_t
+ * @new: s64 value to assign
+ *
+ * Atomically updates @v to @new with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_xchg_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_xchg_relaxed(atomic64_t *v, s64 new)
{
@@ -1132,6 +2739,18 @@ atomic64_xchg_relaxed(atomic64_t *v, s64 new)
return raw_atomic64_xchg_relaxed(v, new);
}
+/**
+ * atomic64_cmpxchg() - atomic compare and exchange with full ordering
+ * @v: pointer to atomic64_t
+ * @old: s64 value to compare with
+ * @new: s64 value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_cmpxchg() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
{
@@ -1140,6 +2759,18 @@ atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
return raw_atomic64_cmpxchg(v, old, new);
}
+/**
+ * atomic64_cmpxchg_acquire() - atomic compare and exchange with acquire ordering
+ * @v: pointer to atomic64_t
+ * @old: s64 value to compare with
+ * @new: s64 value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_cmpxchg_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
{
@@ -1147,6 +2778,18 @@ atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
return raw_atomic64_cmpxchg_acquire(v, old, new);
}
+/**
+ * atomic64_cmpxchg_release() - atomic compare and exchange with release ordering
+ * @v: pointer to atomic64_t
+ * @old: s64 value to compare with
+ * @new: s64 value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_cmpxchg_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
{
@@ -1155,6 +2798,18 @@ atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
return raw_atomic64_cmpxchg_release(v, old, new);
}
+/**
+ * atomic64_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering
+ * @v: pointer to atomic64_t
+ * @old: s64 value to compare with
+ * @new: s64 value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_cmpxchg_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
{
@@ -1162,6 +2817,19 @@ atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
return raw_atomic64_cmpxchg_relaxed(v, old, new);
}
+/**
+ * atomic64_try_cmpxchg() - atomic compare and exchange with full ordering
+ * @v: pointer to atomic64_t
+ * @old: pointer to s64 value to compare with
+ * @new: s64 value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with full ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_try_cmpxchg() there.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
{
@@ -1171,6 +2839,19 @@ atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
return raw_atomic64_try_cmpxchg(v, old, new);
}
+/**
+ * atomic64_try_cmpxchg_acquire() - atomic compare and exchange with acquire ordering
+ * @v: pointer to atomic64_t
+ * @old: pointer to s64 value to compare with
+ * @new: s64 value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with acquire ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_try_cmpxchg_acquire() there.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
{
@@ -1179,6 +2860,19 @@ atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
return raw_atomic64_try_cmpxchg_acquire(v, old, new);
}
+/**
+ * atomic64_try_cmpxchg_release() - atomic compare and exchange with release ordering
+ * @v: pointer to atomic64_t
+ * @old: pointer to s64 value to compare with
+ * @new: s64 value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with release ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_try_cmpxchg_release() there.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
{
@@ -1188,6 +2882,19 @@ atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
return raw_atomic64_try_cmpxchg_release(v, old, new);
}
+/**
+ * atomic64_try_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering
+ * @v: pointer to atomic64_t
+ * @old: pointer to s64 value to compare with
+ * @new: s64 value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with relaxed ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_try_cmpxchg_relaxed() there.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
{
@@ -1196,6 +2903,17 @@ atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
return raw_atomic64_try_cmpxchg_relaxed(v, old, new);
}
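
For illustration only, the try_cmpxchg() family is intended to be used in a
read-modify-write loop, relying on @old being updated with the current value
of @v whenever the exchange fails. A minimal sketch, using a hypothetical
helper and limit (not part of the generated headers):

| static bool example_inc_below(atomic64_t *v, s64 limit)
| {
| 	s64 old = atomic64_read(v);
|
| 	do {
| 		if (old >= limit)
| 			return false;
| 		/* On failure, @old now holds the current value of @v. */
| 	} while (!atomic64_try_cmpxchg(v, &old, old + 1));
|
| 	return true;
| }
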
+/**
+ * atomic64_sub_and_test() - atomic subtract and test if zero with full ordering
+ * @i: s64 value to subtract
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_sub_and_test() there.
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
static __always_inline bool
atomic64_sub_and_test(s64 i, atomic64_t *v)
{
@@ -1204,6 +2922,16 @@ atomic64_sub_and_test(s64 i, atomic64_t *v)
return raw_atomic64_sub_and_test(i, v);
}
+/**
+ * atomic64_dec_and_test() - atomic decrement and test if zero with full ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_dec_and_test() there.
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
static __always_inline bool
atomic64_dec_and_test(atomic64_t *v)
{
@@ -1212,6 +2940,16 @@ atomic64_dec_and_test(atomic64_t *v)
return raw_atomic64_dec_and_test(v);
}
+/**
+ * atomic64_inc_and_test() - atomic increment and test if zero with full ordering
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_inc_and_test() there.
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
static __always_inline bool
atomic64_inc_and_test(atomic64_t *v)
{
@@ -1220,6 +2958,17 @@ atomic64_inc_and_test(atomic64_t *v)
return raw_atomic64_inc_and_test(v);
}
+/**
+ * atomic64_add_negative() - atomic add and test if negative with full ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_add_negative() there.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
atomic64_add_negative(s64 i, atomic64_t *v)
{
@@ -1228,6 +2977,17 @@ atomic64_add_negative(s64 i, atomic64_t *v)
return raw_atomic64_add_negative(i, v);
}
+/**
+ * atomic64_add_negative_acquire() - atomic add and test if negative with acquire ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_add_negative_acquire() there.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
atomic64_add_negative_acquire(s64 i, atomic64_t *v)
{
@@ -1235,6 +2995,17 @@ atomic64_add_negative_acquire(s64 i, atomic64_t *v)
return raw_atomic64_add_negative_acquire(i, v);
}
+/**
+ * atomic64_add_negative_release() - atomic add and test if negative with release ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_add_negative_release() there.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
atomic64_add_negative_release(s64 i, atomic64_t *v)
{
@@ -1243,6 +3014,17 @@ atomic64_add_negative_release(s64 i, atomic64_t *v)
return raw_atomic64_add_negative_release(i, v);
}
+/**
+ * atomic64_add_negative_relaxed() - atomic add and test if negative with relaxed ordering
+ * @i: s64 value to add
+ * @v: pointer to atomic64_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_add_negative_relaxed() there.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
{
@@ -1250,6 +3032,18 @@ atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
return raw_atomic64_add_negative_relaxed(i, v);
}
+/**
+ * atomic64_fetch_add_unless() - atomic add unless value with full ordering
+ * @v: pointer to atomic64_t
+ * @a: s64 value to add
+ * @u: s64 value to compare with
+ *
+ * If (@v != @u), atomically updates @v to (@v + @a) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_fetch_add_unless() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline s64
atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
{
@@ -1258,6 +3052,18 @@ atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
return raw_atomic64_fetch_add_unless(v, a, u);
}
+/**
+ * atomic64_add_unless() - atomic add unless value with full ordering
+ * @v: pointer to atomic64_t
+ * @a: s64 value to add
+ * @u: s64 value to compare with
+ *
+ * If (@v != @u), atomically updates @v to (@v + @a) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_add_unless() there.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
{
@@ -1266,6 +3072,16 @@ atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
return raw_atomic64_add_unless(v, a, u);
}
+/**
+ * atomic64_inc_not_zero() - atomic increment unless zero with full ordering
+ * @v: pointer to atomic64_t
+ *
+ * If (@v != 0), atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_inc_not_zero() there.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
atomic64_inc_not_zero(atomic64_t *v)
{
@@ -1274,6 +3090,16 @@ atomic64_inc_not_zero(atomic64_t *v)
return raw_atomic64_inc_not_zero(v);
}
+/**
+ * atomic64_inc_unless_negative() - atomic increment unless negative with full ordering
+ * @v: pointer to atomic64_t
+ *
+ * If (@v >= 0), atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_inc_unless_negative() there.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
atomic64_inc_unless_negative(atomic64_t *v)
{
@@ -1282,6 +3108,16 @@ atomic64_inc_unless_negative(atomic64_t *v)
return raw_atomic64_inc_unless_negative(v);
}
+/**
+ * atomic64_dec_unless_positive() - atomic decrement unless positive with full ordering
+ * @v: pointer to atomic64_t
+ *
+ * If (@v <= 0), atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_dec_unless_positive() there.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
atomic64_dec_unless_positive(atomic64_t *v)
{
@@ -1290,6 +3126,16 @@ atomic64_dec_unless_positive(atomic64_t *v)
return raw_atomic64_dec_unless_positive(v);
}
+/**
+ * atomic64_dec_if_positive() - atomic decrement if positive with full ordering
+ * @v: pointer to atomic64_t
+ *
+ * If (@v > 0), atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic64_dec_if_positive() there.
+ *
+ * Return: The old value of (@v - 1), regardless of whether @v was updated.
+ */
static __always_inline s64
atomic64_dec_if_positive(atomic64_t *v)
{
@@ -1298,6 +3144,16 @@ atomic64_dec_if_positive(atomic64_t *v)
return raw_atomic64_dec_if_positive(v);
}
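
For illustration only, dec_if_positive() returns the decremented value rather
than a boolean, so callers test the sign of the result. A minimal sketch with
a hypothetical counter (not part of the generated headers):

| static bool example_take_slot(atomic64_t *available)
| {
| 	/* Returns (old - 1); a negative result means @available was unchanged. */
| 	return atomic64_dec_if_positive(available) >= 0;
| }
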
+/**
+ * atomic_long_read() - atomic load with relaxed ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically loads the value of @v with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_read() there.
+ *
+ * Return: The value loaded from @v.
+ */
static __always_inline long
atomic_long_read(const atomic_long_t *v)
{
@@ -1305,6 +3161,16 @@ atomic_long_read(const atomic_long_t *v)
return raw_atomic_long_read(v);
}
+/**
+ * atomic_long_read_acquire() - atomic load with acquire ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically loads the value of @v with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_read_acquire() there.
+ *
+ * Return: The value loaded from @v.
+ */
static __always_inline long
atomic_long_read_acquire(const atomic_long_t *v)
{
@@ -1312,6 +3178,17 @@ atomic_long_read_acquire(const atomic_long_t *v)
return raw_atomic_long_read_acquire(v);
}
+/**
+ * atomic_long_set() - atomic set with relaxed ordering
+ * @v: pointer to atomic_long_t
+ * @i: long value to assign
+ *
+ * Atomically sets @v to @i with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_set() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic_long_set(atomic_long_t *v, long i)
{
@@ -1319,6 +3196,17 @@ atomic_long_set(atomic_long_t *v, long i)
raw_atomic_long_set(v, i);
}
+/**
+ * atomic_long_set_release() - atomic set with release ordering
+ * @v: pointer to atomic_long_t
+ * @i: long value to assign
+ *
+ * Atomically sets @v to @i with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_set_release() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic_long_set_release(atomic_long_t *v, long i)
{
@@ -1327,6 +3215,17 @@ atomic_long_set_release(atomic_long_t *v, long i)
raw_atomic_long_set_release(v, i);
}
+/**
+ * atomic_long_add() - atomic add with relaxed ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_add() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic_long_add(long i, atomic_long_t *v)
{
@@ -1334,6 +3233,17 @@ atomic_long_add(long i, atomic_long_t *v)
raw_atomic_long_add(i, v);
}
+/**
+ * atomic_long_add_return() - atomic add with full ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_add_return() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
atomic_long_add_return(long i, atomic_long_t *v)
{
@@ -1342,6 +3252,17 @@ atomic_long_add_return(long i, atomic_long_t *v)
return raw_atomic_long_add_return(i, v);
}
+/**
+ * atomic_long_add_return_acquire() - atomic add with acquire ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_add_return_acquire() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
atomic_long_add_return_acquire(long i, atomic_long_t *v)
{
@@ -1349,6 +3270,17 @@ atomic_long_add_return_acquire(long i, atomic_long_t *v)
return raw_atomic_long_add_return_acquire(i, v);
}
+/**
+ * atomic_long_add_return_release() - atomic add with release ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_add_return_release() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
atomic_long_add_return_release(long i, atomic_long_t *v)
{
@@ -1357,6 +3289,17 @@ atomic_long_add_return_release(long i, atomic_long_t *v)
return raw_atomic_long_add_return_release(i, v);
}
+/**
+ * atomic_long_add_return_relaxed() - atomic add with relaxed ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_add_return_relaxed() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
atomic_long_add_return_relaxed(long i, atomic_long_t *v)
{
@@ -1364,6 +3307,17 @@ atomic_long_add_return_relaxed(long i, atomic_long_t *v)
return raw_atomic_long_add_return_relaxed(i, v);
}
+/**
+ * atomic_long_fetch_add() - atomic add with full ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_add() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_add(long i, atomic_long_t *v)
{
@@ -1372,6 +3326,17 @@ atomic_long_fetch_add(long i, atomic_long_t *v)
return raw_atomic_long_fetch_add(i, v);
}
+/**
+ * atomic_long_fetch_add_acquire() - atomic add with acquire ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_add_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
{
@@ -1379,6 +3344,17 @@ atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
return raw_atomic_long_fetch_add_acquire(i, v);
}
+/**
+ * atomic_long_fetch_add_release() - atomic add with release ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_add_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_add_release(long i, atomic_long_t *v)
{
@@ -1387,6 +3363,17 @@ atomic_long_fetch_add_release(long i, atomic_long_t *v)
return raw_atomic_long_fetch_add_release(i, v);
}
+/**
+ * atomic_long_fetch_add_relaxed() - atomic add with relaxed ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_add_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
{
@@ -1394,6 +3381,17 @@ atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
return raw_atomic_long_fetch_add_relaxed(i, v);
}
+/**
+ * atomic_long_sub() - atomic subtract with relaxed ordering
+ * @i: long value to subtract
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_sub() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic_long_sub(long i, atomic_long_t *v)
{
@@ -1401,6 +3399,17 @@ atomic_long_sub(long i, atomic_long_t *v)
raw_atomic_long_sub(i, v);
}
+/**
+ * atomic_long_sub_return() - atomic subtract with full ordering
+ * @i: long value to subtract
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_sub_return() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
atomic_long_sub_return(long i, atomic_long_t *v)
{
@@ -1409,6 +3418,17 @@ atomic_long_sub_return(long i, atomic_long_t *v)
return raw_atomic_long_sub_return(i, v);
}
+/**
+ * atomic_long_sub_return_acquire() - atomic subtract with acquire ordering
+ * @i: long value to subtract
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_sub_return_acquire() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
atomic_long_sub_return_acquire(long i, atomic_long_t *v)
{
@@ -1416,6 +3436,17 @@ atomic_long_sub_return_acquire(long i, atomic_long_t *v)
return raw_atomic_long_sub_return_acquire(i, v);
}
+/**
+ * atomic_long_sub_return_release() - atomic subtract with release ordering
+ * @i: long value to subtract
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_sub_return_release() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
atomic_long_sub_return_release(long i, atomic_long_t *v)
{
@@ -1424,6 +3455,17 @@ atomic_long_sub_return_release(long i, atomic_long_t *v)
return raw_atomic_long_sub_return_release(i, v);
}
+/**
+ * atomic_long_sub_return_relaxed() - atomic subtract with relaxed ordering
+ * @i: long value to subtract
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_sub_return_relaxed() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
{
@@ -1431,6 +3473,17 @@ atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
return raw_atomic_long_sub_return_relaxed(i, v);
}
+/**
+ * atomic_long_fetch_sub() - atomic subtract with full ordering
+ * @i: long value to subtract
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_sub() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_sub(long i, atomic_long_t *v)
{
@@ -1439,6 +3492,17 @@ atomic_long_fetch_sub(long i, atomic_long_t *v)
return raw_atomic_long_fetch_sub(i, v);
}
+/**
+ * atomic_long_fetch_sub_acquire() - atomic subtract with acquire ordering
+ * @i: long value to subtract
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_sub_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
{
@@ -1446,6 +3510,17 @@ atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
return raw_atomic_long_fetch_sub_acquire(i, v);
}
+/**
+ * atomic_long_fetch_sub_release() - atomic subtract with release ordering
+ * @i: long value to subtract
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_sub_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_sub_release(long i, atomic_long_t *v)
{
@@ -1454,6 +3529,17 @@ atomic_long_fetch_sub_release(long i, atomic_long_t *v)
return raw_atomic_long_fetch_sub_release(i, v);
}
+/**
+ * atomic_long_fetch_sub_relaxed() - atomic subtract with relaxed ordering
+ * @i: long value to subtract
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_sub_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
{
@@ -1461,6 +3547,16 @@ atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
return raw_atomic_long_fetch_sub_relaxed(i, v);
}
+/**
+ * atomic_long_inc() - atomic increment with relaxed ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + 1) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_inc() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic_long_inc(atomic_long_t *v)
{
@@ -1468,6 +3564,16 @@ atomic_long_inc(atomic_long_t *v)
raw_atomic_long_inc(v);
}
+/**
+ * atomic_long_inc_return() - atomic increment with full ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_inc_return() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
atomic_long_inc_return(atomic_long_t *v)
{
@@ -1476,6 +3582,16 @@ atomic_long_inc_return(atomic_long_t *v)
return raw_atomic_long_inc_return(v);
}
+/**
+ * atomic_long_inc_return_acquire() - atomic increment with acquire ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + 1) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_inc_return_acquire() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
atomic_long_inc_return_acquire(atomic_long_t *v)
{
@@ -1483,6 +3599,16 @@ atomic_long_inc_return_acquire(atomic_long_t *v)
return raw_atomic_long_inc_return_acquire(v);
}
+/**
+ * atomic_long_inc_return_release() - atomic increment with release ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + 1) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_inc_return_release() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
atomic_long_inc_return_release(atomic_long_t *v)
{
@@ -1491,6 +3617,16 @@ atomic_long_inc_return_release(atomic_long_t *v)
return raw_atomic_long_inc_return_release(v);
}
+/**
+ * atomic_long_inc_return_relaxed() - atomic increment with relaxed ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + 1) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_inc_return_relaxed() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
atomic_long_inc_return_relaxed(atomic_long_t *v)
{
@@ -1498,6 +3634,16 @@ atomic_long_inc_return_relaxed(atomic_long_t *v)
return raw_atomic_long_inc_return_relaxed(v);
}
+/**
+ * atomic_long_fetch_inc() - atomic increment with full ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_inc() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_inc(atomic_long_t *v)
{
@@ -1506,6 +3652,16 @@ atomic_long_fetch_inc(atomic_long_t *v)
return raw_atomic_long_fetch_inc(v);
}
+/**
+ * atomic_long_fetch_inc_acquire() - atomic increment with acquire ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + 1) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_inc_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_inc_acquire(atomic_long_t *v)
{
@@ -1513,6 +3669,16 @@ atomic_long_fetch_inc_acquire(atomic_long_t *v)
return raw_atomic_long_fetch_inc_acquire(v);
}
+/**
+ * atomic_long_fetch_inc_release() - atomic increment with release ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + 1) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_inc_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_inc_release(atomic_long_t *v)
{
@@ -1521,6 +3687,16 @@ atomic_long_fetch_inc_release(atomic_long_t *v)
return raw_atomic_long_fetch_inc_release(v);
}
+/**
+ * atomic_long_fetch_inc_relaxed() - atomic increment with relaxed ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + 1) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_inc_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_inc_relaxed(atomic_long_t *v)
{
@@ -1528,6 +3704,16 @@ atomic_long_fetch_inc_relaxed(atomic_long_t *v)
return raw_atomic_long_fetch_inc_relaxed(v);
}
+/**
+ * atomic_long_dec() - atomic decrement with relaxed ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - 1) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_dec() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic_long_dec(atomic_long_t *v)
{
@@ -1535,6 +3721,16 @@ atomic_long_dec(atomic_long_t *v)
raw_atomic_long_dec(v);
}
+/**
+ * atomic_long_dec_return() - atomic decrement with full ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_dec_return() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
atomic_long_dec_return(atomic_long_t *v)
{
@@ -1543,6 +3739,16 @@ atomic_long_dec_return(atomic_long_t *v)
return raw_atomic_long_dec_return(v);
}
+/**
+ * atomic_long_dec_return_acquire() - atomic decrement with acquire ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - 1) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_dec_return_acquire() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
atomic_long_dec_return_acquire(atomic_long_t *v)
{
@@ -1550,6 +3756,16 @@ atomic_long_dec_return_acquire(atomic_long_t *v)
return raw_atomic_long_dec_return_acquire(v);
}
+/**
+ * atomic_long_dec_return_release() - atomic decrement with release ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - 1) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_dec_return_release() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
atomic_long_dec_return_release(atomic_long_t *v)
{
@@ -1558,6 +3774,16 @@ atomic_long_dec_return_release(atomic_long_t *v)
return raw_atomic_long_dec_return_release(v);
}
+/**
+ * atomic_long_dec_return_relaxed() - atomic decrement with relaxed ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - 1) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_dec_return_relaxed() there.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
atomic_long_dec_return_relaxed(atomic_long_t *v)
{
@@ -1565,6 +3791,16 @@ atomic_long_dec_return_relaxed(atomic_long_t *v)
return raw_atomic_long_dec_return_relaxed(v);
}
+/**
+ * atomic_long_fetch_dec() - atomic decrement with full ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_dec() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_dec(atomic_long_t *v)
{
@@ -1573,6 +3809,16 @@ atomic_long_fetch_dec(atomic_long_t *v)
return raw_atomic_long_fetch_dec(v);
}
+/**
+ * atomic_long_fetch_dec_acquire() - atomic decrement with acquire ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - 1) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_dec_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_dec_acquire(atomic_long_t *v)
{
@@ -1580,6 +3826,16 @@ atomic_long_fetch_dec_acquire(atomic_long_t *v)
return raw_atomic_long_fetch_dec_acquire(v);
}
+/**
+ * atomic_long_fetch_dec_release() - atomic decrement with release ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - 1) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_dec_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_dec_release(atomic_long_t *v)
{
@@ -1588,6 +3844,16 @@ atomic_long_fetch_dec_release(atomic_long_t *v)
return raw_atomic_long_fetch_dec_release(v);
}
+/**
+ * atomic_long_fetch_dec_relaxed() - atomic decrement with relaxed ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - 1) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_dec_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_dec_relaxed(atomic_long_t *v)
{
@@ -1595,6 +3861,17 @@ atomic_long_fetch_dec_relaxed(atomic_long_t *v)
return raw_atomic_long_fetch_dec_relaxed(v);
}
+/**
+ * atomic_long_and() - atomic bitwise AND with relaxed ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v & @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_and() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic_long_and(long i, atomic_long_t *v)
{
@@ -1602,6 +3879,17 @@ atomic_long_and(long i, atomic_long_t *v)
raw_atomic_long_and(i, v);
}
+/**
+ * atomic_long_fetch_and() - atomic bitwise AND with full ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v & @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_and() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_and(long i, atomic_long_t *v)
{
@@ -1610,6 +3898,17 @@ atomic_long_fetch_and(long i, atomic_long_t *v)
return raw_atomic_long_fetch_and(i, v);
}
+/**
+ * atomic_long_fetch_and_acquire() - atomic bitwise AND with acquire ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v & @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_and_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
{
@@ -1617,6 +3916,17 @@ atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
return raw_atomic_long_fetch_and_acquire(i, v);
}
+/**
+ * atomic_long_fetch_and_release() - atomic bitwise AND with release ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v & @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_and_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_and_release(long i, atomic_long_t *v)
{
@@ -1625,6 +3935,17 @@ atomic_long_fetch_and_release(long i, atomic_long_t *v)
return raw_atomic_long_fetch_and_release(i, v);
}
+/**
+ * atomic_long_fetch_and_relaxed() - atomic bitwise AND with relaxed ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v & @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_and_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
{
@@ -1632,6 +3953,17 @@ atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
return raw_atomic_long_fetch_and_relaxed(i, v);
}
+/**
+ * atomic_long_andnot() - atomic bitwise AND NOT with relaxed ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v & ~@i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_andnot() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic_long_andnot(long i, atomic_long_t *v)
{
@@ -1639,6 +3971,17 @@ atomic_long_andnot(long i, atomic_long_t *v)
raw_atomic_long_andnot(i, v);
}
+/**
+ * atomic_long_fetch_andnot() - atomic bitwise AND NOT with full ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v & ~@i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_andnot() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_andnot(long i, atomic_long_t *v)
{
@@ -1647,6 +3990,17 @@ atomic_long_fetch_andnot(long i, atomic_long_t *v)
return raw_atomic_long_fetch_andnot(i, v);
}
+/**
+ * atomic_long_fetch_andnot_acquire() - atomic bitwise AND NOT with acquire ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v & ~@i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_andnot_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
{
@@ -1654,6 +4008,17 @@ atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
return raw_atomic_long_fetch_andnot_acquire(i, v);
}
+/**
+ * atomic_long_fetch_andnot_release() - atomic bitwise AND NOT with release ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v & ~@i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_andnot_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
{
@@ -1662,6 +4027,17 @@ atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
return raw_atomic_long_fetch_andnot_release(i, v);
}
+/**
+ * atomic_long_fetch_andnot_relaxed() - atomic bitwise AND NOT with relaxed ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v & ~@i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_andnot_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
{
@@ -1669,6 +4045,17 @@ atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
return raw_atomic_long_fetch_andnot_relaxed(i, v);
}
+/**
+ * atomic_long_or() - atomic bitwise OR with relaxed ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v | @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_or() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic_long_or(long i, atomic_long_t *v)
{
@@ -1676,6 +4063,17 @@ atomic_long_or(long i, atomic_long_t *v)
raw_atomic_long_or(i, v);
}
+/**
+ * atomic_long_fetch_or() - atomic bitwise OR with full ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v | @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_or() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_or(long i, atomic_long_t *v)
{
@@ -1684,6 +4082,17 @@ atomic_long_fetch_or(long i, atomic_long_t *v)
return raw_atomic_long_fetch_or(i, v);
}
+/**
+ * atomic_long_fetch_or_acquire() - atomic bitwise OR with acquire ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v | @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_or_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
{
@@ -1691,6 +4100,17 @@ atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
return raw_atomic_long_fetch_or_acquire(i, v);
}
+/**
+ * atomic_long_fetch_or_release() - atomic bitwise OR with release ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v | @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_or_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_or_release(long i, atomic_long_t *v)
{
@@ -1699,6 +4119,17 @@ atomic_long_fetch_or_release(long i, atomic_long_t *v)
return raw_atomic_long_fetch_or_release(i, v);
}
+/**
+ * atomic_long_fetch_or_relaxed() - atomic bitwise OR with relaxed ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v | @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_or_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
{
@@ -1706,6 +4137,17 @@ atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
return raw_atomic_long_fetch_or_relaxed(i, v);
}
+/**
+ * atomic_long_xor() - atomic bitwise XOR with relaxed ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v ^ @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_xor() there.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
atomic_long_xor(long i, atomic_long_t *v)
{
@@ -1713,6 +4155,17 @@ atomic_long_xor(long i, atomic_long_t *v)
raw_atomic_long_xor(i, v);
}
+/**
+ * atomic_long_fetch_xor() - atomic bitwise XOR with full ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v ^ @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_xor() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_xor(long i, atomic_long_t *v)
{
@@ -1721,6 +4174,17 @@ atomic_long_fetch_xor(long i, atomic_long_t *v)
return raw_atomic_long_fetch_xor(i, v);
}
+/**
+ * atomic_long_fetch_xor_acquire() - atomic bitwise XOR with acquire ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v ^ @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_xor_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
{
@@ -1728,6 +4192,17 @@ atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
return raw_atomic_long_fetch_xor_acquire(i, v);
}
+/**
+ * atomic_long_fetch_xor_release() - atomic bitwise XOR with release ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v ^ @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_xor_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_xor_release(long i, atomic_long_t *v)
{
@@ -1736,6 +4211,17 @@ atomic_long_fetch_xor_release(long i, atomic_long_t *v)
return raw_atomic_long_fetch_xor_release(i, v);
}
+/**
+ * atomic_long_fetch_xor_relaxed() - atomic bitwise XOR with relaxed ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v ^ @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_xor_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
{
@@ -1743,6 +4229,17 @@ atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
return raw_atomic_long_fetch_xor_relaxed(i, v);
}
+/**
+ * atomic_long_xchg() - atomic exchange with full ordering
+ * @v: pointer to atomic_long_t
+ * @new: long value to assign
+ *
+ * Atomically updates @v to @new with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_xchg() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_xchg(atomic_long_t *v, long new)
{
@@ -1751,6 +4248,17 @@ atomic_long_xchg(atomic_long_t *v, long new)
return raw_atomic_long_xchg(v, new);
}
+/**
+ * atomic_long_xchg_acquire() - atomic exchange with acquire ordering
+ * @v: pointer to atomic_long_t
+ * @new: long value to assign
+ *
+ * Atomically updates @v to @new with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_xchg_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_xchg_acquire(atomic_long_t *v, long new)
{
@@ -1758,6 +4266,17 @@ atomic_long_xchg_acquire(atomic_long_t *v, long new)
return raw_atomic_long_xchg_acquire(v, new);
}
+/**
+ * atomic_long_xchg_release() - atomic exchange with release ordering
+ * @v: pointer to atomic_long_t
+ * @new: long value to assign
+ *
+ * Atomically updates @v to @new with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_xchg_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_xchg_release(atomic_long_t *v, long new)
{
@@ -1766,6 +4285,17 @@ atomic_long_xchg_release(atomic_long_t *v, long new)
return raw_atomic_long_xchg_release(v, new);
}
+/**
+ * atomic_long_xchg_relaxed() - atomic exchange with relaxed ordering
+ * @v: pointer to atomic_long_t
+ * @new: long value to assign
+ *
+ * Atomically updates @v to @new with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_xchg_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_xchg_relaxed(atomic_long_t *v, long new)
{
@@ -1773,6 +4303,18 @@ atomic_long_xchg_relaxed(atomic_long_t *v, long new)
return raw_atomic_long_xchg_relaxed(v, new);
}
+/**
+ * atomic_long_cmpxchg() - atomic compare and exchange with full ordering
+ * @v: pointer to atomic_long_t
+ * @old: long value to compare with
+ * @new: long value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_cmpxchg() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
{
@@ -1781,6 +4323,18 @@ atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
return raw_atomic_long_cmpxchg(v, old, new);
}
+/**
+ * atomic_long_cmpxchg_acquire() - atomic compare and exchange with acquire ordering
+ * @v: pointer to atomic_long_t
+ * @old: long value to compare with
+ * @new: long value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_cmpxchg_acquire() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
{
@@ -1788,6 +4342,18 @@ atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
return raw_atomic_long_cmpxchg_acquire(v, old, new);
}
+/**
+ * atomic_long_cmpxchg_release() - atomic compare and exchange with release ordering
+ * @v: pointer to atomic_long_t
+ * @old: long value to compare with
+ * @new: long value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_cmpxchg_release() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
{
@@ -1796,6 +4362,18 @@ atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
return raw_atomic_long_cmpxchg_release(v, old, new);
}
+/**
+ * atomic_long_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering
+ * @v: pointer to atomic_long_t
+ * @old: long value to compare with
+ * @new: long value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_cmpxchg_relaxed() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
{
@@ -1803,6 +4381,19 @@ atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
return raw_atomic_long_cmpxchg_relaxed(v, old, new);
}
+/**
+ * atomic_long_try_cmpxchg() - atomic compare and exchange with full ordering
+ * @v: pointer to atomic_long_t
+ * @old: pointer to long value to compare with
+ * @new: long value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with full ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_try_cmpxchg() there.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
{
@@ -1812,6 +4403,19 @@ atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
return raw_atomic_long_try_cmpxchg(v, old, new);
}
+/**
+ * atomic_long_try_cmpxchg_acquire() - atomic compare and exchange with acquire ordering
+ * @v: pointer to atomic_long_t
+ * @old: pointer to long value to compare with
+ * @new: long value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with acquire ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_try_cmpxchg_acquire() there.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
{
@@ -1820,6 +4424,19 @@ atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
return raw_atomic_long_try_cmpxchg_acquire(v, old, new);
}
+/**
+ * atomic_long_try_cmpxchg_release() - atomic compare and exchange with release ordering
+ * @v: pointer to atomic_long_t
+ * @old: pointer to long value to compare with
+ * @new: long value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with release ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_try_cmpxchg_release() there.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
{
@@ -1829,6 +4446,19 @@ atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
return raw_atomic_long_try_cmpxchg_release(v, old, new);
}
+/**
+ * atomic_long_try_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering
+ * @v: pointer to atomic_long_t
+ * @old: pointer to long value to compare with
+ * @new: long value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with relaxed ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_try_cmpxchg_relaxed() there.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
{
@@ -1837,6 +4467,17 @@ atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
return raw_atomic_long_try_cmpxchg_relaxed(v, old, new);
}
+/**
+ * atomic_long_sub_and_test() - atomic subtract and test if zero with full ordering
+ * @i: long value to subtract
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_sub_and_test() there.
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
static __always_inline bool
atomic_long_sub_and_test(long i, atomic_long_t *v)
{
@@ -1845,6 +4486,16 @@ atomic_long_sub_and_test(long i, atomic_long_t *v)
return raw_atomic_long_sub_and_test(i, v);
}
+/**
+ * atomic_long_dec_and_test() - atomic decrement and test if zero with full ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_dec_and_test() there.
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
static __always_inline bool
atomic_long_dec_and_test(atomic_long_t *v)
{
@@ -1853,6 +4504,16 @@ atomic_long_dec_and_test(atomic_long_t *v)
return raw_atomic_long_dec_and_test(v);
}
+/**
+ * atomic_long_inc_and_test() - atomic increment and test if zero with full ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_inc_and_test() there.
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
static __always_inline bool
atomic_long_inc_and_test(atomic_long_t *v)
{
@@ -1861,6 +4522,17 @@ atomic_long_inc_and_test(atomic_long_t *v)
return raw_atomic_long_inc_and_test(v);
}
+/**
+ * atomic_long_add_negative() - atomic add and test if negative with full ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_add_negative() there.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
atomic_long_add_negative(long i, atomic_long_t *v)
{
@@ -1869,6 +4541,17 @@ atomic_long_add_negative(long i, atomic_long_t *v)
return raw_atomic_long_add_negative(i, v);
}
+/**
+ * atomic_long_add_negative_acquire() - atomic add and test if negative with acquire ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with acquire ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_add_negative_acquire() there.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
atomic_long_add_negative_acquire(long i, atomic_long_t *v)
{
@@ -1876,6 +4559,17 @@ atomic_long_add_negative_acquire(long i, atomic_long_t *v)
return raw_atomic_long_add_negative_acquire(i, v);
}
+/**
+ * atomic_long_add_negative_release() - atomic add and test if negative with release ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with release ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_add_negative_release() there.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
atomic_long_add_negative_release(long i, atomic_long_t *v)
{
@@ -1884,6 +4578,17 @@ atomic_long_add_negative_release(long i, atomic_long_t *v)
return raw_atomic_long_add_negative_release(i, v);
}
+/**
+ * atomic_long_add_negative_relaxed() - atomic add and test if negative with relaxed ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_add_negative_relaxed() there.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
{
@@ -1891,6 +4596,18 @@ atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
return raw_atomic_long_add_negative_relaxed(i, v);
}
+/**
+ * atomic_long_fetch_add_unless() - atomic add unless value with full ordering
+ * @v: pointer to atomic_long_t
+ * @a: long value to add
+ * @u: long value to compare with
+ *
+ * If (@v != @u), atomically updates @v to (@v + @a) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_fetch_add_unless() there.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
{
@@ -1899,6 +4616,18 @@ atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
return raw_atomic_long_fetch_add_unless(v, a, u);
}
+/**
+ * atomic_long_add_unless() - atomic add unless value with full ordering
+ * @v: pointer to atomic_long_t
+ * @a: long value to add
+ * @u: long value to compare with
+ *
+ * If (@v != @u), atomically updates @v to (@v + @a) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_add_unless() there.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
atomic_long_add_unless(atomic_long_t *v, long a, long u)
{
@@ -1907,6 +4636,16 @@ atomic_long_add_unless(atomic_long_t *v, long a, long u)
return raw_atomic_long_add_unless(v, a, u);
}
+/**
+ * atomic_long_inc_not_zero() - atomic increment unless zero with full ordering
+ * @v: pointer to atomic_long_t
+ *
+ * If (@v != 0), atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_inc_not_zero() there.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
atomic_long_inc_not_zero(atomic_long_t *v)
{
@@ -1915,6 +4654,16 @@ atomic_long_inc_not_zero(atomic_long_t *v)
return raw_atomic_long_inc_not_zero(v);
}
+/**
+ * atomic_long_inc_unless_negative() - atomic increment unless negative with full ordering
+ * @v: pointer to atomic_long_t
+ *
+ * If (@v >= 0), atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_inc_unless_negative() there.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
atomic_long_inc_unless_negative(atomic_long_t *v)
{
@@ -1923,6 +4672,16 @@ atomic_long_inc_unless_negative(atomic_long_t *v)
return raw_atomic_long_inc_unless_negative(v);
}
+/**
+ * atomic_long_dec_unless_positive() - atomic decrement unless positive with full ordering
+ * @v: pointer to atomic_long_t
+ *
+ * If (@v <= 0), atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_dec_unless_positive() there.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
atomic_long_dec_unless_positive(atomic_long_t *v)
{
@@ -1931,6 +4690,16 @@ atomic_long_dec_unless_positive(atomic_long_t *v)
return raw_atomic_long_dec_unless_positive(v);
}
+/**
+ * atomic_long_dec_if_positive() - atomic decrement if positive with full ordering
+ * @v: pointer to atomic_long_t
+ *
+ * If (@v > 0), atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Unsafe to use in noinstr code; use raw_atomic_long_dec_if_positive() there.
+ *
+ * Return: The old value of (@v - 1), regardless of whether @v was updated.
+ */
static __always_inline long
atomic_long_dec_if_positive(atomic_long_t *v)
{
@@ -2231,4 +5000,4 @@ atomic_long_dec_if_positive(atomic_long_t *v)
#endif /* _LINUX_ATOMIC_INSTRUMENTED_H */
-// a4c3d2b229f907654cc53cb5d40e80f7fed1ec9c
+// 06cec02e676a484857aee38b0071a1d846ec9457
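As a rough illustration (not part of the patch), the conditional ops
documented above are typically used for "only touch the object while it
is still live" patterns; a minimal sketch, with a made-up obj type:

| struct obj {
| 	atomic_long_t refs;
| };
|
| /* Take a reference, but only if at least one is already held. */
| static bool obj_get(struct obj *obj)
| {
| 	return atomic_long_inc_not_zero(&obj->refs);
| }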
diff --git a/include/linux/atomic/atomic-long.h b/include/linux/atomic/atomic-long.h
index f564f71..f6df2ad 100644
--- a/include/linux/atomic/atomic-long.h
+++ b/include/linux/atomic/atomic-long.h
@@ -21,6 +21,16 @@ typedef atomic_t atomic_long_t;
#define atomic_long_cond_read_relaxed atomic_cond_read_relaxed
#endif
+/**
+ * raw_atomic_long_read() - atomic load with relaxed ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically loads the value of @v with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_read() elsewhere.
+ *
+ * Return: The value loaded from @v.
+ */
static __always_inline long
raw_atomic_long_read(const atomic_long_t *v)
{
@@ -31,6 +41,16 @@ raw_atomic_long_read(const atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_read_acquire() - atomic load with acquire ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically loads the value of @v with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_read_acquire() elsewhere.
+ *
+ * Return: The value loaded from @v.
+ */
static __always_inline long
raw_atomic_long_read_acquire(const atomic_long_t *v)
{
@@ -41,6 +61,17 @@ raw_atomic_long_read_acquire(const atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_set() - atomic set with relaxed ordering
+ * @v: pointer to atomic_long_t
+ * @i: long value to assign
+ *
+ * Atomically sets @v to @i with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_set() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic_long_set(atomic_long_t *v, long i)
{
@@ -51,6 +82,17 @@ raw_atomic_long_set(atomic_long_t *v, long i)
#endif
}
+/**
+ * raw_atomic_long_set_release() - atomic set with release ordering
+ * @v: pointer to atomic_long_t
+ * @i: long value to assign
+ *
+ * Atomically sets @v to @i with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_set_release() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic_long_set_release(atomic_long_t *v, long i)
{
@@ -61,6 +103,17 @@ raw_atomic_long_set_release(atomic_long_t *v, long i)
#endif
}
+/**
+ * raw_atomic_long_add() - atomic add with relaxed ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_add() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic_long_add(long i, atomic_long_t *v)
{
@@ -71,6 +124,17 @@ raw_atomic_long_add(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_add_return() - atomic add with full ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_add_return() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
raw_atomic_long_add_return(long i, atomic_long_t *v)
{
@@ -81,6 +145,17 @@ raw_atomic_long_add_return(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_add_return_acquire() - atomic add with acquire ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_add_return_acquire() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
raw_atomic_long_add_return_acquire(long i, atomic_long_t *v)
{
@@ -91,6 +166,17 @@ raw_atomic_long_add_return_acquire(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_add_return_release() - atomic add with release ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_add_return_release() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
raw_atomic_long_add_return_release(long i, atomic_long_t *v)
{
@@ -101,6 +187,17 @@ raw_atomic_long_add_return_release(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_add_return_relaxed() - atomic add with relaxed ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_add_return_relaxed() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v)
{
@@ -111,6 +208,17 @@ raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_add() - atomic add with full ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_add() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_add(long i, atomic_long_t *v)
{
@@ -121,6 +229,17 @@ raw_atomic_long_fetch_add(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_add_acquire() - atomic add with acquire ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_add_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
{
@@ -131,6 +250,17 @@ raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_add_release() - atomic add with release ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_add_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_add_release(long i, atomic_long_t *v)
{
@@ -141,6 +271,17 @@ raw_atomic_long_fetch_add_release(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_add_relaxed() - atomic add with relaxed ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_add_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
{
@@ -151,6 +292,17 @@ raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_sub() - atomic subtract with relaxed ordering
+ * @i: long value to subtract
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_sub() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic_long_sub(long i, atomic_long_t *v)
{
@@ -161,6 +313,17 @@ raw_atomic_long_sub(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_sub_return() - atomic subtract with full ordering
+ * @i: long value to subtract
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_sub_return() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
raw_atomic_long_sub_return(long i, atomic_long_t *v)
{
@@ -171,6 +334,17 @@ raw_atomic_long_sub_return(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_sub_return_acquire() - atomic subtract with acquire ordering
+ * @i: long value to subtract
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_sub_return_acquire() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v)
{
@@ -181,6 +355,17 @@ raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_sub_return_release() - atomic subtract with release ordering
+ * @i: long value to subtract
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_sub_return_release() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
raw_atomic_long_sub_return_release(long i, atomic_long_t *v)
{
@@ -191,6 +376,17 @@ raw_atomic_long_sub_return_release(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_sub_return_relaxed() - atomic subtract with relaxed ordering
+ * @i: long value to subtract
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_sub_return_relaxed() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
{
@@ -201,6 +397,17 @@ raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_sub() - atomic subtract with full ordering
+ * @i: long value to subtract
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_sub() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_sub(long i, atomic_long_t *v)
{
@@ -211,6 +418,17 @@ raw_atomic_long_fetch_sub(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_sub_acquire() - atomic subtract with acquire ordering
+ * @i: long value to subtract
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_sub_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
{
@@ -221,6 +439,17 @@ raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_sub_release() - atomic subtract with release ordering
+ * @i: long value to subtract
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_sub_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v)
{
@@ -231,6 +460,17 @@ raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_sub_relaxed() - atomic subtract with relaxed ordering
+ * @i: long value to subtract
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_sub_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
{
@@ -241,6 +481,16 @@ raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_inc() - atomic increment with relaxed ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + 1) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_inc() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic_long_inc(atomic_long_t *v)
{
@@ -251,6 +501,16 @@ raw_atomic_long_inc(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_inc_return() - atomic increment with full ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_inc_return() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
raw_atomic_long_inc_return(atomic_long_t *v)
{
@@ -261,6 +521,16 @@ raw_atomic_long_inc_return(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_inc_return_acquire() - atomic increment with acquire ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + 1) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_inc_return_acquire() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
raw_atomic_long_inc_return_acquire(atomic_long_t *v)
{
@@ -271,6 +541,16 @@ raw_atomic_long_inc_return_acquire(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_inc_return_release() - atomic increment with release ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + 1) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_inc_return_release() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
raw_atomic_long_inc_return_release(atomic_long_t *v)
{
@@ -281,6 +561,16 @@ raw_atomic_long_inc_return_release(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_inc_return_relaxed() - atomic increment with relaxed ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + 1) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_inc_return_relaxed() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
raw_atomic_long_inc_return_relaxed(atomic_long_t *v)
{
@@ -291,6 +581,16 @@ raw_atomic_long_inc_return_relaxed(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_inc() - atomic increment with full ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_inc() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_inc(atomic_long_t *v)
{
@@ -301,6 +601,16 @@ raw_atomic_long_fetch_inc(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_inc_acquire() - atomic increment with acquire ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + 1) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_inc_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_inc_acquire(atomic_long_t *v)
{
@@ -311,6 +621,16 @@ raw_atomic_long_fetch_inc_acquire(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_inc_release() - atomic increment with release ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + 1) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_inc_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_inc_release(atomic_long_t *v)
{
@@ -321,6 +641,16 @@ raw_atomic_long_fetch_inc_release(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_inc_relaxed() - atomic increment with relaxed ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + 1) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_inc_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v)
{
@@ -331,6 +661,16 @@ raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_dec() - atomic decrement with relaxed ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - 1) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_dec() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic_long_dec(atomic_long_t *v)
{
@@ -341,6 +681,16 @@ raw_atomic_long_dec(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_dec_return() - atomic decrement with full ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_dec_return() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
raw_atomic_long_dec_return(atomic_long_t *v)
{
@@ -351,6 +701,16 @@ raw_atomic_long_dec_return(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_dec_return_acquire() - atomic decrement with acquire ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - 1) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_dec_return_acquire() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
raw_atomic_long_dec_return_acquire(atomic_long_t *v)
{
@@ -361,6 +721,16 @@ raw_atomic_long_dec_return_acquire(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_dec_return_release() - atomic decrement with release ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - 1) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_dec_return_release() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
raw_atomic_long_dec_return_release(atomic_long_t *v)
{
@@ -371,6 +741,16 @@ raw_atomic_long_dec_return_release(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_dec_return_relaxed() - atomic decrement with relaxed ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - 1) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_dec_return_relaxed() elsewhere.
+ *
+ * Return: The updated value of @v.
+ */
static __always_inline long
raw_atomic_long_dec_return_relaxed(atomic_long_t *v)
{
@@ -381,6 +761,16 @@ raw_atomic_long_dec_return_relaxed(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_dec() - atomic decrement with full ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_dec() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_dec(atomic_long_t *v)
{
@@ -391,6 +781,16 @@ raw_atomic_long_fetch_dec(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_dec_acquire() - atomic decrement with acquire ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - 1) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_dec_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_dec_acquire(atomic_long_t *v)
{
@@ -401,6 +801,16 @@ raw_atomic_long_fetch_dec_acquire(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_dec_release() - atomic decrement with release ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - 1) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_dec_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_dec_release(atomic_long_t *v)
{
@@ -411,6 +821,16 @@ raw_atomic_long_fetch_dec_release(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_dec_relaxed() - atomic decrement with relaxed ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - 1) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_dec_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v)
{
@@ -421,6 +841,17 @@ raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_and() - atomic bitwise AND with relaxed ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v & @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_and() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic_long_and(long i, atomic_long_t *v)
{
@@ -431,6 +862,17 @@ raw_atomic_long_and(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_and() - atomic bitwise AND with full ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v & @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_and() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_and(long i, atomic_long_t *v)
{
@@ -441,6 +883,17 @@ raw_atomic_long_fetch_and(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_and_acquire() - atomic bitwise AND with acquire ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v & @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_and_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
{
@@ -451,6 +904,17 @@ raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_and_release() - atomic bitwise AND with release ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v & @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_and_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_and_release(long i, atomic_long_t *v)
{
@@ -461,6 +925,17 @@ raw_atomic_long_fetch_and_release(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_and_relaxed() - atomic bitwise AND with relaxed ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v & @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_and_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
{
@@ -471,6 +946,17 @@ raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_andnot() - atomic bitwise AND NOT with relaxed ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v & ~@i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_andnot() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic_long_andnot(long i, atomic_long_t *v)
{
@@ -481,6 +967,17 @@ raw_atomic_long_andnot(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_andnot() - atomic bitwise AND NOT with full ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v & ~@i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_andnot() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_andnot(long i, atomic_long_t *v)
{
@@ -491,6 +988,17 @@ raw_atomic_long_fetch_andnot(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_andnot_acquire() - atomic bitwise AND NOT with acquire ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v & ~@i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_andnot_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
{
@@ -501,6 +1009,17 @@ raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_andnot_release() - atomic bitwise AND NOT with release ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v & ~@i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_andnot_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
{
@@ -511,6 +1030,17 @@ raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_andnot_relaxed() - atomic bitwise AND NOT with relaxed ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v & ~@i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_andnot_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
{
@@ -521,6 +1051,17 @@ raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_or() - atomic bitwise OR with relaxed ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v | @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_or() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic_long_or(long i, atomic_long_t *v)
{
@@ -531,6 +1072,17 @@ raw_atomic_long_or(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_or() - atomic bitwise OR with full ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v | @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_or() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_or(long i, atomic_long_t *v)
{
@@ -541,6 +1093,17 @@ raw_atomic_long_fetch_or(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_or_acquire() - atomic bitwise OR with acquire ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v | @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_or_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
{
@@ -551,6 +1114,17 @@ raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_or_release() - atomic bitwise OR with release ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v | @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_or_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_or_release(long i, atomic_long_t *v)
{
@@ -561,6 +1135,17 @@ raw_atomic_long_fetch_or_release(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_or_relaxed() - atomic bitwise OR with relaxed ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v | @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_or_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
{
@@ -571,6 +1156,17 @@ raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_xor() - atomic bitwise XOR with relaxed ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v ^ @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_xor() elsewhere.
+ *
+ * Return: Nothing.
+ */
static __always_inline void
raw_atomic_long_xor(long i, atomic_long_t *v)
{
@@ -581,6 +1177,17 @@ raw_atomic_long_xor(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_xor() - atomic bitwise XOR with full ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v ^ @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_xor() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_xor(long i, atomic_long_t *v)
{
@@ -591,6 +1198,17 @@ raw_atomic_long_fetch_xor(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_xor_acquire() - atomic bitwise XOR with acquire ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v ^ @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_xor_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
{
@@ -601,6 +1219,17 @@ raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_xor_release() - atomic bitwise XOR with release ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v ^ @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_xor_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v)
{
@@ -611,6 +1240,17 @@ raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_xor_relaxed() - atomic bitwise XOR with relaxed ordering
+ * @i: long value
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v ^ @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_xor_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
{
@@ -621,6 +1261,17 @@ raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_xchg() - atomic exchange with full ordering
+ * @v: pointer to atomic_long_t
+ * @new: long value to assign
+ *
+ * Atomically updates @v to @new with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_xchg() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_xchg(atomic_long_t *v, long new)
{
@@ -631,6 +1282,17 @@ raw_atomic_long_xchg(atomic_long_t *v, long new)
#endif
}
+/**
+ * raw_atomic_long_xchg_acquire() - atomic exchange with acquire ordering
+ * @v: pointer to atomic_long_t
+ * @new: long value to assign
+ *
+ * Atomically updates @v to @new with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_xchg_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_xchg_acquire(atomic_long_t *v, long new)
{
@@ -641,6 +1303,17 @@ raw_atomic_long_xchg_acquire(atomic_long_t *v, long new)
#endif
}
+/**
+ * raw_atomic_long_xchg_release() - atomic exchange with release ordering
+ * @v: pointer to atomic_long_t
+ * @new: long value to assign
+ *
+ * Atomically updates @v to @new with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_xchg_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_xchg_release(atomic_long_t *v, long new)
{
@@ -651,6 +1324,17 @@ raw_atomic_long_xchg_release(atomic_long_t *v, long new)
#endif
}
+/**
+ * raw_atomic_long_xchg_relaxed() - atomic exchange with relaxed ordering
+ * @v: pointer to atomic_long_t
+ * @new: long value to assign
+ *
+ * Atomically updates @v to @new with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_xchg_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_xchg_relaxed(atomic_long_t *v, long new)
{
@@ -661,6 +1345,18 @@ raw_atomic_long_xchg_relaxed(atomic_long_t *v, long new)
#endif
}
+/**
+ * raw_atomic_long_cmpxchg() - atomic compare and exchange with full ordering
+ * @v: pointer to atomic_long_t
+ * @old: long value to compare with
+ * @new: long value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_cmpxchg() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
{
@@ -671,6 +1367,18 @@ raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
#endif
}
+/**
+ * raw_atomic_long_cmpxchg_acquire() - atomic compare and exchange with acquire ordering
+ * @v: pointer to atomic_long_t
+ * @old: long value to compare with
+ * @new: long value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_cmpxchg_acquire() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
{
@@ -681,6 +1389,18 @@ raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
#endif
}
+/**
+ * raw_atomic_long_cmpxchg_release() - atomic compare and exchange with release ordering
+ * @v: pointer to atomic_long_t
+ * @old: long value to compare with
+ * @new: long value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_cmpxchg_release() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
{
@@ -691,6 +1411,18 @@ raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
#endif
}
+/**
+ * raw_atomic_long_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering
+ * @v: pointer to atomic_long_t
+ * @old: long value to compare with
+ * @new: long value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_cmpxchg_relaxed() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
{
@@ -701,6 +1433,19 @@ raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
#endif
}
+/**
+ * raw_atomic_long_try_cmpxchg() - atomic compare and exchange with full ordering
+ * @v: pointer to atomic_long_t
+ * @old: pointer to long value to compare with
+ * @new: long value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with full ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_try_cmpxchg() elsewhere.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
{
@@ -711,6 +1456,19 @@ raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
#endif
}
+/**
+ * raw_atomic_long_try_cmpxchg_acquire() - atomic compare and exchange with acquire ordering
+ * @v: pointer to atomic_long_t
+ * @old: pointer to long value to compare with
+ * @new: long value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with acquire ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_try_cmpxchg_acquire() elsewhere.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
{
@@ -721,6 +1479,19 @@ raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
#endif
}
+/**
+ * raw_atomic_long_try_cmpxchg_release() - atomic compare and exchange with release ordering
+ * @v: pointer to atomic_long_t
+ * @old: pointer to long value to compare with
+ * @new: long value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with release ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_try_cmpxchg_release() elsewhere.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
{
@@ -731,6 +1502,19 @@ raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
#endif
}
+/**
+ * raw_atomic_long_try_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering
+ * @v: pointer to atomic_long_t
+ * @old: pointer to long value to compare with
+ * @new: long value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with relaxed ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_try_cmpxchg_relaxed() elsewhere.
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
static __always_inline bool
raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
{
@@ -741,6 +1525,17 @@ raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
#endif
}
+/**
+ * raw_atomic_long_sub_and_test() - atomic subtract and test if zero with full ordering
+ * @i: long value to subtract
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_sub_and_test() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
static __always_inline bool
raw_atomic_long_sub_and_test(long i, atomic_long_t *v)
{
@@ -751,6 +1546,16 @@ raw_atomic_long_sub_and_test(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_dec_and_test() - atomic decrement and test if zero with full ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_dec_and_test() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
static __always_inline bool
raw_atomic_long_dec_and_test(atomic_long_t *v)
{
@@ -761,6 +1566,16 @@ raw_atomic_long_dec_and_test(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_inc_and_test() - atomic increment and test if zero with full ordering
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_inc_and_test() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
static __always_inline bool
raw_atomic_long_inc_and_test(atomic_long_t *v)
{
@@ -771,6 +1586,17 @@ raw_atomic_long_inc_and_test(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_add_negative() - atomic add and test if negative with full ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_add_negative() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
raw_atomic_long_add_negative(long i, atomic_long_t *v)
{
@@ -781,6 +1607,17 @@ raw_atomic_long_add_negative(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_add_negative_acquire() - atomic add and test if negative with acquire ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with acquire ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_add_negative_acquire() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v)
{
@@ -791,6 +1628,17 @@ raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_add_negative_release() - atomic add and test if negative with release ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with release ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_add_negative_release() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
raw_atomic_long_add_negative_release(long i, atomic_long_t *v)
{
@@ -801,6 +1649,17 @@ raw_atomic_long_add_negative_release(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_add_negative_relaxed() - atomic add and test if negative with relaxed ordering
+ * @i: long value to add
+ * @v: pointer to atomic_long_t
+ *
+ * Atomically updates @v to (@v + @i) with relaxed ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_add_negative_relaxed() elsewhere.
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
static __always_inline bool
raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
{
@@ -811,6 +1670,18 @@ raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_fetch_add_unless() - atomic add unless value with full ordering
+ * @v: pointer to atomic_long_t
+ * @a: long value to add
+ * @u: long value to compare with
+ *
+ * If (@v != @u), atomically updates @v to (@v + @a) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_fetch_add_unless() elsewhere.
+ *
+ * Return: The original value of @v.
+ */
static __always_inline long
raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
{
@@ -821,6 +1692,18 @@ raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
#endif
}
+/**
+ * raw_atomic_long_add_unless() - atomic add unless value with full ordering
+ * @v: pointer to atomic_long_t
+ * @a: long value to add
+ * @u: long value to compare with
+ *
+ * If (@v != @u), atomically updates @v to (@v + @a) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_add_unless() elsewhere.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
raw_atomic_long_add_unless(atomic_long_t *v, long a, long u)
{
@@ -831,6 +1714,16 @@ raw_atomic_long_add_unless(atomic_long_t *v, long a, long u)
#endif
}
+/**
+ * raw_atomic_long_inc_not_zero() - atomic increment unless zero with full ordering
+ * @v: pointer to atomic_long_t
+ *
+ * If (@v != 0), atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_inc_not_zero() elsewhere.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
raw_atomic_long_inc_not_zero(atomic_long_t *v)
{
@@ -841,6 +1734,16 @@ raw_atomic_long_inc_not_zero(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_inc_unless_negative() - atomic increment unless negative with full ordering
+ * @v: pointer to atomic_long_t
+ *
+ * If (@v >= 0), atomically updates @v to (@v + 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_inc_unless_negative() elsewhere.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
raw_atomic_long_inc_unless_negative(atomic_long_t *v)
{
@@ -851,6 +1754,16 @@ raw_atomic_long_inc_unless_negative(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_dec_unless_positive() - atomic decrement unless positive with full ordering
+ * @v: pointer to atomic_long_t
+ *
+ * If (@v <= 0), atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_dec_unless_positive() elsewhere.
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
static __always_inline bool
raw_atomic_long_dec_unless_positive(atomic_long_t *v)
{
@@ -861,6 +1774,16 @@ raw_atomic_long_dec_unless_positive(atomic_long_t *v)
#endif
}
+/**
+ * raw_atomic_long_dec_if_positive() - atomic decrement if positive with full ordering
+ * @v: pointer to atomic_long_t
+ *
+ * If (@v > 0), atomically updates @v to (@v - 1) with full ordering.
+ *
+ * Safe to use in noinstr code; prefer atomic_long_dec_if_positive() elsewhere.
+ *
+ * Return: The old value of (@v - 1), regardless of whether @v was updated.
+ */
static __always_inline long
raw_atomic_long_dec_if_positive(atomic_long_t *v)
{
@@ -872,4 +1795,4 @@ raw_atomic_long_dec_if_positive(atomic_long_t *v)
}
#endif /* _LINUX_ATOMIC_LONG_H */
-// e785d25cc3f220b7d473d36aac9da85dd7eb13a8
+// 029d2e3a493086671e874a4c2e0e42084be42403
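As a hedged aside on why the raw_ forms exist at all: noinstr code must
not call the instrumented wrappers, so a hypothetical early-entry
counter would be bumped with the raw_ op, e.g.

| /* noinstr code cannot be instrumented, hence the raw_ form. */
| static noinstr void record_early_entry(atomic_long_t *counter)
| {
| 	raw_atomic_long_inc(counter);
| }

Everywhere else, atomic_long_inc() is preferred, as the generated
comments say.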
diff --git a/scripts/atomic/atomic-tbl.sh b/scripts/atomic/atomic-tbl.sh
index 81d5c32..608ff39 100755
--- a/scripts/atomic/atomic-tbl.sh
+++ b/scripts/atomic/atomic-tbl.sh
@@ -36,9 +36,16 @@ meta_has_relaxed()
meta_in "$1" "BFIR"
}
-#find_fallback_template(pfx, name, sfx, order)
-find_fallback_template()
+#meta_is_implicitly_relaxed(meta)
+meta_is_implicitly_relaxed()
{
+ meta_in "$1" "vls"
+}
+
+#find_template(tmpltype, pfx, name, sfx, order)
+find_template()
+{
+ local tmpltype="$1"; shift
local pfx="$1"; shift
local name="$1"; shift
local sfx="$1"; shift
@@ -52,8 +59,8 @@ find_fallback_template()
#
# Start at the most specific, and fall back to the most general. Once
# we find a specific fallback, don't bother looking for more.
- for base in "${pfx}${name}${sfx}${order}" "${name}"; do
- file="${ATOMICDIR}/fallbacks/${base}"
+ for base in "${pfx}${name}${sfx}${order}" "${pfx}${name}${sfx}" "${name}"; do
+ file="${ATOMICDIR}/${tmpltype}/${base}"
if [ -f "${file}" ]; then
printf "${file}"
@@ -62,6 +69,18 @@ find_fallback_template()
done
}
+#find_fallback_template(pfx, name, sfx, order)
+find_fallback_template()
+{
+ find_template "fallbacks" "$@"
+}
+
+#find_kerneldoc_template(pfx, name, sfx, order)
+find_kerneldoc_template()
+{
+ find_template "kerneldoc" "$@"
+}
+
#gen_ret_type(meta, int)
gen_ret_type() {
local meta="$1"; shift
@@ -142,6 +161,91 @@ gen_args()
done
}
+#gen_desc_return(meta)
+gen_desc_return()
+{
+ local meta="$1"; shift
+
+ case "${meta}" in
+ [v])
+ printf "Return: Nothing."
+ ;;
+ [Ff])
+ printf "Return: The original value of @v."
+ ;;
+ [R])
+ printf "Return: The updated value of @v."
+ ;;
+ [l])
+ printf "Return: The value of @v."
+ ;;
+ esac
+}
+
+#gen_template_kerneldoc(template, class, meta, pfx, name, sfx, order, atomic, int, args...)
+gen_template_kerneldoc()
+{
+ local template="$1"; shift
+ local class="$1"; shift
+ local meta="$1"; shift
+ local pfx="$1"; shift
+ local name="$1"; shift
+ local sfx="$1"; shift
+ local order="$1"; shift
+ local atomic="$1"; shift
+ local int="$1"; shift
+
+ local atomicname="${atomic}_${pfx}${name}${sfx}${order}"
+
+ local ret="$(gen_ret_type "${meta}" "${int}")"
+ local retstmt="$(gen_ret_stmt "${meta}")"
+ local params="$(gen_params "${int}" "${atomic}" "$@")"
+ local args="$(gen_args "$@")"
+ local desc_order=""
+ local desc_noinstr=""
+ local desc_return=""
+
+ if [ ! -z "${order}" ]; then
+ desc_order="${order##_}"
+ elif meta_is_implicitly_relaxed "${meta}"; then
+ desc_order="relaxed"
+ else
+ desc_order="full"
+ fi
+
+ if [ -z "${class}" ]; then
+ desc_noinstr="Unsafe to use in noinstr code; use raw_${atomicname}() there."
+ else
+ desc_noinstr="Safe to use in noinstr code; prefer ${atomicname}() elsewhere."
+ fi
+
+ desc_return="$(gen_desc_return "${meta}")"
+
+ . ${template}
+}
+
+#gen_kerneldoc(class, meta, pfx, name, sfx, order, atomic, int, args...)
+gen_kerneldoc()
+{
+ local class="$1"; shift
+ local meta="$1"; shift
+ local pfx="$1"; shift
+ local name="$1"; shift
+ local sfx="$1"; shift
+ local order="$1"; shift
+
+ local atomicname="${atomic}_${pfx}${name}${sfx}${order}"
+
+ local tmpl="$(find_kerneldoc_template "${pfx}" "${name}" "${sfx}" "${order}")"
+ if [ -z "${tmpl}" ]; then
+ printf "/*\n"
+ printf " * No kerneldoc available for ${class}${atomicname}\n"
+ printf " */\n"
+ else
+ gen_template_kerneldoc "${tmpl}" "${class}" "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "$@"
+ fi
+}
+
#gen_proto_order_variants(meta, pfx, name, sfx, ...)
gen_proto_order_variants()
{
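Note that for any op without a kerneldoc template, gen_kerneldoc()
degrades gracefully rather than erroring out: it emits a plain
placeholder comment, e.g. for some hypothetical op <op>:

| /*
|  * No kerneldoc available for raw_atomic_long_<op>
|  */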
diff --git a/scripts/atomic/gen-atomic-fallback.sh b/scripts/atomic/gen-atomic-fallback.sh
index 2b470d3..c0c8a85 100755
--- a/scripts/atomic/gen-atomic-fallback.sh
+++ b/scripts/atomic/gen-atomic-fallback.sh
@@ -73,6 +73,8 @@ gen_proto_order_variant()
local params="$(gen_params "${int}" "${atomic}" "$@")"
local args="$(gen_args "$@")"
+ gen_kerneldoc "raw_" "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "${atomic}" "${int}" "$@"
+
printf "static __always_inline ${ret}\n"
printf "raw_${atomicname}(${params})\n"
printf "{\n"
diff --git a/scripts/atomic/gen-atomic-instrumented.sh b/scripts/atomic/gen-atomic-instrumented.sh
index 93c949a..8f8f8e3 100755
--- a/scripts/atomic/gen-atomic-instrumented.sh
+++ b/scripts/atomic/gen-atomic-instrumented.sh
@@ -68,6 +68,8 @@ gen_proto_order_variant()
local args="$(gen_args "$@")"
local retstmt="$(gen_ret_stmt "${meta}")"
+ gen_kerneldoc "" "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "${atomic}" "${int}" "$@"
+
cat <<EOF
static __always_inline ${ret}
${atomicname}(${params})
diff --git a/scripts/atomic/gen-atomic-long.sh b/scripts/atomic/gen-atomic-long.sh
index af27a71..9826be3 100755
--- a/scripts/atomic/gen-atomic-long.sh
+++ b/scripts/atomic/gen-atomic-long.sh
@@ -49,6 +49,8 @@ gen_proto_order_variant()
local argscast_64="$(gen_args_cast "s64" "atomic64" "$@")"
local retstmt="$(gen_ret_stmt "${meta}")"
+ gen_kerneldoc "raw_" "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "atomic_long" "long" "$@"
+
cat <<EOF
static __always_inline ${ret}
raw_atomic_long_${atomicname}(${params})
diff --git a/scripts/atomic/kerneldoc/add b/scripts/atomic/kerneldoc/add
new file mode 100644
index 0000000..991f3da
--- /dev/null
+++ b/scripts/atomic/kerneldoc/add
@@ -0,0 +1,13 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic add with ${desc_order} ordering
+ * @i: ${int} value to add
+ * @v: pointer to ${atomic}_t
+ *
+ * Atomically updates @v to (@v + @i) with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * ${desc_return}
+ */
+EOF
diff --git a/scripts/atomic/kerneldoc/add_negative b/scripts/atomic/kerneldoc/add_negative
new file mode 100644
index 0000000..f4ca1f0
--- /dev/null
+++ b/scripts/atomic/kerneldoc/add_negative
@@ -0,0 +1,13 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic add and test if negative with ${desc_order} ordering
+ * @i: ${int} value to add
+ * @v: pointer to ${atomic}_t
+ *
+ * Atomically updates @v to (@v + @i) with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * Return: @true if the resulting value of @v is negative, @false otherwise.
+ */
+EOF
diff --git a/scripts/atomic/kerneldoc/add_unless b/scripts/atomic/kerneldoc/add_unless
new file mode 100644
index 0000000..f828e5f
--- /dev/null
+++ b/scripts/atomic/kerneldoc/add_unless
@@ -0,0 +1,18 @@
+if [ -z "${pfx}" ]; then
+ desc_return="Return: @true if @v was updated, @false otherwise."
+fi
+
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic add unless value with ${desc_order} ordering
+ * @v: pointer to ${atomic}_t
+ * @a: ${int} value to add
+ * @u: ${int} value to compare with
+ *
+ * If (@v != @u), atomically updates @v to (@v + @a) with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * ${desc_return}
+ */
+EOF
diff --git a/scripts/atomic/kerneldoc/and b/scripts/atomic/kerneldoc/and
new file mode 100644
index 0000000..a923574
--- /dev/null
+++ b/scripts/atomic/kerneldoc/and
@@ -0,0 +1,13 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic bitwise AND with ${desc_order} ordering
+ * @i: ${int} value
+ * @v: pointer to ${atomic}_t
+ *
+ * Atomically updates @v to (@v & @i) with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * ${desc_return}
+ */
+EOF
diff --git a/scripts/atomic/kerneldoc/andnot b/scripts/atomic/kerneldoc/andnot
new file mode 100644
index 0000000..64bb509
--- /dev/null
+++ b/scripts/atomic/kerneldoc/andnot
@@ -0,0 +1,13 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic bitwise AND NOT with ${desc_order} ordering
+ * @i: ${int} value
+ * @v: pointer to ${atomic}_t
+ *
+ * Atomically updates @v to (@v & ~@i) with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * ${desc_return}
+ */
+EOF
diff --git a/scripts/atomic/kerneldoc/cmpxchg b/scripts/atomic/kerneldoc/cmpxchg
new file mode 100644
index 0000000..3bce328
--- /dev/null
+++ b/scripts/atomic/kerneldoc/cmpxchg
@@ -0,0 +1,14 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic compare and exchange with ${desc_order} ordering
+ * @v: pointer to ${atomic}_t
+ * @old: ${int} value to compare with
+ * @new: ${int} value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * Return: The original value of @v.
+ */
+EOF
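The practical difference between the cmpxchg and try_cmpxchg comments is
easiest to see in a caller. A rough sketch, using hypothetical set_flag
helpers that are not part of the patch:

| /* With cmpxchg(), the caller re-primes @old from the returned value. */
| static void set_flag_cmpxchg(atomic_long_t *v, long flag)
| {
| 	long old = raw_atomic_long_read(v);
| 	long seen;
|
| 	while ((seen = raw_atomic_long_cmpxchg(v, old, old | flag)) != old)
| 		old = seen;
| }
|
| /* With try_cmpxchg(), @old is updated in place on failure. */
| static void set_flag_try_cmpxchg(atomic_long_t *v, long flag)
| {
| 	long old = raw_atomic_long_read(v);
|
| 	do {
| 	} while (!raw_atomic_long_try_cmpxchg(v, &old, old | flag));
| }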
diff --git a/scripts/atomic/kerneldoc/dec b/scripts/atomic/kerneldoc/dec
new file mode 100644
index 0000000..bbeecbc
--- /dev/null
+++ b/scripts/atomic/kerneldoc/dec
@@ -0,0 +1,12 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic decrement with ${desc_order} ordering
+ * @v: pointer to ${atomic}_t
+ *
+ * Atomically updates @v to (@v - 1) with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * ${desc_return}
+ */
+EOF
diff --git a/scripts/atomic/kerneldoc/dec_and_test b/scripts/atomic/kerneldoc/dec_and_test
new file mode 100644
index 0000000..71bbd23
--- /dev/null
+++ b/scripts/atomic/kerneldoc/dec_and_test
@@ -0,0 +1,12 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic decrement and test if zero with ${desc_order} ordering
+ * @v: pointer to ${atomic}_t
+ *
+ * Atomically updates @v to (@v - 1) with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
+EOF
diff --git a/scripts/atomic/kerneldoc/dec_if_positive b/scripts/atomic/kerneldoc/dec_if_positive
new file mode 100644
index 0000000..7c74286
--- /dev/null
+++ b/scripts/atomic/kerneldoc/dec_if_positive
@@ -0,0 +1,12 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic decrement if positive with ${desc_order} ordering
+ * @v: pointer to ${atomic}_t
+ *
+ * If (@v > 0), atomically updates @v to (@v - 1) with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * Return: The old value of (@v - 1), regardless of whether @v was updated.
+ */
+EOF
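Since dec_if_positive() reports success through its return value
(old - 1) rather than through @true/@false, a short usage sketch with a
made-up take_token() helper may help:

| /* Consume one token if any remain. */
| static bool take_token(atomic_long_t *tokens)
| {
| 	return atomic_long_dec_if_positive(tokens) >= 0;
| }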
diff --git a/scripts/atomic/kerneldoc/dec_unless_positive b/scripts/atomic/kerneldoc/dec_unless_positive
new file mode 100644
index 0000000..ee73612
--- /dev/null
+++ b/scripts/atomic/kerneldoc/dec_unless_positive
@@ -0,0 +1,12 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic decrement unless positive with ${desc_order} ordering
+ * @v: pointer to ${atomic}_t
+ *
+ * If (@v <= 0), atomically updates @v to (@v - 1) with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
+EOF
diff --git a/scripts/atomic/kerneldoc/inc b/scripts/atomic/kerneldoc/inc
new file mode 100644
index 0000000..9f14f1b
--- /dev/null
+++ b/scripts/atomic/kerneldoc/inc
@@ -0,0 +1,12 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic increment with ${desc_order} ordering
+ * @v: pointer to ${atomic}_t
+ *
+ * Atomically updates @v to (@v + 1) with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * ${desc_return}
+ */
+EOF
diff --git a/scripts/atomic/kerneldoc/inc_and_test b/scripts/atomic/kerneldoc/inc_and_test
new file mode 100644
index 0000000..971694d
--- /dev/null
+++ b/scripts/atomic/kerneldoc/inc_and_test
@@ -0,0 +1,12 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic increment and test if zero with ${desc_order} ordering
+ * @v: pointer to ${atomic}_t
+ *
+ * Atomically updates @v to (@v + 1) with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
+EOF
diff --git a/scripts/atomic/kerneldoc/inc_not_zero b/scripts/atomic/kerneldoc/inc_not_zero
new file mode 100644
index 0000000..618be08
--- /dev/null
+++ b/scripts/atomic/kerneldoc/inc_not_zero
@@ -0,0 +1,12 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic increment unless zero with ${desc_order} ordering
+ * @v: pointer to ${atomic}_t
+ *
+ * If (@v != 0), atomically updates @v to (@v + 1) with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
+EOF
diff --git a/scripts/atomic/kerneldoc/inc_unless_negative b/scripts/atomic/kerneldoc/inc_unless_negative
new file mode 100644
index 0000000..597f23d
--- /dev/null
+++ b/scripts/atomic/kerneldoc/inc_unless_negative
@@ -0,0 +1,12 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic increment unless negative with ${desc_order} ordering
+ * @v: pointer to ${atomic}_t
+ *
+ * If (@v >= 0), atomically updates @v to (@v + 1) with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * Return: @true if @v was updated, @false otherwise.
+ */
+EOF
diff --git a/scripts/atomic/kerneldoc/or b/scripts/atomic/kerneldoc/or
new file mode 100644
index 0000000..55b33de
--- /dev/null
+++ b/scripts/atomic/kerneldoc/or
@@ -0,0 +1,13 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic bitwise OR with ${desc_order} ordering
+ * @i: ${int} value
+ * @v: pointer to ${atomic}_t
+ *
+ * Atomically updates @v to (@v | @i) with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * ${desc_return}
+ */
+EOF
diff --git a/scripts/atomic/kerneldoc/read b/scripts/atomic/kerneldoc/read
new file mode 100644
index 0000000..89fe614
--- /dev/null
+++ b/scripts/atomic/kerneldoc/read
@@ -0,0 +1,12 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic load with ${desc_order} ordering
+ * @v: pointer to ${atomic}_t
+ *
+ * Atomically loads the value of @v with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * Return: The value loaded from @v.
+ */
+EOF
diff --git a/scripts/atomic/kerneldoc/set b/scripts/atomic/kerneldoc/set
new file mode 100644
index 0000000..e82cb9e
--- /dev/null
+++ b/scripts/atomic/kerneldoc/set
@@ -0,0 +1,13 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic set with ${desc_order} ordering
+ * @v: pointer to ${atomic}_t
+ * @i: ${int} value to assign
+ *
+ * Atomically sets @v to @i with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * Return: Nothing.
+ */
+EOF
diff --git a/scripts/atomic/kerneldoc/sub b/scripts/atomic/kerneldoc/sub
new file mode 100644
index 0000000..3ba642d
--- /dev/null
+++ b/scripts/atomic/kerneldoc/sub
@@ -0,0 +1,13 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic subtract with ${desc_order} ordering
+ * @i: ${int} value to subtract
+ * @v: pointer to ${atomic}_t
+ *
+ * Atomically updates @v to (@v - @i) with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * ${desc_return}
+ */
+EOF
diff --git a/scripts/atomic/kerneldoc/sub_and_test b/scripts/atomic/kerneldoc/sub_and_test
new file mode 100644
index 0000000..d3760f7
--- /dev/null
+++ b/scripts/atomic/kerneldoc/sub_and_test
@@ -0,0 +1,13 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic subtract and test if zero with ${desc_order} ordering
+ * @i: ${int} value to subtract
+ * @v: pointer to ${atomic}_t
+ *
+ * Atomically updates @v to (@v - @i) with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * Return: @true if the resulting value of @v is zero, @false otherwise.
+ */
+EOF
diff --git a/scripts/atomic/kerneldoc/try_cmpxchg b/scripts/atomic/kerneldoc/try_cmpxchg
new file mode 100644
index 0000000..2965532
--- /dev/null
+++ b/scripts/atomic/kerneldoc/try_cmpxchg
@@ -0,0 +1,15 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic compare and exchange with ${desc_order} ordering
+ * @v: pointer to ${atomic}_t
+ * @old: pointer to ${int} value to compare with
+ * @new: ${int} value to assign
+ *
+ * If (@v == @old), atomically updates @v to @new with ${desc_order} ordering.
+ * Otherwise, updates @old to the current value of @v.
+ *
+ * ${desc_noinstr}
+ *
+ * Return: @true if the exchange occurred, @false otherwise.
+ */
+EOF
diff --git a/scripts/atomic/kerneldoc/xchg b/scripts/atomic/kerneldoc/xchg
new file mode 100644
index 0000000..75f04c0
--- /dev/null
+++ b/scripts/atomic/kerneldoc/xchg
@@ -0,0 +1,13 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic exchange with ${desc_order} ordering
+ * @v: pointer to ${atomic}_t
+ * @new: ${int} value to assign
+ *
+ * Atomically updates @v to @new with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * Return: The original value of @v.
+ */
+EOF
diff --git a/scripts/atomic/kerneldoc/xor b/scripts/atomic/kerneldoc/xor
new file mode 100644
index 0000000..8837270
--- /dev/null
+++ b/scripts/atomic/kerneldoc/xor
@@ -0,0 +1,13 @@
+cat <<EOF
+/**
+ * ${class}${atomicname}() - atomic bitwise XOR with ${desc_order} ordering
+ * @i: ${int} value
+ * @v: pointer to ${atomic}_t
+ *
+ * Atomically updates @v to (@v ^ @i) with ${desc_order} ordering.
+ *
+ * ${desc_noinstr}
+ *
+ * ${desc_return}
+ */
+EOF
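For reference, here is roughly what the xchg template above expands to for
the instrumented, fully-ordered int case (i.e. with ${class} empty,
${atomic}=atomic, ${int}=int, ${desc_order}=full); this is an illustrative
sketch rather than verbatim generated output:
| /**
|  * atomic_xchg() - atomic exchange with full ordering
|  * @v: pointer to atomic_t
|  * @new: int value to assign
|  *
|  * Atomically updates @v to @new with full ordering.
|  *
|  * Unsafe to use in noinstr code; use raw_atomic_xchg() there.
|  *
|  * Return: The original value of @v.
|  */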
The following commit has been merged into the locking/core branch of tip:
Commit-ID: d12157efc8e083c77d054675fcdd594f54cc7e2b
Gitweb: https://git.kernel.org/tip/d12157efc8e083c77d054675fcdd594f54cc7e2b
Author: Mark Rutland <[email protected]>
AuthorDate: Mon, 05 Jun 2023 08:01:01 +01:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Mon, 05 Jun 2023 09:57:14 +02:00
locking/atomic: make atomic*_{cmp,}xchg optional
Most architectures define the atomic/atomic64 xchg and cmpxchg
operations in terms of arch_xchg and arch_cmpxchg respectively.
Add fallbacks for these cases and remove the trivial cases from arch
code. On some architectures the existing definitions are kept as these
are used to build other arch_atomic*() operations.
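For illustration, with the new fallback templates an architecture that only
provides arch_xchg() gets a generated definition along these lines (a sketch
of the code emitted into atomic-arch-fallback.h):
| /* fallback for architectures without a native arch_atomic_xchg() */
| static __always_inline int
| arch_atomic_xchg(atomic_t *v, int new)
| {
| 	return arch_xchg(&v->counter, new);
| }
| #define arch_atomic_xchg arch_atomic_xchg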
Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
arch/alpha/include/asm/atomic.h | 10 +-
arch/arc/include/asm/atomic.h | 24 +---
arch/arc/include/asm/atomic64-arcv2.h | 2 +-
arch/arm/include/asm/atomic.h | 3 +-
arch/arm64/include/asm/atomic.h | 28 +---
arch/csky/include/asm/atomic.h | 35 +----
arch/hexagon/include/asm/atomic.h | 6 +-
arch/ia64/include/asm/atomic.h | 7 +-
arch/loongarch/include/asm/atomic.h | 7 +-
arch/m68k/include/asm/atomic.h | 9 +-
arch/mips/include/asm/atomic.h | 11 +-
arch/openrisc/include/asm/atomic.h | 3 +-
arch/parisc/include/asm/atomic.h | 9 +-
arch/powerpc/include/asm/atomic.h | 24 +---
arch/riscv/include/asm/atomic.h | 72 +---------
arch/sh/include/asm/atomic.h | 3 +-
arch/sparc/include/asm/atomic_32.h | 2 +-
arch/sparc/include/asm/atomic_64.h | 11 +-
arch/xtensa/include/asm/atomic.h | 3 +-
include/asm-generic/atomic.h | 3 +-
include/linux/atomic/atomic-arch-fallback.h | 158 ++++++++++++++++++-
scripts/atomic/fallbacks/cmpxchg | 7 +-
scripts/atomic/fallbacks/xchg | 7 +-
23 files changed, 179 insertions(+), 265 deletions(-)
create mode 100644 scripts/atomic/fallbacks/cmpxchg
create mode 100644 scripts/atomic/fallbacks/xchg
diff --git a/arch/alpha/include/asm/atomic.h b/arch/alpha/include/asm/atomic.h
index f2861a4..ec8ab55 100644
--- a/arch/alpha/include/asm/atomic.h
+++ b/arch/alpha/include/asm/atomic.h
@@ -200,16 +200,6 @@ ATOMIC_OPS(xor, xor)
#undef ATOMIC_OP_RETURN
#undef ATOMIC_OP
-#define arch_atomic64_cmpxchg(v, old, new) \
- (arch_cmpxchg(&((v)->counter), old, new))
-#define arch_atomic64_xchg(v, new) \
- (arch_xchg(&((v)->counter), new))
-
-#define arch_atomic_cmpxchg(v, old, new) \
- (arch_cmpxchg(&((v)->counter), old, new))
-#define arch_atomic_xchg(v, new) \
- (arch_xchg(&((v)->counter), new))
-
/**
* arch_atomic_fetch_add_unless - add unless the number is a given value
* @v: pointer of type atomic_t
diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h
index 52ee51e..592d7ff 100644
--- a/arch/arc/include/asm/atomic.h
+++ b/arch/arc/include/asm/atomic.h
@@ -22,30 +22,6 @@
#include <asm/atomic-spinlock.h>
#endif
-#define arch_atomic_cmpxchg(v, o, n) \
-({ \
- arch_cmpxchg(&((v)->counter), (o), (n)); \
-})
-
-#ifdef arch_cmpxchg_relaxed
-#define arch_atomic_cmpxchg_relaxed(v, o, n) \
-({ \
- arch_cmpxchg_relaxed(&((v)->counter), (o), (n)); \
-})
-#endif
-
-#define arch_atomic_xchg(v, n) \
-({ \
- arch_xchg(&((v)->counter), (n)); \
-})
-
-#ifdef arch_xchg_relaxed
-#define arch_atomic_xchg_relaxed(v, n) \
-({ \
- arch_xchg_relaxed(&((v)->counter), (n)); \
-})
-#endif
-
/*
* 64-bit atomics
*/
diff --git a/arch/arc/include/asm/atomic64-arcv2.h b/arch/arc/include/asm/atomic64-arcv2.h
index c5a8010..2b7c9e6 100644
--- a/arch/arc/include/asm/atomic64-arcv2.h
+++ b/arch/arc/include/asm/atomic64-arcv2.h
@@ -159,6 +159,7 @@ arch_atomic64_cmpxchg(atomic64_t *ptr, s64 expected, s64 new)
return prev;
}
+#define arch_atomic64_cmpxchg arch_atomic64_cmpxchg
static inline s64 arch_atomic64_xchg(atomic64_t *ptr, s64 new)
{
@@ -179,6 +180,7 @@ static inline s64 arch_atomic64_xchg(atomic64_t *ptr, s64 new)
return prev;
}
+#define arch_atomic64_xchg arch_atomic64_xchg
/**
* arch_atomic64_dec_if_positive - decrement by 1 if old value positive
diff --git a/arch/arm/include/asm/atomic.h b/arch/arm/include/asm/atomic.h
index db8512d..9458d47 100644
--- a/arch/arm/include/asm/atomic.h
+++ b/arch/arm/include/asm/atomic.h
@@ -210,6 +210,7 @@ static inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
return ret;
}
+#define arch_atomic_cmpxchg arch_atomic_cmpxchg
#define arch_atomic_fetch_andnot arch_atomic_fetch_andnot
@@ -240,8 +241,6 @@ ATOMIC_OPS(xor, ^=, eor)
#undef ATOMIC_OP_RETURN
#undef ATOMIC_OP
-#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), new))
-
#ifndef CONFIG_GENERIC_ATOMIC64
typedef struct {
s64 counter;
diff --git a/arch/arm64/include/asm/atomic.h b/arch/arm64/include/asm/atomic.h
index c997927..400d279 100644
--- a/arch/arm64/include/asm/atomic.h
+++ b/arch/arm64/include/asm/atomic.h
@@ -142,24 +142,6 @@ static __always_inline long arch_atomic64_dec_if_positive(atomic64_t *v)
#define arch_atomic_fetch_xor_release arch_atomic_fetch_xor_release
#define arch_atomic_fetch_xor arch_atomic_fetch_xor
-#define arch_atomic_xchg_relaxed(v, new) \
- arch_xchg_relaxed(&((v)->counter), (new))
-#define arch_atomic_xchg_acquire(v, new) \
- arch_xchg_acquire(&((v)->counter), (new))
-#define arch_atomic_xchg_release(v, new) \
- arch_xchg_release(&((v)->counter), (new))
-#define arch_atomic_xchg(v, new) \
- arch_xchg(&((v)->counter), (new))
-
-#define arch_atomic_cmpxchg_relaxed(v, old, new) \
- arch_cmpxchg_relaxed(&((v)->counter), (old), (new))
-#define arch_atomic_cmpxchg_acquire(v, old, new) \
- arch_cmpxchg_acquire(&((v)->counter), (old), (new))
-#define arch_atomic_cmpxchg_release(v, old, new) \
- arch_cmpxchg_release(&((v)->counter), (old), (new))
-#define arch_atomic_cmpxchg(v, old, new) \
- arch_cmpxchg(&((v)->counter), (old), (new))
-
#define arch_atomic_andnot arch_atomic_andnot
/*
@@ -209,16 +191,6 @@ static __always_inline long arch_atomic64_dec_if_positive(atomic64_t *v)
#define arch_atomic64_fetch_xor_release arch_atomic64_fetch_xor_release
#define arch_atomic64_fetch_xor arch_atomic64_fetch_xor
-#define arch_atomic64_xchg_relaxed arch_atomic_xchg_relaxed
-#define arch_atomic64_xchg_acquire arch_atomic_xchg_acquire
-#define arch_atomic64_xchg_release arch_atomic_xchg_release
-#define arch_atomic64_xchg arch_atomic_xchg
-
-#define arch_atomic64_cmpxchg_relaxed arch_atomic_cmpxchg_relaxed
-#define arch_atomic64_cmpxchg_acquire arch_atomic_cmpxchg_acquire
-#define arch_atomic64_cmpxchg_release arch_atomic_cmpxchg_release
-#define arch_atomic64_cmpxchg arch_atomic_cmpxchg
-
#define arch_atomic64_andnot arch_atomic64_andnot
#define arch_atomic64_dec_if_positive arch_atomic64_dec_if_positive
diff --git a/arch/csky/include/asm/atomic.h b/arch/csky/include/asm/atomic.h
index 60406ef..4dab44f 100644
--- a/arch/csky/include/asm/atomic.h
+++ b/arch/csky/include/asm/atomic.h
@@ -195,41 +195,6 @@ arch_atomic_dec_if_positive(atomic_t *v)
}
#define arch_atomic_dec_if_positive arch_atomic_dec_if_positive
-#define ATOMIC_OP() \
-static __always_inline \
-int arch_atomic_xchg_relaxed(atomic_t *v, int n) \
-{ \
- return __xchg_relaxed(n, &(v->counter), 4); \
-} \
-static __always_inline \
-int arch_atomic_cmpxchg_relaxed(atomic_t *v, int o, int n) \
-{ \
- return __cmpxchg_relaxed(&(v->counter), o, n, 4); \
-} \
-static __always_inline \
-int arch_atomic_cmpxchg_acquire(atomic_t *v, int o, int n) \
-{ \
- return __cmpxchg_acquire(&(v->counter), o, n, 4); \
-} \
-static __always_inline \
-int arch_atomic_cmpxchg(atomic_t *v, int o, int n) \
-{ \
- return __cmpxchg(&(v->counter), o, n, 4); \
-}
-
-#define ATOMIC_OPS() \
- ATOMIC_OP()
-
-ATOMIC_OPS()
-
-#define arch_atomic_xchg_relaxed arch_atomic_xchg_relaxed
-#define arch_atomic_cmpxchg_relaxed arch_atomic_cmpxchg_relaxed
-#define arch_atomic_cmpxchg_acquire arch_atomic_cmpxchg_acquire
-#define arch_atomic_cmpxchg arch_atomic_cmpxchg
-
-#undef ATOMIC_OPS
-#undef ATOMIC_OP
-
#else
#include <asm-generic/atomic.h>
#endif
diff --git a/arch/hexagon/include/asm/atomic.h b/arch/hexagon/include/asm/atomic.h
index 738857e..ad6c111 100644
--- a/arch/hexagon/include/asm/atomic.h
+++ b/arch/hexagon/include/asm/atomic.h
@@ -36,12 +36,6 @@ static inline void arch_atomic_set(atomic_t *v, int new)
*/
#define arch_atomic_read(v) READ_ONCE((v)->counter)
-#define arch_atomic_xchg(v, new) \
- (arch_xchg(&((v)->counter), (new)))
-
-#define arch_atomic_cmpxchg(v, old, new) \
- (arch_cmpxchg(&((v)->counter), (old), (new)))
-
#define ATOMIC_OP(op) \
static inline void arch_atomic_##op(int i, atomic_t *v) \
{ \
diff --git a/arch/ia64/include/asm/atomic.h b/arch/ia64/include/asm/atomic.h
index 266c429..6540a62 100644
--- a/arch/ia64/include/asm/atomic.h
+++ b/arch/ia64/include/asm/atomic.h
@@ -207,13 +207,6 @@ ATOMIC64_FETCH_OP(xor, ^)
#undef ATOMIC64_FETCH_OP
#undef ATOMIC64_OP
-#define arch_atomic_cmpxchg(v, old, new) (arch_cmpxchg(&((v)->counter), old, new))
-#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), new))
-
-#define arch_atomic64_cmpxchg(v, old, new) \
- (arch_cmpxchg(&((v)->counter), old, new))
-#define arch_atomic64_xchg(v, new) (arch_xchg(&((v)->counter), new))
-
#define arch_atomic_add(i,v) (void)arch_atomic_add_return((i), (v))
#define arch_atomic_sub(i,v) (void)arch_atomic_sub_return((i), (v))
diff --git a/arch/loongarch/include/asm/atomic.h b/arch/loongarch/include/asm/atomic.h
index 6b9aca9..8d73c85 100644
--- a/arch/loongarch/include/asm/atomic.h
+++ b/arch/loongarch/include/asm/atomic.h
@@ -181,9 +181,6 @@ static inline int arch_atomic_sub_if_positive(int i, atomic_t *v)
return result;
}
-#define arch_atomic_cmpxchg(v, o, n) (arch_cmpxchg(&((v)->counter), (o), (n)))
-#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), (new)))
-
/*
* arch_atomic_dec_if_positive - decrement by 1 if old value positive
* @v: pointer of type atomic_t
@@ -342,10 +339,6 @@ static inline long arch_atomic64_sub_if_positive(long i, atomic64_t *v)
return result;
}
-#define arch_atomic64_cmpxchg(v, o, n) \
- ((__typeof__((v)->counter))arch_cmpxchg(&((v)->counter), (o), (n)))
-#define arch_atomic64_xchg(v, new) (arch_xchg(&((v)->counter), (new)))
-
/*
* arch_atomic64_dec_if_positive - decrement by 1 if old value positive
* @v: pointer of type atomic64_t
diff --git a/arch/m68k/include/asm/atomic.h b/arch/m68k/include/asm/atomic.h
index cfba83d..190a032 100644
--- a/arch/m68k/include/asm/atomic.h
+++ b/arch/m68k/include/asm/atomic.h
@@ -158,12 +158,7 @@ static inline int arch_atomic_inc_and_test(atomic_t *v)
}
#define arch_atomic_inc_and_test arch_atomic_inc_and_test
-#ifdef CONFIG_RMW_INSNS
-
-#define arch_atomic_cmpxchg(v, o, n) ((int)arch_cmpxchg(&((v)->counter), (o), (n)))
-#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), new))
-
-#else /* !CONFIG_RMW_INSNS */
+#ifndef CONFIG_RMW_INSNS
static inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
{
@@ -177,6 +172,7 @@ static inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
local_irq_restore(flags);
return prev;
}
+#define arch_atomic_cmpxchg arch_atomic_cmpxchg
static inline int arch_atomic_xchg(atomic_t *v, int new)
{
@@ -189,6 +185,7 @@ static inline int arch_atomic_xchg(atomic_t *v, int new)
local_irq_restore(flags);
return prev;
}
+#define arch_atomic_xchg arch_atomic_xchg
#endif /* !CONFIG_RMW_INSNS */
diff --git a/arch/mips/include/asm/atomic.h b/arch/mips/include/asm/atomic.h
index 712fb5a..ba188e7 100644
--- a/arch/mips/include/asm/atomic.h
+++ b/arch/mips/include/asm/atomic.h
@@ -33,17 +33,6 @@ static __always_inline void arch_##pfx##_set(pfx##_t *v, type i) \
{ \
WRITE_ONCE(v->counter, i); \
} \
- \
-static __always_inline type \
-arch_##pfx##_cmpxchg(pfx##_t *v, type o, type n) \
-{ \
- return arch_cmpxchg(&v->counter, o, n); \
-} \
- \
-static __always_inline type arch_##pfx##_xchg(pfx##_t *v, type n) \
-{ \
- return arch_xchg(&v->counter, n); \
-}
ATOMIC_OPS(atomic, int)
diff --git a/arch/openrisc/include/asm/atomic.h b/arch/openrisc/include/asm/atomic.h
index 326167e..8ce67ec 100644
--- a/arch/openrisc/include/asm/atomic.h
+++ b/arch/openrisc/include/asm/atomic.h
@@ -130,7 +130,4 @@ static inline int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
#include <asm/cmpxchg.h>
-#define arch_atomic_xchg(ptr, v) (arch_xchg(&(ptr)->counter, (v)))
-#define arch_atomic_cmpxchg(v, old, new) (arch_cmpxchg(&((v)->counter), (old), (new)))
-
#endif /* __ASM_OPENRISC_ATOMIC_H */
diff --git a/arch/parisc/include/asm/atomic.h b/arch/parisc/include/asm/atomic.h
index dd5a299..0b3f64c 100644
--- a/arch/parisc/include/asm/atomic.h
+++ b/arch/parisc/include/asm/atomic.h
@@ -73,10 +73,6 @@ static __inline__ int arch_atomic_read(const atomic_t *v)
return READ_ONCE((v)->counter);
}
-/* exported interface */
-#define arch_atomic_cmpxchg(v, o, n) (arch_cmpxchg(&((v)->counter), (o), (n)))
-#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), new))
-
#define ATOMIC_OP(op, c_op) \
static __inline__ void arch_atomic_##op(int i, atomic_t *v) \
{ \
@@ -218,11 +214,6 @@ arch_atomic64_read(const atomic64_t *v)
return READ_ONCE((v)->counter);
}
-/* exported interface */
-#define arch_atomic64_cmpxchg(v, o, n) \
- ((__typeof__((v)->counter))arch_cmpxchg(&((v)->counter), (o), (n)))
-#define arch_atomic64_xchg(v, new) (arch_xchg(&((v)->counter), new))
-
#endif /* !CONFIG_64BIT */
diff --git a/arch/powerpc/include/asm/atomic.h b/arch/powerpc/include/asm/atomic.h
index 47228b1..5bf6a4d 100644
--- a/arch/powerpc/include/asm/atomic.h
+++ b/arch/powerpc/include/asm/atomic.h
@@ -126,18 +126,6 @@ ATOMIC_OPS(xor, xor, "", K)
#undef ATOMIC_OP_RETURN_RELAXED
#undef ATOMIC_OP
-#define arch_atomic_cmpxchg(v, o, n) \
- (arch_cmpxchg(&((v)->counter), (o), (n)))
-#define arch_atomic_cmpxchg_relaxed(v, o, n) \
- arch_cmpxchg_relaxed(&((v)->counter), (o), (n))
-#define arch_atomic_cmpxchg_acquire(v, o, n) \
- arch_cmpxchg_acquire(&((v)->counter), (o), (n))
-
-#define arch_atomic_xchg(v, new) \
- (arch_xchg(&((v)->counter), new))
-#define arch_atomic_xchg_relaxed(v, new) \
- arch_xchg_relaxed(&((v)->counter), (new))
-
/**
* atomic_fetch_add_unless - add unless the number is a given value
* @v: pointer of type atomic_t
@@ -396,18 +384,6 @@ static __inline__ s64 arch_atomic64_dec_if_positive(atomic64_t *v)
}
#define arch_atomic64_dec_if_positive arch_atomic64_dec_if_positive
-#define arch_atomic64_cmpxchg(v, o, n) \
- (arch_cmpxchg(&((v)->counter), (o), (n)))
-#define arch_atomic64_cmpxchg_relaxed(v, o, n) \
- arch_cmpxchg_relaxed(&((v)->counter), (o), (n))
-#define arch_atomic64_cmpxchg_acquire(v, o, n) \
- arch_cmpxchg_acquire(&((v)->counter), (o), (n))
-
-#define arch_atomic64_xchg(v, new) \
- (arch_xchg(&((v)->counter), new))
-#define arch_atomic64_xchg_relaxed(v, new) \
- arch_xchg_relaxed(&((v)->counter), (new))
-
/**
* atomic64_fetch_add_unless - add unless the number is a given value
* @v: pointer of type atomic64_t
diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
index bba4729..f5dfef6 100644
--- a/arch/riscv/include/asm/atomic.h
+++ b/arch/riscv/include/asm/atomic.h
@@ -238,78 +238,6 @@ static __always_inline s64 arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a,
#define arch_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless
#endif
-/*
- * atomic_{cmp,}xchg is required to have exactly the same ordering semantics as
- * {cmp,}xchg and the operations that return, so they need a full barrier.
- */
-#define ATOMIC_OP(c_t, prefix, size) \
-static __always_inline \
-c_t arch_atomic##prefix##_xchg_relaxed(atomic##prefix##_t *v, c_t n) \
-{ \
- return __xchg_relaxed(&(v->counter), n, size); \
-} \
-static __always_inline \
-c_t arch_atomic##prefix##_xchg_acquire(atomic##prefix##_t *v, c_t n) \
-{ \
- return __xchg_acquire(&(v->counter), n, size); \
-} \
-static __always_inline \
-c_t arch_atomic##prefix##_xchg_release(atomic##prefix##_t *v, c_t n) \
-{ \
- return __xchg_release(&(v->counter), n, size); \
-} \
-static __always_inline \
-c_t arch_atomic##prefix##_xchg(atomic##prefix##_t *v, c_t n) \
-{ \
- return __arch_xchg(&(v->counter), n, size); \
-} \
-static __always_inline \
-c_t arch_atomic##prefix##_cmpxchg_relaxed(atomic##prefix##_t *v, \
- c_t o, c_t n) \
-{ \
- return __cmpxchg_relaxed(&(v->counter), o, n, size); \
-} \
-static __always_inline \
-c_t arch_atomic##prefix##_cmpxchg_acquire(atomic##prefix##_t *v, \
- c_t o, c_t n) \
-{ \
- return __cmpxchg_acquire(&(v->counter), o, n, size); \
-} \
-static __always_inline \
-c_t arch_atomic##prefix##_cmpxchg_release(atomic##prefix##_t *v, \
- c_t o, c_t n) \
-{ \
- return __cmpxchg_release(&(v->counter), o, n, size); \
-} \
-static __always_inline \
-c_t arch_atomic##prefix##_cmpxchg(atomic##prefix##_t *v, c_t o, c_t n) \
-{ \
- return __cmpxchg(&(v->counter), o, n, size); \
-}
-
-#ifdef CONFIG_GENERIC_ATOMIC64
-#define ATOMIC_OPS() \
- ATOMIC_OP(int, , 4)
-#else
-#define ATOMIC_OPS() \
- ATOMIC_OP(int, , 4) \
- ATOMIC_OP(s64, 64, 8)
-#endif
-
-ATOMIC_OPS()
-
-#define arch_atomic_xchg_relaxed arch_atomic_xchg_relaxed
-#define arch_atomic_xchg_acquire arch_atomic_xchg_acquire
-#define arch_atomic_xchg_release arch_atomic_xchg_release
-#define arch_atomic_xchg arch_atomic_xchg
-#define arch_atomic_cmpxchg_relaxed arch_atomic_cmpxchg_relaxed
-#define arch_atomic_cmpxchg_acquire arch_atomic_cmpxchg_acquire
-#define arch_atomic_cmpxchg_release arch_atomic_cmpxchg_release
-#define arch_atomic_cmpxchg arch_atomic_cmpxchg
-
-#undef ATOMIC_OPS
-#undef ATOMIC_OP
-
static __always_inline bool arch_atomic_inc_unless_negative(atomic_t *v)
{
int prev, rc;
diff --git a/arch/sh/include/asm/atomic.h b/arch/sh/include/asm/atomic.h
index 528bfed..7a18cb2 100644
--- a/arch/sh/include/asm/atomic.h
+++ b/arch/sh/include/asm/atomic.h
@@ -30,9 +30,6 @@
#include <asm/atomic-irq.h>
#endif
-#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), new))
-#define arch_atomic_cmpxchg(v, o, n) (arch_cmpxchg(&((v)->counter), (o), (n)))
-
#endif /* CONFIG_CPU_J2 */
#endif /* __ASM_SH_ATOMIC_H */
diff --git a/arch/sparc/include/asm/atomic_32.h b/arch/sparc/include/asm/atomic_32.h
index d775daa..1c9e6c7 100644
--- a/arch/sparc/include/asm/atomic_32.h
+++ b/arch/sparc/include/asm/atomic_32.h
@@ -24,7 +24,9 @@ int arch_atomic_fetch_and(int, atomic_t *);
int arch_atomic_fetch_or(int, atomic_t *);
int arch_atomic_fetch_xor(int, atomic_t *);
int arch_atomic_cmpxchg(atomic_t *, int, int);
+#define arch_atomic_cmpxchg arch_atomic_cmpxchg
int arch_atomic_xchg(atomic_t *, int);
+#define arch_atomic_xchg arch_atomic_xchg
int arch_atomic_fetch_add_unless(atomic_t *, int, int);
void arch_atomic_set(atomic_t *, int);
diff --git a/arch/sparc/include/asm/atomic_64.h b/arch/sparc/include/asm/atomic_64.h
index 0778916..df6a8b0 100644
--- a/arch/sparc/include/asm/atomic_64.h
+++ b/arch/sparc/include/asm/atomic_64.h
@@ -49,17 +49,6 @@ ATOMIC_OPS(xor)
#undef ATOMIC_OP_RETURN
#undef ATOMIC_OP
-#define arch_atomic_cmpxchg(v, o, n) (arch_cmpxchg(&((v)->counter), (o), (n)))
-
-static inline int arch_atomic_xchg(atomic_t *v, int new)
-{
- return arch_xchg(&v->counter, new);
-}
-
-#define arch_atomic64_cmpxchg(v, o, n) \
- ((__typeof__((v)->counter))arch_cmpxchg(&((v)->counter), (o), (n)))
-#define arch_atomic64_xchg(v, new) (arch_xchg(&((v)->counter), new))
-
s64 arch_atomic64_dec_if_positive(atomic64_t *v);
#define arch_atomic64_dec_if_positive arch_atomic64_dec_if_positive
diff --git a/arch/xtensa/include/asm/atomic.h b/arch/xtensa/include/asm/atomic.h
index 52da614..1d323a8 100644
--- a/arch/xtensa/include/asm/atomic.h
+++ b/arch/xtensa/include/asm/atomic.h
@@ -257,7 +257,4 @@ ATOMIC_OPS(xor)
#undef ATOMIC_OP_RETURN
#undef ATOMIC_OP
-#define arch_atomic_cmpxchg(v, o, n) ((int)arch_cmpxchg(&((v)->counter), (o), (n)))
-#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), new))
-
#endif /* _XTENSA_ATOMIC_H */
diff --git a/include/asm-generic/atomic.h b/include/asm-generic/atomic.h
index e271d67..22142c7 100644
--- a/include/asm-generic/atomic.h
+++ b/include/asm-generic/atomic.h
@@ -130,7 +130,4 @@ ATOMIC_OP(xor, ^)
#define arch_atomic_read(v) READ_ONCE((v)->counter)
#define arch_atomic_set(v, i) WRITE_ONCE(((v)->counter), (i))
-#define arch_atomic_xchg(ptr, v) (arch_xchg(&(ptr)->counter, (u32)(v)))
-#define arch_atomic_cmpxchg(v, old, new) (arch_cmpxchg(&((v)->counter), (u32)(old), (u32)(new)))
-
#endif /* __ASM_GENERIC_ATOMIC_H */
diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index 3ce4cb5..1a2d81d 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -1091,9 +1091,48 @@ arch_atomic_fetch_xor(int i, atomic_t *v)
#endif /* arch_atomic_fetch_xor_relaxed */
#ifndef arch_atomic_xchg_relaxed
+#ifdef arch_atomic_xchg
#define arch_atomic_xchg_acquire arch_atomic_xchg
#define arch_atomic_xchg_release arch_atomic_xchg
#define arch_atomic_xchg_relaxed arch_atomic_xchg
+#endif /* arch_atomic_xchg */
+
+#ifndef arch_atomic_xchg
+static __always_inline int
+arch_atomic_xchg(atomic_t *v, int new)
+{
+ return arch_xchg(&v->counter, new);
+}
+#define arch_atomic_xchg arch_atomic_xchg
+#endif
+
+#ifndef arch_atomic_xchg_acquire
+static __always_inline int
+arch_atomic_xchg_acquire(atomic_t *v, int new)
+{
+ return arch_xchg_acquire(&v->counter, new);
+}
+#define arch_atomic_xchg_acquire arch_atomic_xchg_acquire
+#endif
+
+#ifndef arch_atomic_xchg_release
+static __always_inline int
+arch_atomic_xchg_release(atomic_t *v, int new)
+{
+ return arch_xchg_release(&v->counter, new);
+}
+#define arch_atomic_xchg_release arch_atomic_xchg_release
+#endif
+
+#ifndef arch_atomic_xchg_relaxed
+static __always_inline int
+arch_atomic_xchg_relaxed(atomic_t *v, int new)
+{
+ return arch_xchg_relaxed(&v->counter, new);
+}
+#define arch_atomic_xchg_relaxed arch_atomic_xchg_relaxed
+#endif
+
#else /* arch_atomic_xchg_relaxed */
#ifndef arch_atomic_xchg_acquire
@@ -1133,9 +1172,48 @@ arch_atomic_xchg(atomic_t *v, int i)
#endif /* arch_atomic_xchg_relaxed */
#ifndef arch_atomic_cmpxchg_relaxed
+#ifdef arch_atomic_cmpxchg
#define arch_atomic_cmpxchg_acquire arch_atomic_cmpxchg
#define arch_atomic_cmpxchg_release arch_atomic_cmpxchg
#define arch_atomic_cmpxchg_relaxed arch_atomic_cmpxchg
+#endif /* arch_atomic_cmpxchg */
+
+#ifndef arch_atomic_cmpxchg
+static __always_inline int
+arch_atomic_cmpxchg(atomic_t *v, int old, int new)
+{
+ return arch_cmpxchg(&v->counter, old, new);
+}
+#define arch_atomic_cmpxchg arch_atomic_cmpxchg
+#endif
+
+#ifndef arch_atomic_cmpxchg_acquire
+static __always_inline int
+arch_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
+{
+ return arch_cmpxchg_acquire(&v->counter, old, new);
+}
+#define arch_atomic_cmpxchg_acquire arch_atomic_cmpxchg_acquire
+#endif
+
+#ifndef arch_atomic_cmpxchg_release
+static __always_inline int
+arch_atomic_cmpxchg_release(atomic_t *v, int old, int new)
+{
+ return arch_cmpxchg_release(&v->counter, old, new);
+}
+#define arch_atomic_cmpxchg_release arch_atomic_cmpxchg_release
+#endif
+
+#ifndef arch_atomic_cmpxchg_relaxed
+static __always_inline int
+arch_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
+{
+ return arch_cmpxchg_relaxed(&v->counter, old, new);
+}
+#define arch_atomic_cmpxchg_relaxed arch_atomic_cmpxchg_relaxed
+#endif
+
#else /* arch_atomic_cmpxchg_relaxed */
#ifndef arch_atomic_cmpxchg_acquire
@@ -2225,9 +2303,48 @@ arch_atomic64_fetch_xor(s64 i, atomic64_t *v)
#endif /* arch_atomic64_fetch_xor_relaxed */
#ifndef arch_atomic64_xchg_relaxed
+#ifdef arch_atomic64_xchg
#define arch_atomic64_xchg_acquire arch_atomic64_xchg
#define arch_atomic64_xchg_release arch_atomic64_xchg
#define arch_atomic64_xchg_relaxed arch_atomic64_xchg
+#endif /* arch_atomic64_xchg */
+
+#ifndef arch_atomic64_xchg
+static __always_inline s64
+arch_atomic64_xchg(atomic64_t *v, s64 new)
+{
+ return arch_xchg(&v->counter, new);
+}
+#define arch_atomic64_xchg arch_atomic64_xchg
+#endif
+
+#ifndef arch_atomic64_xchg_acquire
+static __always_inline s64
+arch_atomic64_xchg_acquire(atomic64_t *v, s64 new)
+{
+ return arch_xchg_acquire(&v->counter, new);
+}
+#define arch_atomic64_xchg_acquire arch_atomic64_xchg_acquire
+#endif
+
+#ifndef arch_atomic64_xchg_release
+static __always_inline s64
+arch_atomic64_xchg_release(atomic64_t *v, s64 new)
+{
+ return arch_xchg_release(&v->counter, new);
+}
+#define arch_atomic64_xchg_release arch_atomic64_xchg_release
+#endif
+
+#ifndef arch_atomic64_xchg_relaxed
+static __always_inline s64
+arch_atomic64_xchg_relaxed(atomic64_t *v, s64 new)
+{
+ return arch_xchg_relaxed(&v->counter, new);
+}
+#define arch_atomic64_xchg_relaxed arch_atomic64_xchg_relaxed
+#endif
+
#else /* arch_atomic64_xchg_relaxed */
#ifndef arch_atomic64_xchg_acquire
@@ -2267,9 +2384,48 @@ arch_atomic64_xchg(atomic64_t *v, s64 i)
#endif /* arch_atomic64_xchg_relaxed */
#ifndef arch_atomic64_cmpxchg_relaxed
+#ifdef arch_atomic64_cmpxchg
#define arch_atomic64_cmpxchg_acquire arch_atomic64_cmpxchg
#define arch_atomic64_cmpxchg_release arch_atomic64_cmpxchg
#define arch_atomic64_cmpxchg_relaxed arch_atomic64_cmpxchg
+#endif /* arch_atomic64_cmpxchg */
+
+#ifndef arch_atomic64_cmpxchg
+static __always_inline s64
+arch_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
+{
+ return arch_cmpxchg(&v->counter, old, new);
+}
+#define arch_atomic64_cmpxchg arch_atomic64_cmpxchg
+#endif
+
+#ifndef arch_atomic64_cmpxchg_acquire
+static __always_inline s64
+arch_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
+{
+ return arch_cmpxchg_acquire(&v->counter, old, new);
+}
+#define arch_atomic64_cmpxchg_acquire arch_atomic64_cmpxchg_acquire
+#endif
+
+#ifndef arch_atomic64_cmpxchg_release
+static __always_inline s64
+arch_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
+{
+ return arch_cmpxchg_release(&v->counter, old, new);
+}
+#define arch_atomic64_cmpxchg_release arch_atomic64_cmpxchg_release
+#endif
+
+#ifndef arch_atomic64_cmpxchg_relaxed
+static __always_inline s64
+arch_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
+{
+ return arch_cmpxchg_relaxed(&v->counter, old, new);
+}
+#define arch_atomic64_cmpxchg_relaxed arch_atomic64_cmpxchg_relaxed
+#endif
+
#else /* arch_atomic64_cmpxchg_relaxed */
#ifndef arch_atomic64_cmpxchg_acquire
@@ -2597,4 +2753,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v)
#endif
#endif /* _LINUX_ATOMIC_FALLBACK_H */
-// 9f0fd6ed53267c6ec64e36cd18e6fd8df57ea277
+// e1cee558cc61cae887890db30fcdf93baca9f498
diff --git a/scripts/atomic/fallbacks/cmpxchg b/scripts/atomic/fallbacks/cmpxchg
new file mode 100644
index 0000000..87cd010
--- /dev/null
+++ b/scripts/atomic/fallbacks/cmpxchg
@@ -0,0 +1,7 @@
+cat <<EOF
+static __always_inline ${int}
+arch_${atomic}_cmpxchg${order}(${atomic}_t *v, ${int} old, ${int} new)
+{
+ return arch_cmpxchg${order}(&v->counter, old, new);
+}
+EOF
diff --git a/scripts/atomic/fallbacks/xchg b/scripts/atomic/fallbacks/xchg
new file mode 100644
index 0000000..733b898
--- /dev/null
+++ b/scripts/atomic/fallbacks/xchg
@@ -0,0 +1,7 @@
+cat <<EOF
+static __always_inline ${int}
+arch_${atomic}_xchg${order}(${atomic}_t *v, ${int} new)
+{
+ return arch_xchg${order}(&v->counter, new);
+}
+EOF
The following commit has been merged into the locking/core branch of tip:
Commit-ID: e40e5298e692bb6b5a200b3f0f55e6e5adf0e5ad
Gitweb: https://git.kernel.org/tip/e40e5298e692bb6b5a200b3f0f55e6e5adf0e5ad
Author: Mark Rutland <[email protected]>
AuthorDate: Mon, 05 Jun 2023 08:01:12 +01:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Mon, 05 Jun 2023 09:57:18 +02:00
locking/atomic: scripts: remove leftover "${mult}"
We removed cmpxchg_double() and variants in commit:
b4cf83b2d1da40b2 ("arch: Remove cmpxchg_double")
Which removed the need for "${mult}" in the instrumentation logic.
Unfortunately we missed an instance of "${mult}".
There is no change to the generated header.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
scripts/atomic/gen-atomic-instrumented.sh | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/scripts/atomic/gen-atomic-instrumented.sh b/scripts/atomic/gen-atomic-instrumented.sh
index a2ef735..68557bf 100755
--- a/scripts/atomic/gen-atomic-instrumented.sh
+++ b/scripts/atomic/gen-atomic-instrumented.sh
@@ -118,7 +118,7 @@ cat <<EOF
EOF
[ -n "$kcsan_barrier" ] && printf "\t${kcsan_barrier}; \\\\\n"
cat <<EOF
- instrument_atomic_read_write(__ai_ptr, ${mult}sizeof(*__ai_ptr)); \\
+ instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \\
arch_${xchg}${order}(__ai_ptr, __VA_ARGS__); \\
})
EOF
The following commit has been merged into the locking/core branch of tip:
Commit-ID: 5bef003538ae8621c95ac6ebfd37324373fae37d
Gitweb: https://git.kernel.org/tip/5bef003538ae8621c95ac6ebfd37324373fae37d
Author: Mark Rutland <[email protected]>
AuthorDate: Mon, 05 Jun 2023 08:01:09 +01:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Mon, 05 Jun 2023 09:57:17 +02:00
locking/atomic: x86: add preprocessor symbols
Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of
RELAXED atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in terms
of FULL ordered atomics. Other atomics are optional, and don't exist in
some configurations (e.g. not all architectures implement the 128-bit
cmpxchg ops).
Subsequent patches will require that architectures define a preprocessor
symbol for any atomic (or ordering variant) which is optional. This will
make the fallback ifdeffery more robust, and simplify future changes.
Add the required definitions to arch/x86.
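The convention (sketched below for the cmpxchg128 case, mirroring the hunk
that follows) is to #define each arch-provided op to its own name, so that
the generic fallback ifdeffery can test for it with #if defined(...):
| static __always_inline u128 arch_cmpxchg128(volatile u128 *ptr, u128 old, u128 new)
| {
| 	return __arch_cmpxchg128(ptr, old, new, LOCK_PREFIX);
| }
| /* advertise the op to the generic fallback machinery */
| #define arch_cmpxchg128 arch_cmpxchg128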
Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
arch/x86/include/asm/cmpxchg_64.h | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/arch/x86/include/asm/cmpxchg_64.h b/arch/x86/include/asm/cmpxchg_64.h
index 3e6e3ee..44b08b5 100644
--- a/arch/x86/include/asm/cmpxchg_64.h
+++ b/arch/x86/include/asm/cmpxchg_64.h
@@ -45,11 +45,13 @@ static __always_inline u128 arch_cmpxchg128(volatile u128 *ptr, u128 old, u128 n
{
return __arch_cmpxchg128(ptr, old, new, LOCK_PREFIX);
}
+#define arch_cmpxchg128 arch_cmpxchg128
static __always_inline u128 arch_cmpxchg128_local(volatile u128 *ptr, u128 old, u128 new)
{
return __arch_cmpxchg128(ptr, old, new,);
}
+#define arch_cmpxchg128_local arch_cmpxchg128_local
#define __arch_try_cmpxchg128(_ptr, _oldp, _new, _lock) \
({ \
@@ -75,11 +77,13 @@ static __always_inline bool arch_try_cmpxchg128(volatile u128 *ptr, u128 *oldp,
{
return __arch_try_cmpxchg128(ptr, oldp, new, LOCK_PREFIX);
}
+#define arch_try_cmpxchg128 arch_try_cmpxchg128
static __always_inline bool arch_try_cmpxchg128_local(volatile u128 *ptr, u128 *oldp, u128 new)
{
return __arch_try_cmpxchg128(ptr, oldp, new,);
}
+#define arch_try_cmpxchg128_local arch_try_cmpxchg128_local
#define system_has_cmpxchg128() boot_cpu_has(X86_FEATURE_CX16)
The following commit has been merged into the locking/core branch of tip:
Commit-ID: e74f4059d11f36e936b08e98bc96f654c308807a
Gitweb: https://git.kernel.org/tip/e74f4059d11f36e936b08e98bc96f654c308807a
Author: Paul E. McKenney <[email protected]>
AuthorDate: Mon, 05 Jun 2023 08:01:23 +01:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Mon, 05 Jun 2023 09:57:24 +02:00
locking/atomic: docs: Add atomic operations to the driver basic API documentation
Add the generated atomic headers to driver-api/basics.rst in order to
provide documentation for the Linux kernel's atomic operations.
At the same time, drop the x86 atomic header, which provides kerneldoc
comments for some arch_atomic*_*() operations. The arch_atomic*_*()
operations are now purely an implementation detail of the
raw_atomic*_*() ops, and outside of implementing the atomics, code
should use the raw_atomic*_*() forms.
[Mark: add atomic-{instrumented,long}.h, update commit message]
Signed-off-by: Paul E. McKenney <[email protected]>
Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
Documentation/driver-api/basics.rst | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/Documentation/driver-api/basics.rst b/Documentation/driver-api/basics.rst
index 4b4d8e2..7671b53 100644
--- a/Documentation/driver-api/basics.rst
+++ b/Documentation/driver-api/basics.rst
@@ -84,7 +84,13 @@ Reference counting
Atomics
-------
-.. kernel-doc:: arch/x86/include/asm/atomic.h
+.. kernel-doc:: include/linux/atomic/atomic-instrumented.h
+ :internal:
+
+.. kernel-doc:: include/linux/atomic/atomic-arch-fallback.h
+ :internal:
+
+.. kernel-doc:: include/linux/atomic/atomic-long.h
:internal:
Kernel objects manipulation
The following commit has been merged into the locking/core branch of tip:
Commit-ID: 14d72d4b6f0e88b5f683c1a5b7a876a55055852d
Gitweb: https://git.kernel.org/tip/14d72d4b6f0e88b5f683c1a5b7a876a55055852d
Author: Mark Rutland <[email protected]>
AuthorDate: Mon, 05 Jun 2023 08:00:59 +01:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Mon, 05 Jun 2023 09:57:13 +02:00
locking/atomic: remove fallback comments
Currently a subset of the fallback templates have kerneldoc comments,
resulting in a haphazard set of generated kerneldoc comments as only
some operations have fallback templates to begin with.
We'd like to generate more consistent kerneldoc comments, and to do so
we'll need to restructure the way the fallback code is generated.
To minimize churn and to make it easier to restructure the fallback
code, this patch removes the existing kerneldoc comments from the
fallback templates. We can add new kerneldoc comments in subsequent
patches.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
include/linux/atomic/atomic-arch-fallback.h | 166 +-------------------
scripts/atomic/fallbacks/add_negative | 8 +-
scripts/atomic/fallbacks/add_unless | 9 +-
scripts/atomic/fallbacks/dec_and_test | 8 +-
scripts/atomic/fallbacks/fetch_add_unless | 9 +-
scripts/atomic/fallbacks/inc_and_test | 8 +-
scripts/atomic/fallbacks/inc_not_zero | 7 +-
scripts/atomic/fallbacks/sub_and_test | 9 +-
8 files changed, 1 insertion(+), 223 deletions(-)
diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index 1722ddb..3ce4cb5 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -1272,15 +1272,6 @@ arch_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
#endif /* arch_atomic_try_cmpxchg_relaxed */
#ifndef arch_atomic_sub_and_test
-/**
- * arch_atomic_sub_and_test - subtract value from variable and test result
- * @i: integer value to subtract
- * @v: pointer of type atomic_t
- *
- * Atomically subtracts @i from @v and returns
- * true if the result is zero, or false for all
- * other cases.
- */
static __always_inline bool
arch_atomic_sub_and_test(int i, atomic_t *v)
{
@@ -1290,14 +1281,6 @@ arch_atomic_sub_and_test(int i, atomic_t *v)
#endif
#ifndef arch_atomic_dec_and_test
-/**
- * arch_atomic_dec_and_test - decrement and test
- * @v: pointer of type atomic_t
- *
- * Atomically decrements @v by 1 and
- * returns true if the result is 0, or false for all other
- * cases.
- */
static __always_inline bool
arch_atomic_dec_and_test(atomic_t *v)
{
@@ -1307,14 +1290,6 @@ arch_atomic_dec_and_test(atomic_t *v)
#endif
#ifndef arch_atomic_inc_and_test
-/**
- * arch_atomic_inc_and_test - increment and test
- * @v: pointer of type atomic_t
- *
- * Atomically increments @v by 1
- * and returns true if the result is zero, or false for all
- * other cases.
- */
static __always_inline bool
arch_atomic_inc_and_test(atomic_t *v)
{
@@ -1331,14 +1306,6 @@ arch_atomic_inc_and_test(atomic_t *v)
#endif /* arch_atomic_add_negative */
#ifndef arch_atomic_add_negative
-/**
- * arch_atomic_add_negative - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
static __always_inline bool
arch_atomic_add_negative(int i, atomic_t *v)
{
@@ -1348,14 +1315,6 @@ arch_atomic_add_negative(int i, atomic_t *v)
#endif
#ifndef arch_atomic_add_negative_acquire
-/**
- * arch_atomic_add_negative_acquire - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
static __always_inline bool
arch_atomic_add_negative_acquire(int i, atomic_t *v)
{
@@ -1365,14 +1324,6 @@ arch_atomic_add_negative_acquire(int i, atomic_t *v)
#endif
#ifndef arch_atomic_add_negative_release
-/**
- * arch_atomic_add_negative_release - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
static __always_inline bool
arch_atomic_add_negative_release(int i, atomic_t *v)
{
@@ -1382,14 +1333,6 @@ arch_atomic_add_negative_release(int i, atomic_t *v)
#endif
#ifndef arch_atomic_add_negative_relaxed
-/**
- * arch_atomic_add_negative_relaxed - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
static __always_inline bool
arch_atomic_add_negative_relaxed(int i, atomic_t *v)
{
@@ -1437,15 +1380,6 @@ arch_atomic_add_negative(int i, atomic_t *v)
#endif /* arch_atomic_add_negative_relaxed */
#ifndef arch_atomic_fetch_add_unless
-/**
- * arch_atomic_fetch_add_unless - add unless the number is already a given value
- * @v: pointer of type atomic_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, so long as @v was not already @u.
- * Returns original value of @v
- */
static __always_inline int
arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
{
@@ -1462,15 +1396,6 @@ arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
#endif
#ifndef arch_atomic_add_unless
-/**
- * arch_atomic_add_unless - add unless the number is already a given value
- * @v: pointer of type atomic_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, if @v was not already @u.
- * Returns true if the addition was done.
- */
static __always_inline bool
arch_atomic_add_unless(atomic_t *v, int a, int u)
{
@@ -1480,13 +1405,6 @@ arch_atomic_add_unless(atomic_t *v, int a, int u)
#endif
#ifndef arch_atomic_inc_not_zero
-/**
- * arch_atomic_inc_not_zero - increment unless the number is zero
- * @v: pointer of type atomic_t
- *
- * Atomically increments @v by 1, if @v is non-zero.
- * Returns true if the increment was done.
- */
static __always_inline bool
arch_atomic_inc_not_zero(atomic_t *v)
{
@@ -2488,15 +2406,6 @@ arch_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
#endif /* arch_atomic64_try_cmpxchg_relaxed */
#ifndef arch_atomic64_sub_and_test
-/**
- * arch_atomic64_sub_and_test - subtract value from variable and test result
- * @i: integer value to subtract
- * @v: pointer of type atomic64_t
- *
- * Atomically subtracts @i from @v and returns
- * true if the result is zero, or false for all
- * other cases.
- */
static __always_inline bool
arch_atomic64_sub_and_test(s64 i, atomic64_t *v)
{
@@ -2506,14 +2415,6 @@ arch_atomic64_sub_and_test(s64 i, atomic64_t *v)
#endif
#ifndef arch_atomic64_dec_and_test
-/**
- * arch_atomic64_dec_and_test - decrement and test
- * @v: pointer of type atomic64_t
- *
- * Atomically decrements @v by 1 and
- * returns true if the result is 0, or false for all other
- * cases.
- */
static __always_inline bool
arch_atomic64_dec_and_test(atomic64_t *v)
{
@@ -2523,14 +2424,6 @@ arch_atomic64_dec_and_test(atomic64_t *v)
#endif
#ifndef arch_atomic64_inc_and_test
-/**
- * arch_atomic64_inc_and_test - increment and test
- * @v: pointer of type atomic64_t
- *
- * Atomically increments @v by 1
- * and returns true if the result is zero, or false for all
- * other cases.
- */
static __always_inline bool
arch_atomic64_inc_and_test(atomic64_t *v)
{
@@ -2547,14 +2440,6 @@ arch_atomic64_inc_and_test(atomic64_t *v)
#endif /* arch_atomic64_add_negative */
#ifndef arch_atomic64_add_negative
-/**
- * arch_atomic64_add_negative - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic64_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
static __always_inline bool
arch_atomic64_add_negative(s64 i, atomic64_t *v)
{
@@ -2564,14 +2449,6 @@ arch_atomic64_add_negative(s64 i, atomic64_t *v)
#endif
#ifndef arch_atomic64_add_negative_acquire
-/**
- * arch_atomic64_add_negative_acquire - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic64_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
static __always_inline bool
arch_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
{
@@ -2581,14 +2458,6 @@ arch_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
#endif
#ifndef arch_atomic64_add_negative_release
-/**
- * arch_atomic64_add_negative_release - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic64_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
static __always_inline bool
arch_atomic64_add_negative_release(s64 i, atomic64_t *v)
{
@@ -2598,14 +2467,6 @@ arch_atomic64_add_negative_release(s64 i, atomic64_t *v)
#endif
#ifndef arch_atomic64_add_negative_relaxed
-/**
- * arch_atomic64_add_negative_relaxed - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type atomic64_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
static __always_inline bool
arch_atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
{
@@ -2653,15 +2514,6 @@ arch_atomic64_add_negative(s64 i, atomic64_t *v)
#endif /* arch_atomic64_add_negative_relaxed */
#ifndef arch_atomic64_fetch_add_unless
-/**
- * arch_atomic64_fetch_add_unless - add unless the number is already a given value
- * @v: pointer of type atomic64_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, so long as @v was not already @u.
- * Returns original value of @v
- */
static __always_inline s64
arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
{
@@ -2678,15 +2530,6 @@ arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
#endif
#ifndef arch_atomic64_add_unless
-/**
- * arch_atomic64_add_unless - add unless the number is already a given value
- * @v: pointer of type atomic64_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, if @v was not already @u.
- * Returns true if the addition was done.
- */
static __always_inline bool
arch_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
{
@@ -2696,13 +2539,6 @@ arch_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
#endif
#ifndef arch_atomic64_inc_not_zero
-/**
- * arch_atomic64_inc_not_zero - increment unless the number is zero
- * @v: pointer of type atomic64_t
- *
- * Atomically increments @v by 1, if @v is non-zero.
- * Returns true if the increment was done.
- */
static __always_inline bool
arch_atomic64_inc_not_zero(atomic64_t *v)
{
@@ -2761,4 +2597,4 @@ arch_atomic64_dec_if_positive(atomic64_t *v)
#endif
#endif /* _LINUX_ATOMIC_FALLBACK_H */
-// 52dfc6fe4a2e7234bbd2aa3e16a377c1db793a53
+// 9f0fd6ed53267c6ec64e36cd18e6fd8df57ea277
diff --git a/scripts/atomic/fallbacks/add_negative b/scripts/atomic/fallbacks/add_negative
index e5980ab..d0bd2df 100755
--- a/scripts/atomic/fallbacks/add_negative
+++ b/scripts/atomic/fallbacks/add_negative
@@ -1,12 +1,4 @@
cat <<EOF
-/**
- * arch_${atomic}_add_negative${order} - Add and test if negative
- * @i: integer value to add
- * @v: pointer of type ${atomic}_t
- *
- * Atomically adds @i to @v and returns true if the result is negative,
- * or false when the result is greater than or equal to zero.
- */
static __always_inline bool
arch_${atomic}_add_negative${order}(${int} i, ${atomic}_t *v)
{
diff --git a/scripts/atomic/fallbacks/add_unless b/scripts/atomic/fallbacks/add_unless
index 9e5159c..cf79b9d 100755
--- a/scripts/atomic/fallbacks/add_unless
+++ b/scripts/atomic/fallbacks/add_unless
@@ -1,13 +1,4 @@
cat << EOF
-/**
- * arch_${atomic}_add_unless - add unless the number is already a given value
- * @v: pointer of type ${atomic}_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, if @v was not already @u.
- * Returns true if the addition was done.
- */
static __always_inline bool
arch_${atomic}_add_unless(${atomic}_t *v, ${int} a, ${int} u)
{
diff --git a/scripts/atomic/fallbacks/dec_and_test b/scripts/atomic/fallbacks/dec_and_test
index 8549f35..3f6b6a8 100755
--- a/scripts/atomic/fallbacks/dec_and_test
+++ b/scripts/atomic/fallbacks/dec_and_test
@@ -1,12 +1,4 @@
cat <<EOF
-/**
- * arch_${atomic}_dec_and_test - decrement and test
- * @v: pointer of type ${atomic}_t
- *
- * Atomically decrements @v by 1 and
- * returns true if the result is 0, or false for all other
- * cases.
- */
static __always_inline bool
arch_${atomic}_dec_and_test(${atomic}_t *v)
{
diff --git a/scripts/atomic/fallbacks/fetch_add_unless b/scripts/atomic/fallbacks/fetch_add_unless
index 68ce13c..81d2834 100755
--- a/scripts/atomic/fallbacks/fetch_add_unless
+++ b/scripts/atomic/fallbacks/fetch_add_unless
@@ -1,13 +1,4 @@
cat << EOF
-/**
- * arch_${atomic}_fetch_add_unless - add unless the number is already a given value
- * @v: pointer of type ${atomic}_t
- * @a: the amount to add to v...
- * @u: ...unless v is equal to u.
- *
- * Atomically adds @a to @v, so long as @v was not already @u.
- * Returns original value of @v
- */
static __always_inline ${int}
arch_${atomic}_fetch_add_unless(${atomic}_t *v, ${int} a, ${int} u)
{
diff --git a/scripts/atomic/fallbacks/inc_and_test b/scripts/atomic/fallbacks/inc_and_test
index 0cf23fe..c726a6d 100755
--- a/scripts/atomic/fallbacks/inc_and_test
+++ b/scripts/atomic/fallbacks/inc_and_test
@@ -1,12 +1,4 @@
cat <<EOF
-/**
- * arch_${atomic}_inc_and_test - increment and test
- * @v: pointer of type ${atomic}_t
- *
- * Atomically increments @v by 1
- * and returns true if the result is zero, or false for all
- * other cases.
- */
static __always_inline bool
arch_${atomic}_inc_and_test(${atomic}_t *v)
{
diff --git a/scripts/atomic/fallbacks/inc_not_zero b/scripts/atomic/fallbacks/inc_not_zero
index ed8a1f5..9760359 100755
--- a/scripts/atomic/fallbacks/inc_not_zero
+++ b/scripts/atomic/fallbacks/inc_not_zero
@@ -1,11 +1,4 @@
cat <<EOF
-/**
- * arch_${atomic}_inc_not_zero - increment unless the number is zero
- * @v: pointer of type ${atomic}_t
- *
- * Atomically increments @v by 1, if @v is non-zero.
- * Returns true if the increment was done.
- */
static __always_inline bool
arch_${atomic}_inc_not_zero(${atomic}_t *v)
{
diff --git a/scripts/atomic/fallbacks/sub_and_test b/scripts/atomic/fallbacks/sub_and_test
index 260f373..da8a049 100755
--- a/scripts/atomic/fallbacks/sub_and_test
+++ b/scripts/atomic/fallbacks/sub_and_test
@@ -1,13 +1,4 @@
cat <<EOF
-/**
- * arch_${atomic}_sub_and_test - subtract value from variable and test result
- * @i: integer value to subtract
- * @v: pointer of type ${atomic}_t
- *
- * Atomically subtracts @i from @v and returns
- * true if the result is zero, or false for all
- * other cases.
- */
static __always_inline bool
arch_${atomic}_sub_and_test(${int} i, ${atomic}_t *v)
{
The following commit has been merged into the locking/core branch of tip:
Commit-ID: e50f06ce2d876c740993b5e3d01e203520391ccd
Gitweb: https://git.kernel.org/tip/e50f06ce2d876c740993b5e3d01e203520391ccd
Author: Mark Rutland <[email protected]>
AuthorDate: Mon, 05 Jun 2023 08:01:05 +01:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Mon, 05 Jun 2023 09:57:16 +02:00
locking/atomic: m68k: add preprocessor symbols
Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of
RELAXED atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in terms
of FULL ordered atomics. Other atomics are optional, and don't exist in
some configurations (e.g. not all architectures implement the 128-bit
cmpxchg ops).
Subsequent patches will require that architectures define a preprocessor
symbol for any atomic (or ordering variant) which is optional. This will
make the fallback ifdeffery more robust, and simplify future changes.
Add the required definitions to arch/m68k.
Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
arch/m68k/include/asm/atomic.h | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/arch/m68k/include/asm/atomic.h b/arch/m68k/include/asm/atomic.h
index 190a032..4bfbc25 100644
--- a/arch/m68k/include/asm/atomic.h
+++ b/arch/m68k/include/asm/atomic.h
@@ -106,6 +106,11 @@ static inline int arch_atomic_fetch_##op(int i, atomic_t * v) \
ATOMIC_OPS(add, +=, add)
ATOMIC_OPS(sub, -=, sub)
+#define arch_atomic_add_return arch_atomic_add_return
+#define arch_atomic_sub_return arch_atomic_sub_return
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+
#undef ATOMIC_OPS
#define ATOMIC_OPS(op, c_op, asm_op) \
ATOMIC_OP(op, c_op, asm_op) \
@@ -115,6 +120,10 @@ ATOMIC_OPS(and, &=, and)
ATOMIC_OPS(or, |=, or)
ATOMIC_OPS(xor, ^=, eor)
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
#undef ATOMIC_OPS
#undef ATOMIC_FETCH_OP
#undef ATOMIC_OP_RETURN
The following commit has been merged into the locking/core branch of tip:
Commit-ID: c9268ac615f9f6dded7801df5993374598934377
Gitweb: https://git.kernel.org/tip/c9268ac615f9f6dded7801df5993374598934377
Author: Mark Rutland <[email protected]>
AuthorDate: Mon, 05 Jun 2023 08:01:14 +01:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Mon, 05 Jun 2023 09:57:19 +02:00
locking/atomic: scripts: add trivial raw_atomic*_<op>()
Currently a number of arch_atomic*_<op>() functions are optional, and
where an arch does not provide a given arch_atomic*_<op>() we will
define an implementation of arch_atomic*_<op>() in
atomic-arch-fallback.h.
Filling in the missing ops requires special care as we want to select
the optimal definition of each op (e.g. preferentially defining ops in
terms of their relaxed form rather than their fully-ordered form). The
ifdeffery necessary for this requires us to group ordering variants
together, which can be a bit painful to read, and is painful for
kerneldoc generation.
It would be easier to handle this if we generated ops into a separate
namespace, as this would remove the need to take special care with the
ifdeffery, and allow each ordering variant to be generated separately.
This patch adds a new set of raw_atomic_<op>() definitions, which are
currently trivial wrappers of their arch_atomic_<op>() equivalent. This
will allow us to move treewide users of arch_atomic_<op>() over to the raw
atomic ops before we rework the fallback generation to generate
raw_atomic_<op>() directly.
There should be no functional change as a result of this patch.
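For reference, each generated wrapper is a one-line forward to the arch op; a
representative sketch of the shape of the generated code (not a verbatim
excerpt from atomic-raw.h):
| static __always_inline int
| raw_atomic_add_return(int i, atomic_t *v)
| {
|         return arch_atomic_add_return(i, v);
| }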
Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
include/linux/atomic.h | 1 +-
include/linux/atomic/atomic-instrumented.h | 595 +++----
include/linux/atomic/atomic-raw.h | 1645 +++++++++++++++++++-
scripts/atomic/gen-atomic-instrumented.sh | 19 +-
scripts/atomic/gen-atomic-raw.sh | 84 +-
scripts/atomic/gen-atomics.sh | 1 +-
6 files changed, 2033 insertions(+), 312 deletions(-)
create mode 100644 include/linux/atomic/atomic-raw.h
create mode 100644 scripts/atomic/gen-atomic-raw.sh
diff --git a/include/linux/atomic.h b/include/linux/atomic.h
index 8dd57c3..127f5dc 100644
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -79,6 +79,7 @@
#include <linux/atomic/atomic-arch-fallback.h>
#include <linux/atomic/atomic-long.h>
+#include <linux/atomic/atomic-raw.h>
#include <linux/atomic/atomic-instrumented.h>
#endif /* _LINUX_ATOMIC_H */
diff --git a/include/linux/atomic/atomic-instrumented.h b/include/linux/atomic/atomic-instrumented.h
index a55b5b7..90ee2f5 100644
--- a/include/linux/atomic/atomic-instrumented.h
+++ b/include/linux/atomic/atomic-instrumented.h
@@ -4,15 +4,10 @@
// DO NOT MODIFY THIS FILE DIRECTLY
/*
- * This file provides wrappers with KASAN instrumentation for atomic operations.
- * To use this functionality an arch's atomic.h file needs to define all
- * atomic operations with arch_ prefix (e.g. arch_atomic_read()) and include
- * this file at the end. This file provides atomic_read() that forwards to
- * arch_atomic_read() for actual atomic operation.
- * Note: if an arch atomic operation is implemented by means of other atomic
- * operations (e.g. atomic_read()/atomic_cmpxchg() loop), then it needs to use
- * arch_ variants (i.e. arch_atomic_read()/arch_atomic_cmpxchg()) to avoid
- * double instrumentation.
+ * This file provides atomic operations with explicit instrumentation (e.g.
+ * KASAN, KCSAN), which should be used unless it is necessary to avoid
+ * instrumentation. Where it is necessary to avoid instrumentation, the
+ * raw_atomic*() operations should be used.
*/
#ifndef _LINUX_ATOMIC_INSTRUMENTED_H
#define _LINUX_ATOMIC_INSTRUMENTED_H
@@ -25,21 +20,21 @@ static __always_inline int
atomic_read(const atomic_t *v)
{
instrument_atomic_read(v, sizeof(*v));
- return arch_atomic_read(v);
+ return raw_atomic_read(v);
}
static __always_inline int
atomic_read_acquire(const atomic_t *v)
{
instrument_atomic_read(v, sizeof(*v));
- return arch_atomic_read_acquire(v);
+ return raw_atomic_read_acquire(v);
}
static __always_inline void
atomic_set(atomic_t *v, int i)
{
instrument_atomic_write(v, sizeof(*v));
- arch_atomic_set(v, i);
+ raw_atomic_set(v, i);
}
static __always_inline void
@@ -47,14 +42,14 @@ atomic_set_release(atomic_t *v, int i)
{
kcsan_release();
instrument_atomic_write(v, sizeof(*v));
- arch_atomic_set_release(v, i);
+ raw_atomic_set_release(v, i);
}
static __always_inline void
atomic_add(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic_add(i, v);
+ raw_atomic_add(i, v);
}
static __always_inline int
@@ -62,14 +57,14 @@ atomic_add_return(int i, atomic_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_add_return(i, v);
+ return raw_atomic_add_return(i, v);
}
static __always_inline int
atomic_add_return_acquire(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_add_return_acquire(i, v);
+ return raw_atomic_add_return_acquire(i, v);
}
static __always_inline int
@@ -77,14 +72,14 @@ atomic_add_return_release(int i, atomic_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_add_return_release(i, v);
+ return raw_atomic_add_return_release(i, v);
}
static __always_inline int
atomic_add_return_relaxed(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_add_return_relaxed(i, v);
+ return raw_atomic_add_return_relaxed(i, v);
}
static __always_inline int
@@ -92,14 +87,14 @@ atomic_fetch_add(int i, atomic_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_add(i, v);
+ return raw_atomic_fetch_add(i, v);
}
static __always_inline int
atomic_fetch_add_acquire(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_add_acquire(i, v);
+ return raw_atomic_fetch_add_acquire(i, v);
}
static __always_inline int
@@ -107,21 +102,21 @@ atomic_fetch_add_release(int i, atomic_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_add_release(i, v);
+ return raw_atomic_fetch_add_release(i, v);
}
static __always_inline int
atomic_fetch_add_relaxed(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_add_relaxed(i, v);
+ return raw_atomic_fetch_add_relaxed(i, v);
}
static __always_inline void
atomic_sub(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic_sub(i, v);
+ raw_atomic_sub(i, v);
}
static __always_inline int
@@ -129,14 +124,14 @@ atomic_sub_return(int i, atomic_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_sub_return(i, v);
+ return raw_atomic_sub_return(i, v);
}
static __always_inline int
atomic_sub_return_acquire(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_sub_return_acquire(i, v);
+ return raw_atomic_sub_return_acquire(i, v);
}
static __always_inline int
@@ -144,14 +139,14 @@ atomic_sub_return_release(int i, atomic_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_sub_return_release(i, v);
+ return raw_atomic_sub_return_release(i, v);
}
static __always_inline int
atomic_sub_return_relaxed(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_sub_return_relaxed(i, v);
+ return raw_atomic_sub_return_relaxed(i, v);
}
static __always_inline int
@@ -159,14 +154,14 @@ atomic_fetch_sub(int i, atomic_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_sub(i, v);
+ return raw_atomic_fetch_sub(i, v);
}
static __always_inline int
atomic_fetch_sub_acquire(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_sub_acquire(i, v);
+ return raw_atomic_fetch_sub_acquire(i, v);
}
static __always_inline int
@@ -174,21 +169,21 @@ atomic_fetch_sub_release(int i, atomic_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_sub_release(i, v);
+ return raw_atomic_fetch_sub_release(i, v);
}
static __always_inline int
atomic_fetch_sub_relaxed(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_sub_relaxed(i, v);
+ return raw_atomic_fetch_sub_relaxed(i, v);
}
static __always_inline void
atomic_inc(atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic_inc(v);
+ raw_atomic_inc(v);
}
static __always_inline int
@@ -196,14 +191,14 @@ atomic_inc_return(atomic_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_inc_return(v);
+ return raw_atomic_inc_return(v);
}
static __always_inline int
atomic_inc_return_acquire(atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_inc_return_acquire(v);
+ return raw_atomic_inc_return_acquire(v);
}
static __always_inline int
@@ -211,14 +206,14 @@ atomic_inc_return_release(atomic_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_inc_return_release(v);
+ return raw_atomic_inc_return_release(v);
}
static __always_inline int
atomic_inc_return_relaxed(atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_inc_return_relaxed(v);
+ return raw_atomic_inc_return_relaxed(v);
}
static __always_inline int
@@ -226,14 +221,14 @@ atomic_fetch_inc(atomic_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_inc(v);
+ return raw_atomic_fetch_inc(v);
}
static __always_inline int
atomic_fetch_inc_acquire(atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_inc_acquire(v);
+ return raw_atomic_fetch_inc_acquire(v);
}
static __always_inline int
@@ -241,21 +236,21 @@ atomic_fetch_inc_release(atomic_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_inc_release(v);
+ return raw_atomic_fetch_inc_release(v);
}
static __always_inline int
atomic_fetch_inc_relaxed(atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_inc_relaxed(v);
+ return raw_atomic_fetch_inc_relaxed(v);
}
static __always_inline void
atomic_dec(atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic_dec(v);
+ raw_atomic_dec(v);
}
static __always_inline int
@@ -263,14 +258,14 @@ atomic_dec_return(atomic_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_dec_return(v);
+ return raw_atomic_dec_return(v);
}
static __always_inline int
atomic_dec_return_acquire(atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_dec_return_acquire(v);
+ return raw_atomic_dec_return_acquire(v);
}
static __always_inline int
@@ -278,14 +273,14 @@ atomic_dec_return_release(atomic_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_dec_return_release(v);
+ return raw_atomic_dec_return_release(v);
}
static __always_inline int
atomic_dec_return_relaxed(atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_dec_return_relaxed(v);
+ return raw_atomic_dec_return_relaxed(v);
}
static __always_inline int
@@ -293,14 +288,14 @@ atomic_fetch_dec(atomic_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_dec(v);
+ return raw_atomic_fetch_dec(v);
}
static __always_inline int
atomic_fetch_dec_acquire(atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_dec_acquire(v);
+ return raw_atomic_fetch_dec_acquire(v);
}
static __always_inline int
@@ -308,21 +303,21 @@ atomic_fetch_dec_release(atomic_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_dec_release(v);
+ return raw_atomic_fetch_dec_release(v);
}
static __always_inline int
atomic_fetch_dec_relaxed(atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_dec_relaxed(v);
+ return raw_atomic_fetch_dec_relaxed(v);
}
static __always_inline void
atomic_and(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic_and(i, v);
+ raw_atomic_and(i, v);
}
static __always_inline int
@@ -330,14 +325,14 @@ atomic_fetch_and(int i, atomic_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_and(i, v);
+ return raw_atomic_fetch_and(i, v);
}
static __always_inline int
atomic_fetch_and_acquire(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_and_acquire(i, v);
+ return raw_atomic_fetch_and_acquire(i, v);
}
static __always_inline int
@@ -345,21 +340,21 @@ atomic_fetch_and_release(int i, atomic_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_and_release(i, v);
+ return raw_atomic_fetch_and_release(i, v);
}
static __always_inline int
atomic_fetch_and_relaxed(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_and_relaxed(i, v);
+ return raw_atomic_fetch_and_relaxed(i, v);
}
static __always_inline void
atomic_andnot(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic_andnot(i, v);
+ raw_atomic_andnot(i, v);
}
static __always_inline int
@@ -367,14 +362,14 @@ atomic_fetch_andnot(int i, atomic_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_andnot(i, v);
+ return raw_atomic_fetch_andnot(i, v);
}
static __always_inline int
atomic_fetch_andnot_acquire(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_andnot_acquire(i, v);
+ return raw_atomic_fetch_andnot_acquire(i, v);
}
static __always_inline int
@@ -382,21 +377,21 @@ atomic_fetch_andnot_release(int i, atomic_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_andnot_release(i, v);
+ return raw_atomic_fetch_andnot_release(i, v);
}
static __always_inline int
atomic_fetch_andnot_relaxed(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_andnot_relaxed(i, v);
+ return raw_atomic_fetch_andnot_relaxed(i, v);
}
static __always_inline void
atomic_or(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic_or(i, v);
+ raw_atomic_or(i, v);
}
static __always_inline int
@@ -404,14 +399,14 @@ atomic_fetch_or(int i, atomic_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_or(i, v);
+ return raw_atomic_fetch_or(i, v);
}
static __always_inline int
atomic_fetch_or_acquire(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_or_acquire(i, v);
+ return raw_atomic_fetch_or_acquire(i, v);
}
static __always_inline int
@@ -419,21 +414,21 @@ atomic_fetch_or_release(int i, atomic_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_or_release(i, v);
+ return raw_atomic_fetch_or_release(i, v);
}
static __always_inline int
atomic_fetch_or_relaxed(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_or_relaxed(i, v);
+ return raw_atomic_fetch_or_relaxed(i, v);
}
static __always_inline void
atomic_xor(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic_xor(i, v);
+ raw_atomic_xor(i, v);
}
static __always_inline int
@@ -441,14 +436,14 @@ atomic_fetch_xor(int i, atomic_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_xor(i, v);
+ return raw_atomic_fetch_xor(i, v);
}
static __always_inline int
atomic_fetch_xor_acquire(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_xor_acquire(i, v);
+ return raw_atomic_fetch_xor_acquire(i, v);
}
static __always_inline int
@@ -456,14 +451,14 @@ atomic_fetch_xor_release(int i, atomic_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_xor_release(i, v);
+ return raw_atomic_fetch_xor_release(i, v);
}
static __always_inline int
atomic_fetch_xor_relaxed(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_xor_relaxed(i, v);
+ return raw_atomic_fetch_xor_relaxed(i, v);
}
static __always_inline int
@@ -471,14 +466,14 @@ atomic_xchg(atomic_t *v, int i)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_xchg(v, i);
+ return raw_atomic_xchg(v, i);
}
static __always_inline int
atomic_xchg_acquire(atomic_t *v, int i)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_xchg_acquire(v, i);
+ return raw_atomic_xchg_acquire(v, i);
}
static __always_inline int
@@ -486,14 +481,14 @@ atomic_xchg_release(atomic_t *v, int i)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_xchg_release(v, i);
+ return raw_atomic_xchg_release(v, i);
}
static __always_inline int
atomic_xchg_relaxed(atomic_t *v, int i)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_xchg_relaxed(v, i);
+ return raw_atomic_xchg_relaxed(v, i);
}
static __always_inline int
@@ -501,14 +496,14 @@ atomic_cmpxchg(atomic_t *v, int old, int new)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_cmpxchg(v, old, new);
+ return raw_atomic_cmpxchg(v, old, new);
}
static __always_inline int
atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_cmpxchg_acquire(v, old, new);
+ return raw_atomic_cmpxchg_acquire(v, old, new);
}
static __always_inline int
@@ -516,14 +511,14 @@ atomic_cmpxchg_release(atomic_t *v, int old, int new)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_cmpxchg_release(v, old, new);
+ return raw_atomic_cmpxchg_release(v, old, new);
}
static __always_inline int
atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_cmpxchg_relaxed(v, old, new);
+ return raw_atomic_cmpxchg_relaxed(v, old, new);
}
static __always_inline bool
@@ -532,7 +527,7 @@ atomic_try_cmpxchg(atomic_t *v, int *old, int new)
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
instrument_atomic_read_write(old, sizeof(*old));
- return arch_atomic_try_cmpxchg(v, old, new);
+ return raw_atomic_try_cmpxchg(v, old, new);
}
static __always_inline bool
@@ -540,7 +535,7 @@ atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
{
instrument_atomic_read_write(v, sizeof(*v));
instrument_atomic_read_write(old, sizeof(*old));
- return arch_atomic_try_cmpxchg_acquire(v, old, new);
+ return raw_atomic_try_cmpxchg_acquire(v, old, new);
}
static __always_inline bool
@@ -549,7 +544,7 @@ atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
instrument_atomic_read_write(old, sizeof(*old));
- return arch_atomic_try_cmpxchg_release(v, old, new);
+ return raw_atomic_try_cmpxchg_release(v, old, new);
}
static __always_inline bool
@@ -557,7 +552,7 @@ atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
{
instrument_atomic_read_write(v, sizeof(*v));
instrument_atomic_read_write(old, sizeof(*old));
- return arch_atomic_try_cmpxchg_relaxed(v, old, new);
+ return raw_atomic_try_cmpxchg_relaxed(v, old, new);
}
static __always_inline bool
@@ -565,7 +560,7 @@ atomic_sub_and_test(int i, atomic_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_sub_and_test(i, v);
+ return raw_atomic_sub_and_test(i, v);
}
static __always_inline bool
@@ -573,7 +568,7 @@ atomic_dec_and_test(atomic_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_dec_and_test(v);
+ return raw_atomic_dec_and_test(v);
}
static __always_inline bool
@@ -581,7 +576,7 @@ atomic_inc_and_test(atomic_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_inc_and_test(v);
+ return raw_atomic_inc_and_test(v);
}
static __always_inline bool
@@ -589,14 +584,14 @@ atomic_add_negative(int i, atomic_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_add_negative(i, v);
+ return raw_atomic_add_negative(i, v);
}
static __always_inline bool
atomic_add_negative_acquire(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_add_negative_acquire(i, v);
+ return raw_atomic_add_negative_acquire(i, v);
}
static __always_inline bool
@@ -604,14 +599,14 @@ atomic_add_negative_release(int i, atomic_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_add_negative_release(i, v);
+ return raw_atomic_add_negative_release(i, v);
}
static __always_inline bool
atomic_add_negative_relaxed(int i, atomic_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_add_negative_relaxed(i, v);
+ return raw_atomic_add_negative_relaxed(i, v);
}
static __always_inline int
@@ -619,7 +614,7 @@ atomic_fetch_add_unless(atomic_t *v, int a, int u)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_fetch_add_unless(v, a, u);
+ return raw_atomic_fetch_add_unless(v, a, u);
}
static __always_inline bool
@@ -627,7 +622,7 @@ atomic_add_unless(atomic_t *v, int a, int u)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_add_unless(v, a, u);
+ return raw_atomic_add_unless(v, a, u);
}
static __always_inline bool
@@ -635,7 +630,7 @@ atomic_inc_not_zero(atomic_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_inc_not_zero(v);
+ return raw_atomic_inc_not_zero(v);
}
static __always_inline bool
@@ -643,7 +638,7 @@ atomic_inc_unless_negative(atomic_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_inc_unless_negative(v);
+ return raw_atomic_inc_unless_negative(v);
}
static __always_inline bool
@@ -651,7 +646,7 @@ atomic_dec_unless_positive(atomic_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_dec_unless_positive(v);
+ return raw_atomic_dec_unless_positive(v);
}
static __always_inline int
@@ -659,28 +654,28 @@ atomic_dec_if_positive(atomic_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_dec_if_positive(v);
+ return raw_atomic_dec_if_positive(v);
}
static __always_inline s64
atomic64_read(const atomic64_t *v)
{
instrument_atomic_read(v, sizeof(*v));
- return arch_atomic64_read(v);
+ return raw_atomic64_read(v);
}
static __always_inline s64
atomic64_read_acquire(const atomic64_t *v)
{
instrument_atomic_read(v, sizeof(*v));
- return arch_atomic64_read_acquire(v);
+ return raw_atomic64_read_acquire(v);
}
static __always_inline void
atomic64_set(atomic64_t *v, s64 i)
{
instrument_atomic_write(v, sizeof(*v));
- arch_atomic64_set(v, i);
+ raw_atomic64_set(v, i);
}
static __always_inline void
@@ -688,14 +683,14 @@ atomic64_set_release(atomic64_t *v, s64 i)
{
kcsan_release();
instrument_atomic_write(v, sizeof(*v));
- arch_atomic64_set_release(v, i);
+ raw_atomic64_set_release(v, i);
}
static __always_inline void
atomic64_add(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic64_add(i, v);
+ raw_atomic64_add(i, v);
}
static __always_inline s64
@@ -703,14 +698,14 @@ atomic64_add_return(s64 i, atomic64_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_add_return(i, v);
+ return raw_atomic64_add_return(i, v);
}
static __always_inline s64
atomic64_add_return_acquire(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_add_return_acquire(i, v);
+ return raw_atomic64_add_return_acquire(i, v);
}
static __always_inline s64
@@ -718,14 +713,14 @@ atomic64_add_return_release(s64 i, atomic64_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_add_return_release(i, v);
+ return raw_atomic64_add_return_release(i, v);
}
static __always_inline s64
atomic64_add_return_relaxed(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_add_return_relaxed(i, v);
+ return raw_atomic64_add_return_relaxed(i, v);
}
static __always_inline s64
@@ -733,14 +728,14 @@ atomic64_fetch_add(s64 i, atomic64_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_add(i, v);
+ return raw_atomic64_fetch_add(i, v);
}
static __always_inline s64
atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_add_acquire(i, v);
+ return raw_atomic64_fetch_add_acquire(i, v);
}
static __always_inline s64
@@ -748,21 +743,21 @@ atomic64_fetch_add_release(s64 i, atomic64_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_add_release(i, v);
+ return raw_atomic64_fetch_add_release(i, v);
}
static __always_inline s64
atomic64_fetch_add_relaxed(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_add_relaxed(i, v);
+ return raw_atomic64_fetch_add_relaxed(i, v);
}
static __always_inline void
atomic64_sub(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic64_sub(i, v);
+ raw_atomic64_sub(i, v);
}
static __always_inline s64
@@ -770,14 +765,14 @@ atomic64_sub_return(s64 i, atomic64_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_sub_return(i, v);
+ return raw_atomic64_sub_return(i, v);
}
static __always_inline s64
atomic64_sub_return_acquire(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_sub_return_acquire(i, v);
+ return raw_atomic64_sub_return_acquire(i, v);
}
static __always_inline s64
@@ -785,14 +780,14 @@ atomic64_sub_return_release(s64 i, atomic64_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_sub_return_release(i, v);
+ return raw_atomic64_sub_return_release(i, v);
}
static __always_inline s64
atomic64_sub_return_relaxed(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_sub_return_relaxed(i, v);
+ return raw_atomic64_sub_return_relaxed(i, v);
}
static __always_inline s64
@@ -800,14 +795,14 @@ atomic64_fetch_sub(s64 i, atomic64_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_sub(i, v);
+ return raw_atomic64_fetch_sub(i, v);
}
static __always_inline s64
atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_sub_acquire(i, v);
+ return raw_atomic64_fetch_sub_acquire(i, v);
}
static __always_inline s64
@@ -815,21 +810,21 @@ atomic64_fetch_sub_release(s64 i, atomic64_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_sub_release(i, v);
+ return raw_atomic64_fetch_sub_release(i, v);
}
static __always_inline s64
atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_sub_relaxed(i, v);
+ return raw_atomic64_fetch_sub_relaxed(i, v);
}
static __always_inline void
atomic64_inc(atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic64_inc(v);
+ raw_atomic64_inc(v);
}
static __always_inline s64
@@ -837,14 +832,14 @@ atomic64_inc_return(atomic64_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_inc_return(v);
+ return raw_atomic64_inc_return(v);
}
static __always_inline s64
atomic64_inc_return_acquire(atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_inc_return_acquire(v);
+ return raw_atomic64_inc_return_acquire(v);
}
static __always_inline s64
@@ -852,14 +847,14 @@ atomic64_inc_return_release(atomic64_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_inc_return_release(v);
+ return raw_atomic64_inc_return_release(v);
}
static __always_inline s64
atomic64_inc_return_relaxed(atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_inc_return_relaxed(v);
+ return raw_atomic64_inc_return_relaxed(v);
}
static __always_inline s64
@@ -867,14 +862,14 @@ atomic64_fetch_inc(atomic64_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_inc(v);
+ return raw_atomic64_fetch_inc(v);
}
static __always_inline s64
atomic64_fetch_inc_acquire(atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_inc_acquire(v);
+ return raw_atomic64_fetch_inc_acquire(v);
}
static __always_inline s64
@@ -882,21 +877,21 @@ atomic64_fetch_inc_release(atomic64_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_inc_release(v);
+ return raw_atomic64_fetch_inc_release(v);
}
static __always_inline s64
atomic64_fetch_inc_relaxed(atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_inc_relaxed(v);
+ return raw_atomic64_fetch_inc_relaxed(v);
}
static __always_inline void
atomic64_dec(atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic64_dec(v);
+ raw_atomic64_dec(v);
}
static __always_inline s64
@@ -904,14 +899,14 @@ atomic64_dec_return(atomic64_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_dec_return(v);
+ return raw_atomic64_dec_return(v);
}
static __always_inline s64
atomic64_dec_return_acquire(atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_dec_return_acquire(v);
+ return raw_atomic64_dec_return_acquire(v);
}
static __always_inline s64
@@ -919,14 +914,14 @@ atomic64_dec_return_release(atomic64_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_dec_return_release(v);
+ return raw_atomic64_dec_return_release(v);
}
static __always_inline s64
atomic64_dec_return_relaxed(atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_dec_return_relaxed(v);
+ return raw_atomic64_dec_return_relaxed(v);
}
static __always_inline s64
@@ -934,14 +929,14 @@ atomic64_fetch_dec(atomic64_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_dec(v);
+ return raw_atomic64_fetch_dec(v);
}
static __always_inline s64
atomic64_fetch_dec_acquire(atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_dec_acquire(v);
+ return raw_atomic64_fetch_dec_acquire(v);
}
static __always_inline s64
@@ -949,21 +944,21 @@ atomic64_fetch_dec_release(atomic64_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_dec_release(v);
+ return raw_atomic64_fetch_dec_release(v);
}
static __always_inline s64
atomic64_fetch_dec_relaxed(atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_dec_relaxed(v);
+ return raw_atomic64_fetch_dec_relaxed(v);
}
static __always_inline void
atomic64_and(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic64_and(i, v);
+ raw_atomic64_and(i, v);
}
static __always_inline s64
@@ -971,14 +966,14 @@ atomic64_fetch_and(s64 i, atomic64_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_and(i, v);
+ return raw_atomic64_fetch_and(i, v);
}
static __always_inline s64
atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_and_acquire(i, v);
+ return raw_atomic64_fetch_and_acquire(i, v);
}
static __always_inline s64
@@ -986,21 +981,21 @@ atomic64_fetch_and_release(s64 i, atomic64_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_and_release(i, v);
+ return raw_atomic64_fetch_and_release(i, v);
}
static __always_inline s64
atomic64_fetch_and_relaxed(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_and_relaxed(i, v);
+ return raw_atomic64_fetch_and_relaxed(i, v);
}
static __always_inline void
atomic64_andnot(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic64_andnot(i, v);
+ raw_atomic64_andnot(i, v);
}
static __always_inline s64
@@ -1008,14 +1003,14 @@ atomic64_fetch_andnot(s64 i, atomic64_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_andnot(i, v);
+ return raw_atomic64_fetch_andnot(i, v);
}
static __always_inline s64
atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_andnot_acquire(i, v);
+ return raw_atomic64_fetch_andnot_acquire(i, v);
}
static __always_inline s64
@@ -1023,21 +1018,21 @@ atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_andnot_release(i, v);
+ return raw_atomic64_fetch_andnot_release(i, v);
}
static __always_inline s64
atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_andnot_relaxed(i, v);
+ return raw_atomic64_fetch_andnot_relaxed(i, v);
}
static __always_inline void
atomic64_or(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic64_or(i, v);
+ raw_atomic64_or(i, v);
}
static __always_inline s64
@@ -1045,14 +1040,14 @@ atomic64_fetch_or(s64 i, atomic64_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_or(i, v);
+ return raw_atomic64_fetch_or(i, v);
}
static __always_inline s64
atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_or_acquire(i, v);
+ return raw_atomic64_fetch_or_acquire(i, v);
}
static __always_inline s64
@@ -1060,21 +1055,21 @@ atomic64_fetch_or_release(s64 i, atomic64_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_or_release(i, v);
+ return raw_atomic64_fetch_or_release(i, v);
}
static __always_inline s64
atomic64_fetch_or_relaxed(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_or_relaxed(i, v);
+ return raw_atomic64_fetch_or_relaxed(i, v);
}
static __always_inline void
atomic64_xor(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic64_xor(i, v);
+ raw_atomic64_xor(i, v);
}
static __always_inline s64
@@ -1082,14 +1077,14 @@ atomic64_fetch_xor(s64 i, atomic64_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_xor(i, v);
+ return raw_atomic64_fetch_xor(i, v);
}
static __always_inline s64
atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_xor_acquire(i, v);
+ return raw_atomic64_fetch_xor_acquire(i, v);
}
static __always_inline s64
@@ -1097,14 +1092,14 @@ atomic64_fetch_xor_release(s64 i, atomic64_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_xor_release(i, v);
+ return raw_atomic64_fetch_xor_release(i, v);
}
static __always_inline s64
atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_xor_relaxed(i, v);
+ return raw_atomic64_fetch_xor_relaxed(i, v);
}
static __always_inline s64
@@ -1112,14 +1107,14 @@ atomic64_xchg(atomic64_t *v, s64 i)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_xchg(v, i);
+ return raw_atomic64_xchg(v, i);
}
static __always_inline s64
atomic64_xchg_acquire(atomic64_t *v, s64 i)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_xchg_acquire(v, i);
+ return raw_atomic64_xchg_acquire(v, i);
}
static __always_inline s64
@@ -1127,14 +1122,14 @@ atomic64_xchg_release(atomic64_t *v, s64 i)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_xchg_release(v, i);
+ return raw_atomic64_xchg_release(v, i);
}
static __always_inline s64
atomic64_xchg_relaxed(atomic64_t *v, s64 i)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_xchg_relaxed(v, i);
+ return raw_atomic64_xchg_relaxed(v, i);
}
static __always_inline s64
@@ -1142,14 +1137,14 @@ atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_cmpxchg(v, old, new);
+ return raw_atomic64_cmpxchg(v, old, new);
}
static __always_inline s64
atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_cmpxchg_acquire(v, old, new);
+ return raw_atomic64_cmpxchg_acquire(v, old, new);
}
static __always_inline s64
@@ -1157,14 +1152,14 @@ atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_cmpxchg_release(v, old, new);
+ return raw_atomic64_cmpxchg_release(v, old, new);
}
static __always_inline s64
atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_cmpxchg_relaxed(v, old, new);
+ return raw_atomic64_cmpxchg_relaxed(v, old, new);
}
static __always_inline bool
@@ -1173,7 +1168,7 @@ atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
instrument_atomic_read_write(old, sizeof(*old));
- return arch_atomic64_try_cmpxchg(v, old, new);
+ return raw_atomic64_try_cmpxchg(v, old, new);
}
static __always_inline bool
@@ -1181,7 +1176,7 @@ atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
{
instrument_atomic_read_write(v, sizeof(*v));
instrument_atomic_read_write(old, sizeof(*old));
- return arch_atomic64_try_cmpxchg_acquire(v, old, new);
+ return raw_atomic64_try_cmpxchg_acquire(v, old, new);
}
static __always_inline bool
@@ -1190,7 +1185,7 @@ atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
instrument_atomic_read_write(old, sizeof(*old));
- return arch_atomic64_try_cmpxchg_release(v, old, new);
+ return raw_atomic64_try_cmpxchg_release(v, old, new);
}
static __always_inline bool
@@ -1198,7 +1193,7 @@ atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
{
instrument_atomic_read_write(v, sizeof(*v));
instrument_atomic_read_write(old, sizeof(*old));
- return arch_atomic64_try_cmpxchg_relaxed(v, old, new);
+ return raw_atomic64_try_cmpxchg_relaxed(v, old, new);
}
static __always_inline bool
@@ -1206,7 +1201,7 @@ atomic64_sub_and_test(s64 i, atomic64_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_sub_and_test(i, v);
+ return raw_atomic64_sub_and_test(i, v);
}
static __always_inline bool
@@ -1214,7 +1209,7 @@ atomic64_dec_and_test(atomic64_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_dec_and_test(v);
+ return raw_atomic64_dec_and_test(v);
}
static __always_inline bool
@@ -1222,7 +1217,7 @@ atomic64_inc_and_test(atomic64_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_inc_and_test(v);
+ return raw_atomic64_inc_and_test(v);
}
static __always_inline bool
@@ -1230,14 +1225,14 @@ atomic64_add_negative(s64 i, atomic64_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_add_negative(i, v);
+ return raw_atomic64_add_negative(i, v);
}
static __always_inline bool
atomic64_add_negative_acquire(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_add_negative_acquire(i, v);
+ return raw_atomic64_add_negative_acquire(i, v);
}
static __always_inline bool
@@ -1245,14 +1240,14 @@ atomic64_add_negative_release(s64 i, atomic64_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_add_negative_release(i, v);
+ return raw_atomic64_add_negative_release(i, v);
}
static __always_inline bool
atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_add_negative_relaxed(i, v);
+ return raw_atomic64_add_negative_relaxed(i, v);
}
static __always_inline s64
@@ -1260,7 +1255,7 @@ atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_fetch_add_unless(v, a, u);
+ return raw_atomic64_fetch_add_unless(v, a, u);
}
static __always_inline bool
@@ -1268,7 +1263,7 @@ atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_add_unless(v, a, u);
+ return raw_atomic64_add_unless(v, a, u);
}
static __always_inline bool
@@ -1276,7 +1271,7 @@ atomic64_inc_not_zero(atomic64_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_inc_not_zero(v);
+ return raw_atomic64_inc_not_zero(v);
}
static __always_inline bool
@@ -1284,7 +1279,7 @@ atomic64_inc_unless_negative(atomic64_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_inc_unless_negative(v);
+ return raw_atomic64_inc_unless_negative(v);
}
static __always_inline bool
@@ -1292,7 +1287,7 @@ atomic64_dec_unless_positive(atomic64_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_dec_unless_positive(v);
+ return raw_atomic64_dec_unless_positive(v);
}
static __always_inline s64
@@ -1300,28 +1295,28 @@ atomic64_dec_if_positive(atomic64_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic64_dec_if_positive(v);
+ return raw_atomic64_dec_if_positive(v);
}
static __always_inline long
atomic_long_read(const atomic_long_t *v)
{
instrument_atomic_read(v, sizeof(*v));
- return arch_atomic_long_read(v);
+ return raw_atomic_long_read(v);
}
static __always_inline long
atomic_long_read_acquire(const atomic_long_t *v)
{
instrument_atomic_read(v, sizeof(*v));
- return arch_atomic_long_read_acquire(v);
+ return raw_atomic_long_read_acquire(v);
}
static __always_inline void
atomic_long_set(atomic_long_t *v, long i)
{
instrument_atomic_write(v, sizeof(*v));
- arch_atomic_long_set(v, i);
+ raw_atomic_long_set(v, i);
}
static __always_inline void
@@ -1329,14 +1324,14 @@ atomic_long_set_release(atomic_long_t *v, long i)
{
kcsan_release();
instrument_atomic_write(v, sizeof(*v));
- arch_atomic_long_set_release(v, i);
+ raw_atomic_long_set_release(v, i);
}
static __always_inline void
atomic_long_add(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic_long_add(i, v);
+ raw_atomic_long_add(i, v);
}
static __always_inline long
@@ -1344,14 +1339,14 @@ atomic_long_add_return(long i, atomic_long_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_add_return(i, v);
+ return raw_atomic_long_add_return(i, v);
}
static __always_inline long
atomic_long_add_return_acquire(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_add_return_acquire(i, v);
+ return raw_atomic_long_add_return_acquire(i, v);
}
static __always_inline long
@@ -1359,14 +1354,14 @@ atomic_long_add_return_release(long i, atomic_long_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_add_return_release(i, v);
+ return raw_atomic_long_add_return_release(i, v);
}
static __always_inline long
atomic_long_add_return_relaxed(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_add_return_relaxed(i, v);
+ return raw_atomic_long_add_return_relaxed(i, v);
}
static __always_inline long
@@ -1374,14 +1369,14 @@ atomic_long_fetch_add(long i, atomic_long_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_add(i, v);
+ return raw_atomic_long_fetch_add(i, v);
}
static __always_inline long
atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_add_acquire(i, v);
+ return raw_atomic_long_fetch_add_acquire(i, v);
}
static __always_inline long
@@ -1389,21 +1384,21 @@ atomic_long_fetch_add_release(long i, atomic_long_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_add_release(i, v);
+ return raw_atomic_long_fetch_add_release(i, v);
}
static __always_inline long
atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_add_relaxed(i, v);
+ return raw_atomic_long_fetch_add_relaxed(i, v);
}
static __always_inline void
atomic_long_sub(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic_long_sub(i, v);
+ raw_atomic_long_sub(i, v);
}
static __always_inline long
@@ -1411,14 +1406,14 @@ atomic_long_sub_return(long i, atomic_long_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_sub_return(i, v);
+ return raw_atomic_long_sub_return(i, v);
}
static __always_inline long
atomic_long_sub_return_acquire(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_sub_return_acquire(i, v);
+ return raw_atomic_long_sub_return_acquire(i, v);
}
static __always_inline long
@@ -1426,14 +1421,14 @@ atomic_long_sub_return_release(long i, atomic_long_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_sub_return_release(i, v);
+ return raw_atomic_long_sub_return_release(i, v);
}
static __always_inline long
atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_sub_return_relaxed(i, v);
+ return raw_atomic_long_sub_return_relaxed(i, v);
}
static __always_inline long
@@ -1441,14 +1436,14 @@ atomic_long_fetch_sub(long i, atomic_long_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_sub(i, v);
+ return raw_atomic_long_fetch_sub(i, v);
}
static __always_inline long
atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_sub_acquire(i, v);
+ return raw_atomic_long_fetch_sub_acquire(i, v);
}
static __always_inline long
@@ -1456,21 +1451,21 @@ atomic_long_fetch_sub_release(long i, atomic_long_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_sub_release(i, v);
+ return raw_atomic_long_fetch_sub_release(i, v);
}
static __always_inline long
atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_sub_relaxed(i, v);
+ return raw_atomic_long_fetch_sub_relaxed(i, v);
}
static __always_inline void
atomic_long_inc(atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic_long_inc(v);
+ raw_atomic_long_inc(v);
}
static __always_inline long
@@ -1478,14 +1473,14 @@ atomic_long_inc_return(atomic_long_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_inc_return(v);
+ return raw_atomic_long_inc_return(v);
}
static __always_inline long
atomic_long_inc_return_acquire(atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_inc_return_acquire(v);
+ return raw_atomic_long_inc_return_acquire(v);
}
static __always_inline long
@@ -1493,14 +1488,14 @@ atomic_long_inc_return_release(atomic_long_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_inc_return_release(v);
+ return raw_atomic_long_inc_return_release(v);
}
static __always_inline long
atomic_long_inc_return_relaxed(atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_inc_return_relaxed(v);
+ return raw_atomic_long_inc_return_relaxed(v);
}
static __always_inline long
@@ -1508,14 +1503,14 @@ atomic_long_fetch_inc(atomic_long_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_inc(v);
+ return raw_atomic_long_fetch_inc(v);
}
static __always_inline long
atomic_long_fetch_inc_acquire(atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_inc_acquire(v);
+ return raw_atomic_long_fetch_inc_acquire(v);
}
static __always_inline long
@@ -1523,21 +1518,21 @@ atomic_long_fetch_inc_release(atomic_long_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_inc_release(v);
+ return raw_atomic_long_fetch_inc_release(v);
}
static __always_inline long
atomic_long_fetch_inc_relaxed(atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_inc_relaxed(v);
+ return raw_atomic_long_fetch_inc_relaxed(v);
}
static __always_inline void
atomic_long_dec(atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic_long_dec(v);
+ raw_atomic_long_dec(v);
}
static __always_inline long
@@ -1545,14 +1540,14 @@ atomic_long_dec_return(atomic_long_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_dec_return(v);
+ return raw_atomic_long_dec_return(v);
}
static __always_inline long
atomic_long_dec_return_acquire(atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_dec_return_acquire(v);
+ return raw_atomic_long_dec_return_acquire(v);
}
static __always_inline long
@@ -1560,14 +1555,14 @@ atomic_long_dec_return_release(atomic_long_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_dec_return_release(v);
+ return raw_atomic_long_dec_return_release(v);
}
static __always_inline long
atomic_long_dec_return_relaxed(atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_dec_return_relaxed(v);
+ return raw_atomic_long_dec_return_relaxed(v);
}
static __always_inline long
@@ -1575,14 +1570,14 @@ atomic_long_fetch_dec(atomic_long_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_dec(v);
+ return raw_atomic_long_fetch_dec(v);
}
static __always_inline long
atomic_long_fetch_dec_acquire(atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_dec_acquire(v);
+ return raw_atomic_long_fetch_dec_acquire(v);
}
static __always_inline long
@@ -1590,21 +1585,21 @@ atomic_long_fetch_dec_release(atomic_long_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_dec_release(v);
+ return raw_atomic_long_fetch_dec_release(v);
}
static __always_inline long
atomic_long_fetch_dec_relaxed(atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_dec_relaxed(v);
+ return raw_atomic_long_fetch_dec_relaxed(v);
}
static __always_inline void
atomic_long_and(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic_long_and(i, v);
+ raw_atomic_long_and(i, v);
}
static __always_inline long
@@ -1612,14 +1607,14 @@ atomic_long_fetch_and(long i, atomic_long_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_and(i, v);
+ return raw_atomic_long_fetch_and(i, v);
}
static __always_inline long
atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_and_acquire(i, v);
+ return raw_atomic_long_fetch_and_acquire(i, v);
}
static __always_inline long
@@ -1627,21 +1622,21 @@ atomic_long_fetch_and_release(long i, atomic_long_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_and_release(i, v);
+ return raw_atomic_long_fetch_and_release(i, v);
}
static __always_inline long
atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_and_relaxed(i, v);
+ return raw_atomic_long_fetch_and_relaxed(i, v);
}
static __always_inline void
atomic_long_andnot(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic_long_andnot(i, v);
+ raw_atomic_long_andnot(i, v);
}
static __always_inline long
@@ -1649,14 +1644,14 @@ atomic_long_fetch_andnot(long i, atomic_long_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_andnot(i, v);
+ return raw_atomic_long_fetch_andnot(i, v);
}
static __always_inline long
atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_andnot_acquire(i, v);
+ return raw_atomic_long_fetch_andnot_acquire(i, v);
}
static __always_inline long
@@ -1664,21 +1659,21 @@ atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_andnot_release(i, v);
+ return raw_atomic_long_fetch_andnot_release(i, v);
}
static __always_inline long
atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_andnot_relaxed(i, v);
+ return raw_atomic_long_fetch_andnot_relaxed(i, v);
}
static __always_inline void
atomic_long_or(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic_long_or(i, v);
+ raw_atomic_long_or(i, v);
}
static __always_inline long
@@ -1686,14 +1681,14 @@ atomic_long_fetch_or(long i, atomic_long_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_or(i, v);
+ return raw_atomic_long_fetch_or(i, v);
}
static __always_inline long
atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_or_acquire(i, v);
+ return raw_atomic_long_fetch_or_acquire(i, v);
}
static __always_inline long
@@ -1701,21 +1696,21 @@ atomic_long_fetch_or_release(long i, atomic_long_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_or_release(i, v);
+ return raw_atomic_long_fetch_or_release(i, v);
}
static __always_inline long
atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_or_relaxed(i, v);
+ return raw_atomic_long_fetch_or_relaxed(i, v);
}
static __always_inline void
atomic_long_xor(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- arch_atomic_long_xor(i, v);
+ raw_atomic_long_xor(i, v);
}
static __always_inline long
@@ -1723,14 +1718,14 @@ atomic_long_fetch_xor(long i, atomic_long_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_xor(i, v);
+ return raw_atomic_long_fetch_xor(i, v);
}
static __always_inline long
atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_xor_acquire(i, v);
+ return raw_atomic_long_fetch_xor_acquire(i, v);
}
static __always_inline long
@@ -1738,14 +1733,14 @@ atomic_long_fetch_xor_release(long i, atomic_long_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_xor_release(i, v);
+ return raw_atomic_long_fetch_xor_release(i, v);
}
static __always_inline long
atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_xor_relaxed(i, v);
+ return raw_atomic_long_fetch_xor_relaxed(i, v);
}
static __always_inline long
@@ -1753,14 +1748,14 @@ atomic_long_xchg(atomic_long_t *v, long i)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_xchg(v, i);
+ return raw_atomic_long_xchg(v, i);
}
static __always_inline long
atomic_long_xchg_acquire(atomic_long_t *v, long i)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_xchg_acquire(v, i);
+ return raw_atomic_long_xchg_acquire(v, i);
}
static __always_inline long
@@ -1768,14 +1763,14 @@ atomic_long_xchg_release(atomic_long_t *v, long i)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_xchg_release(v, i);
+ return raw_atomic_long_xchg_release(v, i);
}
static __always_inline long
atomic_long_xchg_relaxed(atomic_long_t *v, long i)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_xchg_relaxed(v, i);
+ return raw_atomic_long_xchg_relaxed(v, i);
}
static __always_inline long
@@ -1783,14 +1778,14 @@ atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_cmpxchg(v, old, new);
+ return raw_atomic_long_cmpxchg(v, old, new);
}
static __always_inline long
atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_cmpxchg_acquire(v, old, new);
+ return raw_atomic_long_cmpxchg_acquire(v, old, new);
}
static __always_inline long
@@ -1798,14 +1793,14 @@ atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_cmpxchg_release(v, old, new);
+ return raw_atomic_long_cmpxchg_release(v, old, new);
}
static __always_inline long
atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_cmpxchg_relaxed(v, old, new);
+ return raw_atomic_long_cmpxchg_relaxed(v, old, new);
}
static __always_inline bool
@@ -1814,7 +1809,7 @@ atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
instrument_atomic_read_write(old, sizeof(*old));
- return arch_atomic_long_try_cmpxchg(v, old, new);
+ return raw_atomic_long_try_cmpxchg(v, old, new);
}
static __always_inline bool
@@ -1822,7 +1817,7 @@ atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
{
instrument_atomic_read_write(v, sizeof(*v));
instrument_atomic_read_write(old, sizeof(*old));
- return arch_atomic_long_try_cmpxchg_acquire(v, old, new);
+ return raw_atomic_long_try_cmpxchg_acquire(v, old, new);
}
static __always_inline bool
@@ -1831,7 +1826,7 @@ atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
instrument_atomic_read_write(old, sizeof(*old));
- return arch_atomic_long_try_cmpxchg_release(v, old, new);
+ return raw_atomic_long_try_cmpxchg_release(v, old, new);
}
static __always_inline bool
@@ -1839,7 +1834,7 @@ atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
{
instrument_atomic_read_write(v, sizeof(*v));
instrument_atomic_read_write(old, sizeof(*old));
- return arch_atomic_long_try_cmpxchg_relaxed(v, old, new);
+ return raw_atomic_long_try_cmpxchg_relaxed(v, old, new);
}
static __always_inline bool
@@ -1847,7 +1842,7 @@ atomic_long_sub_and_test(long i, atomic_long_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_sub_and_test(i, v);
+ return raw_atomic_long_sub_and_test(i, v);
}
static __always_inline bool
@@ -1855,7 +1850,7 @@ atomic_long_dec_and_test(atomic_long_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_dec_and_test(v);
+ return raw_atomic_long_dec_and_test(v);
}
static __always_inline bool
@@ -1863,7 +1858,7 @@ atomic_long_inc_and_test(atomic_long_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_inc_and_test(v);
+ return raw_atomic_long_inc_and_test(v);
}
static __always_inline bool
@@ -1871,14 +1866,14 @@ atomic_long_add_negative(long i, atomic_long_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_add_negative(i, v);
+ return raw_atomic_long_add_negative(i, v);
}
static __always_inline bool
atomic_long_add_negative_acquire(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_add_negative_acquire(i, v);
+ return raw_atomic_long_add_negative_acquire(i, v);
}
static __always_inline bool
@@ -1886,14 +1881,14 @@ atomic_long_add_negative_release(long i, atomic_long_t *v)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_add_negative_release(i, v);
+ return raw_atomic_long_add_negative_release(i, v);
}
static __always_inline bool
atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
{
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_add_negative_relaxed(i, v);
+ return raw_atomic_long_add_negative_relaxed(i, v);
}
static __always_inline long
@@ -1901,7 +1896,7 @@ atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_fetch_add_unless(v, a, u);
+ return raw_atomic_long_fetch_add_unless(v, a, u);
}
static __always_inline bool
@@ -1909,7 +1904,7 @@ atomic_long_add_unless(atomic_long_t *v, long a, long u)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_add_unless(v, a, u);
+ return raw_atomic_long_add_unless(v, a, u);
}
static __always_inline bool
@@ -1917,7 +1912,7 @@ atomic_long_inc_not_zero(atomic_long_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_inc_not_zero(v);
+ return raw_atomic_long_inc_not_zero(v);
}
static __always_inline bool
@@ -1925,7 +1920,7 @@ atomic_long_inc_unless_negative(atomic_long_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_inc_unless_negative(v);
+ return raw_atomic_long_inc_unless_negative(v);
}
static __always_inline bool
@@ -1933,7 +1928,7 @@ atomic_long_dec_unless_positive(atomic_long_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_dec_unless_positive(v);
+ return raw_atomic_long_dec_unless_positive(v);
}
static __always_inline long
@@ -1941,7 +1936,7 @@ atomic_long_dec_if_positive(atomic_long_t *v)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return arch_atomic_long_dec_if_positive(v);
+ return raw_atomic_long_dec_if_positive(v);
}
#define xchg(ptr, ...) \
@@ -1949,14 +1944,14 @@ atomic_long_dec_if_positive(atomic_long_t *v)
typeof(ptr) __ai_ptr = (ptr); \
kcsan_mb(); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
- arch_xchg(__ai_ptr, __VA_ARGS__); \
+ raw_xchg(__ai_ptr, __VA_ARGS__); \
})
#define xchg_acquire(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
- arch_xchg_acquire(__ai_ptr, __VA_ARGS__); \
+ raw_xchg_acquire(__ai_ptr, __VA_ARGS__); \
})
#define xchg_release(ptr, ...) \
@@ -1964,14 +1959,14 @@ atomic_long_dec_if_positive(atomic_long_t *v)
typeof(ptr) __ai_ptr = (ptr); \
kcsan_release(); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
- arch_xchg_release(__ai_ptr, __VA_ARGS__); \
+ raw_xchg_release(__ai_ptr, __VA_ARGS__); \
})
#define xchg_relaxed(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
- arch_xchg_relaxed(__ai_ptr, __VA_ARGS__); \
+ raw_xchg_relaxed(__ai_ptr, __VA_ARGS__); \
})
#define cmpxchg(ptr, ...) \
@@ -1979,14 +1974,14 @@ atomic_long_dec_if_positive(atomic_long_t *v)
typeof(ptr) __ai_ptr = (ptr); \
kcsan_mb(); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
- arch_cmpxchg(__ai_ptr, __VA_ARGS__); \
+ raw_cmpxchg(__ai_ptr, __VA_ARGS__); \
})
#define cmpxchg_acquire(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
- arch_cmpxchg_acquire(__ai_ptr, __VA_ARGS__); \
+ raw_cmpxchg_acquire(__ai_ptr, __VA_ARGS__); \
})
#define cmpxchg_release(ptr, ...) \
@@ -1994,14 +1989,14 @@ atomic_long_dec_if_positive(atomic_long_t *v)
typeof(ptr) __ai_ptr = (ptr); \
kcsan_release(); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
- arch_cmpxchg_release(__ai_ptr, __VA_ARGS__); \
+ raw_cmpxchg_release(__ai_ptr, __VA_ARGS__); \
})
#define cmpxchg_relaxed(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
- arch_cmpxchg_relaxed(__ai_ptr, __VA_ARGS__); \
+ raw_cmpxchg_relaxed(__ai_ptr, __VA_ARGS__); \
})
#define cmpxchg64(ptr, ...) \
@@ -2009,14 +2004,14 @@ atomic_long_dec_if_positive(atomic_long_t *v)
typeof(ptr) __ai_ptr = (ptr); \
kcsan_mb(); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
- arch_cmpxchg64(__ai_ptr, __VA_ARGS__); \
+ raw_cmpxchg64(__ai_ptr, __VA_ARGS__); \
})
#define cmpxchg64_acquire(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
- arch_cmpxchg64_acquire(__ai_ptr, __VA_ARGS__); \
+ raw_cmpxchg64_acquire(__ai_ptr, __VA_ARGS__); \
})
#define cmpxchg64_release(ptr, ...) \
@@ -2024,14 +2019,14 @@ atomic_long_dec_if_positive(atomic_long_t *v)
typeof(ptr) __ai_ptr = (ptr); \
kcsan_release(); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
- arch_cmpxchg64_release(__ai_ptr, __VA_ARGS__); \
+ raw_cmpxchg64_release(__ai_ptr, __VA_ARGS__); \
})
#define cmpxchg64_relaxed(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
- arch_cmpxchg64_relaxed(__ai_ptr, __VA_ARGS__); \
+ raw_cmpxchg64_relaxed(__ai_ptr, __VA_ARGS__); \
})
#define cmpxchg128(ptr, ...) \
@@ -2039,14 +2034,14 @@ atomic_long_dec_if_positive(atomic_long_t *v)
typeof(ptr) __ai_ptr = (ptr); \
kcsan_mb(); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
- arch_cmpxchg128(__ai_ptr, __VA_ARGS__); \
+ raw_cmpxchg128(__ai_ptr, __VA_ARGS__); \
})
#define cmpxchg128_acquire(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
- arch_cmpxchg128_acquire(__ai_ptr, __VA_ARGS__); \
+ raw_cmpxchg128_acquire(__ai_ptr, __VA_ARGS__); \
})
#define cmpxchg128_release(ptr, ...) \
@@ -2054,14 +2049,14 @@ atomic_long_dec_if_positive(atomic_long_t *v)
typeof(ptr) __ai_ptr = (ptr); \
kcsan_release(); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
- arch_cmpxchg128_release(__ai_ptr, __VA_ARGS__); \
+ raw_cmpxchg128_release(__ai_ptr, __VA_ARGS__); \
})
#define cmpxchg128_relaxed(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
- arch_cmpxchg128_relaxed(__ai_ptr, __VA_ARGS__); \
+ raw_cmpxchg128_relaxed(__ai_ptr, __VA_ARGS__); \
})
#define try_cmpxchg(ptr, oldp, ...) \
@@ -2071,7 +2066,7 @@ atomic_long_dec_if_positive(atomic_long_t *v)
kcsan_mb(); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \
- arch_try_cmpxchg(__ai_ptr, __ai_oldp, __VA_ARGS__); \
+ raw_try_cmpxchg(__ai_ptr, __ai_oldp, __VA_ARGS__); \
})
#define try_cmpxchg_acquire(ptr, oldp, ...) \
@@ -2080,7 +2075,7 @@ atomic_long_dec_if_positive(atomic_long_t *v)
typeof(oldp) __ai_oldp = (oldp); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \
- arch_try_cmpxchg_acquire(__ai_ptr, __ai_oldp, __VA_ARGS__); \
+ raw_try_cmpxchg_acquire(__ai_ptr, __ai_oldp, __VA_ARGS__); \
})
#define try_cmpxchg_release(ptr, oldp, ...) \
@@ -2090,7 +2085,7 @@ atomic_long_dec_if_positive(atomic_long_t *v)
kcsan_release(); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \
- arch_try_cmpxchg_release(__ai_ptr, __ai_oldp, __VA_ARGS__); \
+ raw_try_cmpxchg_release(__ai_ptr, __ai_oldp, __VA_ARGS__); \
})
#define try_cmpxchg_relaxed(ptr, oldp, ...) \
@@ -2099,7 +2094,7 @@ atomic_long_dec_if_positive(atomic_long_t *v)
typeof(oldp) __ai_oldp = (oldp); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \
- arch_try_cmpxchg_relaxed(__ai_ptr, __ai_oldp, __VA_ARGS__); \
+ raw_try_cmpxchg_relaxed(__ai_ptr, __ai_oldp, __VA_ARGS__); \
})
#define try_cmpxchg64(ptr, oldp, ...) \
@@ -2109,7 +2104,7 @@ atomic_long_dec_if_positive(atomic_long_t *v)
kcsan_mb(); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \
- arch_try_cmpxchg64(__ai_ptr, __ai_oldp, __VA_ARGS__); \
+ raw_try_cmpxchg64(__ai_ptr, __ai_oldp, __VA_ARGS__); \
})
#define try_cmpxchg64_acquire(ptr, oldp, ...) \
@@ -2118,7 +2113,7 @@ atomic_long_dec_if_positive(atomic_long_t *v)
typeof(oldp) __ai_oldp = (oldp); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \
- arch_try_cmpxchg64_acquire(__ai_ptr, __ai_oldp, __VA_ARGS__); \
+ raw_try_cmpxchg64_acquire(__ai_ptr, __ai_oldp, __VA_ARGS__); \
})
#define try_cmpxchg64_release(ptr, oldp, ...) \
@@ -2128,7 +2123,7 @@ atomic_long_dec_if_positive(atomic_long_t *v)
kcsan_release(); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \
- arch_try_cmpxchg64_release(__ai_ptr, __ai_oldp, __VA_ARGS__); \
+ raw_try_cmpxchg64_release(__ai_ptr, __ai_oldp, __VA_ARGS__); \
})
#define try_cmpxchg64_relaxed(ptr, oldp, ...) \
@@ -2137,7 +2132,7 @@ atomic_long_dec_if_positive(atomic_long_t *v)
typeof(oldp) __ai_oldp = (oldp); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \
- arch_try_cmpxchg64_relaxed(__ai_ptr, __ai_oldp, __VA_ARGS__); \
+ raw_try_cmpxchg64_relaxed(__ai_ptr, __ai_oldp, __VA_ARGS__); \
})
#define try_cmpxchg128(ptr, oldp, ...) \
@@ -2147,7 +2142,7 @@ atomic_long_dec_if_positive(atomic_long_t *v)
kcsan_mb(); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \
- arch_try_cmpxchg128(__ai_ptr, __ai_oldp, __VA_ARGS__); \
+ raw_try_cmpxchg128(__ai_ptr, __ai_oldp, __VA_ARGS__); \
})
#define try_cmpxchg128_acquire(ptr, oldp, ...) \
@@ -2156,7 +2151,7 @@ atomic_long_dec_if_positive(atomic_long_t *v)
typeof(oldp) __ai_oldp = (oldp); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \
- arch_try_cmpxchg128_acquire(__ai_ptr, __ai_oldp, __VA_ARGS__); \
+ raw_try_cmpxchg128_acquire(__ai_ptr, __ai_oldp, __VA_ARGS__); \
})
#define try_cmpxchg128_release(ptr, oldp, ...) \
@@ -2166,7 +2161,7 @@ atomic_long_dec_if_positive(atomic_long_t *v)
kcsan_release(); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \
- arch_try_cmpxchg128_release(__ai_ptr, __ai_oldp, __VA_ARGS__); \
+ raw_try_cmpxchg128_release(__ai_ptr, __ai_oldp, __VA_ARGS__); \
})
#define try_cmpxchg128_relaxed(ptr, oldp, ...) \
@@ -2175,28 +2170,28 @@ atomic_long_dec_if_positive(atomic_long_t *v)
typeof(oldp) __ai_oldp = (oldp); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \
- arch_try_cmpxchg128_relaxed(__ai_ptr, __ai_oldp, __VA_ARGS__); \
+ raw_try_cmpxchg128_relaxed(__ai_ptr, __ai_oldp, __VA_ARGS__); \
})
#define cmpxchg_local(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
- arch_cmpxchg_local(__ai_ptr, __VA_ARGS__); \
+ raw_cmpxchg_local(__ai_ptr, __VA_ARGS__); \
})
#define cmpxchg64_local(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
- arch_cmpxchg64_local(__ai_ptr, __VA_ARGS__); \
+ raw_cmpxchg64_local(__ai_ptr, __VA_ARGS__); \
})
#define cmpxchg128_local(ptr, ...) \
({ \
typeof(ptr) __ai_ptr = (ptr); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
- arch_cmpxchg128_local(__ai_ptr, __VA_ARGS__); \
+ raw_cmpxchg128_local(__ai_ptr, __VA_ARGS__); \
})
#define sync_cmpxchg(ptr, ...) \
@@ -2204,7 +2199,7 @@ atomic_long_dec_if_positive(atomic_long_t *v)
typeof(ptr) __ai_ptr = (ptr); \
kcsan_mb(); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
- arch_sync_cmpxchg(__ai_ptr, __VA_ARGS__); \
+ raw_sync_cmpxchg(__ai_ptr, __VA_ARGS__); \
})
#define try_cmpxchg_local(ptr, oldp, ...) \
@@ -2213,7 +2208,7 @@ atomic_long_dec_if_positive(atomic_long_t *v)
typeof(oldp) __ai_oldp = (oldp); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \
- arch_try_cmpxchg_local(__ai_ptr, __ai_oldp, __VA_ARGS__); \
+ raw_try_cmpxchg_local(__ai_ptr, __ai_oldp, __VA_ARGS__); \
})
#define try_cmpxchg64_local(ptr, oldp, ...) \
@@ -2222,7 +2217,7 @@ atomic_long_dec_if_positive(atomic_long_t *v)
typeof(oldp) __ai_oldp = (oldp); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \
- arch_try_cmpxchg64_local(__ai_ptr, __ai_oldp, __VA_ARGS__); \
+ raw_try_cmpxchg64_local(__ai_ptr, __ai_oldp, __VA_ARGS__); \
})
#define try_cmpxchg128_local(ptr, oldp, ...) \
@@ -2231,9 +2226,9 @@ atomic_long_dec_if_positive(atomic_long_t *v)
typeof(oldp) __ai_oldp = (oldp); \
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \
- arch_try_cmpxchg128_local(__ai_ptr, __ai_oldp, __VA_ARGS__); \
+ raw_try_cmpxchg128_local(__ai_ptr, __ai_oldp, __VA_ARGS__); \
})
#endif /* _LINUX_ATOMIC_INSTRUMENTED_H */
-// 3611991b015450e119bcd7417a9431af7f3ba13c
+// f6502977180430e61c1a7c4e5e665f04f501fb8d
diff --git a/include/linux/atomic/atomic-raw.h b/include/linux/atomic/atomic-raw.h
new file mode 100644
index 0000000..83ff026
--- /dev/null
+++ b/include/linux/atomic/atomic-raw.h
@@ -0,0 +1,1645 @@
+// SPDX-License-Identifier: GPL-2.0
+
+// Generated by scripts/atomic/gen-atomic-raw.sh
+// DO NOT MODIFY THIS FILE DIRECTLY
+
+#ifndef _LINUX_ATOMIC_RAW_H
+#define _LINUX_ATOMIC_RAW_H
+
+static __always_inline int
+raw_atomic_read(const atomic_t *v)
+{
+ return arch_atomic_read(v);
+}
+
+static __always_inline int
+raw_atomic_read_acquire(const atomic_t *v)
+{
+ return arch_atomic_read_acquire(v);
+}
+
+static __always_inline void
+raw_atomic_set(atomic_t *v, int i)
+{
+ arch_atomic_set(v, i);
+}
+
+static __always_inline void
+raw_atomic_set_release(atomic_t *v, int i)
+{
+ arch_atomic_set_release(v, i);
+}
+
+static __always_inline void
+raw_atomic_add(int i, atomic_t *v)
+{
+ arch_atomic_add(i, v);
+}
+
+static __always_inline int
+raw_atomic_add_return(int i, atomic_t *v)
+{
+ return arch_atomic_add_return(i, v);
+}
+
+static __always_inline int
+raw_atomic_add_return_acquire(int i, atomic_t *v)
+{
+ return arch_atomic_add_return_acquire(i, v);
+}
+
+static __always_inline int
+raw_atomic_add_return_release(int i, atomic_t *v)
+{
+ return arch_atomic_add_return_release(i, v);
+}
+
+static __always_inline int
+raw_atomic_add_return_relaxed(int i, atomic_t *v)
+{
+ return arch_atomic_add_return_relaxed(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_add(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_add(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_add_acquire(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_add_acquire(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_add_release(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_add_release(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_add_relaxed(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_add_relaxed(i, v);
+}
+
+static __always_inline void
+raw_atomic_sub(int i, atomic_t *v)
+{
+ arch_atomic_sub(i, v);
+}
+
+static __always_inline int
+raw_atomic_sub_return(int i, atomic_t *v)
+{
+ return arch_atomic_sub_return(i, v);
+}
+
+static __always_inline int
+raw_atomic_sub_return_acquire(int i, atomic_t *v)
+{
+ return arch_atomic_sub_return_acquire(i, v);
+}
+
+static __always_inline int
+raw_atomic_sub_return_release(int i, atomic_t *v)
+{
+ return arch_atomic_sub_return_release(i, v);
+}
+
+static __always_inline int
+raw_atomic_sub_return_relaxed(int i, atomic_t *v)
+{
+ return arch_atomic_sub_return_relaxed(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_sub(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_sub(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_sub_acquire(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_sub_acquire(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_sub_release(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_sub_release(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_sub_relaxed(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_sub_relaxed(i, v);
+}
+
+static __always_inline void
+raw_atomic_inc(atomic_t *v)
+{
+ arch_atomic_inc(v);
+}
+
+static __always_inline int
+raw_atomic_inc_return(atomic_t *v)
+{
+ return arch_atomic_inc_return(v);
+}
+
+static __always_inline int
+raw_atomic_inc_return_acquire(atomic_t *v)
+{
+ return arch_atomic_inc_return_acquire(v);
+}
+
+static __always_inline int
+raw_atomic_inc_return_release(atomic_t *v)
+{
+ return arch_atomic_inc_return_release(v);
+}
+
+static __always_inline int
+raw_atomic_inc_return_relaxed(atomic_t *v)
+{
+ return arch_atomic_inc_return_relaxed(v);
+}
+
+static __always_inline int
+raw_atomic_fetch_inc(atomic_t *v)
+{
+ return arch_atomic_fetch_inc(v);
+}
+
+static __always_inline int
+raw_atomic_fetch_inc_acquire(atomic_t *v)
+{
+ return arch_atomic_fetch_inc_acquire(v);
+}
+
+static __always_inline int
+raw_atomic_fetch_inc_release(atomic_t *v)
+{
+ return arch_atomic_fetch_inc_release(v);
+}
+
+static __always_inline int
+raw_atomic_fetch_inc_relaxed(atomic_t *v)
+{
+ return arch_atomic_fetch_inc_relaxed(v);
+}
+
+static __always_inline void
+raw_atomic_dec(atomic_t *v)
+{
+ arch_atomic_dec(v);
+}
+
+static __always_inline int
+raw_atomic_dec_return(atomic_t *v)
+{
+ return arch_atomic_dec_return(v);
+}
+
+static __always_inline int
+raw_atomic_dec_return_acquire(atomic_t *v)
+{
+ return arch_atomic_dec_return_acquire(v);
+}
+
+static __always_inline int
+raw_atomic_dec_return_release(atomic_t *v)
+{
+ return arch_atomic_dec_return_release(v);
+}
+
+static __always_inline int
+raw_atomic_dec_return_relaxed(atomic_t *v)
+{
+ return arch_atomic_dec_return_relaxed(v);
+}
+
+static __always_inline int
+raw_atomic_fetch_dec(atomic_t *v)
+{
+ return arch_atomic_fetch_dec(v);
+}
+
+static __always_inline int
+raw_atomic_fetch_dec_acquire(atomic_t *v)
+{
+ return arch_atomic_fetch_dec_acquire(v);
+}
+
+static __always_inline int
+raw_atomic_fetch_dec_release(atomic_t *v)
+{
+ return arch_atomic_fetch_dec_release(v);
+}
+
+static __always_inline int
+raw_atomic_fetch_dec_relaxed(atomic_t *v)
+{
+ return arch_atomic_fetch_dec_relaxed(v);
+}
+
+static __always_inline void
+raw_atomic_and(int i, atomic_t *v)
+{
+ arch_atomic_and(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_and(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_and(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_and_acquire(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_and_acquire(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_and_release(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_and_release(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_and_relaxed(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_and_relaxed(i, v);
+}
+
+static __always_inline void
+raw_atomic_andnot(int i, atomic_t *v)
+{
+ arch_atomic_andnot(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_andnot(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_andnot(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_andnot_acquire(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_andnot_release(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_andnot_release(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_andnot_relaxed(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_andnot_relaxed(i, v);
+}
+
+static __always_inline void
+raw_atomic_or(int i, atomic_t *v)
+{
+ arch_atomic_or(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_or(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_or(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_or_acquire(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_or_acquire(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_or_release(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_or_release(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_or_relaxed(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_or_relaxed(i, v);
+}
+
+static __always_inline void
+raw_atomic_xor(int i, atomic_t *v)
+{
+ arch_atomic_xor(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_xor(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_xor(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_xor_acquire(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_xor_acquire(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_xor_release(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_xor_release(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_xor_relaxed(int i, atomic_t *v)
+{
+ return arch_atomic_fetch_xor_relaxed(i, v);
+}
+
+static __always_inline int
+raw_atomic_xchg(atomic_t *v, int i)
+{
+ return arch_atomic_xchg(v, i);
+}
+
+static __always_inline int
+raw_atomic_xchg_acquire(atomic_t *v, int i)
+{
+ return arch_atomic_xchg_acquire(v, i);
+}
+
+static __always_inline int
+raw_atomic_xchg_release(atomic_t *v, int i)
+{
+ return arch_atomic_xchg_release(v, i);
+}
+
+static __always_inline int
+raw_atomic_xchg_relaxed(atomic_t *v, int i)
+{
+ return arch_atomic_xchg_relaxed(v, i);
+}
+
+static __always_inline int
+raw_atomic_cmpxchg(atomic_t *v, int old, int new)
+{
+ return arch_atomic_cmpxchg(v, old, new);
+}
+
+static __always_inline int
+raw_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
+{
+ return arch_atomic_cmpxchg_acquire(v, old, new);
+}
+
+static __always_inline int
+raw_atomic_cmpxchg_release(atomic_t *v, int old, int new)
+{
+ return arch_atomic_cmpxchg_release(v, old, new);
+}
+
+static __always_inline int
+raw_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
+{
+ return arch_atomic_cmpxchg_relaxed(v, old, new);
+}
+
+static __always_inline bool
+raw_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
+{
+ return arch_atomic_try_cmpxchg(v, old, new);
+}
+
+static __always_inline bool
+raw_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
+{
+ return arch_atomic_try_cmpxchg_acquire(v, old, new);
+}
+
+static __always_inline bool
+raw_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
+{
+ return arch_atomic_try_cmpxchg_release(v, old, new);
+}
+
+static __always_inline bool
+raw_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
+{
+ return arch_atomic_try_cmpxchg_relaxed(v, old, new);
+}
+
+static __always_inline bool
+raw_atomic_sub_and_test(int i, atomic_t *v)
+{
+ return arch_atomic_sub_and_test(i, v);
+}
+
+static __always_inline bool
+raw_atomic_dec_and_test(atomic_t *v)
+{
+ return arch_atomic_dec_and_test(v);
+}
+
+static __always_inline bool
+raw_atomic_inc_and_test(atomic_t *v)
+{
+ return arch_atomic_inc_and_test(v);
+}
+
+static __always_inline bool
+raw_atomic_add_negative(int i, atomic_t *v)
+{
+ return arch_atomic_add_negative(i, v);
+}
+
+static __always_inline bool
+raw_atomic_add_negative_acquire(int i, atomic_t *v)
+{
+ return arch_atomic_add_negative_acquire(i, v);
+}
+
+static __always_inline bool
+raw_atomic_add_negative_release(int i, atomic_t *v)
+{
+ return arch_atomic_add_negative_release(i, v);
+}
+
+static __always_inline bool
+raw_atomic_add_negative_relaxed(int i, atomic_t *v)
+{
+ return arch_atomic_add_negative_relaxed(i, v);
+}
+
+static __always_inline int
+raw_atomic_fetch_add_unless(atomic_t *v, int a, int u)
+{
+ return arch_atomic_fetch_add_unless(v, a, u);
+}
+
+static __always_inline bool
+raw_atomic_add_unless(atomic_t *v, int a, int u)
+{
+ return arch_atomic_add_unless(v, a, u);
+}
+
+static __always_inline bool
+raw_atomic_inc_not_zero(atomic_t *v)
+{
+ return arch_atomic_inc_not_zero(v);
+}
+
+static __always_inline bool
+raw_atomic_inc_unless_negative(atomic_t *v)
+{
+ return arch_atomic_inc_unless_negative(v);
+}
+
+static __always_inline bool
+raw_atomic_dec_unless_positive(atomic_t *v)
+{
+ return arch_atomic_dec_unless_positive(v);
+}
+
+static __always_inline int
+raw_atomic_dec_if_positive(atomic_t *v)
+{
+ return arch_atomic_dec_if_positive(v);
+}
+
+static __always_inline s64
+raw_atomic64_read(const atomic64_t *v)
+{
+ return arch_atomic64_read(v);
+}
+
+static __always_inline s64
+raw_atomic64_read_acquire(const atomic64_t *v)
+{
+ return arch_atomic64_read_acquire(v);
+}
+
+static __always_inline void
+raw_atomic64_set(atomic64_t *v, s64 i)
+{
+ arch_atomic64_set(v, i);
+}
+
+static __always_inline void
+raw_atomic64_set_release(atomic64_t *v, s64 i)
+{
+ arch_atomic64_set_release(v, i);
+}
+
+static __always_inline void
+raw_atomic64_add(s64 i, atomic64_t *v)
+{
+ arch_atomic64_add(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_add_return(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_add_return(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_add_return_acquire(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_add_return_acquire(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_add_return_release(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_add_return_release(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_add_return_relaxed(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_add_return_relaxed(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_add(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_add(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_add_acquire(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_add_release(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_add_release(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_add_relaxed(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_add_relaxed(i, v);
+}
+
+static __always_inline void
+raw_atomic64_sub(s64 i, atomic64_t *v)
+{
+ arch_atomic64_sub(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_sub_return(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_sub_return(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_sub_return_acquire(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_sub_return_acquire(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_sub_return_release(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_sub_return_release(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_sub_return_relaxed(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_sub_return_relaxed(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_sub(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_sub(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_sub_acquire(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_sub_release(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_sub_release(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_sub_relaxed(i, v);
+}
+
+static __always_inline void
+raw_atomic64_inc(atomic64_t *v)
+{
+ arch_atomic64_inc(v);
+}
+
+static __always_inline s64
+raw_atomic64_inc_return(atomic64_t *v)
+{
+ return arch_atomic64_inc_return(v);
+}
+
+static __always_inline s64
+raw_atomic64_inc_return_acquire(atomic64_t *v)
+{
+ return arch_atomic64_inc_return_acquire(v);
+}
+
+static __always_inline s64
+raw_atomic64_inc_return_release(atomic64_t *v)
+{
+ return arch_atomic64_inc_return_release(v);
+}
+
+static __always_inline s64
+raw_atomic64_inc_return_relaxed(atomic64_t *v)
+{
+ return arch_atomic64_inc_return_relaxed(v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_inc(atomic64_t *v)
+{
+ return arch_atomic64_fetch_inc(v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_inc_acquire(atomic64_t *v)
+{
+ return arch_atomic64_fetch_inc_acquire(v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_inc_release(atomic64_t *v)
+{
+ return arch_atomic64_fetch_inc_release(v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_inc_relaxed(atomic64_t *v)
+{
+ return arch_atomic64_fetch_inc_relaxed(v);
+}
+
+static __always_inline void
+raw_atomic64_dec(atomic64_t *v)
+{
+ arch_atomic64_dec(v);
+}
+
+static __always_inline s64
+raw_atomic64_dec_return(atomic64_t *v)
+{
+ return arch_atomic64_dec_return(v);
+}
+
+static __always_inline s64
+raw_atomic64_dec_return_acquire(atomic64_t *v)
+{
+ return arch_atomic64_dec_return_acquire(v);
+}
+
+static __always_inline s64
+raw_atomic64_dec_return_release(atomic64_t *v)
+{
+ return arch_atomic64_dec_return_release(v);
+}
+
+static __always_inline s64
+raw_atomic64_dec_return_relaxed(atomic64_t *v)
+{
+ return arch_atomic64_dec_return_relaxed(v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_dec(atomic64_t *v)
+{
+ return arch_atomic64_fetch_dec(v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_dec_acquire(atomic64_t *v)
+{
+ return arch_atomic64_fetch_dec_acquire(v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_dec_release(atomic64_t *v)
+{
+ return arch_atomic64_fetch_dec_release(v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_dec_relaxed(atomic64_t *v)
+{
+ return arch_atomic64_fetch_dec_relaxed(v);
+}
+
+static __always_inline void
+raw_atomic64_and(s64 i, atomic64_t *v)
+{
+ arch_atomic64_and(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_and(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_and(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_and_acquire(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_and_release(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_and_release(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_and_relaxed(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_and_relaxed(i, v);
+}
+
+static __always_inline void
+raw_atomic64_andnot(s64 i, atomic64_t *v)
+{
+ arch_atomic64_andnot(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_andnot(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_andnot(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_andnot_acquire(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_andnot_release(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_andnot_relaxed(i, v);
+}
+
+static __always_inline void
+raw_atomic64_or(s64 i, atomic64_t *v)
+{
+ arch_atomic64_or(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_or(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_or(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_or_acquire(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_or_release(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_or_release(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_or_relaxed(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_or_relaxed(i, v);
+}
+
+static __always_inline void
+raw_atomic64_xor(s64 i, atomic64_t *v)
+{
+ arch_atomic64_xor(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_xor(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_xor(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_xor_acquire(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_xor_release(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_xor_release(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_fetch_xor_relaxed(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_xchg(atomic64_t *v, s64 i)
+{
+ return arch_atomic64_xchg(v, i);
+}
+
+static __always_inline s64
+raw_atomic64_xchg_acquire(atomic64_t *v, s64 i)
+{
+ return arch_atomic64_xchg_acquire(v, i);
+}
+
+static __always_inline s64
+raw_atomic64_xchg_release(atomic64_t *v, s64 i)
+{
+ return arch_atomic64_xchg_release(v, i);
+}
+
+static __always_inline s64
+raw_atomic64_xchg_relaxed(atomic64_t *v, s64 i)
+{
+ return arch_atomic64_xchg_relaxed(v, i);
+}
+
+static __always_inline s64
+raw_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
+{
+ return arch_atomic64_cmpxchg(v, old, new);
+}
+
+static __always_inline s64
+raw_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
+{
+ return arch_atomic64_cmpxchg_acquire(v, old, new);
+}
+
+static __always_inline s64
+raw_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
+{
+ return arch_atomic64_cmpxchg_release(v, old, new);
+}
+
+static __always_inline s64
+raw_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
+{
+ return arch_atomic64_cmpxchg_relaxed(v, old, new);
+}
+
+static __always_inline bool
+raw_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
+{
+ return arch_atomic64_try_cmpxchg(v, old, new);
+}
+
+static __always_inline bool
+raw_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
+{
+ return arch_atomic64_try_cmpxchg_acquire(v, old, new);
+}
+
+static __always_inline bool
+raw_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
+{
+ return arch_atomic64_try_cmpxchg_release(v, old, new);
+}
+
+static __always_inline bool
+raw_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
+{
+ return arch_atomic64_try_cmpxchg_relaxed(v, old, new);
+}
+
+static __always_inline bool
+raw_atomic64_sub_and_test(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_sub_and_test(i, v);
+}
+
+static __always_inline bool
+raw_atomic64_dec_and_test(atomic64_t *v)
+{
+ return arch_atomic64_dec_and_test(v);
+}
+
+static __always_inline bool
+raw_atomic64_inc_and_test(atomic64_t *v)
+{
+ return arch_atomic64_inc_and_test(v);
+}
+
+static __always_inline bool
+raw_atomic64_add_negative(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_add_negative(i, v);
+}
+
+static __always_inline bool
+raw_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_add_negative_acquire(i, v);
+}
+
+static __always_inline bool
+raw_atomic64_add_negative_release(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_add_negative_release(i, v);
+}
+
+static __always_inline bool
+raw_atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
+{
+ return arch_atomic64_add_negative_relaxed(i, v);
+}
+
+static __always_inline s64
+raw_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
+{
+ return arch_atomic64_fetch_add_unless(v, a, u);
+}
+
+static __always_inline bool
+raw_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
+{
+ return arch_atomic64_add_unless(v, a, u);
+}
+
+static __always_inline bool
+raw_atomic64_inc_not_zero(atomic64_t *v)
+{
+ return arch_atomic64_inc_not_zero(v);
+}
+
+static __always_inline bool
+raw_atomic64_inc_unless_negative(atomic64_t *v)
+{
+ return arch_atomic64_inc_unless_negative(v);
+}
+
+static __always_inline bool
+raw_atomic64_dec_unless_positive(atomic64_t *v)
+{
+ return arch_atomic64_dec_unless_positive(v);
+}
+
+static __always_inline s64
+raw_atomic64_dec_if_positive(atomic64_t *v)
+{
+ return arch_atomic64_dec_if_positive(v);
+}
+
+static __always_inline long
+raw_atomic_long_read(const atomic_long_t *v)
+{
+ return arch_atomic_long_read(v);
+}
+
+static __always_inline long
+raw_atomic_long_read_acquire(const atomic_long_t *v)
+{
+ return arch_atomic_long_read_acquire(v);
+}
+
+static __always_inline void
+raw_atomic_long_set(atomic_long_t *v, long i)
+{
+ arch_atomic_long_set(v, i);
+}
+
+static __always_inline void
+raw_atomic_long_set_release(atomic_long_t *v, long i)
+{
+ arch_atomic_long_set_release(v, i);
+}
+
+static __always_inline void
+raw_atomic_long_add(long i, atomic_long_t *v)
+{
+ arch_atomic_long_add(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_add_return(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_add_return(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_add_return_acquire(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_add_return_acquire(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_add_return_release(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_add_return_release(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_add_return_relaxed(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_add(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_add(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_add_acquire(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_add_release(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_add_release(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_add_relaxed(i, v);
+}
+
+static __always_inline void
+raw_atomic_long_sub(long i, atomic_long_t *v)
+{
+ arch_atomic_long_sub(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_sub_return(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_sub_return(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_sub_return_acquire(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_sub_return_release(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_sub_return_release(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_sub_return_relaxed(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_sub(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_sub(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_sub_acquire(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_sub_release(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_sub_relaxed(i, v);
+}
+
+static __always_inline void
+raw_atomic_long_inc(atomic_long_t *v)
+{
+ arch_atomic_long_inc(v);
+}
+
+static __always_inline long
+raw_atomic_long_inc_return(atomic_long_t *v)
+{
+ return arch_atomic_long_inc_return(v);
+}
+
+static __always_inline long
+raw_atomic_long_inc_return_acquire(atomic_long_t *v)
+{
+ return arch_atomic_long_inc_return_acquire(v);
+}
+
+static __always_inline long
+raw_atomic_long_inc_return_release(atomic_long_t *v)
+{
+ return arch_atomic_long_inc_return_release(v);
+}
+
+static __always_inline long
+raw_atomic_long_inc_return_relaxed(atomic_long_t *v)
+{
+ return arch_atomic_long_inc_return_relaxed(v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_inc(atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_inc(v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_inc_acquire(atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_inc_acquire(v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_inc_release(atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_inc_release(v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_inc_relaxed(v);
+}
+
+static __always_inline void
+raw_atomic_long_dec(atomic_long_t *v)
+{
+ arch_atomic_long_dec(v);
+}
+
+static __always_inline long
+raw_atomic_long_dec_return(atomic_long_t *v)
+{
+ return arch_atomic_long_dec_return(v);
+}
+
+static __always_inline long
+raw_atomic_long_dec_return_acquire(atomic_long_t *v)
+{
+ return arch_atomic_long_dec_return_acquire(v);
+}
+
+static __always_inline long
+raw_atomic_long_dec_return_release(atomic_long_t *v)
+{
+ return arch_atomic_long_dec_return_release(v);
+}
+
+static __always_inline long
+raw_atomic_long_dec_return_relaxed(atomic_long_t *v)
+{
+ return arch_atomic_long_dec_return_relaxed(v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_dec(atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_dec(v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_dec_acquire(atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_dec_acquire(v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_dec_release(atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_dec_release(v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_dec_relaxed(v);
+}
+
+static __always_inline void
+raw_atomic_long_and(long i, atomic_long_t *v)
+{
+ arch_atomic_long_and(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_and(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_and(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_and_acquire(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_and_release(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_and_release(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_and_relaxed(i, v);
+}
+
+static __always_inline void
+raw_atomic_long_andnot(long i, atomic_long_t *v)
+{
+ arch_atomic_long_andnot(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_andnot(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_andnot(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_andnot_acquire(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_andnot_release(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_andnot_relaxed(i, v);
+}
+
+static __always_inline void
+raw_atomic_long_or(long i, atomic_long_t *v)
+{
+ arch_atomic_long_or(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_or(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_or(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_or_acquire(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_or_release(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_or_release(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_or_relaxed(i, v);
+}
+
+static __always_inline void
+raw_atomic_long_xor(long i, atomic_long_t *v)
+{
+ arch_atomic_long_xor(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_xor(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_xor(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_xor_acquire(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_xor_release(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_fetch_xor_relaxed(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_xchg(atomic_long_t *v, long i)
+{
+ return arch_atomic_long_xchg(v, i);
+}
+
+static __always_inline long
+raw_atomic_long_xchg_acquire(atomic_long_t *v, long i)
+{
+ return arch_atomic_long_xchg_acquire(v, i);
+}
+
+static __always_inline long
+raw_atomic_long_xchg_release(atomic_long_t *v, long i)
+{
+ return arch_atomic_long_xchg_release(v, i);
+}
+
+static __always_inline long
+raw_atomic_long_xchg_relaxed(atomic_long_t *v, long i)
+{
+ return arch_atomic_long_xchg_relaxed(v, i);
+}
+
+static __always_inline long
+raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
+{
+ return arch_atomic_long_cmpxchg(v, old, new);
+}
+
+static __always_inline long
+raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
+{
+ return arch_atomic_long_cmpxchg_acquire(v, old, new);
+}
+
+static __always_inline long
+raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
+{
+ return arch_atomic_long_cmpxchg_release(v, old, new);
+}
+
+static __always_inline long
+raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
+{
+ return arch_atomic_long_cmpxchg_relaxed(v, old, new);
+}
+
+static __always_inline bool
+raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
+{
+ return arch_atomic_long_try_cmpxchg(v, old, new);
+}
+
+static __always_inline bool
+raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
+{
+ return arch_atomic_long_try_cmpxchg_acquire(v, old, new);
+}
+
+static __always_inline bool
+raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
+{
+ return arch_atomic_long_try_cmpxchg_release(v, old, new);
+}
+
+static __always_inline bool
+raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
+{
+ return arch_atomic_long_try_cmpxchg_relaxed(v, old, new);
+}
+
+static __always_inline bool
+raw_atomic_long_sub_and_test(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_sub_and_test(i, v);
+}
+
+static __always_inline bool
+raw_atomic_long_dec_and_test(atomic_long_t *v)
+{
+ return arch_atomic_long_dec_and_test(v);
+}
+
+static __always_inline bool
+raw_atomic_long_inc_and_test(atomic_long_t *v)
+{
+ return arch_atomic_long_inc_and_test(v);
+}
+
+static __always_inline bool
+raw_atomic_long_add_negative(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_add_negative(i, v);
+}
+
+static __always_inline bool
+raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_add_negative_acquire(i, v);
+}
+
+static __always_inline bool
+raw_atomic_long_add_negative_release(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_add_negative_release(i, v);
+}
+
+static __always_inline bool
+raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
+{
+ return arch_atomic_long_add_negative_relaxed(i, v);
+}
+
+static __always_inline long
+raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
+{
+ return arch_atomic_long_fetch_add_unless(v, a, u);
+}
+
+static __always_inline bool
+raw_atomic_long_add_unless(atomic_long_t *v, long a, long u)
+{
+ return arch_atomic_long_add_unless(v, a, u);
+}
+
+static __always_inline bool
+raw_atomic_long_inc_not_zero(atomic_long_t *v)
+{
+ return arch_atomic_long_inc_not_zero(v);
+}
+
+static __always_inline bool
+raw_atomic_long_inc_unless_negative(atomic_long_t *v)
+{
+ return arch_atomic_long_inc_unless_negative(v);
+}
+
+static __always_inline bool
+raw_atomic_long_dec_unless_positive(atomic_long_t *v)
+{
+ return arch_atomic_long_dec_unless_positive(v);
+}
+
+static __always_inline long
+raw_atomic_long_dec_if_positive(atomic_long_t *v)
+{
+ return arch_atomic_long_dec_if_positive(v);
+}
+
+#define raw_xchg(...) \
+ arch_xchg(__VA_ARGS__)
+
+#define raw_xchg_acquire(...) \
+ arch_xchg_acquire(__VA_ARGS__)
+
+#define raw_xchg_release(...) \
+ arch_xchg_release(__VA_ARGS__)
+
+#define raw_xchg_relaxed(...) \
+ arch_xchg_relaxed(__VA_ARGS__)
+
+#define raw_cmpxchg(...) \
+ arch_cmpxchg(__VA_ARGS__)
+
+#define raw_cmpxchg_acquire(...) \
+ arch_cmpxchg_acquire(__VA_ARGS__)
+
+#define raw_cmpxchg_release(...) \
+ arch_cmpxchg_release(__VA_ARGS__)
+
+#define raw_cmpxchg_relaxed(...) \
+ arch_cmpxchg_relaxed(__VA_ARGS__)
+
+#define raw_cmpxchg64(...) \
+ arch_cmpxchg64(__VA_ARGS__)
+
+#define raw_cmpxchg64_acquire(...) \
+ arch_cmpxchg64_acquire(__VA_ARGS__)
+
+#define raw_cmpxchg64_release(...) \
+ arch_cmpxchg64_release(__VA_ARGS__)
+
+#define raw_cmpxchg64_relaxed(...) \
+ arch_cmpxchg64_relaxed(__VA_ARGS__)
+
+#define raw_cmpxchg128(...) \
+ arch_cmpxchg128(__VA_ARGS__)
+
+#define raw_cmpxchg128_acquire(...) \
+ arch_cmpxchg128_acquire(__VA_ARGS__)
+
+#define raw_cmpxchg128_release(...) \
+ arch_cmpxchg128_release(__VA_ARGS__)
+
+#define raw_cmpxchg128_relaxed(...) \
+ arch_cmpxchg128_relaxed(__VA_ARGS__)
+
+#define raw_try_cmpxchg(...) \
+ arch_try_cmpxchg(__VA_ARGS__)
+
+#define raw_try_cmpxchg_acquire(...) \
+ arch_try_cmpxchg_acquire(__VA_ARGS__)
+
+#define raw_try_cmpxchg_release(...) \
+ arch_try_cmpxchg_release(__VA_ARGS__)
+
+#define raw_try_cmpxchg_relaxed(...) \
+ arch_try_cmpxchg_relaxed(__VA_ARGS__)
+
+#define raw_try_cmpxchg64(...) \
+ arch_try_cmpxchg64(__VA_ARGS__)
+
+#define raw_try_cmpxchg64_acquire(...) \
+ arch_try_cmpxchg64_acquire(__VA_ARGS__)
+
+#define raw_try_cmpxchg64_release(...) \
+ arch_try_cmpxchg64_release(__VA_ARGS__)
+
+#define raw_try_cmpxchg64_relaxed(...) \
+ arch_try_cmpxchg64_relaxed(__VA_ARGS__)
+
+#define raw_try_cmpxchg128(...) \
+ arch_try_cmpxchg128(__VA_ARGS__)
+
+#define raw_try_cmpxchg128_acquire(...) \
+ arch_try_cmpxchg128_acquire(__VA_ARGS__)
+
+#define raw_try_cmpxchg128_release(...) \
+ arch_try_cmpxchg128_release(__VA_ARGS__)
+
+#define raw_try_cmpxchg128_relaxed(...) \
+ arch_try_cmpxchg128_relaxed(__VA_ARGS__)
+
+#define raw_cmpxchg_local(...) \
+ arch_cmpxchg_local(__VA_ARGS__)
+
+#define raw_cmpxchg64_local(...) \
+ arch_cmpxchg64_local(__VA_ARGS__)
+
+#define raw_cmpxchg128_local(...) \
+ arch_cmpxchg128_local(__VA_ARGS__)
+
+#define raw_sync_cmpxchg(...) \
+ arch_sync_cmpxchg(__VA_ARGS__)
+
+#define raw_try_cmpxchg_local(...) \
+ arch_try_cmpxchg_local(__VA_ARGS__)
+
+#define raw_try_cmpxchg64_local(...) \
+ arch_try_cmpxchg64_local(__VA_ARGS__)
+
+#define raw_try_cmpxchg128_local(...) \
+ arch_try_cmpxchg128_local(__VA_ARGS__)
+
+#endif /* _LINUX_ATOMIC_RAW_H */
+// 01d54200571b3857755a07c10074a4fd58cef6b1
diff --git a/scripts/atomic/gen-atomic-instrumented.sh b/scripts/atomic/gen-atomic-instrumented.sh
index 68557bf..93c949a 100755
--- a/scripts/atomic/gen-atomic-instrumented.sh
+++ b/scripts/atomic/gen-atomic-instrumented.sh
@@ -73,7 +73,7 @@ static __always_inline ${ret}
${atomicname}(${params})
{
${checks}
- ${retstmt}arch_${atomicname}(${args});
+ ${retstmt}raw_${atomicname}(${args});
}
EOF
@@ -105,7 +105,7 @@ EOF
cat <<EOF
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \\
instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \\
- arch_${xchg}${order}(__ai_ptr, __ai_oldp, __VA_ARGS__); \\
+ raw_${xchg}${order}(__ai_ptr, __ai_oldp, __VA_ARGS__); \\
})
EOF
@@ -119,7 +119,7 @@ EOF
[ -n "$kcsan_barrier" ] && printf "\t${kcsan_barrier}; \\\\\n"
cat <<EOF
instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \\
- arch_${xchg}${order}(__ai_ptr, __VA_ARGS__); \\
+ raw_${xchg}${order}(__ai_ptr, __VA_ARGS__); \\
})
EOF
@@ -133,15 +133,10 @@ cat << EOF
// DO NOT MODIFY THIS FILE DIRECTLY
/*
- * This file provides wrappers with KASAN instrumentation for atomic operations.
- * To use this functionality an arch's atomic.h file needs to define all
- * atomic operations with arch_ prefix (e.g. arch_atomic_read()) and include
- * this file at the end. This file provides atomic_read() that forwards to
- * arch_atomic_read() for actual atomic operation.
- * Note: if an arch atomic operation is implemented by means of other atomic
- * operations (e.g. atomic_read()/atomic_cmpxchg() loop), then it needs to use
- * arch_ variants (i.e. arch_atomic_read()/arch_atomic_cmpxchg()) to avoid
- * double instrumentation.
+ * This file provides atomic operations with explicit instrumentation (e.g.
+ * KASAN, KCSAN), which should be used unless it is necessary to avoid
+ * instrumentation. Where it is necessary to avoid instrumentation, the
+ * raw_atomic*() operations should be used.
*/
#ifndef _LINUX_ATOMIC_INSTRUMENTED_H
#define _LINUX_ATOMIC_INSTRUMENTED_H
diff --git a/scripts/atomic/gen-atomic-raw.sh b/scripts/atomic/gen-atomic-raw.sh
new file mode 100644
index 0000000..ba8d136
--- /dev/null
+++ b/scripts/atomic/gen-atomic-raw.sh
@@ -0,0 +1,84 @@
+#!/bin/sh
+# SPDX-License-Identifier: GPL-2.0
+
+ATOMICDIR=$(dirname $0)
+
+. ${ATOMICDIR}/atomic-tbl.sh
+
+#gen_proto_order_variant(meta, pfx, name, sfx, order, atomic, int, arg...)
+gen_proto_order_variant()
+{
+ local meta="$1"; shift
+ local pfx="$1"; shift
+ local name="$1"; shift
+ local sfx="$1"; shift
+ local order="$1"; shift
+ local atomic="$1"; shift
+ local int="$1"; shift
+
+ local atomicname="${atomic}_${pfx}${name}${sfx}${order}"
+
+ local ret="$(gen_ret_type "${meta}" "${int}")"
+ local params="$(gen_params "${int}" "${atomic}" "$@")"
+ local args="$(gen_args "$@")"
+ local retstmt="$(gen_ret_stmt "${meta}")"
+
+cat <<EOF
+static __always_inline ${ret}
+raw_${atomicname}(${params})
+{
+ ${retstmt}arch_${atomicname}(${args});
+}
+
+EOF
+}
+
+gen_xchg()
+{
+ local xchg="$1"; shift
+ local order="$1"; shift
+
+cat <<EOF
+#define raw_${xchg}${order}(...) \\
+ arch_${xchg}${order}(__VA_ARGS__)
+EOF
+}
+
+cat << EOF
+// SPDX-License-Identifier: GPL-2.0
+
+// Generated by $0
+// DO NOT MODIFY THIS FILE DIRECTLY
+
+#ifndef _LINUX_ATOMIC_RAW_H
+#define _LINUX_ATOMIC_RAW_H
+
+EOF
+
+grep '^[a-z]' "$1" | while read name meta args; do
+ gen_proto "${meta}" "${name}" "atomic" "int" ${args}
+done
+
+grep '^[a-z]' "$1" | while read name meta args; do
+ gen_proto "${meta}" "${name}" "atomic64" "s64" ${args}
+done
+
+grep '^[a-z]' "$1" | while read name meta args; do
+ gen_proto "${meta}" "${name}" "atomic_long" "long" ${args}
+done
+
+for xchg in "xchg" "cmpxchg" "cmpxchg64" "cmpxchg128" "try_cmpxchg" "try_cmpxchg64" "try_cmpxchg128"; do
+ for order in "" "_acquire" "_release" "_relaxed"; do
+ gen_xchg "${xchg}" "${order}"
+ printf "\n"
+ done
+done
+
+for xchg in "cmpxchg_local" "cmpxchg64_local" "cmpxchg128_local" "sync_cmpxchg" "try_cmpxchg_local" "try_cmpxchg64_local" "try_cmpxchg128_local"; do
+ gen_xchg "${xchg}" ""
+ printf "\n"
+done
+
+cat <<EOF
+#endif /* _LINUX_ATOMIC_RAW_H */
+EOF
diff --git a/scripts/atomic/gen-atomics.sh b/scripts/atomic/gen-atomics.sh
index 5b98a83..631d351 100755
--- a/scripts/atomic/gen-atomics.sh
+++ b/scripts/atomic/gen-atomics.sh
@@ -11,6 +11,7 @@ cat <<EOF |
gen-atomic-instrumented.sh linux/atomic/atomic-instrumented.h
gen-atomic-long.sh linux/atomic/atomic-long.h
gen-atomic-fallback.sh linux/atomic/atomic-arch-fallback.h
+gen-atomic-raw.sh linux/atomic/atomic-raw.h
EOF
while read script header args; do
/bin/sh ${ATOMICDIR}/${script} ${ATOMICTBL} ${args} > ${LINUXDIR}/include/${header}
The following commit has been merged into the locking/core branch of tip:
Commit-ID: 8aaf297a0dd66d4fac215af24ece8dea091079bc
Gitweb: https://git.kernel.org/tip/8aaf297a0dd66d4fac215af24ece8dea091079bc
Author: Mark Rutland <[email protected]>
AuthorDate: Mon, 05 Jun 2023 08:01:21 +01:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Mon, 05 Jun 2023 09:57:23 +02:00
docs: scripts: kernel-doc: accept bitwise negation like ~@var
In some cases we'd like to indicate the bitwise negation of a parameter,
e.g.
~@var
This will be helpful for describing the atomic andnot operations, where
we'd like to write comments of the form:
Atomically updates @v to (@v & ~@i)
Which kernel-doc currently transforms to:
Atomically updates **v** to (**v** & ~**i**)
Rather than the preferable form:
Atomically updates **v** to (**v** & **~i**)
This is similar to what we did for '!@var' in commit:
ee2aa7590398 ("scripts: kernel-doc: accept negation like !@var")
This patch follows the same pattern that commit used to permit a '!'
prefix on a param ref, additionally allowing a '~' prefix, causing
kernel-doc to generate the preferred form above.
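For example, a kerneldoc comment for an andnot operation (an illustrative
sketch; the exact wording of the comments added later in this series may
differ) can then use the '~@i' form directly:
| /**
|  * raw_atomic_andnot() - atomic bitwise AND NOT with relaxed ordering
|  * @i: value to negate and AND with @v
|  * @v: pointer to the atomic_t being updated
|  *
|  * Atomically updates @v to (@v & ~@i) with relaxed ordering.
|  */
and kernel-doc will render '~@i' as **~i**, i.e. the preferred form.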
Suggested-by: Akira Yokosawa <[email protected]>
Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
scripts/kernel-doc | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/scripts/kernel-doc b/scripts/kernel-doc
index 2486689..eb70c1f 100755
--- a/scripts/kernel-doc
+++ b/scripts/kernel-doc
@@ -64,7 +64,7 @@ my $type_constant = '\b``([^\`]+)``\b';
my $type_constant2 = '\%([-_\w]+)';
my $type_func = '(\w+)\(\)';
my $type_param = '\@(\w*((\.\w+)|(->\w+))*(\.\.\.)?)';
-my $type_param_ref = '([\!]?)\@(\w*((\.\w+)|(->\w+))*(\.\.\.)?)';
+my $type_param_ref = '([\!~]?)\@(\w*((\.\w+)|(->\w+))*(\.\.\.)?)';
my $type_fp_param = '\@(\w+)\(\)'; # Special RST handling for func ptr params
my $type_fp_param2 = '\@(\w+->\S+)\(\)'; # Special RST handling for structs with func ptr params
my $type_env = '(\$\w+)';
The following commit has been merged into the locking/core branch of tip:
Commit-ID: d6cd3664806fbe8313b8e04b042d40e8135ca459
Gitweb: https://git.kernel.org/tip/d6cd3664806fbe8313b8e04b042d40e8135ca459
Author: Mark Rutland <[email protected]>
AuthorDate: Mon, 05 Jun 2023 08:01:03 +01:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Mon, 05 Jun 2023 09:57:15 +02:00
locking/atomic: arm: add preprocessor symbols
Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of
RELAXED atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in terms
of FULL ordered atomics. Other atomics are optional, and don't exist in
some configurations (e.g. not all architectures implement the 128-bit
cmpxchg ops).
Subsequent patches will require that architectures define a preprocessor
symbol for any atomic (or ordering variant) which is optional. This will
make the fallback ifdeffery more robust, and simplify future changes.
Add the required definitions to arch/arm.
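For example, with arch_atomic_fetch_add defined as a preprocessor symbol,
the generated raw_atomic_fetch_add() fallback (as restructured later in
this series) can select an implementation purely by testing those symbols:
| static __always_inline int
| raw_atomic_fetch_add(int i, atomic_t *v)
| {
| #if defined(arch_atomic_fetch_add)
| 	return arch_atomic_fetch_add(i, v);
| #elif defined(arch_atomic_fetch_add_relaxed)
| 	int ret;
| 	__atomic_pre_full_fence();
| 	ret = arch_atomic_fetch_add_relaxed(i, v);
| 	__atomic_post_full_fence();
| 	return ret;
| #else
| #error "Unable to define raw_atomic_fetch_add"
| #endif
| }
rather than depending on ad-hoc per-architecture knowledge of which
variants are implemented.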
Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
arch/arm/include/asm/atomic.h | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/arch/arm/include/asm/atomic.h b/arch/arm/include/asm/atomic.h
index 9458d47..f0e3b01 100644
--- a/arch/arm/include/asm/atomic.h
+++ b/arch/arm/include/asm/atomic.h
@@ -197,6 +197,16 @@ static inline int arch_atomic_fetch_##op(int i, atomic_t *v) \
return val; \
}
+#define arch_atomic_add_return arch_atomic_add_return
+#define arch_atomic_sub_return arch_atomic_sub_return
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+#define arch_atomic_fetch_andnot arch_atomic_fetch_andnot
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
static inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
{
int ret;
@@ -212,8 +222,6 @@ static inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
}
#define arch_atomic_cmpxchg arch_atomic_cmpxchg
-#define arch_atomic_fetch_andnot arch_atomic_fetch_andnot
-
#endif /* __LINUX_ARM_ARCH__ */
#define ATOMIC_OPS(op, c_op, asm_op) \
The following commit has been merged into the locking/core branch of tip:
Commit-ID: 1d78814d41701c216e28fcf2656526146dec4a1a
Gitweb: https://git.kernel.org/tip/1d78814d41701c216e28fcf2656526146dec4a1a
Author: Mark Rutland <[email protected]>
AuthorDate: Mon, 05 Jun 2023 08:01:20 +01:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Mon, 05 Jun 2023 09:57:22 +02:00
locking/atomic: scripts: simplify raw_atomic*() definitions
Currently each ordering variant has several potential definitions,
with a mixture of preprocessor and C definitions, including several
copies of its C prototype, e.g.
| #if defined(arch_atomic_fetch_andnot_acquire)
| #define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot_acquire
| #elif defined(arch_atomic_fetch_andnot_relaxed)
| static __always_inline int
| raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
| {
| int ret = arch_atomic_fetch_andnot_relaxed(i, v);
| __atomic_acquire_fence();
| return ret;
| }
| #elif defined(arch_atomic_fetch_andnot)
| #define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot
| #else
| static __always_inline int
| raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
| {
| return raw_atomic_fetch_and_acquire(~i, v);
| }
| #endif
Make this a bit simpler by defining the C prototype once, and writing
the various potential definitions as plain C code guarded by ifdeffery.
For example, the above becomes:
| static __always_inline int
| raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
| {
| #if defined(arch_atomic_fetch_andnot_acquire)
| return arch_atomic_fetch_andnot_acquire(i, v);
| #elif defined(arch_atomic_fetch_andnot_relaxed)
| int ret = arch_atomic_fetch_andnot_relaxed(i, v);
| __atomic_acquire_fence();
| return ret;
| #elif defined(arch_atomic_fetch_andnot)
| return arch_atomic_fetch_andnot(i, v);
| #else
| return raw_atomic_fetch_and_acquire(~i, v);
| #endif
| }
Which is far easier to read. As there is now always a single copy of the
C prototype wrapping all the potential definitions, there is an obvious
single location for kerneldoc comments.
At the same time, the fallbacks for raw_atomic*_xchg() are made to use
'new' rather than 'i' as the name of the new value. This is what the
existing fallback template used, and is more consistent with the
raw_atomic_{try_,}cmpxchg() fallbacks.
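For example, the relaxed xchg fallback (as generated in the diff below)
becomes:
| static __always_inline int
| raw_atomic_xchg_relaxed(atomic_t *v, int new)
| {
| #if defined(arch_atomic_xchg_relaxed)
| 	return arch_atomic_xchg_relaxed(v, new);
| #elif defined(arch_atomic_xchg)
| 	return arch_atomic_xchg(v, new);
| #else
| 	return raw_xchg_relaxed(&v->counter, new);
| #endif
| }
with 'new' used consistently in every potential definition.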
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
include/linux/atomic/atomic-arch-fallback.h | 1790 ++++++++---------
include/linux/atomic/atomic-instrumented.h | 50 +-
include/linux/atomic/atomic-long.h | 26 +-
scripts/atomic/atomics.tbl | 2 +-
scripts/atomic/fallbacks/acquire | 4 +-
scripts/atomic/fallbacks/add_negative | 4 +-
scripts/atomic/fallbacks/add_unless | 4 +-
scripts/atomic/fallbacks/andnot | 4 +-
scripts/atomic/fallbacks/cmpxchg | 4 +-
scripts/atomic/fallbacks/dec | 4 +-
scripts/atomic/fallbacks/dec_and_test | 4 +-
scripts/atomic/fallbacks/dec_if_positive | 4 +-
scripts/atomic/fallbacks/dec_unless_positive | 4 +-
scripts/atomic/fallbacks/fence | 4 +-
scripts/atomic/fallbacks/fetch_add_unless | 4 +-
scripts/atomic/fallbacks/inc | 4 +-
scripts/atomic/fallbacks/inc_and_test | 4 +-
scripts/atomic/fallbacks/inc_not_zero | 4 +-
scripts/atomic/fallbacks/inc_unless_negative | 4 +-
scripts/atomic/fallbacks/read_acquire | 4 +-
scripts/atomic/fallbacks/release | 4 +-
scripts/atomic/fallbacks/set_release | 4 +-
scripts/atomic/fallbacks/sub_and_test | 4 +-
scripts/atomic/fallbacks/try_cmpxchg | 4 +-
scripts/atomic/fallbacks/xchg | 4 +-
scripts/atomic/gen-atomic-fallback.sh | 26 +-
26 files changed, 901 insertions(+), 1077 deletions(-)
diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
index 99bc1a8..470c289 100644
--- a/include/linux/atomic/atomic-arch-fallback.h
+++ b/include/linux/atomic/atomic-arch-fallback.h
@@ -428,16 +428,20 @@ extern void raw_cmpxchg128_relaxed_not_implemented(void);
#define raw_sync_cmpxchg arch_sync_cmpxchg
-#define raw_atomic_read arch_atomic_read
+static __always_inline int
+raw_atomic_read(const atomic_t *v)
+{
+ return arch_atomic_read(v);
+}
-#if defined(arch_atomic_read_acquire)
-#define raw_atomic_read_acquire arch_atomic_read_acquire
-#elif defined(arch_atomic_read)
-#define raw_atomic_read_acquire arch_atomic_read
-#else
static __always_inline int
raw_atomic_read_acquire(const atomic_t *v)
{
+#if defined(arch_atomic_read_acquire)
+ return arch_atomic_read_acquire(v);
+#elif defined(arch_atomic_read)
+ return arch_atomic_read(v);
+#else
int ret;
if (__native_word(atomic_t)) {
@@ -448,1144 +452,1088 @@ raw_atomic_read_acquire(const atomic_t *v)
}
return ret;
-}
#endif
+}
-#define raw_atomic_set arch_atomic_set
+static __always_inline void
+raw_atomic_set(atomic_t *v, int i)
+{
+ arch_atomic_set(v, i);
+}
-#if defined(arch_atomic_set_release)
-#define raw_atomic_set_release arch_atomic_set_release
-#elif defined(arch_atomic_set)
-#define raw_atomic_set_release arch_atomic_set
-#else
static __always_inline void
raw_atomic_set_release(atomic_t *v, int i)
{
+#if defined(arch_atomic_set_release)
+ arch_atomic_set_release(v, i);
+#elif defined(arch_atomic_set)
+ arch_atomic_set(v, i);
+#else
if (__native_word(atomic_t)) {
smp_store_release(&(v)->counter, i);
} else {
__atomic_release_fence();
raw_atomic_set(v, i);
}
-}
#endif
+}
-#define raw_atomic_add arch_atomic_add
+static __always_inline void
+raw_atomic_add(int i, atomic_t *v)
+{
+ arch_atomic_add(i, v);
+}
-#if defined(arch_atomic_add_return)
-#define raw_atomic_add_return arch_atomic_add_return
-#elif defined(arch_atomic_add_return_relaxed)
static __always_inline int
raw_atomic_add_return(int i, atomic_t *v)
{
+#if defined(arch_atomic_add_return)
+ return arch_atomic_add_return(i, v);
+#elif defined(arch_atomic_add_return_relaxed)
int ret;
__atomic_pre_full_fence();
ret = arch_atomic_add_return_relaxed(i, v);
__atomic_post_full_fence();
return ret;
-}
#else
#error "Unable to define raw_atomic_add_return"
#endif
+}
-#if defined(arch_atomic_add_return_acquire)
-#define raw_atomic_add_return_acquire arch_atomic_add_return_acquire
-#elif defined(arch_atomic_add_return_relaxed)
static __always_inline int
raw_atomic_add_return_acquire(int i, atomic_t *v)
{
+#if defined(arch_atomic_add_return_acquire)
+ return arch_atomic_add_return_acquire(i, v);
+#elif defined(arch_atomic_add_return_relaxed)
int ret = arch_atomic_add_return_relaxed(i, v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic_add_return)
-#define raw_atomic_add_return_acquire arch_atomic_add_return
+ return arch_atomic_add_return(i, v);
#else
#error "Unable to define raw_atomic_add_return_acquire"
#endif
+}
-#if defined(arch_atomic_add_return_release)
-#define raw_atomic_add_return_release arch_atomic_add_return_release
-#elif defined(arch_atomic_add_return_relaxed)
static __always_inline int
raw_atomic_add_return_release(int i, atomic_t *v)
{
+#if defined(arch_atomic_add_return_release)
+ return arch_atomic_add_return_release(i, v);
+#elif defined(arch_atomic_add_return_relaxed)
__atomic_release_fence();
return arch_atomic_add_return_relaxed(i, v);
-}
#elif defined(arch_atomic_add_return)
-#define raw_atomic_add_return_release arch_atomic_add_return
+ return arch_atomic_add_return(i, v);
#else
#error "Unable to define raw_atomic_add_return_release"
#endif
+}
+static __always_inline int
+raw_atomic_add_return_relaxed(int i, atomic_t *v)
+{
#if defined(arch_atomic_add_return_relaxed)
-#define raw_atomic_add_return_relaxed arch_atomic_add_return_relaxed
+ return arch_atomic_add_return_relaxed(i, v);
#elif defined(arch_atomic_add_return)
-#define raw_atomic_add_return_relaxed arch_atomic_add_return
+ return arch_atomic_add_return(i, v);
#else
#error "Unable to define raw_atomic_add_return_relaxed"
#endif
+}
-#if defined(arch_atomic_fetch_add)
-#define raw_atomic_fetch_add arch_atomic_fetch_add
-#elif defined(arch_atomic_fetch_add_relaxed)
static __always_inline int
raw_atomic_fetch_add(int i, atomic_t *v)
{
+#if defined(arch_atomic_fetch_add)
+ return arch_atomic_fetch_add(i, v);
+#elif defined(arch_atomic_fetch_add_relaxed)
int ret;
__atomic_pre_full_fence();
ret = arch_atomic_fetch_add_relaxed(i, v);
__atomic_post_full_fence();
return ret;
-}
#else
#error "Unable to define raw_atomic_fetch_add"
#endif
+}
-#if defined(arch_atomic_fetch_add_acquire)
-#define raw_atomic_fetch_add_acquire arch_atomic_fetch_add_acquire
-#elif defined(arch_atomic_fetch_add_relaxed)
static __always_inline int
raw_atomic_fetch_add_acquire(int i, atomic_t *v)
{
+#if defined(arch_atomic_fetch_add_acquire)
+ return arch_atomic_fetch_add_acquire(i, v);
+#elif defined(arch_atomic_fetch_add_relaxed)
int ret = arch_atomic_fetch_add_relaxed(i, v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic_fetch_add)
-#define raw_atomic_fetch_add_acquire arch_atomic_fetch_add
+ return arch_atomic_fetch_add(i, v);
#else
#error "Unable to define raw_atomic_fetch_add_acquire"
#endif
+}
-#if defined(arch_atomic_fetch_add_release)
-#define raw_atomic_fetch_add_release arch_atomic_fetch_add_release
-#elif defined(arch_atomic_fetch_add_relaxed)
static __always_inline int
raw_atomic_fetch_add_release(int i, atomic_t *v)
{
+#if defined(arch_atomic_fetch_add_release)
+ return arch_atomic_fetch_add_release(i, v);
+#elif defined(arch_atomic_fetch_add_relaxed)
__atomic_release_fence();
return arch_atomic_fetch_add_relaxed(i, v);
-}
#elif defined(arch_atomic_fetch_add)
-#define raw_atomic_fetch_add_release arch_atomic_fetch_add
+ return arch_atomic_fetch_add(i, v);
#else
#error "Unable to define raw_atomic_fetch_add_release"
#endif
+}
+static __always_inline int
+raw_atomic_fetch_add_relaxed(int i, atomic_t *v)
+{
#if defined(arch_atomic_fetch_add_relaxed)
-#define raw_atomic_fetch_add_relaxed arch_atomic_fetch_add_relaxed
+ return arch_atomic_fetch_add_relaxed(i, v);
#elif defined(arch_atomic_fetch_add)
-#define raw_atomic_fetch_add_relaxed arch_atomic_fetch_add
+ return arch_atomic_fetch_add(i, v);
#else
#error "Unable to define raw_atomic_fetch_add_relaxed"
#endif
+}
-#define raw_atomic_sub arch_atomic_sub
+static __always_inline void
+raw_atomic_sub(int i, atomic_t *v)
+{
+ arch_atomic_sub(i, v);
+}
-#if defined(arch_atomic_sub_return)
-#define raw_atomic_sub_return arch_atomic_sub_return
-#elif defined(arch_atomic_sub_return_relaxed)
static __always_inline int
raw_atomic_sub_return(int i, atomic_t *v)
{
+#if defined(arch_atomic_sub_return)
+ return arch_atomic_sub_return(i, v);
+#elif defined(arch_atomic_sub_return_relaxed)
int ret;
__atomic_pre_full_fence();
ret = arch_atomic_sub_return_relaxed(i, v);
__atomic_post_full_fence();
return ret;
-}
#else
#error "Unable to define raw_atomic_sub_return"
#endif
+}
-#if defined(arch_atomic_sub_return_acquire)
-#define raw_atomic_sub_return_acquire arch_atomic_sub_return_acquire
-#elif defined(arch_atomic_sub_return_relaxed)
static __always_inline int
raw_atomic_sub_return_acquire(int i, atomic_t *v)
{
+#if defined(arch_atomic_sub_return_acquire)
+ return arch_atomic_sub_return_acquire(i, v);
+#elif defined(arch_atomic_sub_return_relaxed)
int ret = arch_atomic_sub_return_relaxed(i, v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic_sub_return)
-#define raw_atomic_sub_return_acquire arch_atomic_sub_return
+ return arch_atomic_sub_return(i, v);
#else
#error "Unable to define raw_atomic_sub_return_acquire"
#endif
+}
-#if defined(arch_atomic_sub_return_release)
-#define raw_atomic_sub_return_release arch_atomic_sub_return_release
-#elif defined(arch_atomic_sub_return_relaxed)
static __always_inline int
raw_atomic_sub_return_release(int i, atomic_t *v)
{
+#if defined(arch_atomic_sub_return_release)
+ return arch_atomic_sub_return_release(i, v);
+#elif defined(arch_atomic_sub_return_relaxed)
__atomic_release_fence();
return arch_atomic_sub_return_relaxed(i, v);
-}
#elif defined(arch_atomic_sub_return)
-#define raw_atomic_sub_return_release arch_atomic_sub_return
+ return arch_atomic_sub_return(i, v);
#else
#error "Unable to define raw_atomic_sub_return_release"
#endif
+}
+static __always_inline int
+raw_atomic_sub_return_relaxed(int i, atomic_t *v)
+{
#if defined(arch_atomic_sub_return_relaxed)
-#define raw_atomic_sub_return_relaxed arch_atomic_sub_return_relaxed
+ return arch_atomic_sub_return_relaxed(i, v);
#elif defined(arch_atomic_sub_return)
-#define raw_atomic_sub_return_relaxed arch_atomic_sub_return
+ return arch_atomic_sub_return(i, v);
#else
#error "Unable to define raw_atomic_sub_return_relaxed"
#endif
+}
-#if defined(arch_atomic_fetch_sub)
-#define raw_atomic_fetch_sub arch_atomic_fetch_sub
-#elif defined(arch_atomic_fetch_sub_relaxed)
static __always_inline int
raw_atomic_fetch_sub(int i, atomic_t *v)
{
+#if defined(arch_atomic_fetch_sub)
+ return arch_atomic_fetch_sub(i, v);
+#elif defined(arch_atomic_fetch_sub_relaxed)
int ret;
__atomic_pre_full_fence();
ret = arch_atomic_fetch_sub_relaxed(i, v);
__atomic_post_full_fence();
return ret;
-}
#else
#error "Unable to define raw_atomic_fetch_sub"
#endif
+}
-#if defined(arch_atomic_fetch_sub_acquire)
-#define raw_atomic_fetch_sub_acquire arch_atomic_fetch_sub_acquire
-#elif defined(arch_atomic_fetch_sub_relaxed)
static __always_inline int
raw_atomic_fetch_sub_acquire(int i, atomic_t *v)
{
+#if defined(arch_atomic_fetch_sub_acquire)
+ return arch_atomic_fetch_sub_acquire(i, v);
+#elif defined(arch_atomic_fetch_sub_relaxed)
int ret = arch_atomic_fetch_sub_relaxed(i, v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic_fetch_sub)
-#define raw_atomic_fetch_sub_acquire arch_atomic_fetch_sub
+ return arch_atomic_fetch_sub(i, v);
#else
#error "Unable to define raw_atomic_fetch_sub_acquire"
#endif
+}
-#if defined(arch_atomic_fetch_sub_release)
-#define raw_atomic_fetch_sub_release arch_atomic_fetch_sub_release
-#elif defined(arch_atomic_fetch_sub_relaxed)
static __always_inline int
raw_atomic_fetch_sub_release(int i, atomic_t *v)
{
+#if defined(arch_atomic_fetch_sub_release)
+ return arch_atomic_fetch_sub_release(i, v);
+#elif defined(arch_atomic_fetch_sub_relaxed)
__atomic_release_fence();
return arch_atomic_fetch_sub_relaxed(i, v);
-}
#elif defined(arch_atomic_fetch_sub)
-#define raw_atomic_fetch_sub_release arch_atomic_fetch_sub
+ return arch_atomic_fetch_sub(i, v);
#else
#error "Unable to define raw_atomic_fetch_sub_release"
#endif
+}
+static __always_inline int
+raw_atomic_fetch_sub_relaxed(int i, atomic_t *v)
+{
#if defined(arch_atomic_fetch_sub_relaxed)
-#define raw_atomic_fetch_sub_relaxed arch_atomic_fetch_sub_relaxed
+ return arch_atomic_fetch_sub_relaxed(i, v);
#elif defined(arch_atomic_fetch_sub)
-#define raw_atomic_fetch_sub_relaxed arch_atomic_fetch_sub
+ return arch_atomic_fetch_sub(i, v);
#else
#error "Unable to define raw_atomic_fetch_sub_relaxed"
#endif
+}
-#if defined(arch_atomic_inc)
-#define raw_atomic_inc arch_atomic_inc
-#else
static __always_inline void
raw_atomic_inc(atomic_t *v)
{
+#if defined(arch_atomic_inc)
+ arch_atomic_inc(v);
+#else
raw_atomic_add(1, v);
-}
#endif
+}
-#if defined(arch_atomic_inc_return)
-#define raw_atomic_inc_return arch_atomic_inc_return
-#elif defined(arch_atomic_inc_return_relaxed)
static __always_inline int
raw_atomic_inc_return(atomic_t *v)
{
+#if defined(arch_atomic_inc_return)
+ return arch_atomic_inc_return(v);
+#elif defined(arch_atomic_inc_return_relaxed)
int ret;
__atomic_pre_full_fence();
ret = arch_atomic_inc_return_relaxed(v);
__atomic_post_full_fence();
return ret;
-}
#else
-static __always_inline int
-raw_atomic_inc_return(atomic_t *v)
-{
return raw_atomic_add_return(1, v);
-}
#endif
+}
-#if defined(arch_atomic_inc_return_acquire)
-#define raw_atomic_inc_return_acquire arch_atomic_inc_return_acquire
-#elif defined(arch_atomic_inc_return_relaxed)
static __always_inline int
raw_atomic_inc_return_acquire(atomic_t *v)
{
+#if defined(arch_atomic_inc_return_acquire)
+ return arch_atomic_inc_return_acquire(v);
+#elif defined(arch_atomic_inc_return_relaxed)
int ret = arch_atomic_inc_return_relaxed(v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic_inc_return)
-#define raw_atomic_inc_return_acquire arch_atomic_inc_return
+ return arch_atomic_inc_return(v);
#else
-static __always_inline int
-raw_atomic_inc_return_acquire(atomic_t *v)
-{
return raw_atomic_add_return_acquire(1, v);
-}
#endif
+}
-#if defined(arch_atomic_inc_return_release)
-#define raw_atomic_inc_return_release arch_atomic_inc_return_release
-#elif defined(arch_atomic_inc_return_relaxed)
static __always_inline int
raw_atomic_inc_return_release(atomic_t *v)
{
+#if defined(arch_atomic_inc_return_release)
+ return arch_atomic_inc_return_release(v);
+#elif defined(arch_atomic_inc_return_relaxed)
__atomic_release_fence();
return arch_atomic_inc_return_relaxed(v);
-}
#elif defined(arch_atomic_inc_return)
-#define raw_atomic_inc_return_release arch_atomic_inc_return
+ return arch_atomic_inc_return(v);
#else
-static __always_inline int
-raw_atomic_inc_return_release(atomic_t *v)
-{
return raw_atomic_add_return_release(1, v);
-}
#endif
+}
-#if defined(arch_atomic_inc_return_relaxed)
-#define raw_atomic_inc_return_relaxed arch_atomic_inc_return_relaxed
-#elif defined(arch_atomic_inc_return)
-#define raw_atomic_inc_return_relaxed arch_atomic_inc_return
-#else
static __always_inline int
raw_atomic_inc_return_relaxed(atomic_t *v)
{
+#if defined(arch_atomic_inc_return_relaxed)
+ return arch_atomic_inc_return_relaxed(v);
+#elif defined(arch_atomic_inc_return)
+ return arch_atomic_inc_return(v);
+#else
return raw_atomic_add_return_relaxed(1, v);
-}
#endif
+}
-#if defined(arch_atomic_fetch_inc)
-#define raw_atomic_fetch_inc arch_atomic_fetch_inc
-#elif defined(arch_atomic_fetch_inc_relaxed)
static __always_inline int
raw_atomic_fetch_inc(atomic_t *v)
{
+#if defined(arch_atomic_fetch_inc)
+ return arch_atomic_fetch_inc(v);
+#elif defined(arch_atomic_fetch_inc_relaxed)
int ret;
__atomic_pre_full_fence();
ret = arch_atomic_fetch_inc_relaxed(v);
__atomic_post_full_fence();
return ret;
-}
#else
-static __always_inline int
-raw_atomic_fetch_inc(atomic_t *v)
-{
return raw_atomic_fetch_add(1, v);
-}
#endif
+}
-#if defined(arch_atomic_fetch_inc_acquire)
-#define raw_atomic_fetch_inc_acquire arch_atomic_fetch_inc_acquire
-#elif defined(arch_atomic_fetch_inc_relaxed)
static __always_inline int
raw_atomic_fetch_inc_acquire(atomic_t *v)
{
+#if defined(arch_atomic_fetch_inc_acquire)
+ return arch_atomic_fetch_inc_acquire(v);
+#elif defined(arch_atomic_fetch_inc_relaxed)
int ret = arch_atomic_fetch_inc_relaxed(v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic_fetch_inc)
-#define raw_atomic_fetch_inc_acquire arch_atomic_fetch_inc
+ return arch_atomic_fetch_inc(v);
#else
-static __always_inline int
-raw_atomic_fetch_inc_acquire(atomic_t *v)
-{
return raw_atomic_fetch_add_acquire(1, v);
-}
#endif
+}
-#if defined(arch_atomic_fetch_inc_release)
-#define raw_atomic_fetch_inc_release arch_atomic_fetch_inc_release
-#elif defined(arch_atomic_fetch_inc_relaxed)
static __always_inline int
raw_atomic_fetch_inc_release(atomic_t *v)
{
+#if defined(arch_atomic_fetch_inc_release)
+ return arch_atomic_fetch_inc_release(v);
+#elif defined(arch_atomic_fetch_inc_relaxed)
__atomic_release_fence();
return arch_atomic_fetch_inc_relaxed(v);
-}
#elif defined(arch_atomic_fetch_inc)
-#define raw_atomic_fetch_inc_release arch_atomic_fetch_inc
+ return arch_atomic_fetch_inc(v);
#else
-static __always_inline int
-raw_atomic_fetch_inc_release(atomic_t *v)
-{
return raw_atomic_fetch_add_release(1, v);
-}
#endif
+}
-#if defined(arch_atomic_fetch_inc_relaxed)
-#define raw_atomic_fetch_inc_relaxed arch_atomic_fetch_inc_relaxed
-#elif defined(arch_atomic_fetch_inc)
-#define raw_atomic_fetch_inc_relaxed arch_atomic_fetch_inc
-#else
static __always_inline int
raw_atomic_fetch_inc_relaxed(atomic_t *v)
{
+#if defined(arch_atomic_fetch_inc_relaxed)
+ return arch_atomic_fetch_inc_relaxed(v);
+#elif defined(arch_atomic_fetch_inc)
+ return arch_atomic_fetch_inc(v);
+#else
return raw_atomic_fetch_add_relaxed(1, v);
-}
#endif
+}
-#if defined(arch_atomic_dec)
-#define raw_atomic_dec arch_atomic_dec
-#else
static __always_inline void
raw_atomic_dec(atomic_t *v)
{
+#if defined(arch_atomic_dec)
+ arch_atomic_dec(v);
+#else
raw_atomic_sub(1, v);
-}
#endif
+}
-#if defined(arch_atomic_dec_return)
-#define raw_atomic_dec_return arch_atomic_dec_return
-#elif defined(arch_atomic_dec_return_relaxed)
static __always_inline int
raw_atomic_dec_return(atomic_t *v)
{
+#if defined(arch_atomic_dec_return)
+ return arch_atomic_dec_return(v);
+#elif defined(arch_atomic_dec_return_relaxed)
int ret;
__atomic_pre_full_fence();
ret = arch_atomic_dec_return_relaxed(v);
__atomic_post_full_fence();
return ret;
-}
#else
-static __always_inline int
-raw_atomic_dec_return(atomic_t *v)
-{
return raw_atomic_sub_return(1, v);
-}
#endif
+}
-#if defined(arch_atomic_dec_return_acquire)
-#define raw_atomic_dec_return_acquire arch_atomic_dec_return_acquire
-#elif defined(arch_atomic_dec_return_relaxed)
static __always_inline int
raw_atomic_dec_return_acquire(atomic_t *v)
{
+#if defined(arch_atomic_dec_return_acquire)
+ return arch_atomic_dec_return_acquire(v);
+#elif defined(arch_atomic_dec_return_relaxed)
int ret = arch_atomic_dec_return_relaxed(v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic_dec_return)
-#define raw_atomic_dec_return_acquire arch_atomic_dec_return
+ return arch_atomic_dec_return(v);
#else
-static __always_inline int
-raw_atomic_dec_return_acquire(atomic_t *v)
-{
return raw_atomic_sub_return_acquire(1, v);
-}
#endif
+}
-#if defined(arch_atomic_dec_return_release)
-#define raw_atomic_dec_return_release arch_atomic_dec_return_release
-#elif defined(arch_atomic_dec_return_relaxed)
static __always_inline int
raw_atomic_dec_return_release(atomic_t *v)
{
+#if defined(arch_atomic_dec_return_release)
+ return arch_atomic_dec_return_release(v);
+#elif defined(arch_atomic_dec_return_relaxed)
__atomic_release_fence();
return arch_atomic_dec_return_relaxed(v);
-}
#elif defined(arch_atomic_dec_return)
-#define raw_atomic_dec_return_release arch_atomic_dec_return
+ return arch_atomic_dec_return(v);
#else
-static __always_inline int
-raw_atomic_dec_return_release(atomic_t *v)
-{
return raw_atomic_sub_return_release(1, v);
-}
#endif
+}
-#if defined(arch_atomic_dec_return_relaxed)
-#define raw_atomic_dec_return_relaxed arch_atomic_dec_return_relaxed
-#elif defined(arch_atomic_dec_return)
-#define raw_atomic_dec_return_relaxed arch_atomic_dec_return
-#else
static __always_inline int
raw_atomic_dec_return_relaxed(atomic_t *v)
{
+#if defined(arch_atomic_dec_return_relaxed)
+ return arch_atomic_dec_return_relaxed(v);
+#elif defined(arch_atomic_dec_return)
+ return arch_atomic_dec_return(v);
+#else
return raw_atomic_sub_return_relaxed(1, v);
-}
#endif
+}
-#if defined(arch_atomic_fetch_dec)
-#define raw_atomic_fetch_dec arch_atomic_fetch_dec
-#elif defined(arch_atomic_fetch_dec_relaxed)
static __always_inline int
raw_atomic_fetch_dec(atomic_t *v)
{
+#if defined(arch_atomic_fetch_dec)
+ return arch_atomic_fetch_dec(v);
+#elif defined(arch_atomic_fetch_dec_relaxed)
int ret;
__atomic_pre_full_fence();
ret = arch_atomic_fetch_dec_relaxed(v);
__atomic_post_full_fence();
return ret;
-}
#else
-static __always_inline int
-raw_atomic_fetch_dec(atomic_t *v)
-{
return raw_atomic_fetch_sub(1, v);
-}
#endif
+}
-#if defined(arch_atomic_fetch_dec_acquire)
-#define raw_atomic_fetch_dec_acquire arch_atomic_fetch_dec_acquire
-#elif defined(arch_atomic_fetch_dec_relaxed)
static __always_inline int
raw_atomic_fetch_dec_acquire(atomic_t *v)
{
+#if defined(arch_atomic_fetch_dec_acquire)
+ return arch_atomic_fetch_dec_acquire(v);
+#elif defined(arch_atomic_fetch_dec_relaxed)
int ret = arch_atomic_fetch_dec_relaxed(v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic_fetch_dec)
-#define raw_atomic_fetch_dec_acquire arch_atomic_fetch_dec
+ return arch_atomic_fetch_dec(v);
#else
-static __always_inline int
-raw_atomic_fetch_dec_acquire(atomic_t *v)
-{
return raw_atomic_fetch_sub_acquire(1, v);
-}
#endif
+}
-#if defined(arch_atomic_fetch_dec_release)
-#define raw_atomic_fetch_dec_release arch_atomic_fetch_dec_release
-#elif defined(arch_atomic_fetch_dec_relaxed)
static __always_inline int
raw_atomic_fetch_dec_release(atomic_t *v)
{
+#if defined(arch_atomic_fetch_dec_release)
+ return arch_atomic_fetch_dec_release(v);
+#elif defined(arch_atomic_fetch_dec_relaxed)
__atomic_release_fence();
return arch_atomic_fetch_dec_relaxed(v);
-}
#elif defined(arch_atomic_fetch_dec)
-#define raw_atomic_fetch_dec_release arch_atomic_fetch_dec
+ return arch_atomic_fetch_dec(v);
#else
-static __always_inline int
-raw_atomic_fetch_dec_release(atomic_t *v)
-{
return raw_atomic_fetch_sub_release(1, v);
-}
#endif
+}
-#if defined(arch_atomic_fetch_dec_relaxed)
-#define raw_atomic_fetch_dec_relaxed arch_atomic_fetch_dec_relaxed
-#elif defined(arch_atomic_fetch_dec)
-#define raw_atomic_fetch_dec_relaxed arch_atomic_fetch_dec
-#else
static __always_inline int
raw_atomic_fetch_dec_relaxed(atomic_t *v)
{
+#if defined(arch_atomic_fetch_dec_relaxed)
+ return arch_atomic_fetch_dec_relaxed(v);
+#elif defined(arch_atomic_fetch_dec)
+ return arch_atomic_fetch_dec(v);
+#else
return raw_atomic_fetch_sub_relaxed(1, v);
-}
#endif
+}
-#define raw_atomic_and arch_atomic_and
+static __always_inline void
+raw_atomic_and(int i, atomic_t *v)
+{
+ arch_atomic_and(i, v);
+}
-#if defined(arch_atomic_fetch_and)
-#define raw_atomic_fetch_and arch_atomic_fetch_and
-#elif defined(arch_atomic_fetch_and_relaxed)
static __always_inline int
raw_atomic_fetch_and(int i, atomic_t *v)
{
+#if defined(arch_atomic_fetch_and)
+ return arch_atomic_fetch_and(i, v);
+#elif defined(arch_atomic_fetch_and_relaxed)
int ret;
__atomic_pre_full_fence();
ret = arch_atomic_fetch_and_relaxed(i, v);
__atomic_post_full_fence();
return ret;
-}
#else
#error "Unable to define raw_atomic_fetch_and"
#endif
+}
-#if defined(arch_atomic_fetch_and_acquire)
-#define raw_atomic_fetch_and_acquire arch_atomic_fetch_and_acquire
-#elif defined(arch_atomic_fetch_and_relaxed)
static __always_inline int
raw_atomic_fetch_and_acquire(int i, atomic_t *v)
{
+#if defined(arch_atomic_fetch_and_acquire)
+ return arch_atomic_fetch_and_acquire(i, v);
+#elif defined(arch_atomic_fetch_and_relaxed)
int ret = arch_atomic_fetch_and_relaxed(i, v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic_fetch_and)
-#define raw_atomic_fetch_and_acquire arch_atomic_fetch_and
+ return arch_atomic_fetch_and(i, v);
#else
#error "Unable to define raw_atomic_fetch_and_acquire"
#endif
+}
-#if defined(arch_atomic_fetch_and_release)
-#define raw_atomic_fetch_and_release arch_atomic_fetch_and_release
-#elif defined(arch_atomic_fetch_and_relaxed)
static __always_inline int
raw_atomic_fetch_and_release(int i, atomic_t *v)
{
+#if defined(arch_atomic_fetch_and_release)
+ return arch_atomic_fetch_and_release(i, v);
+#elif defined(arch_atomic_fetch_and_relaxed)
__atomic_release_fence();
return arch_atomic_fetch_and_relaxed(i, v);
-}
#elif defined(arch_atomic_fetch_and)
-#define raw_atomic_fetch_and_release arch_atomic_fetch_and
+ return arch_atomic_fetch_and(i, v);
#else
#error "Unable to define raw_atomic_fetch_and_release"
#endif
+}
+static __always_inline int
+raw_atomic_fetch_and_relaxed(int i, atomic_t *v)
+{
#if defined(arch_atomic_fetch_and_relaxed)
-#define raw_atomic_fetch_and_relaxed arch_atomic_fetch_and_relaxed
+ return arch_atomic_fetch_and_relaxed(i, v);
#elif defined(arch_atomic_fetch_and)
-#define raw_atomic_fetch_and_relaxed arch_atomic_fetch_and
+ return arch_atomic_fetch_and(i, v);
#else
#error "Unable to define raw_atomic_fetch_and_relaxed"
#endif
+}
-#if defined(arch_atomic_andnot)
-#define raw_atomic_andnot arch_atomic_andnot
-#else
static __always_inline void
raw_atomic_andnot(int i, atomic_t *v)
{
+#if defined(arch_atomic_andnot)
+ arch_atomic_andnot(i, v);
+#else
raw_atomic_and(~i, v);
-}
#endif
+}
-#if defined(arch_atomic_fetch_andnot)
-#define raw_atomic_fetch_andnot arch_atomic_fetch_andnot
-#elif defined(arch_atomic_fetch_andnot_relaxed)
static __always_inline int
raw_atomic_fetch_andnot(int i, atomic_t *v)
{
+#if defined(arch_atomic_fetch_andnot)
+ return arch_atomic_fetch_andnot(i, v);
+#elif defined(arch_atomic_fetch_andnot_relaxed)
int ret;
__atomic_pre_full_fence();
ret = arch_atomic_fetch_andnot_relaxed(i, v);
__atomic_post_full_fence();
return ret;
-}
#else
-static __always_inline int
-raw_atomic_fetch_andnot(int i, atomic_t *v)
-{
return raw_atomic_fetch_and(~i, v);
-}
#endif
+}
-#if defined(arch_atomic_fetch_andnot_acquire)
-#define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot_acquire
-#elif defined(arch_atomic_fetch_andnot_relaxed)
static __always_inline int
raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
{
+#if defined(arch_atomic_fetch_andnot_acquire)
+ return arch_atomic_fetch_andnot_acquire(i, v);
+#elif defined(arch_atomic_fetch_andnot_relaxed)
int ret = arch_atomic_fetch_andnot_relaxed(i, v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic_fetch_andnot)
-#define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot
+ return arch_atomic_fetch_andnot(i, v);
#else
-static __always_inline int
-raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
-{
return raw_atomic_fetch_and_acquire(~i, v);
-}
#endif
+}
-#if defined(arch_atomic_fetch_andnot_release)
-#define raw_atomic_fetch_andnot_release arch_atomic_fetch_andnot_release
-#elif defined(arch_atomic_fetch_andnot_relaxed)
static __always_inline int
raw_atomic_fetch_andnot_release(int i, atomic_t *v)
{
+#if defined(arch_atomic_fetch_andnot_release)
+ return arch_atomic_fetch_andnot_release(i, v);
+#elif defined(arch_atomic_fetch_andnot_relaxed)
__atomic_release_fence();
return arch_atomic_fetch_andnot_relaxed(i, v);
-}
#elif defined(arch_atomic_fetch_andnot)
-#define raw_atomic_fetch_andnot_release arch_atomic_fetch_andnot
+ return arch_atomic_fetch_andnot(i, v);
#else
-static __always_inline int
-raw_atomic_fetch_andnot_release(int i, atomic_t *v)
-{
return raw_atomic_fetch_and_release(~i, v);
-}
#endif
+}
-#if defined(arch_atomic_fetch_andnot_relaxed)
-#define raw_atomic_fetch_andnot_relaxed arch_atomic_fetch_andnot_relaxed
-#elif defined(arch_atomic_fetch_andnot)
-#define raw_atomic_fetch_andnot_relaxed arch_atomic_fetch_andnot
-#else
static __always_inline int
raw_atomic_fetch_andnot_relaxed(int i, atomic_t *v)
{
+#if defined(arch_atomic_fetch_andnot_relaxed)
+ return arch_atomic_fetch_andnot_relaxed(i, v);
+#elif defined(arch_atomic_fetch_andnot)
+ return arch_atomic_fetch_andnot(i, v);
+#else
return raw_atomic_fetch_and_relaxed(~i, v);
-}
#endif
+}
-#define raw_atomic_or arch_atomic_or
+static __always_inline void
+raw_atomic_or(int i, atomic_t *v)
+{
+ arch_atomic_or(i, v);
+}
-#if defined(arch_atomic_fetch_or)
-#define raw_atomic_fetch_or arch_atomic_fetch_or
-#elif defined(arch_atomic_fetch_or_relaxed)
static __always_inline int
raw_atomic_fetch_or(int i, atomic_t *v)
{
+#if defined(arch_atomic_fetch_or)
+ return arch_atomic_fetch_or(i, v);
+#elif defined(arch_atomic_fetch_or_relaxed)
int ret;
__atomic_pre_full_fence();
ret = arch_atomic_fetch_or_relaxed(i, v);
__atomic_post_full_fence();
return ret;
-}
#else
#error "Unable to define raw_atomic_fetch_or"
#endif
+}
-#if defined(arch_atomic_fetch_or_acquire)
-#define raw_atomic_fetch_or_acquire arch_atomic_fetch_or_acquire
-#elif defined(arch_atomic_fetch_or_relaxed)
static __always_inline int
raw_atomic_fetch_or_acquire(int i, atomic_t *v)
{
+#if defined(arch_atomic_fetch_or_acquire)
+ return arch_atomic_fetch_or_acquire(i, v);
+#elif defined(arch_atomic_fetch_or_relaxed)
int ret = arch_atomic_fetch_or_relaxed(i, v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic_fetch_or)
-#define raw_atomic_fetch_or_acquire arch_atomic_fetch_or
+ return arch_atomic_fetch_or(i, v);
#else
#error "Unable to define raw_atomic_fetch_or_acquire"
#endif
+}
-#if defined(arch_atomic_fetch_or_release)
-#define raw_atomic_fetch_or_release arch_atomic_fetch_or_release
-#elif defined(arch_atomic_fetch_or_relaxed)
static __always_inline int
raw_atomic_fetch_or_release(int i, atomic_t *v)
{
+#if defined(arch_atomic_fetch_or_release)
+ return arch_atomic_fetch_or_release(i, v);
+#elif defined(arch_atomic_fetch_or_relaxed)
__atomic_release_fence();
return arch_atomic_fetch_or_relaxed(i, v);
-}
#elif defined(arch_atomic_fetch_or)
-#define raw_atomic_fetch_or_release arch_atomic_fetch_or
+ return arch_atomic_fetch_or(i, v);
#else
#error "Unable to define raw_atomic_fetch_or_release"
#endif
+}
+static __always_inline int
+raw_atomic_fetch_or_relaxed(int i, atomic_t *v)
+{
#if defined(arch_atomic_fetch_or_relaxed)
-#define raw_atomic_fetch_or_relaxed arch_atomic_fetch_or_relaxed
+ return arch_atomic_fetch_or_relaxed(i, v);
#elif defined(arch_atomic_fetch_or)
-#define raw_atomic_fetch_or_relaxed arch_atomic_fetch_or
+ return arch_atomic_fetch_or(i, v);
#else
#error "Unable to define raw_atomic_fetch_or_relaxed"
#endif
+}
-#define raw_atomic_xor arch_atomic_xor
+static __always_inline void
+raw_atomic_xor(int i, atomic_t *v)
+{
+ arch_atomic_xor(i, v);
+}
-#if defined(arch_atomic_fetch_xor)
-#define raw_atomic_fetch_xor arch_atomic_fetch_xor
-#elif defined(arch_atomic_fetch_xor_relaxed)
static __always_inline int
raw_atomic_fetch_xor(int i, atomic_t *v)
{
+#if defined(arch_atomic_fetch_xor)
+ return arch_atomic_fetch_xor(i, v);
+#elif defined(arch_atomic_fetch_xor_relaxed)
int ret;
__atomic_pre_full_fence();
ret = arch_atomic_fetch_xor_relaxed(i, v);
__atomic_post_full_fence();
return ret;
-}
#else
#error "Unable to define raw_atomic_fetch_xor"
#endif
+}
-#if defined(arch_atomic_fetch_xor_acquire)
-#define raw_atomic_fetch_xor_acquire arch_atomic_fetch_xor_acquire
-#elif defined(arch_atomic_fetch_xor_relaxed)
static __always_inline int
raw_atomic_fetch_xor_acquire(int i, atomic_t *v)
{
+#if defined(arch_atomic_fetch_xor_acquire)
+ return arch_atomic_fetch_xor_acquire(i, v);
+#elif defined(arch_atomic_fetch_xor_relaxed)
int ret = arch_atomic_fetch_xor_relaxed(i, v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic_fetch_xor)
-#define raw_atomic_fetch_xor_acquire arch_atomic_fetch_xor
+ return arch_atomic_fetch_xor(i, v);
#else
#error "Unable to define raw_atomic_fetch_xor_acquire"
#endif
+}
-#if defined(arch_atomic_fetch_xor_release)
-#define raw_atomic_fetch_xor_release arch_atomic_fetch_xor_release
-#elif defined(arch_atomic_fetch_xor_relaxed)
static __always_inline int
raw_atomic_fetch_xor_release(int i, atomic_t *v)
{
+#if defined(arch_atomic_fetch_xor_release)
+ return arch_atomic_fetch_xor_release(i, v);
+#elif defined(arch_atomic_fetch_xor_relaxed)
__atomic_release_fence();
return arch_atomic_fetch_xor_relaxed(i, v);
-}
#elif defined(arch_atomic_fetch_xor)
-#define raw_atomic_fetch_xor_release arch_atomic_fetch_xor
+ return arch_atomic_fetch_xor(i, v);
#else
#error "Unable to define raw_atomic_fetch_xor_release"
#endif
+}
+static __always_inline int
+raw_atomic_fetch_xor_relaxed(int i, atomic_t *v)
+{
#if defined(arch_atomic_fetch_xor_relaxed)
-#define raw_atomic_fetch_xor_relaxed arch_atomic_fetch_xor_relaxed
+ return arch_atomic_fetch_xor_relaxed(i, v);
#elif defined(arch_atomic_fetch_xor)
-#define raw_atomic_fetch_xor_relaxed arch_atomic_fetch_xor
+ return arch_atomic_fetch_xor(i, v);
#else
#error "Unable to define raw_atomic_fetch_xor_relaxed"
#endif
+}
-#if defined(arch_atomic_xchg)
-#define raw_atomic_xchg arch_atomic_xchg
-#elif defined(arch_atomic_xchg_relaxed)
static __always_inline int
-raw_atomic_xchg(atomic_t *v, int i)
+raw_atomic_xchg(atomic_t *v, int new)
{
+#if defined(arch_atomic_xchg)
+ return arch_atomic_xchg(v, new);
+#elif defined(arch_atomic_xchg_relaxed)
int ret;
__atomic_pre_full_fence();
- ret = arch_atomic_xchg_relaxed(v, i);
+ ret = arch_atomic_xchg_relaxed(v, new);
__atomic_post_full_fence();
return ret;
-}
#else
-static __always_inline int
-raw_atomic_xchg(atomic_t *v, int new)
-{
return raw_xchg(&v->counter, new);
-}
#endif
+}
-#if defined(arch_atomic_xchg_acquire)
-#define raw_atomic_xchg_acquire arch_atomic_xchg_acquire
-#elif defined(arch_atomic_xchg_relaxed)
static __always_inline int
-raw_atomic_xchg_acquire(atomic_t *v, int i)
+raw_atomic_xchg_acquire(atomic_t *v, int new)
{
- int ret = arch_atomic_xchg_relaxed(v, i);
+#if defined(arch_atomic_xchg_acquire)
+ return arch_atomic_xchg_acquire(v, new);
+#elif defined(arch_atomic_xchg_relaxed)
+ int ret = arch_atomic_xchg_relaxed(v, new);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic_xchg)
-#define raw_atomic_xchg_acquire arch_atomic_xchg
+ return arch_atomic_xchg(v, new);
#else
-static __always_inline int
-raw_atomic_xchg_acquire(atomic_t *v, int new)
-{
return raw_xchg_acquire(&v->counter, new);
-}
#endif
+}
-#if defined(arch_atomic_xchg_release)
-#define raw_atomic_xchg_release arch_atomic_xchg_release
-#elif defined(arch_atomic_xchg_relaxed)
static __always_inline int
-raw_atomic_xchg_release(atomic_t *v, int i)
+raw_atomic_xchg_release(atomic_t *v, int new)
{
+#if defined(arch_atomic_xchg_release)
+ return arch_atomic_xchg_release(v, new);
+#elif defined(arch_atomic_xchg_relaxed)
__atomic_release_fence();
- return arch_atomic_xchg_relaxed(v, i);
-}
+ return arch_atomic_xchg_relaxed(v, new);
#elif defined(arch_atomic_xchg)
-#define raw_atomic_xchg_release arch_atomic_xchg
+ return arch_atomic_xchg(v, new);
#else
-static __always_inline int
-raw_atomic_xchg_release(atomic_t *v, int new)
-{
return raw_xchg_release(&v->counter, new);
-}
#endif
+}
-#if defined(arch_atomic_xchg_relaxed)
-#define raw_atomic_xchg_relaxed arch_atomic_xchg_relaxed
-#elif defined(arch_atomic_xchg)
-#define raw_atomic_xchg_relaxed arch_atomic_xchg
-#else
static __always_inline int
raw_atomic_xchg_relaxed(atomic_t *v, int new)
{
+#if defined(arch_atomic_xchg_relaxed)
+ return arch_atomic_xchg_relaxed(v, new);
+#elif defined(arch_atomic_xchg)
+ return arch_atomic_xchg(v, new);
+#else
return raw_xchg_relaxed(&v->counter, new);
-}
#endif
+}
-#if defined(arch_atomic_cmpxchg)
-#define raw_atomic_cmpxchg arch_atomic_cmpxchg
-#elif defined(arch_atomic_cmpxchg_relaxed)
static __always_inline int
raw_atomic_cmpxchg(atomic_t *v, int old, int new)
{
+#if defined(arch_atomic_cmpxchg)
+ return arch_atomic_cmpxchg(v, old, new);
+#elif defined(arch_atomic_cmpxchg_relaxed)
int ret;
__atomic_pre_full_fence();
ret = arch_atomic_cmpxchg_relaxed(v, old, new);
__atomic_post_full_fence();
return ret;
-}
#else
-static __always_inline int
-raw_atomic_cmpxchg(atomic_t *v, int old, int new)
-{
return raw_cmpxchg(&v->counter, old, new);
-}
#endif
+}
-#if defined(arch_atomic_cmpxchg_acquire)
-#define raw_atomic_cmpxchg_acquire arch_atomic_cmpxchg_acquire
-#elif defined(arch_atomic_cmpxchg_relaxed)
static __always_inline int
raw_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
{
+#if defined(arch_atomic_cmpxchg_acquire)
+ return arch_atomic_cmpxchg_acquire(v, old, new);
+#elif defined(arch_atomic_cmpxchg_relaxed)
int ret = arch_atomic_cmpxchg_relaxed(v, old, new);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic_cmpxchg)
-#define raw_atomic_cmpxchg_acquire arch_atomic_cmpxchg
+ return arch_atomic_cmpxchg(v, old, new);
#else
-static __always_inline int
-raw_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
-{
return raw_cmpxchg_acquire(&v->counter, old, new);
-}
#endif
+}
-#if defined(arch_atomic_cmpxchg_release)
-#define raw_atomic_cmpxchg_release arch_atomic_cmpxchg_release
-#elif defined(arch_atomic_cmpxchg_relaxed)
static __always_inline int
raw_atomic_cmpxchg_release(atomic_t *v, int old, int new)
{
+#if defined(arch_atomic_cmpxchg_release)
+ return arch_atomic_cmpxchg_release(v, old, new);
+#elif defined(arch_atomic_cmpxchg_relaxed)
__atomic_release_fence();
return arch_atomic_cmpxchg_relaxed(v, old, new);
-}
#elif defined(arch_atomic_cmpxchg)
-#define raw_atomic_cmpxchg_release arch_atomic_cmpxchg
+ return arch_atomic_cmpxchg(v, old, new);
#else
-static __always_inline int
-raw_atomic_cmpxchg_release(atomic_t *v, int old, int new)
-{
return raw_cmpxchg_release(&v->counter, old, new);
-}
#endif
+}
-#if defined(arch_atomic_cmpxchg_relaxed)
-#define raw_atomic_cmpxchg_relaxed arch_atomic_cmpxchg_relaxed
-#elif defined(arch_atomic_cmpxchg)
-#define raw_atomic_cmpxchg_relaxed arch_atomic_cmpxchg
-#else
static __always_inline int
raw_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
{
+#if defined(arch_atomic_cmpxchg_relaxed)
+ return arch_atomic_cmpxchg_relaxed(v, old, new);
+#elif defined(arch_atomic_cmpxchg)
+ return arch_atomic_cmpxchg(v, old, new);
+#else
return raw_cmpxchg_relaxed(&v->counter, old, new);
-}
#endif
+}
-#if defined(arch_atomic_try_cmpxchg)
-#define raw_atomic_try_cmpxchg arch_atomic_try_cmpxchg
-#elif defined(arch_atomic_try_cmpxchg_relaxed)
static __always_inline bool
raw_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
{
+#if defined(arch_atomic_try_cmpxchg)
+ return arch_atomic_try_cmpxchg(v, old, new);
+#elif defined(arch_atomic_try_cmpxchg_relaxed)
bool ret;
__atomic_pre_full_fence();
ret = arch_atomic_try_cmpxchg_relaxed(v, old, new);
__atomic_post_full_fence();
return ret;
-}
#else
-static __always_inline bool
-raw_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
-{
int r, o = *old;
r = raw_atomic_cmpxchg(v, o, new);
if (unlikely(r != o))
*old = r;
return likely(r == o);
-}
#endif
+}
-#if defined(arch_atomic_try_cmpxchg_acquire)
-#define raw_atomic_try_cmpxchg_acquire arch_atomic_try_cmpxchg_acquire
-#elif defined(arch_atomic_try_cmpxchg_relaxed)
static __always_inline bool
raw_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
{
+#if defined(arch_atomic_try_cmpxchg_acquire)
+ return arch_atomic_try_cmpxchg_acquire(v, old, new);
+#elif defined(arch_atomic_try_cmpxchg_relaxed)
bool ret = arch_atomic_try_cmpxchg_relaxed(v, old, new);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic_try_cmpxchg)
-#define raw_atomic_try_cmpxchg_acquire arch_atomic_try_cmpxchg
+ return arch_atomic_try_cmpxchg(v, old, new);
#else
-static __always_inline bool
-raw_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
-{
int r, o = *old;
r = raw_atomic_cmpxchg_acquire(v, o, new);
if (unlikely(r != o))
*old = r;
return likely(r == o);
-}
#endif
+}
-#if defined(arch_atomic_try_cmpxchg_release)
-#define raw_atomic_try_cmpxchg_release arch_atomic_try_cmpxchg_release
-#elif defined(arch_atomic_try_cmpxchg_relaxed)
static __always_inline bool
raw_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
{
+#if defined(arch_atomic_try_cmpxchg_release)
+ return arch_atomic_try_cmpxchg_release(v, old, new);
+#elif defined(arch_atomic_try_cmpxchg_relaxed)
__atomic_release_fence();
return arch_atomic_try_cmpxchg_relaxed(v, old, new);
-}
#elif defined(arch_atomic_try_cmpxchg)
-#define raw_atomic_try_cmpxchg_release arch_atomic_try_cmpxchg
+ return arch_atomic_try_cmpxchg(v, old, new);
#else
-static __always_inline bool
-raw_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
-{
int r, o = *old;
r = raw_atomic_cmpxchg_release(v, o, new);
if (unlikely(r != o))
*old = r;
return likely(r == o);
-}
#endif
+}
-#if defined(arch_atomic_try_cmpxchg_relaxed)
-#define raw_atomic_try_cmpxchg_relaxed arch_atomic_try_cmpxchg_relaxed
-#elif defined(arch_atomic_try_cmpxchg)
-#define raw_atomic_try_cmpxchg_relaxed arch_atomic_try_cmpxchg
-#else
static __always_inline bool
raw_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
{
+#if defined(arch_atomic_try_cmpxchg_relaxed)
+ return arch_atomic_try_cmpxchg_relaxed(v, old, new);
+#elif defined(arch_atomic_try_cmpxchg)
+ return arch_atomic_try_cmpxchg(v, old, new);
+#else
int r, o = *old;
r = raw_atomic_cmpxchg_relaxed(v, o, new);
if (unlikely(r != o))
*old = r;
return likely(r == o);
-}
#endif
+}
-#if defined(arch_atomic_sub_and_test)
-#define raw_atomic_sub_and_test arch_atomic_sub_and_test
-#else
static __always_inline bool
raw_atomic_sub_and_test(int i, atomic_t *v)
{
+#if defined(arch_atomic_sub_and_test)
+ return arch_atomic_sub_and_test(i, v);
+#else
return raw_atomic_sub_return(i, v) == 0;
-}
#endif
+}
-#if defined(arch_atomic_dec_and_test)
-#define raw_atomic_dec_and_test arch_atomic_dec_and_test
-#else
static __always_inline bool
raw_atomic_dec_and_test(atomic_t *v)
{
+#if defined(arch_atomic_dec_and_test)
+ return arch_atomic_dec_and_test(v);
+#else
return raw_atomic_dec_return(v) == 0;
-}
#endif
+}
-#if defined(arch_atomic_inc_and_test)
-#define raw_atomic_inc_and_test arch_atomic_inc_and_test
-#else
static __always_inline bool
raw_atomic_inc_and_test(atomic_t *v)
{
+#if defined(arch_atomic_inc_and_test)
+ return arch_atomic_inc_and_test(v);
+#else
return raw_atomic_inc_return(v) == 0;
-}
#endif
+}
-#if defined(arch_atomic_add_negative)
-#define raw_atomic_add_negative arch_atomic_add_negative
-#elif defined(arch_atomic_add_negative_relaxed)
static __always_inline bool
raw_atomic_add_negative(int i, atomic_t *v)
{
+#if defined(arch_atomic_add_negative)
+ return arch_atomic_add_negative(i, v);
+#elif defined(arch_atomic_add_negative_relaxed)
bool ret;
__atomic_pre_full_fence();
ret = arch_atomic_add_negative_relaxed(i, v);
__atomic_post_full_fence();
return ret;
-}
#else
-static __always_inline bool
-raw_atomic_add_negative(int i, atomic_t *v)
-{
return raw_atomic_add_return(i, v) < 0;
-}
#endif
+}
-#if defined(arch_atomic_add_negative_acquire)
-#define raw_atomic_add_negative_acquire arch_atomic_add_negative_acquire
-#elif defined(arch_atomic_add_negative_relaxed)
static __always_inline bool
raw_atomic_add_negative_acquire(int i, atomic_t *v)
{
+#if defined(arch_atomic_add_negative_acquire)
+ return arch_atomic_add_negative_acquire(i, v);
+#elif defined(arch_atomic_add_negative_relaxed)
bool ret = arch_atomic_add_negative_relaxed(i, v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic_add_negative)
-#define raw_atomic_add_negative_acquire arch_atomic_add_negative
+ return arch_atomic_add_negative(i, v);
#else
-static __always_inline bool
-raw_atomic_add_negative_acquire(int i, atomic_t *v)
-{
return raw_atomic_add_return_acquire(i, v) < 0;
-}
#endif
+}
-#if defined(arch_atomic_add_negative_release)
-#define raw_atomic_add_negative_release arch_atomic_add_negative_release
-#elif defined(arch_atomic_add_negative_relaxed)
static __always_inline bool
raw_atomic_add_negative_release(int i, atomic_t *v)
{
+#if defined(arch_atomic_add_negative_release)
+ return arch_atomic_add_negative_release(i, v);
+#elif defined(arch_atomic_add_negative_relaxed)
__atomic_release_fence();
return arch_atomic_add_negative_relaxed(i, v);
-}
#elif defined(arch_atomic_add_negative)
-#define raw_atomic_add_negative_release arch_atomic_add_negative
+ return arch_atomic_add_negative(i, v);
#else
-static __always_inline bool
-raw_atomic_add_negative_release(int i, atomic_t *v)
-{
return raw_atomic_add_return_release(i, v) < 0;
-}
#endif
+}
-#if defined(arch_atomic_add_negative_relaxed)
-#define raw_atomic_add_negative_relaxed arch_atomic_add_negative_relaxed
-#elif defined(arch_atomic_add_negative)
-#define raw_atomic_add_negative_relaxed arch_atomic_add_negative
-#else
static __always_inline bool
raw_atomic_add_negative_relaxed(int i, atomic_t *v)
{
+#if defined(arch_atomic_add_negative_relaxed)
+ return arch_atomic_add_negative_relaxed(i, v);
+#elif defined(arch_atomic_add_negative)
+ return arch_atomic_add_negative(i, v);
+#else
return raw_atomic_add_return_relaxed(i, v) < 0;
-}
#endif
+}
-#if defined(arch_atomic_fetch_add_unless)
-#define raw_atomic_fetch_add_unless arch_atomic_fetch_add_unless
-#else
static __always_inline int
raw_atomic_fetch_add_unless(atomic_t *v, int a, int u)
{
+#if defined(arch_atomic_fetch_add_unless)
+ return arch_atomic_fetch_add_unless(v, a, u);
+#else
int c = raw_atomic_read(v);
do {
@@ -1594,35 +1542,35 @@ raw_atomic_fetch_add_unless(atomic_t *v, int a, int u)
} while (!raw_atomic_try_cmpxchg(v, &c, c + a));
return c;
-}
#endif
+}
-#if defined(arch_atomic_add_unless)
-#define raw_atomic_add_unless arch_atomic_add_unless
-#else
static __always_inline bool
raw_atomic_add_unless(atomic_t *v, int a, int u)
{
+#if defined(arch_atomic_add_unless)
+ return arch_atomic_add_unless(v, a, u);
+#else
return raw_atomic_fetch_add_unless(v, a, u) != u;
-}
#endif
+}
-#if defined(arch_atomic_inc_not_zero)
-#define raw_atomic_inc_not_zero arch_atomic_inc_not_zero
-#else
static __always_inline bool
raw_atomic_inc_not_zero(atomic_t *v)
{
+#if defined(arch_atomic_inc_not_zero)
+ return arch_atomic_inc_not_zero(v);
+#else
return raw_atomic_add_unless(v, 1, 0);
-}
#endif
+}
-#if defined(arch_atomic_inc_unless_negative)
-#define raw_atomic_inc_unless_negative arch_atomic_inc_unless_negative
-#else
static __always_inline bool
raw_atomic_inc_unless_negative(atomic_t *v)
{
+#if defined(arch_atomic_inc_unless_negative)
+ return arch_atomic_inc_unless_negative(v);
+#else
int c = raw_atomic_read(v);
do {
@@ -1631,15 +1579,15 @@ raw_atomic_inc_unless_negative(atomic_t *v)
} while (!raw_atomic_try_cmpxchg(v, &c, c + 1));
return true;
-}
#endif
+}
-#if defined(arch_atomic_dec_unless_positive)
-#define raw_atomic_dec_unless_positive arch_atomic_dec_unless_positive
-#else
static __always_inline bool
raw_atomic_dec_unless_positive(atomic_t *v)
{
+#if defined(arch_atomic_dec_unless_positive)
+ return arch_atomic_dec_unless_positive(v);
+#else
int c = raw_atomic_read(v);
do {
@@ -1648,15 +1596,15 @@ raw_atomic_dec_unless_positive(atomic_t *v)
} while (!raw_atomic_try_cmpxchg(v, &c, c - 1));
return true;
-}
#endif
+}
-#if defined(arch_atomic_dec_if_positive)
-#define raw_atomic_dec_if_positive arch_atomic_dec_if_positive
-#else
static __always_inline int
raw_atomic_dec_if_positive(atomic_t *v)
{
+#if defined(arch_atomic_dec_if_positive)
+ return arch_atomic_dec_if_positive(v);
+#else
int dec, c = raw_atomic_read(v);
do {
@@ -1666,23 +1614,27 @@ raw_atomic_dec_if_positive(atomic_t *v)
} while (!raw_atomic_try_cmpxchg(v, &c, dec));
return dec;
-}
#endif
+}
#ifdef CONFIG_GENERIC_ATOMIC64
#include <asm-generic/atomic64.h>
#endif
-#define raw_atomic64_read arch_atomic64_read
+static __always_inline s64
+raw_atomic64_read(const atomic64_t *v)
+{
+ return arch_atomic64_read(v);
+}
-#if defined(arch_atomic64_read_acquire)
-#define raw_atomic64_read_acquire arch_atomic64_read_acquire
-#elif defined(arch_atomic64_read)
-#define raw_atomic64_read_acquire arch_atomic64_read
-#else
static __always_inline s64
raw_atomic64_read_acquire(const atomic64_t *v)
{
+#if defined(arch_atomic64_read_acquire)
+ return arch_atomic64_read_acquire(v);
+#elif defined(arch_atomic64_read)
+ return arch_atomic64_read(v);
+#else
s64 ret;
if (__native_word(atomic64_t)) {
@@ -1693,1144 +1645,1088 @@ raw_atomic64_read_acquire(const atomic64_t *v)
}
return ret;
-}
#endif
+}
-#define raw_atomic64_set arch_atomic64_set
+static __always_inline void
+raw_atomic64_set(atomic64_t *v, s64 i)
+{
+ arch_atomic64_set(v, i);
+}
-#if defined(arch_atomic64_set_release)
-#define raw_atomic64_set_release arch_atomic64_set_release
-#elif defined(arch_atomic64_set)
-#define raw_atomic64_set_release arch_atomic64_set
-#else
static __always_inline void
raw_atomic64_set_release(atomic64_t *v, s64 i)
{
+#if defined(arch_atomic64_set_release)
+ arch_atomic64_set_release(v, i);
+#elif defined(arch_atomic64_set)
+ arch_atomic64_set(v, i);
+#else
if (__native_word(atomic64_t)) {
smp_store_release(&(v)->counter, i);
} else {
__atomic_release_fence();
raw_atomic64_set(v, i);
}
-}
#endif
+}
-#define raw_atomic64_add arch_atomic64_add
+static __always_inline void
+raw_atomic64_add(s64 i, atomic64_t *v)
+{
+ arch_atomic64_add(i, v);
+}
-#if defined(arch_atomic64_add_return)
-#define raw_atomic64_add_return arch_atomic64_add_return
-#elif defined(arch_atomic64_add_return_relaxed)
static __always_inline s64
raw_atomic64_add_return(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_add_return)
+ return arch_atomic64_add_return(i, v);
+#elif defined(arch_atomic64_add_return_relaxed)
s64 ret;
__atomic_pre_full_fence();
ret = arch_atomic64_add_return_relaxed(i, v);
__atomic_post_full_fence();
return ret;
-}
#else
#error "Unable to define raw_atomic64_add_return"
#endif
+}
-#if defined(arch_atomic64_add_return_acquire)
-#define raw_atomic64_add_return_acquire arch_atomic64_add_return_acquire
-#elif defined(arch_atomic64_add_return_relaxed)
static __always_inline s64
raw_atomic64_add_return_acquire(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_add_return_acquire)
+ return arch_atomic64_add_return_acquire(i, v);
+#elif defined(arch_atomic64_add_return_relaxed)
s64 ret = arch_atomic64_add_return_relaxed(i, v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic64_add_return)
-#define raw_atomic64_add_return_acquire arch_atomic64_add_return
+ return arch_atomic64_add_return(i, v);
#else
#error "Unable to define raw_atomic64_add_return_acquire"
#endif
+}
-#if defined(arch_atomic64_add_return_release)
-#define raw_atomic64_add_return_release arch_atomic64_add_return_release
-#elif defined(arch_atomic64_add_return_relaxed)
static __always_inline s64
raw_atomic64_add_return_release(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_add_return_release)
+ return arch_atomic64_add_return_release(i, v);
+#elif defined(arch_atomic64_add_return_relaxed)
__atomic_release_fence();
return arch_atomic64_add_return_relaxed(i, v);
-}
#elif defined(arch_atomic64_add_return)
-#define raw_atomic64_add_return_release arch_atomic64_add_return
+ return arch_atomic64_add_return(i, v);
#else
#error "Unable to define raw_atomic64_add_return_release"
#endif
+}
+static __always_inline s64
+raw_atomic64_add_return_relaxed(s64 i, atomic64_t *v)
+{
#if defined(arch_atomic64_add_return_relaxed)
-#define raw_atomic64_add_return_relaxed arch_atomic64_add_return_relaxed
+ return arch_atomic64_add_return_relaxed(i, v);
#elif defined(arch_atomic64_add_return)
-#define raw_atomic64_add_return_relaxed arch_atomic64_add_return
+ return arch_atomic64_add_return(i, v);
#else
#error "Unable to define raw_atomic64_add_return_relaxed"
#endif
+}
-#if defined(arch_atomic64_fetch_add)
-#define raw_atomic64_fetch_add arch_atomic64_fetch_add
-#elif defined(arch_atomic64_fetch_add_relaxed)
static __always_inline s64
raw_atomic64_fetch_add(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_add)
+ return arch_atomic64_fetch_add(i, v);
+#elif defined(arch_atomic64_fetch_add_relaxed)
s64 ret;
__atomic_pre_full_fence();
ret = arch_atomic64_fetch_add_relaxed(i, v);
__atomic_post_full_fence();
return ret;
-}
#else
#error "Unable to define raw_atomic64_fetch_add"
#endif
+}
-#if defined(arch_atomic64_fetch_add_acquire)
-#define raw_atomic64_fetch_add_acquire arch_atomic64_fetch_add_acquire
-#elif defined(arch_atomic64_fetch_add_relaxed)
static __always_inline s64
raw_atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_add_acquire)
+ return arch_atomic64_fetch_add_acquire(i, v);
+#elif defined(arch_atomic64_fetch_add_relaxed)
s64 ret = arch_atomic64_fetch_add_relaxed(i, v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic64_fetch_add)
-#define raw_atomic64_fetch_add_acquire arch_atomic64_fetch_add
+ return arch_atomic64_fetch_add(i, v);
#else
#error "Unable to define raw_atomic64_fetch_add_acquire"
#endif
+}
-#if defined(arch_atomic64_fetch_add_release)
-#define raw_atomic64_fetch_add_release arch_atomic64_fetch_add_release
-#elif defined(arch_atomic64_fetch_add_relaxed)
static __always_inline s64
raw_atomic64_fetch_add_release(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_add_release)
+ return arch_atomic64_fetch_add_release(i, v);
+#elif defined(arch_atomic64_fetch_add_relaxed)
__atomic_release_fence();
return arch_atomic64_fetch_add_relaxed(i, v);
-}
#elif defined(arch_atomic64_fetch_add)
-#define raw_atomic64_fetch_add_release arch_atomic64_fetch_add
+ return arch_atomic64_fetch_add(i, v);
#else
#error "Unable to define raw_atomic64_fetch_add_release"
#endif
+}
+static __always_inline s64
+raw_atomic64_fetch_add_relaxed(s64 i, atomic64_t *v)
+{
#if defined(arch_atomic64_fetch_add_relaxed)
-#define raw_atomic64_fetch_add_relaxed arch_atomic64_fetch_add_relaxed
+ return arch_atomic64_fetch_add_relaxed(i, v);
#elif defined(arch_atomic64_fetch_add)
-#define raw_atomic64_fetch_add_relaxed arch_atomic64_fetch_add
+ return arch_atomic64_fetch_add(i, v);
#else
#error "Unable to define raw_atomic64_fetch_add_relaxed"
#endif
+}
-#define raw_atomic64_sub arch_atomic64_sub
+static __always_inline void
+raw_atomic64_sub(s64 i, atomic64_t *v)
+{
+ arch_atomic64_sub(i, v);
+}
-#if defined(arch_atomic64_sub_return)
-#define raw_atomic64_sub_return arch_atomic64_sub_return
-#elif defined(arch_atomic64_sub_return_relaxed)
static __always_inline s64
raw_atomic64_sub_return(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_sub_return)
+ return arch_atomic64_sub_return(i, v);
+#elif defined(arch_atomic64_sub_return_relaxed)
s64 ret;
__atomic_pre_full_fence();
ret = arch_atomic64_sub_return_relaxed(i, v);
__atomic_post_full_fence();
return ret;
-}
#else
#error "Unable to define raw_atomic64_sub_return"
#endif
+}
-#if defined(arch_atomic64_sub_return_acquire)
-#define raw_atomic64_sub_return_acquire arch_atomic64_sub_return_acquire
-#elif defined(arch_atomic64_sub_return_relaxed)
static __always_inline s64
raw_atomic64_sub_return_acquire(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_sub_return_acquire)
+ return arch_atomic64_sub_return_acquire(i, v);
+#elif defined(arch_atomic64_sub_return_relaxed)
s64 ret = arch_atomic64_sub_return_relaxed(i, v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic64_sub_return)
-#define raw_atomic64_sub_return_acquire arch_atomic64_sub_return
+ return arch_atomic64_sub_return(i, v);
#else
#error "Unable to define raw_atomic64_sub_return_acquire"
#endif
+}
-#if defined(arch_atomic64_sub_return_release)
-#define raw_atomic64_sub_return_release arch_atomic64_sub_return_release
-#elif defined(arch_atomic64_sub_return_relaxed)
static __always_inline s64
raw_atomic64_sub_return_release(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_sub_return_release)
+ return arch_atomic64_sub_return_release(i, v);
+#elif defined(arch_atomic64_sub_return_relaxed)
__atomic_release_fence();
return arch_atomic64_sub_return_relaxed(i, v);
-}
#elif defined(arch_atomic64_sub_return)
-#define raw_atomic64_sub_return_release arch_atomic64_sub_return
+ return arch_atomic64_sub_return(i, v);
#else
#error "Unable to define raw_atomic64_sub_return_release"
#endif
+}
+static __always_inline s64
+raw_atomic64_sub_return_relaxed(s64 i, atomic64_t *v)
+{
#if defined(arch_atomic64_sub_return_relaxed)
-#define raw_atomic64_sub_return_relaxed arch_atomic64_sub_return_relaxed
+ return arch_atomic64_sub_return_relaxed(i, v);
#elif defined(arch_atomic64_sub_return)
-#define raw_atomic64_sub_return_relaxed arch_atomic64_sub_return
+ return arch_atomic64_sub_return(i, v);
#else
#error "Unable to define raw_atomic64_sub_return_relaxed"
#endif
+}
-#if defined(arch_atomic64_fetch_sub)
-#define raw_atomic64_fetch_sub arch_atomic64_fetch_sub
-#elif defined(arch_atomic64_fetch_sub_relaxed)
static __always_inline s64
raw_atomic64_fetch_sub(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_sub)
+ return arch_atomic64_fetch_sub(i, v);
+#elif defined(arch_atomic64_fetch_sub_relaxed)
s64 ret;
__atomic_pre_full_fence();
ret = arch_atomic64_fetch_sub_relaxed(i, v);
__atomic_post_full_fence();
return ret;
-}
#else
#error "Unable to define raw_atomic64_fetch_sub"
#endif
+}
-#if defined(arch_atomic64_fetch_sub_acquire)
-#define raw_atomic64_fetch_sub_acquire arch_atomic64_fetch_sub_acquire
-#elif defined(arch_atomic64_fetch_sub_relaxed)
static __always_inline s64
raw_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_sub_acquire)
+ return arch_atomic64_fetch_sub_acquire(i, v);
+#elif defined(arch_atomic64_fetch_sub_relaxed)
s64 ret = arch_atomic64_fetch_sub_relaxed(i, v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic64_fetch_sub)
-#define raw_atomic64_fetch_sub_acquire arch_atomic64_fetch_sub
+ return arch_atomic64_fetch_sub(i, v);
#else
#error "Unable to define raw_atomic64_fetch_sub_acquire"
#endif
+}
-#if defined(arch_atomic64_fetch_sub_release)
-#define raw_atomic64_fetch_sub_release arch_atomic64_fetch_sub_release
-#elif defined(arch_atomic64_fetch_sub_relaxed)
static __always_inline s64
raw_atomic64_fetch_sub_release(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_sub_release)
+ return arch_atomic64_fetch_sub_release(i, v);
+#elif defined(arch_atomic64_fetch_sub_relaxed)
__atomic_release_fence();
return arch_atomic64_fetch_sub_relaxed(i, v);
-}
#elif defined(arch_atomic64_fetch_sub)
-#define raw_atomic64_fetch_sub_release arch_atomic64_fetch_sub
+ return arch_atomic64_fetch_sub(i, v);
#else
#error "Unable to define raw_atomic64_fetch_sub_release"
#endif
+}
+static __always_inline s64
+raw_atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v)
+{
#if defined(arch_atomic64_fetch_sub_relaxed)
-#define raw_atomic64_fetch_sub_relaxed arch_atomic64_fetch_sub_relaxed
+ return arch_atomic64_fetch_sub_relaxed(i, v);
#elif defined(arch_atomic64_fetch_sub)
-#define raw_atomic64_fetch_sub_relaxed arch_atomic64_fetch_sub
+ return arch_atomic64_fetch_sub(i, v);
#else
#error "Unable to define raw_atomic64_fetch_sub_relaxed"
#endif
+}
-#if defined(arch_atomic64_inc)
-#define raw_atomic64_inc arch_atomic64_inc
-#else
static __always_inline void
raw_atomic64_inc(atomic64_t *v)
{
+#if defined(arch_atomic64_inc)
+ arch_atomic64_inc(v);
+#else
raw_atomic64_add(1, v);
-}
#endif
+}
-#if defined(arch_atomic64_inc_return)
-#define raw_atomic64_inc_return arch_atomic64_inc_return
-#elif defined(arch_atomic64_inc_return_relaxed)
static __always_inline s64
raw_atomic64_inc_return(atomic64_t *v)
{
+#if defined(arch_atomic64_inc_return)
+ return arch_atomic64_inc_return(v);
+#elif defined(arch_atomic64_inc_return_relaxed)
s64 ret;
__atomic_pre_full_fence();
ret = arch_atomic64_inc_return_relaxed(v);
__atomic_post_full_fence();
return ret;
-}
#else
-static __always_inline s64
-raw_atomic64_inc_return(atomic64_t *v)
-{
return raw_atomic64_add_return(1, v);
-}
#endif
+}
-#if defined(arch_atomic64_inc_return_acquire)
-#define raw_atomic64_inc_return_acquire arch_atomic64_inc_return_acquire
-#elif defined(arch_atomic64_inc_return_relaxed)
static __always_inline s64
raw_atomic64_inc_return_acquire(atomic64_t *v)
{
+#if defined(arch_atomic64_inc_return_acquire)
+ return arch_atomic64_inc_return_acquire(v);
+#elif defined(arch_atomic64_inc_return_relaxed)
s64 ret = arch_atomic64_inc_return_relaxed(v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic64_inc_return)
-#define raw_atomic64_inc_return_acquire arch_atomic64_inc_return
+ return arch_atomic64_inc_return(v);
#else
-static __always_inline s64
-raw_atomic64_inc_return_acquire(atomic64_t *v)
-{
return raw_atomic64_add_return_acquire(1, v);
-}
#endif
+}
-#if defined(arch_atomic64_inc_return_release)
-#define raw_atomic64_inc_return_release arch_atomic64_inc_return_release
-#elif defined(arch_atomic64_inc_return_relaxed)
static __always_inline s64
raw_atomic64_inc_return_release(atomic64_t *v)
{
+#if defined(arch_atomic64_inc_return_release)
+ return arch_atomic64_inc_return_release(v);
+#elif defined(arch_atomic64_inc_return_relaxed)
__atomic_release_fence();
return arch_atomic64_inc_return_relaxed(v);
-}
#elif defined(arch_atomic64_inc_return)
-#define raw_atomic64_inc_return_release arch_atomic64_inc_return
+ return arch_atomic64_inc_return(v);
#else
-static __always_inline s64
-raw_atomic64_inc_return_release(atomic64_t *v)
-{
return raw_atomic64_add_return_release(1, v);
-}
#endif
+}
-#if defined(arch_atomic64_inc_return_relaxed)
-#define raw_atomic64_inc_return_relaxed arch_atomic64_inc_return_relaxed
-#elif defined(arch_atomic64_inc_return)
-#define raw_atomic64_inc_return_relaxed arch_atomic64_inc_return
-#else
static __always_inline s64
raw_atomic64_inc_return_relaxed(atomic64_t *v)
{
+#if defined(arch_atomic64_inc_return_relaxed)
+ return arch_atomic64_inc_return_relaxed(v);
+#elif defined(arch_atomic64_inc_return)
+ return arch_atomic64_inc_return(v);
+#else
return raw_atomic64_add_return_relaxed(1, v);
-}
#endif
+}
-#if defined(arch_atomic64_fetch_inc)
-#define raw_atomic64_fetch_inc arch_atomic64_fetch_inc
-#elif defined(arch_atomic64_fetch_inc_relaxed)
static __always_inline s64
raw_atomic64_fetch_inc(atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_inc)
+ return arch_atomic64_fetch_inc(v);
+#elif defined(arch_atomic64_fetch_inc_relaxed)
s64 ret;
__atomic_pre_full_fence();
ret = arch_atomic64_fetch_inc_relaxed(v);
__atomic_post_full_fence();
return ret;
-}
#else
-static __always_inline s64
-raw_atomic64_fetch_inc(atomic64_t *v)
-{
return raw_atomic64_fetch_add(1, v);
-}
#endif
+}
-#if defined(arch_atomic64_fetch_inc_acquire)
-#define raw_atomic64_fetch_inc_acquire arch_atomic64_fetch_inc_acquire
-#elif defined(arch_atomic64_fetch_inc_relaxed)
static __always_inline s64
raw_atomic64_fetch_inc_acquire(atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_inc_acquire)
+ return arch_atomic64_fetch_inc_acquire(v);
+#elif defined(arch_atomic64_fetch_inc_relaxed)
s64 ret = arch_atomic64_fetch_inc_relaxed(v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic64_fetch_inc)
-#define raw_atomic64_fetch_inc_acquire arch_atomic64_fetch_inc
+ return arch_atomic64_fetch_inc(v);
#else
-static __always_inline s64
-raw_atomic64_fetch_inc_acquire(atomic64_t *v)
-{
return raw_atomic64_fetch_add_acquire(1, v);
-}
#endif
+}
-#if defined(arch_atomic64_fetch_inc_release)
-#define raw_atomic64_fetch_inc_release arch_atomic64_fetch_inc_release
-#elif defined(arch_atomic64_fetch_inc_relaxed)
static __always_inline s64
raw_atomic64_fetch_inc_release(atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_inc_release)
+ return arch_atomic64_fetch_inc_release(v);
+#elif defined(arch_atomic64_fetch_inc_relaxed)
__atomic_release_fence();
return arch_atomic64_fetch_inc_relaxed(v);
-}
#elif defined(arch_atomic64_fetch_inc)
-#define raw_atomic64_fetch_inc_release arch_atomic64_fetch_inc
+ return arch_atomic64_fetch_inc(v);
#else
-static __always_inline s64
-raw_atomic64_fetch_inc_release(atomic64_t *v)
-{
return raw_atomic64_fetch_add_release(1, v);
-}
#endif
+}
-#if defined(arch_atomic64_fetch_inc_relaxed)
-#define raw_atomic64_fetch_inc_relaxed arch_atomic64_fetch_inc_relaxed
-#elif defined(arch_atomic64_fetch_inc)
-#define raw_atomic64_fetch_inc_relaxed arch_atomic64_fetch_inc
-#else
static __always_inline s64
raw_atomic64_fetch_inc_relaxed(atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_inc_relaxed)
+ return arch_atomic64_fetch_inc_relaxed(v);
+#elif defined(arch_atomic64_fetch_inc)
+ return arch_atomic64_fetch_inc(v);
+#else
return raw_atomic64_fetch_add_relaxed(1, v);
-}
#endif
+}
-#if defined(arch_atomic64_dec)
-#define raw_atomic64_dec arch_atomic64_dec
-#else
static __always_inline void
raw_atomic64_dec(atomic64_t *v)
{
+#if defined(arch_atomic64_dec)
+ arch_atomic64_dec(v);
+#else
raw_atomic64_sub(1, v);
-}
#endif
+}
-#if defined(arch_atomic64_dec_return)
-#define raw_atomic64_dec_return arch_atomic64_dec_return
-#elif defined(arch_atomic64_dec_return_relaxed)
static __always_inline s64
raw_atomic64_dec_return(atomic64_t *v)
{
+#if defined(arch_atomic64_dec_return)
+ return arch_atomic64_dec_return(v);
+#elif defined(arch_atomic64_dec_return_relaxed)
s64 ret;
__atomic_pre_full_fence();
ret = arch_atomic64_dec_return_relaxed(v);
__atomic_post_full_fence();
return ret;
-}
#else
-static __always_inline s64
-raw_atomic64_dec_return(atomic64_t *v)
-{
return raw_atomic64_sub_return(1, v);
-}
#endif
+}
-#if defined(arch_atomic64_dec_return_acquire)
-#define raw_atomic64_dec_return_acquire arch_atomic64_dec_return_acquire
-#elif defined(arch_atomic64_dec_return_relaxed)
static __always_inline s64
raw_atomic64_dec_return_acquire(atomic64_t *v)
{
+#if defined(arch_atomic64_dec_return_acquire)
+ return arch_atomic64_dec_return_acquire(v);
+#elif defined(arch_atomic64_dec_return_relaxed)
s64 ret = arch_atomic64_dec_return_relaxed(v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic64_dec_return)
-#define raw_atomic64_dec_return_acquire arch_atomic64_dec_return
+ return arch_atomic64_dec_return(v);
#else
-static __always_inline s64
-raw_atomic64_dec_return_acquire(atomic64_t *v)
-{
return raw_atomic64_sub_return_acquire(1, v);
-}
#endif
+}
-#if defined(arch_atomic64_dec_return_release)
-#define raw_atomic64_dec_return_release arch_atomic64_dec_return_release
-#elif defined(arch_atomic64_dec_return_relaxed)
static __always_inline s64
raw_atomic64_dec_return_release(atomic64_t *v)
{
+#if defined(arch_atomic64_dec_return_release)
+ return arch_atomic64_dec_return_release(v);
+#elif defined(arch_atomic64_dec_return_relaxed)
__atomic_release_fence();
return arch_atomic64_dec_return_relaxed(v);
-}
#elif defined(arch_atomic64_dec_return)
-#define raw_atomic64_dec_return_release arch_atomic64_dec_return
+ return arch_atomic64_dec_return(v);
#else
-static __always_inline s64
-raw_atomic64_dec_return_release(atomic64_t *v)
-{
return raw_atomic64_sub_return_release(1, v);
-}
#endif
+}
-#if defined(arch_atomic64_dec_return_relaxed)
-#define raw_atomic64_dec_return_relaxed arch_atomic64_dec_return_relaxed
-#elif defined(arch_atomic64_dec_return)
-#define raw_atomic64_dec_return_relaxed arch_atomic64_dec_return
-#else
static __always_inline s64
raw_atomic64_dec_return_relaxed(atomic64_t *v)
{
+#if defined(arch_atomic64_dec_return_relaxed)
+ return arch_atomic64_dec_return_relaxed(v);
+#elif defined(arch_atomic64_dec_return)
+ return arch_atomic64_dec_return(v);
+#else
return raw_atomic64_sub_return_relaxed(1, v);
-}
#endif
+}
-#if defined(arch_atomic64_fetch_dec)
-#define raw_atomic64_fetch_dec arch_atomic64_fetch_dec
-#elif defined(arch_atomic64_fetch_dec_relaxed)
static __always_inline s64
raw_atomic64_fetch_dec(atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_dec)
+ return arch_atomic64_fetch_dec(v);
+#elif defined(arch_atomic64_fetch_dec_relaxed)
s64 ret;
__atomic_pre_full_fence();
ret = arch_atomic64_fetch_dec_relaxed(v);
__atomic_post_full_fence();
return ret;
-}
#else
-static __always_inline s64
-raw_atomic64_fetch_dec(atomic64_t *v)
-{
return raw_atomic64_fetch_sub(1, v);
-}
#endif
+}
-#if defined(arch_atomic64_fetch_dec_acquire)
-#define raw_atomic64_fetch_dec_acquire arch_atomic64_fetch_dec_acquire
-#elif defined(arch_atomic64_fetch_dec_relaxed)
static __always_inline s64
raw_atomic64_fetch_dec_acquire(atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_dec_acquire)
+ return arch_atomic64_fetch_dec_acquire(v);
+#elif defined(arch_atomic64_fetch_dec_relaxed)
s64 ret = arch_atomic64_fetch_dec_relaxed(v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic64_fetch_dec)
-#define raw_atomic64_fetch_dec_acquire arch_atomic64_fetch_dec
+ return arch_atomic64_fetch_dec(v);
#else
-static __always_inline s64
-raw_atomic64_fetch_dec_acquire(atomic64_t *v)
-{
return raw_atomic64_fetch_sub_acquire(1, v);
-}
#endif
+}
-#if defined(arch_atomic64_fetch_dec_release)
-#define raw_atomic64_fetch_dec_release arch_atomic64_fetch_dec_release
-#elif defined(arch_atomic64_fetch_dec_relaxed)
static __always_inline s64
raw_atomic64_fetch_dec_release(atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_dec_release)
+ return arch_atomic64_fetch_dec_release(v);
+#elif defined(arch_atomic64_fetch_dec_relaxed)
__atomic_release_fence();
return arch_atomic64_fetch_dec_relaxed(v);
-}
#elif defined(arch_atomic64_fetch_dec)
-#define raw_atomic64_fetch_dec_release arch_atomic64_fetch_dec
+ return arch_atomic64_fetch_dec(v);
#else
-static __always_inline s64
-raw_atomic64_fetch_dec_release(atomic64_t *v)
-{
return raw_atomic64_fetch_sub_release(1, v);
-}
#endif
+}
-#if defined(arch_atomic64_fetch_dec_relaxed)
-#define raw_atomic64_fetch_dec_relaxed arch_atomic64_fetch_dec_relaxed
-#elif defined(arch_atomic64_fetch_dec)
-#define raw_atomic64_fetch_dec_relaxed arch_atomic64_fetch_dec
-#else
static __always_inline s64
raw_atomic64_fetch_dec_relaxed(atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_dec_relaxed)
+ return arch_atomic64_fetch_dec_relaxed(v);
+#elif defined(arch_atomic64_fetch_dec)
+ return arch_atomic64_fetch_dec(v);
+#else
return raw_atomic64_fetch_sub_relaxed(1, v);
-}
#endif
+}
-#define raw_atomic64_and arch_atomic64_and
+static __always_inline void
+raw_atomic64_and(s64 i, atomic64_t *v)
+{
+ arch_atomic64_and(i, v);
+}
-#if defined(arch_atomic64_fetch_and)
-#define raw_atomic64_fetch_and arch_atomic64_fetch_and
-#elif defined(arch_atomic64_fetch_and_relaxed)
static __always_inline s64
raw_atomic64_fetch_and(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_and)
+ return arch_atomic64_fetch_and(i, v);
+#elif defined(arch_atomic64_fetch_and_relaxed)
s64 ret;
__atomic_pre_full_fence();
ret = arch_atomic64_fetch_and_relaxed(i, v);
__atomic_post_full_fence();
return ret;
-}
#else
#error "Unable to define raw_atomic64_fetch_and"
#endif
+}
-#if defined(arch_atomic64_fetch_and_acquire)
-#define raw_atomic64_fetch_and_acquire arch_atomic64_fetch_and_acquire
-#elif defined(arch_atomic64_fetch_and_relaxed)
static __always_inline s64
raw_atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_and_acquire)
+ return arch_atomic64_fetch_and_acquire(i, v);
+#elif defined(arch_atomic64_fetch_and_relaxed)
s64 ret = arch_atomic64_fetch_and_relaxed(i, v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic64_fetch_and)
-#define raw_atomic64_fetch_and_acquire arch_atomic64_fetch_and
+ return arch_atomic64_fetch_and(i, v);
#else
#error "Unable to define raw_atomic64_fetch_and_acquire"
#endif
+}
-#if defined(arch_atomic64_fetch_and_release)
-#define raw_atomic64_fetch_and_release arch_atomic64_fetch_and_release
-#elif defined(arch_atomic64_fetch_and_relaxed)
static __always_inline s64
raw_atomic64_fetch_and_release(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_and_release)
+ return arch_atomic64_fetch_and_release(i, v);
+#elif defined(arch_atomic64_fetch_and_relaxed)
__atomic_release_fence();
return arch_atomic64_fetch_and_relaxed(i, v);
-}
#elif defined(arch_atomic64_fetch_and)
-#define raw_atomic64_fetch_and_release arch_atomic64_fetch_and
+ return arch_atomic64_fetch_and(i, v);
#else
#error "Unable to define raw_atomic64_fetch_and_release"
#endif
+}
+static __always_inline s64
+raw_atomic64_fetch_and_relaxed(s64 i, atomic64_t *v)
+{
#if defined(arch_atomic64_fetch_and_relaxed)
-#define raw_atomic64_fetch_and_relaxed arch_atomic64_fetch_and_relaxed
+ return arch_atomic64_fetch_and_relaxed(i, v);
#elif defined(arch_atomic64_fetch_and)
-#define raw_atomic64_fetch_and_relaxed arch_atomic64_fetch_and
+ return arch_atomic64_fetch_and(i, v);
#else
#error "Unable to define raw_atomic64_fetch_and_relaxed"
#endif
+}
-#if defined(arch_atomic64_andnot)
-#define raw_atomic64_andnot arch_atomic64_andnot
-#else
static __always_inline void
raw_atomic64_andnot(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_andnot)
+ arch_atomic64_andnot(i, v);
+#else
raw_atomic64_and(~i, v);
-}
#endif
+}
-#if defined(arch_atomic64_fetch_andnot)
-#define raw_atomic64_fetch_andnot arch_atomic64_fetch_andnot
-#elif defined(arch_atomic64_fetch_andnot_relaxed)
static __always_inline s64
raw_atomic64_fetch_andnot(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_andnot)
+ return arch_atomic64_fetch_andnot(i, v);
+#elif defined(arch_atomic64_fetch_andnot_relaxed)
s64 ret;
__atomic_pre_full_fence();
ret = arch_atomic64_fetch_andnot_relaxed(i, v);
__atomic_post_full_fence();
return ret;
-}
#else
-static __always_inline s64
-raw_atomic64_fetch_andnot(s64 i, atomic64_t *v)
-{
return raw_atomic64_fetch_and(~i, v);
-}
#endif
+}
-#if defined(arch_atomic64_fetch_andnot_acquire)
-#define raw_atomic64_fetch_andnot_acquire arch_atomic64_fetch_andnot_acquire
-#elif defined(arch_atomic64_fetch_andnot_relaxed)
static __always_inline s64
raw_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_andnot_acquire)
+ return arch_atomic64_fetch_andnot_acquire(i, v);
+#elif defined(arch_atomic64_fetch_andnot_relaxed)
s64 ret = arch_atomic64_fetch_andnot_relaxed(i, v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic64_fetch_andnot)
-#define raw_atomic64_fetch_andnot_acquire arch_atomic64_fetch_andnot
+ return arch_atomic64_fetch_andnot(i, v);
#else
-static __always_inline s64
-raw_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
-{
return raw_atomic64_fetch_and_acquire(~i, v);
-}
#endif
+}
-#if defined(arch_atomic64_fetch_andnot_release)
-#define raw_atomic64_fetch_andnot_release arch_atomic64_fetch_andnot_release
-#elif defined(arch_atomic64_fetch_andnot_relaxed)
static __always_inline s64
raw_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_andnot_release)
+ return arch_atomic64_fetch_andnot_release(i, v);
+#elif defined(arch_atomic64_fetch_andnot_relaxed)
__atomic_release_fence();
return arch_atomic64_fetch_andnot_relaxed(i, v);
-}
#elif defined(arch_atomic64_fetch_andnot)
-#define raw_atomic64_fetch_andnot_release arch_atomic64_fetch_andnot
+ return arch_atomic64_fetch_andnot(i, v);
#else
-static __always_inline s64
-raw_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
-{
return raw_atomic64_fetch_and_release(~i, v);
-}
#endif
+}
-#if defined(arch_atomic64_fetch_andnot_relaxed)
-#define raw_atomic64_fetch_andnot_relaxed arch_atomic64_fetch_andnot_relaxed
-#elif defined(arch_atomic64_fetch_andnot)
-#define raw_atomic64_fetch_andnot_relaxed arch_atomic64_fetch_andnot
-#else
static __always_inline s64
raw_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_andnot_relaxed)
+ return arch_atomic64_fetch_andnot_relaxed(i, v);
+#elif defined(arch_atomic64_fetch_andnot)
+ return arch_atomic64_fetch_andnot(i, v);
+#else
return raw_atomic64_fetch_and_relaxed(~i, v);
-}
#endif
+}
-#define raw_atomic64_or arch_atomic64_or
+static __always_inline void
+raw_atomic64_or(s64 i, atomic64_t *v)
+{
+ arch_atomic64_or(i, v);
+}
-#if defined(arch_atomic64_fetch_or)
-#define raw_atomic64_fetch_or arch_atomic64_fetch_or
-#elif defined(arch_atomic64_fetch_or_relaxed)
static __always_inline s64
raw_atomic64_fetch_or(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_or)
+ return arch_atomic64_fetch_or(i, v);
+#elif defined(arch_atomic64_fetch_or_relaxed)
s64 ret;
__atomic_pre_full_fence();
ret = arch_atomic64_fetch_or_relaxed(i, v);
__atomic_post_full_fence();
return ret;
-}
#else
#error "Unable to define raw_atomic64_fetch_or"
#endif
+}
-#if defined(arch_atomic64_fetch_or_acquire)
-#define raw_atomic64_fetch_or_acquire arch_atomic64_fetch_or_acquire
-#elif defined(arch_atomic64_fetch_or_relaxed)
static __always_inline s64
raw_atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_or_acquire)
+ return arch_atomic64_fetch_or_acquire(i, v);
+#elif defined(arch_atomic64_fetch_or_relaxed)
s64 ret = arch_atomic64_fetch_or_relaxed(i, v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic64_fetch_or)
-#define raw_atomic64_fetch_or_acquire arch_atomic64_fetch_or
+ return arch_atomic64_fetch_or(i, v);
#else
#error "Unable to define raw_atomic64_fetch_or_acquire"
#endif
+}
-#if defined(arch_atomic64_fetch_or_release)
-#define raw_atomic64_fetch_or_release arch_atomic64_fetch_or_release
-#elif defined(arch_atomic64_fetch_or_relaxed)
static __always_inline s64
raw_atomic64_fetch_or_release(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_or_release)
+ return arch_atomic64_fetch_or_release(i, v);
+#elif defined(arch_atomic64_fetch_or_relaxed)
__atomic_release_fence();
return arch_atomic64_fetch_or_relaxed(i, v);
-}
#elif defined(arch_atomic64_fetch_or)
-#define raw_atomic64_fetch_or_release arch_atomic64_fetch_or
+ return arch_atomic64_fetch_or(i, v);
#else
#error "Unable to define raw_atomic64_fetch_or_release"
#endif
+}
+static __always_inline s64
+raw_atomic64_fetch_or_relaxed(s64 i, atomic64_t *v)
+{
#if defined(arch_atomic64_fetch_or_relaxed)
-#define raw_atomic64_fetch_or_relaxed arch_atomic64_fetch_or_relaxed
+ return arch_atomic64_fetch_or_relaxed(i, v);
#elif defined(arch_atomic64_fetch_or)
-#define raw_atomic64_fetch_or_relaxed arch_atomic64_fetch_or
+ return arch_atomic64_fetch_or(i, v);
#else
#error "Unable to define raw_atomic64_fetch_or_relaxed"
#endif
+}
-#define raw_atomic64_xor arch_atomic64_xor
+static __always_inline void
+raw_atomic64_xor(s64 i, atomic64_t *v)
+{
+ arch_atomic64_xor(i, v);
+}
-#if defined(arch_atomic64_fetch_xor)
-#define raw_atomic64_fetch_xor arch_atomic64_fetch_xor
-#elif defined(arch_atomic64_fetch_xor_relaxed)
static __always_inline s64
raw_atomic64_fetch_xor(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_xor)
+ return arch_atomic64_fetch_xor(i, v);
+#elif defined(arch_atomic64_fetch_xor_relaxed)
s64 ret;
__atomic_pre_full_fence();
ret = arch_atomic64_fetch_xor_relaxed(i, v);
__atomic_post_full_fence();
return ret;
-}
#else
#error "Unable to define raw_atomic64_fetch_xor"
#endif
+}
-#if defined(arch_atomic64_fetch_xor_acquire)
-#define raw_atomic64_fetch_xor_acquire arch_atomic64_fetch_xor_acquire
-#elif defined(arch_atomic64_fetch_xor_relaxed)
static __always_inline s64
raw_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_xor_acquire)
+ return arch_atomic64_fetch_xor_acquire(i, v);
+#elif defined(arch_atomic64_fetch_xor_relaxed)
s64 ret = arch_atomic64_fetch_xor_relaxed(i, v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic64_fetch_xor)
-#define raw_atomic64_fetch_xor_acquire arch_atomic64_fetch_xor
+ return arch_atomic64_fetch_xor(i, v);
#else
#error "Unable to define raw_atomic64_fetch_xor_acquire"
#endif
+}
-#if defined(arch_atomic64_fetch_xor_release)
-#define raw_atomic64_fetch_xor_release arch_atomic64_fetch_xor_release
-#elif defined(arch_atomic64_fetch_xor_relaxed)
static __always_inline s64
raw_atomic64_fetch_xor_release(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_fetch_xor_release)
+ return arch_atomic64_fetch_xor_release(i, v);
+#elif defined(arch_atomic64_fetch_xor_relaxed)
__atomic_release_fence();
return arch_atomic64_fetch_xor_relaxed(i, v);
-}
#elif defined(arch_atomic64_fetch_xor)
-#define raw_atomic64_fetch_xor_release arch_atomic64_fetch_xor
+ return arch_atomic64_fetch_xor(i, v);
#else
#error "Unable to define raw_atomic64_fetch_xor_release"
#endif
+}
+static __always_inline s64
+raw_atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v)
+{
#if defined(arch_atomic64_fetch_xor_relaxed)
-#define raw_atomic64_fetch_xor_relaxed arch_atomic64_fetch_xor_relaxed
+ return arch_atomic64_fetch_xor_relaxed(i, v);
#elif defined(arch_atomic64_fetch_xor)
-#define raw_atomic64_fetch_xor_relaxed arch_atomic64_fetch_xor
+ return arch_atomic64_fetch_xor(i, v);
#else
#error "Unable to define raw_atomic64_fetch_xor_relaxed"
#endif
+}
-#if defined(arch_atomic64_xchg)
-#define raw_atomic64_xchg arch_atomic64_xchg
-#elif defined(arch_atomic64_xchg_relaxed)
static __always_inline s64
-raw_atomic64_xchg(atomic64_t *v, s64 i)
+raw_atomic64_xchg(atomic64_t *v, s64 new)
{
+#if defined(arch_atomic64_xchg)
+ return arch_atomic64_xchg(v, new);
+#elif defined(arch_atomic64_xchg_relaxed)
s64 ret;
__atomic_pre_full_fence();
- ret = arch_atomic64_xchg_relaxed(v, i);
+ ret = arch_atomic64_xchg_relaxed(v, new);
__atomic_post_full_fence();
return ret;
-}
#else
-static __always_inline s64
-raw_atomic64_xchg(atomic64_t *v, s64 new)
-{
return raw_xchg(&v->counter, new);
-}
#endif
+}
-#if defined(arch_atomic64_xchg_acquire)
-#define raw_atomic64_xchg_acquire arch_atomic64_xchg_acquire
-#elif defined(arch_atomic64_xchg_relaxed)
static __always_inline s64
-raw_atomic64_xchg_acquire(atomic64_t *v, s64 i)
+raw_atomic64_xchg_acquire(atomic64_t *v, s64 new)
{
- s64 ret = arch_atomic64_xchg_relaxed(v, i);
+#if defined(arch_atomic64_xchg_acquire)
+ return arch_atomic64_xchg_acquire(v, new);
+#elif defined(arch_atomic64_xchg_relaxed)
+ s64 ret = arch_atomic64_xchg_relaxed(v, new);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic64_xchg)
-#define raw_atomic64_xchg_acquire arch_atomic64_xchg
+ return arch_atomic64_xchg(v, new);
#else
-static __always_inline s64
-raw_atomic64_xchg_acquire(atomic64_t *v, s64 new)
-{
return raw_xchg_acquire(&v->counter, new);
-}
#endif
+}
-#if defined(arch_atomic64_xchg_release)
-#define raw_atomic64_xchg_release arch_atomic64_xchg_release
-#elif defined(arch_atomic64_xchg_relaxed)
static __always_inline s64
-raw_atomic64_xchg_release(atomic64_t *v, s64 i)
+raw_atomic64_xchg_release(atomic64_t *v, s64 new)
{
+#if defined(arch_atomic64_xchg_release)
+ return arch_atomic64_xchg_release(v, new);
+#elif defined(arch_atomic64_xchg_relaxed)
__atomic_release_fence();
- return arch_atomic64_xchg_relaxed(v, i);
-}
+ return arch_atomic64_xchg_relaxed(v, new);
#elif defined(arch_atomic64_xchg)
-#define raw_atomic64_xchg_release arch_atomic64_xchg
+ return arch_atomic64_xchg(v, new);
#else
-static __always_inline s64
-raw_atomic64_xchg_release(atomic64_t *v, s64 new)
-{
return raw_xchg_release(&v->counter, new);
-}
#endif
+}
-#if defined(arch_atomic64_xchg_relaxed)
-#define raw_atomic64_xchg_relaxed arch_atomic64_xchg_relaxed
-#elif defined(arch_atomic64_xchg)
-#define raw_atomic64_xchg_relaxed arch_atomic64_xchg
-#else
static __always_inline s64
raw_atomic64_xchg_relaxed(atomic64_t *v, s64 new)
{
+#if defined(arch_atomic64_xchg_relaxed)
+ return arch_atomic64_xchg_relaxed(v, new);
+#elif defined(arch_atomic64_xchg)
+ return arch_atomic64_xchg(v, new);
+#else
return raw_xchg_relaxed(&v->counter, new);
-}
#endif
+}
-#if defined(arch_atomic64_cmpxchg)
-#define raw_atomic64_cmpxchg arch_atomic64_cmpxchg
-#elif defined(arch_atomic64_cmpxchg_relaxed)
static __always_inline s64
raw_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
{
+#if defined(arch_atomic64_cmpxchg)
+ return arch_atomic64_cmpxchg(v, old, new);
+#elif defined(arch_atomic64_cmpxchg_relaxed)
s64 ret;
__atomic_pre_full_fence();
ret = arch_atomic64_cmpxchg_relaxed(v, old, new);
__atomic_post_full_fence();
return ret;
-}
#else
-static __always_inline s64
-raw_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
-{
return raw_cmpxchg(&v->counter, old, new);
-}
#endif
+}
-#if defined(arch_atomic64_cmpxchg_acquire)
-#define raw_atomic64_cmpxchg_acquire arch_atomic64_cmpxchg_acquire
-#elif defined(arch_atomic64_cmpxchg_relaxed)
static __always_inline s64
raw_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
{
+#if defined(arch_atomic64_cmpxchg_acquire)
+ return arch_atomic64_cmpxchg_acquire(v, old, new);
+#elif defined(arch_atomic64_cmpxchg_relaxed)
s64 ret = arch_atomic64_cmpxchg_relaxed(v, old, new);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic64_cmpxchg)
-#define raw_atomic64_cmpxchg_acquire arch_atomic64_cmpxchg
+ return arch_atomic64_cmpxchg(v, old, new);
#else
-static __always_inline s64
-raw_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
-{
return raw_cmpxchg_acquire(&v->counter, old, new);
-}
#endif
+}
-#if defined(arch_atomic64_cmpxchg_release)
-#define raw_atomic64_cmpxchg_release arch_atomic64_cmpxchg_release
-#elif defined(arch_atomic64_cmpxchg_relaxed)
static __always_inline s64
raw_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
{
+#if defined(arch_atomic64_cmpxchg_release)
+ return arch_atomic64_cmpxchg_release(v, old, new);
+#elif defined(arch_atomic64_cmpxchg_relaxed)
__atomic_release_fence();
return arch_atomic64_cmpxchg_relaxed(v, old, new);
-}
#elif defined(arch_atomic64_cmpxchg)
-#define raw_atomic64_cmpxchg_release arch_atomic64_cmpxchg
+ return arch_atomic64_cmpxchg(v, old, new);
#else
-static __always_inline s64
-raw_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
-{
return raw_cmpxchg_release(&v->counter, old, new);
-}
#endif
+}
-#if defined(arch_atomic64_cmpxchg_relaxed)
-#define raw_atomic64_cmpxchg_relaxed arch_atomic64_cmpxchg_relaxed
-#elif defined(arch_atomic64_cmpxchg)
-#define raw_atomic64_cmpxchg_relaxed arch_atomic64_cmpxchg
-#else
static __always_inline s64
raw_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
{
+#if defined(arch_atomic64_cmpxchg_relaxed)
+ return arch_atomic64_cmpxchg_relaxed(v, old, new);
+#elif defined(arch_atomic64_cmpxchg)
+ return arch_atomic64_cmpxchg(v, old, new);
+#else
return raw_cmpxchg_relaxed(&v->counter, old, new);
-}
#endif
+}
-#if defined(arch_atomic64_try_cmpxchg)
-#define raw_atomic64_try_cmpxchg arch_atomic64_try_cmpxchg
-#elif defined(arch_atomic64_try_cmpxchg_relaxed)
static __always_inline bool
raw_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
{
+#if defined(arch_atomic64_try_cmpxchg)
+ return arch_atomic64_try_cmpxchg(v, old, new);
+#elif defined(arch_atomic64_try_cmpxchg_relaxed)
bool ret;
__atomic_pre_full_fence();
ret = arch_atomic64_try_cmpxchg_relaxed(v, old, new);
__atomic_post_full_fence();
return ret;
-}
#else
-static __always_inline bool
-raw_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
-{
s64 r, o = *old;
r = raw_atomic64_cmpxchg(v, o, new);
if (unlikely(r != o))
*old = r;
return likely(r == o);
-}
#endif
+}
-#if defined(arch_atomic64_try_cmpxchg_acquire)
-#define raw_atomic64_try_cmpxchg_acquire arch_atomic64_try_cmpxchg_acquire
-#elif defined(arch_atomic64_try_cmpxchg_relaxed)
static __always_inline bool
raw_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
{
+#if defined(arch_atomic64_try_cmpxchg_acquire)
+ return arch_atomic64_try_cmpxchg_acquire(v, old, new);
+#elif defined(arch_atomic64_try_cmpxchg_relaxed)
bool ret = arch_atomic64_try_cmpxchg_relaxed(v, old, new);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic64_try_cmpxchg)
-#define raw_atomic64_try_cmpxchg_acquire arch_atomic64_try_cmpxchg
+ return arch_atomic64_try_cmpxchg(v, old, new);
#else
-static __always_inline bool
-raw_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
-{
s64 r, o = *old;
r = raw_atomic64_cmpxchg_acquire(v, o, new);
if (unlikely(r != o))
*old = r;
return likely(r == o);
-}
#endif
+}
-#if defined(arch_atomic64_try_cmpxchg_release)
-#define raw_atomic64_try_cmpxchg_release arch_atomic64_try_cmpxchg_release
-#elif defined(arch_atomic64_try_cmpxchg_relaxed)
static __always_inline bool
raw_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
{
+#if defined(arch_atomic64_try_cmpxchg_release)
+ return arch_atomic64_try_cmpxchg_release(v, old, new);
+#elif defined(arch_atomic64_try_cmpxchg_relaxed)
__atomic_release_fence();
return arch_atomic64_try_cmpxchg_relaxed(v, old, new);
-}
#elif defined(arch_atomic64_try_cmpxchg)
-#define raw_atomic64_try_cmpxchg_release arch_atomic64_try_cmpxchg
+ return arch_atomic64_try_cmpxchg(v, old, new);
#else
-static __always_inline bool
-raw_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
-{
s64 r, o = *old;
r = raw_atomic64_cmpxchg_release(v, o, new);
if (unlikely(r != o))
*old = r;
return likely(r == o);
-}
#endif
+}
-#if defined(arch_atomic64_try_cmpxchg_relaxed)
-#define raw_atomic64_try_cmpxchg_relaxed arch_atomic64_try_cmpxchg_relaxed
-#elif defined(arch_atomic64_try_cmpxchg)
-#define raw_atomic64_try_cmpxchg_relaxed arch_atomic64_try_cmpxchg
-#else
static __always_inline bool
raw_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
{
+#if defined(arch_atomic64_try_cmpxchg_relaxed)
+ return arch_atomic64_try_cmpxchg_relaxed(v, old, new);
+#elif defined(arch_atomic64_try_cmpxchg)
+ return arch_atomic64_try_cmpxchg(v, old, new);
+#else
s64 r, o = *old;
r = raw_atomic64_cmpxchg_relaxed(v, o, new);
if (unlikely(r != o))
*old = r;
return likely(r == o);
-}
#endif
+}
-#if defined(arch_atomic64_sub_and_test)
-#define raw_atomic64_sub_and_test arch_atomic64_sub_and_test
-#else
static __always_inline bool
raw_atomic64_sub_and_test(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_sub_and_test)
+ return arch_atomic64_sub_and_test(i, v);
+#else
return raw_atomic64_sub_return(i, v) == 0;
-}
#endif
+}
-#if defined(arch_atomic64_dec_and_test)
-#define raw_atomic64_dec_and_test arch_atomic64_dec_and_test
-#else
static __always_inline bool
raw_atomic64_dec_and_test(atomic64_t *v)
{
+#if defined(arch_atomic64_dec_and_test)
+ return arch_atomic64_dec_and_test(v);
+#else
return raw_atomic64_dec_return(v) == 0;
-}
#endif
+}
-#if defined(arch_atomic64_inc_and_test)
-#define raw_atomic64_inc_and_test arch_atomic64_inc_and_test
-#else
static __always_inline bool
raw_atomic64_inc_and_test(atomic64_t *v)
{
+#if defined(arch_atomic64_inc_and_test)
+ return arch_atomic64_inc_and_test(v);
+#else
return raw_atomic64_inc_return(v) == 0;
-}
#endif
+}
-#if defined(arch_atomic64_add_negative)
-#define raw_atomic64_add_negative arch_atomic64_add_negative
-#elif defined(arch_atomic64_add_negative_relaxed)
static __always_inline bool
raw_atomic64_add_negative(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_add_negative)
+ return arch_atomic64_add_negative(i, v);
+#elif defined(arch_atomic64_add_negative_relaxed)
bool ret;
__atomic_pre_full_fence();
ret = arch_atomic64_add_negative_relaxed(i, v);
__atomic_post_full_fence();
return ret;
-}
#else
-static __always_inline bool
-raw_atomic64_add_negative(s64 i, atomic64_t *v)
-{
return raw_atomic64_add_return(i, v) < 0;
-}
#endif
+}
-#if defined(arch_atomic64_add_negative_acquire)
-#define raw_atomic64_add_negative_acquire arch_atomic64_add_negative_acquire
-#elif defined(arch_atomic64_add_negative_relaxed)
static __always_inline bool
raw_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_add_negative_acquire)
+ return arch_atomic64_add_negative_acquire(i, v);
+#elif defined(arch_atomic64_add_negative_relaxed)
bool ret = arch_atomic64_add_negative_relaxed(i, v);
__atomic_acquire_fence();
return ret;
-}
#elif defined(arch_atomic64_add_negative)
-#define raw_atomic64_add_negative_acquire arch_atomic64_add_negative
+ return arch_atomic64_add_negative(i, v);
#else
-static __always_inline bool
-raw_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
-{
return raw_atomic64_add_return_acquire(i, v) < 0;
-}
#endif
+}
-#if defined(arch_atomic64_add_negative_release)
-#define raw_atomic64_add_negative_release arch_atomic64_add_negative_release
-#elif defined(arch_atomic64_add_negative_relaxed)
static __always_inline bool
raw_atomic64_add_negative_release(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_add_negative_release)
+ return arch_atomic64_add_negative_release(i, v);
+#elif defined(arch_atomic64_add_negative_relaxed)
__atomic_release_fence();
return arch_atomic64_add_negative_relaxed(i, v);
-}
#elif defined(arch_atomic64_add_negative)
-#define raw_atomic64_add_negative_release arch_atomic64_add_negative
+ return arch_atomic64_add_negative(i, v);
#else
-static __always_inline bool
-raw_atomic64_add_negative_release(s64 i, atomic64_t *v)
-{
return raw_atomic64_add_return_release(i, v) < 0;
-}
#endif
+}
-#if defined(arch_atomic64_add_negative_relaxed)
-#define raw_atomic64_add_negative_relaxed arch_atomic64_add_negative_relaxed
-#elif defined(arch_atomic64_add_negative)
-#define raw_atomic64_add_negative_relaxed arch_atomic64_add_negative
-#else
static __always_inline bool
raw_atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
{
+#if defined(arch_atomic64_add_negative_relaxed)
+ return arch_atomic64_add_negative_relaxed(i, v);
+#elif defined(arch_atomic64_add_negative)
+ return arch_atomic64_add_negative(i, v);
+#else
return raw_atomic64_add_return_relaxed(i, v) < 0;
-}
#endif
+}
-#if defined(arch_atomic64_fetch_add_unless)
-#define raw_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless
-#else
static __always_inline s64
raw_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
{
+#if defined(arch_atomic64_fetch_add_unless)
+ return arch_atomic64_fetch_add_unless(v, a, u);
+#else
s64 c = raw_atomic64_read(v);
do {
@@ -2839,35 +2735,35 @@ raw_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
} while (!raw_atomic64_try_cmpxchg(v, &c, c + a));
return c;
-}
#endif
+}
-#if defined(arch_atomic64_add_unless)
-#define raw_atomic64_add_unless arch_atomic64_add_unless
-#else
static __always_inline bool
raw_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
{
+#if defined(arch_atomic64_add_unless)
+ return arch_atomic64_add_unless(v, a, u);
+#else
return raw_atomic64_fetch_add_unless(v, a, u) != u;
-}
#endif
+}
-#if defined(arch_atomic64_inc_not_zero)
-#define raw_atomic64_inc_not_zero arch_atomic64_inc_not_zero
-#else
static __always_inline bool
raw_atomic64_inc_not_zero(atomic64_t *v)
{
+#if defined(arch_atomic64_inc_not_zero)
+ return arch_atomic64_inc_not_zero(v);
+#else
return raw_atomic64_add_unless(v, 1, 0);
-}
#endif
+}
-#if defined(arch_atomic64_inc_unless_negative)
-#define raw_atomic64_inc_unless_negative arch_atomic64_inc_unless_negative
-#else
static __always_inline bool
raw_atomic64_inc_unless_negative(atomic64_t *v)
{
+#if defined(arch_atomic64_inc_unless_negative)
+ return arch_atomic64_inc_unless_negative(v);
+#else
s64 c = raw_atomic64_read(v);
do {
@@ -2876,15 +2772,15 @@ raw_atomic64_inc_unless_negative(atomic64_t *v)
} while (!raw_atomic64_try_cmpxchg(v, &c, c + 1));
return true;
-}
#endif
+}
-#if defined(arch_atomic64_dec_unless_positive)
-#define raw_atomic64_dec_unless_positive arch_atomic64_dec_unless_positive
-#else
static __always_inline bool
raw_atomic64_dec_unless_positive(atomic64_t *v)
{
+#if defined(arch_atomic64_dec_unless_positive)
+ return arch_atomic64_dec_unless_positive(v);
+#else
s64 c = raw_atomic64_read(v);
do {
@@ -2893,15 +2789,15 @@ raw_atomic64_dec_unless_positive(atomic64_t *v)
} while (!raw_atomic64_try_cmpxchg(v, &c, c - 1));
return true;
-}
#endif
+}
-#if defined(arch_atomic64_dec_if_positive)
-#define raw_atomic64_dec_if_positive arch_atomic64_dec_if_positive
-#else
static __always_inline s64
raw_atomic64_dec_if_positive(atomic64_t *v)
{
+#if defined(arch_atomic64_dec_if_positive)
+ return arch_atomic64_dec_if_positive(v);
+#else
s64 dec, c = raw_atomic64_read(v);
do {
@@ -2911,8 +2807,8 @@ raw_atomic64_dec_if_positive(atomic64_t *v)
} while (!raw_atomic64_try_cmpxchg(v, &c, dec));
return dec;
-}
#endif
+}
#endif /* _LINUX_ATOMIC_FALLBACK_H */
-// c2048fccede6fac923252290e2b303949d5dec83
+// 205e090382132f1fc85e48b46e722865f9c81309
diff --git a/include/linux/atomic/atomic-instrumented.h b/include/linux/atomic/atomic-instrumented.h
index 90ee2f5..5491c89 100644
--- a/include/linux/atomic/atomic-instrumented.h
+++ b/include/linux/atomic/atomic-instrumented.h
@@ -462,33 +462,33 @@ atomic_fetch_xor_relaxed(int i, atomic_t *v)
}
static __always_inline int
-atomic_xchg(atomic_t *v, int i)
+atomic_xchg(atomic_t *v, int new)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return raw_atomic_xchg(v, i);
+ return raw_atomic_xchg(v, new);
}
static __always_inline int
-atomic_xchg_acquire(atomic_t *v, int i)
+atomic_xchg_acquire(atomic_t *v, int new)
{
instrument_atomic_read_write(v, sizeof(*v));
- return raw_atomic_xchg_acquire(v, i);
+ return raw_atomic_xchg_acquire(v, new);
}
static __always_inline int
-atomic_xchg_release(atomic_t *v, int i)
+atomic_xchg_release(atomic_t *v, int new)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return raw_atomic_xchg_release(v, i);
+ return raw_atomic_xchg_release(v, new);
}
static __always_inline int
-atomic_xchg_relaxed(atomic_t *v, int i)
+atomic_xchg_relaxed(atomic_t *v, int new)
{
instrument_atomic_read_write(v, sizeof(*v));
- return raw_atomic_xchg_relaxed(v, i);
+ return raw_atomic_xchg_relaxed(v, new);
}
static __always_inline int
@@ -1103,33 +1103,33 @@ atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v)
}
static __always_inline s64
-atomic64_xchg(atomic64_t *v, s64 i)
+atomic64_xchg(atomic64_t *v, s64 new)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return raw_atomic64_xchg(v, i);
+ return raw_atomic64_xchg(v, new);
}
static __always_inline s64
-atomic64_xchg_acquire(atomic64_t *v, s64 i)
+atomic64_xchg_acquire(atomic64_t *v, s64 new)
{
instrument_atomic_read_write(v, sizeof(*v));
- return raw_atomic64_xchg_acquire(v, i);
+ return raw_atomic64_xchg_acquire(v, new);
}
static __always_inline s64
-atomic64_xchg_release(atomic64_t *v, s64 i)
+atomic64_xchg_release(atomic64_t *v, s64 new)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return raw_atomic64_xchg_release(v, i);
+ return raw_atomic64_xchg_release(v, new);
}
static __always_inline s64
-atomic64_xchg_relaxed(atomic64_t *v, s64 i)
+atomic64_xchg_relaxed(atomic64_t *v, s64 new)
{
instrument_atomic_read_write(v, sizeof(*v));
- return raw_atomic64_xchg_relaxed(v, i);
+ return raw_atomic64_xchg_relaxed(v, new);
}
static __always_inline s64
@@ -1744,33 +1744,33 @@ atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
}
static __always_inline long
-atomic_long_xchg(atomic_long_t *v, long i)
+atomic_long_xchg(atomic_long_t *v, long new)
{
kcsan_mb();
instrument_atomic_read_write(v, sizeof(*v));
- return raw_atomic_long_xchg(v, i);
+ return raw_atomic_long_xchg(v, new);
}
static __always_inline long
-atomic_long_xchg_acquire(atomic_long_t *v, long i)
+atomic_long_xchg_acquire(atomic_long_t *v, long new)
{
instrument_atomic_read_write(v, sizeof(*v));
- return raw_atomic_long_xchg_acquire(v, i);
+ return raw_atomic_long_xchg_acquire(v, new);
}
static __always_inline long
-atomic_long_xchg_release(atomic_long_t *v, long i)
+atomic_long_xchg_release(atomic_long_t *v, long new)
{
kcsan_release();
instrument_atomic_read_write(v, sizeof(*v));
- return raw_atomic_long_xchg_release(v, i);
+ return raw_atomic_long_xchg_release(v, new);
}
static __always_inline long
-atomic_long_xchg_relaxed(atomic_long_t *v, long i)
+atomic_long_xchg_relaxed(atomic_long_t *v, long new)
{
instrument_atomic_read_write(v, sizeof(*v));
- return raw_atomic_long_xchg_relaxed(v, i);
+ return raw_atomic_long_xchg_relaxed(v, new);
}
static __always_inline long
@@ -2231,4 +2231,4 @@ atomic_long_dec_if_positive(atomic_long_t *v)
#endif /* _LINUX_ATOMIC_INSTRUMENTED_H */
-// f6502977180430e61c1a7c4e5e665f04f501fb8d
+// a4c3d2b229f907654cc53cb5d40e80f7fed1ec9c
diff --git a/include/linux/atomic/atomic-long.h b/include/linux/atomic/atomic-long.h
index 63e0b40..f564f71 100644
--- a/include/linux/atomic/atomic-long.h
+++ b/include/linux/atomic/atomic-long.h
@@ -622,42 +622,42 @@ raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
}
static __always_inline long
-raw_atomic_long_xchg(atomic_long_t *v, long i)
+raw_atomic_long_xchg(atomic_long_t *v, long new)
{
#ifdef CONFIG_64BIT
- return raw_atomic64_xchg(v, i);
+ return raw_atomic64_xchg(v, new);
#else
- return raw_atomic_xchg(v, i);
+ return raw_atomic_xchg(v, new);
#endif
}
static __always_inline long
-raw_atomic_long_xchg_acquire(atomic_long_t *v, long i)
+raw_atomic_long_xchg_acquire(atomic_long_t *v, long new)
{
#ifdef CONFIG_64BIT
- return raw_atomic64_xchg_acquire(v, i);
+ return raw_atomic64_xchg_acquire(v, new);
#else
- return raw_atomic_xchg_acquire(v, i);
+ return raw_atomic_xchg_acquire(v, new);
#endif
}
static __always_inline long
-raw_atomic_long_xchg_release(atomic_long_t *v, long i)
+raw_atomic_long_xchg_release(atomic_long_t *v, long new)
{
#ifdef CONFIG_64BIT
- return raw_atomic64_xchg_release(v, i);
+ return raw_atomic64_xchg_release(v, new);
#else
- return raw_atomic_xchg_release(v, i);
+ return raw_atomic_xchg_release(v, new);
#endif
}
static __always_inline long
-raw_atomic_long_xchg_relaxed(atomic_long_t *v, long i)
+raw_atomic_long_xchg_relaxed(atomic_long_t *v, long new)
{
#ifdef CONFIG_64BIT
- return raw_atomic64_xchg_relaxed(v, i);
+ return raw_atomic64_xchg_relaxed(v, new);
#else
- return raw_atomic_xchg_relaxed(v, i);
+ return raw_atomic_xchg_relaxed(v, new);
#endif
}
@@ -872,4 +872,4 @@ raw_atomic_long_dec_if_positive(atomic_long_t *v)
}
#endif /* _LINUX_ATOMIC_LONG_H */
-// ad09f849db0db5b30c82e497eeb9056a394c5f22
+// e785d25cc3f220b7d473d36aac9da85dd7eb13a8
diff --git a/scripts/atomic/atomics.tbl b/scripts/atomic/atomics.tbl
index 85ca8d9..903946c 100644
--- a/scripts/atomic/atomics.tbl
+++ b/scripts/atomic/atomics.tbl
@@ -27,7 +27,7 @@ and vF i v
andnot vF i v
or vF i v
xor vF i v
-xchg I v i
+xchg I v i:new
cmpxchg I v i:old i:new
try_cmpxchg B v p:old i:new
sub_and_test b i v
diff --git a/scripts/atomic/fallbacks/acquire b/scripts/atomic/fallbacks/acquire
index b0f732a..4da0cab 100755
--- a/scripts/atomic/fallbacks/acquire
+++ b/scripts/atomic/fallbacks/acquire
@@ -1,9 +1,5 @@
cat <<EOF
-static __always_inline ${ret}
-raw_${atomic}_${pfx}${name}${sfx}_acquire(${params})
-{
${ret} ret = arch_${atomic}_${pfx}${name}${sfx}_relaxed(${args});
__atomic_acquire_fence();
return ret;
-}
EOF
diff --git a/scripts/atomic/fallbacks/add_negative b/scripts/atomic/fallbacks/add_negative
index 1687611..1d3d4ab 100755
--- a/scripts/atomic/fallbacks/add_negative
+++ b/scripts/atomic/fallbacks/add_negative
@@ -1,7 +1,3 @@
cat <<EOF
-static __always_inline bool
-raw_${atomic}_add_negative${order}(${int} i, ${atomic}_t *v)
-{
return raw_${atomic}_add_return${order}(i, v) < 0;
-}
EOF
diff --git a/scripts/atomic/fallbacks/add_unless b/scripts/atomic/fallbacks/add_unless
index 88593e2..95ecb2b 100755
--- a/scripts/atomic/fallbacks/add_unless
+++ b/scripts/atomic/fallbacks/add_unless
@@ -1,7 +1,3 @@
cat << EOF
-static __always_inline bool
-raw_${atomic}_add_unless(${atomic}_t *v, ${int} a, ${int} u)
-{
return raw_${atomic}_fetch_add_unless(v, a, u) != u;
-}
EOF
diff --git a/scripts/atomic/fallbacks/andnot b/scripts/atomic/fallbacks/andnot
index 5b83bb6..6676045 100755
--- a/scripts/atomic/fallbacks/andnot
+++ b/scripts/atomic/fallbacks/andnot
@@ -1,7 +1,3 @@
cat <<EOF
-static __always_inline ${ret}
-raw_${atomic}_${pfx}andnot${sfx}${order}(${int} i, ${atomic}_t *v)
-{
${retstmt}raw_${atomic}_${pfx}and${sfx}${order}(~i, v);
-}
EOF
diff --git a/scripts/atomic/fallbacks/cmpxchg b/scripts/atomic/fallbacks/cmpxchg
index 312ee67..1c8507f 100644
--- a/scripts/atomic/fallbacks/cmpxchg
+++ b/scripts/atomic/fallbacks/cmpxchg
@@ -1,7 +1,3 @@
cat <<EOF
-static __always_inline ${int}
-raw_${atomic}_cmpxchg${order}(${atomic}_t *v, ${int} old, ${int} new)
-{
return raw_cmpxchg${order}(&v->counter, old, new);
-}
EOF
diff --git a/scripts/atomic/fallbacks/dec b/scripts/atomic/fallbacks/dec
index a660ac6..60d286d 100755
--- a/scripts/atomic/fallbacks/dec
+++ b/scripts/atomic/fallbacks/dec
@@ -1,7 +1,3 @@
cat <<EOF
-static __always_inline ${ret}
-raw_${atomic}_${pfx}dec${sfx}${order}(${atomic}_t *v)
-{
${retstmt}raw_${atomic}_${pfx}sub${sfx}${order}(1, v);
-}
EOF
diff --git a/scripts/atomic/fallbacks/dec_and_test b/scripts/atomic/fallbacks/dec_and_test
index 521dfca..3a0278e 100755
--- a/scripts/atomic/fallbacks/dec_and_test
+++ b/scripts/atomic/fallbacks/dec_and_test
@@ -1,7 +1,3 @@
cat <<EOF
-static __always_inline bool
-raw_${atomic}_dec_and_test(${atomic}_t *v)
-{
return raw_${atomic}_dec_return(v) == 0;
-}
EOF
diff --git a/scripts/atomic/fallbacks/dec_if_positive b/scripts/atomic/fallbacks/dec_if_positive
index 7acb205..f65c11b 100755
--- a/scripts/atomic/fallbacks/dec_if_positive
+++ b/scripts/atomic/fallbacks/dec_if_positive
@@ -1,7 +1,4 @@
cat <<EOF
-static __always_inline ${ret}
-raw_${atomic}_dec_if_positive(${atomic}_t *v)
-{
${int} dec, c = raw_${atomic}_read(v);
do {
@@ -11,5 +8,4 @@ raw_${atomic}_dec_if_positive(${atomic}_t *v)
} while (!raw_${atomic}_try_cmpxchg(v, &c, dec));
return dec;
-}
EOF
diff --git a/scripts/atomic/fallbacks/dec_unless_positive b/scripts/atomic/fallbacks/dec_unless_positive
index bcb4f27..d025361 100755
--- a/scripts/atomic/fallbacks/dec_unless_positive
+++ b/scripts/atomic/fallbacks/dec_unless_positive
@@ -1,7 +1,4 @@
cat <<EOF
-static __always_inline bool
-raw_${atomic}_dec_unless_positive(${atomic}_t *v)
-{
${int} c = raw_${atomic}_read(v);
do {
@@ -10,5 +7,4 @@ raw_${atomic}_dec_unless_positive(${atomic}_t *v)
} while (!raw_${atomic}_try_cmpxchg(v, &c, c - 1));
return true;
-}
EOF
diff --git a/scripts/atomic/fallbacks/fence b/scripts/atomic/fallbacks/fence
index 067eea5..40d5b39 100755
--- a/scripts/atomic/fallbacks/fence
+++ b/scripts/atomic/fallbacks/fence
@@ -1,11 +1,7 @@
cat <<EOF
-static __always_inline ${ret}
-raw_${atomic}_${pfx}${name}${sfx}(${params})
-{
${ret} ret;
__atomic_pre_full_fence();
ret = arch_${atomic}_${pfx}${name}${sfx}_relaxed(${args});
__atomic_post_full_fence();
return ret;
-}
EOF
diff --git a/scripts/atomic/fallbacks/fetch_add_unless b/scripts/atomic/fallbacks/fetch_add_unless
index c18b940..8db7e9e 100755
--- a/scripts/atomic/fallbacks/fetch_add_unless
+++ b/scripts/atomic/fallbacks/fetch_add_unless
@@ -1,7 +1,4 @@
cat << EOF
-static __always_inline ${int}
-raw_${atomic}_fetch_add_unless(${atomic}_t *v, ${int} a, ${int} u)
-{
${int} c = raw_${atomic}_read(v);
do {
@@ -10,5 +7,4 @@ raw_${atomic}_fetch_add_unless(${atomic}_t *v, ${int} a, ${int} u)
} while (!raw_${atomic}_try_cmpxchg(v, &c, c + a));
return c;
-}
EOF
diff --git a/scripts/atomic/fallbacks/inc b/scripts/atomic/fallbacks/inc
index 7d838f0..56c770f 100755
--- a/scripts/atomic/fallbacks/inc
+++ b/scripts/atomic/fallbacks/inc
@@ -1,7 +1,3 @@
cat <<EOF
-static __always_inline ${ret}
-raw_${atomic}_${pfx}inc${sfx}${order}(${atomic}_t *v)
-{
${retstmt}raw_${atomic}_${pfx}add${sfx}${order}(1, v);
-}
EOF
diff --git a/scripts/atomic/fallbacks/inc_and_test b/scripts/atomic/fallbacks/inc_and_test
index de25aeb..7d16a10 100755
--- a/scripts/atomic/fallbacks/inc_and_test
+++ b/scripts/atomic/fallbacks/inc_and_test
@@ -1,7 +1,3 @@
cat <<EOF
-static __always_inline bool
-raw_${atomic}_inc_and_test(${atomic}_t *v)
-{
return raw_${atomic}_inc_return(v) == 0;
-}
EOF
diff --git a/scripts/atomic/fallbacks/inc_not_zero b/scripts/atomic/fallbacks/inc_not_zero
index e02206d..1fcef1e 100755
--- a/scripts/atomic/fallbacks/inc_not_zero
+++ b/scripts/atomic/fallbacks/inc_not_zero
@@ -1,7 +1,3 @@
cat <<EOF
-static __always_inline bool
-raw_${atomic}_inc_not_zero(${atomic}_t *v)
-{
return raw_${atomic}_add_unless(v, 1, 0);
-}
EOF
diff --git a/scripts/atomic/fallbacks/inc_unless_negative b/scripts/atomic/fallbacks/inc_unless_negative
index 7b85cc5..7b4b098 100755
--- a/scripts/atomic/fallbacks/inc_unless_negative
+++ b/scripts/atomic/fallbacks/inc_unless_negative
@@ -1,7 +1,4 @@
cat <<EOF
-static __always_inline bool
-raw_${atomic}_inc_unless_negative(${atomic}_t *v)
-{
${int} c = raw_${atomic}_read(v);
do {
@@ -10,5 +7,4 @@ raw_${atomic}_inc_unless_negative(${atomic}_t *v)
} while (!raw_${atomic}_try_cmpxchg(v, &c, c + 1));
return true;
-}
EOF
diff --git a/scripts/atomic/fallbacks/read_acquire b/scripts/atomic/fallbacks/read_acquire
index 26d15ad..e319862 100755
--- a/scripts/atomic/fallbacks/read_acquire
+++ b/scripts/atomic/fallbacks/read_acquire
@@ -1,7 +1,4 @@
cat <<EOF
-static __always_inline ${ret}
-raw_${atomic}_read_acquire(const ${atomic}_t *v)
-{
${int} ret;
if (__native_word(${atomic}_t)) {
@@ -12,5 +9,4 @@ raw_${atomic}_read_acquire(const ${atomic}_t *v)
}
return ret;
-}
EOF
diff --git a/scripts/atomic/fallbacks/release b/scripts/atomic/fallbacks/release
index cbbff70..1e6daf5 100755
--- a/scripts/atomic/fallbacks/release
+++ b/scripts/atomic/fallbacks/release
@@ -1,8 +1,4 @@
cat <<EOF
-static __always_inline ${ret}
-raw_${atomic}_${pfx}${name}${sfx}_release(${params})
-{
__atomic_release_fence();
${retstmt}arch_${atomic}_${pfx}${name}${sfx}_relaxed(${args});
-}
EOF
diff --git a/scripts/atomic/fallbacks/set_release b/scripts/atomic/fallbacks/set_release
index 104693b..16a374a 100755
--- a/scripts/atomic/fallbacks/set_release
+++ b/scripts/atomic/fallbacks/set_release
@@ -1,12 +1,8 @@
cat <<EOF
-static __always_inline void
-raw_${atomic}_set_release(${atomic}_t *v, ${int} i)
-{
if (__native_word(${atomic}_t)) {
smp_store_release(&(v)->counter, i);
} else {
__atomic_release_fence();
raw_${atomic}_set(v, i);
}
-}
EOF
diff --git a/scripts/atomic/fallbacks/sub_and_test b/scripts/atomic/fallbacks/sub_and_test
index 8975a49..d1f746f 100755
--- a/scripts/atomic/fallbacks/sub_and_test
+++ b/scripts/atomic/fallbacks/sub_and_test
@@ -1,7 +1,3 @@
cat <<EOF
-static __always_inline bool
-raw_${atomic}_sub_and_test(${int} i, ${atomic}_t *v)
-{
return raw_${atomic}_sub_return(i, v) == 0;
-}
EOF
diff --git a/scripts/atomic/fallbacks/try_cmpxchg b/scripts/atomic/fallbacks/try_cmpxchg
index 4c911a6..d4da820 100755
--- a/scripts/atomic/fallbacks/try_cmpxchg
+++ b/scripts/atomic/fallbacks/try_cmpxchg
@@ -1,11 +1,7 @@
cat <<EOF
-static __always_inline bool
-raw_${atomic}_try_cmpxchg${order}(${atomic}_t *v, ${int} *old, ${int} new)
-{
${int} r, o = *old;
r = raw_${atomic}_cmpxchg${order}(v, o, new);
if (unlikely(r != o))
*old = r;
return likely(r == o);
-}
EOF
diff --git a/scripts/atomic/fallbacks/xchg b/scripts/atomic/fallbacks/xchg
index bdd788a..e4def1e 100644
--- a/scripts/atomic/fallbacks/xchg
+++ b/scripts/atomic/fallbacks/xchg
@@ -1,7 +1,3 @@
cat <<EOF
-static __always_inline ${int}
-raw_${atomic}_xchg${order}(${atomic}_t *v, ${int} new)
-{
return raw_xchg${order}(&v->counter, new);
-}
EOF
diff --git a/scripts/atomic/gen-atomic-fallback.sh b/scripts/atomic/gen-atomic-fallback.sh
index 86aca4f..2b470d3 100755
--- a/scripts/atomic/gen-atomic-fallback.sh
+++ b/scripts/atomic/gen-atomic-fallback.sh
@@ -60,13 +60,23 @@ gen_proto_order_variant()
local name="$1"; shift
local sfx="$1"; shift
local order="$1"; shift
- local atomic="$1"
+ local atomic="$1"; shift
+ local int="$1"; shift
local atomicname="${atomic}_${pfx}${name}${sfx}${order}"
local basename="${atomic}_${pfx}${name}${sfx}"
local template="$(find_fallback_template "${pfx}" "${name}" "${sfx}" "${order}")"
+ local ret="$(gen_ret_type "${meta}" "${int}")"
+ local retstmt="$(gen_ret_stmt "${meta}")"
+ local params="$(gen_params "${int}" "${atomic}" "$@")"
+ local args="$(gen_args "$@")"
+
+ printf "static __always_inline ${ret}\n"
+ printf "raw_${atomicname}(${params})\n"
+ printf "{\n"
+
# Where there is no possible fallback, this order variant is mandatory
# and must be provided by arch code. Add a comment to the header to
# make this obvious.
@@ -75,33 +85,35 @@ gen_proto_order_variant()
# define this order variant as a C function without a preprocessor
# symbol.
if [ -z ${template} ] && [ -z "${order}" ] && ! meta_has_relaxed "${meta}"; then
- printf "#define raw_${atomicname} arch_${atomicname}\n\n"
+ printf "\t${retstmt}arch_${atomicname}(${args});\n"
+ printf "}\n\n"
return
fi
printf "#if defined(arch_${atomicname})\n"
- printf "#define raw_${atomicname} arch_${atomicname}\n"
+ printf "\t${retstmt}arch_${atomicname}(${args});\n"
# Allow FULL/ACQUIRE/RELEASE ops to be defined in terms of RELAXED ops
if [ "${order}" != "_relaxed" ] && meta_has_relaxed "${meta}"; then
printf "#elif defined(arch_${basename}_relaxed)\n"
- gen_order_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "$@"
+ gen_order_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "${atomic}" "${int}" "$@"
fi
# Allow ACQUIRE/RELEASE/RELAXED ops to be defined in terms of FULL ops
if [ ! -z "${order}" ]; then
printf "#elif defined(arch_${basename})\n"
- printf "#define raw_${atomicname} arch_${basename}\n"
+ printf "\t${retstmt}arch_${basename}(${args});\n"
fi
printf "#else\n"
if [ ! -z "${template}" ]; then
- gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "$@"
+ gen_proto_fallback "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "${atomic}" "${int}" "$@"
else
printf "#error \"Unable to define raw_${atomicname}\"\n"
fi
- printf "#endif\n\n"
+ printf "#endif\n"
+ printf "}\n\n"
}
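As an illustration (not taken from the patch itself), the effect of moving the
braces into the generator is easiest to see in the "mandatory" case above,
where there is no fallback template and no ordering variants: instead of
emitting "#define raw_<op> arch_<op>", the script now emits a real C function
that forwards to the arch_ implementation, consistent with the generated
header quoted later in this thread:

| static __always_inline int
| raw_atomic_read(const atomic_t *v)
| {
| 	return arch_atomic_read(v);
| }

Emitting a function rather than a macro alias keeps the raw_*() namespace
uniform whether or not an op needs a fallback.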
The following commit has been merged into the locking/core branch of tip:
Commit-ID: 358c449afa662b1120d43738d2b0400ed2cc97df
Gitweb: https://git.kernel.org/tip/358c449afa662b1120d43738d2b0400ed2cc97df
Author: Mark Rutland <[email protected]>
AuthorDate: Mon, 05 Jun 2023 08:01:08 +01:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Mon, 05 Jun 2023 09:57:17 +02:00
locking/atomic: sparc: add preprocessor symbols
Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of
RELAXED atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in terms
of FULL ordered atomics. Other atomics are optional, and don't exist in
some configurations (e.g. not all architectures implement the 128-bit
cmpxchg ops).
Subsequent patches will require that architectures define a preprocessor
symbol for any atomic (or ordering variant) which is optional. This will
make the fallback ifdeffery more robust, and simplify future changes.
Add the required definitions to arch/sparc.
Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
arch/sparc/include/asm/atomic_32.h | 16 ++++++++++++++--
arch/sparc/include/asm/atomic_64.h | 18 ++++++++++++++++++
2 files changed, 32 insertions(+), 2 deletions(-)
diff --git a/arch/sparc/include/asm/atomic_32.h b/arch/sparc/include/asm/atomic_32.h
index 1c9e6c7..60ce2fe 100644
--- a/arch/sparc/include/asm/atomic_32.h
+++ b/arch/sparc/include/asm/atomic_32.h
@@ -19,19 +19,31 @@
#include <asm-generic/atomic64.h>
int arch_atomic_add_return(int, atomic_t *);
+#define arch_atomic_add_return arch_atomic_add_return
+
int arch_atomic_fetch_add(int, atomic_t *);
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+
int arch_atomic_fetch_and(int, atomic_t *);
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+
int arch_atomic_fetch_or(int, atomic_t *);
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+
int arch_atomic_fetch_xor(int, atomic_t *);
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
int arch_atomic_cmpxchg(atomic_t *, int, int);
#define arch_atomic_cmpxchg arch_atomic_cmpxchg
+
int arch_atomic_xchg(atomic_t *, int);
#define arch_atomic_xchg arch_atomic_xchg
-int arch_atomic_fetch_add_unless(atomic_t *, int, int);
-void arch_atomic_set(atomic_t *, int);
+int arch_atomic_fetch_add_unless(atomic_t *, int, int);
#define arch_atomic_fetch_add_unless arch_atomic_fetch_add_unless
+void arch_atomic_set(atomic_t *, int);
+
#define arch_atomic_set_release(v, i) arch_atomic_set((v), (i))
#define arch_atomic_read(v) READ_ONCE((v)->counter)
diff --git a/arch/sparc/include/asm/atomic_64.h b/arch/sparc/include/asm/atomic_64.h
index df6a8b0..a5e9c37 100644
--- a/arch/sparc/include/asm/atomic_64.h
+++ b/arch/sparc/include/asm/atomic_64.h
@@ -37,6 +37,16 @@ s64 arch_atomic64_fetch_##op(s64, atomic64_t *);
ATOMIC_OPS(add)
ATOMIC_OPS(sub)
+#define arch_atomic_add_return arch_atomic_add_return
+#define arch_atomic_sub_return arch_atomic_sub_return
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+
+#define arch_atomic64_add_return arch_atomic64_add_return
+#define arch_atomic64_sub_return arch_atomic64_sub_return
+#define arch_atomic64_fetch_add arch_atomic64_fetch_add
+#define arch_atomic64_fetch_sub arch_atomic64_fetch_sub
+
#undef ATOMIC_OPS
#define ATOMIC_OPS(op) ATOMIC_OP(op) ATOMIC_FETCH_OP(op)
@@ -44,6 +54,14 @@ ATOMIC_OPS(and)
ATOMIC_OPS(or)
ATOMIC_OPS(xor)
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
+#define arch_atomic64_fetch_and arch_atomic64_fetch_and
+#define arch_atomic64_fetch_or arch_atomic64_fetch_or
+#define arch_atomic64_fetch_xor arch_atomic64_fetch_xor
+
#undef ATOMIC_OPS
#undef ATOMIC_FETCH_OP
#undef ATOMIC_OP_RETURN
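The "#define arch_atomic_<op> arch_atomic_<op>" pattern exists solely so that
the op's presence is visible to the preprocessor; the object-like macro
expands to the function's own name, so it has no effect on how calls compile.
A minimal stand-alone sketch of the idiom (the names here are hypothetical,
not real kernel symbols):

| /* arch code: provide the op and advertise it to the preprocessor */
| int arch_example_op(int i);
| #define arch_example_op arch_example_op
|
| /* generic code: pick the arch op or a fallback at preprocessing time */
| #if defined(arch_example_op)
| #define example_op(i)	arch_example_op(i)
| #else
| #define example_op(i)	example_op_fallback(i)
| #endif

This is what allows the generated fallback ifdeffery to use #if defined(...)
checks rather than relying on naming conventions.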
The following commit has been merged into the locking/core branch of tip:
Commit-ID: 07bf3dcbe0e199422598f12918021c516161fd12
Gitweb: https://git.kernel.org/tip/07bf3dcbe0e199422598f12918021c516161fd12
Author: Mark Rutland <[email protected]>
AuthorDate: Mon, 05 Jun 2023 08:01:06 +01:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Mon, 05 Jun 2023 09:57:16 +02:00
locking/atomic: parisc: add preprocessor symbols
Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of
RELAXED atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in terms
of FULL ordered atomics. Other atomics are optional, and don't exist in
some configurations (e.g. not all architectures implement the 128-bit
cmpxchg ops).
Subsequent patches will require that architectures define a preprocessor
symbol for any atomic (or ordering variant) which is optional. This will
make the fallback ifdeffery more robust, and simplify future changes.
Add the required definitions to arch/parisc.
Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
arch/parisc/include/asm/atomic.h | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)
diff --git a/arch/parisc/include/asm/atomic.h b/arch/parisc/include/asm/atomic.h
index 0b3f64c..d4f0238 100644
--- a/arch/parisc/include/asm/atomic.h
+++ b/arch/parisc/include/asm/atomic.h
@@ -118,6 +118,11 @@ static __inline__ int arch_atomic_fetch_##op(int i, atomic_t *v) \
ATOMIC_OPS(add, +=)
ATOMIC_OPS(sub, -=)
+#define arch_atomic_add_return arch_atomic_add_return
+#define arch_atomic_sub_return arch_atomic_sub_return
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+
#undef ATOMIC_OPS
#define ATOMIC_OPS(op, c_op) \
ATOMIC_OP(op, c_op) \
@@ -127,6 +132,10 @@ ATOMIC_OPS(and, &=)
ATOMIC_OPS(or, |=)
ATOMIC_OPS(xor, ^=)
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+
#undef ATOMIC_OPS
#undef ATOMIC_FETCH_OP
#undef ATOMIC_OP_RETURN
@@ -181,6 +190,11 @@ static __inline__ s64 arch_atomic64_fetch_##op(s64 i, atomic64_t *v) \
ATOMIC64_OPS(add, +=)
ATOMIC64_OPS(sub, -=)
+#define arch_atomic64_add_return arch_atomic64_add_return
+#define arch_atomic64_sub_return arch_atomic64_sub_return
+#define arch_atomic64_fetch_add arch_atomic64_fetch_add
+#define arch_atomic64_fetch_sub arch_atomic64_fetch_sub
+
#undef ATOMIC64_OPS
#define ATOMIC64_OPS(op, c_op) \
ATOMIC64_OP(op, c_op) \
@@ -190,6 +204,10 @@ ATOMIC64_OPS(and, &=)
ATOMIC64_OPS(or, |=)
ATOMIC64_OPS(xor, ^=)
+#define arch_atomic64_fetch_and arch_atomic64_fetch_and
+#define arch_atomic64_fetch_or arch_atomic64_fetch_or
+#define arch_atomic64_fetch_xor arch_atomic64_fetch_xor
+
#undef ATOMIC64_OPS
#undef ATOMIC64_FETCH_OP
#undef ATOMIC64_OP_RETURN
The following commit has been merged into the locking/core branch of tip:
Commit-ID: dda5f312bb09e56e7a1c3e3851f2000eb2e9c879
Gitweb: https://git.kernel.org/tip/dda5f312bb09e56e7a1c3e3851f2000eb2e9c879
Author: Mark Rutland <[email protected]>
AuthorDate: Mon, 05 Jun 2023 08:00:58 +01:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Mon, 05 Jun 2023 09:57:13 +02:00
locking/atomic: arm: fix sync ops
The sync_*() ops on arch/arm are defined in terms of the regular bitops
with no special handling. This is not correct, as UP kernels elide
barriers for the fully-ordered operations, and so the required ordering
is lost when such UP kernels are run under a hypervisor on an SMP

system.
Fix this by defining sync ops with the required barriers.
Note: On 32-bit arm, the sync_*() ops are currently only used by Xen,
which requires ARMv7, but the semantics can be implemented for ARMv6+.
Fixes: e54d2f61528165bb ("xen/arm: sync_bitops")
Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
arch/arm/include/asm/assembler.h | 17 +++++++++++++++++-
arch/arm/include/asm/sync_bitops.h | 29 +++++++++++++++++++++++++----
arch/arm/lib/bitops.h | 14 +++++++++++---
arch/arm/lib/testchangebit.S | 4 ++++-
arch/arm/lib/testclearbit.S | 4 ++++-
arch/arm/lib/testsetbit.S | 4 ++++-
6 files changed, 65 insertions(+), 7 deletions(-)
diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h
index 505a306..aebe2c8 100644
--- a/arch/arm/include/asm/assembler.h
+++ b/arch/arm/include/asm/assembler.h
@@ -394,6 +394,23 @@ ALT_UP_B(.L0_\@)
#endif
.endm
+/*
+ * Raw SMP data memory barrier
+ */
+ .macro __smp_dmb mode
+#if __LINUX_ARM_ARCH__ >= 7
+ .ifeqs "\mode","arm"
+ dmb ish
+ .else
+ W(dmb) ish
+ .endif
+#elif __LINUX_ARM_ARCH__ == 6
+ mcr p15, 0, r0, c7, c10, 5 @ dmb
+#else
+ .error "Incompatible SMP platform"
+#endif
+ .endm
+
#if defined(CONFIG_CPU_V7M)
/*
* setmode is used to assert to be in svc mode during boot. For v7-M
diff --git a/arch/arm/include/asm/sync_bitops.h b/arch/arm/include/asm/sync_bitops.h
index 6f5d627..f46b3c5 100644
--- a/arch/arm/include/asm/sync_bitops.h
+++ b/arch/arm/include/asm/sync_bitops.h
@@ -14,14 +14,35 @@
* ops which are SMP safe even on a UP kernel.
*/
+/*
+ * Unordered
+ */
+
#define sync_set_bit(nr, p) _set_bit(nr, p)
#define sync_clear_bit(nr, p) _clear_bit(nr, p)
#define sync_change_bit(nr, p) _change_bit(nr, p)
-#define sync_test_and_set_bit(nr, p) _test_and_set_bit(nr, p)
-#define sync_test_and_clear_bit(nr, p) _test_and_clear_bit(nr, p)
-#define sync_test_and_change_bit(nr, p) _test_and_change_bit(nr, p)
#define sync_test_bit(nr, addr) test_bit(nr, addr)
-#define arch_sync_cmpxchg arch_cmpxchg
+/*
+ * Fully ordered
+ */
+
+int _sync_test_and_set_bit(int nr, volatile unsigned long * p);
+#define sync_test_and_set_bit(nr, p) _sync_test_and_set_bit(nr, p)
+
+int _sync_test_and_clear_bit(int nr, volatile unsigned long * p);
+#define sync_test_and_clear_bit(nr, p) _sync_test_and_clear_bit(nr, p)
+
+int _sync_test_and_change_bit(int nr, volatile unsigned long * p);
+#define sync_test_and_change_bit(nr, p) _sync_test_and_change_bit(nr, p)
+
+#define arch_sync_cmpxchg(ptr, old, new) \
+({ \
+ __typeof__(*(ptr)) __ret; \
+ __smp_mb__before_atomic(); \
+ __ret = arch_cmpxchg_relaxed((ptr), (old), (new)); \
+ __smp_mb__after_atomic(); \
+ __ret; \
+})
#endif
diff --git a/arch/arm/lib/bitops.h b/arch/arm/lib/bitops.h
index 95bd359..f069d1b 100644
--- a/arch/arm/lib/bitops.h
+++ b/arch/arm/lib/bitops.h
@@ -28,7 +28,7 @@ UNWIND( .fnend )
ENDPROC(\name )
.endm
- .macro testop, name, instr, store
+ .macro __testop, name, instr, store, barrier
ENTRY( \name )
UNWIND( .fnstart )
ands ip, r1, #3
@@ -38,7 +38,7 @@ UNWIND( .fnstart )
mov r0, r0, lsr #5
add r1, r1, r0, lsl #2 @ Get word offset
mov r3, r2, lsl r3 @ create mask
- smp_dmb
+ \barrier
#if __LINUX_ARM_ARCH__ >= 7 && defined(CONFIG_SMP)
.arch_extension mp
ALT_SMP(W(pldw) [r1])
@@ -50,13 +50,21 @@ UNWIND( .fnstart )
strex ip, r2, [r1]
cmp ip, #0
bne 1b
- smp_dmb
+ \barrier
cmp r0, #0
movne r0, #1
2: bx lr
UNWIND( .fnend )
ENDPROC(\name )
.endm
+
+ .macro testop, name, instr, store
+ __testop \name, \instr, \store, smp_dmb
+ .endm
+
+ .macro sync_testop, name, instr, store
+ __testop \name, \instr, \store, __smp_dmb
+ .endm
#else
.macro bitop, name, instr
ENTRY( \name )
diff --git a/arch/arm/lib/testchangebit.S b/arch/arm/lib/testchangebit.S
index 4ebecc6..f13fe9b 100644
--- a/arch/arm/lib/testchangebit.S
+++ b/arch/arm/lib/testchangebit.S
@@ -10,3 +10,7 @@
.text
testop _test_and_change_bit, eor, str
+
+#if __LINUX_ARM_ARCH__ >= 6
+sync_testop _sync_test_and_change_bit, eor, str
+#endif
diff --git a/arch/arm/lib/testclearbit.S b/arch/arm/lib/testclearbit.S
index 009afa0..4d2c5ca 100644
--- a/arch/arm/lib/testclearbit.S
+++ b/arch/arm/lib/testclearbit.S
@@ -10,3 +10,7 @@
.text
testop _test_and_clear_bit, bicne, strne
+
+#if __LINUX_ARM_ARCH__ >= 6
+sync_testop _sync_test_and_clear_bit, bicne, strne
+#endif
diff --git a/arch/arm/lib/testsetbit.S b/arch/arm/lib/testsetbit.S
index f3192e5..649dbab 100644
--- a/arch/arm/lib/testsetbit.S
+++ b/arch/arm/lib/testsetbit.S
@@ -10,3 +10,7 @@
.text
testop _test_and_set_bit, orreq, streq
+
+#if __LINUX_ARM_ARCH__ >= 6
+sync_testop _sync_test_and_set_bit, orreq, streq
+#endif
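For context, the sync_*() bitops are meant for memory shared with another
agent (on 32-bit arm, Xen), and that agent observes accesses concurrently
even when the local kernel is a UP build whose smp_*() barriers degrade to
compiler-only barriers. A hypothetical usage sketch (struct item, shared_slot
and notify_peer() are placeholders, not real kernel interfaces):

| struct item;
| static void notify_peer(void);		/* placeholder notification hook */
|
| /* both point into memory shared with the peer agent (set up elsewhere) */
| static unsigned long *pending;
| static struct item **shared_slot;
|
| static void post_item(struct item *it)
| {
| 	/*
| 	 * The peer must observe the payload before the pending bit, so
| 	 * real barriers are needed even on a CONFIG_SMP=n build, where
| 	 * the regular test_and_set_bit() would elide them.
| 	 */
| 	WRITE_ONCE(*shared_slot, it);
| 	if (!sync_test_and_set_bit(0, pending))
| 		notify_peer();
| }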
The following commit has been merged into the locking/core branch of tip:
Commit-ID: f739287ef57bc01155e556033462e9a6ff020c97
Gitweb: https://git.kernel.org/tip/f739287ef57bc01155e556033462e9a6ff020c97
Author: Mark Rutland <[email protected]>
AuthorDate: Mon, 05 Jun 2023 08:01:02 +01:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Mon, 05 Jun 2023 09:57:14 +02:00
locking/atomic: arc: add preprocessor symbols
Some atomics can be implemented in several different ways, e.g.
FULL/ACQUIRE/RELEASE ordered atomics can be implemented in terms of
RELAXED atomics, and ACQUIRE/RELEASE/RELAXED can be implemented in terms
of FULL ordered atomics. Other atomics are optional, and don't exist in
some configurations (e.g. not all architectures implement the 128-bit
cmpxchg ops).
Subsequent patches will require that architectures define a preprocessor
symbol for any atomic (or ordering variant) which is optional. This will
make the fallback ifdeffery more robust, and simplify future changes.
Add the required definitions to arch/arc.
Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
arch/arc/include/asm/atomic-spinlock.h | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/arch/arc/include/asm/atomic-spinlock.h b/arch/arc/include/asm/atomic-spinlock.h
index 2c83034..89d12a6 100644
--- a/arch/arc/include/asm/atomic-spinlock.h
+++ b/arch/arc/include/asm/atomic-spinlock.h
@@ -81,6 +81,11 @@ static inline int arch_atomic_fetch_##op(int i, atomic_t *v) \
ATOMIC_OPS(add, +=, add)
ATOMIC_OPS(sub, -=, sub)
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+#define arch_atomic_add_return arch_atomic_add_return
+#define arch_atomic_sub_return arch_atomic_sub_return
+
#undef ATOMIC_OPS
#define ATOMIC_OPS(op, c_op, asm_op) \
ATOMIC_OP(op, c_op, asm_op) \
@@ -92,7 +97,11 @@ ATOMIC_OPS(or, |=, or)
ATOMIC_OPS(xor, ^=, xor)
#define arch_atomic_andnot arch_atomic_andnot
+
+#define arch_atomic_fetch_and arch_atomic_fetch_and
#define arch_atomic_fetch_andnot arch_atomic_fetch_andnot
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
#undef ATOMIC_OPS
#undef ATOMIC_FETCH_OP
On Mon, Jun 05, 2023 at 08:01:22AM +0100, Mark Rutland wrote:
> Currently the atomics are documented in Documentation/atomic_t.txt, and
> have no kerneldoc comments. There are a sufficient number of gotchas
> (e.g. semantics, noinstr-safety) that it would be nice to have comments
> to call these out, and it would be nice to have kerneldoc comments such
> that these can be collated.
>
> While it's possible to derive the semantics from the code, this can be
> painful given the amount of indirection we currently have (e.g. fallback
> paths), and it's easy to be misled by naming, e.g.
>
> * The unconditional void-returning ops *only* have relaxed variants
> without a _relaxed suffix, and can easily be mistaken for being fully
> ordered.
>
> It would be nice to give these a _relaxed() suffix, but this would
> result in significant churn throughout the kernel.
>
> * Our naming of conditional and unconditional+test ops is rather
> inconsistent, and it can be difficult to derive the name of an
> operation, or to identify whether an op is conditional or
> unconditional+test.
>
> Some ops are clearly conditional:
> - dec_if_positive
> - add_unless
> - dec_unless_positive
> - inc_unless_negative
>
> Some ops are clearly unconditional+test:
> - sub_and_test
> - dec_and_test
> - inc_and_test
>
> However, what exactly those ops test is not obvious. A _test_zero suffix
> might be clearer.
>
> Others could be read ambiguously:
> - inc_not_zero // conditional
> - add_negative // unconditional+test
>
> It would probably be worth renaming these, e.g. to inc_unless_zero and
> add_test_negative.
>
> As a step towards making this more consistent and easier to understand,
> this patch adds kerneldoc comments for all generated *atomic*_*()
> functions. These are generated from templates, with some common text
> shared, making it easy to extend these in future if necessary.
>
> I've tried to make these as consistent and clear as possible, and I've
> deliberately ensured:
>
> * All ops have their ordering explicitly mentioned in the short and long
> description.
>
> * All test ops have "test" in their short description.
>
> * All ops are described as an expression using their usual C operator.
> For example:
>
> andnot: "Atomically updates @v to (@v & ~@i)"
> inc: "Atomically updates @v to (@v + 1)"
>
> Which may be clearer to non-native English speakers, and allows all
> the operations to be described in the same style.
>
> * All conditional ops have their condition described as an expression
> using the usual C operators. For example:
>
> add_unless: "If (@v != @u), atomically updates @v to (@v + @i)"
> cmpxchg: "If (@v == @old), atomically updates @v to @new"
>
> Which may be clearer to non-native English speakers, and allows all
> the operations to be described in the same style.
>
> * All bitwise ops (and,andnot,or,xor) explicitly mention that they are
> bitwise in their short description, so that they are not mistaken for
> performing their logical equivalents.
>
> * The noinstr safety of each op is explicitly described, with a
> description of whether or not to use the raw_ form of the op.
>
> There should be no functional change as a result of this patch.
>
> Reported-by: Paul E. McKenney <[email protected]>
> Signed-off-by: Mark Rutland <[email protected]>
> Reviewed-by: Kees Cook <[email protected]>
> Cc: Boqun Feng <[email protected]>
> Cc: Jonathan Corbet <[email protected]>
> Cc: Peter Zijlstra <[email protected]>
> Cc: Will Deacon <[email protected]>
With the dec_if_positive fix:
Reviewed-by: Paul E. McKenney <[email protected]>
Good stuff!!!
> ---
> include/linux/atomic/atomic-arch-fallback.h | 1848 +++++++++++-
> include/linux/atomic/atomic-instrumented.h | 2771 +++++++++++++++++-
> include/linux/atomic/atomic-long.h | 925 +++++-
> scripts/atomic/atomic-tbl.sh | 112 +-
> scripts/atomic/gen-atomic-fallback.sh | 2 +
> scripts/atomic/gen-atomic-instrumented.sh | 2 +
> scripts/atomic/gen-atomic-long.sh | 2 +
> scripts/atomic/kerneldoc/add | 13 +
> scripts/atomic/kerneldoc/add_negative | 13 +
> scripts/atomic/kerneldoc/add_unless | 18 +
> scripts/atomic/kerneldoc/and | 13 +
> scripts/atomic/kerneldoc/andnot | 13 +
> scripts/atomic/kerneldoc/cmpxchg | 14 +
> scripts/atomic/kerneldoc/dec | 12 +
> scripts/atomic/kerneldoc/dec_and_test | 12 +
> scripts/atomic/kerneldoc/dec_if_positive | 12 +
> scripts/atomic/kerneldoc/dec_unless_positive | 12 +
> scripts/atomic/kerneldoc/inc | 12 +
> scripts/atomic/kerneldoc/inc_and_test | 12 +
> scripts/atomic/kerneldoc/inc_not_zero | 12 +
> scripts/atomic/kerneldoc/inc_unless_negative | 12 +
> scripts/atomic/kerneldoc/or | 13 +
> scripts/atomic/kerneldoc/read | 12 +
> scripts/atomic/kerneldoc/set | 13 +
> scripts/atomic/kerneldoc/sub | 13 +
> scripts/atomic/kerneldoc/sub_and_test | 13 +
> scripts/atomic/kerneldoc/try_cmpxchg | 15 +
> scripts/atomic/kerneldoc/xchg | 13 +
> scripts/atomic/kerneldoc/xor | 13 +
> 29 files changed, 5940 insertions(+), 7 deletions(-)
> create mode 100644 scripts/atomic/kerneldoc/add
> create mode 100644 scripts/atomic/kerneldoc/add_negative
> create mode 100644 scripts/atomic/kerneldoc/add_unless
> create mode 100644 scripts/atomic/kerneldoc/and
> create mode 100644 scripts/atomic/kerneldoc/andnot
> create mode 100644 scripts/atomic/kerneldoc/cmpxchg
> create mode 100644 scripts/atomic/kerneldoc/dec
> create mode 100644 scripts/atomic/kerneldoc/dec_and_test
> create mode 100644 scripts/atomic/kerneldoc/dec_if_positive
> create mode 100644 scripts/atomic/kerneldoc/dec_unless_positive
> create mode 100644 scripts/atomic/kerneldoc/inc
> create mode 100644 scripts/atomic/kerneldoc/inc_and_test
> create mode 100644 scripts/atomic/kerneldoc/inc_not_zero
> create mode 100644 scripts/atomic/kerneldoc/inc_unless_negative
> create mode 100644 scripts/atomic/kerneldoc/or
> create mode 100644 scripts/atomic/kerneldoc/read
> create mode 100644 scripts/atomic/kerneldoc/set
> create mode 100644 scripts/atomic/kerneldoc/sub
> create mode 100644 scripts/atomic/kerneldoc/sub_and_test
> create mode 100644 scripts/atomic/kerneldoc/try_cmpxchg
> create mode 100644 scripts/atomic/kerneldoc/xchg
> create mode 100644 scripts/atomic/kerneldoc/xor
>
> diff --git a/include/linux/atomic/atomic-arch-fallback.h b/include/linux/atomic/atomic-arch-fallback.h
> index 470c2890ab8d6..8cded57dd7a6f 100644
> --- a/include/linux/atomic/atomic-arch-fallback.h
> +++ b/include/linux/atomic/atomic-arch-fallback.h
> @@ -428,12 +428,32 @@ extern void raw_cmpxchg128_relaxed_not_implemented(void);
>
> #define raw_sync_cmpxchg arch_sync_cmpxchg
>
> +/**
> + * raw_atomic_read() - atomic load with relaxed ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically loads the value of @v with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_read() elsewhere.
> + *
> + * Return: The value loaded from @v.
> + */
> static __always_inline int
> raw_atomic_read(const atomic_t *v)
> {
> return arch_atomic_read(v);
> }
>
> +/**
> + * raw_atomic_read_acquire() - atomic load with acquire ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically loads the value of @v with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_read_acquire() elsewhere.
> + *
> + * Return: The value loaded from @v.
> + */
> static __always_inline int
> raw_atomic_read_acquire(const atomic_t *v)
> {
> @@ -455,12 +475,34 @@ raw_atomic_read_acquire(const atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_set() - atomic set with relaxed ordering
> + * @v: pointer to atomic_t
> + * @i: int value to assign
> + *
> + * Atomically sets @v to @i with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_set() elsewhere.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> raw_atomic_set(atomic_t *v, int i)
> {
> arch_atomic_set(v, i);
> }
>
> +/**
> + * raw_atomic_set_release() - atomic set with release ordering
> + * @v: pointer to atomic_t
> + * @i: int value to assign
> + *
> + * Atomically sets @v to @i with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_set_release() elsewhere.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> raw_atomic_set_release(atomic_t *v, int i)
> {
> @@ -478,12 +520,34 @@ raw_atomic_set_release(atomic_t *v, int i)
> #endif
> }
>
> +/**
> + * raw_atomic_add() - atomic add with relaxed ordering
> + * @i: int value to add
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + @i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_add() elsewhere.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> raw_atomic_add(int i, atomic_t *v)
> {
> arch_atomic_add(i, v);
> }
>
> +/**
> + * raw_atomic_add_return() - atomic add with full ordering
> + * @i: int value to add
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + @i) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_add_return() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline int
> raw_atomic_add_return(int i, atomic_t *v)
> {
> @@ -500,6 +564,17 @@ raw_atomic_add_return(int i, atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_add_return_acquire() - atomic add with acquire ordering
> + * @i: int value to add
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + @i) with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_add_return_acquire() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline int
> raw_atomic_add_return_acquire(int i, atomic_t *v)
> {
> @@ -516,6 +591,17 @@ raw_atomic_add_return_acquire(int i, atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_add_return_release() - atomic add with release ordering
> + * @i: int value to add
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + @i) with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_add_return_release() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline int
> raw_atomic_add_return_release(int i, atomic_t *v)
> {
> @@ -531,6 +617,17 @@ raw_atomic_add_return_release(int i, atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_add_return_relaxed() - atomic add with relaxed ordering
> + * @i: int value to add
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + @i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_add_return_relaxed() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline int
> raw_atomic_add_return_relaxed(int i, atomic_t *v)
> {
> @@ -543,6 +640,17 @@ raw_atomic_add_return_relaxed(int i, atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_fetch_add() - atomic add with full ordering
> + * @i: int value to add
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + @i) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_fetch_add() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> raw_atomic_fetch_add(int i, atomic_t *v)
> {
> @@ -559,6 +667,17 @@ raw_atomic_fetch_add(int i, atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_fetch_add_acquire() - atomic add with acquire ordering
> + * @i: int value to add
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + @i) with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_fetch_add_acquire() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> raw_atomic_fetch_add_acquire(int i, atomic_t *v)
> {
> @@ -575,6 +694,17 @@ raw_atomic_fetch_add_acquire(int i, atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_fetch_add_release() - atomic add with release ordering
> + * @i: int value to add
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + @i) with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_fetch_add_release() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> raw_atomic_fetch_add_release(int i, atomic_t *v)
> {
> @@ -590,6 +720,17 @@ raw_atomic_fetch_add_release(int i, atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_fetch_add_relaxed() - atomic add with relaxed ordering
> + * @i: int value to add
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + @i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_fetch_add_relaxed() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> raw_atomic_fetch_add_relaxed(int i, atomic_t *v)
> {
> @@ -602,12 +743,34 @@ raw_atomic_fetch_add_relaxed(int i, atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_sub() - atomic subtract with relaxed ordering
> + * @i: int value to subtract
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v - @i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_sub() elsewhere.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> raw_atomic_sub(int i, atomic_t *v)
> {
> arch_atomic_sub(i, v);
> }
>
> +/**
> + * raw_atomic_sub_return() - atomic subtract with full ordering
> + * @i: int value to subtract
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v - @i) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_sub_return() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline int
> raw_atomic_sub_return(int i, atomic_t *v)
> {
> @@ -624,6 +787,17 @@ raw_atomic_sub_return(int i, atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_sub_return_acquire() - atomic subtract with acquire ordering
> + * @i: int value to subtract
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v - @i) with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_sub_return_acquire() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline int
> raw_atomic_sub_return_acquire(int i, atomic_t *v)
> {
> @@ -640,6 +814,17 @@ raw_atomic_sub_return_acquire(int i, atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_sub_return_release() - atomic subtract with release ordering
> + * @i: int value to subtract
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v - @i) with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_sub_return_release() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline int
> raw_atomic_sub_return_release(int i, atomic_t *v)
> {
> @@ -655,6 +840,17 @@ raw_atomic_sub_return_release(int i, atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_sub_return_relaxed() - atomic subtract with relaxed ordering
> + * @i: int value to subtract
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v - @i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_sub_return_relaxed() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline int
> raw_atomic_sub_return_relaxed(int i, atomic_t *v)
> {
> @@ -667,6 +863,17 @@ raw_atomic_sub_return_relaxed(int i, atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_fetch_sub() - atomic subtract with full ordering
> + * @i: int value to subtract
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v - @i) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_fetch_sub() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> raw_atomic_fetch_sub(int i, atomic_t *v)
> {
> @@ -683,6 +890,17 @@ raw_atomic_fetch_sub(int i, atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_fetch_sub_acquire() - atomic subtract with acquire ordering
> + * @i: int value to subtract
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v - @i) with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_fetch_sub_acquire() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> raw_atomic_fetch_sub_acquire(int i, atomic_t *v)
> {
> @@ -699,6 +917,17 @@ raw_atomic_fetch_sub_acquire(int i, atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_fetch_sub_release() - atomic subtract with release ordering
> + * @i: int value to subtract
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v - @i) with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_fetch_sub_release() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> raw_atomic_fetch_sub_release(int i, atomic_t *v)
> {
> @@ -714,6 +943,17 @@ raw_atomic_fetch_sub_release(int i, atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_fetch_sub_relaxed() - atomic subtract with relaxed ordering
> + * @i: int value to subtract
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v - @i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_fetch_sub_relaxed() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> raw_atomic_fetch_sub_relaxed(int i, atomic_t *v)
> {
> @@ -726,6 +966,16 @@ raw_atomic_fetch_sub_relaxed(int i, atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_inc() - atomic increment with relaxed ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + 1) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_inc() elsewhere.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> raw_atomic_inc(atomic_t *v)
> {
> @@ -736,6 +986,16 @@ raw_atomic_inc(atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_inc_return() - atomic increment with full ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + 1) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_inc_return() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline int
> raw_atomic_inc_return(atomic_t *v)
> {
> @@ -752,6 +1012,16 @@ raw_atomic_inc_return(atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_inc_return_acquire() - atomic increment with acquire ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + 1) with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_inc_return_acquire() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline int
> raw_atomic_inc_return_acquire(atomic_t *v)
> {
> @@ -768,6 +1038,16 @@ raw_atomic_inc_return_acquire(atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_inc_return_release() - atomic increment with release ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + 1) with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_inc_return_release() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline int
> raw_atomic_inc_return_release(atomic_t *v)
> {
> @@ -783,6 +1063,16 @@ raw_atomic_inc_return_release(atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_inc_return_relaxed() - atomic increment with relaxed ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + 1) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_inc_return_relaxed() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline int
> raw_atomic_inc_return_relaxed(atomic_t *v)
> {
> @@ -795,6 +1085,16 @@ raw_atomic_inc_return_relaxed(atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_fetch_inc() - atomic increment with full ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + 1) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_fetch_inc() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> raw_atomic_fetch_inc(atomic_t *v)
> {
> @@ -811,6 +1111,16 @@ raw_atomic_fetch_inc(atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_fetch_inc_acquire() - atomic increment with acquire ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + 1) with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_fetch_inc_acquire() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> raw_atomic_fetch_inc_acquire(atomic_t *v)
> {
> @@ -827,6 +1137,16 @@ raw_atomic_fetch_inc_acquire(atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_fetch_inc_release() - atomic increment with release ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + 1) with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_fetch_inc_release() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> raw_atomic_fetch_inc_release(atomic_t *v)
> {
> @@ -842,6 +1162,16 @@ raw_atomic_fetch_inc_release(atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_fetch_inc_relaxed() - atomic increment with relaxed ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + 1) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_fetch_inc_relaxed() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> raw_atomic_fetch_inc_relaxed(atomic_t *v)
> {
> @@ -854,6 +1184,16 @@ raw_atomic_fetch_inc_relaxed(atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_dec() - atomic decrement with relaxed ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v - 1) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_dec() elsewhere.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> raw_atomic_dec(atomic_t *v)
> {
> @@ -864,6 +1204,16 @@ raw_atomic_dec(atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_dec_return() - atomic decrement with full ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v - 1) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_dec_return() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline int
> raw_atomic_dec_return(atomic_t *v)
> {
> @@ -880,6 +1230,16 @@ raw_atomic_dec_return(atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_dec_return_acquire() - atomic decrement with acquire ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v - 1) with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_dec_return_acquire() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline int
> raw_atomic_dec_return_acquire(atomic_t *v)
> {
> @@ -896,6 +1256,16 @@ raw_atomic_dec_return_acquire(atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_dec_return_release() - atomic decrement with release ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v - 1) with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_dec_return_release() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline int
> raw_atomic_dec_return_release(atomic_t *v)
> {
> @@ -911,6 +1281,16 @@ raw_atomic_dec_return_release(atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_dec_return_relaxed() - atomic decrement with relaxed ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v - 1) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_dec_return_relaxed() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline int
> raw_atomic_dec_return_relaxed(atomic_t *v)
> {
> @@ -923,6 +1303,16 @@ raw_atomic_dec_return_relaxed(atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_fetch_dec() - atomic decrement with full ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v - 1) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_fetch_dec() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> raw_atomic_fetch_dec(atomic_t *v)
> {
> @@ -939,6 +1329,16 @@ raw_atomic_fetch_dec(atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_fetch_dec_acquire() - atomic decrement with acquire ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v - 1) with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_fetch_dec_acquire() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> raw_atomic_fetch_dec_acquire(atomic_t *v)
> {
> @@ -955,6 +1355,16 @@ raw_atomic_fetch_dec_acquire(atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_fetch_dec_release() - atomic decrement with release ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v - 1) with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_fetch_dec_release() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> raw_atomic_fetch_dec_release(atomic_t *v)
> {
> @@ -970,6 +1380,16 @@ raw_atomic_fetch_dec_release(atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_fetch_dec_relaxed() - atomic decrement with relaxed ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v - 1) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_fetch_dec_relaxed() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> raw_atomic_fetch_dec_relaxed(atomic_t *v)
> {
> @@ -982,12 +1402,34 @@ raw_atomic_fetch_dec_relaxed(atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_and() - atomic bitwise AND with relaxed ordering
> + * @i: int value
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v & @i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_and() elsewhere.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> raw_atomic_and(int i, atomic_t *v)
> {
> arch_atomic_and(i, v);
> }
>
> +/**
> + * raw_atomic_fetch_and() - atomic bitwise AND with full ordering
> + * @i: int value
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v & @i) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_fetch_and() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> raw_atomic_fetch_and(int i, atomic_t *v)
> {
> @@ -1004,6 +1446,17 @@ raw_atomic_fetch_and(int i, atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_fetch_and_acquire() - atomic bitwise AND with acquire ordering
> + * @i: int value
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v & @i) with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_fetch_and_acquire() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> raw_atomic_fetch_and_acquire(int i, atomic_t *v)
> {
> @@ -1020,6 +1473,17 @@ raw_atomic_fetch_and_acquire(int i, atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_fetch_and_release() - atomic bitwise AND with release ordering
> + * @i: int value
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v & @i) with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_fetch_and_release() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> raw_atomic_fetch_and_release(int i, atomic_t *v)
> {
> @@ -1035,6 +1499,17 @@ raw_atomic_fetch_and_release(int i, atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_fetch_and_relaxed() - atomic bitwise AND with relaxed ordering
> + * @i: int value
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v & @i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_fetch_and_relaxed() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> raw_atomic_fetch_and_relaxed(int i, atomic_t *v)
> {
> @@ -1047,6 +1522,17 @@ raw_atomic_fetch_and_relaxed(int i, atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_andnot() - atomic bitwise AND NOT with relaxed ordering
> + * @i: int value
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v & ~@i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_andnot() elsewhere.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> raw_atomic_andnot(int i, atomic_t *v)
> {
> @@ -1057,6 +1543,17 @@ raw_atomic_andnot(int i, atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_fetch_andnot() - atomic bitwise AND NOT with full ordering
> + * @i: int value
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v & ~@i) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_fetch_andnot() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> raw_atomic_fetch_andnot(int i, atomic_t *v)
> {
> @@ -1073,6 +1570,17 @@ raw_atomic_fetch_andnot(int i, atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_fetch_andnot_acquire() - atomic bitwise AND NOT with acquire ordering
> + * @i: int value
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v & ~@i) with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_fetch_andnot_acquire() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
> {
> @@ -1089,6 +1597,17 @@ raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_fetch_andnot_release() - atomic bitwise AND NOT with release ordering
> + * @i: int value
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v & ~@i) with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_fetch_andnot_release() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> raw_atomic_fetch_andnot_release(int i, atomic_t *v)
> {
> @@ -1104,6 +1623,17 @@ raw_atomic_fetch_andnot_release(int i, atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_fetch_andnot_relaxed() - atomic bitwise AND NOT with relaxed ordering
> + * @i: int value
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v & ~@i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_fetch_andnot_relaxed() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> raw_atomic_fetch_andnot_relaxed(int i, atomic_t *v)
> {
> @@ -1116,12 +1646,34 @@ raw_atomic_fetch_andnot_relaxed(int i, atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_or() - atomic bitwise OR with relaxed ordering
> + * @i: int value
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v | @i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_or() elsewhere.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> raw_atomic_or(int i, atomic_t *v)
> {
> arch_atomic_or(i, v);
> }
>
> +/**
> + * raw_atomic_fetch_or() - atomic bitwise OR with full ordering
> + * @i: int value
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v | @i) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_fetch_or() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> raw_atomic_fetch_or(int i, atomic_t *v)
> {
> @@ -1138,6 +1690,17 @@ raw_atomic_fetch_or(int i, atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_fetch_or_acquire() - atomic bitwise OR with acquire ordering
> + * @i: int value
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v | @i) with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_fetch_or_acquire() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> raw_atomic_fetch_or_acquire(int i, atomic_t *v)
> {
> @@ -1154,6 +1717,17 @@ raw_atomic_fetch_or_acquire(int i, atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_fetch_or_release() - atomic bitwise OR with release ordering
> + * @i: int value
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v | @i) with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_fetch_or_release() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> raw_atomic_fetch_or_release(int i, atomic_t *v)
> {
> @@ -1169,6 +1743,17 @@ raw_atomic_fetch_or_release(int i, atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_fetch_or_relaxed() - atomic bitwise OR with relaxed ordering
> + * @i: int value
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v | @i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_fetch_or_relaxed() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> raw_atomic_fetch_or_relaxed(int i, atomic_t *v)
> {
> @@ -1181,12 +1766,34 @@ raw_atomic_fetch_or_relaxed(int i, atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_xor() - atomic bitwise XOR with relaxed ordering
> + * @i: int value
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v ^ @i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_xor() elsewhere.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> raw_atomic_xor(int i, atomic_t *v)
> {
> arch_atomic_xor(i, v);
> }
>
> +/**
> + * raw_atomic_fetch_xor() - atomic bitwise XOR with full ordering
> + * @i: int value
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v ^ @i) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_fetch_xor() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> raw_atomic_fetch_xor(int i, atomic_t *v)
> {
> @@ -1203,6 +1810,17 @@ raw_atomic_fetch_xor(int i, atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_fetch_xor_acquire() - atomic bitwise XOR with acquire ordering
> + * @i: int value
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v ^ @i) with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_fetch_xor_acquire() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> raw_atomic_fetch_xor_acquire(int i, atomic_t *v)
> {
> @@ -1219,6 +1837,17 @@ raw_atomic_fetch_xor_acquire(int i, atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_fetch_xor_release() - atomic bitwise XOR with release ordering
> + * @i: int value
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v ^ @i) with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_fetch_xor_release() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> raw_atomic_fetch_xor_release(int i, atomic_t *v)
> {
> @@ -1234,6 +1863,17 @@ raw_atomic_fetch_xor_release(int i, atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_fetch_xor_relaxed() - atomic bitwise XOR with relaxed ordering
> + * @i: int value
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v ^ @i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_fetch_xor_relaxed() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> raw_atomic_fetch_xor_relaxed(int i, atomic_t *v)
> {
> @@ -1246,6 +1886,17 @@ raw_atomic_fetch_xor_relaxed(int i, atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_xchg() - atomic exchange with full ordering
> + * @v: pointer to atomic_t
> + * @new: int value to assign
> + *
> + * Atomically updates @v to @new with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_xchg() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> raw_atomic_xchg(atomic_t *v, int new)
> {
> @@ -1262,6 +1913,17 @@ raw_atomic_xchg(atomic_t *v, int new)
> #endif
> }
>
> +/**
> + * raw_atomic_xchg_acquire() - atomic exchange with acquire ordering
> + * @v: pointer to atomic_t
> + * @new: int value to assign
> + *
> + * Atomically updates @v to @new with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_xchg_acquire() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> raw_atomic_xchg_acquire(atomic_t *v, int new)
> {
> @@ -1278,6 +1940,17 @@ raw_atomic_xchg_acquire(atomic_t *v, int new)
> #endif
> }
>
> +/**
> + * raw_atomic_xchg_release() - atomic exchange with release ordering
> + * @v: pointer to atomic_t
> + * @new: int value to assign
> + *
> + * Atomically updates @v to @new with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_xchg_release() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> raw_atomic_xchg_release(atomic_t *v, int new)
> {
> @@ -1293,6 +1966,17 @@ raw_atomic_xchg_release(atomic_t *v, int new)
> #endif
> }
>
> +/**
> + * raw_atomic_xchg_relaxed() - atomic exchange with relaxed ordering
> + * @v: pointer to atomic_t
> + * @new: int value to assign
> + *
> + * Atomically updates @v to @new with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_xchg_relaxed() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> raw_atomic_xchg_relaxed(atomic_t *v, int new)
> {
> @@ -1305,6 +1989,18 @@ raw_atomic_xchg_relaxed(atomic_t *v, int new)
> #endif
> }
>
> +/**
> + * raw_atomic_cmpxchg() - atomic compare and exchange with full ordering
> + * @v: pointer to atomic_t
> + * @old: int value to compare with
> + * @new: int value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_cmpxchg() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> raw_atomic_cmpxchg(atomic_t *v, int old, int new)
> {
> @@ -1321,6 +2017,18 @@ raw_atomic_cmpxchg(atomic_t *v, int old, int new)
> #endif
> }
>
> +/**
> + * raw_atomic_cmpxchg_acquire() - atomic compare and exchange with acquire ordering
> + * @v: pointer to atomic_t
> + * @old: int value to compare with
> + * @new: int value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_cmpxchg_acquire() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> raw_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
> {
> @@ -1337,6 +2045,18 @@ raw_atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
> #endif
> }
>
> +/**
> + * raw_atomic_cmpxchg_release() - atomic compare and exchange with release ordering
> + * @v: pointer to atomic_t
> + * @old: int value to compare with
> + * @new: int value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_cmpxchg_release() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> raw_atomic_cmpxchg_release(atomic_t *v, int old, int new)
> {
> @@ -1352,6 +2072,18 @@ raw_atomic_cmpxchg_release(atomic_t *v, int old, int new)
> #endif
> }
>
> +/**
> + * raw_atomic_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering
> + * @v: pointer to atomic_t
> + * @old: int value to compare with
> + * @new: int value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_cmpxchg_relaxed() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> raw_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
> {
> @@ -1364,6 +2096,19 @@ raw_atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
> #endif
> }
>
> +/**
> + * raw_atomic_try_cmpxchg() - atomic compare and exchange with full ordering
> + * @v: pointer to atomic_t
> + * @old: pointer to int value to compare with
> + * @new: int value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with full ordering.
> + * Otherwise, updates @old to the current value of @v.
> + *
> + * Safe to use in noinstr code; prefer atomic_try_cmpxchg() elsewhere.
> + *
> + * Return: @true if the exchange occurred, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
> {
> @@ -1384,6 +2129,19 @@ raw_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
> #endif
> }
>
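The try_cmpxchg() forms update @old on failure, so the usual CAS retry loop
needs no separate re-read. As a rough sketch only (the helper name is invented
for illustration, and <linux/atomic.h>/<linux/limits.h> are assumed), a
saturating increment built on raw_atomic_try_cmpxchg() could look like:

| /* illustrative only; assumes <linux/atomic.h> and <linux/limits.h> */
| static bool inc_unless_saturated(atomic_t *v)
| {
| 	int old = raw_atomic_read(v);
|
| 	do {
| 		if (old == INT_MAX)
| 			return false;
| 		/* on failure, raw_atomic_try_cmpxchg() reloads @old from @v */
| 	} while (!raw_atomic_try_cmpxchg(v, &old, old + 1));
|
| 	return true;
| }
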
> +/**
> + * raw_atomic_try_cmpxchg_acquire() - atomic compare and exchange with acquire ordering
> + * @v: pointer to atomic_t
> + * @old: pointer to int value to compare with
> + * @new: int value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with acquire ordering.
> + * Otherwise, updates @old to the current value of @v.
> + *
> + * Safe to use in noinstr code; prefer atomic_try_cmpxchg_acquire() elsewhere.
> + *
> + * Return: @true if the exchange occurred, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
> {
> @@ -1404,6 +2162,19 @@ raw_atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
> #endif
> }
>
> +/**
> + * raw_atomic_try_cmpxchg_release() - atomic compare and exchange with release ordering
> + * @v: pointer to atomic_t
> + * @old: pointer to int value to compare with
> + * @new: int value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with release ordering.
> + * Otherwise, updates @old to the current value of @v.
> + *
> + * Safe to use in noinstr code; prefer atomic_try_cmpxchg_release() elsewhere.
> + *
> + * Return: @true if the exchange occurred, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
> {
> @@ -1423,6 +2194,19 @@ raw_atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
> #endif
> }
>
> +/**
> + * raw_atomic_try_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering
> + * @v: pointer to atomic_t
> + * @old: pointer to int value to compare with
> + * @new: int value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with relaxed ordering.
> + * Otherwise, updates @old to the current value of @v.
> + *
> + * Safe to use in noinstr code; prefer atomic_try_cmpxchg_relaxed() elsewhere.
> + *
> + * Return: @true if the exchange occurred, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
> {
> @@ -1439,6 +2223,17 @@ raw_atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
> #endif
> }
>
> +/**
> + * raw_atomic_sub_and_test() - atomic subtract and test if zero with full ordering
> + * @i: int value to subtract
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v - @i) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_sub_and_test() elsewhere.
> + *
> + * Return: @true if the resulting value of @v is zero, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic_sub_and_test(int i, atomic_t *v)
> {
> @@ -1449,6 +2244,16 @@ raw_atomic_sub_and_test(int i, atomic_t *v)
> #endif
> }
>
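sub_and_test() is the usual building block for dropping several references at
once and freeing on the last put. A hedged sketch, where "struct foo" and its
refs field are hypothetical and <linux/slab.h> is assumed for kfree():

| /* illustrative only; "struct foo" and its refs field are hypothetical */
| static void foo_put_many(struct foo *f, int nr)
| {
| 	if (raw_atomic_sub_and_test(nr, &f->refs))
| 		kfree(f);
| }
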
> +/**
> + * raw_atomic_dec_and_test() - atomic decrement and test if zero with full ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v - 1) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_dec_and_test() elsewhere.
> + *
> + * Return: @true if the resulting value of @v is zero, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic_dec_and_test(atomic_t *v)
> {
> @@ -1459,6 +2264,16 @@ raw_atomic_dec_and_test(atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_inc_and_test() - atomic increment and test if zero with full ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + 1) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_inc_and_test() elsewhere.
> + *
> + * Return: @true if the resulting value of @v is zero, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic_inc_and_test(atomic_t *v)
> {
> @@ -1469,6 +2284,17 @@ raw_atomic_inc_and_test(atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_add_negative() - atomic add and test if negative with full ordering
> + * @i: int value to add
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + @i) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_add_negative() elsewhere.
> + *
> + * Return: @true if the resulting value of @v is negative, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic_add_negative(int i, atomic_t *v)
> {
> @@ -1485,6 +2311,17 @@ raw_atomic_add_negative(int i, atomic_t *v)
> #endif
> }
>
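add_negative() folds the addition and the sign test into a single atomic
operation; the common pattern passes a negative @i so the caller learns when a
counter drops below zero. A minimal sketch (illustrative only):

| /* illustrative only */
| static bool dec_went_negative(atomic_t *v)
| {
| 	/* true once this decrement takes @v below zero */
| 	return raw_atomic_add_negative(-1, v);
| }
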
> +/**
> + * raw_atomic_add_negative_acquire() - atomic add and test if negative with acquire ordering
> + * @i: int value to add
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + @i) with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_add_negative_acquire() elsewhere.
> + *
> + * Return: @true if the resulting value of @v is negative, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic_add_negative_acquire(int i, atomic_t *v)
> {
> @@ -1501,6 +2338,17 @@ raw_atomic_add_negative_acquire(int i, atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_add_negative_release() - atomic add and test if negative with release ordering
> + * @i: int value to add
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + @i) with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_add_negative_release() elsewhere.
> + *
> + * Return: @true if the resulting value of @v is negative, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic_add_negative_release(int i, atomic_t *v)
> {
> @@ -1516,6 +2364,17 @@ raw_atomic_add_negative_release(int i, atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_add_negative_relaxed() - atomic add and test if negative with relaxed ordering
> + * @i: int value to add
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + @i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_add_negative_relaxed() elsewhere.
> + *
> + * Return: @true if the resulting value of @v is negative, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic_add_negative_relaxed(int i, atomic_t *v)
> {
> @@ -1528,6 +2387,18 @@ raw_atomic_add_negative_relaxed(int i, atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_fetch_add_unless() - atomic add unless value with full ordering
> + * @v: pointer to atomic_t
> + * @a: int value to add
> + * @u: int value to compare with
> + *
> + * If (@v != @u), atomically updates @v to (@v + @a) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_fetch_add_unless() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> raw_atomic_fetch_add_unless(atomic_t *v, int a, int u)
> {
> @@ -1545,6 +2416,18 @@ raw_atomic_fetch_add_unless(atomic_t *v, int a, int u)
> #endif
> }
>
> +/**
> + * raw_atomic_add_unless() - atomic add unless value with full ordering
> + * @v: pointer to atomic_t
> + * @a: int value to add
> + * @u: int value to compare with
> + *
> + * If (@v != @u), atomically updates @v to (@v + @a) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_add_unless() elsewhere.
> + *
> + * Return: @true if @v was updated, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic_add_unless(atomic_t *v, int a, int u)
> {
> @@ -1555,6 +2438,16 @@ raw_atomic_add_unless(atomic_t *v, int a, int u)
> #endif
> }
>
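add_unless() is the boolean form of fetch_add_unless(), and inc_not_zero()
below is its common special case. The sketch below shows roughly how the
generated fallback behaves when the architecture provides no
arch_atomic_add_unless() (helper name invented for illustration):

| static bool add_unless_sketch(atomic_t *v, int a, int u)
| {
| 	/* the addition happened iff the old value was not @u */
| 	return raw_atomic_fetch_add_unless(v, a, u) != u;
| }
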
> +/**
> + * raw_atomic_inc_not_zero() - atomic increment unless zero with full ordering
> + * @v: pointer to atomic_t
> + *
> + * If (@v != 0), atomically updates @v to (@v + 1) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_inc_not_zero() elsewhere.
> + *
> + * Return: @true if @v was updated, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic_inc_not_zero(atomic_t *v)
> {
> @@ -1565,6 +2458,16 @@ raw_atomic_inc_not_zero(atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_inc_unless_negative() - atomic increment unless negative with full ordering
> + * @v: pointer to atomic_t
> + *
> + * If (@v >= 0), atomically updates @v to (@v + 1) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_inc_unless_negative() elsewhere.
> + *
> + * Return: @true if @v was updated, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic_inc_unless_negative(atomic_t *v)
> {
> @@ -1582,6 +2485,16 @@ raw_atomic_inc_unless_negative(atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_dec_unless_positive() - atomic decrement unless positive with full ordering
> + * @v: pointer to atomic_t
> + *
> + * If (@v <= 0), atomically updates @v to (@v - 1) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_dec_unless_positive() elsewhere.
> + *
> + * Return: @true if @v was updated, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic_dec_unless_positive(atomic_t *v)
> {
> @@ -1599,6 +2512,16 @@ raw_atomic_dec_unless_positive(atomic_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_dec_if_positive() - atomic decrement if positive with full ordering
> + * @v: pointer to atomic_t
> + *
> + * If (@v > 0), atomically updates @v to (@v - 1) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_dec_if_positive() elsewhere.
> + *
> + * Return: The old value of (@v - 1), regardless of whether @v was updated.
> + */
> static __always_inline int
> raw_atomic_dec_if_positive(atomic_t *v)
> {
> @@ -1621,12 +2544,32 @@ raw_atomic_dec_if_positive(atomic_t *v)
> #include <asm-generic/atomic64.h>
> #endif
>
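Unlike the *_and_test() ops, dec_if_positive() returns the decremented value
rather than a bool, so callers check its sign to see whether the decrement was
performed. A hedged sketch, with @avail a hypothetical free-slot counter:

| /* illustrative only; @avail is a hypothetical free-slot counter */
| static bool try_take_slot(atomic_t *avail)
| {
| 	/* a negative result means @avail was already <= 0 and is unchanged */
| 	return raw_atomic_dec_if_positive(avail) >= 0;
| }
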
> +/**
> + * raw_atomic64_read() - atomic load with relaxed ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically loads the value of @v with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_read() elsewhere.
> + *
> + * Return: The value loaded from @v.
> + */
> static __always_inline s64
> raw_atomic64_read(const atomic64_t *v)
> {
> return arch_atomic64_read(v);
> }
>
> +/**
> + * raw_atomic64_read_acquire() - atomic load with acquire ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically loads the value of @v with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_read_acquire() elsewhere.
> + *
> + * Return: The value loaded from @v.
> + */
> static __always_inline s64
> raw_atomic64_read_acquire(const atomic64_t *v)
> {
> @@ -1648,12 +2591,34 @@ raw_atomic64_read_acquire(const atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_set() - atomic set with relaxed ordering
> + * @v: pointer to atomic64_t
> + * @i: s64 value to assign
> + *
> + * Atomically sets @v to @i with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_set() elsewhere.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> raw_atomic64_set(atomic64_t *v, s64 i)
> {
> arch_atomic64_set(v, i);
> }
>
> +/**
> + * raw_atomic64_set_release() - atomic set with release ordering
> + * @v: pointer to atomic64_t
> + * @i: s64 value to assign
> + *
> + * Atomically sets @v to @i with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_set_release() elsewhere.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> raw_atomic64_set_release(atomic64_t *v, s64 i)
> {
> @@ -1671,12 +2636,34 @@ raw_atomic64_set_release(atomic64_t *v, s64 i)
> #endif
> }
>
> +/**
> + * raw_atomic64_add() - atomic add with relaxed ordering
> + * @i: s64 value to add
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + @i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_add() elsewhere.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> raw_atomic64_add(s64 i, atomic64_t *v)
> {
> arch_atomic64_add(i, v);
> }
>
> +/**
> + * raw_atomic64_add_return() - atomic add with full ordering
> + * @i: s64 value to add
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + @i) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_add_return() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline s64
> raw_atomic64_add_return(s64 i, atomic64_t *v)
> {
> @@ -1693,6 +2680,17 @@ raw_atomic64_add_return(s64 i, atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_add_return_acquire() - atomic add with acquire ordering
> + * @i: s64 value to add
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + @i) with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_add_return_acquire() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline s64
> raw_atomic64_add_return_acquire(s64 i, atomic64_t *v)
> {
> @@ -1709,6 +2707,17 @@ raw_atomic64_add_return_acquire(s64 i, atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_add_return_release() - atomic add with release ordering
> + * @i: s64 value to add
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + @i) with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_add_return_release() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline s64
> raw_atomic64_add_return_release(s64 i, atomic64_t *v)
> {
> @@ -1724,6 +2733,17 @@ raw_atomic64_add_return_release(s64 i, atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_add_return_relaxed() - atomic add with relaxed ordering
> + * @i: s64 value to add
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + @i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_add_return_relaxed() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline s64
> raw_atomic64_add_return_relaxed(s64 i, atomic64_t *v)
> {
> @@ -1736,6 +2756,17 @@ raw_atomic64_add_return_relaxed(s64 i, atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_fetch_add() - atomic add with full ordering
> + * @i: s64 value to add
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + @i) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_fetch_add() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> raw_atomic64_fetch_add(s64 i, atomic64_t *v)
> {
> @@ -1752,6 +2783,17 @@ raw_atomic64_fetch_add(s64 i, atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_fetch_add_acquire() - atomic add with acquire ordering
> + * @i: s64 value to add
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + @i) with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_fetch_add_acquire() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> raw_atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
> {
> @@ -1768,6 +2810,17 @@ raw_atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_fetch_add_release() - atomic add with release ordering
> + * @i: s64 value to add
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + @i) with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_fetch_add_release() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> raw_atomic64_fetch_add_release(s64 i, atomic64_t *v)
> {
> @@ -1783,6 +2836,17 @@ raw_atomic64_fetch_add_release(s64 i, atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_fetch_add_relaxed() - atomic add with relaxed ordering
> + * @i: s64 value to add
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + @i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_fetch_add_relaxed() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> raw_atomic64_fetch_add_relaxed(s64 i, atomic64_t *v)
> {
> @@ -1795,12 +2859,34 @@ raw_atomic64_fetch_add_relaxed(s64 i, atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_sub() - atomic subtract with relaxed ordering
> + * @i: s64 value to subtract
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v - @i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_sub() elsewhere.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> raw_atomic64_sub(s64 i, atomic64_t *v)
> {
> arch_atomic64_sub(i, v);
> }
>
> +/**
> + * raw_atomic64_sub_return() - atomic subtract with full ordering
> + * @i: s64 value to subtract
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v - @i) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_sub_return() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline s64
> raw_atomic64_sub_return(s64 i, atomic64_t *v)
> {
> @@ -1817,6 +2903,17 @@ raw_atomic64_sub_return(s64 i, atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_sub_return_acquire() - atomic subtract with acquire ordering
> + * @i: s64 value to subtract
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v - @i) with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_sub_return_acquire() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline s64
> raw_atomic64_sub_return_acquire(s64 i, atomic64_t *v)
> {
> @@ -1833,6 +2930,17 @@ raw_atomic64_sub_return_acquire(s64 i, atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_sub_return_release() - atomic subtract with release ordering
> + * @i: s64 value to subtract
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v - @i) with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_sub_return_release() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline s64
> raw_atomic64_sub_return_release(s64 i, atomic64_t *v)
> {
> @@ -1848,6 +2956,17 @@ raw_atomic64_sub_return_release(s64 i, atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_sub_return_relaxed() - atomic subtract with relaxed ordering
> + * @i: s64 value to subtract
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v - @i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_sub_return_relaxed() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline s64
> raw_atomic64_sub_return_relaxed(s64 i, atomic64_t *v)
> {
> @@ -1860,6 +2979,17 @@ raw_atomic64_sub_return_relaxed(s64 i, atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_fetch_sub() - atomic subtract with full ordering
> + * @i: s64 value to subtract
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v - @i) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_fetch_sub() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> raw_atomic64_fetch_sub(s64 i, atomic64_t *v)
> {
> @@ -1876,6 +3006,17 @@ raw_atomic64_fetch_sub(s64 i, atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_fetch_sub_acquire() - atomic subtract with acquire ordering
> + * @i: s64 value to subtract
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v - @i) with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_fetch_sub_acquire() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> raw_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
> {
> @@ -1892,6 +3033,17 @@ raw_atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_fetch_sub_release() - atomic subtract with release ordering
> + * @i: s64 value to subtract
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v - @i) with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_fetch_sub_release() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> raw_atomic64_fetch_sub_release(s64 i, atomic64_t *v)
> {
> @@ -1907,6 +3059,17 @@ raw_atomic64_fetch_sub_release(s64 i, atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_fetch_sub_relaxed() - atomic subtract with relaxed ordering
> + * @i: s64 value to subtract
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v - @i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_fetch_sub_relaxed() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> raw_atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v)
> {
> @@ -1919,6 +3082,16 @@ raw_atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_inc() - atomic increment with relaxed ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + 1) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_inc() elsewhere.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> raw_atomic64_inc(atomic64_t *v)
> {
> @@ -1929,6 +3102,16 @@ raw_atomic64_inc(atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_inc_return() - atomic increment with full ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + 1) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_inc_return() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline s64
> raw_atomic64_inc_return(atomic64_t *v)
> {
> @@ -1945,6 +3128,16 @@ raw_atomic64_inc_return(atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_inc_return_acquire() - atomic increment with acquire ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + 1) with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_inc_return_acquire() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline s64
> raw_atomic64_inc_return_acquire(atomic64_t *v)
> {
> @@ -1961,6 +3154,16 @@ raw_atomic64_inc_return_acquire(atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_inc_return_release() - atomic increment with release ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + 1) with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_inc_return_release() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline s64
> raw_atomic64_inc_return_release(atomic64_t *v)
> {
> @@ -1976,6 +3179,16 @@ raw_atomic64_inc_return_release(atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_inc_return_relaxed() - atomic increment with relaxed ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + 1) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_inc_return_relaxed() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline s64
> raw_atomic64_inc_return_relaxed(atomic64_t *v)
> {
> @@ -1988,6 +3201,16 @@ raw_atomic64_inc_return_relaxed(atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_fetch_inc() - atomic increment with full ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + 1) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_fetch_inc() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> raw_atomic64_fetch_inc(atomic64_t *v)
> {
> @@ -2004,6 +3227,16 @@ raw_atomic64_fetch_inc(atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_fetch_inc_acquire() - atomic increment with acquire ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + 1) with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_fetch_inc_acquire() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> raw_atomic64_fetch_inc_acquire(atomic64_t *v)
> {
> @@ -2020,6 +3253,16 @@ raw_atomic64_fetch_inc_acquire(atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_fetch_inc_release() - atomic increment with release ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + 1) with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_fetch_inc_release() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> raw_atomic64_fetch_inc_release(atomic64_t *v)
> {
> @@ -2035,6 +3278,16 @@ raw_atomic64_fetch_inc_release(atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_fetch_inc_relaxed() - atomic increment with relaxed ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + 1) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_fetch_inc_relaxed() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> raw_atomic64_fetch_inc_relaxed(atomic64_t *v)
> {
> @@ -2047,6 +3300,16 @@ raw_atomic64_fetch_inc_relaxed(atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_dec() - atomic decrement with relaxed ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v - 1) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_dec() elsewhere.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> raw_atomic64_dec(atomic64_t *v)
> {
> @@ -2057,6 +3320,16 @@ raw_atomic64_dec(atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_dec_return() - atomic decrement with full ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v - 1) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_dec_return() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline s64
> raw_atomic64_dec_return(atomic64_t *v)
> {
> @@ -2073,6 +3346,16 @@ raw_atomic64_dec_return(atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_dec_return_acquire() - atomic decrement with acquire ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v - 1) with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_dec_return_acquire() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline s64
> raw_atomic64_dec_return_acquire(atomic64_t *v)
> {
> @@ -2089,6 +3372,16 @@ raw_atomic64_dec_return_acquire(atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_dec_return_release() - atomic decrement with release ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v - 1) with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_dec_return_release() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline s64
> raw_atomic64_dec_return_release(atomic64_t *v)
> {
> @@ -2104,6 +3397,16 @@ raw_atomic64_dec_return_release(atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_dec_return_relaxed() - atomic decrement with relaxed ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v - 1) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_dec_return_relaxed() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline s64
> raw_atomic64_dec_return_relaxed(atomic64_t *v)
> {
> @@ -2116,6 +3419,16 @@ raw_atomic64_dec_return_relaxed(atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_fetch_dec() - atomic decrement with full ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v - 1) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_fetch_dec() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> raw_atomic64_fetch_dec(atomic64_t *v)
> {
> @@ -2132,6 +3445,16 @@ raw_atomic64_fetch_dec(atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_fetch_dec_acquire() - atomic decrement with acquire ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v - 1) with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_fetch_dec_acquire() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> raw_atomic64_fetch_dec_acquire(atomic64_t *v)
> {
> @@ -2148,6 +3471,16 @@ raw_atomic64_fetch_dec_acquire(atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_fetch_dec_release() - atomic decrement with release ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v - 1) with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_fetch_dec_release() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> raw_atomic64_fetch_dec_release(atomic64_t *v)
> {
> @@ -2163,6 +3496,16 @@ raw_atomic64_fetch_dec_release(atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_fetch_dec_relaxed() - atomic decrement with relaxed ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v - 1) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_fetch_dec_relaxed() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> raw_atomic64_fetch_dec_relaxed(atomic64_t *v)
> {
> @@ -2175,12 +3518,34 @@ raw_atomic64_fetch_dec_relaxed(atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_and() - atomic bitwise AND with relaxed ordering
> + * @i: s64 value
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v & @i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_and() elsewhere.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> raw_atomic64_and(s64 i, atomic64_t *v)
> {
> arch_atomic64_and(i, v);
> }
>
> +/**
> + * raw_atomic64_fetch_and() - atomic bitwise AND with full ordering
> + * @i: s64 value
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v & @i) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_fetch_and() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> raw_atomic64_fetch_and(s64 i, atomic64_t *v)
> {
> @@ -2197,6 +3562,17 @@ raw_atomic64_fetch_and(s64 i, atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_fetch_and_acquire() - atomic bitwise AND with acquire ordering
> + * @i: s64 value
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v & @i) with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_fetch_and_acquire() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> raw_atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
> {
> @@ -2213,6 +3589,17 @@ raw_atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_fetch_and_release() - atomic bitwise AND with release ordering
> + * @i: s64 value
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v & @i) with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_fetch_and_release() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> raw_atomic64_fetch_and_release(s64 i, atomic64_t *v)
> {
> @@ -2228,6 +3615,17 @@ raw_atomic64_fetch_and_release(s64 i, atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_fetch_and_relaxed() - atomic bitwise AND with relaxed ordering
> + * @i: s64 value
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v & @i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_fetch_and_relaxed() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> raw_atomic64_fetch_and_relaxed(s64 i, atomic64_t *v)
> {
> @@ -2240,6 +3638,17 @@ raw_atomic64_fetch_and_relaxed(s64 i, atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_andnot() - atomic bitwise AND NOT with relaxed ordering
> + * @i: s64 value
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v & ~@i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_andnot() elsewhere.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> raw_atomic64_andnot(s64 i, atomic64_t *v)
> {
> @@ -2250,6 +3659,17 @@ raw_atomic64_andnot(s64 i, atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_fetch_andnot() - atomic bitwise AND NOT with full ordering
> + * @i: s64 value
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v & ~@i) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_fetch_andnot() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> raw_atomic64_fetch_andnot(s64 i, atomic64_t *v)
> {
> @@ -2266,6 +3686,17 @@ raw_atomic64_fetch_andnot(s64 i, atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_fetch_andnot_acquire() - atomic bitwise AND NOT with acquire ordering
> + * @i: s64 value
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v & ~@i) with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_fetch_andnot_acquire() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> raw_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
> {
> @@ -2282,6 +3713,17 @@ raw_atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_fetch_andnot_release() - atomic bitwise AND NOT with release ordering
> + * @i: s64 value
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v & ~@i) with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_fetch_andnot_release() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> raw_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
> {
> @@ -2297,6 +3739,17 @@ raw_atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_fetch_andnot_relaxed() - atomic bitwise AND NOT with relaxed ordering
> + * @i: s64 value
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v & ~@i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_fetch_andnot_relaxed() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> raw_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
> {
> @@ -2309,12 +3762,34 @@ raw_atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_or() - atomic bitwise OR with relaxed ordering
> + * @i: s64 value
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v | @i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_or() elsewhere.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> raw_atomic64_or(s64 i, atomic64_t *v)
> {
> arch_atomic64_or(i, v);
> }
>
> +/**
> + * raw_atomic64_fetch_or() - atomic bitwise OR with full ordering
> + * @i: s64 value
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v | @i) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_fetch_or() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> raw_atomic64_fetch_or(s64 i, atomic64_t *v)
> {
> @@ -2331,6 +3806,17 @@ raw_atomic64_fetch_or(s64 i, atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_fetch_or_acquire() - atomic bitwise OR with acquire ordering
> + * @i: s64 value
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v | @i) with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_fetch_or_acquire() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> raw_atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
> {
> @@ -2347,6 +3833,17 @@ raw_atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_fetch_or_release() - atomic bitwise OR with release ordering
> + * @i: s64 value
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v | @i) with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_fetch_or_release() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> raw_atomic64_fetch_or_release(s64 i, atomic64_t *v)
> {
> @@ -2362,6 +3859,17 @@ raw_atomic64_fetch_or_release(s64 i, atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_fetch_or_relaxed() - atomic bitwise OR with relaxed ordering
> + * @i: s64 value
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v | @i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_fetch_or_relaxed() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> raw_atomic64_fetch_or_relaxed(s64 i, atomic64_t *v)
> {
> @@ -2374,12 +3882,34 @@ raw_atomic64_fetch_or_relaxed(s64 i, atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_xor() - atomic bitwise XOR with relaxed ordering
> + * @i: s64 value
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v ^ @i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_xor() elsewhere.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> raw_atomic64_xor(s64 i, atomic64_t *v)
> {
> arch_atomic64_xor(i, v);
> }
>
> +/**
> + * raw_atomic64_fetch_xor() - atomic bitwise XOR with full ordering
> + * @i: s64 value
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v ^ @i) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_fetch_xor() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> raw_atomic64_fetch_xor(s64 i, atomic64_t *v)
> {
> @@ -2396,6 +3926,17 @@ raw_atomic64_fetch_xor(s64 i, atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_fetch_xor_acquire() - atomic bitwise XOR with acquire ordering
> + * @i: s64 value
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v ^ @i) with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_fetch_xor_acquire() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> raw_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
> {
> @@ -2412,6 +3953,17 @@ raw_atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_fetch_xor_release() - atomic bitwise XOR with release ordering
> + * @i: s64 value
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v ^ @i) with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_fetch_xor_release() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> raw_atomic64_fetch_xor_release(s64 i, atomic64_t *v)
> {
> @@ -2427,6 +3979,17 @@ raw_atomic64_fetch_xor_release(s64 i, atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_fetch_xor_relaxed() - atomic bitwise XOR with relaxed ordering
> + * @i: s64 value
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v ^ @i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_fetch_xor_relaxed() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> raw_atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v)
> {
> @@ -2439,6 +4002,17 @@ raw_atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_xchg() - atomic exchange with full ordering
> + * @v: pointer to atomic64_t
> + * @new: s64 value to assign
> + *
> + * Atomically updates @v to @new with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_xchg() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> raw_atomic64_xchg(atomic64_t *v, s64 new)
> {
> @@ -2455,6 +4029,17 @@ raw_atomic64_xchg(atomic64_t *v, s64 new)
> #endif
> }
>
> +/**
> + * raw_atomic64_xchg_acquire() - atomic exchange with acquire ordering
> + * @v: pointer to atomic64_t
> + * @new: s64 value to assign
> + *
> + * Atomically updates @v to @new with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_xchg_acquire() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> raw_atomic64_xchg_acquire(atomic64_t *v, s64 new)
> {
> @@ -2471,6 +4056,17 @@ raw_atomic64_xchg_acquire(atomic64_t *v, s64 new)
> #endif
> }
>
> +/**
> + * raw_atomic64_xchg_release() - atomic exchange with release ordering
> + * @v: pointer to atomic64_t
> + * @new: s64 value to assign
> + *
> + * Atomically updates @v to @new with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_xchg_release() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> raw_atomic64_xchg_release(atomic64_t *v, s64 new)
> {
> @@ -2486,6 +4082,17 @@ raw_atomic64_xchg_release(atomic64_t *v, s64 new)
> #endif
> }
>
> +/**
> + * raw_atomic64_xchg_relaxed() - atomic exchange with relaxed ordering
> + * @v: pointer to atomic64_t
> + * @new: s64 value to assign
> + *
> + * Atomically updates @v to @new with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_xchg_relaxed() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> raw_atomic64_xchg_relaxed(atomic64_t *v, s64 new)
> {
> @@ -2498,6 +4105,18 @@ raw_atomic64_xchg_relaxed(atomic64_t *v, s64 new)
> #endif
> }
>
> +/**
> + * raw_atomic64_cmpxchg() - atomic compare and exchange with full ordering
> + * @v: pointer to atomic64_t
> + * @old: s64 value to compare with
> + * @new: s64 value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_cmpxchg() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> raw_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
> {
> @@ -2514,6 +4133,18 @@ raw_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
> #endif
> }
>
> +/**
> + * raw_atomic64_cmpxchg_acquire() - atomic compare and exchange with acquire ordering
> + * @v: pointer to atomic64_t
> + * @old: s64 value to compare with
> + * @new: s64 value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_cmpxchg_acquire() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> raw_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
> {
> @@ -2530,6 +4161,18 @@ raw_atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
> #endif
> }
>
> +/**
> + * raw_atomic64_cmpxchg_release() - atomic compare and exchange with release ordering
> + * @v: pointer to atomic64_t
> + * @old: s64 value to compare with
> + * @new: s64 value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_cmpxchg_release() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> raw_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
> {
> @@ -2545,6 +4188,18 @@ raw_atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
> #endif
> }
>
> +/**
> + * raw_atomic64_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering
> + * @v: pointer to atomic64_t
> + * @old: s64 value to compare with
> + * @new: s64 value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_cmpxchg_relaxed() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> raw_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
> {
> @@ -2557,6 +4212,19 @@ raw_atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
> #endif
> }
>
> +/**
> + * raw_atomic64_try_cmpxchg() - atomic compare and exchange with full ordering
> + * @v: pointer to atomic64_t
> + * @old: pointer to s64 value to compare with
> + * @new: s64 value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with full ordering.
> + * Otherwise, updates @old to the current value of @v.
> + *
> + * Safe to use in noinstr code; prefer atomic64_try_cmpxchg() elsewhere.
> + *
> + * Return: @true if the exchange occurred, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
> {
> @@ -2577,6 +4245,19 @@ raw_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
> #endif
> }
>
> +/**
> + * raw_atomic64_try_cmpxchg_acquire() - atomic compare and exchange with acquire ordering
> + * @v: pointer to atomic64_t
> + * @old: pointer to s64 value to compare with
> + * @new: s64 value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with acquire ordering.
> + * Otherwise, updates @old to the current value of @v.
> + *
> + * Safe to use in noinstr code; prefer atomic64_try_cmpxchg_acquire() elsewhere.
> + *
> + * Return: @true if the exchange occurred, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
> {
> @@ -2597,6 +4278,19 @@ raw_atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
> #endif
> }
>
> +/**
> + * raw_atomic64_try_cmpxchg_release() - atomic compare and exchange with release ordering
> + * @v: pointer to atomic64_t
> + * @old: pointer to s64 value to compare with
> + * @new: s64 value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with release ordering.
> + * Otherwise, updates @old to the current value of @v.
> + *
> + * Safe to use in noinstr code; prefer atomic64_try_cmpxchg_release() elsewhere.
> + *
> + * Return: @true if the exchange occurred, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
> {
> @@ -2616,6 +4310,19 @@ raw_atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
> #endif
> }
>
> +/**
> + * raw_atomic64_try_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering
> + * @v: pointer to atomic64_t
> + * @old: pointer to s64 value to compare with
> + * @new: s64 value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with relaxed ordering.
> + * Otherwise, updates @old to the current value of @v.
> + *
> + * Safe to use in noinstr code; prefer atomic64_try_cmpxchg_relaxed() elsewhere.
> + *
> + * Return: @true if the exchange occurred, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
> {
> @@ -2632,6 +4339,17 @@ raw_atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
> #endif
> }
>
> +/**
> + * raw_atomic64_sub_and_test() - atomic subtract and test if zero with full ordering
> + * @i: s64 value to subtract
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v - @i) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_sub_and_test() elsewhere.
> + *
> + * Return: @true if the resulting value of @v is zero, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic64_sub_and_test(s64 i, atomic64_t *v)
> {
> @@ -2642,6 +4360,16 @@ raw_atomic64_sub_and_test(s64 i, atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_dec_and_test() - atomic decrement and test if zero with full ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v - 1) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_dec_and_test() elsewhere.
> + *
> + * Return: @true if the resulting value of @v is zero, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic64_dec_and_test(atomic64_t *v)
> {
> @@ -2652,6 +4380,16 @@ raw_atomic64_dec_and_test(atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_inc_and_test() - atomic increment and test if zero with full ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + 1) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_inc_and_test() elsewhere.
> + *
> + * Return: @true if the resulting value of @v is zero, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic64_inc_and_test(atomic64_t *v)
> {
> @@ -2662,6 +4400,17 @@ raw_atomic64_inc_and_test(atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_add_negative() - atomic add and test if negative with full ordering
> + * @i: s64 value to add
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + @i) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_add_negative() elsewhere.
> + *
> + * Return: @true if the resulting value of @v is negative, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic64_add_negative(s64 i, atomic64_t *v)
> {
> @@ -2678,6 +4427,17 @@ raw_atomic64_add_negative(s64 i, atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_add_negative_acquire() - atomic add and test if negative with acquire ordering
> + * @i: s64 value to add
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + @i) with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_add_negative_acquire() elsewhere.
> + *
> + * Return: @true if the resulting value of @v is negative, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
> {
> @@ -2694,6 +4454,17 @@ raw_atomic64_add_negative_acquire(s64 i, atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_add_negative_release() - atomic add and test if negative with release ordering
> + * @i: s64 value to add
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + @i) with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_add_negative_release() elsewhere.
> + *
> + * Return: @true if the resulting value of @v is negative, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic64_add_negative_release(s64 i, atomic64_t *v)
> {
> @@ -2709,6 +4480,17 @@ raw_atomic64_add_negative_release(s64 i, atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_add_negative_relaxed() - atomic add and test if negative with relaxed ordering
> + * @i: s64 value to add
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + @i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_add_negative_relaxed() elsewhere.
> + *
> + * Return: @true if the resulting value of @v is negative, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
> {
> @@ -2721,6 +4503,18 @@ raw_atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_fetch_add_unless() - atomic add unless value with full ordering
> + * @v: pointer to atomic64_t
> + * @a: s64 value to add
> + * @u: s64 value to compare with
> + *
> + * If (@v != @u), atomically updates @v to (@v + @a) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_fetch_add_unless() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> raw_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
> {
> @@ -2738,6 +4532,18 @@ raw_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
> #endif
> }
>
> +/**
> + * raw_atomic64_add_unless() - atomic add unless value with full ordering
> + * @v: pointer to atomic64_t
> + * @a: s64 value to add
> + * @u: s64 value to compare with
> + *
> + * If (@v != @u), atomically updates @v to (@v + @a) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_add_unless() elsewhere.
> + *
> + * Return: @true if @v was updated, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
> {
> @@ -2748,6 +4554,16 @@ raw_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
> #endif
> }
>
> +/**
> + * raw_atomic64_inc_not_zero() - atomic increment unless zero with full ordering
> + * @v: pointer to atomic64_t
> + *
> + * If (@v != 0), atomically updates @v to (@v + 1) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_inc_not_zero() elsewhere.
> + *
> + * Return: @true if @v was updated, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic64_inc_not_zero(atomic64_t *v)
> {
> @@ -2758,6 +4574,16 @@ raw_atomic64_inc_not_zero(atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_inc_unless_negative() - atomic increment unless negative with full ordering
> + * @v: pointer to atomic64_t
> + *
> + * If (@v >= 0), atomically updates @v to (@v + 1) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_inc_unless_negative() elsewhere.
> + *
> + * Return: @true if @v was updated, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic64_inc_unless_negative(atomic64_t *v)
> {
> @@ -2775,6 +4601,16 @@ raw_atomic64_inc_unless_negative(atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_dec_unless_positive() - atomic decrement unless positive with full ordering
> + * @v: pointer to atomic64_t
> + *
> + * If (@v <= 0), atomically updates @v to (@v - 1) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_dec_unless_positive() elsewhere.
> + *
> + * Return: @true if @v was updated, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic64_dec_unless_positive(atomic64_t *v)
> {
> @@ -2792,6 +4628,16 @@ raw_atomic64_dec_unless_positive(atomic64_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic64_dec_if_positive() - atomic decrement if positive with full ordering
> + * @v: pointer to atomic64_t
> + *
> + * If (@v > 0), atomically updates @v to (@v - 1) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic64_dec_if_positive() elsewhere.
> + *
> + * Return: The original value of @v minus one, regardless of whether @v was updated.
> + */
> static __always_inline s64
> raw_atomic64_dec_if_positive(atomic64_t *v)
> {
> @@ -2811,4 +4657,4 @@ raw_atomic64_dec_if_positive(atomic64_t *v)
> }
>
> #endif /* _LINUX_ATOMIC_FALLBACK_H */
> -// 205e090382132f1fc85e48b46e722865f9c81309
> +// 3916f02c038baa3f5190d275f68b9211667fcc9d
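
(Reader's note, not part of the patch: the raw_*() ops generated above exist so that
noinstr code can use atomics without tripping the sanitizers. A hypothetical sketch,
with "nmi_count" and "example_nmi_enter" invented for illustration:

| static atomic_t nmi_count;
|
| static noinstr void example_nmi_enter(void)
| {
| 	raw_atomic_inc(&nmi_count);
| }

Ordinary kernel code should keep using the instrumented atomic_*() wrappers from the
header below.)
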
> diff --git a/include/linux/atomic/atomic-instrumented.h b/include/linux/atomic/atomic-instrumented.h
> index 5491c89dc03a0..ebfc795f921b9 100644
> --- a/include/linux/atomic/atomic-instrumented.h
> +++ b/include/linux/atomic/atomic-instrumented.h
> @@ -16,6 +16,16 @@
> #include <linux/compiler.h>
> #include <linux/instrumented.h>
>
> +/**
> + * atomic_read() - atomic load with relaxed ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically loads the value of @v with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_read() there.
> + *
> + * Return: The value loaded from @v.
> + */
> static __always_inline int
> atomic_read(const atomic_t *v)
> {
> @@ -23,6 +33,16 @@ atomic_read(const atomic_t *v)
> return raw_atomic_read(v);
> }
>
> +/**
> + * atomic_read_acquire() - atomic load with acquire ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically loads the value of @v with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_read_acquire() there.
> + *
> + * Return: The value loaded from @v.
> + */
> static __always_inline int
> atomic_read_acquire(const atomic_t *v)
> {
> @@ -30,6 +50,17 @@ atomic_read_acquire(const atomic_t *v)
> return raw_atomic_read_acquire(v);
> }
>
> +/**
> + * atomic_set() - atomic set with relaxed ordering
> + * @v: pointer to atomic_t
> + * @i: int value to assign
> + *
> + * Atomically sets @v to @i with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_set() there.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> atomic_set(atomic_t *v, int i)
> {
> @@ -37,6 +68,17 @@ atomic_set(atomic_t *v, int i)
> raw_atomic_set(v, i);
> }
>
> +/**
> + * atomic_set_release() - atomic set with release ordering
> + * @v: pointer to atomic_t
> + * @i: int value to assign
> + *
> + * Atomically sets @v to @i with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_set_release() there.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> atomic_set_release(atomic_t *v, int i)
> {
> @@ -45,6 +87,17 @@ atomic_set_release(atomic_t *v, int i)
> raw_atomic_set_release(v, i);
> }
>
> +/**
> + * atomic_add() - atomic add with relaxed ordering
> + * @i: int value to add
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + @i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_add() there.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> atomic_add(int i, atomic_t *v)
> {
> @@ -52,6 +105,17 @@ atomic_add(int i, atomic_t *v)
> raw_atomic_add(i, v);
> }
>
> +/**
> + * atomic_add_return() - atomic add with full ordering
> + * @i: int value to add
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + @i) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_add_return() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline int
> atomic_add_return(int i, atomic_t *v)
> {
> @@ -60,6 +124,17 @@ atomic_add_return(int i, atomic_t *v)
> return raw_atomic_add_return(i, v);
> }
>
> +/**
> + * atomic_add_return_acquire() - atomic add with acquire ordering
> + * @i: int value to add
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + @i) with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_add_return_acquire() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline int
> atomic_add_return_acquire(int i, atomic_t *v)
> {
> @@ -67,6 +142,17 @@ atomic_add_return_acquire(int i, atomic_t *v)
> return raw_atomic_add_return_acquire(i, v);
> }
>
> +/**
> + * atomic_add_return_release() - atomic add with release ordering
> + * @i: int value to add
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + @i) with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_add_return_release() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline int
> atomic_add_return_release(int i, atomic_t *v)
> {
> @@ -75,6 +161,17 @@ atomic_add_return_release(int i, atomic_t *v)
> return raw_atomic_add_return_release(i, v);
> }
>
> +/**
> + * atomic_add_return_relaxed() - atomic add with relaxed ordering
> + * @i: int value to add
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + @i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_add_return_relaxed() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline int
> atomic_add_return_relaxed(int i, atomic_t *v)
> {
> @@ -82,6 +179,17 @@ atomic_add_return_relaxed(int i, atomic_t *v)
> return raw_atomic_add_return_relaxed(i, v);
> }
>
> +/**
> + * atomic_fetch_add() - atomic add with full ordering
> + * @i: int value to add
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + @i) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_fetch_add() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> atomic_fetch_add(int i, atomic_t *v)
> {
> @@ -90,6 +198,17 @@ atomic_fetch_add(int i, atomic_t *v)
> return raw_atomic_fetch_add(i, v);
> }
>
> +/**
> + * atomic_fetch_add_acquire() - atomic add with acquire ordering
> + * @i: int value to add
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + @i) with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_fetch_add_acquire() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> atomic_fetch_add_acquire(int i, atomic_t *v)
> {
> @@ -97,6 +216,17 @@ atomic_fetch_add_acquire(int i, atomic_t *v)
> return raw_atomic_fetch_add_acquire(i, v);
> }
>
> +/**
> + * atomic_fetch_add_release() - atomic add with release ordering
> + * @i: int value to add
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + @i) with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_fetch_add_release() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> atomic_fetch_add_release(int i, atomic_t *v)
> {
> @@ -105,6 +235,17 @@ atomic_fetch_add_release(int i, atomic_t *v)
> return raw_atomic_fetch_add_release(i, v);
> }
>
> +/**
> + * atomic_fetch_add_relaxed() - atomic add with relaxed ordering
> + * @i: int value to add
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + @i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_fetch_add_relaxed() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> atomic_fetch_add_relaxed(int i, atomic_t *v)
> {
> @@ -112,6 +253,17 @@ atomic_fetch_add_relaxed(int i, atomic_t *v)
> return raw_atomic_fetch_add_relaxed(i, v);
> }
>
> +/**
> + * atomic_sub() - atomic subtract with relaxed ordering
> + * @i: int value to subtract
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v - @i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_sub() there.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> atomic_sub(int i, atomic_t *v)
> {
> @@ -119,6 +271,17 @@ atomic_sub(int i, atomic_t *v)
> raw_atomic_sub(i, v);
> }
>
> +/**
> + * atomic_sub_return() - atomic subtract with full ordering
> + * @i: int value to subtract
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v - @i) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_sub_return() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline int
> atomic_sub_return(int i, atomic_t *v)
> {
> @@ -127,6 +290,17 @@ atomic_sub_return(int i, atomic_t *v)
> return raw_atomic_sub_return(i, v);
> }
>
> +/**
> + * atomic_sub_return_acquire() - atomic subtract with acquire ordering
> + * @i: int value to subtract
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v - @i) with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_sub_return_acquire() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline int
> atomic_sub_return_acquire(int i, atomic_t *v)
> {
> @@ -134,6 +308,17 @@ atomic_sub_return_acquire(int i, atomic_t *v)
> return raw_atomic_sub_return_acquire(i, v);
> }
>
> +/**
> + * atomic_sub_return_release() - atomic subtract with release ordering
> + * @i: int value to subtract
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v - @i) with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_sub_return_release() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline int
> atomic_sub_return_release(int i, atomic_t *v)
> {
> @@ -142,6 +327,17 @@ atomic_sub_return_release(int i, atomic_t *v)
> return raw_atomic_sub_return_release(i, v);
> }
>
> +/**
> + * atomic_sub_return_relaxed() - atomic subtract with relaxed ordering
> + * @i: int value to subtract
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v - @i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_sub_return_relaxed() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline int
> atomic_sub_return_relaxed(int i, atomic_t *v)
> {
> @@ -149,6 +345,17 @@ atomic_sub_return_relaxed(int i, atomic_t *v)
> return raw_atomic_sub_return_relaxed(i, v);
> }
>
> +/**
> + * atomic_fetch_sub() - atomic subtract with full ordering
> + * @i: int value to subtract
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v - @i) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_fetch_sub() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> atomic_fetch_sub(int i, atomic_t *v)
> {
> @@ -157,6 +364,17 @@ atomic_fetch_sub(int i, atomic_t *v)
> return raw_atomic_fetch_sub(i, v);
> }
>
> +/**
> + * atomic_fetch_sub_acquire() - atomic subtract with acquire ordering
> + * @i: int value to subtract
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v - @i) with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_fetch_sub_acquire() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> atomic_fetch_sub_acquire(int i, atomic_t *v)
> {
> @@ -164,6 +382,17 @@ atomic_fetch_sub_acquire(int i, atomic_t *v)
> return raw_atomic_fetch_sub_acquire(i, v);
> }
>
> +/**
> + * atomic_fetch_sub_release() - atomic subtract with release ordering
> + * @i: int value to subtract
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v - @i) with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_fetch_sub_release() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> atomic_fetch_sub_release(int i, atomic_t *v)
> {
> @@ -172,6 +401,17 @@ atomic_fetch_sub_release(int i, atomic_t *v)
> return raw_atomic_fetch_sub_release(i, v);
> }
>
> +/**
> + * atomic_fetch_sub_relaxed() - atomic subtract with relaxed ordering
> + * @i: int value to subtract
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v - @i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_fetch_sub_relaxed() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> atomic_fetch_sub_relaxed(int i, atomic_t *v)
> {
> @@ -179,6 +419,16 @@ atomic_fetch_sub_relaxed(int i, atomic_t *v)
> return raw_atomic_fetch_sub_relaxed(i, v);
> }
>
> +/**
> + * atomic_inc() - atomic increment with relaxed ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + 1) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_inc() there.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> atomic_inc(atomic_t *v)
> {
> @@ -186,6 +436,16 @@ atomic_inc(atomic_t *v)
> raw_atomic_inc(v);
> }
>
> +/**
> + * atomic_inc_return() - atomic increment with full ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + 1) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_inc_return() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline int
> atomic_inc_return(atomic_t *v)
> {
> @@ -194,6 +454,16 @@ atomic_inc_return(atomic_t *v)
> return raw_atomic_inc_return(v);
> }
>
> +/**
> + * atomic_inc_return_acquire() - atomic increment with acquire ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + 1) with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_inc_return_acquire() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline int
> atomic_inc_return_acquire(atomic_t *v)
> {
> @@ -201,6 +471,16 @@ atomic_inc_return_acquire(atomic_t *v)
> return raw_atomic_inc_return_acquire(v);
> }
>
> +/**
> + * atomic_inc_return_release() - atomic increment with release ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + 1) with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_inc_return_release() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline int
> atomic_inc_return_release(atomic_t *v)
> {
> @@ -209,6 +489,16 @@ atomic_inc_return_release(atomic_t *v)
> return raw_atomic_inc_return_release(v);
> }
>
> +/**
> + * atomic_inc_return_relaxed() - atomic increment with relaxed ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + 1) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_inc_return_relaxed() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline int
> atomic_inc_return_relaxed(atomic_t *v)
> {
> @@ -216,6 +506,16 @@ atomic_inc_return_relaxed(atomic_t *v)
> return raw_atomic_inc_return_relaxed(v);
> }
>
> +/**
> + * atomic_fetch_inc() - atomic increment with full ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + 1) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_fetch_inc() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> atomic_fetch_inc(atomic_t *v)
> {
> @@ -224,6 +524,16 @@ atomic_fetch_inc(atomic_t *v)
> return raw_atomic_fetch_inc(v);
> }
>
> +/**
> + * atomic_fetch_inc_acquire() - atomic increment with acquire ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + 1) with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_fetch_inc_acquire() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> atomic_fetch_inc_acquire(atomic_t *v)
> {
> @@ -231,6 +541,16 @@ atomic_fetch_inc_acquire(atomic_t *v)
> return raw_atomic_fetch_inc_acquire(v);
> }
>
> +/**
> + * atomic_fetch_inc_release() - atomic increment with release ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + 1) with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_fetch_inc_release() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> atomic_fetch_inc_release(atomic_t *v)
> {
> @@ -239,6 +559,16 @@ atomic_fetch_inc_release(atomic_t *v)
> return raw_atomic_fetch_inc_release(v);
> }
>
> +/**
> + * atomic_fetch_inc_relaxed() - atomic increment with relaxed ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + 1) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_fetch_inc_relaxed() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> atomic_fetch_inc_relaxed(atomic_t *v)
> {
> @@ -246,6 +576,16 @@ atomic_fetch_inc_relaxed(atomic_t *v)
> return raw_atomic_fetch_inc_relaxed(v);
> }
>
> +/**
> + * atomic_dec() - atomic decrement with relaxed ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v - 1) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_dec() there.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> atomic_dec(atomic_t *v)
> {
> @@ -253,6 +593,16 @@ atomic_dec(atomic_t *v)
> raw_atomic_dec(v);
> }
>
> +/**
> + * atomic_dec_return() - atomic decrement with full ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v - 1) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_dec_return() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline int
> atomic_dec_return(atomic_t *v)
> {
> @@ -261,6 +611,16 @@ atomic_dec_return(atomic_t *v)
> return raw_atomic_dec_return(v);
> }
>
> +/**
> + * atomic_dec_return_acquire() - atomic decrement with acquire ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v - 1) with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_dec_return_acquire() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline int
> atomic_dec_return_acquire(atomic_t *v)
> {
> @@ -268,6 +628,16 @@ atomic_dec_return_acquire(atomic_t *v)
> return raw_atomic_dec_return_acquire(v);
> }
>
> +/**
> + * atomic_dec_return_release() - atomic decrement with release ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v - 1) with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_dec_return_release() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline int
> atomic_dec_return_release(atomic_t *v)
> {
> @@ -276,6 +646,16 @@ atomic_dec_return_release(atomic_t *v)
> return raw_atomic_dec_return_release(v);
> }
>
> +/**
> + * atomic_dec_return_relaxed() - atomic decrement with relaxed ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v - 1) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_dec_return_relaxed() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline int
> atomic_dec_return_relaxed(atomic_t *v)
> {
> @@ -283,6 +663,16 @@ atomic_dec_return_relaxed(atomic_t *v)
> return raw_atomic_dec_return_relaxed(v);
> }
>
> +/**
> + * atomic_fetch_dec() - atomic decrement with full ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v - 1) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_fetch_dec() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> atomic_fetch_dec(atomic_t *v)
> {
> @@ -291,6 +681,16 @@ atomic_fetch_dec(atomic_t *v)
> return raw_atomic_fetch_dec(v);
> }
>
> +/**
> + * atomic_fetch_dec_acquire() - atomic decrement with acquire ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v - 1) with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_fetch_dec_acquire() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> atomic_fetch_dec_acquire(atomic_t *v)
> {
> @@ -298,6 +698,16 @@ atomic_fetch_dec_acquire(atomic_t *v)
> return raw_atomic_fetch_dec_acquire(v);
> }
>
> +/**
> + * atomic_fetch_dec_release() - atomic decrement with release ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v - 1) with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_fetch_dec_release() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> atomic_fetch_dec_release(atomic_t *v)
> {
> @@ -306,6 +716,16 @@ atomic_fetch_dec_release(atomic_t *v)
> return raw_atomic_fetch_dec_release(v);
> }
>
> +/**
> + * atomic_fetch_dec_relaxed() - atomic decrement with relaxed ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v - 1) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_fetch_dec_relaxed() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> atomic_fetch_dec_relaxed(atomic_t *v)
> {
> @@ -313,6 +733,17 @@ atomic_fetch_dec_relaxed(atomic_t *v)
> return raw_atomic_fetch_dec_relaxed(v);
> }
>
> +/**
> + * atomic_and() - atomic bitwise AND with relaxed ordering
> + * @i: int value
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v & @i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_and() there.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> atomic_and(int i, atomic_t *v)
> {
> @@ -320,6 +751,17 @@ atomic_and(int i, atomic_t *v)
> raw_atomic_and(i, v);
> }
>
> +/**
> + * atomic_fetch_and() - atomic bitwise AND with full ordering
> + * @i: int value
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v & @i) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_fetch_and() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> atomic_fetch_and(int i, atomic_t *v)
> {
> @@ -328,6 +770,17 @@ atomic_fetch_and(int i, atomic_t *v)
> return raw_atomic_fetch_and(i, v);
> }
>
> +/**
> + * atomic_fetch_and_acquire() - atomic bitwise AND with acquire ordering
> + * @i: int value
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v & @i) with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_fetch_and_acquire() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> atomic_fetch_and_acquire(int i, atomic_t *v)
> {
> @@ -335,6 +788,17 @@ atomic_fetch_and_acquire(int i, atomic_t *v)
> return raw_atomic_fetch_and_acquire(i, v);
> }
>
> +/**
> + * atomic_fetch_and_release() - atomic bitwise AND with release ordering
> + * @i: int value
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v & @i) with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_fetch_and_release() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> atomic_fetch_and_release(int i, atomic_t *v)
> {
> @@ -343,6 +807,17 @@ atomic_fetch_and_release(int i, atomic_t *v)
> return raw_atomic_fetch_and_release(i, v);
> }
>
> +/**
> + * atomic_fetch_and_relaxed() - atomic bitwise AND with relaxed ordering
> + * @i: int value
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v & @i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_fetch_and_relaxed() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> atomic_fetch_and_relaxed(int i, atomic_t *v)
> {
> @@ -350,6 +825,17 @@ atomic_fetch_and_relaxed(int i, atomic_t *v)
> return raw_atomic_fetch_and_relaxed(i, v);
> }
>
> +/**
> + * atomic_andnot() - atomic bitwise AND NOT with relaxed ordering
> + * @i: int value
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v & ~@i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_andnot() there.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> atomic_andnot(int i, atomic_t *v)
> {
> @@ -357,6 +843,17 @@ atomic_andnot(int i, atomic_t *v)
> raw_atomic_andnot(i, v);
> }
>
> +/**
> + * atomic_fetch_andnot() - atomic bitwise AND NOT with full ordering
> + * @i: int value
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v & ~@i) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_fetch_andnot() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> atomic_fetch_andnot(int i, atomic_t *v)
> {
> @@ -365,6 +862,17 @@ atomic_fetch_andnot(int i, atomic_t *v)
> return raw_atomic_fetch_andnot(i, v);
> }
>
> +/**
> + * atomic_fetch_andnot_acquire() - atomic bitwise AND NOT with acquire ordering
> + * @i: int value
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v & ~@i) with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_fetch_andnot_acquire() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> atomic_fetch_andnot_acquire(int i, atomic_t *v)
> {
> @@ -372,6 +880,17 @@ atomic_fetch_andnot_acquire(int i, atomic_t *v)
> return raw_atomic_fetch_andnot_acquire(i, v);
> }
>
> +/**
> + * atomic_fetch_andnot_release() - atomic bitwise AND NOT with release ordering
> + * @i: int value
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v & ~@i) with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_fetch_andnot_release() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> atomic_fetch_andnot_release(int i, atomic_t *v)
> {
> @@ -380,6 +899,17 @@ atomic_fetch_andnot_release(int i, atomic_t *v)
> return raw_atomic_fetch_andnot_release(i, v);
> }
>
> +/**
> + * atomic_fetch_andnot_relaxed() - atomic bitwise AND NOT with relaxed ordering
> + * @i: int value
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v & ~@i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_fetch_andnot_relaxed() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> atomic_fetch_andnot_relaxed(int i, atomic_t *v)
> {
> @@ -387,6 +917,17 @@ atomic_fetch_andnot_relaxed(int i, atomic_t *v)
> return raw_atomic_fetch_andnot_relaxed(i, v);
> }
>
> +/**
> + * atomic_or() - atomic bitwise OR with relaxed ordering
> + * @i: int value
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v | @i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_or() there.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> atomic_or(int i, atomic_t *v)
> {
> @@ -394,6 +935,17 @@ atomic_or(int i, atomic_t *v)
> raw_atomic_or(i, v);
> }
>
> +/**
> + * atomic_fetch_or() - atomic bitwise OR with full ordering
> + * @i: int value
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v | @i) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_fetch_or() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> atomic_fetch_or(int i, atomic_t *v)
> {
> @@ -402,6 +954,17 @@ atomic_fetch_or(int i, atomic_t *v)
> return raw_atomic_fetch_or(i, v);
> }
>
> +/**
> + * atomic_fetch_or_acquire() - atomic bitwise OR with acquire ordering
> + * @i: int value
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v | @i) with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_fetch_or_acquire() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> atomic_fetch_or_acquire(int i, atomic_t *v)
> {
> @@ -409,6 +972,17 @@ atomic_fetch_or_acquire(int i, atomic_t *v)
> return raw_atomic_fetch_or_acquire(i, v);
> }
>
> +/**
> + * atomic_fetch_or_release() - atomic bitwise OR with release ordering
> + * @i: int value
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v | @i) with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_fetch_or_release() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> atomic_fetch_or_release(int i, atomic_t *v)
> {
> @@ -417,6 +991,17 @@ atomic_fetch_or_release(int i, atomic_t *v)
> return raw_atomic_fetch_or_release(i, v);
> }
>
> +/**
> + * atomic_fetch_or_relaxed() - atomic bitwise OR with relaxed ordering
> + * @i: int value
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v | @i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_fetch_or_relaxed() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> atomic_fetch_or_relaxed(int i, atomic_t *v)
> {
> @@ -424,6 +1009,17 @@ atomic_fetch_or_relaxed(int i, atomic_t *v)
> return raw_atomic_fetch_or_relaxed(i, v);
> }
>
> +/**
> + * atomic_xor() - atomic bitwise XOR with relaxed ordering
> + * @i: int value
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v ^ @i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_xor() there.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> atomic_xor(int i, atomic_t *v)
> {
> @@ -431,6 +1027,17 @@ atomic_xor(int i, atomic_t *v)
> raw_atomic_xor(i, v);
> }
>
> +/**
> + * atomic_fetch_xor() - atomic bitwise XOR with full ordering
> + * @i: int value
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v ^ @i) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_fetch_xor() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> atomic_fetch_xor(int i, atomic_t *v)
> {
> @@ -439,6 +1046,17 @@ atomic_fetch_xor(int i, atomic_t *v)
> return raw_atomic_fetch_xor(i, v);
> }
>
> +/**
> + * atomic_fetch_xor_acquire() - atomic bitwise XOR with acquire ordering
> + * @i: int value
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v ^ @i) with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_fetch_xor_acquire() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> atomic_fetch_xor_acquire(int i, atomic_t *v)
> {
> @@ -446,6 +1064,17 @@ atomic_fetch_xor_acquire(int i, atomic_t *v)
> return raw_atomic_fetch_xor_acquire(i, v);
> }
>
> +/**
> + * atomic_fetch_xor_release() - atomic bitwise XOR with release ordering
> + * @i: int value
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v ^ @i) with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_fetch_xor_release() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> atomic_fetch_xor_release(int i, atomic_t *v)
> {
> @@ -454,6 +1083,17 @@ atomic_fetch_xor_release(int i, atomic_t *v)
> return raw_atomic_fetch_xor_release(i, v);
> }
>
> +/**
> + * atomic_fetch_xor_relaxed() - atomic bitwise XOR with relaxed ordering
> + * @i: int value
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v ^ @i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_fetch_xor_relaxed() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> atomic_fetch_xor_relaxed(int i, atomic_t *v)
> {
> @@ -461,6 +1101,17 @@ atomic_fetch_xor_relaxed(int i, atomic_t *v)
> return raw_atomic_fetch_xor_relaxed(i, v);
> }
>
> +/**
> + * atomic_xchg() - atomic exchange with full ordering
> + * @v: pointer to atomic_t
> + * @new: int value to assign
> + *
> + * Atomically updates @v to @new with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_xchg() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> atomic_xchg(atomic_t *v, int new)
> {
> @@ -469,6 +1120,17 @@ atomic_xchg(atomic_t *v, int new)
> return raw_atomic_xchg(v, new);
> }
>
> +/**
> + * atomic_xchg_acquire() - atomic exchange with acquire ordering
> + * @v: pointer to atomic_t
> + * @new: int value to assign
> + *
> + * Atomically updates @v to @new with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_xchg_acquire() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> atomic_xchg_acquire(atomic_t *v, int new)
> {
> @@ -476,6 +1138,17 @@ atomic_xchg_acquire(atomic_t *v, int new)
> return raw_atomic_xchg_acquire(v, new);
> }
>
> +/**
> + * atomic_xchg_release() - atomic exchange with release ordering
> + * @v: pointer to atomic_t
> + * @new: int value to assign
> + *
> + * Atomically updates @v to @new with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_xchg_release() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> atomic_xchg_release(atomic_t *v, int new)
> {
> @@ -484,6 +1157,17 @@ atomic_xchg_release(atomic_t *v, int new)
> return raw_atomic_xchg_release(v, new);
> }
>
> +/**
> + * atomic_xchg_relaxed() - atomic exchange with relaxed ordering
> + * @v: pointer to atomic_t
> + * @new: int value to assign
> + *
> + * Atomically updates @v to @new with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_xchg_relaxed() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> atomic_xchg_relaxed(atomic_t *v, int new)
> {
> @@ -491,6 +1175,18 @@ atomic_xchg_relaxed(atomic_t *v, int new)
> return raw_atomic_xchg_relaxed(v, new);
> }
>
> +/**
> + * atomic_cmpxchg() - atomic compare and exchange with full ordering
> + * @v: pointer to atomic_t
> + * @old: int value to compare with
> + * @new: int value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_cmpxchg() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> atomic_cmpxchg(atomic_t *v, int old, int new)
> {
> @@ -499,6 +1195,18 @@ atomic_cmpxchg(atomic_t *v, int old, int new)
> return raw_atomic_cmpxchg(v, old, new);
> }
>
> +/**
> + * atomic_cmpxchg_acquire() - atomic compare and exchange with acquire ordering
> + * @v: pointer to atomic_t
> + * @old: int value to compare with
> + * @new: int value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_cmpxchg_acquire() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
> {
> @@ -506,6 +1214,18 @@ atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
> return raw_atomic_cmpxchg_acquire(v, old, new);
> }
>
> +/**
> + * atomic_cmpxchg_release() - atomic compare and exchange with release ordering
> + * @v: pointer to atomic_t
> + * @old: int value to compare with
> + * @new: int value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_cmpxchg_release() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> atomic_cmpxchg_release(atomic_t *v, int old, int new)
> {
> @@ -514,6 +1234,18 @@ atomic_cmpxchg_release(atomic_t *v, int old, int new)
> return raw_atomic_cmpxchg_release(v, old, new);
> }
>
> +/**
> + * atomic_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering
> + * @v: pointer to atomic_t
> + * @old: int value to compare with
> + * @new: int value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_cmpxchg_relaxed() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
> {
> @@ -521,6 +1253,19 @@ atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
> return raw_atomic_cmpxchg_relaxed(v, old, new);
> }
>
> +/**
> + * atomic_try_cmpxchg() - atomic compare and exchange with full ordering
> + * @v: pointer to atomic_t
> + * @old: pointer to int value to compare with
> + * @new: int value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with full ordering.
> + * Otherwise, updates @old to the current value of @v.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_try_cmpxchg() there.
> + *
> + * Return: @true if the exchange occurred, @false otherwise.
> + */
> static __always_inline bool
> atomic_try_cmpxchg(atomic_t *v, int *old, int new)
> {
> @@ -530,6 +1275,19 @@ atomic_try_cmpxchg(atomic_t *v, int *old, int new)
> return raw_atomic_try_cmpxchg(v, old, new);
> }
>
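
(Illustrative only: the "updates @old" behaviour documented above is what makes the
usual cmpxchg retry loop read cleanly. A saturating-increment sketch, where 'v' is
some atomic_t pointer and the INT_MAX bound is invented for the example:

| int old = atomic_read(v);
|
| do {
| 	if (old == INT_MAX)
| 		return false;
| } while (!atomic_try_cmpxchg(v, &old, old + 1));
| return true;

On failure the loop simply retries with the value that try_cmpxchg wrote back into
'old'.)
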
> +/**
> + * atomic_try_cmpxchg_acquire() - atomic compare and exchange with acquire ordering
> + * @v: pointer to atomic_t
> + * @old: pointer to int value to compare with
> + * @new: int value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with acquire ordering.
> + * Otherwise, updates @old to the current value of @v.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_try_cmpxchg_acquire() there.
> + *
> + * Return: @true if the exchange occurred, @false otherwise.
> + */
> static __always_inline bool
> atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
> {
> @@ -538,6 +1296,19 @@ atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
> return raw_atomic_try_cmpxchg_acquire(v, old, new);
> }
>
> +/**
> + * atomic_try_cmpxchg_release() - atomic compare and exchange with release ordering
> + * @v: pointer to atomic_t
> + * @old: pointer to int value to compare with
> + * @new: int value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with release ordering.
> + * Otherwise, updates @old to the current value of @v.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_try_cmpxchg_release() there.
> + *
> + * Return: @true if the exchange occurred, @false otherwise.
> + */
> static __always_inline bool
> atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
> {
> @@ -547,6 +1318,19 @@ atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
> return raw_atomic_try_cmpxchg_release(v, old, new);
> }
>
> +/**
> + * atomic_try_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering
> + * @v: pointer to atomic_t
> + * @old: pointer to int value to compare with
> + * @new: int value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with relaxed ordering.
> + * Otherwise, updates @old to the current value of @v.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_try_cmpxchg_relaxed() there.
> + *
> + * Return: @true if the exchange occurred, @false otherwise.
> + */
> static __always_inline bool
> atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
> {
> @@ -555,6 +1339,17 @@ atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
> return raw_atomic_try_cmpxchg_relaxed(v, old, new);
> }
>
> +/**
> + * atomic_sub_and_test() - atomic subtract and test if zero with full ordering
> + * @i: int value to subtract
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v - @i) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_sub_and_test() there.
> + *
> + * Return: @true if the resulting value of @v is zero, @false otherwise.
> + */
> static __always_inline bool
> atomic_sub_and_test(int i, atomic_t *v)
> {
> @@ -563,6 +1358,16 @@ atomic_sub_and_test(int i, atomic_t *v)
> return raw_atomic_sub_and_test(i, v);
> }
>
> +/**
> + * atomic_dec_and_test() - atomic decrement and test if zero with full ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v - 1) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_dec_and_test() there.
> + *
> + * Return: @true if the resulting value of @v is zero, @false otherwise.
> + */
> static __always_inline bool
> atomic_dec_and_test(atomic_t *v)
> {
> @@ -571,6 +1376,16 @@ atomic_dec_and_test(atomic_t *v)
> return raw_atomic_dec_and_test(v);
> }
>
> +/**
> + * atomic_inc_and_test() - atomic increment and test if zero with full ordering
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + 1) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_inc_and_test() there.
> + *
> + * Return: @true if the resulting value of @v is zero, @false otherwise.
> + */
> static __always_inline bool
> atomic_inc_and_test(atomic_t *v)
> {
> @@ -579,6 +1394,17 @@ atomic_inc_and_test(atomic_t *v)
> return raw_atomic_inc_and_test(v);
> }
>
> +/**
> + * atomic_add_negative() - atomic add and test if negative with full ordering
> + * @i: int value to add
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + @i) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_add_negative() there.
> + *
> + * Return: @true if the resulting value of @v is negative, @false otherwise.
> + */
> static __always_inline bool
> atomic_add_negative(int i, atomic_t *v)
> {
> @@ -587,6 +1413,17 @@ atomic_add_negative(int i, atomic_t *v)
> return raw_atomic_add_negative(i, v);
> }
>
> +/**
> + * atomic_add_negative_acquire() - atomic add and test if negative with acquire ordering
> + * @i: int value to add
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + @i) with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_add_negative_acquire() there.
> + *
> + * Return: @true if the resulting value of @v is negative, @false otherwise.
> + */
> static __always_inline bool
> atomic_add_negative_acquire(int i, atomic_t *v)
> {
> @@ -594,6 +1431,17 @@ atomic_add_negative_acquire(int i, atomic_t *v)
> return raw_atomic_add_negative_acquire(i, v);
> }
>
> +/**
> + * atomic_add_negative_release() - atomic add and test if negative with release ordering
> + * @i: int value to add
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + @i) with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_add_negative_release() there.
> + *
> + * Return: @true if the resulting value of @v is negative, @false otherwise.
> + */
> static __always_inline bool
> atomic_add_negative_release(int i, atomic_t *v)
> {
> @@ -602,6 +1450,17 @@ atomic_add_negative_release(int i, atomic_t *v)
> return raw_atomic_add_negative_release(i, v);
> }
>
> +/**
> + * atomic_add_negative_relaxed() - atomic add and test if negative with relaxed ordering
> + * @i: int value to add
> + * @v: pointer to atomic_t
> + *
> + * Atomically updates @v to (@v + @i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_add_negative_relaxed() there.
> + *
> + * Return: @true if the resulting value of @v is negative, @false otherwise.
> + */
> static __always_inline bool
> atomic_add_negative_relaxed(int i, atomic_t *v)
> {
> @@ -609,6 +1468,18 @@ atomic_add_negative_relaxed(int i, atomic_t *v)
> return raw_atomic_add_negative_relaxed(i, v);
> }
>
> +/**
> + * atomic_fetch_add_unless() - atomic add unless value with full ordering
> + * @v: pointer to atomic_t
> + * @a: int value to add
> + * @u: int value to compare with
> + *
> + * If (@v != @u), atomically updates @v to (@v + @a) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_fetch_add_unless() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline int
> atomic_fetch_add_unless(atomic_t *v, int a, int u)
> {
> @@ -617,6 +1488,18 @@ atomic_fetch_add_unless(atomic_t *v, int a, int u)
> return raw_atomic_fetch_add_unless(v, a, u);
> }
>
> +/**
> + * atomic_add_unless() - atomic add unless value with full ordering
> + * @v: pointer to atomic_t
> + * @a: int value to add
> + * @u: int value to compare with
> + *
> + * If (@v != @u), atomically updates @v to (@v + @a) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_add_unless() there.
> + *
> + * Return: @true if @v was updated, @false otherwise.
> + */
> static __always_inline bool
> atomic_add_unless(atomic_t *v, int a, int u)
> {
> @@ -625,6 +1508,16 @@ atomic_add_unless(atomic_t *v, int a, int u)
> return raw_atomic_add_unless(v, a, u);
> }
>
> +/**
> + * atomic_inc_not_zero() - atomic increment unless zero with full ordering
> + * @v: pointer to atomic_t
> + *
> + * If (@v != 0), atomically updates @v to (@v + 1) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_inc_not_zero() there.
> + *
> + * Return: @true if @v was updated, @false otherwise.
> + */
> static __always_inline bool
> atomic_inc_not_zero(atomic_t *v)
> {
> @@ -633,6 +1526,16 @@ atomic_inc_not_zero(atomic_t *v)
> return raw_atomic_inc_not_zero(v);
> }
>
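
(Illustrative only: atomic_inc_not_zero() is the classic "take a reference unless the
object is already going away" primitive, e.g. with a made-up "struct example_obj" and
"refcnt" member:

| static bool example_obj_get(struct example_obj *obj)
| {
| 	return atomic_inc_not_zero(&obj->refcnt);
| }

New code with plain reference-count semantics should normally prefer refcount_t, which
has saturation checks built in.)
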
> +/**
> + * atomic_inc_unless_negative() - atomic increment unless negative with full ordering
> + * @v: pointer to atomic_t
> + *
> + * If (@v >= 0), atomically updates @v to (@v + 1) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_inc_unless_negative() there.
> + *
> + * Return: @true if @v was updated, @false otherwise.
> + */
> static __always_inline bool
> atomic_inc_unless_negative(atomic_t *v)
> {
> @@ -641,6 +1544,16 @@ atomic_inc_unless_negative(atomic_t *v)
> return raw_atomic_inc_unless_negative(v);
> }
>
> +/**
> + * atomic_dec_unless_positive() - atomic decrement unless positive with full ordering
> + * @v: pointer to atomic_t
> + *
> + * If (@v <= 0), atomically updates @v to (@v - 1) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_dec_unless_positive() there.
> + *
> + * Return: @true if @v was updated, @false otherwise.
> + */
> static __always_inline bool
> atomic_dec_unless_positive(atomic_t *v)
> {
> @@ -649,6 +1562,16 @@ atomic_dec_unless_positive(atomic_t *v)
> return raw_atomic_dec_unless_positive(v);
> }
>
> +/**
> + * atomic_dec_if_positive() - atomic decrement if positive with full ordering
> + * @v: pointer to atomic_t
> + *
> + * If (@v > 0), atomically updates @v to (@v - 1) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_dec_if_positive() there.
> + *
> + * Return: The original value of @v minus one, regardless of whether @v was updated.
> + */
> static __always_inline int
> atomic_dec_if_positive(atomic_t *v)
> {
> @@ -657,6 +1580,16 @@ atomic_dec_if_positive(atomic_t *v)
> return raw_atomic_dec_if_positive(v);
> }
>
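
(Illustrative only: since atomic_dec_if_positive() returns the decremented value rather
than a bool, callers typically test the sign, e.g. with an invented "free_slots"
counter:

| if (atomic_dec_if_positive(&free_slots) < 0)
| 	return -EBUSY;	/* no slots left; the counter was not modified */

A non-negative return means a slot was successfully claimed.)
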
> +/**
> + * atomic64_read() - atomic load with relaxed ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically loads the value of @v with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_read() there.
> + *
> + * Return: The value loaded from @v.
> + */
> static __always_inline s64
> atomic64_read(const atomic64_t *v)
> {
> @@ -664,6 +1597,16 @@ atomic64_read(const atomic64_t *v)
> return raw_atomic64_read(v);
> }
>
> +/**
> + * atomic64_read_acquire() - atomic load with acquire ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically loads the value of @v with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_read_acquire() there.
> + *
> + * Return: The value loaded from @v.
> + */
> static __always_inline s64
> atomic64_read_acquire(const atomic64_t *v)
> {
> @@ -671,6 +1614,17 @@ atomic64_read_acquire(const atomic64_t *v)
> return raw_atomic64_read_acquire(v);
> }
>
> +/**
> + * atomic64_set() - atomic set with relaxed ordering
> + * @v: pointer to atomic64_t
> + * @i: s64 value to assign
> + *
> + * Atomically sets @v to @i with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_set() there.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> atomic64_set(atomic64_t *v, s64 i)
> {
> @@ -678,6 +1632,17 @@ atomic64_set(atomic64_t *v, s64 i)
> raw_atomic64_set(v, i);
> }
>
> +/**
> + * atomic64_set_release() - atomic set with release ordering
> + * @v: pointer to atomic64_t
> + * @i: s64 value to assign
> + *
> + * Atomically sets @v to @i with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_set_release() there.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> atomic64_set_release(atomic64_t *v, s64 i)
> {
> @@ -686,6 +1651,17 @@ atomic64_set_release(atomic64_t *v, s64 i)
> raw_atomic64_set_release(v, i);
> }
>
> +/**
> + * atomic64_add() - atomic add with relaxed ordering
> + * @i: s64 value to add
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + @i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_add() there.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> atomic64_add(s64 i, atomic64_t *v)
> {
> @@ -693,6 +1669,17 @@ atomic64_add(s64 i, atomic64_t *v)
> raw_atomic64_add(i, v);
> }
>
> +/**
> + * atomic64_add_return() - atomic add with full ordering
> + * @i: s64 value to add
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + @i) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_add_return() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline s64
> atomic64_add_return(s64 i, atomic64_t *v)
> {
> @@ -701,6 +1688,17 @@ atomic64_add_return(s64 i, atomic64_t *v)
> return raw_atomic64_add_return(i, v);
> }
>
> +/**
> + * atomic64_add_return_acquire() - atomic add with acquire ordering
> + * @i: s64 value to add
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + @i) with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_add_return_acquire() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline s64
> atomic64_add_return_acquire(s64 i, atomic64_t *v)
> {
> @@ -708,6 +1706,17 @@ atomic64_add_return_acquire(s64 i, atomic64_t *v)
> return raw_atomic64_add_return_acquire(i, v);
> }
>
> +/**
> + * atomic64_add_return_release() - atomic add with release ordering
> + * @i: s64 value to add
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + @i) with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_add_return_release() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline s64
> atomic64_add_return_release(s64 i, atomic64_t *v)
> {
> @@ -716,6 +1725,17 @@ atomic64_add_return_release(s64 i, atomic64_t *v)
> return raw_atomic64_add_return_release(i, v);
> }
>
> +/**
> + * atomic64_add_return_relaxed() - atomic add with relaxed ordering
> + * @i: s64 value to add
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + @i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_add_return_relaxed() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline s64
> atomic64_add_return_relaxed(s64 i, atomic64_t *v)
> {
> @@ -723,6 +1743,17 @@ atomic64_add_return_relaxed(s64 i, atomic64_t *v)
> return raw_atomic64_add_return_relaxed(i, v);
> }
>
> +/**
> + * atomic64_fetch_add() - atomic add with full ordering
> + * @i: s64 value to add
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + @i) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_fetch_add() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> atomic64_fetch_add(s64 i, atomic64_t *v)
> {
> @@ -731,6 +1762,17 @@ atomic64_fetch_add(s64 i, atomic64_t *v)
> return raw_atomic64_fetch_add(i, v);
> }
>
> +/**
> + * atomic64_fetch_add_acquire() - atomic add with acquire ordering
> + * @i: s64 value to add
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + @i) with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_fetch_add_acquire() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
> {
> @@ -738,6 +1780,17 @@ atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
> return raw_atomic64_fetch_add_acquire(i, v);
> }
>
> +/**
> + * atomic64_fetch_add_release() - atomic add with release ordering
> + * @i: s64 value to add
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + @i) with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_fetch_add_release() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> atomic64_fetch_add_release(s64 i, atomic64_t *v)
> {
> @@ -746,6 +1799,17 @@ atomic64_fetch_add_release(s64 i, atomic64_t *v)
> return raw_atomic64_fetch_add_release(i, v);
> }
>
> +/**
> + * atomic64_fetch_add_relaxed() - atomic add with relaxed ordering
> + * @i: s64 value to add
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + @i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_fetch_add_relaxed() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> atomic64_fetch_add_relaxed(s64 i, atomic64_t *v)
> {
> @@ -753,6 +1817,17 @@ atomic64_fetch_add_relaxed(s64 i, atomic64_t *v)
> return raw_atomic64_fetch_add_relaxed(i, v);
> }
>
> +/**
> + * atomic64_sub() - atomic subtract with relaxed ordering
> + * @i: s64 value to subtract
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v - @i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_sub() there.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> atomic64_sub(s64 i, atomic64_t *v)
> {
> @@ -760,6 +1835,17 @@ atomic64_sub(s64 i, atomic64_t *v)
> raw_atomic64_sub(i, v);
> }
>
> +/**
> + * atomic64_sub_return() - atomic subtract with full ordering
> + * @i: s64 value to subtract
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v - @i) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_sub_return() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline s64
> atomic64_sub_return(s64 i, atomic64_t *v)
> {
> @@ -768,6 +1854,17 @@ atomic64_sub_return(s64 i, atomic64_t *v)
> return raw_atomic64_sub_return(i, v);
> }
>
> +/**
> + * atomic64_sub_return_acquire() - atomic subtract with acquire ordering
> + * @i: s64 value to subtract
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v - @i) with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_sub_return_acquire() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline s64
> atomic64_sub_return_acquire(s64 i, atomic64_t *v)
> {
> @@ -775,6 +1872,17 @@ atomic64_sub_return_acquire(s64 i, atomic64_t *v)
> return raw_atomic64_sub_return_acquire(i, v);
> }
>
> +/**
> + * atomic64_sub_return_release() - atomic subtract with release ordering
> + * @i: s64 value to subtract
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v - @i) with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_sub_return_release() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline s64
> atomic64_sub_return_release(s64 i, atomic64_t *v)
> {
> @@ -783,6 +1891,17 @@ atomic64_sub_return_release(s64 i, atomic64_t *v)
> return raw_atomic64_sub_return_release(i, v);
> }
>
> +/**
> + * atomic64_sub_return_relaxed() - atomic subtract with relaxed ordering
> + * @i: s64 value to subtract
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v - @i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_sub_return_relaxed() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline s64
> atomic64_sub_return_relaxed(s64 i, atomic64_t *v)
> {
> @@ -790,6 +1909,17 @@ atomic64_sub_return_relaxed(s64 i, atomic64_t *v)
> return raw_atomic64_sub_return_relaxed(i, v);
> }
>
> +/**
> + * atomic64_fetch_sub() - atomic subtract with full ordering
> + * @i: s64 value to subtract
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v - @i) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_fetch_sub() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> atomic64_fetch_sub(s64 i, atomic64_t *v)
> {
> @@ -798,6 +1928,17 @@ atomic64_fetch_sub(s64 i, atomic64_t *v)
> return raw_atomic64_fetch_sub(i, v);
> }
>
> +/**
> + * atomic64_fetch_sub_acquire() - atomic subtract with acquire ordering
> + * @i: s64 value to subtract
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v - @i) with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_fetch_sub_acquire() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
> {
> @@ -805,6 +1946,17 @@ atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
> return raw_atomic64_fetch_sub_acquire(i, v);
> }
>
> +/**
> + * atomic64_fetch_sub_release() - atomic subtract with release ordering
> + * @i: s64 value to subtract
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v - @i) with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_fetch_sub_release() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> atomic64_fetch_sub_release(s64 i, atomic64_t *v)
> {
> @@ -813,6 +1965,17 @@ atomic64_fetch_sub_release(s64 i, atomic64_t *v)
> return raw_atomic64_fetch_sub_release(i, v);
> }
>
> +/**
> + * atomic64_fetch_sub_relaxed() - atomic subtract with relaxed ordering
> + * @i: s64 value to subtract
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v - @i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_fetch_sub_relaxed() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v)
> {
> @@ -820,6 +1983,16 @@ atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v)
> return raw_atomic64_fetch_sub_relaxed(i, v);
> }
>
> +/**
> + * atomic64_inc() - atomic increment with relaxed ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + 1) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_inc() there.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> atomic64_inc(atomic64_t *v)
> {
> @@ -827,6 +2000,16 @@ atomic64_inc(atomic64_t *v)
> raw_atomic64_inc(v);
> }
>
> +/**
> + * atomic64_inc_return() - atomic increment with full ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + 1) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_inc_return() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline s64
> atomic64_inc_return(atomic64_t *v)
> {
> @@ -835,6 +2018,16 @@ atomic64_inc_return(atomic64_t *v)
> return raw_atomic64_inc_return(v);
> }
>
> +/**
> + * atomic64_inc_return_acquire() - atomic increment with acquire ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + 1) with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_inc_return_acquire() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline s64
> atomic64_inc_return_acquire(atomic64_t *v)
> {
> @@ -842,6 +2035,16 @@ atomic64_inc_return_acquire(atomic64_t *v)
> return raw_atomic64_inc_return_acquire(v);
> }
>
> +/**
> + * atomic64_inc_return_release() - atomic increment with release ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + 1) with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_inc_return_release() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline s64
> atomic64_inc_return_release(atomic64_t *v)
> {
> @@ -850,6 +2053,16 @@ atomic64_inc_return_release(atomic64_t *v)
> return raw_atomic64_inc_return_release(v);
> }
>
> +/**
> + * atomic64_inc_return_relaxed() - atomic increment with relaxed ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + 1) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_inc_return_relaxed() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline s64
> atomic64_inc_return_relaxed(atomic64_t *v)
> {
> @@ -857,6 +2070,16 @@ atomic64_inc_return_relaxed(atomic64_t *v)
> return raw_atomic64_inc_return_relaxed(v);
> }
>
> +/**
> + * atomic64_fetch_inc() - atomic increment with full ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + 1) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_fetch_inc() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> atomic64_fetch_inc(atomic64_t *v)
> {
> @@ -865,6 +2088,16 @@ atomic64_fetch_inc(atomic64_t *v)
> return raw_atomic64_fetch_inc(v);
> }
>
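The fetch_*() forms return the value @v held before the operation, while the *_return() forms return the value afterwards; a minimal sketch of the distinction, using a hypothetical sequence counter:

| static atomic64_t next_seq = ATOMIC64_INIT(0);
|
| static s64 alloc_seq(void)
| {
|         /* old value: the first caller gets 0 */
|         return atomic64_fetch_inc(&next_seq);
| }
|
| static s64 alloc_seq_from_one(void)
| {
|         /* new value: the first caller gets 1 */
|         return atomic64_inc_return(&next_seq);
| }
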
> +/**
> + * atomic64_fetch_inc_acquire() - atomic increment with acquire ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + 1) with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_fetch_inc_acquire() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> atomic64_fetch_inc_acquire(atomic64_t *v)
> {
> @@ -872,6 +2105,16 @@ atomic64_fetch_inc_acquire(atomic64_t *v)
> return raw_atomic64_fetch_inc_acquire(v);
> }
>
> +/**
> + * atomic64_fetch_inc_release() - atomic increment with release ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + 1) with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_fetch_inc_release() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> atomic64_fetch_inc_release(atomic64_t *v)
> {
> @@ -880,6 +2123,16 @@ atomic64_fetch_inc_release(atomic64_t *v)
> return raw_atomic64_fetch_inc_release(v);
> }
>
> +/**
> + * atomic64_fetch_inc_relaxed() - atomic increment with relaxed ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + 1) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_fetch_inc_relaxed() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> atomic64_fetch_inc_relaxed(atomic64_t *v)
> {
> @@ -887,6 +2140,16 @@ atomic64_fetch_inc_relaxed(atomic64_t *v)
> return raw_atomic64_fetch_inc_relaxed(v);
> }
>
> +/**
> + * atomic64_dec() - atomic decrement with relaxed ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v - 1) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_dec() there.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> atomic64_dec(atomic64_t *v)
> {
> @@ -894,6 +2157,16 @@ atomic64_dec(atomic64_t *v)
> raw_atomic64_dec(v);
> }
>
> +/**
> + * atomic64_dec_return() - atomic decrement with full ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v - 1) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_dec_return() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline s64
> atomic64_dec_return(atomic64_t *v)
> {
> @@ -902,6 +2175,16 @@ atomic64_dec_return(atomic64_t *v)
> return raw_atomic64_dec_return(v);
> }
>
> +/**
> + * atomic64_dec_return_acquire() - atomic decrement with acquire ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v - 1) with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_dec_return_acquire() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline s64
> atomic64_dec_return_acquire(atomic64_t *v)
> {
> @@ -909,6 +2192,16 @@ atomic64_dec_return_acquire(atomic64_t *v)
> return raw_atomic64_dec_return_acquire(v);
> }
>
> +/**
> + * atomic64_dec_return_release() - atomic decrement with release ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v - 1) with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_dec_return_release() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline s64
> atomic64_dec_return_release(atomic64_t *v)
> {
> @@ -917,6 +2210,16 @@ atomic64_dec_return_release(atomic64_t *v)
> return raw_atomic64_dec_return_release(v);
> }
>
> +/**
> + * atomic64_dec_return_relaxed() - atomic decrement with relaxed ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v - 1) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_dec_return_relaxed() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline s64
> atomic64_dec_return_relaxed(atomic64_t *v)
> {
> @@ -924,6 +2227,16 @@ atomic64_dec_return_relaxed(atomic64_t *v)
> return raw_atomic64_dec_return_relaxed(v);
> }
>
> +/**
> + * atomic64_fetch_dec() - atomic decrement with full ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v - 1) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_fetch_dec() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> atomic64_fetch_dec(atomic64_t *v)
> {
> @@ -932,6 +2245,16 @@ atomic64_fetch_dec(atomic64_t *v)
> return raw_atomic64_fetch_dec(v);
> }
>
> +/**
> + * atomic64_fetch_dec_acquire() - atomic decrement with acquire ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v - 1) with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_fetch_dec_acquire() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> atomic64_fetch_dec_acquire(atomic64_t *v)
> {
> @@ -939,6 +2262,16 @@ atomic64_fetch_dec_acquire(atomic64_t *v)
> return raw_atomic64_fetch_dec_acquire(v);
> }
>
> +/**
> + * atomic64_fetch_dec_release() - atomic decrement with release ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v - 1) with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_fetch_dec_release() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> atomic64_fetch_dec_release(atomic64_t *v)
> {
> @@ -947,6 +2280,16 @@ atomic64_fetch_dec_release(atomic64_t *v)
> return raw_atomic64_fetch_dec_release(v);
> }
>
> +/**
> + * atomic64_fetch_dec_relaxed() - atomic decrement with relaxed ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v - 1) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_fetch_dec_relaxed() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> atomic64_fetch_dec_relaxed(atomic64_t *v)
> {
> @@ -954,6 +2297,17 @@ atomic64_fetch_dec_relaxed(atomic64_t *v)
> return raw_atomic64_fetch_dec_relaxed(v);
> }
>
> +/**
> + * atomic64_and() - atomic bitwise AND with relaxed ordering
> + * @i: s64 value
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v & @i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_and() there.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> atomic64_and(s64 i, atomic64_t *v)
> {
> @@ -961,6 +2315,17 @@ atomic64_and(s64 i, atomic64_t *v)
> raw_atomic64_and(i, v);
> }
>
> +/**
> + * atomic64_fetch_and() - atomic bitwise AND with full ordering
> + * @i: s64 value
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v & @i) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_fetch_and() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> atomic64_fetch_and(s64 i, atomic64_t *v)
> {
> @@ -969,6 +2334,17 @@ atomic64_fetch_and(s64 i, atomic64_t *v)
> return raw_atomic64_fetch_and(i, v);
> }
>
> +/**
> + * atomic64_fetch_and_acquire() - atomic bitwise AND with acquire ordering
> + * @i: s64 value
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v & @i) with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_fetch_and_acquire() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
> {
> @@ -976,6 +2352,17 @@ atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
> return raw_atomic64_fetch_and_acquire(i, v);
> }
>
> +/**
> + * atomic64_fetch_and_release() - atomic bitwise AND with release ordering
> + * @i: s64 value
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v & @i) with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_fetch_and_release() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> atomic64_fetch_and_release(s64 i, atomic64_t *v)
> {
> @@ -984,6 +2371,17 @@ atomic64_fetch_and_release(s64 i, atomic64_t *v)
> return raw_atomic64_fetch_and_release(i, v);
> }
>
> +/**
> + * atomic64_fetch_and_relaxed() - atomic bitwise AND with relaxed ordering
> + * @i: s64 value
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v & @i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_fetch_and_relaxed() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> atomic64_fetch_and_relaxed(s64 i, atomic64_t *v)
> {
> @@ -991,6 +2389,17 @@ atomic64_fetch_and_relaxed(s64 i, atomic64_t *v)
> return raw_atomic64_fetch_and_relaxed(i, v);
> }
>
> +/**
> + * atomic64_andnot() - atomic bitwise AND NOT with relaxed ordering
> + * @i: s64 value
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v & ~@i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_andnot() there.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> atomic64_andnot(s64 i, atomic64_t *v)
> {
> @@ -998,6 +2407,17 @@ atomic64_andnot(s64 i, atomic64_t *v)
> raw_atomic64_andnot(i, v);
> }
>
> +/**
> + * atomic64_fetch_andnot() - atomic bitwise AND NOT with full ordering
> + * @i: s64 value
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v & ~@i) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_fetch_andnot() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> atomic64_fetch_andnot(s64 i, atomic64_t *v)
> {
> @@ -1006,6 +2426,17 @@ atomic64_fetch_andnot(s64 i, atomic64_t *v)
> return raw_atomic64_fetch_andnot(i, v);
> }
>
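For the andnot ops, @i names the bits to clear rather than the bits to keep; a small sketch, assuming hypothetical flag definitions:

| #define MY_FLAG_PENDING         BIT_ULL(0)      /* hypothetical */
| #define MY_FLAG_ACTIVE          BIT_ULL(1)      /* hypothetical */
|
| static atomic64_t my_flags = ATOMIC64_INIT(0);
|
| static bool clear_pending(void)
| {
|         /* Clear PENDING; report whether it had been set. */
|         s64 old = atomic64_fetch_andnot(MY_FLAG_PENDING, &my_flags);
|
|         return old & MY_FLAG_PENDING;
| }
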
> +/**
> + * atomic64_fetch_andnot_acquire() - atomic bitwise AND NOT with acquire ordering
> + * @i: s64 value
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v & ~@i) with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_fetch_andnot_acquire() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
> {
> @@ -1013,6 +2444,17 @@ atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
> return raw_atomic64_fetch_andnot_acquire(i, v);
> }
>
> +/**
> + * atomic64_fetch_andnot_release() - atomic bitwise AND NOT with release ordering
> + * @i: s64 value
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v & ~@i) with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_fetch_andnot_release() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
> {
> @@ -1021,6 +2463,17 @@ atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
> return raw_atomic64_fetch_andnot_release(i, v);
> }
>
> +/**
> + * atomic64_fetch_andnot_relaxed() - atomic bitwise AND NOT with relaxed ordering
> + * @i: s64 value
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v & ~@i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_fetch_andnot_relaxed() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
> {
> @@ -1028,6 +2481,17 @@ atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
> return raw_atomic64_fetch_andnot_relaxed(i, v);
> }
>
> +/**
> + * atomic64_or() - atomic bitwise OR with relaxed ordering
> + * @i: s64 value
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v | @i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_or() there.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> atomic64_or(s64 i, atomic64_t *v)
> {
> @@ -1035,6 +2499,17 @@ atomic64_or(s64 i, atomic64_t *v)
> raw_atomic64_or(i, v);
> }
>
> +/**
> + * atomic64_fetch_or() - atomic bitwise OR with full ordering
> + * @i: s64 value
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v | @i) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_fetch_or() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> atomic64_fetch_or(s64 i, atomic64_t *v)
> {
> @@ -1043,6 +2518,17 @@ atomic64_fetch_or(s64 i, atomic64_t *v)
> return raw_atomic64_fetch_or(i, v);
> }
>
> +/**
> + * atomic64_fetch_or_acquire() - atomic bitwise OR with acquire ordering
> + * @i: s64 value
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v | @i) with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_fetch_or_acquire() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
> {
> @@ -1050,6 +2536,17 @@ atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
> return raw_atomic64_fetch_or_acquire(i, v);
> }
>
> +/**
> + * atomic64_fetch_or_release() - atomic bitwise OR with release ordering
> + * @i: s64 value
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v | @i) with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_fetch_or_release() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> atomic64_fetch_or_release(s64 i, atomic64_t *v)
> {
> @@ -1058,6 +2555,17 @@ atomic64_fetch_or_release(s64 i, atomic64_t *v)
> return raw_atomic64_fetch_or_release(i, v);
> }
>
> +/**
> + * atomic64_fetch_or_relaxed() - atomic bitwise OR with relaxed ordering
> + * @i: s64 value
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v | @i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_fetch_or_relaxed() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> atomic64_fetch_or_relaxed(s64 i, atomic64_t *v)
> {
> @@ -1065,6 +2573,17 @@ atomic64_fetch_or_relaxed(s64 i, atomic64_t *v)
> return raw_atomic64_fetch_or_relaxed(i, v);
> }
>
> +/**
> + * atomic64_xor() - atomic bitwise XOR with relaxed ordering
> + * @i: s64 value
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v ^ @i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_xor() there.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> atomic64_xor(s64 i, atomic64_t *v)
> {
> @@ -1072,6 +2591,17 @@ atomic64_xor(s64 i, atomic64_t *v)
> raw_atomic64_xor(i, v);
> }
>
> +/**
> + * atomic64_fetch_xor() - atomic bitwise XOR with full ordering
> + * @i: s64 value
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v ^ @i) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_fetch_xor() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> atomic64_fetch_xor(s64 i, atomic64_t *v)
> {
> @@ -1080,6 +2610,17 @@ atomic64_fetch_xor(s64 i, atomic64_t *v)
> return raw_atomic64_fetch_xor(i, v);
> }
>
> +/**
> + * atomic64_fetch_xor_acquire() - atomic bitwise XOR with acquire ordering
> + * @i: s64 value
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v ^ @i) with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_fetch_xor_acquire() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
> {
> @@ -1087,6 +2628,17 @@ atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
> return raw_atomic64_fetch_xor_acquire(i, v);
> }
>
> +/**
> + * atomic64_fetch_xor_release() - atomic bitwise XOR with release ordering
> + * @i: s64 value
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v ^ @i) with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_fetch_xor_release() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> atomic64_fetch_xor_release(s64 i, atomic64_t *v)
> {
> @@ -1095,6 +2647,17 @@ atomic64_fetch_xor_release(s64 i, atomic64_t *v)
> return raw_atomic64_fetch_xor_release(i, v);
> }
>
> +/**
> + * atomic64_fetch_xor_relaxed() - atomic bitwise XOR with relaxed ordering
> + * @i: s64 value
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v ^ @i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_fetch_xor_relaxed() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v)
> {
> @@ -1102,6 +2665,17 @@ atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v)
> return raw_atomic64_fetch_xor_relaxed(i, v);
> }
>
> +/**
> + * atomic64_xchg() - atomic exchange with full ordering
> + * @v: pointer to atomic64_t
> + * @new: s64 value to assign
> + *
> + * Atomically updates @v to @new with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_xchg() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> atomic64_xchg(atomic64_t *v, s64 new)
> {
> @@ -1110,6 +2684,17 @@ atomic64_xchg(atomic64_t *v, s64 new)
> return raw_atomic64_xchg(v, new);
> }
>
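xchg() installs @new unconditionally and hands back whatever was there, which suits "drain and reset" uses; a sketch with a hypothetical accumulator:

| static atomic64_t pending_bytes = ATOMIC64_INIT(0);
|
| /* Reset the accumulator and return however much had built up. */
| static s64 drain_pending_bytes(void)
| {
|         return atomic64_xchg(&pending_bytes, 0);
| }
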
> +/**
> + * atomic64_xchg_acquire() - atomic exchange with acquire ordering
> + * @v: pointer to atomic64_t
> + * @new: s64 value to assign
> + *
> + * Atomically updates @v to @new with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_xchg_acquire() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> atomic64_xchg_acquire(atomic64_t *v, s64 new)
> {
> @@ -1117,6 +2702,17 @@ atomic64_xchg_acquire(atomic64_t *v, s64 new)
> return raw_atomic64_xchg_acquire(v, new);
> }
>
> +/**
> + * atomic64_xchg_release() - atomic exchange with release ordering
> + * @v: pointer to atomic64_t
> + * @new: s64 value to assign
> + *
> + * Atomically updates @v to @new with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_xchg_release() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> atomic64_xchg_release(atomic64_t *v, s64 new)
> {
> @@ -1125,6 +2721,17 @@ atomic64_xchg_release(atomic64_t *v, s64 new)
> return raw_atomic64_xchg_release(v, new);
> }
>
> +/**
> + * atomic64_xchg_relaxed() - atomic exchange with relaxed ordering
> + * @v: pointer to atomic64_t
> + * @new: s64 value to assign
> + *
> + * Atomically updates @v to @new with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_xchg_relaxed() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> atomic64_xchg_relaxed(atomic64_t *v, s64 new)
> {
> @@ -1132,6 +2739,18 @@ atomic64_xchg_relaxed(atomic64_t *v, s64 new)
> return raw_atomic64_xchg_relaxed(v, new);
> }
>
> +/**
> + * atomic64_cmpxchg() - atomic compare and exchange with full ordering
> + * @v: pointer to atomic64_t
> + * @old: s64 value to compare with
> + * @new: s64 value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_cmpxchg() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
> {
> @@ -1140,6 +2759,18 @@ atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
> return raw_atomic64_cmpxchg(v, old, new);
> }
>
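cmpxchg() reports success by returning a value equal to @old; a sketch of claiming a hypothetical owner slot exactly once:

| static atomic64_t owner_id = ATOMIC64_INIT(0);  /* 0 == unowned */
|
| static bool claim_ownership(s64 my_id)
| {
|         /* Only succeeds if no owner had been installed. */
|         return atomic64_cmpxchg(&owner_id, 0, my_id) == 0;
| }
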
> +/**
> + * atomic64_cmpxchg_acquire() - atomic compare and exchange with acquire ordering
> + * @v: pointer to atomic64_t
> + * @old: s64 value to compare with
> + * @new: s64 value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_cmpxchg_acquire() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
> {
> @@ -1147,6 +2778,18 @@ atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
> return raw_atomic64_cmpxchg_acquire(v, old, new);
> }
>
> +/**
> + * atomic64_cmpxchg_release() - atomic compare and exchange with release ordering
> + * @v: pointer to atomic64_t
> + * @old: s64 value to compare with
> + * @new: s64 value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_cmpxchg_release() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
> {
> @@ -1155,6 +2798,18 @@ atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
> return raw_atomic64_cmpxchg_release(v, old, new);
> }
>
> +/**
> + * atomic64_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering
> + * @v: pointer to atomic64_t
> + * @old: s64 value to compare with
> + * @new: s64 value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_cmpxchg_relaxed() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
> {
> @@ -1162,6 +2817,19 @@ atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
> return raw_atomic64_cmpxchg_relaxed(v, old, new);
> }
>
> +/**
> + * atomic64_try_cmpxchg() - atomic compare and exchange with full ordering
> + * @v: pointer to atomic64_t
> + * @old: pointer to s64 value to compare with
> + * @new: s64 value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with full ordering.
> + * Otherwise, updates @old to the current value of @v.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_try_cmpxchg() there.
> + *
> + * Return: @true if the exchange occurred, @false otherwise.
> + */
> static __always_inline bool
> atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
> {
> @@ -1171,6 +2839,19 @@ atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
> return raw_atomic64_try_cmpxchg(v, old, new);
> }
>
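Since try_cmpxchg() updates @old on failure, the usual read-modify-write loop needs no explicit re-read; a minimal sketch (hypothetical helper, assuming @i >= 0):

| static void add_saturating(atomic64_t *v, s64 i)
| {
|         s64 new, old = atomic64_read(v);
|
|         do {
|                 if (old > S64_MAX - i)          /* would overflow */
|                         new = S64_MAX;
|                 else
|                         new = old + i;
|         } while (!atomic64_try_cmpxchg(v, &old, new));
| }
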
> +/**
> + * atomic64_try_cmpxchg_acquire() - atomic compare and exchange with acquire ordering
> + * @v: pointer to atomic64_t
> + * @old: pointer to s64 value to compare with
> + * @new: s64 value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with acquire ordering.
> + * Otherwise, updates @old to the current value of @v.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_try_cmpxchg_acquire() there.
> + *
> + * Return: @true if the exchange occurred, @false otherwise.
> + */
> static __always_inline bool
> atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
> {
> @@ -1179,6 +2860,19 @@ atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
> return raw_atomic64_try_cmpxchg_acquire(v, old, new);
> }
>
> +/**
> + * atomic64_try_cmpxchg_release() - atomic compare and exchange with release ordering
> + * @v: pointer to atomic64_t
> + * @old: pointer to s64 value to compare with
> + * @new: s64 value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with release ordering.
> + * Otherwise, updates @old to the current value of @v.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_try_cmpxchg_release() there.
> + *
> + * Return: @true if the exchange occurred, @false otherwise.
> + */
> static __always_inline bool
> atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
> {
> @@ -1188,6 +2882,19 @@ atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
> return raw_atomic64_try_cmpxchg_release(v, old, new);
> }
>
> +/**
> + * atomic64_try_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering
> + * @v: pointer to atomic64_t
> + * @old: pointer to s64 value to compare with
> + * @new: s64 value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with relaxed ordering.
> + * Otherwise, updates @old to the current value of @v.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_try_cmpxchg_relaxed() there.
> + *
> + * Return: @true if the exchange occurred, @false otherwise.
> + */
> static __always_inline bool
> atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
> {
> @@ -1196,6 +2903,17 @@ atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
> return raw_atomic64_try_cmpxchg_relaxed(v, old, new);
> }
>
> +/**
> + * atomic64_sub_and_test() - atomic subtract and test if zero with full ordering
> + * @i: s64 value to subtract
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v - @i) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_sub_and_test() there.
> + *
> + * Return: @true if the resulting value of @v is zero, @false otherwise.
> + */
> static __always_inline bool
> atomic64_sub_and_test(s64 i, atomic64_t *v)
> {
> @@ -1204,6 +2922,16 @@ atomic64_sub_and_test(s64 i, atomic64_t *v)
> return raw_atomic64_sub_and_test(i, v);
> }
>
> +/**
> + * atomic64_dec_and_test() - atomic decrement and test if zero with full ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v - 1) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_dec_and_test() there.
> + *
> + * Return: @true if the resulting value of @v is zero, @false otherwise.
> + */
> static __always_inline bool
> atomic64_dec_and_test(atomic64_t *v)
> {
> @@ -1212,6 +2940,16 @@ atomic64_dec_and_test(atomic64_t *v)
> return raw_atomic64_dec_and_test(v);
> }
>
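dec_and_test() is the usual building block for a last-put check; a sketch with a hypothetical refcounted object:

| struct my_obj {
|         atomic64_t refcnt;
|         /* ... */
| };
|
| static void my_obj_put(struct my_obj *obj)
| {
|         /* Free only when this put drops the final reference. */
|         if (atomic64_dec_and_test(&obj->refcnt))
|                 kfree(obj);
| }
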
> +/**
> + * atomic64_inc_and_test() - atomic increment and test if zero with full ordering
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + 1) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_inc_and_test() there.
> + *
> + * Return: @true if the resulting value of @v is zero, @false otherwise.
> + */
> static __always_inline bool
> atomic64_inc_and_test(atomic64_t *v)
> {
> @@ -1220,6 +2958,17 @@ atomic64_inc_and_test(atomic64_t *v)
> return raw_atomic64_inc_and_test(v);
> }
>
> +/**
> + * atomic64_add_negative() - atomic add and test if negative with full ordering
> + * @i: s64 value to add
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + @i) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_add_negative() there.
> + *
> + * Return: @true if the resulting value of @v is negative, @false otherwise.
> + */
> static __always_inline bool
> atomic64_add_negative(s64 i, atomic64_t *v)
> {
> @@ -1228,6 +2977,17 @@ atomic64_add_negative(s64 i, atomic64_t *v)
> return raw_atomic64_add_negative(i, v);
> }
>
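add_negative() reports the sign of the result rather than its value; a sketch with a hypothetical running balance:

| static atomic64_t balance = ATOMIC64_INIT(0);
|
| /* Debit @amount; true once the balance has gone negative. */
| static bool debit(s64 amount)
| {
|         return atomic64_add_negative(-amount, &balance);
| }
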
> +/**
> + * atomic64_add_negative_acquire() - atomic add and test if negative with acquire ordering
> + * @i: s64 value to add
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + @i) with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_add_negative_acquire() there.
> + *
> + * Return: @true if the resulting value of @v is negative, @false otherwise.
> + */
> static __always_inline bool
> atomic64_add_negative_acquire(s64 i, atomic64_t *v)
> {
> @@ -1235,6 +2995,17 @@ atomic64_add_negative_acquire(s64 i, atomic64_t *v)
> return raw_atomic64_add_negative_acquire(i, v);
> }
>
> +/**
> + * atomic64_add_negative_release() - atomic add and test if negative with release ordering
> + * @i: s64 value to add
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + @i) with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_add_negative_release() there.
> + *
> + * Return: @true if the resulting value of @v is negative, @false otherwise.
> + */
> static __always_inline bool
> atomic64_add_negative_release(s64 i, atomic64_t *v)
> {
> @@ -1243,6 +3014,17 @@ atomic64_add_negative_release(s64 i, atomic64_t *v)
> return raw_atomic64_add_negative_release(i, v);
> }
>
> +/**
> + * atomic64_add_negative_relaxed() - atomic add and test if negative with relaxed ordering
> + * @i: s64 value to add
> + * @v: pointer to atomic64_t
> + *
> + * Atomically updates @v to (@v + @i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_add_negative_relaxed() there.
> + *
> + * Return: @true if the resulting value of @v is negative, @false otherwise.
> + */
> static __always_inline bool
> atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
> {
> @@ -1250,6 +3032,18 @@ atomic64_add_negative_relaxed(s64 i, atomic64_t *v)
> return raw_atomic64_add_negative_relaxed(i, v);
> }
>
> +/**
> + * atomic64_fetch_add_unless() - atomic add unless value with full ordering
> + * @v: pointer to atomic64_t
> + * @a: s64 value to add
> + * @u: s64 value to compare with
> + *
> + * If (@v != @u), atomically updates @v to (@v + @a) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_fetch_add_unless() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline s64
> atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
> {
> @@ -1258,6 +3052,18 @@ atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
> return raw_atomic64_fetch_add_unless(v, a, u);
> }
>
> +/**
> + * atomic64_add_unless() - atomic add unless value with full ordering
> + * @v: pointer to atomic64_t
> + * @a: s64 value to add
> + * @u: s64 value to compare with
> + *
> + * If (@v != @u), atomically updates @v to (@v + @a) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_add_unless() there.
> + *
> + * Return: @true if @v was updated, @false otherwise.
> + */
> static __always_inline bool
> atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
> {
> @@ -1266,6 +3072,16 @@ atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
> return raw_atomic64_add_unless(v, a, u);
> }
>
> +/**
> + * atomic64_inc_not_zero() - atomic increment unless zero with full ordering
> + * @v: pointer to atomic64_t
> + *
> + * If (@v != 0), atomically updates @v to (@v + 1) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_inc_not_zero() there.
> + *
> + * Return: @true if @v was updated, @false otherwise.
> + */
> static __always_inline bool
> atomic64_inc_not_zero(atomic64_t *v)
> {
> @@ -1274,6 +3090,16 @@ atomic64_inc_not_zero(atomic64_t *v)
> return raw_atomic64_inc_not_zero(v);
> }
>
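inc_not_zero() (and the more general add_unless()) gives the usual "take a reference only while the object is still live" pattern, continuing the hypothetical my_obj sketch above:

| static bool my_obj_get(struct my_obj *obj)
| {
|         /* Fails once the refcount has already dropped to zero. */
|         return atomic64_inc_not_zero(&obj->refcnt);
| }
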
> +/**
> + * atomic64_inc_unless_negative() - atomic increment unless negative with full ordering
> + * @v: pointer to atomic64_t
> + *
> + * If (@v >= 0), atomically updates @v to (@v + 1) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_inc_unless_negative() there.
> + *
> + * Return: @true if @v was updated, @false otherwise.
> + */
> static __always_inline bool
> atomic64_inc_unless_negative(atomic64_t *v)
> {
> @@ -1282,6 +3108,16 @@ atomic64_inc_unless_negative(atomic64_t *v)
> return raw_atomic64_inc_unless_negative(v);
> }
>
> +/**
> + * atomic64_dec_unless_positive() - atomic decrement unless positive with full ordering
> + * @v: pointer to atomic64_t
> + *
> + * If (@v <= 0), atomically updates @v to (@v - 1) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_dec_unless_positive() there.
> + *
> + * Return: @true if @v was updated, @false otherwise.
> + */
> static __always_inline bool
> atomic64_dec_unless_positive(atomic64_t *v)
> {
> @@ -1290,6 +3126,16 @@ atomic64_dec_unless_positive(atomic64_t *v)
> return raw_atomic64_dec_unless_positive(v);
> }
>
> +/**
> + * atomic64_dec_if_positive() - atomic decrement if positive with full ordering
> + * @v: pointer to atomic64_t
> + *
> + * If (@v > 0), atomically updates @v to (@v - 1) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic64_dec_if_positive() there.
> + *
> + * Return: The old value of (@v - 1), regardless of whether @v was updated.
> + */
> static __always_inline s64
> atomic64_dec_if_positive(atomic64_t *v)
> {
> @@ -1298,6 +3144,16 @@ atomic64_dec_if_positive(atomic64_t *v)
> return raw_atomic64_dec_if_positive(v);
> }
>
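dec_if_positive() consumes from a counted pool without letting it go negative; a sketch with a hypothetical token budget:

| static atomic64_t tokens = ATOMIC64_INIT(64);   /* hypothetical budget */
|
| static bool take_token(void)
| {
|         /* A negative result means the pool was already empty. */
|         return atomic64_dec_if_positive(&tokens) >= 0;
| }
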
> +/**
> + * atomic_long_read() - atomic load with relaxed ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically loads the value of @v with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_read() there.
> + *
> + * Return: The value loaded from @v.
> + */
> static __always_inline long
> atomic_long_read(const atomic_long_t *v)
> {
> @@ -1305,6 +3161,16 @@ atomic_long_read(const atomic_long_t *v)
> return raw_atomic_long_read(v);
> }
>
> +/**
> + * atomic_long_read_acquire() - atomic load with acquire ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically loads the value of @v with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_read_acquire() there.
> + *
> + * Return: The value loaded from @v.
> + */
> static __always_inline long
> atomic_long_read_acquire(const atomic_long_t *v)
> {
> @@ -1312,6 +3178,17 @@ atomic_long_read_acquire(const atomic_long_t *v)
> return raw_atomic_long_read_acquire(v);
> }
>
> +/**
> + * atomic_long_set() - atomic set with relaxed ordering
> + * @v: pointer to atomic_long_t
> + * @i: long value to assign
> + *
> + * Atomically sets @v to @i with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_set() there.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> atomic_long_set(atomic_long_t *v, long i)
> {
> @@ -1319,6 +3196,17 @@ atomic_long_set(atomic_long_t *v, long i)
> raw_atomic_long_set(v, i);
> }
>
> +/**
> + * atomic_long_set_release() - atomic set with release ordering
> + * @v: pointer to atomic_long_t
> + * @i: long value to assign
> + *
> + * Atomically sets @v to @i with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_set_release() there.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> atomic_long_set_release(atomic_long_t *v, long i)
> {
> @@ -1327,6 +3215,17 @@ atomic_long_set_release(atomic_long_t *v, long i)
> raw_atomic_long_set_release(v, i);
> }
>
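set_release() and read_acquire() pair up for one-way publication: the release orders the payload writes before the flag becomes visible, and the acquire orders the flag read before the payload reads. A sketch with a hypothetical config blob:

| static struct {
|         int a, b;                               /* hypothetical payload */
| } cfg;
| static atomic_long_t cfg_ready = ATOMIC_LONG_INIT(0);
|
| static void publish_cfg(int a, int b)
| {
|         cfg.a = a;
|         cfg.b = b;
|         atomic_long_set_release(&cfg_ready, 1);
| }
|
| static bool try_read_cfg(int *a, int *b)
| {
|         if (!atomic_long_read_acquire(&cfg_ready))
|                 return false;
|         *a = cfg.a;
|         *b = cfg.b;
|         return true;
| }
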
> +/**
> + * atomic_long_add() - atomic add with relaxed ordering
> + * @i: long value to add
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + @i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_add() there.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> atomic_long_add(long i, atomic_long_t *v)
> {
> @@ -1334,6 +3233,17 @@ atomic_long_add(long i, atomic_long_t *v)
> raw_atomic_long_add(i, v);
> }
>
> +/**
> + * atomic_long_add_return() - atomic add with full ordering
> + * @i: long value to add
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + @i) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_add_return() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline long
> atomic_long_add_return(long i, atomic_long_t *v)
> {
> @@ -1342,6 +3252,17 @@ atomic_long_add_return(long i, atomic_long_t *v)
> return raw_atomic_long_add_return(i, v);
> }
>
> +/**
> + * atomic_long_add_return_acquire() - atomic add with acquire ordering
> + * @i: long value to add
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + @i) with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_add_return_acquire() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline long
> atomic_long_add_return_acquire(long i, atomic_long_t *v)
> {
> @@ -1349,6 +3270,17 @@ atomic_long_add_return_acquire(long i, atomic_long_t *v)
> return raw_atomic_long_add_return_acquire(i, v);
> }
>
> +/**
> + * atomic_long_add_return_release() - atomic add with release ordering
> + * @i: long value to add
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + @i) with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_add_return_release() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline long
> atomic_long_add_return_release(long i, atomic_long_t *v)
> {
> @@ -1357,6 +3289,17 @@ atomic_long_add_return_release(long i, atomic_long_t *v)
> return raw_atomic_long_add_return_release(i, v);
> }
>
> +/**
> + * atomic_long_add_return_relaxed() - atomic add with relaxed ordering
> + * @i: long value to add
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + @i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_add_return_relaxed() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline long
> atomic_long_add_return_relaxed(long i, atomic_long_t *v)
> {
> @@ -1364,6 +3307,17 @@ atomic_long_add_return_relaxed(long i, atomic_long_t *v)
> return raw_atomic_long_add_return_relaxed(i, v);
> }
>
> +/**
> + * atomic_long_fetch_add() - atomic add with full ordering
> + * @i: long value to add
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + @i) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_add() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> atomic_long_fetch_add(long i, atomic_long_t *v)
> {
> @@ -1372,6 +3326,17 @@ atomic_long_fetch_add(long i, atomic_long_t *v)
> return raw_atomic_long_fetch_add(i, v);
> }
>
> +/**
> + * atomic_long_fetch_add_acquire() - atomic add with acquire ordering
> + * @i: long value to add
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + @i) with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_add_acquire() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
> {
> @@ -1379,6 +3344,17 @@ atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
> return raw_atomic_long_fetch_add_acquire(i, v);
> }
>
> +/**
> + * atomic_long_fetch_add_release() - atomic add with release ordering
> + * @i: long value to add
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + @i) with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_add_release() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> atomic_long_fetch_add_release(long i, atomic_long_t *v)
> {
> @@ -1387,6 +3363,17 @@ atomic_long_fetch_add_release(long i, atomic_long_t *v)
> return raw_atomic_long_fetch_add_release(i, v);
> }
>
> +/**
> + * atomic_long_fetch_add_relaxed() - atomic add with relaxed ordering
> + * @i: long value to add
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + @i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_add_relaxed() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
> {
> @@ -1394,6 +3381,17 @@ atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
> return raw_atomic_long_fetch_add_relaxed(i, v);
> }
>
> +/**
> + * atomic_long_sub() - atomic subtract with relaxed ordering
> + * @i: long value to subtract
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v - @i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_sub() there.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> atomic_long_sub(long i, atomic_long_t *v)
> {
> @@ -1401,6 +3399,17 @@ atomic_long_sub(long i, atomic_long_t *v)
> raw_atomic_long_sub(i, v);
> }
>
> +/**
> + * atomic_long_sub_return() - atomic subtract with full ordering
> + * @i: long value to subtract
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v - @i) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_sub_return() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline long
> atomic_long_sub_return(long i, atomic_long_t *v)
> {
> @@ -1409,6 +3418,17 @@ atomic_long_sub_return(long i, atomic_long_t *v)
> return raw_atomic_long_sub_return(i, v);
> }
>
> +/**
> + * atomic_long_sub_return_acquire() - atomic subtract with acquire ordering
> + * @i: long value to subtract
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v - @i) with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_sub_return_acquire() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline long
> atomic_long_sub_return_acquire(long i, atomic_long_t *v)
> {
> @@ -1416,6 +3436,17 @@ atomic_long_sub_return_acquire(long i, atomic_long_t *v)
> return raw_atomic_long_sub_return_acquire(i, v);
> }
>
> +/**
> + * atomic_long_sub_return_release() - atomic subtract with release ordering
> + * @i: long value to subtract
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v - @i) with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_sub_return_release() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline long
> atomic_long_sub_return_release(long i, atomic_long_t *v)
> {
> @@ -1424,6 +3455,17 @@ atomic_long_sub_return_release(long i, atomic_long_t *v)
> return raw_atomic_long_sub_return_release(i, v);
> }
>
> +/**
> + * atomic_long_sub_return_relaxed() - atomic subtract with relaxed ordering
> + * @i: long value to subtract
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v - @i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_sub_return_relaxed() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline long
> atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
> {
> @@ -1431,6 +3473,17 @@ atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
> return raw_atomic_long_sub_return_relaxed(i, v);
> }
>
> +/**
> + * atomic_long_fetch_sub() - atomic subtract with full ordering
> + * @i: long value to subtract
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v - @i) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_sub() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> atomic_long_fetch_sub(long i, atomic_long_t *v)
> {
> @@ -1439,6 +3492,17 @@ atomic_long_fetch_sub(long i, atomic_long_t *v)
> return raw_atomic_long_fetch_sub(i, v);
> }
>
> +/**
> + * atomic_long_fetch_sub_acquire() - atomic subtract with acquire ordering
> + * @i: long value to subtract
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v - @i) with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_sub_acquire() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
> {
> @@ -1446,6 +3510,17 @@ atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
> return raw_atomic_long_fetch_sub_acquire(i, v);
> }
>
> +/**
> + * atomic_long_fetch_sub_release() - atomic subtract with release ordering
> + * @i: long value to subtract
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v - @i) with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_sub_release() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> atomic_long_fetch_sub_release(long i, atomic_long_t *v)
> {
> @@ -1454,6 +3529,17 @@ atomic_long_fetch_sub_release(long i, atomic_long_t *v)
> return raw_atomic_long_fetch_sub_release(i, v);
> }
>
> +/**
> + * atomic_long_fetch_sub_relaxed() - atomic subtract with relaxed ordering
> + * @i: long value to subtract
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v - @i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_sub_relaxed() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
> {
> @@ -1461,6 +3547,16 @@ atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
> return raw_atomic_long_fetch_sub_relaxed(i, v);
> }
>
> +/**
> + * atomic_long_inc() - atomic increment with relaxed ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + 1) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_inc() there.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> atomic_long_inc(atomic_long_t *v)
> {
> @@ -1468,6 +3564,16 @@ atomic_long_inc(atomic_long_t *v)
> raw_atomic_long_inc(v);
> }
>
> +/**
> + * atomic_long_inc_return() - atomic increment with full ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + 1) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_inc_return() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline long
> atomic_long_inc_return(atomic_long_t *v)
> {
> @@ -1476,6 +3582,16 @@ atomic_long_inc_return(atomic_long_t *v)
> return raw_atomic_long_inc_return(v);
> }
>
> +/**
> + * atomic_long_inc_return_acquire() - atomic increment with acquire ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + 1) with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_inc_return_acquire() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline long
> atomic_long_inc_return_acquire(atomic_long_t *v)
> {
> @@ -1483,6 +3599,16 @@ atomic_long_inc_return_acquire(atomic_long_t *v)
> return raw_atomic_long_inc_return_acquire(v);
> }
>
> +/**
> + * atomic_long_inc_return_release() - atomic increment with release ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + 1) with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_inc_return_release() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline long
> atomic_long_inc_return_release(atomic_long_t *v)
> {
> @@ -1491,6 +3617,16 @@ atomic_long_inc_return_release(atomic_long_t *v)
> return raw_atomic_long_inc_return_release(v);
> }
>
> +/**
> + * atomic_long_inc_return_relaxed() - atomic increment with relaxed ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + 1) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_inc_return_relaxed() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline long
> atomic_long_inc_return_relaxed(atomic_long_t *v)
> {
> @@ -1498,6 +3634,16 @@ atomic_long_inc_return_relaxed(atomic_long_t *v)
> return raw_atomic_long_inc_return_relaxed(v);
> }
>
> +/**
> + * atomic_long_fetch_inc() - atomic increment with full ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + 1) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_inc() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> atomic_long_fetch_inc(atomic_long_t *v)
> {
> @@ -1506,6 +3652,16 @@ atomic_long_fetch_inc(atomic_long_t *v)
> return raw_atomic_long_fetch_inc(v);
> }
>
> +/**
> + * atomic_long_fetch_inc_acquire() - atomic increment with acquire ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + 1) with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_inc_acquire() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> atomic_long_fetch_inc_acquire(atomic_long_t *v)
> {
> @@ -1513,6 +3669,16 @@ atomic_long_fetch_inc_acquire(atomic_long_t *v)
> return raw_atomic_long_fetch_inc_acquire(v);
> }
>
> +/**
> + * atomic_long_fetch_inc_release() - atomic increment with release ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + 1) with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_inc_release() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> atomic_long_fetch_inc_release(atomic_long_t *v)
> {
> @@ -1521,6 +3687,16 @@ atomic_long_fetch_inc_release(atomic_long_t *v)
> return raw_atomic_long_fetch_inc_release(v);
> }
>
> +/**
> + * atomic_long_fetch_inc_relaxed() - atomic increment with relaxed ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + 1) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_inc_relaxed() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> atomic_long_fetch_inc_relaxed(atomic_long_t *v)
> {
> @@ -1528,6 +3704,16 @@ atomic_long_fetch_inc_relaxed(atomic_long_t *v)
> return raw_atomic_long_fetch_inc_relaxed(v);
> }
>
> +/**
> + * atomic_long_dec() - atomic decrement with relaxed ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v - 1) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_dec() there.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> atomic_long_dec(atomic_long_t *v)
> {
> @@ -1535,6 +3721,16 @@ atomic_long_dec(atomic_long_t *v)
> raw_atomic_long_dec(v);
> }
>
> +/**
> + * atomic_long_dec_return() - atomic decrement with full ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v - 1) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_dec_return() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline long
> atomic_long_dec_return(atomic_long_t *v)
> {
> @@ -1543,6 +3739,16 @@ atomic_long_dec_return(atomic_long_t *v)
> return raw_atomic_long_dec_return(v);
> }
>
> +/**
> + * atomic_long_dec_return_acquire() - atomic decrement with acquire ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v - 1) with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_dec_return_acquire() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline long
> atomic_long_dec_return_acquire(atomic_long_t *v)
> {
> @@ -1550,6 +3756,16 @@ atomic_long_dec_return_acquire(atomic_long_t *v)
> return raw_atomic_long_dec_return_acquire(v);
> }
>
> +/**
> + * atomic_long_dec_return_release() - atomic decrement with release ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v - 1) with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_dec_return_release() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline long
> atomic_long_dec_return_release(atomic_long_t *v)
> {
> @@ -1558,6 +3774,16 @@ atomic_long_dec_return_release(atomic_long_t *v)
> return raw_atomic_long_dec_return_release(v);
> }
>
> +/**
> + * atomic_long_dec_return_relaxed() - atomic decrement with relaxed ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v - 1) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_dec_return_relaxed() there.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline long
> atomic_long_dec_return_relaxed(atomic_long_t *v)
> {
> @@ -1565,6 +3791,16 @@ atomic_long_dec_return_relaxed(atomic_long_t *v)
> return raw_atomic_long_dec_return_relaxed(v);
> }
>
> +/**
> + * atomic_long_fetch_dec() - atomic decrement with full ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v - 1) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_dec() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> atomic_long_fetch_dec(atomic_long_t *v)
> {
> @@ -1573,6 +3809,16 @@ atomic_long_fetch_dec(atomic_long_t *v)
> return raw_atomic_long_fetch_dec(v);
> }
>
> +/**
> + * atomic_long_fetch_dec_acquire() - atomic decrement with acquire ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v - 1) with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_dec_acquire() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> atomic_long_fetch_dec_acquire(atomic_long_t *v)
> {
> @@ -1580,6 +3826,16 @@ atomic_long_fetch_dec_acquire(atomic_long_t *v)
> return raw_atomic_long_fetch_dec_acquire(v);
> }
>
> +/**
> + * atomic_long_fetch_dec_release() - atomic decrement with release ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v - 1) with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_dec_release() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> atomic_long_fetch_dec_release(atomic_long_t *v)
> {
> @@ -1588,6 +3844,16 @@ atomic_long_fetch_dec_release(atomic_long_t *v)
> return raw_atomic_long_fetch_dec_release(v);
> }
>
> +/**
> + * atomic_long_fetch_dec_relaxed() - atomic decrement with relaxed ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v - 1) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_dec_relaxed() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> atomic_long_fetch_dec_relaxed(atomic_long_t *v)
> {
> @@ -1595,6 +3861,17 @@ atomic_long_fetch_dec_relaxed(atomic_long_t *v)
> return raw_atomic_long_fetch_dec_relaxed(v);
> }
>
> +/**
> + * atomic_long_and() - atomic bitwise AND with relaxed ordering
> + * @i: long value
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v & @i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_and() there.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> atomic_long_and(long i, atomic_long_t *v)
> {
> @@ -1602,6 +3879,17 @@ atomic_long_and(long i, atomic_long_t *v)
> raw_atomic_long_and(i, v);
> }
>
> +/**
> + * atomic_long_fetch_and() - atomic bitwise AND with full ordering
> + * @i: long value
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v & @i) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_and() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> atomic_long_fetch_and(long i, atomic_long_t *v)
> {
> @@ -1610,6 +3898,17 @@ atomic_long_fetch_and(long i, atomic_long_t *v)
> return raw_atomic_long_fetch_and(i, v);
> }
>
> +/**
> + * atomic_long_fetch_and_acquire() - atomic bitwise AND with acquire ordering
> + * @i: long value
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v & @i) with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_and_acquire() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
> {
> @@ -1617,6 +3916,17 @@ atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
> return raw_atomic_long_fetch_and_acquire(i, v);
> }
>
> +/**
> + * atomic_long_fetch_and_release() - atomic bitwise AND with release ordering
> + * @i: long value
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v & @i) with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_and_release() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> atomic_long_fetch_and_release(long i, atomic_long_t *v)
> {
> @@ -1625,6 +3935,17 @@ atomic_long_fetch_and_release(long i, atomic_long_t *v)
> return raw_atomic_long_fetch_and_release(i, v);
> }
>
> +/**
> + * atomic_long_fetch_and_relaxed() - atomic bitwise AND with relaxed ordering
> + * @i: long value
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v & @i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_and_relaxed() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
> {
> @@ -1632,6 +3953,17 @@ atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
> return raw_atomic_long_fetch_and_relaxed(i, v);
> }
>
> +/**
> + * atomic_long_andnot() - atomic bitwise AND NOT with relaxed ordering
> + * @i: long value
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v & ~@i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_andnot() there.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> atomic_long_andnot(long i, atomic_long_t *v)
> {
> @@ -1639,6 +3971,17 @@ atomic_long_andnot(long i, atomic_long_t *v)
> raw_atomic_long_andnot(i, v);
> }
>
> +/**
> + * atomic_long_fetch_andnot() - atomic bitwise AND NOT with full ordering
> + * @i: long value
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v & ~@i) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_andnot() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> atomic_long_fetch_andnot(long i, atomic_long_t *v)
> {
> @@ -1647,6 +3990,17 @@ atomic_long_fetch_andnot(long i, atomic_long_t *v)
> return raw_atomic_long_fetch_andnot(i, v);
> }
>
> +/**
> + * atomic_long_fetch_andnot_acquire() - atomic bitwise AND NOT with acquire ordering
> + * @i: long value
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v & ~@i) with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_andnot_acquire() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
> {
> @@ -1654,6 +4008,17 @@ atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
> return raw_atomic_long_fetch_andnot_acquire(i, v);
> }
>
> +/**
> + * atomic_long_fetch_andnot_release() - atomic bitwise AND NOT with release ordering
> + * @i: long value
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v & ~@i) with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_andnot_release() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
> {
> @@ -1662,6 +4027,17 @@ atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
> return raw_atomic_long_fetch_andnot_release(i, v);
> }
>
> +/**
> + * atomic_long_fetch_andnot_relaxed() - atomic bitwise AND NOT with relaxed ordering
> + * @i: long value
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v & ~@i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_andnot_relaxed() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
> {
> @@ -1669,6 +4045,17 @@ atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
> return raw_atomic_long_fetch_andnot_relaxed(i, v);
> }
>
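
The fetch_andnot() variants read naturally as an atomic test-and-clear: the returned old value tells the caller whether the cleared bit was set beforehand. A minimal sketch of that use, where FLAG_PENDING and clear_pending() are invented purely for illustration:

| #include <linux/atomic.h>
| #include <linux/bits.h>
| #include <linux/types.h>
|
| #define FLAG_PENDING	BIT(0)	/* hypothetical flag bit */
|
| /* Clear FLAG_PENDING and report whether it was set beforehand. */
| static bool clear_pending(atomic_long_t *flags)
| {
| 	return atomic_long_fetch_andnot(FLAG_PENDING, flags) & FLAG_PENDING;
| }
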
> +/**
> + * atomic_long_or() - atomic bitwise OR with relaxed ordering
> + * @i: long value
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v | @i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_or() there.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> atomic_long_or(long i, atomic_long_t *v)
> {
> @@ -1676,6 +4063,17 @@ atomic_long_or(long i, atomic_long_t *v)
> raw_atomic_long_or(i, v);
> }
>
> +/**
> + * atomic_long_fetch_or() - atomic bitwise OR with full ordering
> + * @i: long value
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v | @i) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_or() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> atomic_long_fetch_or(long i, atomic_long_t *v)
> {
> @@ -1684,6 +4082,17 @@ atomic_long_fetch_or(long i, atomic_long_t *v)
> return raw_atomic_long_fetch_or(i, v);
> }
>
> +/**
> + * atomic_long_fetch_or_acquire() - atomic bitwise OR with acquire ordering
> + * @i: long value
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v | @i) with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_or_acquire() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
> {
> @@ -1691,6 +4100,17 @@ atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
> return raw_atomic_long_fetch_or_acquire(i, v);
> }
>
> +/**
> + * atomic_long_fetch_or_release() - atomic bitwise OR with release ordering
> + * @i: long value
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v | @i) with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_or_release() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> atomic_long_fetch_or_release(long i, atomic_long_t *v)
> {
> @@ -1699,6 +4119,17 @@ atomic_long_fetch_or_release(long i, atomic_long_t *v)
> return raw_atomic_long_fetch_or_release(i, v);
> }
>
> +/**
> + * atomic_long_fetch_or_relaxed() - atomic bitwise OR with relaxed ordering
> + * @i: long value
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v | @i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_or_relaxed() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
> {
> @@ -1706,6 +4137,17 @@ atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
> return raw_atomic_long_fetch_or_relaxed(i, v);
> }
>
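
Likewise, fetch_or() gives an atomic test-and-set in a single step: the caller can tell from the returned value whether it was the one to set the bit. A sketch along the same lines (FLAG_CLAIMED and try_claim() are again made up):

| #include <linux/atomic.h>
| #include <linux/bits.h>
| #include <linux/types.h>
|
| #define FLAG_CLAIMED	BIT(1)	/* hypothetical flag bit */
|
| /* Test-and-set: true iff this caller was the one to set the flag. */
| static bool try_claim(atomic_long_t *flags)
| {
| 	return !(atomic_long_fetch_or(FLAG_CLAIMED, flags) & FLAG_CLAIMED);
| }
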
> +/**
> + * atomic_long_xor() - atomic bitwise XOR with relaxed ordering
> + * @i: long value
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v ^ @i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_xor() there.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> atomic_long_xor(long i, atomic_long_t *v)
> {
> @@ -1713,6 +4155,17 @@ atomic_long_xor(long i, atomic_long_t *v)
> raw_atomic_long_xor(i, v);
> }
>
> +/**
> + * atomic_long_fetch_xor() - atomic bitwise XOR with full ordering
> + * @i: long value
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v ^ @i) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_xor() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> atomic_long_fetch_xor(long i, atomic_long_t *v)
> {
> @@ -1721,6 +4174,17 @@ atomic_long_fetch_xor(long i, atomic_long_t *v)
> return raw_atomic_long_fetch_xor(i, v);
> }
>
> +/**
> + * atomic_long_fetch_xor_acquire() - atomic bitwise XOR with acquire ordering
> + * @i: long value
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v ^ @i) with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_xor_acquire() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
> {
> @@ -1728,6 +4192,17 @@ atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
> return raw_atomic_long_fetch_xor_acquire(i, v);
> }
>
> +/**
> + * atomic_long_fetch_xor_release() - atomic bitwise XOR with release ordering
> + * @i: long value
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v ^ @i) with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_xor_release() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> atomic_long_fetch_xor_release(long i, atomic_long_t *v)
> {
> @@ -1736,6 +4211,17 @@ atomic_long_fetch_xor_release(long i, atomic_long_t *v)
> return raw_atomic_long_fetch_xor_release(i, v);
> }
>
> +/**
> + * atomic_long_fetch_xor_relaxed() - atomic bitwise XOR with relaxed ordering
> + * @i: long value
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v ^ @i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_xor_relaxed() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
> {
> @@ -1743,6 +4229,17 @@ atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
> return raw_atomic_long_fetch_xor_relaxed(i, v);
> }
>
> +/**
> + * atomic_long_xchg() - atomic exchange with full ordering
> + * @v: pointer to atomic_long_t
> + * @new: long value to assign
> + *
> + * Atomically updates @v to @new with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_xchg() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> atomic_long_xchg(atomic_long_t *v, long new)
> {
> @@ -1751,6 +4248,17 @@ atomic_long_xchg(atomic_long_t *v, long new)
> return raw_atomic_long_xchg(v, new);
> }
>
> +/**
> + * atomic_long_xchg_acquire() - atomic exchange with acquire ordering
> + * @v: pointer to atomic_long_t
> + * @new: long value to assign
> + *
> + * Atomically updates @v to @new with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_xchg_acquire() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> atomic_long_xchg_acquire(atomic_long_t *v, long new)
> {
> @@ -1758,6 +4266,17 @@ atomic_long_xchg_acquire(atomic_long_t *v, long new)
> return raw_atomic_long_xchg_acquire(v, new);
> }
>
> +/**
> + * atomic_long_xchg_release() - atomic exchange with release ordering
> + * @v: pointer to atomic_long_t
> + * @new: long value to assign
> + *
> + * Atomically updates @v to @new with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_xchg_release() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> atomic_long_xchg_release(atomic_long_t *v, long new)
> {
> @@ -1766,6 +4285,17 @@ atomic_long_xchg_release(atomic_long_t *v, long new)
> return raw_atomic_long_xchg_release(v, new);
> }
>
> +/**
> + * atomic_long_xchg_relaxed() - atomic exchange with relaxed ordering
> + * @v: pointer to atomic_long_t
> + * @new: long value to assign
> + *
> + * Atomically updates @v to @new with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_xchg_relaxed() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> atomic_long_xchg_relaxed(atomic_long_t *v, long new)
> {
> @@ -1773,6 +4303,18 @@ atomic_long_xchg_relaxed(atomic_long_t *v, long new)
> return raw_atomic_long_xchg_relaxed(v, new);
> }
>
> +/**
> + * atomic_long_cmpxchg() - atomic compare and exchange with full ordering
> + * @v: pointer to atomic_long_t
> + * @old: long value to compare with
> + * @new: long value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_cmpxchg() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
> {
> @@ -1781,6 +4323,18 @@ atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
> return raw_atomic_long_cmpxchg(v, old, new);
> }
>
> +/**
> + * atomic_long_cmpxchg_acquire() - atomic compare and exchange with acquire ordering
> + * @v: pointer to atomic_long_t
> + * @old: long value to compare with
> + * @new: long value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_cmpxchg_acquire() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
> {
> @@ -1788,6 +4342,18 @@ atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
> return raw_atomic_long_cmpxchg_acquire(v, old, new);
> }
>
> +/**
> + * atomic_long_cmpxchg_release() - atomic compare and exchange with release ordering
> + * @v: pointer to atomic_long_t
> + * @old: long value to compare with
> + * @new: long value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_cmpxchg_release() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
> {
> @@ -1796,6 +4362,18 @@ atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
> return raw_atomic_long_cmpxchg_release(v, old, new);
> }
>
> +/**
> + * atomic_long_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering
> + * @v: pointer to atomic_long_t
> + * @old: long value to compare with
> + * @new: long value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_cmpxchg_relaxed() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
> {
> @@ -1803,6 +4381,19 @@ atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
> return raw_atomic_long_cmpxchg_relaxed(v, old, new);
> }
>
> +/**
> + * atomic_long_try_cmpxchg() - atomic compare and exchange with full ordering
> + * @v: pointer to atomic_long_t
> + * @old: pointer to long value to compare with
> + * @new: long value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with full ordering.
> + * Otherwise, updates @old to the current value of @v.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_try_cmpxchg() there.
> + *
> + * Return: @true if the exchange occurred, @false otherwise.
> + */
> static __always_inline bool
> atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
> {
> @@ -1812,6 +4403,19 @@ atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
> return raw_atomic_long_try_cmpxchg(v, old, new);
> }
>
> +/**
> + * atomic_long_try_cmpxchg_acquire() - atomic compare and exchange with acquire ordering
> + * @v: pointer to atomic_long_t
> + * @old: pointer to long value to compare with
> + * @new: long value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with acquire ordering.
> + * Otherwise, updates @old to the current value of @v.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_try_cmpxchg_acquire() there.
> + *
> + * Return: @true if the exchange occurred, @false otherwise.
> + */
> static __always_inline bool
> atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
> {
> @@ -1820,6 +4424,19 @@ atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
> return raw_atomic_long_try_cmpxchg_acquire(v, old, new);
> }
>
> +/**
> + * atomic_long_try_cmpxchg_release() - atomic compare and exchange with release ordering
> + * @v: pointer to atomic_long_t
> + * @old: pointer to long value to compare with
> + * @new: long value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with release ordering.
> + * Otherwise, updates @old to the current value of @v.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_try_cmpxchg_release() there.
> + *
> + * Return: @true if the exchange occurred, @false otherwise.
> + */
> static __always_inline bool
> atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
> {
> @@ -1829,6 +4446,19 @@ atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
> return raw_atomic_long_try_cmpxchg_release(v, old, new);
> }
>
> +/**
> + * atomic_long_try_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering
> + * @v: pointer to atomic_long_t
> + * @old: pointer to long value to compare with
> + * @new: long value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with relaxed ordering.
> + * Otherwise, updates @old to the current value of @v.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_try_cmpxchg_relaxed() there.
> + *
> + * Return: @true if the exchange occurred, @false otherwise.
> + */
> static __always_inline bool
> atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
> {
> @@ -1837,6 +4467,17 @@ atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
> return raw_atomic_long_try_cmpxchg_relaxed(v, old, new);
> }
>
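
Since try_cmpxchg() updates @old on failure, the usual cmpxchg() retry loop saves a re-read per iteration. A sketch of the idiom; bounded_inc() and the limit check are invented for illustration and not part of the series:

| #include <linux/atomic.h>
| #include <linux/types.h>
|
| /* Increment @v, but never beyond @limit; false if already at @limit. */
| static bool bounded_inc(atomic_long_t *v, long limit)
| {
| 	long old = atomic_long_read(v);
|
| 	do {
| 		if (old >= limit)
| 			return false;
| 	} while (!atomic_long_try_cmpxchg(v, &old, old + 1));
|
| 	return true;
| }
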
> +/**
> + * atomic_long_sub_and_test() - atomic subtract and test if zero with full ordering
> + * @i: long value to subtract
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v - @i) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_sub_and_test() there.
> + *
> + * Return: @true if the resulting value of @v is zero, @false otherwise.
> + */
> static __always_inline bool
> atomic_long_sub_and_test(long i, atomic_long_t *v)
> {
> @@ -1845,6 +4486,16 @@ atomic_long_sub_and_test(long i, atomic_long_t *v)
> return raw_atomic_long_sub_and_test(i, v);
> }
>
> +/**
> + * atomic_long_dec_and_test() - atomic decrement and test if zero with full ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v - 1) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_dec_and_test() there.
> + *
> + * Return: @true if the resulting value of @v is zero, @false otherwise.
> + */
> static __always_inline bool
> atomic_long_dec_and_test(atomic_long_t *v)
> {
> @@ -1853,6 +4504,16 @@ atomic_long_dec_and_test(atomic_long_t *v)
> return raw_atomic_long_dec_and_test(v);
> }
>
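
dec_and_test() is the classic reference-drop primitive: the thread that observes the count hitting zero owns the teardown. A minimal sketch, with struct obj and obj_free() standing in for whatever object and release work apply:

| #include <linux/atomic.h>
|
| struct obj {
| 	atomic_long_t refcount;
| 	/* ... payload ... */
| };
|
| static void obj_free(struct obj *obj);	/* hypothetical release helper */
|
| static void obj_put(struct obj *obj)
| {
| 	/* Full ordering makes prior accesses visible before teardown. */
| 	if (atomic_long_dec_and_test(&obj->refcount))
| 		obj_free(obj);
| }
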
> +/**
> + * atomic_long_inc_and_test() - atomic increment and test if zero with full ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + 1) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_inc_and_test() there.
> + *
> + * Return: @true if the resulting value of @v is zero, @false otherwise.
> + */
> static __always_inline bool
> atomic_long_inc_and_test(atomic_long_t *v)
> {
> @@ -1861,6 +4522,17 @@ atomic_long_inc_and_test(atomic_long_t *v)
> return raw_atomic_long_inc_and_test(v);
> }
>
> +/**
> + * atomic_long_add_negative() - atomic add and test if negative with full ordering
> + * @i: long value to add
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + @i) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_add_negative() there.
> + *
> + * Return: @true if the resulting value of @v is negative, @false otherwise.
> + */
> static __always_inline bool
> atomic_long_add_negative(long i, atomic_long_t *v)
> {
> @@ -1869,6 +4541,17 @@ atomic_long_add_negative(long i, atomic_long_t *v)
> return raw_atomic_long_add_negative(i, v);
> }
>
> +/**
> + * atomic_long_add_negative_acquire() - atomic add and test if negative with acquire ordering
> + * @i: long value to add
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + @i) with acquire ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_add_negative_acquire() there.
> + *
> + * Return: @true if the resulting value of @v is negative, @false otherwise.
> + */
> static __always_inline bool
> atomic_long_add_negative_acquire(long i, atomic_long_t *v)
> {
> @@ -1876,6 +4559,17 @@ atomic_long_add_negative_acquire(long i, atomic_long_t *v)
> return raw_atomic_long_add_negative_acquire(i, v);
> }
>
> +/**
> + * atomic_long_add_negative_release() - atomic add and test if negative with release ordering
> + * @i: long value to add
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + @i) with release ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_add_negative_release() there.
> + *
> + * Return: @true if the resulting value of @v is negative, @false otherwise.
> + */
> static __always_inline bool
> atomic_long_add_negative_release(long i, atomic_long_t *v)
> {
> @@ -1884,6 +4578,17 @@ atomic_long_add_negative_release(long i, atomic_long_t *v)
> return raw_atomic_long_add_negative_release(i, v);
> }
>
> +/**
> + * atomic_long_add_negative_relaxed() - atomic add and test if negative with relaxed ordering
> + * @i: long value to add
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + @i) with relaxed ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_add_negative_relaxed() there.
> + *
> + * Return: @true if the resulting value of @v is negative, @false otherwise.
> + */
> static __always_inline bool
> atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
> {
> @@ -1891,6 +4596,18 @@ atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
> return raw_atomic_long_add_negative_relaxed(i, v);
> }
>
> +/**
> + * atomic_long_fetch_add_unless() - atomic add unless value with full ordering
> + * @v: pointer to atomic_long_t
> + * @a: long value to add
> + * @u: long value to compare with
> + *
> + * If (@v != @u), atomically updates @v to (@v + @a) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_fetch_add_unless() there.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
> {
> @@ -1899,6 +4616,18 @@ atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
> return raw_atomic_long_fetch_add_unless(v, a, u);
> }
>
> +/**
> + * atomic_long_add_unless() - atomic add unless value with full ordering
> + * @v: pointer to atomic_long_t
> + * @a: long value to add
> + * @u: long value to compare with
> + *
> + * If (@v != @u), atomically updates @v to (@v + @a) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_add_unless() there.
> + *
> + * Return: @true if @v was updated, @false otherwise.
> + */
> static __always_inline bool
> atomic_long_add_unless(atomic_long_t *v, long a, long u)
> {
> @@ -1907,6 +4636,16 @@ atomic_long_add_unless(atomic_long_t *v, long a, long u)
> return raw_atomic_long_add_unless(v, a, u);
> }
>
> +/**
> + * atomic_long_inc_not_zero() - atomic increment unless zero with full ordering
> + * @v: pointer to atomic_long_t
> + *
> + * If (@v != 0), atomically updates @v to (@v + 1) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_inc_not_zero() there.
> + *
> + * Return: @true if @v was updated, @false otherwise.
> + */
> static __always_inline bool
> atomic_long_inc_not_zero(atomic_long_t *v)
> {
> @@ -1915,6 +4654,16 @@ atomic_long_inc_not_zero(atomic_long_t *v)
> return raw_atomic_long_inc_not_zero(v);
> }
>
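
inc_not_zero() is the matching acquire side of that pattern: during a lookup, failing to increment means the object's count already hit zero and it must not be used. Reusing the struct obj sketch above (obj_get() is again invented for illustration):

| static struct obj *obj_get(struct obj *obj)
| {
| 	/* Refuse to resurrect an object already on its way out. */
| 	if (!obj || !atomic_long_inc_not_zero(&obj->refcount))
| 		return NULL;
|
| 	return obj;
| }
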
> +/**
> + * atomic_long_inc_unless_negative() - atomic increment unless negative with full ordering
> + * @v: pointer to atomic_long_t
> + *
> + * If (@v >= 0), atomically updates @v to (@v + 1) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_inc_unless_negative() there.
> + *
> + * Return: @true if @v was updated, @false otherwise.
> + */
> static __always_inline bool
> atomic_long_inc_unless_negative(atomic_long_t *v)
> {
> @@ -1923,6 +4672,16 @@ atomic_long_inc_unless_negative(atomic_long_t *v)
> return raw_atomic_long_inc_unless_negative(v);
> }
>
> +/**
> + * atomic_long_dec_unless_positive() - atomic decrement unless positive with full ordering
> + * @v: pointer to atomic_long_t
> + *
> + * If (@v <= 0), atomically updates @v to (@v - 1) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_dec_unless_positive() there.
> + *
> + * Return: @true if @v was updated, @false otherwise.
> + */
> static __always_inline bool
> atomic_long_dec_unless_positive(atomic_long_t *v)
> {
> @@ -1931,6 +4690,16 @@ atomic_long_dec_unless_positive(atomic_long_t *v)
> return raw_atomic_long_dec_unless_positive(v);
> }
>
> +/**
> + * atomic_long_dec_if_positive() - atomic decrement if positive with full ordering
> + * @v: pointer to atomic_long_t
> + *
> + * If (@v > 0), atomically updates @v to (@v - 1) with full ordering.
> + *
> + * Unsafe to use in noinstr code; use raw_atomic_long_dec_if_positive() there.
> + *
> + * Return: The old value of (@v - 1), regardless of whether @v was updated.
> + */
> static __always_inline long
> atomic_long_dec_if_positive(atomic_long_t *v)
> {
> @@ -2231,4 +5000,4 @@ atomic_long_dec_if_positive(atomic_long_t *v)
>
>
> #endif /* _LINUX_ATOMIC_INSTRUMENTED_H */
> -// a4c3d2b229f907654cc53cb5d40e80f7fed1ec9c
> +// 06cec02e676a484857aee38b0071a1d846ec9457
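
To make the intended split concrete: the instrumented atomic_long_*() ops above belong in ordinary kernel code, while noinstr code (e.g. early entry paths) has to stick to the raw_atomic_long_*() forms so no KASAN/KCSAN instrumentation is emitted. A hedged sketch, with the counter and both helpers invented for illustration:

| #include <linux/atomic.h>
| #include <linux/compiler_types.h>
|
| static atomic_long_t nr_entries;	/* hypothetical counter */
|
| /* Ordinary code: the instrumented op is the right choice. */
| static void account_entry(void)
| {
| 	atomic_long_inc(&nr_entries);
| }
|
| /* noinstr code must not call instrumented helpers. */
| static noinstr void account_entry_early(void)
| {
| 	raw_atomic_long_inc(&nr_entries);
| }
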
> diff --git a/include/linux/atomic/atomic-long.h b/include/linux/atomic/atomic-long.h
> index f564f71ff8afc..f6df2adadf997 100644
> --- a/include/linux/atomic/atomic-long.h
> +++ b/include/linux/atomic/atomic-long.h
> @@ -21,6 +21,16 @@ typedef atomic_t atomic_long_t;
> #define atomic_long_cond_read_relaxed atomic_cond_read_relaxed
> #endif
>
> +/**
> + * raw_atomic_long_read() - atomic load with relaxed ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically loads the value of @v with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_read() elsewhere.
> + *
> + * Return: The value loaded from @v.
> + */
> static __always_inline long
> raw_atomic_long_read(const atomic_long_t *v)
> {
> @@ -31,6 +41,16 @@ raw_atomic_long_read(const atomic_long_t *v)
> #endif
> }
>
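
For readers without the full file in front of them: the body elided by the hunk context presumably just dispatches on CONFIG_64BIT, along the lines of the following (paraphrased, not a verbatim quote of the generated header):

| static __always_inline long
| raw_atomic_long_read(const atomic_long_t *v)
| {
| #ifdef CONFIG_64BIT
| 	return raw_atomic64_read(v);
| #else
| 	return raw_atomic_read(v);
| #endif
| }
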
> +/**
> + * raw_atomic_long_read_acquire() - atomic load with acquire ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically loads the value of @v with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_read_acquire() elsewhere.
> + *
> + * Return: The value loaded from @v.
> + */
> static __always_inline long
> raw_atomic_long_read_acquire(const atomic_long_t *v)
> {
> @@ -41,6 +61,17 @@ raw_atomic_long_read_acquire(const atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_set() - atomic set with relaxed ordering
> + * @v: pointer to atomic_long_t
> + * @i: long value to assign
> + *
> + * Atomically sets @v to @i with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_set() elsewhere.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> raw_atomic_long_set(atomic_long_t *v, long i)
> {
> @@ -51,6 +82,17 @@ raw_atomic_long_set(atomic_long_t *v, long i)
> #endif
> }
>
> +/**
> + * raw_atomic_long_set_release() - atomic set with release ordering
> + * @v: pointer to atomic_long_t
> + * @i: long value to assign
> + *
> + * Atomically sets @v to @i with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_set_release() elsewhere.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> raw_atomic_long_set_release(atomic_long_t *v, long i)
> {
> @@ -61,6 +103,17 @@ raw_atomic_long_set_release(atomic_long_t *v, long i)
> #endif
> }
>
> +/**
> + * raw_atomic_long_add() - atomic add with relaxed ordering
> + * @i: long value to add
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + @i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_add() elsewhere.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> raw_atomic_long_add(long i, atomic_long_t *v)
> {
> @@ -71,6 +124,17 @@ raw_atomic_long_add(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_add_return() - atomic add with full ordering
> + * @i: long value to add
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + @i) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_add_return() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline long
> raw_atomic_long_add_return(long i, atomic_long_t *v)
> {
> @@ -81,6 +145,17 @@ raw_atomic_long_add_return(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_add_return_acquire() - atomic add with acquire ordering
> + * @i: long value to add
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + @i) with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_add_return_acquire() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline long
> raw_atomic_long_add_return_acquire(long i, atomic_long_t *v)
> {
> @@ -91,6 +166,17 @@ raw_atomic_long_add_return_acquire(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_add_return_release() - atomic add with release ordering
> + * @i: long value to add
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + @i) with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_add_return_release() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline long
> raw_atomic_long_add_return_release(long i, atomic_long_t *v)
> {
> @@ -101,6 +187,17 @@ raw_atomic_long_add_return_release(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_add_return_relaxed() - atomic add with relaxed ordering
> + * @i: long value to add
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + @i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_add_return_relaxed() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline long
> raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v)
> {
> @@ -111,6 +208,17 @@ raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_fetch_add() - atomic add with full ordering
> + * @i: long value to add
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + @i) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_fetch_add() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> raw_atomic_long_fetch_add(long i, atomic_long_t *v)
> {
> @@ -121,6 +229,17 @@ raw_atomic_long_fetch_add(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_fetch_add_acquire() - atomic add with acquire ordering
> + * @i: long value to add
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + @i) with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_fetch_add_acquire() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
> {
> @@ -131,6 +250,17 @@ raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_fetch_add_release() - atomic add with release ordering
> + * @i: long value to add
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + @i) with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_fetch_add_release() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> raw_atomic_long_fetch_add_release(long i, atomic_long_t *v)
> {
> @@ -141,6 +271,17 @@ raw_atomic_long_fetch_add_release(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_fetch_add_relaxed() - atomic add with relaxed ordering
> + * @i: long value to add
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + @i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_fetch_add_relaxed() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
> {
> @@ -151,6 +292,17 @@ raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_sub() - atomic subtract with relaxed ordering
> + * @i: long value to subtract
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v - @i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_sub() elsewhere.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> raw_atomic_long_sub(long i, atomic_long_t *v)
> {
> @@ -161,6 +313,17 @@ raw_atomic_long_sub(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_sub_return() - atomic subtract with full ordering
> + * @i: long value to subtract
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v - @i) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_sub_return() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline long
> raw_atomic_long_sub_return(long i, atomic_long_t *v)
> {
> @@ -171,6 +334,17 @@ raw_atomic_long_sub_return(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_sub_return_acquire() - atomic subtract with acquire ordering
> + * @i: long value to subtract
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v - @i) with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_sub_return_acquire() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline long
> raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v)
> {
> @@ -181,6 +355,17 @@ raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_sub_return_release() - atomic subtract with release ordering
> + * @i: long value to subtract
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v - @i) with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_sub_return_release() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline long
> raw_atomic_long_sub_return_release(long i, atomic_long_t *v)
> {
> @@ -191,6 +376,17 @@ raw_atomic_long_sub_return_release(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_sub_return_relaxed() - atomic subtract with relaxed ordering
> + * @i: long value to subtract
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v - @i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_sub_return_relaxed() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline long
> raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
> {
> @@ -201,6 +397,17 @@ raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_fetch_sub() - atomic subtract with full ordering
> + * @i: long value to subtract
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v - @i) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_fetch_sub() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> raw_atomic_long_fetch_sub(long i, atomic_long_t *v)
> {
> @@ -211,6 +418,17 @@ raw_atomic_long_fetch_sub(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_fetch_sub_acquire() - atomic subtract with acquire ordering
> + * @i: long value to subtract
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v - @i) with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_fetch_sub_acquire() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
> {
> @@ -221,6 +439,17 @@ raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_fetch_sub_release() - atomic subtract with release ordering
> + * @i: long value to subtract
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v - @i) with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_fetch_sub_release() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v)
> {
> @@ -231,6 +460,17 @@ raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_fetch_sub_relaxed() - atomic subtract with relaxed ordering
> + * @i: long value to subtract
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v - @i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_fetch_sub_relaxed() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
> {
> @@ -241,6 +481,16 @@ raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_inc() - atomic increment with relaxed ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + 1) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_inc() elsewhere.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> raw_atomic_long_inc(atomic_long_t *v)
> {
> @@ -251,6 +501,16 @@ raw_atomic_long_inc(atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_inc_return() - atomic increment with full ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + 1) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_inc_return() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline long
> raw_atomic_long_inc_return(atomic_long_t *v)
> {
> @@ -261,6 +521,16 @@ raw_atomic_long_inc_return(atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_inc_return_acquire() - atomic increment with acquire ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + 1) with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_inc_return_acquire() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline long
> raw_atomic_long_inc_return_acquire(atomic_long_t *v)
> {
> @@ -271,6 +541,16 @@ raw_atomic_long_inc_return_acquire(atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_inc_return_release() - atomic increment with release ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + 1) with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_inc_return_release() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline long
> raw_atomic_long_inc_return_release(atomic_long_t *v)
> {
> @@ -281,6 +561,16 @@ raw_atomic_long_inc_return_release(atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_inc_return_relaxed() - atomic increment with relaxed ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + 1) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_inc_return_relaxed() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline long
> raw_atomic_long_inc_return_relaxed(atomic_long_t *v)
> {
> @@ -291,6 +581,16 @@ raw_atomic_long_inc_return_relaxed(atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_fetch_inc() - atomic increment with full ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + 1) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_fetch_inc() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> raw_atomic_long_fetch_inc(atomic_long_t *v)
> {
> @@ -301,6 +601,16 @@ raw_atomic_long_fetch_inc(atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_fetch_inc_acquire() - atomic increment with acquire ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + 1) with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_fetch_inc_acquire() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> raw_atomic_long_fetch_inc_acquire(atomic_long_t *v)
> {
> @@ -311,6 +621,16 @@ raw_atomic_long_fetch_inc_acquire(atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_fetch_inc_release() - atomic increment with release ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + 1) with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_fetch_inc_release() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> raw_atomic_long_fetch_inc_release(atomic_long_t *v)
> {
> @@ -321,6 +641,16 @@ raw_atomic_long_fetch_inc_release(atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_fetch_inc_relaxed() - atomic increment with relaxed ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + 1) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_fetch_inc_relaxed() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v)
> {
> @@ -331,6 +661,16 @@ raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_dec() - atomic decrement with relaxed ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v - 1) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_dec() elsewhere.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> raw_atomic_long_dec(atomic_long_t *v)
> {
> @@ -341,6 +681,16 @@ raw_atomic_long_dec(atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_dec_return() - atomic decrement with full ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v - 1) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_dec_return() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline long
> raw_atomic_long_dec_return(atomic_long_t *v)
> {
> @@ -351,6 +701,16 @@ raw_atomic_long_dec_return(atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_dec_return_acquire() - atomic decrement with acquire ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v - 1) with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_dec_return_acquire() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline long
> raw_atomic_long_dec_return_acquire(atomic_long_t *v)
> {
> @@ -361,6 +721,16 @@ raw_atomic_long_dec_return_acquire(atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_dec_return_release() - atomic decrement with release ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v - 1) with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_dec_return_release() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline long
> raw_atomic_long_dec_return_release(atomic_long_t *v)
> {
> @@ -371,6 +741,16 @@ raw_atomic_long_dec_return_release(atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_dec_return_relaxed() - atomic decrement with relaxed ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v - 1) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_dec_return_relaxed() elsewhere.
> + *
> + * Return: The updated value of @v.
> + */
> static __always_inline long
> raw_atomic_long_dec_return_relaxed(atomic_long_t *v)
> {
> @@ -381,6 +761,16 @@ raw_atomic_long_dec_return_relaxed(atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_fetch_dec() - atomic decrement with full ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v - 1) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_fetch_dec() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> raw_atomic_long_fetch_dec(atomic_long_t *v)
> {
> @@ -391,6 +781,16 @@ raw_atomic_long_fetch_dec(atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_fetch_dec_acquire() - atomic decrement with acquire ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v - 1) with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_fetch_dec_acquire() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> raw_atomic_long_fetch_dec_acquire(atomic_long_t *v)
> {
> @@ -401,6 +801,16 @@ raw_atomic_long_fetch_dec_acquire(atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_fetch_dec_release() - atomic decrement with release ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v - 1) with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_fetch_dec_release() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> raw_atomic_long_fetch_dec_release(atomic_long_t *v)
> {
> @@ -411,6 +821,16 @@ raw_atomic_long_fetch_dec_release(atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_fetch_dec_relaxed() - atomic decrement with relaxed ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v - 1) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_fetch_dec_relaxed() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v)
> {
> @@ -421,6 +841,17 @@ raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_and() - atomic bitwise AND with relaxed ordering
> + * @i: long value
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v & @i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_and() elsewhere.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> raw_atomic_long_and(long i, atomic_long_t *v)
> {
> @@ -431,6 +862,17 @@ raw_atomic_long_and(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_fetch_and() - atomic bitwise AND with full ordering
> + * @i: long value
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v & @i) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_fetch_and() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> raw_atomic_long_fetch_and(long i, atomic_long_t *v)
> {
> @@ -441,6 +883,17 @@ raw_atomic_long_fetch_and(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_fetch_and_acquire() - atomic bitwise AND with acquire ordering
> + * @i: long value
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v & @i) with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_fetch_and_acquire() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
> {
> @@ -451,6 +904,17 @@ raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_fetch_and_release() - atomic bitwise AND with release ordering
> + * @i: long value
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v & @i) with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_fetch_and_release() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> raw_atomic_long_fetch_and_release(long i, atomic_long_t *v)
> {
> @@ -461,6 +925,17 @@ raw_atomic_long_fetch_and_release(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_fetch_and_relaxed() - atomic bitwise AND with relaxed ordering
> + * @i: long value
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v & @i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_fetch_and_relaxed() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
> {
> @@ -471,6 +946,17 @@ raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_andnot() - atomic bitwise AND NOT with relaxed ordering
> + * @i: long value
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v & ~@i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_andnot() elsewhere.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> raw_atomic_long_andnot(long i, atomic_long_t *v)
> {
> @@ -481,6 +967,17 @@ raw_atomic_long_andnot(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_fetch_andnot() - atomic bitwise AND NOT with full ordering
> + * @i: long value
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v & ~@i) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_fetch_andnot() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> raw_atomic_long_fetch_andnot(long i, atomic_long_t *v)
> {
> @@ -491,6 +988,17 @@ raw_atomic_long_fetch_andnot(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_fetch_andnot_acquire() - atomic bitwise AND NOT with acquire ordering
> + * @i: long value
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v & ~@i) with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_fetch_andnot_acquire() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
> {
> @@ -501,6 +1009,17 @@ raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_fetch_andnot_release() - atomic bitwise AND NOT with release ordering
> + * @i: long value
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v & ~@i) with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_fetch_andnot_release() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
> {
> @@ -511,6 +1030,17 @@ raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_fetch_andnot_relaxed() - atomic bitwise AND NOT with relaxed ordering
> + * @i: long value
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v & ~@i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_fetch_andnot_relaxed() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
> {
> @@ -521,6 +1051,17 @@ raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_or() - atomic bitwise OR with relaxed ordering
> + * @i: long value
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v | @i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_or() elsewhere.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> raw_atomic_long_or(long i, atomic_long_t *v)
> {
> @@ -531,6 +1072,17 @@ raw_atomic_long_or(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_fetch_or() - atomic bitwise OR with full ordering
> + * @i: long value
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v | @i) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_fetch_or() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> raw_atomic_long_fetch_or(long i, atomic_long_t *v)
> {
> @@ -541,6 +1093,17 @@ raw_atomic_long_fetch_or(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_fetch_or_acquire() - atomic bitwise OR with acquire ordering
> + * @i: long value
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v | @i) with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_fetch_or_acquire() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
> {
> @@ -551,6 +1114,17 @@ raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_fetch_or_release() - atomic bitwise OR with release ordering
> + * @i: long value
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v | @i) with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_fetch_or_release() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> raw_atomic_long_fetch_or_release(long i, atomic_long_t *v)
> {
> @@ -561,6 +1135,17 @@ raw_atomic_long_fetch_or_release(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_fetch_or_relaxed() - atomic bitwise OR with relaxed ordering
> + * @i: long value
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v | @i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_fetch_or_relaxed() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
> {
> @@ -571,6 +1156,17 @@ raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_xor() - atomic bitwise XOR with relaxed ordering
> + * @i: long value
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v ^ @i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_xor() elsewhere.
> + *
> + * Return: Nothing.
> + */
> static __always_inline void
> raw_atomic_long_xor(long i, atomic_long_t *v)
> {
> @@ -581,6 +1177,17 @@ raw_atomic_long_xor(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_fetch_xor() - atomic bitwise XOR with full ordering
> + * @i: long value
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v ^ @i) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_fetch_xor() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> raw_atomic_long_fetch_xor(long i, atomic_long_t *v)
> {
> @@ -591,6 +1198,17 @@ raw_atomic_long_fetch_xor(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_fetch_xor_acquire() - atomic bitwise XOR with acquire ordering
> + * @i: long value
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v ^ @i) with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_fetch_xor_acquire() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
> {
> @@ -601,6 +1219,17 @@ raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_fetch_xor_release() - atomic bitwise XOR with release ordering
> + * @i: long value
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v ^ @i) with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_fetch_xor_release() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v)
> {
> @@ -611,6 +1240,17 @@ raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_fetch_xor_relaxed() - atomic bitwise XOR with relaxed ordering
> + * @i: long value
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v ^ @i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_fetch_xor_relaxed() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
> {
> @@ -621,6 +1261,17 @@ raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_xchg() - atomic exchange with full ordering
> + * @v: pointer to atomic_long_t
> + * @new: long value to assign
> + *
> + * Atomically updates @v to @new with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_xchg() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> raw_atomic_long_xchg(atomic_long_t *v, long new)
> {
> @@ -631,6 +1282,17 @@ raw_atomic_long_xchg(atomic_long_t *v, long new)
> #endif
> }
>
> +/**
> + * raw_atomic_long_xchg_acquire() - atomic exchange with acquire ordering
> + * @v: pointer to atomic_long_t
> + * @new: long value to assign
> + *
> + * Atomically updates @v to @new with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_xchg_acquire() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> raw_atomic_long_xchg_acquire(atomic_long_t *v, long new)
> {
> @@ -641,6 +1303,17 @@ raw_atomic_long_xchg_acquire(atomic_long_t *v, long new)
> #endif
> }
>
> +/**
> + * raw_atomic_long_xchg_release() - atomic exchange with release ordering
> + * @v: pointer to atomic_long_t
> + * @new: long value to assign
> + *
> + * Atomically updates @v to @new with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_xchg_release() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> raw_atomic_long_xchg_release(atomic_long_t *v, long new)
> {
> @@ -651,6 +1324,17 @@ raw_atomic_long_xchg_release(atomic_long_t *v, long new)
> #endif
> }
>
> +/**
> + * raw_atomic_long_xchg_relaxed() - atomic exchange with relaxed ordering
> + * @v: pointer to atomic_long_t
> + * @new: long value to assign
> + *
> + * Atomically updates @v to @new with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_xchg_relaxed() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> raw_atomic_long_xchg_relaxed(atomic_long_t *v, long new)
> {
> @@ -661,6 +1345,18 @@ raw_atomic_long_xchg_relaxed(atomic_long_t *v, long new)
> #endif
> }
>
> +/**
> + * raw_atomic_long_cmpxchg() - atomic compare and exchange with full ordering
> + * @v: pointer to atomic_long_t
> + * @old: long value to compare with
> + * @new: long value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_cmpxchg() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
> {
> @@ -671,6 +1367,18 @@ raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new)
> #endif
> }
>
> +/**
> + * raw_atomic_long_cmpxchg_acquire() - atomic compare and exchange with acquire ordering
> + * @v: pointer to atomic_long_t
> + * @old: long value to compare with
> + * @new: long value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_cmpxchg_acquire() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
> {
> @@ -681,6 +1389,18 @@ raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new)
> #endif
> }
>
> +/**
> + * raw_atomic_long_cmpxchg_release() - atomic compare and exchange with release ordering
> + * @v: pointer to atomic_long_t
> + * @old: long value to compare with
> + * @new: long value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_cmpxchg_release() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
> {
> @@ -691,6 +1411,18 @@ raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new)
> #endif
> }
>
> +/**
> + * raw_atomic_long_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering
> + * @v: pointer to atomic_long_t
> + * @old: long value to compare with
> + * @new: long value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_cmpxchg_relaxed() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
> {
> @@ -701,6 +1433,19 @@ raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new)
> #endif
> }
>
> +/**
> + * raw_atomic_long_try_cmpxchg() - atomic compare and exchange with full ordering
> + * @v: pointer to atomic_long_t
> + * @old: pointer to long value to compare with
> + * @new: long value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with full ordering.
> + * Otherwise, updates @old to the current value of @v.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_try_cmpxchg() elsewhere.
> + *
> + * Return: @true if the exchange occurred, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
> {
> @@ -711,6 +1456,19 @@ raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
> #endif
> }
>
> +/**
> + * raw_atomic_long_try_cmpxchg_acquire() - atomic compare and exchange with acquire ordering
> + * @v: pointer to atomic_long_t
> + * @old: pointer to long value to compare with
> + * @new: long value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with acquire ordering.
> + * Otherwise, updates @old to the current value of @v.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_try_cmpxchg_acquire() elsewhere.
> + *
> + * Return: @true if the exchange occurred, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
> {
> @@ -721,6 +1479,19 @@ raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new)
> #endif
> }
>
> +/**
> + * raw_atomic_long_try_cmpxchg_release() - atomic compare and exchange with release ordering
> + * @v: pointer to atomic_long_t
> + * @old: pointer to long value to compare with
> + * @new: long value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with release ordering.
> + * Otherwise, updates @old to the current value of @v.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_try_cmpxchg_release() elsewhere.
> + *
> + * Return: @true if the exchange occurred, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
> {
> @@ -731,6 +1502,19 @@ raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new)
> #endif
> }
>
> +/**
> + * raw_atomic_long_try_cmpxchg_relaxed() - atomic compare and exchange with relaxed ordering
> + * @v: pointer to atomic_long_t
> + * @old: pointer to long value to compare with
> + * @new: long value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with relaxed ordering.
> + * Otherwise, updates @old to the current value of @v.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_try_cmpxchg_relaxed() elsewhere.
> + *
> + * Return: @true if the exchange occurred, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
> {
> @@ -741,6 +1525,17 @@ raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new)
> #endif
> }
>
> +/**
> + * raw_atomic_long_sub_and_test() - atomic subtract and test if zero with full ordering
> + * @i: long value to subtract
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v - @i) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_sub_and_test() elsewhere.
> + *
> + * Return: @true if the resulting value of @v is zero, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic_long_sub_and_test(long i, atomic_long_t *v)
> {
> @@ -751,6 +1546,16 @@ raw_atomic_long_sub_and_test(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_dec_and_test() - atomic decrement and test if zero with full ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v - 1) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_dec_and_test() elsewhere.
> + *
> + * Return: @true if the resulting value of @v is zero, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic_long_dec_and_test(atomic_long_t *v)
> {
> @@ -761,6 +1566,16 @@ raw_atomic_long_dec_and_test(atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_inc_and_test() - atomic increment and test if zero with full ordering
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + 1) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_inc_and_test() elsewhere.
> + *
> + * Return: @true if the resulting value of @v is zero, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic_long_inc_and_test(atomic_long_t *v)
> {
> @@ -771,6 +1586,17 @@ raw_atomic_long_inc_and_test(atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_add_negative() - atomic add and test if negative with full ordering
> + * @i: long value to add
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + @i) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_add_negative() elsewhere.
> + *
> + * Return: @true if the resulting value of @v is negative, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic_long_add_negative(long i, atomic_long_t *v)
> {
> @@ -781,6 +1607,17 @@ raw_atomic_long_add_negative(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_add_negative_acquire() - atomic add and test if negative with acquire ordering
> + * @i: long value to add
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + @i) with acquire ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_add_negative_acquire() elsewhere.
> + *
> + * Return: @true if the resulting value of @v is negative, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v)
> {
> @@ -791,6 +1628,17 @@ raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_add_negative_release() - atomic add and test if negative with release ordering
> + * @i: long value to add
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + @i) with release ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_add_negative_release() elsewhere.
> + *
> + * Return: @true if the resulting value of @v is negative, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic_long_add_negative_release(long i, atomic_long_t *v)
> {
> @@ -801,6 +1649,17 @@ raw_atomic_long_add_negative_release(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_add_negative_relaxed() - atomic add and test if negative with relaxed ordering
> + * @i: long value to add
> + * @v: pointer to atomic_long_t
> + *
> + * Atomically updates @v to (@v + @i) with relaxed ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_add_negative_relaxed() elsewhere.
> + *
> + * Return: @true if the resulting value of @v is negative, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
> {
> @@ -811,6 +1670,18 @@ raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_fetch_add_unless() - atomic add unless value with full ordering
> + * @v: pointer to atomic_long_t
> + * @a: long value to add
> + * @u: long value to compare with
> + *
> + * If (@v != @u), atomically updates @v to (@v + @a) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_fetch_add_unless() elsewhere.
> + *
> + * Return: The original value of @v.
> + */
> static __always_inline long
> raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
> {
> @@ -821,6 +1692,18 @@ raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u)
> #endif
> }
>
> +/**
> + * raw_atomic_long_add_unless() - atomic add unless value with full ordering
> + * @v: pointer to atomic_long_t
> + * @a: long value to add
> + * @u: long value to compare with
> + *
> + * If (@v != @u), atomically updates @v to (@v + @a) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_add_unless() elsewhere.
> + *
> + * Return: @true if @v was updated, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic_long_add_unless(atomic_long_t *v, long a, long u)
> {
> @@ -831,6 +1714,16 @@ raw_atomic_long_add_unless(atomic_long_t *v, long a, long u)
> #endif
> }
>
> +/**
> + * raw_atomic_long_inc_not_zero() - atomic increment unless zero with full ordering
> + * @v: pointer to atomic_long_t
> + *
> + * If (@v != 0), atomically updates @v to (@v + 1) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_inc_not_zero() elsewhere.
> + *
> + * Return: @true if @v was updated, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic_long_inc_not_zero(atomic_long_t *v)
> {
> @@ -841,6 +1734,16 @@ raw_atomic_long_inc_not_zero(atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_inc_unless_negative() - atomic increment unless negative with full ordering
> + * @v: pointer to atomic_long_t
> + *
> + * If (@v >= 0), atomically updates @v to (@v + 1) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_inc_unless_negative() elsewhere.
> + *
> + * Return: @true if @v was updated, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic_long_inc_unless_negative(atomic_long_t *v)
> {
> @@ -851,6 +1754,16 @@ raw_atomic_long_inc_unless_negative(atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_dec_unless_positive() - atomic decrement unless positive with full ordering
> + * @v: pointer to atomic_long_t
> + *
> + * If (@v <= 0), atomically updates @v to (@v - 1) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_dec_unless_positive() elsewhere.
> + *
> + * Return: @true if @v was updated, @false otherwise.
> + */
> static __always_inline bool
> raw_atomic_long_dec_unless_positive(atomic_long_t *v)
> {
> @@ -861,6 +1774,16 @@ raw_atomic_long_dec_unless_positive(atomic_long_t *v)
> #endif
> }
>
> +/**
> + * raw_atomic_long_dec_if_positive() - atomic decrement if positive with full ordering
> + * @v: pointer to atomic_long_t
> + *
> + * If (@v > 0), atomically updates @v to (@v - 1) with full ordering.
> + *
> + * Safe to use in noinstr code; prefer atomic_long_dec_if_positive() elsewhere.
> + *
> + * Return: The old value of (@v - 1), regardless of whether @v was updated.
> + */
> static __always_inline long
> raw_atomic_long_dec_if_positive(atomic_long_t *v)
> {
> @@ -872,4 +1795,4 @@ raw_atomic_long_dec_if_positive(atomic_long_t *v)
> }
>
> #endif /* _LINUX_ATOMIC_LONG_H */
> -// e785d25cc3f220b7d473d36aac9da85dd7eb13a8
> +// 029d2e3a493086671e874a4c2e0e42084be42403
> diff --git a/scripts/atomic/atomic-tbl.sh b/scripts/atomic/atomic-tbl.sh
> index 81d5c32039dd4..d4d4b474e8d56 100755
> --- a/scripts/atomic/atomic-tbl.sh
> +++ b/scripts/atomic/atomic-tbl.sh
> @@ -36,9 +36,16 @@ meta_has_relaxed()
> meta_in "$1" "BFIR"
> }
>
> -#find_fallback_template(pfx, name, sfx, order)
> -find_fallback_template()
> +#meta_is_implicitly_relaxed(meta)
> +meta_is_implicitly_relaxed()
> +{
> + meta_in "$1" "vls"
> +}
> +
> +#find_template(tmpltype, pfx, name, sfx, order)
> +find_template()
> {
> + local tmpltype="$1"; shift
> local pfx="$1"; shift
> local name="$1"; shift
> local sfx="$1"; shift
> @@ -52,8 +59,8 @@ find_fallback_template()
> #
> # Start at the most specific, and fall back to the most general. Once
> # we find a specific fallback, don't bother looking for more.
> - for base in "${pfx}${name}${sfx}${order}" "${name}"; do
> - file="${ATOMICDIR}/fallbacks/${base}"
> + for base in "${pfx}${name}${sfx}${order}" "${pfx}${name}${sfx}" "${name}"; do
> + file="${ATOMICDIR}/${tmpltype}/${base}"
>
> if [ -f "${file}" ]; then
> printf "${file}"
> @@ -62,6 +69,18 @@ find_fallback_template()
> done
> }
>
> +#find_fallback_template(pfx, name, sfx, order)
> +find_fallback_template()
> +{
> + find_template "fallbacks" "$@"
> +}
> +
> +#find_kerneldoc_template(pfx, name, sfx, order)
> +find_kerneldoc_template()
> +{
> + find_template "kerneldoc" "$@"
> +}
> +
> #gen_ret_type(meta, int)
> gen_ret_type() {
> local meta="$1"; shift
> @@ -142,6 +161,91 @@ gen_args()
> done
> }
>
> +#gen_desc_return(meta)
> +gen_desc_return()
> +{
> + local meta="$1"; shift
> +
> + case "${meta}" in
> + [v])
> + printf "Return: Nothing."
> + ;;
> + [Ff])
> + printf "Return: The original value of @v."
> + ;;
> + [R])
> + printf "Return: The updated value of @v."
> + ;;
> + [l])
> + printf "Return: The value of @v."
> + ;;
> + esac
> +}
> +
> +#gen_template_kerneldoc(template, class, meta, pfx, name, sfx, order, atomic, int, args...)
> +gen_template_kerneldoc()
> +{
> + local template="$1"; shift
> + local class="$1"; shift
> + local meta="$1"; shift
> + local pfx="$1"; shift
> + local name="$1"; shift
> + local sfx="$1"; shift
> + local order="$1"; shift
> + local atomic="$1"; shift
> + local int="$1"; shift
> +
> + local atomicname="${atomic}_${pfx}${name}${sfx}${order}"
> +
> + local ret="$(gen_ret_type "${meta}" "${int}")"
> + local retstmt="$(gen_ret_stmt "${meta}")"
> + local params="$(gen_params "${int}" "${atomic}" "$@")"
> + local args="$(gen_args "$@")"
> + local desc_order=""
> + local desc_instrumentation=""
> + local desc_return=""
> +
> + if [ ! -z "${order}" ]; then
> + desc_order="${order##_}"
> + elif meta_is_implicitly_relaxed "${meta}"; then
> + desc_order="relaxed"
> + else
> + desc_order="full"
> + fi
> +
> + if [ -z "${class}" ]; then
> + desc_noinstr="Unsafe to use in noinstr code; use raw_${atomicname}() there."
> + else
> + desc_noinstr="Safe to use in noinstr code; prefer ${atomicname}() elsewhere."
> + fi
> +
> + desc_return="$(gen_desc_return "${meta}")"
> +
> + . ${template}
> +}
> +
> +#gen_kerneldoc(class, meta, pfx, name, sfx, order, atomic, int, args...)
> +gen_kerneldoc()
> +{
> + local class="$1"; shift
> + local meta="$1"; shift
> + local pfx="$1"; shift
> + local name="$1"; shift
> + local sfx="$1"; shift
> + local order="$1"; shift
> +
> + local atomicname="${atomic}_${pfx}${name}${sfx}${order}"
> +
> + local tmpl="$(find_kerneldoc_template "${pfx}" "${name}" "${sfx}" "${order}")"
> + if [ -z "${tmpl}" ]; then
> + printf "/*\n"
> + printf " * No kerneldoc available for ${class}${atomicname}\n"
> + printf " */\n"
> + else
> + gen_template_kerneldoc "${tmpl}" "${class}" "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "$@"
> + fi
> +}
> +
> #gen_proto_order_variants(meta, pfx, name, sfx, ...)
> gen_proto_order_variants()
> {
> diff --git a/scripts/atomic/gen-atomic-fallback.sh b/scripts/atomic/gen-atomic-fallback.sh
> index 2b470d31e3539..c0c8a85d7c81b 100755
> --- a/scripts/atomic/gen-atomic-fallback.sh
> +++ b/scripts/atomic/gen-atomic-fallback.sh
> @@ -73,6 +73,8 @@ gen_proto_order_variant()
> local params="$(gen_params "${int}" "${atomic}" "$@")"
> local args="$(gen_args "$@")"
>
> + gen_kerneldoc "raw_" "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "${atomic}" "${int}" "$@"
> +
> printf "static __always_inline ${ret}\n"
> printf "raw_${atomicname}(${params})\n"
> printf "{\n"
> diff --git a/scripts/atomic/gen-atomic-instrumented.sh b/scripts/atomic/gen-atomic-instrumented.sh
> index 93c949aa9e544..9d3863ceb4d48 100755
> --- a/scripts/atomic/gen-atomic-instrumented.sh
> +++ b/scripts/atomic/gen-atomic-instrumented.sh
> @@ -67,6 +67,8 @@ gen_proto_order_variant()
> local checks="$(gen_params_checks "${meta}" "${order}" "$@")"
> local args="$(gen_args "$@")"
> local retstmt="$(gen_ret_stmt "${meta}")"
> +
> + gen_kerneldoc "" "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "${atomic}" "${int}" "$@"
>
> cat <<EOF
> static __always_inline ${ret}
> diff --git a/scripts/atomic/gen-atomic-long.sh b/scripts/atomic/gen-atomic-long.sh
> index af27a71b37ef1..9826be3ba9862 100755
> --- a/scripts/atomic/gen-atomic-long.sh
> +++ b/scripts/atomic/gen-atomic-long.sh
> @@ -49,6 +49,8 @@ gen_proto_order_variant()
> local argscast_64="$(gen_args_cast "s64" "atomic64" "$@")"
> local retstmt="$(gen_ret_stmt "${meta}")"
>
> + gen_kerneldoc "raw_" "${meta}" "${pfx}" "${name}" "${sfx}" "${order}" "atomic_long" "long" "$@"
> +
> cat <<EOF
> static __always_inline ${ret}
> raw_atomic_long_${atomicname}(${params})
> diff --git a/scripts/atomic/kerneldoc/add b/scripts/atomic/kerneldoc/add
> new file mode 100644
> index 0000000000000..991f3dafceea3
> --- /dev/null
> +++ b/scripts/atomic/kerneldoc/add
> @@ -0,0 +1,13 @@
> +cat <<EOF
> +/**
> + * ${class}${atomicname}() - atomic add with ${desc_order} ordering
> + * @i: ${int} value to add
> + * @v: pointer to ${atomic}_t
> + *
> + * Atomically updates @v to (@v + @i) with ${desc_order} ordering.
> + *
> + * ${desc_noinstr}
> + *
> + * ${desc_return}
> + */
> +EOF
> diff --git a/scripts/atomic/kerneldoc/add_negative b/scripts/atomic/kerneldoc/add_negative
> new file mode 100644
> index 0000000000000..f4ca1f05d1d81
> --- /dev/null
> +++ b/scripts/atomic/kerneldoc/add_negative
> @@ -0,0 +1,13 @@
> +cat <<EOF
> +/**
> + * ${class}${atomicname}() - atomic add and test if negative with ${desc_order} ordering
> + * @i: ${int} value to add
> + * @v: pointer to ${atomic}_t
> + *
> + * Atomically updates @v to (@v + @i) with ${desc_order} ordering.
> + *
> + * ${desc_noinstr}
> + *
> + * Return: @true if the resulting value of @v is negative, @false otherwise.
> + */
> +EOF
> diff --git a/scripts/atomic/kerneldoc/add_unless b/scripts/atomic/kerneldoc/add_unless
> new file mode 100644
> index 0000000000000..f828e5f6750c2
> --- /dev/null
> +++ b/scripts/atomic/kerneldoc/add_unless
> @@ -0,0 +1,18 @@
> +if [ -z "${pfx}" ]; then
> + desc_return="Return: @true if @v was updated, @false otherwise."
> +fi
> +
> +cat <<EOF
> +/**
> + * ${class}${atomicname}() - atomic add unless value with ${desc_order} ordering
> + * @v: pointer to ${atomic}_t
> + * @a: ${int} value to add
> + * @u: ${int} value to compare with
> + *
> + * If (@v != @u), atomically updates @v to (@v + @a) with ${desc_order} ordering.
> + *
> + * ${desc_noinstr}
> + *
> + * ${desc_return}
> + */
> +EOF
> diff --git a/scripts/atomic/kerneldoc/and b/scripts/atomic/kerneldoc/and
> new file mode 100644
> index 0000000000000..a923574351fc2
> --- /dev/null
> +++ b/scripts/atomic/kerneldoc/and
> @@ -0,0 +1,13 @@
> +cat <<EOF
> +/**
> + * ${class}${atomicname}() - atomic bitwise AND with ${desc_order} ordering
> + * @i: ${int} value
> + * @v: pointer to ${atomic}_t
> + *
> + * Atomically updates @v to (@v & @i) with ${desc_order} ordering.
> + *
> + * ${desc_noinstr}
> + *
> + * ${desc_return}
> + */
> +EOF
> diff --git a/scripts/atomic/kerneldoc/andnot b/scripts/atomic/kerneldoc/andnot
> new file mode 100644
> index 0000000000000..64bb509f866bf
> --- /dev/null
> +++ b/scripts/atomic/kerneldoc/andnot
> @@ -0,0 +1,13 @@
> +cat <<EOF
> +/**
> + * ${class}${atomicname}() - atomic bitwise AND NOT with ${desc_order} ordering
> + * @i: ${int} value
> + * @v: pointer to ${atomic}_t
> + *
> + * Atomically updates @v to (@v & ~@i) with ${desc_order} ordering.
> + *
> + * ${desc_noinstr}
> + *
> + * ${desc_return}
> + */
> +EOF
> diff --git a/scripts/atomic/kerneldoc/cmpxchg b/scripts/atomic/kerneldoc/cmpxchg
> new file mode 100644
> index 0000000000000..3bce328f50cff
> --- /dev/null
> +++ b/scripts/atomic/kerneldoc/cmpxchg
> @@ -0,0 +1,14 @@
> +cat <<EOF
> +/**
> + * ${class}${atomicname}() - atomic compare and exchange with ${desc_order} ordering
> + * @v: pointer to ${atomic}_t
> + * @old: ${int} value to compare with
> + * @new: ${int} value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with ${desc_order} ordering.
> + *
> + * ${desc_noinstr}
> + *
> + * Return: The original value of @v.
> + */
> +EOF
> diff --git a/scripts/atomic/kerneldoc/dec b/scripts/atomic/kerneldoc/dec
> new file mode 100644
> index 0000000000000..bbeecbc4c20a4
> --- /dev/null
> +++ b/scripts/atomic/kerneldoc/dec
> @@ -0,0 +1,12 @@
> +cat <<EOF
> +/**
> + * ${class}${atomicname}() - atomic decrement with ${desc_order} ordering
> + * @v: pointer to ${atomic}_t
> + *
> + * Atomically updates @v to (@v - 1) with ${desc_order} ordering.
> + *
> + * ${desc_noinstr}
> + *
> + * ${desc_return}
> + */
> +EOF
> diff --git a/scripts/atomic/kerneldoc/dec_and_test b/scripts/atomic/kerneldoc/dec_and_test
> new file mode 100644
> index 0000000000000..71bbd23ce4bca
> --- /dev/null
> +++ b/scripts/atomic/kerneldoc/dec_and_test
> @@ -0,0 +1,12 @@
> +cat <<EOF
> +/**
> + * ${class}${atomicname}() - atomic decrement and test if zero with ${desc_order} ordering
> + * @v: pointer to ${atomic}_t
> + *
> + * Atomically updates @v to (@v - 1) with ${desc_order} ordering.
> + *
> + * ${desc_noinstr}
> + *
> + * Return: @true if the resulting value of @v is zero, @false otherwise.
> + */
> +EOF
> diff --git a/scripts/atomic/kerneldoc/dec_if_positive b/scripts/atomic/kerneldoc/dec_if_positive
> new file mode 100644
> index 0000000000000..7c742866fb6b6
> --- /dev/null
> +++ b/scripts/atomic/kerneldoc/dec_if_positive
> @@ -0,0 +1,12 @@
> +cat <<EOF
> +/**
> + * ${class}${atomicname}() - atomic decrement if positive with ${desc_order} ordering
> + * @v: pointer to ${atomic}_t
> + *
> + * If (@v > 0), atomically updates @v to (@v - 1) with ${desc_order} ordering.
> + *
> + * ${desc_noinstr}
> + *
> + * Return: The old value of (@v - 1), regardless of whether @v was updated.
> + */
> +EOF
> diff --git a/scripts/atomic/kerneldoc/dec_unless_positive b/scripts/atomic/kerneldoc/dec_unless_positive
> new file mode 100644
> index 0000000000000..ee73612f03547
> --- /dev/null
> +++ b/scripts/atomic/kerneldoc/dec_unless_positive
> @@ -0,0 +1,12 @@
> +cat <<EOF
> +/**
> + * ${class}${atomicname}() - atomic decrement unless positive with ${desc_order} ordering
> + * @v: pointer to ${atomic}_t
> + *
> + * If (@v <= 0), atomically updates @v to (@v - 1) with ${desc_order} ordering.
> + *
> + * ${desc_noinstr}
> + *
> + * Return: @true if @v was updated, @false otherwise.
> + */
> +EOF
> diff --git a/scripts/atomic/kerneldoc/inc b/scripts/atomic/kerneldoc/inc
> new file mode 100644
> index 0000000000000..9f14f1b3d2ef2
> --- /dev/null
> +++ b/scripts/atomic/kerneldoc/inc
> @@ -0,0 +1,12 @@
> +cat <<EOF
> +/**
> + * ${class}${atomicname}() - atomic increment with ${desc_order} ordering
> + * @v: pointer to ${atomic}_t
> + *
> + * Atomically updates @v to (@v + 1) with ${desc_order} ordering.
> + *
> + * ${desc_noinstr}
> + *
> + * ${desc_return}
> + */
> +EOF
> diff --git a/scripts/atomic/kerneldoc/inc_and_test b/scripts/atomic/kerneldoc/inc_and_test
> new file mode 100644
> index 0000000000000..971694d59bbd1
> --- /dev/null
> +++ b/scripts/atomic/kerneldoc/inc_and_test
> @@ -0,0 +1,12 @@
> +cat <<EOF
> +/**
> + * ${class}${atomicname}() - atomic increment and test if zero with ${desc_order} ordering
> + * @v: pointer to ${atomic}_t
> + *
> + * Atomically updates @v to (@v + 1) with ${desc_order} ordering.
> + *
> + * ${desc_noinstr}
> + *
> + * Return: @true if the resulting value of @v is zero, @false otherwise.
> + */
> +EOF
> diff --git a/scripts/atomic/kerneldoc/inc_not_zero b/scripts/atomic/kerneldoc/inc_not_zero
> new file mode 100644
> index 0000000000000..618be08e653e5
> --- /dev/null
> +++ b/scripts/atomic/kerneldoc/inc_not_zero
> @@ -0,0 +1,12 @@
> +cat <<EOF
> +/**
> + * ${class}${atomicname}() - atomic increment unless zero with ${desc_order} ordering
> + * @v: pointer to ${atomic}_t
> + *
> + * If (@v != 0), atomically updates @v to (@v + 1) with ${desc_order} ordering.
> + *
> + * ${desc_noinstr}
> + *
> + * Return: @true if @v was updated, @false otherwise.
> + */
> +EOF
> diff --git a/scripts/atomic/kerneldoc/inc_unless_negative b/scripts/atomic/kerneldoc/inc_unless_negative
> new file mode 100644
> index 0000000000000..597f23d4dc8dc
> --- /dev/null
> +++ b/scripts/atomic/kerneldoc/inc_unless_negative
> @@ -0,0 +1,12 @@
> +cat <<EOF
> +/**
> + * ${class}${atomicname}() - atomic increment unless negative with ${desc_order} ordering
> + * @v: pointer to ${atomic}_t
> + *
> + * If (@v >= 0), atomically updates @v to (@v + 1) with ${desc_order} ordering.
> + *
> + * ${desc_noinstr}
> + *
> + * Return: @true if @v was updated, @false otherwise.
> + */
> +EOF
> diff --git a/scripts/atomic/kerneldoc/or b/scripts/atomic/kerneldoc/or
> new file mode 100644
> index 0000000000000..55b33de504165
> --- /dev/null
> +++ b/scripts/atomic/kerneldoc/or
> @@ -0,0 +1,13 @@
> +cat <<EOF
> +/**
> + * ${class}${atomicname}() - atomic bitwise OR with ${desc_order} ordering
> + * @i: ${int} value
> + * @v: pointer to ${atomic}_t
> + *
> + * Atomically updates @v to (@v | @i) with ${desc_order} ordering.
> + *
> + * ${desc_noinstr}
> + *
> + * ${desc_return}
> + */
> +EOF
> diff --git a/scripts/atomic/kerneldoc/read b/scripts/atomic/kerneldoc/read
> new file mode 100644
> index 0000000000000..89fe6147c9643
> --- /dev/null
> +++ b/scripts/atomic/kerneldoc/read
> @@ -0,0 +1,12 @@
> +cat <<EOF
> +/**
> + * ${class}${atomicname}() - atomic load with ${desc_order} ordering
> + * @v: pointer to ${atomic}_t
> + *
> + * Atomically loads the value of @v with ${desc_order} ordering.
> + *
> + * ${desc_noinstr}
> + *
> + * Return: The value loaded from @v.
> + */
> +EOF
> diff --git a/scripts/atomic/kerneldoc/set b/scripts/atomic/kerneldoc/set
> new file mode 100644
> index 0000000000000..e82cb9ebbc423
> --- /dev/null
> +++ b/scripts/atomic/kerneldoc/set
> @@ -0,0 +1,13 @@
> +cat <<EOF
> +/**
> + * ${class}${atomicname}() - atomic set with ${desc_order} ordering
> + * @v: pointer to ${atomic}_t
> + * @i: ${int} value to assign
> + *
> + * Atomically sets @v to @i with ${desc_order} ordering.
> + *
> + * ${desc_noinstr}
> + *
> + * Return: Nothing.
> + */
> +EOF
> diff --git a/scripts/atomic/kerneldoc/sub b/scripts/atomic/kerneldoc/sub
> new file mode 100644
> index 0000000000000..3ba642d04407a
> --- /dev/null
> +++ b/scripts/atomic/kerneldoc/sub
> @@ -0,0 +1,13 @@
> +cat <<EOF
> +/**
> + * ${class}${atomicname}() - atomic subtract with ${desc_order} ordering
> + * @i: ${int} value to subtract
> + * @v: pointer to ${atomic}_t
> + *
> + * Atomically updates @v to (@v - @i) with ${desc_order} ordering.
> + *
> + * ${desc_noinstr}
> + *
> + * ${desc_return}
> + */
> +EOF
> diff --git a/scripts/atomic/kerneldoc/sub_and_test b/scripts/atomic/kerneldoc/sub_and_test
> new file mode 100644
> index 0000000000000..d3760f7749d4e
> --- /dev/null
> +++ b/scripts/atomic/kerneldoc/sub_and_test
> @@ -0,0 +1,13 @@
> +cat <<EOF
> +/**
> + * ${class}${atomicname}() - atomic subtract and test if zero with ${desc_order} ordering
> + * @i: ${int} value to subtract
> + * @v: pointer to ${atomic}_t
> + *
> + * Atomically updates @v to (@v - @i) with ${desc_order} ordering.
> + *
> + * ${desc_noinstr}
> + *
> + * Return: @true if the resulting value of @v is zero, @false otherwise.
> + */
> +EOF
> diff --git a/scripts/atomic/kerneldoc/try_cmpxchg b/scripts/atomic/kerneldoc/try_cmpxchg
> new file mode 100644
> index 0000000000000..296553206c06e
> --- /dev/null
> +++ b/scripts/atomic/kerneldoc/try_cmpxchg
> @@ -0,0 +1,15 @@
> +cat <<EOF
> +/**
> + * ${class}${atomicname}() - atomic compare and exchange with ${desc_order} ordering
> + * @v: pointer to ${atomic}_t
> + * @old: pointer to ${int} value to compare with
> + * @new: ${int} value to assign
> + *
> + * If (@v == @old), atomically updates @v to @new with ${desc_order} ordering.
> + * Otherwise, updates @old to the current value of @v.
> + *
> + * ${desc_noinstr}
> + *
> + * Return: @true if the exchange occurred, @false otherwise.
> + */
> +EOF
> diff --git a/scripts/atomic/kerneldoc/xchg b/scripts/atomic/kerneldoc/xchg
> new file mode 100644
> index 0000000000000..75f04c085f252
> --- /dev/null
> +++ b/scripts/atomic/kerneldoc/xchg
> @@ -0,0 +1,13 @@
> +cat <<EOF
> +/**
> + * ${class}${atomicname}() - atomic exchange with ${desc_order} ordering
> + * @v: pointer to ${atomic}_t
> + * @new: ${int} value to assign
> + *
> + * Atomically updates @v to @new with ${desc_order} ordering.
> + *
> + * ${desc_noinstr}
> + *
> + * Return: The original value of @v.
> + */
> +EOF
> diff --git a/scripts/atomic/kerneldoc/xor b/scripts/atomic/kerneldoc/xor
> new file mode 100644
> index 0000000000000..8837270f2806d
> --- /dev/null
> +++ b/scripts/atomic/kerneldoc/xor
> @@ -0,0 +1,13 @@
> +cat <<EOF
> +/**
> + * ${class}${atomicname}() - atomic bitwise XOR with ${desc_order} ordering
> + * @i: ${int} value
> + * @v: pointer to ${atomic}_t
> + *
> + * Atomically updates @v to (@v ^ @i) with ${desc_order} ordering.
> + *
> + * ${desc_noinstr}
> + *
> + * ${desc_return}
> + */
> +EOF
> --
> 2.30.2
>
On Thu, Jun 15, 2023 at 07:07:13AM -0700, Paul E. McKenney wrote:
> On Mon, Jun 05, 2023 at 08:01:22AM +0100, Mark Rutland wrote:
> > Currently the atomics are documented in Documentation/atomic_t.txt, and
> > have no kerneldoc comments. There are a sufficient number of gotchas
> > (e.g. semantics, noinstr-safety) that it would be nice to have comments
> > to call these out, and it would be nice to have kerneldoc comments such
> > that these can be collated.
> >
> > While it's possible to derive the semantics from the code, this can be
> > painful given the amount of indirection we currently have (e.g. fallback
> > paths), and it's easy to be misled by naming, e.g.
> >
> > * The unconditional void-returning ops *only* have relaxed variants
> > without a _relaxed suffix, and can easily be mistaken for being fully
> > ordered.
> >
> > It would be nice to give these a _relaxed() suffix, but this would
> > result in significant churn throughout the kernel.
> >
> > * Our naming of conditional and unconditional+test ops is rather
> > inconsistent, and it can be difficult to derive the name of an
> > operation, or to identify where an op is conditional or
> > unconditional+test.
> >
> > Some ops are clearly conditional:
> > - dec_if_positive
> > - add_unless
> > - dec_unless_positive
> > - inc_unless_negative
> >
> > Some ops are clearly unconditional+test:
> > - sub_and_test
> > - dec_and_test
> > - inc_and_test
> >
> > However, what exactly those test is not obvious. A _test_zero suffix
> > might be clearer.
> >
> > Others could be read ambiguously:
> > - inc_not_zero // conditional
> > - add_negative // unconditional+test
> >
> > It would probably be worth renaming these, e.g. to inc_unless_zero and
> > add_test_negative.
> >
> > As a step towards making this more consistent and easier to understand,
> > this patch adds kerneldoc comments for all generated *atomic*_*()
> > functions. These are generated from templates, with some common text
> > shared, making it easy to extend these in future if necessary.
> >
> > I've tried to make these as consistent and clear as possible, and I've
> > deliberately ensured:
> >
> > * All ops have their ordering explicitly mentioned in the short and long
> > description.
> >
> > * All test ops have "test" in their short description.
> >
> > * All ops are described as an expression using their usual C operator.
> > For example:
> >
> > andnot: "Atomically updates @v to (@v & ~@i)"
> > inc: "Atomically updates @v to (@v + 1)"
> >
> > Which may be clearer to non-native English speakers, and allows all
> > the operations to be described in the same style.
> >
> > * All conditional ops have their condition described as an expression
> > using the usual C operators. For example:
> >
> > add_unless: "If (@v != @u), atomically updates @v to (@v + @i)"
> > cmpxchg: "If (@v == @old), atomically updates @v to @new"
> >
> > Which may be clearer to non-native English speakers, and allows all
> > the operations to be described in the same style.
> >
> > * All bitwise ops (and,andnot,or,xor) explicitly mention that they are
> > bitwise in their short description, so that they are not mistaken for
> > performing their logical equivalents.
> >
> > * The noinstr safety of each op is explicitly described, with a
> > description of whether or not to use the raw_ form of the op.
> >
> > There should be no functional change as a result of this patch.
> >
> > Reported-by: Paul E. McKenney <[email protected]>
> > Signed-off-by: Mark Rutland <[email protected]>
> > Reviewed-by: Kees Cook <[email protected]>
> > Cc: Boqun Feng <[email protected]>
> > Cc: Jonathan Corbet <[email protected]>
> > Cc: Peter Zijlstra <[email protected]>
> > Cc: Will Deacon <[email protected]>
>
> With the dec_if_positive fix:
>
> Reviewed-by: Paul E. McKenney <[email protected]>
Thanks! This is already queued in the tip tree's locking/core branch:
https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git/log/?h=locking/core
... so I was assuming that the dec_if_positive patch would be picked up atop
that.
Regardless, thanks for checking I hadn't missed anything else here! :)
Mark.
Hi,
On Mon, Jun 05, 2023 at 08:01:01AM +0100, Mark Rutland wrote:
> Most architectures define the atomic/atomic64 xchg and cmpxchg
> > operations in terms of arch_xchg and arch_cmpxchg respectively.
>
> Add fallbacks for these cases and remove the trivial cases from arch
> code. On some architectures the existing definitions are kept as these
> are used to build other arch_atomic*() operations.
>
> Signed-off-by: Mark Rutland <[email protected]>
> Reviewed-by: Kees Cook <[email protected]>
> Cc: Boqun Feng <[email protected]>
> Cc: Paul E. McKenney <[email protected]>
> Cc: Peter Zijlstra <[email protected]>
> Cc: Will Deacon <[email protected]>
This patch results in:
ERROR: modpost: "__xchg_called_with_bad_pointer" [lib/atomic64_test.ko] undefined!
when trying to build sparc64:allmodconfig.
Guenter
---
bisect log:
# bad: [60e7c4a25da68cd826719b685babbd23e73b85b0] Add linux-next specific files for 20230626
# good: [45a3e24f65e90a047bef86f927ebdc4c710edaa1] Linux 6.4-rc7
git bisect start 'HEAD' 'v6.4-rc7'
# good: [1fc7b1b3c9c3211898874f51919fcb1cf6f1cc79] Merge branch 'main' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git
git bisect good 1fc7b1b3c9c3211898874f51919fcb1cf6f1cc79
# good: [4fce1fc9cf89412590fb681fa480cde0b23b3381] Merge branch 'for-next' of git://git.kernel.dk/linux-block.git
git bisect good 4fce1fc9cf89412590fb681fa480cde0b23b3381
# bad: [cf1a0283badf6d0bfb91876583c24ef535a3c04c] Merge branch 'driver-core-next' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core.git
git bisect bad cf1a0283badf6d0bfb91876583c24ef535a3c04c
# bad: [3c5388e722ea98022b4d557ab33acca2eb16c4f0] Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git
git bisect bad 3c5388e722ea98022b4d557ab33acca2eb16c4f0
# good: [997730bdbf14f352ab03e42461f500aafdabc03e] Merge branch 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi.git
git bisect good 997730bdbf14f352ab03e42461f500aafdabc03e
# bad: [6fd8266556af196763b9f876ed682873e605469b] Merge branch into tip/master: 'ras/core'
git bisect bad 6fd8266556af196763b9f876ed682873e605469b
# good: [37380ea71463658934c2d3167d559d4034ea1c5b] Merge branch into tip/master: 'irq/core'
git bisect good 37380ea71463658934c2d3167d559d4034ea1c5b
# bad: [a967852939f864c35f155a2f431292ad6fc3fed9] Merge branch into tip/master: 'locking/core'
git bisect bad a967852939f864c35f155a2f431292ad6fc3fed9
# bad: [e50f06ce2d876c740993b5e3d01e203520391ccd] locking/atomic: m68k: add preprocessor symbols
git bisect bad e50f06ce2d876c740993b5e3d01e203520391ccd
# good: [b1fe7f2cda2a003afe316ce8dfe8d3645694a67e] x86,intel_iommu: Replace cmpxchg_double()
git bisect good b1fe7f2cda2a003afe316ce8dfe8d3645694a67e
# good: [14d72d4b6f0e88b5f683c1a5b7a876a55055852d] locking/atomic: remove fallback comments
git bisect good 14d72d4b6f0e88b5f683c1a5b7a876a55055852d
# bad: [f739287ef57bc01155e556033462e9a6ff020c97] locking/atomic: arc: add preprocessor symbols
git bisect bad f739287ef57bc01155e556033462e9a6ff020c97
# bad: [d12157efc8e083c77d054675fcdd594f54cc7e2b] locking/atomic: make atomic*_{cmp,}xchg optional
git bisect bad d12157efc8e083c77d054675fcdd594f54cc7e2b
# good: [a7bafa7969da1c0e9c342c792d8224078d1c491c] locking/atomic: hexagon: remove redundant arch_atomic_cmpxchg
git bisect good a7bafa7969da1c0e9c342c792d8224078d1c491c
# first bad commit: [d12157efc8e083c77d054675fcdd594f54cc7e2b] locking/atomic: make atomic*_{cmp,}xchg optional
On Tue, Jun 27, 2023 at 10:07:07AM -0700, Guenter Roeck wrote:
> Hi,
Hi Guenter,
> On Mon, Jun 05, 2023 at 08:01:01AM +0100, Mark Rutland wrote:
> > Most architectures define the atomic/atomic64 xchg and cmpxchg
> > > operations in terms of arch_xchg and arch_cmpxchg respectively.
> >
> > Add fallbacks for these cases and remove the trivial cases from arch
> > code. On some architectures the existing definitions are kept as these
> > are used to build other arch_atomic*() operations.
> >
> > Signed-off-by: Mark Rutland <[email protected]>
> > Reviewed-by: Kees Cook <[email protected]>
> > Cc: Boqun Feng <[email protected]>
> > Cc: Paul E. McKenney <[email protected]>
> > Cc: Peter Zijlstra <[email protected]>
> > Cc: Will Deacon <[email protected]>
>
> This patch results in:
>
> ERROR: modpost: "__xchg_called_with_bad_pointer" [lib/atomic64_test.ko] undefined!
>
> when trying to build sparc64:allmodconfig.
Hmm... it seems that in that configuration, the compiler decides to place
__arch_xchg() out-of-line, and hence can't remove the call to
__xchg_called_with_bad_pointer() via dead code elimination.
So this is due to tickling the compiler into making a different inlining
decision rather than due to a semantic issue in the patch.
Marking __arch_xchg() as __always_inline solves that in local testing, and we
should do likewise for the other bits used under the arch_ atomics.
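For reference, the sparc64 helper is shaped roughly like the sketch below
(illustrative only: the xchg32()/xchg64() helper names and the exact
signature are written from memory rather than copied from the tree). The
point is that the __xchg_called_with_bad_pointer() reference can only be
discarded when the size switch is resolved at compile time in each caller:

| /* Sketch of the shape of the fix: with plain 'inline' the compiler may
|  * emit this helper out-of-line, so the call to
|  * __xchg_called_with_bad_pointer() survives into the object and modpost
|  * complains; __always_inline keeps the switch resolvable per call site,
|  * letting dead code elimination drop the bad-pointer branch.
|  */
| static __always_inline unsigned long
| __arch_xchg(unsigned long x, __volatile__ void *ptr, int size)
| {
| 	switch (size) {
| 	case 4:
| 		return xchg32(ptr, x);
| 	case 8:
| 		return xchg64(ptr, x);
| 	}
| 	__xchg_called_with_bad_pointer();
| 	return x;
| }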
I'll try to spin a patch for that soon, unless someone beats me to it.
Thanks,
Mark.
>
> Guenter
>
> ---
> bisect log:
>
> # bad: [60e7c4a25da68cd826719b685babbd23e73b85b0] Add linux-next specific files for 20230626
> # good: [45a3e24f65e90a047bef86f927ebdc4c710edaa1] Linux 6.4-rc7
> git bisect start 'HEAD' 'v6.4-rc7'
> # good: [1fc7b1b3c9c3211898874f51919fcb1cf6f1cc79] Merge branch 'main' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git
> git bisect good 1fc7b1b3c9c3211898874f51919fcb1cf6f1cc79
> # good: [4fce1fc9cf89412590fb681fa480cde0b23b3381] Merge branch 'for-next' of git://git.kernel.dk/linux-block.git
> git bisect good 4fce1fc9cf89412590fb681fa480cde0b23b3381
> # bad: [cf1a0283badf6d0bfb91876583c24ef535a3c04c] Merge branch 'driver-core-next' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core.git
> git bisect bad cf1a0283badf6d0bfb91876583c24ef535a3c04c
> # bad: [3c5388e722ea98022b4d557ab33acca2eb16c4f0] Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git
> git bisect bad 3c5388e722ea98022b4d557ab33acca2eb16c4f0
> # good: [997730bdbf14f352ab03e42461f500aafdabc03e] Merge branch 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi.git
> git bisect good 997730bdbf14f352ab03e42461f500aafdabc03e
> # bad: [6fd8266556af196763b9f876ed682873e605469b] Merge branch into tip/master: 'ras/core'
> git bisect bad 6fd8266556af196763b9f876ed682873e605469b
> # good: [37380ea71463658934c2d3167d559d4034ea1c5b] Merge branch into tip/master: 'irq/core'
> git bisect good 37380ea71463658934c2d3167d559d4034ea1c5b
> # bad: [a967852939f864c35f155a2f431292ad6fc3fed9] Merge branch into tip/master: 'locking/core'
> git bisect bad a967852939f864c35f155a2f431292ad6fc3fed9
> # bad: [e50f06ce2d876c740993b5e3d01e203520391ccd] locking/atomic: m68k: add preprocessor symbols
> git bisect bad e50f06ce2d876c740993b5e3d01e203520391ccd
> # good: [b1fe7f2cda2a003afe316ce8dfe8d3645694a67e] x86,intel_iommu: Replace cmpxchg_double()
> git bisect good b1fe7f2cda2a003afe316ce8dfe8d3645694a67e
> # good: [14d72d4b6f0e88b5f683c1a5b7a876a55055852d] locking/atomic: remove fallback comments
> git bisect good 14d72d4b6f0e88b5f683c1a5b7a876a55055852d
> # bad: [f739287ef57bc01155e556033462e9a6ff020c97] locking/atomic: arc: add preprocessor symbols
> git bisect bad f739287ef57bc01155e556033462e9a6ff020c97
> # bad: [d12157efc8e083c77d054675fcdd594f54cc7e2b] locking/atomic: make atomic*_{cmp,}xchg optional
> git bisect bad d12157efc8e083c77d054675fcdd594f54cc7e2b
> # good: [a7bafa7969da1c0e9c342c792d8224078d1c491c] locking/atomic: hexagon: remove redundant arch_atomic_cmpxchg
> git bisect good a7bafa7969da1c0e9c342c792d8224078d1c491c
> # first bad commit: [d12157efc8e083c77d054675fcdd594f54cc7e2b] locking/atomic: make atomic*_{cmp,}xchg optional
>
On 27.06.23 19:07, Guenter Roeck wrote:
> On Mon, Jun 05, 2023 at 08:01:01AM +0100, Mark Rutland wrote:
>> Most architectures define the atomic/atomic64 xchg and cmpxchg
>> operations in terms of arch_xchg and arch_cmpxchg respectively.
>>
>> Add fallbacks for these cases and remove the trivial cases from arch
>> code. On some architectures the existing definitions are kept as these
>> are used to build other arch_atomic*() operations.
>>
>> Signed-off-by: Mark Rutland <[email protected]>
> [...]
>
> This patch results in:
>
> ERROR: modpost: "__xchg_called_with_bad_pointer" [lib/atomic64_test.ko] undefined!
>
> when trying to build sparc64:allmodconfig.
Guenter, please correct me if I'm wrong:
This is fixed by Arnd's patch "sparc: mark __arch_xchg() as
__always_inline", but that afaics sadly hasn't even reached -next yet.
https://lore.kernel.org/all/[email protected]/
Hence adding it to the tracking now that the end of the merge window is
near:
#regzbot ^introduced d12157efc8e083c7
#regzbot title locking/atomic: build error on sparc64:allmodconfig
#regzbot ignore-activity
Ciao, Thorsten (wearing his 'the Linux kernel's regression tracker' hat)
--
Everything you wanna know about Linux kernel regression tracking:
https://linux-regtracking.leemhuis.info/about/#tldr
That page also explains what to do if mails like this annoy you.
On 7/8/23 06:07, Linux regression tracking (Thorsten Leemhuis) wrote:
> On 27.06.23 19:07, Guenter Roeck wrote:
>> On Mon, Jun 05, 2023 at 08:01:01AM +0100, Mark Rutland wrote:
>>> Most architectures define the atomic/atomic64 xchg and cmpxchg
>>> operations in terms of arch_xchg and arch_cmpxchg respectively.
>>>
>>> Add fallbacks for these cases and remove the trivial cases from arch
>>> code. On some architectures the existing definitions are kept as these
>>> are used to build other arch_atomic*() operations.
>>>
>>> Signed-off-by: Mark Rutland <[email protected]>
>> [...]
>>
>> This patch results in:
>>
>> ERROR: modpost: "__xchg_called_with_bad_pointer" [lib/atomic64_test.ko] undefined!
>>
>> when trying to build sparc64:allmodconfig.
>
> Guenter, please correct me if I'm wrong:
>
> This is fixed by Arnd's patch "sparc: mark __arch_xchg() as
> __always_inline", but that afaics sadly hasn't even reached -next yet.
> https://lore.kernel.org/all/[email protected]/
>
> Hence adding it to the tracking now that the end of the merge window is
> near:
>
> #regzbot ^introduced d12157efc8e083c7
> #regzbot title locking/atomic: build error on sparc64:allmodconfig
> #regzbot ignore-activity
>
Yes, this is correct.
Guenter
On 08.07.23 15:20, Guenter Roeck wrote:
> On 7/8/23 06:07, Linux regression tracking (Thorsten Leemhuis) wrote:
>> On 27.06.23 19:07, Guenter Roeck wrote:
>>> On Mon, Jun 05, 2023 at 08:01:01AM +0100, Mark Rutland wrote:
>>>> Most architectures define the atomic/atomic64 xchg and cmpxchg
>>>> operations in terms of arch_xchg and arch_cmpxchg respectively.
>>>>
>>>> Add fallbacks for these cases and remove the trivial cases from arch
>>>> code. On some architectures the existing definitions are kept as these
>>>> are used to build other arch_atomic*() operations.
>>>>
>>>> Signed-off-by: Mark Rutland <[email protected]>
>>> [...]
>>>
>>> This patch results in:
>>>
>>> ERROR: modpost: "__xchg_called_with_bad_pointer"
>>> [lib/atomic64_test.ko] undefined!
>>>
>>> when trying to build sparc64:allmodconfig.
>>
>> Guenter, please correct me if I'm wrong:
>>
>> This is fixed by Arnd's patch "sparc: mark __arch_xchg() as
>> __always_inline", but that afaics sadly hasn't even reached -next yet.
>> https://lore.kernel.org/all/[email protected]/
>>
>> Hence adding it to the tracking now that the end of the merge window is
>> near:
>>
>> #regzbot ^introduced d12157efc8e083c7
>> #regzbot title locking/atomic: build error on sparc64:allmodconfig
>> #regzbot ignore-activity
>
> Yes, this is correct.
Thx for confirming (and also for adding the other regression to the
tracking). Let me use this opportunity to tell regzbot about the patch
for this regression, which I forgot to do earlier (sorry!):
#regzbot monitor:
https://lore.kernel.org/all/[email protected]/
Also CCing Arnd, maybe the fix fell through the cracks on his side.
Ciao, Thorsten (wearing his 'the Linux kernel's regression tracker' hat)
--
Everything you wanna know about Linux kernel regression tracking:
https://linux-regtracking.leemhuis.info/about/#tldr
If I did something stupid, please tell me, as explained on that page.
[TLDR: This mail is primarily relevant for Linux kernel regression
tracking. See link in footer if these mails annoy you.]
On 08.07.23 15:37, Linux regression tracking (Thorsten Leemhuis) wrote:
> On 08.07.23 15:20, Guenter Roeck wrote:
>> On 7/8/23 06:07, Linux regression tracking (Thorsten Leemhuis) wrote:
>>> On 27.06.23 19:07, Guenter Roeck wrote:
>>> Guenter, please correct me if I'm wrong:
>>>
>>> This is fixed by Arnd's patch "sparc: mark __arch_xchg() as
>>> __always_inline", but that afaics sadly hasn't even reached -next yet.
>>> https://lore.kernel.org/all/[email protected]/
>>>
>>> Hence adding it to the tracking now that the end of the merge window is
>>> near:
>>> #regzbot ^introduced d12157efc8e083c7
>>> #regzbot title locking/atomic: build error on sparc64:allmodconfig
>>> #regzbot ignore-activity
>> Yes, this is correct.
> Thx for confirming (and also for adding the other regression to the
> tracking). Let me use this opportunity to tell regzbot about the patch
> for this regression, which I forgot to do earlier (sorry!):
> #regzbot monitor:
> https://lore.kernel.org/all/[email protected]/
Kees applied this: https://git.kernel.org/kees/c/ec7633de404e
#regzbot fix: sparc: mark __arch_xchg() as __always_inline
#regzbot ignore-activity
Ciao, Thorsten (wearing his 'the Linux kernel's regression tracker' hat)
--
Everything you wanna know about Linux kernel regression tracking:
https://linux-regtracking.leemhuis.info/about/#tldr
That page also explains what to do if mails like this annoy you.