2019-05-22 13:24:44

by Mark Rutland

Subject: [PATCH 00/18] locking/atomic: atomic64 type cleanup

Currently, architectures return inconsistent types for atomic64 ops. Some
return long (e.g. powerpc), some return long long (e.g. arc), and some
return s64 (e.g. x86).

This is a bit messy, and causes unnecessary pain (e.g. as values must be cast
before they can be printed [1]).
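As a rough illustration of the kind of workaround currently needed (the
'nr_foo' counter below is made up, not taken from any driver), portable
code has to cast before printing, because the return type of
atomic64_read() differs per architecture:

  static atomic64_t nr_foo = ATOMIC64_INIT(0);

  /* Without the cast, %lld is wrong on architectures returning long. */
  pr_info("nr_foo: %lld\n", (long long)atomic64_read(&nr_foo));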

This series reworks all the atomic64 implementations to use s64 as the base
type for atomic64_t (as discussed [2]), and to ensure that this type is
consistently used for parameters and return values in the API, avoiding further
problems in this area.

This series (based on v5.1-rc1) can also be found in my atomics/type-cleanup
branch [3] on kernel.org.

Thanks,
Mark.

[1] https://lkml.kernel.org/r/[email protected]
[2] https://lkml.kernel.org/r/[email protected]
[3] git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git atomics/type-cleanup

Mark Rutland (18):
locking/atomic: crypto: nx: prepare for atomic64_read() conversion
locking/atomic: s390/pci: prepare for atomic64_read() conversion
locking/atomic: generic: use s64 for atomic64
locking/atomic: alpha: use s64 for atomic64
locking/atomic: arc: use s64 for atomic64
locking/atomic: arm: use s64 for atomic64
locking/atomic: arm64: use s64 for atomic64
locking/atomic: ia64: use s64 for atomic64
locking/atomic: mips: use s64 for atomic64
locking/atomic: powerpc: use s64 for atomic64
locking/atomic: riscv: fix atomic64_sub_if_positive() offset argument
locking/atomic: riscv: use s64 for atomic64
locking/atomic: s390: use s64 for atomic64
locking/atomic: sparc: use s64 for atomic64
locking/atomic: x86: use s64 for atomic64
locking/atomic: use s64 for atomic64_t on 64-bit
locking/atomic: crypto: nx: remove redundant casts
locking/atomic: s390/pci: remove redundant casts

arch/alpha/include/asm/atomic.h | 20 +++++------
arch/arc/include/asm/atomic.h | 41 +++++++++++-----------
arch/arm/include/asm/atomic.h | 50 +++++++++++++-------------
arch/arm64/include/asm/atomic_ll_sc.h | 20 +++++------
arch/arm64/include/asm/atomic_lse.h | 34 +++++++++---------
arch/ia64/include/asm/atomic.h | 20 +++++------
arch/mips/include/asm/atomic.h | 22 ++++++------
arch/powerpc/include/asm/atomic.h | 44 +++++++++++------------
arch/riscv/include/asm/atomic.h | 44 ++++++++++++-----------
arch/s390/include/asm/atomic.h | 38 ++++++++++----------
arch/s390/pci/pci_debug.c | 2 +-
arch/sparc/include/asm/atomic_64.h | 8 ++---
arch/x86/include/asm/atomic64_32.h | 66 +++++++++++++++++------------------
arch/x86/include/asm/atomic64_64.h | 38 ++++++++++----------
drivers/crypto/nx/nx-842-pseries.c | 6 ++--
include/asm-generic/atomic64.h | 20 +++++------
include/linux/types.h | 2 +-
lib/atomic64.c | 32 ++++++++---------
18 files changed, 252 insertions(+), 255 deletions(-)

--
2.11.0


2019-05-22 13:25:12

by Mark Rutland

Subject: [PATCH 01/18] locking/atomic: crypto: nx: prepare for atomic64_read() conversion

The return type of atomic64_read() varies by architecture. It may return
long (e.g. powerpc), long long (e.g. arm), or s64 (e.g. x86_64). This is
somewhat painful, and mandates the use of explicit casts in some cases
(e.g. when printing the return value).

To ameliorate matters, subsequent patches will make the atomic64 API
consistently use s64.

As a preparatory step, this patch updates the nx-842 code to treat the
return value of atomic64_read() as s64, using explicit casts. These
casts will be removed once the s64 conversion is complete.

Signed-off-by: Mark Rutland <[email protected]>
Cc: Herbert Xu <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
---
drivers/crypto/nx/nx-842-pseries.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/crypto/nx/nx-842-pseries.c b/drivers/crypto/nx/nx-842-pseries.c
index 57932848361b..9432e9e42afe 100644
--- a/drivers/crypto/nx/nx-842-pseries.c
+++ b/drivers/crypto/nx/nx-842-pseries.c
@@ -869,8 +869,8 @@ static ssize_t nx842_##_name##_show(struct device *dev, \
rcu_read_lock(); \
local_devdata = rcu_dereference(devdata); \
if (local_devdata) \
- p = snprintf(buf, PAGE_SIZE, "%ld\n", \
- atomic64_read(&local_devdata->counters->_name)); \
+ p = snprintf(buf, PAGE_SIZE, "%lld\n", \
+ (s64)atomic64_read(&local_devdata->counters->_name)); \
rcu_read_unlock(); \
return p; \
}
@@ -922,17 +922,17 @@ static ssize_t nx842_timehist_show(struct device *dev,
}

for (i = 0; i < (NX842_HIST_SLOTS - 2); i++) {
- bytes = snprintf(p, bytes_remain, "%u-%uus:\t%ld\n",
+ bytes = snprintf(p, bytes_remain, "%u-%uus:\t%lld\n",
i ? (2<<(i-1)) : 0, (2<<i)-1,
- atomic64_read(&times[i]));
+ (s64)atomic64_read(&times[i]));
bytes_remain -= bytes;
p += bytes;
}
/* The last bucket holds everything over
* 2<<(NX842_HIST_SLOTS - 2) us */
- bytes = snprintf(p, bytes_remain, "%uus - :\t%ld\n",
+ bytes = snprintf(p, bytes_remain, "%uus - :\t%lld\n",
2<<(NX842_HIST_SLOTS - 2),
- atomic64_read(&times[(NX842_HIST_SLOTS - 1)]));
+ (s64)atomic64_read(&times[(NX842_HIST_SLOTS - 1)]));
p += bytes;

rcu_read_unlock();
--
2.11.0

2019-05-22 13:25:41

by Mark Rutland

Subject: [PATCH 03/18] locking/atomic: generic: use s64 for atomic64

As a step towards making the atomic64 API use consistent types treewide,
let's have the generic atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long long, matching the generated
headers.

Otherwise, there should be no functional change as a result of this
patch.

Signed-off-by: Mark Rutland <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
---
include/asm-generic/atomic64.h | 20 ++++++++++----------
lib/atomic64.c | 32 ++++++++++++++++----------------
2 files changed, 26 insertions(+), 26 deletions(-)

diff --git a/include/asm-generic/atomic64.h b/include/asm-generic/atomic64.h
index 97b28b7f1f29..fc7b831ed632 100644
--- a/include/asm-generic/atomic64.h
+++ b/include/asm-generic/atomic64.h
@@ -14,24 +14,24 @@
#include <linux/types.h>

typedef struct {
- long long counter;
+ s64 counter;
} atomic64_t;

#define ATOMIC64_INIT(i) { (i) }

-extern long long atomic64_read(const atomic64_t *v);
-extern void atomic64_set(atomic64_t *v, long long i);
+extern s64 atomic64_read(const atomic64_t *v);
+extern void atomic64_set(atomic64_t *v, s64 i);

#define atomic64_set_release(v, i) atomic64_set((v), (i))

#define ATOMIC64_OP(op) \
-extern void atomic64_##op(long long a, atomic64_t *v);
+extern void atomic64_##op(s64 a, atomic64_t *v);

#define ATOMIC64_OP_RETURN(op) \
-extern long long atomic64_##op##_return(long long a, atomic64_t *v);
+extern s64 atomic64_##op##_return(s64 a, atomic64_t *v);

#define ATOMIC64_FETCH_OP(op) \
-extern long long atomic64_fetch_##op(long long a, atomic64_t *v);
+extern s64 atomic64_fetch_##op(s64 a, atomic64_t *v);

#define ATOMIC64_OPS(op) ATOMIC64_OP(op) ATOMIC64_OP_RETURN(op) ATOMIC64_FETCH_OP(op)

@@ -50,11 +50,11 @@ ATOMIC64_OPS(xor)
#undef ATOMIC64_OP_RETURN
#undef ATOMIC64_OP

-extern long long atomic64_dec_if_positive(atomic64_t *v);
+extern s64 atomic64_dec_if_positive(atomic64_t *v);
#define atomic64_dec_if_positive atomic64_dec_if_positive
-extern long long atomic64_cmpxchg(atomic64_t *v, long long o, long long n);
-extern long long atomic64_xchg(atomic64_t *v, long long new);
-extern long long atomic64_fetch_add_unless(atomic64_t *v, long long a, long long u);
+extern s64 atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n);
+extern s64 atomic64_xchg(atomic64_t *v, s64 new);
+extern s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u);
#define atomic64_fetch_add_unless atomic64_fetch_add_unless

#endif /* _ASM_GENERIC_ATOMIC64_H */
diff --git a/lib/atomic64.c b/lib/atomic64.c
index 1d91e31eceec..62f218bf50a0 100644
--- a/lib/atomic64.c
+++ b/lib/atomic64.c
@@ -46,11 +46,11 @@ static inline raw_spinlock_t *lock_addr(const atomic64_t *v)
return &atomic64_lock[addr & (NR_LOCKS - 1)].lock;
}

-long long atomic64_read(const atomic64_t *v)
+s64 atomic64_read(const atomic64_t *v)
{
unsigned long flags;
raw_spinlock_t *lock = lock_addr(v);
- long long val;
+ s64 val;

raw_spin_lock_irqsave(lock, flags);
val = v->counter;
@@ -59,7 +59,7 @@ long long atomic64_read(const atomic64_t *v)
}
EXPORT_SYMBOL(atomic64_read);

-void atomic64_set(atomic64_t *v, long long i)
+void atomic64_set(atomic64_t *v, s64 i)
{
unsigned long flags;
raw_spinlock_t *lock = lock_addr(v);
@@ -71,7 +71,7 @@ void atomic64_set(atomic64_t *v, long long i)
EXPORT_SYMBOL(atomic64_set);

#define ATOMIC64_OP(op, c_op) \
-void atomic64_##op(long long a, atomic64_t *v) \
+void atomic64_##op(s64 a, atomic64_t *v) \
{ \
unsigned long flags; \
raw_spinlock_t *lock = lock_addr(v); \
@@ -83,11 +83,11 @@ void atomic64_##op(long long a, atomic64_t *v) \
EXPORT_SYMBOL(atomic64_##op);

#define ATOMIC64_OP_RETURN(op, c_op) \
-long long atomic64_##op##_return(long long a, atomic64_t *v) \
+s64 atomic64_##op##_return(s64 a, atomic64_t *v) \
{ \
unsigned long flags; \
raw_spinlock_t *lock = lock_addr(v); \
- long long val; \
+ s64 val; \
\
raw_spin_lock_irqsave(lock, flags); \
val = (v->counter c_op a); \
@@ -97,11 +97,11 @@ long long atomic64_##op##_return(long long a, atomic64_t *v) \
EXPORT_SYMBOL(atomic64_##op##_return);

#define ATOMIC64_FETCH_OP(op, c_op) \
-long long atomic64_fetch_##op(long long a, atomic64_t *v) \
+s64 atomic64_fetch_##op(s64 a, atomic64_t *v) \
{ \
unsigned long flags; \
raw_spinlock_t *lock = lock_addr(v); \
- long long val; \
+ s64 val; \
\
raw_spin_lock_irqsave(lock, flags); \
val = v->counter; \
@@ -134,11 +134,11 @@ ATOMIC64_OPS(xor, ^=)
#undef ATOMIC64_OP_RETURN
#undef ATOMIC64_OP

-long long atomic64_dec_if_positive(atomic64_t *v)
+s64 atomic64_dec_if_positive(atomic64_t *v)
{
unsigned long flags;
raw_spinlock_t *lock = lock_addr(v);
- long long val;
+ s64 val;

raw_spin_lock_irqsave(lock, flags);
val = v->counter - 1;
@@ -149,11 +149,11 @@ long long atomic64_dec_if_positive(atomic64_t *v)
}
EXPORT_SYMBOL(atomic64_dec_if_positive);

-long long atomic64_cmpxchg(atomic64_t *v, long long o, long long n)
+s64 atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n)
{
unsigned long flags;
raw_spinlock_t *lock = lock_addr(v);
- long long val;
+ s64 val;

raw_spin_lock_irqsave(lock, flags);
val = v->counter;
@@ -164,11 +164,11 @@ long long atomic64_cmpxchg(atomic64_t *v, long long o, long long n)
}
EXPORT_SYMBOL(atomic64_cmpxchg);

-long long atomic64_xchg(atomic64_t *v, long long new)
+s64 atomic64_xchg(atomic64_t *v, s64 new)
{
unsigned long flags;
raw_spinlock_t *lock = lock_addr(v);
- long long val;
+ s64 val;

raw_spin_lock_irqsave(lock, flags);
val = v->counter;
@@ -178,11 +178,11 @@ long long atomic64_xchg(atomic64_t *v, long long new)
}
EXPORT_SYMBOL(atomic64_xchg);

-long long atomic64_fetch_add_unless(atomic64_t *v, long long a, long long u)
+s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
{
unsigned long flags;
raw_spinlock_t *lock = lock_addr(v);
- long long val;
+ s64 val;

raw_spin_lock_irqsave(lock, flags);
val = v->counter;
--
2.11.0

2019-05-22 13:25:45

by Mark Rutland

Subject: [PATCH 04/18] locking/atomic: alpha: use s64 for atomic64

As a step towards making the atomic64 API use consistent types treewide,
let's have the alpha atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long, matching the generated headers.

As atomic64_read() depends on the generic definition of atomic64_t, this
still returns long. This will be converted in a subsequent patch.

Otherwise, there should be no functional change as a result of this
patch.

Signed-off-by: Mark Rutland <[email protected]>
Cc: Ivan Kokshaysky <[email protected]>
Cc: Matt Turner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Richard Henderson <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/alpha/include/asm/atomic.h | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/alpha/include/asm/atomic.h b/arch/alpha/include/asm/atomic.h
index 150a1c5d6a2c..2144530d1428 100644
--- a/arch/alpha/include/asm/atomic.h
+++ b/arch/alpha/include/asm/atomic.h
@@ -93,9 +93,9 @@ static inline int atomic_fetch_##op##_relaxed(int i, atomic_t *v) \
}

#define ATOMIC64_OP(op, asm_op) \
-static __inline__ void atomic64_##op(long i, atomic64_t * v) \
+static __inline__ void atomic64_##op(s64 i, atomic64_t * v) \
{ \
- unsigned long temp; \
+ s64 temp; \
__asm__ __volatile__( \
"1: ldq_l %0,%1\n" \
" " #asm_op " %0,%2,%0\n" \
@@ -109,9 +109,9 @@ static __inline__ void atomic64_##op(long i, atomic64_t * v) \
} \

#define ATOMIC64_OP_RETURN(op, asm_op) \
-static __inline__ long atomic64_##op##_return_relaxed(long i, atomic64_t * v) \
+static __inline__ s64 atomic64_##op##_return_relaxed(s64 i, atomic64_t * v) \
{ \
- long temp, result; \
+ s64 temp, result; \
__asm__ __volatile__( \
"1: ldq_l %0,%1\n" \
" " #asm_op " %0,%3,%2\n" \
@@ -128,9 +128,9 @@ static __inline__ long atomic64_##op##_return_relaxed(long i, atomic64_t * v) \
}

#define ATOMIC64_FETCH_OP(op, asm_op) \
-static __inline__ long atomic64_fetch_##op##_relaxed(long i, atomic64_t * v) \
+static __inline__ s64 atomic64_fetch_##op##_relaxed(s64 i, atomic64_t * v) \
{ \
- long temp, result; \
+ s64 temp, result; \
__asm__ __volatile__( \
"1: ldq_l %2,%1\n" \
" " #asm_op " %2,%3,%0\n" \
@@ -246,9 +246,9 @@ static __inline__ int atomic_fetch_add_unless(atomic_t *v, int a, int u)
* Atomically adds @a to @v, so long as it was not @u.
* Returns the old value of @v.
*/
-static __inline__ long atomic64_fetch_add_unless(atomic64_t *v, long a, long u)
+static __inline__ s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
{
- long c, new, old;
+ s64 c, new, old;
smp_mb();
__asm__ __volatile__(
"1: ldq_l %[old],%[mem]\n"
@@ -276,9 +276,9 @@ static __inline__ long atomic64_fetch_add_unless(atomic64_t *v, long a, long u)
* The function returns the old value of *v minus 1, even if
* the atomic variable, v, was not decremented.
*/
-static inline long atomic64_dec_if_positive(atomic64_t *v)
+static inline s64 atomic64_dec_if_positive(atomic64_t *v)
{
- long old, tmp;
+ s64 old, tmp;
smp_mb();
__asm__ __volatile__(
"1: ldq_l %[old],%[mem]\n"
--
2.11.0

2019-05-22 13:26:18

by Mark Rutland

Subject: [PATCH 06/18] locking/atomic: arm: use s64 for atomic64

As a step towards making the atomic64 API use consistent types treewide,
let's have the arm atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long long, matching the generated
headers.

Otherwise, there should be no functional change as a result of this
patch.

Signed-off-by: Mark Rutland <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Russell King <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/arm/include/asm/atomic.h | 50 +++++++++++++++++++++----------------------
1 file changed, 24 insertions(+), 26 deletions(-)

diff --git a/arch/arm/include/asm/atomic.h b/arch/arm/include/asm/atomic.h
index f74756641410..d45c41f6f69c 100644
--- a/arch/arm/include/asm/atomic.h
+++ b/arch/arm/include/asm/atomic.h
@@ -249,15 +249,15 @@ ATOMIC_OPS(xor, ^=, eor)

#ifndef CONFIG_GENERIC_ATOMIC64
typedef struct {
- long long counter;
+ s64 counter;
} atomic64_t;

#define ATOMIC64_INIT(i) { (i) }

#ifdef CONFIG_ARM_LPAE
-static inline long long atomic64_read(const atomic64_t *v)
+static inline s64 atomic64_read(const atomic64_t *v)
{
- long long result;
+ s64 result;

__asm__ __volatile__("@ atomic64_read\n"
" ldrd %0, %H0, [%1]"
@@ -268,7 +268,7 @@ static inline long long atomic64_read(const atomic64_t *v)
return result;
}

-static inline void atomic64_set(atomic64_t *v, long long i)
+static inline void atomic64_set(atomic64_t *v, s64 i)
{
__asm__ __volatile__("@ atomic64_set\n"
" strd %2, %H2, [%1]"
@@ -277,9 +277,9 @@ static inline void atomic64_set(atomic64_t *v, long long i)
);
}
#else
-static inline long long atomic64_read(const atomic64_t *v)
+static inline s64 atomic64_read(const atomic64_t *v)
{
- long long result;
+ s64 result;

__asm__ __volatile__("@ atomic64_read\n"
" ldrexd %0, %H0, [%1]"
@@ -290,9 +290,9 @@ static inline long long atomic64_read(const atomic64_t *v)
return result;
}

-static inline void atomic64_set(atomic64_t *v, long long i)
+static inline void atomic64_set(atomic64_t *v, s64 i)
{
- long long tmp;
+ s64 tmp;

prefetchw(&v->counter);
__asm__ __volatile__("@ atomic64_set\n"
@@ -307,9 +307,9 @@ static inline void atomic64_set(atomic64_t *v, long long i)
#endif

#define ATOMIC64_OP(op, op1, op2) \
-static inline void atomic64_##op(long long i, atomic64_t *v) \
+static inline void atomic64_##op(s64 i, atomic64_t *v) \
{ \
- long long result; \
+ s64 result; \
unsigned long tmp; \
\
prefetchw(&v->counter); \
@@ -326,10 +326,10 @@ static inline void atomic64_##op(long long i, atomic64_t *v) \
} \

#define ATOMIC64_OP_RETURN(op, op1, op2) \
-static inline long long \
-atomic64_##op##_return_relaxed(long long i, atomic64_t *v) \
+static inline s64 \
+atomic64_##op##_return_relaxed(s64 i, atomic64_t *v) \
{ \
- long long result; \
+ s64 result; \
unsigned long tmp; \
\
prefetchw(&v->counter); \
@@ -349,10 +349,10 @@ atomic64_##op##_return_relaxed(long long i, atomic64_t *v) \
}

#define ATOMIC64_FETCH_OP(op, op1, op2) \
-static inline long long \
-atomic64_fetch_##op##_relaxed(long long i, atomic64_t *v) \
+static inline s64 \
+atomic64_fetch_##op##_relaxed(s64 i, atomic64_t *v) \
{ \
- long long result, val; \
+ s64 result, val; \
unsigned long tmp; \
\
prefetchw(&v->counter); \
@@ -406,10 +406,9 @@ ATOMIC64_OPS(xor, eor, eor)
#undef ATOMIC64_OP_RETURN
#undef ATOMIC64_OP

-static inline long long
-atomic64_cmpxchg_relaxed(atomic64_t *ptr, long long old, long long new)
+static inline s64 atomic64_cmpxchg_relaxed(atomic64_t *ptr, s64 old, s64 new)
{
- long long oldval;
+ s64 oldval;
unsigned long res;

prefetchw(&ptr->counter);
@@ -430,9 +429,9 @@ atomic64_cmpxchg_relaxed(atomic64_t *ptr, long long old, long long new)
}
#define atomic64_cmpxchg_relaxed atomic64_cmpxchg_relaxed

-static inline long long atomic64_xchg_relaxed(atomic64_t *ptr, long long new)
+static inline s64 atomic64_xchg_relaxed(atomic64_t *ptr, s64 new)
{
- long long result;
+ s64 result;
unsigned long tmp;

prefetchw(&ptr->counter);
@@ -450,9 +449,9 @@ static inline long long atomic64_xchg_relaxed(atomic64_t *ptr, long long new)
}
#define atomic64_xchg_relaxed atomic64_xchg_relaxed

-static inline long long atomic64_dec_if_positive(atomic64_t *v)
+static inline s64 atomic64_dec_if_positive(atomic64_t *v)
{
- long long result;
+ s64 result;
unsigned long tmp;

smp_mb();
@@ -478,10 +477,9 @@ static inline long long atomic64_dec_if_positive(atomic64_t *v)
}
#define atomic64_dec_if_positive atomic64_dec_if_positive

-static inline long long atomic64_fetch_add_unless(atomic64_t *v, long long a,
- long long u)
+static inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
{
- long long oldval, newval;
+ s64 oldval, newval;
unsigned long tmp;

smp_mb();
--
2.11.0

2019-05-22 13:26:27

by Mark Rutland

Subject: [PATCH 02/18] locking/atomic: s390/pci: prepare for atomic64_read() conversion

The return type of atomic64_read() varies by architecture. It may return
long (e.g. powerpc), long long (e.g. arm), or s64 (e.g. x86_64). This is
somewhat painful, and mandates the use of explicit casts in some cases
(e.g. when printing the return value).

To ameliorate matters, subsequent patches will make the atomic64 API
consistently use s64.

As a preparatory step, this patch updates the s390 pci debug code to
treat the return value of atomic64_read() as s64, using an explicit
cast. This cast will be removed once the s64 conversion is complete.

Signed-off-by: Mark Rutland <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/s390/pci/pci_debug.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/s390/pci/pci_debug.c b/arch/s390/pci/pci_debug.c
index 6b48ca7760a7..45eccf79e990 100644
--- a/arch/s390/pci/pci_debug.c
+++ b/arch/s390/pci/pci_debug.c
@@ -74,8 +74,8 @@ static void pci_sw_counter_show(struct seq_file *m)
int i;

for (i = 0; i < ARRAY_SIZE(pci_sw_names); i++, counter++)
- seq_printf(m, "%26s:\t%lu\n", pci_sw_names[i],
- atomic64_read(counter));
+ seq_printf(m, "%26s:\t%llu\n", pci_sw_names[i],
+ (s64)atomic64_read(counter));
}

static int pci_perf_show(struct seq_file *m, void *v)
--
2.11.0

2019-05-22 13:26:27

by Mark Rutland

Subject: [PATCH 07/18] locking/atomic: arm64: use s64 for atomic64

As a step towards making the atomic64 API use consistent types treewide,
let's have the arm64 atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long, matching the generated headers.

As atomic64_read() depends on the generic definition of atomic64_t, this
still returns long. This will be converted in a subsequent patch.

Note that in arch_atomic64_dec_if_positive(), the x0 variable is left as
long, as this variable is also used to hold the pointer to the
atomic64_t.

Otherwise, there should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/arm64/include/asm/atomic_ll_sc.h | 20 ++++++++++----------
arch/arm64/include/asm/atomic_lse.h | 34 +++++++++++++++++-----------------
2 files changed, 27 insertions(+), 27 deletions(-)

diff --git a/arch/arm64/include/asm/atomic_ll_sc.h b/arch/arm64/include/asm/atomic_ll_sc.h
index e321293e0c89..f3b12d7f431f 100644
--- a/arch/arm64/include/asm/atomic_ll_sc.h
+++ b/arch/arm64/include/asm/atomic_ll_sc.h
@@ -133,9 +133,9 @@ ATOMIC_OPS(xor, eor)

#define ATOMIC64_OP(op, asm_op) \
__LL_SC_INLINE void \
-__LL_SC_PREFIX(arch_atomic64_##op(long i, atomic64_t *v)) \
+__LL_SC_PREFIX(arch_atomic64_##op(s64 i, atomic64_t *v)) \
{ \
- long result; \
+ s64 result; \
unsigned long tmp; \
\
asm volatile("// atomic64_" #op "\n" \
@@ -150,10 +150,10 @@ __LL_SC_PREFIX(arch_atomic64_##op(long i, atomic64_t *v)) \
__LL_SC_EXPORT(arch_atomic64_##op);

#define ATOMIC64_OP_RETURN(name, mb, acq, rel, cl, op, asm_op) \
-__LL_SC_INLINE long \
-__LL_SC_PREFIX(arch_atomic64_##op##_return##name(long i, atomic64_t *v))\
+__LL_SC_INLINE s64 \
+__LL_SC_PREFIX(arch_atomic64_##op##_return##name(s64 i, atomic64_t *v))\
{ \
- long result; \
+ s64 result; \
unsigned long tmp; \
\
asm volatile("// atomic64_" #op "_return" #name "\n" \
@@ -172,10 +172,10 @@ __LL_SC_PREFIX(arch_atomic64_##op##_return##name(long i, atomic64_t *v))\
__LL_SC_EXPORT(arch_atomic64_##op##_return##name);

#define ATOMIC64_FETCH_OP(name, mb, acq, rel, cl, op, asm_op) \
-__LL_SC_INLINE long \
-__LL_SC_PREFIX(arch_atomic64_fetch_##op##name(long i, atomic64_t *v)) \
+__LL_SC_INLINE s64 \
+__LL_SC_PREFIX(arch_atomic64_fetch_##op##name(s64 i, atomic64_t *v)) \
{ \
- long result, val; \
+ s64 result, val; \
unsigned long tmp; \
\
asm volatile("// atomic64_fetch_" #op #name "\n" \
@@ -225,10 +225,10 @@ ATOMIC64_OPS(xor, eor)
#undef ATOMIC64_OP_RETURN
#undef ATOMIC64_OP

-__LL_SC_INLINE long
+__LL_SC_INLINE s64
__LL_SC_PREFIX(arch_atomic64_dec_if_positive(atomic64_t *v))
{
- long result;
+ s64 result;
unsigned long tmp;

asm volatile("// atomic64_dec_if_positive\n"
diff --git a/arch/arm64/include/asm/atomic_lse.h b/arch/arm64/include/asm/atomic_lse.h
index 9256a3921e4b..c53832b08af7 100644
--- a/arch/arm64/include/asm/atomic_lse.h
+++ b/arch/arm64/include/asm/atomic_lse.h
@@ -224,9 +224,9 @@ ATOMIC_FETCH_OP_SUB( , al, "memory")

#define __LL_SC_ATOMIC64(op) __LL_SC_CALL(arch_atomic64_##op)
#define ATOMIC64_OP(op, asm_op) \
-static inline void arch_atomic64_##op(long i, atomic64_t *v) \
+static inline void arch_atomic64_##op(s64 i, atomic64_t *v) \
{ \
- register long x0 asm ("x0") = i; \
+ register s64 x0 asm ("x0") = i; \
register atomic64_t *x1 asm ("x1") = v; \
\
asm volatile(ARM64_LSE_ATOMIC_INSN(__LL_SC_ATOMIC64(op), \
@@ -244,9 +244,9 @@ ATOMIC64_OP(add, stadd)
#undef ATOMIC64_OP

#define ATOMIC64_FETCH_OP(name, mb, op, asm_op, cl...) \
-static inline long arch_atomic64_fetch_##op##name(long i, atomic64_t *v)\
+static inline s64 arch_atomic64_fetch_##op##name(s64 i, atomic64_t *v) \
{ \
- register long x0 asm ("x0") = i; \
+ register s64 x0 asm ("x0") = i; \
register atomic64_t *x1 asm ("x1") = v; \
\
asm volatile(ARM64_LSE_ATOMIC_INSN( \
@@ -276,9 +276,9 @@ ATOMIC64_FETCH_OPS(add, ldadd)
#undef ATOMIC64_FETCH_OPS

#define ATOMIC64_OP_ADD_RETURN(name, mb, cl...) \
-static inline long arch_atomic64_add_return##name(long i, atomic64_t *v)\
+static inline s64 arch_atomic64_add_return##name(s64 i, atomic64_t *v) \
{ \
- register long x0 asm ("x0") = i; \
+ register s64 x0 asm ("x0") = i; \
register atomic64_t *x1 asm ("x1") = v; \
\
asm volatile(ARM64_LSE_ATOMIC_INSN( \
@@ -302,9 +302,9 @@ ATOMIC64_OP_ADD_RETURN( , al, "memory")

#undef ATOMIC64_OP_ADD_RETURN

-static inline void arch_atomic64_and(long i, atomic64_t *v)
+static inline void arch_atomic64_and(s64 i, atomic64_t *v)
{
- register long x0 asm ("x0") = i;
+ register s64 x0 asm ("x0") = i;
register atomic64_t *x1 asm ("x1") = v;

asm volatile(ARM64_LSE_ATOMIC_INSN(
@@ -320,9 +320,9 @@ static inline void arch_atomic64_and(long i, atomic64_t *v)
}

#define ATOMIC64_FETCH_OP_AND(name, mb, cl...) \
-static inline long arch_atomic64_fetch_and##name(long i, atomic64_t *v) \
+static inline s64 arch_atomic64_fetch_and##name(s64 i, atomic64_t *v) \
{ \
- register long x0 asm ("x0") = i; \
+ register s64 x0 asm ("x0") = i; \
register atomic64_t *x1 asm ("x1") = v; \
\
asm volatile(ARM64_LSE_ATOMIC_INSN( \
@@ -346,9 +346,9 @@ ATOMIC64_FETCH_OP_AND( , al, "memory")

#undef ATOMIC64_FETCH_OP_AND

-static inline void arch_atomic64_sub(long i, atomic64_t *v)
+static inline void arch_atomic64_sub(s64 i, atomic64_t *v)
{
- register long x0 asm ("x0") = i;
+ register s64 x0 asm ("x0") = i;
register atomic64_t *x1 asm ("x1") = v;

asm volatile(ARM64_LSE_ATOMIC_INSN(
@@ -364,9 +364,9 @@ static inline void arch_atomic64_sub(long i, atomic64_t *v)
}

#define ATOMIC64_OP_SUB_RETURN(name, mb, cl...) \
-static inline long arch_atomic64_sub_return##name(long i, atomic64_t *v)\
+static inline s64 arch_atomic64_sub_return##name(s64 i, atomic64_t *v) \
{ \
- register long x0 asm ("x0") = i; \
+ register s64 x0 asm ("x0") = i; \
register atomic64_t *x1 asm ("x1") = v; \
\
asm volatile(ARM64_LSE_ATOMIC_INSN( \
@@ -392,9 +392,9 @@ ATOMIC64_OP_SUB_RETURN( , al, "memory")
#undef ATOMIC64_OP_SUB_RETURN

#define ATOMIC64_FETCH_OP_SUB(name, mb, cl...) \
-static inline long arch_atomic64_fetch_sub##name(long i, atomic64_t *v) \
+static inline s64 arch_atomic64_fetch_sub##name(s64 i, atomic64_t *v) \
{ \
- register long x0 asm ("x0") = i; \
+ register s64 x0 asm ("x0") = i; \
register atomic64_t *x1 asm ("x1") = v; \
\
asm volatile(ARM64_LSE_ATOMIC_INSN( \
@@ -418,7 +418,7 @@ ATOMIC64_FETCH_OP_SUB( , al, "memory")

#undef ATOMIC64_FETCH_OP_SUB

-static inline long arch_atomic64_dec_if_positive(atomic64_t *v)
+static inline s64 arch_atomic64_dec_if_positive(atomic64_t *v)
{
register long x0 asm ("x0") = (long)v;

--
2.11.0

2019-05-22 13:26:44

by Mark Rutland

Subject: [PATCH 09/18] locking/atomic: mips: use s64 for atomic64

As a step towards making the atomic64 API use consistent types treewide,
let's have the mips atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long or __s64, matching the generated
headers.

As atomic64_read() depends on the generic definition of atomic64_t, this
still returns long on 64-bit. This will be converted in a subsequent
patch.

Otherwise, there should be no functional change as a result of this
patch.

Signed-off-by: Mark Rutland <[email protected]>
Cc: James Hogan <[email protected]>
Cc: Paul Burton <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/mips/include/asm/atomic.h | 22 +++++++++++-----------
1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/arch/mips/include/asm/atomic.h b/arch/mips/include/asm/atomic.h
index 94096299fc56..9a82dd11c0e9 100644
--- a/arch/mips/include/asm/atomic.h
+++ b/arch/mips/include/asm/atomic.h
@@ -254,10 +254,10 @@ static __inline__ int atomic_sub_if_positive(int i, atomic_t * v)
#define atomic64_set(v, i) WRITE_ONCE((v)->counter, (i))

#define ATOMIC64_OP(op, c_op, asm_op) \
-static __inline__ void atomic64_##op(long i, atomic64_t * v) \
+static __inline__ void atomic64_##op(s64 i, atomic64_t * v) \
{ \
if (kernel_uses_llsc) { \
- long temp; \
+ s64 temp; \
\
loongson_llsc_mb(); \
__asm__ __volatile__( \
@@ -280,12 +280,12 @@ static __inline__ void atomic64_##op(long i, atomic64_t * v) \
}

#define ATOMIC64_OP_RETURN(op, c_op, asm_op) \
-static __inline__ long atomic64_##op##_return_relaxed(long i, atomic64_t * v) \
+static __inline__ s64 atomic64_##op##_return_relaxed(s64 i, atomic64_t * v) \
{ \
- long result; \
+ s64 result; \
\
if (kernel_uses_llsc) { \
- long temp; \
+ s64 temp; \
\
loongson_llsc_mb(); \
__asm__ __volatile__( \
@@ -314,12 +314,12 @@ static __inline__ long atomic64_##op##_return_relaxed(long i, atomic64_t * v) \
}

#define ATOMIC64_FETCH_OP(op, c_op, asm_op) \
-static __inline__ long atomic64_fetch_##op##_relaxed(long i, atomic64_t * v) \
+static __inline__ s64 atomic64_fetch_##op##_relaxed(s64 i, atomic64_t * v) \
{ \
- long result; \
+ s64 result; \
\
if (kernel_uses_llsc) { \
- long temp; \
+ s64 temp; \
\
loongson_llsc_mb(); \
__asm__ __volatile__( \
@@ -386,14 +386,14 @@ ATOMIC64_OPS(xor, ^=, xor)
* Atomically test @v and subtract @i if @v is greater or equal than @i.
* The function returns the old value of @v minus @i.
*/
-static __inline__ long atomic64_sub_if_positive(long i, atomic64_t * v)
+static __inline__ s64 atomic64_sub_if_positive(s64 i, atomic64_t * v)
{
- long result;
+ s64 result;

smp_mb__before_llsc();

if (kernel_uses_llsc) {
- long temp;
+ s64 temp;

__asm__ __volatile__(
" .set push \n"
--
2.11.0

2019-05-22 13:26:56

by Mark Rutland

Subject: [PATCH 10/18] locking/atomic: powerpc: use s64 for atomic64

As a step towards making the atomic64 API use consistent types treewide,
let's have the powerpc atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long, matching the generated headers.

As atomic64_read() depends on the generic definition of atomic64_t, this
still returns long on 64-bit. This will be converted in a subsequent
patch.

Otherwise, there should be no functional change as a result of this
patch.

Signed-off-by: Mark Rutland <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/powerpc/include/asm/atomic.h | 44 +++++++++++++++++++--------------------
1 file changed, 22 insertions(+), 22 deletions(-)

diff --git a/arch/powerpc/include/asm/atomic.h b/arch/powerpc/include/asm/atomic.h
index 52eafaf74054..31c231ea56b7 100644
--- a/arch/powerpc/include/asm/atomic.h
+++ b/arch/powerpc/include/asm/atomic.h
@@ -297,24 +297,24 @@ static __inline__ int atomic_dec_if_positive(atomic_t *v)

#define ATOMIC64_INIT(i) { (i) }

-static __inline__ long atomic64_read(const atomic64_t *v)
+static __inline__ s64 atomic64_read(const atomic64_t *v)
{
- long t;
+ s64 t;

__asm__ __volatile__("ld%U1%X1 %0,%1" : "=r"(t) : "m"(v->counter));

return t;
}

-static __inline__ void atomic64_set(atomic64_t *v, long i)
+static __inline__ void atomic64_set(atomic64_t *v, s64 i)
{
__asm__ __volatile__("std%U0%X0 %1,%0" : "=m"(v->counter) : "r"(i));
}

#define ATOMIC64_OP(op, asm_op) \
-static __inline__ void atomic64_##op(long a, atomic64_t *v) \
+static __inline__ void atomic64_##op(s64 a, atomic64_t *v) \
{ \
- long t; \
+ s64 t; \
\
__asm__ __volatile__( \
"1: ldarx %0,0,%3 # atomic64_" #op "\n" \
@@ -327,10 +327,10 @@ static __inline__ void atomic64_##op(long a, atomic64_t *v) \
}

#define ATOMIC64_OP_RETURN_RELAXED(op, asm_op) \
-static inline long \
-atomic64_##op##_return_relaxed(long a, atomic64_t *v) \
+static inline s64 \
+atomic64_##op##_return_relaxed(s64 a, atomic64_t *v) \
{ \
- long t; \
+ s64 t; \
\
__asm__ __volatile__( \
"1: ldarx %0,0,%3 # atomic64_" #op "_return_relaxed\n" \
@@ -345,10 +345,10 @@ atomic64_##op##_return_relaxed(long a, atomic64_t *v) \
}

#define ATOMIC64_FETCH_OP_RELAXED(op, asm_op) \
-static inline long \
-atomic64_fetch_##op##_relaxed(long a, atomic64_t *v) \
+static inline s64 \
+atomic64_fetch_##op##_relaxed(s64 a, atomic64_t *v) \
{ \
- long res, t; \
+ s64 res, t; \
\
__asm__ __volatile__( \
"1: ldarx %0,0,%4 # atomic64_fetch_" #op "_relaxed\n" \
@@ -396,7 +396,7 @@ ATOMIC64_OPS(xor, xor)

static __inline__ void atomic64_inc(atomic64_t *v)
{
- long t;
+ s64 t;

__asm__ __volatile__(
"1: ldarx %0,0,%2 # atomic64_inc\n\
@@ -409,9 +409,9 @@ static __inline__ void atomic64_inc(atomic64_t *v)
}
#define atomic64_inc atomic64_inc

-static __inline__ long atomic64_inc_return_relaxed(atomic64_t *v)
+static __inline__ s64 atomic64_inc_return_relaxed(atomic64_t *v)
{
- long t;
+ s64 t;

__asm__ __volatile__(
"1: ldarx %0,0,%2 # atomic64_inc_return_relaxed\n"
@@ -427,7 +427,7 @@ static __inline__ long atomic64_inc_return_relaxed(atomic64_t *v)

static __inline__ void atomic64_dec(atomic64_t *v)
{
- long t;
+ s64 t;

__asm__ __volatile__(
"1: ldarx %0,0,%2 # atomic64_dec\n\
@@ -440,9 +440,9 @@ static __inline__ void atomic64_dec(atomic64_t *v)
}
#define atomic64_dec atomic64_dec

-static __inline__ long atomic64_dec_return_relaxed(atomic64_t *v)
+static __inline__ s64 atomic64_dec_return_relaxed(atomic64_t *v)
{
- long t;
+ s64 t;

__asm__ __volatile__(
"1: ldarx %0,0,%2 # atomic64_dec_return_relaxed\n"
@@ -463,9 +463,9 @@ static __inline__ long atomic64_dec_return_relaxed(atomic64_t *v)
* Atomically test *v and decrement if it is greater than 0.
* The function returns the old value of *v minus 1.
*/
-static __inline__ long atomic64_dec_if_positive(atomic64_t *v)
+static __inline__ s64 atomic64_dec_if_positive(atomic64_t *v)
{
- long t;
+ s64 t;

__asm__ __volatile__(
PPC_ATOMIC_ENTRY_BARRIER
@@ -502,9 +502,9 @@ static __inline__ long atomic64_dec_if_positive(atomic64_t *v)
* Atomically adds @a to @v, so long as it was not @u.
* Returns the old value of @v.
*/
-static __inline__ long atomic64_fetch_add_unless(atomic64_t *v, long a, long u)
+static __inline__ s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
{
- long t;
+ s64 t;

__asm__ __volatile__ (
PPC_ATOMIC_ENTRY_BARRIER
@@ -534,7 +534,7 @@ static __inline__ long atomic64_fetch_add_unless(atomic64_t *v, long a, long u)
*/
static __inline__ int atomic64_inc_not_zero(atomic64_t *v)
{
- long t1, t2;
+ s64 t1, t2;

__asm__ __volatile__ (
PPC_ATOMIC_ENTRY_BARRIER
--
2.11.0

2019-05-22 13:27:06

by Mark Rutland

Subject: [PATCH 12/18] locking/atomic: riscv: use s64 for atomic64

As a step towards making the atomic64 API use consistent types treewide,
let's have the riscv atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long, matching the generated headers.

As atomic64_read() depends on the generic definition of atomic64_t, this
still returns long on 64-bit. This will be converted in a subsequent
patch.

Otherwise, there should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <[email protected]>
Cc: Albert Ou <[email protected]>
Cc: Palmer Dabbelt <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/riscv/include/asm/atomic.h | 44 +++++++++++++++++++++--------------------
1 file changed, 23 insertions(+), 21 deletions(-)

diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
index c9e18289d65c..bffebc57357d 100644
--- a/arch/riscv/include/asm/atomic.h
+++ b/arch/riscv/include/asm/atomic.h
@@ -42,11 +42,11 @@ static __always_inline void atomic_set(atomic_t *v, int i)

#ifndef CONFIG_GENERIC_ATOMIC64
#define ATOMIC64_INIT(i) { (i) }
-static __always_inline long atomic64_read(const atomic64_t *v)
+static __always_inline s64 atomic64_read(const atomic64_t *v)
{
return READ_ONCE(v->counter);
}
-static __always_inline void atomic64_set(atomic64_t *v, long i)
+static __always_inline void atomic64_set(atomic64_t *v, s64 i)
{
WRITE_ONCE(v->counter, i);
}
@@ -70,11 +70,11 @@ void atomic##prefix##_##op(c_type i, atomic##prefix##_t *v) \

#ifdef CONFIG_GENERIC_ATOMIC64
#define ATOMIC_OPS(op, asm_op, I) \
- ATOMIC_OP (op, asm_op, I, w, int, )
+ ATOMIC_OP (op, asm_op, I, w, int, )
#else
#define ATOMIC_OPS(op, asm_op, I) \
- ATOMIC_OP (op, asm_op, I, w, int, ) \
- ATOMIC_OP (op, asm_op, I, d, long, 64)
+ ATOMIC_OP (op, asm_op, I, w, int, ) \
+ ATOMIC_OP (op, asm_op, I, d, s64, 64)
#endif

ATOMIC_OPS(add, add, i)
@@ -131,14 +131,14 @@ c_type atomic##prefix##_##op##_return(c_type i, atomic##prefix##_t *v) \

#ifdef CONFIG_GENERIC_ATOMIC64
#define ATOMIC_OPS(op, asm_op, c_op, I) \
- ATOMIC_FETCH_OP( op, asm_op, I, w, int, ) \
- ATOMIC_OP_RETURN(op, asm_op, c_op, I, w, int, )
+ ATOMIC_FETCH_OP( op, asm_op, I, w, int, ) \
+ ATOMIC_OP_RETURN(op, asm_op, c_op, I, w, int, )
#else
#define ATOMIC_OPS(op, asm_op, c_op, I) \
- ATOMIC_FETCH_OP( op, asm_op, I, w, int, ) \
- ATOMIC_OP_RETURN(op, asm_op, c_op, I, w, int, ) \
- ATOMIC_FETCH_OP( op, asm_op, I, d, long, 64) \
- ATOMIC_OP_RETURN(op, asm_op, c_op, I, d, long, 64)
+ ATOMIC_FETCH_OP( op, asm_op, I, w, int, ) \
+ ATOMIC_OP_RETURN(op, asm_op, c_op, I, w, int, ) \
+ ATOMIC_FETCH_OP( op, asm_op, I, d, s64, 64) \
+ ATOMIC_OP_RETURN(op, asm_op, c_op, I, d, s64, 64)
#endif

ATOMIC_OPS(add, add, +, i)
@@ -170,11 +170,11 @@ ATOMIC_OPS(sub, add, +, -i)

#ifdef CONFIG_GENERIC_ATOMIC64
#define ATOMIC_OPS(op, asm_op, I) \
- ATOMIC_FETCH_OP(op, asm_op, I, w, int, )
+ ATOMIC_FETCH_OP(op, asm_op, I, w, int, )
#else
#define ATOMIC_OPS(op, asm_op, I) \
- ATOMIC_FETCH_OP(op, asm_op, I, w, int, ) \
- ATOMIC_FETCH_OP(op, asm_op, I, d, long, 64)
+ ATOMIC_FETCH_OP(op, asm_op, I, w, int, ) \
+ ATOMIC_FETCH_OP(op, asm_op, I, d, s64, 64)
#endif

ATOMIC_OPS(and, and, i)
@@ -223,9 +223,10 @@ static __always_inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
#define atomic_fetch_add_unless atomic_fetch_add_unless

#ifndef CONFIG_GENERIC_ATOMIC64
-static __always_inline long atomic64_fetch_add_unless(atomic64_t *v, long a, long u)
+static __always_inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
{
- long prev, rc;
+ s64 prev;
+ long rc;

__asm__ __volatile__ (
"0: lr.d %[p], %[c]\n"
@@ -294,11 +295,11 @@ c_t atomic##prefix##_cmpxchg(atomic##prefix##_t *v, c_t o, c_t n) \

#ifdef CONFIG_GENERIC_ATOMIC64
#define ATOMIC_OPS() \
- ATOMIC_OP( int, , 4)
+ ATOMIC_OP(int, , 4)
#else
#define ATOMIC_OPS() \
- ATOMIC_OP( int, , 4) \
- ATOMIC_OP(long, 64, 8)
+ ATOMIC_OP(int, , 4) \
+ ATOMIC_OP(s64, 64, 8)
#endif

ATOMIC_OPS()
@@ -336,9 +337,10 @@ static __always_inline int atomic_sub_if_positive(atomic_t *v, int offset)
#define atomic_dec_if_positive(v) atomic_sub_if_positive(v, 1)

#ifndef CONFIG_GENERIC_ATOMIC64
-static __always_inline long atomic64_sub_if_positive(atomic64_t *v, long offset)
+static __always_inline s64 atomic64_sub_if_positive(atomic64_t *v, s64 offset)
{
- long prev, rc;
+ s64 prev;
+ long rc;

__asm__ __volatile__ (
"0: lr.d %[p], %[c]\n"
--
2.11.0

2019-05-22 13:27:18

by Mark Rutland

Subject: [PATCH 05/18] locking/atomic: arc: use s64 for atomic64

As a step towards making the atomic64 API use consistent types treewide,
let's have the arc atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than u64, matching the generated headers.

Otherwise, there should be no functional change as a result of this
patch.

Signed-off-by: Mark Rutland <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Vineet Gupta <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/arc/include/asm/atomic.h | 41 ++++++++++++++++++++---------------------
1 file changed, 20 insertions(+), 21 deletions(-)

diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h
index 158af079838d..2c75df55d0d2 100644
--- a/arch/arc/include/asm/atomic.h
+++ b/arch/arc/include/asm/atomic.h
@@ -324,14 +324,14 @@ ATOMIC_OPS(xor, ^=, CTOP_INST_AXOR_DI_R2_R2_R3)
*/

typedef struct {
- aligned_u64 counter;
+ s64 __aligned(8) counter;
} atomic64_t;

#define ATOMIC64_INIT(a) { (a) }

-static inline long long atomic64_read(const atomic64_t *v)
+static inline s64 atomic64_read(const atomic64_t *v)
{
- unsigned long long val;
+ s64 val;

__asm__ __volatile__(
" ldd %0, [%1] \n"
@@ -341,7 +341,7 @@ static inline long long atomic64_read(const atomic64_t *v)
return val;
}

-static inline void atomic64_set(atomic64_t *v, long long a)
+static inline void atomic64_set(atomic64_t *v, s64 a)
{
/*
* This could have been a simple assignment in "C" but would need
@@ -362,9 +362,9 @@ static inline void atomic64_set(atomic64_t *v, long long a)
}

#define ATOMIC64_OP(op, op1, op2) \
-static inline void atomic64_##op(long long a, atomic64_t *v) \
+static inline void atomic64_##op(s64 a, atomic64_t *v) \
{ \
- unsigned long long val; \
+ s64 val; \
\
__asm__ __volatile__( \
"1: \n" \
@@ -375,13 +375,13 @@ static inline void atomic64_##op(long long a, atomic64_t *v) \
" bnz 1b \n" \
: "=&r"(val) \
: "r"(&v->counter), "ir"(a) \
- : "cc"); \
+ : "cc"); \
} \

#define ATOMIC64_OP_RETURN(op, op1, op2) \
-static inline long long atomic64_##op##_return(long long a, atomic64_t *v) \
+static inline s64 atomic64_##op##_return(s64 a, atomic64_t *v) \
{ \
- unsigned long long val; \
+ s64 val; \
\
smp_mb(); \
\
@@ -402,9 +402,9 @@ static inline long long atomic64_##op##_return(long long a, atomic64_t *v) \
}

#define ATOMIC64_FETCH_OP(op, op1, op2) \
-static inline long long atomic64_fetch_##op(long long a, atomic64_t *v) \
+static inline s64 atomic64_fetch_##op(s64 a, atomic64_t *v) \
{ \
- unsigned long long val, orig; \
+ s64 val, orig; \
\
smp_mb(); \
\
@@ -444,10 +444,10 @@ ATOMIC64_OPS(xor, xor, xor)
#undef ATOMIC64_OP_RETURN
#undef ATOMIC64_OP

-static inline long long
-atomic64_cmpxchg(atomic64_t *ptr, long long expected, long long new)
+static inline s64
+atomic64_cmpxchg(atomic64_t *ptr, s64 expected, s64 new)
{
- long long prev;
+ s64 prev;

smp_mb();

@@ -467,9 +467,9 @@ atomic64_cmpxchg(atomic64_t *ptr, long long expected, long long new)
return prev;
}

-static inline long long atomic64_xchg(atomic64_t *ptr, long long new)
+static inline s64 atomic64_xchg(atomic64_t *ptr, s64 new)
{
- long long prev;
+ s64 prev;

smp_mb();

@@ -495,9 +495,9 @@ static inline long long atomic64_xchg(atomic64_t *ptr, long long new)
* the atomic variable, v, was not decremented.
*/

-static inline long long atomic64_dec_if_positive(atomic64_t *v)
+static inline s64 atomic64_dec_if_positive(atomic64_t *v)
{
- long long val;
+ s64 val;

smp_mb();

@@ -528,10 +528,9 @@ static inline long long atomic64_dec_if_positive(atomic64_t *v)
* Atomically adds @a to @v, if it was not @u.
* Returns the old value of @v
*/
-static inline long long atomic64_fetch_add_unless(atomic64_t *v, long long a,
- long long u)
+static inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
{
- long long old, temp;
+ s64 old, temp;

smp_mb();

--
2.11.0

2019-05-22 13:27:19

by Mark Rutland

Subject: [PATCH 14/18] locking/atomic: sparc: use s64 for atomic64

As a step towards making the atomic64 API use consistent types treewide,
let's have the sparc atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long, matching the generated headers.

As atomic64_read() depends on the generic definition of atomic64_t, this
still returns long. This will be converted in a subsequent patch.

Otherwise, there should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/sparc/include/asm/atomic_64.h | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/sparc/include/asm/atomic_64.h b/arch/sparc/include/asm/atomic_64.h
index 6963482c81d8..b60448397d4f 100644
--- a/arch/sparc/include/asm/atomic_64.h
+++ b/arch/sparc/include/asm/atomic_64.h
@@ -23,15 +23,15 @@

#define ATOMIC_OP(op) \
void atomic_##op(int, atomic_t *); \
-void atomic64_##op(long, atomic64_t *);
+void atomic64_##op(s64, atomic64_t *);

#define ATOMIC_OP_RETURN(op) \
int atomic_##op##_return(int, atomic_t *); \
-long atomic64_##op##_return(long, atomic64_t *);
+s64 atomic64_##op##_return(s64, atomic64_t *);

#define ATOMIC_FETCH_OP(op) \
int atomic_fetch_##op(int, atomic_t *); \
-long atomic64_fetch_##op(long, atomic64_t *);
+s64 atomic64_fetch_##op(s64, atomic64_t *);

#define ATOMIC_OPS(op) ATOMIC_OP(op) ATOMIC_OP_RETURN(op) ATOMIC_FETCH_OP(op)

@@ -61,7 +61,7 @@ static inline int atomic_xchg(atomic_t *v, int new)
((__typeof__((v)->counter))cmpxchg(&((v)->counter), (o), (n)))
#define atomic64_xchg(v, new) (xchg(&((v)->counter), new))

-long atomic64_dec_if_positive(atomic64_t *v);
+s64 atomic64_dec_if_positive(atomic64_t *v);
#define atomic64_dec_if_positive atomic64_dec_if_positive

#endif /* !(__ARCH_SPARC64_ATOMIC__) */
--
2.11.0

2019-05-22 13:27:40

by Mark Rutland

Subject: [PATCH 08/18] locking/atomic: ia64: use s64 for atomic64

As a step towards making the atomic64 API use consistent types treewide,
let's have the ia64 atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long or __s64, matching the generated
headers.

As atomic64_read() depends on the generic definition of atomic64_t, this
still returns long. This will be converted in a subsequent patch.

Otherwise, there should be no functional change as a result of this
patch.

Signed-off-by: Mark Rutland <[email protected]>
Cc: Fenghua Yu <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/ia64/include/asm/atomic.h | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/ia64/include/asm/atomic.h b/arch/ia64/include/asm/atomic.h
index 206530d0751b..50440f3ddc43 100644
--- a/arch/ia64/include/asm/atomic.h
+++ b/arch/ia64/include/asm/atomic.h
@@ -124,10 +124,10 @@ ATOMIC_FETCH_OP(xor, ^)
#undef ATOMIC_OP

#define ATOMIC64_OP(op, c_op) \
-static __inline__ long \
-ia64_atomic64_##op (__s64 i, atomic64_t *v) \
+static __inline__ s64 \
+ia64_atomic64_##op (s64 i, atomic64_t *v) \
{ \
- __s64 old, new; \
+ s64 old, new; \
CMPXCHG_BUGCHECK_DECL \
\
do { \
@@ -139,10 +139,10 @@ ia64_atomic64_##op (__s64 i, atomic64_t *v) \
}

#define ATOMIC64_FETCH_OP(op, c_op) \
-static __inline__ long \
-ia64_atomic64_fetch_##op (__s64 i, atomic64_t *v) \
+static __inline__ s64 \
+ia64_atomic64_fetch_##op (s64 i, atomic64_t *v) \
{ \
- __s64 old, new; \
+ s64 old, new; \
CMPXCHG_BUGCHECK_DECL \
\
do { \
@@ -162,7 +162,7 @@ ATOMIC64_OPS(sub, -)

#define atomic64_add_return(i,v) \
({ \
- long __ia64_aar_i = (i); \
+ s64 __ia64_aar_i = (i); \
__ia64_atomic_const(i) \
? ia64_fetch_and_add(__ia64_aar_i, &(v)->counter) \
: ia64_atomic64_add(__ia64_aar_i, v); \
@@ -170,7 +170,7 @@ ATOMIC64_OPS(sub, -)

#define atomic64_sub_return(i,v) \
({ \
- long __ia64_asr_i = (i); \
+ s64 __ia64_asr_i = (i); \
__ia64_atomic_const(i) \
? ia64_fetch_and_add(-__ia64_asr_i, &(v)->counter) \
: ia64_atomic64_sub(__ia64_asr_i, v); \
@@ -178,7 +178,7 @@ ATOMIC64_OPS(sub, -)

#define atomic64_fetch_add(i,v) \
({ \
- long __ia64_aar_i = (i); \
+ s64 __ia64_aar_i = (i); \
__ia64_atomic_const(i) \
? ia64_fetchadd(__ia64_aar_i, &(v)->counter, acq) \
: ia64_atomic64_fetch_add(__ia64_aar_i, v); \
@@ -186,7 +186,7 @@ ATOMIC64_OPS(sub, -)

#define atomic64_fetch_sub(i,v) \
({ \
- long __ia64_asr_i = (i); \
+ s64 __ia64_asr_i = (i); \
__ia64_atomic_const(i) \
? ia64_fetchadd(-__ia64_asr_i, &(v)->counter, acq) \
: ia64_atomic64_fetch_sub(__ia64_asr_i, v); \
--
2.11.0

2019-05-22 13:27:55

by Mark Rutland

Subject: [PATCH 11/18] locking/atomic: riscv: fix atomic64_sub_if_positive() offset argument

Presently the riscv implementation of atomic64_sub_if_positive() takes
a 32-bit offset value rather than a 64-bit offset value as it should do.
Thus, if called with a 64-bit offset, the value will be unexpectedly
truncated to 32 bits.

Fix this by taking the offset as a long rather than an int.
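As a rough sketch of the problem in plain C (not the kernel code itself;
the function name below is hypothetical), an int parameter silently drops
the upper 32 bits of a 64-bit offset:

  #include <stdio.h>
  #include <stdint.h>

  /* Stand-in for the buggy prototype: the offset parameter is only int. */
  static int64_t sub_if_positive_bad(int64_t v, int offset)
  {
          return v - offset;
  }

  int main(void)
  {
          int64_t off = 0x100000001LL;    /* a genuinely 64-bit offset */

          /* off is truncated to 1, so the result is wrong by 2^32. */
          printf("%lld\n", (long long)sub_if_positive_bad(10, off));
          return 0;
  }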

Signed-off-by: Mark Rutland <[email protected]>
Cc: Albert Ou <[email protected]>
Cc: Palmer Dabbelt <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: [email protected]
---
arch/riscv/include/asm/atomic.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
index 93826771b616..c9e18289d65c 100644
--- a/arch/riscv/include/asm/atomic.h
+++ b/arch/riscv/include/asm/atomic.h
@@ -336,7 +336,7 @@ static __always_inline int atomic_sub_if_positive(atomic_t *v, int offset)
#define atomic_dec_if_positive(v) atomic_sub_if_positive(v, 1)

#ifndef CONFIG_GENERIC_ATOMIC64
-static __always_inline long atomic64_sub_if_positive(atomic64_t *v, int offset)
+static __always_inline long atomic64_sub_if_positive(atomic64_t *v, long offset)
{
long prev, rc;

--
2.11.0

2019-05-22 13:27:55

by Mark Rutland

Subject: [PATCH 18/18] locking/atomic: s390/pci: remove redundant casts

Now that atomic64_read() returns s64 consistently, we don't need to
explicitly cast its return value. Drop the redundant casts.

Signed-off-by: Mark Rutland <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/s390/pci/pci_debug.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/s390/pci/pci_debug.c b/arch/s390/pci/pci_debug.c
index 45eccf79e990..3408c0df3ebf 100644
--- a/arch/s390/pci/pci_debug.c
+++ b/arch/s390/pci/pci_debug.c
@@ -75,7 +75,7 @@ static void pci_sw_counter_show(struct seq_file *m)

for (i = 0; i < ARRAY_SIZE(pci_sw_names); i++, counter++)
seq_printf(m, "%26s:\t%llu\n", pci_sw_names[i],
- (s64)atomic64_read(counter));
+ atomic64_read(counter));
}

static int pci_perf_show(struct seq_file *m, void *v)
--
2.11.0

2019-05-22 13:28:12

by Mark Rutland

Subject: [PATCH 13/18] locking/atomic: s390: use s64 for atomic64

As a step towards making the atomic64 API use consistent types treewide,
let's have the s390 atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long, matching the generated headers.

As atomic64_read() depends on the generic definition of atomic64_t, this
still returns long. This will be converted in a subsequent patch.

The s390-internal __atomic64_*() ops are also used by the s390 bitops,
and expect pointers to long. Since atomic64_t::counter will be converted
to s64 in a subsequent patch, pointers to this are explicitly cast to
pointers to long when passed to __atomic64_*() ops.

Otherwise, there should be no functional change as a result of this
patch.

Signed-off-by: Mark Rutland <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/s390/include/asm/atomic.h | 38 +++++++++++++++++++-------------------
1 file changed, 19 insertions(+), 19 deletions(-)

diff --git a/arch/s390/include/asm/atomic.h b/arch/s390/include/asm/atomic.h
index fd20ab5d4cf7..491ad53a0d4e 100644
--- a/arch/s390/include/asm/atomic.h
+++ b/arch/s390/include/asm/atomic.h
@@ -84,9 +84,9 @@ static inline int atomic_cmpxchg(atomic_t *v, int old, int new)

#define ATOMIC64_INIT(i) { (i) }

-static inline long atomic64_read(const atomic64_t *v)
+static inline s64 atomic64_read(const atomic64_t *v)
{
- long c;
+ s64 c;

asm volatile(
" lg %0,%1\n"
@@ -94,49 +94,49 @@ static inline long atomic64_read(const atomic64_t *v)
return c;
}

-static inline void atomic64_set(atomic64_t *v, long i)
+static inline void atomic64_set(atomic64_t *v, s64 i)
{
asm volatile(
" stg %1,%0\n"
: "=Q" (v->counter) : "d" (i));
}

-static inline long atomic64_add_return(long i, atomic64_t *v)
+static inline s64 atomic64_add_return(s64 i, atomic64_t *v)
{
- return __atomic64_add_barrier(i, &v->counter) + i;
+ return __atomic64_add_barrier(i, (long *)&v->counter) + i;
}

-static inline long atomic64_fetch_add(long i, atomic64_t *v)
+static inline s64 atomic64_fetch_add(s64 i, atomic64_t *v)
{
- return __atomic64_add_barrier(i, &v->counter);
+ return __atomic64_add_barrier(i, (long *)&v->counter);
}

-static inline void atomic64_add(long i, atomic64_t *v)
+static inline void atomic64_add(s64 i, atomic64_t *v)
{
#ifdef CONFIG_HAVE_MARCH_Z196_FEATURES
if (__builtin_constant_p(i) && (i > -129) && (i < 128)) {
- __atomic64_add_const(i, &v->counter);
+ __atomic64_add_const(i, (long *)&v->counter);
return;
}
#endif
- __atomic64_add(i, &v->counter);
+ __atomic64_add(i, (long *)&v->counter);
}

#define atomic64_xchg(v, new) (xchg(&((v)->counter), new))

-static inline long atomic64_cmpxchg(atomic64_t *v, long old, long new)
+static inline s64 atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
{
- return __atomic64_cmpxchg(&v->counter, old, new);
+ return __atomic64_cmpxchg((long *)&v->counter, old, new);
}

#define ATOMIC64_OPS(op) \
-static inline void atomic64_##op(long i, atomic64_t *v) \
+static inline void atomic64_##op(s64 i, atomic64_t *v) \
{ \
- __atomic64_##op(i, &v->counter); \
+ __atomic64_##op(i, (long *)&v->counter); \
} \
-static inline long atomic64_fetch_##op(long i, atomic64_t *v) \
+static inline s64 atomic64_fetch_##op(s64 i, atomic64_t *v) \
{ \
- return __atomic64_##op##_barrier(i, &v->counter); \
+ return __atomic64_##op##_barrier(i, (long *)&v->counter); \
}

ATOMIC64_OPS(and)
@@ -145,8 +145,8 @@ ATOMIC64_OPS(xor)

#undef ATOMIC64_OPS

-#define atomic64_sub_return(_i, _v) atomic64_add_return(-(long)(_i), _v)
-#define atomic64_fetch_sub(_i, _v) atomic64_fetch_add(-(long)(_i), _v)
-#define atomic64_sub(_i, _v) atomic64_add(-(long)(_i), _v)
+#define atomic64_sub_return(_i, _v) atomic64_add_return(-(s64)(_i), _v)
+#define atomic64_fetch_sub(_i, _v) atomic64_fetch_add(-(s64)(_i), _v)
+#define atomic64_sub(_i, _v) atomic64_add(-(s64)(_i), _v)

#endif /* __ARCH_S390_ATOMIC__ */
--
2.11.0

2019-05-22 13:28:32

by Mark Rutland

Subject: [PATCH 16/18] locking/atomic: use s64 for atomic64_t on 64-bit

Now that all architectures use s64 consistently as the base type for the
atomic64 API, let's have the CONFIG_64BIT definition of atomic64_t use
s64 as the underlying type, rather than long, matching the generated
headers.

On architectures where atomic64_read(v) is READ_ONCE(v->counter), this
patch will cause the return type of atomic64_read() to be s64.
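A minimal sketch of why the typedef change alone is enough on such
architectures (simplified from the real headers):

  typedef struct {
          s64 counter;
  } atomic64_t;

  static inline s64 atomic64_read(const atomic64_t *v)
  {
          /* READ_ONCE() yields the counter's type, which is now s64. */
          return READ_ONCE(v->counter);
  }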

As of this patch, the atomic64 API can be relied upon to consistently
return s64 where a value (rather than a boolean condition) is returned.
This should make code more robust and simpler, allowing for the removal
of casts previously required to ensure consistent types.

Signed-off-by: Mark Rutland <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
---
include/linux/types.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/types.h b/include/linux/types.h
index 231114ae38f4..05030f608be3 100644
--- a/include/linux/types.h
+++ b/include/linux/types.h
@@ -174,7 +174,7 @@ typedef struct {

#ifdef CONFIG_64BIT
typedef struct {
- long counter;
+ s64 counter;
} atomic64_t;
#endif

--
2.11.0

2019-05-22 13:28:32

by Mark Rutland

Subject: [PATCH 15/18] locking/atomic: x86: use s64 for atomic64

As a step towards making the atomic64 API use consistent types treewide,
let's have the x86 atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long or long long, matching the
generated headers.

Note that the x86 arch_atomic64 implementation is already wrapped by the
generic instrumented atomic64 implementation, which uses s64
consistently.

Otherwise, there should be no functional change as a result of this
patch.

Signed-off-by: Mark Rutland <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Russell King <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Will Deacon <[email protected]>
---
arch/x86/include/asm/atomic64_32.h | 66 ++++++++++++++++++--------------------
arch/x86/include/asm/atomic64_64.h | 38 +++++++++++-----------
2 files changed, 51 insertions(+), 53 deletions(-)

diff --git a/arch/x86/include/asm/atomic64_32.h b/arch/x86/include/asm/atomic64_32.h
index 6a5b0ec460da..52cfaecb13f9 100644
--- a/arch/x86/include/asm/atomic64_32.h
+++ b/arch/x86/include/asm/atomic64_32.h
@@ -9,7 +9,7 @@
/* An 64bit atomic type */

typedef struct {
- u64 __aligned(8) counter;
+ s64 __aligned(8) counter;
} atomic64_t;

#define ATOMIC64_INIT(val) { (val) }
@@ -71,8 +71,7 @@ ATOMIC64_DECL(add_unless);
* the old value.
*/

-static inline long long arch_atomic64_cmpxchg(atomic64_t *v, long long o,
- long long n)
+static inline s64 arch_atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n)
{
return arch_cmpxchg64(&v->counter, o, n);
}
@@ -85,9 +84,9 @@ static inline long long arch_atomic64_cmpxchg(atomic64_t *v, long long o,
* Atomically xchgs the value of @v to @n and returns
* the old value.
*/
-static inline long long arch_atomic64_xchg(atomic64_t *v, long long n)
+static inline s64 arch_atomic64_xchg(atomic64_t *v, s64 n)
{
- long long o;
+ s64 o;
unsigned high = (unsigned)(n >> 32);
unsigned low = (unsigned)n;
alternative_atomic64(xchg, "=&A" (o),
@@ -103,7 +102,7 @@ static inline long long arch_atomic64_xchg(atomic64_t *v, long long n)
*
* Atomically sets the value of @v to @n.
*/
-static inline void arch_atomic64_set(atomic64_t *v, long long i)
+static inline void arch_atomic64_set(atomic64_t *v, s64 i)
{
unsigned high = (unsigned)(i >> 32);
unsigned low = (unsigned)i;
@@ -118,9 +117,9 @@ static inline void arch_atomic64_set(atomic64_t *v, long long i)
*
* Atomically reads the value of @v and returns it.
*/
-static inline long long arch_atomic64_read(const atomic64_t *v)
+static inline s64 arch_atomic64_read(const atomic64_t *v)
{
- long long r;
+ s64 r;
alternative_atomic64(read, "=&A" (r), "c" (v) : "memory");
return r;
}
@@ -132,7 +131,7 @@ static inline long long arch_atomic64_read(const atomic64_t *v)
*
* Atomically adds @i to @v and returns @i + *@v
*/
-static inline long long arch_atomic64_add_return(long long i, atomic64_t *v)
+static inline s64 arch_atomic64_add_return(s64 i, atomic64_t *v)
{
alternative_atomic64(add_return,
ASM_OUTPUT2("+A" (i), "+c" (v)),
@@ -143,7 +142,7 @@ static inline long long arch_atomic64_add_return(long long i, atomic64_t *v)
/*
* Other variants with different arithmetic operators:
*/
-static inline long long arch_atomic64_sub_return(long long i, atomic64_t *v)
+static inline s64 arch_atomic64_sub_return(s64 i, atomic64_t *v)
{
alternative_atomic64(sub_return,
ASM_OUTPUT2("+A" (i), "+c" (v)),
@@ -151,18 +150,18 @@ static inline long long arch_atomic64_sub_return(long long i, atomic64_t *v)
return i;
}

-static inline long long arch_atomic64_inc_return(atomic64_t *v)
+static inline s64 arch_atomic64_inc_return(atomic64_t *v)
{
- long long a;
+ s64 a;
alternative_atomic64(inc_return, "=&A" (a),
"S" (v) : "memory", "ecx");
return a;
}
#define arch_atomic64_inc_return arch_atomic64_inc_return

-static inline long long arch_atomic64_dec_return(atomic64_t *v)
+static inline s64 arch_atomic64_dec_return(atomic64_t *v)
{
- long long a;
+ s64 a;
alternative_atomic64(dec_return, "=&A" (a),
"S" (v) : "memory", "ecx");
return a;
@@ -176,7 +175,7 @@ static inline long long arch_atomic64_dec_return(atomic64_t *v)
*
* Atomically adds @i to @v.
*/
-static inline long long arch_atomic64_add(long long i, atomic64_t *v)
+static inline s64 arch_atomic64_add(s64 i, atomic64_t *v)
{
__alternative_atomic64(add, add_return,
ASM_OUTPUT2("+A" (i), "+c" (v)),
@@ -191,7 +190,7 @@ static inline long long arch_atomic64_add(long long i, atomic64_t *v)
*
* Atomically subtracts @i from @v.
*/
-static inline long long arch_atomic64_sub(long long i, atomic64_t *v)
+static inline s64 arch_atomic64_sub(s64 i, atomic64_t *v)
{
__alternative_atomic64(sub, sub_return,
ASM_OUTPUT2("+A" (i), "+c" (v)),
@@ -234,8 +233,7 @@ static inline void arch_atomic64_dec(atomic64_t *v)
* Atomically adds @a to @v, so long as it was not @u.
* Returns non-zero if the add was done, zero otherwise.
*/
-static inline int arch_atomic64_add_unless(atomic64_t *v, long long a,
- long long u)
+static inline int arch_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
{
unsigned low = (unsigned)u;
unsigned high = (unsigned)(u >> 32);
@@ -254,9 +252,9 @@ static inline int arch_atomic64_inc_not_zero(atomic64_t *v)
}
#define arch_atomic64_inc_not_zero arch_atomic64_inc_not_zero

-static inline long long arch_atomic64_dec_if_positive(atomic64_t *v)
+static inline s64 arch_atomic64_dec_if_positive(atomic64_t *v)
{
- long long r;
+ s64 r;
alternative_atomic64(dec_if_positive, "=&A" (r),
"S" (v) : "ecx", "memory");
return r;
@@ -266,17 +264,17 @@ static inline long long arch_atomic64_dec_if_positive(atomic64_t *v)
#undef alternative_atomic64
#undef __alternative_atomic64

-static inline void arch_atomic64_and(long long i, atomic64_t *v)
+static inline void arch_atomic64_and(s64 i, atomic64_t *v)
{
- long long old, c = 0;
+ s64 old, c = 0;

while ((old = arch_atomic64_cmpxchg(v, c, c & i)) != c)
c = old;
}

-static inline long long arch_atomic64_fetch_and(long long i, atomic64_t *v)
+static inline s64 arch_atomic64_fetch_and(s64 i, atomic64_t *v)
{
- long long old, c = 0;
+ s64 old, c = 0;

while ((old = arch_atomic64_cmpxchg(v, c, c & i)) != c)
c = old;
@@ -284,17 +282,17 @@ static inline long long arch_atomic64_fetch_and(long long i, atomic64_t *v)
return old;
}

-static inline void arch_atomic64_or(long long i, atomic64_t *v)
+static inline void arch_atomic64_or(s64 i, atomic64_t *v)
{
- long long old, c = 0;
+ s64 old, c = 0;

while ((old = arch_atomic64_cmpxchg(v, c, c | i)) != c)
c = old;
}

-static inline long long arch_atomic64_fetch_or(long long i, atomic64_t *v)
+static inline s64 arch_atomic64_fetch_or(s64 i, atomic64_t *v)
{
- long long old, c = 0;
+ s64 old, c = 0;

while ((old = arch_atomic64_cmpxchg(v, c, c | i)) != c)
c = old;
@@ -302,17 +300,17 @@ static inline long long arch_atomic64_fetch_or(long long i, atomic64_t *v)
return old;
}

-static inline void arch_atomic64_xor(long long i, atomic64_t *v)
+static inline void arch_atomic64_xor(s64 i, atomic64_t *v)
{
- long long old, c = 0;
+ s64 old, c = 0;

while ((old = arch_atomic64_cmpxchg(v, c, c ^ i)) != c)
c = old;
}

-static inline long long arch_atomic64_fetch_xor(long long i, atomic64_t *v)
+static inline s64 arch_atomic64_fetch_xor(s64 i, atomic64_t *v)
{
- long long old, c = 0;
+ s64 old, c = 0;

while ((old = arch_atomic64_cmpxchg(v, c, c ^ i)) != c)
c = old;
@@ -320,9 +318,9 @@ static inline long long arch_atomic64_fetch_xor(long long i, atomic64_t *v)
return old;
}

-static inline long long arch_atomic64_fetch_add(long long i, atomic64_t *v)
+static inline s64 arch_atomic64_fetch_add(s64 i, atomic64_t *v)
{
- long long old, c = 0;
+ s64 old, c = 0;

while ((old = arch_atomic64_cmpxchg(v, c, c + i)) != c)
c = old;
diff --git a/arch/x86/include/asm/atomic64_64.h b/arch/x86/include/asm/atomic64_64.h
index dadc20adba21..703b7dfd45e0 100644
--- a/arch/x86/include/asm/atomic64_64.h
+++ b/arch/x86/include/asm/atomic64_64.h
@@ -17,7 +17,7 @@
* Atomically reads the value of @v.
* Doesn't imply a read memory barrier.
*/
-static inline long arch_atomic64_read(const atomic64_t *v)
+static inline s64 arch_atomic64_read(const atomic64_t *v)
{
return READ_ONCE((v)->counter);
}
@@ -29,7 +29,7 @@ static inline long arch_atomic64_read(const atomic64_t *v)
*
* Atomically sets the value of @v to @i.
*/
-static inline void arch_atomic64_set(atomic64_t *v, long i)
+static inline void arch_atomic64_set(atomic64_t *v, s64 i)
{
WRITE_ONCE(v->counter, i);
}
@@ -41,7 +41,7 @@ static inline void arch_atomic64_set(atomic64_t *v, long i)
*
* Atomically adds @i to @v.
*/
-static __always_inline void arch_atomic64_add(long i, atomic64_t *v)
+static __always_inline void arch_atomic64_add(s64 i, atomic64_t *v)
{
asm volatile(LOCK_PREFIX "addq %1,%0"
: "=m" (v->counter)
@@ -55,7 +55,7 @@ static __always_inline void arch_atomic64_add(long i, atomic64_t *v)
*
* Atomically subtracts @i from @v.
*/
-static inline void arch_atomic64_sub(long i, atomic64_t *v)
+static inline void arch_atomic64_sub(s64 i, atomic64_t *v)
{
asm volatile(LOCK_PREFIX "subq %1,%0"
: "=m" (v->counter)
@@ -71,7 +71,7 @@ static inline void arch_atomic64_sub(long i, atomic64_t *v)
* true if the result is zero, or false for all
* other cases.
*/
-static inline bool arch_atomic64_sub_and_test(long i, atomic64_t *v)
+static inline bool arch_atomic64_sub_and_test(s64 i, atomic64_t *v)
{
return GEN_BINARY_RMWcc(LOCK_PREFIX "subq", v->counter, e, "er", i);
}
@@ -142,7 +142,7 @@ static inline bool arch_atomic64_inc_and_test(atomic64_t *v)
* if the result is negative, or false when
* result is greater than or equal to zero.
*/
-static inline bool arch_atomic64_add_negative(long i, atomic64_t *v)
+static inline bool arch_atomic64_add_negative(s64 i, atomic64_t *v)
{
return GEN_BINARY_RMWcc(LOCK_PREFIX "addq", v->counter, s, "er", i);
}
@@ -155,43 +155,43 @@ static inline bool arch_atomic64_add_negative(long i, atomic64_t *v)
*
* Atomically adds @i to @v and returns @i + @v
*/
-static __always_inline long arch_atomic64_add_return(long i, atomic64_t *v)
+static __always_inline s64 arch_atomic64_add_return(s64 i, atomic64_t *v)
{
return i + xadd(&v->counter, i);
}

-static inline long arch_atomic64_sub_return(long i, atomic64_t *v)
+static inline s64 arch_atomic64_sub_return(s64 i, atomic64_t *v)
{
return arch_atomic64_add_return(-i, v);
}

-static inline long arch_atomic64_fetch_add(long i, atomic64_t *v)
+static inline s64 arch_atomic64_fetch_add(s64 i, atomic64_t *v)
{
return xadd(&v->counter, i);
}

-static inline long arch_atomic64_fetch_sub(long i, atomic64_t *v)
+static inline s64 arch_atomic64_fetch_sub(s64 i, atomic64_t *v)
{
return xadd(&v->counter, -i);
}

-static inline long arch_atomic64_cmpxchg(atomic64_t *v, long old, long new)
+static inline s64 arch_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
{
return arch_cmpxchg(&v->counter, old, new);
}

#define arch_atomic64_try_cmpxchg arch_atomic64_try_cmpxchg
-static __always_inline bool arch_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, long new)
+static __always_inline bool arch_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
{
return try_cmpxchg(&v->counter, old, new);
}

-static inline long arch_atomic64_xchg(atomic64_t *v, long new)
+static inline s64 arch_atomic64_xchg(atomic64_t *v, s64 new)
{
return arch_xchg(&v->counter, new);
}

-static inline void arch_atomic64_and(long i, atomic64_t *v)
+static inline void arch_atomic64_and(s64 i, atomic64_t *v)
{
asm volatile(LOCK_PREFIX "andq %1,%0"
: "+m" (v->counter)
@@ -199,7 +199,7 @@ static inline void arch_atomic64_and(long i, atomic64_t *v)
: "memory");
}

-static inline long arch_atomic64_fetch_and(long i, atomic64_t *v)
+static inline s64 arch_atomic64_fetch_and(s64 i, atomic64_t *v)
{
s64 val = arch_atomic64_read(v);

@@ -208,7 +208,7 @@ static inline long arch_atomic64_fetch_and(long i, atomic64_t *v)
return val;
}

-static inline void arch_atomic64_or(long i, atomic64_t *v)
+static inline void arch_atomic64_or(s64 i, atomic64_t *v)
{
asm volatile(LOCK_PREFIX "orq %1,%0"
: "+m" (v->counter)
@@ -216,7 +216,7 @@ static inline void arch_atomic64_or(long i, atomic64_t *v)
: "memory");
}

-static inline long arch_atomic64_fetch_or(long i, atomic64_t *v)
+static inline s64 arch_atomic64_fetch_or(s64 i, atomic64_t *v)
{
s64 val = arch_atomic64_read(v);

@@ -225,7 +225,7 @@ static inline long arch_atomic64_fetch_or(long i, atomic64_t *v)
return val;
}

-static inline void arch_atomic64_xor(long i, atomic64_t *v)
+static inline void arch_atomic64_xor(s64 i, atomic64_t *v)
{
asm volatile(LOCK_PREFIX "xorq %1,%0"
: "+m" (v->counter)
@@ -233,7 +233,7 @@ static inline void arch_atomic64_xor(long i, atomic64_t *v)
: "memory");
}

-static inline long arch_atomic64_fetch_xor(long i, atomic64_t *v)
+static inline s64 arch_atomic64_fetch_xor(s64 i, atomic64_t *v)
{
s64 val = arch_atomic64_read(v);

--
2.11.0

2019-05-22 13:28:35

by Mark Rutland

[permalink] [raw]
Subject: [PATCH 17/18] locking/atomic: crypto: nx: remove redundant casts

Now that atomic64_read() returns s64 consistently, we don't need to
explicitly cast its return value. Drop the redundant casts.

Signed-off-by: Mark Rutland <[email protected]>
Cc: Herbert Xu <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
---
drivers/crypto/nx/nx-842-pseries.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/crypto/nx/nx-842-pseries.c b/drivers/crypto/nx/nx-842-pseries.c
index 9432e9e42afe..5cf77729a438 100644
--- a/drivers/crypto/nx/nx-842-pseries.c
+++ b/drivers/crypto/nx/nx-842-pseries.c
@@ -870,7 +870,7 @@ static ssize_t nx842_##_name##_show(struct device *dev, \
local_devdata = rcu_dereference(devdata); \
if (local_devdata) \
p = snprintf(buf, PAGE_SIZE, "%lld\n", \
- (s64)atomic64_read(&local_devdata->counters->_name)); \
+ atomic64_read(&local_devdata->counters->_name)); \
rcu_read_unlock(); \
return p; \
}
@@ -924,7 +924,7 @@ static ssize_t nx842_timehist_show(struct device *dev,
for (i = 0; i < (NX842_HIST_SLOTS - 2); i++) {
bytes = snprintf(p, bytes_remain, "%u-%uus:\t%lld\n",
i ? (2<<(i-1)) : 0, (2<<i)-1,
- (s64)atomic64_read(&times[i]));
+ atomic64_read(&times[i]));
bytes_remain -= bytes;
p += bytes;
}
@@ -932,7 +932,7 @@ static ssize_t nx842_timehist_show(struct device *dev,
* 2<<(NX842_HIST_SLOTS - 2) us */
bytes = snprintf(p, bytes_remain, "%uus - :\t%lld\n",
2<<(NX842_HIST_SLOTS - 2),
- (s64)atomic64_read(&times[(NX842_HIST_SLOTS - 1)]));
+ atomic64_read(&times[(NX842_HIST_SLOTS - 1)]));
p += bytes;

rcu_read_unlock();
--
2.11.0

2019-05-22 19:08:14

by Palmer Dabbelt

[permalink] [raw]
Subject: Re: [PATCH 12/18] locking/atomic: riscv: use s64 for atomic64

On Wed, 22 May 2019 06:22:44 PDT (-0700), [email protected] wrote:
> As a step towards making the atomic64 API use consistent types treewide,
> let's have the s390 atomic64 implementation use s64 as the underlying

and apparently the RISC-V one as well? :)

> type for atomic64_t, rather than long, matching the generated headers.
>
> As atomic64_read() depends on the generic definition of atomic64_t, this
> still returns long on 64-bit. This will be converted in a subsequent
> patch.
>
> Otherwise, there should be no functional change as a result of this patch.
>
> Signed-off-by: Mark Rutland <[email protected]>
> Cc: Albert Ou <[email protected]>
> Cc: Palmer Dabbelt <[email protected]>
> Cc: Peter Zijlstra <[email protected]>
> Cc: Will Deacon <[email protected]>
> ---
> arch/riscv/include/asm/atomic.h | 44 +++++++++++++++++++++--------------------
> 1 file changed, 23 insertions(+), 21 deletions(-)
>
> diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
> index c9e18289d65c..bffebc57357d 100644
> --- a/arch/riscv/include/asm/atomic.h
> +++ b/arch/riscv/include/asm/atomic.h
> @@ -42,11 +42,11 @@ static __always_inline void atomic_set(atomic_t *v, int i)
>
> #ifndef CONFIG_GENERIC_ATOMIC64
> #define ATOMIC64_INIT(i) { (i) }
> -static __always_inline long atomic64_read(const atomic64_t *v)
> +static __always_inline s64 atomic64_read(const atomic64_t *v)
> {
> return READ_ONCE(v->counter);
> }
> -static __always_inline void atomic64_set(atomic64_t *v, long i)
> +static __always_inline void atomic64_set(atomic64_t *v, s64 i)
> {
> WRITE_ONCE(v->counter, i);
> }
> @@ -70,11 +70,11 @@ void atomic##prefix##_##op(c_type i, atomic##prefix##_t *v) \
>
> #ifdef CONFIG_GENERIC_ATOMIC64
> #define ATOMIC_OPS(op, asm_op, I) \
> - ATOMIC_OP (op, asm_op, I, w, int, )
> + ATOMIC_OP (op, asm_op, I, w, int, )
> #else
> #define ATOMIC_OPS(op, asm_op, I) \
> - ATOMIC_OP (op, asm_op, I, w, int, ) \
> - ATOMIC_OP (op, asm_op, I, d, long, 64)
> + ATOMIC_OP (op, asm_op, I, w, int, ) \
> + ATOMIC_OP (op, asm_op, I, d, s64, 64)
> #endif
>
> ATOMIC_OPS(add, add, i)
> @@ -131,14 +131,14 @@ c_type atomic##prefix##_##op##_return(c_type i, atomic##prefix##_t *v) \
>
> #ifdef CONFIG_GENERIC_ATOMIC64
> #define ATOMIC_OPS(op, asm_op, c_op, I) \
> - ATOMIC_FETCH_OP( op, asm_op, I, w, int, ) \
> - ATOMIC_OP_RETURN(op, asm_op, c_op, I, w, int, )
> + ATOMIC_FETCH_OP( op, asm_op, I, w, int, ) \
> + ATOMIC_OP_RETURN(op, asm_op, c_op, I, w, int, )
> #else
> #define ATOMIC_OPS(op, asm_op, c_op, I) \
> - ATOMIC_FETCH_OP( op, asm_op, I, w, int, ) \
> - ATOMIC_OP_RETURN(op, asm_op, c_op, I, w, int, ) \
> - ATOMIC_FETCH_OP( op, asm_op, I, d, long, 64) \
> - ATOMIC_OP_RETURN(op, asm_op, c_op, I, d, long, 64)
> + ATOMIC_FETCH_OP( op, asm_op, I, w, int, ) \
> + ATOMIC_OP_RETURN(op, asm_op, c_op, I, w, int, ) \
> + ATOMIC_FETCH_OP( op, asm_op, I, d, s64, 64) \
> + ATOMIC_OP_RETURN(op, asm_op, c_op, I, d, s64, 64)
> #endif
>
> ATOMIC_OPS(add, add, +, i)
> @@ -170,11 +170,11 @@ ATOMIC_OPS(sub, add, +, -i)
>
> #ifdef CONFIG_GENERIC_ATOMIC64
> #define ATOMIC_OPS(op, asm_op, I) \
> - ATOMIC_FETCH_OP(op, asm_op, I, w, int, )
> + ATOMIC_FETCH_OP(op, asm_op, I, w, int, )
> #else
> #define ATOMIC_OPS(op, asm_op, I) \
> - ATOMIC_FETCH_OP(op, asm_op, I, w, int, ) \
> - ATOMIC_FETCH_OP(op, asm_op, I, d, long, 64)
> + ATOMIC_FETCH_OP(op, asm_op, I, w, int, ) \
> + ATOMIC_FETCH_OP(op, asm_op, I, d, s64, 64)
> #endif
>
> ATOMIC_OPS(and, and, i)
> @@ -223,9 +223,10 @@ static __always_inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
> #define atomic_fetch_add_unless atomic_fetch_add_unless
>
> #ifndef CONFIG_GENERIC_ATOMIC64
> -static __always_inline long atomic64_fetch_add_unless(atomic64_t *v, long a, long u)
> +static __always_inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
> {
> - long prev, rc;
> + s64 prev;
> + long rc;
>
> __asm__ __volatile__ (
> "0: lr.d %[p], %[c]\n"
> @@ -294,11 +295,11 @@ c_t atomic##prefix##_cmpxchg(atomic##prefix##_t *v, c_t o, c_t n) \
>
> #ifdef CONFIG_GENERIC_ATOMIC64
> #define ATOMIC_OPS() \
> - ATOMIC_OP( int, , 4)
> + ATOMIC_OP(int, , 4)
> #else
> #define ATOMIC_OPS() \
> - ATOMIC_OP( int, , 4) \
> - ATOMIC_OP(long, 64, 8)
> + ATOMIC_OP(int, , 4) \
> + ATOMIC_OP(s64, 64, 8)
> #endif
>
> ATOMIC_OPS()
> @@ -336,9 +337,10 @@ static __always_inline int atomic_sub_if_positive(atomic_t *v, int offset)
> #define atomic_dec_if_positive(v) atomic_sub_if_positive(v, 1)
>
> #ifndef CONFIG_GENERIC_ATOMIC64
> -static __always_inline long atomic64_sub_if_positive(atomic64_t *v, long offset)
> +static __always_inline s64 atomic64_sub_if_positive(atomic64_t *v, s64 offset)
> {
> - long prev, rc;
> + s64 prev;
> + long rc;
>
> __asm__ __volatile__ (
> "0: lr.d %[p], %[c]\n"

Reviwed-by: Palmer Dabbelt <[email protected]>

Thanks!

2019-05-22 19:09:09

by Palmer Dabbelt

[permalink] [raw]
Subject: Re: [PATCH 11/18] locking/atomic: riscv: fix atomic64_sub_if_positive() offset argument

On Wed, 22 May 2019 06:22:43 PDT (-0700), [email protected] wrote:
> Presently the riscv implementation of atomic64_sub_if_positive() takes
> a 32-bit offset value rather than a 64-bit offset value as it should do.
> Thus, if called with a 64-bit offset, the value will be unexpectedly
> truncated to 32 bits.
>
> Fix this by taking the offset as a long rather than an int.
>
> Signed-off-by: Mark Rutland <[email protected]>
> Cc: Albert Ou <[email protected]>
> Cc: Palmer Dabbelt <[email protected]>
> Cc: Peter Zijlstra <[email protected]>
> Cc: Will Deacon <[email protected]>
> Cc: [email protected]
> ---
> arch/riscv/include/asm/atomic.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
> index 93826771b616..c9e18289d65c 100644
> --- a/arch/riscv/include/asm/atomic.h
> +++ b/arch/riscv/include/asm/atomic.h
> @@ -336,7 +336,7 @@ static __always_inline int atomic_sub_if_positive(atomic_t *v, int offset)
> #define atomic_dec_if_positive(v) atomic_sub_if_positive(v, 1)
>
> #ifndef CONFIG_GENERIC_ATOMIC64
> -static __always_inline long atomic64_sub_if_positive(atomic64_t *v, int offset)
> +static __always_inline long atomic64_sub_if_positive(atomic64_t *v, long offset)
> {
> long prev, rc;

Reviewed-by: Palmer Dabbelt <[email protected]>

Thanks!
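
To make the truncation fixed above concrete (a hypothetical caller, not
code from the thread): with the old "int offset" prototype the argument
is converted to int at the call boundary, so a 64-bit offset loses its
upper 32 bits before the LR/SC loop ever sees it.

static void example(void)
{
	atomic64_t quota = ATOMIC64_INIT(0x200000000LL);	/* 8 GiB */

	/* Intended: subtract 4 GiB if the counter can afford it. */
	atomic64_sub_if_positive(&quota, 0x100000000LL);

	/* With "int offset" the constant is truncated to 0 and nothing
	 * is subtracted; with "long offset" (64 bits on riscv64) the
	 * full value is used.
	 */
}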

2019-05-22 21:17:41

by Arnd Bergmann

[permalink] [raw]
Subject: Re: [PATCH 03/18] locking/atomic: generic: use s64 for atomic64

On Wed, May 22, 2019 at 3:23 PM Mark Rutland <[email protected]> wrote:
>
> As a step towards making the atomic64 API use consistent types treewide,
> let's have the generic atomic64 implementation use s64 as the underlying
> type for atomic64_t, rather than long long, matching the generated
> headers.
>
> Otherwise, there should be no functional change as a result of this
> patch.
>
> Signed-off-by: Mark Rutland <[email protected]>
> Cc: Peter Zijlstra <[email protected]>
> Cc: Will Deacon <[email protected]>

Acked-by: Arnd Bergmann <[email protected]>

2019-05-22 21:21:12

by Arnd Bergmann

[permalink] [raw]
Subject: Re: [PATCH 00/18] locking/atomic: atomic64 type cleanup

On Wed, May 22, 2019 at 3:23 PM Mark Rutland <[email protected]> wrote:
>
> Currently architectures return inconsistent types for atomic64 ops. Some return
> long (e.g. powerpc), some return long long (e.g. arc), and some return s64
> (e.g. x86).
>
> This is a bit messy, and causes unnecessary pain (e.g. as values must be cast
> before they can be printed [1]).
>
> This series reworks all the atomic64 implementations to use s64 as the base
> type for atomic64_t (as discussed [2]), and to ensure that this type is
> consistently used for parameters and return values in the API, avoiding further
> problems in this area.
>
> This series (based on v5.1-rc1) can also be found in my atomics/type-cleanup
> branch [3] on kernel.org.

Nice cleanup!

I've provided an explicit Ack for the asm-generic patch if someone wants
to pick up the entire series, but I can also put it all into my asm-generic
tree if you want, after more people have had a chance to take a look.

Arnd

2019-05-23 08:33:08

by Andrea Parri

[permalink] [raw]
Subject: Re: [PATCH 00/18] locking/atomic: atomic64 type cleanup

Hi Mark,

On Wed, May 22, 2019 at 02:22:32PM +0100, Mark Rutland wrote:
> Currently architectures return inconsistent types for atomic64 ops. Some return
> long (e.g. powerpc), some return long long (e.g. arc), and some return s64
> (e.g. x86).

(only partially related, but probably worth asking:)

While reading the series, I realized that the following expression:

atomic64_t v;
...
typeof(v.counter) my_val = atomic64_set(&v, VAL);

is a valid expression on some architectures (in part., on architectures
which #define atomic64_set() to WRITE_ONCE()) but is invalid on others.
(This is due to the fact that WRITE_ONCE() can be used as an rvalue in
the above assignment; TBH, I don't know the reasons for having such an rvalue?)

IIUC, similar considerations hold for atomic_set().

The question is whether this is a known/"expected" inconsistency in the
implementation of atomic64_set() or if this would also need to be fixed
/addressed (say in a different patchset)?

Thanks,
Andrea
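
To make the asymmetry Andrea describes concrete (a sketch with made-up
names, neither definition quoted from a real header): where atomic64_set()
is a bare macro around WRITE_ONCE(), the whole thing is a statement
expression whose value is the stored value, so it can appear on the
right-hand side of an assignment; where it is a static inline returning
void, the same construct is a compile error.

/* Style A: macro straight onto WRITE_ONCE(); usable as an rvalue. */
#define styleA_atomic64_set(v, i)	WRITE_ONCE((v)->counter, (i))

/* Style B: static inline returning void; not usable as an rvalue. */
static inline void styleB_atomic64_set(atomic64_t *v, s64 i)
{
	WRITE_ONCE(v->counter, i);
}

static void example(atomic64_t *v)
{
	s64 a = styleA_atomic64_set(v, 1);	/* compiles */
	s64 b = styleB_atomic64_set(v, 1);	/* the point: this one is
						 * rejected, "void value not
						 * ignored as it ought to be" */
}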

2019-05-23 10:21:20

by Mark Rutland

[permalink] [raw]
Subject: Re: [PATCH 00/18] locking/atomic: atomic64 type cleanup

On Thu, May 23, 2019 at 10:30:13AM +0200, Andrea Parri wrote:
> Hi Mark,

Hi Andrea,

> On Wed, May 22, 2019 at 02:22:32PM +0100, Mark Rutland wrote:
> > Currently architectures return inconsistent types for atomic64 ops. Some return
> > long (e.g. powerpc), some return long long (e.g. arc), and some return s64
> > (e.g. x86).
>
> (only partially related, but probably worth asking:)
>
> While reading the series, I realized that the following expression:
>
> atomic64_t v;
> ...
> typeof(v.counter) my_val = atomic64_set(&v, VAL);
>
> is a valid expression on some architectures (in part., on architectures
> which #define atomic64_set() to WRITE_ONCE()) but is invalid on others.
> (This is due to the fact that WRITE_ONCE() can be used as an rvalue in
> the above assignment; TBH, I don't know the reasons for having such an rvalue?)
>
> IIUC, similar considerations hold for atomic_set().
>
> The question is whether this is a known/"expected" inconsistency in the
> implementation of atomic64_set() or if this would also need to be fixed
> /addressed (say in a different patchset)?

In either case, I don't think the intent is that they should be used that way,
and from a quick scan, I can only find a single relevant instance today:

[mark@lakrids:~/src/linux]% git grep '\(return\|=\)\s\+atomic\(64\)\?_set'
include/linux/vmw_vmci_defs.h: return atomic_set((atomic_t *)var, (u32)new_val);
include/linux/vmw_vmci_defs.h: return atomic64_set(var, new_val);


[mark@lakrids:~/src/linux]% git grep '=\s+atomic_set' | wc -l
0
[mark@lakrids:~/src/linux]% git grep '=\s+atomic64_set' | wc -l
0

Any architectures implementing arch_atomic_* will have both of these functions
returning void. Currently that's x86 and arm64, but (time permitting) I intend
to migrate other architectures, so I guess we'll have to fix the above up as
required.

I think it's best to avoid the construct above.

Thanks,
Mark.

2019-05-23 10:25:24

by Mark Rutland

[permalink] [raw]
Subject: Re: [PATCH 12/18] locking/atomic: riscv: use s64 for atomic64

On Wed, May 22, 2019 at 12:06:31PM -0700, Palmer Dabbelt wrote:
> On Wed, 22 May 2019 06:22:44 PDT (-0700), [email protected] wrote:
> > As a step towards making the atomic64 API use consistent types treewide,
> > let's have the s390 atomic64 implementation use s64 as the underlying
>
> and apparently the RISC-V one as well? :)

Heh. You can guess which commit message I wrote first...

> Reviwed-by: Palmer Dabbelt <[email protected]>

Cheers! I'll add an extra 'e' when I fold this in. :)

Thanks,
Mark.

2019-05-23 10:29:40

by Mark Rutland

[permalink] [raw]
Subject: Re: [PATCH 00/18] locking/atomic: atomic64 type cleanup

On Wed, May 22, 2019 at 11:18:59PM +0200, Arnd Bergmann wrote:
> On Wed, May 22, 2019 at 3:23 PM Mark Rutland <[email protected]> wrote:
> >
> > Currently architectures return inconsistent types for atomic64 ops. Some return
> > long (e.g. powerpc), some return long long (e.g. arc), and some return s64
> > (e.g. x86).
> >
> > This is a bit messy, and causes unnecessary pain (e.g. as values must be cast
> > before they can be printed [1]).
> >
> > This series reworks all the atomic64 implementations to use s64 as the base
> > type for atomic64_t (as discussed [2]), and to ensure that this type is
> > consistently used for parameters and return values in the API, avoiding further
> > problems in this area.
> >
> > This series (based on v5.1-rc1) can also be found in my atomics/type-cleanup
> > branch [3] on kernel.org.
>
> Nice cleanup!
>
> I've provided an explicit Ack for the asm-generic patch if someone wants
> to pick up the entire series, but I can also put it all into my asm-generic
> tree if you want, after more people have had a chance to take a look.

Thanks!

I had assumed that this would go through the tip tree, as previous
atomic rework had, but I have no preference as to how this gets merged.

I'm not sure what the policy is, so I'll leave it to Peter and Will to
say.

Mark.

2019-05-23 11:22:36

by Andrea Parri

[permalink] [raw]
Subject: Re: [PATCH 00/18] locking/atomic: atomic64 type cleanup

> > While reading the series, I realized that the following expression:
> >
> > atomic64_t v;
> > ...
> > typeof(v.counter) my_val = atomic64_set(&v, VAL);
> >
> > is a valid expression on some architectures (in part., on architectures
> > which #define atomic64_set() to WRITE_ONCE()) but is invalid on others.
> > (This is due to the fact that WRITE_ONCE() can be used as an rvalue in
> > the above assignment; TBH, I don't know the reasons for having such an rvalue?)
> >
> > IIUC, similar considerations hold for atomic_set().
> >
> > The question is whether this is a known/"expected" inconsistency in the
> > implementation of atomic64_set() or if this would also need to be fixed
> > /addressed (say in a different patchset)?
>
> In either case, I don't think the intent is that they should be used that way,
> and from a quick scan, I can only find a single relevant instance today:
>
> [mark@lakrids:~/src/linux]% git grep '\(return\|=\)\s\+atomic\(64\)\?_set'
> include/linux/vmw_vmci_defs.h: return atomic_set((atomic_t *)var, (u32)new_val);
> include/linux/vmw_vmci_defs.h: return atomic64_set(var, new_val);
>
>
> [mark@lakrids:~/src/linux]% git grep '=\s+atomic_set' | wc -l
> 0
> [mark@lakrids:~/src/linux]% git grep '=\s+atomic64_set' | wc -l
> 0
>
> Any architectures implementing arch_atomic_* will have both of these functions
> returning void. Currently that's x86 and arm64, but (time permitting) I intend
> to migrate other architectures, so I guess we'll have to fix the above up as
> required.
>
> I think it's best to avoid the construct above.

Thank you for the clarification, Mark. I agree with you that it'd be
better to avoid such constructs. (FWIW, it is not currently possible
to use them in litmus tests for the LKMM...)

Thanks,
Andrea

2019-05-23 13:29:39

by Michael Ellerman

[permalink] [raw]
Subject: Re: [PATCH 10/18] locking/atomic: powerpc: use s64 for atomic64

Mark Rutland <[email protected]> writes:
> As a step towards making the atomic64 API use consistent types treewide,
> let's have the powerpc atomic64 implementation use s64 as the underlying
> type for atomic64_t, rather than long, matching the generated headers.
>
> As atomic64_read() depends on the generic definition of atomic64_t, this
> still returns long on 64-bit. This will be converted in a subsequent
> patch.
>
> Otherwise, there should be no functional change as a result of this
> patch.
>
> Signed-off-by: Mark Rutland <[email protected]>
> Cc: Michael Ellerman <[email protected]>
> Cc: Paul Mackerras <[email protected]>
> Cc: Peter Zijlstra <[email protected]>
> Cc: Will Deacon <[email protected]>
> ---
> arch/powerpc/include/asm/atomic.h | 44 +++++++++++++++++++--------------------
> 1 file changed, 22 insertions(+), 22 deletions(-)

Conversion looks good to me.

Reviewed-by: Michael Ellerman <[email protected]> (powerpc)

cheers

> diff --git a/arch/powerpc/include/asm/atomic.h b/arch/powerpc/include/asm/atomic.h
> index 52eafaf74054..31c231ea56b7 100644
> --- a/arch/powerpc/include/asm/atomic.h
> +++ b/arch/powerpc/include/asm/atomic.h
> @@ -297,24 +297,24 @@ static __inline__ int atomic_dec_if_positive(atomic_t *v)
>
> #define ATOMIC64_INIT(i) { (i) }
>
> -static __inline__ long atomic64_read(const atomic64_t *v)
> +static __inline__ s64 atomic64_read(const atomic64_t *v)
> {
> - long t;
> + s64 t;
>
> __asm__ __volatile__("ld%U1%X1 %0,%1" : "=r"(t) : "m"(v->counter));
>
> return t;
> }
>
> -static __inline__ void atomic64_set(atomic64_t *v, long i)
> +static __inline__ void atomic64_set(atomic64_t *v, s64 i)
> {
> __asm__ __volatile__("std%U0%X0 %1,%0" : "=m"(v->counter) : "r"(i));
> }
>
> #define ATOMIC64_OP(op, asm_op) \
> -static __inline__ void atomic64_##op(long a, atomic64_t *v) \
> +static __inline__ void atomic64_##op(s64 a, atomic64_t *v) \
> { \
> - long t; \
> + s64 t; \
> \
> __asm__ __volatile__( \
> "1: ldarx %0,0,%3 # atomic64_" #op "\n" \
> @@ -327,10 +327,10 @@ static __inline__ void atomic64_##op(long a, atomic64_t *v) \
> }
>
> #define ATOMIC64_OP_RETURN_RELAXED(op, asm_op) \
> -static inline long \
> -atomic64_##op##_return_relaxed(long a, atomic64_t *v) \
> +static inline s64 \
> +atomic64_##op##_return_relaxed(s64 a, atomic64_t *v) \
> { \
> - long t; \
> + s64 t; \
> \
> __asm__ __volatile__( \
> "1: ldarx %0,0,%3 # atomic64_" #op "_return_relaxed\n" \
> @@ -345,10 +345,10 @@ atomic64_##op##_return_relaxed(long a, atomic64_t *v) \
> }
>
> #define ATOMIC64_FETCH_OP_RELAXED(op, asm_op) \
> -static inline long \
> -atomic64_fetch_##op##_relaxed(long a, atomic64_t *v) \
> +static inline s64 \
> +atomic64_fetch_##op##_relaxed(s64 a, atomic64_t *v) \
> { \
> - long res, t; \
> + s64 res, t; \
> \
> __asm__ __volatile__( \
> "1: ldarx %0,0,%4 # atomic64_fetch_" #op "_relaxed\n" \
> @@ -396,7 +396,7 @@ ATOMIC64_OPS(xor, xor)
>
> static __inline__ void atomic64_inc(atomic64_t *v)
> {
> - long t;
> + s64 t;
>
> __asm__ __volatile__(
> "1: ldarx %0,0,%2 # atomic64_inc\n\
> @@ -409,9 +409,9 @@ static __inline__ void atomic64_inc(atomic64_t *v)
> }
> #define atomic64_inc atomic64_inc
>
> -static __inline__ long atomic64_inc_return_relaxed(atomic64_t *v)
> +static __inline__ s64 atomic64_inc_return_relaxed(atomic64_t *v)
> {
> - long t;
> + s64 t;
>
> __asm__ __volatile__(
> "1: ldarx %0,0,%2 # atomic64_inc_return_relaxed\n"
> @@ -427,7 +427,7 @@ static __inline__ long atomic64_inc_return_relaxed(atomic64_t *v)
>
> static __inline__ void atomic64_dec(atomic64_t *v)
> {
> - long t;
> + s64 t;
>
> __asm__ __volatile__(
> "1: ldarx %0,0,%2 # atomic64_dec\n\
> @@ -440,9 +440,9 @@ static __inline__ void atomic64_dec(atomic64_t *v)
> }
> #define atomic64_dec atomic64_dec
>
> -static __inline__ long atomic64_dec_return_relaxed(atomic64_t *v)
> +static __inline__ s64 atomic64_dec_return_relaxed(atomic64_t *v)
> {
> - long t;
> + s64 t;
>
> __asm__ __volatile__(
> "1: ldarx %0,0,%2 # atomic64_dec_return_relaxed\n"
> @@ -463,9 +463,9 @@ static __inline__ long atomic64_dec_return_relaxed(atomic64_t *v)
> * Atomically test *v and decrement if it is greater than 0.
> * The function returns the old value of *v minus 1.
> */
> -static __inline__ long atomic64_dec_if_positive(atomic64_t *v)
> +static __inline__ s64 atomic64_dec_if_positive(atomic64_t *v)
> {
> - long t;
> + s64 t;
>
> __asm__ __volatile__(
> PPC_ATOMIC_ENTRY_BARRIER
> @@ -502,9 +502,9 @@ static __inline__ long atomic64_dec_if_positive(atomic64_t *v)
> * Atomically adds @a to @v, so long as it was not @u.
> * Returns the old value of @v.
> */
> -static __inline__ long atomic64_fetch_add_unless(atomic64_t *v, long a, long u)
> +static __inline__ s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
> {
> - long t;
> + s64 t;
>
> __asm__ __volatile__ (
> PPC_ATOMIC_ENTRY_BARRIER
> @@ -534,7 +534,7 @@ static __inline__ long atomic64_fetch_add_unless(atomic64_t *v, long a, long u)
> */
> static __inline__ int atomic64_inc_not_zero(atomic64_t *v)
> {
> - long t1, t2;
> + s64 t1, t2;
>
> __asm__ __volatile__ (
> PPC_ATOMIC_ENTRY_BARRIER
> --
> 2.11.0

2019-05-23 23:14:49

by Vineet Gupta

[permalink] [raw]
Subject: Re: [PATCH 05/18] locking/atomic: arc: use s64 for atomic64

On 5/22/19 6:24 AM, Mark Rutland wrote:
> As a step towards making the atomic64 API use consistent types treewide,
> let's have the arc atomic64 implementation use s64 as the underlying
> type for atomic64_t, rather than u64, matching the generated headers.
>
> Otherwise, there should be no functional change as a result of this
> patch.
>
> Signed-off-by: Mark Rutland <[email protected]>
> Cc: Peter Zijlstra <[email protected]>
> Cc: Vineet Gupta <[email protected]>
> Cc: Will Deacon <[email protected]>

Thx for the cleanup Mark.

Acked-By: Vineet Gupta <[email protected]> # for ARC bits

-Vineet

2019-05-24 10:41:06

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH 00/18] locking/atomic: atomic64 type cleanup

On Thu, May 23, 2019 at 11:19:26AM +0100, Mark Rutland wrote:

> [mark@lakrids:~/src/linux]% git grep '\(return\|=\)\s\+atomic\(64\)\?_set'
> include/linux/vmw_vmci_defs.h: return atomic_set((atomic_t *)var, (u32)new_val);
> include/linux/vmw_vmci_defs.h: return atomic64_set(var, new_val);
>

Oh boy, what a load of crap you just did find.

How about something like the below? I've not read how that buffer is
used, but the below preserves all broken without using atomic*_t.

---
diff --git a/include/linux/vmw_vmci_defs.h b/include/linux/vmw_vmci_defs.h
index 0c06178e4985..8ee472118f54 100644
--- a/include/linux/vmw_vmci_defs.h
+++ b/include/linux/vmw_vmci_defs.h
@@ -438,8 +438,8 @@ enum {
struct vmci_queue_header {
/* All fields are 64bit and aligned. */
struct vmci_handle handle; /* Identifier. */
- atomic64_t producer_tail; /* Offset in this queue. */
- atomic64_t consumer_head; /* Offset in peer queue. */
+ u64 producer_tail; /* Offset in this queue. */
+ u64 consumer_head; /* Offset in peer queue. */
};

/*
@@ -740,13 +740,9 @@ static inline void *vmci_event_data_payload(struct vmci_event_data *ev_data)
* prefix will be used, so correctness isn't an issue, but using a
* 64bit operation still adds unnecessary overhead.
*/
-static inline u64 vmci_q_read_pointer(atomic64_t *var)
+static inline u64 vmci_q_read_pointer(u64 *var)
{
-#if defined(CONFIG_X86_32)
- return atomic_read((atomic_t *)var);
-#else
- return atomic64_read(var);
-#endif
+ return READ_ONCE(*(unsigned long *)var);
}

/*
@@ -755,23 +751,17 @@ static inline u64 vmci_q_read_pointer(atomic64_t *var)
* never exceeds a 32bit value in this case. On 32bit SMP, using a
* locked cmpxchg8b adds unnecessary overhead.
*/
-static inline void vmci_q_set_pointer(atomic64_t *var,
- u64 new_val)
+static inline void vmci_q_set_pointer(u64 *var, u64 new_val)
{
-#if defined(CONFIG_X86_32)
- return atomic_set((atomic_t *)var, (u32)new_val);
-#else
- return atomic64_set(var, new_val);
-#endif
+ /* XXX buggered on big-endian */
+ WRITE_ONCE(*(unsigned long *)var, (unsigned long)new_val);
}

/*
* Helper to add a given offset to a head or tail pointer. Wraps the
* value of the pointer around the max size of the queue.
*/
-static inline void vmci_qp_add_pointer(atomic64_t *var,
- size_t add,
- u64 size)
+static inline void vmci_qp_add_pointer(u64 *var, size_t add, u64 size)
{
u64 new_val = vmci_q_read_pointer(var);

@@ -848,8 +838,8 @@ static inline void vmci_q_header_init(struct vmci_queue_header *q_header,
const struct vmci_handle handle)
{
q_header->handle = handle;
- atomic64_set(&q_header->producer_tail, 0);
- atomic64_set(&q_header->consumer_head, 0);
+ q_header->producer_tail = 0;
+ q_header->consumer_head = 0;
}

/*

2019-05-24 11:20:18

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH 00/18] locking/atomic: atomic64 type cleanup

On Fri, May 24, 2019 at 12:37:31PM +0200, Peter Zijlstra wrote:
> On Thu, May 23, 2019 at 11:19:26AM +0100, Mark Rutland wrote:
>
> > [mark@lakrids:~/src/linux]% git grep '\(return\|=\)\s\+atomic\(64\)\?_set'
> > include/linux/vmw_vmci_defs.h: return atomic_set((atomic_t *)var, (u32)new_val);
> > include/linux/vmw_vmci_defs.h: return atomic64_set(var, new_val);
> >
>
> Oh boy, what a load of crap you just did find.
>
> How about something like the below? I've not read how that buffer is
> used, but the below preserves all broken without using atomic*_t.

Clarified by something along these lines?

---
Documentation/atomic_t.txt | 3 +++
1 file changed, 3 insertions(+)

diff --git a/Documentation/atomic_t.txt b/Documentation/atomic_t.txt
index dca3fb0554db..125c95ddbbc0 100644
--- a/Documentation/atomic_t.txt
+++ b/Documentation/atomic_t.txt
@@ -83,6 +83,9 @@ The non-RMW ops are (typically) regular LOADs and STOREs and are canonically
implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and
smp_store_release() respectively.

+Therefore, if you find yourself only using the Non-RMW operations of atomic_t,
+you do not in fact need atomic_t at all and are doing it wrong.
+
The one detail to this is that atomic_set{}() should be observable to the RMW
ops. That is:

2019-05-24 11:39:48

by Greg Kroah-Hartman

[permalink] [raw]
Subject: Re: [PATCH 00/18] locking/atomic: atomic64 type cleanup

On Fri, May 24, 2019 at 01:18:07PM +0200, Peter Zijlstra wrote:
> On Fri, May 24, 2019 at 12:37:31PM +0200, Peter Zijlstra wrote:
> > On Thu, May 23, 2019 at 11:19:26AM +0100, Mark Rutland wrote:
> >
> > > [mark@lakrids:~/src/linux]% git grep '\(return\|=\)\s\+atomic\(64\)\?_set'
> > > include/linux/vmw_vmci_defs.h: return atomic_set((atomic_t *)var, (u32)new_val);
> > > include/linux/vmw_vmci_defs.h: return atomic64_set(var, new_val);
> > >
> >
> > Oh boy, what a load of crap you just did find.
> >
> > How about something like the below? I've not read how that buffer is
> > used, but the below preserves all broken without using atomic*_t.
>
> Clarified by something along these lines?
>
> ---
> Documentation/atomic_t.txt | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/Documentation/atomic_t.txt b/Documentation/atomic_t.txt
> index dca3fb0554db..125c95ddbbc0 100644
> --- a/Documentation/atomic_t.txt
> +++ b/Documentation/atomic_t.txt
> @@ -83,6 +83,9 @@ The non-RMW ops are (typically) regular LOADs and STOREs and are canonically
> implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and
> smp_store_release() respectively.
>
> +Therefore, if you find yourself only using the Non-RMW operations of atomic_t,
> +you do not in fact need atomic_t at all and are doing it wrong.
> +
> The one detail to this is that atomic_set{}() should be observable to the RMW
> ops. That is:
>

I like it!

Reviewed-by: Greg Kroah-Hartman <[email protected]>

2019-05-24 11:44:03

by Will Deacon

[permalink] [raw]
Subject: Re: [PATCH 00/18] locking/atomic: atomic64 type cleanup

On Fri, May 24, 2019 at 01:18:07PM +0200, Peter Zijlstra wrote:
> On Fri, May 24, 2019 at 12:37:31PM +0200, Peter Zijlstra wrote:
> > On Thu, May 23, 2019 at 11:19:26AM +0100, Mark Rutland wrote:
> >
> > > [mark@lakrids:~/src/linux]% git grep '\(return\|=\)\s\+atomic\(64\)\?_set'
> > > include/linux/vmw_vmci_defs.h: return atomic_set((atomic_t *)var, (u32)new_val);
> > > include/linux/vmw_vmci_defs.h: return atomic64_set(var, new_val);
> > >
> >
> > Oh boy, what a load of crap you just did find.
> >
> > How about something like the below? I've not read how that buffer is
> > used, but the below preserves all broken without using atomic*_t.
>
> Clarified by something along these lines?
>
> ---
> Documentation/atomic_t.txt | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/Documentation/atomic_t.txt b/Documentation/atomic_t.txt
> index dca3fb0554db..125c95ddbbc0 100644
> --- a/Documentation/atomic_t.txt
> +++ b/Documentation/atomic_t.txt
> @@ -83,6 +83,9 @@ The non-RMW ops are (typically) regular LOADs and STOREs and are canonically
> implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and
> smp_store_release() respectively.
>

Not sure you need a new paragraph here.

> +Therefore, if you find yourself only using the Non-RMW operations of atomic_t,
> +you do not in fact need atomic_t at all and are doing it wrong.
> +

That makes sense to me, although I now find that the sentence below is a bit
confusing because it sounds like it's a caveat relating to only using
Non-RMW ops.

> The one detail to this is that atomic_set{}() should be observable to the RMW
> ops. That is:

How about changing this to be:

"A subtle detail of atomic_set{}() is that it should be observable..."

With that:

Acked-by: Will Deacon <[email protected]>

Will

2019-05-24 11:56:44

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH 00/18] locking/atomic: atomic64 type cleanup

On Fri, May 24, 2019 at 12:42:20PM +0100, Will Deacon wrote:

> > diff --git a/Documentation/atomic_t.txt b/Documentation/atomic_t.txt
> > index dca3fb0554db..125c95ddbbc0 100644
> > --- a/Documentation/atomic_t.txt
> > +++ b/Documentation/atomic_t.txt
> > @@ -83,6 +83,9 @@ The non-RMW ops are (typically) regular LOADs and STOREs and are canonically
> > implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and
> > smp_store_release() respectively.
> >
>
> Not sure you need a new paragraph here.
>
> > +Therefore, if you find yourself only using the Non-RMW operations of atomic_t,
> > +you do not in fact need atomic_t at all and are doing it wrong.
> > +
>
> That makes sense to me, although I now find that the sentence below is a bit
> confusing because it sounds like it's a caveat relating to only using
> Non-RMW ops.
>
> > The one detail to this is that atomic_set{}() should be observable to the RMW
> > ops. That is:
>
> How about changing this to be:
>
> "A subtle detail of atomic_set{}() is that it should be observable..."

Done, find below.

---
Subject: Documentation/atomic_t.txt: Clarify pure non-rmw usage

Clarify that pure non-RMW usage of atomic_t is pointless, there is
nothing 'magical' about atomic_set() / atomic_read().

This is something that seems to confuse people, because I happen upon it
semi-regularly.

Acked-by: Will Deacon <[email protected]>
Reviewed-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
---
Documentation/atomic_t.txt | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/Documentation/atomic_t.txt b/Documentation/atomic_t.txt
index dca3fb0554db..89eae7f6b360 100644
--- a/Documentation/atomic_t.txt
+++ b/Documentation/atomic_t.txt
@@ -81,9 +81,11 @@ SEMANTICS

The non-RMW ops are (typically) regular LOADs and STOREs and are canonically
implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and
-smp_store_release() respectively.
+smp_store_release() respectively. Therefore, if you find yourself only using
+the Non-RMW operations of atomic_t, you do not in fact need atomic_t at all
+and are doing it wrong.

-The one detail to this is that atomic_set{}() should be observable to the RMW
+A subtle detail of atomic_set{}() is that it should be observable to the RMW
ops. That is:

C atomic-set

2019-05-24 22:46:43

by Andrea Parri

[permalink] [raw]
Subject: Re: [PATCH 00/18] locking/atomic: atomic64 type cleanup

> ---
> Subject: Documentation/atomic_t.txt: Clarify pure non-rmw usage
>
> Clarify that pure non-RMW usage of atomic_t is pointless, there is
> nothing 'magical' about atomic_set() / atomic_read().
>
> This is something that seems to confuse people, because I happen upon it
> semi-regularly.
>
> Acked-by: Will Deacon <[email protected]>
> Reviewed-by: Greg Kroah-Hartman <[email protected]>
> Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
> ---
> Documentation/atomic_t.txt | 6 ++++--
> 1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/Documentation/atomic_t.txt b/Documentation/atomic_t.txt
> index dca3fb0554db..89eae7f6b360 100644
> --- a/Documentation/atomic_t.txt
> +++ b/Documentation/atomic_t.txt
> @@ -81,9 +81,11 @@ SEMANTICS
>
> The non-RMW ops are (typically) regular LOADs and STOREs and are canonically
> implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and
> -smp_store_release() respectively.
> +smp_store_release() respectively. Therefore, if you find yourself only using
> +the Non-RMW operations of atomic_t, you do not in fact need atomic_t at all
> +and are doing it wrong.

The counterargument (not so theoretic, just look around in the kernel!) is:
we all 'forget' to use READ_ONCE() and WRITE_ONCE(); it would be difficult,
or at least more difficult, to forget to use atomic_read() and atomic_set()... IAC,
I wouldn't call any of them 'wrong'.

Andrea


>
> -The one detail to this is that atomic_set{}() should be observable to the RMW
> +A subtle detail of atomic_set{}() is that it should be observable to the RMW
> ops. That is:
>
> C atomic-set

2019-05-28 10:50:46

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH 00/18] locking/atomic: atomic64 type cleanup

On Sat, May 25, 2019 at 12:43:40AM +0200, Andrea Parri wrote:
> > ---
> > Subject: Documentation/atomic_t.txt: Clarify pure non-rmw usage
> >
> > Clarify that pure non-RMW usage of atomic_t is pointless, there is
> > nothing 'magical' about atomic_set() / atomic_read().
> >
> > This is something that seems to confuse people, because I happen upon it
> > semi-regularly.
> >
> > Acked-by: Will Deacon <[email protected]>
> > Reviewed-by: Greg Kroah-Hartman <[email protected]>
> > Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
> > ---
> > Documentation/atomic_t.txt | 6 ++++--
> > 1 file changed, 4 insertions(+), 2 deletions(-)
> >
> > diff --git a/Documentation/atomic_t.txt b/Documentation/atomic_t.txt
> > index dca3fb0554db..89eae7f6b360 100644
> > --- a/Documentation/atomic_t.txt
> > +++ b/Documentation/atomic_t.txt
> > @@ -81,9 +81,11 @@ SEMANTICS
> >
> > The non-RMW ops are (typically) regular LOADs and STOREs and are canonically
> > implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and
> > -smp_store_release() respectively.
> > +smp_store_release() respectively. Therefore, if you find yourself only using
> > +the Non-RMW operations of atomic_t, you do not in fact need atomic_t at all
> > +and are doing it wrong.
>
> The counterargument (not so theoretic, just look around in the kernel!) is:
> we all 'forget' to use READ_ONCE() and WRITE_ONCE(); it would be difficult,
> or at least more difficult, to forget to use atomic_read() and atomic_set()... IAC,
> I wouldn't call any of them 'wrong'.

I'm thinking you mean that the type system isn't helping us with
READ/WRITE_ONCE() like it does with atomic_t ? And while I agree that
there is room for improvement there, that doesn't mean we should start
using atomic*_t all over the place for that.

Part of the problem with READ/WRITE_ONCE() is that it serves a dual
purpose; we've tried to untangle that at some point, but Linus wasn't
having it.
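
To spell out the type-system point (a made-up example, not from the
thread): because atomic_t is an opaque struct, a naked access does not
compile and the author is forced through the accessors, whereas a plain
int silently accepts accesses that 'forget' READ_ONCE()/WRITE_ONCE().

static atomic_t refs;
static int	refs_plain;

static void example(void)
{
	/* refs = 1;		does not compile: incompatible types */
	/* if (refs > 0) ...	does not compile either              */
	atomic_set(&refs, 1);		/* must go through the accessor */

	refs_plain = 1;			/* compiles fine...             */
	if (refs_plain > 0)		/* ...even though both accesses */
		refs_plain = 0;		/* 'forget' the *_ONCE() forms  */
}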

2019-05-28 11:17:39

by Andrea Parri

[permalink] [raw]
Subject: Re: [PATCH 00/18] locking/atomic: atomic64 type cleanup

On Tue, May 28, 2019 at 12:47:19PM +0200, Peter Zijlstra wrote:
> On Sat, May 25, 2019 at 12:43:40AM +0200, Andrea Parri wrote:
> > > ---
> > > Subject: Documentation/atomic_t.txt: Clarify pure non-rmw usage
> > >
> > > Clarify that pure non-RMW usage of atomic_t is pointless, there is
> > > nothing 'magical' about atomic_set() / atomic_read().
> > >
> > > This is something that seems to confuse people, because I happen upon it
> > > semi-regularly.
> > >
> > > Acked-by: Will Deacon <[email protected]>
> > > Reviewed-by: Greg Kroah-Hartman <[email protected]>
> > > Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
> > > ---
> > > Documentation/atomic_t.txt | 6 ++++--
> > > 1 file changed, 4 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/Documentation/atomic_t.txt b/Documentation/atomic_t.txt
> > > index dca3fb0554db..89eae7f6b360 100644
> > > --- a/Documentation/atomic_t.txt
> > > +++ b/Documentation/atomic_t.txt
> > > @@ -81,9 +81,11 @@ SEMANTICS
> > >
> > > The non-RMW ops are (typically) regular LOADs and STOREs and are canonically
> > > implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and
> > > -smp_store_release() respectively.
> > > +smp_store_release() respectively. Therefore, if you find yourself only using
> > > +the Non-RMW operations of atomic_t, you do not in fact need atomic_t at all
> > > +and are doing it wrong.
> >
> > The counterargument (not so theoretic, just look around in the kernel!) is:
> > we all 'forget' to use READ_ONCE() and WRITE_ONCE(); it would be difficult,
> > or at least more difficult, to forget to use atomic_read() and atomic_set()... IAC,
> > I wouldn't call any of them 'wrong'.
>
> I'm thinking you mean that the type system isn't helping us with
> READ/WRITE_ONCE() like it does with atomic_t ?

Yep.


> And while I agree that
> there is room for improvement there, that doesn't mean we should start
> using atomic*_t all over the place for that.

Agreed. But this still doesn't explain that "and are doing it wrong",
AFAICT; maybe just remove that part?

Andrea


>
> Part of the problem with READ/WRITE_ONCE() is that it serves a dual
> purpose; we've tried to untangle that at some point, but Linus wasn't
> having it.

Subject: [tip:locking/core] locking/atomic, s390/pci: Prepare for atomic64_read() conversion

Commit-ID: 982164d62a4b2097c0db28ae7c31fc905af26bb8
Gitweb: https://git.kernel.org/tip/982164d62a4b2097c0db28ae7c31fc905af26bb8
Author: Mark Rutland <[email protected]>
AuthorDate: Wed, 22 May 2019 14:22:34 +0100
Committer: Ingo Molnar <[email protected]>
CommitDate: Mon, 3 Jun 2019 12:32:56 +0200

locking/atomic, s390/pci: Prepare for atomic64_read() conversion

The return type of atomic64_read() varies by architecture. It may return
long (e.g. powerpc), long long (e.g. arm), or s64 (e.g. x86_64). This is
somewhat painful, and mandates the use of explicit casts in some cases
(e.g. when printing the return value).

To ameliorate matters, subsequent patches will make the atomic64 API
consistently use s64.

As a preparatory step, this patch updates the s390 pci debug code to
treat the return value of atomic64_read() as s64, using an explicit
cast. This cast will be removed once the s64 conversion is complete.

Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
arch/s390/pci/pci_debug.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/s390/pci/pci_debug.c b/arch/s390/pci/pci_debug.c
index 6b48ca7760a7..45eccf79e990 100644
--- a/arch/s390/pci/pci_debug.c
+++ b/arch/s390/pci/pci_debug.c
@@ -74,8 +74,8 @@ static void pci_sw_counter_show(struct seq_file *m)
int i;

for (i = 0; i < ARRAY_SIZE(pci_sw_names); i++, counter++)
- seq_printf(m, "%26s:\t%lu\n", pci_sw_names[i],
- atomic64_read(counter));
+ seq_printf(m, "%26s:\t%llu\n", pci_sw_names[i],
+ (s64)atomic64_read(counter));
}

static int pci_perf_show(struct seq_file *m, void *v)

Subject: [tip:locking/core] locking/atomic, arm: Use s64 for atomic64

Commit-ID: ef4cdc09260e2b0576423ca708e245e7549aa8e0
Gitweb: https://git.kernel.org/tip/ef4cdc09260e2b0576423ca708e245e7549aa8e0
Author: Mark Rutland <[email protected]>
AuthorDate: Wed, 22 May 2019 14:22:38 +0100
Committer: Ingo Molnar <[email protected]>
CommitDate: Mon, 3 Jun 2019 12:32:56 +0200

locking/atomic, arm: Use s64 for atomic64

As a step towards making the atomic64 API use consistent types treewide,
let's have the arm atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long long, matching the generated
headers.

Otherwise, there should be no functional change as a result of this
patch.

Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Russell King <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
arch/arm/include/asm/atomic.h | 50 +++++++++++++++++++++----------------------
1 file changed, 24 insertions(+), 26 deletions(-)

diff --git a/arch/arm/include/asm/atomic.h b/arch/arm/include/asm/atomic.h
index f74756641410..d45c41f6f69c 100644
--- a/arch/arm/include/asm/atomic.h
+++ b/arch/arm/include/asm/atomic.h
@@ -249,15 +249,15 @@ ATOMIC_OPS(xor, ^=, eor)

#ifndef CONFIG_GENERIC_ATOMIC64
typedef struct {
- long long counter;
+ s64 counter;
} atomic64_t;

#define ATOMIC64_INIT(i) { (i) }

#ifdef CONFIG_ARM_LPAE
-static inline long long atomic64_read(const atomic64_t *v)
+static inline s64 atomic64_read(const atomic64_t *v)
{
- long long result;
+ s64 result;

__asm__ __volatile__("@ atomic64_read\n"
" ldrd %0, %H0, [%1]"
@@ -268,7 +268,7 @@ static inline long long atomic64_read(const atomic64_t *v)
return result;
}

-static inline void atomic64_set(atomic64_t *v, long long i)
+static inline void atomic64_set(atomic64_t *v, s64 i)
{
__asm__ __volatile__("@ atomic64_set\n"
" strd %2, %H2, [%1]"
@@ -277,9 +277,9 @@ static inline void atomic64_set(atomic64_t *v, long long i)
);
}
#else
-static inline long long atomic64_read(const atomic64_t *v)
+static inline s64 atomic64_read(const atomic64_t *v)
{
- long long result;
+ s64 result;

__asm__ __volatile__("@ atomic64_read\n"
" ldrexd %0, %H0, [%1]"
@@ -290,9 +290,9 @@ static inline long long atomic64_read(const atomic64_t *v)
return result;
}

-static inline void atomic64_set(atomic64_t *v, long long i)
+static inline void atomic64_set(atomic64_t *v, s64 i)
{
- long long tmp;
+ s64 tmp;

prefetchw(&v->counter);
__asm__ __volatile__("@ atomic64_set\n"
@@ -307,9 +307,9 @@ static inline void atomic64_set(atomic64_t *v, long long i)
#endif

#define ATOMIC64_OP(op, op1, op2) \
-static inline void atomic64_##op(long long i, atomic64_t *v) \
+static inline void atomic64_##op(s64 i, atomic64_t *v) \
{ \
- long long result; \
+ s64 result; \
unsigned long tmp; \
\
prefetchw(&v->counter); \
@@ -326,10 +326,10 @@ static inline void atomic64_##op(long long i, atomic64_t *v) \
} \

#define ATOMIC64_OP_RETURN(op, op1, op2) \
-static inline long long \
-atomic64_##op##_return_relaxed(long long i, atomic64_t *v) \
+static inline s64 \
+atomic64_##op##_return_relaxed(s64 i, atomic64_t *v) \
{ \
- long long result; \
+ s64 result; \
unsigned long tmp; \
\
prefetchw(&v->counter); \
@@ -349,10 +349,10 @@ atomic64_##op##_return_relaxed(long long i, atomic64_t *v) \
}

#define ATOMIC64_FETCH_OP(op, op1, op2) \
-static inline long long \
-atomic64_fetch_##op##_relaxed(long long i, atomic64_t *v) \
+static inline s64 \
+atomic64_fetch_##op##_relaxed(s64 i, atomic64_t *v) \
{ \
- long long result, val; \
+ s64 result, val; \
unsigned long tmp; \
\
prefetchw(&v->counter); \
@@ -406,10 +406,9 @@ ATOMIC64_OPS(xor, eor, eor)
#undef ATOMIC64_OP_RETURN
#undef ATOMIC64_OP

-static inline long long
-atomic64_cmpxchg_relaxed(atomic64_t *ptr, long long old, long long new)
+static inline s64 atomic64_cmpxchg_relaxed(atomic64_t *ptr, s64 old, s64 new)
{
- long long oldval;
+ s64 oldval;
unsigned long res;

prefetchw(&ptr->counter);
@@ -430,9 +429,9 @@ atomic64_cmpxchg_relaxed(atomic64_t *ptr, long long old, long long new)
}
#define atomic64_cmpxchg_relaxed atomic64_cmpxchg_relaxed

-static inline long long atomic64_xchg_relaxed(atomic64_t *ptr, long long new)
+static inline s64 atomic64_xchg_relaxed(atomic64_t *ptr, s64 new)
{
- long long result;
+ s64 result;
unsigned long tmp;

prefetchw(&ptr->counter);
@@ -450,9 +449,9 @@ static inline long long atomic64_xchg_relaxed(atomic64_t *ptr, long long new)
}
#define atomic64_xchg_relaxed atomic64_xchg_relaxed

-static inline long long atomic64_dec_if_positive(atomic64_t *v)
+static inline s64 atomic64_dec_if_positive(atomic64_t *v)
{
- long long result;
+ s64 result;
unsigned long tmp;

smp_mb();
@@ -478,10 +477,9 @@ static inline long long atomic64_dec_if_positive(atomic64_t *v)
}
#define atomic64_dec_if_positive atomic64_dec_if_positive

-static inline long long atomic64_fetch_add_unless(atomic64_t *v, long long a,
- long long u)
+static inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
{
- long long oldval, newval;
+ s64 oldval, newval;
unsigned long tmp;

smp_mb();

Subject: [tip:locking/core] locking/atomic, powerpc: Use s64 for atomic64

Commit-ID: 8cd8de59748ba71b476d1b7101f9ecaccd5cb8c2
Gitweb: https://git.kernel.org/tip/8cd8de59748ba71b476d1b7101f9ecaccd5cb8c2
Author: Mark Rutland <[email protected]>
AuthorDate: Wed, 22 May 2019 14:22:42 +0100
Committer: Ingo Molnar <[email protected]>
CommitDate: Mon, 3 Jun 2019 12:32:56 +0200

locking/atomic, powerpc: Use s64 for atomic64

As a step towards making the atomic64 API use consistent types treewide,
let's have the powerpc atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long, matching the generated headers.

As atomic64_read() depends on the generic definition of atomic64_t, this
still returns long on 64-bit. This will be converted in a subsequent
patch.

Otherwise, there should be no functional change as a result of this
patch.

Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Michael Ellerman <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
arch/powerpc/include/asm/atomic.h | 44 +++++++++++++++++++--------------------
1 file changed, 22 insertions(+), 22 deletions(-)

diff --git a/arch/powerpc/include/asm/atomic.h b/arch/powerpc/include/asm/atomic.h
index 52eafaf74054..31c231ea56b7 100644
--- a/arch/powerpc/include/asm/atomic.h
+++ b/arch/powerpc/include/asm/atomic.h
@@ -297,24 +297,24 @@ static __inline__ int atomic_dec_if_positive(atomic_t *v)

#define ATOMIC64_INIT(i) { (i) }

-static __inline__ long atomic64_read(const atomic64_t *v)
+static __inline__ s64 atomic64_read(const atomic64_t *v)
{
- long t;
+ s64 t;

__asm__ __volatile__("ld%U1%X1 %0,%1" : "=r"(t) : "m"(v->counter));

return t;
}

-static __inline__ void atomic64_set(atomic64_t *v, long i)
+static __inline__ void atomic64_set(atomic64_t *v, s64 i)
{
__asm__ __volatile__("std%U0%X0 %1,%0" : "=m"(v->counter) : "r"(i));
}

#define ATOMIC64_OP(op, asm_op) \
-static __inline__ void atomic64_##op(long a, atomic64_t *v) \
+static __inline__ void atomic64_##op(s64 a, atomic64_t *v) \
{ \
- long t; \
+ s64 t; \
\
__asm__ __volatile__( \
"1: ldarx %0,0,%3 # atomic64_" #op "\n" \
@@ -327,10 +327,10 @@ static __inline__ void atomic64_##op(long a, atomic64_t *v) \
}

#define ATOMIC64_OP_RETURN_RELAXED(op, asm_op) \
-static inline long \
-atomic64_##op##_return_relaxed(long a, atomic64_t *v) \
+static inline s64 \
+atomic64_##op##_return_relaxed(s64 a, atomic64_t *v) \
{ \
- long t; \
+ s64 t; \
\
__asm__ __volatile__( \
"1: ldarx %0,0,%3 # atomic64_" #op "_return_relaxed\n" \
@@ -345,10 +345,10 @@ atomic64_##op##_return_relaxed(long a, atomic64_t *v) \
}

#define ATOMIC64_FETCH_OP_RELAXED(op, asm_op) \
-static inline long \
-atomic64_fetch_##op##_relaxed(long a, atomic64_t *v) \
+static inline s64 \
+atomic64_fetch_##op##_relaxed(s64 a, atomic64_t *v) \
{ \
- long res, t; \
+ s64 res, t; \
\
__asm__ __volatile__( \
"1: ldarx %0,0,%4 # atomic64_fetch_" #op "_relaxed\n" \
@@ -396,7 +396,7 @@ ATOMIC64_OPS(xor, xor)

static __inline__ void atomic64_inc(atomic64_t *v)
{
- long t;
+ s64 t;

__asm__ __volatile__(
"1: ldarx %0,0,%2 # atomic64_inc\n\
@@ -409,9 +409,9 @@ static __inline__ void atomic64_inc(atomic64_t *v)
}
#define atomic64_inc atomic64_inc

-static __inline__ long atomic64_inc_return_relaxed(atomic64_t *v)
+static __inline__ s64 atomic64_inc_return_relaxed(atomic64_t *v)
{
- long t;
+ s64 t;

__asm__ __volatile__(
"1: ldarx %0,0,%2 # atomic64_inc_return_relaxed\n"
@@ -427,7 +427,7 @@ static __inline__ long atomic64_inc_return_relaxed(atomic64_t *v)

static __inline__ void atomic64_dec(atomic64_t *v)
{
- long t;
+ s64 t;

__asm__ __volatile__(
"1: ldarx %0,0,%2 # atomic64_dec\n\
@@ -440,9 +440,9 @@ static __inline__ void atomic64_dec(atomic64_t *v)
}
#define atomic64_dec atomic64_dec

-static __inline__ long atomic64_dec_return_relaxed(atomic64_t *v)
+static __inline__ s64 atomic64_dec_return_relaxed(atomic64_t *v)
{
- long t;
+ s64 t;

__asm__ __volatile__(
"1: ldarx %0,0,%2 # atomic64_dec_return_relaxed\n"
@@ -463,9 +463,9 @@ static __inline__ long atomic64_dec_return_relaxed(atomic64_t *v)
* Atomically test *v and decrement if it is greater than 0.
* The function returns the old value of *v minus 1.
*/
-static __inline__ long atomic64_dec_if_positive(atomic64_t *v)
+static __inline__ s64 atomic64_dec_if_positive(atomic64_t *v)
{
- long t;
+ s64 t;

__asm__ __volatile__(
PPC_ATOMIC_ENTRY_BARRIER
@@ -502,9 +502,9 @@ static __inline__ long atomic64_dec_if_positive(atomic64_t *v)
* Atomically adds @a to @v, so long as it was not @u.
* Returns the old value of @v.
*/
-static __inline__ long atomic64_fetch_add_unless(atomic64_t *v, long a, long u)
+static __inline__ s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
{
- long t;
+ s64 t;

__asm__ __volatile__ (
PPC_ATOMIC_ENTRY_BARRIER
@@ -534,7 +534,7 @@ static __inline__ long atomic64_fetch_add_unless(atomic64_t *v, long a, long u)
*/
static __inline__ int atomic64_inc_not_zero(atomic64_t *v)
{
- long t1, t2;
+ s64 t1, t2;

__asm__ __volatile__ (
PPC_ATOMIC_ENTRY_BARRIER

Subject: [tip:locking/core] locking/atomic, riscv: Fix atomic64_sub_if_positive() offset argument

Commit-ID: 33e42ef571979fe6601ac838d338eb599d842a6d
Gitweb: https://git.kernel.org/tip/33e42ef571979fe6601ac838d338eb599d842a6d
Author: Mark Rutland <[email protected]>
AuthorDate: Wed, 22 May 2019 14:22:43 +0100
Committer: Ingo Molnar <[email protected]>
CommitDate: Mon, 3 Jun 2019 12:32:56 +0200

locking/atomic, riscv: Fix atomic64_sub_if_positive() offset argument

Presently the riscv implementation of atomic64_sub_if_positive() takes
a 32-bit offset value rather than a 64-bit offset value as it should do.
Thus, if called with a 64-bit offset, the value will be unexpectedly
truncated to 32 bits.

Fix this by taking the offset as a long rather than an int.
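
To make the truncation concrete, here is a minimal, hypothetical userspace
sketch (not the riscv kernel code; the function names are made up) of what
happens when a 64-bit offset is passed through a 32-bit parameter:

#include <stdio.h>
#include <stdint.h>

/* Old shape: the offset parameter is only 32 bits wide. */
static int64_t sub_if_positive_32(int64_t v, int offset)
{
        return v - offset;      /* the caller's 64-bit offset was already truncated */
}

/* Fixed shape: the offset keeps its full 64-bit width. */
static int64_t sub_if_positive_64(int64_t v, int64_t offset)
{
        return v - offset;
}

int main(void)
{
        int64_t v = 0x200000000LL;      /* 8 GiB */
        int64_t offset = 0x100000000LL; /* 4 GiB; truncates to 0 as an int */

        printf("32-bit offset: %lld\n", (long long)sub_if_positive_32(v, offset));
        printf("64-bit offset: %lld\n", (long long)sub_if_positive_64(v, offset));
        return 0;
}

With the 32-bit parameter the subtraction silently becomes a no-op for any
offset that is a multiple of 2^32, which is exactly the class of bug the
one-line fix below avoids.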

Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Palmer Dabbelt <[email protected]>
Cc: Albert Ou <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
arch/riscv/include/asm/atomic.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
index 9038aeb900a6..9c263bd9d5ad 100644
--- a/arch/riscv/include/asm/atomic.h
+++ b/arch/riscv/include/asm/atomic.h
@@ -332,7 +332,7 @@ static __always_inline int atomic_sub_if_positive(atomic_t *v, int offset)
#define atomic_dec_if_positive(v) atomic_sub_if_positive(v, 1)

#ifndef CONFIG_GENERIC_ATOMIC64
-static __always_inline long atomic64_sub_if_positive(atomic64_t *v, int offset)
+static __always_inline long atomic64_sub_if_positive(atomic64_t *v, long offset)
{
long prev, rc;

Subject: [tip:locking/core] locking/atomic, riscv: Use s64 for atomic64

Commit-ID: 0754211847d7a228f1c34a49fd122979dfd19a1a
Gitweb: https://git.kernel.org/tip/0754211847d7a228f1c34a49fd122979dfd19a1a
Author: Mark Rutland <[email protected]>
AuthorDate: Wed, 22 May 2019 14:22:44 +0100
Committer: Ingo Molnar <[email protected]>
CommitDate: Mon, 3 Jun 2019 12:32:56 +0200

locking/atomic, riscv: Use s64 for atomic64

As a step towards making the atomic64 API use consistent types treewide,
let's have the RISC-V atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long, matching the generated headers.

As atomic64_read() depends on the generic definition of atomic64_t, this
still returns long on 64-bit. This will be converted in a subsequent
patch.

Otherwise, there should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Palmer Dabbelt <[email protected]>
Cc: Albert Ou <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
arch/riscv/include/asm/atomic.h | 44 +++++++++++++++++++++--------------------
1 file changed, 23 insertions(+), 21 deletions(-)

diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
index 9c263bd9d5ad..96f95c9ebd97 100644
--- a/arch/riscv/include/asm/atomic.h
+++ b/arch/riscv/include/asm/atomic.h
@@ -38,11 +38,11 @@ static __always_inline void atomic_set(atomic_t *v, int i)

#ifndef CONFIG_GENERIC_ATOMIC64
#define ATOMIC64_INIT(i) { (i) }
-static __always_inline long atomic64_read(const atomic64_t *v)
+static __always_inline s64 atomic64_read(const atomic64_t *v)
{
return READ_ONCE(v->counter);
}
-static __always_inline void atomic64_set(atomic64_t *v, long i)
+static __always_inline void atomic64_set(atomic64_t *v, s64 i)
{
WRITE_ONCE(v->counter, i);
}
@@ -66,11 +66,11 @@ void atomic##prefix##_##op(c_type i, atomic##prefix##_t *v) \

#ifdef CONFIG_GENERIC_ATOMIC64
#define ATOMIC_OPS(op, asm_op, I) \
- ATOMIC_OP (op, asm_op, I, w, int, )
+ ATOMIC_OP (op, asm_op, I, w, int, )
#else
#define ATOMIC_OPS(op, asm_op, I) \
- ATOMIC_OP (op, asm_op, I, w, int, ) \
- ATOMIC_OP (op, asm_op, I, d, long, 64)
+ ATOMIC_OP (op, asm_op, I, w, int, ) \
+ ATOMIC_OP (op, asm_op, I, d, s64, 64)
#endif

ATOMIC_OPS(add, add, i)
@@ -127,14 +127,14 @@ c_type atomic##prefix##_##op##_return(c_type i, atomic##prefix##_t *v) \

#ifdef CONFIG_GENERIC_ATOMIC64
#define ATOMIC_OPS(op, asm_op, c_op, I) \
- ATOMIC_FETCH_OP( op, asm_op, I, w, int, ) \
- ATOMIC_OP_RETURN(op, asm_op, c_op, I, w, int, )
+ ATOMIC_FETCH_OP( op, asm_op, I, w, int, ) \
+ ATOMIC_OP_RETURN(op, asm_op, c_op, I, w, int, )
#else
#define ATOMIC_OPS(op, asm_op, c_op, I) \
- ATOMIC_FETCH_OP( op, asm_op, I, w, int, ) \
- ATOMIC_OP_RETURN(op, asm_op, c_op, I, w, int, ) \
- ATOMIC_FETCH_OP( op, asm_op, I, d, long, 64) \
- ATOMIC_OP_RETURN(op, asm_op, c_op, I, d, long, 64)
+ ATOMIC_FETCH_OP( op, asm_op, I, w, int, ) \
+ ATOMIC_OP_RETURN(op, asm_op, c_op, I, w, int, ) \
+ ATOMIC_FETCH_OP( op, asm_op, I, d, s64, 64) \
+ ATOMIC_OP_RETURN(op, asm_op, c_op, I, d, s64, 64)
#endif

ATOMIC_OPS(add, add, +, i)
@@ -166,11 +166,11 @@ ATOMIC_OPS(sub, add, +, -i)

#ifdef CONFIG_GENERIC_ATOMIC64
#define ATOMIC_OPS(op, asm_op, I) \
- ATOMIC_FETCH_OP(op, asm_op, I, w, int, )
+ ATOMIC_FETCH_OP(op, asm_op, I, w, int, )
#else
#define ATOMIC_OPS(op, asm_op, I) \
- ATOMIC_FETCH_OP(op, asm_op, I, w, int, ) \
- ATOMIC_FETCH_OP(op, asm_op, I, d, long, 64)
+ ATOMIC_FETCH_OP(op, asm_op, I, w, int, ) \
+ ATOMIC_FETCH_OP(op, asm_op, I, d, s64, 64)
#endif

ATOMIC_OPS(and, and, i)
@@ -219,9 +219,10 @@ static __always_inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
#define atomic_fetch_add_unless atomic_fetch_add_unless

#ifndef CONFIG_GENERIC_ATOMIC64
-static __always_inline long atomic64_fetch_add_unless(atomic64_t *v, long a, long u)
+static __always_inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
{
- long prev, rc;
+ s64 prev;
+ long rc;

__asm__ __volatile__ (
"0: lr.d %[p], %[c]\n"
@@ -290,11 +291,11 @@ c_t atomic##prefix##_cmpxchg(atomic##prefix##_t *v, c_t o, c_t n) \

#ifdef CONFIG_GENERIC_ATOMIC64
#define ATOMIC_OPS() \
- ATOMIC_OP( int, , 4)
+ ATOMIC_OP(int, , 4)
#else
#define ATOMIC_OPS() \
- ATOMIC_OP( int, , 4) \
- ATOMIC_OP(long, 64, 8)
+ ATOMIC_OP(int, , 4) \
+ ATOMIC_OP(s64, 64, 8)
#endif

ATOMIC_OPS()
@@ -332,9 +333,10 @@ static __always_inline int atomic_sub_if_positive(atomic_t *v, int offset)
#define atomic_dec_if_positive(v) atomic_sub_if_positive(v, 1)

#ifndef CONFIG_GENERIC_ATOMIC64
-static __always_inline long atomic64_sub_if_positive(atomic64_t *v, long offset)
+static __always_inline s64 atomic64_sub_if_positive(atomic64_t *v, s64 offset)
{
- long prev, rc;
+ s64 prev;
+ long rc;

__asm__ __volatile__ (
"0: lr.d %[p], %[c]\n"

Subject: [tip:locking/core] locking/atomic, sparc: Use s64 for atomic64

Commit-ID: 04e8851af767153c0878cc79ce30c0d8806eec43
Gitweb: https://git.kernel.org/tip/04e8851af767153c0878cc79ce30c0d8806eec43
Author: Mark Rutland <[email protected]>
AuthorDate: Wed, 22 May 2019 14:22:46 +0100
Committer: Ingo Molnar <[email protected]>
CommitDate: Mon, 3 Jun 2019 12:32:56 +0200

locking/atomic, sparc: Use s64 for atomic64

As a step towards making the atomic64 API use consistent types treewide,
let's have the sparc atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long, matching the generated headers.

As atomic64_read() depends on the generic definition of atomic64_t, this
still returns long. This will be converted in a subsequent patch.

Otherwise, there should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
arch/sparc/include/asm/atomic_64.h | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/sparc/include/asm/atomic_64.h b/arch/sparc/include/asm/atomic_64.h
index 6963482c81d8..b60448397d4f 100644
--- a/arch/sparc/include/asm/atomic_64.h
+++ b/arch/sparc/include/asm/atomic_64.h
@@ -23,15 +23,15 @@

#define ATOMIC_OP(op) \
void atomic_##op(int, atomic_t *); \
-void atomic64_##op(long, atomic64_t *);
+void atomic64_##op(s64, atomic64_t *);

#define ATOMIC_OP_RETURN(op) \
int atomic_##op##_return(int, atomic_t *); \
-long atomic64_##op##_return(long, atomic64_t *);
+s64 atomic64_##op##_return(s64, atomic64_t *);

#define ATOMIC_FETCH_OP(op) \
int atomic_fetch_##op(int, atomic_t *); \
-long atomic64_fetch_##op(long, atomic64_t *);
+s64 atomic64_fetch_##op(s64, atomic64_t *);

#define ATOMIC_OPS(op) ATOMIC_OP(op) ATOMIC_OP_RETURN(op) ATOMIC_FETCH_OP(op)

@@ -61,7 +61,7 @@ static inline int atomic_xchg(atomic_t *v, int new)
((__typeof__((v)->counter))cmpxchg(&((v)->counter), (o), (n)))
#define atomic64_xchg(v, new) (xchg(&((v)->counter), new))

-long atomic64_dec_if_positive(atomic64_t *v);
+s64 atomic64_dec_if_positive(atomic64_t *v);
#define atomic64_dec_if_positive atomic64_dec_if_positive

#endif /* !(__ARCH_SPARC64_ATOMIC__) */

Subject: [tip:locking/core] locking/atomic, s390: Use s64 for atomic64

Commit-ID: 0ca94800762e8a2f7037c9b02ba74aff8016dd82
Gitweb: https://git.kernel.org/tip/0ca94800762e8a2f7037c9b02ba74aff8016dd82
Author: Mark Rutland <[email protected]>
AuthorDate: Wed, 22 May 2019 14:22:45 +0100
Committer: Ingo Molnar <[email protected]>
CommitDate: Mon, 3 Jun 2019 12:32:56 +0200

locking/atomic, s390: Use s64 for atomic64

As a step towards making the atomic64 API use consistent types treewide,
let's have the s390 atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long, matching the generated headers.

As atomic64_read() depends on the generic definition of atomic64_t, this
still returns long. This will be converted in a subsequent patch.

The s390-internal __atomic64_*() ops are also used by the s390 bitops,
and expect pointers to long. Since atomic64_t::counter will be converted
to s64 in a subsequent patch, pointers to this are explicitly cast to
pointers to long when passed to __atomic64_*() ops.
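
For context, the s390-internal helpers have roughly the following shape
(a simplified sketch, not the verbatim arch/s390/include/asm/atomic_ops.h
prototypes), which is why the casts are needed once ->counter becomes s64:

/* Simplified sketch of the s390-internal helper signatures. */
long __atomic64_add_barrier(long val, long *ptr);
long __atomic64_cmpxchg(long *ptr, long old, long new);

/*
 * With atomic64_t::counter declared as s64, &v->counter has type s64 *,
 * so it is explicitly cast to long * at each call site; on s390 both
 * types are 64 bits wide, so the cast does not change behaviour.
 */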

Otherwise, there should be no functional change as a result of this
patch.

Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
arch/s390/include/asm/atomic.h | 38 +++++++++++++++++++-------------------
1 file changed, 19 insertions(+), 19 deletions(-)

diff --git a/arch/s390/include/asm/atomic.h b/arch/s390/include/asm/atomic.h
index fd20ab5d4cf7..491ad53a0d4e 100644
--- a/arch/s390/include/asm/atomic.h
+++ b/arch/s390/include/asm/atomic.h
@@ -84,9 +84,9 @@ static inline int atomic_cmpxchg(atomic_t *v, int old, int new)

#define ATOMIC64_INIT(i) { (i) }

-static inline long atomic64_read(const atomic64_t *v)
+static inline s64 atomic64_read(const atomic64_t *v)
{
- long c;
+ s64 c;

asm volatile(
" lg %0,%1\n"
@@ -94,49 +94,49 @@ static inline long atomic64_read(const atomic64_t *v)
return c;
}

-static inline void atomic64_set(atomic64_t *v, long i)
+static inline void atomic64_set(atomic64_t *v, s64 i)
{
asm volatile(
" stg %1,%0\n"
: "=Q" (v->counter) : "d" (i));
}

-static inline long atomic64_add_return(long i, atomic64_t *v)
+static inline s64 atomic64_add_return(s64 i, atomic64_t *v)
{
- return __atomic64_add_barrier(i, &v->counter) + i;
+ return __atomic64_add_barrier(i, (long *)&v->counter) + i;
}

-static inline long atomic64_fetch_add(long i, atomic64_t *v)
+static inline s64 atomic64_fetch_add(s64 i, atomic64_t *v)
{
- return __atomic64_add_barrier(i, &v->counter);
+ return __atomic64_add_barrier(i, (long *)&v->counter);
}

-static inline void atomic64_add(long i, atomic64_t *v)
+static inline void atomic64_add(s64 i, atomic64_t *v)
{
#ifdef CONFIG_HAVE_MARCH_Z196_FEATURES
if (__builtin_constant_p(i) && (i > -129) && (i < 128)) {
- __atomic64_add_const(i, &v->counter);
+ __atomic64_add_const(i, (long *)&v->counter);
return;
}
#endif
- __atomic64_add(i, &v->counter);
+ __atomic64_add(i, (long *)&v->counter);
}

#define atomic64_xchg(v, new) (xchg(&((v)->counter), new))

-static inline long atomic64_cmpxchg(atomic64_t *v, long old, long new)
+static inline s64 atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
{
- return __atomic64_cmpxchg(&v->counter, old, new);
+ return __atomic64_cmpxchg((long *)&v->counter, old, new);
}

#define ATOMIC64_OPS(op) \
-static inline void atomic64_##op(long i, atomic64_t *v) \
+static inline void atomic64_##op(s64 i, atomic64_t *v) \
{ \
- __atomic64_##op(i, &v->counter); \
+ __atomic64_##op(i, (long *)&v->counter); \
} \
-static inline long atomic64_fetch_##op(long i, atomic64_t *v) \
+static inline s64 atomic64_fetch_##op(s64 i, atomic64_t *v) \
{ \
- return __atomic64_##op##_barrier(i, &v->counter); \
+ return __atomic64_##op##_barrier(i, (long *)&v->counter); \
}

ATOMIC64_OPS(and)
@@ -145,8 +145,8 @@ ATOMIC64_OPS(xor)

#undef ATOMIC64_OPS

-#define atomic64_sub_return(_i, _v) atomic64_add_return(-(long)(_i), _v)
-#define atomic64_fetch_sub(_i, _v) atomic64_fetch_add(-(long)(_i), _v)
-#define atomic64_sub(_i, _v) atomic64_add(-(long)(_i), _v)
+#define atomic64_sub_return(_i, _v) atomic64_add_return(-(s64)(_i), _v)
+#define atomic64_fetch_sub(_i, _v) atomic64_fetch_add(-(s64)(_i), _v)
+#define atomic64_sub(_i, _v) atomic64_add(-(s64)(_i), _v)

#endif /* __ARCH_S390_ATOMIC__ */

Subject: [tip:locking/core] locking/atomic, x86: Use s64 for atomic64

Commit-ID: 79c53a83d7a31a5b5c7bafce4f0723bebf26836a
Gitweb: https://git.kernel.org/tip/79c53a83d7a31a5b5c7bafce4f0723bebf26836a
Author: Mark Rutland <[email protected]>
AuthorDate: Wed, 22 May 2019 14:22:47 +0100
Committer: Ingo Molnar <[email protected]>
CommitDate: Mon, 3 Jun 2019 12:32:57 +0200

locking/atomic, x86: Use s64 for atomic64

As a step towards making the atomic64 API use consistent types treewide,
let's have the x86 atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long or long long, matching the
generated headers.

Note that the x86 arch_atomic64 implementation is already wrapped by the
generic instrumented atomic64 implementation, which uses s64
consistently.
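
For reference, the instrumented wrappers look roughly like the sketch below
(simplified; the exact check helpers in asm-generic/atomic-instrumented.h
vary by kernel version), so callers already see an s64 API regardless of the
arch_atomic64_*() return types:

/* Simplified sketch of an instrumented wrapper, not the verbatim header. */
static inline s64 atomic64_read(const atomic64_t *v)
{
        kasan_check_read(v, sizeof(*v));
        return arch_atomic64_read(v);
}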

Otherwise, there should be no functional change as a result of this
patch.

Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Russell King <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
arch/x86/include/asm/atomic64_32.h | 66 ++++++++++++++++++--------------------
arch/x86/include/asm/atomic64_64.h | 38 +++++++++++-----------
2 files changed, 51 insertions(+), 53 deletions(-)

diff --git a/arch/x86/include/asm/atomic64_32.h b/arch/x86/include/asm/atomic64_32.h
index 6a5b0ec460da..52cfaecb13f9 100644
--- a/arch/x86/include/asm/atomic64_32.h
+++ b/arch/x86/include/asm/atomic64_32.h
@@ -9,7 +9,7 @@
/* An 64bit atomic type */

typedef struct {
- u64 __aligned(8) counter;
+ s64 __aligned(8) counter;
} atomic64_t;

#define ATOMIC64_INIT(val) { (val) }
@@ -71,8 +71,7 @@ ATOMIC64_DECL(add_unless);
* the old value.
*/

-static inline long long arch_atomic64_cmpxchg(atomic64_t *v, long long o,
- long long n)
+static inline s64 arch_atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n)
{
return arch_cmpxchg64(&v->counter, o, n);
}
@@ -85,9 +84,9 @@ static inline long long arch_atomic64_cmpxchg(atomic64_t *v, long long o,
* Atomically xchgs the value of @v to @n and returns
* the old value.
*/
-static inline long long arch_atomic64_xchg(atomic64_t *v, long long n)
+static inline s64 arch_atomic64_xchg(atomic64_t *v, s64 n)
{
- long long o;
+ s64 o;
unsigned high = (unsigned)(n >> 32);
unsigned low = (unsigned)n;
alternative_atomic64(xchg, "=&A" (o),
@@ -103,7 +102,7 @@ static inline long long arch_atomic64_xchg(atomic64_t *v, long long n)
*
* Atomically sets the value of @v to @n.
*/
-static inline void arch_atomic64_set(atomic64_t *v, long long i)
+static inline void arch_atomic64_set(atomic64_t *v, s64 i)
{
unsigned high = (unsigned)(i >> 32);
unsigned low = (unsigned)i;
@@ -118,9 +117,9 @@ static inline void arch_atomic64_set(atomic64_t *v, long long i)
*
* Atomically reads the value of @v and returns it.
*/
-static inline long long arch_atomic64_read(const atomic64_t *v)
+static inline s64 arch_atomic64_read(const atomic64_t *v)
{
- long long r;
+ s64 r;
alternative_atomic64(read, "=&A" (r), "c" (v) : "memory");
return r;
}
@@ -132,7 +131,7 @@ static inline long long arch_atomic64_read(const atomic64_t *v)
*
* Atomically adds @i to @v and returns @i + *@v
*/
-static inline long long arch_atomic64_add_return(long long i, atomic64_t *v)
+static inline s64 arch_atomic64_add_return(s64 i, atomic64_t *v)
{
alternative_atomic64(add_return,
ASM_OUTPUT2("+A" (i), "+c" (v)),
@@ -143,7 +142,7 @@ static inline long long arch_atomic64_add_return(long long i, atomic64_t *v)
/*
* Other variants with different arithmetic operators:
*/
-static inline long long arch_atomic64_sub_return(long long i, atomic64_t *v)
+static inline s64 arch_atomic64_sub_return(s64 i, atomic64_t *v)
{
alternative_atomic64(sub_return,
ASM_OUTPUT2("+A" (i), "+c" (v)),
@@ -151,18 +150,18 @@ static inline long long arch_atomic64_sub_return(long long i, atomic64_t *v)
return i;
}

-static inline long long arch_atomic64_inc_return(atomic64_t *v)
+static inline s64 arch_atomic64_inc_return(atomic64_t *v)
{
- long long a;
+ s64 a;
alternative_atomic64(inc_return, "=&A" (a),
"S" (v) : "memory", "ecx");
return a;
}
#define arch_atomic64_inc_return arch_atomic64_inc_return

-static inline long long arch_atomic64_dec_return(atomic64_t *v)
+static inline s64 arch_atomic64_dec_return(atomic64_t *v)
{
- long long a;
+ s64 a;
alternative_atomic64(dec_return, "=&A" (a),
"S" (v) : "memory", "ecx");
return a;
@@ -176,7 +175,7 @@ static inline long long arch_atomic64_dec_return(atomic64_t *v)
*
* Atomically adds @i to @v.
*/
-static inline long long arch_atomic64_add(long long i, atomic64_t *v)
+static inline s64 arch_atomic64_add(s64 i, atomic64_t *v)
{
__alternative_atomic64(add, add_return,
ASM_OUTPUT2("+A" (i), "+c" (v)),
@@ -191,7 +190,7 @@ static inline long long arch_atomic64_add(long long i, atomic64_t *v)
*
* Atomically subtracts @i from @v.
*/
-static inline long long arch_atomic64_sub(long long i, atomic64_t *v)
+static inline s64 arch_atomic64_sub(s64 i, atomic64_t *v)
{
__alternative_atomic64(sub, sub_return,
ASM_OUTPUT2("+A" (i), "+c" (v)),
@@ -234,8 +233,7 @@ static inline void arch_atomic64_dec(atomic64_t *v)
* Atomically adds @a to @v, so long as it was not @u.
* Returns non-zero if the add was done, zero otherwise.
*/
-static inline int arch_atomic64_add_unless(atomic64_t *v, long long a,
- long long u)
+static inline int arch_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
{
unsigned low = (unsigned)u;
unsigned high = (unsigned)(u >> 32);
@@ -254,9 +252,9 @@ static inline int arch_atomic64_inc_not_zero(atomic64_t *v)
}
#define arch_atomic64_inc_not_zero arch_atomic64_inc_not_zero

-static inline long long arch_atomic64_dec_if_positive(atomic64_t *v)
+static inline s64 arch_atomic64_dec_if_positive(atomic64_t *v)
{
- long long r;
+ s64 r;
alternative_atomic64(dec_if_positive, "=&A" (r),
"S" (v) : "ecx", "memory");
return r;
@@ -266,17 +264,17 @@ static inline long long arch_atomic64_dec_if_positive(atomic64_t *v)
#undef alternative_atomic64
#undef __alternative_atomic64

-static inline void arch_atomic64_and(long long i, atomic64_t *v)
+static inline void arch_atomic64_and(s64 i, atomic64_t *v)
{
- long long old, c = 0;
+ s64 old, c = 0;

while ((old = arch_atomic64_cmpxchg(v, c, c & i)) != c)
c = old;
}

-static inline long long arch_atomic64_fetch_and(long long i, atomic64_t *v)
+static inline s64 arch_atomic64_fetch_and(s64 i, atomic64_t *v)
{
- long long old, c = 0;
+ s64 old, c = 0;

while ((old = arch_atomic64_cmpxchg(v, c, c & i)) != c)
c = old;
@@ -284,17 +282,17 @@ static inline long long arch_atomic64_fetch_and(long long i, atomic64_t *v)
return old;
}

-static inline void arch_atomic64_or(long long i, atomic64_t *v)
+static inline void arch_atomic64_or(s64 i, atomic64_t *v)
{
- long long old, c = 0;
+ s64 old, c = 0;

while ((old = arch_atomic64_cmpxchg(v, c, c | i)) != c)
c = old;
}

-static inline long long arch_atomic64_fetch_or(long long i, atomic64_t *v)
+static inline s64 arch_atomic64_fetch_or(s64 i, atomic64_t *v)
{
- long long old, c = 0;
+ s64 old, c = 0;

while ((old = arch_atomic64_cmpxchg(v, c, c | i)) != c)
c = old;
@@ -302,17 +300,17 @@ static inline long long arch_atomic64_fetch_or(long long i, atomic64_t *v)
return old;
}

-static inline void arch_atomic64_xor(long long i, atomic64_t *v)
+static inline void arch_atomic64_xor(s64 i, atomic64_t *v)
{
- long long old, c = 0;
+ s64 old, c = 0;

while ((old = arch_atomic64_cmpxchg(v, c, c ^ i)) != c)
c = old;
}

-static inline long long arch_atomic64_fetch_xor(long long i, atomic64_t *v)
+static inline s64 arch_atomic64_fetch_xor(s64 i, atomic64_t *v)
{
- long long old, c = 0;
+ s64 old, c = 0;

while ((old = arch_atomic64_cmpxchg(v, c, c ^ i)) != c)
c = old;
@@ -320,9 +318,9 @@ static inline long long arch_atomic64_fetch_xor(long long i, atomic64_t *v)
return old;
}

-static inline long long arch_atomic64_fetch_add(long long i, atomic64_t *v)
+static inline s64 arch_atomic64_fetch_add(s64 i, atomic64_t *v)
{
- long long old, c = 0;
+ s64 old, c = 0;

while ((old = arch_atomic64_cmpxchg(v, c, c + i)) != c)
c = old;
diff --git a/arch/x86/include/asm/atomic64_64.h b/arch/x86/include/asm/atomic64_64.h
index dadc20adba21..703b7dfd45e0 100644
--- a/arch/x86/include/asm/atomic64_64.h
+++ b/arch/x86/include/asm/atomic64_64.h
@@ -17,7 +17,7 @@
* Atomically reads the value of @v.
* Doesn't imply a read memory barrier.
*/
-static inline long arch_atomic64_read(const atomic64_t *v)
+static inline s64 arch_atomic64_read(const atomic64_t *v)
{
return READ_ONCE((v)->counter);
}
@@ -29,7 +29,7 @@ static inline long arch_atomic64_read(const atomic64_t *v)
*
* Atomically sets the value of @v to @i.
*/
-static inline void arch_atomic64_set(atomic64_t *v, long i)
+static inline void arch_atomic64_set(atomic64_t *v, s64 i)
{
WRITE_ONCE(v->counter, i);
}
@@ -41,7 +41,7 @@ static inline void arch_atomic64_set(atomic64_t *v, long i)
*
* Atomically adds @i to @v.
*/
-static __always_inline void arch_atomic64_add(long i, atomic64_t *v)
+static __always_inline void arch_atomic64_add(s64 i, atomic64_t *v)
{
asm volatile(LOCK_PREFIX "addq %1,%0"
: "=m" (v->counter)
@@ -55,7 +55,7 @@ static __always_inline void arch_atomic64_add(long i, atomic64_t *v)
*
* Atomically subtracts @i from @v.
*/
-static inline void arch_atomic64_sub(long i, atomic64_t *v)
+static inline void arch_atomic64_sub(s64 i, atomic64_t *v)
{
asm volatile(LOCK_PREFIX "subq %1,%0"
: "=m" (v->counter)
@@ -71,7 +71,7 @@ static inline void arch_atomic64_sub(long i, atomic64_t *v)
* true if the result is zero, or false for all
* other cases.
*/
-static inline bool arch_atomic64_sub_and_test(long i, atomic64_t *v)
+static inline bool arch_atomic64_sub_and_test(s64 i, atomic64_t *v)
{
return GEN_BINARY_RMWcc(LOCK_PREFIX "subq", v->counter, e, "er", i);
}
@@ -142,7 +142,7 @@ static inline bool arch_atomic64_inc_and_test(atomic64_t *v)
* if the result is negative, or false when
* result is greater than or equal to zero.
*/
-static inline bool arch_atomic64_add_negative(long i, atomic64_t *v)
+static inline bool arch_atomic64_add_negative(s64 i, atomic64_t *v)
{
return GEN_BINARY_RMWcc(LOCK_PREFIX "addq", v->counter, s, "er", i);
}
@@ -155,43 +155,43 @@ static inline bool arch_atomic64_add_negative(long i, atomic64_t *v)
*
* Atomically adds @i to @v and returns @i + @v
*/
-static __always_inline long arch_atomic64_add_return(long i, atomic64_t *v)
+static __always_inline s64 arch_atomic64_add_return(s64 i, atomic64_t *v)
{
return i + xadd(&v->counter, i);
}

-static inline long arch_atomic64_sub_return(long i, atomic64_t *v)
+static inline s64 arch_atomic64_sub_return(s64 i, atomic64_t *v)
{
return arch_atomic64_add_return(-i, v);
}

-static inline long arch_atomic64_fetch_add(long i, atomic64_t *v)
+static inline s64 arch_atomic64_fetch_add(s64 i, atomic64_t *v)
{
return xadd(&v->counter, i);
}

-static inline long arch_atomic64_fetch_sub(long i, atomic64_t *v)
+static inline s64 arch_atomic64_fetch_sub(s64 i, atomic64_t *v)
{
return xadd(&v->counter, -i);
}

-static inline long arch_atomic64_cmpxchg(atomic64_t *v, long old, long new)
+static inline s64 arch_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
{
return arch_cmpxchg(&v->counter, old, new);
}

#define arch_atomic64_try_cmpxchg arch_atomic64_try_cmpxchg
-static __always_inline bool arch_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, long new)
+static __always_inline bool arch_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
{
return try_cmpxchg(&v->counter, old, new);
}

-static inline long arch_atomic64_xchg(atomic64_t *v, long new)
+static inline s64 arch_atomic64_xchg(atomic64_t *v, s64 new)
{
return arch_xchg(&v->counter, new);
}

-static inline void arch_atomic64_and(long i, atomic64_t *v)
+static inline void arch_atomic64_and(s64 i, atomic64_t *v)
{
asm volatile(LOCK_PREFIX "andq %1,%0"
: "+m" (v->counter)
@@ -199,7 +199,7 @@ static inline void arch_atomic64_and(long i, atomic64_t *v)
: "memory");
}

-static inline long arch_atomic64_fetch_and(long i, atomic64_t *v)
+static inline s64 arch_atomic64_fetch_and(s64 i, atomic64_t *v)
{
s64 val = arch_atomic64_read(v);

@@ -208,7 +208,7 @@ static inline long arch_atomic64_fetch_and(long i, atomic64_t *v)
return val;
}

-static inline void arch_atomic64_or(long i, atomic64_t *v)
+static inline void arch_atomic64_or(s64 i, atomic64_t *v)
{
asm volatile(LOCK_PREFIX "orq %1,%0"
: "+m" (v->counter)
@@ -216,7 +216,7 @@ static inline void arch_atomic64_or(long i, atomic64_t *v)
: "memory");
}

-static inline long arch_atomic64_fetch_or(long i, atomic64_t *v)
+static inline s64 arch_atomic64_fetch_or(s64 i, atomic64_t *v)
{
s64 val = arch_atomic64_read(v);

@@ -225,7 +225,7 @@ static inline long arch_atomic64_fetch_or(long i, atomic64_t *v)
return val;
}

-static inline void arch_atomic64_xor(long i, atomic64_t *v)
+static inline void arch_atomic64_xor(s64 i, atomic64_t *v)
{
asm volatile(LOCK_PREFIX "xorq %1,%0"
: "+m" (v->counter)
@@ -233,7 +233,7 @@ static inline void arch_atomic64_xor(long i, atomic64_t *v)
: "memory");
}

-static inline long arch_atomic64_fetch_xor(long i, atomic64_t *v)
+static inline s64 arch_atomic64_fetch_xor(s64 i, atomic64_t *v)
{
s64 val = arch_atomic64_read(v);

Subject: [tip:locking/core] locking/atomic, crypto/nx: Remove redundant casts

Commit-ID: 2af7a0f91c3a645ec9f1cd68e41472021746db35
Gitweb: https://git.kernel.org/tip/2af7a0f91c3a645ec9f1cd68e41472021746db35
Author: Mark Rutland <[email protected]>
AuthorDate: Wed, 22 May 2019 14:22:49 +0100
Committer: Ingo Molnar <[email protected]>
CommitDate: Mon, 3 Jun 2019 12:32:57 +0200

locking/atomic, crypto/nx: Remove redundant casts

Now that atomic64_read() returns s64 consistently, we don't need to
explicitly cast its return value. Drop the redundant casts.

Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Herbert Xu <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
drivers/crypto/nx/nx-842-pseries.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/crypto/nx/nx-842-pseries.c b/drivers/crypto/nx/nx-842-pseries.c
index 938332ce3b60..2de5e3672e42 100644
--- a/drivers/crypto/nx/nx-842-pseries.c
+++ b/drivers/crypto/nx/nx-842-pseries.c
@@ -857,7 +857,7 @@ static ssize_t nx842_##_name##_show(struct device *dev, \
local_devdata = rcu_dereference(devdata); \
if (local_devdata) \
p = snprintf(buf, PAGE_SIZE, "%lld\n", \
- (s64)atomic64_read(&local_devdata->counters->_name)); \
+ atomic64_read(&local_devdata->counters->_name)); \
rcu_read_unlock(); \
return p; \
}
@@ -911,7 +911,7 @@ static ssize_t nx842_timehist_show(struct device *dev,
for (i = 0; i < (NX842_HIST_SLOTS - 2); i++) {
bytes = snprintf(p, bytes_remain, "%u-%uus:\t%lld\n",
i ? (2<<(i-1)) : 0, (2<<i)-1,
- (s64)atomic64_read(&times[i]));
+ atomic64_read(&times[i]));
bytes_remain -= bytes;
p += bytes;
}
@@ -919,7 +919,7 @@ static ssize_t nx842_timehist_show(struct device *dev,
* 2<<(NX842_HIST_SLOTS - 2) us */
bytes = snprintf(p, bytes_remain, "%uus - :\t%lld\n",
2<<(NX842_HIST_SLOTS - 2),
- (s64)atomic64_read(&times[(NX842_HIST_SLOTS - 1)]));
+ atomic64_read(&times[(NX842_HIST_SLOTS - 1)]));
p += bytes;

rcu_read_unlock();

Subject: [tip:locking/core] locking/atomic, s390/pci: Remove redundant casts

Commit-ID: 6a6a9d5fb9f26d2c2127497f3a42adbeb5ccc2a4
Gitweb: https://git.kernel.org/tip/6a6a9d5fb9f26d2c2127497f3a42adbeb5ccc2a4
Author: Mark Rutland <[email protected]>
AuthorDate: Wed, 22 May 2019 14:22:50 +0100
Committer: Ingo Molnar <[email protected]>
CommitDate: Mon, 3 Jun 2019 12:32:57 +0200

locking/atomic, s390/pci: Remove redundant casts

Now that atomic64_read() returns s64 consistently, we don't need to
explicitly cast its return value. Drop the redundant casts.

Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
arch/s390/pci/pci_debug.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/s390/pci/pci_debug.c b/arch/s390/pci/pci_debug.c
index 45eccf79e990..3408c0df3ebf 100644
--- a/arch/s390/pci/pci_debug.c
+++ b/arch/s390/pci/pci_debug.c
@@ -75,7 +75,7 @@ static void pci_sw_counter_show(struct seq_file *m)

for (i = 0; i < ARRAY_SIZE(pci_sw_names); i++, counter++)
seq_printf(m, "%26s:\t%llu\n", pci_sw_names[i],
- (s64)atomic64_read(counter));
+ atomic64_read(counter));
}

static int pci_perf_show(struct seq_file *m, void *v)

Subject: [tip:locking/core] locking/atomic: Use s64 for atomic64_t on 64-bit

Commit-ID: 3724921396dd1a07c93e3516b8d7c9ff570d17a9
Gitweb: https://git.kernel.org/tip/3724921396dd1a07c93e3516b8d7c9ff570d17a9
Author: Mark Rutland <[email protected]>
AuthorDate: Wed, 22 May 2019 14:22:48 +0100
Committer: Ingo Molnar <[email protected]>
CommitDate: Mon, 3 Jun 2019 12:32:57 +0200

locking/atomic: Use s64 for atomic64_t on 64-bit

Now that all architectures use s64 consistently as the base type for the
atomic64 API, let's have the CONFIG_64BIT definition of atomic64_t use
s64 as the underlying type for atomic64_t, rather than long, matching
the generated headers.

On architectures where atomic64_read(v) is READ_ONCE(v->counter), this
patch will cause the return type of atomic64_read() to be s64.

As of this patch, the atomic64 API can be relied upon to consistently
return s64 where a value rather than boolean condition is returned. This
should make code more robust, and simpler, allowing for the removal of
casts previously required to ensure consistent types.
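
As a hypothetical illustration of the simplification, printing a counter no
longer needs a per-architecture cast or format string once atomic64_read()
returns s64 everywhere:

/* Illustrative only; 'stats' is a made-up atomic64_t counter. */
static atomic64_t stats = ATOMIC64_INIT(0);

static void show_stats(struct seq_file *m)
{
        /*
         * Previously this needed either "%ld" or "%lld" (or an explicit
         * (s64) cast) depending on the architecture; now "%lld" with a
         * plain atomic64_read() is always correct.
         */
        seq_printf(m, "stats:\t%lld\n", atomic64_read(&stats));
}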

Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
include/linux/types.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/types.h b/include/linux/types.h
index 231114ae38f4..05030f608be3 100644
--- a/include/linux/types.h
+++ b/include/linux/types.h
@@ -174,7 +174,7 @@ typedef struct {

#ifdef CONFIG_64BIT
typedef struct {
- long counter;
+ s64 counter;
} atomic64_t;
#endif

Subject: [tip:locking/core] Documentation/atomic_t.txt: Clarify pure non-rmw usage

Commit-ID: fff9b6c7d26943a8eb32b58364b7ec6b9369746a
Gitweb: https://git.kernel.org/tip/fff9b6c7d26943a8eb32b58364b7ec6b9369746a
Author: Peter Zijlstra <[email protected]>
AuthorDate: Fri, 24 May 2019 13:52:31 +0200
Committer: Ingo Molnar <[email protected]>
CommitDate: Mon, 3 Jun 2019 12:32:57 +0200

Documentation/atomic_t.txt: Clarify pure non-rmw usage

Clarify that pure non-RMW usage of atomic_t is pointless; there is
nothing 'magical' about atomic_set() / atomic_read().

This is something that seems to confuse people, because I happen upon it
semi-regularly.
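
A hypothetical example of the pattern being discouraged: if only the non-RMW
ops are used, the atomic_t adds nothing over a plain int accessed with
READ_ONCE()/WRITE_ONCE():

/* Made-up example: only non-RMW ops are used, so atomic_t buys nothing. */
static atomic_t flag = ATOMIC_INIT(0);

static void set_flag(void)
{
        atomic_set(&flag, 1);           /* typically just WRITE_ONCE() */
}

static bool flag_is_set(void)
{
        return atomic_read(&flag) != 0; /* typically just READ_ONCE() */
}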

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Greg Kroah-Hartman <[email protected]>
Acked-by: Will Deacon <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
Documentation/atomic_t.txt | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/Documentation/atomic_t.txt b/Documentation/atomic_t.txt
index dca3fb0554db..89eae7f6b360 100644
--- a/Documentation/atomic_t.txt
+++ b/Documentation/atomic_t.txt
@@ -81,9 +81,11 @@ Non-RMW ops:

The non-RMW ops are (typically) regular LOADs and STOREs and are canonically
implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and
-smp_store_release() respectively.
+smp_store_release() respectively. Therefore, if you find yourself only using
+the Non-RMW operations of atomic_t, you do not in fact need atomic_t at all
+and are doing it wrong.

-The one detail to this is that atomic_set{}() should be observable to the RMW
+A subtle detail of atomic_set{}() is that it should be observable to the RMW
ops. That is:

C atomic-set

Subject: [tip:locking/core] locking/atomic: Use s64 for atomic64

Commit-ID: 9255813d5841e158f033e0d83d455bffdae009a4
Gitweb: https://git.kernel.org/tip/9255813d5841e158f033e0d83d455bffdae009a4
Author: Mark Rutland <[email protected]>
AuthorDate: Wed, 22 May 2019 14:22:35 +0100
Committer: Ingo Molnar <[email protected]>
CommitDate: Mon, 3 Jun 2019 12:32:56 +0200

locking/atomic: Use s64 for atomic64

As a step towards making the atomic64 API use consistent types treewide,
let's have the generic atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long long, matching the generated
headers.

Otherwise, there should be no functional change as a result of this
patch.

Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Acked-by: Arnd Bergmann <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
include/asm-generic/atomic64.h | 20 ++++++++++----------
lib/atomic64.c | 32 ++++++++++++++++----------------
2 files changed, 26 insertions(+), 26 deletions(-)

diff --git a/include/asm-generic/atomic64.h b/include/asm-generic/atomic64.h
index d7a15096fb3b..370f01d4450f 100644
--- a/include/asm-generic/atomic64.h
+++ b/include/asm-generic/atomic64.h
@@ -10,24 +10,24 @@
#include <linux/types.h>

typedef struct {
- long long counter;
+ s64 counter;
} atomic64_t;

#define ATOMIC64_INIT(i) { (i) }

-extern long long atomic64_read(const atomic64_t *v);
-extern void atomic64_set(atomic64_t *v, long long i);
+extern s64 atomic64_read(const atomic64_t *v);
+extern void atomic64_set(atomic64_t *v, s64 i);

#define atomic64_set_release(v, i) atomic64_set((v), (i))

#define ATOMIC64_OP(op) \
-extern void atomic64_##op(long long a, atomic64_t *v);
+extern void atomic64_##op(s64 a, atomic64_t *v);

#define ATOMIC64_OP_RETURN(op) \
-extern long long atomic64_##op##_return(long long a, atomic64_t *v);
+extern s64 atomic64_##op##_return(s64 a, atomic64_t *v);

#define ATOMIC64_FETCH_OP(op) \
-extern long long atomic64_fetch_##op(long long a, atomic64_t *v);
+extern s64 atomic64_fetch_##op(s64 a, atomic64_t *v);

#define ATOMIC64_OPS(op) ATOMIC64_OP(op) ATOMIC64_OP_RETURN(op) ATOMIC64_FETCH_OP(op)

@@ -46,11 +46,11 @@ ATOMIC64_OPS(xor)
#undef ATOMIC64_OP_RETURN
#undef ATOMIC64_OP

-extern long long atomic64_dec_if_positive(atomic64_t *v);
+extern s64 atomic64_dec_if_positive(atomic64_t *v);
#define atomic64_dec_if_positive atomic64_dec_if_positive
-extern long long atomic64_cmpxchg(atomic64_t *v, long long o, long long n);
-extern long long atomic64_xchg(atomic64_t *v, long long new);
-extern long long atomic64_fetch_add_unless(atomic64_t *v, long long a, long long u);
+extern s64 atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n);
+extern s64 atomic64_xchg(atomic64_t *v, s64 new);
+extern s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u);
#define atomic64_fetch_add_unless atomic64_fetch_add_unless

#endif /* _ASM_GENERIC_ATOMIC64_H */
diff --git a/lib/atomic64.c b/lib/atomic64.c
index 7e6905751522..e98c85a99787 100644
--- a/lib/atomic64.c
+++ b/lib/atomic64.c
@@ -42,11 +42,11 @@ static inline raw_spinlock_t *lock_addr(const atomic64_t *v)
return &atomic64_lock[addr & (NR_LOCKS - 1)].lock;
}

-long long atomic64_read(const atomic64_t *v)
+s64 atomic64_read(const atomic64_t *v)
{
unsigned long flags;
raw_spinlock_t *lock = lock_addr(v);
- long long val;
+ s64 val;

raw_spin_lock_irqsave(lock, flags);
val = v->counter;
@@ -55,7 +55,7 @@ long long atomic64_read(const atomic64_t *v)
}
EXPORT_SYMBOL(atomic64_read);

-void atomic64_set(atomic64_t *v, long long i)
+void atomic64_set(atomic64_t *v, s64 i)
{
unsigned long flags;
raw_spinlock_t *lock = lock_addr(v);
@@ -67,7 +67,7 @@ void atomic64_set(atomic64_t *v, long long i)
EXPORT_SYMBOL(atomic64_set);

#define ATOMIC64_OP(op, c_op) \
-void atomic64_##op(long long a, atomic64_t *v) \
+void atomic64_##op(s64 a, atomic64_t *v) \
{ \
unsigned long flags; \
raw_spinlock_t *lock = lock_addr(v); \
@@ -79,11 +79,11 @@ void atomic64_##op(long long a, atomic64_t *v) \
EXPORT_SYMBOL(atomic64_##op);

#define ATOMIC64_OP_RETURN(op, c_op) \
-long long atomic64_##op##_return(long long a, atomic64_t *v) \
+s64 atomic64_##op##_return(s64 a, atomic64_t *v) \
{ \
unsigned long flags; \
raw_spinlock_t *lock = lock_addr(v); \
- long long val; \
+ s64 val; \
\
raw_spin_lock_irqsave(lock, flags); \
val = (v->counter c_op a); \
@@ -93,11 +93,11 @@ long long atomic64_##op##_return(long long a, atomic64_t *v) \
EXPORT_SYMBOL(atomic64_##op##_return);

#define ATOMIC64_FETCH_OP(op, c_op) \
-long long atomic64_fetch_##op(long long a, atomic64_t *v) \
+s64 atomic64_fetch_##op(s64 a, atomic64_t *v) \
{ \
unsigned long flags; \
raw_spinlock_t *lock = lock_addr(v); \
- long long val; \
+ s64 val; \
\
raw_spin_lock_irqsave(lock, flags); \
val = v->counter; \
@@ -130,11 +130,11 @@ ATOMIC64_OPS(xor, ^=)
#undef ATOMIC64_OP_RETURN
#undef ATOMIC64_OP

-long long atomic64_dec_if_positive(atomic64_t *v)
+s64 atomic64_dec_if_positive(atomic64_t *v)
{
unsigned long flags;
raw_spinlock_t *lock = lock_addr(v);
- long long val;
+ s64 val;

raw_spin_lock_irqsave(lock, flags);
val = v->counter - 1;
@@ -145,11 +145,11 @@ long long atomic64_dec_if_positive(atomic64_t *v)
}
EXPORT_SYMBOL(atomic64_dec_if_positive);

-long long atomic64_cmpxchg(atomic64_t *v, long long o, long long n)
+s64 atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n)
{
unsigned long flags;
raw_spinlock_t *lock = lock_addr(v);
- long long val;
+ s64 val;

raw_spin_lock_irqsave(lock, flags);
val = v->counter;
@@ -160,11 +160,11 @@ long long atomic64_cmpxchg(atomic64_t *v, long long o, long long n)
}
EXPORT_SYMBOL(atomic64_cmpxchg);

-long long atomic64_xchg(atomic64_t *v, long long new)
+s64 atomic64_xchg(atomic64_t *v, s64 new)
{
unsigned long flags;
raw_spinlock_t *lock = lock_addr(v);
- long long val;
+ s64 val;

raw_spin_lock_irqsave(lock, flags);
val = v->counter;
@@ -174,11 +174,11 @@ long long atomic64_xchg(atomic64_t *v, long long new)
}
EXPORT_SYMBOL(atomic64_xchg);

-long long atomic64_fetch_add_unless(atomic64_t *v, long long a, long long u)
+s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
{
unsigned long flags;
raw_spinlock_t *lock = lock_addr(v);
- long long val;
+ s64 val;

raw_spin_lock_irqsave(lock, flags);
val = v->counter;

Subject: [tip:locking/core] locking/atomic, alpha: Use s64 for atomic64

Commit-ID: 0203fdc160a8c8d8651a3b79aa453ec36cfbd867
Gitweb: https://git.kernel.org/tip/0203fdc160a8c8d8651a3b79aa453ec36cfbd867
Author: Mark Rutland <[email protected]>
AuthorDate: Wed, 22 May 2019 14:22:36 +0100
Committer: Ingo Molnar <[email protected]>
CommitDate: Mon, 3 Jun 2019 12:32:56 +0200

locking/atomic, alpha: Use s64 for atomic64

As a step towards making the atomic64 API use consistent types treewide,
let's have the alpha atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long, matching the generated headers.

As atomic64_read() depends on the generic definition of atomic64_t, this
still returns long. This will be converted in a subsequent patch.

Otherwise, there should be no functional change as a result of this
patch.

Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Ivan Kokshaysky <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Matt Turner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Richard Henderson <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
arch/alpha/include/asm/atomic.h | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/alpha/include/asm/atomic.h b/arch/alpha/include/asm/atomic.h
index 150a1c5d6a2c..2144530d1428 100644
--- a/arch/alpha/include/asm/atomic.h
+++ b/arch/alpha/include/asm/atomic.h
@@ -93,9 +93,9 @@ static inline int atomic_fetch_##op##_relaxed(int i, atomic_t *v) \
}

#define ATOMIC64_OP(op, asm_op) \
-static __inline__ void atomic64_##op(long i, atomic64_t * v) \
+static __inline__ void atomic64_##op(s64 i, atomic64_t * v) \
{ \
- unsigned long temp; \
+ s64 temp; \
__asm__ __volatile__( \
"1: ldq_l %0,%1\n" \
" " #asm_op " %0,%2,%0\n" \
@@ -109,9 +109,9 @@ static __inline__ void atomic64_##op(long i, atomic64_t * v) \
} \

#define ATOMIC64_OP_RETURN(op, asm_op) \
-static __inline__ long atomic64_##op##_return_relaxed(long i, atomic64_t * v) \
+static __inline__ s64 atomic64_##op##_return_relaxed(s64 i, atomic64_t * v) \
{ \
- long temp, result; \
+ s64 temp, result; \
__asm__ __volatile__( \
"1: ldq_l %0,%1\n" \
" " #asm_op " %0,%3,%2\n" \
@@ -128,9 +128,9 @@ static __inline__ long atomic64_##op##_return_relaxed(long i, atomic64_t * v) \
}

#define ATOMIC64_FETCH_OP(op, asm_op) \
-static __inline__ long atomic64_fetch_##op##_relaxed(long i, atomic64_t * v) \
+static __inline__ s64 atomic64_fetch_##op##_relaxed(s64 i, atomic64_t * v) \
{ \
- long temp, result; \
+ s64 temp, result; \
__asm__ __volatile__( \
"1: ldq_l %2,%1\n" \
" " #asm_op " %2,%3,%0\n" \
@@ -246,9 +246,9 @@ static __inline__ int atomic_fetch_add_unless(atomic_t *v, int a, int u)
* Atomically adds @a to @v, so long as it was not @u.
* Returns the old value of @v.
*/
-static __inline__ long atomic64_fetch_add_unless(atomic64_t *v, long a, long u)
+static __inline__ s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
{
- long c, new, old;
+ s64 c, new, old;
smp_mb();
__asm__ __volatile__(
"1: ldq_l %[old],%[mem]\n"
@@ -276,9 +276,9 @@ static __inline__ long atomic64_fetch_add_unless(atomic64_t *v, long a, long u)
* The function returns the old value of *v minus 1, even if
* the atomic variable, v, was not decremented.
*/
-static inline long atomic64_dec_if_positive(atomic64_t *v)
+static inline s64 atomic64_dec_if_positive(atomic64_t *v)
{
- long old, tmp;
+ s64 old, tmp;
smp_mb();
__asm__ __volatile__(
"1: ldq_l %[old],%[mem]\n"

Subject: [tip:locking/core] locking/atomic, crypto/nx: Prepare for atomic64_read() conversion

Commit-ID: 90fde663aed0a1c27e50dd1bf3f121141b2fe9f2
Gitweb: https://git.kernel.org/tip/90fde663aed0a1c27e50dd1bf3f121141b2fe9f2
Author: Mark Rutland <[email protected]>
AuthorDate: Wed, 22 May 2019 14:22:33 +0100
Committer: Ingo Molnar <[email protected]>
CommitDate: Mon, 3 Jun 2019 12:32:56 +0200

locking/atomic, crypto/nx: Prepare for atomic64_read() conversion

The return type of atomic64_read() varies by architecture. It may return
long (e.g. powerpc), long long (e.g. arm), or s64 (e.g. x86_64). This is
somewhat painful, and mandates the use of explicit casts in some cases
(e.g. when printing the return value).

To ameliorate matters, subsequent patches will make the atomic64 API
consistently use s64.

As a preparatory step, this patch updates the nx-842 code to treat the
return value of atomic64_read() as s64, using explicit casts. These
casts will be removed once the s64 conversion is complete.

Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Herbert Xu <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
drivers/crypto/nx/nx-842-pseries.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/crypto/nx/nx-842-pseries.c b/drivers/crypto/nx/nx-842-pseries.c
index 5c4aa606208c..938332ce3b60 100644
--- a/drivers/crypto/nx/nx-842-pseries.c
+++ b/drivers/crypto/nx/nx-842-pseries.c
@@ -856,8 +856,8 @@ static ssize_t nx842_##_name##_show(struct device *dev, \
rcu_read_lock(); \
local_devdata = rcu_dereference(devdata); \
if (local_devdata) \
- p = snprintf(buf, PAGE_SIZE, "%ld\n", \
- atomic64_read(&local_devdata->counters->_name)); \
+ p = snprintf(buf, PAGE_SIZE, "%lld\n", \
+ (s64)atomic64_read(&local_devdata->counters->_name)); \
rcu_read_unlock(); \
return p; \
}
@@ -909,17 +909,17 @@ static ssize_t nx842_timehist_show(struct device *dev,
}

for (i = 0; i < (NX842_HIST_SLOTS - 2); i++) {
- bytes = snprintf(p, bytes_remain, "%u-%uus:\t%ld\n",
+ bytes = snprintf(p, bytes_remain, "%u-%uus:\t%lld\n",
i ? (2<<(i-1)) : 0, (2<<i)-1,
- atomic64_read(&times[i]));
+ (s64)atomic64_read(&times[i]));
bytes_remain -= bytes;
p += bytes;
}
/* The last bucket holds everything over
* 2<<(NX842_HIST_SLOTS - 2) us */
- bytes = snprintf(p, bytes_remain, "%uus - :\t%ld\n",
+ bytes = snprintf(p, bytes_remain, "%uus - :\t%lld\n",
2<<(NX842_HIST_SLOTS - 2),
- atomic64_read(&times[(NX842_HIST_SLOTS - 1)]));
+ (s64)atomic64_read(&times[(NX842_HIST_SLOTS - 1)]));
p += bytes;

rcu_read_unlock();
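
For illustration of the format-string problem this cast works around, here is
a small userspace sketch (not kernel code; "counter" and the typedef are
made-up stand-ins for the real atomic64_read() call and its per-architecture
return type):

#include <stdio.h>

/* Pretend this is the per-architecture return type of atomic64_read(). */
typedef long arch_atomic64_ret_t;

int main(void)
{
	arch_atomic64_ret_t counter = 842;	/* stand-in for atomic64_read() */

	/*
	 * "%ld" only matches on architectures where the return type happens
	 * to be long; casting to a fixed 64-bit type and printing with
	 * "%lld" is correct everywhere, which is what the patch does with
	 * its (s64) cast.
	 */
	printf("%lld\n", (long long)counter);

	return 0;
}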

Subject: [tip:locking/core] locking/atomic, arm64: Use s64 for atomic64

Commit-ID: 16f18688af7ea6c65f6daa3efb4661415e2e6041
Gitweb: https://git.kernel.org/tip/16f18688af7ea6c65f6daa3efb4661415e2e6041
Author: Mark Rutland <[email protected]>
AuthorDate: Wed, 22 May 2019 14:22:39 +0100
Committer: Ingo Molnar <[email protected]>
CommitDate: Mon, 3 Jun 2019 12:32:56 +0200

locking/atomic, arm64: Use s64 for atomic64

As a step towards making the atomic64 API use consistent types treewide,
let's have the arm64 atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long, matching the generated headers.

As atomic64_read() depends on the generic definition of atomic64_t, this
still returns long. This will be converted in a subsequent patch.

Note that in arch_atomic64_dec_if_positive(), the x0 variable is left as
long, as this variable is also used to hold the pointer to the
atomic64_t.

Otherwise, there should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
arch/arm64/include/asm/atomic_ll_sc.h | 20 ++++++++++----------
arch/arm64/include/asm/atomic_lse.h | 34 +++++++++++++++++-----------------
2 files changed, 27 insertions(+), 27 deletions(-)

diff --git a/arch/arm64/include/asm/atomic_ll_sc.h b/arch/arm64/include/asm/atomic_ll_sc.h
index e321293e0c89..f3b12d7f431f 100644
--- a/arch/arm64/include/asm/atomic_ll_sc.h
+++ b/arch/arm64/include/asm/atomic_ll_sc.h
@@ -133,9 +133,9 @@ ATOMIC_OPS(xor, eor)

#define ATOMIC64_OP(op, asm_op) \
__LL_SC_INLINE void \
-__LL_SC_PREFIX(arch_atomic64_##op(long i, atomic64_t *v)) \
+__LL_SC_PREFIX(arch_atomic64_##op(s64 i, atomic64_t *v)) \
{ \
- long result; \
+ s64 result; \
unsigned long tmp; \
\
asm volatile("// atomic64_" #op "\n" \
@@ -150,10 +150,10 @@ __LL_SC_PREFIX(arch_atomic64_##op(long i, atomic64_t *v)) \
__LL_SC_EXPORT(arch_atomic64_##op);

#define ATOMIC64_OP_RETURN(name, mb, acq, rel, cl, op, asm_op) \
-__LL_SC_INLINE long \
-__LL_SC_PREFIX(arch_atomic64_##op##_return##name(long i, atomic64_t *v))\
+__LL_SC_INLINE s64 \
+__LL_SC_PREFIX(arch_atomic64_##op##_return##name(s64 i, atomic64_t *v))\
{ \
- long result; \
+ s64 result; \
unsigned long tmp; \
\
asm volatile("// atomic64_" #op "_return" #name "\n" \
@@ -172,10 +172,10 @@ __LL_SC_PREFIX(arch_atomic64_##op##_return##name(long i, atomic64_t *v))\
__LL_SC_EXPORT(arch_atomic64_##op##_return##name);

#define ATOMIC64_FETCH_OP(name, mb, acq, rel, cl, op, asm_op) \
-__LL_SC_INLINE long \
-__LL_SC_PREFIX(arch_atomic64_fetch_##op##name(long i, atomic64_t *v)) \
+__LL_SC_INLINE s64 \
+__LL_SC_PREFIX(arch_atomic64_fetch_##op##name(s64 i, atomic64_t *v)) \
{ \
- long result, val; \
+ s64 result, val; \
unsigned long tmp; \
\
asm volatile("// atomic64_fetch_" #op #name "\n" \
@@ -225,10 +225,10 @@ ATOMIC64_OPS(xor, eor)
#undef ATOMIC64_OP_RETURN
#undef ATOMIC64_OP

-__LL_SC_INLINE long
+__LL_SC_INLINE s64
__LL_SC_PREFIX(arch_atomic64_dec_if_positive(atomic64_t *v))
{
- long result;
+ s64 result;
unsigned long tmp;

asm volatile("// atomic64_dec_if_positive\n"
diff --git a/arch/arm64/include/asm/atomic_lse.h b/arch/arm64/include/asm/atomic_lse.h
index 9256a3921e4b..c53832b08af7 100644
--- a/arch/arm64/include/asm/atomic_lse.h
+++ b/arch/arm64/include/asm/atomic_lse.h
@@ -224,9 +224,9 @@ ATOMIC_FETCH_OP_SUB( , al, "memory")

#define __LL_SC_ATOMIC64(op) __LL_SC_CALL(arch_atomic64_##op)
#define ATOMIC64_OP(op, asm_op) \
-static inline void arch_atomic64_##op(long i, atomic64_t *v) \
+static inline void arch_atomic64_##op(s64 i, atomic64_t *v) \
{ \
- register long x0 asm ("x0") = i; \
+ register s64 x0 asm ("x0") = i; \
register atomic64_t *x1 asm ("x1") = v; \
\
asm volatile(ARM64_LSE_ATOMIC_INSN(__LL_SC_ATOMIC64(op), \
@@ -244,9 +244,9 @@ ATOMIC64_OP(add, stadd)
#undef ATOMIC64_OP

#define ATOMIC64_FETCH_OP(name, mb, op, asm_op, cl...) \
-static inline long arch_atomic64_fetch_##op##name(long i, atomic64_t *v)\
+static inline s64 arch_atomic64_fetch_##op##name(s64 i, atomic64_t *v) \
{ \
- register long x0 asm ("x0") = i; \
+ register s64 x0 asm ("x0") = i; \
register atomic64_t *x1 asm ("x1") = v; \
\
asm volatile(ARM64_LSE_ATOMIC_INSN( \
@@ -276,9 +276,9 @@ ATOMIC64_FETCH_OPS(add, ldadd)
#undef ATOMIC64_FETCH_OPS

#define ATOMIC64_OP_ADD_RETURN(name, mb, cl...) \
-static inline long arch_atomic64_add_return##name(long i, atomic64_t *v)\
+static inline s64 arch_atomic64_add_return##name(s64 i, atomic64_t *v) \
{ \
- register long x0 asm ("x0") = i; \
+ register s64 x0 asm ("x0") = i; \
register atomic64_t *x1 asm ("x1") = v; \
\
asm volatile(ARM64_LSE_ATOMIC_INSN( \
@@ -302,9 +302,9 @@ ATOMIC64_OP_ADD_RETURN( , al, "memory")

#undef ATOMIC64_OP_ADD_RETURN

-static inline void arch_atomic64_and(long i, atomic64_t *v)
+static inline void arch_atomic64_and(s64 i, atomic64_t *v)
{
- register long x0 asm ("x0") = i;
+ register s64 x0 asm ("x0") = i;
register atomic64_t *x1 asm ("x1") = v;

asm volatile(ARM64_LSE_ATOMIC_INSN(
@@ -320,9 +320,9 @@ static inline void arch_atomic64_and(long i, atomic64_t *v)
}

#define ATOMIC64_FETCH_OP_AND(name, mb, cl...) \
-static inline long arch_atomic64_fetch_and##name(long i, atomic64_t *v) \
+static inline s64 arch_atomic64_fetch_and##name(s64 i, atomic64_t *v) \
{ \
- register long x0 asm ("x0") = i; \
+ register s64 x0 asm ("x0") = i; \
register atomic64_t *x1 asm ("x1") = v; \
\
asm volatile(ARM64_LSE_ATOMIC_INSN( \
@@ -346,9 +346,9 @@ ATOMIC64_FETCH_OP_AND( , al, "memory")

#undef ATOMIC64_FETCH_OP_AND

-static inline void arch_atomic64_sub(long i, atomic64_t *v)
+static inline void arch_atomic64_sub(s64 i, atomic64_t *v)
{
- register long x0 asm ("x0") = i;
+ register s64 x0 asm ("x0") = i;
register atomic64_t *x1 asm ("x1") = v;

asm volatile(ARM64_LSE_ATOMIC_INSN(
@@ -364,9 +364,9 @@ static inline void arch_atomic64_sub(long i, atomic64_t *v)
}

#define ATOMIC64_OP_SUB_RETURN(name, mb, cl...) \
-static inline long arch_atomic64_sub_return##name(long i, atomic64_t *v)\
+static inline s64 arch_atomic64_sub_return##name(s64 i, atomic64_t *v) \
{ \
- register long x0 asm ("x0") = i; \
+ register s64 x0 asm ("x0") = i; \
register atomic64_t *x1 asm ("x1") = v; \
\
asm volatile(ARM64_LSE_ATOMIC_INSN( \
@@ -392,9 +392,9 @@ ATOMIC64_OP_SUB_RETURN( , al, "memory")
#undef ATOMIC64_OP_SUB_RETURN

#define ATOMIC64_FETCH_OP_SUB(name, mb, cl...) \
-static inline long arch_atomic64_fetch_sub##name(long i, atomic64_t *v) \
+static inline s64 arch_atomic64_fetch_sub##name(s64 i, atomic64_t *v) \
{ \
- register long x0 asm ("x0") = i; \
+ register s64 x0 asm ("x0") = i; \
register atomic64_t *x1 asm ("x1") = v; \
\
asm volatile(ARM64_LSE_ATOMIC_INSN( \
@@ -418,7 +418,7 @@ ATOMIC64_FETCH_OP_SUB( , al, "memory")

#undef ATOMIC64_FETCH_OP_SUB

-static inline long arch_atomic64_dec_if_positive(atomic64_t *v)
+static inline s64 arch_atomic64_dec_if_positive(atomic64_t *v)
{
register long x0 asm ("x0") = (long)v;
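
The x0 detail called out in the commit message is visible in the last hunk
above. A rough sketch of the pattern (illustrative only: the stand-in
typedefs and the _sketch name are made up, and the real ARM64_LSE_ATOMIC_INSN
asm body is omitted):

typedef long long s64;				/* stand-in for the kernel type */
typedef struct { s64 counter; } atomic64_t;	/* likewise */

static inline s64 arch_atomic64_dec_if_positive_sketch(atomic64_t *v)
{
	/*
	 * x0 first carries the pointer and is later reused for the s64
	 * result, so it stays declared as long rather than s64.
	 */
	register long x0 asm ("x0") = (long)v;

	/* asm body omitted; it would leave the updated counter in x0 */
	return x0;
}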

Subject: [tip:locking/core] locking/atomic, arc: Use s64 for atomic64

Commit-ID: 16fbad086976574b99ea7019c0504d0194e95dc3
Gitweb: https://git.kernel.org/tip/16fbad086976574b99ea7019c0504d0194e95dc3
Author: Mark Rutland <[email protected]>
AuthorDate: Wed, 22 May 2019 14:22:37 +0100
Committer: Ingo Molnar <[email protected]>
CommitDate: Mon, 3 Jun 2019 12:32:56 +0200

locking/atomic, arc: Use s64 for atomic64

As a step towards making the atomic64 API use consistent types treewide,
let's have the arc atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than u64, matching the generated headers.

Otherwise, there should be no functional change as a result of this
patch.

Acked-By: Vineet Gupta <[email protected]>
Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
arch/arc/include/asm/atomic.h | 41 ++++++++++++++++++++---------------------
1 file changed, 20 insertions(+), 21 deletions(-)

diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h
index 158af079838d..2c75df55d0d2 100644
--- a/arch/arc/include/asm/atomic.h
+++ b/arch/arc/include/asm/atomic.h
@@ -324,14 +324,14 @@ ATOMIC_OPS(xor, ^=, CTOP_INST_AXOR_DI_R2_R2_R3)
*/

typedef struct {
- aligned_u64 counter;
+ s64 __aligned(8) counter;
} atomic64_t;

#define ATOMIC64_INIT(a) { (a) }

-static inline long long atomic64_read(const atomic64_t *v)
+static inline s64 atomic64_read(const atomic64_t *v)
{
- unsigned long long val;
+ s64 val;

__asm__ __volatile__(
" ldd %0, [%1] \n"
@@ -341,7 +341,7 @@ static inline long long atomic64_read(const atomic64_t *v)
return val;
}

-static inline void atomic64_set(atomic64_t *v, long long a)
+static inline void atomic64_set(atomic64_t *v, s64 a)
{
/*
* This could have been a simple assignment in "C" but would need
@@ -362,9 +362,9 @@ static inline void atomic64_set(atomic64_t *v, long long a)
}

#define ATOMIC64_OP(op, op1, op2) \
-static inline void atomic64_##op(long long a, atomic64_t *v) \
+static inline void atomic64_##op(s64 a, atomic64_t *v) \
{ \
- unsigned long long val; \
+ s64 val; \
\
__asm__ __volatile__( \
"1: \n" \
@@ -375,13 +375,13 @@ static inline void atomic64_##op(long long a, atomic64_t *v) \
" bnz 1b \n" \
: "=&r"(val) \
: "r"(&v->counter), "ir"(a) \
- : "cc"); \
+ : "cc"); \
} \

#define ATOMIC64_OP_RETURN(op, op1, op2) \
-static inline long long atomic64_##op##_return(long long a, atomic64_t *v) \
+static inline s64 atomic64_##op##_return(s64 a, atomic64_t *v) \
{ \
- unsigned long long val; \
+ s64 val; \
\
smp_mb(); \
\
@@ -402,9 +402,9 @@ static inline long long atomic64_##op##_return(long long a, atomic64_t *v) \
}

#define ATOMIC64_FETCH_OP(op, op1, op2) \
-static inline long long atomic64_fetch_##op(long long a, atomic64_t *v) \
+static inline s64 atomic64_fetch_##op(s64 a, atomic64_t *v) \
{ \
- unsigned long long val, orig; \
+ s64 val, orig; \
\
smp_mb(); \
\
@@ -444,10 +444,10 @@ ATOMIC64_OPS(xor, xor, xor)
#undef ATOMIC64_OP_RETURN
#undef ATOMIC64_OP

-static inline long long
-atomic64_cmpxchg(atomic64_t *ptr, long long expected, long long new)
+static inline s64
+atomic64_cmpxchg(atomic64_t *ptr, s64 expected, s64 new)
{
- long long prev;
+ s64 prev;

smp_mb();

@@ -467,9 +467,9 @@ atomic64_cmpxchg(atomic64_t *ptr, long long expected, long long new)
return prev;
}

-static inline long long atomic64_xchg(atomic64_t *ptr, long long new)
+static inline s64 atomic64_xchg(atomic64_t *ptr, s64 new)
{
- long long prev;
+ s64 prev;

smp_mb();

@@ -495,9 +495,9 @@ static inline long long atomic64_xchg(atomic64_t *ptr, long long new)
* the atomic variable, v, was not decremented.
*/

-static inline long long atomic64_dec_if_positive(atomic64_t *v)
+static inline s64 atomic64_dec_if_positive(atomic64_t *v)
{
- long long val;
+ s64 val;

smp_mb();

@@ -528,10 +528,9 @@ static inline long long atomic64_dec_if_positive(atomic64_t *v)
* Atomically adds @a to @v, if it was not @u.
* Returns the old value of @v
*/
-static inline long long atomic64_fetch_add_unless(atomic64_t *v, long long a,
- long long u)
+static inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
{
- long long old, temp;
+ s64 old, temp;

smp_mb();
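
The kerneldoc in the hunk above (like the alpha version earlier in this
thread) describes atomic64_dec_if_positive() as returning the old value minus
one even when no decrement is performed. A C-level sketch of those semantics,
written as a cmpxchg loop rather than the arc LL/SC assembly (barriers
omitted; meant only to illustrate the contract, not any architecture's
implementation):

static inline s64 atomic64_dec_if_positive_sketch(atomic64_t *v)
{
	s64 old, new;

	do {
		old = atomic64_read(v);
		new = old - 1;
		if (new < 0)
			break;		/* no store, but still report old - 1 */
	} while (atomic64_cmpxchg(v, old, new) != old);

	return new;
}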

Subject: [tip:locking/core] locking/atomic, ia64: Use s64 for atomic64

Commit-ID: d84e28d250150adc6526dcce4ca089e2b57430f3
Gitweb: https://git.kernel.org/tip/d84e28d250150adc6526dcce4ca089e2b57430f3
Author: Mark Rutland <[email protected]>
AuthorDate: Wed, 22 May 2019 14:22:40 +0100
Committer: Ingo Molnar <[email protected]>
CommitDate: Mon, 3 Jun 2019 12:32:56 +0200

locking/atomic, ia64: Use s64 for atomic64

As a step towards making the atomic64 API use consistent types treewide,
let's have the ia64 atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long or __s64, matching the generated
headers.

As atomic64_read() depends on the generic definition of atomic64_t, this
still returns long. This will be converted in a subsequent patch.

Otherwise, there should be no functional change as a result of this
patch.

Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Fenghua Yu <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
arch/ia64/include/asm/atomic.h | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/ia64/include/asm/atomic.h b/arch/ia64/include/asm/atomic.h
index 206530d0751b..50440f3ddc43 100644
--- a/arch/ia64/include/asm/atomic.h
+++ b/arch/ia64/include/asm/atomic.h
@@ -124,10 +124,10 @@ ATOMIC_FETCH_OP(xor, ^)
#undef ATOMIC_OP

#define ATOMIC64_OP(op, c_op) \
-static __inline__ long \
-ia64_atomic64_##op (__s64 i, atomic64_t *v) \
+static __inline__ s64 \
+ia64_atomic64_##op (s64 i, atomic64_t *v) \
{ \
- __s64 old, new; \
+ s64 old, new; \
CMPXCHG_BUGCHECK_DECL \
\
do { \
@@ -139,10 +139,10 @@ ia64_atomic64_##op (__s64 i, atomic64_t *v) \
}

#define ATOMIC64_FETCH_OP(op, c_op) \
-static __inline__ long \
-ia64_atomic64_fetch_##op (__s64 i, atomic64_t *v) \
+static __inline__ s64 \
+ia64_atomic64_fetch_##op (s64 i, atomic64_t *v) \
{ \
- __s64 old, new; \
+ s64 old, new; \
CMPXCHG_BUGCHECK_DECL \
\
do { \
@@ -162,7 +162,7 @@ ATOMIC64_OPS(sub, -)

#define atomic64_add_return(i,v) \
({ \
- long __ia64_aar_i = (i); \
+ s64 __ia64_aar_i = (i); \
__ia64_atomic_const(i) \
? ia64_fetch_and_add(__ia64_aar_i, &(v)->counter) \
: ia64_atomic64_add(__ia64_aar_i, v); \
@@ -170,7 +170,7 @@ ATOMIC64_OPS(sub, -)

#define atomic64_sub_return(i,v) \
({ \
- long __ia64_asr_i = (i); \
+ s64 __ia64_asr_i = (i); \
__ia64_atomic_const(i) \
? ia64_fetch_and_add(-__ia64_asr_i, &(v)->counter) \
: ia64_atomic64_sub(__ia64_asr_i, v); \
@@ -178,7 +178,7 @@ ATOMIC64_OPS(sub, -)

#define atomic64_fetch_add(i,v) \
({ \
- long __ia64_aar_i = (i); \
+ s64 __ia64_aar_i = (i); \
__ia64_atomic_const(i) \
? ia64_fetchadd(__ia64_aar_i, &(v)->counter, acq) \
: ia64_atomic64_fetch_add(__ia64_aar_i, v); \
@@ -186,7 +186,7 @@ ATOMIC64_OPS(sub, -)

#define atomic64_fetch_sub(i,v) \
({ \
- long __ia64_asr_i = (i); \
+ s64 __ia64_asr_i = (i); \
__ia64_atomic_const(i) \
? ia64_fetchadd(-__ia64_asr_i, &(v)->counter, acq) \
: ia64_atomic64_fetch_sub(__ia64_asr_i, v); \
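
The "atomic64_read() still returns long" caveat repeated in these commit
messages comes from the generic definition of atomic64_t, which a subsequent
patch in the series converts to s64. A rough sketch of that end state on
64-bit (the _sketch suffix marks this as illustration, not the actual patch;
field and macro shapes follow the existing kernel convention):

typedef struct {
	s64 counter;
} atomic64_t;

#define ATOMIC64_INIT(i)	{ (i) }

static inline s64 atomic64_read_sketch(const atomic64_t *v)
{
	return READ_ONCE(v->counter);
}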

Subject: [tip:locking/core] locking/atomic, mips: Use s64 for atomic64

Commit-ID: d184cf1a449ca4cb0d86f3dd82c3337c645ea6c0
Gitweb: https://git.kernel.org/tip/d184cf1a449ca4cb0d86f3dd82c3337c645ea6c0
Author: Mark Rutland <[email protected]>
AuthorDate: Wed, 22 May 2019 14:22:41 +0100
Committer: Ingo Molnar <[email protected]>
CommitDate: Mon, 3 Jun 2019 12:32:56 +0200

locking/atomic, mips: Use s64 for atomic64

As a step towards making the atomic64 API use consistent types treewide,
let's have the mips atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long or __s64, matching the generated
headers.

As atomic64_read() depends on the generic definition of atomic64_t, this
still returns long on 64-bit. This will be converted in a subsequent
patch.

Otherwise, there should be no functional change as a result of this
patch.

Signed-off-by: Mark Rutland <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: James Hogan <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Paul Burton <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
arch/mips/include/asm/atomic.h | 22 +++++++++++-----------
1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/arch/mips/include/asm/atomic.h b/arch/mips/include/asm/atomic.h
index 94096299fc56..9a82dd11c0e9 100644
--- a/arch/mips/include/asm/atomic.h
+++ b/arch/mips/include/asm/atomic.h
@@ -254,10 +254,10 @@ static __inline__ int atomic_sub_if_positive(int i, atomic_t * v)
#define atomic64_set(v, i) WRITE_ONCE((v)->counter, (i))

#define ATOMIC64_OP(op, c_op, asm_op) \
-static __inline__ void atomic64_##op(long i, atomic64_t * v) \
+static __inline__ void atomic64_##op(s64 i, atomic64_t * v) \
{ \
if (kernel_uses_llsc) { \
- long temp; \
+ s64 temp; \
\
loongson_llsc_mb(); \
__asm__ __volatile__( \
@@ -280,12 +280,12 @@ static __inline__ void atomic64_##op(long i, atomic64_t * v) \
}

#define ATOMIC64_OP_RETURN(op, c_op, asm_op) \
-static __inline__ long atomic64_##op##_return_relaxed(long i, atomic64_t * v) \
+static __inline__ s64 atomic64_##op##_return_relaxed(s64 i, atomic64_t * v) \
{ \
- long result; \
+ s64 result; \
\
if (kernel_uses_llsc) { \
- long temp; \
+ s64 temp; \
\
loongson_llsc_mb(); \
__asm__ __volatile__( \
@@ -314,12 +314,12 @@ static __inline__ long atomic64_##op##_return_relaxed(long i, atomic64_t * v) \
}

#define ATOMIC64_FETCH_OP(op, c_op, asm_op) \
-static __inline__ long atomic64_fetch_##op##_relaxed(long i, atomic64_t * v) \
+static __inline__ s64 atomic64_fetch_##op##_relaxed(s64 i, atomic64_t * v) \
{ \
- long result; \
+ s64 result; \
\
if (kernel_uses_llsc) { \
- long temp; \
+ s64 temp; \
\
loongson_llsc_mb(); \
__asm__ __volatile__( \
@@ -386,14 +386,14 @@ ATOMIC64_OPS(xor, ^=, xor)
* Atomically test @v and subtract @i if @v is greater or equal than @i.
* The function returns the old value of @v minus @i.
*/
-static __inline__ long atomic64_sub_if_positive(long i, atomic64_t * v)
+static __inline__ s64 atomic64_sub_if_positive(s64 i, atomic64_t * v)
{
- long result;
+ s64 result;

smp_mb__before_llsc();

if (kernel_uses_llsc) {
- long temp;
+ s64 temp;

__asm__ __volatile__(
" .set push \n"

2019-06-06 09:07:36

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [tip:locking/core] Documentation/atomic_t.txt: Clarify pure non-rmw usage

On Thu, Jun 06, 2019 at 10:44:21AM +0200, Andrea Parri wrote:
> On Mon, Jun 03, 2019 at 06:46:54AM -0700, tip-bot for Peter Zijlstra wrote:
> > Commit-ID: fff9b6c7d26943a8eb32b58364b7ec6b9369746a
> > Gitweb: https://git.kernel.org/tip/fff9b6c7d26943a8eb32b58364b7ec6b9369746a
> > Author: Peter Zijlstra <[email protected]>
> > AuthorDate: Fri, 24 May 2019 13:52:31 +0200
> > Committer: Ingo Molnar <[email protected]>
> > CommitDate: Mon, 3 Jun 2019 12:32:57 +0200
> >
> > Documentation/atomic_t.txt: Clarify pure non-rmw usage
> >
> > Clarify that pure non-RMW usage of atomic_t is pointless, there is
> > nothing 'magical' about atomic_set() / atomic_read().
> >
> > This is something that seems to confuse people, because I happen upon it
> > semi-regularly.
> >
> > Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
> > Reviewed-by: Greg Kroah-Hartman <[email protected]>
> > Acked-by: Will Deacon <[email protected]>
> > Cc: Linus Torvalds <[email protected]>
> > Cc: Peter Zijlstra <[email protected]>
> > Cc: Thomas Gleixner <[email protected]>
> > Link: https://lkml.kernel.org/r/[email protected]
> > Signed-off-by: Ingo Molnar <[email protected]>
>
> I'd appreciate it if you could Cc: me in future changes to this doc.
> (as currently suggested by get_maintainer.pl).
>
> This is particularly annoying when you spend time to review such
> changes:
>
> https://lkml.kernel.org/r/20190528111558.GA9106@andrea

Sure, I hadn't realized the LKMM entry had appropriated this file; I
considered it part of the ATOMIC entry there.
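
To make the quoted clarification concrete: if atomic_read()/atomic_set() are
the only operations ever applied, they boil down to the plain accesses below,
so a plain variable does the same job (sketch only; the nr_widgets name is
made up):

static int nr_widgets;			/* plain int, no atomic_t needed */

static inline int nr_widgets_read(void)
{
	return READ_ONCE(nr_widgets);	/* what atomic_read() amounts to */
}

static inline void nr_widgets_set(int n)
{
	WRITE_ONCE(nr_widgets, n);	/* what atomic_set() amounts to */
}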

2019-06-06 09:14:45

by Andrea Parri

[permalink] [raw]
Subject: Re: [tip:locking/core] Documentation/atomic_t.txt: Clarify pure non-rmw usage

On Thu, Jun 06, 2019 at 11:04:39AM +0200, Peter Zijlstra wrote:
> On Thu, Jun 06, 2019 at 10:44:21AM +0200, Andrea Parri wrote:
> > On Mon, Jun 03, 2019 at 06:46:54AM -0700, tip-bot for Peter Zijlstra wrote:
> > > Commit-ID: fff9b6c7d26943a8eb32b58364b7ec6b9369746a
> > > Gitweb: https://git.kernel.org/tip/fff9b6c7d26943a8eb32b58364b7ec6b9369746a
> > > Author: Peter Zijlstra <[email protected]>
> > > AuthorDate: Fri, 24 May 2019 13:52:31 +0200
> > > Committer: Ingo Molnar <[email protected]>
> > > CommitDate: Mon, 3 Jun 2019 12:32:57 +0200
> > >
> > > Documentation/atomic_t.txt: Clarify pure non-rmw usage
> > >
> > > Clarify that pure non-RMW usage of atomic_t is pointless, there is
> > > nothing 'magical' about atomic_set() / atomic_read().
> > >
> > > This is something that seems to confuse people, because I happen upon it
> > > semi-regularly.
> > >
> > > Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
> > > Reviewed-by: Greg Kroah-Hartman <[email protected]>
> > > Acked-by: Will Deacon <[email protected]>
> > > Cc: Linus Torvalds <[email protected]>
> > > Cc: Peter Zijlstra <[email protected]>
> > > Cc: Thomas Gleixner <[email protected]>
> > > Link: https://lkml.kernel.org/r/[email protected]
> > > Signed-off-by: Ingo Molnar <[email protected]>
> >
> > I'd appreciate it if you could Cc: me in future changes to this doc.
> > (as currently suggested by get_maintainer.pl).
> >
> > This is particularly annoying when you spend time to review such
> > changes:
> >
> > https://lkml.kernel.org/r/20190528111558.GA9106@andrea
>
> Sure, I hadn't realized the LKMM entry had appropriated this file; I
> considered it part of the ATOMIC entry there.

Thanks. Well, that was not a 'secret', cf.

70b83069f70d ("tools/memory-model: Add informal LKMM documentation to MAINTAINERS")

Andrea

2019-06-06 11:43:41

by Andrea Parri

[permalink] [raw]
Subject: Re: [tip:locking/core] Documentation/atomic_t.txt: Clarify pure non-rmw usage

On Mon, Jun 03, 2019 at 06:46:54AM -0700, tip-bot for Peter Zijlstra wrote:
> Commit-ID: fff9b6c7d26943a8eb32b58364b7ec6b9369746a
> Gitweb: https://git.kernel.org/tip/fff9b6c7d26943a8eb32b58364b7ec6b9369746a
> Author: Peter Zijlstra <[email protected]>
> AuthorDate: Fri, 24 May 2019 13:52:31 +0200
> Committer: Ingo Molnar <[email protected]>
> CommitDate: Mon, 3 Jun 2019 12:32:57 +0200
>
> Documentation/atomic_t.txt: Clarify pure non-rmw usage
>
> Clarify that pure non-RMW usage of atomic_t is pointless, there is
> nothing 'magical' about atomic_set() / atomic_read().
>
> This is something that seems to confuse people, because I happen upon it
> semi-regularly.
>
> Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
> Reviewed-by: Greg Kroah-Hartman <[email protected]>
> Acked-by: Will Deacon <[email protected]>
> Cc: Linus Torvalds <[email protected]>
> Cc: Peter Zijlstra <[email protected]>
> Cc: Thomas Gleixner <[email protected]>
> Link: https://lkml.kernel.org/r/[email protected]
> Signed-off-by: Ingo Molnar <[email protected]>

I'd appreciate it if you could Cc: me in future changes to this doc.
(as currently suggested by get_maintainer.pl).

This is particularly annoying when you spend time to review such
changes:

https://lkml.kernel.org/r/20190528111558.GA9106@andrea

Thanks,
Andrea


> ---
> Documentation/atomic_t.txt | 6 ++++--
> 1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/Documentation/atomic_t.txt b/Documentation/atomic_t.txt
> index dca3fb0554db..89eae7f6b360 100644
> --- a/Documentation/atomic_t.txt
> +++ b/Documentation/atomic_t.txt
> @@ -81,9 +81,11 @@ Non-RMW ops:
>
> The non-RMW ops are (typically) regular LOADs and STOREs and are canonically
> implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and
> -smp_store_release() respectively.
> +smp_store_release() respectively. Therefore, if you find yourself only using
> +the Non-RMW operations of atomic_t, you do not in fact need atomic_t at all
> +and are doing it wrong.
>
> -The one detail to this is that atomic_set{}() should be observable to the RMW
> +A subtle detail of atomic_set{}() is that it should be observable to the RMW
> ops. That is:
>
> C atomic-set