Patch 1 adds a much-needed CONFIG_RWSEM_SPIN_ON_OWNER option.
Patch 2 is a quick fix to disable optimistic spinning on PA-RISC.
Thanks!
Davidlohr Bueso (2):
locking/rwsem: Add CONFIG_RWSEM_SPIN_ON_OWNER
locking/rwsem: Disable optimistic spinning for PA-RISC
include/linux/rwsem.h | 4 ++--
kernel/Kconfig.locks | 4 ++++
kernel/locking/rwsem.c | 2 +-
3 files changed, 7 insertions(+), 3 deletions(-)
--
1.8.1.4
Just like with mutexes (CONFIG_MUTEX_SPIN_ON_OWNER),
encapsulate the dependencies for rwsem optimistic spinning.
No logical changes here as it continues to depend on both
SMP and the XADD algorithm variant.
Signed-off-by: Davidlohr Bueso <[email protected]>
---
include/linux/rwsem.h | 4 ++--
kernel/Kconfig.locks | 4 ++++
kernel/locking/rwsem.c | 2 +-
3 files changed, 7 insertions(+), 3 deletions(-)
diff --git a/include/linux/rwsem.h b/include/linux/rwsem.h
index 8d79708..accdef7 100644
--- a/include/linux/rwsem.h
+++ b/include/linux/rwsem.h
@@ -27,7 +27,7 @@ struct rw_semaphore {
long count;
raw_spinlock_t wait_lock;
struct list_head wait_list;
-#ifdef CONFIG_SMP
+#ifdef CONFIG_RWSEM_SPIN_ON_OWNER
/*
* Write owner. Used as a speculative check to see
* if the owner is running on the cpu.
@@ -64,7 +64,7 @@ static inline int rwsem_is_locked(struct rw_semaphore *sem)
# define __RWSEM_DEP_MAP_INIT(lockname)
#endif
-#if defined(CONFIG_SMP) && !defined(CONFIG_RWSEM_GENERIC_SPINLOCK)
+#ifdef CONFIG_RWSEM_SPIN_ON_OWNER
#define __RWSEM_INITIALIZER(name) \
{ RWSEM_UNLOCKED_VALUE, \
__RAW_SPIN_LOCK_UNLOCKED(name.wait_lock), \
diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks
index 35536d9..e4c3162 100644
--- a/kernel/Kconfig.locks
+++ b/kernel/Kconfig.locks
@@ -224,6 +224,10 @@ config MUTEX_SPIN_ON_OWNER
def_bool y
depends on SMP && !DEBUG_MUTEXES
+config RWSEM_SPIN_ON_OWNER
+ def_bool y
+ depends on SMP && RWSEM_XCHGADD_ALGORITHM
+
config ARCH_USE_QUEUE_RWLOCK
bool
diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index 42f806d..e2d3bc7 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -12,7 +12,7 @@
#include <linux/atomic.h>
-#if defined(CONFIG_SMP) && defined(CONFIG_RWSEM_XCHGADD_ALGORITHM)
+#ifdef CONFIG_RWSEM_SPIN_ON_OWNER
static inline void rwsem_set_owner(struct rw_semaphore *sem)
{
sem->owner = current;
--
1.8.1.4
PA-RISC's cmpxchg is not safe against normal stores and the code used
for optimistic spinning is known to be broken because of this.
Disable for now.
[Changelog from PeterZ]
Reported-by: Mikulas Patocka <[email protected]>
Signed-off-by: Davidlohr Bueso <[email protected]>
---
kernel/Kconfig.locks | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks
index e4c3162..125c77a 100644
--- a/kernel/Kconfig.locks
+++ b/kernel/Kconfig.locks
@@ -226,7 +226,7 @@ config MUTEX_SPIN_ON_OWNER
config RWSEM_SPIN_ON_OWNER
def_bool y
- depends on SMP && RWSEM_XCHGADD_ALGORITHM
+ depends on SMP && RWSEM_XCHGADD_ALGORITHM && !PARISC
config ARCH_USE_QUEUE_RWLOCK
bool
--
1.8.1.4
On Fri, 2014-06-06 at 08:55 -0700, Davidlohr Bueso wrote:
> PA-RISC's cmpxchg is not safe against normal stores and the code used
> for optimistic spinning is known to be broken because of this.
What about all the other identified architectures? The problem is that
unless you can do an atomic Read Modify Write on your architecture, you
have to implement our exchange primitives with locking, and that makes
you unsafe against stores. We happen to be the architecture that
detected this, but I thought we agreed sparc32, metag, tile32, arc and
possibly hexagon have this problem.
Rather than naming all the failing architectures, we probably want an
ARCH_NO_ATOMIC_RMW
symbol which they select to indicate they can't do atomic exchange and
then you make
depends on SMP && RWSEM_XCHGADD_ALGORITHM && !ARCH_NO_ATOMIC_RMW
So they can all self-select (especially if more come crawling out of the
woodwork).
James
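To make the failure mode James describes concrete, here is a minimal
userspace sketch (illustrative names and a pthread mutex as the stand-in
lock; this is NOT the parisc implementation) of how a lock-emulated
cmpxchg() can silently lose a concurrent plain store:

/*
 * Sketch only. Build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t emul_lock = PTHREAD_MUTEX_INITIALIZER;
static volatile long owner;		/* stand-in for e.g. sem->owner */

/* cmpxchg() emulated with a lock, as an arch without atomic RMW must do */
static long emulated_cmpxchg(volatile long *p, long old, long new)
{
	long prev;

	pthread_mutex_lock(&emul_lock);
	prev = *p;
	if (prev == old)
		*p = new;	/* a plain store landing between the read
				 * above and this write is silently lost */
	pthread_mutex_unlock(&emul_lock);
	return prev;
}

static void *spinner(void *arg)
{
	(void)arg;
	/* optimistic-spin style transition: 0 -> 1 via cmpxchg */
	emulated_cmpxchg(&owner, 0, 1);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, spinner, NULL);
	owner = 2;	/* plain store; takes no lock */
	pthread_join(t, NULL);

	/*
	 * With a real atomic cmpxchg the store of 2 can never be lost:
	 * it either happens before (the cmpxchg fails) or after (the 2
	 * survives). With the lock-based emulation it can land inside
	 * the read/write window and be overwritten, so this may print 1.
	 */
	printf("owner = %ld\n", owner);
	return 0;
}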
On Fri, Jun 06, 2014 at 09:09:47AM -0700, James Bottomley wrote:
> On Fri, 2014-06-06 at 08:55 -0700, Davidlohr Bueso wrote:
> > PA-RISC's cmpxchg is not safe against normal stores and the code used
> > for optimistic spinning is known to be broken because of this.
>
> What about all the other identified architectures? The problem is that
> unless you can do an atomic Read Modify Write on your architecture, you
> have to implement our exchange primitives with locking, and that makes
> you unsafe against stores. We happen to be the architecture that
> detected this, but I thought we agreed sparc32, metag, tile32, arc and
> possibly hexagon have this problem.
>
> Rather than naming all the failing architectures, we probably want an
>
> ARCH_NO_ATOMIC_RMW
The thing is, all these archs are broken beyond this particular problem,
Mikulas Patocka found a number of other spots.
In any case, sure I can exclude more. Although ideally someone goes do
that __atomic sparse thing to flush out all this.
---
Subject: locking, mutex: Disable optimistic spinning on !RMW archs
For some archs a regular store does not play nice with cmpxchg(); the
optimistic spinning code (and various other places not caught by this)
break this assumption and make things go boom.
Until something better is found, disable optimistic spinning for these
archs.
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Reported-by: Mikulas Patocka <[email protected]>
Signed-off-by: Peter Zijlstra <[email protected]>
---
kernel/Kconfig.locks | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks
index d2b32ac27a39..b9c132c48bf1 100644
--- a/kernel/Kconfig.locks
+++ b/kernel/Kconfig.locks
@@ -220,6 +220,10 @@ config INLINE_WRITE_UNLOCK_IRQRESTORE
endif
+config ARCH_NO_ATOMIC_RMW
+ def_bool y
+ depends on PARISC || SPARC32 || METAG_ATOMICITY_LOCK1 || (TILE && !TILEGX) || (ARC && !ARC_HAS_LLSC)
+
config MUTEX_SPIN_ON_OWNER
def_bool y
- depends on SMP && !DEBUG_MUTEXES
+ depends on SMP && !DEBUG_MUTEXES && !ARCH_NO_ATOMIC_RMW
On Fri, 2014-06-06 at 08:55 -0700, Davidlohr Bueso wrote:
> Just like with mutexes (CONFIG_MUTEX_SPIN_ON_OWNER),
> encapsulate the dependencies for rwsem optimistic spinning.
> No logical changes here as it continues to depend on both
> SMP and the XADD algorithm variant.
>
> Signed-off-by: Davidlohr Bueso <[email protected]>
> ---
> include/linux/rwsem.h | 4 ++--
> kernel/Kconfig.locks | 4 ++++
> kernel/locking/rwsem.c | 2 +-
> 3 files changed, 7 insertions(+), 3 deletions(-)
Do we also want to add an #ifdef CONFIG_RWSEM_SPIN_ON_OWNER in
__init_rwsem() and in the optimistic spinning functions for rwsem-xadd?
Besides that:
Acked-by: Jason Low <[email protected]>
On 6/6/2014 1:11 PM, Peter Zijlstra wrote:
> The thing is, all these archs are broken beyond this particular problem,
> Mikulas Patocka found a number of other spots.
>
> In any case, sure I can exclude more. Although ideally someone goes do
> that __atomic sparse thing to flush out all this.
>
> ---
> Subject: locking, mutex: Disable optimistic spinning on !RMW archs
>
> For some archs a regular store does not play nice with cmpxchg(); the
> optimistic spinning code (and various other places not caught by this)
> break this assumption and make things go boom.
>
> Until something better is found, disable optimistic spinning for these
> archs.
>
> [..]
>
> +config ARCH_NO_ATOMIC_RMW
> + def_bool y
> + depends on PARISC || SPARC32 || METAG_ATOMICITY_LOCK1 || (TILE && !TILEGX) || (ARC && !ARC_HAS_LLSC)
For tile:
Acked-by: Chris Metcalf <[email protected]>
But you should use "TILEPRO" (added in kernel 3.5) instead of "(TILE && !TILEGX)".
--
Chris Metcalf, Tilera Corp.
http://www.tilera.com
On Fri, Jun 6, 2014 at 10:11 AM, Peter Zijlstra <[email protected]> wrote:
>
> +config ARCH_NO_ATOMIC_RMW
> + def_bool y
> + depends on PARISC || SPARC32 || METAG_ATOMICITY_LOCK1 || (TILE && !TILEGX) || (ARC && !ARC_HAS_LLSC)
Ugh. We've had these kinds of things before, and they are broken and
nasty to maintain.
Just make it
config ARCH_SUPPORTS_ATOMIC_RMW
bool
which defaults to no. And then make MUTEX_SPIN_ON_OWNER depend on that.
And then we can add "select ARCH_SUPPORTS_ATOMIC_RMW" to the few
architectures we (a) care about and (b) know work. So start with x86,
arm, powerpc and sparc64, and then the rest can just add their own
oneliners if they care.
Remember, most people really won't ever care about this, simply
because it only matters if you have enough CPU's for the whole
spinning thing to make a noticeable difference. So missing some odd
architecture _really_ doesn't matter.
And we really really *really* shouldn't have these kinds of "random
really odd architecture details" in the generic code, not even if it's
something as specific as kernel/Kconfig.locks.
Linus
On Fri, Jun 06, 2014 at 10:22:21AM -0700, Linus Torvalds wrote:
> On Fri, Jun 6, 2014 at 10:11 AM, Peter Zijlstra <[email protected]> wrote:
> >
> > +config ARCH_NO_ATOMIC_RMW
> > + def_bool y
> > + depends on PARISC || SPARC32 || METAG_ATOMICITY_LOCK1 || (TILE && !TILEGX) || (ARC && !ARC_HAS_LLSC)
>
> Ugh. We've had these kinds of things before, and they are broken and
> nasty to maintain.
>
> Just make it
>
> config ARCH_SUPPORTS_ATOMIC_RMW
> bool
>
> which defaults to no. And then make MUTEX_SPIN_ON_OWNER depend on that.
---
Subject: locking, mutex: Make optimistic spinning depend on ARCH_SUPPORTS_ATOMIC_RMW
The optimistic spin code assumes regular stores and cmpxchg() play nice;
this is found to not be true for at least: parisc, sparc32, tile32,
metag-lock1, arc-!llsc and hexagon.
There is further wreckage, but this in particular seemed easy to
trigger, so blacklist this.
Opt in for known good archs.
Reported-by: Mikulas Patocka <[email protected]>
Signed-off-by: Peter Zijlstra <[email protected]>
---
arch/arm/Kconfig | 1 +
arch/arm64/Kconfig | 1 +
arch/powerpc/Kconfig | 1 +
arch/sparc/Kconfig | 1 +
arch/x86/Kconfig | 1 +
kernel/Kconfig.locks | 5 ++++-
6 files changed, 9 insertions(+), 1 deletion(-)
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index a2c1a18a7275..b8a6a5078eae 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -6,6 +6,7 @@ config ARM
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
select ARCH_HAVE_CUSTOM_GPIO_H
select ARCH_MIGHT_HAVE_PC_PARPORT
+ select ARCH_SUPPORTS_ATOMIC_RMW
select ARCH_USE_BUILTIN_BSWAP
select ARCH_USE_CMPXCHG_LOCKREF
select ARCH_WANT_IPC_PARSE_VERSION
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 5c30b10c0f6c..339dc07f15e2 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -3,6 +3,7 @@ config ARM64
select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
select ARCH_USE_CMPXCHG_LOCKREF
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
+ select ARCH_SUPPORTS_ATOMIC_RMW
select ARCH_WANT_OPTIONAL_GPIOLIB
select ARCH_WANT_COMPAT_IPC_PARSE_VERSION
select ARCH_WANT_FRAME_POINTERS
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index e0998997943b..5bb96f282373 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -145,6 +145,7 @@ config PPC
select HAVE_IRQ_EXIT_ON_IRQ_STACK
select ARCH_USE_CMPXCHG_LOCKREF if PPC64
select HAVE_ARCH_AUDITSYSCALL
+ select ARCH_SUPPORTS_ATOMIC_RMW
config GENERIC_CSUM
def_bool CPU_LITTLE_ENDIAN
diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index 29f2e988c56a..407c87d9879a 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -78,6 +78,7 @@ config SPARC64
select HAVE_C_RECORDMCOUNT
select NO_BOOTMEM
select HAVE_ARCH_AUDITSYSCALL
+ select ARCH_SUPPORTS_ATOMIC_RMW
config ARCH_DEFCONFIG
string
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 3d4951d35587..2fe1f7c5994e 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -130,6 +130,7 @@ config X86
select HAVE_CC_STACKPROTECTOR
select GENERIC_CPU_AUTOPROBE
select HAVE_ARCH_AUDITSYSCALL
+ select ARCH_SUPPORTS_ATOMIC_RMW
config INSTRUCTION_DECODER
def_bool y
diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks
index d2b32ac27a39..ecee67a00f5f 100644
--- a/kernel/Kconfig.locks
+++ b/kernel/Kconfig.locks
@@ -220,6 +220,9 @@ config INLINE_WRITE_UNLOCK_IRQRESTORE
endif
+config ARCH_SUPPORTS_ATOMIC_RMW
+ bool
+
config MUTEX_SPIN_ON_OWNER
def_bool y
- depends on SMP && !DEBUG_MUTEXES
+ depends on SMP && !DEBUG_MUTEXES && ARCH_SUPPORTS_ATOMIC_RMW
On Fri, 2014-06-06 at 19:53 +0200, Peter Zijlstra wrote:
> On Fri, Jun 06, 2014 at 10:22:21AM -0700, Linus Torvalds wrote:
> > On Fri, Jun 6, 2014 at 10:11 AM, Peter Zijlstra <[email protected]> wrote:
> > >
> > > +config ARCH_NO_ATOMIC_RMW
> > > + def_bool y
> > > + depends on PARISC || SPARC32 || METAG_ATOMICITY_LOCK1 || (TILE && !TILEGX) || (ARC && !ARC_HAS_LLSC)
> >
> > Ugh. We've had these kinds of things before, and they are broken and
> > nasty to maintain.
> >
> > Just make it
> >
> > config ARCH_SUPPORTS_ATOMIC_RMW
> > bool
> >
> > which defaults to no. And then make MUTEX_SPIN_ON_OWNER depend on that.
>
> ---
> Subject: locking, mutex: Make optimistic spinning depend on ARCH_SUPPORTS_ATOMIC_RMW
>
> The optimistic spin code assumes regular stores and cmpxchg() play nice;
> this is found to not be true for at least: parisc, sparc32, tile32,
> metag-lock1, arc-!llsc and hexagon.
>
> There is further wreckage, but this in particular seemed easy to
> trigger, so blacklist this.
>
> Opt in for known good archs.
>
> Reported-by: Mikulas Patocka <[email protected]>
> Signed-off-by: Peter Zijlstra <[email protected]>
> ---
> arch/arm/Kconfig | 1 +
> arch/arm64/Kconfig | 1 +
> arch/powerpc/Kconfig | 1 +
> arch/sparc/Kconfig | 1 +
> arch/x86/Kconfig | 1 +
> kernel/Kconfig.locks | 5 ++++-
> 6 files changed, 9 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
> index a2c1a18a7275..b8a6a5078eae 100644
> --- a/arch/arm/Kconfig
> +++ b/arch/arm/Kconfig
> @@ -6,6 +6,7 @@ config ARM
> select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
> select ARCH_HAVE_CUSTOM_GPIO_H
> select ARCH_MIGHT_HAVE_PC_PARPORT
> + select ARCH_SUPPORTS_ATOMIC_RMW
> select ARCH_USE_BUILTIN_BSWAP
> select ARCH_USE_CMPXCHG_LOCKREF
> select ARCH_WANT_IPC_PARSE_VERSION
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 5c30b10c0f6c..339dc07f15e2 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -3,6 +3,7 @@ config ARM64
> select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
> select ARCH_USE_CMPXCHG_LOCKREF
> select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
> + select ARCH_SUPPORTS_ATOMIC_RMW
> select ARCH_WANT_OPTIONAL_GPIOLIB
> select ARCH_WANT_COMPAT_IPC_PARSE_VERSION
> select ARCH_WANT_FRAME_POINTERS
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index e0998997943b..5bb96f282373 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -145,6 +145,7 @@ config PPC
> select HAVE_IRQ_EXIT_ON_IRQ_STACK
> select ARCH_USE_CMPXCHG_LOCKREF if PPC64
> select HAVE_ARCH_AUDITSYSCALL
> + select ARCH_SUPPORTS_ATOMIC_RMW
>
> config GENERIC_CSUM
> def_bool CPU_LITTLE_ENDIAN
> diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
> index 29f2e988c56a..407c87d9879a 100644
> --- a/arch/sparc/Kconfig
> +++ b/arch/sparc/Kconfig
> @@ -78,6 +78,7 @@ config SPARC64
> select HAVE_C_RECORDMCOUNT
> select NO_BOOTMEM
> select HAVE_ARCH_AUDITSYSCALL
> + select ARCH_SUPPORTS_ATOMIC_RMW
Sparc64 does, Sparc32 doesn't, so shouldn't that be
select ARCH_SUPPORTS_ATOMIC_RMW if 64BIT
?
Other than this, looks good to me (you can add reviewed by or acked by
from me to your taste).
James
On Fri, 2014-06-06 at 10:57 -0700, James Bottomley wrote:
> On Fri, 2014-06-06 at 19:53 +0200, Peter Zijlstra wrote:
> > On Fri, Jun 06, 2014 at 10:22:21AM -0700, Linus Torvalds wrote:
> > > On Fri, Jun 6, 2014 at 10:11 AM, Peter Zijlstra <[email protected]> wrote:
> > > >
> > > > +config ARCH_NO_ATOMIC_RMW
> > > > + def_bool y
> > > > + depends on PARISC || SPARC32 || METAG_ATOMICITY_LOCK1 || (TILE && !TILEGX) || (ARC && !ARC_HAS_LLSC)
> > >
> > > Ugh. We've had these kinds of things before, and they are broken and
> > > nasty to maintain.
> > >
> > > Just make it
> > >
> > > config ARCH_SUPPORTS_ATOMIC_RMW
> > > bool
> > >
> > > which defaults to no. And then make MUTEX_SPIN_ON_OWNER depend on that.
> >
> > ---
> > Subject: locking, mutex: Make optimistic spinning depend on ARCH_SUPPORTS_ATOMIC_RMW
> >
> > The optimistic spin code assumes regular stores and cmpxchg() play nice;
> > this is found to not be true for at least: parisc, sparc32, tile32,
> > metag-lock1, arc-!llsc and hexagon.
> >
> > There is further wreckage, but this in particular seemed easy to
> > trigger, so blacklist this.
> >
> > Opt in for known good archs.
> >
> > Reported-by: Mikulas Patocka <[email protected]>
> > Signed-off-by: Peter Zijlstra <[email protected]>
> > ---
> > arch/arm/Kconfig | 1 +
> > arch/arm64/Kconfig | 1 +
> > arch/powerpc/Kconfig | 1 +
> > arch/sparc/Kconfig | 1 +
> > arch/x86/Kconfig | 1 +
> > kernel/Kconfig.locks | 5 ++++-
> > 6 files changed, 9 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
> > index a2c1a18a7275..b8a6a5078eae 100644
> > --- a/arch/arm/Kconfig
> > +++ b/arch/arm/Kconfig
> > @@ -6,6 +6,7 @@ config ARM
> > select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
> > select ARCH_HAVE_CUSTOM_GPIO_H
> > select ARCH_MIGHT_HAVE_PC_PARPORT
> > + select ARCH_SUPPORTS_ATOMIC_RMW
> > select ARCH_USE_BUILTIN_BSWAP
> > select ARCH_USE_CMPXCHG_LOCKREF
> > select ARCH_WANT_IPC_PARSE_VERSION
> > diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> > index 5c30b10c0f6c..339dc07f15e2 100644
> > --- a/arch/arm64/Kconfig
> > +++ b/arch/arm64/Kconfig
> > @@ -3,6 +3,7 @@ config ARM64
> > select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
> > select ARCH_USE_CMPXCHG_LOCKREF
> > select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
> > + select ARCH_SUPPORTS_ATOMIC_RMW
> > select ARCH_WANT_OPTIONAL_GPIOLIB
> > select ARCH_WANT_COMPAT_IPC_PARSE_VERSION
> > select ARCH_WANT_FRAME_POINTERS
> > diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> > index e0998997943b..5bb96f282373 100644
> > --- a/arch/powerpc/Kconfig
> > +++ b/arch/powerpc/Kconfig
> > @@ -145,6 +145,7 @@ config PPC
> > select HAVE_IRQ_EXIT_ON_IRQ_STACK
> > select ARCH_USE_CMPXCHG_LOCKREF if PPC64
> > select HAVE_ARCH_AUDITSYSCALL
> > + select ARCH_SUPPORTS_ATOMIC_RMW
> >
> > config GENERIC_CSUM
> > def_bool CPU_LITTLE_ENDIAN
> > diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
> > index 29f2e988c56a..407c87d9879a 100644
> > --- a/arch/sparc/Kconfig
> > +++ b/arch/sparc/Kconfig
> > @@ -78,6 +78,7 @@ config SPARC64
> > select HAVE_C_RECORDMCOUNT
> > select NO_BOOTMEM
> > select HAVE_ARCH_AUDITSYSCALL
> > + select ARCH_SUPPORTS_ATOMIC_RMW
>
> Sparc64 does, Sparc32 doesn't, so shouldn't that be
>
> select ARCH_SUPPORTS_ATOMIC_RMW if 64BIT
>
> ?
>
> Other than this, looks good to me (you can add reviewed by or acked by
> from me to your taste).
Oh, wait, I see how sparc does it, never mind ...
James
On Fri, Jun 06, 2014 at 10:57:07AM -0700, James Bottomley wrote:
> > diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
> > index 29f2e988c56a..407c87d9879a 100644
> > --- a/arch/sparc/Kconfig
> > +++ b/arch/sparc/Kconfig
> > @@ -78,6 +78,7 @@ config SPARC64
^^^^^^^^^^^^^^
> > select HAVE_C_RECORDMCOUNT
> > select NO_BOOTMEM
> > select HAVE_ARCH_AUDITSYSCALL
> > + select ARCH_SUPPORTS_ATOMIC_RMW
>
> Sparc64 does, Sparc32 doesn't, so shouldn't that be
>
> select ARCH_SUPPORTS_ATOMIC_RMW if 64BIT
>
> ?
arch/sparc/Kconfig has 3 large sections:
config SPARC (included in both)
config SPARC32 (specific to 32bit)
config SPARC64 (specific to 64bit)
I added it to the last.
On Fri, 2014-06-06 at 10:13 -0700, Jason Low wrote:
> On Fri, 2014-06-06 at 08:55 -0700, Davidlohr Bueso wrote:
> > Just like with mutexes (CONFIG_MUTEX_SPIN_ON_OWNER),
> > encapsulate the dependencies for rwsem optimistic spinning.
> > No logical changes here as it continues to depend on both
> > SMP and the XADD algorithm variant.
> >
> > Signed-off-by: Davidlohr Bueso <[email protected]>
> > ---
> > include/linux/rwsem.h | 4 ++--
> > kernel/Kconfig.locks | 4 ++++
> > kernel/locking/rwsem.c | 2 +-
> > 3 files changed, 7 insertions(+), 3 deletions(-)
>
> Do we also want to add an #ifdef CONFIG_RWSEM_SPIN_ON_OWNER in
> __init_rwsem() and in the optimistic spinning functions for rwsem-xadd?
>
Not really, as we conditionally build rwsem-xadd.o based on
CONFIG_RWSEM_XCHGADD_ALGORITHM:
obj-$(CONFIG_RWSEM_GENERIC_SPINLOCK) += rwsem-spinlock.o
obj-$(CONFIG_RWSEM_XCHGADD_ALGORITHM) += rwsem-xadd.o
So leaving just SMP is ok there.
>
> Besides that:
>
> Acked-by: Jason Low <[email protected]>
>
Thanks,
Davidlohr
On Fri, 2014-06-06 at 11:12 -0700, Davidlohr Bueso wrote:
> On Fri, 2014-06-06 at 10:13 -0700, Jason Low wrote:
> > On Fri, 2014-06-06 at 08:55 -0700, Davidlohr Bueso wrote:
> > > Just like with mutexes (CONFIG_MUTEX_SPIN_ON_OWNER),
> > > encapsulate the dependencies for rwsem optimistic spinning.
> > > No logical changes here as it continues to depend on both
> > > SMP and the XADD algorithm variant.
> > >
> > > Signed-off-by: Davidlohr Bueso <[email protected]>
> > > ---
> > > include/linux/rwsem.h | 4 ++--
> > > kernel/Kconfig.locks | 4 ++++
> > > kernel/locking/rwsem.c | 2 +-
> > > 3 files changed, 7 insertions(+), 3 deletions(-)
> >
> > Do we also want to add an #ifdef CONFIG_RWSEM_SPIN_ON_OWNER in
> > __init_rwsem() and in the optimistic spinning functions for rwsem-xadd?
> >
>
> Not really, as we conditionally build rwsem-xadd.o based on
> CONFIG_RWSEM_XCHGADD_ALGORITHM:
>
> obj-$(CONFIG_RWSEM_GENERIC_SPINLOCK) += rwsem-spinlock.o
> obj-$(CONFIG_RWSEM_XCHGADD_ALGORITHM) += rwsem-xadd.o
>
> So leaving just SMP is ok there.
Of course that's bogus when we add more dependencies (ie 2/2), so yeah,
we need to add the whole CONFIG_RWSEM_SPIN_ON_OWNER option.
Thanks.
8<---------------------------------------------------------
From: Davidlohr Bueso <[email protected]>
Date: Fri, 6 Jun 2014 11:45:34 -0700
Subject: [PATCH v2 1/2] locking/rwsem: Add CONFIG_RWSEM_SPIN_ON_OWNER
Just like with mutexes (CONFIG_MUTEX_SPIN_ON_OWNER),
encapsulate the dependencies for rwsem optimistic spinning.
No logical changes here as it continues to depend on both
SMP and the XADD algorithm variant.
Acked-by: Jason Low <[email protected]>
Signed-off-by: Davidlohr Bueso <[email protected]>
---
include/linux/rwsem.h | 4 ++--
kernel/Kconfig.locks | 4 ++++
kernel/locking/rwsem-xadd.c | 2 +-
kernel/locking/rwsem.c | 2 +-
4 files changed, 8 insertions(+), 4 deletions(-)
diff --git a/include/linux/rwsem.h b/include/linux/rwsem.h
index 8d79708..accdef7 100644
--- a/include/linux/rwsem.h
+++ b/include/linux/rwsem.h
@@ -27,7 +27,7 @@ struct rw_semaphore {
long count;
raw_spinlock_t wait_lock;
struct list_head wait_list;
-#ifdef CONFIG_SMP
+#ifdef CONFIG_RWSEM_SPIN_ON_OWNER
/*
* Write owner. Used as a speculative check to see
* if the owner is running on the cpu.
@@ -64,7 +64,7 @@ static inline int rwsem_is_locked(struct rw_semaphore *sem)
# define __RWSEM_DEP_MAP_INIT(lockname)
#endif
-#if defined(CONFIG_SMP) && !defined(CONFIG_RWSEM_GENERIC_SPINLOCK)
+#ifdef CONFIG_RWSEM_SPIN_ON_OWNER
#define __RWSEM_INITIALIZER(name) \
{ RWSEM_UNLOCKED_VALUE, \
__RAW_SPIN_LOCK_UNLOCKED(name.wait_lock), \
diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks
index 35536d9..e4c3162 100644
--- a/kernel/Kconfig.locks
+++ b/kernel/Kconfig.locks
@@ -224,6 +224,10 @@ config MUTEX_SPIN_ON_OWNER
def_bool y
depends on SMP && !DEBUG_MUTEXES
+config RWSEM_SPIN_ON_OWNER
+ def_bool y
+ depends on SMP && RWSEM_XCHGADD_ALGORITHM
+
config ARCH_USE_QUEUE_RWLOCK
bool
diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index dacc321..f6b5b96 100644
--- a/kernel/locking/rwsem-xadd.c
+++ b/kernel/locking/rwsem-xadd.c
@@ -82,7 +82,7 @@ void __init_rwsem(struct rw_semaphore *sem, const char *name,
sem->count = RWSEM_UNLOCKED_VALUE;
raw_spin_lock_init(&sem->wait_lock);
INIT_LIST_HEAD(&sem->wait_list);
-#ifdef CONFIG_SMP
+#ifdef CONFIG_RWSEM_SPIN_ON_OWNER
sem->owner = NULL;
sem->osq = NULL;
#endif
diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index 42f806d..e2d3bc7 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -12,7 +12,7 @@
#include <linux/atomic.h>
-#if defined(CONFIG_SMP) && defined(CONFIG_RWSEM_XCHGADD_ALGORITHM)
+#ifdef CONFIG_RWSEM_SPIN_ON_OWNER
static inline void rwsem_set_owner(struct rw_semaphore *sem)
{
sem->owner = current;
--
1.8.1.4
On Fri, 2014-06-06 at 11:48 -0700, Davidlohr Bueso wrote:
> On Fri, 2014-06-06 at 11:12 -0700, Davidlohr Bueso wrote:
> > On Fri, 2014-06-06 at 10:13 -0700, Jason Low wrote:
> > > On Fri, 2014-06-06 at 08:55 -0700, Davidlohr Bueso wrote:
> > > > Just like with mutexes (CONFIG_MUTEX_SPIN_ON_OWNER),
> > > > encapsulate the dependencies for rwsem optimistic spinning.
> > > > No logical changes here as it continues to depend on both
> > > > SMP and the XADD algorithm variant.
> > > >
> > > > Signed-off-by: Davidlohr Bueso <[email protected]>
> > > > ---
> > > > include/linux/rwsem.h | 4 ++--
> > > > kernel/Kconfig.locks | 4 ++++
> > > > kernel/locking/rwsem.c | 2 +-
> > > > 3 files changed, 7 insertions(+), 3 deletions(-)
> > >
> > > Do we also want to add an #ifdef CONFIG_RWSEM_SPIN_ON_OWNER in
> > > __init_rwsem() and in the optimistic spinning functions for rwsem-xadd?
> > >
> >
> > Not really, as we conditionally build rwsem-xadd.o based on
> > CONFIG_RWSEM_XCHGADD_ALGORITHM:
> >
> > obj-$(CONFIG_RWSEM_GENERIC_SPINLOCK) += rwsem-spinlock.o
> > obj-$(CONFIG_RWSEM_XCHGADD_ALGORITHM) += rwsem-xadd.o
> >
> > So leaving just SMP is ok there.
>
> Of course that's bogus when we add more dependencies (ie 2/2), so yeah,
> we need to add the whole CONFIG_RWSEM_SPIN_ON_OWNER option.
>
> Thanks.
> diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
> index dacc321..f6b5b96 100644
> --- a/kernel/locking/rwsem-xadd.c
> +++ b/kernel/locking/rwsem-xadd.c
> @@ -82,7 +82,7 @@ void __init_rwsem(struct rw_semaphore *sem, const char *name,
> sem->count = RWSEM_UNLOCKED_VALUE;
> raw_spin_lock_init(&sem->wait_lock);
> INIT_LIST_HEAD(&sem->wait_list);
> -#ifdef CONFIG_SMP
> +#ifdef CONFIG_RWSEM_SPIN_ON_OWNER
> sem->owner = NULL;
> sem->osq = NULL;
> #endif
And should we also change that in the optimistic spinning functions so
that it defaults to:
static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
{
return false;
}
in the !CONFIG_RWSEM_SPIN_ON_OWNER case.
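A sketch of the guard structure this suggests for
kernel/locking/rwsem-xadd.c (abbreviated, spinning body elided; not the
final diff):

#ifdef CONFIG_RWSEM_SPIN_ON_OWNER
static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
{
	bool taken = false;

	/* ... owner-tracking spin loop, unchanged ... */
	return taken;
}
#else
static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
{
	return false;
}
#endif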
Thanks.
> diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
> index 42f806d..e2d3bc7 100644
> --- a/kernel/locking/rwsem.c
> +++ b/kernel/locking/rwsem.c
> @@ -12,7 +12,7 @@
>
> #include <linux/atomic.h>
>
> -#if defined(CONFIG_SMP) && defined(CONFIG_RWSEM_XCHGADD_ALGORITHM)
> +#ifdef CONFIG_RWSEM_SPIN_ON_OWNER
> static inline void rwsem_set_owner(struct rw_semaphore *sem)
> {
> sem->owner = current;
On Fri, 2014-06-06 at 12:08 -0700, Jason Low wrote:
> And should we also change that in the optimistic spinning functions so
> that it defaults to:
>
> static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
> {
> return false;
> }
Oops. Yep, afaict that's the last offending user.
8<----------------------------------
From: Davidlohr Bueso <[email protected]>
Date: Fri, 6 Jun 2014 12:15:03 -0700
Subject: [PATCH v3 1/2] locking/rwsem: Add CONFIG_RWSEM_SPIN_ON_OWNER
Just like with mutexes (CONFIG_MUTEX_SPIN_ON_OWNER),
encapsulate the dependencies for rwsem optimistic spinning.
No logical changes here as it continues to depend on both
SMP and the XADD algorithm variant.
Acked-by: Jason Low <[email protected]>
Signed-off-by: Davidlohr Bueso <[email protected]>
---
include/linux/rwsem.h | 4 ++--
kernel/Kconfig.locks | 4 ++++
kernel/locking/rwsem-xadd.c | 4 ++--
kernel/locking/rwsem.c | 2 +-
4 files changed, 9 insertions(+), 5 deletions(-)
diff --git a/include/linux/rwsem.h b/include/linux/rwsem.h
index 8d79708..accdef7 100644
--- a/include/linux/rwsem.h
+++ b/include/linux/rwsem.h
@@ -27,7 +27,7 @@ struct rw_semaphore {
long count;
raw_spinlock_t wait_lock;
struct list_head wait_list;
-#ifdef CONFIG_SMP
+#ifdef CONFIG_RWSEM_SPIN_ON_OWNER
/*
* Write owner. Used as a speculative check to see
* if the owner is running on the cpu.
@@ -64,7 +64,7 @@ static inline int rwsem_is_locked(struct rw_semaphore *sem)
# define __RWSEM_DEP_MAP_INIT(lockname)
#endif
-#if defined(CONFIG_SMP) && !defined(CONFIG_RWSEM_GENERIC_SPINLOCK)
+#ifdef CONFIG_RWSEM_SPIN_ON_OWNER
#define __RWSEM_INITIALIZER(name) \
{ RWSEM_UNLOCKED_VALUE, \
__RAW_SPIN_LOCK_UNLOCKED(name.wait_lock), \
diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks
index 35536d9..e4c3162 100644
--- a/kernel/Kconfig.locks
+++ b/kernel/Kconfig.locks
@@ -224,6 +224,10 @@ config MUTEX_SPIN_ON_OWNER
def_bool y
depends on SMP && !DEBUG_MUTEXES
+config RWSEM_SPIN_ON_OWNER
+ def_bool y
+ depends on SMP && RWSEM_XCHGADD_ALGORITHM
+
config ARCH_USE_QUEUE_RWLOCK
bool
diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index dacc321..abe6e06 100644
--- a/kernel/locking/rwsem-xadd.c
+++ b/kernel/locking/rwsem-xadd.c
@@ -82,7 +82,7 @@ void __init_rwsem(struct rw_semaphore *sem, const char *name,
sem->count = RWSEM_UNLOCKED_VALUE;
raw_spin_lock_init(&sem->wait_lock);
INIT_LIST_HEAD(&sem->wait_list);
-#ifdef CONFIG_SMP
+#ifdef CONFIG_RWSEM_SPIN_ON_OWNER
sem->owner = NULL;
sem->osq = NULL;
#endif
@@ -262,7 +262,7 @@ static inline bool rwsem_try_write_lock(long count, struct rw_semaphore *sem)
return false;
}
-#ifdef CONFIG_SMP
+#ifdef CONFIG_RWSEM_SPIN_ON_OWNER
/*
* Try to acquire write lock before the writer has been put on wait queue.
*/
diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index 42f806d..e2d3bc7 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -12,7 +12,7 @@
#include <linux/atomic.h>
-#if defined(CONFIG_SMP) && defined(CONFIG_RWSEM_XCHGADD_ALGORITHM)
+#ifdef CONFIG_RWSEM_SPIN_ON_OWNER
static inline void rwsem_set_owner(struct rw_semaphore *sem)
{
sem->owner = current;
--
1.8.1.4
Commit-ID: 4badad352a6bb202ec68afa7a574c0bb961e5ebc
Gitweb: http://git.kernel.org/tip/4badad352a6bb202ec68afa7a574c0bb961e5ebc
Author: Peter Zijlstra <[email protected]>
AuthorDate: Fri, 6 Jun 2014 19:53:16 +0200
Committer: Ingo Molnar <[email protected]>
CommitDate: Wed, 16 Jul 2014 14:57:07 +0200
locking/mutex: Disable optimistic spinning on some architectures
The optimistic spin code assumes regular stores and cmpxchg() play nice;
this is found to not be true for at least: parisc, sparc32, tile32,
metag-lock1, arc-!llsc and hexagon.
There is further wreckage, but this in particular seemed easy to
trigger, so blacklist this.
Opt in for known good archs.
Signed-off-by: Peter Zijlstra <[email protected]>
Reported-by: Mikulas Patocka <[email protected]>
Cc: David Miller <[email protected]>
Cc: Chris Metcalf <[email protected]>
Cc: James Bottomley <[email protected]>
Cc: Vineet Gupta <[email protected]>
Cc: Jason Low <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: "James E.J. Bottomley" <[email protected]>
Cc: Paul McKenney <[email protected]>
Cc: John David Anglin <[email protected]>
Cc: James Hogan <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Davidlohr Bueso <[email protected]>
Cc: [email protected]
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Russell King <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
arch/arm/Kconfig | 1 +
arch/arm64/Kconfig | 1 +
arch/powerpc/Kconfig | 1 +
arch/sparc/Kconfig | 1 +
arch/x86/Kconfig | 1 +
kernel/Kconfig.locks | 5 ++++-
6 files changed, 9 insertions(+), 1 deletion(-)
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 245058b..88acf8b 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -6,6 +6,7 @@ config ARM
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
select ARCH_HAVE_CUSTOM_GPIO_H
select ARCH_MIGHT_HAVE_PC_PARPORT
+ select ARCH_SUPPORTS_ATOMIC_RMW
select ARCH_USE_BUILTIN_BSWAP
select ARCH_USE_CMPXCHG_LOCKREF
select ARCH_WANT_IPC_PARSE_VERSION
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index a474de34..839f48c 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -4,6 +4,7 @@ config ARM64
select ARCH_HAS_OPP
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
select ARCH_USE_CMPXCHG_LOCKREF
+ select ARCH_SUPPORTS_ATOMIC_RMW
select ARCH_WANT_OPTIONAL_GPIOLIB
select ARCH_WANT_COMPAT_IPC_PARSE_VERSION
select ARCH_WANT_FRAME_POINTERS
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index fefe7c8..80b94b0 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -145,6 +145,7 @@ config PPC
select HAVE_IRQ_EXIT_ON_IRQ_STACK
select ARCH_USE_CMPXCHG_LOCKREF if PPC64
select HAVE_ARCH_AUDITSYSCALL
+ select ARCH_SUPPORTS_ATOMIC_RMW
config GENERIC_CSUM
def_bool CPU_LITTLE_ENDIAN
diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index 29f2e98..407c87d 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -78,6 +78,7 @@ config SPARC64
select HAVE_C_RECORDMCOUNT
select NO_BOOTMEM
select HAVE_ARCH_AUDITSYSCALL
+ select ARCH_SUPPORTS_ATOMIC_RMW
config ARCH_DEFCONFIG
string
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index a8f749e..d24887b 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -131,6 +131,7 @@ config X86
select HAVE_CC_STACKPROTECTOR
select GENERIC_CPU_AUTOPROBE
select HAVE_ARCH_AUDITSYSCALL
+ select ARCH_SUPPORTS_ATOMIC_RMW
config INSTRUCTION_DECODER
def_bool y
diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks
index 35536d9..8190794 100644
--- a/kernel/Kconfig.locks
+++ b/kernel/Kconfig.locks
@@ -220,9 +220,12 @@ config INLINE_WRITE_UNLOCK_IRQRESTORE
endif
+config ARCH_SUPPORTS_ATOMIC_RMW
+ bool
+
config MUTEX_SPIN_ON_OWNER
def_bool y
- depends on SMP && !DEBUG_MUTEXES
+ depends on SMP && !DEBUG_MUTEXES && ARCH_SUPPORTS_ATOMIC_RMW
config ARCH_USE_QUEUE_RWLOCK
bool
s390: s390 also supports proper atomic read-modify-write, so
there is no need to disable mutex spinning. The instructions
CS, CSG and friends provide the proper guarantees. (We don't
implement cmpxchg with locks.)
Signed-off-by: Christian Borntraeger <[email protected]>
---
arch/s390/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index bb63499..8c5df31 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -63,6 +63,7 @@ config S390
select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
select ARCH_HAS_DEBUG_STRICT_USER_COPY_CHECKS
select ARCH_HAVE_NMI_SAFE_CMPXCHG
+ select ARCH_SUPPORTS_ATOMIC_RMW
select ARCH_INLINE_READ_LOCK
select ARCH_INLINE_READ_LOCK_BH
select ARCH_INLINE_READ_LOCK_IRQ
--
1.7.9.5
On 17/07/14 13:51, Christian Borntraeger wrote:
> select ARCH_HAVE_NMI_SAFE_CMPXCHG
> + select ARCH_SUPPORTS_ATOMIC_RMW
This begs the question of whether both defines were created for the same reason and could be combined.
Christian
On Wed, 2014-07-16 at 12:24 -0700, tip-bot for Peter Zijlstra wrote:
> Commit-ID: 4badad352a6bb202ec68afa7a574c0bb961e5ebc
> Gitweb: http://git.kernel.org/tip/4badad352a6bb202ec68afa7a574c0bb961e5ebc
> Author: Peter Zijlstra <[email protected]>
> AuthorDate: Fri, 6 Jun 2014 19:53:16 +0200
> Committer: Ingo Molnar <[email protected]>
> CommitDate: Wed, 16 Jul 2014 14:57:07 +0200
>
> locking/mutex: Disable optimistic spinning on some architectures
>
> The optimistic spin code assumes regular stores and cmpxchg() play nice;
> this is found to not be true for at least: parisc, sparc32, tile32,
> metag-lock1, arc-!llsc and hexagon.
>
> There is further wreckage, but this in particular seemed easy to
> trigger, so blacklist this.
>
> Opt in for known good archs.
>
> Signed-off-by: Peter Zijlstra <[email protected]>
> Reported-by: Mikulas Patocka <[email protected]>
> Cc: David Miller <[email protected]>
> Cc: Chris Metcalf <[email protected]>
> Cc: James Bottomley <[email protected]>
> Cc: Vineet Gupta <[email protected]>
> Cc: Jason Low <[email protected]>
> Cc: Waiman Long <[email protected]>
> Cc: "James E.J. Bottomley" <[email protected]>
> Cc: Paul McKenney <[email protected]>
> Cc: John David Anglin <[email protected]>
> Cc: James Hogan <[email protected]>
> Cc: Linus Torvalds <[email protected]>
> Cc: Davidlohr Bueso <[email protected]>
> Cc: [email protected]
> Cc: Benjamin Herrenschmidt <[email protected]>
> Cc: Catalin Marinas <[email protected]>
> Cc: Russell King <[email protected]>
> Cc: Will Deacon <[email protected]>
> Cc: [email protected]
> Cc: [email protected]
> Cc: [email protected]
> Cc: [email protected]
> Link: http://lkml.kernel.org/r/[email protected]
> Signed-off-by: Ingo Molnar <[email protected]>
Cc'ing stable: this issue was reported to begin after commit fb0527bd,
so since v3.13.
Thanks,
Davidlohr
On Thu, 2014-07-17 at 20:07 -0700, Davidlohr Bueso wrote:
> On Wed, 2014-07-16 at 12:24 -0700, tip-bot for Peter Zijlstra wrote:
> > Commit-ID: 4badad352a6bb202ec68afa7a574c0bb961e5ebc
> > Gitweb: http://git.kernel.org/tip/4badad352a6bb202ec68afa7a574c0bb961e5ebc
> > Author: Peter Zijlstra <[email protected]>
> > AuthorDate: Fri, 6 Jun 2014 19:53:16 +0200
> > Committer: Ingo Molnar <[email protected]>
> > CommitDate: Wed, 16 Jul 2014 14:57:07 +0200
> >
> > locking/mutex: Disable optimistic spinning on some architectures
> >
> > The optimistic spin code assumes regular stores and cmpxchg() play nice;
> > this is found to not be true for at least: parisc, sparc32, tile32,
> > metag-lock1, arc-!llsc and hexagon.
> >
> > There is further wreckage, but this in particular seemed easy to
> > trigger, so blacklist this.
> >
> > Opt in for known good archs.
> >
> > Signed-off-by: Peter Zijlstra <[email protected]>
> > Reported-by: Mikulas Patocka <[email protected]>
> > Cc: David Miller <[email protected]>
> > Cc: Chris Metcalf <[email protected]>
> > Cc: James Bottomley <[email protected]>
> > Cc: Vineet Gupta <[email protected]>
> > Cc: Jason Low <[email protected]>
> > Cc: Waiman Long <[email protected]>
> > Cc: "James E.J. Bottomley" <[email protected]>
> > Cc: Paul McKenney <[email protected]>
> > Cc: John David Anglin <[email protected]>
> > Cc: James Hogan <[email protected]>
> > Cc: Linus Torvalds <[email protected]>
> > Cc: Davidlohr Bueso <[email protected]>
> > Cc: [email protected]
> > Cc: Benjamin Herrenschmidt <[email protected]>
> > Cc: Catalin Marinas <[email protected]>
> > Cc: Russell King <[email protected]>
> > Cc: Will Deacon <[email protected]>
> > Cc: [email protected]
> > Cc: [email protected]
> > Cc: [email protected]
> > Cc: [email protected]
> > Link: http://lkml.kernel.org/r/[email protected]
> > Signed-off-by: Ingo Molnar <[email protected]>
>
> Cc'ing stable: this issue was reported to begin after commit fb0527bd,
> so since v3.13.
Bah, never mind, I missed it when scanning the Cc list. Sorry about the
noise.