Subject: Re: [RFC] arm64: Implement WFE based spin wait for MCS spinlocks
From: Jason Low
To: Peter Zijlstra
Cc: Will Deacon, Linus Torvalds, "linux-kernel@vger.kernel.org",
    "mingo@redhat.com", "paulmck@linux.vnet.ibm.com", terry.rudd@hpe.com,
    "Long, Wai Man", "boqun.feng@gmail.com", "dave@stgolabs.net",
    jason.low2@hp.com
Date: Wed, 20 Apr 2016 12:36:51 -0700
Message-ID: <1461181011.3113.75.camel@j-VirtualBox>
In-Reply-To: <20160420103059.GX3408@twins.programming.kicks-ass.net>
References: <1460618018.2871.25.camel@j-VirtualBox>
 <20160420103059.GX3408@twins.programming.kicks-ass.net>
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, 2016-04-20 at 12:30 +0200, Peter Zijlstra wrote:
> On Thu, Apr 14, 2016 at 12:13:38AM -0700, Jason Low wrote:
> > Use WFE to avoid most spinning with MCS spinlocks. This is implemented
> > with the new cmpwait() mechanism for comparing and waiting for the MCS
> > locked value to change using LDXR + WFE.
> >
> > Signed-off-by: Jason Low
> > ---
> >  arch/arm64/include/asm/mcs_spinlock.h | 21 +++++++++++++++++++++
> >  1 file changed, 21 insertions(+)
> >  create mode 100644 arch/arm64/include/asm/mcs_spinlock.h
> >
> > diff --git a/arch/arm64/include/asm/mcs_spinlock.h b/arch/arm64/include/asm/mcs_spinlock.h
> > new file mode 100644
> > index 0000000..d295d9d
> > --- /dev/null
> > +++ b/arch/arm64/include/asm/mcs_spinlock.h
> > @@ -0,0 +1,21 @@
> > +#ifndef __ASM_MCS_SPINLOCK_H
> > +#define __ASM_MCS_SPINLOCK_H
> > +
> > +#define arch_mcs_spin_lock_contended(l)			\
> > +do {							\
> > +	int locked_val;					\
> > +	for (;;) {					\
> > +		locked_val = READ_ONCE(*l);		\
> > +		if (locked_val)				\
> > +			break;				\
> > +		cmpwait(l, locked_val);			\
> > +	}						\
> > +	smp_rmb();					\
> > +} while (0)
>
> If you make the generic version use smp_cond_load_acquire() this isn't
> needed.

Yup, in the email thread about modifying the generic version to use
smp_cond_load_acquire(), I mentioned that overriding it in arch/arm64
would no longer be needed. Will made a suggestion about overriding it on
arm64, but it turns out he was just referring to avoiding the immediate
dependency on smp_cond_load_acquire().

Thanks,
Jason