Date: Wed, 15 Jul 2015 20:18:23 -0400
From: Waiman Long
To: Peter Zijlstra
Cc: Ingo Molnar, Thomas Gleixner, "H. Peter Anvin", x86@kernel.org,
    linux-kernel@vger.kernel.org, Scott J Norton, Douglas Hatch,
    Davidlohr Bueso
Subject: Re: [PATCH v2 1/6] locking/pvqspinlock: Unconditional PV kick with _Q_SLOW_VAL
Message-ID: <55A6F84F.5050502@hp.com>
References: <1436926417-20256-1-git-send-email-Waiman.Long@hp.com>
 <1436926417-20256-2-git-send-email-Waiman.Long@hp.com>
 <20150715091012.GG2859@worktop.programming.kicks-ass.net>
In-Reply-To: <20150715091012.GG2859@worktop.programming.kicks-ass.net>

On 07/15/2015 05:10 AM, Peter Zijlstra wrote:
> On Tue, Jul 14, 2015 at 10:13:32PM -0400, Waiman Long wrote:
>> The smp_store_release() is not a full barrier. In order to avoid missed
>> wakeups, we may need to add memory barriers around the locked and cpu
>> state variables, adding to the complexity. As the chance of a spurious
>> wakeup is very low, it is easier and safer to just do an unconditional
>> kick at unlock time.
> I have the below patch. We need that rmb in there anyhow for the hash
> to work.
>
> ---
> Subject: locking/pvqspinlock: Order pv_unhash after cmpxchg on unlock slowpath
> From: Will Deacon
> Date: Mon, 13 Jul 2015 16:58:30 +0100
>
> When we unlock in __pv_queued_spin_unlock(), a failed cmpxchg() on the
> lock value indicates that we need to take the slow path and unhash the
> corresponding node blocked on the lock.
>
> Since a failed cmpxchg() does not provide any memory-ordering guarantees,
> it is possible that the node data could be read before the cmpxchg() on
> weakly-ordered architectures and therefore return a stale value, leading
> to hash corruption and/or a BUG().
>
> This patch adds an smp_rmb() following the failed cmpxchg() operation,
> so that the unhashing is ordered after the lock has been checked.
>
> Cc: Paul McKenney
> Cc: Waiman Long
> Cc: Steve Capper
> Reported-by: Peter Zijlstra
> Signed-off-by: Will Deacon
> [peterz: More comments]
> Signed-off-by: Peter Zijlstra (Intel)
> Link: http://lkml.kernel.org/r/20150713155830.GL2632@arm.com
> ---
>  kernel/locking/qspinlock_paravirt.h | 23 ++++++++++++++++++-----
>  1 file changed, 18 insertions(+), 5 deletions(-)
>
> --- a/kernel/locking/qspinlock_paravirt.h
> +++ b/kernel/locking/qspinlock_paravirt.h
> @@ -244,13 +244,17 @@ static void pv_wait_head(struct qspinloc
>  	if (!lp) { /* ONCE */
>  		lp = pv_hash(lock, pn);
>  		/*
> -		 * lp must be set before setting _Q_SLOW_VAL
> +		 * We must hash before setting _Q_SLOW_VAL, such that
> +		 * when we observe _Q_SLOW_VAL in __pv_queued_spin_unlock()
> +		 * we'll be sure to be able to observe our hash entry.
>  		 *
> -		 * [S] lp = lock                 [RmW] l = l->locked = 0
> -		 *     MB                              MB
> -		 * [S] l->locked = _Q_SLOW_VAL   [L]   lp
> +		 *   [S] pn->state
> +		 *   [S] <hash>                    [Rmw] l->locked == _Q_SLOW_VAL
> +		 *       MB                              RMB
> +		 * [RmW] l->locked = _Q_SLOW_VAL   [L] <unhash>
> +		 *                                 [L] pn->state
>  		 *
> -		 * Matches the cmpxchg() in __pv_queued_spin_unlock().
> +		 * Matches the smp_rmb() in __pv_queued_spin_unlock().
>  		 */
>  		if (!cmpxchg(&l->locked, _Q_LOCKED_VAL, _Q_SLOW_VAL)) {
>  			/*
> @@ -306,6 +310,15 @@ __visible void __pv_queued_spin_unlock(s
>  	}
>
>  	/*
> +	 * A failed cmpxchg doesn't provide any memory-ordering guarantees,
> +	 * so we need a barrier to order the read of the node data in
> +	 * pv_unhash *after* we've read the lock being _Q_SLOW_VAL.
> +	 *
> +	 * Matches the cmpxchg() in pv_wait_head() setting _Q_SLOW_VAL.
> +	 */
> +	smp_rmb();
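If I condense the two sides after this patch, the handshake looks
roughly like the sketch below. This is my paraphrase rather than the
literal code in kernel/locking/qspinlock_paravirt.h -- the hashing
details are elided, and the real unlock path also clears the lock with
smp_store_release() before kicking:

	/* waiter -- pv_wait_head() */
	lp = pv_hash(lock, pn);		/* [S] publish the hash entry */
	/*
	 * cmpxchg() is a full barrier, so the hash entry is visible
	 * before anyone can observe _Q_SLOW_VAL.
	 */
	cmpxchg(&l->locked, _Q_LOCKED_VAL, _Q_SLOW_VAL);

	/* unlocker -- __pv_queued_spin_unlock() */
	if (cmpxchg(&l->locked, _Q_LOCKED_VAL, 0) != _Q_LOCKED_VAL) {
		/*
		 * Failed cmpxchg: we observed _Q_SLOW_VAL, but the
		 * failed operation may not order the loads after it,
		 * hence the new barrier:
		 */
		smp_rmb();
		node = pv_unhash(lock);	/* [L] ordered after the lock load */
		pv_kick(node->cpu);
	}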
According to Documentation/memory-barriers.txt, cmpxchg() is a full
memory barrier. It doesn't say that a failed cmpxchg() loses that
guarantee. So is the documentation wrong, or is the weaker ordering on
failure specific to some architectures? I don't think it is true for
x86.

Cheers,
Longman
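P.S. One data point for the question above: the C11 atomics API makes
the success/failure distinction explicit -- a compare-exchange takes
separate memory orders for the two outcomes, and the failure order is
allowed to be relaxed. A userspace illustration of the pattern (my own
sketch with made-up names, not kernel code):

	#include <stdatomic.h>

	#define _Q_LOCKED_VAL 1U	/* as in asm-generic/qspinlock_types.h */

	struct pv_node { int cpu; };	/* stand-in for the kernel's pv_node */

	static _Atomic unsigned int locked = _Q_LOCKED_VAL;
	static _Atomic(struct pv_node *) hash_entry; /* published by waiter */

	/* Returns the cpu to kick, or -1 if the fast path released the lock. */
	static int unlock_sketch(void)
	{
		unsigned int old = _Q_LOCKED_VAL;

		/*
		 * Success is seq_cst; failure is explicitly relaxed --
		 * the weakest ordering a failed cmpxchg() might have on
		 * a weakly-ordered architecture.
		 */
		if (atomic_compare_exchange_strong_explicit(&locked, &old, 0U,
							    memory_order_seq_cst,
							    memory_order_relaxed))
			return -1;

		/*
		 * Failed CAS: the lock word held something other than
		 * _Q_LOCKED_VAL. Without this acquire fence, the
		 * hash_entry load below could be satisfied with a stale
		 * value on a weakly-ordered machine.
		 */
		atomic_thread_fence(memory_order_acquire);

		return atomic_load_explicit(&hash_entry,
					    memory_order_relaxed)->cpu;
	}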