Date: Sun, 15 Feb 2015 17:07:00 +0100
From: Oleg Nesterov
To: Raghavendra K T
Subject: Re: [PATCH V4] x86 spinlock: Fix memory corruption on completing completions
Message-ID: <20150215160700.GA27608@redhat.com>
In-Reply-To: <54E032F1.5060503@linux.vnet.ibm.com>

On 02/15, Raghavendra K T wrote:
>
> On 02/13/2015 09:02 PM, Oleg Nesterov wrote:
>
>>> @@ -772,7 +773,8 @@ __visible void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
>>>  	 * check again make sure it didn't become free while
>>>  	 * we weren't looking.
>>>  	 */
>>> -	if (ACCESS_ONCE(lock->tickets.head) == want) {
>>> +	head = READ_ONCE(lock->tickets.head);
>>> +	if (__tickets_equal(head, want)) {
>>>  		add_stats(TAKEN_SLOW_PICKUP, 1);
>>>  		goto out;
>>
>> This is off-topic, but with or without this change perhaps it makes sense
>> to add smp_mb__after_atomic(). It is a nop on x86, just to make this code
>> more understandable for those (for me ;) who can never remember even the
>> x86 rules.
>
> Hope you meant it for add_stat.

No, no. We need a barrier between set_bit(SLOWPATH) and tickets_equal().
Yes, on x86 set_bit() can't be reordered, so smp_mb__after_atomic() is a
nop, but it can make the code more understandable.

> Yes, smp_mb__after_atomic() would be a harmless barrier() on x86. Did
> not add this in V5 as you thought, but this made me look at the
> slowpath_enter code and added an explicit barrier() there :).

Well, it looks even more confusing than the lack of a barrier ;)

Oleg.