Date: Wed, 11 Feb 2015 15:15:08 -0800
From: Jeremy Fitzhardinge
To: Oleg Nesterov
CC: Raghavendra K T, Linus Torvalds, Sasha Levin, Davidlohr Bueso, Peter Zijlstra, Thomas Gleixner, Ingo Molnar, Peter Anvin, Konrad Rzeszutek Wilk, Paolo Bonzini, Paul McKenney, Waiman Long, Dave Jones, the arch/x86 maintainers, Paul Gortmaker, Andi Kleen, Jason Wang, Linux Kernel Mailing List, KVM list, virtualization, xen-devel@lists.xenproject.org, Rik van Riel, Christian Borntraeger, Andrew Morton, Andrey Ryabinin
Subject: Re: [PATCH] x86 spinlock: Fix memory corruption on completing completions

On 02/11/2015 09:24 AM, Oleg Nesterov wrote:
> I agree, and I have to admit I am not sure I fully understand why
> unlock uses the locked add. Except we need a barrier to avoid the race
> with the enter_slowpath() users, of course. Perhaps this is the only
> reason?

Right now it needs to be a locked operation to prevent read-reordering.
x86 memory ordering rules state that all writes are seen in a globally
consistent order, and are globally ordered wrt reads *on the same
addresses*, but reads to different addresses can be reordered wrt writes.

So, if the unlocking add were not a locked operation:

        __add(&lock->tickets.head, TICKET_LOCK_INC);    /* not locked */

        if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
                __ticket_unlock_slowpath(lock, prev);

then the read of lock->tickets.tail can be reordered before the unlock,
which introduces a race:

        /* read reordered here */
        if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))  /* false */
                /* ... */;

                        /* other CPU sets SLOWPATH and blocks */

        __add(&lock->tickets.head, TICKET_LOCK_INC);    /* not locked */

                        /* other CPU hung */

So it doesn't *have* to be a locked operation. This should also work:

        __add(&lock->tickets.head, TICKET_LOCK_INC);    /* not locked */

        lfence();                                       /* prevent read reordering */
        if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
                __ticket_unlock_slowpath(lock, prev);

but in practice a locked add is cheaper than an lfence (or at least was).
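For reference, all of these snippets assume the x86 ticket-lock layout of
that era, roughly as below (a simplified sketch in the spirit of
arch/x86/include/asm/spinlock_types.h; exact types and constants depend on
CONFIG_NR_CPUS and the paravirt configuration):

        typedef u8  __ticket_t;                 /* u16 on larger NR_CPUS configs */
        typedef u16 __ticketpair_t;             /* wide enough to hold both tickets */

        #define TICKET_SHIFT            (sizeof(__ticket_t) * 8)
        #define TICKET_SLOWPATH_FLAG    ((__ticket_t)1)  /* with paravirt spinlocks */
        #define TICKET_LOCK_INC         ((__ticket_t)2)  /* with paravirt spinlocks */

        typedef struct arch_spinlock {
                union {
                        __ticketpair_t head_tail;       /* both tickets as one word */
                        struct __raw_tickets {
                                __ticket_t head, tail;  /* head: now serving; tail: next ticket */
                        } tickets;
                };
        } arch_spinlock_t;

The point to notice is that head and tail share one word, so a read of
lock->head_tail overlaps the bytes the unlocking add just wrote; that
overlap is what the variant below leans on.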
This *might* be OK, but I think it's on dubious ground:

        __add(&lock->tickets.head, TICKET_LOCK_INC);    /* not locked */

        /* read overlaps write, and so is ordered */
        if (unlikely(lock->head_tail & (TICKET_SLOWPATH_FLAG << TICKET_SHIFT)))
                __ticket_unlock_slowpath(lock, prev);

because I think Intel and AMD differed in interpretation about how
overlapping but different-sized reads & writes are ordered (or it simply
isn't architecturally defined).

If the slowpath flag is moved to head, then it would always have to be
locked anyway, because it needs to be atomic against other CPUs' RMW
operations setting the flag (roughly as sketched below).

    J
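To make that last point concrete, an unlock with the flag kept in head
would have to be built around a locked read-modify-write, something like
the sketch below. This is hypothetical only: the kernel keeps the flag in
tail, the function name here is made up, and the argument passed to the
slowpath helper is illustrative rather than the real signature.

        static __always_inline void __ticket_unlock(arch_spinlock_t *lock)
        {
                /* Locked xadd: bumps head and returns its old value in one
                 * atomic step, so a slowpath flag set concurrently by
                 * another CPU in the same field cannot be missed. */
                __ticket_t old = xadd(&lock->tickets.head, TICKET_LOCK_INC);

                if (unlikely(old & TICKET_SLOWPATH_FLAG))
                        __ticket_unlock_slowpath(lock, old);
        }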