Date: Wed, 7 Jan 2009 14:06:40 -0800 (PST)
From: Linus Torvalds
To: Peter Zijlstra
Cc: Steven Rostedt, paulmck@linux.vnet.ibm.com, Gregory Haskins, Ingo Molnar, Matthew Wilcox, Andi Kleen, Chris Mason, Andrew Morton, Linux Kernel Mailing List, linux-fsdevel, linux-btrfs, Thomas Gleixner, Nick Piggin, Peter Morreale, Sven Dietrich
Subject: Re: [PATCH -v5][RFC]: mutex: implement adaptive spinning

On Wed, 7 Jan 2009, Linus Torvalds wrote:
>
> We don't actually care that it only happens once: this all has _known_
> races, and the "cpu_relax()" is a barrier.

I phrased that badly.
It's not so much that it has "known races"; it's really that the whole code sequence is very much written and intended to be optimistic. So whatever code motion or CPU memory reordering happens, we don't really care, because none of the tests are final.

We do need to make sure that the compiler doesn't optimize the loads out of the loops _entirely_, but the "cpu_relax()" calls that we need for other reasons guarantee that part.

One related issue: since we avoid the spinlock, we now suddenly end up relying on the "atomic_cmpxchg()" having lock-acquire memory ordering semantics, because _that_ is the one non-speculative thing we do end up doing in the whole loop.

But atomic_cmpxchg() is currently defined to be a full memory barrier, so we should be ok. The only issue might be that it's _too_ much of a memory barrier for some architectures, but this is not the pure fastpath, so I think we're all good.

		Linus
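[Editor's note, not part of the original mail: a rough userspace sketch of the pattern being discussed, using C11 atomics rather than the kernel's primitives. The names `spin_mutex_t`, `try_acquire`, and `spin_on_owner` are hypothetical; `cpu_relax_sketch` only approximates the kernel's arch-specific `cpu_relax()` with a compiler barrier. The acquire ordering on the successful compare-exchange mirrors the lock-acquire semantics the mail says `atomic_cmpxchg()` must provide.]

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Approximation of cpu_relax(): a compiler barrier that keeps the
 * loads from being optimized out of the loop entirely.  The real
 * kernel cpu_relax() is arch-specific (e.g. a PAUSE instruction). */
static inline void cpu_relax_sketch(void)
{
	atomic_signal_fence(memory_order_seq_cst);
}

typedef struct {
	_Atomic long owner;	/* 0 means unlocked */
} spin_mutex_t;

/* The one non-speculative step in the loop: the compare-exchange.
 * Success uses acquire ordering, the "lock acquire" semantics the
 * mail relies on (the kernel's cmpxchg is a full barrier, which is
 * strictly stronger). */
static bool try_acquire(spin_mutex_t *m, long self)
{
	long expected = 0;

	return atomic_compare_exchange_strong_explicit(
		&m->owner, &expected, self,
		memory_order_acquire, memory_order_relaxed);
}

/* Optimistic spin: every test is non-final, so racy observations are
 * fine.  If we spin out, the caller would fall back to blocking
 * (not shown here). */
static bool spin_on_owner(spin_mutex_t *m, long self, int max_spins)
{
	for (int i = 0; i < max_spins; i++) {
		if (try_acquire(m, self))
			return true;
		cpu_relax_sketch();
	}
	return false;
}
```

The point of the sketch is the ordering split: the loads inside the loop can be as weak and racy as the compiler and CPU like, because only the compare-exchange commits to anything, and that is where the acquire barrier lives.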