Date: Thu, 15 Jan 2009 17:34:43 -0800
From: "Paul E. McKenney"
Reply-To: paulmck@linux.vnet.ibm.com
To: Linus Torvalds
Cc: Ingo Molnar, Matthew Wilcox, Peter Zijlstra, Gregory Haskins,
    Andi Kleen, Chris Mason, Andrew Morton,
    Linux Kernel Mailing List, linux-fsdevel, linux-btrfs,
    Thomas Gleixner, Nick Piggin, Peter Morreale, Sven Dietrich,
    Dmitry Adamushko, Johannes Weiner
Subject: Re: [GIT PULL] adaptive spinning mutexes
Message-ID: <20090116013443.GH6763@linux.vnet.ibm.com>
References: <1231952436.14825.28.camel@laptop> <20090114183319.GA18630@elte.hu>
    <20090114184746.GA21334@elte.hu> <20090114192811.GA19691@elte.hu>
    <20090115174440.GF29283@parisc-linux.org> <20090115180844.GL22472@elte.hu>
    <20090116005351.GD6763@linux.vnet.ibm.com>

On Thu, Jan 15, 2009 at 05:01:32PM -0800, Linus Torvalds wrote:
> On Thu, 15 Jan 2009, Paul E. McKenney wrote:
> > On Thu, Jan 15, 2009 at 10:16:53AM -0800, Linus Torvalds wrote:
> > >
> > > IOW, if you do pre-allocation instead of holding a lock over the
> > > allocation, you win. So yes, spin-mutexes makes it easier to write the
> > > code, but it also makes it easier to just plain be lazy.
> >
> > In infrequently invoked code such as some error handling, lazy/simple
> > can be a big win.
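For the archives, the pre-allocation pattern being discussed can be
sketched in user-space C.  This is only an illustrative sketch with
hypothetical names (a trivial list insert, pthreads rather than kernel
mutexes), not code from any real filesystem:

```c
/* Sketch of the pre-allocation pattern: do the possibly-slow
 * allocation before taking the lock, so the critical section
 * never blocks on the allocator.  Hypothetical example. */
#include <pthread.h>
#include <stdlib.h>

struct node { int val; struct node *next; };

static struct node *head;
static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

/* Lazy: allocation (which may stall under memory pressure)
 * happens with the lock held, delaying every other acquirer. */
int insert_lazy(int val)
{
	struct node *n;

	pthread_mutex_lock(&list_lock);
	n = malloc(sizeof(*n));
	if (!n) {
		pthread_mutex_unlock(&list_lock);
		return -1;
	}
	n->val = val;
	n->next = head;
	head = n;
	pthread_mutex_unlock(&list_lock);
	return 0;
}

/* Pre-allocated: the lock is held only for the two pointer
 * updates, so a slow allocation never extends the hold time. */
int insert_prealloc(int val)
{
	struct node *n = malloc(sizeof(*n));

	if (!n)
		return -1;
	n->val = val;
	pthread_mutex_lock(&list_lock);
	n->next = head;
	head = n;
	pthread_mutex_unlock(&list_lock);
	return 0;
}
```

Same external behavior either way; the difference is only in what the
critical section can block on.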
> Sure. I don't disagree at all. On such code we don't even care about
> locking. If it _really_ is fundamentally very rarely invoked.
>
> But if we're talking things like core filesystem locks, it's _really_
> irritating when one of those (supposedly rare) allocation delays or the
> need to do IO then blocks all those (supposedly common) nice cached cases.

Certainly if there was one big mutex covering all the operations, it
would indeed be bad.  On the other hand, if the filesystem/cache was
partitioned (perhaps hashed) so that there was a large number of such
locks, then it should be OK.

Yes, I am making the perhaps naive assumption that hot spots such as
the root inode would be in the cache, and that they would rarely
collide with allocation or I/O, which might also be naive.  But on
this point I must defer to the filesystem folks.

> So I don't dispute at all that "mutex with spinning" performs better than
> a mutex, but I _do_ claim that it has some potentially huge downsides
> compared to a real spinlock. It may perform as well as a spinlock in the
> nice common case, but then when you hit the non-common case you see the
> difference between well-written code and badly written code.
>
> And sadly, while allocations _usually_ are nice and immediate, and while
> our caches _usually_ mean that we don't need to do IO, bad behavior when
> we do need to do IO is what really kills interactive feel. Suddenly
> everything else is hurting too, because they wanted that lock - even if
> they didn't need to do IO or allocate anything.

I certainly agree that there are jobs that a spin-mutex is ill-suited
for.

							Thanx, Paul
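For what it's worth, the hashed-partitioning idea above can be sketched
as follows.  Again a user-space pthreads sketch with hypothetical names,
not how any real filesystem actually partitions its locks:

```c
/* Sketch of lock partitioning by hash: instead of one big mutex
 * over the whole structure, hash each key to one of NR_LOCKS
 * mutexes, so a slow operation (allocation, I/O) under one lock
 * rarely stalls operations that hash to other partitions. */
#include <pthread.h>

#define NR_LOCKS 16			/* power of two for cheap masking */

static pthread_mutex_t bucket_lock[NR_LOCKS];

static void locks_init(void)
{
	int i;

	for (i = 0; i < NR_LOCKS; i++)
		pthread_mutex_init(&bucket_lock[i], NULL);
}

/* Map an object's key (say, an inode number) to its lock.
 * The multiplicative hash spreads clustered keys across buckets. */
static pthread_mutex_t *lock_for(unsigned long key)
{
	unsigned long h = key * 2654435761UL;

	return &bucket_lock[h & (NR_LOCKS - 1)];
}

/* Only the partition covering this key is locked; lookups of
 * objects in other partitions proceed unimpeded. */
void update_object(unsigned long key)
{
	pthread_mutex_t *l = lock_for(key);

	pthread_mutex_lock(l);
	/* ... modify the object in this partition ... */
	pthread_mutex_unlock(l);
}
```

The hot-spot caveat above still applies: keys that all hash to the same
bucket (or a single very hot object) still contend on one lock.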