Date: Tue, 5 Jan 2010 10:55:42 -0800
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Linus Torvalds
Cc: Christoph Lameter, Andi Kleen, KAMEZAWA Hiroyuki, Minchan Kim,
	Peter Zijlstra, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	Hugh Dickins, Nick Piggin, Ingo Molnar
Subject: Re: [RFC][PATCH 6/8] mm: handle_speculative_fault()
Message-ID: <20100105185542.GH6714@linux.vnet.ibm.com>

On Tue, Jan 05, 2010 at 10:25:43AM -0800, Linus Torvalds wrote:
>
> On Tue, 5 Jan 2010, Christoph Lameter wrote:
> >
> > If the critical section protected by the spinlock is small, then the
> > delay will keep the cacheline exclusive until we hit the unlock.  This
> > is the case here as far as I can tell.
>
> I hope somebody can time it.  Because I think the idle reads on all the
> (unsuccessful) spinlocks will kill it.

But on many systems, it does take some time for the idle reads to make
their way to the CPU that just acquired the lock.
My (admittedly dated) experience is that the CPU acquiring the lock has
a few bus clocks before the cache line containing the lock gets snatched
away.

> Think of it this way: under heavy contention, you'll see a lot of people
> waiting for the spinlocks, and each time one of them succeeds at writing
> it, everybody else has to re-read the line.  So you get an O(n^2) bus
> traffic access pattern.  In contrast, with an xadd, you get O(n)
> behavior - everybody does _one_ acquire-for-write bus access.

xadd (and xchg) certainly are nicer where they apply!

> Remember: the critical section is small, but since you're contending on
> the spinlock, that doesn't much _help_.  The readers are all hitting the
> lock (and you can try to solve the O(n^2) issue with back-off, but quite
> frankly, anybody who does that has basically already lost - I'm personally
> convinced you should never do lock backoff, and should instead look at
> what you did wrong at a higher level).

Music to my ears!  ;-)

						Thanx, Paul