Date: Tue, 5 Jan 2010 11:23:55 -0800
From: "Paul E. McKenney"
To: Linus Torvalds
Cc: Christoph Lameter, Andi Kleen, KAMEZAWA Hiroyuki, Minchan Kim,
	Peter Zijlstra, "linux-kernel@vger.kernel.org",
	"linux-mm@kvack.org", "hugh.dickins", Nick Piggin, Ingo Molnar
Subject: Re: [RFC][PATCH 6/8] mm: handle_speculative_fault()
Message-ID: <20100105192355.GK6714@linux.vnet.ibm.com>
Reply-To: paulmck@linux.vnet.ibm.com
References: <20100105143046.73938ea2.kamezawa.hiroyu@jp.fujitsu.com>
 <20100105163939.a3f146fb.kamezawa.hiroyu@jp.fujitsu.com>
 <87wrzwbh0z.fsf@basil.nowhere.org>
 <20100105185542.GH6714@linux.vnet.ibm.com>

On Tue, Jan 05, 2010 at 11:08:46AM -0800, Linus Torvalds wrote:
> On Tue, 5 Jan 2010, Paul E. McKenney wrote:
> >
> > But on many systems, it does take some time for the idle reads to make
> > their way to the CPU that just acquired the lock.
>
> Yes. But the point is that there are lots of them.
>
> So think of it this way: every time _one_ CPU acquires a lock (and
> then releases it), _all_ CPUs will read the new value. Imagine the
> cross-socket traffic.

Yep, been there, and with five-microsecond cross-"socket" latencies.
Of course, the CPU clock frequencies were a bit slower back then.

> In contrast, doing just a single xadd (which replaces the whole
> "spin_lock+non-atomics+spin_unlock"), every time _one_ CPU acquires a
> lock, that's it. The other CPUs aren't all waiting in line for the lock
> to be released, and reading the cacheline to see if it's their turn.
>
> Sure, after they get the lock they'll all eventually end up reading from
> that cacheline that contains 'struct mm_struct', but that's something we
> could even think about trying to minimize by putting the mmap_sem as far
> away from the other fields as possible.
>
> Now, it's very possible that if you have a broadcast model of cache
> coherency, none of this much matters and you end up with almost all the
> same bus traffic anyway. But I do think it could matter a lot.

I have seen systems that work both ways. If the CPU has enough time,
after getting the cache line in unlocked state, to lock it, do the
modification, and release the lock before the first of the flurry of
reads arrives, then performance will be just fine: each reader sees the
cache line with an unlocked lock, and life is good.

On the other hand, as you say, if the first of the flurry of reads
arrives before the CPU has managed to make its way through the critical
section, then your n^2 nightmare comes true.

If the critical section operates only on the cache line containing the
lock, and if the critical section has only a handful of instructions,
I bet you win on almost all platforms. But, like you, I would still
prefer the xadd (or xchg or whatever) to the lock, where feasible.
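For illustration only -- this is a stand-alone userspace toy, not the mm
code under discussion, and the counter and function names are made up --
the contrast looks roughly like the following in C, using a POSIX
spinlock on one side and GCC's __atomic_fetch_add (a single atomic
read-modify-write, xadd-style on x86) on the other:

/* Hypothetical sketch: lock-protected update vs. a single atomic xadd. */
#include <pthread.h>
#include <stdio.h>

static pthread_spinlock_t lock;
static long counter_locked;
static long counter_xadd;

/* Lock-based path: acquiring and releasing the lock bounces its cache
 * line, and any waiters keep re-reading that line to see when it is
 * their turn. */
static void bump_locked(void)
{
	pthread_spin_lock(&lock);
	counter_locked++;
	pthread_spin_unlock(&lock);
}

/* xadd-style path: one atomic read-modify-write, with no separate lock
 * word for the other CPUs to spin on. */
static void bump_xadd(void)
{
	__atomic_fetch_add(&counter_xadd, 1, __ATOMIC_RELAXED);
}

int main(void)
{
	pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);
	bump_locked();
	bump_xadd();
	printf("locked=%ld xadd=%ld\n", counter_locked, counter_xadd);
	pthread_spin_destroy(&lock);
	return 0;
}

The point being that the second form's only shared write is to the cache
line holding the counter itself, while the first form also drags the
lock's cache line back and forth among all the waiters.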
But you cannot expect all systems to see a major performance boost from
switching from locks to xadds. Often, all you get is sounder sleep.
Which is valuable in its own right. ;-)

							Thanx, Paul