Date: Tue, 5 Jan 2010 13:15:53 -0600 (CST)
From: Christoph Lameter
To: Linus Torvalds
cc: Andi Kleen, KAMEZAWA Hiroyuki, Minchan Kim, Peter Zijlstra, "Paul E. McKenney", "linux-kernel@vger.kernel.org", "linux-mm@kvack.org", "hugh.dickins", Nick Piggin, Ingo Molnar
Subject: Re: [RFC][PATCH 6/8] mm: handle_speculative_fault()
References: <20100104182429.833180340@chello.nl> <20100104182813.753545361@chello.nl> <20100105092559.1de8b613.kamezawa.hiroyu@jp.fujitsu.com> <28c262361001042029w4b95f226lf54a3ed6a4291a3b@mail.gmail.com> <20100105134357.4bfb4951.kamezawa.hiroyu@jp.fujitsu.com> <20100105143046.73938ea2.kamezawa.hiroyu@jp.fujitsu.com> <20100105163939.a3f146fb.kamezawa.hiroyu@jp.fujitsu.com> <87wrzwbh0z.fsf@basil.nowhere.org>

On Tue, 5 Jan 2010, Linus Torvalds wrote:

> Don't you see the problem? The spinlock (with ticket locks) essentially
> does the "xadd" atomic increment anyway, but then it _waits_ for it. All
> totally pointless, and all just making for problems, and wasting CPU time
> and causing more cross-node traffic.

For the xadd to work it first would have to acquire the cacheline exclusively.
That exclusive acquisition can succeed on only one processor at a time, and with a suitable minimum hold time for exclusive cachelines everything should be fine. When the critical section is done (unlock complete), the next processor, which had been stalled waiting for access to the cacheline, can acquire the lock. The wait state is the processor being stalled because it cannot access the cacheline, not the processor spinning in an xadd loop. That spinning only occurs if the critical section is longer than the timeout.