Date: Thu, 26 Jun 2008 10:02:05 -0700 (PDT)
From: Christoph Lameter
To: Petr Tesarik
Cc: Jeremy Fitzhardinge, Peter Zijlstra, Ingo Molnar, linux-kernel@vger.kernel.org, Nick Piggin
Subject: Re: Spinlocks: Factor our GENERIC_LOCKBREAK in order to avoid spin with irqs disable
In-Reply-To: <1214471867.17319.8.camel@elijah.suse.cz>

On Thu, 26 Jun 2008, Petr Tesarik wrote:

> We should probably re-think the whole locking scheme, because spinlocks
> were designed to be held for a short period of time. This was a fair
> assumption when they were introduced, but obviously it is now false in
> many cases (such as virtualization).

And NUMA.

> Ticket-based spinlocks have actually already changed the original
> design, so why not implement a generic "lock scheduler" on top of
> spinlock_t and rwlock_t?

How about making the semaphore / mutex code as fast as spinlocks and
just use that?
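For concreteness, a minimal user-space sketch of that idea: acquire with a
spinlock-cheap fast path, spin briefly under contention, and only then sleep
in the kernel via futex(2). The names, the spin budget and the three-state
encoding here are illustrative assumptions, not the kernel's actual mutex or
semaphore code.

/*
 * Sketch of an "adaptive" mutex: as cheap as a spinlock when the lock is
 * uncontended or held only briefly, sleeping only when the hold time is
 * long.  SPIN_BUDGET and the state encoding are made up for illustration.
 */
#include <stdatomic.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/futex.h>

/* 0 = unlocked, 1 = locked, 2 = locked with (possible) sleepers */
struct adaptive_mutex {
	atomic_int state;
};

#define SPIN_BUDGET 100	/* arbitrary: tries before giving up and sleeping */

static void adaptive_lock(struct adaptive_mutex *m)
{
	int old = 0;

	/* Fast path: uncontended, one atomic op, just like a spinlock. */
	if (atomic_compare_exchange_strong(&m->state, &old, 1))
		return;

	/* Short hold times: spinning is cheaper than a sleep/wakeup pair. */
	for (int i = 0; i < SPIN_BUDGET; i++) {
		old = 0;
		if (atomic_compare_exchange_weak(&m->state, &old, 1))
			return;
	}

	/* Long hold times: mark the lock contended and sleep in the kernel. */
	while (atomic_exchange(&m->state, 2) != 0)
		syscall(SYS_futex, &m->state, FUTEX_WAIT, 2, NULL, NULL, 0);
}

static void adaptive_unlock(struct adaptive_mutex *m)
{
	/* Pay for a wakeup syscall only if someone may be sleeping. */
	if (atomic_exchange(&m->state, 0) == 2)
		syscall(SYS_futex, &m->state, FUTEX_WAKE, 1, NULL, NULL, 0);
}

The point of the sketch is that the uncontended cost is a single atomic
operation either way; the expensive sleep/wakeup path only kicks in when the
hold time is long, which is exactly the virtualization/NUMA case described
above.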