Date: Tue, 23 Nov 2010 09:36:31 +0800
Subject: Re: [PATCH] arch/tile: fix rwlock so would-be write lockers don't block new readers
From: Cypher Wu
To: Chris Metcalf
Cc: linux-kernel@vger.kernel.org, Américo Wang, Eric Dumazet, netdev

2010/11/22 Chris Metcalf :
> On 11/22/2010 12:39 AM, Cypher Wu wrote:
>> 2010/11/15 Chris Metcalf :
>>> This avoids a deadlock in the IGMP code where one core gets a read
>>> lock, another core starts trying to get a write lock (thus blocking
>>> new readers), and then the first core tries to recursively re-acquire
>>> the read lock.
>>>
>>> We still try to preserve some degree of balance by giving priority
>>> to additional write lockers that come along while the lock is held
>>> for write, so they can all complete quickly and return the lock to
>>> the readers.
>>>
>>> Signed-off-by: Chris Metcalf
>>> ---
>>> This should apply relatively cleanly to 2.6.26.7 source code too.
>>> [...]
>>
>> I've finished my business trip and tested that patch for more than an
>> hour, and it works. The test is still running now.
>>
>> But it seems there is still a potential problem: we use a ticket lock
>> for write_lock(), and if many write_lock() calls occur, are 256 tickets
>> enough for 64 or even more cores to avoid overflow? If we
>> write_unlock() while another write_lock() is already waiting, we only
>> advance the current ticket.
>
> This is OK, since each core can issue at most one (blocking) write_lock(),
> and we have only 64 cores. Future >256 core machines will be based on
> TILE-Gx anyway, which doesn't have the 256-core limit since it doesn't use
> the spinlock_32.c implementation.
>
> --
> Chris Metcalf, Tilera Corp.
> http://www.tilera.com
>

Say core A calls write_lock() on the rwlock while current_ticket_ is 0, so
it bumps next_ticket_ to 1. While A is holding the lock, core B calls
write_lock() and bumps next_ticket_ to 2. When A calls write_unlock() it
sees that (current_ticket_ + 1) is not equal to next_ticket_, so it just
increments current_ticket_ and core B gets the lock. If core A tries
write_lock() again before core B calls write_unlock(), it bumps
next_ticket_ to 3, and so on. The tickets keep climbing, but this scenario
should be rare in practice; I tested it yesterday for several hours and it
held up well under pressure.
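
For reference, here is roughly how I picture the 8-bit ticket scheme on the
write side. This is only a simplified, non-atomic sketch with made-up names
(sketch_rwlock, sketch_write_lock, sketch_write_unlock), not the actual
arch/tile spinlock_32.c code:

/*
 * Simplified sketch of an 8-bit ticket scheme for the write side of the
 * rwlock, just to illustrate the scenario above.  Names are made up, and
 * both the reader side and all atomicity (the real code serializes
 * updates to the lock word) are omitted.
 */
struct sketch_rwlock {
	volatile unsigned char current_ticket_;	/* ticket currently allowed to write */
	volatile unsigned char next_ticket_;	/* next ticket to hand out */
};

static void sketch_write_lock(struct sketch_rwlock *rw)
{
	/* take a ticket; wraps modulo 256, which is harmless as long as
	 * fewer than 256 writers ever wait at once (at most one per core) */
	unsigned char my_ticket = rw->next_ticket_++;

	/* spin until our ticket comes up */
	while (rw->current_ticket_ != my_ticket)
		;	/* cpu_relax() in real kernel code */
}

static void sketch_write_unlock(struct sketch_rwlock *rw)
{
	if ((unsigned char)(rw->current_ticket_ + 1) != rw->next_ticket_) {
		/* another writer already took a ticket: hand the lock
		 * straight to it by advancing current_ticket_ */
		rw->current_ticket_++;
	} else {
		/* no writer waiting: reset the tickets and let readers
		 * back in (reader bookkeeping omitted) */
		rw->current_ticket_ = rw->next_ticket_ = 0;
	}
}

Since each of the 64 cores can have at most one write_lock() outstanding,
next_ticket_ never gets more than 64 ahead of current_ticket_, so the 8-bit
counters wrapping modulo 256 can never hand the same ticket to two waiting
writers.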
--
Cyberman Wu
http://www.meganovo.com