Date: Fri, 24 Nov 2006 21:04:19 +0100
From: Jens Axboe
To: Oleg Nesterov
Cc: "Paul E. McKenney", Alan Stern, linux-kernel@vger.kernel.org
Subject: Re: [patch] cpufreq: mark cpufreq_tsc() as core_initcall_sync
Message-ID: <20061124200419.GG5400@kernel.dk>
In-Reply-To: <20061124182153.GA9868@oleg>
References: <20061117065128.GA5452@us.ibm.com> <20061117092925.GT7164@kernel.dk> <20061119190027.GA3676@oleg> <20061123145910.GA145@oleg> <20061124182153.GA9868@oleg>

On Fri, Nov 24 2006, Oleg Nesterov wrote:
> Ok, synchronize_xxx() passed a 1-hour rcutorture test on a dual P-III.
>
> It behaves the same as SRCU but is optimized for writers. The fast path
> for synchronize_xxx() is mutex_lock() + atomic_read() + mutex_unlock().
> The slow path is __wait_event(), with no polling. However, the reader
> does an atomic inc/dec on lock/unlock, and the counters are not per-cpu.
>
> Jens, is it ok for you? Alan, Paul, what is your opinion?

This looks good from my end, and is much more appropriate than the
current SRCU code. Even though I could avoid synchronize_srcu() in most
cases, whenever I did have to issue it, the 3x synchronize_sched() it
implies was a performance killer.

Thanks, Oleg! And thanks to Alan and Paul for your excellent ideas.

--
Jens Axboe
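
For context when reading the thread: the description in the quoted mail
maps naturally onto a two-counter flip scheme, essentially what this
discussion later settled on as QRCU. The sketch below is a reconstruction
for illustration only, not the posted patch; the xxx_* names, the struct
layout, and the +1 bias convention are assumptions, and the memory-ordering
details are reduced to two smp_mb() calls for brevity.

#include <linux/mutex.h>
#include <linux/sched.h>
#include <linux/wait.h>
#include <asm/atomic.h>

/*
 * Sketch only: two shared counters (not per-cpu). Readers atomically
 * inc/dec the currently active counter; a writer flips the active
 * index and waits for the old counter to drain.
 */
struct xxx_struct {
	int completed;			/* low bit selects the active counter */
	atomic_t ctr[2];		/* shared, not per-cpu */
	struct mutex mutex;		/* serializes writers */
	wait_queue_head_t wq;		/* writer sleeps here */
};

static void init_xxx_struct(struct xxx_struct *sp)
{
	sp->completed = 0;
	atomic_set(&sp->ctr[0], 1);	/* +1 bias: "no readers" reads as 1 */
	atomic_set(&sp->ctr[1], 0);
	mutex_init(&sp->mutex);
	init_waitqueue_head(&sp->wq);
}

static int xxx_read_lock(struct xxx_struct *sp)
{
	int idx;

	for (;;) {
		idx = sp->completed & 0x1;
		/* fails only while a writer is draining this counter */
		if (likely(atomic_inc_not_zero(&sp->ctr[idx])))
			return idx;
	}
}

static void xxx_read_unlock(struct xxx_struct *sp, int idx)
{
	/* the last reader out of a draining counter wakes the writer */
	if (unlikely(atomic_dec_and_test(&sp->ctr[idx])))
		wake_up(&sp->wq);
}

static void synchronize_xxx(struct xxx_struct *sp)
{
	int idx;

	smp_mb();
	mutex_lock(&sp->mutex);

	idx = sp->completed & 0x1;
	/* fast path: only the bias is left, no readers to wait for */
	if (atomic_read(&sp->ctr[idx]) == 1)
		goto out;

	/* slow path: move the bias to the other counter, flip, drain */
	atomic_inc(&sp->ctr[idx ^ 0x1]);
	sp->completed++;
	atomic_dec(&sp->ctr[idx]);
	__wait_event(sp->wq, !atomic_read(&sp->ctr[idx]));
out:
	mutex_unlock(&sp->mutex);
	smp_mb();
}

The +1 bias on the active counter is what makes the fast path work:
under the mutex, atomic_read() == 1 means no reader entered on that
counter, so the writer returns after just mutex_lock() + atomic_read()
+ mutex_unlock(), exactly as described above. Readers that arrive
during a flip either increment the old counter (and are drained) or
fail atomic_inc_not_zero() and retry on the new one.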