Date: Fri, 20 Feb 2015 22:04:28 -0800
From: Josh Triplett
To: Peter Zijlstra
Cc: "Paul E. McKenney", linux-kernel@vger.kernel.org, mingo@kernel.org,
	laijs@cn.fujitsu.com, dipankar@in.ibm.com, akpm@linux-foundation.org,
	mathieu.desnoyers@efficios.com, tglx@linutronix.de, rostedt@goodmis.org,
	dhowells@redhat.com, edumazet@google.com, dvhart@linux.intel.com,
	fweisbec@gmail.com, oleg@redhat.com, bobby.prani@gmail.com
Subject: Re: [PATCH tip/core/rcu 0/4] Programmatic nestable expedited grace periods
Message-ID: <20150221060427.GA1408@thin>
References: <20150220050850.GA32639@linux.vnet.ibm.com>
	<20150220091107.GN21418@twins.programming.kicks-ass.net>
	<20150220163737.GL5745@linux.vnet.ibm.com>
	<20150220165409.GU5029@twins.programming.kicks-ass.net>
In-Reply-To: <20150220165409.GU5029@twins.programming.kicks-ass.net>

On Fri, Feb 20, 2015 at 05:54:09PM +0100, Peter Zijlstra wrote:
> On Fri, Feb 20, 2015 at 08:37:37AM -0800, Paul E. McKenney wrote:
> > On Fri, Feb 20, 2015 at 10:11:07AM +0100, Peter Zijlstra wrote:
> > > Does it really make a machine boot much faster? Why are people using
> > > synchronous gp primitives if they care about speed? Should we not fix
> > > that instead?
> >
> > The report I heard was that it provided 10-15% faster boot times.
>
> That's not insignificant; got more details? I think we should really
> look at why people are using the sync primitives.

Paul, what do you think about adding a compile-time debug option to
synchronize_rcu() that causes it to capture the time on entry and exit
and print the duration together with the file:line of the caller?
Similar to initcall_debug, but for blocking calls to synchronize_rcu().
Put that together with initcall_debug, and you'd have a pretty good
idea of where synchronize_rcu() holds up boot.  (A rough sketch of what
I have in mind appears below my signature.)

We do want early boot to run as asynchronously as possible, and to
avoid having later bits of boot waiting on a synchronize_rcu() from
earlier bits of boot.  Switching a caller over to call_rcu() doesn't
actually help if it still has to finish a grace period before it can
allow later bits to run.  Ideally, we ought to be able to work out the
"depth" of boot in grace periods.

Has anyone wired initcall_debug up to a bootchart-like graph?

- Josh Triplett
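
A rough, completely untested sketch of the kind of timing wrapper I
have in mind.  CONFIG_RCU_SYNC_DEBUG and synchronize_rcu_timed() are
illustrative names only, not existing kernel symbols, and a real
implementation would probably instrument synchronize_rcu() itself
rather than individual call sites:

/* Hypothetical per-call-site timing wrapper, in the spirit of initcall_debug. */
#include <linux/ktime.h>
#include <linux/printk.h>
#include <linux/rcupdate.h>

#ifdef CONFIG_RCU_SYNC_DEBUG
#define synchronize_rcu_timed()						\
do {									\
	ktime_t __rcu_dbg_start = ktime_get();				\
									\
	synchronize_rcu();						\
	pr_info("synchronize_rcu() at %s:%d took %lld us\n",		\
		__FILE__, __LINE__,					\
		(long long)ktime_to_us(ktime_sub(ktime_get(),		\
						 __rcu_dbg_start)));	\
} while (0)
#else
#define synchronize_rcu_timed()	synchronize_rcu()
#endif

Callers under test would then use synchronize_rcu_timed() in place of
synchronize_rcu(), and the resulting log lines could be correlated with
the initcall_debug output to see which grace periods actually gate
later parts of boot.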