Date: Wed, 12 Jul 2017 19:57:32 +0200
From: Peter Zijlstra
To: "Paul E. McKenney"
Cc: Frederic Weisbecker, Christoph Lameter, "Li, Aubrey", Andi Kleen,
	Aubrey Li, tglx@linutronix.de, len.brown@intel.com, rjw@rjwysocki.net,
	tim.c.chen@linux.intel.com, arjan@linux.intel.com,
	yang.zhang.wz@gmail.com, x86@kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH v1 00/11] Create fast idle path for short idle periods
Message-ID: <20170712175732.hlhzc6g3aa47545d@hirez.programming.kicks-ass.net>
In-Reply-To: <20170712171756.e3fnc3waanbaiiss@hirez.programming.kicks-ass.net>

On Wed, Jul 12, 2017 at 07:17:56PM +0200, Peter Zijlstra wrote:
> Could be I'm just not remembering how all that works.. But I was
> wondering if we can do the expensive bits only if we've decided to
> actually go NOHZ, and avoid doing them on every idle entry.
>
> IIRC the RCU fast NOHZ bits try and flush the callback list (or paw it
> off to another CPU?) such that we can go NOHZ sooner. Having a !empty
> callback list prevents NOHZ from happening.
>
> Now if we've already decided we can't in fact go NOHZ due to other
> concerns, flushing the callback list is pointless work. So I'm thinking
> we can find a better place to do this.

I'm a wee bit confused by the split between rcu_prepare_for_idle() and
rcu_needs_cpu(); there's a fair amount of overlap there. That said, I'm
thinking we should be calling rcu_needs_cpu() as the very last test, not
the very first, such that we can bail out of tick_nohz_stop_sched_tick()
without having to incur the penalty of flushing callbacks.
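
For illustration only, below is a minimal, self-contained C sketch of the
ordering idea being proposed: run the cheap "can we stop the tick?" checks
first and only invoke the potentially expensive, callback-flushing RCU check
once everything else has already agreed to go NOHZ. The function names are
hypothetical stand-ins (modelled loosely on arch_needs_cpu(),
irq_work_needs_cpu() and rcu_needs_cpu()), not the actual tick-sched or RCU
code, and the stubs exist only so the sketch compiles on its own.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical cheap vetoes: pending timers, pending irq_work, etc. */
static bool cheap_check_timers_pending(void)   { return false; }
static bool cheap_check_irq_work_pending(void) { return false; }

/*
 * Models rcu_needs_cpu() under RCU_FAST_NO_HZ: it may do real work
 * (advance/flush callbacks) before answering.
 */
static bool expensive_rcu_needs_cpu(void)
{
	printf("RCU: advancing/flushing callbacks (expensive)\n");
	return false;
}

/* Returns true if the tick can be stopped, i.e. we may go NOHZ. */
static bool can_stop_tick(void)
{
	/* Cheap reasons to keep the tick: test these first... */
	if (cheap_check_timers_pending())
		return false;
	if (cheap_check_irq_work_pending())
		return false;

	/*
	 * ...and only pay for the RCU check once nothing else has
	 * vetoed NOHZ, so the callback flushing is never wasted work.
	 */
	if (expensive_rcu_needs_cpu())
		return false;

	return true;
}

int main(void)
{
	printf("can stop tick: %s\n", can_stop_tick() ? "yes" : "no");
	return 0;
}

The only point of the ordering is that the expensive check sits behind all
the cheap vetoes: if any cheap check already says "keep the tick", the
callback flushing never runs at all.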