Date: Fri, 11 Jul 2014 20:57:33 +0200
From: Frederic Weisbecker
To: "Paul E. McKenney"
Cc: Christoph Lameter, linux-kernel@vger.kernel.org, mingo@kernel.org,
	laijs@cn.fujitsu.com, dipankar@in.ibm.com, akpm@linux-foundation.org,
	mathieu.desnoyers@efficios.com, josh@joshtriplett.org, niv@us.ibm.com,
	tglx@linutronix.de, peterz@infradead.org, rostedt@goodmis.org,
	dhowells@redhat.com, edumazet@google.com, dvhart@linux.intel.com,
	oleg@redhat.com, sbw@mit.edu
Subject: Re: [PATCH tip/core/rcu 11/17] rcu: Bind grace-period kthreads to non-NO_HZ_FULL CPUs
Message-ID: <20140711185731.GG26045@localhost.localdomain>
In-Reply-To: <20140711184528.GQ16041@linux.vnet.ibm.com>

On Fri, Jul 11, 2014 at 11:45:28AM -0700, Paul E. McKenney wrote:
> On Fri, Jul 11, 2014 at 08:25:43PM +0200, Frederic Weisbecker wrote:
> > On Fri, Jul 11, 2014 at 01:10:41PM -0500, Christoph Lameter wrote:
> > > On Tue, 8 Jul 2014, Frederic Weisbecker wrote:
> > > >
> > > > > I was figuring that a fair number of the kthreads might eventually
> > > > > be using this, not just for the grace-period kthreads.
> > > >
> > > > Ok, makes sense. But can we just rename the cpumask to housekeeping_mask?
> > >
> > > That would imply that all non-nohz processors are housekeeping? So all
> > > processors with a tick are housekeeping?
> >
> > Well, now that I think about it again, I would really like to keep housekeeping
> > to CPU 0 when nohz_full= is passed.
>
> When CONFIG_NO_HZ_FULL_SYSIDLE=y, the housekeeping kthreads are bound to
> CPU 0. However, doing this causes significant slowdowns according to
> Fengguang's testing, so when CONFIG_NO_HZ_FULL_SYSIDLE=n, I bind the
> housekeeping kthreads to the set of non-nohz_full CPUs.

But did he see these slowdowns with the nohz_full= parameter passed? I doubt
he tested that. And I'm not sure that people who need full dynticks will run
the usecases that trigger slowdowns with grace-period kthreads. I also doubt
that people will often omit CPUs other than CPU 0 from the nohz_full= range.

> > > Could we make that set configurable? Ideally I'd like to have the ability
> > > to restrict the housekeeping to one processor.
> >
> > Ah, I'm curious about your usecase. But I think we can do that. And we should.
> >
> > In fact I think that Paul could keep affining the grace-period kthread to
> > CPU 0 for the sole case when the nohz_full= parameter is passed.
> >
> > I think the performance issues reported to him refer to the
> > CONFIG_NO_HZ_FULL=y config without the nohz_full= parameter passed. That's
> > the most important case to address.
> >
> > Optimizing the "nohz_full= passed" case is probably not very useful and,
> > worse, it complicates things a lot.
> >
> > What do you think, Paul? Can we simplify things that way?
> > I'm pretty sure that nobody cares about optimizing the nohz_full= case.
> > It would really simplify things to stick to CPU 0.
>
> When we have CONFIG_NO_HZ_FULL_SYSIDLE=y, agreed. In that case, having
> housekeeping CPUs on CPUs other than CPU 0 means that you never reach
> the full-system-idle state.

That said, I expect CONFIG_NO_HZ_FULL_SYSIDLE=y to always be enabled, in the
long run, by those who run NO_HZ_FULL.

> But in other cases, we appear to need more than one housekeeping CPU.
> This is especially the case when people run general workloads on systems
> that have NO_HZ_FULL=y, which appears to be a significant fraction of
> the systems these days.

Yeah, NO_HZ_FULL=y is likely to be enabled in many distros. But you know how
few actual nohz_full= users there are.