Date: Tue, 16 Jan 2018 11:53:31 -0500
From: Luiz Capitulino
To: Frederic Weisbecker
Cc: Ingo Molnar, LKML, Peter Zijlstra, Chris Metcalf, Thomas Gleixner,
 Christoph Lameter, "Paul E. McKenney", Wanpeng Li, Mike Galbraith,
 Rik van Riel
Subject: Re: [PATCH 4/5] sched/isolation: Residual 1Hz scheduler tick offload
Message-ID: <20180116115331.1e4574e5@redhat.com>
In-Reply-To: <20180116155743.GA27567@lerouge>
References: <1515039937-367-1-git-send-email-frederic@kernel.org>
 <1515039937-367-5-git-send-email-frederic@kernel.org>
 <20180112142258.31e7a24c@redhat.com>
 <20180116155743.GA27567@lerouge>
Organization: Red Hat

On Tue, 16 Jan 2018 16:57:45 +0100
Frederic Weisbecker wrote:

> On Fri, Jan 12, 2018 at 02:22:58PM -0500, Luiz Capitulino wrote:
> > On Thu, 4 Jan 2018 05:25:36 +0100
> > Frederic Weisbecker wrote:
> > 
> > > When a CPU runs in full dynticks mode, a 1Hz tick remains in order to
> > > keep the scheduler stats alive. However this residual tick is a burden
> > > for bare metal tasks that can't stand any interruption at all, or want
> > > to minimize them.
> > > 
> > > Adding the boot parameter "isolcpus=nohz_offload" will now outsource
> > > these scheduler ticks to the global workqueue so that a housekeeping CPU
> > > handles that tick remotely.
> > > 
> > > Note it's still up to the user to affine the global workqueues to the
> > > housekeeping CPUs through /sys/devices/virtual/workqueue/cpumask or
> > > domains isolation.
> > > 
> > > Signed-off-by: Frederic Weisbecker
> > > Cc: Chris Metcalf
> > > Cc: Christoph Lameter
> > > Cc: Luiz Capitulino
> > > Cc: Mike Galbraith
> > > Cc: Paul E. McKenney
> > > Cc: Peter Zijlstra
> > > Cc: Rik van Riel
> > > Cc: Thomas Gleixner
> > > Cc: Wanpeng Li
> > > Cc: Ingo Molnar
> > > ---
> > >  kernel/sched/core.c      | 88 ++++++++++++++++++++++++++++++++++++++++++++++--
> > >  kernel/sched/isolation.c |  4 +++
> > >  kernel/sched/sched.h     |  2 ++
> > >  3 files changed, 91 insertions(+), 3 deletions(-)
> > > 
> > > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > > index d72d0e9..b964890 100644
> > > --- a/kernel/sched/core.c
> > > +++ b/kernel/sched/core.c
> > > @@ -3052,9 +3052,14 @@ void scheduler_tick(void)
> > >   */
> > >  u64 scheduler_tick_max_deferment(void)
> > >  {
> > > -	struct rq *rq = this_rq();
> > > -	unsigned long next, now = READ_ONCE(jiffies);
> > > +	struct rq *rq;
> > > +	unsigned long next, now;
> > >  
> > > +	if (!housekeeping_cpu(smp_processor_id(), HK_FLAG_TICK_SCHED))
> > > +		return ktime_to_ns(KTIME_MAX);
> > > +
> > > +	rq = this_rq();
> > > +	now = READ_ONCE(jiffies);
> > >  	next = rq->last_sched_tick + HZ;
> > >  
> > >  	if (time_before_eq(next, now))
> > > @@ -3062,7 +3067,82 @@ u64 scheduler_tick_max_deferment(void)
> > >  
> > >  	return jiffies_to_nsecs(next - now);
> > >  }
> > > -#endif
> > > +
> > > +struct tick_work {
> > > +	int			cpu;
> > > +	struct delayed_work	work;
> > > +};
> > > +
> > > +static struct tick_work __percpu *tick_work_cpu;
> > > +
> > > +static void sched_tick_remote(struct work_struct *work)
> > > +{
> > > +	struct delayed_work *dwork = to_delayed_work(work);
> > > +	struct tick_work *twork = container_of(dwork, struct tick_work, work);
> > > +	int cpu = twork->cpu;
> > > +	struct rq *rq = cpu_rq(cpu);
> > > +	struct rq_flags rf;
> > > +
> > > +	/*
> > > +	 * Handle the tick only if it appears the remote CPU is running
> > > +	 * in full dynticks mode. The check is racy by nature, but
> > > +	 * missing a tick or having one too much is no big deal.
> > > +	 */
> > > +	if (!idle_cpu(cpu) && tick_nohz_tick_stopped_cpu(cpu)) {
> > > +		rq_lock_irq(rq, &rf);
> > > +		update_rq_clock(rq);
> > > +		rq->curr->sched_class->task_tick(rq, rq->curr, 0);
> > > +		rq_unlock_irq(rq, &rf);
> > > +	}
> > 
> > OK, so this executes task_tick() remotely. What about account_process_tick()?
> > Don't we need it as well?
> 
> Nope, tasks in nohz_full mode have their special accounting that doesn't
> rely on the tick.

OK, excellent.

> > In particular, when I run a hog application on a nohz_full core configured
> > with tick offload, I can see in top that the CPU usage goes from 100%
> > to idle for a few seconds every couple of seconds. Could this be related?
> > 
> > Also, in my testing I'm sometimes seeing the tick. Sometimes at 10 or
> > 20 seconds interval. Is this expected? I'll dig deeper next week.
> 
> That's expected, see the changelog: the offload is not affine by default.
> You need to either also isolate the domains:
> 
> 	isolcpus=nohz_offload,domain
> 
> or tweak the workqueue cpumask through:
> 
> 	/sys/devices/virtual/workqueue/cpumask

Yeah, I already do that. Later today or tomorrow I'll debug this to see
if the problem is in my setup or not.

> 
> Thanks.
> 
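For reference, the cpumask tweak discussed above can be scripted. The sketch below assumes housekeeping CPUs 0-1 (the CPU list, and therefore the mask value "3", are illustrative examples, not taken from the thread); it builds the hex bitmask that /sys/devices/virtual/workqueue/cpumask expects, so the offloaded tick work runs on housekeeping CPUs rather than the isolated ones:

```shell
# cpus_to_mask: turn a list of CPU numbers into the hex bitmask
# format used by /sys/devices/virtual/workqueue/cpumask.
cpus_to_mask() {
    mask=0
    for cpu in "$@"; do
        # Set the bit corresponding to each CPU number.
        mask=$((mask | (1 << cpu)))
    done
    printf '%x\n' "$mask"
}

# Example: housekeeping CPUs 0 and 1 -> bits 0 and 1 set -> "3".
HK_MASK=$(cpus_to_mask 0 1)
echo "$HK_MASK"

# Applying it needs root (commented out here; affects live scheduling):
#   echo "$HK_MASK" > /sys/devices/virtual/workqueue/cpumask
```

Note this only redirects unbound workqueues; as Frederic says, isolating the domains via isolcpus=nohz_offload,domain is the alternative way to keep the offloaded tick off the isolated CPUs.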