From: Juri Lelli
To: peterz@infradead.org, mingo@redhat.com, rjw@rjwysocki.net,
	viresh.kumar@linaro.org
Cc: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
	tglx@linutronix.de, vincent.guittot@linaro.org, rostedt@goodmis.org,
	luca.abeni@santannapisa.it, claudio@evidence.eu.com,
	tommaso.cucinotta@santannapisa.it, bristot@redhat.com,
	mathieu.poirier@linaro.org, tkjos@android.com, joelaf@google.com,
	morten.rasmussen@arm.com, dietmar.eggemann@arm.com,
	patrick.bellasi@arm.com, alessio.balsini@arm.com,
	juri.lelli@redhat.com, Juri Lelli, Ingo Molnar,
	"Rafael J. Wysocki"
Subject: [RFC PATCH v2 5/8] sched/cpufreq_schedutil: always consider all CPUs when deciding next freq
Date: Mon, 4 Dec 2017 11:23:22 +0100
Message-Id: <20171204102325.5110-6-juri.lelli@redhat.com>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20171204102325.5110-1-juri.lelli@redhat.com>
References: <20171204102325.5110-1-juri.lelli@redhat.com>

From: Juri Lelli

No assumption can be made about the rate at which frequency updates get
triggered, as there are scheduling policies (like SCHED_DEADLINE) which
don't trigger them that frequently.

Remove this assumption from the code by always treating the
SCHED_DEADLINE utilization signal as not stale.

Signed-off-by: Juri Lelli
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
Cc: Luca Abeni
Cc: Claudio Scordino
Acked-by: Viresh Kumar
---
 kernel/sched/cpufreq_schedutil.c | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index a3072f24dc16..b7a576c8dcaa 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -318,17 +318,21 @@ static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu, u64 time)
 		s64 delta_ns;
 
 		/*
-		 * If the CPU utilization was last updated before the previous
-		 * frequency update and the time elapsed between the last update
-		 * of the CPU utilization and the last frequency update is long
-		 * enough, don't take the CPU into account as it probably is
-		 * idle now (and clear iowait_boost for it).
+		 * If the CFS CPU utilization was last updated before the
+		 * previous frequency update and the time elapsed between the
+		 * last update of the CPU utilization and the last frequency
+		 * update is long enough, reset iowait_boost and util_cfs, as
+		 * they are now probably stale. However, still consider the
+		 * CPU contribution if it has some DEADLINE utilization
+		 * (util_dl).
 		 */
 		delta_ns = time - j_sg_cpu->last_update;
 		if (delta_ns > TICK_NSEC) {
 			j_sg_cpu->iowait_boost = 0;
 			j_sg_cpu->iowait_boost_pending = false;
-			continue;
+			j_sg_cpu->util_cfs = 0;
+			if (j_sg_cpu->util_dl == 0)
+				continue;
 		}
 		if (j_sg_cpu->flags & SCHED_CPUFREQ_RT)
 			return policy->cpuinfo.max_freq;
-- 
2.14.3
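
For reference, the decision this hunk implements can be seen in a small
standalone sketch (plain userspace C, not the schedutil code itself:
TICK_NS, struct cpu_sample and aggregate_util() are made-up names, and
summing util_cfs + util_dl is only one illustrative way of combining the
two signals). A CPU whose CFS signal is stale gets iowait_boost and
util_cfs cleared, but it is skipped only if it also has no DEADLINE
utilization:

#include <stdio.h>
#include <stdint.h>

#define TICK_NS 4000000ULL		/* assume a 250 HZ tick, i.e. 4 ms */

struct cpu_sample {
	uint64_t last_update;		/* last utilization update (ns) */
	unsigned long util_cfs;		/* CFS utilization */
	unsigned long util_dl;		/* DEADLINE utilization */
	unsigned long iowait_boost;
};

/* Utilization to use for the shared frequency domain at time 'now'. */
static unsigned long aggregate_util(struct cpu_sample *cpus, int nr,
				    uint64_t now)
{
	unsigned long max_util = 0;
	int i;

	for (i = 0; i < nr; i++) {
		struct cpu_sample *c = &cpus[i];
		unsigned long util;

		if (now - c->last_update > TICK_NS) {
			/* Stale CFS signal: drop boost and CFS util... */
			c->iowait_boost = 0;
			c->util_cfs = 0;
			/* ...but keep the CPU if it has DL utilization. */
			if (c->util_dl == 0)
				continue;
		}

		util = c->util_cfs + c->util_dl;
		if (util > max_util)
			max_util = util;
	}

	return max_util;
}

int main(void)
{
	struct cpu_sample cpus[] = {
		/* stale and no DL utilization: ignored entirely */
		{ .last_update = 0, .util_cfs = 300, .util_dl = 0 },
		/* stale CFS signal, but DL utilization still counts */
		{ .last_update = 0, .util_cfs = 500, .util_dl = 200 },
	};

	/* At now = 10 ms both CFS signals are stale; prints "util = 200". */
	printf("util = %lu\n", aggregate_util(cpus, 2, 10000000ULL));
	return 0;
}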