From: Patrick Bellasi <patrick.bellasi@arm.com>
To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org
Cc: Ingo Molnar, Peter Zijlstra, Tejun Heo, "Rafael J. Wysocki",
	Paul Turner, Vincent Guittot, John Stultz, Morten Rasmussen,
	Dietmar Eggemann, Juri Lelli, Tim Murray, Todd Kjos,
	Andres Oportus, Joel Fernandes, Viresh Kumar
Subject: [RFCv4 6/6] cpufreq: schedutil: add util clamp for RT/DL tasks
Date: Thu, 24 Aug 2017 19:08:57 +0100
Message-Id: <20170824180857.32103-7-patrick.bellasi@arm.com>
X-Mailer: git-send-email 2.14.1
In-Reply-To: <20170824180857.32103-1-patrick.bellasi@arm.com>
References: <20170824180857.32103-1-patrick.bellasi@arm.com>

Currently schedutil enforces the maximum frequency whenever RT/DL tasks
are RUNNABLE. Such a mandatory policy can be made more tunable from
userspace, for example by allowing the definition of a max frequency
which is still reasonable for the execution of a specific RT/DL
workload. This will contribute to making the RT class more friendly to
power/energy sensitive use-cases.

This patch extends the usage of util_{min,max} to the RT/DL classes.
Whenever a task in these classes is RUNNABLE, the util required is
defined by the clamp constraints of the CPU control group the task
belongs to.

Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Rafael J. Wysocki
Cc: linux-kernel@vger.kernel.org
Cc: linux-pm@vger.kernel.org
---
 kernel/sched/cpufreq_schedutil.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index f67c26bbade4..feca60c107bc 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -227,7 +227,10 @@ static void sugov_update_single(struct update_util_data *hook, u64 time,
 	busy = sugov_cpu_is_busy(sg_cpu);
 
 	if (flags & SCHED_CPUFREQ_RT_DL) {
-		next_f = policy->cpuinfo.max_freq;
+		util = uclamp_util(smp_processor_id(), SCHED_CAPACITY_SCALE);
+		next_f = (uclamp_enabled && util < SCHED_CAPACITY_SCALE)
+			? get_next_freq(sg_policy, util, policy->cpuinfo.max_freq)
+			: policy->cpuinfo.max_freq;
 	} else {
 		sugov_get_util(&util, &max);
 		sugov_iowait_boost(sg_cpu, &util, &max);
@@ -276,10 +279,15 @@ static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu, u64 time)
 			j_sg_cpu->iowait_boost = 0;
 			continue;
 		}
-		if (j_sg_cpu->flags & SCHED_CPUFREQ_RT_DL)
-			return policy->cpuinfo.max_freq;
 
-		j_util = j_sg_cpu->util;
+		if (j_sg_cpu->flags & SCHED_CPUFREQ_RT_DL) {
+			if (!uclamp_enabled)
+				return policy->cpuinfo.max_freq;
+			j_util = uclamp_util(j, SCHED_CAPACITY_SCALE);
+		} else {
+			j_util = j_sg_cpu->util;
+		}
+
 		j_max = j_sg_cpu->max;
 		if (j_util * max > j_max * util) {
 			util = j_util;
-- 
2.14.1
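
For reference, here is a small user-space sketch (not part of the patch) of
the effect being described: an RT/DL task nominally requests the full
capacity (SCHED_CAPACITY_SCALE), but a cgroup-defined util_max clamp caps the
value before it is turned into a frequency request. The uclamp_util() below
is a simplified stand-in for the helper introduced earlier in this series,
the clamp values and frequencies are made-up examples, and the mapping only
mimics the next_freq ~= 1.25 * max_freq * util / max relation used by
schedutil.

/* Illustrative only: clamp values and frequencies are made up. */
#include <stdio.h>

#define SCHED_CAPACITY_SCALE	1024

/* Hypothetical clamps a CPU controller group could set for its RT tasks */
static unsigned long util_min = 0;
static unsigned long util_max = 640;	/* cap RT tasks at ~62% capacity */

/* Simplified stand-in for the uclamp_util() helper added by this series */
static unsigned long uclamp_util(unsigned long util)
{
	if (util < util_min)
		return util_min;
	if (util > util_max)
		return util_max;
	return util;
}

/* Same shape as schedutil's mapping: next_freq = 1.25 * max_freq * util / max */
static unsigned long next_freq(unsigned long util, unsigned long max,
			       unsigned long max_freq)
{
	unsigned long freq = (max_freq + (max_freq >> 2)) * util / max;

	return freq > max_freq ? max_freq : freq;
}

int main(void)
{
	unsigned long max_freq = 2000000;	/* kHz */
	unsigned long util = uclamp_util(SCHED_CAPACITY_SCALE);

	/* Without the clamp a RUNNABLE RT task would pin the CPU at max_freq */
	printf("clamped util: %lu -> next_f: %lu kHz\n",
	       util, next_freq(util, SCHED_CAPACITY_SCALE, max_freq));

	return 0;
}

With util_max left at SCHED_CAPACITY_SCALE the request degenerates to
max_freq, which matches the behaviour the patch preserves when no clamp is
configured.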