Date: Wed, 10 Jan 2018 04:20:47 -0800
From: tip-bot for Juri Lelli
To: linux-tip-commits@vger.kernel.org
Cc: rafael.j.wysocki@intel.com, torvalds@linux-foundation.org, tglx@linutronix.de, juri.lelli@arm.com, mingo@kernel.org, hpa@zytor.com, peterz@infradead.org, claudio@evidence.eu.com, linux-kernel@vger.kernel.org, luca.abeni@santannapisa.it, viresh.kumar@linaro.org
In-Reply-To: <20171204102325.5110-9-juri.lelli@redhat.com>
References: <20171204102325.5110-9-juri.lelli@redhat.com>
Subject: [tip:sched/core] sched/deadline: Make bandwidth enforcement scale-invariant
Git-Commit-ID: 07881166a892fa4908ac4924660a7793f75d6544

Commit-ID:  07881166a892fa4908ac4924660a7793f75d6544
Gitweb:     https://git.kernel.org/tip/07881166a892fa4908ac4924660a7793f75d6544
Author:     Juri Lelli
AuthorDate: Mon, 4 Dec 2017 11:23:25 +0100
Committer:  Ingo Molnar
CommitDate: Wed, 10 Jan 2018 12:53:35 +0100

sched/deadline: Make bandwidth enforcement scale-invariant

Apply the frequency and CPU scale-invariance correction factor to bandwidth
enforcement (similar to what we already do for fair utilization tracking).

Each delta_exec gets scaled according to the current frequency and the
maximum CPU capacity, which means that the reservation runtime parameter
(which needs to be specified by profiling the task's execution at maximum
frequency on the biggest-capacity core) is scaled accordingly.

Signed-off-by: Juri Lelli
Signed-off-by: Peter Zijlstra (Intel)
Cc: Claudio Scordino
Cc: Linus Torvalds
Cc: Luca Abeni
Cc: Peter Zijlstra
Cc: Rafael J. Wysocki
Cc: Thomas Gleixner
Cc: Viresh Kumar
Cc: alessio.balsini@arm.com
Cc: bristot@redhat.com
Cc: dietmar.eggemann@arm.com
Cc: joelaf@google.com
Cc: juri.lelli@redhat.com
Cc: mathieu.poirier@linaro.org
Cc: morten.rasmussen@arm.com
Cc: patrick.bellasi@arm.com
Cc: rjw@rjwysocki.net
Cc: rostedt@goodmis.org
Cc: tkjos@android.com
Cc: tommaso.cucinotta@santannapisa.it
Cc: vincent.guittot@linaro.org
Link: http://lkml.kernel.org/r/20171204102325.5110-9-juri.lelli@redhat.com
Signed-off-by: Ingo Molnar
---
 kernel/sched/deadline.c | 26 ++++++++++++++++++++++----
 kernel/sched/fair.c     |  2 --
 kernel/sched/sched.h    |  2 ++
 3 files changed, 24 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 54a0dc1..9bb0e0c 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1151,7 +1151,8 @@ static void update_curr_dl(struct rq *rq)
 {
 	struct task_struct *curr = rq->curr;
 	struct sched_dl_entity *dl_se = &curr->dl;
-	u64 delta_exec;
+	u64 delta_exec, scaled_delta_exec;
+	int cpu = cpu_of(rq);
 
 	if (!dl_task(curr) || !on_dl_rq(dl_se))
 		return;
@@ -1185,9 +1186,26 @@ static void update_curr_dl(struct rq *rq)
 	if (dl_entity_is_special(dl_se))
 		return;
 
-	if (unlikely(dl_se->flags & SCHED_FLAG_RECLAIM))
-		delta_exec = grub_reclaim(delta_exec, rq, &curr->dl);
-	dl_se->runtime -= delta_exec;
+	/*
+	 * For tasks that participate in GRUB, we implement GRUB-PA: the
+	 * spare reclaimed bandwidth is used to clock down frequency.
+	 *
+	 * For the others, we still need to scale reservation parameters
+	 * according to current frequency and CPU maximum capacity.
+	 */
+	if (unlikely(dl_se->flags & SCHED_FLAG_RECLAIM)) {
+		scaled_delta_exec = grub_reclaim(delta_exec,
+						 rq,
+						 &curr->dl);
+	} else {
+		unsigned long scale_freq = arch_scale_freq_capacity(cpu);
+		unsigned long scale_cpu = arch_scale_cpu_capacity(NULL, cpu);
+
+		scaled_delta_exec = cap_scale(delta_exec, scale_freq);
+		scaled_delta_exec = cap_scale(scaled_delta_exec, scale_cpu);
+	}
+
+	dl_se->runtime -= scaled_delta_exec;
 
 throttle:
 	if (dl_runtime_exceeded(dl_se) || dl_se->dl_yielded) {
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1485975..1070803 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3089,8 +3089,6 @@ static u32 __accumulate_pelt_segments(u64 periods, u32 d1, u32 d3)
 	return c1 + c2 + c3;
 }
 
-#define cap_scale(v, s) ((v)*(s) >> SCHED_CAPACITY_SHIFT)
-
 /*
  * Accumulate the three separate parts of the sum; d1 the remainder
  * of the last (incomplete) period, d2 the span of full periods and d3
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index e122c89..2e95505 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -156,6 +156,8 @@ static inline int task_has_dl_policy(struct task_struct *p)
 	return dl_policy(p->policy);
 }
 
+#define cap_scale(v, s) ((v)*(s) >> SCHED_CAPACITY_SHIFT)
+
 /*
  * !! For sched_setattr_nocheck() (kernel) only !!
  *
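
[ Editor's note: for illustration only, not part of the patch. The sketch
  below is a hypothetical standalone userspace program showing what the two
  cap_scale() applications in the !SCHED_FLAG_RECLAIM path do to delta_exec.
  The delta_exec value and the capacity factors (512, 768) are made-up
  assumptions; SCHED_CAPACITY_SHIFT mirrors the kernel's fixed-point shift
  of 10, i.e. 1024 means full capacity / maximum frequency. ]

	/*
	 * Hypothetical sketch (assumed values, ordinary userspace C, not
	 * kernel code): how scaled_delta_exec is derived from delta_exec
	 * when running below maximum frequency on a mid-size core.
	 */
	#include <stdio.h>
	#include <stdint.h>

	#define SCHED_CAPACITY_SHIFT	10
	#define SCHED_CAPACITY_SCALE	(1UL << SCHED_CAPACITY_SHIFT)

	#define cap_scale(v, s)	((v)*(s) >> SCHED_CAPACITY_SHIFT)

	int main(void)
	{
		uint64_t delta_exec = 1000000;	/* 1 ms of wall-clock runtime, in ns (assumed) */
		unsigned long scale_freq = 512;	/* assume CPU runs at half of max frequency */
		unsigned long scale_cpu = 768;	/* assume a core with 75% of max capacity */

		uint64_t scaled_delta_exec = cap_scale(delta_exec, scale_freq);

		scaled_delta_exec = cap_scale(scaled_delta_exec, scale_cpu);

		/* 1000000 * 512/1024 * 768/1024 = 375000 ns charged to the budget */
		printf("scaled_delta_exec = %llu ns\n",
		       (unsigned long long)scaled_delta_exec);
		return 0;
	}

[ In this made-up case, 1 ms of wall-clock execution at half frequency on a
  75%-capacity core charges only 375 us against dl_se->runtime, which is how
  a runtime budget profiled at maximum frequency on the biggest core remains
  meaningful on slower CPUs and frequencies. ]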