From: Dietmar Eggemann
To: linux-kernel@vger.kernel.org, Peter Zijlstra, Quentin Perret, Thara Gopinath
Cc: linux-pm@vger.kernel.org, Morten Rasmussen, Chris Redpath, Patrick Bellasi,
    Valentin Schneider, "Rafael J. Wysocki", Greg Kroah-Hartman, Vincent Guittot,
    Viresh Kumar, Todd Kjos, Joel Fernandes, Juri Lelli, Steve Muckle,
    Eduardo Valentin
Subject: [RFC PATCH v2 4/6] sched/fair: Introduce an energy estimation helper function
Date: Fri, 6 Apr 2018 16:36:05 +0100
Message-Id: <20180406153607.17815-5-dietmar.eggemann@arm.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20180406153607.17815-1-dietmar.eggemann@arm.com>
References: <20180406153607.17815-1-dietmar.eggemann@arm.com>

Wysocki" , Greg Kroah-Hartman , Vincent Guittot , Viresh Kumar , Todd Kjos , Joel Fernandes , Juri Lelli , Steve Muckle , Eduardo Valentin Subject: [RFC PATCH v2 4/6] sched/fair: Introduce an energy estimation helper function Date: Fri, 6 Apr 2018 16:36:05 +0100 Message-Id: <20180406153607.17815-5-dietmar.eggemann@arm.com> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20180406153607.17815-1-dietmar.eggemann@arm.com> References: <20180406153607.17815-1-dietmar.eggemann@arm.com> Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org From: Quentin Perret In preparation for the definition of an energy-aware wakeup path, a helper function is provided to estimate the consequence on system energy when a specific task wakes-up on a specific CPU. compute_energy() estimates the OPPs to be reached by all frequency domains and estimates the consumption of each online CPU according to its energy model and its percentage of busy time. Cc: Ingo Molnar Cc: Peter Zijlstra Signed-off-by: Quentin Perret Signed-off-by: Dietmar Eggemann --- include/linux/sched/energy.h | 20 +++++++++++++ kernel/sched/fair.c | 68 ++++++++++++++++++++++++++++++++++++++++++++ kernel/sched/sched.h | 2 +- 3 files changed, 89 insertions(+), 1 deletion(-) diff --git a/include/linux/sched/energy.h b/include/linux/sched/energy.h index 941071eec013..b4110b145228 100644 --- a/include/linux/sched/energy.h +++ b/include/linux/sched/energy.h @@ -27,6 +27,24 @@ static inline bool sched_energy_enabled(void) return static_branch_unlikely(&sched_energy_present); } +static inline +struct capacity_state *find_cap_state(int cpu, unsigned long util) +{ + struct sched_energy_model *em = *per_cpu_ptr(energy_model, cpu); + struct capacity_state *cs = NULL; + int i; + + util += util >> 2; + + for (i = 0; i < em->nr_cap_states; i++) { + cs = &em->cap_states[i]; + if (cs->cap >= util) + break; + } + + return cs; +} + static inline struct cpumask *freq_domain_span(struct freq_domain *fd) { return &fd->span; @@ -42,6 +60,8 @@ struct freq_domain; static inline bool sched_energy_enabled(void) { return false; } static inline struct cpumask *freq_domain_span(struct freq_domain *fd) { return NULL; } +static inline struct capacity_state +*find_cap_state(int cpu, unsigned long util) { return NULL; } static inline void init_sched_energy(void) { } #define for_each_freq_domain(fdom) for (; fdom; fdom = NULL) #endif diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index 6960e5ef3c14..8cb9fb04fff2 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -6633,6 +6633,74 @@ static int wake_cap(struct task_struct *p, int cpu, int prev_cpu) } /* + * Returns the util of "cpu" if "p" wakes up on "dst_cpu". 
 include/linux/sched/energy.h | 20 +++++++++++++
 kernel/sched/fair.c          | 68 ++++++++++++++++++++++++++++++++++++++++++++
 kernel/sched/sched.h         |  2 +-
 3 files changed, 89 insertions(+), 1 deletion(-)

diff --git a/include/linux/sched/energy.h b/include/linux/sched/energy.h
index 941071eec013..b4110b145228 100644
--- a/include/linux/sched/energy.h
+++ b/include/linux/sched/energy.h
@@ -27,6 +27,24 @@ static inline bool sched_energy_enabled(void)
 	return static_branch_unlikely(&sched_energy_present);
 }
 
+static inline
+struct capacity_state *find_cap_state(int cpu, unsigned long util)
+{
+	struct sched_energy_model *em = *per_cpu_ptr(energy_model, cpu);
+	struct capacity_state *cs = NULL;
+	int i;
+
+	util += util >> 2;
+
+	for (i = 0; i < em->nr_cap_states; i++) {
+		cs = &em->cap_states[i];
+		if (cs->cap >= util)
+			break;
+	}
+
+	return cs;
+}
+
 static inline struct cpumask *freq_domain_span(struct freq_domain *fd)
 {
 	return &fd->span;
@@ -42,6 +60,8 @@ struct freq_domain;
 static inline bool sched_energy_enabled(void) { return false; }
 static inline struct cpumask
 *freq_domain_span(struct freq_domain *fd) { return NULL; }
+static inline struct capacity_state
+*find_cap_state(int cpu, unsigned long util) { return NULL; }
 static inline void init_sched_energy(void) { }
 #define for_each_freq_domain(fdom) for (; fdom; fdom = NULL)
 #endif
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6960e5ef3c14..8cb9fb04fff2 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6633,6 +6633,74 @@ static int wake_cap(struct task_struct *p, int cpu, int prev_cpu)
 }
 
 /*
+ * Returns the util of "cpu" if "p" wakes up on "dst_cpu".
+ */
+static unsigned long cpu_util_next(int cpu, struct task_struct *p, int dst_cpu)
+{
+	unsigned long util, util_est;
+	struct cfs_rq *cfs_rq;
+
+	/* Task is where it should be, or has no impact on cpu */
+	if ((task_cpu(p) == dst_cpu) || (cpu != task_cpu(p) && cpu != dst_cpu))
+		return cpu_util(cpu);
+
+	cfs_rq = &cpu_rq(cpu)->cfs;
+	util = READ_ONCE(cfs_rq->avg.util_avg);
+
+	if (dst_cpu == cpu)
+		util += task_util(p);
+	else
+		util = max_t(long, util - task_util(p), 0);
+
+	if (sched_feat(UTIL_EST)) {
+		util_est = READ_ONCE(cfs_rq->avg.util_est.enqueued);
+		if (dst_cpu == cpu)
+			util_est += _task_util_est(p);
+		else
+			util_est = max_t(long, util_est - _task_util_est(p), 0);
+		util = max(util, util_est);
+	}
+
+	return min_t(unsigned long, util, capacity_orig_of(cpu));
+}
+
+/*
+ * Estimates the system level energy assuming that p wakes up on dst_cpu.
+ *
+ * compute_energy() is safe to call only if an energy model is available for
+ * the platform, which is when sched_energy_enabled() is true.
+ */
+static unsigned long compute_energy(struct task_struct *p, int dst_cpu)
+{
+	unsigned long util, max_util, sum_util;
+	struct capacity_state *cs;
+	unsigned long energy = 0;
+	struct freq_domain *fd;
+	int cpu;
+
+	for_each_freq_domain(fd) {
+		max_util = sum_util = 0;
+		for_each_cpu_and(cpu, freq_domain_span(fd), cpu_online_mask) {
+			util = cpu_util_next(cpu, p, dst_cpu);
+			util += cpu_util_dl(cpu_rq(cpu));
+			max_util = max(util, max_util);
+			sum_util += util;
+		}
+
+		/*
+		 * Here we assume that the capacity states of CPUs belonging to
+		 * the same frequency domain are shared. Hence, we look at the
+		 * capacity state of the first CPU and re-use it for all.
+		 */
+		cpu = cpumask_first(freq_domain_span(fd));
+		cs = find_cap_state(cpu, max_util);
+		energy += cs->power * sum_util / cs->cap;
+	}
+
+	return energy;
+}
+
+/*
  * select_task_rq_fair: Select target runqueue for the waking task in domains
  * that have the 'sd_flag' flag set. In practice, this is SD_BALANCE_WAKE,
  * SD_BALANCE_FORK, or SD_BALANCE_EXEC.
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 5d552c0d7109..6eb38f41d5d9 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2156,7 +2156,7 @@ static inline void cpufreq_update_util(struct rq *rq, unsigned int flags) {}
 # define arch_scale_freq_invariant()	false
 #endif
 
-#ifdef CONFIG_CPU_FREQ_GOV_SCHEDUTIL
+#ifdef CONFIG_SMP
 static inline unsigned long cpu_util_dl(struct rq *rq)
 {
 	return (rq->dl.running_bw * SCHED_CAPACITY_SCALE) >> BW_SHIFT;
-- 
2.11.0
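
For reference, the kind of caller this helper is written for could look
roughly like the sketch below. This is only an illustration of how
compute_energy() might be used by an energy-aware wakeup path; the
function name and its shape are made up for this example and are not
part of this patch (the actual integration is left to a later patch in
the series).

	static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
	{
		unsigned long best_energy, energy;
		int best_cpu = prev_cpu;
		int cpu;

		/* compute_energy() may only be used when an energy model is present */
		if (!sched_energy_enabled())
			return prev_cpu;

		best_energy = compute_energy(p, prev_cpu);

		/* Pick the online CPU with the lowest estimated system energy */
		for_each_online_cpu(cpu) {
			if (cpu == prev_cpu)
				continue;

			energy = compute_energy(p, cpu);
			if (energy < best_energy) {
				best_energy = energy;
				best_cpu = cpu;
			}
		}

		return best_cpu;
	}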