Date: Sat, 23 Apr 2016 05:57:33 -0700
From: tip-bot for Steve Muckle
To: linux-tip-commits@vger.kernel.org
Cc: dietmar.eggemann@arm.com, rafael@kernel.org, patrick.bellasi@arm.com,
    vincent.guittot@linaro.org, efault@gmx.de, peterz@infradead.org,
    linux-kernel@vger.kernel.org, steve.muckle@linaro.org, mingo@kernel.org,
    hpa@zytor.com, tglx@linutronix.de, smuckle@linaro.org, Juri.Lelli@arm.com,
    morten.rasmussen@arm.com, mturquette@baylibre.com
In-Reply-To: <1458606068-7476-1-git-send-email-smuckle@linaro.org>
References: <1458606068-7476-1-git-send-email-smuckle@linaro.org>
Subject: [tip:sched/core] sched/fair: Move cpufreq hook to update_cfs_rq_load_avg()
Git-Commit-ID: 21e96f88776deead303ecd30a17d1d7c2a1776e3

Commit-ID:  21e96f88776deead303ecd30a17d1d7c2a1776e3
Gitweb:     http://git.kernel.org/tip/21e96f88776deead303ecd30a17d1d7c2a1776e3
Author:     Steve Muckle
AuthorDate: Mon, 21 Mar 2016 17:21:07 -0700
Committer:  Ingo Molnar
CommitDate: Sat, 23 Apr 2016 14:20:35 +0200

sched/fair: Move cpufreq hook to update_cfs_rq_load_avg()

The cpufreq hook should be called whenever the root cfs_rq
utilization changes, so update_cfs_rq_load_avg() is a better place
for it. The current location is not invoked in the enqueue_entity()
or update_blocked_averages() paths.
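To make the call-path argument concrete, below is a stand-alone toy
model in plain user-space C. It is not kernel code: the kernel
function names are reused with invented, heavily simplified
signatures, and the utilization and capacity numbers are made up. It
only demonstrates the structural point of the patch: with the hook
inside update_cfs_rq_load_avg(), every caller of that function
reaches it, whereas the old call site in update_load_avg() left the
enqueue_entity() and update_blocked_averages() paths without a hook
invocation.

#include <stdio.h>

/* Toy stand-ins for the kernel structures; fields and sizes are
 * invented for illustration only. */
struct cfs_rq { unsigned long util_avg; };
struct rq { struct cfs_rq cfs; unsigned long cpu_capacity_orig; };

static void cpufreq_update_util(unsigned long util, unsigned long max)
{
        printf("cpufreq hook: util=%lu max=%lu\n", util, max);
}

static int update_cfs_rq_load_avg(struct rq *rq)
{
        unsigned long max = rq->cpu_capacity_orig;
        unsigned long util = rq->cfs.util_avg;

        /* After the patch the hook lives here, so every caller
         * below reaches it. */
        cpufreq_update_util(util < max ? util : max, max);
        return 1;
}

static void update_load_avg(struct rq *rq)
{
        update_cfs_rq_load_avg(rq);     /* hook fired before and after */
}

static void enqueue_entity(struct rq *rq)
{
        update_cfs_rq_load_avg(rq);     /* hook missed before the patch */
}

static void update_blocked_averages(struct rq *rq)
{
        update_cfs_rq_load_avg(rq);     /* hook missed before the patch */
}

int main(void)
{
        struct rq rq = { .cfs = { .util_avg = 300 },
                         .cpu_capacity_orig = 1024 };

        update_load_avg(&rq);
        enqueue_entity(&rq);
        update_blocked_averages(&rq);
        return 0;
}

Built with gcc, all three calls print the hook line; in the pre-patch
arrangement only the first one would.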
Suggested-by: Vincent Guittot
Signed-off-by: Steve Muckle
Signed-off-by: Peter Zijlstra (Intel)
Cc: Dietmar Eggemann
Cc: Juri Lelli
Cc: Michael Turquette
Cc: Mike Galbraith
Cc: Morten Rasmussen
Cc: Patrick Bellasi
Cc: Peter Zijlstra
Cc: Rafael J. Wysocki
Cc: Thomas Gleixner
Link: http://lkml.kernel.org/r/1458606068-7476-1-git-send-email-smuckle@linaro.org
Signed-off-by: Ingo Molnar
---
 kernel/sched/fair.c | 50 ++++++++++++++++++++++++++------------------------
 1 file changed, 26 insertions(+), 24 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6e371f4..6df80d4 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2878,7 +2878,9 @@ static inline u64 cfs_rq_clock_task(struct cfs_rq *cfs_rq);
 static inline int update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
 {
         struct sched_avg *sa = &cfs_rq->avg;
+        struct rq *rq = rq_of(cfs_rq);
         int decayed, removed = 0;
+        int cpu = cpu_of(rq);
 
         if (atomic_long_read(&cfs_rq->removed_load_avg)) {
                 s64 r = atomic_long_xchg(&cfs_rq->removed_load_avg, 0);
@@ -2893,7 +2895,7 @@ static inline int update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
                 sa->util_sum = max_t(s32, sa->util_sum - r * LOAD_AVG_MAX, 0);
         }
 
-        decayed = __update_load_avg(now, cpu_of(rq_of(cfs_rq)), sa,
+        decayed = __update_load_avg(now, cpu, sa,
                 scale_load_down(cfs_rq->load.weight), cfs_rq->curr != NULL, cfs_rq);
 
 #ifndef CONFIG_64BIT
@@ -2901,28 +2903,6 @@ static inline int update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
         cfs_rq->load_last_update_time_copy = sa->last_update_time;
 #endif
 
-        return decayed || removed;
-}
-
-/* Update task and its cfs_rq load average */
-static inline void update_load_avg(struct sched_entity *se, int update_tg)
-{
-        struct cfs_rq *cfs_rq = cfs_rq_of(se);
-        u64 now = cfs_rq_clock_task(cfs_rq);
-        struct rq *rq = rq_of(cfs_rq);
-        int cpu = cpu_of(rq);
-
-        /*
-         * Track task load average for carrying it to new CPU after migrated, and
-         * track group sched_entity load average for task_h_load calc in migration
-         */
-        __update_load_avg(now, cpu, &se->avg,
-                          se->on_rq * scale_load_down(se->load.weight),
-                          cfs_rq->curr == se, NULL);
-
-        if (update_cfs_rq_load_avg(now, cfs_rq) && update_tg)
-                update_tg_load_avg(cfs_rq, 0);
-
         if (cpu == smp_processor_id() && &rq->cfs == cfs_rq) {
                 unsigned long max = rq->cpu_capacity_orig;
 
@@ -2943,8 +2923,30 @@ static inline void update_load_avg(struct sched_entity *se, int update_tg)
                  * See cpu_util().
                  */
                 cpufreq_update_util(rq_clock(rq),
-                                    min(cfs_rq->avg.util_avg, max), max);
+                                    min(sa->util_avg, max), max);
         }
+
+        return decayed || removed;
+}
+
+/* Update task and its cfs_rq load average */
+static inline void update_load_avg(struct sched_entity *se, int update_tg)
+{
+        struct cfs_rq *cfs_rq = cfs_rq_of(se);
+        u64 now = cfs_rq_clock_task(cfs_rq);
+        struct rq *rq = rq_of(cfs_rq);
+        int cpu = cpu_of(rq);
+
+        /*
+         * Track task load average for carrying it to new CPU after migrated, and
+         * track group sched_entity load average for task_h_load calc in migration
+         */
+        __update_load_avg(now, cpu, &se->avg,
+                          se->on_rq * scale_load_down(se->load.weight),
+                          cfs_rq->curr == se, NULL);
+
+        if (update_cfs_rq_load_avg(now, cfs_rq) && update_tg)
+                update_tg_load_avg(cfs_rq, 0);
 }
 
 static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
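One detail of the moved block worth noting is the clamp that feeds
the hook: min(sa->util_avg, max). As the "See cpu_util()" comment
hints, util_avg can transiently exceed cpu_capacity_orig (for
example right after a task migrates in), so the request handed to
cpufreq is capped at the CPU's original capacity. Below is a minimal
stand-alone sketch of just that clamp; the function name and the
numbers are invented, with 1024 standing in for full capacity.

#include <stdio.h>

/* Toy version of the clamp in update_cfs_rq_load_avg(); names and
 * values are illustrative only. */
static unsigned long capped_util(unsigned long util_avg,
                                 unsigned long cpu_capacity_orig)
{
        /* util_avg may briefly exceed the CPU's original capacity,
         * so never ask cpufreq for more than 100%. */
        return util_avg < cpu_capacity_orig ? util_avg : cpu_capacity_orig;
}

int main(void)
{
        printf("%lu\n", capped_util(300, 1024));        /* prints 300  */
        printf("%lu\n", capped_util(1136, 1024));       /* prints 1024 */
        return 0;
}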