From: Brendan Jackman <brendan.jackman@arm.com>
To: Ingo Molnar, Peter Zijlstra, linux-kernel@vger.kernel.org
Cc: Dietmar Eggemann, Vincent Guittot, Josef Bacik, Morten Rasmussen
Subject: [PATCH v3 1/5] sched/fair: Move select_task_rq_fair slow-path into its own function
Date: Thu, 31 Aug 2017 12:57:58 +0100
Message-Id: <20170831115802.26365-2-brendan.jackman@arm.com>
X-Mailer: git-send-email 2.14.1
In-Reply-To: <20170831115802.26365-1-brendan.jackman@arm.com>
References: <20170831115802.26365-1-brendan.jackman@arm.com>

In preparation for changes that would otherwise require adding a new
level of indentation to the while(sd) loop, create a new function
find_idlest_cpu() which contains this loop, and rename the existing
find_idlest_cpu() to find_idlest_group_cpu().

Code inside the while(sd) loop is unchanged. @new_cpu is added as a
variable in the new function, with the same initial value as the
@new_cpu in select_task_rq_fair().

Suggested-by: Peter Zijlstra
Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
Cc: Dietmar Eggemann
Cc: Vincent Guittot
Cc: Josef Bacik
Cc: Ingo Molnar
Cc: Morten Rasmussen
Cc: Peter Zijlstra
Reviewed-by: Josef Bacik
Reviewed-by: Vincent Guittot
---
 kernel/sched/fair.c | 83 +++++++++++++++++++++++++++++++----------------------
 1 file changed, 48 insertions(+), 35 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c95880e..f6e277c6 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5509,10 +5509,10 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
 }
 
 /*
- * find_idlest_cpu - find the idlest cpu among the cpus in group.
+ * find_idlest_group_cpu - find the idlest cpu among the cpus in group.
  */
 static int
-find_idlest_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
+find_idlest_group_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
 {
 	unsigned long load, min_load = ULONG_MAX;
 	unsigned int min_exit_latency = UINT_MAX;
@@ -5561,6 +5561,50 @@ find_idlest_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
 	return shallowest_idle_cpu != -1 ? shallowest_idle_cpu : least_loaded_cpu;
 }
 
+static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p,
+				  int cpu, int prev_cpu, int sd_flag)
+{
+	int new_cpu = prev_cpu;
+
+	while (sd) {
+		struct sched_group *group;
+		struct sched_domain *tmp;
+		int weight;
+
+		if (!(sd->flags & sd_flag)) {
+			sd = sd->child;
+			continue;
+		}
+
+		group = find_idlest_group(sd, p, cpu, sd_flag);
+		if (!group) {
+			sd = sd->child;
+			continue;
+		}
+
+		new_cpu = find_idlest_group_cpu(group, p, cpu);
+		if (new_cpu == -1 || new_cpu == cpu) {
+			/* Now try balancing at a lower domain level of cpu */
+			sd = sd->child;
+			continue;
+		}
+
+		/* Now try balancing at a lower domain level of new_cpu */
+		cpu = new_cpu;
+		weight = sd->span_weight;
+		sd = NULL;
+		for_each_domain(cpu, tmp) {
+			if (weight <= tmp->span_weight)
+				break;
+			if (tmp->flags & sd_flag)
+				sd = tmp;
+		}
+		/* while loop will break here if sd == NULL */
+	}
+
+	return new_cpu;
+}
+
 #ifdef CONFIG_SCHED_SMT
 
 static inline void set_idle_cores(int cpu, int val)
@@ -5918,39 +5962,8 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_f
 		if (sd_flag & SD_BALANCE_WAKE) /* XXX always ? */
 			new_cpu = select_idle_sibling(p, prev_cpu, new_cpu);
 
-	} else while (sd) {
-		struct sched_group *group;
-		int weight;
-
-		if (!(sd->flags & sd_flag)) {
-			sd = sd->child;
-			continue;
-		}
-
-		group = find_idlest_group(sd, p, cpu, sd_flag);
-		if (!group) {
-			sd = sd->child;
-			continue;
-		}
-
-		new_cpu = find_idlest_cpu(group, p, cpu);
-		if (new_cpu == -1 || new_cpu == cpu) {
-			/* Now try balancing at a lower domain level of cpu */
-			sd = sd->child;
-			continue;
-		}
-
-		/* Now try balancing at a lower domain level of new_cpu */
-		cpu = new_cpu;
-		weight = sd->span_weight;
-		sd = NULL;
-		for_each_domain(cpu, tmp) {
-			if (weight <= tmp->span_weight)
-				break;
-			if (tmp->flags & sd_flag)
-				sd = tmp;
-		}
-		/* while loop will break here if sd == NULL */
+	} else {
+		new_cpu = find_idlest_cpu(sd, p, cpu, prev_cpu, sd_flag);
 	}
 
 	rcu_read_unlock();
-- 
2.7.4
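
[Editor's note: for readers who want to experiment with the slow-path
selection logic outside the kernel tree, the following standalone C
sketch models the same top-down walk that find_idlest_cpu() performs:
at each domain level find the idlest group, pick the idlest CPU inside
it, then descend and refine within that group. Everything here (the
toy struct domain type, the made-up per-CPU loads, the two-level
topology in main()) is a simplified stand-in invented for
illustration, not the kernel's actual sched_domain/sched_group API.]

	#include <stdio.h>

	#define NCPUS 4

	/* Toy per-CPU load figures; the kernel derives these from runqueues. */
	static unsigned long cpu_load[NCPUS] = { 70, 10, 40, 90 };

	/*
	 * A domain spans CPUs [first, first+size) and is split into two
	 * groups (its halves); child[g] is the lower-level domain covering
	 * group g, or NULL at the bottom of the hierarchy.
	 */
	struct domain {
		int first, size;
		struct domain *child[2];
	};

	/* Sum of toy loads over CPUs [first, first+size). */
	static unsigned long span_load(int first, int size)
	{
		unsigned long sum = 0;
		for (int c = first; c < first + size; c++)
			sum += cpu_load[c];
		return sum;
	}

	/*
	 * Modelled on the slow path: at each level pick the idlest group
	 * (here simply the less loaded half), then the idlest CPU within
	 * it, then descend to the lower domain spanning that CPU.
	 */
	static int find_idlest_cpu(struct domain *sd, int prev_cpu)
	{
		int new_cpu = prev_cpu;

		while (sd) {
			int half = sd->size / 2;
			/* stand-in for find_idlest_group() */
			int g = span_load(sd->first, half) <=
				span_load(sd->first + half, half) ? 0 : 1;
			int gfirst = sd->first + g * half;

			/* stand-in for find_idlest_group_cpu() */
			new_cpu = gfirst;
			for (int c = gfirst; c < gfirst + half; c++)
				if (cpu_load[c] < cpu_load[new_cpu])
					new_cpu = c;

			sd = sd->child[g];	/* refine at the next level down */
		}
		return new_cpu;
	}

	int main(void)
	{
		/* Two-level topology: pairs {0,1} and {2,3} under one top domain. */
		struct domain pair0 = { 0, 2, { NULL, NULL } };
		struct domain pair1 = { 2, 2, { NULL, NULL } };
		struct domain top   = { 0, 4, { &pair0, &pair1 } };

		printf("idlest cpu: %d\n", find_idlest_cpu(&top, 0));
		return 0;
	}

Compiled with e.g. "gcc -std=c99", this prints "idlest cpu: 1" for the
sample loads above: the walk first chooses the {0,1} pair (combined
load 80 vs. 130), then CPU 1 within it, mirroring how the kernel's
slow path settles on the least-loaded CPU one domain level at a time.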