Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
        id S932863AbdHYNlL (ORCPT );
        Fri, 25 Aug 2017 09:41:11 -0400
Received: from mail-it0-f52.google.com ([209.85.214.52]:38468 "EHLO
        mail-it0-f52.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
        with ESMTP id S932849AbdHYNlI (ORCPT );
        Fri, 25 Aug 2017 09:41:08 -0400
MIME-Version: 1.0
In-Reply-To: <20170825101632.28065-2-brendan.jackman@arm.com>
References: <20170825101632.28065-1-brendan.jackman@arm.com>
        <20170825101632.28065-2-brendan.jackman@arm.com>
From: Vincent Guittot
Date: Fri, 25 Aug 2017 15:40:47 +0200
Message-ID:
Subject: Re: [PATCH v2 1/5] sched/fair: Move select_task_rq_fair slow-path into its own function
To: Brendan Jackman
Cc: Ingo Molnar , Peter Zijlstra , linux-kernel , Joel Fernandes ,
        Andres Oportus , Dietmar Eggemann , Josef Bacik , Morten Rasmussen
Content-Type: text/plain; charset="UTF-8"
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Length: 5400
Lines: 140

On 25 August 2017 at 12:16, Brendan Jackman wrote:
> In preparation for changes that would otherwise require adding a new
> level of indentation to the while(sd) loop, create a new function
> find_idlest_cpu which contains this loop, and rename the existing
> find_idlest_cpu to find_idlest_group_cpu.
>
> Code inside the while(sd) loop is unchanged. @new_cpu is added as a
> variable in the new function, with the same initial value as the
> @new_cpu in select_task_rq_fair.
>
> Suggested-by: Peter Zijlstra
> Signed-off-by: Brendan Jackman
> Cc: Dietmar Eggemann
> Cc: Vincent Guittot
> Cc: Josef Bacik
> Cc: Ingo Molnar
> Cc: Morten Rasmussen
> Cc: Peter Zijlstra

Reviewed-by: Vincent Guittot

>
> squash! sched/fair: Move select_task_rq_fair slow-path into its own function
> ---
>  kernel/sched/fair.c | 83 +++++++++++++++++++++++++++++++----------------------
>  1 file changed, 48 insertions(+), 35 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index c95880e216f6..f6e277c65235 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5509,10 +5509,10 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
>  }
>
>  /*
> - * find_idlest_cpu - find the idlest cpu among the cpus in group.
> + * find_idlest_group_cpu - find the idlest cpu among the cpus in group.
>   */
>  static int
> -find_idlest_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
> +find_idlest_group_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
>  {
>          unsigned long load, min_load = ULONG_MAX;
>          unsigned int min_exit_latency = UINT_MAX;
> @@ -5561,6 +5561,50 @@ find_idlest_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
>          return shallowest_idle_cpu != -1 ? shallowest_idle_cpu : least_loaded_cpu;
>  }
>
> +static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p,
> +                                  int cpu, int prev_cpu, int sd_flag)
> +{
> +        int new_cpu = prev_cpu;
> +
> +        while (sd) {
> +                struct sched_group *group;
> +                struct sched_domain *tmp;
> +                int weight;
> +
> +                if (!(sd->flags & sd_flag)) {
> +                        sd = sd->child;
> +                        continue;
> +                }
> +
> +                group = find_idlest_group(sd, p, cpu, sd_flag);
> +                if (!group) {
> +                        sd = sd->child;
> +                        continue;
> +                }
> +
> +                new_cpu = find_idlest_group_cpu(group, p, cpu);
> +                if (new_cpu == -1 || new_cpu == cpu) {
> +                        /* Now try balancing at a lower domain level of cpu */
> +                        sd = sd->child;
> +                        continue;
> +                }
> +
> +                /* Now try balancing at a lower domain level of new_cpu */
> +                cpu = new_cpu;
> +                weight = sd->span_weight;
> +                sd = NULL;
> +                for_each_domain(cpu, tmp) {
> +                        if (weight <= tmp->span_weight)
> +                                break;
> +                        if (tmp->flags & sd_flag)
> +                                sd = tmp;
> +                }
> +                /* while loop will break here if sd == NULL */
> +        }
> +
> +        return new_cpu;
> +}
> +
>  #ifdef CONFIG_SCHED_SMT
>
>  static inline void set_idle_cores(int cpu, int val)
> @@ -5918,39 +5962,8 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_f
>                  if (sd_flag & SD_BALANCE_WAKE) /* XXX always ? */
>                          new_cpu = select_idle_sibling(p, prev_cpu, new_cpu);
>
> -        } else while (sd) {
> -                struct sched_group *group;
> -                int weight;
> -
> -                if (!(sd->flags & sd_flag)) {
> -                        sd = sd->child;
> -                        continue;
> -                }
> -
> -                group = find_idlest_group(sd, p, cpu, sd_flag);
> -                if (!group) {
> -                        sd = sd->child;
> -                        continue;
> -                }
> -
> -                new_cpu = find_idlest_cpu(group, p, cpu);
> -                if (new_cpu == -1 || new_cpu == cpu) {
> -                        /* Now try balancing at a lower domain level of cpu */
> -                        sd = sd->child;
> -                        continue;
> -                }
> -
> -                /* Now try balancing at a lower domain level of new_cpu */
> -                cpu = new_cpu;
> -                weight = sd->span_weight;
> -                sd = NULL;
> -                for_each_domain(cpu, tmp) {
> -                        if (weight <= tmp->span_weight)
> -                                break;
> -                        if (tmp->flags & sd_flag)
> -                                sd = tmp;
> -                }
> -                /* while loop will break here if sd == NULL */
> +        } else {
> +                new_cpu = find_idlest_cpu(sd, p, cpu, prev_cpu, sd_flag);
>          }
>          rcu_read_unlock();
>
> --
> 2.14.1
>
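For anyone skimming the thread, pieced together from the hunks above the wakeup
path in select_task_rq_fair() ends up reading roughly like the sketch below.
This is only a reader's sketch: the if (!sd) guard around the fast path is
assumed from the surrounding code rather than shown in the quoted context, and
the comments are mine, not part of the patch.

        if (!sd) {
                /* Fast path: look for an idle sibling close to prev_cpu. */
                if (sd_flag & SD_BALANCE_WAKE) /* XXX always ? */
                        new_cpu = select_idle_sibling(p, prev_cpu, new_cpu);
        } else {
                /*
                 * Slow path: walk down the sched_domain hierarchy, picking
                 * the idlest group and then the idlest CPU at each level.
                 */
                new_cpu = find_idlest_cpu(sd, p, cpu, prev_cpu, sd_flag);
        }
        rcu_read_unlock();

Behaviour is unchanged either way: new_cpu still starts out as prev_cpu, it is
just initialised inside the new helper now.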