From: Vincent Guittot
Date: Tue, 22 Aug 2017 09:48:38 +0200
Subject: Re: [PATCH 2/2] sched/fair: Fix use of NULL with find_idlest_group
To: Brendan Jackman
Cc: linux-kernel, Joel Fernandes, Andres Oportus, Ingo Molnar, Morten Rasmussen, Peter Zijlstra, Dietmar Eggemann
In-Reply-To: <20170821152128.14418-3-brendan.jackman@arm.com>
References: <20170821152128.14418-1-brendan.jackman@arm.com> <20170821152128.14418-3-brendan.jackman@arm.com>

On 21 August 2017 at 17:21, Brendan Jackman wrote:
> The current use of returning NULL from find_idlest_group is broken in

[snip]

> ---
>  kernel/sched/fair.c | 34 +++++++++++++++++++---------------
>  1 file changed, 19 insertions(+), 15 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 64618d768546..7cb5ed719cf9 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5382,26 +5382,29 @@ static unsigned long capacity_spare_wake(int cpu, struct task_struct *p)
>   * domain.
>   */
>  static struct sched_group *
> -find_idlest_group(struct sched_domain *sd, struct task_struct *p,
> -                 int this_cpu, int sd_flag)
> +find_idlest_group(struct sched_domain *sd, struct task_struct *p, int sd_flag)
>  {
>         struct sched_group *idlest = NULL, *group = sd->groups;
> +       struct sched_group *local_group = sd->groups;
>         struct sched_group *most_spare_sg = NULL;
> -       unsigned long min_runnable_load = ULONG_MAX, this_runnable_load = 0;
> -       unsigned long min_avg_load = ULONG_MAX, this_avg_load = 0;
> +       unsigned long min_runnable_load = ULONG_MAX, this_runnable_load = ULONG_MAX;
> +       unsigned long min_avg_load = ULONG_MAX, this_avg_load = ULONG_MAX;

Good catch for this_runnable_load: there is indeed a problem if the local group is not allowed.

>         unsigned long most_spare = 0, this_spare = 0;
>         int load_idx = sd->forkexec_idx;
>         int imbalance_scale = 100 + (sd->imbalance_pct-100)/2;
>         unsigned long imbalance = scale_load_down(NICE_0_LOAD) *
>                                 (sd->imbalance_pct-100) / 100;
>
> +       if (!cpumask_intersects(sched_domain_span(sd), &p->cpus_allowed))
> +               return NULL;

You should move this test up a level, into the while (sd) loop, or even before the while (sd) loop: in that case there is no need to enter the loop at all. There is no point in calling find_idlest_group() when no CPU is allowed, because there is no group to find.
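The suggestion can be sketched with a small, self-contained toy model (invented names such as toy_domain and toy_select_cpu; this is an illustration of the control flow, not kernel code). Domain spans and the task's cpus_allowed are plain bitmasks, and the affinity test is done once, before walking the hierarchy, instead of inside find_idlest_group() on every iteration:

```c
/*
 * Toy model (not kernel code): hoist the "does the task's affinity
 * intersect the domain span at all?" test out of the per-level helper.
 */
struct toy_domain {
        unsigned long span;           /* bitmask of CPUs in this level */
        struct toy_domain *child;     /* next lower level, or NULL */
};

static int toy_select_cpu(struct toy_domain *sd, unsigned long cpus_allowed,
                          int fallback_cpu)
{
        /*
         * Hoisted test: if no allowed CPU exists anywhere in the top-level
         * span, there is no point entering the loop at all.
         */
        if (!(sd->span & cpus_allowed))
                return fallback_cpu;

        /* Walk down only while the child level still has an allowed CPU. */
        while (sd->child && (sd->child->span & cpus_allowed))
                sd = sd->child;

        return __builtin_ctzl(sd->span & cpus_allowed); /* lowest allowed CPU */
}
```

The early return replaces a cpumask_intersects() check that would otherwise run on every level of the hierarchy.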
Then you can keep the current behavior of find_idlest_group(), which returns NULL when there is no better group than the local one.

> +
>         if (sd_flag & SD_BALANCE_WAKE)
>                 load_idx = sd->wake_idx;
>
>         do {
>                 unsigned long load, avg_load, runnable_load;
>                 unsigned long spare_cap, max_spare_cap;
> -               int local_group;
> +               bool group_is_local = group == local_group;
>                 int i;
>
>                 /* Skip over this group if it has no CPUs allowed */

[snip]

> @@ -5927,8 +5927,12 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_f
>                         continue;
>                 }
>
> -               group = find_idlest_group(sd, p, cpu, sd_flag);
> +               group = find_idlest_group(sd, p, sd_flag);
>                 if (!group) {
> +                       break;
> +               } else if (group == sd->groups) {
> +                       new_cpu = cpu;

The "new_cpu = cpu" assignment above should be moved before the while (sd) loop, as in Peter's proposal.

> +                       /* Now try balancing at a lower domain level of cpu */
>                         sd = sd->child;
>                         continue;
>                 }
> --
> 2.14.1
>
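The point about initialising new_cpu before the loop can be sketched with another toy model (invented names; here a best_remote_cpu of -1 stands in for find_idlest_group() returning NULL, meaning "the local group is best"). With new_cpu defaulting to the waking cpu before the loop, a NULL result at every level simply leaves new_cpu untouched, with no special case inside the loop:

```c
#include <stddef.h>

struct toy_sd {
        int best_remote_cpu;          /* -1 models a NULL return: local is best */
        struct toy_sd *child;         /* next lower domain level, or NULL */
};

/*
 * Toy select_task_rq (not kernel code): new_cpu is set once, before the
 * while (sd) loop, so "no better group than the local one" needs no
 * assignment inside the loop at all.
 */
static int toy_select_task_rq(struct toy_sd *sd, int cpu)
{
        int new_cpu = cpu;            /* set before the loop, as suggested */

        while (sd != NULL) {
                if (sd->best_remote_cpu < 0) {
                        sd = sd->child;  /* local best: go down a level */
                        continue;
                }
                new_cpu = sd->best_remote_cpu;
                sd = sd->child;
        }
        return new_cpu;
}
```

This mirrors the shape of the caller after the suggested change: the NULL branch only descends, and the fallback to the local cpu comes for free from the initialisation.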