From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Tyler Baicar,
    Jeffrey Hugo, Peter Zijlstra, Austin Christ, Dietmar Eggemann,
    Linus Torvalds, Peter Zijlstra, Thomas Gleixner, Timur Tabi,
    Ingo Molnar, Mel Gorman
Subject: [PATCH 4.12 179/196] sched/fair: Fix load_balance() affinity redo path
Date: Tue, 25 Jul 2017 12:22:58 -0700
Message-Id: <20170725192054.817985519@linuxfoundation.org>
X-Mailer: git-send-email 2.13.3
In-Reply-To: <20170725192046.422343510@linuxfoundation.org>
References: <20170725192046.422343510@linuxfoundation.org>
User-Agent: quilt/0.65
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

4.12-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Jeffrey Hugo

commit 65a4433aebe36c8c6abeb69b99ef00274b971c6c upstream.

If load_balance() fails to migrate any tasks because all tasks were
affined, load_balance() removes the source CPU from consideration and
attempts to redo and balance among the new subset of CPUs.

There is a bug in this code path where the algorithm considers all active
CPUs in the system (minus the source that was just masked out).  This is
not valid for two reasons: some active CPUs may not be in the current
scheduling domain, and one of the active CPUs is dst_cpu.  These CPUs
should not be considered, as we cannot pull load from them.

Instead of failing out of load_balance(), we may end up redoing the search
with no valid CPUs and incorrectly concluding the domain is balanced.
Additionally, if the group_imbalance flag was just set, it may also be
incorrectly unset, thus the flag will not be seen by other CPUs in future
load_balance() runs as that algorithm intends.

Fix the check by removing CPUs not in the current domain and the dst_cpu
from consideration, thus limiting the evaluation to valid remaining CPUs
from which load might be migrated.

Co-authored-by: Austin Christ
Co-authored-by: Dietmar Eggemann
Tested-by: Tyler Baicar
Signed-off-by: Jeffrey Hugo
Acked-by: Peter Zijlstra
Cc: Austin Christ
Cc: Dietmar Eggemann
Cc: Linus Torvalds
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Timur Tabi
Link: http://lkml.kernel.org/r/1496863138-11322-2-git-send-email-jhugo@codeaurora.org
Signed-off-by: Ingo Molnar
Signed-off-by: Mel Gorman
Signed-off-by: Greg Kroah-Hartman

---
 kernel/sched/fair.c |   32 ++++++++++++++++++++------------
 1 file changed, 20 insertions(+), 12 deletions(-)

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6619,10 +6619,10 @@ int can_migrate_task(struct task_struct
 		 * our sched_group. We may want to revisit it if we couldn't
 		 * meet load balance goals by pulling other tasks on src_cpu.
 		 *
-		 * Also avoid computing new_dst_cpu if we have already computed
-		 * one in current iteration.
+		 * Avoid computing new_dst_cpu for NEWLY_IDLE or if we have
+		 * already computed one in current iteration.
 		 */
-		if (!env->dst_grpmask || (env->flags & LBF_DST_PINNED))
+		if (env->idle == CPU_NEWLY_IDLE || (env->flags & LBF_DST_PINNED))
 			return 0;
 
 		/* Prevent to re-select dst_cpu via env's cpus */
@@ -7973,14 +7973,7 @@ static int load_balance(int this_cpu, st
 		.tasks		= LIST_HEAD_INIT(env.tasks),
 	};
 
-	/*
-	 * For NEWLY_IDLE load_balancing, we don't need to consider
-	 * other cpus in our group
-	 */
-	if (idle == CPU_NEWLY_IDLE)
-		env.dst_grpmask = NULL;
-
-	cpumask_copy(cpus, cpu_active_mask);
+	cpumask_and(cpus, sched_domain_span(sd), cpu_active_mask);
 
 	schedstat_inc(sd->lb_count[idle]);
 
@@ -8102,7 +8095,15 @@ more_balance:
 		/* All tasks on this runqueue were pinned by CPU affinity */
 		if (unlikely(env.flags & LBF_ALL_PINNED)) {
 			cpumask_clear_cpu(cpu_of(busiest), cpus);
-			if (!cpumask_empty(cpus)) {
+			/*
+			 * Attempting to continue load balancing at the current
+			 * sched_domain level only makes sense if there are
+			 * active CPUs remaining as possible busiest CPUs to
+			 * pull load from which are not contained within the
+			 * destination group that is receiving any migrated
+			 * load.
+			 */
+			if (!cpumask_subset(cpus, env.dst_grpmask)) {
 				env.loop = 0;
 				env.loop_break = sched_nr_migrate_break;
 				goto redo;
@@ -8398,6 +8399,13 @@ static int active_load_balance_cpu_stop(
 			.src_cpu	= busiest_rq->cpu,
 			.src_rq		= busiest_rq,
 			.idle		= CPU_IDLE,
+			/*
+			 * can_migrate_task() doesn't need to compute new_dst_cpu
+			 * for active balancing. Since we have CPU_IDLE, but no
+			 * @dst_grpmask we need to make that test go away with lying
+			 * about DST_PINNED.
+			 */
+			.flags		= LBF_DST_PINNED,
 		};
 
 		schedstat_inc(sd->alb_count);
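
To make the effect of the new redo condition concrete, here is a minimal
standalone sketch (plain C, not kernel code) that models cpumasks as
ordinary bit masks; the CPU numbers and mask values below are invented
purely for illustration.  It contrasts the old test, !cpumask_empty(cpus)
after cpumask_copy(cpus, cpu_active_mask), with the new test,
!cpumask_subset(cpus, env.dst_grpmask) after
cpumask_and(cpus, sched_domain_span(sd), cpu_active_mask):

#include <stdio.h>

/*
 * Illustration only: cpumasks modelled as plain bit masks.  Assume a
 * sched_domain spanning CPUs 0-2, a destination group of CPUs 0-1, and
 * a busiest (source) CPU 2 whose tasks are all pinned.
 */
int main(void)
{
	unsigned int cpu_active_mask = 0xff;    /* CPUs 0-7 online             */
	unsigned int sd_span         = 0x07;    /* this domain: CPUs 0-2       */
	unsigned int dst_grpmask     = 0x03;    /* destination group: CPUs 0-1 */
	unsigned int busiest_bit     = 1u << 2; /* source CPU 2, just cleared  */

	/* Old code: start from all active CPUs. */
	unsigned int cpus_old = cpu_active_mask & ~busiest_bit;
	/* New code: start from active CPUs inside this sched_domain. */
	unsigned int cpus_new = (sd_span & cpu_active_mask) & ~busiest_bit;

	/*
	 * Old redo test ("any CPU left at all?"): CPUs outside the domain
	 * and the destination-group CPUs keep the mask non-empty, so the
	 * search is redone even though no valid busiest CPU remains.
	 */
	printf("old: redo = %d\n", cpus_old != 0);

	/*
	 * New redo test: only redo if some remaining CPU lies outside the
	 * destination group, i.e. could still be picked as busiest.
	 */
	printf("new: redo = %d\n", (cpus_new & ~dst_grpmask) != 0);

	return 0;
}

With these values the old test still requests a redo (redo = 1), while
the new test correctly gives up (redo = 0), since only destination-group
CPUs remain within the domain and no load can be pulled from them.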