From: Jeffrey Hugo <jhugo@codeaurora.org>
To: Ingo Molnar, Peter Zijlstra, linux-kernel@vger.kernel.org
Cc: Dietmar Eggemann, Austin Christ, Tyler Baicar, Timur Tabi, Jeffrey Hugo
Subject: [PATCH V4 2/2] sched/fair: Remove group imbalance from calculate_imbalance()
Date: Fri, 2 Jun 2017 16:27:12 -0600
Message-Id: <1496442432-330-3-git-send-email-jhugo@codeaurora.org>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1496442432-330-1-git-send-email-jhugo@codeaurora.org>
References: <1496442432-330-1-git-send-email-jhugo@codeaurora.org>
X-Mailing-List: linux-kernel@vger.kernel.org

The group_imbalance path in calculate_imbalance() made sense when it was
added back in 2007 with commit 908a7c1b9b80 ("sched: fix improper load
balance across sched domain") because busiest->load_per_task factored into
the amount of imbalance that was calculated. That is not the case today.

The group_imbalance path can only affect the outcome of
calculate_imbalance() when the average load of the domain is less than the
original busiest->load_per_task. In this case, busiest->load_per_task is
overwritten with the scheduling domain load average. Thus
busiest->load_per_task no longer represents actual load that can be moved.
At the final comparison between env->imbalance and
busiest->load_per_task, the imbalance may be larger than the new
busiest->load_per_task, causing the check to fail on the assumption that
there is a task whose migration would satisfy the imbalance. However,
env->imbalance may still be smaller than the original
busiest->load_per_task, so it is unlikely that such a task actually
exists. calculate_imbalance() would then not choose to run
fix_small_imbalance() when we expect it should. In the worst case, this
can result in idle cpus.

Since the group imbalance path in calculate_imbalance() is at best a NOP
but otherwise harmful, remove it.

Co-authored-by: Austin Christ
Signed-off-by: Jeffrey Hugo
Tested-by: Tyler Baicar
---
 kernel/sched/fair.c | 9 ---------
 1 file changed, 9 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 84255ab..3600713 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7760,15 +7760,6 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 	local = &sds->local_stat;
 	busiest = &sds->busiest_stat;
 
-	if (busiest->group_type == group_imbalanced) {
-		/*
-		 * In the group_imb case we cannot rely on group-wide averages
-		 * to ensure cpu-load equilibrium, look at wider averages. XXX
-		 */
-		busiest->load_per_task =
-			min(busiest->load_per_task, sds->avg_load);
-	}
-
 	/*
 	 * Avg load of busiest sg can be less and avg load of local sg can
 	 * be greater than avg load across all sgs of sd because avg load
-- 
Qualcomm Datacenter Technologies as an affiliate of Qualcomm Technologies,
Inc. Qualcomm Technologies, Inc. is a member of the Code Aurora Forum, a
Linux Foundation Collaborative Project.