From: Josef Bacik
To: mingo@redhat.com, peterz@infradead.org, linux-kernel@vger.kernel.org,
	umgwanakikbuti@gmail.com, tj@kernel.org, kernel-team@fb.com
Cc: Josef Bacik
Subject: [PATCH 3/7] sched/fair: fix definitions of effective load
Date: Fri, 14 Jul 2017 13:21:00 +0000
Message-Id: <1500038464-8742-4-git-send-email-josef@toxicpanda.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1500038464-8742-1-git-send-email-josef@toxicpanda.com>
References: <1500038464-8742-1-git-send-email-josef@toxicpanda.com>

From: Josef Bacik

It appears as though the definitions of 'this_eff_load' and
'prev_eff_load' have been reversed: each one is scaled by the wrong
CPU's capacity.  In addition, the imbalance percentage should be
applied to the current CPU's side, so that a high enough load imbalance
threshold must be met before we justify moving the task.

Signed-off-by: Josef Bacik
---
 kernel/sched/fair.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 5d4489e..d958634 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5679,11 +5679,11 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p,
 	 * Otherwise check if either cpus are near enough in load to allow this
 	 * task to be woken on this_cpu.
 	 */
-	this_eff_load = 100;
-	this_eff_load *= capacity_of(prev_cpu);
+	prev_eff_load = 100;
+	prev_eff_load *= capacity_of(prev_cpu);
 
-	prev_eff_load = 100 + (sd->imbalance_pct - 100) / 2;
-	prev_eff_load *= capacity_of(this_cpu);
+	this_eff_load = 100 + (sd->imbalance_pct - 100) / 2;
+	this_eff_load *= capacity_of(this_cpu);
 
 	if (this_load > 0) {
 		this_eff_load *= this_load +
-- 
2.9.3
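
For illustration, here is a minimal userspace sketch of the comparison
as it stands after this patch.  This is a toy model under stated
assumptions, not the kernel code: it omits the effective_load()
group-weighting terms, and the helper name plus the capacity, load, and
imbalance_pct figures below are invented for the example.

#include <stdio.h>

/*
 * Toy model of the wake_affine() comparison after this patch.
 * Not kernel code: wake_on_this_cpu() is a made-up helper, and the
 * effective_load() adjustments are left out for brevity.
 */
static int wake_on_this_cpu(unsigned long this_load,
			    unsigned long prev_load,
			    unsigned long this_capacity,
			    unsigned long prev_capacity,
			    unsigned int imbalance_pct)
{
	/* this_cpu's side now carries the imbalance margin */
	unsigned long this_eff_load = 100 + (imbalance_pct - 100) / 2;
	unsigned long prev_eff_load = 100;

	/* each side is scaled by its own CPU's capacity */
	this_eff_load *= this_capacity;
	prev_eff_load *= prev_capacity;

	this_eff_load *= this_load;
	prev_eff_load *= prev_load;

	/* affine wakeup only if prev_cpu's load clears the margin */
	return this_eff_load <= prev_eff_load;
}

int main(void)
{
	/*
	 * Equal capacities, imbalance_pct = 125 (a common sched domain
	 * default), so the margin is 100 + 25/2 = 112.  prev_cpu 10%
	 * busier is not enough (prints 0); 20% busier clears the
	 * threshold (prints 1).
	 */
	printf("%d\n", wake_on_this_cpu(100, 110, 1024, 1024, 125));
	printf("%d\n", wake_on_this_cpu(100, 120, 1024, 1024, 125));
	return 0;
}

With equal capacities, the upshot of the patch is that prev_cpu must be
roughly (imbalance_pct - 100) / 2 percent busier than this_cpu before
the task is pulled over on wakeup.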