From: Josef Bacik
To: mingo@redhat.com, peterz@infradead.org, linux-kernel@vger.kernel.org,
	umgwanakikbuti@gmail.com, tj@kernel.org, kernel-team@fb.com
Cc: Josef Bacik
Subject: [PATCH 5/7] sched/fair: use the task weight instead of average in effective_load
Date: Fri, 14 Jul 2017 13:21:02 +0000
Message-Id: <1500038464-8742-6-git-send-email-josef@toxicpanda.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1500038464-8742-1-git-send-email-josef@toxicpanda.com>
References: <1500038464-8742-1-git-send-email-josef@toxicpanda.com>

From: Josef Bacik

This is a preparation patch for the next patch.  When adding a new task to a
cfs_rq we do not add its load_avg to the existing cfs_rq, we add its weight,
and that changes how the load average moves as the cfs_rq/task runs.  Using
the load average in our effective_load() calculation is therefore slightly off
from what we actually want to compute (the real effect of waking this task on
this cpu), and biases us towards always affine waking tasks.

Signed-off-by: Josef Bacik
---
 kernel/sched/fair.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ee8dced..4e4fc5d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5646,7 +5646,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p,
 	s64 this_eff_load, prev_eff_load;
 	int idx, this_cpu;
 	struct task_group *tg;
-	unsigned long weight;
+	unsigned long weight, avg;
 	int balanced;
 
 	idx	  = sd->wake_idx;
@@ -5661,14 +5661,15 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p,
 	 */
 	if (sync) {
 		tg = task_group(current);
-		weight = current->se.avg.load_avg;
+		weight = se_weight(&current->se);
+		avg = current->se.avg.load_avg;
 
-		this_load += effective_load(tg, this_cpu, -weight, -weight);
-		load += effective_load(tg, prev_cpu, 0, -weight);
+		this_load += effective_load(tg, this_cpu, -avg, -weight);
 	}
 
 	tg = task_group(p);
-	weight = p->se.avg.load_avg;
+	weight = se_weight(&p->se);
+	avg = p->se.avg.load_avg;
 
 	/*
 	 * In low-load situations, where prev_cpu is idle and this_cpu is idle
@@ -5687,7 +5688,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p,
 
 	if (this_load > 0) {
 		this_eff_load *= this_load +
-			effective_load(tg, this_cpu, weight, weight);
+			effective_load(tg, this_cpu, avg, weight);
 
 		prev_eff_load *= load;
 	}
-- 
2.9.3
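
To make the bias concrete, here is a small standalone sketch (plain userspace
C, not kernel code; struct demo_se, the 0.9785/ms decay loop and all numbers
are a deliberate, purely illustrative simplification of PELT): a task that has
been sleeping has decayed its load_avg well below its weight, so adding
load_avg instead of weight to the destination cfs_rq understates the cost of
an affine wakeup.

/*
 * Standalone illustration (NOT kernel code): why se_weight() and
 * se->avg.load_avg can differ for a task that has just woken up.
 * The decay loop is a crude stand-in for PELT's ~0.9785/ms decay;
 * struct demo_se and the constants are hypothetical.
 */
#include <stdio.h>

struct demo_se {
	unsigned long weight;	/* static weight derived from the nice level */
	unsigned long load_avg;	/* runnable average, decays while sleeping */
};

int main(void)
{
	struct demo_se se = { .weight = 1024, .load_avg = 1024 };
	int ms;

	/* Let the "task" sleep for 100ms: load_avg decays, weight does not. */
	for (ms = 0; ms < 100; ms++)
		se.load_avg = se.load_avg * 9785 / 10000;

	/* What enqueueing adds to the cfs_rq vs. what the old code passed in. */
	printf("weight   = %lu\n", se.weight);
	printf("load_avg = %lu\n", se.load_avg);
	return 0;
}

Compiled and run, the sketch prints a load_avg roughly a tenth of the weight
after the simulated sleep; that gap is what made the old effective_load()
arguments look too cheap and favour affine wakeups.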