From: Brendan Jackman <brendan.jackman@arm.com>
To: Ingo Molnar, Peter Zijlstra, linux-kernel@vger.kernel.org
Cc: Joel Fernandes, Andres Oportus, Dietmar Eggemann, Vincent Guittot,
    Josef Bacik, Morten Rasmussen
Subject: [PATCH] sched/fair: Sync task util before slow-path wakeup
Date: Wed, 2 Aug 2017 14:10:02 +0100
Message-Id: <20170802131002.31576-1-brendan.jackman@arm.com>
X-Mailer: git-send-email 2.13.0

We use task_util in find_idlest_group via capacity_spare_wake. That
task_util is updated in wake_cap. However, wake_cap is not the only
reason for ending up in find_idlest_group - we could have been sent
there by wake_wide. So explicitly sync the task util with prev_cpu
when we are about to head to find_idlest_group.

We could simply do this at the beginning of select_task_rq_fair
(i.e. irrespective of whether we're heading to select_idle_sibling or
find_idlest_group & co), but I didn't want to slow down the
select_idle_sibling path more than necessary.

Don't do this during fork balancing: we won't need the task_util, and
we'd just clobber the last_update_time, which is supposed to be 0.

Signed-off-by: Brendan Jackman
Cc: Dietmar Eggemann
Cc: Vincent Guittot
Cc: Josef Bacik
Cc: Ingo Molnar
Cc: Morten Rasmussen
Cc: Peter Zijlstra
---
 kernel/sched/fair.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c95880e216f6..62869ff252b4 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5913,6 +5913,14 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_f
 			new_cpu = cpu;
 	}
 
+	if (sd && !(sd_flag & SD_BALANCE_FORK))
+		/*
+		 * We're going to need the task's util for capacity_spare_wake
+		 * in find_idlest_group. Sync it up to prev_cpu's
+		 * last_update_time.
+		 */
+		sync_entity_load_avg(&p->se);
+
 	if (!sd) {
 pick_cpu:
 		if (sd_flag & SD_BALANCE_WAKE) /* XXX always ? */
-- 
2.13.0
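
For context, the reason wake_cap may never run on a wake_wide wakeup
is the short-circuit in the want_affine computation. A condensed
sketch of that condition (paraphrased from fair.c of this era, not an
exact quote):

	if (sd_flag & SD_BALANCE_WAKE) {
		record_wakee(p);
		/*
		 * && short-circuits: when wake_wide(p) returns true we
		 * never call wake_cap(), so its sync_entity_load_avg(&p->se)
		 * side effect never happens, and task_util(p) can be stale
		 * by the time capacity_spare_wake() reads it in
		 * find_idlest_group().
		 */
		want_affine = !wake_wide(p) && !wake_cap(p, cpu, prev_cpu) &&
			      cpumask_test_cpu(cpu, &p->cpus_allowed);
	}

The new hunk covers both routes into the slow path, since it fires
whenever we are about to walk the sd hierarchy rather than only when
wake_cap rejected the affine wakeup.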
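To see why the staleness matters: PELT decays util_avg with a roughly
32ms half-life (y^32 = 0.5 with ~1ms periods), but that decay is only
applied when the entity's average is synced against the cfs_rq clock.
A self-contained toy calculation of the gap (plain userspace C, nothing
kernel about it; the utilization and sleep figures are made up purely
for illustration):

	#include <stdio.h>
	#include <math.h>

	int main(void)
	{
		double util = 600.0;	/* util_avg when the task last ran */
		double halflife = 32.0;	/* approx PELT half-life, in ms */
		double slept = 100.0;	/* ms asleep with nothing syncing p->se */

		/* roughly what sync_entity_load_avg() would decay the signal to */
		double synced = util * pow(0.5, slept / halflife);

		printf("stale  task_util: %4.0f\n", util);	/* without the sync */
		printf("synced task_util: %4.0f\n", synced);	/* with the sync */
		return 0;
	}

On those made-up numbers, without the sync find_idlest_group would
judge spare capacity against 600 rather than ~69, and could wrongly
skip a group that actually has plenty of room for the task.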