From: Patrick Bellasi
To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org
Cc: Ingo Molnar, Peter Zijlstra, "Rafael J.
Wysocki", Viresh Kumar, Vincent Guittot, Paul Turner, Dietmar Eggemann, Morten Rasmussen, Juri Lelli, Todd Kjos, Joel Fernandes, Steve Muckle
Subject: [PATCH v6 2/4] sched/fair: use util_est in LB and WU paths
Date: Fri, 9 Mar 2018 09:52:43 +0000
Message-Id: <20180309095245.11071-3-patrick.bellasi@arm.com>
X-Mailer: git-send-email 2.15.1
In-Reply-To: <20180309095245.11071-1-patrick.bellasi@arm.com>
References: <20180309095245.11071-1-patrick.bellasi@arm.com>

When the scheduler looks at CPU utilization, the current PELT value of the
CPU is returned straight away. In certain scenarios this can have undesired
side effects on task placement.

For example, since a task's utilization is decayed at wakeup time, a
long-sleeping big task does not immediately add a significant contribution
to the target CPU when it is enqueued. As a result, we open a race window
in which other tasks can be placed on the same CPU while it is still
considered relatively empty.

To reduce this kind of race condition, this patch introduces the required
support to integrate the CPU's estimated utilization in the wakeup path,
via cpu_util_wake(), as well as in the load-balance path, via cpu_util(),
which is used by update_sg_lb_stats().

The estimated utilization of a CPU is defined as the maximum between its
PELT utilization and the sum of the estimated utilization (sampled at
their previous dequeue time) of all the tasks currently RUNNABLE on that
CPU. This properly represents the spare capacity of a CPU which, for
example, has just started running a big task after a long sleep period.

Signed-off-by: Patrick Bellasi
Reviewed-by: Dietmar Eggemann
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Rafael J.
Wysocki
Cc: Viresh Kumar
Cc: Paul Turner
Cc: Vincent Guittot
Cc: Morten Rasmussen
Cc: Dietmar Eggemann
Cc: linux-kernel@vger.kernel.org
Cc: linux-pm@vger.kernel.org
---
Changes in v6:
 - folded cpu_util_est code into cpu_util
 - updated cpu_util documentation
 - slightly cleaned up cpu_util_wake code
 - updated changelog to better match the code concepts
Changes in v5:
 - always use int instead of long whenever possible (Peter)
 - add missing READ_ONCE barriers (Peter)
Changes in v4:
 - rebased on today's tip/sched/core (commit 460e8c3340a2)
 - ensure cpu_util_wake() is clamped by cpu_capacity_orig() (Pavan)
Changes in v3:
 - rebased on today's tip/sched/core (commit 07881166a892)
Changes in v2:
 - rebased on top of v4.15-rc2
 - tested that the overhauled PELT code does not affect util_est
---
 kernel/sched/fair.c | 84 ++++++++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 70 insertions(+), 14 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c54560829a52..5cf4aa39a6ca 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6431,11 +6431,13 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	return target;
 }
 
-/*
- * cpu_util returns the amount of capacity of a CPU that is used by CFS
- * tasks. The unit of the return value must be the one of capacity so we can
- * compare the utilization with the capacity of the CPU that is available for
- * CFS task (ie cpu_capacity).
+/**
+ * Amount of capacity of a CPU that is (estimated to be) used by CFS tasks
+ * @cpu: the CPU to get the utilization of
+ *
+ * The unit of the return value must be the one of capacity so we can compare
+ * the utilization with the capacity of the CPU that is available for CFS task
+ * (ie cpu_capacity).
  *
  * cfs_rq.avg.util_avg is the sum of running time of runnable tasks plus the
  * recent utilization of currently non-runnable tasks on a CPU. It represents
@@ -6446,6 +6448,14 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
  * current capacity (capacity_curr <= capacity_orig) of the CPU because it is
  * the running time on this CPU scaled by capacity_curr.
  *
+ * The estimated utilization of a CPU is defined to be the maximum between its
+ * cfs_rq.avg.util_avg and the sum of the estimated utilization of the tasks
+ * currently RUNNABLE on that CPU.
+ * This allows to properly represent the expected utilization of a CPU which
+ * has just got a big task running since a long sleep period. At the same time
+ * however it preserves the benefits of the "blocked utilization" in
+ * describing the potential for other tasks waking up on the same CPU.
+ *
  * Nevertheless, cfs_rq.avg.util_avg can be higher than capacity_curr or even
  * higher than capacity_orig because of unfortunate rounding in
  * cfs.avg.util_avg or just after migrating tasks and new task wakeups until
@@ -6456,13 +6466,21 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
  * available capacity. We allow utilization to overshoot capacity_curr (but not
  * capacity_orig) as it useful for predicting the capacity required after task
  * migrations (scheduler-driven DVFS).
+ *
+ * Return: the (estimated) utilization for the specified CPU
  */
-static unsigned long cpu_util(int cpu)
+static inline unsigned long cpu_util(int cpu)
 {
-	unsigned long util = cpu_rq(cpu)->cfs.avg.util_avg;
-	unsigned long capacity = capacity_orig_of(cpu);
+	struct cfs_rq *cfs_rq;
+	unsigned int util;
+
+	cfs_rq = &cpu_rq(cpu)->cfs;
+	util = READ_ONCE(cfs_rq->avg.util_avg);
+
+	if (sched_feat(UTIL_EST))
+		util = max(util, READ_ONCE(cfs_rq->avg.util_est.enqueued));
 
-	return (util >= capacity) ? capacity : util;
+	return min_t(unsigned long, util, capacity_orig_of(cpu));
 }
 
 /*
@@ -6471,16 +6489,54 @@ static unsigned long cpu_util(int cpu)
  */
 static unsigned long cpu_util_wake(int cpu, struct task_struct *p)
 {
-	unsigned long util, capacity;
+	struct cfs_rq *cfs_rq;
+	unsigned int util;
 
 	/* Task has no contribution or is new */
-	if (cpu != task_cpu(p) || !p->se.avg.last_update_time)
+	if (cpu != task_cpu(p) || !READ_ONCE(p->se.avg.last_update_time))
 		return cpu_util(cpu);
 
-	capacity = capacity_orig_of(cpu);
-	util = max_t(long, cpu_rq(cpu)->cfs.avg.util_avg - task_util(p), 0);
+	cfs_rq = &cpu_rq(cpu)->cfs;
+	util = READ_ONCE(cfs_rq->avg.util_avg);
+
+	/* Discount task's blocked util from CPU's util */
+	util -= min_t(unsigned int, util, task_util(p));
 
-	return (util >= capacity) ? capacity : util;
+	/*
+	 * Covered cases:
+	 *
+	 * a) if *p is the only task sleeping on this CPU, then:
+	 *      cpu_util (== task_util) > util_est (== 0)
+	 *    and thus we return:
+	 *      cpu_util_wake = (cpu_util - task_util) = 0
+	 *
+	 * b) if other tasks are SLEEPING on this CPU, which is now exiting
+	 *    IDLE, then:
+	 *      cpu_util >= task_util
+	 *      cpu_util > util_est (== 0)
+	 *    and thus we discount *p's blocked utilization to return:
+	 *      cpu_util_wake = (cpu_util - task_util) >= 0
+	 *
+	 * c) if other tasks are RUNNABLE on that CPU and
+	 *      util_est > cpu_util
+	 *    then we use util_est since it returns a more restrictive
+	 *    estimation of the spare capacity on that CPU, by just
+	 *    considering the expected utilization of tasks already
+	 *    runnable on that CPU.
+	 *
+	 * Cases a) and b) are covered by the above code, while case c) is
+	 * covered by the following code when estimated utilization is
+	 * enabled.
+	 */
+	if (sched_feat(UTIL_EST))
+		util = max(util, READ_ONCE(cfs_rq->avg.util_est.enqueued));
+
+	/*
+	 * Utilization (estimated) can exceed the CPU capacity, thus let's
+	 * clamp to the maximum CPU capacity to ensure consistency with
+	 * the cpu_util call.
+	 */
+	return min_t(unsigned long, util, capacity_orig_of(cpu));
 }
 
 /*
-- 
2.15.1
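
For readers who want to experiment with the policy above outside the kernel tree, here is a minimal userspace C sketch of the clamped max(util_avg, util_est) logic used by cpu_util() and cpu_util_wake(). The struct, constant, and function names here are simplified stand-ins for illustration, not the kernel's actual types; sched_feat(UTIL_EST) is assumed enabled.

```c
#include <assert.h>

/* Stand-in for capacity_orig_of(cpu): max capacity of the CPU. */
#define CAPACITY_ORIG 1024U

/* Simplified stand-in for the per-CPU root cfs_rq signals. */
struct cpu_signals {
	unsigned int util_avg;	/* PELT utilization of the CPU */
	unsigned int util_est;	/* sum of RUNNABLE tasks' estimated utilization */
};

/* cpu_util(): max of PELT and estimated utilization, clamped to capacity. */
static unsigned long model_cpu_util(const struct cpu_signals *s)
{
	unsigned long util = s->util_avg;

	if (s->util_est > util)
		util = s->util_est;

	return util < CAPACITY_ORIG ? util : CAPACITY_ORIG;
}

/*
 * cpu_util_wake(): as above, but first discount the waking task's blocked
 * contribution from util_avg, saturating at zero.  util_est does not include
 * the waking task, since its estimate is removed at dequeue time.
 */
static unsigned long model_cpu_util_wake(const struct cpu_signals *s,
					 unsigned int task_util)
{
	unsigned long util = s->util_avg;

	/* Discount the task's blocked util, without underflowing */
	util -= util < task_util ? util : task_util;

	if (s->util_est > util)
		util = s->util_est;

	return util < CAPACITY_ORIG ? util : CAPACITY_ORIG;
}
```

Walking the three cases from the comment block: if *p is the only sleeper (util_avg == task_util, util_est == 0) the wake path yields 0; if other tasks are sleeping, their blocked utilization survives the discount; if other tasks are runnable with util_est > util_avg, the estimate wins and gives a more restrictive view of spare capacity.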