Date: Thu, 20 Feb 2020 20:09:13 -0000
From: "tip-bot2 for Morten Rasmussen"
Reply-to: linux-kernel@vger.kernel.org
To: linux-tip-commits@vger.kernel.org
Subject: [tip: sched/core] sched/fair: Add asymmetric CPU capacity wakeup scan
Cc: Morten Rasmussen, Valentin Schneider, "Peter Zijlstra (Intel)", Ingo Molnar,
 Thomas Gleixner, Quentin Perret, x86, LKML
In-Reply-To: <20200206191957.12325-2-valentin.schneider@arm.com>
References: <20200206191957.12325-2-valentin.schneider@arm.com>
MIME-Version: 1.0
Message-ID: <158222935329.13786.16711123258402589223.tip-bot2@tip-bot2>
X-Mailer: tip-git-log-daemon
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     b7a331615d254191e7f5f0e35aec9adcd6acdc54
Gitweb:        https://git.kernel.org/tip/b7a331615d254191e7f5f0e35aec9adcd6acdc54
Author:        Morten Rasmussen
AuthorDate:    Thu, 06 Feb 2020 19:19:54
Committer:     Thomas Gleixner
CommitterDate: Thu, 20 Feb 2020 21:03:14 +01:00

sched/fair: Add asymmetric CPU capacity wakeup scan

Issue
=====

On asymmetric CPU capacity topologies, we currently rely on wake_cap() to
drive select_task_rq_fair() towards either:

- its slow-path (find_idlest_cpu()) if either the previous or current
  (waking) CPU has too little capacity for the waking task
- its fast-path (select_idle_sibling()) otherwise

Commit:

  3273163c6775 ("sched/fair: Let asymmetric CPU configurations balance at wake-up")

points out that this relies on the assumption that "[...] the CPU capacities
within an SD_SHARE_PKG_RESOURCES domain (sd_llc) are homogeneous". This
assumption no longer holds on newer generations of big.LITTLE systems
(DynamIQ), which can accommodate CPUs of different compute capacity within
a single LLC domain.
To hopefully paint a better picture, a regular big.LITTLE topology would
look like this:

     +---------+ +---------+
     |   L2    | |   L2    |
     +----+----+ +----+----+
     |CPU0|CPU1| |CPU2|CPU3|
     +----+----+ +----+----+
        ^^^         ^^^
      LITTLEs      bigs

which would result in the following scheduler topology:

  DIE [          ] <- sd_asym_cpucapacity
  MC  [    ][    ] <- sd_llc
       0  1  2  3

Conversely, a DynamIQ topology could look like:

     +-------------------+
     |        L3         |
     +----+----+----+----+
     | L2 | L2 | L2 | L2 |
     +----+----+----+----+
     |CPU0|CPU1|CPU2|CPU3|
     +----+----+----+----+
      ^^^^^^^^^ ^^^^^^^^^
       LITTLEs    bigs

which would result in the following scheduler topology:

  MC [          ] <- sd_llc, sd_asym_cpucapacity
      0  1  2  3

What this means is that, on DynamIQ systems, we could pass the wake_cap()
test (IOW presume the waking task fits on the CPU capacities of some LLC
domain), thus go through select_idle_sibling(). This function operates on
an LLC domain, which here spans both bigs and LITTLEs, so it could very
well pick a CPU of too small capacity for the task, despite there being
fitting idle CPUs - it very much depends on the CPU iteration order, on
which we have absolutely no guarantees capacity-wise.

Implementation
==============

Introduce yet another select_idle_sibling() helper function that takes CPU
capacity into account. The policy is to pick the first idle CPU which is
big enough for the task (task_util * margin < cpu_capacity). If no idle
CPU is big enough, we pick the idle one with the highest capacity.

Unlike other select_idle_sibling() helpers, this one operates on the
sd_asym_cpucapacity sched_domain pointer, which is guaranteed to span all
known CPU capacities in the system. As such, this will work for both
"legacy" big.LITTLE (LITTLEs & bigs split at MC, joined at DIE) and for
newer DynamIQ systems (e.g. LITTLEs and bigs in the same MC domain).
Note that this limits the scope of select_idle_sibling() to
select_idle_capacity() for asymmetric CPU capacity systems - the LLC
domain will not be scanned, and no further heuristic will be applied.

Signed-off-by: Morten Rasmussen
Signed-off-by: Valentin Schneider
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner
Reviewed-by: Quentin Perret
Link: https://lkml.kernel.org/r/20200206191957.12325-2-valentin.schneider@arm.com
---
 kernel/sched/fair.c | 56 ++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 56 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1a0ce83..6fb47a2 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5897,6 +5897,40 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
 }
 
 /*
+ * Scan the asym_capacity domain for idle CPUs; pick the first idle one on which
+ * the task fits. If no CPU is big enough, but there are idle ones, try to
+ * maximize capacity.
+ */
+static int
+select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
+{
+	unsigned long best_cap = 0;
+	int cpu, best_cpu = -1;
+	struct cpumask *cpus;
+
+	sync_entity_load_avg(&p->se);
+
+	cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
+	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
+
+	for_each_cpu_wrap(cpu, cpus, target) {
+		unsigned long cpu_cap = capacity_of(cpu);
+
+		if (!available_idle_cpu(cpu) && !sched_idle_cpu(cpu))
+			continue;
+		if (task_fits_capacity(p, cpu_cap))
+			return cpu;
+
+		if (cpu_cap > best_cap) {
+			best_cap = cpu_cap;
+			best_cpu = cpu;
+		}
+	}
+
+	return best_cpu;
+}
+
+/*
  * Try and locate an idle core/thread in the LLC cache domain.
  */
 static int select_idle_sibling(struct task_struct *p, int prev, int target)
@@ -5904,6 +5938,28 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	struct sched_domain *sd;
 	int i, recent_used_cpu;
 
+	/*
+	 * For asymmetric CPU capacity systems, our domain of interest is
+	 * sd_asym_cpucapacity rather than sd_llc.
+	 */
+	if (static_branch_unlikely(&sched_asym_cpucapacity)) {
+		sd = rcu_dereference(per_cpu(sd_asym_cpucapacity, target));
+		/*
+		 * On an asymmetric CPU capacity system where an exclusive
+		 * cpuset defines a symmetric island (i.e. one unique
+		 * capacity_orig value through the cpuset), the key will be set
+		 * but the CPUs within that cpuset will not have a domain with
+		 * SD_ASYM_CPUCAPACITY. These should follow the usual symmetric
+		 * capacity path.
+		 */
+		if (!sd)
+			goto symmetric;
+
+		i = select_idle_capacity(p, sd, target);
+		return ((unsigned)i < nr_cpumask_bits) ? i : target;
+	}
+
+symmetric:
 	if (available_idle_cpu(target) || sched_idle_cpu(target))
 		return target;