From: Dietmar Eggemann
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot, Steven Rostedt
Cc: Daniel Bristot de Oliveira, Valentin Schneider, Mel Gorman, Ben Segall, Luca Abeni, linux-kernel@vger.kernel.org
Subject: [PATCH v2 1/3] sched: Introduce sched_asym_cpucap_active()
Date: Fri, 29 Jul 2022 13:13:03 +0200
Message-Id: <20220729111305.1275158-2-dietmar.eggemann@arm.com>
In-Reply-To: <20220729111305.1275158-1-dietmar.eggemann@arm.com>
References: <20220729111305.1275158-1-dietmar.eggemann@arm.com>

Create an inline helper for conditional code that should only be
executed on asymmetric CPU capacity systems. This makes these
(currently ~10, and future) conditions a lot more readable.

Signed-off-by: Dietmar Eggemann
---
 kernel/sched/cpudeadline.c | 2 +-
 kernel/sched/deadline.c    | 4 ++--
 kernel/sched/fair.c        | 8 ++++----
 kernel/sched/rt.c          | 4 ++--
 kernel/sched/sched.h       | 5 +++++
 5 files changed, 14 insertions(+), 9 deletions(-)

diff --git a/kernel/sched/cpudeadline.c b/kernel/sched/cpudeadline.c
index 02d970a879ed..57c92d751bcd 100644
--- a/kernel/sched/cpudeadline.c
+++ b/kernel/sched/cpudeadline.c
@@ -123,7 +123,7 @@ int cpudl_find(struct cpudl *cp, struct task_struct *p,
 		unsigned long cap, max_cap = 0;
 		int cpu, max_cpu = -1;
 
-		if (!static_branch_unlikely(&sched_asym_cpucapacity))
+		if (!sched_asym_cpucap_active())
 			return 1;
 
 		/* Ensure the capacity of the CPUs fits the task. */
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 5867e186c39a..3f9d90b8a8b6 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -144,7 +144,7 @@ static inline unsigned long __dl_bw_capacity(int i)
  */
 static inline unsigned long dl_bw_capacity(int i)
 {
-	if (!static_branch_unlikely(&sched_asym_cpucapacity) &&
+	if (!sched_asym_cpucap_active() &&
 	    capacity_orig_of(i) == SCHED_CAPACITY_SCALE) {
 		return dl_bw_cpus(i) << SCHED_CAPACITY_SHIFT;
 	} else {
@@ -1846,7 +1846,7 @@ select_task_rq_dl(struct task_struct *p, int cpu, int flags)
 	 * Take the capacity of the CPU into account to
 	 * ensure it fits the requirement of the task.
 	 */
-	if (static_branch_unlikely(&sched_asym_cpucapacity))
+	if (sched_asym_cpucap_active())
 		select_rq |= !dl_task_fits_capacity(p, cpu);
 
 	if (select_rq) {
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2fc47257ae91..3b186c9c4ea1 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4262,7 +4262,7 @@ static inline int task_fits_capacity(struct task_struct *p,
 
 static inline void update_misfit_status(struct task_struct *p, struct rq *rq)
 {
-	if (!static_branch_unlikely(&sched_asym_cpucapacity))
+	if (!sched_asym_cpucap_active())
 		return;
 
 	if (!p || p->nr_cpus_allowed == 1) {
@@ -6506,7 +6506,7 @@ select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
 
 static inline bool asym_fits_capacity(unsigned long task_util, int cpu)
 {
-	if (static_branch_unlikely(&sched_asym_cpucapacity))
+	if (sched_asym_cpucap_active())
 		return fits_capacity(task_util, capacity_of(cpu));
 
 	return true;
@@ -6526,7 +6526,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	 * On asymmetric system, update task utilization because we will check
 	 * that the task fits with cpu's capacity.
 	 */
-	if (static_branch_unlikely(&sched_asym_cpucapacity)) {
+	if (sched_asym_cpucap_active()) {
 		sync_entity_load_avg(&p->se);
 		task_util = uclamp_task_util(p);
 	}
@@ -6580,7 +6580,7 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	 * For asymmetric CPU capacity systems, our domain of interest is
 	 * sd_asym_cpucapacity rather than sd_llc.
 	 */
-	if (static_branch_unlikely(&sched_asym_cpucapacity)) {
+	if (sched_asym_cpucap_active()) {
 		sd = rcu_dereference(per_cpu(sd_asym_cpucapacity, target));
 		/*
 		 * On an asymmetric CPU capacity system where an exclusive
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 55f39c8f4203..054b6711e961 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -509,7 +509,7 @@ static inline bool rt_task_fits_capacity(struct task_struct *p, int cpu)
 	unsigned int cpu_cap;
 
 	/* Only heterogeneous systems can benefit from this check */
-	if (!static_branch_unlikely(&sched_asym_cpucapacity))
+	if (!sched_asym_cpucap_active())
 		return true;
 
 	min_cap = uclamp_eff_value(p, UCLAMP_MIN);
@@ -1897,7 +1897,7 @@ static int find_lowest_rq(struct task_struct *task)
 	 * If we're on asym system ensure we consider the different capacities
 	 * of the CPUs when searching for the lowest_mask.
 	 */
-	if (static_branch_unlikely(&sched_asym_cpucapacity)) {
+	if (sched_asym_cpucap_active()) {
 
 		ret = cpupri_find_fitness(&task_rq(task)->rd->cpupri,
 					  task, lowest_mask,
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 73ae32898f25..72704b2b4a45 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1808,6 +1808,11 @@ DECLARE_PER_CPU(struct sched_domain __rcu *, sd_asym_packing);
 DECLARE_PER_CPU(struct sched_domain __rcu *, sd_asym_cpucapacity);
 extern struct static_key_false sched_asym_cpucapacity;
 
+static __always_inline bool sched_asym_cpucap_active(void)
+{
+	return static_branch_unlikely(&sched_asym_cpucapacity);
+}
+
 struct sched_group_capacity {
 	atomic_t		ref;
 	/*
-- 
2.25.1
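
[Editor's note: for readers less familiar with the pattern, below is a
minimal, self-contained userspace sketch of the refactor this patch
performs. It is an illustration only: a plain bool stands in for the
kernel's sched_asym_cpucapacity static key, and update_misfit_status()
here is a hypothetical, simplified stand-in for the real call sites.]

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for the sched_asym_cpucapacity static key (jump label). */
static bool sched_asym_cpucapacity_enabled;

/* Same shape as the helper this patch adds to kernel/sched/sched.h. */
static inline bool sched_asym_cpucap_active(void)
{
	/* Kernel version: return static_branch_unlikely(&sched_asym_cpucapacity); */
	return sched_asym_cpucapacity_enabled;
}

/* Hypothetical caller, loosely modeled on update_misfit_status(). */
static void update_misfit_status(int task_util, int cpu_capacity)
{
	/* Early bail-out on symmetric systems, as in the patched call sites. */
	if (!sched_asym_cpucap_active())
		return;

	if (task_util > cpu_capacity)
		printf("task (util=%d) misfits on CPU (capacity=%d)\n",
		       task_util, cpu_capacity);
}

int main(void)
{
	update_misfit_status(600, 512);		/* key off: no output */
	sched_asym_cpucapacity_enabled = true;	/* e.g. big.LITTLE detected at boot */
	update_misfit_status(600, 512);		/* key on: misfit reported */
	return 0;
}

In the kernel proper, static_branch_unlikely() on a static_key_false
compiles the test down to a NOP that is live-patched into a jump only
when the key is enabled, so wrapping it in an __always_inline helper
preserves that zero-cost fast path while letting the ~10 call sites
read uniformly as sched_asym_cpucap_active().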