From: Vincent Donnefort <vincent.donnefort@arm.com>
To: peterz@infradead.org, mingo@redhat.com, vincent.guittot@linaro.org
Cc: linux-kernel@vger.kernel.org, dietmar.eggemann@arm.com,
    morten.rasmussen@arm.com, chris.redpath@arm.com, qperret@google.com
Subject: [PATCH v7 3/7] sched, drivers: Remove max param from effective_cpu_util()/sched_cpu_util()
Date: Wed, 27 Apr 2022 15:33:00 +0100
Message-Id: <20220427143304.3950488-4-vincent.donnefort@arm.com>
In-Reply-To: <20220427143304.3950488-1-vincent.donnefort@arm.com>
References: <20220427143304.3950488-1-vincent.donnefort@arm.com>

From: Dietmar Eggemann <dietmar.eggemann@arm.com>

effective_cpu_util() already has an `int cpu' parameter, which makes it
possible to retrieve the CPU capacity scale factor (or maximum CPU
capacity) inside the function via arch_scale_cpu_capacity(cpu).

A lot of code calling effective_cpu_util() (or the shim
sched_cpu_util()) needs the maximum CPU capacity anyway, i.e. it
already calls arch_scale_cpu_capacity(). But not having to pass it into
effective_cpu_util() will make the EAS wake-up code simpler, especially
when the maximum CPU capacity reduced by the thermal pressure is passed
through the EAS wake-up functions.

Due to the asymmetric CPU capacity support of the arm/arm64
architectures, arch_scale_cpu_capacity(int cpu) is a per-CPU variable
read, accessed via per_cpu(cpu_scale, cpu), on such systems. On all
other architectures it is a compile-time constant
(SCHED_CAPACITY_SCALE).
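For reference, the two existing flavours of arch_scale_cpu_capacity()
look roughly like this (a simplified sketch of current code, not part
of this patch):

  /* arm/arm64 (asymmetric CPU capacity): include/linux/arch_topology.h */
  static inline unsigned long topology_get_cpu_scale(int cpu)
  {
          return per_cpu(cpu_scale, cpu);
  }

  /* generic fallback: include/linux/sched/topology.h */
  static __always_inline
  unsigned long arch_scale_cpu_capacity(int cpu)
  {
          return SCHED_CAPACITY_SCALE;
  }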
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
---
diff --git a/drivers/powercap/dtpm_cpu.c b/drivers/powercap/dtpm_cpu.c
index bca2f912d349..024dba4e6575 100644
--- a/drivers/powercap/dtpm_cpu.c
+++ b/drivers/powercap/dtpm_cpu.c
@@ -71,34 +71,19 @@ static u64 set_pd_power_limit(struct dtpm *dtpm, u64 power_limit)
 
 static u64 scale_pd_power_uw(struct cpumask *pd_mask, u64 power)
 {
-	unsigned long max = 0, sum_util = 0;
+	unsigned long max, sum_util = 0;
 	int cpu;
 
-	for_each_cpu_and(cpu, pd_mask, cpu_online_mask) {
-
-		/*
-		 * The capacity is the same for all CPUs belonging to
-		 * the same perf domain, so a single call to
-		 * arch_scale_cpu_capacity() is enough. However, we
-		 * need the CPU parameter to be initialized by the
-		 * loop, so the call ends up in this block.
-		 *
-		 * We can initialize 'max' with a cpumask_first() call
-		 * before the loop but the bits computation is not
-		 * worth given the arch_scale_cpu_capacity() just
-		 * returns a value where the resulting assembly code
-		 * will be optimized by the compiler.
-		 */
-		max = arch_scale_cpu_capacity(cpu);
-		sum_util += sched_cpu_util(cpu, max);
-	}
-
 	/*
-	 * In the improbable case where all the CPUs of the perf
-	 * domain are offline, 'max' will be zero and will lead to an
-	 * illegal operation with a zero division.
+	 * The capacity is the same for all CPUs belonging to
+	 * the same perf domain.
 	 */
-	return max ? (power * ((sum_util << 10) / max)) >> 10 : 0;
+	max = arch_scale_cpu_capacity(cpumask_first(pd_mask));
+
+	for_each_cpu_and(cpu, pd_mask, cpu_online_mask)
+		sum_util += sched_cpu_util(cpu);
+
+	return (power * ((sum_util << 10) / max)) >> 10;
 }
 
 static u64 get_pd_power_uw(struct dtpm *dtpm)
diff --git a/drivers/thermal/cpufreq_cooling.c b/drivers/thermal/cpufreq_cooling.c
index 0bfb8eebd126..3f514ff3d9aa 100644
--- a/drivers/thermal/cpufreq_cooling.c
+++ b/drivers/thermal/cpufreq_cooling.c
@@ -137,11 +137,9 @@ static u32 cpu_power_to_freq(struct cpufreq_cooling_device *cpufreq_cdev,
 static u32 get_load(struct cpufreq_cooling_device *cpufreq_cdev, int cpu,
 		    int cpu_idx)
 {
-	unsigned long max = arch_scale_cpu_capacity(cpu);
-	unsigned long util;
+	unsigned long util = sched_cpu_util(cpu);
 
-	util = sched_cpu_util(cpu, max);
-	return (util * 100) / max;
+	return (util * 100) / arch_scale_cpu_capacity(cpu);
 }
 #else /* !CONFIG_SMP */
 static u32 get_load(struct cpufreq_cooling_device *cpufreq_cdev, int cpu,
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 67f06f72c50e..c1705effb3a4 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2255,7 +2255,7 @@ static inline bool owner_on_cpu(struct task_struct *owner)
 }
 
 /* Returns effective CPU energy utilization, as seen by the scheduler */
-unsigned long sched_cpu_util(int cpu, unsigned long max);
+unsigned long sched_cpu_util(int cpu);
 #endif /* CONFIG_SMP */
 
 #ifdef CONFIG_RSEQ
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 068c088e9584..a62d25ec5b0d 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7061,12 +7061,14 @@ struct task_struct *idle_task(int cpu)
  * required to meet deadlines.
  */
 unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
-				 unsigned long max, enum cpu_util_type type,
+				 enum cpu_util_type type,
 				 struct task_struct *p)
 {
-	unsigned long dl_util, util, irq;
+	unsigned long dl_util, util, irq, max;
 	struct rq *rq = cpu_rq(cpu);
 
+	max = arch_scale_cpu_capacity(cpu);
+
 	if (!uclamp_is_used() && type == FREQUENCY_UTIL &&
 	    rt_rq_is_runnable(&rq->rt)) {
 		return max;
@@ -7146,10 +7148,9 @@ unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
 	return min(max, util);
 }
 
-unsigned long sched_cpu_util(int cpu, unsigned long max)
+unsigned long sched_cpu_util(int cpu)
 {
-	return effective_cpu_util(cpu, cpu_util_cfs(cpu), max,
-				  ENERGY_UTIL, NULL);
+	return effective_cpu_util(cpu, cpu_util_cfs(cpu), ENERGY_UTIL, NULL);
 }
 #endif /* CONFIG_SMP */
 
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 3dbf351d12d5..1207c78f85c1 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -157,11 +157,10 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
 static void sugov_get_util(struct sugov_cpu *sg_cpu)
 {
 	struct rq *rq = cpu_rq(sg_cpu->cpu);
-	unsigned long max = arch_scale_cpu_capacity(sg_cpu->cpu);
 
-	sg_cpu->max = max;
+	sg_cpu->max = arch_scale_cpu_capacity(sg_cpu->cpu);
 	sg_cpu->bw_dl = cpu_bw_dl(rq);
-	sg_cpu->util = effective_cpu_util(sg_cpu->cpu, cpu_util_cfs(sg_cpu->cpu), max,
+	sg_cpu->util = effective_cpu_util(sg_cpu->cpu, cpu_util_cfs(sg_cpu->cpu),
 					  FREQUENCY_UTIL, NULL);
 }
 
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 9cd506dc682c..e0d5b1ba565d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6699,12 +6699,11 @@ static long
 compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
 {
 	struct cpumask *pd_mask = perf_domain_span(pd);
-	unsigned long cpu_cap = arch_scale_cpu_capacity(cpumask_first(pd_mask));
-	unsigned long max_util = 0, sum_util = 0;
-	unsigned long _cpu_cap = cpu_cap;
+	unsigned long max_util = 0, sum_util = 0, cpu_cap;
 	int cpu;
 
-	_cpu_cap -= arch_scale_thermal_pressure(cpumask_first(pd_mask));
+	cpu_cap = arch_scale_cpu_capacity(cpumask_first(pd_mask));
+	cpu_cap -= arch_scale_thermal_pressure(cpumask_first(pd_mask));
 
 	/*
 	 * The capacity state of CPUs of the current rd can be driven by CPUs
@@ -6741,10 +6740,10 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
 		 * is already enough to scale the EM reported power
 		 * consumption at the (eventually clamped) cpu_capacity.
 		 */
-		cpu_util = effective_cpu_util(cpu, util_running, cpu_cap,
-					      ENERGY_UTIL, NULL);
+		cpu_util = effective_cpu_util(cpu, util_running, ENERGY_UTIL,
+					      NULL);
 
-		sum_util += min(cpu_util, _cpu_cap);
+		sum_util += min(cpu_util, cpu_cap);
 
 		/*
 		 * Performance domain frequency: utilization clamping
@@ -6753,12 +6752,12 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
 		 * NOTE: in case RT tasks are running, by default the
 		 * FREQUENCY_UTIL's utilization can be max OPP.
 		 */
-		cpu_util = effective_cpu_util(cpu, util_freq, cpu_cap,
-					      FREQUENCY_UTIL, tsk);
-		max_util = max(max_util, min(cpu_util, _cpu_cap));
+		cpu_util = effective_cpu_util(cpu, util_freq, FREQUENCY_UTIL,
+					      tsk);
+		max_util = max(max_util, min(cpu_util, cpu_cap));
 	}
 
-	return em_cpu_energy(pd->em_pd, max_util, sum_util, _cpu_cap);
+	return em_cpu_energy(pd->em_pd, max_util, sum_util, cpu_cap);
 }
 
 /*
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 07014e8cbae2..f902f3e27e48 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2878,7 +2878,7 @@ enum cpu_util_type {
 };
 
 unsigned long effective_cpu_util(int cpu, unsigned long util_cfs,
-				 unsigned long max, enum cpu_util_type type,
+				 enum cpu_util_type type,
 				 struct task_struct *p);
 
 static inline unsigned long cpu_bw_dl(struct rq *rq)
-- 
2.25.1