From: Lukasz Luba <lukasz.luba@arm.com>
To: linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, rafael@kernel.org
Cc: lukasz.luba@arm.com, dietmar.eggemann@arm.com, rui.zhang@intel.com,
	amit.kucheria@verdurent.com, amit.kachhap@gmail.com,
	daniel.lezcano@linaro.org, viresh.kumar@linaro.org, len.brown@intel.com,
	pavel@ucw.cz, mhiramat@kernel.org, qyousef@layalina.io, wvw@google.com,
	xuewen.yan94@gmail.com
Subject: [PATCH v7 13/23] PM: EM: Add performance field to struct em_perf_state and optimize
Date: Wed, 17 Jan 2024 09:57:04 +0000
Message-Id: <20240117095714.1524808-14-lukasz.luba@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240117095714.1524808-1-lukasz.luba@arm.com>
References: <20240117095714.1524808-1-lukasz.luba@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Performance does not scale linearly with frequency, and it may also differ
across workloads. Some CPUs are designed to be particularly good at certain
applications, e.g. image or video processing, while other CPUs excel at
different ones. When such different CPU types are combined in one SoC, they
should be properly modeled so the Energy Aware Scheduler (EAS) can get the
most out of the HW. The Energy Model (EM) provides the power vs. performance
curves to EAS, but it assumes the CPU capacity is fixed and scales linearly
with frequency. This patch allows the curve to be adjusted on the
'performance' axis as well.

Code speed optimization: removing map_util_freq() avoids one division and
one multiplication in the EAS hot code path.

Signed-off-by: Lukasz Luba <lukasz.luba@arm.com>
---
 include/linux/energy_model.h | 24 ++++++++++++------------
 kernel/power/energy_model.c  | 27 +++++++++++++++++++++++++++
 2 files changed, 39 insertions(+), 12 deletions(-)

diff --git a/include/linux/energy_model.h b/include/linux/energy_model.h
index 5ebe9dbec8e1..689d71f6b56f 100644
--- a/include/linux/energy_model.h
+++ b/include/linux/energy_model.h
@@ -13,6 +13,7 @@
 
 /**
  * struct em_perf_state - Performance state of a performance domain
+ * @performance:	CPU performance (capacity) at a given frequency
  * @frequency:	The frequency in KHz, for consistency with CPUFreq
  * @power:	The power consumed at this level (by 1 CPU or by a registered
  *		device). It can be a total power: static and dynamic.
@@ -21,6 +22,7 @@
  * @flags:	see "em_perf_state flags" description below.
  */
 struct em_perf_state {
+	unsigned long performance;
 	unsigned long frequency;
 	unsigned long power;
 	unsigned long cost;
@@ -196,25 +198,25 @@ void em_table_free(struct em_perf_table __rcu *table);
  * em_pd_get_efficient_state() - Get an efficient performance state from the EM
  * @table:		List of performance states, in ascending order
  * @nr_perf_states:	Number of performance states
- * @freq:		Frequency to map with the EM
+ * @max_util:		Max utilization to map with the EM
  * @pd_flags:		Performance Domain flags
  *
  * It is called from the scheduler code quite frequently and as a consequence
  * doesn't implement any check.
  *
- * Return: An efficient performance state id, high enough to meet @freq
+ * Return: An efficient performance state id, high enough to meet @max_util
  * requirement.
  */
 static inline int
 em_pd_get_efficient_state(struct em_perf_state *table, int nr_perf_states,
-			  unsigned long freq, unsigned long pd_flags)
+			  unsigned long max_util, unsigned long pd_flags)
 {
 	struct em_perf_state *ps;
 	int i;
 
 	for (i = 0; i < nr_perf_states; i++) {
 		ps = &table[i];
-		if (ps->frequency >= freq) {
+		if (ps->performance >= max_util) {
 			if (pd_flags & EM_PERF_DOMAIN_SKIP_INEFFICIENCIES &&
 			    ps->flags & EM_PERF_STATE_INEFFICIENT)
 				continue;
@@ -245,9 +247,9 @@ static inline unsigned long em_cpu_energy(struct em_perf_domain *pd,
 				unsigned long max_util, unsigned long sum_util,
 				unsigned long allowed_cpu_cap)
 {
-	unsigned long freq, ref_freq, scale_cpu;
 	struct em_perf_table *em_table;
 	struct em_perf_state *ps;
+	unsigned long scale_cpu;
 	int cpu, i;
 
 #ifdef CONFIG_SCHED_DEBUG
@@ -260,26 +262,24 @@ static inline unsigned long em_cpu_energy(struct em_perf_domain *pd,
 	/*
 	 * In order to predict the performance state, map the utilization of
 	 * the most utilized CPU of the performance domain to a requested
-	 * frequency, like schedutil. Take also into account that the real
-	 * frequency might be set lower (due to thermal capping). Thus, clamp
+	 * performance, like schedutil. Take also into account that the real
+	 * performance might be set lower (due to thermal capping). Thus, clamp
 	 * max utilization to the allowed CPU capacity before calculating
-	 * effective frequency.
+	 * effective performance.
 	 */
 	cpu = cpumask_first(to_cpumask(pd->cpus));
 	scale_cpu = arch_scale_cpu_capacity(cpu);
-	ref_freq = arch_scale_freq_ref(cpu);
 
 	max_util = map_util_perf(max_util);
 	max_util = min(max_util, allowed_cpu_cap);
-	freq = map_util_freq(max_util, ref_freq, scale_cpu);
 
 	/*
 	 * Find the lowest performance state of the Energy Model above the
-	 * requested frequency.
+	 * requested performance.
 	 */
 	em_table = rcu_dereference(pd->em_table);
 	i = em_pd_get_efficient_state(em_table->state, pd->nr_perf_states,
-				      freq, pd->flags);
+				      max_util, pd->flags);
 	ps = &em_table->state[i];
 
 	/*
diff --git a/kernel/power/energy_model.c b/kernel/power/energy_model.c
index 190042640935..2a817b92804b 100644
--- a/kernel/power/energy_model.c
+++ b/kernel/power/energy_model.c
@@ -46,6 +46,7 @@ static void em_debug_create_ps(struct em_perf_state *ps, struct dentry *pd)
 	debugfs_create_ulong("frequency", 0444, d, &ps->frequency);
 	debugfs_create_ulong("power", 0444, d, &ps->power);
 	debugfs_create_ulong("cost", 0444, d, &ps->cost);
+	debugfs_create_ulong("performance", 0444, d, &ps->performance);
 	debugfs_create_ulong("inefficient", 0444, d, &ps->flags);
 }
 
@@ -159,6 +160,30 @@ struct em_perf_table __rcu *em_table_alloc(struct em_perf_domain *pd)
 	return table;
 }
 
+static void em_init_performance(struct device *dev, struct em_perf_domain *pd,
+				struct em_perf_state *table, int nr_states)
+{
+	u64 fmax, max_cap;
+	int i, cpu;
+
+	/* This is needed only for CPUs and EAS skip other devices */
+	if (!_is_cpu_device(dev))
+		return;
+
+	cpu = cpumask_first(em_span_cpus(pd));
+
+	/*
+	 * Calculate the performance value for each frequency with
+	 * linear relationship. The final CPU capacity might not be ready at
+	 * boot time, but the EM will be updated a bit later with correct one.
+	 */
+	fmax = (u64) table[nr_states - 1].frequency;
+	max_cap = (u64) arch_scale_cpu_capacity(cpu);
+	for (i = 0; i < nr_states; i++)
+		table[i].performance = div64_u64(max_cap * table[i].frequency,
+						 fmax);
+}
+
 static int em_compute_costs(struct device *dev, struct em_perf_state *table,
 			    struct em_data_callback *cb, int nr_states,
 			    unsigned long flags)
@@ -318,6 +343,8 @@ static int em_create_perf_table(struct device *dev, struct em_perf_domain *pd,
 		table[i].frequency = prev_freq = freq;
 	}
 
+	em_init_performance(dev, pd, table, nr_states);
+
 	ret = em_compute_costs(dev, table, cb, nr_states, flags);
 	if (ret)
 		return -EINVAL;

-- 
2.25.1
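
To make the effect of the new field concrete, here is a minimal, self-contained
userspace sketch (not kernel code) of the same idea. The struct, helper names
and the example state table below are hypothetical, not the kernel's
em_perf_state API; the sketch only mirrors the linear freq-to-capacity mapping
done in em_init_performance() and the lookup by 'performance' that replaces
map_util_freq() in the hot path.

#include <stdio.h>
#include <stdint.h>

struct perf_state {
	unsigned long performance;	/* capacity delivered at this state */
	unsigned long frequency;	/* kHz */
};

/* Mirrors em_init_performance(): performance = max_cap * freq / fmax */
static void init_performance(struct perf_state *table, int nr_states,
			     unsigned long max_cap)
{
	uint64_t fmax = table[nr_states - 1].frequency;
	int i;

	for (i = 0; i < nr_states; i++)
		table[i].performance = (uint64_t)max_cap *
				       table[i].frequency / fmax;
}

/* Mirrors the new lookup: first state whose performance covers max_util */
static int get_state(const struct perf_state *table, int nr_states,
		     unsigned long max_util)
{
	int i;

	for (i = 0; i < nr_states; i++)
		if (table[i].performance >= max_util)
			return i;

	return nr_states - 1;
}

int main(void)
{
	/* Hypothetical 4-state CPU with a max capacity of 512 */
	struct perf_state table[] = {
		{ .frequency =  600000 },
		{ .frequency = 1000000 },
		{ .frequency = 1400000 },
		{ .frequency = 1800000 },
	};
	int nr = 4, i;

	init_performance(table, nr, 512);

	for (i = 0; i < nr; i++)
		printf("state %d: %lu kHz -> performance %lu\n",
		       i, table[i].frequency, table[i].performance);

	/* A utilization of 300 maps straight to a state, no division */
	printf("max_util=300 -> state %d\n", get_state(table, nr, 300));
	return 0;
}

Because the per-state performance values are computed once when the table is
built, the scheduler-side lookup reduces to a plain comparison loop, which is
where the saved division and multiplication in the hot path come from.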