2022-11-28 17:38:30

by Yuan, Perry

Subject: [PATCH v5 0/9] Implement AMD Pstate EPP Driver

Hi all,

This patchset implements a new AMD CPU frequency driver instance,
"amd-pstate-epp", for better performance and power control.
CPPC has a parameter called energy performance preference (EPP).
The EPP is used in the CCLK DPM controller to drive the frequency that a core
operates at during short periods of activity.
EPP values are utilized for different OS profiles (balanced, performance, power savings).

AMD Energy Performance Preference (EPP) provides a hint to the hardware
about whether software wants to bias toward performance (0x0) or energy
efficiency (0xff). The low-level power firmware calculates the runtime
frequency according to the EPP value, so the EPP hint affects how quickly
the CPU core frequency responds.
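For reference, on MSR-capable parts the EPP hint occupies bits 31:24 of
MSR_AMD_CPPC_REQ. A minimal sketch of updating only that field, mirroring
what amd_pstate_set_epp() does in patch 3/9:

	u64 value = READ_ONCE(cpudata->cppc_req_cached);

	value &= ~GENMASK_ULL(31, 24);	/* clear the current EPP field */
	value |= (u64)epp << 24;	/* insert the new 0x00..0xff hint */
	wrmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, value);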

We use the RAPL interface with the "perf" tool to collect the energy data of the package power.
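For example (a typical invocation; the exact event name depends on the
platform's RAPL PMU support, not necessarily the exact command used here):

	$ perf stat -e power/energy-pkg/ -- <benchmark>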
Performance Per Watt (PPW) Calculation:

The PPW calculation follows the paper below:
https://software.intel.com/content/dam/develop/external/us/en/documents/performance-per-what-paper.pdf

The following formula from that paper is used to measure the PPW:

(F / t) / P = (F / t) / (E / t) = F / E,

where:
"F" is the number of frames (the amount of work done),
"t" is the elapsed time in seconds,
"P" is the average power in watts (P = E / t),
"E" is energy measured in joules.

Gitsource Benchmark Data on ROME Server CPU
+------------------------------+------------------+------------+---------------------+
| Kernel Module                | PPW (1 / s * J)  | Energy (J) | PPW Improvement (%) |
+==============================+==================+============+=====================+
| acpi-cpufreq:schedutil       | 5.85658E-05      | 17074.8    | base                |
+------------------------------+------------------+------------+---------------------+
| acpi-cpufreq:ondemand        | 5.03079E-05      | 19877.6    | -14.10%             |
+------------------------------+------------------+------------+---------------------+
| acpi-cpufreq:performance     | 5.88132E-05      | 17003      | 0.42%               |
+------------------------------+------------------+------------+---------------------+
| amd-pstate:ondemand          | 4.60295E-05      | 21725.2    | -21.41%             |
+------------------------------+------------------+------------+---------------------+
| amd-pstate:schedutil         | 4.70026E-05      | 21275.4    | -19.7%              |
+------------------------------+------------------+------------+---------------------+
| amd-pstate:performance       | 5.80094E-05      | 17238.6    | -0.95%              |
+------------------------------+------------------+------------+---------------------+
| EPP:performance              | 5.8292E-05       | 17155      | -0.47%              |
+------------------------------+------------------+------------+---------------------+
| EPP:balance_performance      | 6.71709E-05      | 14887.4    | 14.69%              |
+------------------------------+------------------+------------+---------------------+
| EPP:power                    | 6.66951E-05      | 14993.6    | 13.88%              |
+------------------------------+------------------+------------+---------------------+

Tbench Benchmark Data on ROME Server CPU
+-------------------------------------+------------------+-------------------+------------+---------------------+
| Kernel Module                       | PPW MB / (s * J) | Throughput (MB/s) | Energy (J) | PPW Improvement (%) |
+=====================================+==================+===================+============+=====================+
| acpi_cpufreq: schedutil             | 46.39            | 17191             | 37057.3    | base                |
+-------------------------------------+------------------+-------------------+------------+---------------------+
| acpi_cpufreq: ondemand              | 51.51            | 19269.5           | 37406.5    | 11.04 %             |
+-------------------------------------+------------------+-------------------+------------+---------------------+
| acpi_cpufreq: performance           | 45.96            | 17063.7           | 37123.7    | -0.74 %             |
+-------------------------------------+------------------+-------------------+------------+---------------------+
| EPP:powersave: performance(0)       | 54.46            | 20263.1           | 37205      | 17.87 %             |
+-------------------------------------+------------------+-------------------+------------+---------------------+
| EPP:powersave: balance_performance  | 55.03            | 20481.9           | 37221.5    | 19.14 %             |
+-------------------------------------+------------------+-------------------+------------+---------------------+
| EPP:powersave: balance_power        | 54.43            | 20245.9           | 37194.2    | 17.77 %             |
+-------------------------------------+------------------+-------------------+------------+---------------------+
| EPP:powersave: power(255)           | 54.26            | 20181.7           | 37197.4    | 17.40 %             |
+-------------------------------------+------------------+-------------------+------------+---------------------+
| amd-pstate: schedutil               | 48.22            | 17844.9           | 37006.6    | 3.80 %              |
+-------------------------------------+------------------+-------------------+------------+---------------------+
| amd-pstate: ondemand                | 61.30            | 22988             | 37503.4    | 33.72 %             |
+-------------------------------------+------------------+-------------------+------------+---------------------+
| amd-pstate: performance             | 54.52            | 20252.6           | 37147.8    | 17.81 %             |
+-------------------------------------+------------------+-------------------+------------+---------------------+

changes from v4:
* rebase the driver on v6.1-rc7
* remove the built-in conversion patch because the pstate driver has
already been changed to built-in type by a separate patch thread
* update Documentation: amd-pstate: add amd pstate driver mode introduction
* replace sprintf() with sysfs_emit()
* fix typo for cppc_set_epp_perf() in cppc_acpi.h header

changes from v3:
* add one more documentation update patch for the active and passive mode
introduction
* address most of the feedback from Mario
* address feedback from Rafael for the cppc_acpi driver
* remove the epp raw data set/get functions
* select the amd-pstate driver by passing a kernel parameter
* keep the amd-pstate driver disabled by default if no kernel parameter
is given at boot
* combine cppc_set_auto_epp and cppc_set_epp_perf
* pick up Reviewed-by tag from Mario

changes from v2:
* change the pstate driver from a module to built-in type
* drop patch "export cpufreq cpu release and acquire"
* squash the shared-mem patch into the single EPP implementation patch
* add one new patch to support frequency boost control
* add a patch to expose driver working status checking
* rebase the driver onto the v6.1-rc4 kernel release
* move some declarations to amd-pstate.h
* address feedback from Mario for the online/offline patch
* address feedback from Mario for the suspend/resume patch
* address feedback from Ray for the cppc_acpi and some other patches
* address feedback from Nathan for the EPP patch

changes from v1:
* rebased to v6.0
* address feedback from Mario for the suspend/resume patch
* address feedback from Nathan for the EPP support on MSR-type processors
* fix some typos and code style indentation problems
* update commit comments for patch 4/7
* change the `epp_enabled` module param name to `epp`
* set the default epp mode to be false
* add testing for the x86_energy_perf_policy utility patchset (that
utility patchset will be sent in a separate thread)

v4: https://lore.kernel.org/lkml/[email protected]/
v3: https://lore.kernel.org/all/[email protected]/
v2: https://lore.kernel.org/all/[email protected]/
v1: https://lore.kernel.org/all/[email protected]/

Perry Yuan (9):
ACPI: CPPC: Add AMD pstate energy performance preference cppc control
Documentation: amd-pstate: add EPP profiles introduction
cpufreq: amd_pstate: implement Pstate EPP support for the AMD
processors
cpufreq: amd_pstate: implement amd pstate cpu online and offline
callback
cpufreq: amd-pstate: implement suspend and resume callbacks
cpufreq: amd-pstate: add frequency dynamic boost sysfs control
cpufreq: amd_pstate: add driver working mode status sysfs entry
Documentation: amd-pstate: add amd pstate driver mode introduction
Documentation: introduce amd pstate active mode kernel command line
options

.../admin-guide/kernel-parameters.txt | 7 +
Documentation/admin-guide/pm/amd-pstate.rst | 45 +-
drivers/acpi/cppc_acpi.c | 114 ++-
drivers/cpufreq/amd-pstate.c | 872 +++++++++++++++++-
include/acpi/cppc_acpi.h | 12 +
include/linux/amd-pstate.h | 82 ++
6 files changed, 1118 insertions(+), 14 deletions(-)

--
2.34.1


2022-11-28 18:14:01

by Yuan, Perry

Subject: [PATCH v5 7/9] cpufreq: amd_pstate: add driver working mode status sysfs entry

From: Perry Yuan <[email protected]>

When the amd-pstate driver is loaded in a specific driver mode, users need
a way to check which mode is currently enabled. Add this sysfs entry to
show the current status:

$ cat /sys/devices/system/cpu/amd-pstate/status
active
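The entry reports "active" or "passive" depending on which driver instance
is registered, or "off" when no amd-pstate driver is loaded (see
amd_pstate_show_status() below).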

Signed-off-by: Perry Yuan <[email protected]>
---
drivers/cpufreq/amd-pstate.c | 44 ++++++++++++++++++++++++++++++++++++
1 file changed, 44 insertions(+)

diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index 3335e7aa76f1..07c26178853d 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -65,6 +65,8 @@ static int cppc_load __initdata;
static int epp_off __initdata;

static struct cpufreq_driver *default_pstate_driver;
+static struct cpufreq_driver amd_pstate_epp_driver;
+static struct cpufreq_driver amd_pstate_driver;
static struct amd_cpudata **all_cpu_data;

static struct amd_pstate_params global_params;
@@ -807,6 +809,46 @@ static ssize_t store_boost(struct kobject *a,
return count;
}

+static ssize_t amd_pstate_show_status(char *buf)
+{
+ if (!default_pstate_driver)
+ return sysfs_emit(buf, "off\n");
+
+ return sysfs_emit(buf, "%s\n", default_pstate_driver == &amd_pstate_epp_driver ?
+ "active" : "passive");
+}
+
+static int amd_pstate_update_status(const char *buf, size_t size)
+{
+ /* FIXME! */
+ return -EOPNOTSUPP;
+}
+
+static ssize_t show_status(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ ssize_t ret;
+
+ mutex_lock(&amd_pstate_driver_lock);
+ ret = amd_pstate_show_status(buf);
+ mutex_unlock(&amd_pstate_driver_lock);
+
+ return ret;
+}
+
+static ssize_t store_status(struct kobject *a, struct kobj_attribute *b,
+ const char *buf, size_t count)
+{
+ char *p = memchr(buf, '\n', count);
+ int ret;
+
+ mutex_lock(&amd_pstate_driver_lock);
+ ret = amd_pstate_update_status(buf, p ? p - buf : count);
+ mutex_unlock(&amd_pstate_driver_lock);
+
+ return ret < 0 ? ret : count;
+}
+
cpufreq_freq_attr_ro(amd_pstate_max_freq);
cpufreq_freq_attr_ro(amd_pstate_lowest_nonlinear_freq);

@@ -814,6 +856,7 @@ cpufreq_freq_attr_ro(amd_pstate_highest_perf);
cpufreq_freq_attr_rw(energy_performance_preference);
cpufreq_freq_attr_ro(energy_performance_available_preferences);
define_one_global_rw(boost);
+define_one_global_rw(status);

static struct freq_attr *amd_pstate_attr[] = {
&amd_pstate_max_freq,
@@ -833,6 +876,7 @@ static struct freq_attr *amd_pstate_epp_attr[] = {

static struct attribute *pstate_global_attributes[] = {
&boost.attr,
+ &status.attr,
NULL
};

--
2.34.1

2022-11-28 18:35:26

by Yuan, Perry

Subject: [PATCH v5 3/9] cpufreq: amd_pstate: implement Pstate EPP support for the AMD processors

From: Perry Yuan <[email protected]>

Add EPP driver support for AMD SoCs which support a dedicated MSR for
CPPC. EPP is used by the DPM controller to configure the frequency that
a core operates at during short periods of activity.

The SoC EPP targets are configured on a scale from 0 to 255 where 0
represents maximum performance and 255 represents maximum efficiency.

The amd-pstate driver exports profile string names to userspace that are
tied to specific EPP values.

The balance_performance string (0x80) provides the best balance for
efficiency versus power on most systems, but users can choose other
strings to meet their needs as well.

$ cat /sys/devices/system/cpu/cpufreq/policy0/energy_performance_available_preferences
default performance balance_performance balance_power power

$ cat /sys/devices/system/cpu/cpufreq/policy0/energy_performance_preference
balance_performance
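A preference can also be changed at runtime by writing one of the advertised
strings back to the same attribute (as root), for example:

# echo power > /sys/devices/system/cpu/cpufreq/policy0/energy_performance_preference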

Signed-off-by: Perry Yuan <[email protected]>
---
drivers/cpufreq/amd-pstate.c | 635 ++++++++++++++++++++++++++++++++++-
include/linux/amd-pstate.h | 81 +++++
2 files changed, 710 insertions(+), 6 deletions(-)

diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index 204e39006dda..5a19b832afdf 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -59,8 +59,132 @@
* we disable it by default to go acpi-cpufreq on these processors and add a
* module parameter to be able to enable it manually for debugging.
*/
-static struct cpufreq_driver amd_pstate_driver;
+static bool shared_mem __read_mostly;
+static int cppc_active __read_mostly;
static int cppc_load __initdata;
+static int epp_off __initdata;
+
+static struct cpufreq_driver *default_pstate_driver;
+static struct amd_cpudata **all_cpu_data;
+
+static struct amd_pstate_params global_params;
+
+static DEFINE_MUTEX(amd_pstate_limits_lock);
+static DEFINE_MUTEX(amd_pstate_driver_lock);
+
+static bool cppc_boost __read_mostly;
+struct kobject *amd_pstate_kobj;
+
+#ifdef CONFIG_ACPI_CPPC_LIB
+static s16 amd_pstate_get_epp(struct amd_cpudata *cpudata, u64 cppc_req_cached)
+{
+ s16 epp;
+ struct cppc_perf_caps perf_caps;
+ int ret;
+
+ if (boot_cpu_has(X86_FEATURE_CPPC)) {
+ if (!cppc_req_cached) {
+ epp = rdmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ,
+ &cppc_req_cached);
+ if (epp)
+ return epp;
+ }
+ epp = (cppc_req_cached >> 24) & 0xFF;
+ } else {
+ ret = cppc_get_epp_caps(cpudata->cpu, &perf_caps);
+ if (ret < 0) {
+ pr_debug("Could not retrieve energy perf value (%d)\n", ret);
+ return -EIO;
+ }
+ epp = (s16) perf_caps.energy_perf;
+ }
+
+ return epp;
+}
+#endif
+
+static int amd_pstate_get_energy_pref_index(struct amd_cpudata *cpudata)
+{
+ s16 epp;
+ int index = -EINVAL;
+
+ epp = amd_pstate_get_epp(cpudata, 0);
+ if (epp < 0)
+ return epp;
+
+ switch (epp) {
+ case AMD_CPPC_EPP_PERFORMANCE:
+ index = EPP_INDEX_PERFORMANCE;
+ break;
+ case AMD_CPPC_EPP_BALANCE_PERFORMANCE:
+ index = EPP_INDEX_BALANCE_PERFORMANCE;
+ break;
+ case AMD_CPPC_EPP_BALANCE_POWERSAVE:
+ index = EPP_INDEX_BALANCE_POWERSAVE;
+ break;
+ case AMD_CPPC_EPP_POWERSAVE:
+ index = EPP_INDEX_POWERSAVE;
+ break;
+ default:
+ break;
+ }
+
+ return index;
+}
+
+#ifdef CONFIG_ACPI_CPPC_LIB
+static int amd_pstate_set_epp(struct amd_cpudata *cpudata, u32 epp)
+{
+ int ret;
+ struct cppc_perf_ctrls perf_ctrls;
+
+ if (boot_cpu_has(X86_FEATURE_CPPC)) {
+ u64 value = READ_ONCE(cpudata->cppc_req_cached);
+
+ value &= ~GENMASK_ULL(31, 24);
+ value |= (u64)epp << 24;
+ WRITE_ONCE(cpudata->cppc_req_cached, value);
+
+ ret = wrmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, value);
+ if (!ret)
+ cpudata->epp_cached = epp;
+ } else {
+ perf_ctrls.energy_perf = epp;
+ ret = cppc_set_epp_perf(cpudata->cpu, &perf_ctrls, 1);
+ if (ret) {
+ pr_debug("failed to set energy perf value (%d)\n", ret);
+ return ret;
+ }
+ cpudata->epp_cached = epp;
+ }
+
+ return ret;
+}
+
+static int amd_pstate_set_energy_pref_index(struct amd_cpudata *cpudata,
+ int pref_index)
+{
+ int epp = -EINVAL;
+ int ret;
+
+ if (!pref_index) {
+ pr_debug("EPP pref_index is invalid\n");
+ return -EINVAL;
+ }
+
+ if (epp == -EINVAL)
+ epp = epp_values[pref_index];
+
+ if (epp > 0 && cpudata->policy == CPUFREQ_POLICY_PERFORMANCE) {
+ pr_debug("EPP cannot be set under performance policy\n");
+ return -EBUSY;
+ }
+
+ ret = amd_pstate_set_epp(cpudata, epp);
+
+ return ret;
+}
+#endif

static inline int pstate_enable(bool enable)
{
@@ -70,11 +194,21 @@ static inline int pstate_enable(bool enable)
static int cppc_enable(bool enable)
{
int cpu, ret = 0;
+ struct cppc_perf_ctrls perf_ctrls;

for_each_present_cpu(cpu) {
ret = cppc_set_enable(cpu, enable);
if (ret)
return ret;
+
+ /* Enable autonomous mode for EPP */
+ if (!cppc_active) {
+ /* Set desired perf as zero to allow EPP firmware control */
+ perf_ctrls.desired_perf = 0;
+ ret = cppc_set_perf(cpu, &perf_ctrls);
+ if (ret)
+ return ret;
+ }
}

return ret;
@@ -417,7 +551,7 @@ static void amd_pstate_boost_init(struct amd_cpudata *cpudata)
return;

cpudata->boost_supported = true;
- amd_pstate_driver.boost_enabled = true;
+ default_pstate_driver->boost_enabled = true;
}

static void amd_perf_ctl_reset(unsigned int cpu)
@@ -591,10 +725,61 @@ static ssize_t show_amd_pstate_highest_perf(struct cpufreq_policy *policy,
return sprintf(&buf[0], "%u\n", perf);
}

+static ssize_t show_energy_performance_available_preferences(
+ struct cpufreq_policy *policy, char *buf)
+{
+ int i = 0;
+ int ret = 0;
+
+ while (energy_perf_strings[i] != NULL)
+ ret += sprintf(&buf[ret], "%s ", energy_perf_strings[i++]);
+
+ ret += sysfs_emit(&buf[ret], "\n");
+
+ return ret;
+}
+
+static ssize_t store_energy_performance_preference(
+ struct cpufreq_policy *policy, const char *buf, size_t count)
+{
+ struct amd_cpudata *cpudata = policy->driver_data;
+ char str_preference[21];
+ ssize_t ret;
+
+ ret = sscanf(buf, "%20s", str_preference);
+ if (ret != 1)
+ return -EINVAL;
+
+ ret = match_string(energy_perf_strings, -1, str_preference);
+ if (ret < 0)
+ return -EINVAL;
+
+ mutex_lock(&amd_pstate_limits_lock);
+ ret = amd_pstate_set_energy_pref_index(cpudata, ret);
+ mutex_unlock(&amd_pstate_limits_lock);
+
+ return ret ?: count;
+}
+
+static ssize_t show_energy_performance_preference(
+ struct cpufreq_policy *policy, char *buf)
+{
+ struct amd_cpudata *cpudata = policy->driver_data;
+ int preference;
+
+ preference = amd_pstate_get_energy_pref_index(cpudata);
+ if (preference < 0)
+ return preference;
+
+ return sysfs_emit(buf, "%s\n", energy_perf_strings[preference]);
+}
+
cpufreq_freq_attr_ro(amd_pstate_max_freq);
cpufreq_freq_attr_ro(amd_pstate_lowest_nonlinear_freq);

cpufreq_freq_attr_ro(amd_pstate_highest_perf);
+cpufreq_freq_attr_rw(energy_performance_preference);
+cpufreq_freq_attr_ro(energy_performance_available_preferences);

static struct freq_attr *amd_pstate_attr[] = {
&amd_pstate_max_freq,
@@ -603,6 +788,415 @@ static struct freq_attr *amd_pstate_attr[] = {
NULL,
};

+static struct freq_attr *amd_pstate_epp_attr[] = {
+ &amd_pstate_max_freq,
+ &amd_pstate_lowest_nonlinear_freq,
+ &amd_pstate_highest_perf,
+ &energy_performance_preference,
+ &energy_performance_available_preferences,
+ NULL,
+};
+
+static inline void update_boost_state(void)
+{
+ u64 misc_en;
+ struct amd_cpudata *cpudata;
+
+ cpudata = all_cpu_data[0];
+ rdmsrl(MSR_K7_HWCR, misc_en);
+ global_params.cppc_boost_disabled = misc_en & BIT_ULL(25);
+}
+
+static int amd_pstate_init_cpu(unsigned int cpunum)
+{
+ struct amd_cpudata *cpudata;
+
+ cpudata = all_cpu_data[cpunum];
+ if (!cpudata) {
+ cpudata = kzalloc(sizeof(*cpudata), GFP_KERNEL);
+ if (!cpudata)
+ return -ENOMEM;
+ WRITE_ONCE(all_cpu_data[cpunum], cpudata);
+
+ cpudata->cpu = cpunum;
+ }
+ cpudata->epp_powersave = -EINVAL;
+ cpudata->epp_policy = 0;
+ pr_debug("controlling: cpu %d\n", cpunum);
+ return 0;
+}
+
+static int __amd_pstate_cpu_init(struct cpufreq_policy *policy)
+{
+ int min_freq, max_freq, nominal_freq, lowest_nonlinear_freq, ret;
+ struct amd_cpudata *cpudata;
+ struct device *dev;
+ int rc;
+ u64 value;
+
+ rc = amd_pstate_init_cpu(policy->cpu);
+ if (rc)
+ return rc;
+
+ cpudata = all_cpu_data[policy->cpu];
+
+ dev = get_cpu_device(policy->cpu);
+ if (!dev)
+ goto free_cpudata1;
+
+ rc = amd_pstate_init_perf(cpudata);
+ if (rc)
+ goto free_cpudata1;
+
+ min_freq = amd_get_min_freq(cpudata);
+ max_freq = amd_get_max_freq(cpudata);
+ nominal_freq = amd_get_nominal_freq(cpudata);
+ lowest_nonlinear_freq = amd_get_lowest_nonlinear_freq(cpudata);
+ if (min_freq < 0 || max_freq < 0 || min_freq > max_freq) {
+ dev_err(dev, "min_freq(%d) or max_freq(%d) value is incorrect\n",
+ min_freq, max_freq);
+ ret = -EINVAL;
+ goto free_cpudata1;
+ }
+
+ policy->min = min_freq;
+ policy->max = max_freq;
+
+ policy->cpuinfo.min_freq = min_freq;
+ policy->cpuinfo.max_freq = max_freq;
+ /* It will be updated by governor */
+ policy->cur = policy->cpuinfo.min_freq;
+
+ /* Initial processor data capability frequencies */
+ cpudata->max_freq = max_freq;
+ cpudata->min_freq = min_freq;
+ cpudata->nominal_freq = nominal_freq;
+ cpudata->lowest_nonlinear_freq = lowest_nonlinear_freq;
+
+ policy->driver_data = cpudata;
+
+ update_boost_state();
+ cpudata->epp_cached = amd_pstate_get_epp(cpudata, value);
+
+ policy->min = policy->cpuinfo.min_freq;
+ policy->max = policy->cpuinfo.max_freq;
+
+ if (boot_cpu_has(X86_FEATURE_CPPC))
+ policy->fast_switch_possible = true;
+
+ if (!shared_mem && boot_cpu_has(X86_FEATURE_CPPC)) {
+ ret = rdmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, &value);
+ if (ret)
+ return ret;
+ WRITE_ONCE(cpudata->cppc_req_cached, value);
+
+ ret = rdmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_CAP1, &value);
+ if (ret)
+ return ret;
+ WRITE_ONCE(cpudata->cppc_cap1_cached, value);
+ }
+ amd_pstate_boost_init(cpudata);
+
+ return 0;
+
+free_cpudata1:
+ kfree(cpudata);
+ return ret;
+}
+
+static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy)
+{
+ int ret;
+
+ ret = __amd_pstate_cpu_init(policy);
+ if (ret)
+ return ret;
+ /*
+ * Set the policy to powersave to provide a valid fallback value in case
+ * the default cpufreq governor is neither powersave nor performance.
+ */
+ policy->policy = CPUFREQ_POLICY_POWERSAVE;
+
+ return 0;
+}
+
+static int amd_pstate_epp_cpu_exit(struct cpufreq_policy *policy)
+{
+ pr_debug("CPU %d exiting\n", policy->cpu);
+ policy->fast_switch_possible = false;
+ return 0;
+}
+
+static void amd_pstate_update_max_freq(unsigned int cpu)
+{
+ struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
+
+ if (!policy)
+ return;
+
+ refresh_frequency_limits(policy);
+ cpufreq_cpu_put(policy);
+}
+
+static void amd_pstate_epp_update_limits(unsigned int cpu)
+{
+ mutex_lock(&amd_pstate_driver_lock);
+ update_boost_state();
+ if (global_params.cppc_boost_disabled) {
+ for_each_possible_cpu(cpu)
+ amd_pstate_update_max_freq(cpu);
+ } else {
+ cpufreq_update_policy(cpu);
+ }
+ mutex_unlock(&amd_pstate_driver_lock);
+}
+
+static int cppc_boost_hold_time_ns = 3 * NSEC_PER_MSEC;
+
+static inline void amd_pstate_boost_up(struct amd_cpudata *cpudata)
+{
+ u64 hwp_req = READ_ONCE(cpudata->cppc_req_cached);
+ u64 hwp_cap = READ_ONCE(cpudata->cppc_cap1_cached);
+ u32 max_limit = (hwp_req & 0xff);
+ u32 min_limit = (hwp_req & 0xff00) >> 8;
+ u32 boost_level1;
+
+ /* If max and min are equal or already at max, nothing to boost */
+ if (max_limit == min_limit)
+ return;
+
+ /* Set boost max and min to initial value */
+ if (!cpudata->cppc_boost_min)
+ cpudata->cppc_boost_min = min_limit;
+
+ boost_level1 = ((AMD_CPPC_NOMINAL_PERF(hwp_cap) + min_limit) >> 1);
+
+ if (cpudata->cppc_boost_min < boost_level1)
+ cpudata->cppc_boost_min = boost_level1;
+ else if (cpudata->cppc_boost_min < AMD_CPPC_NOMINAL_PERF(hwp_cap))
+ cpudata->cppc_boost_min = AMD_CPPC_NOMINAL_PERF(hwp_cap);
+ else if (cpudata->cppc_boost_min == AMD_CPPC_NOMINAL_PERF(hwp_cap))
+ cpudata->cppc_boost_min = max_limit;
+ else
+ return;
+
+ hwp_req &= ~AMD_CPPC_MIN_PERF(~0L);
+ hwp_req |= AMD_CPPC_MIN_PERF(cpudata->cppc_boost_min);
+ wrmsrl_safe_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, hwp_req);
+ cpudata->last_update = cpudata->sample.time;
+}
+
+static inline void amd_pstate_boost_down(struct amd_cpudata *cpudata)
+{
+ bool expired;
+
+ if (cpudata->cppc_boost_min) {
+ expired = time_after64(cpudata->sample.time, cpudata->last_update +
+ cppc_boost_hold_time_ns);
+
+ if (expired) {
+ wrmsrl_safe_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ,
+ cpudata->cppc_req_cached);
+ cpudata->cppc_boost_min = 0;
+ }
+ }
+
+ cpudata->last_update = cpudata->sample.time;
+}
+
+static inline void amd_pstate_boost_update_util(struct amd_cpudata *cpudata,
+ u64 time)
+{
+ cpudata->sample.time = time;
+ if (smp_processor_id() != cpudata->cpu)
+ return;
+
+ if (cpudata->sched_flags & SCHED_CPUFREQ_IOWAIT) {
+ bool do_io = false;
+
+ cpudata->sched_flags = 0;
+ /*
+ * Set iowait_boost flag and update time. Since the IO WAIT flag
+ * is set all the time, we can't just conclude that some IO-bound
+ * activity is scheduled on this CPU from just one occurrence. If
+ * we receive at least two in two consecutive ticks, then we treat
+ * it as a boost candidate.
+ * This is leveraged from the Intel Pstate driver.
+ */
+ if (time_before64(time, cpudata->last_io_update + 2 * TICK_NSEC))
+ do_io = true;
+
+ cpudata->last_io_update = time;
+
+ if (do_io)
+ amd_pstate_boost_up(cpudata);
+
+ } else {
+ amd_pstate_boost_down(cpudata);
+ }
+}
+
+static inline void amd_pstate_cppc_update_hook(struct update_util_data *data,
+ u64 time, unsigned int flags)
+{
+ struct amd_cpudata *cpudata = container_of(data,
+ struct amd_cpudata, update_util);
+
+ cpudata->sched_flags |= flags;
+
+ if (smp_processor_id() == cpudata->cpu)
+ amd_pstate_boost_update_util(cpudata, time);
+}
+
+static void amd_pstate_clear_update_util_hook(unsigned int cpu)
+{
+ struct amd_cpudata *cpudata = all_cpu_data[cpu];
+
+ if (!cpudata->update_util_set)
+ return;
+
+ cpufreq_remove_update_util_hook(cpu);
+ cpudata->update_util_set = false;
+ synchronize_rcu();
+}
+
+static void amd_pstate_set_update_util_hook(unsigned int cpu_num)
+{
+ struct amd_cpudata *cpudata = all_cpu_data[cpu_num];
+
+ if (!cppc_boost) {
+ if (cpudata->update_util_set)
+ amd_pstate_clear_update_util_hook(cpudata->cpu);
+ return;
+ }
+
+ if (cpudata->update_util_set)
+ return;
+
+ cpudata->sample.time = 0;
+ cpufreq_add_update_util_hook(cpu_num, &cpudata->update_util,
+ amd_pstate_cppc_update_hook);
+ cpudata->update_util_set = true;
+}
+
+static void amd_pstate_epp_init(unsigned int cpu)
+{
+ struct amd_cpudata *cpudata = all_cpu_data[cpu];
+ u32 max_perf, min_perf;
+ u64 value;
+ s16 epp;
+ int ret;
+
+ max_perf = READ_ONCE(cpudata->highest_perf);
+ min_perf = READ_ONCE(cpudata->lowest_perf);
+
+ value = READ_ONCE(cpudata->cppc_req_cached);
+
+ if (cpudata->policy == CPUFREQ_POLICY_PERFORMANCE)
+ min_perf = max_perf;
+
+ /* Initial min/max values for CPPC Performance Controls Register */
+ value &= ~AMD_CPPC_MIN_PERF(~0L);
+ value |= AMD_CPPC_MIN_PERF(min_perf);
+
+ value &= ~AMD_CPPC_MAX_PERF(~0L);
+ value |= AMD_CPPC_MAX_PERF(max_perf);
+
+ /* The CPPC EPP feature requires the desired perf field to be set to zero */
+ value &= ~AMD_CPPC_DES_PERF(~0L);
+ value |= AMD_CPPC_DES_PERF(0);
+
+ if (cpudata->epp_policy == cpudata->policy)
+ goto skip_epp;
+
+ cpudata->epp_policy = cpudata->policy;
+
+ if (cpudata->policy == CPUFREQ_POLICY_PERFORMANCE) {
+ epp = amd_pstate_get_epp(cpudata, value);
+ cpudata->epp_powersave = epp;
+ if (epp < 0)
+ goto skip_epp;
+ /* force the epp value to be zero for performance policy */
+ epp = 0;
+ } else {
+ if (cpudata->epp_powersave < 0)
+ goto skip_epp;
+ /* Get BIOS pre-defined epp value */
+ epp = amd_pstate_get_epp(cpudata, value);
+ if (epp)
+ goto skip_epp;
+ epp = cpudata->epp_powersave;
+ }
+ /* Set initial EPP value */
+ if (boot_cpu_has(X86_FEATURE_CPPC)) {
+ value &= ~GENMASK_ULL(31, 24);
+ value |= (u64)epp << 24;
+ }
+
+skip_epp:
+ WRITE_ONCE(cpudata->cppc_req_cached, value);
+ ret = wrmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, value);
+ if (!ret)
+ cpudata->epp_cached = epp;
+}
+
+static void amd_pstate_set_max_limits(struct amd_cpudata *cpudata)
+{
+ u64 hwp_cap = READ_ONCE(cpudata->cppc_cap1_cached);
+ u64 hwp_req = READ_ONCE(cpudata->cppc_req_cached);
+ u32 max_limit = (hwp_cap >> 24) & 0xff;
+
+ hwp_req &= ~AMD_CPPC_MIN_PERF(~0L);
+ hwp_req |= AMD_CPPC_MIN_PERF(max_limit);
+ wrmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, hwp_req);
+}
+
+static int amd_pstate_epp_set_policy(struct cpufreq_policy *policy)
+{
+ struct amd_cpudata *cpudata;
+
+ if (!policy->cpuinfo.max_freq)
+ return -ENODEV;
+
+ pr_debug("set_policy: cpuinfo.max %u policy->max %u\n",
+ policy->cpuinfo.max_freq, policy->max);
+
+ cpudata = all_cpu_data[policy->cpu];
+ cpudata->policy = policy->policy;
+
+ if (boot_cpu_has(X86_FEATURE_CPPC)) {
+ mutex_lock(&amd_pstate_limits_lock);
+
+ if (cpudata->policy == CPUFREQ_POLICY_PERFORMANCE) {
+ amd_pstate_clear_update_util_hook(policy->cpu);
+ amd_pstate_set_max_limits(cpudata);
+ } else {
+ amd_pstate_set_update_util_hook(policy->cpu);
+ }
+
+ if (boot_cpu_has(X86_FEATURE_CPPC))
+ amd_pstate_epp_init(policy->cpu);
+
+ mutex_unlock(&amd_pstate_limits_lock);
+ }
+
+ return 0;
+}
+
+static void amd_pstate_verify_cpu_policy(struct amd_cpudata *cpudata,
+ struct cpufreq_policy_data *policy)
+{
+ update_boost_state();
+ cpufreq_verify_within_cpu_limits(policy);
+}
+
+static int amd_pstate_epp_verify_policy(struct cpufreq_policy_data *policy)
+{
+ amd_pstate_verify_cpu_policy(all_cpu_data[policy->cpu], policy);
+ pr_debug("policy_max =%d, policy_min=%d\n", policy->max, policy->min);
+ return 0;
+}
+
static struct cpufreq_driver amd_pstate_driver = {
.flags = CPUFREQ_CONST_LOOPS | CPUFREQ_NEED_UPDATE_LIMITS,
.verify = amd_pstate_verify,
@@ -616,8 +1210,20 @@ static struct cpufreq_driver amd_pstate_driver = {
.attr = amd_pstate_attr,
};

+static struct cpufreq_driver amd_pstate_epp_driver = {
+ .flags = CPUFREQ_CONST_LOOPS,
+ .verify = amd_pstate_epp_verify_policy,
+ .setpolicy = amd_pstate_epp_set_policy,
+ .init = amd_pstate_epp_cpu_init,
+ .exit = amd_pstate_epp_cpu_exit,
+ .update_limits = amd_pstate_epp_update_limits,
+ .name = "amd_pstate_epp",
+ .attr = amd_pstate_epp_attr,
+};
+
static int __init amd_pstate_init(void)
{
+ static struct amd_cpudata **cpudata;
int ret;

if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD)
@@ -641,10 +1247,17 @@ static int __init amd_pstate_init(void)
if (cpufreq_get_current_driver())
return -EEXIST;

+ if (!epp_off) {
+ WRITE_ONCE(cppc_active, 1);
+ if (!default_pstate_driver)
+ default_pstate_driver = &amd_pstate_epp_driver;
+ }
+
/* capability check */
if (boot_cpu_has(X86_FEATURE_CPPC)) {
pr_debug("AMD CPPC MSR based functionality is supported\n");
- amd_pstate_driver.adjust_perf = amd_pstate_adjust_perf;
+ if (!cppc_active)
+ default_pstate_driver->adjust_perf = amd_pstate_adjust_perf;
} else {
pr_debug("AMD CPPC shared memory based functionality is supported\n");
static_call_update(amd_pstate_enable, cppc_enable);
@@ -652,6 +1265,10 @@ static int __init amd_pstate_init(void)
static_call_update(amd_pstate_update_perf, cppc_update_perf);
}

+ cpudata = vzalloc(array_size(sizeof(void *), num_possible_cpus()));
+ if (!cpudata)
+ return -ENOMEM;
+ WRITE_ONCE(all_cpu_data, cpudata);
/* enable amd pstate feature */
ret = amd_pstate_enable(true);
if (ret) {
@@ -659,9 +1276,9 @@ static int __init amd_pstate_init(void)
return ret;
}

- ret = cpufreq_register_driver(&amd_pstate_driver);
+ ret = cpufreq_register_driver(default_pstate_driver);
if (ret)
- pr_err("failed to register amd_pstate_driver with return %d\n",
+ pr_err("failed to register amd pstate driver with return %d\n",
ret);

return ret;
@@ -676,8 +1293,14 @@ static int __init amd_pstate_param(char *str)
if (!strcmp(str, "disable")) {
cppc_load = 0;
pr_info("driver is explicitly disabled\n");
- } else if (!strcmp(str, "passive"))
+ } else if (!strcmp(str, "passive")) {
+ epp_off = 1;
cppc_load = 1;
+ default_pstate_driver = &amd_pstate_driver;
+ } else if (!strcmp(str, "active")) {
+ cppc_load = 1;
+ default_pstate_driver = &amd_pstate_epp_driver;
+ }

return 0;
}
diff --git a/include/linux/amd-pstate.h b/include/linux/amd-pstate.h
index 1c4b8659f171..7e6e8cab97b3 100644
--- a/include/linux/amd-pstate.h
+++ b/include/linux/amd-pstate.h
@@ -25,6 +25,7 @@ struct amd_aperf_mperf {
u64 aperf;
u64 mperf;
u64 tsc;
+ u64 time;
};

/**
@@ -47,6 +48,18 @@ struct amd_aperf_mperf {
* @prev: Last Aperf/Mperf/tsc count value read from register
* @freq: current cpu frequency value
* @boost_supported: check whether the Processor or SBIOS supports boost mode
* @epp_powersave: Last saved CPPC energy performance preference
* when the policy is switched to performance
+ * @epp_policy: Last saved policy used to set energy-performance preference
+ * @epp_cached: Cached CPPC energy-performance preference value
+ * @policy: Cpufreq policy value
+ * @sched_flags: Store scheduler flags for possible cross CPU update
+ * @update_util_set: CPUFreq utility callback is set
+ * @last_update: Time stamp of the last performance state update
+ * @cppc_boost_min: Last CPPC boosted min performance state
+ * @cppc_cap1_cached: Cached value of the last CPPC Capabilities MSR
+ * @update_util: Cpufreq utility callback information
+ * @sample: the stored performance sample
*
* The amd_cpudata is key private data for each CPU thread in AMD P-State, and
* represents all the attributes and goals that AMD P-State requests at runtime.
@@ -72,6 +85,74 @@ struct amd_cpudata {

u64 freq;
bool boost_supported;
+
+ /* EPP feature related attributes */
+ s16 epp_powersave;
+ s16 epp_policy;
+ s16 epp_cached;
+ u32 policy;
+ u32 sched_flags;
+ bool update_util_set;
+ u64 last_update;
+ u64 last_io_update;
+ u32 cppc_boost_min;
+ u64 cppc_cap1_cached;
+ struct update_util_data update_util;
+ struct amd_aperf_mperf sample;
+};
+
+/**
+ * struct amd_pstate_params - global parameters for the performance control
+ * @cppc_boost_disabled: whether the core performance boost is disabled
+ */
+struct amd_pstate_params {
+ bool cppc_boost_disabled;
+};
+
+#define AMD_CPPC_EPP_PERFORMANCE 0x00
+#define AMD_CPPC_EPP_BALANCE_PERFORMANCE 0x80
+#define AMD_CPPC_EPP_BALANCE_POWERSAVE 0xBF
+#define AMD_CPPC_EPP_POWERSAVE 0xFF
+
+/*
+ * AMD Energy Preference Performance (EPP)
+ * The EPP is used in the CCLK DPM controller to drive
+ * the frequency that a core is going to operate during
+ * short periods of activity. EPP values will be utilized for
+ * different OS profiles (balanced, performance, power savings)
+ * display strings corresponding to EPP index in the
+ * energy_perf_strings[]
+ * index String
+ *-------------------------------------
+ * 0 default
+ * 1 performance
+ * 2 balance_performance
+ * 3 balance_power
+ * 4 power
+ */
+enum energy_perf_value_index {
+ EPP_INDEX_DEFAULT = 0,
+ EPP_INDEX_PERFORMANCE,
+ EPP_INDEX_BALANCE_PERFORMANCE,
+ EPP_INDEX_BALANCE_POWERSAVE,
+ EPP_INDEX_POWERSAVE,
+};
+
+static const char * const energy_perf_strings[] = {
+ [EPP_INDEX_DEFAULT] = "default",
+ [EPP_INDEX_PERFORMANCE] = "performance",
+ [EPP_INDEX_BALANCE_PERFORMANCE] = "balance_performance",
+ [EPP_INDEX_BALANCE_POWERSAVE] = "balance_power",
+ [EPP_INDEX_POWERSAVE] = "power",
+ NULL
+};
+
+static unsigned int epp_values[] = {
+ [EPP_INDEX_DEFAULT] = 0,
+ [EPP_INDEX_PERFORMANCE] = AMD_CPPC_EPP_PERFORMANCE,
+ [EPP_INDEX_BALANCE_PERFORMANCE] = AMD_CPPC_EPP_BALANCE_PERFORMANCE,
+ [EPP_INDEX_BALANCE_POWERSAVE] = AMD_CPPC_EPP_BALANCE_POWERSAVE,
+ [EPP_INDEX_POWERSAVE] = AMD_CPPC_EPP_POWERSAVE,
};

#endif /* _LINUX_AMD_PSTATE_H */
--
2.34.1

2022-11-28 18:36:49

by Yuan, Perry

Subject: [PATCH v5 4/9] cpufreq: amd_pstate: implement amd pstate cpu online and offline callback

From: Perry Yuan <[email protected]>

Add online and offline driver callbacks to allow CPU cores to go offline
and to restore the previous working state when a core comes back online
later in EPP driver mode. The offline path clamps the performance limits
to the lowest perf value, and the online path restores the cached CPPC
request (see amd_pstate_epp_offline() and amd_pstate_epp_reenable() below).

Signed-off-by: Perry Yuan <[email protected]>
---
drivers/cpufreq/amd-pstate.c | 89 ++++++++++++++++++++++++++++++++++++
include/linux/amd-pstate.h | 1 +
2 files changed, 90 insertions(+)

diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index 5a19b832afdf..b77f5b1f3565 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -1183,6 +1183,93 @@ static int amd_pstate_epp_set_policy(struct cpufreq_policy *policy)
return 0;
}

+static void amd_pstate_epp_reenable(struct amd_cpudata *cpudata)
+{
+ struct cppc_perf_ctrls perf_ctrls;
+ u64 value, max_perf;
+ int ret;
+
+ ret = amd_pstate_enable(true);
+ if (ret)
+ pr_err("failed to enable amd pstate during resume, return %d\n", ret);
+
+ value = READ_ONCE(cpudata->cppc_req_cached);
+ max_perf = READ_ONCE(cpudata->highest_perf);
+
+ if (boot_cpu_has(X86_FEATURE_CPPC)) {
+ wrmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, value);
+ } else {
+ perf_ctrls.max_perf = max_perf;
+ perf_ctrls.energy_perf = AMD_CPPC_ENERGY_PERF_PREF(cpudata->epp_cached);
+ cppc_set_perf(cpudata->cpu, &perf_ctrls);
+ }
+}
+
+static int amd_pstate_epp_cpu_online(struct cpufreq_policy *policy)
+{
+ struct amd_cpudata *cpudata = all_cpu_data[policy->cpu];
+
+ pr_debug("AMD CPU Core %d going online\n", cpudata->cpu);
+
+ if (cppc_active) {
+ amd_pstate_epp_reenable(cpudata);
+ cpudata->suspended = false;
+ }
+
+ return 0;
+}
+
+static void amd_pstate_epp_offline(struct cpufreq_policy *policy)
+{
+ struct amd_cpudata *cpudata = all_cpu_data[policy->cpu];
+ struct cppc_perf_ctrls perf_ctrls;
+ int min_perf;
+ u64 value;
+
+ min_perf = READ_ONCE(cpudata->lowest_perf);
+ value = READ_ONCE(cpudata->cppc_req_cached);
+
+ mutex_lock(&amd_pstate_limits_lock);
+ if (boot_cpu_has(X86_FEATURE_CPPC)) {
+ cpudata->epp_policy = CPUFREQ_POLICY_UNKNOWN;
+
+ /* Set max perf same as min perf */
+ value &= ~AMD_CPPC_MAX_PERF(~0L);
+ value |= AMD_CPPC_MAX_PERF(min_perf);
+ value &= ~AMD_CPPC_MIN_PERF(~0L);
+ value |= AMD_CPPC_MIN_PERF(min_perf);
+ wrmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, value);
+ } else {
+ perf_ctrls.desired_perf = 0;
+ perf_ctrls.max_perf = min_perf;
+ perf_ctrls.energy_perf = AMD_CPPC_ENERGY_PERF_PREF(AMD_CPPC_EPP_POWERSAVE);
+ cppc_set_perf(cpudata->cpu, &perf_ctrls);
+ }
+ mutex_unlock(&amd_pstate_limits_lock);
+}
+
+static int amd_pstate_cpu_offline(struct cpufreq_policy *policy)
+{
+ struct amd_cpudata *cpudata = all_cpu_data[policy->cpu];
+
+ pr_debug("AMD CPU Core %d going offline\n", cpudata->cpu);
+
+ if (cpudata->suspended)
+ return 0;
+
+ if (cppc_active)
+ amd_pstate_epp_offline(policy);
+
+ return 0;
+}
+
+static int amd_pstate_epp_cpu_offline(struct cpufreq_policy *policy)
+{
+ amd_pstate_clear_update_util_hook(policy->cpu);
+
+ return amd_pstate_cpu_offline(policy);
+}
+
static void amd_pstate_verify_cpu_policy(struct amd_cpudata *cpudata,
struct cpufreq_policy_data *policy)
{
@@ -1217,6 +1304,8 @@ static struct cpufreq_driver amd_pstate_epp_driver = {
.init = amd_pstate_epp_cpu_init,
.exit = amd_pstate_epp_cpu_exit,
.update_limits = amd_pstate_epp_update_limits,
+ .offline = amd_pstate_epp_cpu_offline,
+ .online = amd_pstate_epp_cpu_online,
.name = "amd_pstate_epp",
.attr = amd_pstate_epp_attr,
};
diff --git a/include/linux/amd-pstate.h b/include/linux/amd-pstate.h
index 7e6e8cab97b3..c0ad7eedcae3 100644
--- a/include/linux/amd-pstate.h
+++ b/include/linux/amd-pstate.h
@@ -99,6 +99,7 @@ struct amd_cpudata {
u64 cppc_cap1_cached;
struct update_util_data update_util;
struct amd_aperf_mperf sample;
+ bool suspended;
};

/**
--
2.34.1

2022-11-29 11:42:54

by kernel test robot

Subject: Re: [PATCH v5 3/9] cpufreq: amd_pstate: implement Pstate EPP support for the AMD processors

Hi Perry,

I love your patch! Perhaps something to improve:

[auto build test WARNING on rafael-pm/linux-next]
[also build test WARNING on linus/master v6.1-rc7 next-20221129]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url: https://github.com/intel-lab-lkp/linux/commits/Perry-Yuan/Implement-AMD-Pstate-EPP-Driver/20221129-011556
base: https://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git linux-next
patch link: https://lore.kernel.org/r/20221128170314.2276636-4-perry.yuan%40amd.com
patch subject: [PATCH v5 3/9] cpufreq: amd_pstate: implement Pstate EPP support for the AMD processors
config: i386-randconfig-a002-20221128
compiler: gcc-11 (Debian 11.3.0-8) 11.3.0
reproduce (this is a W=1 build):
# https://github.com/intel-lab-lkp/linux/commit/e353f533252a9e07649872be9e9012c4500d9133
git remote add linux-review https://github.com/intel-lab-lkp/linux
git fetch --no-tags linux-review Perry-Yuan/Implement-AMD-Pstate-EPP-Driver/20221129-011556
git checkout e353f533252a9e07649872be9e9012c4500d9133
# save the config file
mkdir build_dir && cp config build_dir/.config
make W=1 O=build_dir ARCH=i386 SHELL=/bin/bash drivers/cpufreq/

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <[email protected]>

All warnings (new ones prefixed by >>):

In file included from drivers/cpufreq/amd-pstate-ut.c:29:
>> include/linux/amd-pstate.h:150:21: warning: 'epp_values' defined but not used [-Wunused-variable]
150 | static unsigned int epp_values[] = {
| ^~~~~~~~~~
>> include/linux/amd-pstate.h:141:27: warning: 'energy_perf_strings' defined but not used [-Wunused-const-variable=]
141 | static const char * const energy_perf_strings[] = {
| ^~~~~~~~~~~~~~~~~~~


vim +/epp_values +150 include/linux/amd-pstate.h

140
> 141 static const char * const energy_perf_strings[] = {
142 [EPP_INDEX_DEFAULT] = "default",
143 [EPP_INDEX_PERFORMANCE] = "performance",
144 [EPP_INDEX_BALANCE_PERFORMANCE] = "balance_performance",
145 [EPP_INDEX_BALANCE_POWERSAVE] = "balance_power",
146 [EPP_INDEX_POWERSAVE] = "power",
147 NULL
148 };
149
> 150 static unsigned int epp_values[] = {
151 [EPP_INDEX_DEFAULT] = 0,
152 [EPP_INDEX_PERFORMANCE] = AMD_CPPC_EPP_PERFORMANCE,
153 [EPP_INDEX_BALANCE_PERFORMANCE] = AMD_CPPC_EPP_BALANCE_PERFORMANCE,
154 [EPP_INDEX_BALANCE_POWERSAVE] = AMD_CPPC_EPP_BALANCE_POWERSAVE,
155 [EPP_INDEX_POWERSAVE] = AMD_CPPC_EPP_POWERSAVE,
156 };
157

--
0-DAY CI Kernel Test Service
https://01.org/lkp



2022-11-29 13:11:56

by Wyes Karny

Subject: Re: [PATCH v5 3/9] cpufreq: amd_pstate: implement Pstate EPP support for the AMD processors

Hi Perry,

On 11/28/2022 10:33 PM, Perry Yuan wrote:
> From: Perry Yuan <[email protected]>
>
> Add EPP driver support for AMD SoCs which support a dedicated MSR for
> CPPC. EPP is used by the DPM controller to configure the frequency that
> a core operates at during short periods of activity.
>
> The SoC EPP targets are configured on a scale from 0 to 255 where 0
> represents maximum performance and 255 represents maximum efficiency.
>
> The amd-pstate driver exports profile string names to userspace that are
> tied to specific EPP values.
>
> The balance_performance string (0x80) provides the best balance for
> efficiency versus power on most systems, but users can choose other
> strings to meet their needs as well.
>
> $ cat /sys/devices/system/cpu/cpufreq/policy0/energy_performance_available_preferences
> default performance balance_performance balance_power power
>
> $ cat /sys/devices/system/cpu/cpufreq/policy0/energy_performance_preference
> balance_performance
>
> Signed-off-by: Perry Yuan <[email protected]>
> ---
> drivers/cpufreq/amd-pstate.c | 635 ++++++++++++++++++++++++++++++++++-
> include/linux/amd-pstate.h | 81 +++++
> 2 files changed, 710 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
> index 204e39006dda..5a19b832afdf 100644
> --- a/drivers/cpufreq/amd-pstate.c
> +++ b/drivers/cpufreq/amd-pstate.c
> @@ -59,8 +59,132 @@
> * we disable it by default to go acpi-cpufreq on these processors and add a
> * module parameter to be able to enable it manually for debugging.
> */
> -static struct cpufreq_driver amd_pstate_driver;
> +static bool shared_mem __read_mostly;

`shared_mem` is initialized to 0 here but it is not overwritten anywhere.

> +static int cppc_active __read_mostly;
> static int cppc_load __initdata;
> +static int epp_off __initdata;

Not sure why we need two variables, `cppc_active` and `epp_off`, to control `active mode`.
Is it because `epp_off` is marked as __initdata, so if you read `epp_off` in non-init functions
it shows zero?
In that case, can we remove __initdata from `epp_off` and use the same variable elsewhere?

> +
> +static struct cpufreq_driver *default_pstate_driver;
> +static struct amd_cpudata **all_cpu_data;
> +
> +static struct amd_pstate_params global_params;
> +
> +static DEFINE_MUTEX(amd_pstate_limits_lock);
> +static DEFINE_MUTEX(amd_pstate_driver_lock);
> +
> +static bool cppc_boost __read_mostly;
> +struct kobject *amd_pstate_kobj;
> +
> +#ifdef CONFIG_ACPI_CPPC_LIB
> +static s16 amd_pstate_get_epp(struct amd_cpudata *cpudata, u64 cppc_req_cached)
> +{
> + s16 epp;
> + struct cppc_perf_caps perf_caps;
> + int ret;
> +
> + if (boot_cpu_has(X86_FEATURE_CPPC)) {
> + if (!cppc_req_cached) {
> + epp = rdmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ,
> + &cppc_req_cached);
> + if (epp)
> + return epp;
> + }
> + epp = (cppc_req_cached >> 24) & 0xFF;
> + } else {
> + ret = cppc_get_epp_caps(cpudata->cpu, &perf_caps);
> + if (ret < 0) {
> + pr_debug("Could not retrieve energy perf value (%d)\n", ret);
> + return -EIO;
> + }
> + epp = (s16) perf_caps.energy_perf;
> + }
> +
> + return epp;
> +}
> +#endif
> +
> +static int amd_pstate_get_energy_pref_index(struct amd_cpudata *cpudata)
> +{
> + s16 epp;
> + int index = -EINVAL;
> +
> + epp = amd_pstate_get_epp(cpudata, 0);
> + if (epp < 0)
> + return epp;
> +
> + switch (epp) {
> + case AMD_CPPC_EPP_PERFORMANCE:
> + index = EPP_INDEX_PERFORMANCE;
> + break;
> + case AMD_CPPC_EPP_BALANCE_PERFORMANCE:
> + index = EPP_INDEX_BALANCE_PERFORMANCE;
> + break;
> + case AMD_CPPC_EPP_BALANCE_POWERSAVE:
> + index = EPP_INDEX_BALANCE_POWERSAVE;
> + break;
> + case AMD_CPPC_EPP_POWERSAVE:
> + index = EPP_INDEX_POWERSAVE;
> + break;
> + default:
> + break;
> + }
> +
> + return index;
> +}
> +
> +#ifdef CONFIG_ACPI_CPPC_LIB
> +static int amd_pstate_set_epp(struct amd_cpudata *cpudata, u32 epp)
> +{
> + int ret;
> + struct cppc_perf_ctrls perf_ctrls;
> +
> + if (boot_cpu_has(X86_FEATURE_CPPC)) {
> + u64 value = READ_ONCE(cpudata->cppc_req_cached);
> +
> + value &= ~GENMASK_ULL(31, 24);
> + value |= (u64)epp << 24;

Small nit: the AMD_CPPC_ENERGY_PERF_PREF macro can be used here instead of hardcoding the values.

> + WRITE_ONCE(cpudata->cppc_req_cached, value);
> +
> + ret = wrmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, value);
> + if (!ret)
> + cpudata->epp_cached = epp;
> + } else {
> + perf_ctrls.energy_perf = epp;
> + ret = cppc_set_epp_perf(cpudata->cpu, &perf_ctrls, 1);
> + if (ret) {
> + pr_debug("failed to set energy perf value (%d)\n", ret);
> + return ret;
> + }
> + cpudata->epp_cached = epp;
> + }
> +
> + return ret;
> +}
> +
> +static int amd_pstate_set_energy_pref_index(struct amd_cpudata *cpudata,
> + int pref_index)
> +{
> + int epp = -EINVAL;
> + int ret;
> +
> + if (!pref_index) {
> + pr_debug("EPP pref_index is invalid\n");
> + return -EINVAL;
> + }
> +
> + if (epp == -EINVAL)
> + epp = epp_values[pref_index];
> +
> + if (epp > 0 && cpudata->policy == CPUFREQ_POLICY_PERFORMANCE) {
> + pr_debug("EPP cannot be set under performance policy\n");
> + return -EBUSY;
> + }
> +
> + ret = amd_pstate_set_epp(cpudata, epp);
> +
> + return ret;
> +}
> +#endif
>
> static inline int pstate_enable(bool enable)
> {
> @@ -70,11 +194,21 @@ static inline int pstate_enable(bool enable)
> static int cppc_enable(bool enable)
> {
> int cpu, ret = 0;
> + struct cppc_perf_ctrls perf_ctrls;
>
> for_each_present_cpu(cpu) {
> ret = cppc_set_enable(cpu, enable);
> if (ret)
> return ret;
> +
> + /* Enable autonomous mode for EPP */
> + if (!cppc_active) {

It should be `if (cppc_active) {`, right?

> + /* Set desired perf as zero to allow EPP firmware control */
> + perf_ctrls.desired_perf = 0;
> + ret = cppc_set_perf(cpu, &perf_ctrls);

Don't we have to do the same thing (setting des_perf to 0) in the `pstate_enable` function?
Because active mode on MSR-supported processors will invoke the `pstate_enable` function.

> + if (ret)
> + return ret;
> + }

> }
>
> return ret;
> @@ -417,7 +551,7 @@ static void amd_pstate_boost_init(struct amd_cpudata *cpudata)
> return;
>
> cpudata->boost_supported = true;
> - amd_pstate_driver.boost_enabled = true;
> + default_pstate_driver->boost_enabled = true;
> }
>
> static void amd_perf_ctl_reset(unsigned int cpu)
> @@ -591,10 +725,61 @@ static ssize_t show_amd_pstate_highest_perf(struct cpufreq_policy *policy,
> return sprintf(&buf[0], "%u\n", perf);
> }
>
> +static ssize_t show_energy_performance_available_preferences(
> + struct cpufreq_policy *policy, char *buf)
> +{
> + int i = 0;
> + int ret = 0;
> +
> + while (energy_perf_strings[i] != NULL)
> + ret += sprintf(&buf[ret], "%s ", energy_perf_strings[i++]);
> +
> + ret += sysfs_emit(&buf[ret], "\n");
> +
> + return ret;
> +}
> +
> +static ssize_t store_energy_performance_preference(
> + struct cpufreq_policy *policy, const char *buf, size_t count)
> +{
> + struct amd_cpudata *cpudata = policy->driver_data;
> + char str_preference[21];
> + ssize_t ret;
> +
> + ret = sscanf(buf, "%20s", str_preference);
> + if (ret != 1)
> + return -EINVAL;
> +
> + ret = match_string(energy_perf_strings, -1, str_preference);
> + if (ret < 0)
> + return -EINVAL;
> +
> + mutex_lock(&amd_pstate_limits_lock);
> + ret = amd_pstate_set_energy_pref_index(cpudata, ret);
> + mutex_unlock(&amd_pstate_limits_lock);
> +
> + return ret ?: count;
> +}
> +
> +static ssize_t show_energy_performance_preference(
> + struct cpufreq_policy *policy, char *buf)
> +{
> + struct amd_cpudata *cpudata = policy->driver_data;
> + int preference;
> +
> + preference = amd_pstate_get_energy_pref_index(cpudata);
> + if (preference < 0)
> + return preference;
> +
> + return sysfs_emit(buf, "%s\n", energy_perf_strings[preference]);
> +}
> +
> cpufreq_freq_attr_ro(amd_pstate_max_freq);
> cpufreq_freq_attr_ro(amd_pstate_lowest_nonlinear_freq);
>
> cpufreq_freq_attr_ro(amd_pstate_highest_perf);
> +cpufreq_freq_attr_rw(energy_performance_preference);
> +cpufreq_freq_attr_ro(energy_performance_available_preferences);
>
> static struct freq_attr *amd_pstate_attr[] = {
> &amd_pstate_max_freq,
> @@ -603,6 +788,415 @@ static struct freq_attr *amd_pstate_attr[] = {
> NULL,
> };
>
> +static struct freq_attr *amd_pstate_epp_attr[] = {
> + &amd_pstate_max_freq,
> + &amd_pstate_lowest_nonlinear_freq,
> + &amd_pstate_highest_perf,
> + &energy_performance_preference,
> + &energy_performance_available_preferences,
> + NULL,
> +};
> +
> +static inline void update_boost_state(void)
> +{
> + u64 misc_en;
> + struct amd_cpudata *cpudata;
> +
> + cpudata = all_cpu_data[0];
> + rdmsrl(MSR_K7_HWCR, misc_en);
> + global_params.cppc_boost_disabled = misc_en & BIT_ULL(25);
> +}
> +
> +static int amd_pstate_init_cpu(unsigned int cpunum)
> +{
> + struct amd_cpudata *cpudata;
> +
> + cpudata = all_cpu_data[cpunum];
> + if (!cpudata) {
> + cpudata = kzalloc(sizeof(*cpudata), GFP_KERNEL);
> + if (!cpudata)
> + return -ENOMEM;
> + WRITE_ONCE(all_cpu_data[cpunum], cpudata);
> +
> + cpudata->cpu = cpunum;
> + }
> + cpudata->epp_powersave = -EINVAL;
> + cpudata->epp_policy = 0;
> + pr_debug("controlling: cpu %d\n", cpunum);
> + return 0;
> +}
> +
> +static int __amd_pstate_cpu_init(struct cpufreq_policy *policy)
> +{
> + int min_freq, max_freq, nominal_freq, lowest_nonlinear_freq, ret;
> + struct amd_cpudata *cpudata;
> + struct device *dev;
> + int rc;
> + u64 value;
> +
> + rc = amd_pstate_init_cpu(policy->cpu);
> + if (rc)
> + return rc;
> +
> + cpudata = all_cpu_data[policy->cpu];
> +
> + dev = get_cpu_device(policy->cpu);
> + if (!dev)
> + goto free_cpudata1;
> +
> + rc = amd_pstate_init_perf(cpudata);
> + if (rc)
> + goto free_cpudata1;
> +
> + min_freq = amd_get_min_freq(cpudata);
> + max_freq = amd_get_max_freq(cpudata);
> + nominal_freq = amd_get_nominal_freq(cpudata);
> + lowest_nonlinear_freq = amd_get_lowest_nonlinear_freq(cpudata);
> + if (min_freq < 0 || max_freq < 0 || min_freq > max_freq) {
> + dev_err(dev, "min_freq(%d) or max_freq(%d) value is incorrect\n",
> + min_freq, max_freq);
> + ret = -EINVAL;
> + goto free_cpudata1;
> + }
> +
> + policy->min = min_freq;
> + policy->max = max_freq;
> +
> + policy->cpuinfo.min_freq = min_freq;
> + policy->cpuinfo.max_freq = max_freq;
> + /* It will be updated by governor */
> + policy->cur = policy->cpuinfo.min_freq;
> +
> + /* Initial processor data capability frequencies */
> + cpudata->max_freq = max_freq;
> + cpudata->min_freq = min_freq;
> + cpudata->nominal_freq = nominal_freq;
> + cpudata->lowest_nonlinear_freq = lowest_nonlinear_freq;
> +
> + policy->driver_data = cpudata;
> +
> + update_boost_state();
> + cpudata->epp_cached = amd_pstate_get_epp(cpudata, value);
> +
> + policy->min = policy->cpuinfo.min_freq;
> + policy->max = policy->cpuinfo.max_freq;
> +
> + if (boot_cpu_has(X86_FEATURE_CPPC))
> + policy->fast_switch_possible = true;
> +
> + if (!shared_mem && boot_cpu_has(X86_FEATURE_CPPC)) {

Here `shared_mem` is used but it is always 0, as it is not overwritten anywhere.
Do we need the `shared_mem` variable at all? Can we use only X86_FEATURE_CPPC here?

> + ret = rdmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, &value);
> + if (ret)
> + return ret;
> + WRITE_ONCE(cpudata->cppc_req_cached, value);
> +
> + ret = rdmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_CAP1, &value);
> + if (ret)
> + return ret;
> + WRITE_ONCE(cpudata->cppc_cap1_cached, value);
> + }
> + amd_pstate_boost_init(cpudata);
> +
> + return 0;
> +
> +free_cpudata1:
> + kfree(cpudata);
> + return ret;
> +}
> +
> +static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy)
> +{
> + int ret;
> +
> + ret = __amd_pstate_cpu_init(policy);
> + if (ret)
> + return ret;
> + /*
> + * Set the policy to powersave to provide a valid fallback value in case
> + * the default cpufreq governor is neither powersave nor performance.
> + */
> + policy->policy = CPUFREQ_POLICY_POWERSAVE;
> +
> + return 0;
> +}
> +
> +static int amd_pstate_epp_cpu_exit(struct cpufreq_policy *policy)
> +{
> + pr_debug("CPU %d exiting\n", policy->cpu);
> + policy->fast_switch_possible = false;
> + return 0;
> +}
> +
> +static void amd_pstate_update_max_freq(unsigned int cpu)
> +{
> + struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
> +
> + if (!policy)
> + return;
> +
> + refresh_frequency_limits(policy);
> + cpufreq_cpu_put(policy);
> +}
> +
> +static void amd_pstate_epp_update_limits(unsigned int cpu)
> +{
> + mutex_lock(&amd_pstate_driver_lock);
> + update_boost_state();
> + if (global_params.cppc_boost_disabled) {
> + for_each_possible_cpu(cpu)
> + amd_pstate_update_max_freq(cpu);
> + } else {
> + cpufreq_update_policy(cpu);
> + }
> + mutex_unlock(&amd_pstate_driver_lock);
> +}
> +
> +static int cppc_boost_hold_time_ns = 3 * NSEC_PER_MSEC;
> +
> +static inline void amd_pstate_boost_up(struct amd_cpudata *cpudata)
> +{
> + u64 hwp_req = READ_ONCE(cpudata->cppc_req_cached);
> + u64 hwp_cap = READ_ONCE(cpudata->cppc_cap1_cached);
> + u32 max_limit = (hwp_req & 0xff);
> + u32 min_limit = (hwp_req & 0xff00) >> 8;
> + u32 boost_level1;
> +
> + /* If max and min are equal or already at max, nothing to boost */
> + if (max_limit == min_limit)
> + return;
> +
> + /* Set boost max and min to initial value */
> + if (!cpudata->cppc_boost_min)
> + cpudata->cppc_boost_min = min_limit;
> +
> + boost_level1 = ((AMD_CPPC_NOMINAL_PERF(hwp_cap) + min_limit) >> 1);
> +
> + if (cpudata->cppc_boost_min < boost_level1)
> + cpudata->cppc_boost_min = boost_level1;
> + else if (cpudata->cppc_boost_min < AMD_CPPC_NOMINAL_PERF(hwp_cap))
> + cpudata->cppc_boost_min = AMD_CPPC_NOMINAL_PERF(hwp_cap);
> + else if (cpudata->cppc_boost_min == AMD_CPPC_NOMINAL_PERF(hwp_cap))
> + cpudata->cppc_boost_min = max_limit;
> + else
> + return;
> +
> + hwp_req &= ~AMD_CPPC_MIN_PERF(~0L);
> + hwp_req |= AMD_CPPC_MIN_PERF(cpudata->cppc_boost_min);
> + wrmsrl_safe_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, hwp_req);
> + cpudata->last_update = cpudata->sample.time;
> +}
> +
> +static inline void amd_pstate_boost_down(struct amd_cpudata *cpudata)
> +{
> + bool expired;
> +
> + if (cpudata->cppc_boost_min) {
> + expired = time_after64(cpudata->sample.time, cpudata->last_update +
> + cppc_boost_hold_time_ns);
> +
> + if (expired) {
> + wrmsrl_safe_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ,
> + cpudata->cppc_req_cached);
> + cpudata->cppc_boost_min = 0;
> + }
> + }
> +
> + cpudata->last_update = cpudata->sample.time;
> +}
> +
> +static inline void amd_pstate_boost_update_util(struct amd_cpudata *cpudata,
> + u64 time)
> +{
> + cpudata->sample.time = time;
> + if (smp_processor_id() != cpudata->cpu)
> + return;
> +
> + if (cpudata->sched_flags & SCHED_CPUFREQ_IOWAIT) {
> + bool do_io = false;
> +
> + cpudata->sched_flags = 0;
> + /*
> + * Set iowait_boost flag and update time. Since the IO WAIT flag
> + * is set all the time, we can't just conclude that some IO-bound
> + * activity is scheduled on this CPU from just one occurrence. If
> + * we receive at least two in two consecutive ticks, then we treat
> + * it as a boost candidate.
> + * This is leveraged from the Intel Pstate driver.
> + */
> + if (time_before64(time, cpudata->last_io_update + 2 * TICK_NSEC))
> + do_io = true;
> +
> + cpudata->last_io_update = time;
> +
> + if (do_io)
> + amd_pstate_boost_up(cpudata);
> +
> + } else {
> + amd_pstate_boost_down(cpudata);
> + }
> +}
> +
> +static inline void amd_pstate_cppc_update_hook(struct update_util_data *data,
> + u64 time, unsigned int flags)
> +{
> + struct amd_cpudata *cpudata = container_of(data,
> + struct amd_cpudata, update_util);
> +
> + cpudata->sched_flags |= flags;
> +
> + if (smp_processor_id() == cpudata->cpu)
> + amd_pstate_boost_update_util(cpudata, time);
> +}
> +
> +static void amd_pstate_clear_update_util_hook(unsigned int cpu)
> +{
> + struct amd_cpudata *cpudata = all_cpu_data[cpu];
> +
> + if (!cpudata->update_util_set)
> + return;
> +
> + cpufreq_remove_update_util_hook(cpu);
> + cpudata->update_util_set = false;
> + synchronize_rcu();
> +}
> +
> +static void amd_pstate_set_update_util_hook(unsigned int cpu_num)
> +{
> + struct amd_cpudata *cpudata = all_cpu_data[cpu_num];
> +
> + if (!cppc_boost) {
> + if (cpudata->update_util_set)
> + amd_pstate_clear_update_util_hook(cpudata->cpu);
> + return;
> + }
> +
> + if (cpudata->update_util_set)
> + return;
> +
> + cpudata->sample.time = 0;
> + cpufreq_add_update_util_hook(cpu_num, &cpudata->update_util,
> + amd_pstate_cppc_update_hook);
> + cpudata->update_util_set = true;
> +}
> +
> +static void amd_pstate_epp_init(unsigned int cpu)
> +{
> + struct amd_cpudata *cpudata = all_cpu_data[cpu];
> + u32 max_perf, min_perf;
> + u64 value;
> + s16 epp;
> + int ret;
> +
> + max_perf = READ_ONCE(cpudata->highest_perf);
> + min_perf = READ_ONCE(cpudata->lowest_perf);
> +
> + value = READ_ONCE(cpudata->cppc_req_cached);
> +
> + if (cpudata->policy == CPUFREQ_POLICY_PERFORMANCE)
> + min_perf = max_perf;
> +
> + /* Initial min/max values for CPPC Performance Controls Register */
> + value &= ~AMD_CPPC_MIN_PERF(~0L);
> + value |= AMD_CPPC_MIN_PERF(min_perf);
> +
> + value &= ~AMD_CPPC_MAX_PERF(~0L);
> + value |= AMD_CPPC_MAX_PERF(max_perf);
> +
> + /* The CPPC EPP feature requires the desired perf field to be set to zero */
> + value &= ~AMD_CPPC_DES_PERF(~0L);
> + value |= AMD_CPPC_DES_PERF(0);
> +
> + if (cpudata->epp_policy == cpudata->policy)
> + goto skip_epp;
> +
> + cpudata->epp_policy = cpudata->policy;
> +
> + if (cpudata->policy == CPUFREQ_POLICY_PERFORMANCE) {
> + epp = amd_pstate_get_epp(cpudata, value);
> + cpudata->epp_powersave = epp;
> + if (epp < 0)
> + goto skip_epp;
> + /* force the epp value to be zero for performance policy */
> + epp = 0;
> + } else {
> + if (cpudata->epp_powersave < 0)
> + goto skip_epp;
> + /* Get BIOS pre-defined epp value */
> + epp = amd_pstate_get_epp(cpudata, value);
> + if (epp)
> + goto skip_epp;
> + epp = cpudata->epp_powersave;
> + }
> + /* Set initial EPP value */
> + if (boot_cpu_has(X86_FEATURE_CPPC)) {
> + value &= ~GENMASK_ULL(31, 24);
> + value |= (u64)epp << 24;
> + }
> +
> +skip_epp:
> + WRITE_ONCE(cpudata->cppc_req_cached, value);
> + ret = wrmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, value);
> + if (!ret)
> + cpudata->epp_cached = epp;
> +}
> +
> +static void amd_pstate_set_max_limits(struct amd_cpudata *cpudata)
> +{
> + u64 hwp_cap = READ_ONCE(cpudata->cppc_cap1_cached);
> + u64 hwp_req = READ_ONCE(cpudata->cppc_req_cached);
> + u32 max_limit = (hwp_cap >> 24) & 0xff;
> +
> + hwp_req &= ~AMD_CPPC_MIN_PERF(~0L);
> + hwp_req |= AMD_CPPC_MIN_PERF(max_limit);
> + wrmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, hwp_req);
> +}
> +
> +static int amd_pstate_epp_set_policy(struct cpufreq_policy *policy)
> +{
> + struct amd_cpudata *cpudata;
> +
> + if (!policy->cpuinfo.max_freq)
> + return -ENODEV;
> +
> + pr_debug("set_policy: cpuinfo.max %u policy->max %u\n",
> + policy->cpuinfo.max_freq, policy->max);
> +
> + cpudata = all_cpu_data[policy->cpu];
> + cpudata->policy = policy->policy;
> +
> + if (boot_cpu_has(X86_FEATURE_CPPC)) {
> + mutex_lock(&amd_pstate_limits_lock);
> +
> + if (cpudata->policy == CPUFREQ_POLICY_PERFORMANCE) {
> + amd_pstate_clear_update_util_hook(policy->cpu);
> + amd_pstate_set_max_limits(cpudata);
> + } else {
> + amd_pstate_set_update_util_hook(policy->cpu);
> + }
> +
> + if (boot_cpu_has(X86_FEATURE_CPPC))
> + amd_pstate_epp_init(policy->cpu);
> +
> + mutex_unlock(&amd_pstate_limits_lock);
> + }
> +
> + return 0;
> +}
> +
> +static void amd_pstate_verify_cpu_policy(struct amd_cpudata *cpudata,
> + struct cpufreq_policy_data *policy)
> +{
> + update_boost_state();
> + cpufreq_verify_within_cpu_limits(policy);
> +}
> +
> +static int amd_pstate_epp_verify_policy(struct cpufreq_policy_data *policy)
> +{
> + amd_pstate_verify_cpu_policy(all_cpu_data[policy->cpu], policy);
> + pr_debug("policy_max =%d, policy_min=%d\n", policy->max, policy->min);
> + return 0;
> +}
> +
> static struct cpufreq_driver amd_pstate_driver = {
> .flags = CPUFREQ_CONST_LOOPS | CPUFREQ_NEED_UPDATE_LIMITS,
> .verify = amd_pstate_verify,
> @@ -616,8 +1210,20 @@ static struct cpufreq_driver amd_pstate_driver = {
> .attr = amd_pstate_attr,
> };
>
> +static struct cpufreq_driver amd_pstate_epp_driver = {
> + .flags = CPUFREQ_CONST_LOOPS,
> + .verify = amd_pstate_epp_verify_policy,
> + .setpolicy = amd_pstate_epp_set_policy,
> + .init = amd_pstate_epp_cpu_init,
> + .exit = amd_pstate_epp_cpu_exit,
> + .update_limits = amd_pstate_epp_update_limits,
> + .name = "amd_pstate_epp",
> + .attr = amd_pstate_epp_attr,
> +};
> +
> static int __init amd_pstate_init(void)
> {
> + static struct amd_cpudata **cpudata;
> int ret;
>
> if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD)
> @@ -641,10 +1247,17 @@ static int __init amd_pstate_init(void)
> if (cpufreq_get_current_driver())
> return -EEXIST;
>
> + if (!epp_off) {
> + WRITE_ONCE(cppc_active, 1);
> + if (!default_pstate_driver)
> + default_pstate_driver = &amd_pstate_epp_driver;
> + }
> +
> /* capability check */
> if (boot_cpu_has(X86_FEATURE_CPPC)) {
> pr_debug("AMD CPPC MSR based functionality is supported\n");
> - amd_pstate_driver.adjust_perf = amd_pstate_adjust_perf;
> + if (!cppc_active)
> + default_pstate_driver->adjust_perf = amd_pstate_adjust_perf;
> } else {
> pr_debug("AMD CPPC shared memory based functionality is supported\n");
> static_call_update(amd_pstate_enable, cppc_enable);
> @@ -652,6 +1265,10 @@ static int __init amd_pstate_init(void)
> static_call_update(amd_pstate_update_perf, cppc_update_perf);
> }
>
> + cpudata = vzalloc(array_size(sizeof(void *), num_possible_cpus()));
> + if (!cpudata)
> + return -ENOMEM;
> + WRITE_ONCE(all_cpu_data, cpudata);
> /* enable amd pstate feature */
> ret = amd_pstate_enable(true);
> if (ret) {
> @@ -659,9 +1276,9 @@ static int __init amd_pstate_init(void)
> return ret;
> }
>
> - ret = cpufreq_register_driver(&amd_pstate_driver);
> + ret = cpufreq_register_driver(default_pstate_driver);
> if (ret)
> - pr_err("failed to register amd_pstate_driver with return %d\n",
> + pr_err("failed to register amd pstate driver with return %d\n",
> ret);
>
> return ret;
> @@ -676,8 +1293,14 @@ static int __init amd_pstate_param(char *str)
> if (!strcmp(str, "disable")) {
> cppc_load = 0;
> pr_info("driver is explicitly disabled\n");
> - } else if (!strcmp(str, "passive"))
> + } else if (!strcmp(str, "passive")) {
> + epp_off = 1;
> cppc_load = 1;
> + default_pstate_driver = &amd_pstate_driver;
> + } else if (!strcmp(str, "active")) {
> + cppc_load = 1;
> + default_pstate_driver = &amd_pstate_epp_driver;
> + }
>
> return 0;
> }
> diff --git a/include/linux/amd-pstate.h b/include/linux/amd-pstate.h
> index 1c4b8659f171..7e6e8cab97b3 100644
> --- a/include/linux/amd-pstate.h
> +++ b/include/linux/amd-pstate.h
> @@ -25,6 +25,7 @@ struct amd_aperf_mperf {
> u64 aperf;
> u64 mperf;
> u64 tsc;
> + u64 time;
> };
>
> /**
> @@ -47,6 +48,18 @@ struct amd_aperf_mperf {
> * @prev: Last Aperf/Mperf/tsc count value read from register
> * @freq: current cpu frequency value
> * @boost_supported: check whether the Processor or SBIOS supports boost mode
> + * @epp_powersave: Last saved CPPC energy performance preference
> + *                 when the policy is switched to performance
> + * @epp_policy: Last saved policy used to set energy-performance preference
> + * @epp_cached: Cached CPPC energy-performance preference value
> + * @policy: Cpufreq policy value
> + * @sched_flags: Store scheduler flags for possible cross CPU update
> + * @update_util_set: CPUFreq utility callback is set
> + * @last_update: Time stamp of the last performance state update
> + * @cppc_boost_min: Last CPPC boosted min performance state
> + * @cppc_cap1_cached: Cached value of the last CPPC Capabilities MSR
> + * @update_util: Cpufreq utility callback information
> + * @sample: the stored performance sample
> *
> * The amd_cpudata is key private data for each CPU thread in AMD P-State, and
> * represents all the attributes and goals that AMD P-State requests at runtime.
> @@ -72,6 +85,74 @@ struct amd_cpudata {
>
> u64 freq;
> bool boost_supported;
> +
> + /* EPP feature related attributes */
> + s16 epp_powersave;
> + s16 epp_policy;
> + s16 epp_cached;
> + u32 policy;
> + u32 sched_flags;
> + bool update_util_set;
> + u64 last_update;
> + u64 last_io_update;
> + u32 cppc_boost_min;
> + u64 cppc_cap1_cached;
> + struct update_util_data update_util;
> + struct amd_aperf_mperf sample;
> +};
> +
> +/**
> + * struct amd_pstate_params - global parameters for the performance control
> + * @cppc_boost_disabled: whether core performance boost is disabled
> + */
> +struct amd_pstate_params {
> + bool cppc_boost_disabled;
> +};
> +
> +#define AMD_CPPC_EPP_PERFORMANCE 0x00
> +#define AMD_CPPC_EPP_BALANCE_PERFORMANCE 0x80
> +#define AMD_CPPC_EPP_BALANCE_POWERSAVE 0xBF
> +#define AMD_CPPC_EPP_POWERSAVE 0xFF
> +
> +/*
> + * AMD Energy Preference Performance (EPP)
> + * The EPP is used in the CCLK DPM controller to drive
> + * the frequency that a core is going to operate during
> + * short periods of activity. EPP values will be utilized for
> + * different OS profiles (balanced, performance, power savings)
> + * display strings corresponding to EPP index in the
> + * energy_perf_strings[]
> + * index String
> + *-------------------------------------
> + * 0 default
> + * 1 performance
> + * 2 balance_performance
> + * 3 balance_power
> + * 4 power
> + */
> +enum energy_perf_value_index {
> + EPP_INDEX_DEFAULT = 0,
> + EPP_INDEX_PERFORMANCE,
> + EPP_INDEX_BALANCE_PERFORMANCE,
> + EPP_INDEX_BALANCE_POWERSAVE,
> + EPP_INDEX_POWERSAVE,
> +};
> +
> +static const char * const energy_perf_strings[] = {
> + [EPP_INDEX_DEFAULT] = "default",
> + [EPP_INDEX_PERFORMANCE] = "performance",
> + [EPP_INDEX_BALANCE_PERFORMANCE] = "balance_performance",
> + [EPP_INDEX_BALANCE_POWERSAVE] = "balance_power",
> + [EPP_INDEX_POWERSAVE] = "power",
> + NULL
> +};
> +
> +static unsigned int epp_values[] = {
> + [EPP_INDEX_DEFAULT] = 0,
> + [EPP_INDEX_PERFORMANCE] = AMD_CPPC_EPP_PERFORMANCE,
> + [EPP_INDEX_BALANCE_PERFORMANCE] = AMD_CPPC_EPP_BALANCE_PERFORMANCE,
> + [EPP_INDEX_BALANCE_POWERSAVE] = AMD_CPPC_EPP_BALANCE_POWERSAVE,
> + [EPP_INDEX_POWERSAVE] = AMD_CPPC_EPP_POWERSAVE,
> };
>
> #endif /* _LINUX_AMD_PSTATE_H */
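
As an aside on the two tables above: a small illustrative sketch (not part
of the patch) of how they cooperate, mirroring what
amd_pstate_set_energy_pref_index() does with a string written to the
energy_performance_preference sysfs file:

	/* hypothetical example: map a sysfs string to a raw EPP value */
	int idx = match_string(energy_perf_strings, -1, "balance_power");
	/* idx == EPP_INDEX_BALANCE_POWERSAVE, since the array is NULL-terminated */
	unsigned int raw_epp = epp_values[idx];	/* 0xBF */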

Thanks & Regards,
Wyes

2022-11-29 19:46:28

by Tor Vic

[permalink] [raw]
Subject: Re: [PATCH v5 0/9] Implement AMD Pstate EPP Driver


On 28.11.22 17:03, Perry Yuan wrote:
> Hi all,
>
> This patchset implements one new AMD CPU frequency driver
> "amd-pstate-epp” instance for better performance and power control.
> CPPC has a parameter called energy preference performance (EPP).
> The EPP is used in the CCLK DPM controller to drive the frequency that a core
> is going to operate during short periods of activity.
> EPP values will be utilized for different OS profiles (balanced, performance, power savings).

Hi,

what is the correct way to use this?
On my Zen2 machine, I passed "amd_pstate=active" to my cmdline, which
seems to work correctly according to the following cpupower output:

analyzing CPU 0:
driver: amd_pstate_epp
CPUs which run at the same hardware frequency: 0
CPUs which need to have their frequency coordinated by software: 0
maximum transition latency: Cannot determine or is not supported.
hardware limits: 550 MHz - 4.67 GHz
available cpufreq governors: performance powersave
current policy: frequency should be within 550 MHz and 4.67 GHz.
The governor "powersave" may decide which speed to use
within this range.
current CPU frequency: Unable to call hardware
current CPU frequency: 550 MHz (asserted by call to kernel)
boost state support:
Supported: yes
Active: no

(above was in idle state - under full load I get the expected 4.1-4.2 GHz)

But is it correct that it uses the "powersave" governor?
If I try to change the governor with

sudo $PATHTOEXEC/cpupower frequency-set -g schedutil

I will get an error (ondemand and schedutil are built-in):

Setting cpu: 0
Error setting new values. Common errors:
- Do you have proper administration rights? (super-user?)
- Is the governor you requested available and modprobed?
- Trying to set an invalid policy?
- Trying to set a specific frequency, but userspace governor is not
available,
for example because of hardware which cannot be set to a specific
frequency
or because the userspace governor isn't loaded?

Maybe this should be clarified in more detail.

Cheers,
Tor Vic

>
> AMD Energy Performance Preference (EPP) provides a hint to the hardware
> if software wants to bias toward performance (0x0) or energy efficiency (0xff)
> The lowlevel power firmware will calculate the runtime frequency according to the EPP preference
> value. So the EPP hint will impact the CPU cores frequency responsiveness.
>
> We use the RAPL interface with "perf" tool to get the energy data of the package power.
> Performance Per Watt (PPW) Calculation:
>
> The PPW calculation is referred by below paper:
> https://software.intel.com/content/dam/develop/external/us/en/documents/performance-per-what-paper.pdf
>
> Below formula is referred from below spec to measure the PPW:
>
> (F / t) / P = F * t / (t * E) = F / E,
>
> "F" is the number of frames per second.
> "P" is power measured in watts.
> "E" is energy measured in joules.
>
> Gitsource Benchmark Data on ROME Server CPU
> +------------------------------+------------------------------+------------+------------------+
> | Kernel Module | PPW (1 / s * J) |Energy(J) | PPW Improvement (%)|
> +==============================+==============================+============+==================+
> | acpi-cpufreq:schedutil | 5.85658E-05 | 17074.8 | base |
> +------------------------------+------------------------------+------------+------------------+
> | acpi-cpufreq:ondemand | 5.03079E-05 | 19877.6 | -14.10% |
> +------------------------------+------------------------------+------------+------------------+
> | acpi-cpufreq:performance | 5.88132E-05 | 17003 | 0.42% |
> +------------------------------+------------------------------+------------+------------------+
> | amd-pstate:ondemand | 4.60295E-05 | 21725.2 | -21.41% |
> +------------------------------+------------------------------+------------+------------------+
> | amd-pstate:schedutil | 4.70026E-05 | 21275.4 | -19.7% |
> +------------------------------+------------------------------+------------+------------------+
> | amd-pstate:performance | 5.80094E-05 | 17238.6 | -0.95% |
> +------------------------------+------------------------------+------------+------------------+
> | EPP:performance | 5.8292E-05 | 17155 | -0.47% |
> +------------------------------+------------------------------+------------+------------------+
> | EPP: balance performance: | 6.71709E-05 | 14887.4 | 14.69% |
> +------------------------------+------------------------------+------------+------------------+
> | EPP:power | 6.66951E-05 | 4993.6 | 13.88% |
> +------------------------------+------------------------------+------------+------------------+
>
> Tbench Benchmark Data on ROME Server CPU
> +---------------------------------------------+-------------------+--------------+-------------+------------------+
> | Kernel Module | PPW MB / (s * J) |Throughput(MB/s)| Energy (J)|PPW Improvement(%)|
> +=============================================+===================+==============+=============+==================+
> | acpi_cpufreq: schedutil | 46.39 | 17191 | 37057.3 | base |
> +---------------------------------------------+-------------------+--------------+-------------+------------------+
> | acpi_cpufreq: ondemand | 51.51 | 19269.5 | 37406.5 | 11.04 % |
> +---------------------------------------------+-------------------+--------------+-------------+------------------+
> | acpi_cpufreq: performance | 45.96 | 17063.7 | 37123.7 | -0.74 % |
> +---------------------------------------------+-------------------+--------------+-------------+------------------+
> | EPP:powersave: performance(0) | 54.46 | 20263.1 | 37205 | 17.87 % |
> +---------------------------------------------+-------------------+--------------+-------------+------------------+
> | EPP:powersave: balance performance | 55.03 | 20481.9 | 37221.5 | 19.14 % |
> +---------------------------------------------+-------------------+--------------+-------------+------------------+
> | EPP:powersave: balance_power | 54.43 | 20245.9 | 37194.2 | 17.77 % |
> +---------------------------------------------+-------------------+--------------+-------------+------------------+
> | EPP:powersave: power(255) | 54.26 | 20181.7 | 37197.4 | 17.40 % |
> +---------------------------------------------+-------------------+--------------+-------------+------------------+
> | amd-pstate: schedutil | 48.22 | 17844.9 | 37006.6 | 3.80 % |
> +---------------------------------------------+-------------------+--------------+-------------+------------------+
> | amd-pstate: ondemand | 61.30 | 22988 | 37503.4 | 33.72 % |
> +---------------------------------------------+-------------------+--------------+-------------+------------------+
> | amd-pstate: performance | 54.52 | 20252.6 | 37147.8 | 17.81 % |
> +---------------------------------------------+-------------------+--------------+-------------+------------------+
>
> changes from v4:
> * rebase driver based on the v6.1-rc7
> * remove the builtin changes patch because pstate driver has been
> changed to builtin type by another thread patch
> * update Documentation: amd-pstate: add amd pstate driver mode introduction
> * replace sprintf with sysfs_emit() instead.
> * fix typo for cppc_set_epp_perf() in cppc_acpi.h header
>
> changes from v3:
> * add one more document update patch for the active and passive mode
> introduction.
> * drive most of the feedbacks from Mario
> * drive feedback from Rafael for the cppc_acpi driver.
> * remove the epp raw data set/get function
> * set the amd-pstate drive by passing kernel parameter
> * set amd-pstate driver disabled by default if no kernel parameter
> input from booting
> * get cppc_set_auto_epp and cppc_set_epp_perf combined
> * pick up reviewed by flag from Mario
>
> changes from v2:
> * change pstate driver as builtin type from module
> * drop patch "export cpufreq cpu release and acquire"
> * squash patch of shared mem into single patch of epp implementation
> * add one new patch to support frequency boost control
> * add patch to expose driver working status checking
> * rebase driver into v6.1-rc4 kernel release
> * move some declaration to amd-pstate.h
> * drive feedback from Mario for the online/offline patch
> * drive feedback from Mario for the suspend/resume patch
> * drive feedback from Ray for the cppc_acpi and some other patches
> * drive feedback from Nathan for the epp patch
>
> changes from v1:
> * rebased to v6.0
> * drive feedbacks from Mario for the suspend/resume patch
> * drive feedbacks from Nathan for the EPP support on msr type
> * fix some typos and code style indent problems
> * update commit comments for patch 4/7
> * change the `epp_enabled` module param name to `epp`
> * set the default epp mode to be false
> * add testing for the x86_energy_perf_policy utility patchset (will
> send that utility patchset with another thread)
>
> v4: https://lore.kernel.org/lkml/[email protected]/
> v3: https://lore.kernel.org/all/[email protected]/
> v2: https://lore.kernel.org/all/[email protected]/
> v1: https://lore.kernel.org/all/[email protected]/
>
> Perry Yuan (9):
> ACPI: CPPC: Add AMD pstate energy performance preference cppc control
> Documentation: amd-pstate: add EPP profiles introduction
> cpufreq: amd_pstate: implement Pstate EPP support for the AMD
> processors
> cpufreq: amd_pstate: implement amd pstate cpu online and offline
> callback
> cpufreq: amd-pstate: implement suspend and resume callbacks
> cpufreq: amd-pstate: add frequency dynamic boost sysfs control
> cpufreq: amd_pstate: add driver working mode status sysfs entry
> Documentation: amd-pstate: add amd pstate driver mode introduction
> Documentation: introduce amd pstate active mode kernel command line
> options
>
> .../admin-guide/kernel-parameters.txt | 7 +
> Documentation/admin-guide/pm/amd-pstate.rst | 45 +-
> drivers/acpi/cppc_acpi.c | 114 ++-
> drivers/cpufreq/amd-pstate.c | 872 +++++++++++++++++-
> include/acpi/cppc_acpi.h | 12 +
> include/linux/amd-pstate.h | 82 ++
> 6 files changed, 1118 insertions(+), 14 deletions(-)
>

2022-11-30 01:29:47

by kernel test robot

[permalink] [raw]
Subject: Re: [PATCH v5 3/9] cpufreq: amd_pstate: implement Pstate EPP support for the AMD processors

Hi Perry,

I love your patch! Perhaps something to improve:

[auto build test WARNING on rafael-pm/linux-next]
[also build test WARNING on linus/master v6.1-rc7 next-20221129]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url: https://github.com/intel-lab-lkp/linux/commits/Perry-Yuan/Implement-AMD-Pstate-EPP-Driver/20221129-011556
base: https://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git linux-next
patch link: https://lore.kernel.org/r/20221128170314.2276636-4-perry.yuan%40amd.com
patch subject: [PATCH v5 3/9] cpufreq: amd_pstate: implement Pstate EPP support for the AMD processors
config: x86_64-allmodconfig
compiler: gcc-11 (Debian 11.3.0-8) 11.3.0
reproduce (this is a W=1 build):
# https://github.com/intel-lab-lkp/linux/commit/e353f533252a9e07649872be9e9012c4500d9133
git remote add linux-review https://github.com/intel-lab-lkp/linux
git fetch --no-tags linux-review Perry-Yuan/Implement-AMD-Pstate-EPP-Driver/20221129-011556
git checkout e353f533252a9e07649872be9e9012c4500d9133
# save the config file
mkdir build_dir && cp config build_dir/.config
make W=1 O=build_dir ARCH=x86_64 SHELL=/bin/bash drivers/cpufreq/

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <[email protected]>

All warnings (new ones prefixed by >>):

drivers/cpufreq/amd-pstate.c: In function 'update_boost_state':
>> drivers/cpufreq/amd-pstate.c:803:29: warning: variable 'cpudata' set but not used [-Wunused-but-set-variable]
803 | struct amd_cpudata *cpudata;
| ^~~~~~~


vim +/cpudata +803 drivers/cpufreq/amd-pstate.c

   799
   800	static inline void update_boost_state(void)
   801	{
   802		u64 misc_en;
 > 803		struct amd_cpudata *cpudata;
   804
   805		cpudata = all_cpu_data[0];
   806		rdmsrl(MSR_K7_HWCR, misc_en);
   807		global_params.cppc_boost_disabled = misc_en & BIT_ULL(25);
   808	}
   809
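
Since the local is written but never read, a minimal sketch of one way to
silence the warning (assuming no further use of cpudata is planned here) is
simply to drop the variable:

	static inline void update_boost_state(void)
	{
		u64 misc_en;

		rdmsrl(MSR_K7_HWCR, misc_en);
		global_params.cppc_boost_disabled = misc_en & BIT_ULL(25);
	}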

--
0-DAY CI Kernel Test Service
https://01.org/lkp



2022-12-02 08:26:06

by Yuan, Perry

[permalink] [raw]
Subject: RE: [PATCH v5 3/9] cpufreq: amd_pstate: implement Pstate EPP support for the AMD processors

[AMD Official Use Only - General]



> -----Original Message-----
> From: Karny, Wyes <[email protected]>
> Sent: Tuesday, November 29, 2022 8:49 PM
> To: Yuan, Perry <[email protected]>; [email protected];
> Limonciello, Mario <[email protected]>; Huang, Ray
> <[email protected]>; [email protected]
> Cc: Sharma, Deepak <[email protected]>; Fontenot, Nathan
> <[email protected]>; Deucher, Alexander
> <[email protected]>; Huang, Shimmer
> <[email protected]>; Du, Xiaojian <[email protected]>; Meng,
> Li (Jassmine) <[email protected]>; [email protected]; linux-
> [email protected]
> Subject: Re: [PATCH v5 3/9] cpufreq: amd_pstate: implement Pstate EPP
> support for the AMD processors
>
> Hi Perry,
>
> On 11/28/2022 10:33 PM, Perry Yuan wrote:
> > From: Perry Yuan <[email protected]>
> >
> > Add EPP driver support for AMD SoCs which support a dedicated MSR for
> > CPPC. EPP is used by the DPM controller to configure the frequency
> > that a core operates at during short periods of activity.
> >
> > The SoC EPP targets are configured on a scale from 0 to 255 where 0
> > represents maximum performance and 255 represents maximum
> efficiency.
> >
> > The amd-pstate driver exports profile string names to userspace that
> > are tied to specific EPP values.
> >
> > The balance_performance string (0x80) provides the best balance for
> > efficiency versus power on most systems, but users can choose other
> > strings to meet their needs as well.
> >
> > $ cat /sys/devices/system/cpu/cpufreq/policy0/energy_performance_available_preferences
> > default performance balance_performance balance_power power
> >
> > $ cat /sys/devices/system/cpu/cpufreq/policy0/energy_performance_preference
> > balance_performance
> >
> > Signed-off-by: Perry Yuan <[email protected]>
> > ---
> > drivers/cpufreq/amd-pstate.c | 635
> ++++++++++++++++++++++++++++++++++-
> > include/linux/amd-pstate.h | 81 +++++
> > 2 files changed, 710 insertions(+), 6 deletions(-)
> >
> > diff --git a/drivers/cpufreq/amd-pstate.c
> > b/drivers/cpufreq/amd-pstate.c index 204e39006dda..5a19b832afdf 100644
> > --- a/drivers/cpufreq/amd-pstate.c
> > +++ b/drivers/cpufreq/amd-pstate.c
> > @@ -59,8 +59,132 @@
> > * we disable it by default to go acpi-cpufreq on these processors and add
> a
> > * module parameter to be able to enable it manually for debugging.
> > */
> > -static struct cpufreq_driver amd_pstate_driver;
> > +static bool shared_mem __read_mostly;
>
> `shared_mem` is initialized to 0 here but it is not being overwritten anywhere.
>
> > +static int cppc_active __read_mostly;
> > static int cppc_load __initdata;
> > +static int epp_off __initdata;
>
> Not sure why we need two variables, `cppc_active` and `epp_off`, to control
> `active mode`.
> Is it because `epp_off` is marked as __initdata, and therefore reading
> `epp_off` in non-init functions shows zero?
> In that case, can we remove __initdata from `epp_off` and use this same
> variable elsewhere?

Good point.
I have removed the epp_off to avoid confusion in the V6 patch.


Perry
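
For context, a hypothetical sketch of that direction (not necessarily the
exact v6 code): drop epp_off and let the single cppc_active flag plus the
chosen driver carry the mode:

	} else if (!strcmp(str, "passive")) {
		cppc_load = 1;
		default_pstate_driver = &amd_pstate_driver;
	} else if (!strcmp(str, "active")) {
		cppc_load = 1;
		cppc_active = 1;	/* replaces the epp_off flag */
		default_pstate_driver = &amd_pstate_epp_driver;
	}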

>
> > +
> > +static struct cpufreq_driver *default_pstate_driver;
> > +static struct amd_cpudata **all_cpu_data;
> > +
> > +static struct amd_pstate_params global_params;
> > +
> > +static DEFINE_MUTEX(amd_pstate_limits_lock);
> > +static DEFINE_MUTEX(amd_pstate_driver_lock);
> > +
> > +static bool cppc_boost __read_mostly;
> > +struct kobject *amd_pstate_kobj;
> > +
> > +#ifdef CONFIG_ACPI_CPPC_LIB
> > +static s16 amd_pstate_get_epp(struct amd_cpudata *cpudata, u64 cppc_req_cached)
> > +{
> > + s16 epp;
> > + struct cppc_perf_caps perf_caps;
> > + int ret;
> > +
> > + if (boot_cpu_has(X86_FEATURE_CPPC)) {
> > + if (!cppc_req_cached) {
> > + epp = rdmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ,
> > + &cppc_req_cached);
> > + if (epp)
> > + return epp;
> > + }
> > + epp = (cppc_req_cached >> 24) & 0xFF;
> > + } else {
> > + ret = cppc_get_epp_caps(cpudata->cpu, &perf_caps);
> > + if (ret < 0) {
> > + pr_debug("Could not retrieve energy perf value (%d)\n", ret);
> > + return -EIO;
> > + }
> > + epp = (s16) perf_caps.energy_perf;
> > + }
> > +
> > + return epp;
> > +}
> > +#endif
> > +
> > +static int amd_pstate_get_energy_pref_index(struct amd_cpudata *cpudata)
> > +{
> > + s16 epp;
> > + int index = -EINVAL;
> > +
> > + epp = amd_pstate_get_epp(cpudata, 0);
> > + if (epp < 0)
> > + return epp;
> > +
> > + switch (epp) {
> > + case AMD_CPPC_EPP_PERFORMANCE:
> > + index = EPP_INDEX_PERFORMANCE;
> > + break;
> > + case AMD_CPPC_EPP_BALANCE_PERFORMANCE:
> > + index = EPP_INDEX_BALANCE_PERFORMANCE;
> > + break;
> > + case AMD_CPPC_EPP_BALANCE_POWERSAVE:
> > + index = EPP_INDEX_BALANCE_POWERSAVE;
> > + break;
> > + case AMD_CPPC_EPP_POWERSAVE:
> > + index = EPP_INDEX_POWERSAVE;
> > + break;
> > + default:
> > + break;
> > + }
> > +
> > + return index;
> > +}
> > +
> > +#ifdef CONFIG_ACPI_CPPC_LIB
> > +static int amd_pstate_set_epp(struct amd_cpudata *cpudata, u32 epp)
> > +{
> > + int ret;
> > + struct cppc_perf_ctrls perf_ctrls;
> > +
> > + if (boot_cpu_has(X86_FEATURE_CPPC)) {
> > + u64 value = READ_ONCE(cpudata->cppc_req_cached);
> > +
> > + value &= ~GENMASK_ULL(31, 24);
> > + value |= (u64)epp << 24;
>
> small nit, AMD_CPPC_ENERGY_PERF_PREF macro can be used here, instead
> of hardcoding values.
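
For reference, a sketch of the suggested form (assuming the
AMD_CPPC_ENERGY_PERF_PREF() field macro from asm/msr-index.h), equivalent
to the open-coded GENMASK_ULL(31, 24) update:

	/* clear then set the 8-bit EPP field via the mask macro */
	value &= ~AMD_CPPC_ENERGY_PERF_PREF(~0L);
	value |= AMD_CPPC_ENERGY_PERF_PREF(epp);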
>
> > + WRITE_ONCE(cpudata->cppc_req_cached, value);
> > +
> > + ret = wrmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, value);
> > + if (!ret)
> > + cpudata->epp_cached = epp;
> > + } else {
> > + perf_ctrls.energy_perf = epp;
> > + ret = cppc_set_epp_perf(cpudata->cpu, &perf_ctrls, 1);
> > + if (ret) {
> > + pr_debug("failed to set energy perf value (%d)\n",
> ret);
> > + return ret;
> > + }
> > + cpudata->epp_cached = epp;
> > + }
> > +
> > + return ret;
> > +}
> > +
> > +static int amd_pstate_set_energy_pref_index(struct amd_cpudata *cpudata,
> > + int pref_index)
> > +{
> > + int epp = -EINVAL;
> > + int ret;
> > +
> > + if (!pref_index) {
> > + pr_debug("EPP pref_index is invalid\n");
> > + return -EINVAL;
> > + }
> > +
> > + if (epp == -EINVAL)
> > + epp = epp_values[pref_index];
> > +
> > + if (epp > 0 && cpudata->policy == CPUFREQ_POLICY_PERFORMANCE) {
> > + pr_debug("EPP cannot be set under performance policy\n");
> > + return -EBUSY;
> > + }
> > +
> > + ret = amd_pstate_set_epp(cpudata, epp);
> > +
> > + return ret;
> > +}
> > +#endif
> >
> > static inline int pstate_enable(bool enable) { @@ -70,11 +194,21 @@
> > static inline int pstate_enable(bool enable) static int
> > cppc_enable(bool enable) {
> > int cpu, ret = 0;
> > + struct cppc_perf_ctrls perf_ctrls;
> >
> > for_each_present_cpu(cpu) {
> > ret = cppc_set_enable(cpu, enable);
> > if (ret)
> > return ret;
> > +
> > + /* Enable autonomous mode for EPP */
> > + if (!cppc_active) {
>
> It should be: if (cppc_active) { right?
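
A sketch of the branch as Wyes reads it, assuming cppc_active is meant to
flag EPP (active) mode so that the comment and the condition agree:

	/* Enable autonomous mode for EPP */
	if (cppc_active) {
		/* Set desired perf as zero to allow EPP firmware control */
		perf_ctrls.desired_perf = 0;
		ret = cppc_set_perf(cpu, &perf_ctrls);
	}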
>
> > + /* Set desired perf as zero to allow EPP firmware control */
> > + perf_ctrls.desired_perf = 0;
> > + ret = cppc_set_perf(cpu, &perf_ctrls);
>
> Don't we have to do same thing (setting des_perf to 0) in `pstate_enable`
> function?
> Because for active + MSR supported processors will invoke `pstate_enable`
> function.
>
> > + if (ret)
> > + return ret;
> > + }
>
> > }
> >
> > return ret;
> > @@ -417,7 +551,7 @@ static void amd_pstate_boost_init(struct amd_cpudata *cpudata)
> > return;
> >
> > cpudata->boost_supported = true;
> > - amd_pstate_driver.boost_enabled = true;
> > + default_pstate_driver->boost_enabled = true;
> > }
> >
> > static void amd_perf_ctl_reset(unsigned int cpu)
> > @@ -591,10 +725,61 @@ static ssize_t show_amd_pstate_highest_perf(struct cpufreq_policy *policy,
> > return sprintf(&buf[0], "%u\n", perf);
> > }
> >
> > +static ssize_t show_energy_performance_available_preferences(
> > + struct cpufreq_policy *policy, char *buf)
> > +{
> > + int i = 0;
> > + int ret = 0;
> > +
> > + while (energy_perf_strings[i] != NULL)
> > + ret += sprintf(&buf[ret], "%s ", energy_perf_strings[i++]);
> > +
> > + ret += sysfs_emit(&buf[ret], "\n");
> > +
> > + return ret;
> > +}
> > +
> > +static ssize_t store_energy_performance_preference(
> > + struct cpufreq_policy *policy, const char *buf, size_t count)
> > +{
> > + struct amd_cpudata *cpudata = policy->driver_data;
> > + char str_preference[21];
> > + ssize_t ret;
> > +
> > + ret = sscanf(buf, "%20s", str_preference);
> > + if (ret != 1)
> > + return -EINVAL;
> > +
> > + ret = match_string(energy_perf_strings, -1, str_preference);
> > + if (ret < 0)
> > + return -EINVAL;
> > +
> > + mutex_lock(&amd_pstate_limits_lock);
> > + ret = amd_pstate_set_energy_pref_index(cpudata, ret);
> > + mutex_unlock(&amd_pstate_limits_lock);
> > +
> > + return ret ?: count;
> > +}
> > +
> > +static ssize_t show_energy_performance_preference(
> > + struct cpufreq_policy *policy, char *buf)
> > +{
> > + struct amd_cpudata *cpudata = policy->driver_data;
> > + int preference;
> > +
> > + preference = amd_pstate_get_energy_pref_index(cpudata);
> > + if (preference < 0)
> > + return preference;
> > +
> > + return sysfs_emit(buf, "%s\n", energy_perf_strings[preference]);
> > +}
> > +
> > cpufreq_freq_attr_ro(amd_pstate_max_freq);
> > cpufreq_freq_attr_ro(amd_pstate_lowest_nonlinear_freq);
> >
> > cpufreq_freq_attr_ro(amd_pstate_highest_perf);
> > +cpufreq_freq_attr_rw(energy_performance_preference);
> > +cpufreq_freq_attr_ro(energy_performance_available_preferences);
> >
> > static struct freq_attr *amd_pstate_attr[] = {
> > &amd_pstate_max_freq,
> > @@ -603,6 +788,415 @@ static struct freq_attr *amd_pstate_attr[] = {
> > NULL,
> > };
> >
> > +static struct freq_attr *amd_pstate_epp_attr[] = {
> > + &amd_pstate_max_freq,
> > + &amd_pstate_lowest_nonlinear_freq,
> > + &amd_pstate_highest_perf,
> > + &energy_performance_preference,
> > + &energy_performance_available_preferences,
> > + NULL,
> > +};
> > +
> > +static inline void update_boost_state(void)
> > +{
> > + u64 misc_en;
> > + struct amd_cpudata *cpudata;
> > +
> > + cpudata = all_cpu_data[0];
> > + rdmsrl(MSR_K7_HWCR, misc_en);
> > + global_params.cppc_boost_disabled = misc_en & BIT_ULL(25);
> > +}
> > +
> > +static int amd_pstate_init_cpu(unsigned int cpunum)
> > +{
> > + struct amd_cpudata *cpudata;
> > +
> > + cpudata = all_cpu_data[cpunum];
> > + if (!cpudata) {
> > + cpudata = kzalloc(sizeof(*cpudata), GFP_KERNEL);
> > + if (!cpudata)
> > + return -ENOMEM;
> > + WRITE_ONCE(all_cpu_data[cpunum], cpudata);
> > +
> > + cpudata->cpu = cpunum;
> > + }
> > + cpudata->epp_powersave = -EINVAL;
> > + cpudata->epp_policy = 0;
> > + pr_debug("controlling: cpu %d\n", cpunum);
> > + return 0;
> > +}
> > +
> > +static int __amd_pstate_cpu_init(struct cpufreq_policy *policy)
> > +{
> > + int min_freq, max_freq, nominal_freq, lowest_nonlinear_freq, ret;
> > + struct amd_cpudata *cpudata;
> > + struct device *dev;
> > + int rc;
> > + u64 value;
> > +
> > + rc = amd_pstate_init_cpu(policy->cpu);
> > + if (rc)
> > + return rc;
> > +
> > + cpudata = all_cpu_data[policy->cpu];
> > +
> > + dev = get_cpu_device(policy->cpu);
> > + if (!dev)
> > + goto free_cpudata1;
> > +
> > + rc = amd_pstate_init_perf(cpudata);
> > + if (rc)
> > + goto free_cpudata1;
> > +
> > + min_freq = amd_get_min_freq(cpudata);
> > + max_freq = amd_get_max_freq(cpudata);
> > + nominal_freq = amd_get_nominal_freq(cpudata);
> > + lowest_nonlinear_freq = amd_get_lowest_nonlinear_freq(cpudata);
> > + if (min_freq < 0 || max_freq < 0 || min_freq > max_freq) {
> > + dev_err(dev, "min_freq(%d) or max_freq(%d) value is incorrect\n",
> > + min_freq, max_freq);
> > + ret = -EINVAL;
> > + goto free_cpudata1;
> > + }
> > +
> > + policy->min = min_freq;
> > + policy->max = max_freq;
> > +
> > + policy->cpuinfo.min_freq = min_freq;
> > + policy->cpuinfo.max_freq = max_freq;
> > + /* It will be updated by governor */
> > + policy->cur = policy->cpuinfo.min_freq;
> > +
> > + /* Initial processor data capability frequencies */
> > + cpudata->max_freq = max_freq;
> > + cpudata->min_freq = min_freq;
> > + cpudata->nominal_freq = nominal_freq;
> > + cpudata->lowest_nonlinear_freq = lowest_nonlinear_freq;
> > +
> > + policy->driver_data = cpudata;
> > +
> > + update_boost_state();
> > + cpudata->epp_cached = amd_pstate_get_epp(cpudata, value);
> > +
> > + policy->min = policy->cpuinfo.min_freq;
> > + policy->max = policy->cpuinfo.max_freq;
> > +
> > + if (boot_cpu_has(X86_FEATURE_CPPC))
> > + policy->fast_switch_possible = true;
> > +
> > + if (!shared_mem && boot_cpu_has(X86_FEATURE_CPPC)) {
>
> Here `shared_mem` is used but it is always 0, as it is not overwritten
> anywhere.
> Do we need `shared_mem` variable at all, can we use only
> X86_FEATURE_CPPC here?
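
A sketch of the simplification being asked about: with shared_mem never
written, the guard reduces to the feature check alone, i.e.

	if (boot_cpu_has(X86_FEATURE_CPPC)) {
		/* MSR path: cache CPPC_REQ and CPPC_CAP1 as in the patch */
	}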
>
> > + ret = rdmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, &value);
> > + if (ret)
> > + return ret;
> > + WRITE_ONCE(cpudata->cppc_req_cached, value);
> > +
> > + ret = rdmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_CAP1, &value);
> > + if (ret)
> > + return ret;
> > + WRITE_ONCE(cpudata->cppc_cap1_cached, value);
> > + }
> > [snip: remainder of the quoted patch, unchanged; it appears in full earlier in the thread]
>
> Thanks & Regards,
> Wyes

