Define topology_init_cpu_capacity_cppc() to use highest performance
values from _CPC objects to obtain and set maximum capacity information
for each CPU. acpi_cppc_processor_probe() is a good point at which to
trigger the initialization of CPU (u-arch) capacity values, as at this
point the highest performance values can be obtained from each CPU's
_CPC objects. Architectures can therefore use this functionality
through arch_init_invariance_cppc().
The performance scale used by CPPC is a unified scale for all CPUs in
the system. Therefore, by obtaining the raw highest performance values
from the _CPC objects, and normalizing them on the [0, 1024] capacity
scale, used by the task scheduler, we obtain the CPU capacity of each
CPU.
While an ACPI Notify(0x85) could alert about a change in the highest
performance value, which should in turn retrigger the CPU capacity
computations, this notification is not currently handled by the ACPI
processor driver. When supported, a call to arch_init_invariance_cppc()
would perform the update.
Signed-off-by: Ionela Voinescu <[email protected]>
Tested-by: Valentin Schneider <[email protected]>
Cc: Sudeep Holla <[email protected]>
---
drivers/base/arch_topology.c | 40 +++++++++++++++++++++++++++++++++++
include/linux/arch_topology.h | 4 ++++
2 files changed, 44 insertions(+)
diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
index 976154140f0b..ad2d95920ad1 100644
--- a/drivers/base/arch_topology.c
+++ b/drivers/base/arch_topology.c
@@ -339,6 +339,46 @@ bool __init topology_parse_cpu_capacity(struct device_node *cpu_node, int cpu)
return !ret;
}
+#ifdef CONFIG_ACPI_CPPC_LIB
+#include <acpi/cppc_acpi.h>
+
+void topology_init_cpu_capacity_cppc(void)
+{
+ struct cppc_perf_caps perf_caps;
+ int cpu;
+
+ if (likely(acpi_disabled || !acpi_cpc_valid()))
+ return;
+
+ raw_capacity = kcalloc(num_possible_cpus(), sizeof(*raw_capacity),
+ GFP_KERNEL);
+ if (!raw_capacity)
+ return;
+
+ for_each_possible_cpu(cpu) {
+ if (!cppc_get_perf_caps(cpu, &perf_caps) &&
+ (perf_caps.highest_perf >= perf_caps.nominal_perf) &&
+ (perf_caps.highest_perf >= perf_caps.lowest_perf)) {
+ raw_capacity[cpu] = perf_caps.highest_perf;
+ pr_debug("cpu_capacity: CPU%d cpu_capacity=%u (raw).\n",
+ cpu, raw_capacity[cpu]);
+ continue;
+ }
+
+ pr_err("cpu_capacity: CPU%d missing/invalid highest performance.\n", cpu);
+ pr_err("cpu_capacity: partial information: fallback to 1024 for all CPUs\n");
+ goto exit;
+ }
+
+ topology_normalize_cpu_scale();
+ schedule_work(&update_topology_flags_work);
+ pr_debug("cpu_capacity: cpu_capacity initialization done\n");
+
+exit:
+ free_raw_capacity();
+}
+#endif
+
#ifdef CONFIG_CPU_FREQ
static cpumask_var_t cpus_to_visit;
static void parsing_done_workfn(struct work_struct *work);
diff --git a/include/linux/arch_topology.h b/include/linux/arch_topology.h
index cce6136b300a..58cbe18d825c 100644
--- a/include/linux/arch_topology.h
+++ b/include/linux/arch_topology.h
@@ -11,6 +11,10 @@
void topology_normalize_cpu_scale(void);
int topology_update_cpu_topology(void);
+#ifdef CONFIG_ACPI_CPPC_LIB
+void topology_init_cpu_capacity_cppc(void);
+#endif
+
struct device_node;
bool topology_parse_cpu_capacity(struct device_node *cpu_node, int cpu);
--
2.25.1
Hi Ionela,
On 2022/3/3 2:09, Ionela Voinescu wrote:
> Define topology_init_cpu_capacity_cppc() to use highest performance
> values from _CPC objects to obtain and set maximum capacity information
> for each CPU. acpi_cppc_processor_probe() is a good point at which to
> trigger the initialization of CPU (u-arch) capacity values, as at this
> point the highest performance values can be obtained from each CPU's
> _CPC objects. Architectures can therefore use this functionality
> through arch_init_invariance_cppc().
>
> The performance scale used by CPPC is a unified scale for all CPUs in
> the system. Therefore, by obtaining the raw highest performance values
> from the _CPC objects, and normalizing them on the [0, 1024] capacity
> scale, used by the task scheduler, we obtain the CPU capacity of each
> CPU.
>
So we're going to use the highest performance rather than the nominal
performance; I checked the discussion in v2 [1]. Maybe we should also
document in sched-capacity.rst where the scheduler gets the capacity
from on ACPI-based systems? Currently we only cover the DT part, but
after this patch it's also supported on ACPI-based systems.
Out of curiosity: since we now have a raw capacity on ACPI systems, it
seems we could scale the capacity with the freq_factor? I looked into
register_cpufreq_notifier().
[1] https://lore.kernel.org/lkml/[email protected]/
Thanks,
Yicong
> While an ACPI Notify(0x85) could alert about a change in the highest
> performance value, which should in turn retrigger the CPU capacity
> computations, this notification is not currently handled by the ACPI
> processor driver. When supported, a call to arch_init_invariance_cppc()
> would perform the update.
>
> Signed-off-by: Ionela Voinescu <[email protected]>
> Tested-by: Valentin Schneider <[email protected]>
> Cc: Sudeep Holla <[email protected]>
> ---
> drivers/base/arch_topology.c | 40 +++++++++++++++++++++++++++++++++++
> include/linux/arch_topology.h | 4 ++++
> 2 files changed, 44 insertions(+)
>
> diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
> index 976154140f0b..ad2d95920ad1 100644
> --- a/drivers/base/arch_topology.c
> +++ b/drivers/base/arch_topology.c
> @@ -339,6 +339,46 @@ bool __init topology_parse_cpu_capacity(struct device_node *cpu_node, int cpu)
> return !ret;
> }
>
> +#ifdef CONFIG_ACPI_CPPC_LIB
> +#include <acpi/cppc_acpi.h>
> +
> +void topology_init_cpu_capacity_cppc(void)
> +{
> + struct cppc_perf_caps perf_caps;
> + int cpu;
> +
> + if (likely(acpi_disabled || !acpi_cpc_valid()))
> + return;
> +
> + raw_capacity = kcalloc(num_possible_cpus(), sizeof(*raw_capacity),
> + GFP_KERNEL);
> + if (!raw_capacity)
> + return;
> +
> + for_each_possible_cpu(cpu) {
> + if (!cppc_get_perf_caps(cpu, &perf_caps) &&
> + (perf_caps.highest_perf >= perf_caps.nominal_perf) &&
> + (perf_caps.highest_perf >= perf_caps.lowest_perf)) {
> + raw_capacity[cpu] = perf_caps.highest_perf;
> + pr_debug("cpu_capacity: CPU%d cpu_capacity=%u (raw).\n",
> + cpu, raw_capacity[cpu]);
> + continue;
> + }
> +
> + pr_err("cpu_capacity: CPU%d missing/invalid highest performance.\n", cpu);
> + pr_err("cpu_capacity: partial information: fallback to 1024 for all CPUs\n");
> + goto exit;
> + }
> +
> + topology_normalize_cpu_scale();
> + schedule_work(&update_topology_flags_work);
> + pr_debug("cpu_capacity: cpu_capacity initialization done\n");
> +
> +exit:
> + free_raw_capacity();
> +}
> +#endif
> +
> #ifdef CONFIG_CPU_FREQ
> static cpumask_var_t cpus_to_visit;
> static void parsing_done_workfn(struct work_struct *work);
> diff --git a/include/linux/arch_topology.h b/include/linux/arch_topology.h
> index cce6136b300a..58cbe18d825c 100644
> --- a/include/linux/arch_topology.h
> +++ b/include/linux/arch_topology.h
> @@ -11,6 +11,10 @@
> void topology_normalize_cpu_scale(void);
> int topology_update_cpu_topology(void);
>
> +#ifdef CONFIG_ACPI_CPPC_LIB
> +void topology_init_cpu_capacity_cppc(void);
> +#endif
> +
> struct device_node;
> bool topology_parse_cpu_capacity(struct device_node *cpu_node, int cpu);
>
>
On Wed, Mar 02, 2022 at 06:09:12PM +0000, Ionela Voinescu wrote:
> Define topology_init_cpu_capacity_cppc() to use highest performance
> values from _CPC objects to obtain and set maximum capacity information
> for each CPU. acpi_cppc_processor_probe() is a good point at which to
> trigger the initialization of CPU (u-arch) capacity values, as at this
> point the highest performance values can be obtained from each CPU's
> _CPC objects. Architectures can therefore use this functionality
> through arch_init_invariance_cppc().
>
> The performance scale used by CPPC is a unified scale for all CPUs in
> the system. Therefore, by obtaining the raw highest performance values
> from the _CPC objects, and normalizing them on the [0, 1024] capacity
> scale, used by the task scheduler, we obtain the CPU capacity of each
> CPU.
>
> While an ACPI Notify(0x85) could alert about a change in the highest
> performance value, which should in turn retrigger the CPU capacity
> computations, this notification is not currently handled by the ACPI
> processor driver. When supported, a call to arch_init_invariance_cppc()
> would perform the update.
>
> Signed-off-by: Ionela Voinescu <[email protected]>
> Tested-by: Valentin Schneider <[email protected]>
> Cc: Sudeep Holla <[email protected]>
Looks good to me. FWIW,
Acked-by: Sudeep Holla <[email protected]>
--
Regards,
Sudeep
Hi Yicong,
On Wednesday 09 Mar 2022 at 18:21:30 (+0800), Yicong Yang wrote:
> Hi Ionela,
>
> On 2022/3/3 2:09, Ionela Voinescu wrote:
> > Define topology_init_cpu_capacity_cppc() to use highest performance
> > values from _CPC objects to obtain and set maximum capacity information
> > for each CPU. acpi_cppc_processor_probe() is a good point at which to
> > trigger the initialization of CPU (u-arch) capacity values, as at this
> > point the highest performance values can be obtained from each CPU's
> > _CPC objects. Architectures can therefore use this functionality
> > through arch_init_invariance_cppc().
> >
> > The performance scale used by CPPC is a unified scale for all CPUs in
> > the system. Therefore, by obtaining the raw highest performance values
> > from the _CPC objects, and normalizing them on the [0, 1024] capacity
> > scale, used by the task scheduler, we obtain the CPU capacity of each
> > CPU.
> >
>
> So we're going to use highest performance rather than nominal performance,
> and I checked the discussion in v2 [1]. Maybe we should also document this
> in sched-capacity.rst that where scheduler get the capacity from on ACPI
> based system? Currently we only have DT part but after this patch it's
> also supported on ACPI based system.
>
It's a very good point. I'll send a separate patch for this, adding
information to "3.1 CPU capacity" in sched-capacity.rst. I'll send it
separately, and not with the rebase that Rafael requested, to avoid
confusing things.
> Out of curiosity, since we have raw capacity now on ACPI system, seems we
> are able to scale the capacity with freq_factor now? looked into
> register_cpufreq_notifier().
>
The freq_factor is only used for DT systems where one provides
"capacity-dmips-mhz" in DT. This entry represents DMIPS/MHz.
So the freq_factor, set to:
per_cpu(freq_factor, cpu) = policy->cpuinfo.max_freq / 1000;
is used to obtain the performance at the maximum frequency, basically
DMIPS = (Dhrystone) million instructions per second, by multiplying the
raw value from DT with the freq_factor. After this, all these values for
each CPU type are normalized on a [0, 1024] scale, resulting in what we
call CPU capacity.
For ACPI systems freq_factor will have the default value of 1 when we
call topology_normalize_cpu_scale(), as the performance value obtained
from _CPC is already representative of the highest frequency of the CPU,
and not performance/Hz as we get from DT. Therefore, we do not and
should not use a freq_factor here.
Hopefully I understood your question correctly.
Thanks,
Ionela.
> [1] https://lore.kernel.org/lkml/[email protected]/
>
> Thanks,
> Yicong
On 2022/3/9 23:37, Ionela Voinescu wrote:
> Hi Yicong,
>
> On Wednesday 09 Mar 2022 at 18:21:30 (+0800), Yicong Yang wrote:
>> Hi Ionela,
>>
>> On 2022/3/3 2:09, Ionela Voinescu wrote:
>>> Define topology_init_cpu_capacity_cppc() to use highest performance
>>> values from _CPC objects to obtain and set maximum capacity information
>>> for each CPU. acpi_cppc_processor_probe() is a good point at which to
>>> trigger the initialization of CPU (u-arch) capacity values, as at this
>>> point the highest performance values can be obtained from each CPU's
>>> _CPC objects. Architectures can therefore use this functionality
>>> through arch_init_invariance_cppc().
>>>
>>> The performance scale used by CPPC is a unified scale for all CPUs in
>>> the system. Therefore, by obtaining the raw highest performance values
>>> from the _CPC objects, and normalizing them on the [0, 1024] capacity
>>> scale, used by the task scheduler, we obtain the CPU capacity of each
>>> CPU.
>>>
>>
>> So we're going to use highest performance rather than nominal performance,
>> and I checked the discussion in v2 [1]. Maybe we should also document this
>> in sched-capacity.rst that where scheduler get the capacity from on ACPI
>> based system? Currently we only have DT part but after this patch it's
>> also supported on ACPI based system.
>>
>
> It's a very good point. I'll send a separate patch for this with added
> information in "3.1 CPU capacity" in sched-capacity.rst. I'll send this
> separate and not with the rebase that Rafael requested to avoid
> confusing things.
>
Sure, it's up to you. Thanks.
>> Out of curiosity, since we have raw capacity now on ACPI system, seems we
>> are able to scale the capacity with freq_factor now? looked into
>> register_cpufreq_notifier().
>>
>
> The freq_factor is only used for DT systems where one provides
> "capacity-dmips-mhz" in DT. This entry actually represents DMIPS/MHz.
>
> So the freq_factor, set to:
>
> per_cpu(freq_factor, cpu) = policy->cpuinfo.max_freq / 1000;
>
> is used to obtain the performance at the maximum frequency, basically
> DMIPS = (Dhrystone) million instructions per second, by multiplying this
> raw value from DT with the freq_factor. After this, all these value for
> each CPU type are normalized on a scale [0, 1024], resulting in what we
> call CPU capacity.
>
Thanks for the illustration. I can understand what DT does, as it's
defined clearly in [1]. The CPUs in the system may have different
capacity-dmips-mhz values as well as frequencies, so we first obtain the
DMIPS/MHz and then scale it by the max frequency.
> For ACPI systems freq_factor will have the default value of 1 when we
> call topology_normalize_cpu_scale(), as the performance value obtained
> from _CPC is already representative for the highest frequency of the CPU
> and not performance/Hz as we get from DT. Therefore, we are not and
> should not use a freq_factor here.
>
> Hopefully I understood your question correctly.
>
It seems we have different meanings of raw capacity on DT-based and
ACPI-based systems. On a DT-based system it doesn't consider the max
frequency of each CPU, so we need to take the frequency into account
later. But on an ACPI-based system the max perf has already taken the
max frequency into account, so we don't need to consider the frequency
differences among the CPUs. If so, the comment [2] is no longer correct:
we don't scale the capacity according to the frequency because it is
unnecessary, not because we cannot get the raw CPU capacity on an
ACPI-based system.
The CPUs on an ACPI-based system may also have different DMIPS ratings
and maximum frequencies, the same as on a DT-based system. Is it
possible to keep consistency with what DT does? As defined by the spec
[3], CPPC aims at "maintaining a performance definition that backs a
continuous, abstract, unit-less performance scale" and "leaves the
definition of the exact performance metric to the platform." So could we
also interpret it as "capacity-dmips-mhz", and then scale the capacity
by the max frequency provided by cpufreq, for example cppc_cpufreq? I'm
not sure I understand it correctly, please correct me if I'm wrong.
For this series, the arm64 part works. Based on 5.17-rc7, I tested this
on qemu with a modified DSDT:
estuary:/$ cat /sys/devices/system/cpu/cpu*/acpi_cppc/highest_perf
10000
10000
10000
10000
5000
5000
5000
5000
estuary:/$ cat /sys/devices/system/cpu/cpu*/cpu_capacity
1024
1024
1024
1024
512
512
512
512
If needed:
Tested-by: Yicong Yang <[email protected]>
[1] https://github.com/torvalds/linux/blob/master/Documentation/devicetree/bindings/arm/cpu-capacity.txt#L43
[2] https://github.com/torvalds/linux/blob/3bf7edc84a9eb4007dd9a0cb8878a7e1d5ec6a3b/drivers/base/arch_topology.c#L390
[3] https://uefi.org/specs/ACPI/6.4/08_Processor_Configuration_and_Control/declaring-processors.html#collaborative-processor-performance-control
Thanks.
> Thanks,
> Ionela.
>
>> [1] https://lore.kernel.org/lkml/[email protected]/
>>
>> Thanks,
>> Yicong
> .
>
On 2022/3/10 23:08, Ionela Voinescu wrote:
> On Thursday 10 Mar 2022 at 14:39:12 (+0800), Yicong Yang wrote:
>> On 2022/3/9 23:37, Ionela Voinescu wrote:
>>> Hi Yicong,
>>>
>>> On Wednesday 09 Mar 2022 at 18:21:30 (+0800), Yicong Yang wrote:
>>>> Hi Ionela,
> [..]
>>>> Out of curiosity, since we have raw capacity now on ACPI system, seems we
>>>> are able to scale the capacity with freq_factor now? looked into
>>>> register_cpufreq_notifier().
>>>>
>>>
>>> The freq_factor is only used for DT systems where one provides
>>> "capacity-dmips-mhz" in DT. This entry actually represents DMIPS/MHz.
>>>
>>> So the freq_factor, set to:
>>>
>>> per_cpu(freq_factor, cpu) = policy->cpuinfo.max_freq / 1000;
>>>
>>> is used to obtain the performance at the maximum frequency, basically
>>> DMIPS = (Dhrystone) million instructions per second, by multiplying this
>>> raw value from DT with the freq_factor. After this, all these value for
>>> each CPU type are normalized on a scale [0, 1024], resulting in what we
>>> call CPU capacity.
>>>
>>
>> Thanks for the illustration. I can understand what DT does as it's defined
>> clearly in [1]. The CPUs in the system may have different capacity-dmips-mhz
>> as well as frequency so we first obtain the DMIPS/MHz and then scaled it with
>> the max frequency.
>>
>>> For ACPI systems freq_factor will have the default value of 1 when we
>>> call topology_normalize_cpu_scale(), as the performance value obtained
>>> from _CPC is already representative for the highest frequency of the CPU
>>> and not performance/Hz as we get from DT. Therefore, we are not and
>>> should not use a freq_factor here.
>>>
>>> Hopefully I understood your question correctly.
>>>
>>
>> Seems we have different meaning of raw capacity on DT based and ACPI based system.
>> On DT based system it doesn't consider the max frequency of each CPU so we need
>> to take the frequency into account later. But on ACPI based system the max perf
>> has already take the max frequency into account and we don't need to consider
>> the frequency differences among the CPUs. If so, the comment [2] is no more
>> correct as we don't need to scale the capacity according to the frequency but
>> not because we cannot get the raw cpu capacity on ACPI based system.
>>
>
> Correct! I've fixed that comment in v4 [1].
>
>> The CPUs on ACPI based system may also have different DMIPS and maximum frequency,
>> the same with the DT based system. Is it possible to keep consistence with
>> what DT does? As defined by the spec[3], the CPPC aims to "maintaining a performance
>> definition that backs a continuous, abstract, unit-less performance scale" and
>> "leaves the definition of the exact performance metric to the platform." So is it
>> possible we can also interpret it as "capacity-dmips-mhz"?
>
> I don't believe so because of:
>
> "Platforms must use the same performance scale for all processors in the
> system. On platforms with heterogeneous processors, the performance
> characteristics of all processors may not be identical. In this case, the
> platform must synthesize a performance scale that adjusts for differences
> in processors, such that any two processors running the same workload at
> the same performance level will complete in approximately the same time."
>
> So IMO, given this description, it's not appropriate to provide/use
> capacity-dmips-mhz "performance levels" in _CPC. To have a very simple
> example, let's assume we have a system with two CPUs of the same u-arch
> but one of them is clocked at 1GHz while the other is clocked at 2GHz
> (fixed frequency for both). If we are to run a theoretical benchmark on
> both we would get 50% on the first and 100% on the second (considering we
> normalize the performance scores on a scale [0, 100%]). So we could have
> 50 and 100 as highest performance levels in _CPC, if one uses a system
> wide performance scale as described in the specification.
>
> But if we convert those values to DMIPS/MHz we would get the same value
> for both CPUs. But if we provide this same value as highest performance
> level in _CPC for both we break the rule of "running the same workload
> at the same performance level will complete in approximately the same
> time."
>
Thanks for the clarification. That sounds reasonable to me. :)
>> Then to scale the cpu with max frequency provided by cpufreq, for example
>> cppc_cpufreq. I'm not sure I'm right and understand it correctly, please
>> correct me if I'm wrong.
>
> All frequency information on ACPI system is optional so even if one
> would ever want to do something like the above, one might not know what
> is the maximum frequency of a CPU. I believe the tendency in recent
> systems (even DT based) is the opposite - to hide frequency information
> (usually only known to firmware) and only work with abstract performance
> scales. So likely in the future is the DT path that might change.
I doubt that. Even though the frequency information in _CPC is optional,
the kernel may get the maximum frequency in other ways. On my platform
it's obtained from DMI, see cppc_cpufreq_perf_to_khz(). But I'm not sure
about other cpufreq drivers.
Thanks,
Yicong
On Thursday 10 Mar 2022 at 14:39:12 (+0800), Yicong Yang wrote:
> On 2022/3/9 23:37, Ionela Voinescu wrote:
> > Hi Yicong,
> >
> > On Wednesday 09 Mar 2022 at 18:21:30 (+0800), Yicong Yang wrote:
> >> Hi Ionela,
[..]
> >> Out of curiosity, since we have raw capacity now on ACPI system, seems we
> >> are able to scale the capacity with freq_factor now? looked into
> >> register_cpufreq_notifier().
> >>
> >
> > The freq_factor is only used for DT systems where one provides
> > "capacity-dmips-mhz" in DT. This entry actually represents DMIPS/MHz.
> >
> > So the freq_factor, set to:
> >
> > per_cpu(freq_factor, cpu) = policy->cpuinfo.max_freq / 1000;
> >
> > is used to obtain the performance at the maximum frequency, basically
> > DMIPS = (Dhrystone) million instructions per second, by multiplying this
> > raw value from DT with the freq_factor. After this, all these value for
> > each CPU type are normalized on a scale [0, 1024], resulting in what we
> > call CPU capacity.
> >
>
> Thanks for the illustration. I can understand what DT does as it's defined
> clearly in [1]. The CPUs in the system may have different capacity-dmips-mhz
> as well as frequency so we first obtain the DMIPS/MHz and then scaled it with
> the max frequency.
>
> > For ACPI systems freq_factor will have the default value of 1 when we
> > call topology_normalize_cpu_scale(), as the performance value obtained
> > from _CPC is already representative for the highest frequency of the CPU
> > and not performance/Hz as we get from DT. Therefore, we are not and
> > should not use a freq_factor here.
> >
> > Hopefully I understood your question correctly.
> >
>
> Seems we have different meaning of raw capacity on DT based and ACPI based system.
> On DT based system it doesn't consider the max frequency of each CPU so we need
> to take the frequency into account later. But on ACPI based system the max perf
> has already take the max frequency into account and we don't need to consider
> the frequency differences among the CPUs. If so, the comment [2] is no more
> correct as we don't need to scale the capacity according to the frequency but
> not because we cannot get the raw cpu capacity on ACPI based system.
>
Correct! I've fixed that comment in v4 [1].
> The CPUs on ACPI based system may also have different DMIPS and maximum frequency,
> the same with the DT based system. Is it possible to keep consistence with
> what DT does? As defined by the spec[3], the CPPC aims to "maintaining a performance
> definition that backs a continuous, abstract, unit-less performance scale" and
> "leaves the definition of the exact performance metric to the platform." So is it
> possible we can also interpret it as "capacity-dmips-mhz"?
I don't believe so because of:
"Platforms must use the same performance scale for all processors in the
system. On platforms with heterogeneous processors, the performance
characteristics of all processors may not be identical. In this case, the
platform must synthesize a performance scale that adjusts for differences
in processors, such that any two processors running the same workload at
the same performance level will complete in approximately the same time."
So IMO, given this description, it's not appropriate to provide/use
capacity-dmips-mhz "performance levels" in _CPC. To have a very simple
example, let's assume we have a system with two CPUs of the same u-arch
but one of them is clocked at 1GHz while the other is clocked at 2GHz
(fixed frequency for both). If we are to run a theoretical benchmark on
both we would get 50% on the first and 100% on the second (considering we
normalize the performance scores on a scale [0, 100%]). So we could have
50 and 100 as highest performance levels in _CPC, if one uses a system
wide performance scale as described in the specification.
If, however, we convert those values to DMIPS/MHz, we would get the same
value for both CPUs. And if we provide this same value as the highest
performance level in _CPC for both, we break the rule of "running the
same workload at the same performance level will complete in
approximately the same time."
> Then to scale the cpu with max frequency provided by cpufreq, for example
> cppc_cpufreq. I'm not sure I'm right and understand it correctly, please
> correct me if I'm wrong.
All frequency information on ACPI system is optional so even if one
would ever want to do something like the above, one might not know what
is the maximum frequency of a CPU. I believe the tendency in recent
systems (even DT based) is the opposite - to hide frequency information
(usually only known to firmware) and only work with abstract performance
scales. So likely in the future is the DT path that might change.
> For this series, the arm64 part works and based on 5.17-rc7 I tested this on qemu
> with modified DSDT:
>
> estuary:/$ cat /sys/devices/system/cpu/cpu*/acpi_cppc/highest_perf
> 10000
> 10000
> 10000
> 10000
> 5000
> 5000
> 5000
> 5000
> estuary:/$ cat /sys/devices/system/cpu/cpu*/cpu_capacity
> 1024
> 1024
> 1024
> 1024
> 512
> 512
> 512
> 512
>
> If needed
>
> Tested-by: Yicong Yang <[email protected]>
>
[1] https://lore.kernel.org/lkml/[email protected]/T/#u
Many thanks,
Ionela.
> [1] https://github.com/torvalds/linux/blob/master/Documentation/devicetree/bindings/arm/cpu-capacity.txt#L43
> [2] https://github.com/torvalds/linux/blob/3bf7edc84a9eb4007dd9a0cb8878a7e1d5ec6a3b/drivers/base/arch_topology.c#L390
> [3] https://uefi.org/specs/ACPI/6.4/08_Processor_Configuration_and_Control/declaring-processors.html#collaborative-processor-performance-control
>
> Thanks.
>
> > Thanks,
> > Ionela.
> >
> >> [1] https://lore.kernel.org/lkml/[email protected]/
> >>
> >> Thanks,
> >> Yicong
> > .
> >