2021-06-10 15:05:07

by Lukasz Luba

Subject: [PATCH v3 0/3] Add allowed CPU capacity knowledge to EAS

Hi all,

This v3 of the patch set adds knowledge about reduced CPU capacity
to the Energy Model (EM) and the Energy Aware Scheduler (EAS). Currently,
the issue is that the SchedUtil CPU frequency and the EM frequency are
not aligned when there is CPU thermal capping, which causes an
estimation error. This patch set feeds the information about the allowed
CPU capacity into the EM (thanks to the thermal pressure information),
which improves the energy estimation. More information about this
mechanism can be found in the patch descriptions.
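
To make the mechanism concrete, here is a minimal, compilable userspace
sketch of the clamping idea (the capacity numbers and the plain arrays
are invented for illustration; the real code operates on per-cpu
scheduler signals):

#include <stdio.h>

#define NR_CPUS 4

int main(void)
{
	/* Hypothetical values on the scheduler's 0..1024 capacity scale. */
	unsigned long cpu_cap = 1024;		/* arch_scale_cpu_capacity() */
	unsigned long th_pressure = 224;	/* capacity lost to thermal capping */
	unsigned long util[NR_CPUS] = { 300, 500, 950, 700 };
	unsigned long allowed = cpu_cap - th_pressure;	/* 800 */
	unsigned long sum_util = 0, max_util = 0;
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		/* Clamp each CPU's utilization to what is really deliverable. */
		unsigned long u = util[cpu] < allowed ? util[cpu] : allowed;

		sum_util += u;		/* feeds the EM energy estimation */
		if (u > max_util)
			max_util = u;	/* drives the predicted PD frequency */
	}

	printf("max_util=%lu sum_util=%lu (allowed=%lu)\n",
	       max_util, sum_util, allowed);
	return 0;
}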

There is a new patch 1/3 in this v3, addressing an issue triggered for
hotplugged-out CPUs. Offline CPUs don't have a proper value stored by the
thermal framework in their per-cpu thermal_pressure variable. Thus, the
thermal pressure geometric series machinery reads a 'stale' value when
such a CPU comes back online. The patch fixes this, so that all
mechanisms relying on CPU capacity, such as load balancing, not only EAS,
get more accurate information for those 'returning online' CPUs. I've
also added the related CPU cooling maintainers to the CC of this patch
set.
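
A minimal sketch of that failure mode (plain arrays stand in for the
kernel's cpumasks and the per-cpu variable; the values are invented):

#include <stdio.h>

#define NR_CPUS 4

/* Stand-in for the per-cpu 'thermal_pressure' variable. */
static unsigned long thermal_pressure[NR_CPUS];

static void update_pressure(const int *mask, unsigned long val)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		if (mask[cpu])
			thermal_pressure[cpu] = val;
}

int main(void)
{
	int online[NR_CPUS]  = { 1, 1, 1, 0 };	/* CPU3 is hotplugged out */
	int related[NR_CPUS] = { 1, 1, 1, 1 };	/* every CPU of the policy */

	/* The bug: updating only the online mask leaves CPU3 stale (0). */
	update_pressure(online, 224);
	printf("CPU3 after policy->cpus update:         %lu\n", thermal_pressure[3]);

	/* The fix: updating all related CPUs covers offline ones too. */
	update_pressure(related, 224);
	printf("CPU3 after policy->related_cpus update: %lu\n", thermal_pressure[3]);
	return 0;
}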

Changelog:
v3:
- switched to the 'raw' per-cpu thermal pressure instead of the thermal
pressure geometric series signal, since it is better suited to the
purpose of this use case: predicting the SchedUtil frequency (Vincent,
Dietmar)
- added more comments in the patch 2/3 header for the use case where
thermal capping might be applied even when the CPUs are not
over-utilized (Dietmar)
- added the ACK tag from Rafael for the SchedUtil part
- added a fix patch for the missing per-cpu thermal_pressure update for
offline CPUs in cpufreq_cooling
v2 [2]:
- clamp the returned value from effective_cpu_util() and avoid irq
util scaling issues (Quentin)
v1 is available at [1]

Regards,
Lukasz

[1] https://lore.kernel.org/linux-pm/[email protected]/
[2] https://lore.kernel.org/lkml/[email protected]/

Lukasz Luba (3):
thermal: cpufreq_cooling: Update also offline CPUs per-cpu
thermal_pressure
sched/fair: Take thermal pressure into account while estimating energy
sched/cpufreq: Consider reduced CPU capacity in energy calculation

drivers/thermal/cpufreq_cooling.c | 2 +-
include/linux/energy_model.h | 16 +++++++++++++---
include/linux/sched/cpufreq.h | 2 +-
kernel/sched/cpufreq_schedutil.c | 1 +
kernel/sched/fair.c | 14 ++++++++++----
5 files changed, 26 insertions(+), 9 deletions(-)

--
2.17.1


2021-06-10 15:06:05

by Lukasz Luba

Subject: [PATCH v3 1/3] thermal: cpufreq_cooling: Update also offline CPUs per-cpu thermal_pressure

The thermal pressure signal gives the scheduler information about
reduced CPU capacity due to thermal capping. It is based on a value
stored in a per-cpu 'thermal_pressure' variable. Online CPUs get the
new value there, while offline ones don't. Unfortunately, when a CPU
comes back online, the value read from its per-cpu variable might be
wrong (stale data). This might affect scheduler decisions, since the
scheduler sees the CPU capacity differently from what is actually
available.

Fix it by making sure that all CPUs, online and offline, get the proper
value in their per-cpu variable when the thermal framework sets a
capping.
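
For context: the mask chosen above is handed down (in this kernel
version) to the arch code, which writes the value into every CPU in the
mask, roughly as in the sketch below (simplified from the v5.13-era
drivers/base/arch_topology.c; treat it as an illustration, not the
exact source):

void topology_set_thermal_pressure(const struct cpumask *cpus,
				   unsigned long th_pressure)
{
	int cpu;

	/* CPUs outside 'cpus' (e.g. offline ones, when only
	 * policy->cpus is passed) keep their old, possibly stale,
	 * per-cpu value. */
	for_each_cpu(cpu, cpus)
		WRITE_ONCE(per_cpu(thermal_pressure, cpu), th_pressure);
}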

Fixes: f12e4f66ab6a3 ("thermal/cpu-cooling: Update thermal pressure in case of a maximum frequency capping")
Signed-off-by: Lukasz Luba <[email protected]>
---
drivers/thermal/cpufreq_cooling.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/thermal/cpufreq_cooling.c b/drivers/thermal/cpufreq_cooling.c
index eeb4e4b76c0b..43b1ae8a7789 100644
--- a/drivers/thermal/cpufreq_cooling.c
+++ b/drivers/thermal/cpufreq_cooling.c
@@ -478,7 +478,7 @@ static int cpufreq_set_cur_state(struct thermal_cooling_device *cdev,
ret = freq_qos_update_request(&cpufreq_cdev->qos_req, frequency);
if (ret >= 0) {
cpufreq_cdev->cpufreq_state = state;
- cpus = cpufreq_cdev->policy->cpus;
+ cpus = cpufreq_cdev->policy->related_cpus;
max_capacity = arch_scale_cpu_capacity(cpumask_first(cpus));
capacity = frequency * max_capacity;
capacity /= cpufreq_cdev->policy->cpuinfo.max_freq;
--
2.17.1

2021-06-10 15:07:17

by Lukasz Luba

Subject: [PATCH v3 2/3] sched/fair: Take thermal pressure into account while estimating energy

Energy Aware Scheduling (EAS) needs to be able to predict the frequency
requests made by the SchedUtil governor to properly estimate energy used
in the future. It has to take into account CPU utilization and forecast
the Performance Domain (PD) frequency. There is a corner case where the
max allowed frequency might be reduced due to thermal capping. SchedUtil
is aware of that reduced frequency, so it should also be taken into
account in EAS estimations.

SchedUtil, as a CPUFreq governor, knows the maximum allowed frequency of
a CPU, thanks to cpufreq_driver_resolve_freq() and internal clamping
to 'policy::max'. SchedUtil is responsible for respecting that upper
limit while setting the frequency through CPUFreq drivers. This
effective frequency is stored internally in 'sugov_policy::next_freq'
and EAS has to predict that value.

In the existing code, the raw value of arch_scale_cpu_capacity() is used
for clamping the CPU utilization returned from effective_cpu_util().
This patch fixes the issue of a too large single CPU utilization by
clamping it to the allowed CPU capacity. The allowed CPU capacity is the
CPU capacity reduced by the thermal pressure signal. We rely on this load avg
geometric series in a similar way to other mechanisms in the scheduler.

Thanks to the knowledge about the allowed CPU capacity, we don't get a
too large value for a single CPU utilization, which is then added to the
util sum. The util sum is used as a source of information for estimating
the whole PD energy. To avoid wrong energy estimation in EAS (due to the
capped frequency), make sure that the calculation of the util sum is
aware of the allowed CPU capacity.

This thermal pressure might be visible in scenarios where the CPUs are
not heavily loaded, but some other component (like a GPU) has
drastically reduced the available power budget and increased the SoC
temperature. Thus, we still use EAS for task placement and the CPUs are
not over-utilized.
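
As a worked example with made-up numbers:

  cpu_cap  = arch_scale_cpu_capacity()     = 1024
  pressure = arch_scale_thermal_pressure() =  224
  _cpu_cap = cpu_cap - pressure            =  800

  effective_cpu_util() = 950  ->  min(950, 800) = 800 goes into sum_util
  effective_cpu_util() = 600  ->  min(600, 800) = 600 goes into sum_util

Without the clamp, the 950 would feed an energy estimate for a
performance level the capped CPU cannot actually reach.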

Signed-off-by: Lukasz Luba <[email protected]>
---
kernel/sched/fair.c | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 161b92aa1c79..237726217dad 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6527,8 +6527,12 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
struct cpumask *pd_mask = perf_domain_span(pd);
unsigned long cpu_cap = arch_scale_cpu_capacity(cpumask_first(pd_mask));
unsigned long max_util = 0, sum_util = 0;
+ unsigned long _cpu_cap, thermal_pressure;
int cpu;

+ thermal_pressure = arch_scale_thermal_pressure(cpumask_first(pd_mask));
+ _cpu_cap = cpu_cap - thermal_pressure;
+
/*
* The capacity state of CPUs of the current rd can be driven by CPUs
* of another rd if they belong to the same pd. So, account for the
@@ -6564,8 +6568,10 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
* is already enough to scale the EM reported power
* consumption at the (eventually clamped) cpu_capacity.
*/
- sum_util += effective_cpu_util(cpu, util_running, cpu_cap,
- ENERGY_UTIL, NULL);
+ cpu_util = effective_cpu_util(cpu, util_running, cpu_cap,
+ ENERGY_UTIL, NULL);
+
+ sum_util += min(cpu_util, _cpu_cap);

/*
* Performance domain frequency: utilization clamping
@@ -6576,7 +6582,7 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
*/
cpu_util = effective_cpu_util(cpu, util_freq, cpu_cap,
FREQUENCY_UTIL, tsk);
- max_util = max(max_util, cpu_util);
+ max_util = max(max_util, min(cpu_util, _cpu_cap));
}

return em_cpu_energy(pd->em_pd, max_util, sum_util);
--
2.17.1

2021-06-10 15:07:39

by Lukasz Luba

Subject: [PATCH v3 3/3] sched/cpufreq: Consider reduced CPU capacity in energy calculation

Energy Aware Scheduling (EAS) needs to predict the decisions made by
SchedUtil. The map_util_freq() function exists to do that.

There are corner cases where the max allowed frequency might be reduced
(due to thermal capping). SchedUtil, as a CPUFreq governor, is aware of
that, but EAS is not. This patch aims to address it.

SchedUtil stores the effective frequency in the 'sugov_policy::next_freq'
field. EAS has to predict that value, which is the frequency really in
use. That value is computed after a call to cpufreq_driver_resolve_freq(),
which clamps to the CPUFreq policy limits. In the existing code EAS is
not able to predict that real frequency. This leads to energy estimation
errors.

To avoid wrong energy estimation in EAS (due to frequency misprediction),
make sure that the step which calculates the Performance Domain frequency
is also aware of the allowed CPU capacity.

Furthermore, modify map_util_freq() to not extend the frequency value.
Instead, use map_util_perf() to extend the util value in both places:
SchedUtil and EAS, but for EAS clamp it to the max allowed CPU capacity.
In the end, we achieve the same desirable behavior for both subsystems
and alignment with regard to the real CPU frequency.
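
A worked example (values invented for illustration) of how the margin
moves from map_util_freq() into map_util_perf(), and where the EAS clamp
takes effect:

  old: freq = (max_freq + max_freq/4) * util / scale_cpu
  new: util = util + util/4                 /* map_util_perf() */
       util = min(util, allowed_cpu_cap)    /* EAS only */
       freq = max_freq * util / scale_cpu   /* map_util_freq() */

  With max_freq = 2000 MHz, scale_cpu = 1024, allowed_cpu_cap = 800:

  util = 640: old: 2500 * 640 / 1024 = 1562 MHz
              new: 2000 * min(800, 800) / 1024 = 1562 MHz (identical)
  util = 900: old: 2500 * 900 / 1024 = 2197 MHz (ignores the cap)
              new: 2000 * min(1125, 800) / 1024 = 1562 MHz (respects it)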

Acked-by: Rafael J. Wysocki <[email protected]> (For the schedutil part)
Signed-off-by: Lukasz Luba <[email protected]>
---
include/linux/energy_model.h | 16 +++++++++++++---
include/linux/sched/cpufreq.h | 2 +-
kernel/sched/cpufreq_schedutil.c | 1 +
kernel/sched/fair.c | 2 +-
4 files changed, 16 insertions(+), 5 deletions(-)

diff --git a/include/linux/energy_model.h b/include/linux/energy_model.h
index 757fc60658fa..3f221dbf5f95 100644
--- a/include/linux/energy_model.h
+++ b/include/linux/energy_model.h
@@ -91,6 +91,8 @@ void em_dev_unregister_perf_domain(struct device *dev);
* @pd : performance domain for which energy has to be estimated
* @max_util : highest utilization among CPUs of the domain
* @sum_util : sum of the utilization of all CPUs in the domain
+ * @allowed_cpu_cap : maximum allowed CPU capacity for the @pd, which
+ *			might reflect reduced frequency (due to thermal)
*
* This function must be used only for CPU devices. There is no validation,
* i.e. if the EM is a CPU type and has cpumask allocated. It is called from
@@ -100,7 +102,8 @@ void em_dev_unregister_perf_domain(struct device *dev);
* a capacity state satisfying the max utilization of the domain.
*/
static inline unsigned long em_cpu_energy(struct em_perf_domain *pd,
- unsigned long max_util, unsigned long sum_util)
+ unsigned long max_util, unsigned long sum_util,
+ unsigned long allowed_cpu_cap)
{
unsigned long freq, scale_cpu;
struct em_perf_state *ps;
@@ -112,11 +115,17 @@ static inline unsigned long em_cpu_energy(struct em_perf_domain *pd,
/*
* In order to predict the performance state, map the utilization of
* the most utilized CPU of the performance domain to a requested
- * frequency, like schedutil.
+ * frequency, like schedutil. Take also into account that the real
+ * frequency might be set lower (due to thermal capping). Thus, clamp
+ * max utilization to the allowed CPU capacity before calculating
+ * effective frequency.
*/
cpu = cpumask_first(to_cpumask(pd->cpus));
scale_cpu = arch_scale_cpu_capacity(cpu);
ps = &pd->table[pd->nr_perf_states - 1];
+
+ max_util = map_util_perf(max_util);
+ max_util = min(max_util, allowed_cpu_cap);
freq = map_util_freq(max_util, ps->frequency, scale_cpu);

/*
@@ -209,7 +218,8 @@ static inline struct em_perf_domain *em_pd_get(struct device *dev)
return NULL;
}
static inline unsigned long em_cpu_energy(struct em_perf_domain *pd,
- unsigned long max_util, unsigned long sum_util)
+ unsigned long max_util, unsigned long sum_util,
+ unsigned long allowed_cpu_cap)
{
return 0;
}
diff --git a/include/linux/sched/cpufreq.h b/include/linux/sched/cpufreq.h
index 6205578ab6ee..bdd31ab93bc5 100644
--- a/include/linux/sched/cpufreq.h
+++ b/include/linux/sched/cpufreq.h
@@ -26,7 +26,7 @@ bool cpufreq_this_cpu_can_update(struct cpufreq_policy *policy);
static inline unsigned long map_util_freq(unsigned long util,
unsigned long freq, unsigned long cap)
{
- return (freq + (freq >> 2)) * util / cap;
+ return freq * util / cap;
}

static inline unsigned long map_util_perf(unsigned long util)
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 4f09afd2f321..57124614363d 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -151,6 +151,7 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
unsigned int freq = arch_scale_freq_invariant() ?
policy->cpuinfo.max_freq : policy->cur;

+ util = map_util_perf(util);
freq = map_util_freq(util, freq, max);

if (freq == sg_policy->cached_raw_freq && !sg_policy->need_freq_update)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 237726217dad..3795b96a149b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6585,7 +6585,7 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
max_util = max(max_util, min(cpu_util, _cpu_cap));
}

- return em_cpu_energy(pd->em_pd, max_util, sum_util);
+ return em_cpu_energy(pd->em_pd, max_util, sum_util, _cpu_cap);
}

/*
--
2.17.1

2021-06-14 10:14:35

by Viresh Kumar

Subject: Re: [PATCH v3 1/3] thermal: cpufreq_cooling: Update also offline CPUs per-cpu thermal_pressure

On 10-06-21, 16:03, Lukasz Luba wrote:
> The thermal pressure signal gives the scheduler information about
> reduced CPU capacity due to thermal capping. It is based on a value
> stored in a per-cpu 'thermal_pressure' variable. Online CPUs get the
> new value there, while offline ones don't. Unfortunately, when a CPU
> comes back online, the value read from its per-cpu variable might be
> wrong (stale data). This might affect scheduler decisions, since the
> scheduler sees the CPU capacity differently from what is actually
> available.
>
> Fix it by making sure that all CPUs, online and offline, get the proper
> value in their per-cpu variable when the thermal framework sets a
> capping.
>
> Fixes: f12e4f66ab6a3 ("thermal/cpu-cooling: Update thermal pressure in case of a maximum frequency capping")
> Signed-off-by: Lukasz Luba <[email protected]>
> ---
> drivers/thermal/cpufreq_cooling.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/thermal/cpufreq_cooling.c b/drivers/thermal/cpufreq_cooling.c
> index eeb4e4b76c0b..43b1ae8a7789 100644
> --- a/drivers/thermal/cpufreq_cooling.c
> +++ b/drivers/thermal/cpufreq_cooling.c
> @@ -478,7 +478,7 @@ static int cpufreq_set_cur_state(struct thermal_cooling_device *cdev,
> ret = freq_qos_update_request(&cpufreq_cdev->qos_req, frequency);
> if (ret >= 0) {
> cpufreq_cdev->cpufreq_state = state;
> - cpus = cpufreq_cdev->policy->cpus;
> + cpus = cpufreq_cdev->policy->related_cpus;
> max_capacity = arch_scale_cpu_capacity(cpumask_first(cpus));
> capacity = frequency * max_capacity;
> capacity /= cpufreq_cdev->policy->cpuinfo.max_freq;

Acked-by: Viresh Kumar <[email protected]>

--
viresh

2021-06-14 10:23:08

by Lukasz Luba

Subject: Re: [PATCH v3 1/3] thermal: cpufreq_cooling: Update also offline CPUs per-cpu thermal_pressure



On 6/14/21 11:12 AM, Viresh Kumar wrote:
> On 10-06-21, 16:03, Lukasz Luba wrote:
>> The thermal pressure signal gives the scheduler information about
>> reduced CPU capacity due to thermal capping. It is based on a value
>> stored in a per-cpu 'thermal_pressure' variable. Online CPUs get the
>> new value there, while offline ones don't. Unfortunately, when a CPU
>> comes back online, the value read from its per-cpu variable might be
>> wrong (stale data). This might affect scheduler decisions, since the
>> scheduler sees the CPU capacity differently from what is actually
>> available.
>>
>> Fix it by making sure that all CPUs, online and offline, get the proper
>> value in their per-cpu variable when the thermal framework sets a
>> capping.
>>
>> Fixes: f12e4f66ab6a3 ("thermal/cpu-cooling: Update thermal pressure in case of a maximum frequency capping")
>> Signed-off-by: Lukasz Luba <[email protected]>
>> ---
>> drivers/thermal/cpufreq_cooling.c | 2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/drivers/thermal/cpufreq_cooling.c b/drivers/thermal/cpufreq_cooling.c
>> index eeb4e4b76c0b..43b1ae8a7789 100644
>> --- a/drivers/thermal/cpufreq_cooling.c
>> +++ b/drivers/thermal/cpufreq_cooling.c
>> @@ -478,7 +478,7 @@ static int cpufreq_set_cur_state(struct thermal_cooling_device *cdev,
>> ret = freq_qos_update_request(&cpufreq_cdev->qos_req, frequency);
>> if (ret >= 0) {
>> cpufreq_cdev->cpufreq_state = state;
>> - cpus = cpufreq_cdev->policy->cpus;
>> + cpus = cpufreq_cdev->policy->related_cpus;
>> max_capacity = arch_scale_cpu_capacity(cpumask_first(cpus));
>> capacity = frequency * max_capacity;
>> capacity /= cpufreq_cdev->policy->cpuinfo.max_freq;
>
> Acked-by: Viresh Kumar <[email protected]>
>

Thank you Viresh!

Regards,
Lukasz

2021-06-14 15:31:32

by Lukasz Luba

Subject: Re: [PATCH v3 2/3] sched/fair: Take thermal pressure into account while estimating energy

Hi Vincent,

Gentle ping. Could you have a look at this implementation, please?


On 6/10/21 4:03 PM, Lukasz Luba wrote:

[snip]

> @@ -6527,8 +6527,12 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
> struct cpumask *pd_mask = perf_domain_span(pd);
> unsigned long cpu_cap = arch_scale_cpu_capacity(cpumask_first(pd_mask));
> unsigned long max_util = 0, sum_util = 0;
> + unsigned long _cpu_cap, thermal_pressure;
> int cpu;
>
> + thermal_pressure = arch_scale_thermal_pressure(cpumask_first(pd_mask));
> + _cpu_cap = cpu_cap - thermal_pressure;

I've done the implementation according to your suggestion. That should
provide consistent usage.

Regards,
Lukasz

2021-06-14 15:50:26

by Vincent Guittot

Subject: Re: [PATCH v3 2/3] sched/fair: Take thermal pressure into account while estimating energy

On Mon, 14 Jun 2021 at 17:29, Lukasz Luba <[email protected]> wrote:
>
> Hi Vincent,
>
> Gentle ping. Could you have a look at this implementation, please?

Ah yes, this has been lost in my inbox. Let me have a look at it.

>
>
> On 6/10/21 4:03 PM, Lukasz Luba wrote:
>
> [snip]
>
> > @@ -6527,8 +6527,12 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
> > struct cpumask *pd_mask = perf_domain_span(pd);
> > unsigned long cpu_cap = arch_scale_cpu_capacity(cpumask_first(pd_mask));
> > unsigned long max_util = 0, sum_util = 0;
> > + unsigned long _cpu_cap, thermal_pressure;
> > int cpu;
> >
> > + thermal_pressure = arch_scale_thermal_pressure(cpumask_first(pd_mask));
> > + _cpu_cap = cpu_cap - thermal_pressure;
>
> I've done the implementation according to your suggestion. That should
> provide consistent usage.
>
> Regards,
> Lukasz

2021-06-14 16:06:27

by Vincent Guittot

Subject: Re: [PATCH v3 2/3] sched/fair: Take thermal pressure into account while estimating energy

On Thu, 10 Jun 2021 at 17:03, Lukasz Luba <[email protected]> wrote:
>
> Energy Aware Scheduling (EAS) needs to be able to predict the frequency
> requests made by the SchedUtil governor to properly estimate energy used
> in the future. It has to take into account CPU utilization and forecast
> the Performance Domain (PD) frequency. There is a corner case where the
> max allowed frequency might be reduced due to thermal capping. SchedUtil
> is aware of that reduced frequency, so it should also be taken into
> account in EAS estimations.
>
> SchedUtil, as a CPUFreq governor, knows the maximum allowed frequency of
> a CPU, thanks to cpufreq_driver_resolve_freq() and internal clamping
> to 'policy::max'. SchedUtil is responsible for respecting that upper
> limit while setting the frequency through CPUFreq drivers. This
> effective frequency is stored internally in 'sugov_policy::next_freq'
> and EAS has to predict that value.
>
> In the existing code, the raw value of arch_scale_cpu_capacity() is used
> for clamping the CPU utilization returned from effective_cpu_util().
> This patch fixes the issue of a too large single CPU utilization by
> clamping it to the allowed CPU capacity. The allowed CPU capacity is the
> CPU capacity reduced by the thermal pressure signal. We rely on this load avg

you don't rely on load avg value but on raw thermal pressure value now

> geometric series in a similar way to other mechanisms in the scheduler.
>
> Thanks to the knowledge about the allowed CPU capacity, we don't get a
> too large value for a single CPU utilization, which is then added to the
> util sum. The util sum is used as a source of information for estimating
> the whole PD energy. To avoid wrong energy estimation in EAS (due to the
> capped frequency), make sure that the calculation of the util sum is
> aware of the allowed CPU capacity.
>
> This thermal pressure might be visible in scenarios where the CPUs are
> not heavily loaded, but some other component (like a GPU) has
> drastically reduced the available power budget and increased the SoC
> temperature. Thus, we still use EAS for task placement and the CPUs are
> not over-utilized.
>
> Signed-off-by: Lukasz Luba <[email protected]>
> ---
> kernel/sched/fair.c | 12 +++++++++---
> 1 file changed, 9 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 161b92aa1c79..237726217dad 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6527,8 +6527,12 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
> struct cpumask *pd_mask = perf_domain_span(pd);
> unsigned long cpu_cap = arch_scale_cpu_capacity(cpumask_first(pd_mask));
> unsigned long max_util = 0, sum_util = 0;
> + unsigned long _cpu_cap, thermal_pressure;
> int cpu;
>
> + thermal_pressure = arch_scale_thermal_pressure(cpumask_first(pd_mask));

Do you really need to use this intermediate variable thermal_pressure?
It seems to be used only below.

With these 2 comments above fixed,

Reviewed-by: Vincent Guittot <[email protected]>

> + _cpu_cap = cpu_cap - thermal_pressure;
> +
> /*
> * The capacity state of CPUs of the current rd can be driven by CPUs
> * of another rd if they belong to the same pd. So, account for the
> @@ -6564,8 +6568,10 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
> * is already enough to scale the EM reported power
> * consumption at the (eventually clamped) cpu_capacity.
> */
> - sum_util += effective_cpu_util(cpu, util_running, cpu_cap,
> - ENERGY_UTIL, NULL);
> + cpu_util = effective_cpu_util(cpu, util_running, cpu_cap,
> + ENERGY_UTIL, NULL);
> +
> + sum_util += min(cpu_util, _cpu_cap);
>
> /*
> * Performance domain frequency: utilization clamping
> @@ -6576,7 +6582,7 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
> */
> cpu_util = effective_cpu_util(cpu, util_freq, cpu_cap,
> FREQUENCY_UTIL, tsk);
> - max_util = max(max_util, cpu_util);
> + max_util = max(max_util, min(cpu_util, _cpu_cap));
> }
>
> return em_cpu_energy(pd->em_pd, max_util, sum_util);
> --
> 2.17.1
>

2021-06-14 18:26:24

by Lukasz Luba

Subject: Re: [PATCH v3 2/3] sched/fair: Take thermal pressure into account while estimating energy



On 6/14/21 5:03 PM, Vincent Guittot wrote:
> On Thu, 10 Jun 2021 at 17:03, Lukasz Luba <[email protected]> wrote:

[snip]

>> In the existing code, the raw value of arch_scale_cpu_capacity() is used
>> for clamping the CPU utilization returned from effective_cpu_util().
>> This patch fixes the issue of a too large single CPU utilization by
>> clamping it to the allowed CPU capacity. The allowed CPU capacity is the
>> CPU capacity reduced by the thermal pressure signal. We rely on this load avg
>
> you don't rely on load avg value but on raw thermal pressure value now

Good catch, I'll change that description.

>
>> geometric series in a similar way to other mechanisms in the scheduler.
>>

[snip]

>>
>> + thermal_pressure = arch_scale_thermal_pressure(cpumask_first(pd_mask));
>
> Do you really need to use this intermediate variable thermal_pressure?
> It seems to be used only below.

True, it's used only here. I'll remove this variable in v4.

>
> With these 2 comments above fixed,
>
> Reviewed-by: Vincent Guittot <[email protected]>

Thank you for the review!

Regards,
Lukasz