2021-06-04 08:11:29

by Lukasz Luba

Subject: [PATCH v2 0/2] Add allowed CPU capacity knowledge to EAS

Hi all,

This v2 patch set aims to add knowledge about reduced CPU capacity to
the Energy Model (EM) and the Energy Aware Scheduler (EAS). Currently
the issue is that the SchedUtil CPU frequency and the EM frequency are
not aligned when there is CPU thermal capping, which causes an
estimation error. This patch set feeds the allowed CPU capacity into
the EM (thanks to the thermal pressure signal), which improves the
energy estimation. More info about this mechanism can be found in the
patch descriptions.

Changelog:
v2:
- clamp the returned value from effective_cpu_util() and avoid irq
util scaling issues (Quentin)
v1 is available at [1]

Regards,
Lukasz

[1] https://lore.kernel.org/linux-pm/[email protected]/

Lukasz Luba (2):
sched/fair: Take thermal pressure into account while estimating energy
sched/cpufreq: Consider reduced CPU capacity in energy calculation

include/linux/energy_model.h | 16 +++++++++++++---
include/linux/sched/cpufreq.h | 2 +-
kernel/sched/cpufreq_schedutil.c | 1 +
kernel/sched/fair.c | 19 +++++++++++++++----
4 files changed, 30 insertions(+), 8 deletions(-)

--
2.17.1


2021-06-04 08:13:26

by Lukasz Luba

Subject: [PATCH v2 1/2] sched/fair: Take thermal pressure into account while estimating energy

Energy Aware Scheduling (EAS) needs to be able to predict the frequency
requests made by the SchedUtil governor to properly estimate energy used
in the future. It has to take into account the CPUs' utilization and
forecast the Performance Domain (PD) frequency. There is a corner case
where the max allowed frequency might be reduced due to thermal capping.
SchedUtil is aware of that reduced frequency, so it should be taken into
account in EAS estimations as well.

SchedUtil, as a CPUFreq governor, knows the maximum allowed frequency of
a CPU, thanks to cpufreq_driver_resolve_freq() and internal clamping
to 'policy::max'. SchedUtil is responsible for respecting that upper
limit while setting the frequency through CPUFreq drivers. This effective
frequency is stored internally in 'sugov_policy::next_freq', and EAS has
to predict that value.
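
As a rough illustration (not the actual kernel code, and with the 1.25x
frequency headroom factor left out), the path described above boils down
to something like this:

    /*
     * Simplified sketch of how SchedUtil arrives at the effective
     * frequency; a thermal cap shows up as a lowered 'policy_max'.
     */
    static unsigned int sugov_effective_freq(unsigned long util,
                                             unsigned long max,
                                             unsigned int max_freq,
                                             unsigned int policy_min,
                                             unsigned int policy_max)
    {
            /* raw request derived from utilization, as map_util_freq() does */
            unsigned int freq = max_freq * util / max;

            /* cpufreq_driver_resolve_freq() clamps to the policy limits */
            if (freq < policy_min)
                    freq = policy_min;
            if (freq > policy_max)
                    freq = policy_max;

            /* this is what ends up in 'sugov_policy::next_freq' */
            return freq;
    }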

In the existing code the raw value of arch_scale_cpu_capacity() is used
for clamping the CPU utilization returned from effective_cpu_util().
This patch fixes the issue of an overly large single-CPU utilization by
introducing clamping to the allowed CPU capacity. The allowed CPU
capacity is the CPU capacity reduced by the thermal pressure signal. We
rely on this load average geometric series in a similar way to other
mechanisms in the scheduler.

Thanks to the knowledge of the allowed CPU capacity, we don't get an
overly large value for a single CPU's utilization, which is then added
to the util sum. The util sum is used as a source of information for
estimating the energy of the whole PD. To avoid wrong energy estimation
in EAS (due to a capped frequency), make sure that the calculation of
the util sum is aware of the allowed CPU capacity.
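
A worked example with made-up numbers shows the effect of the clamp:

    unsigned long cpu_cap  = 1024;  /* arch_scale_cpu_capacity() */
    unsigned long therm    = 256;   /* thermal_load_avg(cpu_rq(cpu)) */
    unsigned long _cpu_cap = cpu_cap - therm;  /* allowed capacity: 768 */
    unsigned long cpu_util = 900;   /* effective_cpu_util(..., ENERGY_UTIL, ...) */

    /*
     * 768 is added to the util sum instead of 900: the capped CPU
     * cannot actually deliver more than 768 worth of capacity.
     */
    sum_util += min(cpu_util, _cpu_cap);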

Signed-off-by: Lukasz Luba <[email protected]>
---
kernel/sched/fair.c | 17 ++++++++++++++---
1 file changed, 14 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 161b92aa1c79..1aeddecabc20 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6527,6 +6527,7 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
struct cpumask *pd_mask = perf_domain_span(pd);
unsigned long cpu_cap = arch_scale_cpu_capacity(cpumask_first(pd_mask));
unsigned long max_util = 0, sum_util = 0;
+ unsigned long _cpu_cap = cpu_cap;
int cpu;

/*
@@ -6558,14 +6559,24 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
cpu_util_next(cpu, p, -1) + task_util_est(p);
}

+ /*
+ * Take the thermal pressure from non-idle CPUs. They have
+ * most up-to-date information. For idle CPUs thermal pressure
+ * signal is not updated so often.
+ */
+ if (!idle_cpu(cpu))
+ _cpu_cap = cpu_cap - thermal_load_avg(cpu_rq(cpu));
+
/*
* Busy time computation: utilization clamping is not
* required since the ratio (sum_util / cpu_capacity)
* is already enough to scale the EM reported power
* consumption at the (eventually clamped) cpu_capacity.
*/
- sum_util += effective_cpu_util(cpu, util_running, cpu_cap,
- ENERGY_UTIL, NULL);
+ cpu_util = effective_cpu_util(cpu, util_running, cpu_cap,
+ ENERGY_UTIL, NULL);
+
+ sum_util += min(cpu_util, _cpu_cap);

/*
* Performance domain frequency: utilization clamping
@@ -6576,7 +6587,7 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
*/
cpu_util = effective_cpu_util(cpu, util_freq, cpu_cap,
FREQUENCY_UTIL, tsk);
- max_util = max(max_util, cpu_util);
+ max_util = max(max_util, min(cpu_util, _cpu_cap));
}

return em_cpu_energy(pd->em_pd, max_util, sum_util);
--
2.17.1

2021-06-04 08:14:38

by Lukasz Luba

Subject: [PATCH v2 2/2] sched/cpufreq: Consider reduced CPU capacity in energy calculation

Energy Aware Scheduling (EAS) needs to predict the decisions made by
SchedUtil. map_util_freq() exists to do that.

There are corner cases where the max allowed frequency might be reduced
(due to thermal). SchedUtil, as a CPUFreq governor, is aware of that,
but EAS is not. This patch aims to address that.

SchedUtil stores the effective frequency in the
'sugov_policy::next_freq' field. EAS has to predict that value, which is
the frequency really in use. That value is the result of a call to
cpufreq_driver_resolve_freq(), which clamps to the CPUFreq policy limits.
In the existing code EAS is not able to predict that real frequency,
which leads to energy estimation errors.

To avoid wrong energy estimation in EAS (due to frequency
misprediction), make sure that the step which calculates the Performance
Domain frequency is also aware of the allowed CPU capacity.

Furthermore, modify map_util_freq() to not extend the frequency value.
Instead, use map_util_perf() to extend the util value in both places:
SchedUtil and EAS, but for EAS clamp it to the max allowed CPU capacity.
In the end, we achieve the same desirable behavior in both subsystems
and alignment with regard to the real CPU frequency.
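
A worked example with made-up numbers may help. Assume cap = 1024,
util = 800, max_freq = 2000000 (kHz) and a thermal cap that leaves
allowed_cpu_cap = 768 (i.e. policy->max = 1500000 kHz):

    /* before: headroom applied on the frequency side in EAS */
    freq = (2000000 + (2000000 >> 2)) * 800 / 1024; /* ~1953125 kHz */
    /* EAS predicts a frequency well above the 1500000 kHz cap */

    /* after: headroom applied on the util side, then clamped for EAS */
    util = map_util_perf(800);           /* 800 + (800 >> 2) = 1000 */
    util = min(util, allowed_cpu_cap);   /* 768 */
    freq = 2000000 * 768 / 1024;         /* 1500000 kHz, the real value */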

Signed-off-by: Lukasz Luba <[email protected]>
---
include/linux/energy_model.h | 16 +++++++++++++---
include/linux/sched/cpufreq.h | 2 +-
kernel/sched/cpufreq_schedutil.c | 1 +
kernel/sched/fair.c | 2 +-
4 files changed, 16 insertions(+), 5 deletions(-)

diff --git a/include/linux/energy_model.h b/include/linux/energy_model.h
index 757fc60658fa..3f221dbf5f95 100644
--- a/include/linux/energy_model.h
+++ b/include/linux/energy_model.h
@@ -91,6 +91,8 @@ void em_dev_unregister_perf_domain(struct device *dev);
* @pd : performance domain for which energy has to be estimated
* @max_util : highest utilization among CPUs of the domain
* @sum_util : sum of the utilization of all CPUs in the domain
+ * @allowed_cpu_cap : maximum allowed CPU capacity for the @pd, which
+ * might reflect reduced frequency (due to thermal)
*
* This function must be used only for CPU devices. There is no validation,
* i.e. if the EM is a CPU type and has cpumask allocated. It is called from
@@ -100,7 +102,8 @@ void em_dev_unregister_perf_domain(struct device *dev);
* a capacity state satisfying the max utilization of the domain.
*/
static inline unsigned long em_cpu_energy(struct em_perf_domain *pd,
- unsigned long max_util, unsigned long sum_util)
+ unsigned long max_util, unsigned long sum_util,
+ unsigned long allowed_cpu_cap)
{
unsigned long freq, scale_cpu;
struct em_perf_state *ps;
@@ -112,11 +115,17 @@ static inline unsigned long em_cpu_energy(struct em_perf_domain *pd,
/*
* In order to predict the performance state, map the utilization of
* the most utilized CPU of the performance domain to a requested
- * frequency, like schedutil.
+ * frequency, like schedutil. Take also into account that the real
+ * frequency might be set lower (due to thermal capping). Thus, clamp
+ * max utilization to the allowed CPU capacity before calculating
+ * effective frequency.
*/
cpu = cpumask_first(to_cpumask(pd->cpus));
scale_cpu = arch_scale_cpu_capacity(cpu);
ps = &pd->table[pd->nr_perf_states - 1];
+
+ max_util = map_util_perf(max_util);
+ max_util = min(max_util, allowed_cpu_cap);
freq = map_util_freq(max_util, ps->frequency, scale_cpu);

/*
@@ -209,7 +218,8 @@ static inline struct em_perf_domain *em_pd_get(struct device *dev)
return NULL;
}
static inline unsigned long em_cpu_energy(struct em_perf_domain *pd,
- unsigned long max_util, unsigned long sum_util)
+ unsigned long max_util, unsigned long sum_util,
+ unsigned long allowed_cpu_cap)
{
return 0;
}
diff --git a/include/linux/sched/cpufreq.h b/include/linux/sched/cpufreq.h
index 6205578ab6ee..bdd31ab93bc5 100644
--- a/include/linux/sched/cpufreq.h
+++ b/include/linux/sched/cpufreq.h
@@ -26,7 +26,7 @@ bool cpufreq_this_cpu_can_update(struct cpufreq_policy *policy);
static inline unsigned long map_util_freq(unsigned long util,
unsigned long freq, unsigned long cap)
{
- return (freq + (freq >> 2)) * util / cap;
+ return freq * util / cap;
}

static inline unsigned long map_util_perf(unsigned long util)
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 4f09afd2f321..57124614363d 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -151,6 +151,7 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
unsigned int freq = arch_scale_freq_invariant() ?
policy->cpuinfo.max_freq : policy->cur;

+ util = map_util_perf(util);
freq = map_util_freq(util, freq, max);

if (freq == sg_policy->cached_raw_freq && !sg_policy->need_freq_update)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1aeddecabc20..9a79bbd9425b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6590,7 +6590,7 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
max_util = max(max_util, min(cpu_util, _cpu_cap));
}

- return em_cpu_energy(pd->em_pd, max_util, sum_util);
+ return em_cpu_energy(pd->em_pd, max_util, sum_util, _cpu_cap);
}

/*
--
2.17.1

2021-06-09 15:59:05

by Rafael J. Wysocki

Subject: Re: [PATCH v2 2/2] sched/cpufreq: Consider reduced CPU capacity in energy calculation

On Fri, Jun 4, 2021 at 10:10 AM Lukasz Luba <[email protected]> wrote:
>
> [snip]
>
> Signed-off-by: Lukasz Luba <[email protected]>

For the schedutil part

Acked-by: Rafael J. Wysocki <[email protected]>

> [snip]

2021-06-10 08:04:15

by Vincent Guittot

Subject: Re: [PATCH v2 1/2] sched/fair: Take thermal pressure into account while estimating energy

On Fri, 4 Jun 2021 at 10:10, Lukasz Luba <[email protected]> wrote:
>
> [snip]
>
> + /*
> + * Take the thermal pressure from non-idle CPUs. They have
> + * most up-to-date information. For idle CPUs thermal pressure
> + * signal is not updated so often.

What do you mean by "not updated so often"? Do you have a value?

Thermal pressure is updated at the same rate as other PELT values of
an idle CPU. Why is it a problem there?

> + */
> + if (!idle_cpu(cpu))
> + _cpu_cap = cpu_cap - thermal_load_avg(cpu_rq(cpu));
> [snip]

2021-06-10 08:20:36

by Lukasz Luba

Subject: Re: [PATCH v2 2/2] sched/cpufreq: Consider reduced CPU capacity in energy calculation



On 6/9/21 4:01 PM, Rafael J. Wysocki wrote:
> On Fri, Jun 4, 2021 at 10:10 AM Lukasz Luba <[email protected]> wrote:
>>
>> [snip]
>>
>> Signed-off-by: Lukasz Luba <[email protected]>
>
> For the schedutil part
>
> Acked-by: Rafael J. Wysocki <[email protected]>
>


Thank you Rafael!

Regards,
Lukasz

2021-06-10 08:44:47

by Dietmar Eggemann

Subject: Re: [PATCH v2 1/2] sched/fair: Take thermal pressure into account while estimating energy

On 04/06/2021 10:09, Lukasz Luba wrote:
> [snip]
>

So essentially what you want to do is:

Make EAS aware of the frequency clamping schedutil can be faced with:

get_next_freq() -> cpufreq_driver_resolve_freq() ->
clamp_val(target_freq, policy->min, policy->max) (1)

by subtracting CPU's Thermal Pressure (ThPr) signal from the original
CPU capacity `arch_scale_cpu_capacity()` (2).
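
(In code terms, the proposal is roughly

    _cpu_cap = arch_scale_cpu_capacity(cpu) - thermal_load_avg(cpu_rq(cpu));

i.e. the capacity-side mirror of the clamp_val() above.)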

---

Isn't there a conceptual flaw in this design? Let's say we have a
big.LITTLE system with two cpufreq cooling devices and a thermal zone
(something like Hikey 960). To create a ThPr scenario we have to run
stuff on the CPUs (e.g. hackbench (3)).
Eventually cpufreq_set_cur_state() [drivers/thermal/cpufreq_cooling.c]
will set thermal_pressure to `(2) - (2)*freq/policy->cpuinfo.max_freq`
and PELT will provide the ThPr signal via thermal_load_avg().
But to create this scenario, the system will become overutilized
(overutilization is system-wide data: if one CPU is overutilized, the
whole system is), so EAS is disabled (i.e. find_energy_efficient_cpu()
and compute_energy() are not executed).

I can see that there are episodes in which EAS is running and
thermal_load_avg() != 0 but those have to be when (3) has stopped and
you see the ThPr signal just decaying (no accruing of new ThPr). The
cpufreq cooling device can still issue cpufreq_set_cur_state() but only
with decreasing states.
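
(For concreteness, with made-up numbers: a CPU of capacity 1024 whose
policy is capped at half of cpuinfo.max_freq gets

    /* freq = max_freq / 2, so: */
    thermal_pressure = 1024 - (1024 * freq) / max_freq;  /* = 512 */

and thermal_load_avg() then ramps towards 512 through PELT.)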

---

IMHO, a precise description of how you envision the system setup,
incorporating all participating subsystems, would be helpful here.

> [snip]
>
> + /*
> + * Take the thermal pressure from non-idle CPUs. They have
> + * most up-to-date information. For idle CPUs thermal pressure
> + * signal is not updated so often.
> + */
> + if (!idle_cpu(cpu))
> + _cpu_cap = cpu_cap - thermal_load_avg(cpu_rq(cpu));
> +

This one is probably the result of the fact that cpufreq cooling device
sets the ThPr for all CPUs of the policy (Frequency Domain (FD) or
Performance Domain (PD)) but PELT updates are happening per-CPU. And
only !idle CPUs get the update in scheduler_tick().

Looks like thermal_pressure [per_cpu(thermal_pressure, cpu),
drivers/base/arch_topology.c] set by cpufreq_set_cur_state() is always
in sync with policy->max/cpuinfo_max_freq.
So for your use case this instantaneous `signal` is better than the PELT
one. It's precise (no decaying when frequency clamping is already gone)
and you avoid the per-cpu update issue.
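
(A sketch of that hypothetical alternative, for illustration only;
arch_scale_thermal_pressure() is the accessor for
per_cpu(thermal_pressure, cpu):

    /* hypothetical: instantaneous, policy-wide value instead of
     * the per-CPU PELT signal */
    _cpu_cap = cpu_cap - arch_scale_thermal_pressure(cpu);

This returns the same value for every CPU of the PD and reflects the
current cap immediately, with no decay tail.)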

> [snip]

2021-06-10 08:45:14

by Lukasz Luba

Subject: Re: [PATCH v2 1/2] sched/fair: Take thermal pressure into account while estimating energy



On 6/10/21 8:59 AM, Vincent Guittot wrote:
> On Fri, 4 Jun 2021 at 10:10, Lukasz Luba <[email protected]> wrote:
>>
>> [snip]
>>
>> + /*
>> + * Take the thermal pressure from non-idle CPUs. They have
>> + * most up-to-date information. For idle CPUs thermal pressure
>> + * signal is not updated so often.
>
> What do you mean by "not updated so often"? Do you have a value?
>
> Thermal pressure is updated at the same rate as other PELT values of
> an idle CPU. Why is it a problem there?
>


For an idle CPU the value is updated 'remotely' by some other CPU
running nohz_idle_balance(). That goes into
update_blocked_averages() if the flags and checks are OK inside
update_nohz_stats(). Sometimes this is not called
because other_have_blocked() returned false. It can happen for a
long-idle CPU, for which all the signals checked in that function
are 0 [1].

This means we don't pick up the new value stored for the thermal
pressure by the cpufreq_cooling device [2]. We should feed
that value into the 'signal' machinery inside
__update_blocked_others() [3]. Unfortunately, in a corner case there's
a flag (rq->has_blocked_load) which blocks the check of the
raw thermal value and prevents feeding it into the thermal pressure
signal (since it's a long-idle CPU, there is no load) [4].

This has implications for this patch, because I cannot e.g. take the
first CPU from the PD mask and blindly check its thermal pressure,
since it might have been idle for a long time. I don't want to have
two loops, the first just for taking the latest thermal pressure for
the PD. Thus, I want to re-use the existing loop to take the latest
information from a non-idle CPU and use it.

Regards,
Lukasz


[1] https://elixir.bootlin.com/linux/latest/source/kernel/sched/fair.c#L7909
[2]
https://elixir.bootlin.com/linux/latest/source/drivers/thermal/cpufreq_cooling.c#L494
[3] https://elixir.bootlin.com/linux/latest/source/kernel/sched/fair.c#L7958
[4] https://elixir.bootlin.com/linux/latest/source/kernel/sched/fair.c#L8433

2021-06-10 09:08:38

by Lukasz Luba

Subject: Re: [PATCH v2 1/2] sched/fair: Take thermal pressure into account while estimating energy



On 6/10/21 9:42 AM, Dietmar Eggemann wrote:

[snip]

>
> So essentially what you want to do is:
>
> Make EAS aware of the frequency clamping schedutil can be faced with:
>
> get_next_freq() -> cpufreq_driver_resolve_freq() ->
> clamp_val(target_freq, policy->min, policy->max) (1)
>
> by subtracting CPU's Thermal Pressure (ThPr) signal from the original
> CPU capacity `arch_scale_cpu_capacity()` (2).
>
> ---
>
> Isn't there a conceptual flaw in this design? Let's say we have a
> big.LITTLE system with two cpufreq cooling devices and a thermal zone
> (something like Hikey 960). To create a ThPr scenario we have to run
> stuff on the CPUs (e.g. hackbench (3)).
> Eventually cpufreq_set_cur_state() [drivers/thermal/cpufreq_cooling.c]
> will set thermal_pressure to `(2) - (2)*freq/policy->cpuinfo.max_freq`
> and PELT will provide the ThPr signal via thermal_load_avg().
> But to create this scenario, the system will become overutilized
> (overutilization is system-wide data: if one CPU is overutilized, the
> whole system is), so EAS is disabled (i.e. find_energy_efficient_cpu()
> and compute_energy() are not executed).

Not always. It depends on the thermal governor's decision, the workload
and the 'power actors' (in IPA naming convention). Then it depends on
when and how hard you clamp the CPUs. The CPUs don't always have to be
overutilized; they might be only 50-70% utilized, but the GPU reduced
the power budget by 2 Watts, so the CPUs are left with only 1W. That is
still OK for the CPUs, since they are only 'feeding' the GPU with new
'jobs'.

>
> I can see that there are episodes in which EAS is running and
> thermal_load_avg() != 0 but those have to be when (3) has stopped and
> you see the ThPr signal just decaying (no accruing of new ThPr). The
> cpufreq cooling device can still issue cpufreq_set_cur_state() but only
> with decreasing states.

That is true for some CPU-heavy workloads, when no other SoC components
are involved (GPU, DSP, ISP, encoders, etc.). For other workloads, where
the CPUs don't have to do a lot but thermal pressure might still be seen
on them, this patch helps.

>
> ---
>
> IMHO, a precise description of how you envision the system setup,
> incorporating all participating subsystems, would be helpful here.

True. I hope the description above helps to understand the
scenario.

>
>> [snip]
>> + /*
>> + * Take the thermal pressure from non-idle CPUs. They have
>> + * most up-to-date information. For idle CPUs thermal pressure
>> + * signal is not updated so often.
>> + */
>> + if (!idle_cpu(cpu))
>> + _cpu_cap = cpu_cap - thermal_load_avg(cpu_rq(cpu));
>> +
>
> This one is probably the result of the fact that cpufreq cooling device
> sets the ThPr for all CPUs of the policy (Frequency Domain (FD) or
> Performance Domain (PD)) but PELT updates are happening per-CPU. And
> only !idle CPUs get the update in scheduler_tick().
>
> Looks like thermal_pressure [per_cpu(thermal_pressure, cpu),
> drivers/base/arch_topology.c] set by cpufreq_set_cur_state() is always
> in sync with policy->max/cpuinfo_max_freq.
> So for your use case this instantaneous `signal` is better than the PELT
> one. It's precise (no decaying when frequency clamping is already gone)
> and you avoid the per-cpu update issue.

Yes, this code implementation tries to address those issues.

2021-06-10 09:15:47

by Vincent Guittot

Subject: Re: [PATCH v2 1/2] sched/fair: Take thermal pressure into account while estimating energy

On Thu, 10 Jun 2021 at 10:42, Lukasz Luba <[email protected]> wrote:
> [snip]
>
> For idle CPU the value is updated 'remotely' by some other CPU
> running nohz_idle_balance(). That goes into
> update_blocked_averages() if the flags and checks are OK inside
> update_nohz_stats(). Sometimes this is not called
> because other_have_blocked() returned false. It can happen for a long

So I missed that you were in a loop and the below is called for each
CPU, with _cpu_cap being overwritten:

+ if (!idle_cpu(cpu))
+ _cpu_cap = cpu_cap - thermal_load_avg(cpu_rq(cpu));

But that also means that if the first CPUs of the PD are idle, they will
use the original capacity whereas the other ones will subtract the thermal
pressure. Isn't this a problem? You don't use the same capacity for
all CPUs in the performance domain with regard to the thermal pressure.

> [snip]

2021-06-10 09:40:13

by Lukasz Luba

Subject: Re: [PATCH v2 1/2] sched/fair: Take thermal pressure into account while estimating energy



On 6/10/21 10:11 AM, Vincent Guittot wrote:
> On Thu, 10 Jun 2021 at 10:42, Lukasz Luba <[email protected]> wrote:
>> [snip]
>
> So I missed that you were in a loop and the below is called for each
> CPU, with _cpu_cap being overwritten:
>
> + if (!idle_cpu(cpu))
> + _cpu_cap = cpu_cap - thermal_load_avg(cpu_rq(cpu));
>
> But that also means that if the first CPUs of the PD are idle, they will
> use the original capacity whereas the other ones will subtract the thermal
> pressure. Isn't this a problem? You don't use the same capacity for
> all CPUs in the performance domain with regard to the thermal pressure.

True, but in my experiments I haven't observed idle CPUs that
still have a big util (bigger than _cpu_cap). It has decayed
already, so it's not a problem for idle CPUs.

Although, it might be that my test case didn't trigger something.
Is it worth adding a loop above this one, to be 100% sure and
get the thermal pressure signal from some running CPU?
Then apply the same value always inside the 2nd loop?
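
(A rough sketch of that idea, for discussion only:

    /* hypothetical first pass: take thermal pressure from any
     * non-idle CPU of the PD, then use it consistently below */
    unsigned long therm = 0;

    for_each_cpu(cpu, pd_mask) {
            if (!idle_cpu(cpu)) {
                    therm = thermal_load_avg(cpu_rq(cpu));
                    break;
            }
    }
    _cpu_cap = cpu_cap - therm;

so the 2nd loop would apply the same _cpu_cap to every CPU.)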

2021-06-10 09:44:26

by Vincent Guittot

Subject: Re: [PATCH v2 1/2] sched/fair: Take thermal pressure into account while estimating energy

On Thu, 10 Jun 2021 at 11:36, Lukasz Luba <[email protected]> wrote:
> [snip]
>
> True, but in my experiments I haven't observed idle CPUs that
> still have a big util (bigger than _cpu_cap). It has decayed
> already, so it's not a problem for idle CPUs.

But it's a problem because there is random behavior: some idle CPUs
will use the original capacity whereas others will use the capped value
set by non-idle CPUs. You must have consistent behavior across all
idle CPUs.

Then, if it's not a problem, why add the if (!idle_cpu(cpu))?

>
> Although, it might be that my test case didn't trigger something.
> Is it worth adding a loop above this one, to be 100% sure and
> get the thermal pressure signal from some running CPU?
> Then apply the same value always inside the 2nd loop?

2021-06-10 09:46:51

by Vincent Guittot

Subject: Re: [PATCH v2 1/2] sched/fair: Take thermal pressure into account while estimating energy

On Thu, 10 Jun 2021 at 11:36, Lukasz Luba <[email protected]> wrote:
> [snip]
>
> True, but in my experiments I haven't observed idle CPUs that
> still have a big util (bigger than _cpu_cap). It has decayed
> already, so it's not a problem for idle CPUs.
>
> Although, it might be that my test case didn't trigger something.
> Is it worth adding a loop above this one, to be 100% sure and
> get the thermal pressure signal from some running CPU?
> Then apply the same value always inside the 2nd loop?

Either it's a problem, and you must make sure to use the same capacity
for all CPUs of a PD,

or it's not, in which case you don't need the if (!idle_cpu(cpu)).

2021-06-10 09:54:31

by Lukasz Luba

Subject: Re: [PATCH v2 1/2] sched/fair: Take thermal pressure into account while estimating energy



On 6/10/21 10:41 AM, Vincent Guittot wrote:
> On Thu, 10 Jun 2021 at 11:36, Lukasz Luba <[email protected]> wrote:
>>
>>
>>
>> On 6/10/21 10:11 AM, Vincent Guittot wrote:
>>> On Thu, 10 Jun 2021 at 10:42, Lukasz Luba <[email protected]> wrote:
>>>>
>>>>
>>>>
>>>> On 6/10/21 8:59 AM, Vincent Guittot wrote:
>>>>> On Fri, 4 Jun 2021 at 10:10, Lukasz Luba <[email protected]> wrote:
>>>>>>
>>>>>> [snip]
>>>>>> @@ -6558,14 +6559,24 @@ compute_energy(struct task_struct *p, int dst_cpu, struct perf_domain *pd)
>>>>>> cpu_util_next(cpu, p, -1) + task_util_est(p);
>>>>>> }
>>>>>>
>>>>>> + /*
>>>>>> + * Take the thermal pressure from non-idle CPUs. They have
>>>>>> + * most up-to-date information. For idle CPUs thermal pressure
>>>>>> + * signal is not updated so often.
>>>>>
>>>>> What do you mean by "not updated so often"? Do you have a value?
>>>>>
>>>>> Thermal pressure is updated at the same rate as other PELT values of
>>>>> an idle CPU. Why is it a problem there?
>>>>>
>>>>
>>>>
>>>> For an idle CPU the value is updated 'remotely' by some other CPU
>>>> running nohz_idle_balance(). That goes into
>>>> update_blocked_averages() if the flags and checks are OK inside
>>>> update_nohz_stats(). Sometimes this is not called
>>>> because other_have_blocked() returned false. It can happen for a long
>>>
>>> So I missed that you were in a loop and the code below was called for
>>> each CPU, with _cpu_cap being overwritten
>>>
>>> + if (!idle_cpu(cpu))
>>> + _cpu_cap = cpu_cap - thermal_load_avg(cpu_rq(cpu));
>>>
>>> But that also means that if the 1st CPUs of the PD are idle, they will
>>> use the original capacity whereas the other ones will subtract the
>>> thermal pressure. Isn't this a problem? You don't use the same capacity
>>> for all CPUs in the performance domain with regard to the thermal
>>> pressure.
>>
>> True, but in my experiments I haven't observed idle CPUs still having
>> a big utilization (bigger than _cpu_cap). It has already decayed, so
>> it's not a problem for idle CPUs.
>
> But it's a problem, because there is random behavior: some idle CPUs
> will use the original capacity whereas others will use the capped value
> set by non-idle CPUs. You must have consistent behavior across all
> idle CPUs.
>
> Then, if it's not a problem, why add the if (!idle_cpu(cpu))?

To capture the signal value from a running CPU, which I then pass
into em_cpu_energy() in patch 2/2. My apologies for the confusion;
this can be just a local variable for patch 1/2.

I can create _cpu_cap as a local variable inside this loop,
just for this patch. Then in patch 2/2 I will remove it and define
it above the loop, to be available for the call to
em_cpu_energy().
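
I.e. for patch 1/2 alone, roughly (a sketch only, not the final code):

        for_each_cpu_and(cpu, pd_mask, cpu_online_mask) {
                unsigned long _cpu_cap = cpu_cap;
                ...
                if (!idle_cpu(cpu))
                        _cpu_cap = cpu_cap - thermal_load_avg(cpu_rq(cpu));
                ...
        }

and in patch 2/2 the declaration moves back above the loop, so the last
computed value can be handed to em_cpu_energy().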

2021-06-10 10:08:48

by Dietmar Eggemann

[permalink] [raw]
Subject: Re: [PATCH v2 1/2] sched/fair: Take thermal pressure into account while estimating energy

On 10/06/2021 11:04, Lukasz Luba wrote:
>
>
> On 6/10/21 9:42 AM, Dietmar Eggemann wrote:
>
> [snip]
>
>>
>> So essentially what you want to do is:
>>
>> Make EAS aware of the frequency clamping schedutil can be faced with:
>>
>>    get_next_freq() -> cpufreq_driver_resolve_freq() ->
>> clamp_val(target_freq, policy->min, policy->max) (1)
>>
>> by subtracting the CPU's Thermal Pressure (ThPr) signal from the original
>> CPU capacity `arch_scale_cpu_capacity()` (2).
>>
>> ---
>>
>> Isn't there a conceptual flaw in this design? Let's say we have a
>> big.LITTLE system with two cpufreq cooling devices and a thermal zone
>> (something like Hikey 960). To create a ThPr scenario we have to run
>> stuff on the CPUs (e.g. hackbench (3)).
>> Eventually cpufreq_set_cur_state() [drivers/thermal/cpufreq_cooling.c]
>> will set thermal_pressure to `(2) - (2)*freq/policy->cpuinfo.max_freq`
>> and PELT will provide the ThPr signal via thermal_load_avg().
>> But to create this scenario, the system will become overutilized
>> (overutilization is system-wide: if one CPU is overutilized, the whole
>> system is), so EAS is disabled (i.e. find_energy_efficient_cpu() and
>> compute_energy() are not executed).
>
> Not always; it depends on the thermal governor's decision, the workload
> and the 'power actors' (in IPA naming convention). Then it depends on
> when and how hard you clamp the CPUs. The CPUs don't have to be always
> overutilized; they might be even 50-70% utilized, but the GPU reduced
> its power budget by 2 Watts, so the CPUs are left with only 1W. That is
> still OK for the CPUs, since they are only 'feeding' the GPU with new
> 'jobs'.

All this pretty much confines the usefulness of your proposed change. A
precise description of it with the patches is necessary to allow people
to start from there while exploring your patches.

>> I can see that there are episodes in which EAS is running and
>> thermal_load_avg() != 0 but those have to be when (3) has stopped and
>> you see the ThPr signal just decaying (no accruing of new ThPr). The
>> cpufreq cooling device can still issue cpufreq_set_cur_state() but only
>> with decreasing states.
>
> It is true for some CPU-heavy workloads, when no other SoC components
> are involved, like GPU, DSP, ISP, encoders, etc. For other workloads,
> where the CPUs don't have to do a lot but thermal pressure might still
> be seen on them, this patch helps.
>
>>
>> ---
>>
>> IMHO, a precise description of how you envision the system setup,
>> incorporating all participating subsystems, would be helpful here.
>
> True, I hope this description above would help to understand the
> scenario.

This description belongs in the patch header. The scenario in which your
functionality would improve things has to be clear.
I'm sure that not everybody looking at these patches is immediately aware
of how IPA setups work and which specific setup you have in mind here.

>>> [snip]
>>> +        /*
>>> +         * Take the thermal pressure from non-idle CPUs. They have
>>> +         * most up-to-date information. For idle CPUs thermal pressure
>>> +         * signal is not updated so often.
>>> +         */
>>> +        if (!idle_cpu(cpu))
>>> +            _cpu_cap = cpu_cap - thermal_load_avg(cpu_rq(cpu));
>>> +
>>
>> This one is probably the result of the fact that the cpufreq cooling
>> device sets the ThPr for all CPUs of the policy (Frequency Domain (FD)
>> or Performance Domain (PD)) but PELT updates are happening per-CPU. And
>> only !idle CPUs get the update in scheduler_tick().
>>
>> Looks like thermal_pressure [per_cpu(thermal_pressure, cpu),
>> drivers/base/arch_topology.c] set by cpufreq_set_cur_state() is always
>> in sync with policy->max/cpuinfo_max_freq.
>> So for your use case this instantaneous `signal` is better than the PELT
>> one. It's precise (no decaying when the frequency clamping is already
>> gone) and you avoid the per-CPU update issue.
>
> Yes, this code implementation tries to address those issues.

The point I was making here is: why use the PELT signal
thermal_load_avg() and not per_cpu(thermal_pressure, cpu) directly,
given that the latter perfectly represents the frequency clamping?
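
To spell it out (both helpers exist today; just a sketch): e.g. with
cpuinfo.max_freq = 2000 MHz capped to 1500 MHz on a 1024-capacity CPU,
cpufreq_set_cur_state() stores 1024 - 1024*1500/2000 = 256, so:

        /* PELT-averaged signal; once the capping is gone it only
         * decays, halving roughly every 32ms: */
        _cpu_cap = cpu_cap - thermal_load_avg(cpu_rq(cpu));

        /* vs. the instantaneous value, identical for all (also idle)
         * CPUs of the policy and always matching policy->max: */
        _cpu_cap = cpu_cap - arch_scale_thermal_pressure(cpu);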

2021-06-10 10:40:17

by Lukasz Luba

[permalink] [raw]
Subject: Re: [PATCH v2 1/2] sched/fair: Take thermal pressure into account while estimating energy



On 6/10/21 11:07 AM, Dietmar Eggemann wrote:
> On 10/06/2021 11:04, Lukasz Luba wrote:
>>

[snip]

>> Not always; it depends on the thermal governor's decision, the workload
>> and the 'power actors' (in IPA naming convention). Then it depends on
>> when and how hard you clamp the CPUs. The CPUs don't have to be always
>> overutilized; they might be even 50-70% utilized, but the GPU reduced
>> its power budget by 2 Watts, so the CPUs are left with only 1W. That is
>> still OK for the CPUs, since they are only 'feeding' the GPU with new
>> 'jobs'.
>
> All this pretty much confines the usefulness of your proposed change. A
> precise description of it with the patches is necessary to allow people
> to start from there while exploring your patches.

OK, I see your point.

[snip]

>> True, I hope this description above would help to understand the
>> scenario.
>
> This description belongs in the patch header. The scenario in which your
> functionality would improve things has to be clear.
> I'm sure that not everybody looking at these patches is immediately aware
> of how IPA setups work and which specific setup you have in mind here.

Agree. I will add this description into the patch header for v3.

[snip]

>>
>> Yes, this code implementation tries to address those issues.
>
> The point I was making here is: why use the PELT signal
> thermal_load_avg() and not per_cpu(thermal_pressure, cpu) directly,
> given that the latter perfectly represents the frequency clamping?
>

Good question. I wanted to be aligned with other parts of fair.c,
like cpu_capacity() and all its users. There the CPU capacity is
reduced by the RT, DL, IRQ and thermal load avg signals, not by the
'raw' value from the per-CPU variable.

TBH I cannot recall what the argument was back then,
when the thermal pressure geometric series was introduced.
Maybe to have better control over how fast it rises and decays,
so other mechanisms in the scheduler see the change in thermal
pressure as not so sharp... (?)


Vincent, do you remember the motivation for having a geometric series
for thermal pressure, rather than just using the 'raw' value from the
per-CPU variable?

2021-06-10 12:23:03

by Vincent Guittot

[permalink] [raw]
Subject: Re: [PATCH v2 1/2] sched/fair: Take thermal pressure into account while estimating energy

On Thu, 10 Jun 2021 at 12:37, Lukasz Luba <[email protected]> wrote:
>
> [snip]
>
> Vincent, do you remember the motivation for having a geometric series
> for thermal pressure, rather than just using the 'raw' value from the
> per-CPU variable?

In order to have thermal pressure synced with the other metrics used by
the scheduler, like util/rt/dl_avg. As an example, when thermal
pressure decreases because thermal capping is removed, the
utilization will increase at the same pace as the thermal pressure
decreases, so it will not create fake spare cycles. util_avg is the
average expected utilization of the CPU; thermal pressure reflects the
average stolen capacity over the same averaging time scale, but this
can be the result of toggling between several OPPs.

Using the current capping value to make a decision is quite volatile,
as it might have changed by the time you apply your decision.
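
To illustrate the time scale (a rough userspace model using PELT's
y^32 = 0.5 decay, for the exaggerated case where the whole capacity had
been stolen and the CPU stays 100% busy; not the kernel's fixed-point
code):

        #include <stdio.h>
        #include <math.h>

        int main(void)
        {
                /* PELT decay: a signal halves every 32 periods (~32ms) */
                double y = pow(0.5, 1.0 / 32.0);
                int ms;

                /* thermal capping is removed at t=0 */
                for (ms = 0; ms <= 128; ms += 32) {
                        double d = pow(y, ms);
                        double thermal = 1024.0 * d;            /* decaying */
                        double util = 1024.0 * (1.0 - d);       /* rebuilding */

                        printf("t=%3dms thermal=%6.1f util=%6.1f sum=%6.1f\n",
                               ms, thermal, util, thermal + util);
                }
                return 0;
        }

The sum stays at 1024 the whole time, i.e. no fake spare capacity shows
up while the two signals converge.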

2021-06-10 12:34:01

by Lukasz Luba

[permalink] [raw]
Subject: Re: [PATCH v2 1/2] sched/fair: Take thermal pressure into account while estimating energy



On 6/10/21 1:19 PM, Vincent Guittot wrote:
> On Thu, 10 Jun 2021 at 12:37, Lukasz Luba <[email protected]> wrote:
>> [snip]
>>
>> Vincent, do you remember the motivation for having a geometric series
>> for thermal pressure, rather than just using the 'raw' value from the
>> per-CPU variable?
>
> In order to have thermal pressure synced with the other metrics used by
> the scheduler, like util/rt/dl_avg. As an example, when thermal
> pressure decreases because thermal capping is removed, the
> utilization will increase at the same pace as the thermal pressure
> decreases, so it will not create fake spare cycles. util_avg is the
> average expected utilization of the CPU; thermal pressure reflects the
> average stolen capacity over the same averaging time scale, but this
> can be the result of toggling between several OPPs.
>
> Using the current capping value to make a decision is quite volatile,
> as it might have changed by the time you apply your decision.
>

So for this scenario, where we want to just align EAS with SchedUtil's
frequency decision, which is instantaneous and uses the 'raw' capping
value from policy->max, shouldn't we use:

thermal_pressure = arch_scale_thermal_pressure(cpu_id)

instead of the geometric-series thermal pressure signal?

This would avoid the hassle with idle CPUs and their not-updated
thermal signal.
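
The hunk would then become something like (a sketch for v3, not tested):

        for_each_cpu_and(cpu, pd_mask, cpu_online_mask) {
                ...
                /*
                 * The instantaneous thermal pressure is set per policy
                 * by the cpufreq cooling device, so it is the same for
                 * all (also idle) CPUs of the PD and always reflects
                 * the current policy->max.
                 */
                _cpu_cap = cpu_cap - arch_scale_thermal_pressure(cpu);
                ...
        }

without the if (!idle_cpu(cpu)) condition.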

2021-06-10 12:44:14

by Vincent Guittot

[permalink] [raw]
Subject: Re: [PATCH v2 1/2] sched/fair: Take thermal pressure into account while estimating energy

On Thu, 10 Jun 2021 at 14:30, Lukasz Luba <[email protected]> wrote:
>
> [snip]
>
> So for this scenario, where we want to just align EAS with SchedUtil's
> frequency decision, which is instantaneous and uses the 'raw' capping
> value from policy->max, shouldn't we use:
>
> thermal_pressure = arch_scale_thermal_pressure(cpu_id)

Yes, you should probably use arch_scale_thermal_pressure(cpu) instead
of thermal_load_avg(rq) in this case.



>
> instead of the geometric-series thermal pressure signal?
>
> This would avoid the hassle with idle CPUs and their not-updated
> thermal signal.

2021-06-10 12:56:08

by Lukasz Luba

[permalink] [raw]
Subject: Re: [PATCH v2 1/2] sched/fair: Take thermal pressure into account while estimating energy



On 6/10/21 1:40 PM, Vincent Guittot wrote:
> On Thu, 10 Jun 2021 at 14:30, Lukasz Luba <[email protected]> wrote:

[snip]

>>
>> So for this scenario, where we want to just align EAS with SchedUtil's
>> frequency decision, which is instantaneous and uses the 'raw' capping
>> value from policy->max, shouldn't we use:
>>
>> thermal_pressure = arch_scale_thermal_pressure(cpu_id)
>
> Yes, you should probably use arch_scale_thermal_pressure(cpu) instead
> of thermal_load_avg(rq) in this case.
>

Thank you Vincent for the valuable opinions!
I will rewrite it and experiment with a new approach,
then send a v3.

Regards,
Lukasz