On 22/08/2023 00:45, Qais Yousef wrote:
> When uclamp_max is being used, the util of the task could be higher
> than the spare capacity of the CPU, but due to the uclamp_max value we
> force-fit it there.
>
> The way the condition for checking max_spare_cap in
> find_energy_efficient_cpu() was constructed, it ignored any CPU whose
> spare_cap is less than or _equal_ to max_spare_cap. Since we initialize
> max_spare_cap to 0, this led to never setting max_spare_cap_cpu and
> hence never performing compute_energy() for this cluster, missing an
> opportunity for a more energy-efficient placement that honours the
> uclamp_max setting.
>
> max_spare_cap = 0;
> cpu_cap = capacity_of(cpu) - task_util(p); // 0 if task_util(p) is high
Nitpick:
s/task_util(p)/cpu_util(cpu, p, cpu, ...)/, which is
max(cpu_util + task_util, cpu_util_est + task_util_est)
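
For reference, a minimal sketch (my own simplification, not the actual
kernel implementation) of the quantity that ends up being subtracted
from capacity_of(cpu) when the task is placed on this CPU; the real
cpu_util() additionally handles the runnable boost, migration
accounting and capacity capping:

	/* Illustrative sketch only, not the kernel's cpu_util(). */
	static unsigned long cpu_util_sketch(unsigned long util_avg,
					     unsigned long util_est,
					     unsigned long task_util,
					     unsigned long task_util_est)
	{
		unsigned long util = util_avg + task_util;
		unsigned long est  = util_est + task_util_est;

		/* max() of the running-average and estimated views */
		return util > est ? util : est;
	}

With a high enough task contribution the resulting spare capacity
bottoms out at 0, which is exactly the case the changelog describes.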
>
> ...
>
> util_fits_cpu(...); // will return true if uclamp_max forces it to fit
>
> ...
>
> // this logic will fail to update max_spare_cap_cpu if cpu_cap is 0
> if (cpu_cap > max_spare_cap) {
>         max_spare_cap = cpu_cap;
>         max_spare_cap_cpu = cpu;
> }
>
> prev_spare_cap suffers from a similar problem.
>
> Fix the logic by converting the variables to long and treating a value
> of -1 as 'not populated' instead of 0, which is a viable and correct
> spare capacity value. We need to be careful that a signed comparison is
> used when comparing with cpu_cap in one of the conditions.
>
> Fixes: 1d42509e475c ("sched/fair: Make EAS wakeup placement consider uclamp restrictions")
> Reviewed-by: Vincent Guittot <[email protected]>
> Signed-off-by: Qais Yousef (Google) <[email protected]>
> ---
> kernel/sched/fair.c | 11 +++++------
> 1 file changed, 5 insertions(+), 6 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 0b7445cd5af9..5da6538ed220 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -7707,11 +7707,10 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
> for (; pd; pd = pd->next) {
> unsigned long util_min = p_util_min, util_max = p_util_max;
> unsigned long cpu_cap, cpu_thermal_cap, util;
> - unsigned long cur_delta, max_spare_cap = 0;
> + long prev_spare_cap = -1, max_spare_cap = -1;
> unsigned long rq_util_min, rq_util_max;
> - unsigned long prev_spare_cap = 0;
> + unsigned long cur_delta, base_energy;
> int max_spare_cap_cpu = -1;
> - unsigned long base_energy;
> int fits, max_fits = -1;
>
> cpumask_and(cpus, perf_domain_span(pd), cpu_online_mask);
> @@ -7774,7 +7773,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
> prev_spare_cap = cpu_cap;
> prev_fits = fits;
> } else if ((fits > max_fits) ||
> - ((fits == max_fits) && (cpu_cap > max_spare_cap))) {
> + ((fits == max_fits) && ((long)cpu_cap > max_spare_cap))) {
> /*
> * Find the CPU with the maximum spare capacity
> * among the remaining CPUs in the performance
> @@ -7786,7 +7785,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
> }
> }
>
> - if (max_spare_cap_cpu < 0 && prev_spare_cap == 0)
> + if (max_spare_cap_cpu < 0 && prev_spare_cap < 0)
> continue;
>
> eenv_pd_busy_time(&eenv, cpus, p);
> @@ -7794,7 +7793,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
> base_energy = compute_energy(&eenv, pd, cpus, p, -1);
>
> /* Evaluate the energy impact of using prev_cpu. */
> - if (prev_spare_cap > 0) {
> + if (prev_spare_cap > -1) {
> prev_delta = compute_energy(&eenv, pd, cpus, p,
> prev_cpu);
> /* CPU utilization has changed */
We still need a solution for situations in which `pd + task
contribution` > `pd_capacity`:
compute_energy()
    if (dst_cpu >= 0)
        busy_time = min(pd_capacity, pd_busy_time + task_busy_time);
                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                                     pd + task contribution
busy_time is based on util (ENERGY_UTIL), not on the uclamp values
(FREQUENCY_UTIL) we try to fit into a PD (and finally onto a CPU).
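
A hypothetical worked example (all numbers invented, purely to
illustrate the mismatch):

	static unsigned long busy_time_example(void)
	{
		unsigned long pd_capacity    = 1024;
		unsigned long pd_busy_time   = 900;	/* ENERGY_UTIL sum of the PD */
		unsigned long task_busy_time = 300;	/* task's ENERGY_UTIL share  */

		/*
		 * util_fits_cpu() may still accept a CPU in this PD because a
		 * uclamp_max of e.g. 512 caps the FREQUENCY_UTIL request, yet
		 * compute_energy() clamps the raw sum at pd_capacity, so the
		 * PD is modelled as fully busy:
		 */
		return min(pd_capacity, pd_busy_time + task_busy_time);	/* 1024 */
	}

(min() here stands for the kernel helper used in the snippet above.)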
With that as a reminder for us, and with the change in the cover letter:
Reviewed-by: Dietmar Eggemann <[email protected]>