2020-04-08 10:44:09

by Valentin Schneider

Subject: Re: [PATCH 2/4] sched/deadline: Improve admission control for asymmetric CPU capacities


On 08/04/20 10:50, Dietmar Eggemann wrote:
> +++ b/kernel/sched/sched.h
> @@ -304,11 +304,14 @@ void __dl_add(struct dl_bw *dl_b, u64 tsk_bw, int cpus)
> __dl_update(dl_b, -((s32)tsk_bw / cpus));
> }
>
> +static inline unsigned long rd_capacity(int cpu);
> +
> static inline
> -bool __dl_overflow(struct dl_bw *dl_b, int cpus, u64 old_bw, u64 new_bw)
> +bool __dl_overflow(struct dl_bw *dl_b, int cpu, u64 old_bw, u64 new_bw)
> {
> return dl_b->bw != -1 &&
> - dl_b->bw * cpus < dl_b->total_bw - old_bw + new_bw;
> + cap_scale(dl_b->bw, rd_capacity(cpu)) <
> + dl_b->total_bw - old_bw + new_bw;
> }
>

I don't think this is strictly equivalent to what we have now for the SMP
case. 'cpus' used to come from dl_bw_cpus(), which is an ugly way of
writing

cpumask_weight(rd->span AND cpu_active_mask);
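
For reference, dl_bw_cpus() in kernel/sched/deadline.c is roughly the
following (a sketch; the real function also asserts that the sched RCU
read lock is held):

int dl_bw_cpus(int i)
{
        struct root_domain *rd = cpu_rq(i)->rd;
        int cpus = 0;

        /* Count only root-domain CPUs that are also active. */
        for_each_cpu_and(i, rd->span, cpu_active_mask)
                cpus++;

        return cpus;
}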

The rd->cpu_capacity_orig field you added gets set once per domain rebuild,
so while that does happen in sched_cpu_(de)activate(), it is separate from
the update of cpu_active_mask. AFAICT this means we can observe a CPU as
!active but still see its capacity_orig accounted in a root_domain.


> extern void init_dl_bw(struct dl_bw *dl_b);


2020-04-08 13:48:32

by Dietmar Eggemann

Subject: Re: [PATCH 2/4] sched/deadline: Improve admission control for asymmetric CPU capacities

On 08.04.20 12:42, Valentin Schneider wrote:
>
> On 08/04/20 10:50, Dietmar Eggemann wrote:
>> +++ b/kernel/sched/sched.h
>> @@ -304,11 +304,14 @@ void __dl_add(struct dl_bw *dl_b, u64 tsk_bw, int cpus)
>> __dl_update(dl_b, -((s32)tsk_bw / cpus));
>> }
>>
>> +static inline unsigned long rd_capacity(int cpu);
>> +
>> static inline
>> -bool __dl_overflow(struct dl_bw *dl_b, int cpus, u64 old_bw, u64 new_bw)
>> +bool __dl_overflow(struct dl_bw *dl_b, int cpu, u64 old_bw, u64 new_bw)
>> {
>> return dl_b->bw != -1 &&
>> - dl_b->bw * cpus < dl_b->total_bw - old_bw + new_bw;
>> + cap_scale(dl_b->bw, rd_capacity(cpu)) <
>> + dl_b->total_bw - old_bw + new_bw;
>> }
>>
>
> I don't think this is strictly equivalent to what we have now for the SMP
> case. 'cpus' used to come from dl_bw_cpus(), which is an ugly way of
> writing
>
> cpumask_weight(rd->span AND cpu_active_mask);
>
> The rd->cpu_capacity_orig field you added gets set once per domain rebuild,
> so while that does happen in sched_cpu_(de)activate(), it is separate from
> the update of cpu_active_mask. AFAICT this means we can observe a CPU as
> !active but still see its capacity_orig accounted in a root_domain.

I see what you mean.

The

int dl_bw_cpus(int i) {
        ...
        for_each_cpu_and(i, rd->span, cpu_active_mask)
                cpus++;
        ...
}

should be there to handle the 'rd->span ⊄ cpu_active_mask' case.

We could use a similar implementation for s/cpus/capacity:

unsigned long dl_bw_capacity(int i) {
        ...
        for_each_cpu_and(i, rd->span, cpu_active_mask)
                cap += arch_scale_cpu_capacity(i);
        ...
}
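
Fleshed out, that could look something like this (a sketch only, mirroring
the structure of dl_bw_cpus(); names and details are illustrative):

unsigned long dl_bw_capacity(int i)
{
        struct root_domain *rd = cpu_rq(i)->rd;
        unsigned long cap = 0;

        /* Sum original capacities of root-domain CPUs that are active. */
        for_each_cpu_and(i, rd->span, cpu_active_mask)
                cap += arch_scale_cpu_capacity(i);

        return cap;
}

A !active CPU would then stop contributing its capacity to the admission
check, exactly as it stops being counted by dl_bw_cpus().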

[...]

2020-04-08 16:29:45

by luca abeni

Subject: Re: [PATCH 2/4] sched/deadline: Improve admission control for asymmetric CPU capacities

Hi Valentin,

On Wed, 08 Apr 2020 11:42:14 +0100
Valentin Schneider <[email protected]> wrote:

> On 08/04/20 10:50, Dietmar Eggemann wrote:
> > +++ b/kernel/sched/sched.h
> > @@ -304,11 +304,14 @@ void __dl_add(struct dl_bw *dl_b, u64 tsk_bw, int cpus)
> > __dl_update(dl_b, -((s32)tsk_bw / cpus));
> > }
> >
> > +static inline unsigned long rd_capacity(int cpu);
> > +
> > static inline
> > -bool __dl_overflow(struct dl_bw *dl_b, int cpus, u64 old_bw, u64 new_bw)
> > +bool __dl_overflow(struct dl_bw *dl_b, int cpu, u64 old_bw, u64 new_bw)
> > {
> > return dl_b->bw != -1 &&
> > - dl_b->bw * cpus < dl_b->total_bw - old_bw + new_bw;
> > + cap_scale(dl_b->bw, rd_capacity(cpu)) <
> > + dl_b->total_bw - old_bw + new_bw;
> > }
> >
>
> I don't think this is strictly equivalent to what we have now for the
> SMP case. 'cpus' used to come from dl_bw_cpus(), which is an ugly way
> of writing
>
> cpumask_weight(rd->span AND cpu_active_mask);
>
> The rd->cpu_capacity_orig field you added gets set once per domain
> rebuild, so while that does happen in sched_cpu_(de)activate(), it is
> separate from the update of cpu_active_mask. AFAICT this means we can
> observe a CPU as !active but still see its capacity_orig accounted in
> a root_domain.

Sorry, I suspect this is my fault, because the bug comes from my
original patch.
When I wrote the original code, I believed that when a CPU is
deactivated it is also removed from its root domain.

I now see that I was wrong.


Luca