2021-03-26 10:48:26

by Peter Zijlstra

Subject: [PATCH 9/9] sched,fair: Alternative sched_slice()

The current sched_slice() seems to have issues; there are two things
that could be improved:

- the 'nr_running' used for __sched_period() is daft when cgroups are
considered. Using the RQ-wide h_nr_running is a much more consistent
number.

- (especially) cgroups can slice it really fine, which makes for easy
over-scheduling; ensure min_gran is what the name says.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
---
kernel/sched/fair.c | 15 ++++++++++++++-
kernel/sched/features.h | 3 +++
2 files changed, 17 insertions(+), 1 deletion(-)

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -680,7 +680,16 @@ static u64 __sched_period(unsigned long
*/
static u64 sched_slice(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
- u64 slice = __sched_period(cfs_rq->nr_running + !se->on_rq);
+ unsigned int nr_running = cfs_rq->nr_running;
+ u64 slice;
+
+ if (sched_feat(ALT_PERIOD))
+ nr_running = rq_of(cfs_rq)->cfs.h_nr_running;
+
+ slice = __sched_period(nr_running + !se->on_rq);
+
+ if (sched_feat(BASE_SLICE))
+ slice -= sysctl_sched_min_granularity;

for_each_sched_entity(se) {
struct load_weight *load;
@@ -697,6 +706,10 @@ static u64 sched_slice(struct cfs_rq *cf
}
slice = __calc_delta(slice, se->load.weight, load);
}
+
+ if (sched_feat(BASE_SLICE))
+ slice += sysctl_sched_min_granularity;
+
return slice;
}

--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -90,3 +90,6 @@ SCHED_FEAT(WA_BIAS, true)
*/
SCHED_FEAT(UTIL_EST, true)
SCHED_FEAT(UTIL_EST_FASTUP, true)
+
+SCHED_FEAT(ALT_PERIOD, true)
+SCHED_FEAT(BASE_SLICE, true)



2021-03-26 12:13:04

by Dietmar Eggemann

Subject: Re: [PATCH 9/9] sched,fair: Alternative sched_slice()

On 26/03/2021 11:34, Peter Zijlstra wrote:
> The current sched_slice() seems to have issues; there are two things
> that could be improved:
>
> - the 'nr_running' used for __sched_period() is daft when cgroups are
> considered. Using the RQ-wide h_nr_running is a much more consistent
> number.
>
> - (especially) cgroups can slice it really fine, which makes for easy
> over-scheduling; ensure min_gran is what the name says.

So ALT_PERIOD considers all runnable CFS tasks now and BASE_SLICE
guarantees min_gran as a floor for cgroup (hierarchies) with small
weight value(s)?

>
> Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
> ---
> kernel/sched/fair.c | 15 ++++++++++++++-
> kernel/sched/features.h | 3 +++
> 2 files changed, 17 insertions(+), 1 deletion(-)
>
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -680,7 +680,16 @@ static u64 __sched_period(unsigned long
> */
> static u64 sched_slice(struct cfs_rq *cfs_rq, struct sched_entity *se)
> {
> - u64 slice = __sched_period(cfs_rq->nr_running + !se->on_rq);
> + unsigned int nr_running = cfs_rq->nr_running;
> + u64 slice;
> +
> + if (sched_feat(ALT_PERIOD))
> + nr_running = rq_of(cfs_rq)->cfs.h_nr_running;
> +
> + slice = __sched_period(nr_running + !se->on_rq);
> +
> + if (sched_feat(BASE_SLICE))
> + slice -= sysctl_sched_min_granularity;
>
> for_each_sched_entity(se) {
> struct load_weight *load;
> @@ -697,6 +706,10 @@ static u64 sched_slice(struct cfs_rq *cf
> }
> slice = __calc_delta(slice, se->load.weight, load);
> }
> +
> + if (sched_feat(BASE_SLICE))
> + slice += sysctl_sched_min_granularity;
> +
> return slice;
> }
>
> --- a/kernel/sched/features.h
> +++ b/kernel/sched/features.h
> @@ -90,3 +90,6 @@ SCHED_FEAT(WA_BIAS, true)
> */
> SCHED_FEAT(UTIL_EST, true)
> SCHED_FEAT(UTIL_EST_FASTUP, true)
> +
> +SCHED_FEAT(ALT_PERIOD, true)
> +SCHED_FEAT(BASE_SLICE, true)
>
>

2021-03-26 14:10:23

by Peter Zijlstra

Subject: Re: [PATCH 9/9] sched,fair: Alternative sched_slice()

On Fri, Mar 26, 2021 at 01:08:44PM +0100, Dietmar Eggemann wrote:
> On 26/03/2021 11:34, Peter Zijlstra wrote:
> > The current sched_slice() seems to have issues; there are two things
> > that could be improved:
> >
> > - the 'nr_running' used for __sched_period() is daft when cgroups are
> > considered. Using the RQ-wide h_nr_running is a much more consistent
> > number.
> >
> > - (especially) cgroups can slice it really fine, which makes for easy
> > over-scheduling; ensure min_gran is what the name says.
>
> So ALT_PERIOD considers all runnable CFS tasks now and BASE_SLICE
> guarantees min_gran as a floor for cgroup (hierarchies) with small
> weight value(s)?

Pretty much.

The previous cfs_rq->nr_running is just how many runnable entities there
are in whatever cgroup you happen to be in on that CPU, not counting its
child cgroups nor anything above it. Which is a pretty arbitrary
value.

By always using h_nr_running of the root, we get a consistent number and
the period is the same for all tasks on the CPU.
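
To make that concrete, here is a minimal user-space sketch (not the
kernel code; it only mirrors the __sched_period() logic, assuming the
stock unscaled defaults of sysctl_sched_latency = 6ms,
sysctl_sched_min_granularity = 0.75ms, sched_nr_latency = 8) for a
made-up hierarchy of 2 tasks in the root group plus a child cgroup
running 10 tasks:

#include <stdio.h>

#define NSEC_PER_MSEC 1000000ULL

static const unsigned long long sched_latency  = 6 * NSEC_PER_MSEC;     /* sysctl_sched_latency */
static const unsigned long long sched_min_gran = 3 * NSEC_PER_MSEC / 4; /* sysctl_sched_min_granularity */
static const unsigned long sched_nr_latency = 8;

/* mirrors __sched_period(): stretch the period once nr_running gets large */
static unsigned long long sched_period(unsigned long nr_running)
{
        if (nr_running > sched_nr_latency)
                return nr_running * sched_min_gran;
        return sched_latency;
}

int main(void)
{
        /*
         * The root cfs_rq sees nr_running = 3 (2 tasks + 1 group entity),
         * the child cfs_rq sees nr_running = 10, while the RQ-wide
         * h_nr_running is 12.
         */
        printf("period(root cfs_rq,  nr=3)  = %llu ns\n", sched_period(3));
        printf("period(child cfs_rq, nr=10) = %llu ns\n", sched_period(10));
        printf("period(rq-wide,      nr=12) = %llu ns\n", sched_period(12));
        return 0;
}

Tasks on the same CPU thus spread their slices over a 6ms or a 7.5ms
period depending purely on which group they sit in; with ALT_PERIOD
everybody works from the 9ms RQ-wide period.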

And yes, low-weight cgroups, or even just a nice -20 and a nice 19 task
together, would result in *tiny* slices, which then leads to
over-scheduling. So by scaling only the part between the period and
min_gran, we still get a variable slice but also avoid the worst cases.
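
To put numbers on that: a minimal user-space sketch, with __calc_delta()
approximated by a plain weight ratio, the unscaled 6ms/0.75ms defaults,
and the sched_prio_to_weight[] values for nice -20 (88761) and nice 19
(15):

#include <stdio.h>

#define NSEC_PER_MSEC 1000000ULL

static const unsigned long long period   = 6 * NSEC_PER_MSEC;     /* 2 tasks -> sysctl_sched_latency */
static const unsigned long long min_gran = 3 * NSEC_PER_MSEC / 4; /* sysctl_sched_min_granularity */

/* crude stand-in for __calc_delta(): delta * weight / total weight */
static unsigned long long scale(unsigned long long delta, unsigned long w, unsigned long total)
{
        return delta * w / total;
}

int main(void)
{
        unsigned long w_m20 = 88761, w_19 = 15;  /* sched_prio_to_weight[] for nice -20 / 19 */
        unsigned long total = w_m20 + w_19;

        printf("plain scaling: nice 19 slice = %llu ns\n",
               scale(period, w_19, total));
        printf("BASE_SLICE   : nice 19 slice = %llu ns\n",
               min_gran + scale(period - min_gran, w_19, total));
        return 0;
}

That is roughly 1us versus 0.75ms for the nice 19 task, while the
nice -20 task keeps nearly the whole period either way.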

2021-03-26 15:39:12

by Vincent Guittot

Subject: Re: [PATCH 9/9] sched,fair: Alternative sched_slice()

On Fri, 26 Mar 2021 at 11:43, Peter Zijlstra <[email protected]> wrote:
>
> The current sched_slice() seems to have issues; there are two things
> that could be improved:
>
> - the 'nr_running' used for __sched_period() is daft when cgroups are
> considered. Using the RQ-wide h_nr_running is a much more consistent
> number.
>
> - (especially) cgroups can slice it really fine, which makes for easy
> over-scheduling; ensure min_gran is what the name says.
>
> Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
> ---
> kernel/sched/fair.c | 15 ++++++++++++++-
> kernel/sched/features.h | 3 +++
> 2 files changed, 17 insertions(+), 1 deletion(-)
>
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -680,7 +680,16 @@ static u64 __sched_period(unsigned long
> */
> static u64 sched_slice(struct cfs_rq *cfs_rq, struct sched_entity *se)
> {
> - u64 slice = __sched_period(cfs_rq->nr_running + !se->on_rq);
> + unsigned int nr_running = cfs_rq->nr_running;
> + u64 slice;
> +
> + if (sched_feat(ALT_PERIOD))
> + nr_running = rq_of(cfs_rq)->cfs.h_nr_running;
> +
> + slice = __sched_period(nr_running + !se->on_rq);
> +
> + if (sched_feat(BASE_SLICE))
> + slice -= sysctl_sched_min_granularity;
>
> for_each_sched_entity(se) {
> struct load_weight *load;
> @@ -697,6 +706,10 @@ static u64 sched_slice(struct cfs_rq *cf
> }
> slice = __calc_delta(slice, se->load.weight, load);
> }
> +
> + if (sched_feat(BASE_SLICE))
> + slice += sysctl_sched_min_granularity;

Why not just take the max of the slice and sysctl_sched_min_granularity
instead of scaling only the part above sysctl_sched_min_granularity?

With your change, cases where the slice would already have been in a
good range will be modified as well (see the sketch at the end of this
message).

> +
> return slice;
> }
>
> --- a/kernel/sched/features.h
> +++ b/kernel/sched/features.h
> @@ -90,3 +90,6 @@ SCHED_FEAT(WA_BIAS, true)
> */
> SCHED_FEAT(UTIL_EST, true)
> SCHED_FEAT(UTIL_EST_FASTUP, true)
> +
> +SCHED_FEAT(ALT_PERIOD, true)
> +SCHED_FEAT(BASE_SLICE, true)
>
>

2021-03-26 18:32:27

by Peter Zijlstra

Subject: Re: [PATCH 9/9] sched,fair: Alternative sched_slice()

On Fri, Mar 26, 2021 at 04:37:03PM +0100, Vincent Guittot wrote:
> On Fri, 26 Mar 2021 at 11:43, Peter Zijlstra <[email protected]> wrote:
> >
> > The current sched_slice() seems to have issues; there are two things
> > that could be improved:
> >
> > - the 'nr_running' used for __sched_period() is daft when cgroups are
> > considered. Using the RQ-wide h_nr_running is a much more consistent
> > number.
> >
> > - (especially) cgroups can slice it really fine, which makes for easy
> > over-scheduling; ensure min_gran is what the name says.
> >
> > Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
> > ---
> > kernel/sched/fair.c | 15 ++++++++++++++-
> > kernel/sched/features.h | 3 +++
> > 2 files changed, 17 insertions(+), 1 deletion(-)
> >
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -680,7 +680,16 @@ static u64 __sched_period(unsigned long
> > */
> > static u64 sched_slice(struct cfs_rq *cfs_rq, struct sched_entity *se)
> > {
> > - u64 slice = __sched_period(cfs_rq->nr_running + !se->on_rq);
> > + unsigned int nr_running = cfs_rq->nr_running;
> > + u64 slice;
> > +
> > + if (sched_feat(ALT_PERIOD))
> > + nr_running = rq_of(cfs_rq)->cfs.h_nr_running;
> > +
> > + slice = __sched_period(nr_running + !se->on_rq);
> > +
> > + if (sched_feat(BASE_SLICE))
> > + slice -= sysctl_sched_min_granularity;
> >
> > for_each_sched_entity(se) {
> > struct load_weight *load;
> > @@ -697,6 +706,10 @@ static u64 sched_slice(struct cfs_rq *cf
> > }
> > slice = __calc_delta(slice, se->load.weight, load);
> > }
> > +
> > + if (sched_feat(BASE_SLICE))
> > + slice += sysctl_sched_min_granularity;
>
> Why not just take the max of the slice and sysctl_sched_min_granularity
> instead of scaling only the part above sysctl_sched_min_granularity?
>
> With your change, cases where the slice would already have been in a
> good range will be modified as well

Can do, I suppose. Not sure how I ended up with this.