2021-06-04 10:30:11

by Odin Ugedal

Subject: [PATCH v4] sched/fair: Correctly insert cfs_rq's to list on unthrottle

This fixes an issue where fairness is decreased since cfs_rq's can
end up not being decayed properly. For two sibling control groups with
the same priority, this can often lead to a load ratio of 99/1 (!!).

This happens because when a cfs_rq is throttled, all the descendant
cfs_rq's are removed from the leaf list. When the throttled cfs_rq is
unthrottled, it will currently only re-add descendant cfs_rq's if they
have one or more entities enqueued. This is not a sufficient heuristic,
since a cfs_rq with no enqueued entities can still carry blocked load
that needs to be decayed.

Instead, insert all cfs_rq's that contain one or more enqueued
entities, or whose load is not completely decayed.

This can often lead to situations like the following for two equally
weighted control groups:

$ ps u -C stress
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 10009 88.8 0.0 3676 100 pts/1 R+ 11:04 0:13 stress --cpu 1
root 10023 3.0 0.0 3676 104 pts/1 R+ 11:04 0:00 stress --cpu 1
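
For reference, a minimal reproduction sketch (the cgroup layout and
bandwidth values below are illustrative assumptions, not taken from
this patch; it assumes bash, cgroup v2 mounted at /sys/fs/cgroup, the
cpu controller available, and the stress tool installed):

  cd /sys/fs/cgroup
  echo "+cpu" > cgroup.subtree_control
  mkdir slice
  # Give the parent a CPU quota so the whole subtree gets
  # throttled and unthrottled.
  echo "10000 100000" > slice/cpu.max
  echo "+cpu" > slice/cgroup.subtree_control
  mkdir slice/cg1 slice/cg2
  # Two equally weighted siblings; each should get ~50% of the quota.
  echo 100 > slice/cg1/cpu.weight
  echo 100 > slice/cg2/cpu.weight
  (echo $BASHPID > slice/cg1/cgroup.procs; exec stress --cpu 1) &
  (echo $BASHPID > slice/cg2/cgroup.procs; exec stress --cpu 1) &
  # With the bug, "ps u -C stress" can show a heavily skewed %CPU
  # split like the one above.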

Fixes: 31bc6aeaab1d ("sched/fair: Optimize update_blocked_averages()")
Signed-off-by: Odin Ugedal <[email protected]>
---
Changes since v1:
- Replaced cfs_rq field with tg_load_avg_contrib
- Went from 3 to 1 patches; one is merged and one is replaced
by a new patchset.
Changes since v2:
- Use !cfs_rq_is_decayed() instead of tg_load_avg_contrib
- Moved cfs_rq_is_decayed to above its new use
Changes since v3:
- (hopefully) Fix the build for !CONFIG_SMP
kernel/sched/fair.c | 40 +++++++++++++++++++++-------------------
1 file changed, 21 insertions(+), 19 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 794c2cb945f8..eec32f214ff8 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -712,6 +712,25 @@ static u64 sched_vslice(struct cfs_rq *cfs_rq, struct sched_entity *se)
return calc_delta_fair(sched_slice(cfs_rq, se), se);
}

+static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
+{
+ if (cfs_rq->load.weight)
+ return false;
+
+#ifdef CONFIG_SMP
+ if (cfs_rq->avg.load_sum)
+ return false;
+
+ if (cfs_rq->avg.util_sum)
+ return false;
+
+ if (cfs_rq->avg.runnable_sum)
+ return false;
+#endif
+
+ return true;
+}
+
#include "pelt.h"
#ifdef CONFIG_SMP

@@ -4719,8 +4738,8 @@ static int tg_unthrottle_up(struct task_group *tg, void *data)
cfs_rq->throttled_clock_task_time += rq_clock_task(rq) -
cfs_rq->throttled_clock_task;

- /* Add cfs_rq with already running entity in the list */
- if (cfs_rq->nr_running >= 1)
+ /* Add cfs_rq with load or one or more already running entities to the list */
+ if (!cfs_rq_is_decayed(cfs_rq) || cfs_rq->nr_running)
list_add_leaf_cfs_rq(cfs_rq);
}

@@ -7895,23 +7914,6 @@ static bool __update_blocked_others(struct rq *rq, bool *done)

#ifdef CONFIG_FAIR_GROUP_SCHED

-static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
-{
- if (cfs_rq->load.weight)
- return false;
-
- if (cfs_rq->avg.load_sum)
- return false;
-
- if (cfs_rq->avg.util_sum)
- return false;
-
- if (cfs_rq->avg.runnable_sum)
- return false;
-
- return true;
-}
-
static bool __update_blocked_fair(struct rq *rq, bool *done)
{
struct cfs_rq *cfs_rq, *pos;
--
2.31.1


2021-06-07 13:34:03

by Vincent Guittot

Subject: Re: [PATCH v4] sched/fair: Correctly insert cfs_rq's to list on unthrottle

On Fri, 4 Jun 2021 at 12:26, Odin Ugedal <[email protected]> wrote:
>
> This fixes an issue where fairness is decreased since cfs_rq's can
> end up not being decayed properly. For two sibling control groups with
> the same priority, this can often lead to a load ratio of 99/1 (!!).
>
> This happens because when a cfs_rq is throttled, all the descendant
> cfs_rq's are removed from the leaf list. When the throttled cfs_rq is
> unthrottled, it will currently only re-add descendant cfs_rq's if they
> have one or more entities enqueued. This is not a sufficient heuristic,
> since a cfs_rq with no enqueued entities can still carry blocked load
> that needs to be decayed.
>
> Instead, insert all cfs_rq's that contain one or more enqueued
> entities, or whose load is not completely decayed.
>
> This can often lead to situations like the following for two equally
> weighted control groups:
>
> $ ps u -C stress
> USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
> root 10009 88.8 0.0 3676 100 pts/1 R+ 11:04 0:13 stress --cpu 1
> root 10023 3.0 0.0 3676 104 pts/1 R+ 11:04 0:00 stress --cpu 1
>
> Fixes: 31bc6aeaab1d ("sched/fair: Optimize update_blocked_averages()")
> Signed-off-by: Odin Ugedal <[email protected]>
> ---
> Changes since v1:
> - Replaced cfs_rq field with tg_load_avg_contrib
> - Went from 3 to 1 patches; one is merged and one is replaced
> by a new patchset.
> Changes since v2:
> - Use !cfs_rq_is_decayed() instead of tg_load_avg_contrib
> - Moved cfs_rq_is_decayed to above its new use
> Changes since v3:
> - (hopefully) Fix the build for !CONFIG_SMP
> kernel/sched/fair.c | 40 +++++++++++++++++++++-------------------
> 1 file changed, 21 insertions(+), 19 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 794c2cb945f8..eec32f214ff8 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -712,6 +712,25 @@ static u64 sched_vslice(struct cfs_rq *cfs_rq, struct sched_entity *se)
> return calc_delta_fair(sched_slice(cfs_rq, se), se);
> }
>
> +static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)

It's not the best place for this function:
- the pelt.h header file is included below, but cfs_rq_is_decayed() uses PELT
- an #ifdef CONFIG_SMP section already starts a few lines below
- cfs_rq_is_decayed() is only used with CONFIG_FAIR_GROUP_SCHED, and now
with CONFIG_CFS_BANDWIDTH, which depends on the former

So moving cfs_rq_is_decayed() just above update_tg_load_avg(), alongside
the other functions used for propagating and updating the tg load, seems
a better place.
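
Something like the sketch below, for instance (placement only, with the
surrounding code abridged; the !CONFIG_SMP fallback shown is an
assumption that preserves the v4 semantics, not something posted in
this thread):

#ifdef CONFIG_SMP
...
static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
{
	if (cfs_rq->load.weight)
		return false;

	if (cfs_rq->avg.load_sum)
		return false;

	if (cfs_rq->avg.util_sum)
		return false;

	if (cfs_rq->avg.runnable_sum)
		return false;

	return true;
}

/* update_tg_load_avg() and the other tg-load helpers follow here */
...
#else /* !CONFIG_SMP */

/* Assumed fallback preserving the v4 behaviour for !CONFIG_SMP. */
static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
{
	return !cfs_rq->load.weight;
}
...
#endif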

> +{
> + if (cfs_rq->load.weight)
> + return false;
> +
> +#ifdef CONFIG_SMP
> + if (cfs_rq->avg.load_sum)
> + return false;
> +
> + if (cfs_rq->avg.util_sum)
> + return false;
> +
> + if (cfs_rq->avg.runnable_sum)
> + return false;
> +#endif
> +
> + return true;
> +}
> +
> #include "pelt.h"
> #ifdef CONFIG_SMP
>
> @@ -4719,8 +4738,8 @@ static int tg_unthrottle_up(struct task_group *tg, void *data)
> cfs_rq->throttled_clock_task_time += rq_clock_task(rq) -
> cfs_rq->throttled_clock_task;
>
> - /* Add cfs_rq with already running entity in the list */
> - if (cfs_rq->nr_running >= 1)
> + /* Add cfs_rq with load or one or more already running entities to the list */
> + if (!cfs_rq_is_decayed(cfs_rq) || cfs_rq->nr_running)
> list_add_leaf_cfs_rq(cfs_rq);
> }
>
> @@ -7895,23 +7914,6 @@ static bool __update_blocked_others(struct rq *rq, bool *done)
>
> #ifdef CONFIG_FAIR_GROUP_SCHED
>
> -static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
> -{
> - if (cfs_rq->load.weight)
> - return false;
> -
> - if (cfs_rq->avg.load_sum)
> - return false;
> -
> - if (cfs_rq->avg.util_sum)
> - return false;
> -
> - if (cfs_rq->avg.runnable_sum)
> - return false;
> -
> - return true;
> -}
> -
> static bool __update_blocked_fair(struct rq *rq, bool *done)
> {
> struct cfs_rq *cfs_rq, *pos;
> --
> 2.31.1
>

2021-06-07 13:40:35

by Odin Ugedal

Subject: Re: [PATCH v4] sched/fair: Correctly insert cfs_rq's to list on unthrottle

> It's not the best place for this function:
> - the pelt.h header file is included below, but cfs_rq_is_decayed() uses PELT
> - an #ifdef CONFIG_SMP section already starts a few lines below
> - cfs_rq_is_decayed() is only used with CONFIG_FAIR_GROUP_SCHED, and now
> with CONFIG_CFS_BANDWIDTH, which depends on the former
>
> So moving cfs_rq_is_decayed() just above update_tg_load_avg(), alongside
> the other functions used for propagating and updating the tg load, seems
> a better place.

Ack. Looking at it now, your suggestion makes more sense. Will fix it.

Thanks
Odin

2021-06-08 16:44:04

by Michal Koutný

Subject: Re: [PATCH v4] sched/fair: Correctly insert cfs_rq's to list on unthrottle

Hello.

On Fri, Jun 04, 2021 at 12:23:14PM +0200, Odin Ugedal <[email protected]> wrote:

> @@ -4719,8 +4738,8 @@ static int tg_unthrottle_up(struct task_group *tg, void *data)
> cfs_rq->throttled_clock_task_time += rq_clock_task(rq) -
> cfs_rq->throttled_clock_task;
>
> - /* Add cfs_rq with already running entity in the list */
> - if (cfs_rq->nr_running >= 1)
> + /* Add cfs_rq with load or one or more already running entities to the list */
> + if (!cfs_rq_is_decayed(cfs_rq) || cfs_rq->nr_running)
> list_add_leaf_cfs_rq(cfs_rq);
> }

Can there be a decayed cfs_rq with positive nr_running?
I.e. can the condition be simplified to just the decayed check?

(I'm looking at account_entity_enqueue(), but I don't know if an entity's
weight can be zero in some corner cases.)

Thanks,
Michal



2021-06-10 06:54:32

by Vincent Guittot

Subject: Re: [PATCH v4] sched/fair: Correctly insert cfs_rq's to list on unthrottle

On Tue, 8 Jun 2021 at 18:39, Michal Koutný <[email protected]> wrote:
>
> Hello.
>
> On Fri, Jun 04, 2021 at 12:23:14PM +0200, Odin Ugedal <[email protected]> wrote:
>
> > @@ -4719,8 +4738,8 @@ static int tg_unthrottle_up(struct task_group *tg, void *data)
> > cfs_rq->throttled_clock_task_time += rq_clock_task(rq) -
> > cfs_rq->throttled_clock_task;
> >
> > - /* Add cfs_rq with already running entity in the list */
> > - if (cfs_rq->nr_running >= 1)
> > + /* Add cfs_rq with load or one or more already running entities to the list */
> > + if (!cfs_rq_is_decayed(cfs_rq) || cfs_rq->nr_running)
> > list_add_leaf_cfs_rq(cfs_rq);
> > }
>
> Can there be a decayed cfs_rq with positive nr_running?
> I.e. can the condition be simplified to just the decayed check?

Yes, nothing prevents a task with a null load from being enqueued on a
throttled cfs_rq, for example.
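
Spelled out against the new condition (a paraphrase of the reasoning,
not code from the patch):

	/*
	 * Both checks are needed:
	 *
	 * - !cfs_rq_is_decayed(): the cfs_rq may still carry blocked
	 *   load that must keep being decayed, even with nothing
	 *   enqueued on it.
	 *
	 * - nr_running: an entity whose PELT contribution has fully
	 *   decayed (a "null load" task) can still be enqueued on a
	 *   throttled cfs_rq, so the cfs_rq can look decayed while
	 *   having runnable entities.
	 */
	if (!cfs_rq_is_decayed(cfs_rq) || cfs_rq->nr_running)
		list_add_leaf_cfs_rq(cfs_rq);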

>
> (I'm looking at account_entity_enqueue(), but I don't know if an entity's
> weight can be zero in some corner cases.)
>
> Thanks,
> Michal