Odin reported a fairness problem between cgroups caused by a stale
value in cfs_rq->tg_load_avg_contrib:
https://lkml.org/lkml/2021/5/18/566
Two problems generated this situation:
- 1st: After propagating load in the hierarchy, load_sum can be null
  whereas load_avg isn't, so the cfs_rq is removed even though it
  still contributes to the tg's load.
- 2nd: cfs_rq->tg_load_avg_contrib was not always updated after
  significant changes, like becoming null, because the cfs_rq had
  already been updated when propagating a child's load.
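The 1st problem can be sketched with a toy userspace model (the struct, field
names, and update rule here are simplified stand-ins for illustration, not the
kernel's actual PELT code):

```c
/*
 * Toy model: load_sum and load_avg are distinct fields updated with
 * different arithmetic, so a propagation step can drive load_sum to
 * zero while load_avg stays non-zero. Delisting the cfs_rq based on
 * load_sum alone then strands its contribution in the tg's load.
 */
struct fake_cfs_rq {
	unsigned long load_sum;   /* simplified stand-in for avg.load_sum */
	unsigned long load_avg;   /* simplified stand-in for avg.load_avg */
};

/* Propagate a child's removed load: only the sum is adjusted here. */
static void fake_propagate(struct fake_cfs_rq *cfs, unsigned long delta_sum)
{
	cfs->load_sum = (cfs->load_sum > delta_sum) ? cfs->load_sum - delta_sum : 0;
	/*
	 * load_avg is derived elsewhere with different rounding, so it
	 * is not forced to zero together with load_sum: the two skew.
	 */
}

/* A delist check that looks at only one side of the pair is unsafe. */
static int fake_cfs_rq_is_decayed(const struct fake_cfs_rq *cfs)
{
	return cfs->load_sum == 0;   /* load_avg may still be non-zero */
}
```

Because the two fields are maintained with different arithmetic, checking one
of them alone cannot prove the cfs_rq no longer contributes; hence the first
patch keeps load_avg and load_sum synced.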
Vincent Guittot (2):
sched/fair: keep load_avg and load_sum synced
sched/fair: make sure to update tg contrib for blocked load
kernel/sched/fair.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
--
2.17.1
During the update of fair blocked load (__update_blocked_fair()), we update
the cfs_rq's contribution to tg->load_avg if the cfs_rq's pelt has decayed.
Nevertheless, the pelt values of a cfs_rq could have been recently updated
while propagating the change of a child. In this case, the cfs_rq's pelt will
not decay because it has already been updated, and we don't update
tg->load_avg.
__update_blocked_fair
...
for_each_leaf_cfs_rq_safe: child cfs_rq
update cfs_rq_load_avg() for child cfs_rq
...
update_load_avg(cfs_rq_of(se), se, 0)
...
update cfs_rq_load_avg() for parent cfs_rq
-propagation of the child's load makes parent cfs_rq->load_sum
become null
-UPDATE_TG is not set so it doesn't update the parent
cfs_rq->tg_load_avg_contrib
...
for_each_leaf_cfs_rq_safe: parent cfs_rq
update cfs_rq_load_avg() for parent cfs_rq
- nothing to do because parent cfs_rq has already been updated
recently so cfs_rq->tg_load_avg_contrib is not updated
...
parent cfs_rq is decayed
list_del_leaf_cfs_rq parent cfs_rq
- but it still contributes to tg->load_avg
We must set the UPDATE_TG flag when propagating pending load to the parent.
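As a rough illustration of why the flag matters, the stale-contribution
scenario can be modeled in plain C. Everything prefixed fake_ below is a
simplified stand-in invented for this sketch, not the kernel's implementation:

```c
/*
 * Toy model of the 2nd problem: the update helper only refreshes the
 * cfs_rq's contribution to tg->load_avg when the UPDATE_TG flag is
 * passed. If the propagation path zeroes the load without the flag,
 * the later blocked-load pass sees an already-updated (non-decayed)
 * cfs_rq and never fixes the stale tg contribution.
 */
#define UPDATE_TG 0x1

struct fake_tg { long load_avg; };

struct fake_cfs_rq {
	long load_avg;
	long tg_load_avg_contrib;
	int  already_updated;        /* stands in for "pelt has not decayed" */
	struct fake_tg *tg;
};

/* Simplified update: set new_load, sync the tg contrib only on UPDATE_TG. */
static void fake_update_load_avg(struct fake_cfs_rq *cfs, long new_load, int flags)
{
	cfs->load_avg = new_load;
	cfs->already_updated = 1;

	if (flags & UPDATE_TG) {
		long delta = cfs->load_avg - cfs->tg_load_avg_contrib;

		cfs->tg->load_avg += delta;
		cfs->tg_load_avg_contrib = cfs->load_avg;
	}
}

/* The later blocked-load pass skips cfs_rqs that were already updated. */
static void fake_blocked_load_pass(struct fake_cfs_rq *cfs)
{
	if (cfs->already_updated)
		return;              /* "nothing to do": contrib stays stale */
	/* (decay and tg sync would happen here) */
}
```

Calling fake_update_load_avg() with 0 for flags leaves tg->load_avg stale even
after the blocked-load pass, while passing UPDATE_TG corrects it, which mirrors
the one-line change below.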
Fixes: 039ae8bcf7a5 ("sched/fair: Fix O(nr_cgroups) in the load balancing path")
Reported-by: Odin Ugedal <[email protected]>
Signed-off-by: Vincent Guittot <[email protected]>
---
kernel/sched/fair.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2859545d95fb..dcb3b1a6813c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8048,7 +8048,7 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
/* Propagate pending load changes to the parent, if any: */
se = cfs_rq->tg->se[cpu];
if (se && !skip_blocked_update(se))
- update_load_avg(cfs_rq_of(se), se, 0);
+ update_load_avg(cfs_rq_of(se), se, UPDATE_TG);
/*
* There can be a lot of idle CPU cgroups. Don't let fully
--
2.17.1
Hi Odin,
On Thu, 27 May 2021 at 14:29, Vincent Guittot
<[email protected]> wrote:
>
> Odin reported a fairness problem between cgroups caused by a stale
> value in cfs_rq->tg_load_avg_contrib:
>
> https://lkml.org/lkml/2021/5/18/566
>
>
> Two problems generated this situation:
> - 1st: After propagating load in the hierarchy, load_sum can be null
>   whereas load_avg isn't, so the cfs_rq is removed even though it
>   still contributes to the tg's load.
> - 2nd: cfs_rq->tg_load_avg_contrib was not always updated after
>   significant changes, like becoming null, because the cfs_rq had
>   already been updated when propagating a child's load.
>
This series fixes the problem triggered by your 1st script on my test
machine, but could you confirm that this patchset also fixes the
problem on yours?
Regards,
Vincent
>
> Vincent Guittot (2):
> sched/fair: keep load_avg and load_sum synced
> sched/fair: make sure to update tg contrib for blocked load
>
> kernel/sched/fair.c | 5 +++--
> 1 file changed, 3 insertions(+), 2 deletions(-)
>
> --
> 2.17.1
>
Hi,
Yes, this does the trick!
I have tested this locally and it works as expected, as discussed in
the previous thread (more info there). Together with the patch "[PATCH
2/3] sched/fair: Correctly insert cfs_rq's to list on unthrottle", I
am unable to reproduce the issue locally; so this is good to go.
Feel free to add this to both patches once the warnings on the first
patch are fixed:
Reviewed-by: Odin Ugedal <[email protected]>
Thanks
Odin
The following commit has been merged into the sched/urgent branch of tip:
Commit-ID: 02da26ad5ed6ea8680e5d01f20661439611ed776
Gitweb: https://git.kernel.org/tip/02da26ad5ed6ea8680e5d01f20661439611ed776
Author: Vincent Guittot <[email protected]>
AuthorDate: Thu, 27 May 2021 14:29:16 +02:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Mon, 31 May 2021 10:14:48 +02:00
sched/fair: Make sure to update tg contrib for blocked load
During the update of fair blocked load (__update_blocked_fair()), we
update the cfs_rq's contribution to tg->load_avg if the cfs_rq's pelt
has decayed. Nevertheless, the pelt values of a cfs_rq could have
been recently updated while propagating the change of a child. In this
case, the cfs_rq's pelt will not decay because it has already been
updated, and we don't update tg->load_avg.
__update_blocked_fair
...
for_each_leaf_cfs_rq_safe: child cfs_rq
update cfs_rq_load_avg() for child cfs_rq
...
update_load_avg(cfs_rq_of(se), se, 0)
...
update cfs_rq_load_avg() for parent cfs_rq
-propagation of the child's load makes parent cfs_rq->load_sum
become null
-UPDATE_TG is not set so it doesn't update the parent
cfs_rq->tg_load_avg_contrib
...
for_each_leaf_cfs_rq_safe: parent cfs_rq
update cfs_rq_load_avg() for parent cfs_rq
- nothing to do because parent cfs_rq has already been updated
recently so cfs_rq->tg_load_avg_contrib is not updated
...
parent cfs_rq is decayed
list_del_leaf_cfs_rq parent cfs_rq
- but it still contributes to tg->load_avg
We must set the UPDATE_TG flag when propagating pending load to the parent.
Fixes: 039ae8bcf7a5 ("sched/fair: Fix O(nr_cgroups) in the load balancing path")
Reported-by: Odin Ugedal <[email protected]>
Signed-off-by: Vincent Guittot <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Odin Ugedal <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
---
kernel/sched/fair.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f4795b8..e7c8277 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8029,7 +8029,7 @@ static bool __update_blocked_fair(struct rq *rq, bool *done)
/* Propagate pending load changes to the parent, if any: */
se = cfs_rq->tg->se[cpu];
if (se && !skip_blocked_update(se))
- update_load_avg(cfs_rq_of(se), se, 0);
+ update_load_avg(cfs_rq_of(se), se, UPDATE_TG);
/*
* There can be a lot of idle CPU cgroups. Don't let fully