2021-06-01 08:38:09

by Dietmar Eggemann

Subject: [PATCH] sched/fair: Return early from update_tg_cfs_load() if delta == 0

In case the _avg delta is 0, there is no need to update se's _avg
(level n) nor cfs_rq's _avg (level n-1). These values stay the same.

Since cfs_rq's _avg isn't changed, i.e. no load is propagated down,
cfs_rq's _sum should stay the same as well.

So bail out after se's _sum has been updated.

Signed-off-by: Dietmar Eggemann <[email protected]>
---

This patch is against current tip/sched/urgent, commit f268c3737eca
("tick/nohz: Only check for RCU deferred wakeup on user/guest entry
when needed").
It needs commit 7c7ad626d9a0 ("sched/fair: Keep load_avg and load_sum
synced").

kernel/sched/fair.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e7c8277e3d54..ce8e0e10e5d4 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3548,9 +3548,12 @@ update_tg_cfs_load(struct cfs_rq *cfs_rq, struct sched_entity *se, struct cfs_rq
load_sum = (s64)se_weight(se) * runnable_sum;
load_avg = div_s64(load_sum, divider);

+ se->avg.load_sum = runnable_sum;
+
delta = load_avg - se->avg.load_avg;
+ if (!delta)
+ return;

- se->avg.load_sum = runnable_sum;
se->avg.load_avg = load_avg;

add_positive(&cfs_rq->avg.load_avg, delta);
--
2.25.1


2021-06-01 14:13:34

by Vincent Guittot

Subject: Re: [PATCH] sched/fair: Return early from update_tg_cfs_load() if delta == 0

On Tue, 1 Jun 2021 at 10:36, Dietmar Eggemann <[email protected]> wrote:
>
> In case the _avg delta is 0, there is no need to update se's _avg
> (level n) nor cfs_rq's _avg (level n-1). These values stay the same.
>
> Since cfs_rq's _avg isn't changed, i.e. no load is propagated down,
> cfs_rq's _sum should stay the same as well.
>
> So bail out after se's _sum has been updated.
>
> Signed-off-by: Dietmar Eggemann <[email protected]>

Reviewed-by: Vincent Guittot <[email protected]>

> ---
>
> This patch is against current tip/sched/urgent, commit f268c3737eca
> ("tick/nohz: Only check for RCU deferred wakeup on user/guest entry
> when needed").
> It needs commit 7c7ad626d9a0 ("sched/fair: Keep load_avg and load_sum
> synced").
>
> kernel/sched/fair.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index e7c8277e3d54..ce8e0e10e5d4 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -3548,9 +3548,12 @@ update_tg_cfs_load(struct cfs_rq *cfs_rq, struct sched_entity *se, struct cfs_rq
> load_sum = (s64)se_weight(se) * runnable_sum;
> load_avg = div_s64(load_sum, divider);
>
> + se->avg.load_sum = runnable_sum;
> +
> delta = load_avg - se->avg.load_avg;
> + if (!delta)
> + return;
>
> - se->avg.load_sum = runnable_sum;
> se->avg.load_avg = load_avg;
>
> add_positive(&cfs_rq->avg.load_avg, delta);
> --
> 2.25.1
>

Subject: [tip: sched/core] sched/fair: Return early from update_tg_cfs_load() if delta == 0

The following commit has been merged into the sched/core branch of tip:

Commit-ID: 83c5e9d573e1f0757f324d01adb6ee77b49c3f0e
Gitweb: https://git.kernel.org/tip/83c5e9d573e1f0757f324d01adb6ee77b49c3f0e
Author: Dietmar Eggemann <[email protected]>
AuthorDate: Tue, 01 Jun 2021 10:36:16 +02:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Thu, 17 Jun 2021 14:11:42 +02:00

sched/fair: Return early from update_tg_cfs_load() if delta == 0

In case the _avg delta is 0, there is no need to update se's _avg
(level n) nor cfs_rq's _avg (level n-1). These values stay the same.

Since cfs_rq's _avg isn't changed, i.e. no load is propagated down,
cfs_rq's _sum should stay the same as well.

So bail out after se's _sum has been updated.

Signed-off-by: Dietmar Eggemann <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Vincent Guittot <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
kernel/sched/fair.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 198514d..06c8ba7 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3502,9 +3502,12 @@ update_tg_cfs_load(struct cfs_rq *cfs_rq, struct sched_entity *se, struct cfs_rq
load_sum = (s64)se_weight(se) * runnable_sum;
load_avg = div_s64(load_sum, divider);

+ se->avg.load_sum = runnable_sum;
+
delta = load_avg - se->avg.load_avg;
+ if (!delta)
+ return;

- se->avg.load_sum = runnable_sum;
se->avg.load_avg = load_avg;

add_positive(&cfs_rq->avg.load_avg, delta);