2024-02-06 04:39:54

by David Vernet

Subject: [PATCH v3 0/3] sched/fair: Simplify and optimize update_sd_pick_busiest()

update_sd_pick_busiest() (and its caller) has some room for small
optimizations and some readability improvements.

- In update_sd_lb_stats(), we're using a goto to skip a single if check.
Let's remove the goto and just add another condition to the if.

- In update_sd_pick_busiest(), only update a group_misfit_task group to
  be the busiest if it has strictly more misfit task load than the
  current busiest group, rather than a load that is >=.

- When comparing the current struct sched_group with the yet-busiest
  group in update_sd_pick_busiest(), if the two groups have the same
  group type, we're currently doing a bit of unnecessary work for any
  group >= group_misfit_task. We're comparing the two groups, and then
  returning only if false (the group in question is not the busiest).
  Otherwise, we break out, do an extra unnecessary conditional check
  that's vacuously false for any group type > group_fully_busy, and then
  always return true. This patch series has us instead simply return
  directly in the switch statement, saving some bytes in load_balance().
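
To make that last change concrete (the full diff is in patch 3), each
case goes from compare-then-break to a direct return. For example, for
group_overloaded:

	Before:
		if (sgs->avg_load <= busiest->avg_load)
			return false;
		break;

	After:
		return sgs->avg_load > busiest->avg_load;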

---

v1: https://lore.kernel.org/lkml/[email protected]/
v2: https://lore.kernel.org/all/[email protected]/

v2 -> v3:
- Add Vincent's Reviewed-by tags
- Fix stale commit summary sentence (Vincent)

v1 -> v2 changes:

- Split the patch series into separate patches (Valentin)
- Update the group_misfit_task busiest check to use strict inequality

David Vernet (3):
sched/fair: Remove unnecessary goto in update_sd_lb_stats()
sched/fair: Do strict inequality check for busiest misfit task group
sched/fair: Simplify some logic in update_sd_pick_busiest()

kernel/sched/fair.c | 19 ++++---------------
1 file changed, 4 insertions(+), 15 deletions(-)

--
2.43.0



2024-02-06 04:40:02

by David Vernet

Subject: [PATCH v3 2/3] sched/fair: Do strict inequality check for busiest misfit task group

In update_sd_pick_busiest(), when comparing two sched groups that are
both of type group_misfit_task, we currently consider the new group as
busier than the current busiest group even if the new group has the
same misfit task load as the current busiest group. We can avoid some
unnecessary writes if we instead only consider the new group to be
the busiest if it has a higher load than the current busiest. This
matches the behavior of other group types where we compare load, such as
two groups that are both overloaded.

Let's update the group_misfit_task type comparison to also only update
the busiest group in the event of strict inequality.
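
Concretely, with strict inequality, a sched group whose misfit task
load merely ties the current busiest now returns false from
update_sd_pick_busiest(), so the caller skips the following writes
(condensed from the caller in update_sd_lb_stats(); see patch 1 for
the surrounding change):

	if (!local_group && update_sd_pick_busiest(env, sds, sg, sgs)) {
		sds->busiest = sg;
		sds->busiest_stat = *sgs;	/* full struct copy skipped on a tie */
	}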

Reviewed-by: Vincent Guittot <[email protected]>
Signed-off-by: David Vernet <[email protected]>
---
kernel/sched/fair.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e7519ea434b1..76d03106040d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -10028,7 +10028,7 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 		 * If we have more than one misfit sg go with the biggest
 		 * misfit.
 		 */
-		if (sgs->group_misfit_task_load < busiest->group_misfit_task_load)
+		if (sgs->group_misfit_task_load <= busiest->group_misfit_task_load)
 			return false;
 		break;

--
2.43.0


2024-02-06 04:40:24

by David Vernet

Subject: [PATCH v3 3/3] sched/fair: Simplify some logic in update_sd_pick_busiest()

When comparing the current struct sched_group with the yet-busiest
group in update_sd_pick_busiest(), if the two groups have the same
group type, we're currently doing a bit of unnecessary work for any
group >= group_misfit_task. We're comparing the two groups, and then
returning only if false (the group in question is not the busiest).
Otherwise, we break out, do an extra unnecessary conditional check that's
vacuously false for any group type > group_fully_busy, and then always
return true.
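
For reference, the tail that each such break falls through to looks
roughly like the below (paraphrased from the surrounding code in
kernel/sched/fair.c; the exact comment and conditions may vary by
version). For any group type > group_fully_busy the group_type check
makes the whole condition vacuously false, so we always end up at
return true:

	if ((env->sd->flags & SD_ASYM_CPUCAPACITY) &&
	    (sgs->group_type <= group_fully_busy) &&
	    (capacity_greater(sg->group_capacity, capacity_of(env->dst_cpu))))
		return false;

	return true;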

Let's just return directly in the switch statement instead. This doesn't
change the size of vmlinux with llvm 17 (not surprising given that all
of this is inlined in load_balance()), but it does shrink load_balance()
by 88 bytes on x86. Given that it also improves readability, this seems
worth doing.
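
(Not part of the patch itself, but for anyone wanting to reproduce the
measurement: the kernel's scripts/bloat-o-meter can diff per-symbol
sizes between two builds, e.g.

	$ ./scripts/bloat-o-meter vmlinux.before vmlinux.after

though the exact deltas will depend on compiler and config.)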

Reviewed-by: Vincent Guittot <[email protected]>
Reviewed-by: Valentin Schneider <[email protected]>
Signed-off-by: David Vernet <[email protected]>
---
kernel/sched/fair.c | 12 +++---------
1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 76d03106040d..fa049f866461 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -10006,9 +10006,7 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 	switch (sgs->group_type) {
 	case group_overloaded:
 		/* Select the overloaded group with highest avg_load. */
-		if (sgs->avg_load <= busiest->avg_load)
-			return false;
-		break;
+		return sgs->avg_load > busiest->avg_load;
 
 	case group_imbalanced:
 		/*
@@ -10019,18 +10017,14 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 
 	case group_asym_packing:
 		/* Prefer to move from lowest priority CPU's work */
-		if (sched_asym_prefer(sg->asym_prefer_cpu, sds->busiest->asym_prefer_cpu))
-			return false;
-		break;
+		return sched_asym_prefer(sds->busiest->asym_prefer_cpu, sg->asym_prefer_cpu);
 
 	case group_misfit_task:
 		/*
 		 * If we have more than one misfit sg go with the biggest
 		 * misfit.
 		 */
-		if (sgs->group_misfit_task_load <= busiest->group_misfit_task_load)
-			return false;
-		break;
+		return sgs->group_misfit_task_load > busiest->group_misfit_task_load;
 
 	case group_smt_balance:
 		/*
--
2.43.0


2024-02-06 04:42:23

by David Vernet

Subject: [PATCH v3 1/3] sched/fair: Remove unnecessary goto in update_sd_lb_stats()

In update_sd_lb_stats(), when we're iterating over the sched groups that
comprise a sched domain, we're skipping the call to
update_sd_pick_busiest() for the sched group that contains the local /
destination CPU. We use a goto to skip the call, but we could just as
easily check !local_group, as there's no other logic that we need to
skip with the goto. Let's remove the goto, and check for !local_group in
the if statement instead.

Reviewed-by: Vincent Guittot <[email protected]>
Reviewed-by: Valentin Schneider <[email protected]>
Signed-off-by: David Vernet <[email protected]>
---
kernel/sched/fair.c | 7 +------
1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index b803030c3a03..e7519ea434b1 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -10578,16 +10578,11 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
 
 		update_sg_lb_stats(env, sds, sg, sgs, &sg_status);
 
-		if (local_group)
-			goto next_group;
-
-
-		if (update_sd_pick_busiest(env, sds, sg, sgs)) {
+		if (!local_group && update_sd_pick_busiest(env, sds, sg, sgs)) {
 			sds->busiest = sg;
 			sds->busiest_stat = *sgs;
 		}
 
-next_group:
 		/* Now, start updating sd_lb_stats */
 		sds->total_load += sgs->group_load;
 		sds->total_capacity += sgs->group_capacity;
--
2.43.0


2024-02-06 13:28:18

by Valentin Schneider

Subject: Re: [PATCH v3 2/3] sched/fair: Do strict inequality check for busiest misfit task group

On 05/02/24 22:39, David Vernet wrote:
> In update_sd_pick_busiest(), when comparing two sched groups that are
> both of type group_misfit_task, we currently consider the new group as
> busier than the current busiest group even if the new group has the
> same misfit task load as the current busiest group. We can avoid some
> unnecessary writes if we instead only consider the new group to be
> the busiest if it has a higher load than the current busiest. This
> matches the behavior of other group types where we compare load, such as
> two groups that are both overloaded.
>
> Let's update the group_misfit_task type comparison to also only update
> the busiest group in the event of strict inequality.
>
> Reviewed-by: Vincent Guittot <[email protected]>

Reviewed-by: Valentin Schneider <[email protected]>


2024-02-16 19:49:42

by David Vernet

Subject: Re: [PATCH v3 0/3] sched/fair: Simplify and optimize update_sd_pick_busiest()

On Mon, Feb 05, 2024 at 10:39:18PM -0600, David Vernet wrote:
> update_sd_pick_busiest() (and its caller) has some room for small
> optimizations and some readability improvements.

Hello Peter, hello Ingo,

Friendly ping. Is there anything else required for this to land?

Thanks,
David

>
> - In update_sd_lb_stats(), we're using a goto to skip a single if check.
> Let's remove the goto and just add another condition to the if.
>
> - In update_sd_pick_busiest(), only update a group_misfit_task group to
>   be the busiest if it has strictly more misfit task load than the
>   current busiest group, rather than a load that is >=.
>
> - When comparing the current struct sched_group with the yet-busiest
>   group in update_sd_pick_busiest(), if the two groups have the same
>   group type, we're currently doing a bit of unnecessary work for any
>   group >= group_misfit_task. We're comparing the two groups, and then
>   returning only if false (the group in question is not the busiest).
>   Otherwise, we break out, do an extra unnecessary conditional check
>   that's vacuously false for any group type > group_fully_busy, and
>   then always return true. This patch series has us instead simply
>   return directly in the switch statement, saving some bytes in
>   load_balance().
>
> ---
>
> v1: https://lore.kernel.org/lkml/[email protected]/
> v2: https://lore.kernel.org/all/[email protected]/
>
> v2 -> v3:
> - Add Vincent's Reviewed-by tags
> - Fix stale commit summary sentence (Vincent)
>
> v1 -> v2 changes:
>
> - Split the patch series into separate patches (Valentin)
> - Update the group_misfit_task busiest check to use strict inequality
>
> David Vernet (3):
> sched/fair: Remove unnecessary goto in update_sd_lb_stats()
> sched/fair: Do strict inequality check for busiest misfit task group
> sched/fair: Simplify some logic in update_sd_pick_busiest()
>
> kernel/sched/fair.c | 19 ++++---------------
> 1 file changed, 4 insertions(+), 15 deletions(-)
>
> --
> 2.43.0
>



2024-02-26 17:51:31

by David Vernet

Subject: Re: [PATCH v3 0/3] sched/fair: Simplify and optimize update_sd_pick_busiest()

On Fri, Feb 16, 2024 at 01:44:40PM -0600, David Vernet wrote:
> Hello Peter, hello Ingo,
>
> Friendly ping. Is there anything else required for this to land?

Hello,

Sending another ping.

Thanks,
David

>
> Thanks,
> David
>
> >
> > - In update_sd_lb_stats(), we're using a goto to skip a single if check.
> > Let's remove the goto and just add another condition to the if.
> >
> > - In update_sd_pick_busiest(), only update a group_misfit_task group to
> >   be the busiest if it has strictly more misfit task load than the
> >   current busiest group, rather than a load that is >=.
> >
> > - When comparing the current struct sched_group with the yet-busiest
> >   group in update_sd_pick_busiest(), if the two groups have the same
> >   group type, we're currently doing a bit of unnecessary work for any
> >   group >= group_misfit_task. We're comparing the two groups, and then
> >   returning only if false (the group in question is not the busiest).
> >   Otherwise, we break out, do an extra unnecessary conditional check
> >   that's vacuously false for any group type > group_fully_busy, and
> >   then always return true. This patch series has us instead simply
> >   return directly in the switch statement, saving some bytes in
> >   load_balance().
> >
> > ---
> >
> > v1: https://lore.kernel.org/lkml/[email protected]/
> > v2: https://lore.kernel.org/all/[email protected]/
> >
> > v2 -> v3:
> > - Add Vincent's Reviewed-by tags
> > - Fix stale commit summary sentence (Vincent)
> >
> > v1 -> v2 changes:
> >
> > - Split the patch series into separate patches (Valentin)
> > - Update the group_misfit_task busiest check to use strict inequality
> >
> > David Vernet (3):
> > sched/fair: Remove unnecessary goto in update_sd_lb_stats()
> > sched/fair: Do strict inequality check for busiest misfit task group
> > sched/fair: Simplify some logic in update_sd_pick_busiest()
> >
> > kernel/sched/fair.c | 19 ++++---------------
> > 1 file changed, 4 insertions(+), 15 deletions(-)
> >
> > --
> > 2.43.0
> >



Subject: [tip: sched/core] sched/fair: Remove unnecessary goto in update_sd_lb_stats()

The following commit has been merged into the sched/core branch of tip:

Commit-ID: 9dfbc26d27aaf0f5958c5972188f16fe977e5af5
Gitweb: https://git.kernel.org/tip/9dfbc26d27aaf0f5958c5972188f16fe977e5af5
Author: David Vernet <[email protected]>
AuthorDate: Mon, 05 Feb 2024 22:39:19 -06:00
Committer: Ingo Molnar <[email protected]>
CommitterDate: Wed, 28 Feb 2024 15:19:23 +01:00

sched/fair: Remove unnecessary goto in update_sd_lb_stats()

In update_sd_lb_stats(), when we're iterating over the sched groups that
comprise a sched domain, we're skipping the call to
update_sd_pick_busiest() for the sched group that contains the local /
destination CPU. We use a goto to skip the call, but we could just as
easily check !local_group, as there's no other logic that we need to
skip with the goto. Let's remove the goto, and check for !local_group in
the if statement instead.

Signed-off-by: David Vernet <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Reviewed-by: Vincent Guittot <[email protected]>
Reviewed-by: Valentin Schneider <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
kernel/sched/fair.c | 7 +------
1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 352222d..41dda53 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -10580,16 +10580,11 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
 
 		update_sg_lb_stats(env, sds, sg, sgs, &sg_status);
 
-		if (local_group)
-			goto next_group;
-
-
-		if (update_sd_pick_busiest(env, sds, sg, sgs)) {
+		if (!local_group && update_sd_pick_busiest(env, sds, sg, sgs)) {
 			sds->busiest = sg;
 			sds->busiest_stat = *sgs;
 		}
 
-next_group:
 		/* Now, start updating sd_lb_stats */
 		sds->total_load += sgs->group_load;
 		sds->total_capacity += sgs->group_capacity;

Subject: [tip: sched/core] sched/fair: Do strict inequality check for busiest misfit task group

The following commit has been merged into the sched/core branch of tip:

Commit-ID: 7f1a7229718d788f26a711374da83adc2689837f
Gitweb: https://git.kernel.org/tip/7f1a7229718d788f26a711374da83adc2689837f
Author: David Vernet <[email protected]>
AuthorDate: Mon, 05 Feb 2024 22:39:20 -06:00
Committer: Ingo Molnar <[email protected]>
CommitterDate: Wed, 28 Feb 2024 15:19:24 +01:00

sched/fair: Do strict inequality check for busiest misfit task group

In update_sd_pick_busiest(), when comparing two sched groups that are
both of type group_misfit_task, we currently consider the new group as
busier than the current busiest group even if the new group has the
same misfit task load as the current busiest group. We can avoid some
unnecessary writes if we instead only consider the new group to be
the busiest if it has a higher load than the current busiest. This
matches the behavior of other group types where we compare load, such as
two groups that are both overloaded.

Let's update the group_misfit_task type comparison to also only update
the busiest group in the event of strict inequality.

Signed-off-by: David Vernet <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Reviewed-by: Vincent Guittot <[email protected]>
Reviewed-by: Valentin Schneider <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
kernel/sched/fair.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 41dda53..448520f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -10032,7 +10032,7 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 		 * If we have more than one misfit sg go with the biggest
 		 * misfit.
 		 */
-		if (sgs->group_misfit_task_load < busiest->group_misfit_task_load)
+		if (sgs->group_misfit_task_load <= busiest->group_misfit_task_load)
 			return false;
 		break;


Subject: [tip: sched/core] sched/fair: Simplify the update_sd_pick_busiest() logic

The following commit has been merged into the sched/core branch of tip:

Commit-ID: 7e9f7d17fe6c23432fda9a3648a858b7589cb4aa
Gitweb: https://git.kernel.org/tip/7e9f7d17fe6c23432fda9a3648a858b7589cb4aa
Author: David Vernet <[email protected]>
AuthorDate: Mon, 05 Feb 2024 22:39:21 -06:00
Committer: Ingo Molnar <[email protected]>
CommitterDate: Wed, 28 Feb 2024 15:19:26 +01:00

sched/fair: Simplify the update_sd_pick_busiest() logic

When comparing the current struct sched_group with the yet-busiest
group in update_sd_pick_busiest(), if the two groups have the same
group type, we're currently doing a bit of unnecessary work for any
group >= group_misfit_task. We're comparing the two groups, and then
returning only if false (the group in question is not the busiest).

Otherwise, we break out, do an extra unnecessary conditional check that's
vacuously false for any group type > group_fully_busy, and then always
return true.

Let's just return directly in the switch statement instead. This doesn't
change the size of vmlinux with llvm 17 (not surprising given that all
of this is inlined in load_balance()), but it does shrink load_balance()
by 88 bytes on x86. Given that it also improves readability, this seems
worth doing.

Signed-off-by: David Vernet <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Reviewed-by: Vincent Guittot <[email protected]>
Reviewed-by: Valentin Schneider <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
---
kernel/sched/fair.c | 12 +++---------
1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 448520f..51fe17f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -10010,9 +10010,7 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 	switch (sgs->group_type) {
 	case group_overloaded:
 		/* Select the overloaded group with highest avg_load. */
-		if (sgs->avg_load <= busiest->avg_load)
-			return false;
-		break;
+		return sgs->avg_load > busiest->avg_load;
 
 	case group_imbalanced:
 		/*
@@ -10023,18 +10021,14 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 
 	case group_asym_packing:
 		/* Prefer to move from lowest priority CPU's work */
-		if (sched_asym_prefer(sg->asym_prefer_cpu, sds->busiest->asym_prefer_cpu))
-			return false;
-		break;
+		return sched_asym_prefer(sds->busiest->asym_prefer_cpu, sg->asym_prefer_cpu);
 
 	case group_misfit_task:
 		/*
 		 * If we have more than one misfit sg go with the biggest
 		 * misfit.
 		 */
-		if (sgs->group_misfit_task_load <= busiest->group_misfit_task_load)
-			return false;
-		break;
+		return sgs->group_misfit_task_load > busiest->group_misfit_task_load;

 	case group_smt_balance:
 		/*