2024-02-04 04:47:16

by David Vernet

Subject: [PATCH v2 2/3] sched/fair: Do strict inequality check for busiest misfit task group

In update_sd_pick_busiest(), when comparing two sched groups that are
both of type group_misfit_task, we currently consider the new group as
busier than the current busiest group even if the new group has the
same misfit task load as the current busiest group. We can avoid some
unnecessary writes if we instead only consider the new group to be
the busiest when it has a strictly higher load than the current busiest. This
matches the behavior of other group types where we compare load, such as
two groups that are both overloaded.

Let's update the group_misfit_task type comparison to also only update
the busiest group in the event of strict inequality.

Signed-off-by: David Vernet <[email protected]>
---
kernel/sched/fair.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e7519ea434b1..76d03106040d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -10028,7 +10028,7 @@ static bool update_sd_pick_busiest(struct lb_env *env,
* If we have more than one misfit sg go with the biggest
* misfit.
*/
- if (sgs->group_misfit_task_load < busiest->group_misfit_task_load)
+ if (sgs->group_misfit_task_load <= busiest->group_misfit_task_load)
return false;
break;

--
2.43.0
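
A minimal userspace sketch of the comparison semantics the patch changes
(pick_busier_misfit() is a made-up illustrative helper, not kernel code):
with the strict inequality, a candidate group whose misfit load merely ties
the current busiest is rejected, so the caller skips the redundant update of
its "busiest" bookkeeping.

#include <stdbool.h>
#include <stdio.h>

/*
 * Illustrative only: models the group_misfit_task branch of
 * update_sd_pick_busiest(). With "<=", a candidate whose misfit load
 * only equals the current busiest does not replace it; only a strictly
 * bigger misfit load wins.
 */
static bool pick_busier_misfit(unsigned long new_load,
			       unsigned long busiest_load)
{
	if (new_load <= busiest_load)	/* was "<" before the patch */
		return false;		/* keep the current busiest */
	return true;			/* strictly bigger misfit wins */
}

int main(void)
{
	printf("%d\n", pick_busier_misfit(1024, 1024));	/* 0: tie, no update */
	printf("%d\n", pick_busier_misfit(2048, 1024));	/* 1: strictly bigger */
	return 0;
}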



2024-02-04 11:46:14

by Vincent Guittot

Subject: Re: [PATCH v2 2/3] sched/fair: Do strict inequality check for busiest misfit task group

On Sun, 4 Feb 2024 at 05:46, David Vernet <[email protected]> wrote:
>
> In update_sd_pick_busiest(), when comparing two sched groups that are
> both of type group_misfit_task, we currently consider the new group as
> busier than the current busiest group even if the new group has the
> same misfit task load as the current busiest group. We can avoid some
> unnecessary writes if we instead only consider the new group to be
> the busiest when it has a strictly higher load than the current busiest. This
> matches the behavior of other group types where we compare load, such as
> two groups that are both overloaded.
>
> Let's update the group_misfit_task type comparison to also only update
> the busiest group in the event of strict inequality.

fair enough

Reviewed-by: Vincent Guittot <[email protected]>

>
> Signed-off-by: David Vernet <[email protected]>
> ---
> kernel/sched/fair.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index e7519ea434b1..76d03106040d 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -10028,7 +10028,7 @@ static bool update_sd_pick_busiest(struct lb_env *env,
> * If we have more than one misfit sg go with the biggest
> * misfit.
> */
> - if (sgs->group_misfit_task_load < busiest->group_misfit_task_load)
> + if (sgs->group_misfit_task_load <= busiest->group_misfit_task_load)
> return false;
> break;
>
> --
> 2.43.0
>