In the load_balance function, if balancing fails due to
affinity, the parent group's imbalance flag is set to 1.
However, there is a scenario in which balance is eventually
achieved but the imbalance flag remains set to 1; this needs
to be fixed.
The specific trigger scenario is as follows. In the
load_balance function, the first loop picks out the busiest
CPU. While pulling tasks from that CPU, it turns out that
none of its tasks can run on the destination CPU. At this
point, both LBF_SOME_PINNED and LBF_ALL_PINNED are set in
env.flags. Because balancing failed and LBF_SOME_PINNED is
set, the parent group's imbalance flag is set to 1. At the
same time, because LBF_ALL_PINNED is set, the loop re-enters
to select another busiest CPU to pull tasks from. Since the
busiest CPU has changed, tasks can now be pulled to the
destination CPU, so a balanced state may be reached.
But at this point, the parent group's imbalance flag is not
updated again. As a result, load balancing succeeds, yet the
parent group's imbalance flag is still incorrectly set to 1.
The upper-level load balancer will then erroneously select
this domain as the busiest group, thereby breaking the
balance.
Signed-off-by: zgpeng <[email protected]>
---
kernel/sched/fair.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d4bd299..e137917 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -10019,13 +10019,13 @@ static int load_balance(int this_cpu, struct rq *this_rq,
}
/*
- * We failed to reach balance because of affinity.
+ * According to balance status, set group_imbalance correctly.
*/
if (sd_parent) {
int *group_imbalance = &sd_parent->groups->sgc->imbalance;
- if ((env.flags & LBF_SOME_PINNED) && env.imbalance > 0)
- *group_imbalance = 1;
+ if (env.flags & LBF_SOME_PINNED)
+ *group_imbalance = env.imbalance > 0 ? 1 : 0;
}
/* All tasks on this runqueue were pinned by CPU affinity */
--
2.9.5
On Sun, 10 Apr 2022 12:58:36 +0800
zgpeng <[email protected]> wrote:
> The specific trigger scenarios are as follows. In the
> load_balance function, the first loop picks out the busiest
> cpu. During the process of pulling the process from the
> busiest cpu, it is found that all tasks cannot be run on the
> DST cpu. At this time, both LBF_SOME_PINNED and LBF_ALL_PINNED
> of env.flags are set. Because balance fails and LBF_SOME_PINNED
> is set, the parent group's imbalance flag will be set to 1. At
> the same time, because LBF_ALL_PINNED is set, it will re-enter
> the second cycle to select another busiest cpu to pull the process.
> Because the busiest CPU has changed, the process can be pulled to
> DST cpu, so it is possible to reach a balance state.
But is the next CPU selected really the busiest one? Or is the original
still the busiest but without any tasks that can migrate to the dst
CPU? Does it really mean the parent is now unbalanced?
-- Steve
>
> But at this time, the parent group's imbalance flag is not set
> correctly again. As a result, the load balance is successfully
> achieved, but the parent group's imbalance flag is incorrectly
> set to 1. In this case, the upper-layer load balance will
> erroneously select this domain as the busiest group, thereby
> breaking the balance.
On Sun, 10 Apr 2022 at 06:58, zgpeng <[email protected]> wrote:
>
> In the load_balance function, if the balance fails due to
> affinity, then the parent group's imbalance will be set to 1.
> However, there will be a scene where balance is achieved,
> but the imbalance flag is still set to 1, which needs to
> be fixed.
>
> The specific trigger scenarios are as follows. In the
> load_balance function, the first loop picks out the busiest
> cpu. During the process of pulling the process from the
> busiest cpu, it is found that all tasks cannot be run on the
> DST cpu. At this time, both LBF_SOME_PINNED and LBF_ALL_PINNED
> of env.flags are set. Because balance fails and LBF_SOME_PINNED
Shouldn't LBF_DST_PINNED and dst_cpu have been set? And shouldn't the
goto more_balance path clear env.imbalance before we set the group's
imbalance?
> is set, the parent group's imbalance flag will be set to 1. At
> the same time, because LBF_ALL_PINNED is set, it will re-enter
> the second cycle to select another busiest cpu to pull the process.
> Because the busiest CPU has changed, the process can be pulled to
> DST cpu, so it is possible to reach a balance state.
The new load_balance will be done without the previous busiest CPU.
>
> But at this time, the parent group's imbalance flag is not set
> correctly again. As a result, the load balance is successfully
> achieved, but the parent group's imbalance flag is incorrectly
> set to 1. In this case, the upper-layer load balance will
> erroneously select this domain as the busiest group, thereby
> breaking the balance.
>
> Signed-off-by: zgpeng <[email protected]>
> ---
> kernel/sched/fair.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index d4bd299..e137917 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -10019,13 +10019,13 @@ static int load_balance(int this_cpu, struct rq *this_rq,
> }
>
> /*
> - * We failed to reach balance because of affinity.
> + * According to balance status, set group_imbalance correctly.
> */
> if (sd_parent) {
> int *group_imbalance = &sd_parent->groups->sgc->imbalance;
>
> - if ((env.flags & LBF_SOME_PINNED) && env.imbalance > 0)
> - *group_imbalance = 1;
> + if (env.flags & LBF_SOME_PINNED)
> + *group_imbalance = env.imbalance > 0 ? 1 : 0;
> }
>
> /* All tasks on this runqueue were pinned by CPU affinity */
> --
> 2.9.5
>