During our testing, we found a case where shares no longer work
correctly. The cgroup topology is:
/sys/fs/cgroup/cpu/A     (shares=102400)
/sys/fs/cgroup/cpu/A/B   (shares=2)
/sys/fs/cgroup/cpu/A/B/C (shares=1024)

/sys/fs/cgroup/cpu/D     (shares=1024)
/sys/fs/cgroup/cpu/D/E   (shares=1024)
/sys/fs/cgroup/cpu/D/E/F (shares=1024)
The same benchmark runs in groups C and F, with no other tasks
running; the benchmark is able to consume all of the CPUs.

We expected group C to win more CPU resources, since it can enjoy
all of group A's shares, but it is F that wins by far.

The reason is that group B has its shares set to 2. Since
A->cfs_rq.load.weight == B->se.load.weight == B->shares/nr_cpus,
A->cfs_rq.load.weight becomes very small.

In calc_group_shares() we calculate shares as:

load = max(scale_load_down(cfs_rq->load.weight), cfs_rq->avg.load_avg);
shares = (tg_shares * load) / tg_weight;

Since 'cfs_rq->load.weight' is so small, 'load' becomes 0 after the
scale down; so although 'tg_shares' is 102400, the shares of the se
that represents group A on the root cfs_rq end up being just 2.

The se of D on the root cfs_rq, meanwhile, has a weight far bigger
than 2, so it wins the battle.
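
To put rough numbers on it (illustrative only: this assumes the
64-bit SCHED_FIXEDPOINT_SHIFT value of 10 and, arbitrarily, 8 CPUs;
the CPU count changes the magnitudes but not the effect):

  B's shares of 2 are stored scaled:  2 << 10 = 2048
  B->se.load.weight per CPU:          2048 / 8 ~= 256  (== A->cfs_rq.load.weight)
  scale_load_down(256):               256 >> 10 = 0
  A's se shares:                      (tg_shares * 0) / tg_weight = 0,
                                      clamped up to MIN_SHARES, i.e. 2

So A's group se carries a weight of 2 on the root cfs_rq no matter how
large tg_shares is, while D's se keeps a weight orders of magnitude
larger.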
Thus, when scale_load_down() scales the real weight down to 0, it no
longer tells the real story: the caller gets the wrong information
and the calculation goes wrong.

Add a check in scale_load_down() so that the real weight is always
>= MIN_SHARES after scaling; with this applied, group C wins as
expected.
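
For illustration only (this is not part of the patch): a minimal
userspace sketch comparing the old and new scale_load_down()
behaviour, assuming the 64-bit SCHED_FIXEDPOINT_SHIFT value of 10.
The weight 256 mimics A->cfs_rq.load.weight from the example above.

  #include <stdio.h>

  #define SCHED_FIXEDPOINT_SHIFT 10

  /* Old behaviour: a small but non-zero weight collapses to 0. */
  static unsigned long scale_load_down_old(unsigned long w)
  {
          return w >> SCHED_FIXEDPOINT_SHIFT;
  }

  /* New behaviour: a non-zero weight never scales below 2 (MIN_SHARES). */
  static unsigned long scale_load_down_new(unsigned long w)
  {
          if (w) {
                  unsigned long s = w >> SCHED_FIXEDPOINT_SHIFT;
                  w = s > 2UL ? s : 2UL;
          }
          return w;
  }

  int main(void)
  {
          unsigned long weights[] = { 0, 256, 2048, 1024UL << SCHED_FIXEDPOINT_SHIFT };
          unsigned int i;

          for (i = 0; i < sizeof(weights) / sizeof(weights[0]); i++)
                  printf("w=%-8lu old=%-6lu new=%lu\n", weights[i],
                         scale_load_down_old(weights[i]),
                         scale_load_down_new(weights[i]));
          return 0;
  }

With the old macro the weight 256 collapses to 0, which is what
starves group A's se of shares; the new macro keeps any non-zero
weight at 2 or more and leaves larger weights untouched.
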
Cc: Ben Segall <[email protected]>
Reviewed-by: Vincent Guittot <[email protected]>
Suggested-by: Peter Zijlstra <[email protected]>
Signed-off-by: Michael Wang <[email protected]>
---
v2:
* replace MIN_SHARES with 2UL to cover the CONFIG_FAIR_GROUP_SCHED=n case
kernel/sched/sched.h | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 2a0caf394dd4..9bca26bd60d9 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -118,7 +118,13 @@ extern long calc_load_fold_active(struct rq *this_rq, long adjust);
#ifdef CONFIG_64BIT
# define NICE_0_LOAD_SHIFT (SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
# define scale_load(w) ((w) << SCHED_FIXEDPOINT_SHIFT)
-# define scale_load_down(w) ((w) >> SCHED_FIXEDPOINT_SHIFT)
+# define scale_load_down(w) \
+({ \
+ unsigned long __w = (w); \
+ if (__w) \
+ __w = max(2UL, __w >> SCHED_FIXEDPOINT_SHIFT); \
+ __w; \
+})
#else
# define NICE_0_LOAD_SHIFT (SCHED_FIXEDPOINT_SHIFT)
# define scale_load(w) (w)
--
2.14.4.44.g2045bb6
Hi Peter, Vincent
My apologies for missing the case where CONFIG_FAIR_GROUP_SCHED
is disabled; I've replaced MIN_SHARES with the literal 2UL, matching
how it is defined. Sorry for the trouble...
Regards,
Michael Wang
On Wed, Mar 18, 2020 at 10:23:49AM +0800, 王贇 wrote:
> Hi Peter, Vincent
>
> My apologies for missing the case where CONFIG_FAIR_GROUP_SCHED
> is disabled; I've replaced MIN_SHARES with the literal 2UL, matching
> how it is defined. Sorry for the trouble...
No worries, that's what we have those robots for ;-) I'll update the
patch and send it off again shortly.
The following commit has been merged into the sched/core branch of tip:
Commit-ID: 26cf52229efc87e2effa9d788f9b33c40fb3358a
Gitweb: https://git.kernel.org/tip/26cf52229efc87e2effa9d788f9b33c40fb3358a
Author: Michael Wang <[email protected]>
AuthorDate: Wed, 18 Mar 2020 10:15:15 +08:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Fri, 20 Mar 2020 13:06:19 +01:00
sched: Avoid scale real weight down to zero
During our testing, we found a case where shares no longer work
correctly. The cgroup topology is:
/sys/fs/cgroup/cpu/A     (shares=102400)
/sys/fs/cgroup/cpu/A/B   (shares=2)
/sys/fs/cgroup/cpu/A/B/C (shares=1024)

/sys/fs/cgroup/cpu/D     (shares=1024)
/sys/fs/cgroup/cpu/D/E   (shares=1024)
/sys/fs/cgroup/cpu/D/E/F (shares=1024)
The same benchmark runs in groups C and F, with no other tasks
running; the benchmark is able to consume all of the CPUs.

We expected group C to win more CPU resources, since it can enjoy
all of group A's shares, but it is F that wins by far.

The reason is that group B has its shares set to 2. Since
A->cfs_rq.load.weight == B->se.load.weight == B->shares/nr_cpus,
A->cfs_rq.load.weight becomes very small.

In calc_group_shares() we calculate shares as:

load = max(scale_load_down(cfs_rq->load.weight), cfs_rq->avg.load_avg);
shares = (tg_shares * load) / tg_weight;

Since 'cfs_rq->load.weight' is so small, 'load' becomes 0 after the
scale down; so although 'tg_shares' is 102400, the shares of the se
that represents group A on the root cfs_rq end up being just 2.

The se of D on the root cfs_rq, meanwhile, has a weight far bigger
than 2, so it wins the battle.
Thus, when scale_load_down() scales the real weight down to 0, it no
longer tells the real story: the caller gets the wrong information
and the calculation goes wrong.

Add a check in scale_load_down() so that the real weight is always
>= MIN_SHARES after scaling; with this applied, group C wins as
expected.
Suggested-by: Peter Zijlstra <[email protected]>
Signed-off-by: Michael Wang <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Vincent Guittot <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
---
kernel/sched/sched.h | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 9e173fa..1e72d1b 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -118,7 +118,13 @@ extern long calc_load_fold_active(struct rq *this_rq, long adjust);
#ifdef CONFIG_64BIT
# define NICE_0_LOAD_SHIFT (SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
# define scale_load(w) ((w) << SCHED_FIXEDPOINT_SHIFT)
-# define scale_load_down(w) ((w) >> SCHED_FIXEDPOINT_SHIFT)
+# define scale_load_down(w) \
+({ \
+ unsigned long __w = (w); \
+ if (__w) \
+ __w = max(2UL, __w >> SCHED_FIXEDPOINT_SHIFT); \
+ __w; \
+})
#else
# define NICE_0_LOAD_SHIFT (SCHED_FIXEDPOINT_SHIFT)
# define scale_load(w) (w)