do_sched_cfs_period_timer() refills cfs_b runtime and calls
distribute_cfs_runtime() to unthrottle cfs_rqs. Sometimes all of
cfs_b->runtime is incorrectly allocated to a single cfs_rq, so the
other cfs_rqs attached to this cfs_b can't get runtime and stay
throttled.
We found that a throttled cfs_rq can have a positive
cfs_rq->runtime_remaining, which causes an unexpected cast from s64 to
u64 in distribute_cfs_runtime():

	runtime = -cfs_rq->runtime_remaining + 1;

The runtime then wraps to a huge number and consumes all of
cfs_b->runtime for this cfs_b period.
According to Ben Segall, a throttled cfs_rq can still have
account_cfs_rq_runtime() called on it, because it is throttled before
idle_balance() runs and idle_balance() calls update_rq_clock() to add
time that is then accounted to the task.
Prevent a cfs_rq from being assigned new runtime once it has been
throttled, until distribute_cfs_runtime() is called for it.
Signed-off-by: Liangyan <[email protected]>
Reviewed-by: Ben Segall <[email protected]>
Reviewed-by: Valentin Schneider <[email protected]>
---
kernel/sched/fair.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bc9cfeaac8bd..500f5db0de0b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4470,6 +4470,8 @@ static void __account_cfs_rq_runtime(struct cfs_rq *cfs_rq, u64 delta_exec)
if (likely(cfs_rq->runtime_remaining > 0))
return;
+ if (cfs_rq->throttled)
+ return;
/*
* if we're unable to extend our runtime we resched so that the active
* hierarchy can be throttled
@@ -4673,6 +4675,9 @@ static u64 distribute_cfs_runtime(struct cfs_bandwidth *cfs_b,
if (!cfs_rq_throttled(cfs_rq))
goto next;
+ /* By the above check, this should never be true */
+ SCHED_WARN_ON(cfs_rq->runtime_remaining > 0);
+
runtime = -cfs_rq->runtime_remaining + 1;
if (runtime > remaining)
runtime = remaining;
--
2.14.4.44.g2045bb6
On 26/08/2019 13:16, Liangyan wrote:
> do_sched_cfs_period_timer() refills cfs_b runtime and calls
> distribute_cfs_runtime() to unthrottle cfs_rqs. Sometimes all of
> cfs_b->runtime is incorrectly allocated to a single cfs_rq, so the
> other cfs_rqs attached to this cfs_b can't get runtime and stay
> throttled.
>
> We found that a throttled cfs_rq can have a positive
> cfs_rq->runtime_remaining, which causes an unexpected cast from s64 to
> u64 in distribute_cfs_runtime():
>
>	runtime = -cfs_rq->runtime_remaining + 1;
>
> The runtime then wraps to a huge number and consumes all of
> cfs_b->runtime for this cfs_b period.
>
> According to Ben Segall, a throttled cfs_rq can still have
> account_cfs_rq_runtime() called on it, because it is throttled before
> idle_balance() runs and idle_balance() calls update_rq_clock() to add
> time that is then accounted to the task.
>
> Prevent a cfs_rq from being assigned new runtime once it has been
> throttled, until distribute_cfs_runtime() is called for it.
>
> Signed-off-by: Liangyan <[email protected]>
> Reviewed-by: Ben Segall <[email protected]>
> Reviewed-by: Valentin Schneider <[email protected]>
@Peter/Ingo, if we care about it I believe it can't hurt to strap
Cc: <[email protected]>
Fixes: d3d9dc330236 ("sched: Throttle entities exceeding their allowed bandwidth")
to the thing.
On Wed, Aug 28, 2019 at 11:16:52AM +0100, Valentin Schneider wrote:
> On 26/08/2019 13:16, Liangyan wrote:
> > [...]
>
> @Peter/Ingo, if we care about it I believe it can't hurt to strap
>
> Cc: <[email protected]>
> Fixes: d3d9dc330236 ("sched: Throttle entities exceeding their allowed bandwidth")
>
> to the thing.
OK, done.
The following commit has been merged into the sched/urgent branch of tip:
Commit-ID: 5e2d2cc2588bd3307ce3937acbc2ed03c830a861
Gitweb: https://git.kernel.org/tip/5e2d2cc2588bd3307ce3937acbc2ed03c830a861
Author: Liangyan <[email protected]>
AuthorDate: Mon, 26 Aug 2019 20:16:33 +08:00
Committer: Ingo Molnar <[email protected]>
CommitterDate: Tue, 03 Sep 2019 08:55:07 +02:00
sched/fair: Don't assign runtime for throttled cfs_rq
do_sched_cfs_period_timer() refills cfs_b runtime and calls
distribute_cfs_runtime() to unthrottle cfs_rqs. Sometimes all of
cfs_b->runtime is incorrectly allocated to a single cfs_rq, so the
other cfs_rqs attached to this cfs_b can't get runtime and stay
throttled.
We found that a throttled cfs_rq can have a positive
cfs_rq->runtime_remaining, which causes an unexpected cast from s64 to
u64 in distribute_cfs_runtime():

	runtime = -cfs_rq->runtime_remaining + 1;

The runtime then wraps to a huge number and consumes all of
cfs_b->runtime for this cfs_b period.
According to Ben Segall, a throttled cfs_rq can still have
account_cfs_rq_runtime() called on it, because it is throttled before
idle_balance() runs and idle_balance() calls update_rq_clock() to add
time that is then accounted to the task.
Prevent a cfs_rq from being assigned new runtime once it has been
throttled, until distribute_cfs_runtime() is called for it.
Signed-off-by: Liangyan <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Valentin Schneider <[email protected]>
Reviewed-by: Ben Segall <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Fixes: d3d9dc330236 ("sched: Throttle entities exceeding their allowed bandwidth")
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
kernel/sched/fair.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bc9cfea..500f5db 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4470,6 +4470,8 @@ static void __account_cfs_rq_runtime(struct cfs_rq *cfs_rq, u64 delta_exec)
if (likely(cfs_rq->runtime_remaining > 0))
return;
+ if (cfs_rq->throttled)
+ return;
/*
* if we're unable to extend our runtime we resched so that the active
* hierarchy can be throttled
@@ -4673,6 +4675,9 @@ static u64 distribute_cfs_runtime(struct cfs_bandwidth *cfs_b,
if (!cfs_rq_throttled(cfs_rq))
goto next;
+ /* By the above check, this should never be true */
+ SCHED_WARN_ON(cfs_rq->runtime_remaining > 0);
+
runtime = -cfs_rq->runtime_remaining + 1;
if (runtime > remaining)
runtime = remaining;