2017-04-10 09:24:27

by Xunlei Pang

[permalink] [raw]
Subject: [PATCH] sched/deadline: Throttle a constrained task activated if overflow

I was testing Daniel's changes with his test case in the commit
df8eac8cafce ("sched/deadline: Throttle a constrained deadline
task activated after the deadline"), and tweaked it a little.

Instead of having the runtime equal to the deadline, I tweaked the
runtime, deadline and sleep values to ensure that every call to
dl_check_constrained_dl() sees "dl_se->deadline > rq_clock(rq)" as
well as a true dl_entity_overflow(), so the task is replenished on
every wakeup in update_dl_entity(), breaking its bandwidth.

Daniel's test case had:
attr.sched_runtime = 2 * 1000 * 1000; /* 2 ms */
attr.sched_deadline = 2 * 1000 * 1000; /* 2 ms*/
attr.sched_period = 2 * 1000 * 1000 * 1000; /* 2 s */
ts.tv_sec = 0;
ts.tv_nsec = 2000 * 1000; /* 2 ms */

I changed it to:
attr.sched_runtime = 5 * 1000 * 1000; /* 5 ms */
attr.sched_deadline = 7 * 1000 * 1000; /* 7 ms */
attr.sched_period = 1 * 1000 * 1000 * 1000; /* 1 s */
ts.tv_sec = 0;
ts.tv_nsec = 1000 * 1000; /* 1 ms */

The change above can result in the task consuming over 25% of the
CPU on my machine.

In order to avoid the breakage, we improve dl_check_constrained_dl()
to prevent a dl task from being activated until the next period if it
runs out of bandwidth in the current period.

Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Steven Rostedt <[email protected]>
Signed-off-by: Xunlei Pang <[email protected]>
---
kernel/sched/deadline.c | 47 +++++++++++++++++++++++++++--------------------
1 file changed, 27 insertions(+), 20 deletions(-)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index a2ce590..e7be3b4 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -702,28 +702,33 @@ void init_dl_task_timer(struct sched_dl_entity *dl_se)
* works fine for implicit deadline tasks (deadline == period), and the
* CBS was designed for implicit deadline tasks. However, a task with
* constrained deadline (deadine < period) might be awakened after the
- * deadline, but before the next period. In this case, replenishing the
- * task would allow it to run for runtime / deadline. As in this case
- * deadline < period, CBS enables a task to run for more than the
- * runtime / period. In a very loaded system, this can cause a domino
- * effect, making other tasks miss their deadlines.
+ * deadline before the next period, or before the deadline but bandwidth
+ * of the current period was used up. In these cases, replenishing the
+ * task would allow it to run for more than runtime / period. In a very
+ * loaded system, this can cause a domino effect, making other tasks miss
+ * their deadlines.
*
- * To avoid this problem, in the activation of a constrained deadline
- * task after the deadline but before the next period, throttle the
- * task and set the replenishing timer to the begin of the next period,
- * unless it is boosted.
+ * To avoid these problems, in the activation of a constrained deadline
+ * task, throttle the task as needed and set the replenishing timer to
+ * the begin of the next period.
*/
static inline void dl_check_constrained_dl(struct sched_dl_entity *dl_se)
{
struct task_struct *p = dl_task_of(dl_se);
struct rq *rq = rq_of_dl_rq(dl_rq_of_se(dl_se));

- if (dl_time_before(dl_se->deadline, rq_clock(rq)) &&
- dl_time_before(rq_clock(rq), dl_next_period(dl_se))) {
- if (unlikely(dl_se->dl_boosted || !start_dl_timer(p)))
- return;
- dl_se->dl_throttled = 1;
- }
+ if (dl_time_before(dl_next_period(dl_se), rq_clock(rq)))
+ return;
+
+ /* Still have available bandwidth in the current period. */
+ if (dl_time_before(rq_clock(rq), dl_se->deadline) &&
+ !dl_entity_overflow(dl_se, dl_se, rq_clock(rq)))
+ return;
+
+ if (!start_dl_timer(p))
+ return;
+
+ dl_se->dl_throttled = 1;
}

static
@@ -990,12 +995,14 @@ static void enqueue_task_dl(struct rq *rq, struct task_struct *p, int flags)
}

/*
- * Check if a constrained deadline task was activated
- * after the deadline but before the next period.
- * If that is the case, the task will be throttled and
- * the replenishment timer will be set to the next period.
+ * Check if a constrained deadline task is allowed to be
+ * activated in the current period. If not, the task will
+ * be throttled and the replenishment timer will be set to
+ * the next period. Skip boosted and throttled tasks.
*/
- if (!p->dl.dl_throttled && dl_is_constrained(&p->dl))
+ if (!p->dl.dl_boosted &&
+ !p->dl.dl_throttled &&
+ dl_is_constrained(&p->dl))
dl_check_constrained_dl(&p->dl);

/*
--
1.8.3.1


Subject: Re: [PATCH] sched/deadline: Throttle a constrained task activated if overflow

On 04/10/2017 11:22 AM, Xunlei Pang wrote:
> I was testing Daniel's changes with his test case in the commit
> df8eac8cafce ("sched/deadline: Throttle a constrained deadline
> task activated after the deadline"), and tweaked it a little.
>
> Instead of having the runtime equal to the deadline, I tweaked the
> runtime, deadline and sleep values to ensure that every call to
> dl_check_constrained_dl() sees "dl_se->deadline > rq_clock(rq)" as
> well as a true dl_entity_overflow(), so the task is replenished on
> every wakeup in update_dl_entity(), breaking its bandwidth.
>
> Daniel's test case had:
> attr.sched_runtime = 2 * 1000 * 1000; /* 2 ms */
> attr.sched_deadline = 2 * 1000 * 1000; /* 2 ms*/
> attr.sched_period = 2 * 1000 * 1000 * 1000; /* 2 s */
> ts.tv_sec = 0;
> ts.tv_nsec = 2000 * 1000; /* 2 ms */
>
> I changed it to:
> attr.sched_runtime = 5 * 1000 * 1000; /* 5 ms */
> attr.sched_deadline = 7 * 1000 * 1000; /* 7 ms */
> attr.sched_period = 1 * 1000 * 1000 * 1000; /* 1 s */
> ts.tv_sec = 0;
> ts.tv_nsec = 1000 * 1000; /* 1 ms */
>
> The change above can result in the task consuming over 25% of the
> CPU on my machine.
>
> In order to avoid the breakage, we improve dl_check_constrained_dl()
> to prevent a dl task from being activated until the next period if it
> runs out of bandwidth in the current period.

The problem now is that, with your patch, we will throttle the task
with some runtime still available. Moreover, the task did not break any
rule, like being awakened after the deadline - the user-space is not
misbehaving.

That is +- what the reproducer is doing when using your patch,
(I put some trace_printk when noticing the overflow in the wakeup).

<idle>-0 [007] d.h. 1505.066439: enqueue_task_dl: my current runtime is 3657361 and the deadline is 4613027 from now
<idle>-0 [007] d.h. 1505.066439: enqueue_task_dl: my dl_runtime is 5000000

and so the task will be throttled with 3657361 ns runtime available.

As we can see, it is really breaking the density:

5ms / 7ms (.714285) < 3657361 / 4613027 (.792833)

Well, it is not breaking that much. Trying to be less pessimistic, we can
compute a new runtime with the following equation:

runtime = (dl_runtime / dl_deadline) * (deadline - now)

That is, a runtime which fits in the task's density.

In code:

dl_se->runtime = (div64_u64(dl_se->dl_runtime << 20, dl_se->dl_deadline) * (dl_se->deadline - rq_clock(rq))) >> 20;

For example (a trace_printk) showing the adjusted runtime for the
previous task:
<idle>-0 [007] d.h. 1505.066440: enqueue_task_dl: I can still run for 3294208 (it is not that bad)

With the adjusted runtime, we have the following density:

3294208 / 4613027 = .714109

as .714109 < .714285

Voilà. We can still use 3.2 ms of runtime! Not bad at all.

However, even this result is, somehow, controversial. Because we are
reducing the task's reservation (Q/P). But that is very close to the
implicit deadline behavior: when it "overflows", the runtime is truncated
(a replenishment...) and the deadline is postponed. In this case, we do
not need to postpone the deadline, just to "truncate" the runtime to fit
in the density... it is not that bad.

Another possibility is not to replenish a constrained deadline
task twice in a period. In this case, the task would run past
the deadline, potentially causing problems for implicit deadline tasks.
But! even an implicit deadline task would do that in an overloaded
system. The problem is that, by using the density, the system becomes
overloaded too easily (too pessimistic).

Summarizing (so far):

1) We can be pessimistic to the constrained deadline task,
with Xunlei's patch;
2) Compute a new runtime to fit in the density - somehow pessimistic.
3) Avoid replenishing a constrained deadline task twice before the next
period, but the system will become overloaded very easily (density).

I think option 2 is the middle ground.
I am showing a _proof of concept_ patch below. It is based on
Xunlei's changes in the verification. I need to polish it... but...
you guys can read the idea in C.

I have the 3) too, but I am not sure if it is as good as 2.

We still need to think more about the solution.... test more... I will
continue working on this tomorrow, discussing with luca and tommaso
as well.

Thoughts? Am I missing something (probably, I am tired :-) )?

---
kernel/sched/deadline.c | 53 ++++++++++++++++++++++++++++---------------------
1 file changed, 30 insertions(+), 23 deletions(-)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 7508129..6fa4887 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -696,34 +696,38 @@ void init_dl_task_timer(struct sched_dl_entity *dl_se)
}

/*
- * During the activation, CBS checks if it can reuse the current task's
- * runtime and period. If the deadline of the task is in the past, CBS
- * cannot use the runtime, and so it replenishes the task. This rule
- * works fine for implicit deadline tasks (deadline == period), and the
- * CBS was designed for implicit deadline tasks. However, a task with
- * constrained deadline (deadine < period) might be awakened after the
- * deadline, but before the next period. In this case, replenishing the
- * task would allow it to run for runtime / deadline. As in this case
- * deadline < period, CBS enables a task to run for more than the
- * runtime / period. In a very loaded system, this can cause a domino
- * effect, making other tasks miss their deadlines.
- *
- * To avoid this problem, in the activation of a constrained deadline
- * task after the deadline but before the next period, throttle the
- * task and set the replenishing timer to the begin of the next period,
- * unless it is boosted.
+ * XXX: Daniel will document it better in a clean patch, this is
+ * a proof of concept.
*/
static inline void dl_check_constrained_dl(struct sched_dl_entity *dl_se)
{
struct task_struct *p = dl_task_of(dl_se);
struct rq *rq = rq_of_dl_rq(dl_rq_of_se(dl_se));
+ u64 clock = rq_clock(rq);

- if (dl_time_before(dl_se->deadline, rq_clock(rq)) &&
- dl_time_before(rq_clock(rq), dl_next_period(dl_se))) {
- if (unlikely(dl_se->dl_boosted || !start_dl_timer(p)))
- return;
- dl_se->dl_throttled = 1;
+ /* we are in a new period */
+ if (dl_time_before(dl_next_period(dl_se), rq_clock(rq)))
+ return;
+
+ /* are we ok with the deadline and density? */
+ if (dl_time_before(rq_clock(rq), dl_se->deadline) &&
+ !dl_entity_overflow(dl_se, dl_se, rq_clock(rq)))
+ return;
+
+ /* is the density the problem? */
+ if (dl_entity_overflow(dl_se, dl_se, clock)) {
+ /* reduces the runtime so it fits in the density */
+ dl_se->runtime =
+ (div64_u64(dl_se->dl_runtime << 20, dl_se->dl_deadline) *
+ (dl_se->deadline - clock)) >> 20;
+ WARN_ON(dl_se->runtime < 0);
+ return;
}
+
+ if (!start_dl_timer(p))
+ return;
+
+ dl_se->dl_throttled = 1;
}

static
@@ -996,8 +1000,11 @@ static void enqueue_task_dl(struct rq *rq, struct task_struct *p, int flags)
* If that is the case, the task will be throttled and
* the replenishment timer will be set to the next period.
*/
- if (!dl_se->dl_throttled && dl_is_constrained(dl_se))
- dl_check_constrained_dl(dl_se);
+ if (!p->dl.dl_boosted &&
+ !p->dl.dl_throttled &&
+ dl_is_constrained(&p->dl))
+ dl_check_constrained_dl(&p->dl);
+

/*
* If p is throttled, we do nothing. In fact, if it exhausted
--
2.5.0

2017-04-11 01:35:30

by Xunlei Pang

[permalink] [raw]
Subject: Re: [PATCH] sched/deadline: Throttle a constrained task activated if overflow

On 04/11/2017 at 04:47 AM, Daniel Bristot de Oliveira wrote:
> On 04/10/2017 11:22 AM, Xunlei Pang wrote:
>> I was testing Daniel's changes with his test case in the commit
>> df8eac8cafce ("sched/deadline: Throttle a constrained deadline
>> task activated after the deadline"), and tweaked it a little.
>>
>> Instead of having the runtime equal to the deadline, I tweaked the
>> runtime, deadline and sleep values to ensure that every call to
>> dl_check_constrained_dl() sees "dl_se->deadline > rq_clock(rq)" as
>> well as a true dl_entity_overflow(), so the task is replenished on
>> every wakeup in update_dl_entity(), breaking its bandwidth.
>>
>> Daniel's test case had:
>> attr.sched_runtime = 2 * 1000 * 1000; /* 2 ms */
>> attr.sched_deadline = 2 * 1000 * 1000; /* 2 ms*/
>> attr.sched_period = 2 * 1000 * 1000 * 1000; /* 2 s */
>> ts.tv_sec = 0;
>> ts.tv_nsec = 2000 * 1000; /* 2 ms */
>>
>> I changed it to:
>> attr.sched_runtime = 5 * 1000 * 1000; /* 5 ms */
>> attr.sched_deadline = 7 * 1000 * 1000; /* 7 ms */
>> attr.sched_period = 1 * 1000 * 1000 * 1000; /* 1 s */
>> ts.tv_sec = 0;
>> ts.tv_nsec = 1000 * 1000; /* 1 ms */
>>
>> The change above can result in the task consuming over 25% of the
>> CPU on my machine.
>>
>> In order to avoid the breakage, we improve dl_check_constrained_dl()
>> to prevent a dl task from being activated until the next period if it
>> runs out of bandwidth in the current period.
> The problem now is that, with your patch, we will throttle the task
> with some runtime still available. Moreover, the task did not break any
> rule, like being awakened after the deadline - the user-space is not
> misbehaving.
>
> That is +- what the reproducer is doing when using your patch,
> (I put some trace_printk when noticing the overflow in the wakeup).
>
> <idle>-0 [007] d.h. 1505.066439: enqueue_task_dl: my current runtime is 3657361 and the deadline is 4613027 from now
> <idle>-0 [007] d.h. 1505.066439: enqueue_task_dl: my dl_runtime is 5000000
>
> and so the task will be throttled with 3657361 ns runtime available.
>
> As we can see, it is really breaking the density:
>
> 5ms / 7ms (.714285) < 3657361 / 4613027 (.792833)

For runtime 5ms and deadline 7ms, 3657361 is the remaining runtime, so the
runtime actually consumed is (5000000 - 3657361) = 1342639, and the actual
density is 1342639 / 7000000 (.191806), which does not break the limit
density 5ms / 7ms (.714285).

"3657361 / 4613027 (.792833)" means the available density is greater than the
limit density (.714285), i.e. dl_entity_overflow() is true and the task can't
meet its requirement in the current period any more, thus it needs to be
throttled until the next period.

Regards,
Xunlei

>
> Well, it is not breaking that much. Trying to be less pessimistic, we can
> compute a new runtime with the following equation:
>
> runtime = (dl_runtime / dl_deadline) * (deadline - now)
>
> That is, a runtime which fits in the task's density.
>
> In code:
>
> dl_se->runtime = (div64_u64(dl_se->dl_runtime << 20, dl_se->dl_deadline) * (dl_se->deadline - rq_clock(rq))) >> 20;
>
> For example (a trace_printk) showing the adjusted runtime for the
> previous task:
> <idle>-0 [007] d.h. 1505.066440: enqueue_task_dl: I can still run for 3294208 (it is not that bad)
>
> With the adjusted runtime, we have the following density:
>
> 3294208 / 4613027 = .714109
>
> as .714109 < .714285
>
> Voilà. We can still use 3.2 ms of runtime! Not bad at all.
>
> However, even this result is, somehow, controversial. Because we are
> reducing the task's reservation (Q/P). But that is very close to the
> implicit deadline behavior: when it "overflows", the runtime is truncated
> (a replenishment...) and the deadline is postponed. In this case, we do
> not need to postpone the deadline, just to "truncate" the runtime to fit
> in the density... it is not that bad.
>
> Another possibility is not to replenish a constrained deadline
> task twice in a period. In this case, the task would run past
> the deadline, potentially causing problems for implicit deadline tasks.
> But! even an implicit deadline task would do that in an overloaded
> system. The problem is that, by using the density, the system becomes
> overloaded too easily (too pessimistic).
>
> Summarizing (so far):
>
> 1) We can be pessimistic to the constrained deadline task,
> with Xunlei's patch;
> 2) Compute a new runtime to fit in the density - somehow pessimistic.
> 3) Avoid replenishing a constrained deadline task twice before the next
> period, but the system will become overloaded very easily (density).
>
> I think option 2 is the middle ground.
> I am showing a _proof of concept_ patch below. It is based on
> Xunlei's changes in the verification. I need to polish it... but...
> you guys can read the idea in C.
>
> I have the 3) too, but I am not sure if it is as good as 2.
>
> We still need to think more about the solution.... test more... I will
> continue working on this tomorrow, discussing with luca and tommaso
> as well.
>
> Thoughts? Am I missing something (probably, I am tired :-) )?
>
> ---
> kernel/sched/deadline.c | 53 ++++++++++++++++++++++++++++---------------------
> 1 file changed, 30 insertions(+), 23 deletions(-)
>
> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index 7508129..6fa4887 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -696,34 +696,38 @@ void init_dl_task_timer(struct sched_dl_entity *dl_se)
> }
>
> /*
> - * During the activation, CBS checks if it can reuse the current task's
> - * runtime and period. If the deadline of the task is in the past, CBS
> - * cannot use the runtime, and so it replenishes the task. This rule
> - * works fine for implicit deadline tasks (deadline == period), and the
> - * CBS was designed for implicit deadline tasks. However, a task with
> - * constrained deadline (deadine < period) might be awakened after the
> - * deadline, but before the next period. In this case, replenishing the
> - * task would allow it to run for runtime / deadline. As in this case
> - * deadline < period, CBS enables a task to run for more than the
> - * runtime / period. In a very loaded system, this can cause a domino
> - * effect, making other tasks miss their deadlines.
> - *
> - * To avoid this problem, in the activation of a constrained deadline
> - * task after the deadline but before the next period, throttle the
> - * task and set the replenishing timer to the begin of the next period,
> - * unless it is boosted.
> + * XXX: Daniel will document it better in a clean patch, this is
> + * a proof of concept.
> */
> static inline void dl_check_constrained_dl(struct sched_dl_entity *dl_se)
> {
> struct task_struct *p = dl_task_of(dl_se);
> struct rq *rq = rq_of_dl_rq(dl_rq_of_se(dl_se));
> + u64 clock = rq_clock(rq);
>
> - if (dl_time_before(dl_se->deadline, rq_clock(rq)) &&
> - dl_time_before(rq_clock(rq), dl_next_period(dl_se))) {
> - if (unlikely(dl_se->dl_boosted || !start_dl_timer(p)))
> - return;
> - dl_se->dl_throttled = 1;
> + /* we are in a new period */
> + if (dl_time_before(dl_next_period(dl_se), rq_clock(rq)))
> + return;
> +
> + /* are we ok with the deadline and density? */
> + if (dl_time_before(rq_clock(rq), dl_se->deadline) &&
> + !dl_entity_overflow(dl_se, dl_se, rq_clock(rq)))
> + return;
> +
> + /* is the density the problem? */
> + if (dl_entity_overflow(dl_se, dl_se, clock)) {
> + /* reduces the runtime so it fits in the density */
> + dl_se->runtime =
> + (div64_u64(dl_se->dl_runtime << 20, dl_se->dl_deadline) *
> + (dl_se->deadline - clock)) >> 20;
> + WARN_ON(dl_se->runtime < 0);
> + return;
> }
> +
> + if (!start_dl_timer(p))
> + return;
> +
> + dl_se->dl_throttled = 1;
> }
>
> static
> @@ -996,8 +1000,11 @@ static void enqueue_task_dl(struct rq *rq, struct task_struct *p, int flags)
> * If that is the case, the task will be throttled and
> * the replenishment timer will be set to the next period.
> */
> - if (!dl_se->dl_throttled && dl_is_constrained(dl_se))
> - dl_check_constrained_dl(dl_se);
> + if (!p->dl.dl_boosted &&
> + !p->dl.dl_throttled &&
> + dl_is_constrained(&p->dl))
> + dl_check_constrained_dl(&p->dl);
> +
>
> /*
> * If p is throttled, we do nothing. In fact, if it exhausted

2017-04-11 05:52:47

by Xunlei Pang

[permalink] [raw]
Subject: Re: [PATCH] sched/deadline: Throttle a constrained task activated if overflow

On 04/11/2017 at 04:47 AM, Daniel Bristot de Oliveira wrote:
> On 04/10/2017 11:22 AM, Xunlei Pang wrote:
>> I was testing Daniel's changes with his test case in the commit
>> df8eac8cafce ("sched/deadline: Throttle a constrained deadline
>> task activated after the deadline"), and tweaked it a little.
>>
>> Instead of having the runtime equal to the deadline, I tweaked the
>> runtime, deadline and sleep values to ensure that every call to
>> dl_check_constrained_dl() sees "dl_se->deadline > rq_clock(rq)" as
>> well as a true dl_entity_overflow(), so the task is replenished on
>> every wakeup in update_dl_entity(), breaking its bandwidth.
>>
>> Daniel's test case had:
>> attr.sched_runtime = 2 * 1000 * 1000; /* 2 ms */
>> attr.sched_deadline = 2 * 1000 * 1000; /* 2 ms*/
>> attr.sched_period = 2 * 1000 * 1000 * 1000; /* 2 s */
>> ts.tv_sec = 0;
>> ts.tv_nsec = 2000 * 1000; /* 2 ms */
>>
>> I changed it to:
>> attr.sched_runtime = 5 * 1000 * 1000; /* 5 ms */
>> attr.sched_deadline = 7 * 1000 * 1000; /* 7 ms */
>> attr.sched_period = 1 * 1000 * 1000 * 1000; /* 1 s */
>> ts.tv_sec = 0;
>> ts.tv_nsec = 1000 * 1000; /* 1 ms */
>>
>> The change above can result in the task consuming over 25% of the
>> CPU on my machine.
>>
>> In order to avoid the breakage, we improve dl_check_constrained_dl()
>> to prevent a dl task from being activated until the next period if it
>> runs out of bandwidth in the current period.
> The problem now is that, with your patch, we will throttle the task
> with some runtime still available. Moreover, the task did not break any
> rule, like being awakened after the deadline - the user-space is not
> misbehaving.
>
> That is +- what the reproducer is doing when using your patch,
> (I put some trace_printk when noticing the overflow in the wakeup).
>
> <idle>-0 [007] d.h. 1505.066439: enqueue_task_dl: my current runtime is 3657361 and the deadline is 4613027 from now
> <idle>-0 [007] d.h. 1505.066439: enqueue_task_dl: my dl_runtime is 5000000
>
> and so the task will be throttled with 3657361 ns runtime available.
>
> As we can see, it is really breaking the density:
>
> 5ms / 7ms (.714285) < 3657361 / 4613027 (.792833)
>
> Well, it is not breaking that much. Trying to be less pessimistic, we can
> compute a new runtime with the following equation:
>
> runtime = (dl_runtime / dl_deadline) * (deadline - now)
>
> That is, a runtime which fits in the task's density.

This is a good point; to make the best use of the remaining deadline, let me think about it more.

Regards,
Xunlei

>
> In code:
>
> dl_se->runtime = (div64_u64(dl_se->dl_runtime << 20, dl_se->dl_deadline) * (dl_se->deadline - rq_clock(rq))) >> 20;
>
> For example (a trace_printk) showing the adjusted runtime for the
> previous task:
> <idle>-0 [007] d.h. 1505.066440: enqueue_task_dl: I can still run for 3294208 (it is not that bad)
>
> With the adjusted runtime, we have the following density:
>
> 3294208 / 4613027 = .714109
>
> as .714109 < .714285
>
> Voilà. We can still use 3.2 ms of runtime! Not bad at all.
>
> However, even this result is, somehow, controversial. Because we are
> reducing the task's reservation (Q/P). But that is very close to the
> implicit deadline behavior: when it "overflows", the runtime is truncated
> (a replenishment...) and the deadline is postponed. In this case, we do
> not need to postpone the deadline, just to "truncate" the runtime to fit
> in the density... it is not that bad.
>
> Another possibility is not to replenish a constrained deadline
> task twice in a period. In this case, the task would run past
> the deadline, potentially causing problems for implicit deadline tasks.
> But! even an implicit deadline task would do that in an overloaded
> system. The problem is that, by using the density, the system becomes
> overloaded too easily (too pessimistic).
>
> Summarizing (so far):
>
> 1) We can be pessimistic to the constrained deadline task,
> with Xunlei's patch;
> 2) Compute a new runtime to fit in the density - somehow pessimistic.
> 3) Avoid replenishing a constrained deadline task twice before the next
> period, but the system will become overloaded very easily (density).
>
> I think option 2 is the middle ground.
> I am showing a _proof of concept_ patch below. It is based on
> Xunlei's changes in the verification. I need to polish it... but...
> you guys can read the idea in C.
>
> I have the 3) too, but I am not sure if it is as good as 2.
>
> We still need to think more about the solution.... test more... I will
> continue working on this tomorrow, discussing with luca and tommaso
> as well.
>
> Thoughts? Am I missing something (probably, I am tired :-) )?
>
> ---
> kernel/sched/deadline.c | 53 ++++++++++++++++++++++++++++---------------------
> 1 file changed, 30 insertions(+), 23 deletions(-)
>
> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index 7508129..6fa4887 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -696,34 +696,38 @@ void init_dl_task_timer(struct sched_dl_entity *dl_se)
> }
>
> /*
> - * During the activation, CBS checks if it can reuse the current task's
> - * runtime and period. If the deadline of the task is in the past, CBS
> - * cannot use the runtime, and so it replenishes the task. This rule
> - * works fine for implicit deadline tasks (deadline == period), and the
> - * CBS was designed for implicit deadline tasks. However, a task with
> - * constrained deadline (deadine < period) might be awakened after the
> - * deadline, but before the next period. In this case, replenishing the
> - * task would allow it to run for runtime / deadline. As in this case
> - * deadline < period, CBS enables a task to run for more than the
> - * runtime / period. In a very loaded system, this can cause a domino
> - * effect, making other tasks miss their deadlines.
> - *
> - * To avoid this problem, in the activation of a constrained deadline
> - * task after the deadline but before the next period, throttle the
> - * task and set the replenishing timer to the begin of the next period,
> - * unless it is boosted.
> + * XXX: Daniel will document it better in a clean patch, this is
> + * a proof of concept.
> */
> static inline void dl_check_constrained_dl(struct sched_dl_entity *dl_se)
> {
> struct task_struct *p = dl_task_of(dl_se);
> struct rq *rq = rq_of_dl_rq(dl_rq_of_se(dl_se));
> + u64 clock = rq_clock(rq);
>
> - if (dl_time_before(dl_se->deadline, rq_clock(rq)) &&
> - dl_time_before(rq_clock(rq), dl_next_period(dl_se))) {
> - if (unlikely(dl_se->dl_boosted || !start_dl_timer(p)))
> - return;
> - dl_se->dl_throttled = 1;
> + /* we are in a new period */
> + if (dl_time_before(dl_next_period(dl_se), rq_clock(rq)))
> + return;
> +
> + /* are we ok with the deadline and density? */
> + if (dl_time_before(rq_clock(rq), dl_se->deadline) &&
> + !dl_entity_overflow(dl_se, dl_se, rq_clock(rq)))
> + return;
> +
> + /* is the density the problem? */
> + if (dl_entity_overflow(dl_se, dl_se, clock)) {
> + /* reduces the runtime so it fits in the density */
> + dl_se->runtime =
> + (div64_u64(dl_se->dl_runtime << 20, dl_se->dl_deadline) *
> + (dl_se->deadline - clock)) >> 20;
> + WARN_ON(dl_se->runtime < 0);
> + return;
> }
> +
> + if (!start_dl_timer(p))
> + return;
> +
> + dl_se->dl_throttled = 1;
> }
>
> static
> @@ -996,8 +1000,11 @@ static void enqueue_task_dl(struct rq *rq, struct task_struct *p, int flags)
> * If that is the case, the task will be throttled and
> * the replenishment timer will be set to the next period.
> */
> - if (!dl_se->dl_throttled && dl_is_constrained(dl_se))
> - dl_check_constrained_dl(dl_se);
> + if (!p->dl.dl_boosted &&
> + !p->dl.dl_throttled &&
> + dl_is_constrained(&p->dl))
> + dl_check_constrained_dl(&p->dl);
> +
>
> /*
> * If p is throttled, we do nothing. In fact, if it exhausted

2017-04-11 07:05:19

by Xunlei Pang

[permalink] [raw]
Subject: Re: [PATCH] sched/deadline: Throttle a constrained task activated if overflow

On 04/11/2017 at 01:53 PM, Xunlei Pang wrote:
> On 04/11/2017 at 04:47 AM, Daniel Bristot de Oliveira wrote:
>> On 04/10/2017 11:22 AM, Xunlei Pang wrote:
>>> I was testing Daniel's changes with his test case in the commit
>>> df8eac8cafce ("sched/deadline: Throttle a constrained deadline
>>> task activated after the deadline"), and tweaked it a little.
>>>
>>> Instead of having the runtime equal to the deadline, I tweaked the
>>> runtime, deadline and sleep values to ensure that every call to
>>> dl_check_constrained_dl() sees "dl_se->deadline > rq_clock(rq)" as
>>> well as a true dl_entity_overflow(), so the task is replenished on
>>> every wakeup in update_dl_entity(), breaking its bandwidth.
>>>
>>> Daniel's test case had:
>>> attr.sched_runtime = 2 * 1000 * 1000; /* 2 ms */
>>> attr.sched_deadline = 2 * 1000 * 1000; /* 2 ms*/
>>> attr.sched_period = 2 * 1000 * 1000 * 1000; /* 2 s */
>>> ts.tv_sec = 0;
>>> ts.tv_nsec = 2000 * 1000; /* 2 ms */
>>>
>>> I changed it to:
>>> attr.sched_runtime = 5 * 1000 * 1000; /* 5 ms */
>>> attr.sched_deadline = 7 * 1000 * 1000; /* 7 ms */
>>> attr.sched_period = 1 * 1000 * 1000 * 1000; /* 1 s */
>>> ts.tv_sec = 0;
>>> ts.tv_nsec = 1000 * 1000; /* 1 ms */
>>>
>>> The change above can result in the task consuming over 25% of the
>>> CPU on my machine.
>>>
>>> In order to avoid the breakage, we improve dl_check_constrained_dl()
>>> to prevent a dl task from being activated until the next period if it
>>> runs out of bandwidth in the current period.
>> The problem now is that, with your patch, we will throttle the task
>> with some runtime still available. Moreover, the task did not break any
>> rule, like being awakened after the deadline - the user-space is not
>> misbehaving.
>>
>> That is +- what the reproducer is doing when using your patch,
>> (I put some trace_printk when noticing the overflow in the wakeup).
>>
>> <idle>-0 [007] d.h. 1505.066439: enqueue_task_dl: my current runtime is 3657361 and the deadline is 4613027 from now
>> <idle>-0 [007] d.h. 1505.066439: enqueue_task_dl: my dl_runtime is 5000000
>>
>> and so the task will be throttled with 3657361 ns runtime available.
>>
>> As we can see, it is really breaking the density:
>>
>> 5ms / 7ms (.714285) < 3657361 / 4613027 (.792833)
>>
>> Well, it is not breaking that much. Trying to be less pessimistic, we can
>> compute a new runtime with the following equation:
>>
>> runtime = (dl_runtime / dl_deadline) * (deadline - now)
>>
>> That is, a runtime which fits in the task's density.
> This is a good point, to make the best use of remaining deadline, let me think more.

I don't know if this will make things more complicated; we can see in update_dl_entity():

    if (dl_time_before(dl_se->deadline, rq_clock(rq)) ||
        dl_entity_overflow(dl_se, pi_se, rq_clock(rq))) {
        dl_se->deadline = rq_clock(rq) + pi_se->dl_deadline;
        dl_se->runtime = pi_se->dl_runtime;
    }

Looks like we have a similar issue for non-constrained tasks in case of a true
dl_entity_overflow(): although the deadline is promoted (BTW, I think when it
overflows it should be dl_se->deadline += pi_se->dl_deadline?), the previous
runtime is discarded, so we may need to apply the same runtime-truncating logic
there as well if we want to truncate the runtime.

Regards,
Xunlei

>> In code:
>>
>> dl_se->runtime = (div64_u64(dl_se->dl_runtime << 20, dl_se->dl_deadline) * (dl_se->deadline - rq_clock(rq))) >> 20;
>>
>> For example (a trace_printk) showing the adjusted runtime for the
>> previous task:
>> <idle>-0 [007] d.h. 1505.066440: enqueue_task_dl: I can still run for 3294208 (it is not that bad)
>>
>> With the adjusted runtime, we have the following density:
>>
>> 3294208 / 4613027 = .714109
>>
>> as .714109 < .714285
>>
>> Voilà. We can still use 3.2 ms of runtime! Not bad at all.
>>
>> However, even this result is, somehow, controversial, because we are
>> reducing the task's reservation (Q/P). But that is very close to the
>> implicit deadline behavior: when it "overflows", the runtime is truncated
>> (a replenishment...) and the deadline is postponed. In this case, we do
>> not need to postpone the deadline, just to "truncate" the runtime to fit
>> in the density... it is not that bad.
>>
>> Another possibility is not to replenish a constrained deadline
>> task twice in a period. In this case, the task would run past
>> the deadline, potentially causing problems for implicit deadline tasks.
>> But! even implicit deadline tasks would do it in an overloaded system. The
>> problem is that, by using the density, the system easily becomes
>> overloaded (too pessimistic).
>>
>> Resuming (so far):
>>
>> 1) We can be pessimistic to the constrained deadline task,
>> with Xunlei's patch;
>> 2) Compute a new runtime to fit in the density - somehow pessimistic.
>> 3) Avoid replenishing a constrained deadline task before the next period,
>> but the system will become overloaded very easily (density).
>>
>> I think option 2 is the middle ground.
>> I am showing a _proof of concept_ patch below. It is based on
>> Xunlei's changes in the verification. I need to polish it... but...
>> you guys can read the idea in C.
>>
>> I have the 3) too, but I am not sure if it is as good as 2.
>>
>> We still need to think more about the solution.... test more... I will
>> continue working on this tomorrow, discussing with luca and tommaso
>> as well.
>>
>> Thoughts? Am I missing something (probably, I am tired :-) )?
>>
>> ---
>> kernel/sched/deadline.c | 53 ++++++++++++++++++++++++++++---------------------
>> 1 file changed, 30 insertions(+), 23 deletions(-)
>>
>> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
>> index 7508129..6fa4887 100644
>> --- a/kernel/sched/deadline.c
>> +++ b/kernel/sched/deadline.c
>> @@ -696,34 +696,38 @@ void init_dl_task_timer(struct sched_dl_entity *dl_se)
>> }
>>
>> /*
>> - * During the activation, CBS checks if it can reuse the current task's
>> - * runtime and period. If the deadline of the task is in the past, CBS
>> - * cannot use the runtime, and so it replenishes the task. This rule
>> - * works fine for implicit deadline tasks (deadline == period), and the
>> - * CBS was designed for implicit deadline tasks. However, a task with
>> - * constrained deadline (deadine < period) might be awakened after the
>> - * deadline, but before the next period. In this case, replenishing the
>> - * task would allow it to run for runtime / deadline. As in this case
>> - * deadline < period, CBS enables a task to run for more than the
>> - * runtime / period. In a very loaded system, this can cause a domino
>> - * effect, making other tasks miss their deadlines.
>> - *
>> - * To avoid this problem, in the activation of a constrained deadline
>> - * task after the deadline but before the next period, throttle the
>> - * task and set the replenishing timer to the begin of the next period,
>> - * unless it is boosted.
>> + * XXX: Daniel will document it better in a clean patch, this is
>> + * a proof of concept.
>> */
>> static inline void dl_check_constrained_dl(struct sched_dl_entity *dl_se)
>> {
>> struct task_struct *p = dl_task_of(dl_se);
>> struct rq *rq = rq_of_dl_rq(dl_rq_of_se(dl_se));
>> + u64 clock = rq_clock(rq);
>>
>> - if (dl_time_before(dl_se->deadline, rq_clock(rq)) &&
>> - dl_time_before(rq_clock(rq), dl_next_period(dl_se))) {
>> - if (unlikely(dl_se->dl_boosted || !start_dl_timer(p)))
>> - return;
>> - dl_se->dl_throttled = 1;
>> + /* we are in a new period */
>> + if (dl_time_before(dl_next_period(dl_se), rq_clock(rq)))
>> + return;
>> +
>> + /* are we ok with the deadline and density? */
>> + if (dl_time_before(rq_clock(rq), dl_se->deadline) &&
>> + !dl_entity_overflow(dl_se, dl_se, rq_clock(rq)))
>> + return;
>> +
>> + /* is the density the problem? */
>> + if (dl_entity_overflow(dl_se, dl_se, clock)) {
>> + /* reduces the runtime so it fits in the density */
>> + dl_se->runtime =
>> + (div64_u64(dl_se->dl_runtime << 20, dl_se->dl_deadline) *
>> + (dl_se->deadline - clock)) >> 20;
>> + WARN_ON(dl_se->runtime < 0);
>> + return;
>> }
>> +
>> + if (!start_dl_timer(p))
>> + return;
>> +
>> + dl_se->dl_throttled = 1;
>> }
>>
>> static
>> @@ -996,8 +1000,11 @@ static void enqueue_task_dl(struct rq *rq, struct task_struct *p, int flags)
>> * If that is the case, the task will be throttled and
>> * the replenishment timer will be set to the next period.
>> */
>> - if (!dl_se->dl_throttled && dl_is_constrained(dl_se))
>> - dl_check_constrained_dl(dl_se);
>> + if (!p->dl.dl_boosted &&
>> + !p->dl.dl_throttled &&
>> + dl_is_constrained(&p->dl))
>> + dl_check_constrained_dl(&p->dl);
>> +
>>
>> /*
>> * If p is throttled, we do nothing. In fact, if it exhausted

Subject: Re: [PATCH] sched/deadline: Throttle a constrained task activated if overflow

On 04/11/2017 03:36 AM, Xunlei Pang wrote:
> On 04/11/2017 at 04:47 AM, Daniel Bristot de Oliveira wrote:
>> On 04/10/2017 11:22 AM, Xunlei Pang wrote:
>>> I was testing Daniel's changes with his test case in the commit
>>> df8eac8cafce ("sched/deadline: Throttle a constrained deadline
>>> task activated after the deadline"), and tweaked it a little.
>>>
>>> Instead of having the runtime equal to the deadline, I tweaked
>>> runtime, deadline and sleep value to ensure every time it calls
>>> dl_check_constrained_dl() with "dl_se->deadline > rq_clock(rq)"
>>> as well as true dl_entity_overflow(), so it does replenishing
>>> every wake up in update_dl_entity(), and break its bandwidth.
>>>
>>> Daniel's test case had:
>>> attr.sched_runtime = 2 * 1000 * 1000; /* 2 ms */
>>> attr.sched_deadline = 2 * 1000 * 1000; /* 2 ms*/
>>> attr.sched_period = 2 * 1000 * 1000 * 1000; /* 2 s */
>>> ts.tv_sec = 0;
>>> ts.tv_nsec = 2000 * 1000; /* 2 ms */
>>>
>>> I changed it to:
>>> attr.sched_runtime = 5 * 1000 * 1000; /* 5 ms */
>>> attr.sched_deadline = 7 * 1000 * 1000; /* 7 ms */
>>> attr.sched_period = 1 * 1000 * 1000 * 1000; /* 1 s */
>>> ts.tv_sec = 0;
>>> ts.tv_nsec = 1000 * 1000; /* 1 ms */
>>>
>>> The change above can result in over 25% of the CPU on my machine.
>>>
>> In order to avoid the breakage, we improve dl_check_constrained_dl()
>>> to prevent dl tasks from being activated until the next period if it
>>> runs out of bandwidth of the current period.
>> The problem now is that, with your patch, we will throttle the task
> with some possible runtime. Moreover, the task did not break any
>> rule, like being awakened after the deadline - the user-space is not
>> misbehaving.
>>
>> That is +- what the reproducer is doing when using your patch,
>> (I put some trace_printk when noticing the overflow in the wakeup).
>>
>> <idle>-0 [007] d.h. 1505.066439: enqueue_task_dl: my current runtime is 3657361 and the deadline is 4613027 from now
>> <idle>-0 [007] d.h. 1505.066439: enqueue_task_dl: my dl_runtime is 5000000
>>
>> and so the task will be throttled with 3657361 ns runtime available.
>>
>> As we can see, it is really breaking the density:
>>
>> 5ms / 7ms (.714285) < 3657361 / 4613027 (.792833)
> For the runtime 5ms, deadline 7ms, 3657361 is the remaining runtime, so the runtime actually consumed is
> (5000000 - 3657361) = 1342639, and the actual density is 1342639 / 7000000 (.191806),

The past density... we are not looking back here.

> not breaking the admitted density 5ms / 7ms (.714285).
>
> "3657361 / 4613027 (.792833)" means the available density is greater than the admitted density (.714285);
> this makes dl_entity_overflow() true, so the task can't meet the requirement in the current period any more,
> and thus needs to be throttled until the next period.

This means that it has the potential to break the rules, but it has not broken
them yet. It will break if it runs longer than:

(dl_runtime / dl_deadline) * (deadline - now)

By throttling the task too early, you will punish user-space, even if the
user-space is not breaking the rules. I pointed out a less pessimistic solution,
but there are more possible solutions.

Actually, from the theoretical standpoint this is an open issue, that is,
the perfect solution is unknown for self-suspending constrained deadline
tasks.

-- Daniel

Subject: Re: [PATCH] sched/deadline: Throttle a constrained task activated if overflow

On 04/11/2017 09:06 AM, Xunlei Pang wrote:
> On 04/11/2017 at 01:53 PM, Xunlei Pang wrote:
>> On 04/11/2017 at 04:47 AM, Daniel Bristot de Oliveira wrote:
>>> On 04/10/2017 11:22 AM, Xunlei Pang wrote:
>>>> I was testing Daniel's changes with his test case in the commit
>>>> df8eac8cafce ("sched/deadline: Throttle a constrained deadline
>>>> task activated after the deadline"), and tweaked it a little.
>>>>
>>>> Instead of having the runtime equal to the deadline, I tweaked
>>>> runtime, deadline and sleep value to ensure every time it calls
>>>> dl_check_constrained_dl() with "dl_se->deadline > rq_clock(rq)"
>>>> as well as true dl_entity_overflow(), so it does replenishing
>>>> every wake up in update_dl_entity(), and break its bandwidth.
>>>>
>>>> Daniel's test case had:
>>>> attr.sched_runtime = 2 * 1000 * 1000; /* 2 ms */
>>>> attr.sched_deadline = 2 * 1000 * 1000; /* 2 ms*/
>>>> attr.sched_period = 2 * 1000 * 1000 * 1000; /* 2 s */
>>>> ts.tv_sec = 0;
>>>> ts.tv_nsec = 2000 * 1000; /* 2 ms */
>>>>
>>>> I changed it to:
>>>> attr.sched_runtime = 5 * 1000 * 1000; /* 5 ms */
>>>> attr.sched_deadline = 7 * 1000 * 1000; /* 7 ms */
>>>> attr.sched_period = 1 * 1000 * 1000 * 1000; /* 1 s */
>>>> ts.tv_sec = 0;
>>>> ts.tv_nsec = 1000 * 1000; /* 1 ms */
>>>>
>>>> The change above can result in over 25% of the CPU on my machine.
>>>>
>>> In order to avoid the breakage, we improve dl_check_constrained_dl()
>>>> to prevent dl tasks from being activated until the next period if it
>>>> runs out of bandwidth of the current period.
>>> The problem now is that, with your patch, we will throttle the task
>>> with some possible runtime. Moreover, the task did not break any
>>> rule, like being awakened after the deadline - the user-space is not
>>> misbehaving.
>>>
>>> That is +- what the reproducer is doing when using your patch,
>>> (I put some trace_printk when noticing the overflow in the wakeup).
>>>
>>> <idle>-0 [007] d.h. 1505.066439: enqueue_task_dl: my current runtime is 3657361 and the deadline is 4613027 from now
>>> <idle>-0 [007] d.h. 1505.066439: enqueue_task_dl: my dl_runtime is 5000000
>>>
>>> and so the task will be throttled with 3657361 ns runtime available.
>>>
>>> As we can see, it is really breaking the density:
>>>
>>> 5ms / 7ms (.714285) < 3657361 / 4613027 (.792833)
>>>
>>> Well, it is not breaking that much. Trying to be less pessimistic, we can
>>> compute a new runtime with the following equation:
>>>
>>> runtime = (dl_runtime / dl_deadline) * (deadline - now)
>>>
>>> That is, a runtime which fits in the task's density.
>> This is a good point; to make the best use of the remaining deadline, let me think more.
> I don't know if this will make things more complicated, but we can see in update_dl_entity():
> if (dl_time_before(dl_se->deadline, rq_clock(rq)) ||
> dl_entity_overflow(dl_se, pi_se, rq_clock(rq))) {
> dl_se->deadline = rq_clock(rq) + pi_se->dl_deadline;
> dl_se->runtime = pi_se->dl_runtime;
> }
>
> Looks like we have a similar issue for non-constrained tasks when dl_entity_overflow() is true: although
> its deadline is promoted (BTW, I think on overflow it should be dl_se->deadline += pi_se->dl_deadline?),
> the previous runtime is discarded. We may need to apply the same runtime-truncating logic there as well
> if we want to truncate the runtime.

For implicit deadline, the density equals the utilization.
So things here are less pessimistic, and we are using the same
rule for overflow and admission.

If you push "dl_se->deadline += pi_se->dl_deadline", you will give the
task its runtime over a window of (deadline - now) + period, that is:

runtime / ((deadline - now) + period)

As the deadline is in the future (this is the !dl_time_before(dl_se->deadline,
rq_clock(rq)) case), (deadline - now) + period > period. Therefore, the
next period will receive less than runtime / period, and the task will
receive a lower priority - a farther deadline.

Well, we could choose not to truncate the runtime, but then we would carry
things from the past. If the task self-suspended for a long
period, the sum of the residual runtime + the new runtime could give
a higher utilization than Q/P.

Moreover, the current rule provides a guarantee that Tommaso likes.
The CBS self-adjusts the period in the case of a timing drift of the
external "event" that activates the task, without breaking Q/P.

(He can explain more about it....)

The current rule guarantees Q/P, that is, the well-known CBS.

Am I missing something?

-- Daniel

2017-04-12 02:07:38

by Xunlei Pang

Subject: Re: [PATCH] sched/deadline: Throttle a constrained task activated if overflow

On 04/11/2017 at 05:24 PM, Daniel Bristot de Oliveira wrote:
> On 04/11/2017 09:06 AM, Xunlei Pang wrote:
>> On 04/11/2017 at 01:53 PM, Xunlei Pang wrote:
>>> On 04/11/2017 at 04:47 AM, Daniel Bristot de Oliveira wrote:
>>>> On 04/10/2017 11:22 AM, Xunlei Pang wrote:
>>>>> I was testing Daniel's changes with his test case in the commit
>>>>> df8eac8cafce ("sched/deadline: Throttle a constrained deadline
>>>>> task activated after the deadline"), and tweaked it a little.
>>>>>
>>>>> Instead of having the runtime equal to the deadline, I tweaked
>>>>> runtime, deadline and sleep value to ensure every time it calls
>>>>> dl_check_constrained_dl() with "dl_se->deadline > rq_clock(rq)"
>>>>> as well as true dl_entity_overflow(), so it does replenishing
>>>>> every wake up in update_dl_entity(), and break its bandwidth.
>>>>>
>>>>> Daniel's test case had:
>>>>> attr.sched_runtime = 2 * 1000 * 1000; /* 2 ms */
>>>>> attr.sched_deadline = 2 * 1000 * 1000; /* 2 ms*/
>>>>> attr.sched_period = 2 * 1000 * 1000 * 1000; /* 2 s */
>>>>> ts.tv_sec = 0;
>>>>> ts.tv_nsec = 2000 * 1000; /* 2 ms */
>>>>>
>>>>> I changed it to:
>>>>> attr.sched_runtime = 5 * 1000 * 1000; /* 5 ms */
>>>>> attr.sched_deadline = 7 * 1000 * 1000; /* 7 ms */
>>>>> attr.sched_period = 1 * 1000 * 1000 * 1000; /* 1 s */
>>>>> ts.tv_sec = 0;
>>>>> ts.tv_nsec = 1000 * 1000; /* 1 ms */
>>>>>
>>>>> The change above can result in over 25% of the CPU on my machine.
>>>>>
>>>> In order to avoid the breakage, we improve dl_check_constrained_dl()
>>>>> to prevent dl tasks from being activated until the next period if it
>>>>> runs out of bandwidth of the current period.
>>>> The problem now is that, with your patch, we will throttle the task
>>>> with some possible runtime. Moreover, the task did not break any
>>>> rule, like being awakened after the deadline - the user-space is not
>>>> misbehaving.
>>>>
>>>> That is +- what the reproducer is doing when using your patch,
>>>> (I put some trace_printk when noticing the overflow in the wakeup).
>>>>
>>>> <idle>-0 [007] d.h. 1505.066439: enqueue_task_dl: my current runtime is 3657361 and the deadline is 4613027 from now
>>>> <idle>-0 [007] d.h. 1505.066439: enqueue_task_dl: my dl_runtime is 5000000
>>>>
>>>> and so the task will be throttled with 3657361 ns runtime available.
>>>>
>>>> As we can see, it is really breaking the density:
>>>>
>>>> 5ms / 7ms (.714285) < 3657361 / 4613027 (.792833)
>>>>
>>>> Well, it is not breaking that much. Trying to be less pessimistic, we can
>>>> compute a new runtime with the following equation:
>>>>
>>>> runtime = (dl_runtime / dl_deadline) * (deadline - now)
>>>>
>>>> That is, a runtime which fits in the task's density.
>>> This is a good point; to make the best use of the remaining deadline, let me think more.
>> I don't know if this will make things more complicated, but we can see in update_dl_entity():
>> if (dl_time_before(dl_se->deadline, rq_clock(rq)) ||
>> dl_entity_overflow(dl_se, pi_se, rq_clock(rq))) {
>> dl_se->deadline = rq_clock(rq) + pi_se->dl_deadline;
>> dl_se->runtime = pi_se->dl_runtime;
>> }
>>
>> Looks like we have a similar issue for non-constrained tasks when dl_entity_overflow() is true: although
>> its deadline is promoted (BTW, I think on overflow it should be dl_se->deadline += pi_se->dl_deadline?),
>> the previous runtime is discarded. We may need to apply the same runtime-truncating logic there as well
>> if we want to truncate the runtime.
> For implicit deadline, the density equals the utilization.
> So things here are less pessimistic, and we are using the same
> rule for overflow and admission.
>
> If you push "dl_se->deadline += pi_se->dl_deadline", you will give the
> task its runtime over a window of (deadline - now) + period, that is:
>
> runtime / ((deadline - now) + period)
>
> As the deadline is in the future (this is the !dl_time_before(dl_se->deadline,
> rq_clock(rq)) case), (deadline - now) + period > period. Therefore, the
> next period will receive less than runtime / period, and the task will
> receive a lower priority - a farther deadline.

Yes, I think I got your meaning, thanks for the detailed explanation.
This is indeed an issue; that's why I suggested doing the truncation here as well.

But in the current implementation, promoting the deadline to "clock + pi_se->dl_deadline"
will cause overlapping between periods, kind of like:
[ period n        ]
        [ period n+1      ]
The overlapped time window is (deadline - now); is this a problem?

>
> Well, we could choose not to truncate the runtime, but then we would carry
> things from the past. If the task self-suspended for a long
> period, the sum of the residual runtime + the new runtime could give
> a higher utilization than Q/P.
>
> Moreover, the current rule provides a guarantee that Tommaso likes.
> The CBS self-adjusts the period in the case of a timing drift of the
> external "event" that activates the task, without breaking Q/P.
>
> (He can explain more about it....)

Anyway, implicit deadline tasks have the same issue: imagine in update_dl_entity() when "deadline > rq_clock"
and dl_entity_overflow() is true, the task can run part of dl_se->runtime within (dl_se->deadline - rq_clock), no?
What do you think about truncating the runtime in the common update_dl_entity()? Like:

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index a2ce590..e92aad6 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -497,11 +497,21 @@ static void update_dl_entity(struct sched_dl_entity *dl_se,
{
struct dl_rq *dl_rq = dl_rq_of_se(dl_se);
struct rq *rq = rq_of_dl_rq(dl_rq);
+ u64 clock = rq_clock(rq);

- if (dl_time_before(dl_se->deadline, rq_clock(rq)) ||
- dl_entity_overflow(dl_se, pi_se, rq_clock(rq))) {
- dl_se->deadline = rq_clock(rq) + pi_se->dl_deadline;
+ if (!dl_time_before(clock, dl_se->deadline)) {
+ dl_se->deadline = clock + pi_se->dl_deadline;
dl_se->runtime = pi_se->dl_runtime;
+ return;
+ }
+
+ /* Truncate the runtime to fit the density in the current period after overflow. */
+ if (dl_entity_overflow(dl_se, pi_se, clock)) {
+ unsigned long ratio;
+
+ /* TODO: save the quotient in a variable like dl_se->dl_bw */
+ ratio = to_ratio(pi_se->dl_deadline, pi_se->dl_runtime);
+ dl_se->runtime = (ratio * (dl_se->deadline - clock)) >> 20;
}
}

2017-04-12 05:26:44

by Xunlei Pang

Subject: Re: [PATCH] sched/deadline: Throttle a constrained task activated if overflow

On 04/11/2017 at 04:47 AM, Daniel Bristot de Oliveira wrote:
> On 04/10/2017 11:22 AM, Xunlei Pang wrote:
>> I was testing Daniel's changes with his test case in the commit
>> df8eac8cafce ("sched/deadline: Throttle a constrained deadline
>> task activated after the deadline"), and tweaked it a little.
>>
>> Instead of having the runtime equal to the deadline, I tweaked
>> runtime, deadline and sleep value to ensure every time it calls
>> dl_check_constrained_dl() with "dl_se->deadline > rq_clock(rq)"
>> as well as true dl_entity_overflow(), so it does replenishing
>> every wake up in update_dl_entity(), and break its bandwidth.
>>
>> Daniel's test case had:
>> attr.sched_runtime = 2 * 1000 * 1000; /* 2 ms */
>> attr.sched_deadline = 2 * 1000 * 1000; /* 2 ms*/
>> attr.sched_period = 2 * 1000 * 1000 * 1000; /* 2 s */
>> ts.tv_sec = 0;
>> ts.tv_nsec = 2000 * 1000; /* 2 ms */
>>
>> I changed it to:
>> attr.sched_runtime = 5 * 1000 * 1000; /* 5 ms */
>> attr.sched_deadline = 7 * 1000 * 1000; /* 7 ms */
>> attr.sched_period = 1 * 1000 * 1000 * 1000; /* 1 s */
>> ts.tv_sec = 0;
>> ts.tv_nsec = 1000 * 1000; /* 1 ms */
>>
>> The change above can result in over 25% of the CPU on my machine.
>>
>> In order to avoid the breakage, we improve dl_check_constrained_dl()
>> to prevent dl tasks from being activated until the next period if it
>> runs out of bandwidth of the current period.
> The problem now is that, with your patch, we will throttle the task
> with some possible runtime. Moreover, the task did not break any
> rule, like being awakened after the deadline - the user-space is not
> misbehaving.
>
> That is +- what the reproducer is doing when using your patch,
> (I put some trace_printk when noticing the overflow in the wakeup).
>
> <idle>-0 [007] d.h. 1505.066439: enqueue_task_dl: my current runtime is 3657361 and the deadline is 4613027 from now
> <idle>-0 [007] d.h. 1505.066439: enqueue_task_dl: my dl_runtime is 5000000
>
> and so the task will be throttled with 3657361 ns runtime available.
>
> As we can see, it is really breaking the density:
>
> 5ms / 7ms (.714285) < 3657361 / 4613027 (.792833)
>
> Well, it is not breaking that much. Trying to be less pessimistic, we can
> compute a new runtime with the following equation:
>
> runtime = (dl_runtime / dl_deadline) * (deadline - now)
>
> That is, a runtime which fits in the task's density.
>
> In code:
>
> dl_se->runtime = (div64_u64(dl_se->dl_runtime << 20, dl_se->dl_deadline) * (dl_se->deadline - rq_clock(rq))) >> 20;
>
> For example (a trace_printk) showing the adjusted runtime for the
> previous task:
> <idle>-0 [007] d.h. 1505.066440: enqueue_task_dl: I can still run for 3294208 (it is not that bad)
>
> With the adjusted runtime, we have the following density:
>
> 3294208 / 4613027 = .714109
>
> as .714109 < .714285
>
> Voilà. We can still use 3.2 ms of runtime! Not bad at all.
>
> However, even this result is, somehow, controversial, because we are
> reducing the task's reservation (Q/P). But that is very close to the
> implicit deadline behavior: when it "overflows", the runtime is truncated
> (a replenishment...) and the deadline is postponed. In this case, we do
> not need to postpone the deadline, just to "truncate" the runtime to fit
> in the density... it is not that bad.
>
> Another possibility is not to replenish a constrained deadline
> task twice in a period. In this case, the task would run past
> the deadline, potentially causing problems for implicit deadline tasks.
> But! even implicit deadline tasks would do it in an overloaded system. The
> problem is that, by using the density, the system easily becomes
> overloaded (too pessimistic).
>
> Resuming (so far):
>
> 1) We can be pessimistic to the constrained deadline task,
> with Xunlei's patch;
> 2) Compute a new runtime to fit in the density - somehow pessimistic.
> 3) Avoid replenishing a constrained deadline task before the next period,
> but the system will become overloaded very easily (density).
>
> I think option 2 is the middle ground.
> I am showing a _proof of concept_ patch below. It is based on
> Xunlei's changes in the verification. I need to polish it... but...
> you guys can read the idea in C.
>
> I have the 3) too, but I am not sure if it is as good as 2.
>
> We still need to think more about the solution.... test more... I will
> continue working on this tomorrow, discussing with luca and tommaso
> as well.
>
> Thoughts? Am I missing something (probably, I am tired :-) )?
>
> ---
> kernel/sched/deadline.c | 53 ++++++++++++++++++++++++++++---------------------
> 1 file changed, 30 insertions(+), 23 deletions(-)
>
> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index 7508129..6fa4887 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -696,34 +696,38 @@ void init_dl_task_timer(struct sched_dl_entity *dl_se)
> }
>
> /*
> - * During the activation, CBS checks if it can reuse the current task's
> - * runtime and period. If the deadline of the task is in the past, CBS
> - * cannot use the runtime, and so it replenishes the task. This rule
> - * works fine for implicit deadline tasks (deadline == period), and the
> - * CBS was designed for implicit deadline tasks. However, a task with
> - * constrained deadline (deadine < period) might be awakened after the
> - * deadline, but before the next period. In this case, replenishing the
> - * task would allow it to run for runtime / deadline. As in this case
> - * deadline < period, CBS enables a task to run for more than the
> - * runtime / period. In a very loaded system, this can cause a domino
> - * effect, making other tasks miss their deadlines.
> - *
> - * To avoid this problem, in the activation of a constrained deadline
> - * task after the deadline but before the next period, throttle the
> - * task and set the replenishing timer to the begin of the next period,
> - * unless it is boosted.
> + * XXX: Daniel will document it better in a clean patch, this is
> + * a proof of concept.
> */
> static inline void dl_check_constrained_dl(struct sched_dl_entity *dl_se)
> {
> struct task_struct *p = dl_task_of(dl_se);
> struct rq *rq = rq_of_dl_rq(dl_rq_of_se(dl_se));
> + u64 clock = rq_clock(rq);
>
> - if (dl_time_before(dl_se->deadline, rq_clock(rq)) &&
> - dl_time_before(rq_clock(rq), dl_next_period(dl_se))) {
> - if (unlikely(dl_se->dl_boosted || !start_dl_timer(p)))
> - return;
> - dl_se->dl_throttled = 1;
> + /* we are in a new period */
> + if (dl_time_before(dl_next_period(dl_se), rq_clock(rq)))
> + return;
> +
> + /* are we ok with the deadline and density? */
> + if (dl_time_before(rq_clock(rq), dl_se->deadline) &&
> + !dl_entity_overflow(dl_se, dl_se, rq_clock(rq)))
> + return;
> +
> + /* is the density the problem? */
> + if (dl_entity_overflow(dl_se, dl_se, clock)) {
> + /* reduces the runtime so it fits in the density */
> + dl_se->runtime =
> + (div64_u64(dl_se->dl_runtime << 20, dl_se->dl_deadline) *
> + (dl_se->deadline - clock)) >> 20;

The more I read the code, the more I am confused about why dl_entity_overflow() is needed; if the
task is before its deadline, just let it run. E.g. for (runtime 2ms, deadline 4ms, period 8ms):
for some reason the task was preempted after running a very short time, 0.1ms; after 0.9ms it was
scheduled back and immediately slept for 1ms. When it is awakened, the remaining runtime is 1.9ms and the
remaining deadline (deadline - now) within this period is 2ms, but dl_entity_overflow() is true.
However, it clearly can correctly run for 1.9ms before the deadline comes within this period.

We can add a condition in dl_runtime_exceeded(): if its deadline has passed, then zero
its runtime if positive, and a new period begins.

I did some tests with the following patch, and it seems to work well; please correct me if I am wrong.
---
kernel/sched/deadline.c | 16 ++++++++++++----
1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index a2ce590..600c530 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -498,8 +498,7 @@ static void update_dl_entity(struct sched_dl_entity *dl_se,
struct dl_rq *dl_rq = dl_rq_of_se(dl_se);
struct rq *rq = rq_of_dl_rq(dl_rq);

- if (dl_time_before(dl_se->deadline, rq_clock(rq)) ||
- dl_entity_overflow(dl_se, pi_se, rq_clock(rq))) {
+ if (!dl_time_before(rq_clock(rq), dl_se->deadline)) {
dl_se->deadline = rq_clock(rq) + pi_se->dl_deadline;
dl_se->runtime = pi_se->dl_runtime;
}
@@ -722,13 +721,22 @@ static inline void dl_check_constrained_dl(struct sched_dl_entity *dl_se)
dl_time_before(rq_clock(rq), dl_next_period(dl_se))) {
if (unlikely(dl_se->dl_boosted || !start_dl_timer(p)))
return;
+
+ if (dl_se->runtime > 0)
+ dl_se->runtime = 0;
+
dl_se->dl_throttled = 1;
}
}

static
-int dl_runtime_exceeded(struct sched_dl_entity *dl_se)
+int dl_runtime_exceeded(struct rq *rq, struct sched_dl_entity *dl_se)
{
+ if (!dl_time_before(rq_clock(rq), dl_se->deadline)) {
+ if (dl_se->runtime > 0)
+ dl_se->runtime = 0;
+ }
+
return (dl_se->runtime <= 0);
}

@@ -779,7 +787,7 @@ static void update_curr_dl(struct rq *rq)
dl_se->runtime -= delta_exec;

throttle:
- if (dl_runtime_exceeded(dl_se) || dl_se->dl_yielded) {
+ if (dl_runtime_exceeded(rq, dl_se) || dl_se->dl_yielded) {
dl_se->dl_throttled = 1;
__dequeue_task_dl(rq, curr, 0);
if (unlikely(dl_se->dl_boosted || !start_dl_timer(curr)))
--
1.8.3.1

2017-04-12 06:55:55

by luca abeni

Subject: Re: [PATCH] sched/deadline: Throttle a constrained task activated if overflow

Hi,

On Wed, 12 Apr 2017 13:27:32 +0800
Xunlei Pang <[email protected]> wrote:
[...]
> The more I read the code, the more I am confused why
> dl_entity_overflow() is needed, if the task is before its deadline,
> just let it run.

Sorry for jumping in this thread; I did not read all of the previous
emails, but I think I can answer this question :)

dl_entity_overflow() is needed to check if a deadline task that is
waking up can use its current runtime and scheduling deadline without
breaking the guarantees for other deadline tasks.

If the relative deadline of the tasks is equal to their period, this
check is mathematically correct (and its correctness has been formally
proved in some papers - see for example
http://disi.unitn.it/~abeni/tr-98-01.pdf).

If the relative deadline is different from the period, then the check
is an approximation (and this is the big issue here). I am still not
sure about what is the best thing to do in this case.

> E.g. for (runtime 2ms, deadline 4ms, period 8ms):
> for some reason the task was preempted after running a very short time, 0.1ms;
> after 0.9ms it was scheduled back and immediately slept for 1ms. When it
> is awakened, the remaining runtime is 1.9ms, the remaining
> deadline (deadline - now) within this period is 2ms, but
> dl_entity_overflow() is true. However, it clearly can correctly
> run for 1.9ms before the deadline comes within this period.

Sure, in this case the task can run for 1.9ms before the deadline...
But doing so, it might break the guarantees for other deadline tasks.
This is what the check is trying to avoid.


>
> We can add a condition in dl_runtime_exceeded(), if its deadline is
> passed, then zero its runtime if positive, and a new period begins.
>
> I did some tests with the following patch, seems it works well,
> please correct me if I am wrong. ---
> kernel/sched/deadline.c | 16 ++++++++++++----
> 1 file changed, 12 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index a2ce590..600c530 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -498,8 +498,7 @@ static void update_dl_entity(struct
> sched_dl_entity *dl_se, struct dl_rq *dl_rq = dl_rq_of_se(dl_se);
> struct rq *rq = rq_of_dl_rq(dl_rq);
>
> - if (dl_time_before(dl_se->deadline, rq_clock(rq)) ||
> - dl_entity_overflow(dl_se, pi_se, rq_clock(rq))) {
> + if (!dl_time_before(rq_clock(rq), dl_se->deadline)) {
> dl_se->deadline = rq_clock(rq) + pi_se->dl_deadline;
> dl_se->runtime = pi_se->dl_runtime;
> }

I think this will break the guarantees for tasks with relative deadline
equal to period (think about a task with runtime 5ms, period 10ms and
relative deadline 10ms... What happens if the task executes for 4.9ms,
then blocks and immediately wakes up?)


Luca

> @@ -722,13 +721,22 @@ static inline void
> dl_check_constrained_dl(struct sched_dl_entity *dl_se)
> dl_time_before(rq_clock(rq), dl_next_period(dl_se))) { if
> (unlikely(dl_se->dl_boosted || !start_dl_timer(p))) return;
> +
> + if (dl_se->runtime > 0)
> + dl_se->runtime = 0;
> +
> dl_se->dl_throttled = 1;
> }
> }
>
>  static
> -int dl_runtime_exceeded(struct sched_dl_entity *dl_se)
> +int dl_runtime_exceeded(struct rq *rq, struct sched_dl_entity *dl_se)
>  {
> +	if (!dl_time_before(rq_clock(rq), dl_se->deadline)) {
> +		if (dl_se->runtime > 0)
> +			dl_se->runtime = 0;
> +	}
> +
>  	return (dl_se->runtime <= 0);
>  }
>
> @@ -779,7 +787,7 @@ static void update_curr_dl(struct rq *rq)
>  		dl_se->runtime -= delta_exec;
>
>  throttle:
> -	if (dl_runtime_exceeded(dl_se) || dl_se->dl_yielded) {
> +	if (dl_runtime_exceeded(rq, dl_se) || dl_se->dl_yielded) {
>  		dl_se->dl_throttled = 1;
>  		__dequeue_task_dl(rq, curr, 0);
>  		if (unlikely(dl_se->dl_boosted || !start_dl_timer(curr)))

2017-04-12 12:29:22

by Xunlei Pang

[permalink] [raw]
Subject: Re: [PATCH] sched/deadline: Throttle a constrained task activated if overflow

On 04/12/2017 at 02:55 PM, Luca Abeni wrote:
> Hi,
>
> On Wed, 12 Apr 2017 13:27:32 +0800
> Xunlei Pang <[email protected]> wrote:
> [...]
>> The more I read the code, the more I am confused about why
>> dl_entity_overflow() is needed; if the task is before its deadline,
>> just let it run.
> Sorry for jumping in this thread; I did not read all of the previous
> emails, but I think I can answer this question :)
>
> dl_entity_overflow() is needed to check if a deadline task that is
> waking up can use its current runtime and scheduling deadline without
> breaking the guarantees for other deadline tasks.
>
> If the relative deadline of the tasks is equal to their period, this
> check is mathematically correct (and its correctness has been formally
> proved in some papers - see for example
> http://disi.unitn.it/~abeni/tr-98-01.pdf).

Will have a look, thanks for the pointer.

> If the relative deadline is different from the period, then the check
> is an approximation (and this is the big issue here). I am still not
> sure about what is the best thing to do in this case.
>
>> E.g. for a task with (runtime 2ms, deadline 4ms, period 8ms):
>> for some reason it was preempted after running a very short time,
>> 0.1ms; after 0.9ms it was scheduled back and immediately slept for
>> 1ms. When it is awakened, the remaining runtime is 1.9ms and the
>> remaining deadline (deadline-now) within this period is 2ms, but
>> dl_entity_overflow() is true. However, it clearly can correctly
>> run for 1.9ms before the deadline comes within this period.
> Sure, in this case the task can run for 1.9ms before the deadline...
> But doing so, it might break the guarantees for other deadline tasks.
> This is what the check is trying to avoid.

Imagine this deadline task was preempted after running 0.1ms by another task with an
earlier absolute deadline that ran for 1.9ms; after being scheduled back it will have the
same status (1.9ms remaining runtime, 2ms remaining deadline) under the current implementation.

I can hardly see the difference; why is wake-up (which checks dl_entity_overflow())
so special?
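To make the check being asked about concrete, it can be modeled in a few lines of user-space C. This is only a sketch: the helper name and the cross-multiplied form are illustrative, and the in-kernel dl_entity_overflow() avoids the division by shifting its operands by DL_SCALE instead.

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Sketch of the CBS wakeup ("overflow") test under discussion: a waking
 * task may keep its current (runtime, deadline) pair only if
 *
 *     runtime / (deadline - now) <= dl_runtime / dl_period
 *
 * i.e. consuming the leftover runtime before the current deadline does
 * not need more bandwidth than the task reserved.  Cross-multiplying
 * avoids the division (the kernel shifts both sides by DL_SCALE to keep
 * the products inside 64 bits).
 */
static bool dl_entity_overflow_model(uint64_t runtime, uint64_t deadline,
				     uint64_t now, uint64_t dl_runtime,
				     uint64_t dl_period)
{
	return runtime * dl_period > (deadline - now) * dl_runtime;
}
```

With the (runtime 2ms, deadline 4ms, period 8ms) example above, 1.9ms of leftover runtime against 2ms of leftover deadline gives 1.9/2 > 2/8, so the check fires on wake-up even though the deadline has not passed.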

>
>> We can add a condition in dl_runtime_exceeded(): if its deadline has
>> passed, zero its runtime if positive, and a new period begins.
>>
>> I did some tests with the following patch, and it seems to work well;
>> please correct me if I am wrong.
>> ---
>> kernel/sched/deadline.c | 16 ++++++++++++----
>> 1 file changed, 12 insertions(+), 4 deletions(-)
>>
>> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
>> index a2ce590..600c530 100644
>> --- a/kernel/sched/deadline.c
>> +++ b/kernel/sched/deadline.c
>> @@ -498,8 +498,7 @@ static void update_dl_entity(struct sched_dl_entity *dl_se,
>>  	struct dl_rq *dl_rq = dl_rq_of_se(dl_se);
>>  	struct rq *rq = rq_of_dl_rq(dl_rq);
>>
>> -	if (dl_time_before(dl_se->deadline, rq_clock(rq)) ||
>> -	    dl_entity_overflow(dl_se, pi_se, rq_clock(rq))) {
>> +	if (!dl_time_before(rq_clock(rq), dl_se->deadline)) {
>>  		dl_se->deadline = rq_clock(rq) + pi_se->dl_deadline;
>>  		dl_se->runtime = pi_se->dl_runtime;
>>  	}
> I think this will break the guarantees for tasks with relative deadline
> equal to period (think about a task with runtime 5ms, period 10ms and
> relative deadline 10ms... What happens if the task executes for 4.9ms,
> then blocks and immediately wakes up?)

For your example, dl_se->deadline is after now, so the if condition is
false and update_dl_entity() actually does nothing. That is, after wake-up
the task carries (5ms-4.9ms)=0.1ms remaining runtime and (10ms-4.9ms)=5.1ms
remaining deadline/period, and continues within the current period. After
running a further 0.1ms, it will be throttled by the timer in update_curr_dl().

It's actually the original logic, I just removed dl_entity_overflow() condition.
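For Luca's 5ms/10ms example, the behavior of the proposed condition can be sketched in user space. The struct and helper names below are made up for illustration; only the replenishment condition from the patch is modeled.

```c
#include <stdint.h>

/* A toy stand-in for the fields used by the proposed check. */
struct dl_model {
	uint64_t runtime;     /* remaining runtime (ns) */
	uint64_t deadline;    /* absolute deadline (ns) */
	uint64_t dl_runtime;  /* runtime per period (ns) */
	uint64_t dl_deadline; /* relative deadline (ns) */
};

/*
 * Model of update_dl_entity() with the patched condition: replenish
 * only once the old deadline has been reached, i.e. the
 * dl_entity_overflow() test is dropped entirely.
 */
static void update_dl_entity_model(struct dl_model *se, uint64_t now)
{
	if (now >= se->deadline) {	/* !dl_time_before(now, deadline) */
		se->deadline = now + se->dl_deadline;
		se->runtime = se->dl_runtime;
	}
	/* otherwise keep the current (runtime, deadline) pair */
}
```

A task with runtime 5ms and relative deadline 10ms that blocks after 4.9ms and wakes immediately keeps its 0.1ms/5.1ms pair; only a wake-up at or after the old deadline triggers a full replenishment.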


Regards,
Xunlei

>
>
> Luca
>
>> @@ -722,13 +721,22 @@ static inline void dl_check_constrained_dl(struct sched_dl_entity *dl_se)
>>  	    dl_time_before(rq_clock(rq), dl_next_period(dl_se))) {
>>  		if (unlikely(dl_se->dl_boosted || !start_dl_timer(p)))
>>  			return;
>> +
>> +		if (dl_se->runtime > 0)
>> +			dl_se->runtime = 0;
>> +
>>  		dl_se->dl_throttled = 1;
>>  	}
>>  }
>>
>> static
>> -int dl_runtime_exceeded(struct sched_dl_entity *dl_se)
>> +int dl_runtime_exceeded(struct rq *rq, struct sched_dl_entity *dl_se)
>> {
>> + if (!dl_time_before(rq_clock(rq), dl_se->deadline)) {
>> + if (dl_se->runtime > 0)
>> + dl_se->runtime = 0;
>> + }
>> +
>> return (dl_se->runtime <= 0);
>> }
>>
>> @@ -779,7 +787,7 @@ static void update_curr_dl(struct rq *rq)
>> dl_se->runtime -= delta_exec;
>>
>> throttle:
>> - if (dl_runtime_exceeded(dl_se) || dl_se->dl_yielded) {
>> + if (dl_runtime_exceeded(rq, dl_se) || dl_se->dl_yielded) {
>> dl_se->dl_throttled = 1;
>> __dequeue_task_dl(rq, curr, 0);
>> if (unlikely(dl_se->dl_boosted || !start_dl_timer(curr)))

2017-04-12 12:36:06

by Steven Rostedt

[permalink] [raw]
Subject: Re: [PATCH] sched/deadline: Throttle a constrained task activated if overflow

On Wed, 12 Apr 2017 20:30:04 +0800
Xunlei Pang <[email protected]> wrote:

> Imagine this deadline task was preempted after running 0.1ms by another task with an
> earlier absolute deadline that ran for 1.9ms; after being scheduled back it will have the
> same status (1.9ms remaining runtime, 2ms remaining deadline) under the current implementation.
>
> I can hardly see the difference; why is wake-up (which checks dl_entity_overflow())
> so special?

The difference between a wake-up and being preempted is that one is
voluntarily done by the application, the other is done by the
scheduler. The preemption done by the scheduler must very well be in
line with the algorithms of the scheduler. But for cases where the task
voluntarily sleeps and then wakes up at some arbitrary time, the
scheduler has no idea what its intent was. Not to mention, it must
still guarantee the bandwidth for other tasks.

-- Steve

2017-04-12 13:10:22

by luca abeni

[permalink] [raw]
Subject: Re: [PATCH] sched/deadline: Throttle a constrained task activated if overflow

On Wed, 12 Apr 2017 20:30:04 +0800
Xunlei Pang <[email protected]> wrote:
[...]
> > If the relative deadline is different from the period, then the
> > check is an approximation (and this is the big issue here). I am
> > still not sure about what is the best thing to do in this case.
> >
> >> E.g. for a task with (runtime 2ms, deadline 4ms, period 8ms):
> >> for some reason it was preempted after running a very short time,
> >> 0.1ms; after 0.9ms it was scheduled back and immediately slept for
> >> 1ms. When it is awakened, the remaining runtime is 1.9ms and the
> >> remaining deadline (deadline-now) within this period is 2ms, but
> >> dl_entity_overflow() is true. However, it clearly can correctly
> >> run for 1.9ms before the deadline comes within this period.
> > Sure, in this case the task can run for 1.9ms before the deadline...
> > But doing so, it might break the guarantees for other deadline
> > tasks. This is what the check is trying to avoid.
>
> Imagine this deadline task was preempted after running 0.1ms by another
> task with an earlier absolute deadline that ran for 1.9ms; after being
> scheduled back it will have the same status (1.9ms remaining runtime,
> 2ms remaining deadline) under the current implementation.

The big difference is that in your first example the task blocks for
some time while being the earliest-deadline task (so, the scheduling
algorithm would give some execution time to the task, but the task
does not use it... So, when the task wakes up, it has to check if now it
is too late for using its reserved execution time).
On the other hand, in this second example the task is preempted by an
earlier-deadline (higher priority) task, so there is no risk to have
execution time reserved to the task that the task does not use
(if I understand well your examples).


> >> We can add a condition in dl_runtime_exceeded(): if its deadline has
> >> passed, zero its runtime if positive, and a new period begins.
> >>
> >> I did some tests with the following patch, and it seems to work well;
> >> please correct me if I am wrong.
> >> ---
> >> kernel/sched/deadline.c | 16 ++++++++++++----
> >> 1 file changed, 12 insertions(+), 4 deletions(-)
> >>
> >> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> >> index a2ce590..600c530 100644
> >> --- a/kernel/sched/deadline.c
> >> +++ b/kernel/sched/deadline.c
> >> @@ -498,8 +498,7 @@ static void update_dl_entity(struct sched_dl_entity *dl_se,
> >>  	struct dl_rq *dl_rq = dl_rq_of_se(dl_se);
> >>  	struct rq *rq = rq_of_dl_rq(dl_rq);
> >>
> >> -	if (dl_time_before(dl_se->deadline, rq_clock(rq)) ||
> >> -	    dl_entity_overflow(dl_se, pi_se, rq_clock(rq))) {
> >> +	if (!dl_time_before(rq_clock(rq), dl_se->deadline)) {
> >>  		dl_se->deadline = rq_clock(rq) + pi_se->dl_deadline;
> >>  		dl_se->runtime = pi_se->dl_runtime;
> >>  	}
> > I think this will break the guarantees for tasks with relative
> > deadline equal to period (think about a task with runtime 5ms,
> > period 10ms and relative deadline 10ms... What happens if the task
> > executes for 4.9ms, then blocks and immediately wakes up?)
>
> For your example, dl_se->deadline is after now, so the if condition is
> false and update_dl_entity() actually does nothing. That is, after
> wake-up the task carries (5ms-4.9ms)=0.1ms remaining runtime and
> (10ms-4.9ms)=5.1ms remaining deadline/period, and continues within the
> current period. After running a further 0.1ms, it will be throttled by
> the timer in update_curr_dl().

Ok, sorry; I saw you inverted rq_clock(rq) and dl_se->deadline, but I
did not notice you added a "!". My fault.

In this case, with this change the task cannot consume more than
runtime every period (which is good), but it still risks to cause
unexpected missed deadlines.
(if relative deadline == period, on uni-processor the current algorithm
guarantees that the runtime will always be consumed before the
scheduling deadline; on multi-processor it guarantees that the runtime
will be consumed before a maximum delay from the deadline; with this
change, it is not possible to provide this guarantee anymore).


Luca

>
> It's actually the original logic, I just removed dl_entity_overflow()
> condition.
>
>
> Regards,
> Xunlei
>
> >
> >
> > Luca
> >
> >> @@ -722,13 +721,22 @@ static inline void dl_check_constrained_dl(struct sched_dl_entity *dl_se)
> >>  	    dl_time_before(rq_clock(rq), dl_next_period(dl_se))) {
> >>  		if (unlikely(dl_se->dl_boosted || !start_dl_timer(p)))
> >>  			return;
> >> +
> >> +		if (dl_se->runtime > 0)
> >> +			dl_se->runtime = 0;
> >> +
> >>  		dl_se->dl_throttled = 1;
> >>  	}
> >>  }
> >>
> >>  static
> >> -int dl_runtime_exceeded(struct sched_dl_entity *dl_se)
> >> +int dl_runtime_exceeded(struct rq *rq, struct sched_dl_entity *dl_se)
> >>  {
> >> +	if (!dl_time_before(rq_clock(rq), dl_se->deadline)) {
> >> +		if (dl_se->runtime > 0)
> >> +			dl_se->runtime = 0;
> >> +	}
> >> +
> >>  	return (dl_se->runtime <= 0);
> >>  }
> >>
> >> @@ -779,7 +787,7 @@ static void update_curr_dl(struct rq *rq)
> >>  		dl_se->runtime -= delta_exec;
> >>
> >>  throttle:
> >> -	if (dl_runtime_exceeded(dl_se) || dl_se->dl_yielded) {
> >> +	if (dl_runtime_exceeded(rq, dl_se) || dl_se->dl_yielded) {
> >>  		dl_se->dl_throttled = 1;
> >>  		__dequeue_task_dl(rq, curr, 0);
> >>  		if (unlikely(dl_se->dl_boosted || !start_dl_timer(curr)))
>

2017-04-13 03:05:19

by Xunlei Pang

[permalink] [raw]
Subject: Re: [PATCH] sched/deadline: Throttle a constrained task activated if overflow

On 04/12/2017 at 09:10 PM, luca abeni wrote:
> On Wed, 12 Apr 2017 20:30:04 +0800
> Xunlei Pang <[email protected]> wrote:
> [...]
>>> If the relative deadline is different from the period, then the
>>> check is an approximation (and this is the big issue here). I am
>>> still not sure about what is the best thing to do in this case.
>>>
>>>> E.g. for a task with (runtime 2ms, deadline 4ms, period 8ms):
>>>> for some reason it was preempted after running a very short time,
>>>> 0.1ms; after 0.9ms it was scheduled back and immediately slept for
>>>> 1ms. When it is awakened, the remaining runtime is 1.9ms and the
>>>> remaining deadline (deadline-now) within this period is 2ms, but
>>>> dl_entity_overflow() is true. However, it clearly can correctly
>>>> run for 1.9ms before the deadline comes within this period.
>>> Sure, in this case the task can run for 1.9ms before the deadline...
>>> But doing so, it might break the guarantees for other deadline
>>> tasks. This is what the check is trying to avoid.
>> Imagine this deadline task was preempted after running 0.1ms by another
>> task with an earlier absolute deadline that ran for 1.9ms; after being
>> scheduled back it will have the same status (1.9ms remaining runtime,
>> 2ms remaining deadline) under the current implementation.
> The big difference is that in your first example the task blocks for
> some time while being the earliest-deadline task (so, the scheduling
> algorithm would give some execution time to the task, but the task
> does not use it... So, when the task wakes up, it has to check if now it
> is too late for using its reserved execution time).
> On the other hand, in this second example the task is preempted by an
> earlier-deadline (higher priority) task, so there is no risk to have
> execution time reserved to the task that the task does not use
> (if I understand well your examples).

Yes, that makes sense, thanks!

>
>
>>>> We can add a condition in dl_runtime_exceeded(): if its deadline has
>>>> passed, zero its runtime if positive, and a new period begins.
>>>>
>>>> I did some tests with the following patch, and it seems to work well;
>>>> please correct me if I am wrong.
>>>> ---
>>>> kernel/sched/deadline.c | 16 ++++++++++++----
>>>> 1 file changed, 12 insertions(+), 4 deletions(-)
>>>>
>>>> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
>>>> index a2ce590..600c530 100644
>>>> --- a/kernel/sched/deadline.c
>>>> +++ b/kernel/sched/deadline.c
>>>> @@ -498,8 +498,7 @@ static void update_dl_entity(struct sched_dl_entity *dl_se,
>>>>  	struct dl_rq *dl_rq = dl_rq_of_se(dl_se);
>>>>  	struct rq *rq = rq_of_dl_rq(dl_rq);
>>>>
>>>> -	if (dl_time_before(dl_se->deadline, rq_clock(rq)) ||
>>>> -	    dl_entity_overflow(dl_se, pi_se, rq_clock(rq))) {
>>>> +	if (!dl_time_before(rq_clock(rq), dl_se->deadline)) {
>>>>  		dl_se->deadline = rq_clock(rq) + pi_se->dl_deadline;
>>>>  		dl_se->runtime = pi_se->dl_runtime;
>>>>  	}
>>> I think this will break the guarantees for tasks with relative
>>> deadline equal to period (think about a task with runtime 5ms,
>>> period 10ms and relative deadline 10ms... What happens if the task
>>> executes for 4.9ms, then blocks and immediately wakes up?)
>> For your example, dl_se->deadline is after now, so the if condition is
>> false and update_dl_entity() actually does nothing. That is, after
>> wake-up the task carries (5ms-4.9ms)=0.1ms remaining runtime and
>> (10ms-4.9ms)=5.1ms remaining deadline/period, and continues within the
>> current period. After running a further 0.1ms, it will be throttled by
>> the timer in update_curr_dl().
> Ok, sorry; I saw you inverted rq_clock(rq) and dl_se->deadline, but I
> did not notice you added a "!". My fault.
>
> In this case, with this change the task cannot consume more than
> runtime every period (which is good), but it still risks to cause
> unexpected missed deadlines.
> (if relative deadline == period, on uni-processor the current algorithm
> guarantees that the runtime will always be consumed before the
> scheduling deadline; on multi-processor it guarantees that the runtime
> will be consumed before a maximum delay from the deadline; with this
> change, it is not possible to provide this guarantee anymore).
>
>
> Luca
>
>> It's actually the original logic, I just removed dl_entity_overflow()
>> condition.
>>
>>
>> Regards,
>> Xunlei
>>
>>>
>>> Luca
>>>
>>>> @@ -722,13 +721,22 @@ static inline void dl_check_constrained_dl(struct sched_dl_entity *dl_se)
>>>>  	    dl_time_before(rq_clock(rq), dl_next_period(dl_se))) {
>>>>  		if (unlikely(dl_se->dl_boosted || !start_dl_timer(p)))
>>>>  			return;
>>>> +
>>>> +		if (dl_se->runtime > 0)
>>>> +			dl_se->runtime = 0;
>>>> +
>>>>  		dl_se->dl_throttled = 1;
>>>>  	}
>>>>  }
>>>>
>>>>  static
>>>> -int dl_runtime_exceeded(struct sched_dl_entity *dl_se)
>>>> +int dl_runtime_exceeded(struct rq *rq, struct sched_dl_entity *dl_se)
>>>>  {
>>>> +	if (!dl_time_before(rq_clock(rq), dl_se->deadline)) {
>>>> +		if (dl_se->runtime > 0)
>>>> +			dl_se->runtime = 0;
>>>> +	}
>>>> +
>>>>  	return (dl_se->runtime <= 0);
>>>>  }
>>>>
>>>> @@ -779,7 +787,7 @@ static void update_curr_dl(struct rq *rq)
>>>>  		dl_se->runtime -= delta_exec;
>>>>
>>>>  throttle:
>>>> -	if (dl_runtime_exceeded(dl_se) || dl_se->dl_yielded) {
>>>> +	if (dl_runtime_exceeded(rq, dl_se) || dl_se->dl_yielded) {
>>>>  		dl_se->dl_throttled = 1;
>>>>  		__dequeue_task_dl(rq, curr, 0);
>>>>  		if (unlikely(dl_se->dl_boosted || !start_dl_timer(curr)))

2017-04-13 08:35:38

by Xunlei Pang

[permalink] [raw]
Subject: Re: [PATCH] sched/deadline: Throttle a constrained task activated if overflow

On 04/11/2017 at 04:47 AM, Daniel Bristot de Oliveira wrote:
> On 04/10/2017 11:22 AM, Xunlei Pang wrote:
>> I was testing Daniel's changes with his test case in the commit
>> df8eac8cafce ("sched/deadline: Throttle a constrained deadline
>> task activated after the deadline"), and tweaked it a little.
>>
>> Instead of having the runtime equal to the deadline, I tweaked
>> runtime, deadline and sleep value to ensure every time it calls
>> dl_check_constrained_dl() with "dl_se->deadline > rq_clock(rq)"
>> as well as true dl_entity_overflow(), so it does replenishing
>> every wake up in update_dl_entity(), and break its bandwidth.
>>
>> Daniel's test case had:
>> attr.sched_runtime = 2 * 1000 * 1000; /* 2 ms */
>> attr.sched_deadline = 2 * 1000 * 1000; /* 2 ms*/
>> attr.sched_period = 2 * 1000 * 1000 * 1000; /* 2 s */
>> ts.tv_sec = 0;
>> ts.tv_nsec = 2000 * 1000; /* 2 ms */
>>
>> I changed it to:
>> attr.sched_runtime = 5 * 1000 * 1000; /* 5 ms */
>> attr.sched_deadline = 7 * 1000 * 1000; /* 7 ms */
>> attr.sched_period = 1 * 1000 * 1000 * 1000; /* 1 s */
>> ts.tv_sec = 0;
>> ts.tv_nsec = 1000 * 1000; /* 1 ms */
>>
>> The change above can result in over 25% of the CPU on my machine.
>>
>> In order to avoid the breakage, we improve dl_check_constrained_dl()
>> to prevent dl tasks from being activated until the next period if they
>> run out of bandwidth in the current period.
> The problem now is that, with your patch, we will throttle the task
> while it may still have some runtime left. Moreover, the task did not
> break any rule, like being awakened after the deadline - the user space
> is not misbehaving.
>
> That is +- what the reproducer is doing when using your patch,
> (I put some trace_printk when noticing the overflow in the wakeup).
>
> <idle>-0 [007] d.h. 1505.066439: enqueue_task_dl: my current runtime is 3657361 and the deadline is 4613027 from now
> <idle>-0 [007] d.h. 1505.066439: enqueue_task_dl: my dl_runtime is 5000000
>
> and so the task will be throttled with 3657361 ns runtime available.
>
> As we can see, it is really breaking the density:
>
> 5ms / 7ms (.714285) < 3657361 / 4613027 (.792833)
>
> Well, it is not breaking that much. Trying to be less pessimistic, we
> can compute a new runtime with the following equation:
>
> runtime = (dl_runtime / dl_deadline) * (deadline - now)
>
> That is, a runtime which fits in the task's density.
>
> In code:
>
> dl_se->runtime = (div64_u64(dl_se->dl_runtime << 20, dl_se->dl_deadline) * (dl_se->deadline - rq_clock(rq))) >> 20;
>
> For example (a trace_printk) showing the adjusted runtime for the
> previous task:
> <idle>-0 [007] d.h. 1505.066440: enqueue_task_dl: I can still run for 3294208 (it is not that bad)
>
> With the adjusted runtime, we have the following density:
>
> 3294208 / 4613027 = .714109
>
> as .714109 < .714285
>
> Voilà. We can still use 3.2 ms of runtime! Not bad at all.
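The fixed-point computation above can be reproduced in user space as a small sketch, with plain 64-bit division standing in for div64_u64(); the 4613027ns laxity is taken from the trace above, and the result lands near the 3294208ns figure in the trace.

```c
#include <stdint.h>

/*
 * Daniel's proposed truncation: scale the leftover runtime down so that
 *
 *     runtime / (deadline - now) <= dl_runtime / dl_deadline
 *
 * using a 20-bit fixed-point ratio, as in the snippet above.
 */
static uint64_t dl_truncate_runtime(uint64_t dl_runtime, uint64_t dl_deadline,
				    uint64_t laxity /* deadline - now */)
{
	return (((dl_runtime << 20) / dl_deadline) * laxity) >> 20;
}
```

With dl_runtime = 5ms, dl_deadline = 7ms and 4613027ns left until the deadline, this yields roughly 3.29ms, and the cross-multiplied density check confirms the adjusted runtime fits within 5/7.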
>
> However, even this result is somewhat controversial, because we are
> reducing the task's reservation (Q/P). But that is very close to the
> implicit deadline behavior: when it "overflows", the runtime is truncated
> (a replenishment...) and the deadline is postponed. In this case, we do
> not need to postpone the deadline, just to "truncate" the runtime so it
> fits in the density... it is not that bad.
>
> Another possibility is not to replenish a constrained deadline
> task twice in a period. In this case, the task would run past
> the deadline, potentially causing problems for implicit deadline tasks.
> But even implicit deadline tasks would do that in an overloaded system.
> The problem is that, by using the density, the system easily becomes
> overloaded (too pessimistic).
>
> Summarizing (so far):
>
> 1) We can be pessimistic to the constrained deadline task,
> with Xunlei's patch;
> 2) Compute a new runtime to fit in the density - somewhat pessimistic.
> 3) Avoid replenishing a constrained deadline task before the next period,
> but the system will become overloaded very easily (density).
>
> I think option 2 is the middle ground.
> I am showing a _proof of concept_ patch below. It is based on
> Xunlei's changes in the verification. I need to polish it... but...
> you guys can read the idea in C.
>
> I have 3) too, but I am not sure if it is as good as 2.
>
> We still need to think more about the solution.... test more... I will
> continue working on this tomorrow, discussing with luca and tommaso
> as well.
>
> Thoughts? Am I missing something (probably, I am tired :-) )?

Hi Daniel,

I think you can send out a formal patch for review, thanks!

Regards,
Xunlei

>
> ---
> kernel/sched/deadline.c | 53 ++++++++++++++++++++++++++++---------------------
> 1 file changed, 30 insertions(+), 23 deletions(-)
>
> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index 7508129..6fa4887 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -696,34 +696,38 @@ void init_dl_task_timer(struct sched_dl_entity *dl_se)
> }
>
> /*
> - * During the activation, CBS checks if it can reuse the current task's
> - * runtime and period. If the deadline of the task is in the past, CBS
> - * cannot use the runtime, and so it replenishes the task. This rule
> - * works fine for implicit deadline tasks (deadline == period), and the
> - * CBS was designed for implicit deadline tasks. However, a task with
> - * constrained deadline (deadine < period) might be awakened after the
> - * deadline, but before the next period. In this case, replenishing the
> - * task would allow it to run for runtime / deadline. As in this case
> - * deadline < period, CBS enables a task to run for more than the
> - * runtime / period. In a very loaded system, this can cause a domino
> - * effect, making other tasks miss their deadlines.
> - *
> - * To avoid this problem, in the activation of a constrained deadline
> - * task after the deadline but before the next period, throttle the
> - * task and set the replenishing timer to the begin of the next period,
> - * unless it is boosted.
> + * XXX: Daniel will document it better in a clean patch, this is
> + * a proof of concept.
> */
>  static inline void dl_check_constrained_dl(struct sched_dl_entity *dl_se)
>  {
>  	struct task_struct *p = dl_task_of(dl_se);
>  	struct rq *rq = rq_of_dl_rq(dl_rq_of_se(dl_se));
> +	u64 clock = rq_clock(rq);
>
> -	if (dl_time_before(dl_se->deadline, rq_clock(rq)) &&
> -	    dl_time_before(rq_clock(rq), dl_next_period(dl_se))) {
> -		if (unlikely(dl_se->dl_boosted || !start_dl_timer(p)))
> -			return;
> -		dl_se->dl_throttled = 1;
> +	/* we are in a new period */
> +	if (dl_time_before(dl_next_period(dl_se), rq_clock(rq)))
> +		return;
> +
> +	/* are we ok with the deadline and density? */
> +	if (dl_time_before(rq_clock(rq), dl_se->deadline) &&
> +	    !dl_entity_overflow(dl_se, dl_se, rq_clock(rq)))
> +		return;
> +
> +	/* is the density the problem? */
> +	if (dl_entity_overflow(dl_se, dl_se, clock)) {
> +		/* reduces the runtime so it fits in the density */
> +		dl_se->runtime =
> +			(div64_u64(dl_se->dl_runtime << 20, dl_se->dl_deadline) *
> +			(dl_se->deadline - clock)) >> 20;
> +		WARN_ON(dl_se->runtime < 0);
> +		return;
>  	}
> +
> +	if (!start_dl_timer(p))
> +		return;
> +
> +	dl_se->dl_throttled = 1;
>  }
>
> static
> @@ -996,8 +1000,11 @@ static void enqueue_task_dl(struct rq *rq, struct task_struct *p, int flags)
>  	 * If that is the case, the task will be throttled and
>  	 * the replenishment timer will be set to the next period.
>  	 */
> -	if (!dl_se->dl_throttled && dl_is_constrained(dl_se))
> -		dl_check_constrained_dl(dl_se);
> +	if (!p->dl.dl_boosted &&
> +	    !p->dl.dl_throttled &&
> +	    dl_is_constrained(&p->dl))
> +		dl_check_constrained_dl(&p->dl);
> +
>
>  	/*
>  	 * If p is throttled, we do nothing. In fact, if it exhausted