Subject: Re: [PATCH] sched/deadline: Throttle a constrained task activated if overflow
From: Daniel Bristot de Oliveira
To: xlpang@redhat.com, linux-kernel@vger.kernel.org
Cc: Peter Zijlstra, Juri Lelli, Ingo Molnar, Luca Abeni, Steven Rostedt,
    Tommaso Cucinotta, Rômulo Silva de Oliveira, Mathieu Poirier
Date: Tue, 11 Apr 2017 11:24:26 +0200
Message-ID: <541883c2-a8bb-0b03-0987-292ee8b2b25e@redhat.com>
In-Reply-To: <58EC8065.6030806@redhat.com>
References: <1491816131-20268-1-git-send-email-xlpang@redhat.com>
 <4d24b94c-1e57-b82c-ff0f-099157ba5526@redhat.com>
 <58EC6F63.4070708@redhat.com>
 <58EC8065.6030806@redhat.com>

On 04/11/2017 09:06 AM, Xunlei Pang wrote:
> On 04/11/2017 at 01:53 PM, Xunlei Pang wrote:
>> On 04/11/2017 at 04:47 AM, Daniel Bristot de Oliveira wrote:
>>> On 04/10/2017 11:22 AM, Xunlei Pang wrote:
>>>> I was testing Daniel's changes with his test case in the commit
>>>> df8eac8cafce ("sched/deadline: Throttle a constrained deadline
>>>> task activated after the deadline"), and tweaked it a little.
>>>>
>>>> Instead of having the runtime equal to the deadline, I tweaked the
>>>> runtime, deadline and sleep values to ensure that every call to
>>>> dl_check_constrained_dl() happens with "dl_se->deadline > rq_clock(rq)"
>>>> as well as a true dl_entity_overflow(), so it replenishes on every
>>>> wakeup in update_dl_entity() and breaks its bandwidth.
>>>>
>>>> Daniel's test case had:
>>>>
>>>>     attr.sched_runtime  = 2 * 1000 * 1000;        /* 2 ms */
>>>>     attr.sched_deadline = 2 * 1000 * 1000;        /* 2 ms */
>>>>     attr.sched_period   = 2 * 1000 * 1000 * 1000; /* 2 s  */
>>>>     ts.tv_sec  = 0;
>>>>     ts.tv_nsec = 2000 * 1000;                     /* 2 ms */
>>>>
>>>> I changed it to:
>>>>
>>>>     attr.sched_runtime  = 5 * 1000 * 1000;        /* 5 ms */
>>>>     attr.sched_deadline = 7 * 1000 * 1000;        /* 7 ms */
>>>>     attr.sched_period   = 1 * 1000 * 1000 * 1000; /* 1 s  */
>>>>     ts.tv_sec  = 0;
>>>>     ts.tv_nsec = 1000 * 1000;                     /* 1 ms */
>>>>
>>>> The change above can result in the task getting over 25% of the CPU
>>>> on my machine.
>>>>
>>>> In order to avoid the breakage, we improve dl_check_constrained_dl()
>>>> to prevent dl tasks from being activated until the next period if
>>>> they run out of bandwidth in the current period.
>>>
>>> The problem now is that, with your patch, we will throttle the task
>>> while it may still have runtime left. Moreover, the task did not
>>> break any rule, like being awakened after the deadline - the user
>>> space is not misbehaving.
>>>
>>> That is +- what the reproducer is doing when using your patch
>>> (I put some trace_printk()s in to notice the overflow in the wakeup):
>>>
>>>  <idle>-0  [007] d.h.  1505.066439: enqueue_task_dl: my current runtime is 3657361 and the deadline is 4613027 from now
>>>  <idle>-0  [007] d.h.  1505.066439: enqueue_task_dl: my dl_runtime is 5000000
>>>
>>> and so the task will be throttled with 3657361 ns of runtime available.
>>>
>>> As we can see, it is really breaking the density:
>>>
>>>     5ms / 7ms (.714285) < 3657361 / 4613027 (.792833)
>>>
>>> Well, it is not breaking it by that much. Trying to be less
>>> pessimistic, we can compute a new runtime with the following equation:
>>>
>>>     runtime = (dl_runtime / dl_deadline) * (deadline - now)
>>>
>>> That is, a runtime which fits in the task's density.
>>
>> This is a good point; to make the best use of the remaining deadline,
>> let me think more.
>
> I don't know if this will make things more complicated, but we can see
> in update_dl_entity():
>
>     if (dl_time_before(dl_se->deadline, rq_clock(rq)) ||
>         dl_entity_overflow(dl_se, pi_se, rq_clock(rq))) {
>             dl_se->deadline = rq_clock(rq) + pi_se->dl_deadline;
>             dl_se->runtime = pi_se->dl_runtime;
>     }
>
> It looks like we have a similar issue for non-constrained tasks in the
> case of a true dl_entity_overflow(): although the deadline is promoted
> (BTW, I think that on overflow it should be
> "dl_se->deadline += pi_se->dl_deadline"?), the previous runtime is
> discarded, so we may need to apply the same runtime-truncating logic
> there as well if we want to truncate the runtime.
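Just to put numbers on the truncation rule quoted above, here is a
standalone sanity check of the arithmetic (illustration only, not a
patch; the only inputs are the reproducer parameters and the values
from the trace in the quote):

/*
 * Standalone check (not kernel code) of the proposed rule
 *     runtime = (dl_runtime / dl_deadline) * (deadline - now)
 * using the values from the trace above.
 * Build with: gcc -o dl_check dl_check.c
 */
#include <stdio.h>
#include <inttypes.h>

int main(void)
{
	uint64_t dl_runtime  = 5000000;	/* 5 ms, from the reproducer      */
	uint64_t dl_deadline = 7000000;	/* 7 ms, from the reproducer      */
	uint64_t laxity      = 4613027;	/* "the deadline is 4613027 from now" */
	uint64_t old_runtime = 3657361;	/* leftover runtime in the trace  */

	/* Truncated runtime: scale the remaining laxity by the density Q/D. */
	uint64_t new_runtime = dl_runtime * laxity / dl_deadline;

	printf("density (Q/D)         = %f\n", (double)dl_runtime / dl_deadline);
	printf("old runtime / laxity  = %f (overflows the density)\n",
	       (double)old_runtime / laxity);
	printf("truncated runtime     = %" PRIu64 " ns\n", new_runtime);
	printf("new runtime / laxity  = %f (fits the density again)\n",
	       (double)new_runtime / laxity);

	return 0;
}

With those inputs the truncated runtime comes out at 3295019 ns, below
the 3657361 ns left over in the trace, so the replenished runtime would
fit the 5 ms / 7 ms density again.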
For implicit deadlines, the density equals the utilization. So things
here are less pessimistic, and we are using the same rule for overflow
and admission.

If you push "dl_se->deadline += pi_se->dl_deadline", you will give the
task its runtime over a window of (deadline - now) + period, that is,
a bandwidth of

    runtime / ((deadline - now) + period)

As the deadline is in the future (this is the
!dl_time_before(dl_se->deadline, rq_clock(rq)) case), we have
(deadline - now) + period > period. Therefore, the next period will
receive less than runtime / period, and the task will receive a lower
priority - a farther deadline.

Well, we could choose not to truncate the runtime, but then we would
carry things from the past. If the task self-suspended for a long
period, the sum of the residual runtime and the new runtime could give
a utilization higher than Q/P.

Moreover, the current rule provides a guarantee that Tommaso likes:
the CBS self-adjusts the period in the case of a timing drift of the
external "event" that activates the task, without breaking Q/P.
(He can explain more about it....)

The current rule guarantees Q/P, that is, the well-known CBS.

Am I missing something?

--
Daniel
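For reference, a self-contained reconstruction of the reproducer
discussed above. The SCHED_DEADLINE parameters and the 1 ms sleep are
the values quoted in the thread; the busy-wait length and the raw
syscall wrapper are guesses, since the original program lives in the
changelog of commit df8eac8cafce:

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/types.h>

/*
 * sched_setattr() has no glibc wrapper here; go through the raw
 * syscall (SYS_sched_setattr needs reasonably recent kernel headers).
 */
struct sched_attr {
	__u32 size;
	__u32 sched_policy;
	__u64 sched_flags;
	__s32 sched_nice;
	__u32 sched_priority;
	__u64 sched_runtime;
	__u64 sched_deadline;
	__u64 sched_period;
};

static int sched_setattr(pid_t pid, const struct sched_attr *attr,
			 unsigned int flags)
{
	return syscall(SYS_sched_setattr, pid, attr, flags);
}

#define SCHED_DEADLINE	6

int main(void)
{
	struct sched_attr attr;
	struct timespec ts;
	volatile unsigned long l;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.sched_policy   = SCHED_DEADLINE;
	attr.sched_runtime  = 5 * 1000 * 1000;		/* 5 ms */
	attr.sched_deadline = 7 * 1000 * 1000;		/* 7 ms */
	attr.sched_period   = 1 * 1000 * 1000 * 1000;	/* 1 s  */

	ts.tv_sec  = 0;
	ts.tv_nsec = 1000 * 1000;			/* 1 ms */

	if (sched_setattr(0, &attr, 0) < 0) {
		perror("sched_setattr");
		exit(1);
	}

	for (;;) {
		/* Burn some CPU (adjust the count for your machine)... */
		for (l = 0; l < 150000; l++)
			;
		/* ...then self-suspend for 1 ms. */
		clock_nanosleep(CLOCK_MONOTONIC, 0, &ts, NULL);
	}

	return 0;
}

Watching the task with top should show far more than the 0.5% implied
by a 5 ms / 1 s reservation, in line with the "over 25%" reported at
the top of the thread.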