Subject: Re: [RFC v3 1/6] Track the active utilisation
From: Daniel Bristot de Oliveira
To: Luca Abeni, linux-kernel@vger.kernel.org
Cc: Peter Zijlstra, Ingo Molnar, Juri Lelli, Claudio Scordino, Steven Rostedt
Date: Tue, 25 Oct 2016 11:09:52 +0200

On 24/10/2016 16:06, Luca Abeni wrote:
> The active utilisation here is defined as the total utilisation of the
> active (TASK_RUNNING) tasks queued on a runqueue. Hence, it is increased
> when a task wakes up and is decreased when a task blocks.
>
> When a task is migrated from CPUi to CPUj, immediately subtract the task's
> utilisation from CPUi and add it to CPUj. This mechanism is implemented by
> modifying the pull and push functions.
> Note: this is not fully correct from the theoretical point of view
> (the utilisation should be removed from CPUi only at the 0 lag time),
> but doing the right thing would be _MUCH_ more complex (leaving the
> timer armed when the task is on a different CPU... Inactive timers should
> be moved from per-task timers to per-runqueue lists of timers! Bah...)
>
> The utilisation tracking mechanism implemented in this commit can be
> fixed / improved by decreasing the active utilisation at the so-called
> "0-lag time" instead of when the task blocks.
>
> Signed-off-by: Juri Lelli
> Signed-off-by: Luca Abeni
> ---
>  kernel/sched/deadline.c | 39 ++++++++++++++++++++++++++++++++++++++-
>  kernel/sched/sched.h    |  6 ++++++
>  2 files changed, 44 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index 37e2449..3d95c1d 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -43,6 +43,22 @@ static inline int on_dl_rq(struct sched_dl_entity *dl_se)
>  	return !RB_EMPTY_NODE(&dl_se->rb_node);
>  }
>
> +static void add_running_bw(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq)
> +{
> +	u64 se_bw = dl_se->dl_bw;
> +
> +	dl_rq->running_bw += se_bw;
> +}

why not...

static inline
void add_running_bw(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq)
{
	dl_rq->running_bw += dl_se->dl_bw;
}

am I missing something?

> +static void sub_running_bw(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq)
> +{
> +	u64 se_bw = dl_se->dl_bw;
> +
> +	dl_rq->running_bw -= se_bw;
> +	if (WARN_ON(dl_rq->running_bw < 0))
> +		dl_rq->running_bw = 0;
> +}

(if I am not missing anything...) the same applies to the function above:
use inline and remove the se_bw variable (a rough sketch follows below the
signature).

--
Daniel
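
PS: to make the suggestion concrete, here is a rough, untested sketch of
sub_running_bw() with the same change applied (inline, no local se_bw).
The remark about the signedness of running_bw is my own assumption based
on the WARN_ON() check, not something stated in the patch:

static inline
void sub_running_bw(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq)
{
	dl_rq->running_bw -= dl_se->dl_bw;

	/*
	 * Guard against underflow, as in the original patch. This check is
	 * only meaningful if running_bw is declared as a signed type (e.g.
	 * s64); with an unsigned type the "< 0" comparison can never be true.
	 */
	if (WARN_ON(dl_rq->running_bw < 0))
		dl_rq->running_bw = 0;
}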