Date: Tue, 5 Apr 2016 14:24:25 +0200
From: Peter Zijlstra
To: Luca Abeni
Cc: linux-kernel@vger.kernel.org, Ingo Molnar, Juri Lelli
Subject: Re: [RFC v2 2/7] Correctly track the active utilisation for migrating tasks
Message-ID: <20160405122425.GV3430@twins.programming.kicks-ass.net>
In-Reply-To: <1459523553-29089-3-git-send-email-luca.abeni@unitn.it>
References: <1459523553-29089-1-git-send-email-luca.abeni@unitn.it> <1459523553-29089-3-git-send-email-luca.abeni@unitn.it>

On Fri, Apr 01, 2016 at 05:12:28PM +0200, Luca Abeni wrote:
> Fix active utilisation accounting on migration: when a task is migrated
> from CPUi to CPUj, immediately subtract the task's utilisation from
> CPUi and add it to CPUj. This mechanism is implemented by modifying the
> pull and push functions.
> 
> Note: this is not fully correct from the theoretical point of view
> (the utilisation should be removed from CPUi only at the 0 lag time),
> but doing the right thing would be _MUCH_ more complex (leaving the
> timer armed when the task is on a different CPU... Inactive timers should
> be moved from per-task timers to per-runqueue lists of timers! Bah...)
> 
> Signed-off-by: Luca Abeni
> ---
>  kernel/sched/deadline.c | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index 3c64ebf..05cfccb 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -1530,7 +1530,9 @@ retry:
>  	}
>  
>  	deactivate_task(rq, next_task, 0);
> +	sub_running_bw(&next_task->dl, &rq->dl);
>  	set_task_cpu(next_task, later_rq->cpu);
> +	add_running_bw(&next_task->dl, &later_rq->dl);
>  	activate_task(later_rq, next_task, 0);
>  	ret = 1;
>  
> @@ -1618,7 +1620,9 @@ static void pull_dl_task(struct rq *this_rq)
>  			resched = true;
>  
>  			deactivate_task(src_rq, p, 0);
> +			sub_running_bw(&p->dl, &src_rq->dl);
>  			set_task_cpu(p, this_cpu);
> +			add_running_bw(&p->dl, &this_rq->dl);
>  			activate_task(this_rq, p, 0);
>  			dmin = p->dl.deadline;
> 

Are these the only places a DL task might be migrated from? In particular
I worry about the case where we assign an existing DL task to a different
cpuset.

Juri?
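
[Editorial note: the sub_running_bw()/add_running_bw() helpers called in the
hunks above are introduced earlier in this patch series and are not shown in
this mail. A minimal sketch of what they are assumed to do, namely adjust a
per-runqueue running_bw counter by the task's reserved bandwidth dl_bw,
follows; the field name and the overflow/underflow checks are illustrative
assumptions, not the series' actual implementation.]

/*
 * Sketch only: account/unaccount a deadline entity's reserved bandwidth
 * (dl_se->dl_bw) in the destination/source runqueue's running_bw.
 */
static void add_running_bw(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq)
{
	u64 old = dl_rq->running_bw;

	dl_rq->running_bw += dl_se->dl_bw;
	/* assumed sanity check: the sum must not wrap around */
	WARN_ON_ONCE(dl_rq->running_bw < old);
}

static void sub_running_bw(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq)
{
	u64 old = dl_rq->running_bw;

	dl_rq->running_bw -= dl_se->dl_bw;
	/* assumed sanity check: we must not remove more than was added */
	WARN_ON_ONCE(dl_rq->running_bw > old);
}

With that bookkeeping, each CPU's dl_rq tracks the sum of the bandwidths of
the tasks currently accounted to it, which is the invariant the push/pull
hunks above keep consistent across a migration.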