From: Luca Abeni
To: Juri Lelli
Cc: linux-kernel@vger.kernel.org, Peter Zijlstra, Ingo Molnar,
 Claudio Scordino, Steven Rostedt
Subject: Re: [RFC v3 1/6] Track the active utilisation
Date: Tue, 8 Nov 2016 20:09:56 +0100
Message-ID: <20161108200956.35b3e7c7@utopia>
In-Reply-To: <20161108185309.GG16920@e106622-lin>

Hi again,

On Tue, 8 Nov 2016 18:53:09 +0000
Juri Lelli wrote:
[...]
> > > Also, AFAIU, do_exit() works on current and the TASK_DEAD case is
> > > handled in finish_task_switch(), so I don't think we are taking
> > > care of the "task is dying" condition.
> > Ok, so I am missing something... The state is set to TASK_DEAD, and
> > then schedule() is called... So, __schedule() sees the dying task as
> > "prev" and invokes deactivate_task() with the DEQUEUE_SLEEP flag...
> > After that, finish_task_switch() calls task_dead_dl(). Is this
> > wrong? If not, why aren't we taking care of the "task is dying"
> > condition?
> >
> No, I think you are right. But, semantically, this cleanup goes in
> task_dead_dl(), IMHO.
Just to be sure I understand correctly: you suggest adding a check for
"state == TASK_DEAD" (skipping the cleanup if the condition is true) in
dequeue_task_dl(), and adding a sub_running_bw() call in
task_dead_dl()... Is this understanding correct?

> It's most probably moot if it complicates things, but it might be
> helpful to differentiate the case between a task that is actually
> going to sleep (and for which we want to activate the timer) and a
> task that is dying (and for which we want to release bw immediately).

I suspect the two cases should be handled in the same way :)

> So, it actually matters for next patch, not here. But, maybe we want
> to do things clean from the start?

You mean, because patch 2/6 adds

+	if (hrtimer_active(&p->dl.inactive_timer)) {
+		raw_spin_lock_irq(&task_rq(p)->lock);
+		sub_running_bw(&p->dl, dl_rq_of_se(&p->dl));
+		raw_spin_unlock_irq(&task_rq(p)->lock);
+	}

in task_dead_dl()? I suspect this hunk is actually unneeded (worse, it
is wrong :). I am trying to remember why it is there, but I cannot find
any reason... In the next days, I'll run some tests to check if that
hunk is actually needed. If yes, then I'll modify patch 1/6 as you
suggest; if it is not needed, I'll remove it from patch 2/6 and I'll
not do this change to patch 1/6... Is this ok?

Thanks,
	Luca

> > >
> > > Peter, does what I'm saying make any sense? :)
> > >
> > > I still have to set up things here to test these patches (sorry,
> > > I was travelling), but could you try to create some tasks and
> > > then kill them from another shell to see if the accounting
> > > deviates or not? Or did you already do this test?
> > I think this is one of the tests I tried...
> > I have to check if I changed this code after the test (but I do not
> > think I did). Anyway, tomorrow I'll write a script for automating
> > this test, and I'll leave it running for some hours.
> >
> OK, thanks.
> As said, I think that you actually handle the case already, but I'll
> try to set up testing as well soon.
>
> Thanks,
>
> - Juri