Date: Wed, 16 Dec 2015 07:55:56 +0800
From: Yuyang Du
To: Steve Muckle
Cc: Peter Zijlstra, Ingo Molnar, Morten Rasmussen, Dietmar Eggemann,
	Patrick Bellasi, Juri Lelli, Vincent Guittot,
	"linux-kernel@vger.kernel.org"
Subject: Re: PELT initial task load and wake_up_new_task()
Message-ID: <20151215235556.GI28098@intel.com>
References: <566B8009.2090006@linaro.org> <20151213191319.GA28098@intel.com>
	<566F61AA.4020904@linaro.org> <20151215022432.GG28098@intel.com>
	<56705FE1.5000600@linaro.org>
In-Reply-To: <56705FE1.5000600@linaro.org>

On Tue, Dec 15, 2015 at 10:45:53AM -0800, Steve Muckle wrote:
> On 12/14/2015 06:24 PM, Yuyang Du wrote:
> >>> On Fri, Dec 11, 2015 at 06:01:45PM -0800, Steve Muckle wrote:
> >>>> In init_entity_runnable_average() the last_update_time is initialized
> >>>> to zero. The task is given max load and utilization as a pessimistic
> >>>> initial estimate.
> >>>>
> >>>> But if in wake_up_new_task() the task is placed on a CPU other than
> >>>> where it was created, __update_load_avg() will be called via
> >>>> set_task_cpu() -> migrate_task_rq_fair() -> remove_entity_load_avg().
> >>>>
> >>>> Since last_update_time is zero the delta will be huge and the task's
> >>>> load will be entirely decayed away before it is enqueued at the
> >>>> destination CPU.
> >>>
> >>> Since the new task's last_update_time is equal to 0, it will not be
> >>> decayed.
> >>
> >> Can you point me to the code for that logic? I don't see anything that
> >> prevents the decay when a newly woken task is placed on a different CPU
> >> via the call chain I mentioned above. My testing also shows the load
> >> being decayed to zero.
> >>
> > You may search for last_update_time and see that it is treated
> > differently when it is 0. Hope this is helpful.
> 
> Are you referring to the test in enqueue_entity_load_avg()? If so, that
> isn't called until after remove_entity_load_avg() in this scenario,
> which has no check on last_update_time.

Indeed it is. Sorry that I did not look at this carefully before.

I think this case should still be treated like a migration; it reads
better that way. The following patch should fix it.

---
Subject: [PATCH] sched: Fix new task's load avg removed from source CPU
 in wake_up_new_task()

If a newly created task is selected to go to a different CPU in fork
balance when it wakes up for the first time, its load averages should
not be removed from the source CPU, since they were never added to it
in the first place.

Fix this in remove_entity_load_avg(): when the entity's last_update_time
is 0, simply return. This precisely identifies the case in question,
because in a normal migration last_update_time is set to 0 only after
remove_entity_load_avg() has run.
Reported-by: Steve Muckle
Signed-off-by: Yuyang Du
---
 kernel/sched/fair.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e3266eb..4676988 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2909,6 +2909,12 @@ void remove_entity_load_avg(struct sched_entity *se)
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
 	u64 last_update_time;
 
+	/*
+	 * A newly created task's load has not been attached to the source
+	 * CPU, so there is nothing to remove before its first migration.
+	 */
+	if (se->avg.last_update_time == 0)
+		return;
+
 #ifndef CONFIG_64BIT
 	u64 last_update_time_copy;
-- 

-- 
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/