Date: Mon, 17 Oct 2016 11:09:03 +0200
From: Vincent Guittot
To: Joseph Salisbury
Cc: Dietmar Eggemann, Ingo Molnar, Peter Zijlstra, Linus Torvalds,
	Thomas Gleixner, LKML, Mike Galbraith, omer.akram@canonical.com
Subject: Re: [v4.8-rc1 Regression] sched/fair: Apply more PELT fixes
Message-ID: <20161017090903.GA11962@linaro.org>
References: <57FE6302.6060103@canonical.com>
	<57FFADC8.2020602@canonical.com>
	<43c59cba-2044-1de2-0f78-8f346bd1e3cb@arm.com>
	<20161014151827.GA10379@linaro.org>
	<2bb765e7-8a5f-c525-a6ae-fbec6fae6354@canonical.com>
In-Reply-To: <2bb765e7-8a5f-c525-a6ae-fbec6fae6354@canonical.com>

On Friday 14 Oct 2016 at 12:04:02 (-0400), Joseph Salisbury wrote:
> On 10/14/2016 11:18 AM, Vincent Guittot wrote:
> > On Friday 14 Oct 2016 at 14:10:07 (+0100), Dietmar Eggemann wrote:
> >> On 14/10/16 09:24, Vincent Guittot wrote:
> >>> On 13 October 2016 at 23:34, Vincent Guittot wrote:
> >>>> On 13 October 2016 at 20:49, Dietmar Eggemann wrote:
> >>>>> On 13/10/16 17:48, Vincent Guittot wrote:
> >>>>>> On 13 October 2016 at 17:52, Joseph Salisbury wrote:
> >>>>>>> On 10/13/2016 06:58 AM, Vincent Guittot wrote:
> >>>>>>>> Hi,
> >>>>>>>>
> >>>>>>>> On 12 October 2016 at 18:21, Joseph Salisbury wrote:
> >>>>>>>>> On 10/12/2016 08:20 AM, Vincent Guittot wrote:
> >>>>>>>>>> On 8 October 2016 at 13:49, Mike Galbraith wrote:
> >>>>>>>>>>> On Sat, 2016-10-08 at 13:37 +0200, Vincent Guittot wrote:
> >>>>>>>>>>>> On 8 October 2016 at 10:39, Ingo Molnar wrote:
> >>>>>>>>>>>>> * Peter Zijlstra wrote:
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>> On Fri, Oct 07, 2016 at 03:38:23PM -0400, Joseph Salisbury wrote:
> >> [...]
> >>
> >>>>> When I create a tg_root/tg_x/tg_y_1 and a tg_root/tg_x/tg_y_2 group, the tg_x->load_avg
> >>>>> becomes > 6*1024 before any tasks ran in it.
> >>>> This is normal as se->avg.load_avg is initialized to
> >>>> scale_load_down(se->load.weight) and this se->avg.load_avg will be
> >>>> added to tg_x[cpu]->cfs_rq->avg.load_avg when attached to the cfs_rq
> >> Yeah, you're right, even when I've created 50 second level groups,
> >> tg_x->load_avg is ~6800.
> >>
> >> Could it have something to do with the fact that .se->load.weight = 2
> >> for all these task groups? on a 64bit system?
> > I don't think so, the problem really comes from tg->load_avg = 381697
> > but the sum of cfs_rq[cpu]->tg_load_avg_contrib = 1013, which is << tg->load_avg,
> > and cfs_rq[cpu]->tg_load_avg_contrib == cfs_rq[cpu]->avg.load_avg, so we can't
> > expect any negative delta to remove this large value.
> >
> >> In case we call __update_load_avg(..., se->on_rq *
> >> scale_load_down(se->load.weight), ...) we pass a weight argument of 0
> >> for these se's.
> >>
> >> Does not happen with:
> >>
> >> -	if (shares < MIN_SHARES)
> >> -		shares = MIN_SHARES;
> >> +	if (shares < scale_load(MIN_SHARES))
> >> +		shares = scale_load(MIN_SHARES);
> >>
> >> in calc_cfs_shares().
> >>
> >> [...]
> >>
> >
> Adding Omer to CC list, as he is able to reproduce this bug.

Could you try the patch below on top of the faulty kernel?

---
 kernel/sched/fair.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8b03fb5..8926685 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2902,7 +2902,8 @@ __update_load_avg(u64 now, int cpu, struct sched_avg *sa,
  */
 static inline void update_tg_load_avg(struct cfs_rq *cfs_rq, int force)
 {
-	long delta = cfs_rq->avg.load_avg - cfs_rq->tg_load_avg_contrib;
+	unsigned long load_avg = READ_ONCE(cfs_rq->avg.load_avg);
+	long delta = load_avg - cfs_rq->tg_load_avg_contrib;
 
 	/*
 	 * No need to update load_avg for root_task_group as it is not used.
@@ -2912,7 +2913,7 @@ static inline void update_tg_load_avg(struct cfs_rq *cfs_rq, int force)
 
 	if (force || abs(delta) > cfs_rq->tg_load_avg_contrib / 64) {
 		atomic_long_add(delta, &cfs_rq->tg->load_avg);
-		cfs_rq->tg_load_avg_contrib = cfs_rq->avg.load_avg;
+		cfs_rq->tg_load_avg_contrib = load_avg;
 	}
 }
-- 
2.7.4

>
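To make the intent of the single READ_ONCE() snapshot easier to see, here is a
minimal userspace sketch (plain C, not kernel code; the field names only mirror
the kernel ones and the "concurrent update" is simulated by hand). It shows how
computing delta from one read of cfs_rq->avg.load_avg while setting
tg_load_avg_contrib from a second read lets a change made in between leak
permanently into tg->load_avg, whereas reusing one snapshot keeps the
bookkeeping consistent:

/*
 * Userspace illustration of the double-read issue in update_tg_load_avg().
 * Not kernel code: the concurrent writer is emulated by a parameter.
 */
#include <stdio.h>

struct fake_cfs_rq {
	long avg_load_avg;        /* stands in for cfs_rq->avg.load_avg        */
	long tg_load_avg_contrib; /* stands in for cfs_rq->tg_load_avg_contrib */
};

static long tg_load_avg;          /* stands in for tg->load_avg               */

/* Buggy variant: load_avg is read twice, so a change between the two reads
 * is never accounted in tg_load_avg even though contrib claims it was. */
static void update_buggy(struct fake_cfs_rq *cfs_rq, long concurrent_new_value)
{
	long delta = cfs_rq->avg_load_avg - cfs_rq->tg_load_avg_contrib;

	/* simulate another context updating avg.load_avg right here */
	cfs_rq->avg_load_avg = concurrent_new_value;

	tg_load_avg += delta;
	cfs_rq->tg_load_avg_contrib = cfs_rq->avg_load_avg;   /* second read */
}

/* Fixed variant: one snapshot feeds both delta and contrib, which is what
 * the READ_ONCE() in the patch above provides. */
static void update_fixed(struct fake_cfs_rq *cfs_rq, long concurrent_new_value)
{
	long load_avg = cfs_rq->avg_load_avg;                 /* single snapshot */
	long delta = load_avg - cfs_rq->tg_load_avg_contrib;

	cfs_rq->avg_load_avg = concurrent_new_value;

	tg_load_avg += delta;
	cfs_rq->tg_load_avg_contrib = load_avg;
}

int main(void)
{
	struct fake_cfs_rq rq = { .avg_load_avg = 1024, .tg_load_avg_contrib = 0 };

	tg_load_avg = 0;
	update_buggy(&rq, 2048);  /* load_avg moves 1024 -> 2048 mid-update */
	update_buggy(&rq, 0);     /* the group then goes idle               */
	update_buggy(&rq, 0);
	printf("buggy: tg->load_avg=%ld contrib=%ld\n", tg_load_avg,
	       rq.tg_load_avg_contrib);   /* tg->load_avg stuck at 1024     */

	rq = (struct fake_cfs_rq){ .avg_load_avg = 1024, .tg_load_avg_contrib = 0 };
	tg_load_avg = 0;
	update_fixed(&rq, 2048);
	update_fixed(&rq, 0);
	update_fixed(&rq, 0);
	printf("fixed: tg->load_avg=%ld contrib=%ld\n", tg_load_avg,
	       rq.tg_load_avg_contrib);   /* both back to 0                 */
	return 0;
}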
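On the calc_cfs_shares() change quoted above, a small sketch of the arithmetic
(assuming a 64-bit build where SCHED_FIXEDPOINT_SHIFT is 10; again plain
userspace C, not the kernel implementation): clamping shares to the unscaled
MIN_SHARES gives the group se a load.weight of 2, which scale_load_down() turns
into the weight argument of 0 that Dietmar mentions, while clamping to
scale_load(MIN_SHARES) keeps it at 2:

#include <stdio.h>

#define SCHED_FIXEDPOINT_SHIFT	10
#define scale_load(w)		((w) << SCHED_FIXEDPOINT_SHIFT)
#define scale_load_down(w)	((w) >> SCHED_FIXEDPOINT_SHIFT)
#define MIN_SHARES		2UL

int main(void)
{
	/* An almost idle group is clamped to the minimum share value. */
	unsigned long weight_old = MIN_SHARES;             /* current clamp  */
	unsigned long weight_new = scale_load(MIN_SHARES); /* proposed clamp */

	/* __update_load_avg() is fed scale_load_down(se->load.weight): */
	printf("old clamp: se->load.weight=%lu -> weight arg %lu\n",
	       weight_old, scale_load_down(weight_old));   /* 2    -> 0 */
	printf("new clamp: se->load.weight=%lu -> weight arg %lu\n",
	       weight_new, scale_load_down(weight_new));   /* 2048 -> 2 */
	return 0;
}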