From: Dietmar Eggemann
Date: Mon, 06 Jul 2015 18:36:56 +0100
To: Yuyang Du, Morten Rasmussen
Cc: Mike Galbraith, Peter Zijlstra, Rabin Vincent, mingo@redhat.com,
 linux-kernel@vger.kernel.org, Paul Turner, Ben Segall
Subject: Re: [PATCH?] Livelock in pick_next_task_fair() / idle_balance()
Message-ID: <559ABCB8.6020209@arm.com>
In-Reply-To: <20150705201241.GE5197@intel.com>

Hi Yuyang,

On 05/07/15 21:12, Yuyang Du wrote:
> Hi Morten,
>
> On Fri, Jul 03, 2015 at 10:34:41AM +0100, Morten Rasmussen wrote:
>>>> IOW, since task groups include blocked load in the load_avg_contrib (see
>>>> __update_group_entity_contrib() and __update_cfs_rq_tg_load_contrib())
>>>> the imbalance includes blocked load
>>>> and hence env->imbalance >= sum(task_h_load(p)) for all tasks p on the
>>>> rq. Which leads to detach_tasks() emptying the rq completely in the
>>>> reported scenario where blocked load > runnable load.
>>>
>>> Whenever I want to know the load avg concerning task groups, I need to
>>> walk through the complete code again, and I prefer not to do it this
>>> time. But it should not be that simple to say "the 118 comes from the
>>> blocked load".
>>
>> But the whole hierarchy of group entities is updated each time we enqueue
>> or dequeue a task. I don't see how the group entity load_avg_contrib is
>> not up to date. Why do you need to update it again?
>>
>> In any case, we have one task in the group hierarchy which has a
>> load_avg_contrib of 0, while the grand-grand parent group entity has a
>> load_avg_contrib of 118 and no additional tasks. That load contribution
>> must be from tasks which are no longer around on the rq, no?
>
> load_avg_contrib has the WEIGHT inside, so the most I can say is:
>
>   SE 8f456e00's load_avg_contrib 118
>       = (its cfs_rq's runnable + blocked) / (tg->load_avg + 1) * tg->shares
>
> tg->shares is probably 1024 (at least 911). So we are just left with:
>
>   cfs_rq / tg = 11.5%
>
> I myself did question the sudden jump from 0 to 118 (see a previous reply).

Do you mean the jump from system-rngd.slice (0) (tg.css.id=3) to
system.slice (118) (tg.css.id=2)?

Maybe the 118 comes from another tg hierarchy (w/ tg.css.id >= 3) inside
the system.slice group, representing another service.

Rabin, could you share the contents of your
/sys/fs/cgroup/cpu/system.slice directory and of /proc/cgroups?

Whether the 118 comes from the cfs_rq->blocked_load_avg of one of the tg
levels of one of the other system.slice tg hierarchies, or whether it
results from not updating the se.avg.load_avg_contrib values of se's
representing tg's immediately, is not that important, I guess.
Even if we were able to sync both things (task en/dequeue and tg
se.avg.load_avg_contrib update) perfectly (by calling
update_cfs_rq_blocked_load() always w/ force_update=1 and immediately
after that update_entity_load_avg() for all tg se's in one hierarchy),
we would still have to deal w/ the blocked load part if the tg se
representing system.slice contributes to
cpu_rq(cpu)->cfs.runnable_load_avg.

-- Dietmar

> But anyway, this really is irrelevant to the discussion.
> [...]