Date: Mon, 22 Jun 2015 06:43:06 +0800
From: Yuyang Du
To: Boqun Feng
Cc: mingo@kernel.org, peterz@infradead.org, linux-kernel@vger.kernel.org,
    pjt@google.com, bsegall@google.com, morten.rasmussen@arm.com,
    vincent.guittot@linaro.org, dietmar.eggemann@arm.com, len.brown@intel.com,
    rafael.j.wysocki@intel.com, fengguang.wu@intel.com, srikar@linux.vnet.ibm.com
Subject: Re: [PATCH v8 2/4] sched: Rewrite runnable load and utilization average tracking
Message-ID: <20150621224306.GC3933@intel.com>
References: <1434396367-27979-1-git-send-email-yuyang.du@intel.com>
 <1434396367-27979-3-git-send-email-yuyang.du@intel.com>
 <20150619060038.GA1240@fixme-laptop.cn.ibm.com>
 <20150618230554.GA3436@intel.com>
 <20150619075724.GA5331@fixme-laptop.cn.ibm.com>
 <20150619031116.GA3933@intel.com>
 <20150619122207.GB5331@fixme-laptop.cn.ibm.com>
In-Reply-To: <20150619122207.GB5331@fixme-laptop.cn.ibm.com>

On Fri, Jun 19, 2015 at 08:22:07PM +0800, Boqun Feng wrote:
> > It is not that the rewrite patch "lacks" aggregation; the aggregation is
> > needless. The stock code has to do a bottom-up update and aggregation,
> > because 1) it updates the load at an entity granularity, and 2) the
> > blocked load is kept separately.
> 
> Yep, you are right, the aggregation is not necessary.
> 
> Let me see if I understand you: in the rewrite, when we
> update_cfs_rq_load_avg(), we need neither to aggregate the children's
> load_avg nor to update cfs_rq->load.weight, because:
> 
> 1) For the load before cfs_rq->last_update_time, it's already in
>    ->load_avg, and decay will do the job.
> 2) For the load from cfs_rq->last_update_time to now, we calculate
>    with cfs_rq->load.weight, and the weight should be the weight at
>    ->last_update_time rather than now.
> 
> Right?

Yes.
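To make those two points concrete, here is a toy sketch of that bookkeeping.
It is in the spirit of the rewrite but NOT the actual kernel code: the names,
the millisecond units, and the shift-based decay are simplified placeholders
for illustration only.

/*
 * Toy model of the bookkeeping described above -- NOT the kernel code.
 */
#include <stdint.h>

#define TOY_HALFLIFE_MS 32      /* halve the stale average every 32ms */

struct toy_cfs_rq {
        uint64_t last_update_time;      /* in ms, for simplicity */
        unsigned long load_weight;      /* weight as of last_update_time */
        unsigned long load_avg;
};

/* 1) Load older than last_update_time is already folded into ->load_avg,
 *    so it only needs to be decayed. */
unsigned long toy_decay(unsigned long val, uint64_t delta_ms)
{
        uint64_t periods = delta_ms / TOY_HALFLIFE_MS;

        return periods >= 8 * sizeof(val) ? 0 : val >> periods;
}

/* 2) The window [last_update_time, now] is charged with the weight that
 *    was in effect at last_update_time -- no bottom-up aggregation of
 *    child entities is needed. */
void toy_update_cfs_rq_load_avg(struct toy_cfs_rq *cfs_rq, uint64_t now)
{
        uint64_t delta = now - cfs_rq->last_update_time;

        cfs_rq->load_avg = toy_decay(cfs_rq->load_avg, delta);
        cfs_rq->load_avg += cfs_rq->load_weight * delta / 1024;
        cfs_rq->last_update_time = now;
}

The point is simply that one per-cfs_rq average plus its last_update_time is
enough; the children never have to be walked to rebuild the parent's figure.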
> > If update_cfs_shares() is done here, it is good, but probably not necessary
> > though. However, we do need to update_tg_load_avg() here, because if cfs_rq's
> 
> We may have another problem even if we update_tg_load_avg(), because after
> the loop, for each cfs_rq, ->load.weight is not up to date, right? So
> next time, before we update_cfs_rq_load_avg(), we need to guarantee that
> cfs_rq->load.weight is already updated, right? And IMO, we don't have
> that guarantee yet, do we?

If we update the weight, we must update the load_avg; but if we update the
load_avg, we may need to update the weight. Yes, your comment here is valid,
but we already update the shares as needed in the cases when they are
"active". update_blocked_averages() is largely for inactive group entities,
so we should be fine here.

> > load changes, the parent tg's load_avg should change too. I will upload the
> > next version soon.
> >
> > In addition, an update on the stress + dbench test case:
> >
> > I have a Core i7, not a Xeon Nehalem, and I have a patch that may not impact
> > the result. Then, dbench runs at very low CPU utilization, ~1%. Boqun said
> > this may result from cgroup control, as the dbench I/O is low.
> >
> > Anyway, I can't reproduce the results: CPU0's util is 92+%, and the other
> > CPUs have ~100% util.
> 
> Thank you for looking into that problem, and I will test with your new
> version of the patch ;-)

That would be good. I ran dbench "as is", and its output looks pretty fine.

Thanks,
Yuyang