Date: Thu, 28 Apr 2016 11:01:13 +0800
From: Yuyang Du
To: Peter Zijlstra
Cc: mingo@kernel.org, linux-kernel@vger.kernel.org, bsegall@google.com, pjt@google.com, morten.rasmussen@arm.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, lizefan@huawei.com, umgwanakikbuti@gmail.com
Subject: Re: [PATCH v3 4/6] sched/fair: Remove scale_load_down() for load_avg
Message-ID: <20160428030113.GA16093@intel.com>
References: <1459829551-21625-1-git-send-email-yuyang.du@intel.com> <1459829551-21625-5-git-send-email-yuyang.du@intel.com> <20160428102532.GY3430@twins.programming.kicks-ass.net>
In-Reply-To: <20160428102532.GY3430@twins.programming.kicks-ass.net>

On Thu, Apr 28, 2016 at 12:25:32PM +0200, Peter Zijlstra wrote:
> On Tue, Apr 05, 2016 at 12:12:29PM +0800, Yuyang Du wrote:
> > Currently, load_avg = scale_load_down(load) * runnable%. The extra scaling
> > down of load does not make much sense, because load_avg is primarily THE
> > load, and on top of that we take runnable time into account.
> >
> > We therefore remove scale_load_down() for load_avg. But we need to
> > carefully consider the overflow risk if load has the higher range
> > (2*SCHED_FIXEDPOINT_SHIFT). The only case where an overflow may occur
> > due to this change is on a 64-bit kernel with the increased load range.
> > In that case, the 64-bit load_sum can accommodate 4251057
> > (= 2^64/47742/88761/1024) entities with the highest load (= 88761*1024)
> > always runnable on one single cfs_rq, which may be an issue, but should
> > be fine. Even if this occurs at the end of the day, under the conditions
> > where it occurs the load average will not be useful anyway.
>
> I do feel we need a few more words on the actual ramifications of
> overflowing here.
>
> Yes, having 4M tasks on a single runqueue is somewhat unlikely, but
> if it happens, then what will the user experience? How long (if ever)
> does it take for the numbers to correct themselves, etc.?
>
> > Signed-off-by: Yuyang Du
> > [update calculate_imbalance]
> > Signed-off-by: Vincent Guittot
>
> This SoB chain suggests you wrote it and Vincent sent it on, yet this
> email is from you and Vincent isn't anywhere. Something's not right.

Since you started reviewing patches, I just sent you more :) What a
coincidence. I actually don't know the rules for this SoB; let me learn
how to do this co-signed-off-by.
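As a side note, the 4251057 bound quoted in the patch description can be checked with a quick sketch. This assumes (from the text above, not verified against the tree) that 47742 is the maximum PELT accumulation sum and 88761*1024 is the highest per-entity weight with the increased 64-bit load range:

```python
# Sketch: reproduce the overflow bound from the patch description.
# Assumed constants (taken from the description above):
#   47742        - maximum value of the PELT geometric series sum
#   88761 * 1024 - highest entity weight (nice -20), scaled up by
#                  SCHED_FIXEDPOINT_SHIFT on a 64-bit kernel
PELT_MAX_SUM = 47742
MAX_WEIGHT = 88761 * 1024

# Number of always-runnable max-weight entities one cfs_rq's 64-bit
# load_sum can hold before overflowing.
max_entities = 2**64 // (PELT_MAX_SUM * MAX_WEIGHT)
print(max_entities)  # -> 4251057
```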