Date: Thu, 28 Apr 2016 12:25:32 +0200
From: Peter Zijlstra
To: Yuyang Du
Cc: mingo@kernel.org, linux-kernel@vger.kernel.org, bsegall@google.com, pjt@google.com, morten.rasmussen@arm.com, vincent.guittot@linaro.org, dietmar.eggemann@arm.com, lizefan@huawei.com, umgwanakikbuti@gmail.com
Subject: Re: [PATCH v3 4/6] sched/fair: Remove scale_load_down() for load_avg
Message-ID: <20160428102532.GY3430@twins.programming.kicks-ass.net>
References: <1459829551-21625-1-git-send-email-yuyang.du@intel.com> <1459829551-21625-5-git-send-email-yuyang.du@intel.com>
In-Reply-To: <1459829551-21625-5-git-send-email-yuyang.du@intel.com>
User-Agent: Mutt/1.5.21 (2012-12-30)
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Apr 05, 2016 at 12:12:29PM +0800, Yuyang Du wrote:
> Currently, load_avg = scale_load_down(load) * runnable%. The extra
> scaling down of load does not make much sense, because load_avg is
> primarily THE load, and on top of that we take runnable time into
> account.
>
> We therefore remove scale_load_down() for load_avg. But we need to
> carefully consider the overflow risk if load has the higher range
> (2*SCHED_FIXEDPOINT_SHIFT). The only case in which an overflow may
> occur is on a 64-bit kernel with the increased load range. In that
> case, the 64-bit load_sum can accommodate 4251057
> (=2^64/47742/88761/1024) entities with the highest load (=88761*1024)
> always runnable on one single cfs_rq, which may be an issue, but
> should be fine.
> Even if this overflow occurs at the end of the day, under the
> conditions in which it occurs the load average will not be useful
> anyway.

I do feel we need a few more words on the actual ramifications of
overflowing here. Yes, having 4M tasks on a single runqueue is somewhat
unlikely, but if it does happen, what will the user experience? How long
(if ever) does it take for the numbers to correct themselves, etc.?

> Signed-off-by: Yuyang Du
> [update calculate_imbalance]
> Signed-off-by: Vincent Guittot

This SoB chain suggests you wrote the patch and Vincent sent it on, yet
this email is from you and Vincent isn't anywhere in it. Something's not
right.
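As a side note, the 4251057 bound quoted in the changelog can be checked
directly. The sketch below is just that arithmetic spelled out; the
constant names are my own labels for the figures in the changelog, not
identifiers from the kernel source:

```python
# Sanity-check of the overflow bound from the changelog:
# how many maximally loaded, always-runnable entities fit in a
# 64-bit load_sum before it can wrap?
LOAD_AVG_MAX = 47742       # max PELT geometric-series sum (from changelog)
MAX_NICE_WEIGHT = 88761    # load weight of a nice -20 task
FIXEDPOINT_SCALE = 1024    # 2^SCHED_FIXEDPOINT_SHIFT fixed-point unit

# Each such entity can contribute up to
# LOAD_AVG_MAX * MAX_NICE_WEIGHT * FIXEDPOINT_SCALE to load_sum,
# so the number of entities that fit under 2^64 is:
max_entities = 2**64 // (LOAD_AVG_MAX * MAX_NICE_WEIGHT * FIXEDPOINT_SCALE)
print(max_entities)  # 4251057, matching the changelog figure
```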