Date: Sat, 3 Dec 2016 23:25:03 +0000
From: Matt Fleming
To: Vincent Guittot
Cc: peterz@infradead.org, mingo@kernel.org, linux-kernel@vger.kernel.org,
	Morten.Rasmussen@arm.com, dietmar.eggemann@arm.com,
	kernellwp@gmail.com, yuyang.du@intel.com, umgwanakikbuti@gmail.com
Subject: Re: [PATCH 1/2 v2] sched: fix find_idlest_group for fork
Message-ID: <20161203232503.GJ20785@codeblueprint.co.uk>
In-Reply-To: <1480088073-11642-2-git-send-email-vincent.guittot@linaro.org>

On Fri, 25 Nov, at 04:34:32PM, Vincent Guittot wrote:
> During fork, the utilization of a task is initialized once the rq has
> been selected, because the current utilization level of the rq is used
> to set the utilization of the forked task. As the task's utilization is
> still zero at this step of the fork sequence, it doesn't make sense to
> look for spare capacity that can fit the task's utilization.
> Furthermore, I can see performance regressions for the test
> "hackbench -P -g 1" because the least-loaded policy is always bypassed
> and tasks are not spread during fork.
>
> With this patch and the fix below, we are back to the same performance
> as v4.8.
> The fix below is only a temporary one, used for the test until a
> smarter solution is found, because we can't simply remove the test,
> which is useful for other benchmarks.
>
> @@ -5708,13 +5708,6 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
>
>  	avg_cost = this_sd->avg_scan_cost;
>
> -	/*
> -	 * Due to large variance we need a large fuzz factor; hackbench in
> -	 * particularly is sensitive here.
> -	 */
> -	if ((avg_idle / 512) < avg_cost)
> -		return -1;
> -
>  	time = local_clock();
>
>  	for_each_cpu_wrap(cpu, sched_domain_span(sd), target, wrap) {

OK, I need to point out that I didn't apply the above hunk when testing
this patch series, but I wouldn't have expected it to impact our
fork-intensive workloads so much. Let me know if you'd like me to re-run
with it applied.

I don't see much of a difference, positive or negative, for the majority
of the test machines; it's mainly a wash. However, the following 4-cpu
Xeon E5504 machine does show a nice win with thread counts in the
mid-range (note, the second column is the number of hackbench groups,
where each group has 40 tasks):

hackbench-process-pipes
                        4.9.0-rc6             4.9.0-rc6             4.9.0-rc6
                        tip-sched      fix-fig-for-fork               fix-sig
Amean    1       0.2193 (  0.00%)      0.2014 (  8.14%)      0.1746 ( 20.39%)
Amean    3       0.4489 (  0.00%)      0.3544 ( 21.04%)      0.3284 ( 26.83%)
Amean    5       0.6173 (  0.00%)      0.4690 ( 24.02%)      0.4977 ( 19.37%)
Amean    7       0.7323 (  0.00%)      0.6367 ( 13.05%)      0.6267 ( 14.42%)
Amean    12      0.9716 (  0.00%)      1.0187 ( -4.85%)      0.9351 (  3.75%)
Amean    16      1.2866 (  0.00%)      1.2664 (  1.57%)      1.2131 (  5.71%)