Date: Mon, 10 Oct 2016 19:34:40 +0200
From: Vincent Guittot
To: Matt Fleming
Cc: Wanpeng Li, Peter Zijlstra, Ingo Molnar, "linux-kernel@vger.kernel.org",
 Mike Galbraith, Yuyang Du, Dietmar Eggemann
Subject: Re: [PATCH] sched/fair: Do not decay new task load on first enqueue
Message-ID: <20161010173440.GA28945@linaro.org>
References: <20160923115808.2330-1-matt@codeblueprint.co.uk>
 <20160928101422.GR5016@twins.programming.kicks-ass.net>
 <20160928193731.GD16071@codeblueprint.co.uk>
 <20161010100107.GZ16071@codeblueprint.co.uk>
List-ID: linux-kernel@vger.kernel.org

On Monday 10 Oct 2016 at 14:29:28 (+0200), Vincent Guittot wrote:
> On 10 October 2016 at 12:01, Matt Fleming wrote:
> > On Sun, 09 Oct, at 11:39:27AM, Wanpeng Li wrote:
> >>
> >> The difference between this patch and Peterz's is that your patch has
> >> a delta, since activate_task()->enqueue_task() does do
> >> update_rq_clock(); so why would not having the delta cause low cpu
> >> machines (4 or 8) to regress, as in your other reply in this thread?
> >
> > Both my patch and Peter's patch cause issues with low cpu machines. In
> > <20161004201105.GP16071@codeblueprint.co.uk> I said,
> >
> >   "This patch causes some low cpu machines (4 or 8) to regress. It
> >   turns out they regress with my patch too."
> >
> > Have I misunderstood your question?
> >
> > I ran out of time to investigate this last week, though I did try all
> > proposed patches, including Vincent's, and none of them produced wins
> > across the board.
>
> I have tried to reproduce your issue on my target, a Hikey board (ARM
> based octo core), but I failed to see a regression with commit
> 7dc603c9028e. Nevertheless, I can see tasks not being well spread
> during fork, as you mentioned. So I studied the spreading issue during
> fork a bit more last week, and I have a new version of my proposed
> patch that I'm going to send soon. With this patch, I can see a good
> spread of tasks during the fork sequence and some perf improvement,
> even if it is a bit hard to quantify because the variance of the
> hackbench test is quite high; it is mainly an improvement in the
> repeatability of the results.

Subject: [PATCH] sched: use load_avg for selecting idlest group

find_idlest_group() only compares the runnable_load_avg when looking
for the idlest group. But on a fork-intensive use case like hackbench,
where tasks block quickly after the fork, this can lead to selecting
the same CPU whereas other CPUs, which have a similar runnable load but
a lower load_avg, could be chosen instead.

When the runnable_load_avg of 2 CPUs are close, we now take into
account the amount of blocked load as a 2nd selection factor. For a use
case like hackbench, this enables the scheduler to select different
CPUs during the fork sequence and to spread tasks across the system.

Tests have been done on a Hikey board (ARM based octo core) for several
kernels. The table below gives the min, max, avg and stdev values of 18
runs with each configuration.

The v4.8+patches configuration also includes the change below, which is
part of the proposal made by Peter to ensure that the clock will be up
to date when the forked task is attached to the rq:
@@ -2568,6 +2568,7 @@ void wake_up_new_task(struct task_struct *p)
 	__set_task_cpu(p, select_task_rq(p, task_cpu(p), SD_BALANCE_FORK, 0));
 #endif
 	rq = __task_rq_lock(p, &rf);
+	update_rq_clock(rq);
 	post_init_entity_util_avg(&p->se);

 	activate_task(rq, p, 0);

hackbench -P -g 1

        ea86cb4b7621  7dc603c9028e  v4.8       v4.8+patches
min     0.049         0.050         0.051      0.048
avg     0.057         0.057(0%)     0.057(0%)  0.055(+5%)
max     0.066         0.068         0.070      0.063
stdev   +/-9%         +/-9%         +/-8%      +/-9%

Signed-off-by: Vincent Guittot
---
 kernel/sched/fair.c | 40 ++++++++++++++++++++++++++++++++--------
 1 file changed, 32 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 039de34..628b00b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5166,15 +5166,16 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
 		  int this_cpu, int sd_flag)
 {
 	struct sched_group *idlest = NULL, *group = sd->groups;
-	unsigned long min_load = ULONG_MAX, this_load = 0;
+	unsigned long min_runnable_load = ULONG_MAX, this_runnable_load = 0;
+	unsigned long min_avg_load = ULONG_MAX, this_avg_load = 0;
 	int load_idx = sd->forkexec_idx;
-	int imbalance = 100 + (sd->imbalance_pct-100)/2;
+	unsigned long imbalance = (scale_load_down(NICE_0_LOAD)*(sd->imbalance_pct-100))/100;

 	if (sd_flag & SD_BALANCE_WAKE)
 		load_idx = sd->wake_idx;

 	do {
-		unsigned long load, avg_load;
+		unsigned long load, avg_load, runnable_load;
 		int local_group;
 		int i;

@@ -5188,6 +5189,7 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,

 		/* Tally up the load of all CPUs in the group */
 		avg_load = 0;
+		runnable_load = 0;

 		for_each_cpu(i, sched_group_cpus(group)) {
 			/* Bias balancing toward cpus of our domain */
@@ -5196,21 +5198,43 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
 			else
 				load = target_load(i, load_idx);

-			avg_load += load;
+			runnable_load += load;
+
+			avg_load += cfs_rq_load_avg(&cpu_rq(i)->cfs);
 		}

 		/* Adjust by relative CPU capacity of the group */
 		avg_load = (avg_load * SCHED_CAPACITY_SCALE) / group->sgc->capacity;
+		runnable_load = (runnable_load * SCHED_CAPACITY_SCALE) /
+				group->sgc->capacity;

 		if (local_group) {
-			this_load = avg_load;
-		} else if (avg_load < min_load) {
-			min_load = avg_load;
+			this_runnable_load = runnable_load;
+			this_avg_load = avg_load;
+		} else if (min_runnable_load > (runnable_load + imbalance)) {
+			/*
+			 * The runnable load is significantly smaller so we
+			 * can pick this new cpu
+			 */
+			min_runnable_load = runnable_load;
+			min_avg_load = avg_load;
+			idlest = group;
+		} else if ((runnable_load < (min_runnable_load + imbalance)) &&
+			   (100*min_avg_load > sd->imbalance_pct*avg_load)) {
+			/*
+			 * The runnable loads are close so we take into
+			 * account blocked load through avg_load which is
+			 * blocked + runnable load
+			 */
+			min_avg_load = avg_load;
 			idlest = group;
 		}
+
 	} while (group = group->next, group != sd->groups);

-	if (!idlest || 100*this_load < imbalance*min_load)
+	if (!idlest ||
+	    (min_runnable_load > (this_runnable_load + imbalance)) ||
+	    ((this_runnable_load < (min_runnable_load + imbalance)) &&
+	     (100*min_avg_load > sd->imbalance_pct*this_avg_load)))
 		return NULL;
 	return idlest;
 }
--
2.7.4