Date: Sun, 14 Feb 2010 02:06:11 +0530
From: Vaidyanathan Srinivasan
Reply-To: svaidy@linux.vnet.ibm.com
To: Suresh Siddha
Cc: Peter Zijlstra, Ingo Molnar, LKML, "Ma, Ling", "Zhang, Yanmin",
 ego@in.ibm.com
Subject: Re: [patch] sched: fix SMT scheduler regression in find_busiest_queue()
Message-ID: <20100213203611.GJ5882@dirshya.in.ibm.com>
References: <1266023662.2808.118.camel@sbs-t61.sc.intel.com>
 <20100213182748.GB5882@dirshya.in.ibm.com>
 <20100213202552.GI5882@dirshya.in.ibm.com>
In-Reply-To: <20100213202552.GI5882@dirshya.in.ibm.com>

* Vaidyanathan Srinivasan [2010-02-14 01:55:52]:

> * Vaidyanathan Srinivasan [2010-02-13 23:57:48]:
> 
> > * Suresh Siddha [2010-02-12 17:14:22]:
> > 
> > > PeterZ/Ingo,
> > > 
> > > Ling Ma and Yanmin reported this SMT scheduler regression which led
> > > to the condition where both the SMT threads on a core are busy while
> > > the other cores in the same socket are completely idle, causing a
> > > major performance regression.  I have appended a fix for this.  This
> > > is a relatively low-risk fix and if you agree with both the fix and
> > > the risk assessment, can we please push this to Linus so that we can
> > > address this in 2.6.33.
> > 
> > Hi Suresh,
> > 
> > I have been looking at this issue in order to make
> > sched_smt_powersavings work.  In my simple tests I find that the
> > default behavior is to have one task per core first, since the total
> > cpu power of the core will be 1178 (589*2), which is not sufficient
> > to keep two tasks balanced in the group.
> > 
> > In the scenario that you have described, even though the group has
> > been identified as busiest, find_busiest_queue() will return NULL,
> > since wl will be 1780 {load(1024)*SCHED_LOAD_SCALE/power(589)},
> > making wl greater than imbalance.
> > 
> > The fix that you have posted will solve the problem described.
> > However, we need to make sched_smt_powersavings also work, by
> > increasing the group capacity and allowing two tasks to run in a
> > core.
> > 
> > As Peter mentioned, the SD_PREFER_SIBLING flag is meant to spread the
> > work across groups at any sched domain, so that the solution will
> > work for pre-Nehalem quad cores also.  But it still needs some work
> > to get it right.  Please refer to my earlier bug report at:
> > 
> > http://lkml.org/lkml/2010/2/8/80
> > 
> > The solution you have posted will not work for non-HT quad cores,
> > where we want the tasks to be spread across cache domains for best
> > performance, though the regression there is not as severe as in the
> > Nehalem case.
> > 
> > I will test your solution in different scenarios and post updates.
> 
> I have tested this patch on Nehalem and it provides the desired result
> when sched_smt/mc_powersavings=0.  One task per core is placed before
> using sibling threads.  However, core usage is mostly, though not
> always, balanced across the different packages.  Incorrect
> consolidation to SMT threads when free cores are available does not
> happen once this fix is applied.
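To put concrete numbers on the wl vs. imbalance comparison discussed
above, here is a rough userspace sketch of the arithmetic (illustration
only, not kernel code; the 589 and 1024 values are the ones quoted
above):

/*
 * One HT sibling with cpu_power 589 running a single nice-0 task of
 * weight 1024.  With the power-scaled load, wl ends up above the
 * ~1024 imbalance, so the old check
 * "rq->nr_running == 1 && wl > imbalance" skips this cpu and
 * find_busiest_queue() returns NULL, even though the task could move
 * to an idle core.
 */
#include <stdio.h>

#define SCHED_LOAD_SCALE 1024UL

int main(void)
{
	unsigned long power = 589;		/* one SMT sibling    */
	unsigned long load = 1024;		/* one nice-0 task    */
	unsigned long imbalance = 1024;		/* ~SCHED_LOAD_SCALE  */
	unsigned long wl = load * SCHED_LOAD_SCALE / power;	/* 1780 */

	printf("scaled wl   = %lu, imbalance = %lu -> cpu skipped\n",
	       wl, imbalance);
	printf("unscaled wl = %lu, imbalance = %lu -> cpu considered\n",
	       load, imbalance);
	return 0;
}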
> > Thanks,
> > Vaidy
> > 
> > > thanks,
> > > suresh
> > > ---
> > > 
> > > From: Suresh Siddha
> > > Subject: sched: fix SMT scheduler regression in find_busiest_queue()
> > > 
> > > Fix an SMT scheduler performance regression that is leading to a
> > > scenario where the SMT threads in one core are completely idle while
> > > both the SMT threads in another core (on the same socket) are busy.
> > > 
> > > This is caused by this commit (with the problematic code
> > > highlighted):
> > > 
> > >     commit bdb94aa5dbd8b55e75f5a50b61312fe589e2c2d1
> > >     Author: Peter Zijlstra
> > >     Date:   Tue Sep 1 10:34:38 2009 +0200
> > > 
> > >         sched: Try to deal with low capacity
> > > 
> > >     @@ -4203,15 +4223,18 @@ find_busiest_queue()
> > >     ...
> > >     	for_each_cpu(i, sched_group_cpus(group)) {
> > >     +	unsigned long power = power_of(i);
> > > 
> > >     ...
> > > 
> > >     -	wl = weighted_cpuload(i);
> > >     +	wl = weighted_cpuload(i) * SCHED_LOAD_SCALE;
> > >     +	wl /= power;
> > > 
> > >     -	if (rq->nr_running == 1 && wl > imbalance)
> > >     +	if (capacity && rq->nr_running == 1 && wl > imbalance)
> > >     		continue;
> > > 
> > > On an SMT system, the power of an HT logical cpu will be 589, and
> > > the scheduler load imbalance (for scenarios like the one mentioned
> > > above) can be approximately 1024 (SCHED_LOAD_SCALE).  The above
> > > change of scaling the weighted load with the power will result in
> > > "wl > imbalance", ultimately causing find_busiest_queue() to return
> > > NULL and load_balance() to think that the load is well balanced.
> > > But in fact one of the tasks can be moved to the idle core for
> > > optimal performance.
> > > 
> > > We don't need to use the weighted load (wl) scaled by the cpu power
> > > to compare with imbalance.  In that condition, we already know there
> > > is only a single task ("rq->nr_running == 1") and the comparison
> > > between imbalance and wl is to make sure that we select the correct
> > > priority thread which matches imbalance.  So we really need to
> > > compare the imbalance with the original weighted load of the cpu and
> > > not the scaled load.
> > > 
> > > But in other conditions, where we want the most hammered (busiest)
> > > cpu, we can use the scaled load to ensure that we consider the cpu
> > > power in addition to the actual load on that cpu, so that we can
> > > move the load away from the cpu that is getting most hammered with
> > > respect to its actual capacity, as compared with the rest of the
> > > cpus in that busiest group.
> > > 
> > > Fix it.
> > > 
> > > Reported-by: Ma Ling
> > > Initial-Analysis-by: Zhang, Yanmin
> > > Signed-off-by: Suresh Siddha
> 
> Acked-by: Vaidyanathan Srinivasan
> 
> > > Cc: stable@kernel.org [2.6.32.x]
> > > ---
> > > 
> > > diff --git a/kernel/sched.c b/kernel/sched.c
> > > index 3a8fb30..bef5369 100644
> > > --- a/kernel/sched.c
> > > +++ b/kernel/sched.c
> > > @@ -4119,12 +4119,23 @@ find_busiest_queue(struct sched_group *group, enum cpu_idle_type idle,
> > >  			continue;
> > > 
> > >  		rq = cpu_rq(i);
> > > -		wl = weighted_cpuload(i) * SCHED_LOAD_SCALE;
> > > -		wl /= power;
> > > +		wl = weighted_cpuload(i);
> > > 
> > > +		/*
> > > +		 * When comparing with imbalance, use weighted_cpuload()
> > > +		 * which is not scaled with the cpu power.
> > > +		 */
> > >  		if (capacity && rq->nr_running == 1 && wl > imbalance)
> > >  			continue;
> > > 
> > > +		/*
> > > +		 * For the load comparisons with the other cpu's, consider
> > > +		 * the weighted_cpuload() scaled with the cpu power, so that
> > > +		 * the load can be moved away from the cpu that is potentially
> > > +		 * running at a lower capacity.
> > > +		 */
> > > +		wl = (wl * SCHED_LOAD_SCALE) / power;
> > > +
> > >  		if (wl > max_load) {
> > >  			max_load = wl;
> > >  			busiest = rq;
> > > 

In addition to the above fix, for sched_smt_powersavings to work, the
group capacity of the core (mc level) should be made 2 in
update_sg_lb_stats(), by changing the DIV_ROUND_CLOSEST to
DIV_ROUND_UP():

	sgs->group_capacity =
		DIV_ROUND_UP(group->cpu_power, SCHED_LOAD_SCALE);
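To make the capacity arithmetic concrete, here is a rough userspace
sketch (illustration only, not kernel code; the per-thread cpu_power of
589 is the value discussed earlier in this thread):

/*
 * With two SMT siblings of cpu_power 589 each, the core's cpu_power is
 * 1178.  DIV_ROUND_CLOSEST() rounds that to a capacity of 1, while
 * DIV_ROUND_UP() yields 2, which would allow two tasks in the core
 * when sched_smt_powersavings is enabled.
 */
#include <stdio.h>

#define SCHED_LOAD_SCALE	1024UL
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))
#define DIV_ROUND_CLOSEST(n, d)	(((n) + (d) / 2) / (d))

int main(void)
{
	unsigned long cpu_power = 2 * 589;	/* SMT core, two siblings */

	printf("DIV_ROUND_CLOSEST -> group_capacity = %lu\n",
	       DIV_ROUND_CLOSEST(cpu_power, SCHED_LOAD_SCALE));	/* 1 */
	printf("DIV_ROUND_UP      -> group_capacity = %lu\n",
	       DIV_ROUND_UP(cpu_power, SCHED_LOAD_SCALE));		/* 2 */
	return 0;
}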
Ideally we can change this to DIV_ROUND_UP and let the
SD_PREFER_SIBLING flag force the capacity to 1.  Need to see if there
are any side effects of setting SD_PREFER_SIBLING at the SIBLING level
sched domain based on the sched_smt_powersavings flag.  Will post a
patch after more tests.

--Vaidy