Subject: Re: [PATCH] sched: wake-affine throttle
From: Mike Galbraith
To: Michael Wang
Cc: Peter Zijlstra, LKML, Ingo Molnar, Alex Shi, Namhyung Kim, Paul Turner, Andrew Morton, "Nikunj A. Dadhania", Ram Pai
Date: Thu, 11 Apr 2013 11:00:40 +0200

On Thu, 2013-04-11 at 10:44 +0200, Mike Galbraith wrote:
> On Thu, 2013-04-11 at 16:26 +0800, Michael Wang wrote:
>
> > The 1:N pattern is a good reason why the chance that the wakee's hot
> > data is cached on curr_cpu is lower, and since it's just 'lower', not
> > 'extinct', once the throttle interval is large enough it balances out.
> > My tests bear this out: when the interval becomes too big, the
> > improvement starts to drop.
>
> The magnitude of improvement drops just because there's less damage
> being done, methinks.  You'll eventually run out of measurable damage :)
>
> Yes, it's not really extinct, you _can_ reap a gain, it's just not at
> all likely to work out.  A more symmetric load will fare better, but any
> 1:N thing just has to spread far and wide to have any chance to perform.
>
> > Hmm... that's an interesting point: the workload contains work of
> > different 'priority' that depends on each other.  If the mother is
> > starving, all the kids can do nothing but wait for her; maybe that's
> > why the benefit is so significant, since in that case the mother's
> > slightly quicker response makes all the kids happy :)
>
> Exactly.  The entire load is server latency bound.  Keep the server on
> CPU, and the load performs as well as it can given the unavoidable
> data-miss cost.

(ie serial producer, parallel consumers... the choke point lies with the
utterly unscalable serial work producer)
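
For anyone skimming the thread, here is a minimal sketch of the throttling
idea under discussion: only attempt an affine (pull-to-waker-CPU) wakeup
once per interval for a given wakee.  The struct, field, and interval names
below are illustrative assumptions for this example, not the actual patch,
which keeps its state in task_struct and exposes the interval differently.

#include <linux/jiffies.h>
#include <linux/types.h>

/*
 * Illustrative sketch only -- in a real patch this state would live in
 * struct task_struct; the names here are made up for this example.
 */
struct wakee_throttle {
	unsigned long next_allowed;		/* jiffies */
};

static unsigned int wake_affine_interval_ms = 1;	/* illustrative default */

static bool wake_affine_allowed(struct wakee_throttle *wt)
{
	/*
	 * While throttled, skip the affine path so a 1:N producer does not
	 * drag every consumer onto its own CPU on each wakeup; the regular
	 * domain balancing is used instead.
	 */
	if (time_before(jiffies, wt->next_allowed))
		return false;

	/* Allow one affine attempt, then re-arm the throttle window. */
	wt->next_allowed = jiffies + msecs_to_jiffies(wake_affine_interval_ms);
	return true;
}

With a short interval the affine pull still happens often enough to keep a
latency-bound server hot on its CPU; with a very long interval the consumers
simply spread out, which is why the measured improvement tails off.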