Subject: Re: [RFC][PATCH v4 3/3] sched: Periodically decay max cost of idle balance
From: Jason Low
To: Peter Zijlstra
Cc: mingo@redhat.com, linux-kernel@vger.kernel.org, efault@gmx.de,
    pjt@google.com, preeti@linux.vnet.ibm.com, akpm@linux-foundation.org,
    mgorman@suse.de, riel@redhat.com, aswin@hp.com, scott.norton@hp.com,
    srikar@linux.vnet.ibm.com
Date: Mon, 09 Sep 2013 13:40:33 -0700

On Mon, 2013-09-09 at 13:44 +0200, Peter Zijlstra wrote:
> On Tue, Sep 03, 2013 at 11:02:59PM -0700, Jason Low wrote:
> > On Fri, 2013-08-30 at 12:29 +0200, Peter Zijlstra wrote:
> > >  	rcu_read_lock();
> > >  	for_each_domain(cpu, sd) {
> > > +		/*
> > > +		 * Decay the newidle max times here because this is a regular
> > > +		 * visit to all the domains. Decay ~0.5% per second.
> > > +		 */
> > > +		if (time_after(jiffies, sd->next_decay_max_lb_cost)) {
> > > +			sd->max_newidle_lb_cost =
> > > +				(sd->max_newidle_lb_cost * 254) / 256;
> >
> > I initially picked 0.5%, but after trying it out, it appears to decay very
> > slowly when the max is at a high value. Should we increase the decay a
> > little bit more? Maybe something like:
> >
> > 	sd->max_newidle_lb_cost = (sd->max_newidle_lb_cost * 63) / 64;
>
> So the half-life in either case is given by:
>
> 	n = ln(1/2) / ln(x)
>
> which gives 88 seconds for x := 254/256, or 44 seconds for x := 63/64.
>
> I don't really care too much, but obviously something like:
>
> 	256*exp(ln(.5)/60) ~= 253
>
> is attractive ;-)
>
> > > +		/*
> > > +		 * Stop the load balance at this level. There is another
> > > +		 * CPU in our sched group which is doing load balancing more
> > > +		 * actively.
> > > +		 */
> > > +		if (!continue_balancing) {
> >
> > Is "continue_balancing" named "balance" in older kernels?
>
> Yeah, this patch crossed paths with a series remodeling the
> load-balancer a bit; that should all be pushed out to tip/master.
>
> In particular, see commit:
>
> 	23f0d20 sched: Factor out code to should_we_balance()
>
> > Here are the AIM7 results with the other 2 patches + this patch with the
> > slightly higher decay value.
>
> Just to clarify, 'this patch' is the one I sent?

Yes, I was referring to the one you sent out.

Also, I did the s/smp_processor_id()/this_cpu/ on patch 2.
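
For reference, here is a quick userspace check of those half-life numbers
(a hypothetical standalone sketch, not kernel code; it assumes the decay
is applied once per second, as in the patch):

	/* halflife.c -- build with: cc -o halflife halflife.c -lm */
	#include <math.h>
	#include <stdio.h>

	/* Half-life, in decay steps, of a per-step factor x: n = ln(1/2) / ln(x) */
	static double half_life(double x)
	{
		return log(0.5) / log(x);
	}

	int main(void)
	{
		printf("254/256: %.1f s\n", half_life(254.0 / 256.0));	/* ~88.4 */
		printf("253/256: %.1f s\n", half_life(253.0 / 256.0));	/* ~58.8 */
		printf(" 63/64 : %.1f s\n", half_life(63.0 / 64.0));	/* ~44.0 */

		/* Factor giving a ~60 s half-life: 256 * exp(ln(.5)/60) ~= 253 */
		printf("factor : %.2f / 256\n", 256.0 * exp(log(0.5) / 60.0));
		return 0;
	}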