From: Vaidyanathan Srinivasan
Subject: [RFC PATCH v2 3/5] sched: collect statistics required for powersave balance
To: Linux Kernel, Suresh B Siddha, Venkatesh Pallipadi, Peter Zijlstra
Cc: Ingo Molnar, Dipankar Sarma, Balbir Singh, Vatsa, Gautham R Shenoy, Andi Kleen, David Collier-Brown, Tim Connors, Max Krasnyansky, Vaidyanathan Srinivasan
Date: Thu, 09 Oct 2008 17:39:44 +0530
Message-ID: <20081009120944.27010.90362.stgit@drishya.in.ibm.com>
In-Reply-To: <20081009120705.27010.12857.stgit@drishya.in.ibm.com>
References: <20081009120705.27010.12857.stgit@drishya.in.ibm.com>
User-Agent: StGIT/0.14.2

Update the sched-domain-level statistics with the minimum-load group and
the group leader that can pull more tasks.  Also suggest a power-save
task movement if the domain is otherwise balanced.

Signed-off-by: Vaidyanathan Srinivasan
---
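For reference: the sd_loads and group_loads statistics containers filled
in below are introduced by the earlier patches in this series and are not
shown here.  What follows is a minimal sketch of only the fields this
patch relies on, with the layout inferred from usage rather than copied
from those patches; the real definitions may differ.

/*
 * Sketch only: inferred from how this patch uses the statistics,
 * not taken from the earlier patches in the series.
 */
struct sched_group;
struct sched_domain;

struct group_loads {
	struct sched_group *group;	 /* group these numbers describe */
	unsigned long nr_running;	 /* non-idle tasks in the group */
	unsigned long avg_load_per_task; /* weighted load per task */
};

struct sd_loads {
	struct sched_domain *sd;	 /* domain being balanced */
	struct group_loads local;	 /* group containing this_cpu */
	struct group_loads min_load_group;	    /* candidate source */
	struct group_loads power_save_leader_group; /* candidate target */
};
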
 kernel/sched.c |   96 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 96 insertions(+), 0 deletions(-)

diff --git a/kernel/sched.c b/kernel/sched.c
index cfd83d9..c99b5bd 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -3229,6 +3229,102 @@ void update_sd_loads(struct sd_loads *sdl, struct group_loads *gl)
 	}
 }
 
+#if defined(CONFIG_SCHED_MC) || defined(CONFIG_SCHED_SMT)
+void update_powersavings_group_loads(struct sd_loads *sdl,
+				     struct group_loads *gl,
+				     enum cpu_idle_type idle)
+{
+	int group_capacity = gl->group->__cpu_power / SCHED_LOAD_SCALE;
+
+	/*
+	 * Busy processors will not participate in power savings
+	 * balance.
+	 */
+	if (idle == CPU_NOT_IDLE ||
+	    !(sdl->sd->flags & SD_POWERSAVINGS_BALANCE))
+		return;
+
+	/*
+	 * If this group is idle or completely loaded, then there
+	 * is no opportunity to do power savings balance with it.
+	 */
+	if (gl->nr_running >= group_capacity || gl->nr_running == 0)
+		return;
+
+	/*
+	 * Find the group with the least non-idle load.  This is the
+	 * group from which load is pulled to save power.
+	 */
+	if (!sdl->min_load_group.group)
+		sdl->min_load_group = *gl;
+	else {
+		if (gl->nr_running < sdl->min_load_group.nr_running)
+			sdl->min_load_group = *gl;
+		/*
+		 * If the loads are equal, prefer the group whose
+		 * first CPU has the lower logical number.
+		 */
+		else if (gl->nr_running == sdl->min_load_group.nr_running &&
+			 first_cpu(gl->group->cpumask) <
+			 first_cpu(sdl->min_load_group.group->cpumask))
+			sdl->min_load_group = *gl;
+	}
+
+	/*
+	 * Find the group that is near its capacity but still has
+	 * room to pick up some load from another group and save
+	 * more power.
+	 */
+	if (gl->nr_running > 0 && gl->nr_running <= group_capacity - 1) {
+		if (!sdl->power_save_leader_group.group)
+			sdl->power_save_leader_group = *gl;
+		else {
+			if (gl->nr_running >
+			    sdl->power_save_leader_group.nr_running)
+				sdl->power_save_leader_group = *gl;
+			else if (gl->nr_running ==
+				 sdl->power_save_leader_group.nr_running &&
+				 first_cpu(gl->group->cpumask) <
+				 first_cpu(sdl->power_save_leader_group.group->cpumask))
+				sdl->power_save_leader_group = *gl;
+		}
+	}
+}
+
+static struct sched_group *powersavings_balance_group(struct sd_loads *sdl,
+				struct group_loads *gl, enum cpu_idle_type idle,
+				unsigned long *imbalance)
+{
+	*imbalance = 0;
+	if (idle == CPU_NOT_IDLE || !(sdl->sd->flags & SD_POWERSAVINGS_BALANCE))
+		return NULL;
+
+	/*
+	 * Suggest a move only when this CPU's group is the power-save
+	 * leader and a distinct minimum-load group exists to drain.
+	 */
+	if (sdl->local.group == sdl->power_save_leader_group.group &&
+	    sdl->power_save_leader_group.group !=
+	    sdl->min_load_group.group) {
+		*imbalance = sdl->min_load_group.avg_load_per_task;
+		return sdl->min_load_group.group;
+	}
+
+	return NULL;
+}
+#else
+void update_powersavings_group_loads(struct sd_loads *sdl,
+		struct group_loads *gl, enum cpu_idle_type idle)
+{
+	return;
+}
+
+static struct sched_group *powersavings_balance_group(struct sd_loads *sdl,
+		struct group_loads *gl, enum cpu_idle_type idle,
+		unsigned long *imbalance)
+{
+	*imbalance = 0;
+	return NULL;
+}
+#endif
+
 /*
  * find_busiest_group finds and returns the busiest CPU group within the
  * domain. It calculates and returns the amount of weighted load which
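
To illustrate the selection policy, here is a self-contained userspace
sketch (illustration only, not kernel code) using made-up numbers: two
quad-CPU groups with capacity 4 each, one running three tasks and the
other running one.  The toy_group type is a hypothetical stand-in for
the per-group statistics above.

#include <stdio.h>

/* Hypothetical stand-in for a sched_group plus its collected stats. */
struct toy_group {
	int first_cpu;		/* lowest logical CPU in the group */
	int capacity;		/* tasks the group holds at full power */
	int nr_running;		/* non-idle tasks currently in it */
};

int main(void)
{
	struct toy_group groups[] = {
		{ .first_cpu = 0, .capacity = 4, .nr_running = 3 },
		{ .first_cpu = 4, .capacity = 4, .nr_running = 1 },
	};
	struct toy_group *min_load = NULL, *leader = NULL;
	int i;

	for (i = 0; i < 2; i++) {
		struct toy_group *g = &groups[i];

		/* Idle or fully loaded groups are skipped, as in the patch. */
		if (g->nr_running == 0 || g->nr_running >= g->capacity)
			continue;

		/* min_load_group: fewest tasks; lower first CPU breaks ties. */
		if (!min_load || g->nr_running < min_load->nr_running ||
		    (g->nr_running == min_load->nr_running &&
		     g->first_cpu < min_load->first_cpu))
			min_load = g;

		/* leader: most tasks while still below capacity. */
		if (!leader || g->nr_running > leader->nr_running ||
		    (g->nr_running == leader->nr_running &&
		     g->first_cpu < leader->first_cpu))
			leader = g;
	}

	/* The suggested power-save move: drain min_load into the leader. */
	if (leader && min_load && leader != min_load)
		printf("pull %d task(s) from group@cpu%d into group@cpu%d\n",
		       min_load->nr_running, min_load->first_cpu,
		       leader->first_cpu);
	return 0;
}

With these numbers the single task's load is pulled from the group at
cpu4 into the group at cpu0, leaving one package free to go idle.  In
the patch the move is additionally gated on the local group being the
power-save leader, so only the leader's CPUs act on the suggestion.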