Date:	Mon, 17 Aug 2015 17:19:42 +0800
From:	Leo Yan
To:	morten.rasmussen@arm.com
Cc:	peterz@infradead.org, mingo@redhat.com, vincent.guittot@linaro.org,
	daniel.lezcano@linaro.org, Dietmar Eggemann, yuyang.du@intel.com,
	mturquette@baylibre.com, rjw@rjwysocki.net, Juri Lelli,
	sgurrappadi@nvidia.com, pang.xunlei@zte.com.cn,
	linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
	Russell King
Subject: Re: [RFCv5, 18/46] arm: topology: Define TC2 energy and provide it to the scheduler
Message-ID: <20150817091942.GA754@leoy-linaro>
References: <1436293469-25707-19-git-send-email-morten.rasmussen@arm.com>
In-Reply-To: <1436293469-25707-19-git-send-email-morten.rasmussen@arm.com>

Hi Morten,

On Tue, Jul 07, 2015 at 07:24:01PM +0100, Morten Rasmussen wrote:
> From: Dietmar Eggemann
>
> This patch is only here to be able to test provisioning of energy related
> data from an arch topology shim layer to the scheduler. Since there is no
> code today which deals with extracting energy related data from the dtb or
> acpi, and process it in the topology shim layer, the content of the
> sched_group_energy structures as well as the idle_state and capacity_state
> arrays are hard-coded here.
>
> This patch defines the sched_group_energy structure as well as the
> idle_state and capacity_state array for the cluster (relates to sched
> groups (sgs) in DIE sched domain level) and for the core (relates to sgs
> in MC sd level) for a Cortex A7 as well as for a Cortex A15.
> It further provides related implementations of the sched_domain_energy_f
> functions (cpu_cluster_energy() and cpu_core_energy()).
>
> To be able to propagate this information from the topology shim layer to
> the scheduler, the elements of the arm_topology[] table have been
> provisioned with the appropriate sched_domain_energy_f functions.
>
> cc: Russell King
>
> Signed-off-by: Dietmar Eggemann
>
> ---
>  arch/arm/kernel/topology.c | 118 +++++++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 115 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm/kernel/topology.c b/arch/arm/kernel/topology.c
> index b35d3e5..bbe20c7 100644
> --- a/arch/arm/kernel/topology.c
> +++ b/arch/arm/kernel/topology.c
> @@ -274,6 +274,119 @@ void store_cpu_topology(unsigned int cpuid)
>  		cpu_topology[cpuid].socket_id, mpidr);
>  }
>
> +/*
> + * ARM TC2 specific energy cost model data. There are no unit requirements for
> + * the data. Data can be normalized to any reference point, but the
> + * normalization must be consistent. That is, one bogo-joule/watt must be the
> + * same quantity for all data, but we don't care what it is.
> + */
> +static struct idle_state idle_states_cluster_a7[] = {
> +	{ .power = 25 }, /* WFI */

This state is confusing: does it correspond to all CPUs in the cluster
having been powered off, while the L2 cache RAM array and SCU are still
powered on?

> +	{ .power = 10 }, /* cluster-sleep-l */

Does this state mean that all CPUs and the cluster have been powered
off? If so, it should not consume any power anymore...

> +	};
> +
> +static struct idle_state idle_states_cluster_a15[] = {
> +	{ .power = 70 }, /* WFI */
> +	{ .power = 25 }, /* cluster-sleep-b */
> +	};
> +
> +static struct capacity_state cap_states_cluster_a7[] = {
> +	/* Cluster only power */
> +	{ .cap = 150, .power = 2967, }, /* 350 MHz */

For the cluster-level capacity states, does this mean the benchmark
needs to be run on all CPUs within the cluster?

> +	{ .cap = 172, .power = 2792, }, /* 400 MHz */
> +	{ .cap = 215, .power = 2810, }, /* 500 MHz */
> +	{ .cap = 258, .power = 2815, }, /* 600 MHz */
> +	{ .cap = 301, .power = 2919, }, /* 700 MHz */
> +	{ .cap = 344, .power = 2847, }, /* 800 MHz */
> +	{ .cap = 387, .power = 3917, }, /* 900 MHz */
> +	{ .cap = 430, .power = 4905, }, /* 1000 MHz */
> +	};
> +
> +static struct capacity_state cap_states_cluster_a15[] = {
> +	/* Cluster only power */
> +	{ .cap = 426, .power = 7920, }, /* 500 MHz */
> +	{ .cap = 512, .power = 8165, }, /* 600 MHz */
> +	{ .cap = 597, .power = 8172, }, /* 700 MHz */
> +	{ .cap = 682, .power = 8195, }, /* 800 MHz */
> +	{ .cap = 768, .power = 8265, }, /* 900 MHz */
> +	{ .cap = 853, .power = 8446, }, /* 1000 MHz */
> +	{ .cap = 938, .power = 11426, }, /* 1100 MHz */
> +	{ .cap = 1024, .power = 15200, }, /* 1200 MHz */
> +	};
> +
> +static struct sched_group_energy energy_cluster_a7 = {
> +	.nr_idle_states = ARRAY_SIZE(idle_states_cluster_a7),
> +	.idle_states = idle_states_cluster_a7,
> +	.nr_cap_states = ARRAY_SIZE(cap_states_cluster_a7),
> +	.cap_states = cap_states_cluster_a7,
> +};
> +
> +static struct sched_group_energy energy_cluster_a15 = {
> +	.nr_idle_states = ARRAY_SIZE(idle_states_cluster_a15),
> +	.idle_states = idle_states_cluster_a15,
> +	.nr_cap_states = ARRAY_SIZE(cap_states_cluster_a15),
> +	.cap_states = cap_states_cluster_a15,
> +};
> +
> +static struct idle_state idle_states_core_a7[] = {
> +	{ .power = 0 }, /* WFI */

Shouldn't there be two idle states at the CPU level (WFI and CPU
power-off)?
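
Something like the sketch below is what I would have expected; the
second .power value here is only a placeholder, not a measured number:

static struct idle_state idle_states_core_a7[] = {
	{ .power = 0 }, /* WFI */
	{ .power = 0 }, /* CPU powered off, placeholder value */
};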

> +	};
> +
> +static struct idle_state idle_states_core_a15[] = {
> +	{ .power = 0 }, /* WFI */
> +	};
> +
> +static struct capacity_state cap_states_core_a7[] = {
> +	/* Power per cpu */
> +	{ .cap = 150, .power = 187, }, /* 350 MHz */
> +	{ .cap = 172, .power = 275, }, /* 400 MHz */
> +	{ .cap = 215, .power = 334, }, /* 500 MHz */
> +	{ .cap = 258, .power = 407, }, /* 600 MHz */
> +	{ .cap = 301, .power = 447, }, /* 700 MHz */
> +	{ .cap = 344, .power = 549, }, /* 800 MHz */
> +	{ .cap = 387, .power = 761, }, /* 900 MHz */
> +	{ .cap = 430, .power = 1024, }, /* 1000 MHz */
> +	};
> +
> +static struct capacity_state cap_states_core_a15[] = {
> +	/* Power per cpu */
> +	{ .cap = 426, .power = 2021, }, /* 500 MHz */
> +	{ .cap = 512, .power = 2312, }, /* 600 MHz */
> +	{ .cap = 597, .power = 2756, }, /* 700 MHz */
> +	{ .cap = 682, .power = 3125, }, /* 800 MHz */
> +	{ .cap = 768, .power = 3524, }, /* 900 MHz */
> +	{ .cap = 853, .power = 3846, }, /* 1000 MHz */
> +	{ .cap = 938, .power = 5177, }, /* 1100 MHz */
> +	{ .cap = 1024, .power = 6997, }, /* 1200 MHz */
> +	};
> +
> +static struct sched_group_energy energy_core_a7 = {
> +	.nr_idle_states = ARRAY_SIZE(idle_states_core_a7),
> +	.idle_states = idle_states_core_a7,
> +	.nr_cap_states = ARRAY_SIZE(cap_states_core_a7),
> +	.cap_states = cap_states_core_a7,
> +};
> +
> +static struct sched_group_energy energy_core_a15 = {
> +	.nr_idle_states = ARRAY_SIZE(idle_states_core_a15),
> +	.idle_states = idle_states_core_a15,
> +	.nr_cap_states = ARRAY_SIZE(cap_states_core_a15),
> +	.cap_states = cap_states_core_a15,
> +};
> +
> +/* sd energy functions */
> +static inline const struct sched_group_energy *cpu_cluster_energy(int cpu)
> +{
> +	return cpu_topology[cpu].socket_id ? &energy_cluster_a7 :
> +			&energy_cluster_a15;
> +}
> +
> +static inline const struct sched_group_energy *cpu_core_energy(int cpu)
> +{
> +	return cpu_topology[cpu].socket_id ? &energy_core_a7 :
> +			&energy_core_a15;
> +}
> +
>  static inline int cpu_corepower_flags(void)
>  {
>  	return SD_SHARE_PKG_RESOURCES | SD_SHARE_POWERDOMAIN | \
> @@ -282,10 +395,9 @@ static inline int cpu_corepower_flags(void)
>
>  static struct sched_domain_topology_level arm_topology[] = {
>  #ifdef CONFIG_SCHED_MC
> -	{ cpu_corepower_mask, cpu_corepower_flags, SD_INIT_NAME(GMC) },
> -	{ cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
> +	{ cpu_coregroup_mask, cpu_corepower_flags, cpu_core_energy, SD_INIT_NAME(MC) },
>  #endif
> -	{ cpu_cpu_mask, SD_INIT_NAME(DIE) },
> +	{ cpu_cpu_mask, 0, cpu_cluster_energy, SD_INIT_NAME(DIE) },
>  	{ NULL, },
>  };
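
If I understand the rest of the series correctly, the scheduler then
combines the two tables for a sched group roughly as in the sketch
below. This is only my simplified reading, not code from the series;
the names sg_energy_estimate, group_util, cap_idx and idle_idx are made
up for illustration:

/*
 * Rough estimate of a sched group's energy: scale the selected
 * capacity state's power by how busy the group is, and charge the
 * selected idle state's power for the remaining (idle) part.
 * Assumes group_util <= cap.
 */
static unsigned long sg_energy_estimate(const struct sched_group_energy *sge,
					unsigned long group_util,
					int cap_idx, int idle_idx)
{
	unsigned long cap = sge->cap_states[cap_idx].cap;
	unsigned long busy_energy, idle_energy;

	busy_energy = sge->cap_states[cap_idx].power * group_util / cap;
	idle_energy = sge->idle_states[idle_idx].power * (cap - group_util) / cap;

	return busy_energy + idle_energy;
}

With a formula of that kind, a cluster-sleep entry with zero power
would simply contribute nothing, which is why I am asking what the
non-zero cluster-sleep numbers above are meant to represent.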