2012-06-19 08:29:54

by Vincent Guittot

Subject: [PATCH v2 0/5] ARM: topology: set the capacity of each core for big.LITTLE

This patchset creates an arch_scale_freq_power function for ARM, which is used
to set the relative capacity of each core of a big.LITTLE system. It also removes
the broken power estimation of x86.

Modification since v1:
- Add and update the explanation about the use of the table and the valid range of values
- Remove the use of NR_CPUS and use nr_cpu_ids instead
- Remove broken power estimation of x86

Peter Zijlstra (1):
sched, x86: Remove broken power estimation

Vincent Guittot (4):
ARM: topology: Add arch_scale_freq_power function
ARM: topology: factorize the update of sibling masks
ARM: topology: Update cpu_power according to DT information
sched: cpu_power: enable ARCH_POWER

arch/arm/include/asm/topology.h | 2 +
arch/arm/kernel/topology.c | 210 +++++++++++++++++++++++++++++++++++----
arch/x86/kernel/cpu/Makefile | 2 +-
arch/x86/kernel/cpu/sched.c | 55 ----------
kernel/sched/features.h | 2 +-
5 files changed, 193 insertions(+), 78 deletions(-)
delete mode 100644 arch/x86/kernel/cpu/sched.c

--
1.7.9.5


2012-06-19 08:30:08

by Vincent Guittot

Subject: [PATCH v2 1/5] ARM: topology: Add arch_scale_freq_power function

Add the infrastructure needed to modify the cpu_power of each core.
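
A rough usage sketch (an editorial illustration, not part of the patch): once
set_power_scale() is available, platform code could publish a lower relative
capacity for a cluster of slower cores. The helper name and the 3/4 ratio
below are purely hypothetical.

#include <linux/init.h>
#include <linux/cpumask.h>
#include <linux/sched.h>
#include <asm/topology.h>

/* Hypothetical example: declare every cpu in 'little' as ~75% capacity. */
static void __init example_scale_little_cores(const struct cpumask *little)
{
        unsigned int cpu;

        for_each_cpu(cpu, little)
                set_power_scale(cpu, SCHED_POWER_SCALE * 3 / 4);
}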

Signed-off-by: Vincent Guittot <[email protected]>
---
arch/arm/include/asm/topology.h | 2 ++
arch/arm/kernel/topology.c | 38 +++++++++++++++++++++++++++++++++++++-
2 files changed, 39 insertions(+), 1 deletion(-)

diff --git a/arch/arm/include/asm/topology.h b/arch/arm/include/asm/topology.h
index 58b8b84..78e4c85 100644
--- a/arch/arm/include/asm/topology.h
+++ b/arch/arm/include/asm/topology.h
@@ -27,11 +27,13 @@ void init_cpu_topology(void);
void store_cpu_topology(unsigned int cpuid);
const struct cpumask *cpu_coregroup_mask(int cpu);

+void set_power_scale(unsigned int cpu, unsigned long power);
#else

static inline void init_cpu_topology(void) { }
static inline void store_cpu_topology(unsigned int cpuid) { }

+static inline void set_power_scale(unsigned int cpu, unsigned long power) { }
#endif

#include <asm-generic/topology.h>
diff --git a/arch/arm/kernel/topology.c b/arch/arm/kernel/topology.c
index 8200dea..37e2e57 100644
--- a/arch/arm/kernel/topology.c
+++ b/arch/arm/kernel/topology.c
@@ -22,6 +22,37 @@
#include <asm/cputype.h>
#include <asm/topology.h>

+/*
+ * cpu power scale management
+ */
+
+/*
+ * cpu power table
+ * This per cpu data structure describes the relative capacity of each core.
+ * On a heterogeneous system, cores don't have the same computation capacity
+ * and we reflect that difference in the cpu_power field so the scheduler can
+ * take this difference into account during load balance. A per cpu structure
+ * is preferred because each CPU updates its own cpu_power field during the
+ * load balance except for idle cores. One idle core is selected to run the
+ * rebalance_domains for all idle cores and the cpu_power can be updated
+ * during this sequence.
+ */
+static DEFINE_PER_CPU(unsigned long, cpu_scale);
+
+unsigned long arch_scale_freq_power(struct sched_domain *sd, int cpu)
+{
+ return per_cpu(cpu_scale, cpu);
+}
+
+void set_power_scale(unsigned int cpu, unsigned long power)
+{
+ per_cpu(cpu_scale, cpu) = power;
+}
+
+/*
+ * cpu topology management
+ */
+
#define MPIDR_SMP_BITMASK (0x3 << 30)
#define MPIDR_SMP_VALUE (0x2 << 30)

@@ -41,6 +72,9 @@
#define MPIDR_LEVEL2_MASK 0xFF
#define MPIDR_LEVEL2_SHIFT 16

+/*
+ * cpu topology table
+ */
struct cputopo_arm cpu_topology[NR_CPUS];

const struct cpumask *cpu_coregroup_mask(int cpu)
@@ -134,7 +168,7 @@ void init_cpu_topology(void)
{
unsigned int cpu;

- /* init core mask */
+ /* init core mask and power */
for_each_possible_cpu(cpu) {
struct cputopo_arm *cpu_topo = &(cpu_topology[cpu]);

@@ -143,6 +177,8 @@ void init_cpu_topology(void)
cpu_topo->socket_id = -1;
cpumask_clear(&cpu_topo->core_sibling);
cpumask_clear(&cpu_topo->thread_sibling);
+
+ per_cpu(cpu_scale, cpu) = SCHED_POWER_SCALE;
}
smp_wmb();
}
--
1.7.9.5

2012-06-19 08:30:14

by Vincent Guittot

Subject: [PATCH v2 2/5] ARM: topology: factorize the update of sibling masks

This factorization has also been proposed in another patchset that has not been
merged yet:
http://lists.infradead.org/pipermail/linux-arm-kernel/2012-January/080873.html
So this patch could be dropped depending on the state of the other one.

Signed-off-by: Lorenzo Pieralisi <[email protected]>
Signed-off-by: Vincent Guittot <[email protected]>
---
arch/arm/kernel/topology.c | 47 ++++++++++++++++++++++++--------------------
1 file changed, 26 insertions(+), 21 deletions(-)

diff --git a/arch/arm/kernel/topology.c b/arch/arm/kernel/topology.c
index 37e2e57..92c2fb3 100644
--- a/arch/arm/kernel/topology.c
+++ b/arch/arm/kernel/topology.c
@@ -82,6 +82,31 @@ const struct cpumask *cpu_coregroup_mask(int cpu)
return &cpu_topology[cpu].core_sibling;
}

+void update_siblings_masks(unsigned int cpuid)
+{
+ struct cputopo_arm *cpu_topo, *cpuid_topo = &cpu_topology[cpuid];
+ int cpu;
+ /* update core and thread sibling masks */
+ for_each_possible_cpu(cpu) {
+ cpu_topo = &cpu_topology[cpu];
+
+ if (cpuid_topo->socket_id == cpu_topo->socket_id) {
+ cpumask_set_cpu(cpuid, &cpu_topo->core_sibling);
+ if (cpu != cpuid)
+ cpumask_set_cpu(cpu, &cpuid_topo->core_sibling);
+
+ if (cpuid_topo->core_id == cpu_topo->core_id) {
+ cpumask_set_cpu(cpuid,
+ &cpu_topo->thread_sibling);
+ if (cpu != cpuid)
+ cpumask_set_cpu(cpu,
+ &cpuid_topo->thread_sibling);
+ }
+ }
+ }
+ smp_wmb();
+}
+
/*
* store_cpu_topology is called at boot when only one cpu is running
* and with the mutex cpu_hotplug.lock locked, when several cpus have booted,
@@ -91,7 +116,6 @@ void store_cpu_topology(unsigned int cpuid)
{
struct cputopo_arm *cpuid_topo = &cpu_topology[cpuid];
unsigned int mpidr;
- unsigned int cpu;

/* If the cpu topology has been already set, just return */
if (cpuid_topo->core_id != -1)
@@ -133,26 +157,7 @@ void store_cpu_topology(unsigned int cpuid)
cpuid_topo->socket_id = -1;
}

- /* update core and thread sibling masks */
- for_each_possible_cpu(cpu) {
- struct cputopo_arm *cpu_topo = &cpu_topology[cpu];
-
- if (cpuid_topo->socket_id == cpu_topo->socket_id) {
- cpumask_set_cpu(cpuid, &cpu_topo->core_sibling);
- if (cpu != cpuid)
- cpumask_set_cpu(cpu,
- &cpuid_topo->core_sibling);
-
- if (cpuid_topo->core_id == cpu_topo->core_id) {
- cpumask_set_cpu(cpuid,
- &cpu_topo->thread_sibling);
- if (cpu != cpuid)
- cpumask_set_cpu(cpu,
- &cpuid_topo->thread_sibling);
- }
- }
- }
- smp_wmb();
+ update_siblings_masks(cpuid);

printk(KERN_INFO "CPU%u: thread %d, cpu %d, socket %d, mpidr %x\n",
cpuid, cpu_topology[cpuid].thread_id,
--
1.7.9.5

2012-06-19 08:30:21

by Vincent Guittot

Subject: [PATCH v2 3/5] ARM: topology: Update cpu_power according to DT information

Use the cpu node's compatible and clock-frequency fields of the DT to
estimate the capacity of each core of the system and to update
the cpu_power field accordingly.
This patch makes it possible to put more running tasks on big cores than
on LITTLE ones, but it doesn't ensure that long-running
tasks will run on big cores and short ones on LITTLE cores.
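
To get a feel for the numbers (an editorial illustration, not part of the
patch), assume both clusters report a 1 GHz clock-frequency in the DT. With
the efficiency table below (Cortex-A15 = 3891, Cortex-A7 = 2048), the
resulting cpu_power values come out around 1342 for an A15 and 706 for an A7,
i.e. inside the required 0 < cpu_scale < 2 * SCHED_POWER_SCALE window:

/* Stand-alone arithmetic check; mirrors the formulas added by this patch. */
#include <stdio.h>

int main(void)
{
        unsigned long rate = 1000000000UL;        /* assumed 1 GHz from DT */
        unsigned long a15 = (rate >> 20) * 3891;  /* 953 * 3891 = 3708123 */
        unsigned long a7  = (rate >> 20) * 2048;  /* 953 * 2048 = 1951744 */
        unsigned long middle = (a7 + a15) >> 11;  /* (min + max) >> 11 = 2763 */

        /* prints "A15: 1342, A7: 706" with SCHED_POWER_SCALE = 1024 */
        printf("A15: %lu, A7: %lu\n", a15 / middle, a7 / middle);
        return 0;
}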

Signed-off-by: Vincent Guittot <[email protected]>
---
arch/arm/kernel/topology.c | 127 ++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 127 insertions(+)

diff --git a/arch/arm/kernel/topology.c b/arch/arm/kernel/topology.c
index 92c2fb3..1922ea8 100644
--- a/arch/arm/kernel/topology.c
+++ b/arch/arm/kernel/topology.c
@@ -17,7 +17,9 @@
#include <linux/percpu.h>
#include <linux/node.h>
#include <linux/nodemask.h>
+#include <linux/of.h>
#include <linux/sched.h>
+#include <linux/slab.h>

#include <asm/cputype.h>
#include <asm/topology.h>
@@ -49,6 +51,126 @@ void set_power_scale(unsigned int cpu, unsigned long power)
per_cpu(cpu_scale, cpu) = power;
}

+#ifdef CONFIG_OF
+struct cpu_efficiency {
+ const char *compatible;
+ unsigned long efficiency;
+};
+
+/*
+ * Table of the relative efficiency of each processor.
+ * The efficiency value must fit in 20 bits and the final
+ * cpu_scale value must be in the range
+ * 0 < cpu_scale < 2*SCHED_POWER_SCALE.
+ * Processors that are not defined in the table
+ * use the default SCHED_POWER_SCALE value for cpu_scale.
+ */
+struct cpu_efficiency table_efficiency[] = {
+ {"arm,cortex-a15", 3891},
+ {"arm,cortex-a7", 2048},
+ {NULL, },
+};
+
+struct cpu_capacity {
+ unsigned long hwid;
+ unsigned long capacity;
+};
+
+struct cpu_capacity *cpu_capacity;
+
+unsigned long middle_capacity = 1;
+
+static void __init parse_dt_topology(void)
+{
+ struct cpu_efficiency *cpu_eff;
+ struct device_node *cn = NULL;
+ unsigned long min_capacity = (unsigned long)(-1);
+ unsigned long max_capacity = 0;
+ unsigned long capacity = 0;
+ int alloc_size, cpu = 0;
+
+ alloc_size = nr_cpu_ids * sizeof(struct cpu_capacity);
+ cpu_capacity = (struct cpu_capacity *)kzalloc(alloc_size, GFP_NOWAIT);
+
+ while ((cn = of_find_node_by_type(cn, "cpu"))) {
+ const u32 *rate, *reg;
+ char *compatible;
+ int len;
+
+ if (cpu >= num_possible_cpus())
+ break;
+
+ compatible = of_get_property(cn, "compatible", &len);
+
+ for (cpu_eff = table_efficiency; cpu_eff->compatible; cpu_eff++)
+ if (of_device_is_compatible(cn, cpu_eff->compatible))
+ break;
+
+ if (cpu_eff->compatible == NULL)
+ continue;
+
+ rate = of_get_property(cn, "clock-frequency", &len);
+ if (!rate || len != 4) {
+ pr_err("%s missing clock-frequency property\n",
+ cn->full_name);
+ continue;
+ }
+
+ reg = of_get_property(cn, "reg", &len);
+ if (!reg || len != 4) {
+ pr_err("%s missing reg property\n", cn->full_name);
+ continue;
+ }
+
+ capacity = ((be32_to_cpup(rate)) >> 20)
+ * cpu_eff->efficiency;
+
+ /* Save min capacity of the system */
+ if (capacity < min_capacity)
+ min_capacity = capacity;
+
+ /* Save max capacity of the system */
+ if (capacity > max_capacity)
+ max_capacity = capacity;
+
+ cpu_capacity[cpu].capacity = capacity;
+ cpu_capacity[cpu++].hwid = be32_to_cpup(reg);
+ }
+
+ if (cpu < num_possible_cpus())
+ cpu_capacity[cpu].hwid = (unsigned long)(-1);
+
+ middle_capacity = (min_capacity + max_capacity) >> 11;
+}
+
+void update_cpu_power(unsigned int cpu, unsigned long hwid)
+{
+ unsigned int idx = 0;
+
+ /* look for the cpu's hwid in the cpu capacity table */
+ for (idx = 0; idx < num_possible_cpus(); idx++) {
+ if (cpu_capacity[idx].hwid == hwid)
+ break;
+
+ if (cpu_capacity[idx].hwid == -1)
+ return;
+ }
+
+ if (idx == num_possible_cpus())
+ return;
+
+ set_power_scale(cpu, cpu_capacity[idx].capacity / middle_capacity);
+
+ printk(KERN_INFO "CPU%u: update cpu_power %lu\n",
+ cpu, arch_scale_freq_power(NULL, cpu));
+}
+
+#else
+static inline void parse_dt_topology(void) {}
+static inline void update_cpu_power(unsigned int cpuid, unsigned int mpidr) {}
+#endif
+
+
/*
* cpu topology management
*/
@@ -62,6 +184,7 @@ void set_power_scale(unsigned int cpu, unsigned long power)
* These masks reflect the current use of the affinity levels.
* The affinity level can be up to 16 bits according to ARM ARM
*/
+#define MPIDR_HWID_BITMASK 0xFFFFFF

#define MPIDR_LEVEL0_MASK 0x3
#define MPIDR_LEVEL0_SHIFT 0
@@ -159,6 +282,8 @@ void store_cpu_topology(unsigned int cpuid)

update_siblings_masks(cpuid);

+ update_cpu_power(cpuid, mpidr & MPIDR_HWID_BITMASK);
+
printk(KERN_INFO "CPU%u: thread %d, cpu %d, socket %d, mpidr %x\n",
cpuid, cpu_topology[cpuid].thread_id,
cpu_topology[cpuid].core_id,
@@ -186,4 +311,6 @@ void init_cpu_topology(void)
per_cpu(cpu_scale, cpu) = SCHED_POWER_SCALE;
}
smp_wmb();
+
+ parse_dt_topology();
}
--
1.7.9.5

2012-06-19 08:30:28

by Vincent Guittot

Subject: [PATCH v2 5/5] sched: cpu_power: enable ARCH_POWER

Heterogeneous ARM platforms use the arch_scale_freq_power function
to reflect the relative capacity of each core.

Signed-off-by: Vincent Guittot <[email protected]>
---
kernel/sched/features.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/features.h b/kernel/sched/features.h
index de00a48..d98ae90 100644
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -42,7 +42,7 @@ SCHED_FEAT(CACHE_HOT_BUDDY, true)
/*
* Use arch dependent cpu power functions
*/
-SCHED_FEAT(ARCH_POWER, false)
+SCHED_FEAT(ARCH_POWER, true)

SCHED_FEAT(HRTICK, false)
SCHED_FEAT(DOUBLE_TICK, false)
--
1.7.9.5

2012-06-19 08:31:09

by Vincent Guittot

Subject: [PATCH v2 4/5] sched, x86: Remove broken power estimation

From: Peter Zijlstra <[email protected]>

The x86 sched power implementation has been broken forever and gets in
the way of other stuff; remove it.

For archaeological interest, fixing this code would require dealing with
the cross-cpu calling of these functions and more importantly, we need
to filter idle time out of the a/m-perf stuff because the ratio will go
down to 0 when idle, giving a 0 capacity which is not what we'd want.

Signed-off-by: Peter Zijlstra <[email protected]>
Link: http://lkml.kernel.org/n/[email protected]
---
arch/x86/kernel/cpu/Makefile | 2 +-
arch/x86/kernel/cpu/sched.c | 55 ------------------------------------------
2 files changed, 1 insertion(+), 56 deletions(-)
delete mode 100644 arch/x86/kernel/cpu/sched.c

diff --git a/arch/x86/kernel/cpu/Makefile b/arch/x86/kernel/cpu/Makefile
index 6ab6aa2..c598126 100644
--- a/arch/x86/kernel/cpu/Makefile
+++ b/arch/x86/kernel/cpu/Makefile
@@ -14,7 +14,7 @@ CFLAGS_common.o := $(nostackp)

obj-y := intel_cacheinfo.o scattered.o topology.o
obj-y += proc.o capflags.o powerflags.o common.o
-obj-y += vmware.o hypervisor.o sched.o mshyperv.o
+obj-y += vmware.o hypervisor.o mshyperv.o
obj-y += rdrand.o
obj-y += match.o

diff --git a/arch/x86/kernel/cpu/sched.c b/arch/x86/kernel/cpu/sched.c
deleted file mode 100644
index a640ae5..0000000
--- a/arch/x86/kernel/cpu/sched.c
+++ /dev/null
@@ -1,55 +0,0 @@
-#include <linux/sched.h>
-#include <linux/math64.h>
-#include <linux/percpu.h>
-#include <linux/irqflags.h>
-
-#include <asm/cpufeature.h>
-#include <asm/processor.h>
-
-#ifdef CONFIG_SMP
-
-static DEFINE_PER_CPU(struct aperfmperf, old_perf_sched);
-
-static unsigned long scale_aperfmperf(void)
-{
- struct aperfmperf val, *old = &__get_cpu_var(old_perf_sched);
- unsigned long ratio, flags;
-
- local_irq_save(flags);
- get_aperfmperf(&val);
- local_irq_restore(flags);
-
- ratio = calc_aperfmperf_ratio(old, &val);
- *old = val;
-
- return ratio;
-}
-
-unsigned long arch_scale_freq_power(struct sched_domain *sd, int cpu)
-{
- /*
- * do aperf/mperf on the cpu level because it includes things
- * like turbo mode, which are relevant to full cores.
- */
- if (boot_cpu_has(X86_FEATURE_APERFMPERF))
- return scale_aperfmperf();
-
- /*
- * maybe have something cpufreq here
- */
-
- return default_scale_freq_power(sd, cpu);
-}
-
-unsigned long arch_scale_smt_power(struct sched_domain *sd, int cpu)
-{
- /*
- * aperf/mperf already includes the smt gain
- */
- if (boot_cpu_has(X86_FEATURE_APERFMPERF))
- return SCHED_LOAD_SCALE;
-
- return default_scale_smt_power(sd, cpu);
-}
-
-#endif
--
1.7.9.5

2012-06-19 09:01:56

by Yong Zhang

Subject: Re: [PATCH v2 5/5] sched: cpu_power: enable ARCH_POWER

On Tue, Jun 19, 2012 at 10:28:56AM +0200, Vincent Guittot wrote:
> Heterogeneous ARM platforms use the arch_scale_freq_power function
> to reflect the relative capacity of each core.
>
> Signed-off-by: Vincent Guittot <[email protected]>
> ---
> kernel/sched/features.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/sched/features.h b/kernel/sched/features.h
> index de00a48..d98ae90 100644
> --- a/kernel/sched/features.h
> +++ b/kernel/sched/features.h
> @@ -42,7 +42,7 @@ SCHED_FEAT(CACHE_HOT_BUDDY, true)
> /*
> * Use arch dependent cpu power functions
> */
> -SCHED_FEAT(ARCH_POWER, false)
> +SCHED_FEAT(ARCH_POWER, true)

Hmmm...seems we can remove this knob completely.

__weak arch_scale_smt_power() is doing all the trick.

Thanks,
Yong

2012-06-20 05:08:49

by Namhyung Kim

Subject: Re: [PATCH v2 2/5] ARM: topology: factorize the update of sibling masks




On Tue, 19 Jun 2012 10:28:53 +0200, Vincent Guittot wrote:
> This factorization has also been proposed in another patchset that has not been
> merged yet:
> http://lists.infradead.org/pipermail/linux-arm-kernel/2012-January/080873.html
> So this patch could be dropped depending on the state of the other one.
>
> Signed-off-by: Lorenzo Pieralisi <[email protected]>
> Signed-off-by: Vincent Guittot <[email protected]>
> ---
> arch/arm/kernel/topology.c | 47 ++++++++++++++++++++++++--------------------
> 1 file changed, 26 insertions(+), 21 deletions(-)
>
> diff --git a/arch/arm/kernel/topology.c b/arch/arm/kernel/topology.c
> index 37e2e57..92c2fb3 100644
> --- a/arch/arm/kernel/topology.c
> +++ b/arch/arm/kernel/topology.c
> @@ -82,6 +82,31 @@ const struct cpumask *cpu_coregroup_mask(int cpu)
> return &cpu_topology[cpu].core_sibling;
> }
>
> +void update_siblings_masks(unsigned int cpuid)
> +{
> + struct cputopo_arm *cpu_topo, *cpuid_topo = &cpu_topology[cpuid];
> + int cpu;
> + /* update core and thread sibling masks */
> + for_each_possible_cpu(cpu) {
> + cpu_topo = &cpu_topology[cpu];
> +
> + if (cpuid_topo->socket_id == cpu_topo->socket_id) {

I think this will make the code a little bit cleaner:

if (cpuid_topo->socket_id != cpu_topo->socket_id)
continue;

Thanks,
Namhyung


> + cpumask_set_cpu(cpuid, &cpu_topo->core_sibling);
> + if (cpu != cpuid)
> + cpumask_set_cpu(cpu, &cpuid_topo->core_sibling);
> +
> + if (cpuid_topo->core_id == cpu_topo->core_id) {
> + cpumask_set_cpu(cpuid,
> + &cpu_topo->thread_sibling);
> + if (cpu != cpuid)
> + cpumask_set_cpu(cpu,
> + &cpuid_topo->thread_sibling);
> + }
> + }
> + }
> + smp_wmb();
> +}
> +

2012-06-20 08:43:53

by Vincent Guittot

Subject: Re: [PATCH v2 3/5] ARM: topology: Update cpu_power according to DT information



On 20 June 2012 03:15, Namhyung Kim <[email protected]> wrote:
>
>
>
> On Tue, 19 Jun 2012 10:28:54 +0200, Vincent Guittot wrote:
>> Use the cpu node's compatible and clock-frequency fields of the DT to
>> estimate the capacity of each core of the system and to update
>> the cpu_power field accordingly.
>> This patch makes it possible to put more running tasks on big cores than
>> on LITTLE ones, but it doesn't ensure that long-running
>> tasks will run on big cores and short ones on LITTLE cores.
>>
>> Signed-off-by: Vincent Guittot <[email protected]>
>> ---
> [snip]
>> +static void __init parse_dt_topology(void)
>> +{
>> +        struct cpu_efficiency *cpu_eff;
>> +        struct device_node *cn = NULL;
>> +        unsigned long min_capacity = (unsigned long)(-1);
>> +        unsigned long max_capacity = 0;
>> +        unsigned long capacity = 0;
>> +        int alloc_size, cpu = 0;
>> +
>> +        alloc_size = nr_cpu_ids * sizeof(struct cpu_capacity);
>> +        cpu_capacity = (struct cpu_capacity *)kzalloc(alloc_size, GFP_NOWAIT);
>> +
>> +        while ((cn = of_find_node_by_type(cn, "cpu"))) {
>> +                const u32 *rate, *reg;
>> +                char *compatible;
>> +                int len;
>> +
>> +                if (cpu >= num_possible_cpus())
>> +                        break;
>> +
>> +                compatible = of_get_property(cn, "compatible", &len);
>
> Why is this line needed?

Not needed in the final version; it should have been removed.

>
>
>> +
>> +                for (cpu_eff = table_efficiency; cpu_eff->compatible; cpu_eff++)
>> +                        if (of_device_is_compatible(cn, cpu_eff->compatible))
>> +                                break;
>> +
>> +                if (cpu_eff->compatible == NULL)
>> +                        continue;
>> +
>> +                rate = of_get_property(cn, "clock-frequency", &len);
>> +                if (!rate || len != 4) {
>> +                        pr_err("%s missing clock-frequency property\n",
>> +                                cn->full_name);
>> +                        continue;
>> +                }
>> +
>> +                reg = of_get_property(cn, "reg", &len);
>> +                if (!reg || len != 4) {
>> +                        pr_err("%s missing reg property\n", cn->full_name);
>> +                        continue;
>> +                }
>> +
>> +                capacity = ((be32_to_cpup(rate)) >> 20)
>> +                        * cpu_eff->efficiency;
>
> Why did you break this line?

It was more than 80 chars long previously, but that's no longer the case.
I'm going to correct it.

Thanks
>
> Thanks,
> Namhyung
>
>
>
>

2012-06-20 09:04:59

by Vincent Guittot

Subject: Re: [PATCH v2 1/5] ARM: topology: Add arch_scale_freq_power function



On 20 June 2012 02:51, Namhyung Kim <[email protected]> wrote:
>
>
>
> Hi, Vincent
>
> Just a couple of nitpicks..
>
>
> On Tue, 19 Jun 2012 10:28:52 +0200, Vincent Guittot wrote:
>> Add the infrastructure needed to modify the cpu_power of each core.
>>
>> Signed-off-by: Vincent Guittot <[email protected]>
>> ---
>>  arch/arm/include/asm/topology.h |    2 ++
>>  arch/arm/kernel/topology.c      |   38 +++++++++++++++++++++++++++++++++++++-
>>  2 files changed, 39 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/arm/include/asm/topology.h b/arch/arm/include/asm/topology.h
>> index 58b8b84..78e4c85 100644
>> --- a/arch/arm/include/asm/topology.h
>> +++ b/arch/arm/include/asm/topology.h
>> @@ -27,11 +27,13 @@ void init_cpu_topology(void);
>>  void store_cpu_topology(unsigned int cpuid);
>>  const struct cpumask *cpu_coregroup_mask(int cpu);
>>
>> +void set_power_scale(unsigned int cpu, unsigned long power);
>>  #else
>>
>>  static inline void init_cpu_topology(void) { }
>>  static inline void store_cpu_topology(unsigned int cpuid) { }
>>
>> +static inline void set_power_scale(unsigned int cpu, unsigned long power) { }
>>  #endif
>>
>>  #include <asm-generic/topology.h>
>> diff --git a/arch/arm/kernel/topology.c b/arch/arm/kernel/topology.c
>> index 8200dea..37e2e57 100644
>> --- a/arch/arm/kernel/topology.c
>> +++ b/arch/arm/kernel/topology.c
>> @@ -22,6 +22,37 @@
>>  #include <asm/cputype.h>
>>  #include <asm/topology.h>
>>
>> +/*
>> + * cpu power scale management
>> + */
>> +
>> +/*
>> + * cpu power table
>> + * This per cpu data structure describes the relative capacity of each core.
>> + * On a heterogeneous system, cores don't have the same computation capacity
>> + * and we reflect that difference in the cpu_power field so the scheduler can
>> + * take this difference into account during load balance. A per cpu structure
>> + * is preferred because each CPU updates its own cpu_power field during the
>> + * load balance except for idle cores. One idle core is selected to run the
>> + * rebalance_domains for all idle cores and the cpu_power can be updated
>> + * during this sequence.
>> + */
>> +static DEFINE_PER_CPU(unsigned long, cpu_scale);
>> +
>> +unsigned long arch_scale_freq_power(struct sched_domain *sd, int cpu)
>> +{
>> +        return per_cpu(cpu_scale, cpu);
>> +}
>> +
>> +void set_power_scale(unsigned int cpu, unsigned long power)
>> +{
>> +        per_cpu(cpu_scale, cpu) = power;
>> +}
>
> Isn't it a static function?

It's not static because the function could also be used from outside this
file. This patchset updates the capacity according to DT information, but the
capacity could also be updated depending on other inputs, such as a
modification of the maximum frequency of a core.

I can make it static for now and remove the static attribute when needed.
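
A hedged sketch of what such an external update might look like (purely
hypothetical, not part of this series): rescale one core's cpu_power in
proportion to a change of its maximum frequency. The helper name and the
kHz parameters are illustrative, and the arch_scale_freq_power() prototype
is assumed to be visible to the caller.

#include <linux/sched.h>
#include <asm/topology.h>

/* Hypothetical helper: rescale cpu_power when a core's max freq changes. */
static void example_rescale_cpu_power(unsigned int cpu,
                                      unsigned long old_khz,
                                      unsigned long new_khz)
{
        unsigned long power = arch_scale_freq_power(NULL, cpu);

        if (!old_khz || !new_khz)
                return;

        set_power_scale(cpu, power * new_khz / old_khz);
}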

>
>
>> +
>> +/*
>> + * cpu topology management
>> + */
>> +
>>  #define MPIDR_SMP_BITMASK (0x3 << 30)
>>  #define MPIDR_SMP_VALUE (0x2 << 30)
>>
>> @@ -41,6 +72,9 @@
>>  #define MPIDR_LEVEL2_MASK 0xFF
>>  #define MPIDR_LEVEL2_SHIFT 16
>>
>> +/*
>> + * cpu topology table
>> + */
>>  struct cputopo_arm cpu_topology[NR_CPUS];
>>
>>  const struct cpumask *cpu_coregroup_mask(int cpu)
>> @@ -134,7 +168,7 @@ void init_cpu_topology(void)
>>  {
>>          unsigned int cpu;
>>
>> -        /* init core mask */
>> +        /* init core mask and power */
>>          for_each_possible_cpu(cpu) {
>>                  struct cputopo_arm *cpu_topo = &(cpu_topology[cpu]);
>>
>> @@ -143,6 +177,8 @@ void init_cpu_topology(void)
>>                  cpu_topo->socket_id = -1;
>>                  cpumask_clear(&cpu_topo->core_sibling);
>>                  cpumask_clear(&cpu_topo->thread_sibling);
>> +
>> +                per_cpu(cpu_scale, cpu) = SCHED_POWER_SCALE;
>
> Wouldn't it be better using:
>
>                 set_power_scale(cpu, SCHED_POWER_SCALE);
>

Yes

Thanks
>
> Thanks,
> Namhyung
>
>
>>          }
>>          smp_wmb();
>>  }
>
>
>

2012-06-20 09:11:07

by Vincent Guittot

Subject: Re: [PATCH v2 5/5] sched: cpu_power: enable ARCH_POWER

On 19 June 2012 11:01, Yong Zhang <[email protected]> wrote:
> On Tue, Jun 19, 2012 at 10:28:56AM +0200, Vincent Guittot wrote:
>> Heterogeneous ARM platforms use the arch_scale_freq_power function
>> to reflect the relative capacity of each core.
>>
>> Signed-off-by: Vincent Guittot <[email protected]>
>> ---
>>  kernel/sched/features.h |    2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/kernel/sched/features.h b/kernel/sched/features.h
>> index de00a48..d98ae90 100644
>> --- a/kernel/sched/features.h
>> +++ b/kernel/sched/features.h
>> @@ -42,7 +42,7 @@ SCHED_FEAT(CACHE_HOT_BUDDY, true)
>>  /*
>>   * Use arch dependent cpu power functions
>>   */
>> -SCHED_FEAT(ARCH_POWER, false)
>> +SCHED_FEAT(ARCH_POWER, true)
>
> Hmmm...seems we can remove this knob completely.
>

Peter,

Do you prefer to keep it or remove it completely?

Thanks

> __weak arch_scale_smt_power() is doing all the trick.
>
> Thanks,
> Yong

2012-06-20 09:19:59

by Peter Zijlstra

Subject: Re: [PATCH v2 5/5] sched: cpu_power: enable ARCH_POWER

On Wed, 2012-06-20 at 11:11 +0200, Vincent Guittot wrote:
> On 19 June 2012 11:01, Yong Zhang <[email protected]> wrote:
> > On Tue, Jun 19, 2012 at 10:28:56AM +0200, Vincent Guittot wrote:
> >> Heterogeneous ARM platforms use the arch_scale_freq_power function
> >> to reflect the relative capacity of each core.
> >>
> >> Signed-off-by: Vincent Guittot <[email protected]>
> >> ---
> >> kernel/sched/features.h | 2 +-
> >> 1 file changed, 1 insertion(+), 1 deletion(-)
> >>
> >> diff --git a/kernel/sched/features.h b/kernel/sched/features.h
> >> index de00a48..d98ae90 100644
> >> --- a/kernel/sched/features.h
> >> +++ b/kernel/sched/features.h
> >> @@ -42,7 +42,7 @@ SCHED_FEAT(CACHE_HOT_BUDDY, true)
> >> /*
> >> * Use arch dependent cpu power functions
> >> */
> >> -SCHED_FEAT(ARCH_POWER, false)
> >> +SCHED_FEAT(ARCH_POWER, true)
> >
> > Hmmm...seems we can remove this knob completely.
> >
>
> Peter,
>
> Do you prefer to keep it or remove it completely ?

I prefer to keep it. The 'we can remove it' argument is true for all feature
bits; they should never be twiddled in normal circumstances, it's a debug
feature. As such it makes sense to allow disabling the arch cpu-power
fiddling.
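
For context, the ARCH_POWER feature bit gates the arch hook in the scheduler's
cpu_power update path. A toy model of that selection (an editorial sketch with
simplified signatures and illustrative values, not an excerpt of fair.c):

#include <stdbool.h>
#include <stdio.h>

#define SCHED_POWER_SHIFT 10
#define SCHED_POWER_SCALE (1UL << SCHED_POWER_SHIFT)

static bool sched_feat_arch_power = true;   /* SCHED_FEAT(ARCH_POWER, true) */

/* what the ARM patches above would return for, say, a Cortex-A7 core */
static unsigned long arch_scale_freq_power(int cpu) { return 706; }

/* the generic fallback simply reports full capacity */
static unsigned long default_scale_freq_power(int cpu) { return SCHED_POWER_SCALE; }

int main(void)
{
        unsigned long power = SCHED_POWER_SCALE;

        if (sched_feat_arch_power)
                power *= arch_scale_freq_power(0);
        else
                power *= default_scale_freq_power(0);

        power >>= SCHED_POWER_SHIFT;
        printf("cpu_power = %lu\n", power);  /* 706 here, 1024 without the hook */
        return 0;
}

So with ARCH_POWER set to true, the per-cpu cpu_scale values published by the
ARM patches above actually reach the load balancer; with it left false, every
core is treated as SCHED_POWER_SCALE.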