From: "Gautham R. Shenoy" <[email protected]>
Hi,
This is the second iteration of the patchset to add support for big-core on POWER9.
The earlier version can be found here: https://lkml.org/lkml/2018/5/11/245.
The changes from the previous version:
- Added comments explaining the "ibm,thread-groups" device tree property.
- Uses cleaner device-tree parsing functions to parse the u32 arrays.
- Adds a sysfs file listing the small-core siblings for every CPU.
- Enables the scheduler optimization by setting the CPU_FTR_ASYM_SMT bit
in the cur_cpu_spec->cpu_features on detecting the presence
of interleaved big-core.
- Handles the corner case where there is only a single thread-group
or when there is a single thread in a thread-group.
Description:
~~~~~~~~~~~~~~~~~~~~
A pair of IBM POWER9 SMT4 cores can be fused together to form a
big-core with 8 SMT threads. This can be discovered via the
"ibm,thread-groups" CPU property in the device tree, which indicates
which groups of threads share the L1 cache, translation cache and
instruction data flow. If there are multiple such groups of threads,
then the core is a big-core. Furthermore, the thread-ids of such a
big-core are obtained by interleaving the thread-ids of the
component SMT4 cores.
Eg: Threads in the pair of component SMT4 cores of an interleaved
big-core are numbered {0,2,4,6} and {1,3,5,7} respectively.
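The interleaved numbering can be modelled in a few lines. This is an
illustrative sketch (plain Python, not kernel code) of the grouping
described above:

```python
# Illustrative model of interleaved big-core numbering (not kernel code).
# With nr_small_cores component cores, thread t of the big-core belongs
# to small core (t % nr_small_cores).
def small_core_thread_ids(nr_small_cores, threads_per_small_core):
    total = nr_small_cores * threads_per_small_core
    return [[t for t in range(total) if t % nr_small_cores == core]
            for core in range(nr_small_cores)]

print(small_core_thread_ids(2, 4))  # [[0, 2, 4, 6], [1, 3, 5, 7]]
```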
When multiple tasks are scheduled to run on such a big-core, we get
the best performance when the tasks are spread across the pair of
SMT4 cores.
The Linux scheduler supports a flag called "SD_ASYM_PACKING" which,
when set in the SMT sched-domain, biases the load-balancing of tasks
towards the smaller-numbered threads in the core. On a big-core whose
threads are interleavings of the threads of the small cores, enabling
SD_ASYM_PACKING in the SMT sched-domain automatically results in
spreading the tasks uniformly across the associated pair of SMT4
cores, thereby yielding better performance.
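The effect of this bias can be seen in a toy model (illustrative
Python, not the actual scheduler code): if each task is packed onto
the lowest-numbered idle thread, the interleaved numbering alternates
between the two small cores:

```python
# Toy model of SD_ASYM_PACKING on an interleaved big-core (not
# scheduler code): each task goes to the lowest-numbered idle thread.
def small_cores_used(n_tasks, nr_threads=8, nr_small_cores=2):
    chosen = list(range(nr_threads))[:n_tasks]   # lowest-numbered threads first
    return [t % nr_small_cores for t in chosen]  # small core of each chosen thread

print(small_cores_used(2))  # [0, 1]: the two tasks land on different SMT4 cores
```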
This patchset contains two patches which, on detecting the presence
of interleaved big-cores, will enable the CPU_FTR_ASYM_SMT bit in
cur_cpu_spec->cpu_features.
Patch 1: adds support to detect the presence of
big-cores and reports the small-core siblings of each CPU X
via the sysfs file "/sys/devices/system/cpu/cpuX/big_core_siblings".
Patch 2: checks if the thread-ids of the component small-cores are
interleaved, in which case we enable the CPU_FTR_ASYM_SMT bit in
cur_cpu_spec->cpu_features, which results in the SD_ASYM_PACKING
flag being set at the SMT level sched-domain.
Results:
~~~~~~~~~~~~~~~~~
Experimental results for ebizzy with 2 threads, bound to a single
big-core, show a marked improvement with this patchset over the
vanilla 4.18-rc3 kernel. The results of 100 such runs for the
4.18-rc3 kernel and for 4.18-rc3 + big-core-patches are as follows:
4.18-rc3:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
records/s : # samples : Histogram
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[0 - 1000000] : 0 : #
[1000000 - 2000000] : 3 : #
[2000000 - 3000000] : 16 : ####
[3000000 - 4000000] : 11 : ###
[4000000 - 5000000] : 0 : #
[5000000 - 6000000] : 70 : ###############
4.18-rc3 + big-core-patches:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
records/s : # samples : Histogram
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[0 - 1000000] : 0 : #
[1000000 - 2000000] : 0 : #
[2000000 - 3000000] : 6 : ##
[3000000 - 4000000] : 0 : #
[4000000 - 5000000] : 2 : #
[5000000 - 6000000] : 92 : ###################
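The bucketing behind the histograms above is straightforward; a
minimal sketch is given below (the actual post-processing script was
not included in this series, so the bin width and count are
assumptions matching the tables):

```python
# Bucket ebizzy records/s samples into fixed-width histogram bins.
# A sketch: 1M-wide bins, 6 buckets, matching the tables above.
def bucketize(samples, width=1_000_000, nbuckets=6):
    counts = [0] * nbuckets
    for s in samples:
        counts[min(s // width, nbuckets - 1)] += 1
    return counts

print(bucketize([1_500_000, 2_200_000, 5_400_000]))  # [0, 1, 1, 0, 0, 1]
```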
Gautham R. Shenoy (2):
powerpc: Detect the presence of big-cores via "ibm,thread-groups"
powerpc: Enable CPU_FTR_ASYM_SMT for interleaved big-cores
Documentation/ABI/testing/sysfs-devices-system-cpu | 8 +
arch/powerpc/include/asm/cputhreads.h | 22 +++
arch/powerpc/kernel/setup-common.c | 177 ++++++++++++++++++++-
arch/powerpc/kernel/sysfs.c | 35 ++++
4 files changed, 241 insertions(+), 1 deletion(-)
--
1.9.4
From: "Gautham R. Shenoy" <[email protected]>
On IBM POWER9, the device tree exposes a property array identified by
"ibm,thread-groups" which indicates which groups of threads share a
particular set of resources.
As of today, we have only one form of grouping: identifying the
groups of threads in the core that share the L1 cache, translation
cache and instruction data flow.
This patch defines the helper function to parse the contents of
"ibm,thread-groups" and a new structure to contain the parsed output.
The patch also creates the sysfs file named "small_core_siblings" that
returns the physical ids of the threads in the core that share the L1
cache, translation cache and instruction data flow.
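For reference, the layout of the flat "ibm,thread-groups" array can
be modelled as follows. This is an illustrative Python sketch of the
parsing logic, not the kernel C implementation in this patch:

```python
# Sketch of parsing the flat "ibm,thread-groups" u32 array:
# [property, nr_groups, threads_per_group, thread-ids...]
def parse_thread_groups(prop):
    property_id, nr_groups, threads_per_group = prop[0], prop[1], prop[2]
    total = nr_groups * threads_per_group
    if len(prop) < 3 + total:
        raise ValueError("property data isn't large enough")  # cf. -EOVERFLOW
    threads = prop[3:3 + total]
    groups = [threads[i * threads_per_group:(i + 1) * threads_per_group]
              for i in range(nr_groups)]
    return property_id, groups

# Example from the patch: 2 groups of 4 threads sharing L1 (property 1)
print(parse_thread_groups([1, 2, 4, 5, 6, 7, 8, 9, 10, 11, 12]))
# (1, [[5, 6, 7, 8], [9, 10, 11, 12]])
```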
Signed-off-by: Gautham R. Shenoy <[email protected]>
---
Documentation/ABI/testing/sysfs-devices-system-cpu | 8 ++
arch/powerpc/include/asm/cputhreads.h | 22 +++++
arch/powerpc/kernel/setup-common.c | 110 +++++++++++++++++++++
arch/powerpc/kernel/sysfs.c | 35 +++++++
4 files changed, 175 insertions(+)
diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
index 9c5e7732..53a823a 100644
--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
+++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
@@ -487,3 +487,11 @@ Description: Information about CPU vulnerabilities
"Not affected" CPU is not affected by the vulnerability
"Vulnerable" CPU is affected and no mitigation in effect
"Mitigation: $M" CPU is affected and mitigation $M is in effect
+
+What: /sys/devices/system/cpu/cpu[0-9]+/small_core_sibings
+Date: 03-Jul-2018
+KernelVersion: v4.18.0
+Contact: Gautham R. Shenoy <[email protected]>
+Description: List of Physical ids of CPUs which share the L1 cache,
+ translation cache and instruction data-flow with this CPU.
+Values: Comma separated list of decimal integers.
diff --git a/arch/powerpc/include/asm/cputhreads.h b/arch/powerpc/include/asm/cputhreads.h
index d71a909..33226d7 100644
--- a/arch/powerpc/include/asm/cputhreads.h
+++ b/arch/powerpc/include/asm/cputhreads.h
@@ -23,11 +23,13 @@
extern int threads_per_core;
extern int threads_per_subcore;
extern int threads_shift;
+extern bool has_big_cores;
extern cpumask_t threads_core_mask;
#else
#define threads_per_core 1
#define threads_per_subcore 1
#define threads_shift 0
+#define has_big_cores 0
#define threads_core_mask (*get_cpu_mask(0))
#endif
@@ -69,12 +71,32 @@ static inline cpumask_t cpu_online_cores_map(void)
return cpu_thread_mask_to_cores(cpu_online_mask);
}
+#define MAX_THREAD_LIST_SIZE 8
+struct thread_groups {
+ unsigned int property;
+ unsigned int nr_groups;
+ unsigned int threads_per_group;
+ unsigned int thread_list[MAX_THREAD_LIST_SIZE];
+};
+
#ifdef CONFIG_SMP
int cpu_core_index_of_thread(int cpu);
int cpu_first_thread_of_core(int core);
+int parse_thread_groups(struct device_node *dn, struct thread_groups *tg);
+int get_cpu_thread_group_start(int cpu, struct thread_groups *tg);
#else
static inline int cpu_core_index_of_thread(int cpu) { return cpu; }
static inline int cpu_first_thread_of_core(int core) { return core; }
+static inline int parse_thread_groups(struct device_node *dn,
+ struct thread_groups *tg)
+{
+ return -ENODATA;
+}
+
+static inline int get_cpu_thread_group_start(int cpu, struct thread_groups *tg)
+{
+ return -1;
+}
#endif
static inline int cpu_thread_in_core(int cpu)
diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
index 40b44bb..a78ec66 100644
--- a/arch/powerpc/kernel/setup-common.c
+++ b/arch/powerpc/kernel/setup-common.c
@@ -402,10 +402,12 @@ void __init check_for_initrd(void)
#ifdef CONFIG_SMP
int threads_per_core, threads_per_subcore, threads_shift;
+bool has_big_cores = true;
cpumask_t threads_core_mask;
EXPORT_SYMBOL_GPL(threads_per_core);
EXPORT_SYMBOL_GPL(threads_per_subcore);
EXPORT_SYMBOL_GPL(threads_shift);
+EXPORT_SYMBOL_GPL(has_big_cores);
EXPORT_SYMBOL_GPL(threads_core_mask);
static void __init cpu_init_thread_core_maps(int tpc)
@@ -433,6 +435,108 @@ static void __init cpu_init_thread_core_maps(int tpc)
u32 *cpu_to_phys_id = NULL;
+/*
+ * parse_thread_groups: Parses the "ibm,thread-groups" device tree
+ * property for the CPU device node dn and stores
+ * the parsed output in the thread_groups
+ * structure tg.
+ *
+ * ibm,thread-groups[0..N-1] array defines which group of threads in
+ * the CPU-device node can be grouped together based on the property.
+ *
+ * ibm,thread-groups[0] tells us the property based on which the
+ * threads are being grouped together. If this value is 1, it implies
+ * that the threads in the same group share L1, translation cache.
+ *
+ * ibm,thread-groups[1] tells us how many such thread groups exist.
+ *
+ * ibm,thread-groups[2] tells us the number of threads in each such
+ * group.
+ *
+ * ibm,thread-groups[3..N-1] is the list of threads identified by
+ * "ibm,ppc-interrupt-server#s" arranged as per their membership in
+ * the grouping.
+ *
+ * Example: If ibm,thread-groups = [1,2,4,5,6,7,8,9,10,11,12] it
+ * implies that there are 2 groups of 4 threads each, where each group
+ * of threads share L1, translation cache.
+ *
+ * The "ibm,ppc-interrupt-server#s" of the first group is {5,6,7,8}
+ * and the "ibm,ppc-interrupt-server#s" of the second group is
+ * {9, 10, 11, 12}.
+ *
+ * Returns 0 on success, -EINVAL if the property does not exist,
+ * -ENODATA if property does not have a value, and -EOVERFLOW if the
+ * property data isn't large enough.
+ */
+int parse_thread_groups(struct device_node *dn,
+ struct thread_groups *tg)
+{
+ unsigned int nr_groups, threads_per_group, property;
+ int i;
+ u32 thread_group_array[3 + MAX_THREAD_LIST_SIZE];
+ u32 *thread_list;
+ size_t total_threads;
+ int ret;
+
+ ret = of_property_read_u32_array(dn, "ibm,thread-groups",
+ thread_group_array, 3);
+
+ if (ret)
+ return ret;
+
+ property = thread_group_array[0];
+ nr_groups = thread_group_array[1];
+ threads_per_group = thread_group_array[2];
+ total_threads = nr_groups * threads_per_group;
+
+ ret = of_property_read_u32_array(dn, "ibm,thread-groups",
+ thread_group_array,
+ 3 + total_threads);
+ if (ret)
+ return ret;
+
+ thread_list = &thread_group_array[3];
+
+ for (i = 0 ; i < total_threads; i++)
+ tg->thread_list[i] = thread_list[i];
+
+ tg->property = property;
+ tg->nr_groups = nr_groups;
+ tg->threads_per_group = threads_per_group;
+
+ return 0;
+}
+
+/*
+ * get_cpu_thread_group_start : Searches the thread group in tg->thread_list
+ * that @cpu belongs to.
+ *
+ * Returns the index to tg->thread_list that points to the start
+ * of the thread_group that @cpu belongs to.
+ *
+ * Returns -1 if cpu doesn't belong to any of the groups pointed
+ * to by tg->thread_list.
+ */
+int get_cpu_thread_group_start(int cpu, struct thread_groups *tg)
+{
+ int hw_cpu_id = get_hard_smp_processor_id(cpu);
+ int i, j;
+
+ for (i = 0; i < tg->nr_groups; i++) {
+ int group_start = i * tg->threads_per_group;
+
+ for (j = 0; j < tg->threads_per_group; j++) {
+ int idx = group_start + j;
+
+ if (tg->thread_list[idx] == hw_cpu_id)
+ return group_start;
+ }
+ }
+
+ return -1;
+}
+
/**
* setup_cpu_maps - initialize the following cpu maps:
* cpu_possible_mask
@@ -467,6 +571,7 @@ void __init smp_setup_cpu_maps(void)
const __be32 *intserv;
__be32 cpu_be;
int j, len;
+ struct thread_groups tg = {.nr_groups = 0};
DBG(" * %pOF...\n", dn);
@@ -505,6 +610,11 @@ void __init smp_setup_cpu_maps(void)
cpu++;
}
+ if (parse_thread_groups(dn, &tg) ||
+ tg.nr_groups < 1 || tg.property != 1) {
+ has_big_cores = false;
+ }
+
if (cpu >= nr_cpu_ids) {
of_node_put(dn);
break;
diff --git a/arch/powerpc/kernel/sysfs.c b/arch/powerpc/kernel/sysfs.c
index 755dc98..f5717de 100644
--- a/arch/powerpc/kernel/sysfs.c
+++ b/arch/powerpc/kernel/sysfs.c
@@ -18,6 +18,7 @@
#include <asm/smp.h>
#include <asm/pmc.h>
#include <asm/firmware.h>
+#include <asm/cputhreads.h>
#include "cacheinfo.h"
#include "setup.h"
@@ -1025,6 +1026,33 @@ static ssize_t show_physical_id(struct device *dev,
}
static DEVICE_ATTR(physical_id, 0444, show_physical_id, NULL);
+static ssize_t show_small_core_siblings(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ struct cpu *cpu = container_of(dev, struct cpu, dev);
+ struct device_node *dn = of_get_cpu_node(cpu->dev.id, NULL);
+ struct thread_groups tg;
+ int i, j;
+ ssize_t ret = 0;
+
+ if (parse_thread_groups(dn, &tg))
+ return -ENODATA;
+
+ i = get_cpu_thread_group_start(cpu->dev.id, &tg);
+
+ if (i == -1)
+ return -ENODATA;
+
+ for (j = 0; j < tg.threads_per_group - 1; j++)
+ ret += sprintf(buf + ret, "%d,", tg.thread_list[i + j]);
+
+ ret += sprintf(buf + ret, "%d\n", tg.thread_list[i + j]);
+
+ return ret;
+}
+static DEVICE_ATTR(small_core_siblings, 0444, show_small_core_siblings, NULL);
+
static int __init topology_init(void)
{
int cpu, r;
@@ -1048,6 +1076,13 @@ static int __init topology_init(void)
register_cpu(c, cpu);
device_create_file(&c->dev, &dev_attr_physical_id);
+
+ if (has_big_cores) {
+ const struct device_attribute *attr =
+ &dev_attr_small_core_siblings;
+
+ device_create_file(&c->dev, attr);
+ }
}
}
r = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "powerpc/topology:online",
--
1.9.4
From: "Gautham R. Shenoy" <[email protected]>
A pair of IBM POWER9 SMT4 cores can be fused together to form a big-core
with 8 SMT threads. This can be discovered via the "ibm,thread-groups"
CPU property in the device tree, which indicates which groups of
threads share the L1 cache, translation cache and instruction data
flow. If there are multiple such groups of threads, then the core is a
big-core.
Furthermore, if the thread-ids of the threads of the big-core can be
obtained by interleaving the thread-ids of the thread-groups
(component small cores), then such a big-core is called an interleaved
big-core.
Eg: Threads in the pair of component SMT4 cores of an interleaved
big-core are numbered {0,2,4,6} and {1,3,5,7} respectively.
The SMT4 cores forming a big-core are more or less independent
units. Thus, when multiple tasks are scheduled to run on the fused
core, we get the best performance when the tasks are spread across
the pair of SMT4 cores.
This patch enables the CPU_FTR_ASYM_SMT bit in the cpu-features on
detecting the presence of interleaved big-cores at boot up. This will
bias the load-balancing of tasks towards smaller-numbered threads,
which automatically results in spreading the tasks uniformly across
the associated pair of SMT4 cores.
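The interleaving test itself reduces to checking that, within each
thread-group, consecutive thread-ids differ by the number of groups.
An illustrative Python model of what check_interleaved_big_core()
does (not the kernel C code):

```python
# Model of the interleaving check: within each group, consecutive
# hw thread-ids must differ by nr_groups (illustrative, not kernel code).
def is_interleaved(groups):
    nr_groups = len(groups)
    if nr_groups < 2 or any(len(g) < 2 for g in groups):
        return False
    return all(b == a + nr_groups
               for g in groups for a, b in zip(g, g[1:]))

print(is_interleaved([[0, 2, 4, 6], [1, 3, 5, 7]]))  # True  (interleaved)
print(is_interleaved([[0, 1, 2, 3], [4, 5, 6, 7]]))  # False (consecutive ids)
```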
Signed-off-by: Gautham R. Shenoy <[email protected]>
---
arch/powerpc/kernel/setup-common.c | 67 +++++++++++++++++++++++++++++++++++++-
1 file changed, 66 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
index a78ec66..f63d797 100644
--- a/arch/powerpc/kernel/setup-common.c
+++ b/arch/powerpc/kernel/setup-common.c
@@ -537,6 +537,56 @@ int get_cpu_thread_group_start(int cpu, struct thread_groups *tg)
return -1;
}
+/*
+ * check_interleaved_big_core - Checks if the thread group tg
+ * corresponds to a big-core whose threads are interleavings of the
+ * threads of the component small cores.
+ *
+ * @tg: A thread-group struct for the core.
+ *
+ * Returns true if the core is an interleaved big-core.
+ * Returns false otherwise.
+ */
+static inline bool check_interleaved_big_core(struct thread_groups *tg)
+{
+ int nr_groups;
+ int threads_per_group;
+ int cur_cpu, next_cpu, i, j;
+
+ nr_groups = tg->nr_groups;
+ threads_per_group = tg->threads_per_group;
+
+ if (tg->property != 1)
+ return false;
+
+ if (nr_groups < 2 || threads_per_group < 2)
+ return false;
+
+ /*
+ * In case of an interleaved big-core, the thread-ids of the
+ * big-core can be obtained by interleaving the thread-ids of
+ * the component small cores.
+ *
+ * Eg: On an 8-thread big-core with two SMT4 small cores, the
+ * threads of the two component small cores will be
+ * {0, 2, 4, 6} and {1, 3, 5, 7}.
+ */
+ for (i = 0; i < nr_groups; i++) {
+ int group_start = i * threads_per_group;
+
+ for (j = 0; j < threads_per_group - 1; j++) {
+ int cur_idx = group_start + j;
+
+ cur_cpu = tg->thread_list[cur_idx];
+ next_cpu = tg->thread_list[cur_idx + 1];
+ if (next_cpu != cur_cpu + nr_groups)
+ return false;
+ }
+ }
+
+ return true;
+}
+
/**
* setup_cpu_maps - initialize the following cpu maps:
* cpu_possible_mask
@@ -560,6 +610,7 @@ void __init smp_setup_cpu_maps(void)
struct device_node *dn;
int cpu = 0;
int nthreads = 1;
+ bool has_interleaved_big_core = true;
DBG("smp_setup_cpu_maps()\n");
@@ -613,6 +664,12 @@ void __init smp_setup_cpu_maps(void)
if (parse_thread_groups(dn, &tg) ||
tg.nr_groups < 1 || tg.property != 1) {
has_big_cores = false;
+ has_interleaved_big_core = false;
+ }
+
+ if (has_interleaved_big_core) {
+ has_interleaved_big_core =
+ check_interleaved_big_core(&tg);
}
if (cpu >= nr_cpu_ids) {
@@ -669,7 +726,15 @@ void __init smp_setup_cpu_maps(void)
vdso_data->processorCount = num_present_cpus();
#endif /* CONFIG_PPC64 */
- /* Initialize CPU <=> thread mapping/
+ if (has_interleaved_big_core) {
+ int key = __builtin_ctzl(CPU_FTR_ASYM_SMT);
+
+ cur_cpu_spec->cpu_features |= CPU_FTR_ASYM_SMT;
+ static_branch_enable(&cpu_feature_keys[key]);
+ pr_info("Detected interleaved big-cores\n");
+ }
+
+ /* Initialize CPU <=> thread mapping/
*
* WARNING: We assume that the number of threads is the same for
* every CPU in the system. If that is not the case, then some code
--
1.9.4
On Tue, Jul 03, 2018 at 04:33:50PM +0530, Gautham R. Shenoy wrote:
> From: "Gautham R. Shenoy" <[email protected]>
>
> On IBM POWER9, the device tree exposes a property array identifed by
> "ibm,thread-groups" which will indicate which groups of threads share a
> particular set of resources.
>
> As of today we only have one form of grouping identifying the group of
> threads in the core that share the L1 cache, translation cache and
> instruction data flow.
>
> This patch defines the helper function to parse the contents of
> "ibm,thread-groups" and a new structure to contain the parsed output.
>
> The patch also creates the sysfs file named "small_core_siblings" that
> returns the physical ids of the threads in the core that share the L1
> cache, translation cache and instruction data flow.
>
> Signed-off-by: Gautham R. Shenoy <[email protected]>
> ---
> Documentation/ABI/testing/sysfs-devices-system-cpu | 8 ++
> arch/powerpc/include/asm/cputhreads.h | 22 +++++
> arch/powerpc/kernel/setup-common.c | 110 +++++++++++++++++++++
> arch/powerpc/kernel/sysfs.c | 35 +++++++
> 4 files changed, 175 insertions(+)
>
> diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
> index 9c5e7732..53a823a 100644
> --- a/Documentation/ABI/testing/sysfs-devices-system-cpu
> +++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
> @@ -487,3 +487,11 @@ Description: Information about CPU vulnerabilities
> "Not affected" CPU is not affected by the vulnerability
> "Vulnerable" CPU is affected and no mitigation in effect
> "Mitigation: $M" CPU is affected and mitigation $M is in effect
> +
> +What: /sys/devices/system/cpu/cpu[0-9]+/small_core_sibings
s/small_core_sibings/small_core_siblings
By the way, big_core_siblings was mentioned in the introductory email.
> +Date: 03-Jul-2018
> +KernelVersion: v4.18.0
> +Contact: Gautham R. Shenoy <[email protected]>
> +Description: List of Physical ids of CPUs which share the the L1 cache,
> + translation cache and instruction data-flow with this CPU.
> +Values: Comma separated list of decimal integers.
> diff --git a/arch/powerpc/include/asm/cputhreads.h b/arch/powerpc/include/asm/cputhreads.h
> index d71a909..33226d7 100644
> --- a/arch/powerpc/include/asm/cputhreads.h
> +++ b/arch/powerpc/include/asm/cputhreads.h
> @@ -23,11 +23,13 @@
> extern int threads_per_core;
> extern int threads_per_subcore;
> extern int threads_shift;
> +extern bool has_big_cores;
> extern cpumask_t threads_core_mask;
> #else
> #define threads_per_core 1
> #define threads_per_subcore 1
> #define threads_shift 0
> +#define has_big_cores 0
> #define threads_core_mask (*get_cpu_mask(0))
> #endif
>
> @@ -69,12 +71,32 @@ static inline cpumask_t cpu_online_cores_map(void)
> return cpu_thread_mask_to_cores(cpu_online_mask);
> }
>
> +#define MAX_THREAD_LIST_SIZE 8
> +struct thread_groups {
> + unsigned int property;
> + unsigned int nr_groups;
> + unsigned int threads_per_group;
> + unsigned int thread_list[MAX_THREAD_LIST_SIZE];
> +};
> +
> #ifdef CONFIG_SMP
> int cpu_core_index_of_thread(int cpu);
> int cpu_first_thread_of_core(int core);
> +int parse_thread_groups(struct device_node *dn, struct thread_groups *tg);
> +int get_cpu_thread_group_start(int cpu, struct thread_groups *tg);
> #else
> static inline int cpu_core_index_of_thread(int cpu) { return cpu; }
> static inline int cpu_first_thread_of_core(int core) { return core; }
> +static inline int parse_thread_groups(struct device_node *dn,
> + struct thread_groups *tg)
> +{
> + return -ENODATA;
> +}
> +
> +static inline int get_cpu_thread_group_start(int cpu, struct thread_groups *tg)
> +{
> + return -1;
> +}
> #endif
>
> static inline int cpu_thread_in_core(int cpu)
> diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
> index 40b44bb..a78ec66 100644
> --- a/arch/powerpc/kernel/setup-common.c
> +++ b/arch/powerpc/kernel/setup-common.c
> @@ -402,10 +402,12 @@ void __init check_for_initrd(void)
> #ifdef CONFIG_SMP
>
> int threads_per_core, threads_per_subcore, threads_shift;
> +bool has_big_cores = true;
> cpumask_t threads_core_mask;
> EXPORT_SYMBOL_GPL(threads_per_core);
> EXPORT_SYMBOL_GPL(threads_per_subcore);
> EXPORT_SYMBOL_GPL(threads_shift);
> +EXPORT_SYMBOL_GPL(has_big_cores);
> EXPORT_SYMBOL_GPL(threads_core_mask);
>
> static void __init cpu_init_thread_core_maps(int tpc)
> @@ -433,6 +435,108 @@ static void __init cpu_init_thread_core_maps(int tpc)
>
> u32 *cpu_to_phys_id = NULL;
>
> +/*
> + * parse_thread_groups: Parses the "ibm,thread-groups" device tree
> + * property for the CPU device node dn and stores
> + * the parsed output in the thread_groups
> + * structure tg.
Perhaps document the arguments of this function, as done in the second
patch?
> + *
> + * ibm,thread-groups[0..N-1] array defines which group of threads in
> + * the CPU-device node can be grouped together based on the property.
> + *
> + * ibm,thread-groups[0] tells us the property based on which the
> + * threads are being grouped together. If this value is 1, it implies
> + * that the threads in the same group share L1, translation cache.
> + *
> + * ibm,thread-groups[1] tells us how many such thread groups exist.
> + *
> + * ibm,thread-groups[2] tells us the number of threads in each such
> + * group.
> + *
> + * ibm,thread-groups[3..N-1] is the list of threads identified by
> + * "ibm,ppc-interrupt-server#s" arranged as per their membership in
> + * the grouping.
> + *
> + * Example: If ibm,thread-groups = [1,2,4,5,6,7,8,9,10,11,12] it
> + * implies that there are 2 groups of 4 threads each, where each group
> + * of threads share L1, translation cache.
> + *
> + * The "ibm,ppc-interrupt-server#s" of the first group is {5,6,7,8}
> + * and the "ibm,ppc-interrupt-server#s" of the second group is {9, 10,
> + * 11, 12} structure
> + *
> + * Returns 0 on success, -EINVAL if the property does not exist,
> + * -ENODATA if property does not have a value, and -EOVERFLOW if the
> + * property data isn't large enough.
> + */
> +int parse_thread_groups(struct device_node *dn,
> + struct thread_groups *tg)
> +{
> + unsigned int nr_groups, threads_per_group, property;
> + int i;
> + u32 thread_group_array[3 + MAX_THREAD_LIST_SIZE];
> + u32 *thread_list;
> + size_t total_threads;
> + int ret;
> +
> + ret = of_property_read_u32_array(dn, "ibm,thread-groups",
> + thread_group_array, 3);
> +
> + if (ret)
> + return ret;
> +
> + property = thread_group_array[0];
> + nr_groups = thread_group_array[1];
> + threads_per_group = thread_group_array[2];
> + total_threads = nr_groups * threads_per_group;
> +
> + ret = of_property_read_u32_array(dn, "ibm,thread-groups",
> + thread_group_array,
> + 3 + total_threads);
> + if (ret)
> + return ret;
> +
> + thread_list = &thread_group_array[3];
> +
> + for (i = 0 ; i < total_threads; i++)
> + tg->thread_list[i] = thread_list[i];
> +
> + tg->property = property;
> + tg->nr_groups = nr_groups;
> + tg->threads_per_group = threads_per_group;
> +
> + return 0;
> +}
> +
> +/*
> + * get_cpu_thread_group_start : Searches the thread group in tg->thread_list
> + * that @cpu belongs to.
Same here.
> + *
> + * Returns the index to tg->thread_list that points to the the start
> + * of the thread_group that @cpu belongs to.
> + *
> + * Returns -1 if cpu doesn't belong to any of the groups pointed
> + * to by tg->thread_list.
> + */
> +int get_cpu_thread_group_start(int cpu, struct thread_groups *tg)
> +{
> + int hw_cpu_id = get_hard_smp_processor_id(cpu);
> + int i, j;
> +
> + for (i = 0; i < tg->nr_groups; i++) {
> + int group_start = i * tg->threads_per_group;
> +
> + for (j = 0; j < tg->threads_per_group; j++) {
> + int idx = group_start + j;
> +
> + if (tg->thread_list[idx] == hw_cpu_id)
> + return group_start;
> + }
> + }
> +
> + return -1;
> +}
> +
> /**
> * setup_cpu_maps - initialize the following cpu maps:
> * cpu_possible_mask
> @@ -467,6 +571,7 @@ void __init smp_setup_cpu_maps(void)
> const __be32 *intserv;
> __be32 cpu_be;
> int j, len;
> + struct thread_groups tg = {.nr_groups = 0};
We assume has_big_cores = true but here we initialize .nr_groups
otherwise. It's kind of contradictory.
What if has_big_cores is assumed false and members of tg are initialized
with zeroes?
>
> DBG(" * %pOF...\n", dn);
>
> @@ -505,6 +610,11 @@ void __init smp_setup_cpu_maps(void)
> cpu++;
> }
>
> + if (parse_thread_groups(dn, &tg) ||
> + tg.nr_groups < 1 || tg.property != 1) {
> + has_big_cores = false;
> + }
> +
parse_thread_groups() returns before setting tg.property if property
doesn't exist. Are we confident that tg.property won't contain any
garbage that could lead to a false positive here? Shouldn't we also
initialize .property when declaring tg?
What if this logic is encapsulated in a function? For example:
has_big_cores = dt_has_big_cores(dn, &tg);
> if (cpu >= nr_cpu_ids) {
> of_node_put(dn);
> break;
> diff --git a/arch/powerpc/kernel/sysfs.c b/arch/powerpc/kernel/sysfs.c
> index 755dc98..f5717de 100644
> --- a/arch/powerpc/kernel/sysfs.c
> +++ b/arch/powerpc/kernel/sysfs.c
> @@ -18,6 +18,7 @@
> #include <asm/smp.h>
> #include <asm/pmc.h>
> #include <asm/firmware.h>
> +#include <asm/cputhreads.h>
>
> #include "cacheinfo.h"
> #include "setup.h"
> @@ -1025,6 +1026,33 @@ static ssize_t show_physical_id(struct device *dev,
> }
> static DEVICE_ATTR(physical_id, 0444, show_physical_id, NULL);
>
> +static ssize_t show_small_core_siblings(struct device *dev,
> + struct device_attribute *attr,
> + char *buf)
> +{
> + struct cpu *cpu = container_of(dev, struct cpu, dev);
> + struct device_node *dn = of_get_cpu_node(cpu->dev.id, NULL);
> + struct thread_groups tg;
> + int i, j;
> + ssize_t ret = 0;
> +
> + if (parse_thread_groups(dn, &tg))
> + return -ENODATA;
> +
> + i = get_cpu_thread_group_start(cpu->dev.id, &tg);
> +
> + if (i == -1)
> + return -ENODATA;
> +
> + for (j = 0; j < tg.threads_per_group - 1; j++)
> + ret += sprintf(buf + ret, "%d,", tg.thread_list[i + j]);
> +
> + ret += sprintf(buf + ret, "%d\n", tg.thread_list[i + j]);
> +
> + return ret;
> +}
> +static DEVICE_ATTR(small_core_siblings, 0444, show_small_core_siblings, NULL);
> +
> static int __init topology_init(void)
> {
> int cpu, r;
> @@ -1048,6 +1076,13 @@ static int __init topology_init(void)
> register_cpu(c, cpu);
>
> device_create_file(&c->dev, &dev_attr_physical_id);
> +
> + if (has_big_cores) {
> + const struct device_attribute *attr =
> + &dev_attr_small_core_siblings;
> +
> + device_create_file(&c->dev, attr);
> + }
> }
> }
> r = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "powerpc/topology:online",
> --
> 1.9.4
>
Cheers
Murilo
On Tue, Jul 03, 2018 at 04:33:51PM +0530, Gautham R. Shenoy wrote:
> From: "Gautham R. Shenoy" <[email protected]>
>
> A pair of IBM POWER9 SMT4 cores can be fused together to form a big-core
> with 8 SMT threads. This can be discovered via the "ibm,thread-groups"
> CPU property in the device tree which will indicate which group of
> threads that share the L1 cache, translation cache and instruction data
> flow. If there are multiple such group of threads, then the core is a
> big-core.
>
> Furthermore, if the thread-ids of the threads of the big-core can be
> obtained by interleaving the thread-ids of the thread-groups
> (component small core), then such a big-core is called an interleaved
> big-core.
>
> Eg: Threads in the pair of component SMT4 cores of an interleaved
> big-core are numbered {0,2,4,6} and {1,3,5,7} respectively.
>
> The SMT4 cores forming a big-core are more or less independent
> units. Thus when multiple tasks are scheduled to run on the fused
> core, we get the best performance when the tasks are spread across the
> pair of SMT4 cores.
>
> This patch enables CPU_FTR_ASYM_SMT bit in the cpu-features on
> detecting the presence of interleaved big-cores at boot up. This will
> will bias the load-balancing of tasks on smaller numbered threads,
> which will automatically result in spreading the tasks uniformly
> across the associated pair of SMT4 cores.
>
> Signed-off-by: Gautham R. Shenoy <[email protected]>
> ---
> arch/powerpc/kernel/setup-common.c | 67 +++++++++++++++++++++++++++++++++++++-
> 1 file changed, 66 insertions(+), 1 deletion(-)
>
> diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
> index a78ec66..f63d797 100644
> --- a/arch/powerpc/kernel/setup-common.c
> +++ b/arch/powerpc/kernel/setup-common.c
> @@ -537,6 +537,56 @@ int get_cpu_thread_group_start(int cpu, struct thread_groups *tg)
> return -1;
> }
>
> +/*
> + * check_interleaved_big_core - Checks if the thread group tg
> + * corresponds to a big-core whose threads are interleavings of the
> + * threads of the component small cores.
> + *
> + * @tg: A thread-group struct for the core.
> + *
> + * Returns true if the core is a interleaved big-core.
> + * Returns false otherwise.
> + */
> +static inline bool check_interleaved_big_core(struct thread_groups *tg)
> +{
> + int nr_groups;
> + int threads_per_group;
> + int cur_cpu, next_cpu, i, j;
> +
> + nr_groups = tg->nr_groups;
> + threads_per_group = tg->threads_per_group;
> +
> + if (tg->property != 1)
> + return false;
> +
> + if (nr_groups < 2 || threads_per_group < 2)
> + return false;
> +
> + /*
> + * In case of an interleaved big-core, the thread-ids of the
> + * big-core can be obtained by interleaving the the thread-ids
> + * of the component small
> + *
> + * Eg: On a 8-thread big-core with two SMT4 small cores, the
> + * threads of the two component small cores will be
> + * {0, 2, 4, 6} and {1, 3, 5, 7}.
> + */
> + for (i = 0; i < nr_groups; i++) {
> + int group_start = i * threads_per_group;
> +
> + for (j = 0; j < threads_per_group - 1; j++) {
> + int cur_idx = group_start + j;
> +
> + cur_cpu = tg->thread_list[cur_idx];
> + next_cpu = tg->thread_list[cur_idx + 1];
> + if (next_cpu != cur_cpu + nr_groups)
> + return false;
> + }
> + }
> +
> + return true;
> +}
> +
> /**
> * setup_cpu_maps - initialize the following cpu maps:
> * cpu_possible_mask
> @@ -560,6 +610,7 @@ void __init smp_setup_cpu_maps(void)
> struct device_node *dn;
> int cpu = 0;
> int nthreads = 1;
> + bool has_interleaved_big_core = true;
>
> DBG("smp_setup_cpu_maps()\n");
>
> @@ -613,6 +664,12 @@ void __init smp_setup_cpu_maps(void)
> if (parse_thread_groups(dn, &tg) ||
> tg.nr_groups < 1 || tg.property != 1) {
> has_big_cores = false;
> + has_interleaved_big_core = false;
> + }
> +
> + if (has_interleaved_big_core) {
> + has_interleaved_big_core =
> + check_interleaved_big_core(&tg);
> }
>
> if (cpu >= nr_cpu_ids) {
> @@ -669,7 +726,15 @@ void __init smp_setup_cpu_maps(void)
> vdso_data->processorCount = num_present_cpus();
> #endif /* CONFIG_PPC64 */
>
> - /* Initialize CPU <=> thread mapping/
> + if (has_interleaved_big_core) {
> + int key = __builtin_ctzl(CPU_FTR_ASYM_SMT);
> +
> + cur_cpu_spec->cpu_features |= CPU_FTR_ASYM_SMT;
> + static_branch_enable(&cpu_feature_keys[key]);
> + pr_info("Detected interleaved big-cores\n");
> + }
Shouldn't we use cpu_has_feature(CPU_FTR_ASYM_SMT) before setting it?
> +
> + /* Initialize CPU <=> thread mapping/
> *
> * WARNING: We assume that the number of threads is the same for
> * every CPU in the system. If that is not the case, then some code
> --
> 1.9.4
>
--
Murilo
Hi Murilo,
Thanks for the review.
On Tue, Jul 03, 2018 at 02:53:46PM -0300, Murilo Opsfelder Araujo wrote:
[..snip..]
> > - /* Initialize CPU <=> thread mapping/
> > + if (has_interleaved_big_core) {
> > + int key = __builtin_ctzl(CPU_FTR_ASYM_SMT);
> > +
> > + cur_cpu_spec->cpu_features |= CPU_FTR_ASYM_SMT;
> > + static_branch_enable(&cpu_feature_keys[key]);
> > + pr_info("Detected interleaved big-cores\n");
> > + }
>
> Shouldn't we use cpu_has_feature(CPU_FTR_ASYM_SMT) before setting
> it?
Are you suggesting that we do the following?
if (has_interleaved_big_core &&
!cpu_has_feature(CPU_FTR_ASYM_SMT)) {
...
}
Currently CPU_FTR_ASYM_SMT is set at compile time only for POWER7,
where running the tasks on lower numbered threads gives us the benefit
of SMT thread folding. Interleaved big core is a feature introduced
only on POWER9. Thus, we know that CPU_FTR_ASYM_SMT is not set in
cpu_features at this point.
>
> > +
> > + /* Initialize CPU <=> thread mapping/
> > *
> > * WARNING: We assume that the number of threads is the same for
> > * every CPU in the system. If that is not the case, then some code
> > --
> > 1.9.4
> >
>
> --
> Murilo
--
Thanks and Regards
gautham.
Hello Murilo,
Thanks for reviewing the patch. Replies inline.
On Tue, Jul 03, 2018 at 02:16:55PM -0300, Murilo Opsfelder Araujo wrote:
> On Tue, Jul 03, 2018 at 04:33:50PM +0530, Gautham R. Shenoy wrote:
> > From: "Gautham R. Shenoy" <[email protected]>
> >
> > On IBM POWER9, the device tree exposes a property array identified by
> > "ibm,thread-groups" which will indicate which groups of threads share a
> > particular set of resources.
> >
> > As of today we only have one form of grouping identifying the group of
> > threads in the core that share the L1 cache, translation cache and
> > instruction data flow.
> >
> > This patch defines the helper function to parse the contents of
> > "ibm,thread-groups" and a new structure to contain the parsed output.
> >
> > The patch also creates the sysfs file named "small_core_siblings" that
> > returns the physical ids of the threads in the core that share the L1
> > cache, translation cache and instruction data flow.
> >
> > Signed-off-by: Gautham R. Shenoy <[email protected]>
> > ---
> > Documentation/ABI/testing/sysfs-devices-system-cpu | 8 ++
> > arch/powerpc/include/asm/cputhreads.h | 22 +++++
> > arch/powerpc/kernel/setup-common.c | 110 +++++++++++++++++++++
> > arch/powerpc/kernel/sysfs.c | 35 +++++++
> > 4 files changed, 175 insertions(+)
> >
> > diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
> > index 9c5e7732..53a823a 100644
> > --- a/Documentation/ABI/testing/sysfs-devices-system-cpu
> > +++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
> > @@ -487,3 +487,11 @@ Description: Information about CPU vulnerabilities
> > "Not affected" CPU is not affected by the vulnerability
> > "Vulnerable" CPU is affected and no mitigation in effect
> > "Mitigation: $M" CPU is affected and mitigation $M is in effect
> > +
> > +What: /sys/devices/system/cpu/cpu[0-9]+/small_core_sibings
>
> s/small_core_sibings/small_core_siblings
Nice catch! Will fix this.
>
> By the way, big_core_siblings was mentioned in the introductory
> email.
It should be small_core_siblings in the introductory e-mail. My bad.
>
> > +Date: 03-Jul-2018
> > +KernelVersion: v4.18.0
> > +Contact: Gautham R. Shenoy <[email protected]>
> > +Description: List of Physical ids of CPUs which share the L1 cache,
> > + translation cache and instruction data-flow with this CPU.
> > +Values: Comma separated list of decimal integers.
[..snip..]
> > +/*
> > + * parse_thread_groups: Parses the "ibm,thread-groups" device tree
> > + * property for the CPU device node dn and stores
> > + * the parsed output in the thread_groups
> > + * structure tg.
>
> Perhaps document the arguments of this function, as done in the second
> patch?
Will do this. Thanks.
>
> > + *
> > + * ibm,thread-groups[0..N-1] array defines which group of threads in
> > + * the CPU-device node can be grouped together based on the property.
> > + *
> > + * ibm,thread-groups[0] tells us the property based on which the
> > + * threads are being grouped together. If this value is 1, it implies
> > + * that the threads in the same group share L1, translation cache.
> > + *
> > + * ibm,thread-groups[1] tells us how many such thread groups exist.
> > + *
> > + * ibm,thread-groups[2] tells us the number of threads in each such
> > + * group.
> > + *
> > + * ibm,thread-groups[3..N-1] is the list of threads identified by
> > + * "ibm,ppc-interrupt-server#s" arranged as per their membership in
> > + * the grouping.
> > + *
> > + * Example: If ibm,thread-groups = [1,2,4,5,6,7,8,9,10,11,12] it
> > + * implies that there are 2 groups of 4 threads each, where each group
> > + * of threads share L1, translation cache.
> > + *
> > + * The "ibm,ppc-interrupt-server#s" of the first group is {5,6,7,8}
> > + * and the "ibm,ppc-interrupt-server#s" of the second group is {9, 10,
> > + * 11, 12}.
> > + *
> > + * Returns 0 on success, -EINVAL if the property does not exist,
> > + * -ENODATA if property does not have a value, and -EOVERFLOW if the
> > + * property data isn't large enough.
> > + */
> > +int parse_thread_groups(struct device_node *dn,
> > + struct thread_groups *tg)
> > +{
> > + unsigned int nr_groups, threads_per_group, property;
> > + int i;
> > + u32 thread_group_array[3 + MAX_THREAD_LIST_SIZE];
> > + u32 *thread_list;
> > + size_t total_threads;
> > + int ret;
> > +
> > + ret = of_property_read_u32_array(dn, "ibm,thread-groups",
> > + thread_group_array, 3);
> > +
> > + if (ret)
> > + return ret;
> > +
> > + property = thread_group_array[0];
> > + nr_groups = thread_group_array[1];
> > + threads_per_group = thread_group_array[2];
> > + total_threads = nr_groups * threads_per_group;
> > +
> > + ret = of_property_read_u32_array(dn, "ibm,thread-groups",
> > + thread_group_array,
> > + 3 + total_threads);
> > + if (ret)
> > + return ret;
> > +
> > + thread_list = &thread_group_array[3];
> > +
> > + for (i = 0 ; i < total_threads; i++)
> > + tg->thread_list[i] = thread_list[i];
> > +
> > + tg->property = property;
> > + tg->nr_groups = nr_groups;
> > + tg->threads_per_group = threads_per_group;
> > +
> > + return 0;
> > +}
> > +
> > +/*
> > + * get_cpu_thread_group_start : Searches the thread group in tg->thread_list
> > + * that @cpu belongs to.
>
> Same here.
Sure.
>
> > + *
> > + * Returns the index to tg->thread_list that points to the start
> > + * of the thread_group that @cpu belongs to.
> > + *
> > + * Returns -1 if cpu doesn't belong to any of the groups pointed
> > + * to by tg->thread_list.
> > + */
> > +int get_cpu_thread_group_start(int cpu, struct thread_groups *tg)
> > +{
> > + int hw_cpu_id = get_hard_smp_processor_id(cpu);
> > + int i, j;
> > +
> > + for (i = 0; i < tg->nr_groups; i++) {
> > + int group_start = i * tg->threads_per_group;
> > +
> > + for (j = 0; j < tg->threads_per_group; j++) {
> > + int idx = group_start + j;
> > +
> > + if (tg->thread_list[idx] == hw_cpu_id)
> > + return group_start;
> > + }
> > + }
> > +
> > + return -1;
> > +}
> > +
> > /**
> > * setup_cpu_maps - initialize the following cpu maps:
> > * cpu_possible_mask
> > @@ -467,6 +571,7 @@ void __init smp_setup_cpu_maps(void)
> > const __be32 *intserv;
> > __be32 cpu_be;
> > int j, len;
> > + struct thread_groups tg = {.nr_groups = 0};
>
> We assume has_big_cores = true but here we initialize .nr_groups
> otherwise. It's kind of contradictory.
.nr_groups is being initialized to some sane value here. Perhaps I
should move the initializations of tg.nr_groups and tg.property inside
parse_thread_groups.
>
> What if has_big_cores is assumed false and members of tg are initialized
> with zeroes?
The idea here is that after parsing all the CPU nodes, the variable
"has_big_cores" remains true only if all the CPU nodes are big
cores. If even one of them isn't a big core (not sure if this is
possible in practice), we want to set it to false.
Hence we start with the assumption that has_big_cores is true, and
switch it on finding even one core that is not a big-core.
But I got to know that this is overkill, since if a component small
core is bad, the entire big-core is disabled. Thus it might be
sufficient to check just one CPU node to see whether it is a big
core, and set the variable from "false" to "true" accordingly.
>
> >
> > DBG(" * %pOF...\n", dn);
> >
> > @@ -505,6 +610,11 @@ void __init smp_setup_cpu_maps(void)
> > cpu++;
> > }
> >
> > + if (parse_thread_groups(dn, &tg) ||
> > + tg.nr_groups < 1 || tg.property != 1) {
> > + has_big_cores = false;
> > + }
> > +
>
> parse_thread_groups() returns before setting tg.property if property
> doesn't exist. Are we confident that tg.property won't contain any
> garbage that could lead to a false positive here? Shouldn't we also
> initialize .property when declaring tg?
Yes we should. Will move the initializations to parse_thread_groups.
>
> What if this logic is encapsulated in a function? For example:
>
> has_big_cores = dt_has_big_cores(dn, &tg);
Good idea.
>
> > if (cpu >= nr_cpu_ids) {
> > of_node_put(dn);
> > break;
> > diff --git a/arch/powerpc/kernel/sysfs.c b/arch/powerpc/kernel/sysfs.c
[..snip..]
Will address these changes in the subsequent patch series.
>
> Cheers
> Murilo
--
Thanks and Regards
gautham.
On Wed, Jul 04, 2018 at 01:45:05PM +0530, Gautham R Shenoy wrote:
> Hi Murilo,
>
> Thanks for the review.
>
> On Tue, Jul 03, 2018 at 02:53:46PM -0300, Murilo Opsfelder Araujo wrote:
> [..snip..]
>
> > > - /* Initialize CPU <=> thread mapping/
> > > + if (has_interleaved_big_core) {
> > > + int key = __builtin_ctzl(CPU_FTR_ASYM_SMT);
> > > +
> > > + cur_cpu_spec->cpu_features |= CPU_FTR_ASYM_SMT;
> > > + static_branch_enable(&cpu_feature_keys[key]);
> > > + pr_info("Detected interleaved big-cores\n");
> > > + }
> >
> > Shouldn't we use cpu_has_feature(CPU_FTR_ASYM_SMT) before setting
> > it?
>
>
> Are you suggesting that we do the following?
>
> if (has_interleaved_big_core &&
> !cpu_has_feature(CPU_FTR_ASYM_SMT)) {
> ...
> }
>
> Currently CPU_FTR_ASYM_SMT is set at compile time for only POWER7
> where running the tasks on lower numbered threads give us the benefit
> of SMT thread folding. Interleaved big core is a feature introduced
> only on POWER9. Thus, we know that CPU_FTR_ASYM_SMT is not set in
> cpu_features at this point.
Since we're setting CPU_FTR_ASYM_SMT, it doesn't make sense to use
cpu_has_feature(CPU_FTR_ASYM_SMT). I thought cpu_has_feature() held all
available features (not necessarily enabled) that we could check before
setting or enabling such a feature. I think I misread it. Sorry.
>
> >
> > > +
> > > + /* Initialize CPU <=> thread mapping/
> > > *
> > > * WARNING: We assume that the number of threads is the same for
> > > * every CPU in the system. If that is not the case, then some code
> > > --
> > > 1.9.4
> > >
> >
> > --
> > Murilo
>
> --
> Thanks and Regards
> gautham.
>
--
Murilo