2023-10-10 11:57:15

by Yicong Yang

Subject: [PATCH v2] arch_topology: Support SMT control on arm64

From: Yicong Yang <[email protected]>

The core CPU control framework supports runtime SMT control, which
is not yet supported on arm64. Besides the general vulnerability
concerns, we want this runtime control on our arm64 server for:

- better single CPU performance in some cases
- saving overall power consumption

This patch implements it in the following aspects:

- implement the callbacks of the core
- update the SMT status after the topology is enumerated on arm64
- select HOTPLUG_SMT for arm64

To disable SMT we'll offline all the secondary threads and
leave only the primary thread. Since we don't have a restriction
on primary thread selection, the first thread is chosen as the
primary thread in this implementation.

Tests have been done on our real ACPI based arm64 server and on
ACPI/OF based QEMU VMs.

Signed-off-by: Yicong Yang <[email protected]>
---
Changes since v1:
- Avoid the complexity in SMT detection by visiting each CPU only once, addressing Sudeep's concern
Link: https://lore.kernel.org/all/[email protected]/

arch/arm64/Kconfig | 1 +
drivers/base/arch_topology.c | 75 +++++++++++++++++++++++++++++++++++
include/linux/arch_topology.h | 11 +++++
3 files changed, 87 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 78f20e632712..339661ceabc8 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -233,6 +233,7 @@ config ARM64
select HAVE_KRETPROBES
select HAVE_GENERIC_VDSO
select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
+ select HOTPLUG_SMT if SMP
select IRQ_DOMAIN
select IRQ_FORCED_THREADING
select KASAN_VMALLOC if KASAN
diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
index b741b5ba82bd..c5b453c2cd61 100644
--- a/drivers/base/arch_topology.c
+++ b/drivers/base/arch_topology.c
@@ -729,6 +729,75 @@ const struct cpumask *cpu_clustergroup_mask(int cpu)
return &cpu_topology[cpu].cluster_sibling;
}

+#ifdef CONFIG_HOTPLUG_SMT
+static int topology_smt_num_threads = 1;
+
+void __init topology_smt_set_num_threads(void)
+{
+ int cpu, sibling, threads;
+ cpumask_var_t to_visit;
+
+ if (!alloc_cpumask_var(&to_visit, GFP_KERNEL)) {
+ pr_err("Failed to update the SMT info\n");
+ return;
+ }
+
+ cpumask_or(to_visit, to_visit, cpu_possible_mask);
+
+ /*
+ * Walk all the CPUs to find the largest thread number, in case we're
+ * on a heterogeneous platform with only part of the CPU cores support
+ * SMT.
+ *
+ * Get the thread number by checking the CPUs with same core id
+ * rather than checking the topology_sibling_cpumask(), since the
+ * sibling mask will not cover all the CPUs if there's CPU offline.
+ */
+ for_each_cpu(cpu, to_visit) {
+ threads = 1;
+
+ cpumask_clear_cpu(cpu, to_visit);
+
+ /* Invalid thread id, this CPU is not in a SMT core */
+ if (cpu_topology[cpu].thread_id == -1)
+ continue;
+
+ for_each_cpu(sibling, to_visit) {
+ if (cpu_topology[sibling].thread_id != -1 &&
+ cpu_topology[cpu].core_id == cpu_topology[sibling].core_id)
+ threads++;
+
+ cpumask_clear_cpu(sibling, to_visit);
+ }
+
+ if (threads > topology_smt_num_threads)
+ topology_smt_num_threads = threads;
+ }
+
+ free_cpumask_var(to_visit);
+
+ /*
+ * We don't support CONFIG_SMT_NUM_THREADS_DYNAMIC so make the
+ * max_threads == num_threads.
+ */
+ cpu_smt_set_num_threads(topology_smt_num_threads, topology_smt_num_threads);
+}
+
+/*
+ * On SMT Hotplug the primary thread of the SMT won't be disabled. For x86 they
+ * seem to have a primary thread for special purposes. For other architectures
+ * like arm64 there's no such restriction for a primary thread, so make the
+ * first thread in the SMT as the primary thread.
+ */
+bool topology_is_primary_thread(unsigned int cpu)
+{
+ if (cpu == cpumask_first(topology_sibling_cpumask(cpu)))
+ return true;
+
+ return false;
+}
+#endif
+
void update_siblings_masks(unsigned int cpuid)
{
struct cpu_topology *cpu_topo, *cpuid_topo = &cpu_topology[cpuid];
@@ -841,6 +910,12 @@ void __init init_cpu_topology(void)
reset_cpu_topology();
}

+ /*
+ * By this stage we get to know whether we support SMT or not, update
+ * the information for the core.
+ */
+ topology_smt_set_num_threads();
+
for_each_possible_cpu(cpu) {
ret = fetch_cache_info(cpu);
if (!ret)
diff --git a/include/linux/arch_topology.h b/include/linux/arch_topology.h
index a07b510e7dc5..cf605a576e7b 100644
--- a/include/linux/arch_topology.h
+++ b/include/linux/arch_topology.h
@@ -92,6 +92,17 @@ void update_siblings_masks(unsigned int cpu);
void remove_cpu_topology(unsigned int cpuid);
void reset_cpu_topology(void);
int parse_acpi_topology(void);
+
+#ifdef CONFIG_HOTPLUG_SMT
+bool topology_smt_supported(void);
+bool topology_is_primary_thread(unsigned int cpu);
+void topology_smt_set_num_threads(void);
+#else
+static inline bool topology_smt_supported(void) { return false; }
+static inline bool topology_is_primary_thread(unsigned int cpu) { return false; }
+static inline void topology_smt_set_num_threads(void) { }
+#endif
+
#endif

#endif /* _LINUX_ARCH_TOPOLOGY_H_ */
--
2.24.0


2023-10-10 12:35:08

by Greg Kroah-Hartman

Subject: Re: [PATCH v2] arch_topology: Support SMT control on arm64

On Tue, Oct 10, 2023 at 07:53:35PM +0800, Yicong Yang wrote:
> From: Yicong Yang <[email protected]>
>
> The core CPU control framework supports runtime SMT control, which
> is not yet supported on arm64. Besides the general vulnerability
> concerns, we want this runtime control on our arm64 server for:

But shouldn't this be part of UEFI? Why manually try to determine this
at powerup in Linux?

> - better single CPU performance in some cases
> - saving overall power consumption
>
> This patch implements it in the following aspects:
>
> - implement the callbacks of the core
> - update the SMT status after the topology is enumerated on arm64
> - select HOTPLUG_SMT for arm64
>
> To disable SMT we'll offline all the secondary threads and
> leave only the primary thread. Since we don't have a restriction
> on primary thread selection, the first thread is chosen as the
> primary thread in this implementation.
>
> Tests have been done on our real ACPI based arm64 server and on
> ACPI/OF based QEMU VMs.
>
> Signed-off-by: Yicong Yang <[email protected]>
> ---
> Changes since v1:
> - Avoid the complexity in SMT detection by visiting each CPU only once, addressing Sudeep's concern
> Link: https://lore.kernel.org/all/[email protected]/
>
> arch/arm64/Kconfig | 1 +
> drivers/base/arch_topology.c | 75 +++++++++++++++++++++++++++++++++++
> include/linux/arch_topology.h | 11 +++++
> 3 files changed, 87 insertions(+)
>
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 78f20e632712..339661ceabc8 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -233,6 +233,7 @@ config ARM64
> select HAVE_KRETPROBES
> select HAVE_GENERIC_VDSO
> select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
> + select HOTPLUG_SMT if SMP
> select IRQ_DOMAIN
> select IRQ_FORCED_THREADING
> select KASAN_VMALLOC if KASAN
> diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
> index b741b5ba82bd..c5b453c2cd61 100644
> --- a/drivers/base/arch_topology.c
> +++ b/drivers/base/arch_topology.c
> @@ -729,6 +729,75 @@ const struct cpumask *cpu_clustergroup_mask(int cpu)
> return &cpu_topology[cpu].cluster_sibling;
> }
>
> +#ifdef CONFIG_HOTPLUG_SMT
> +static int topology_smt_num_threads = 1;
> +
> +void __init topology_smt_set_num_threads(void)
> +{
> + int cpu, sibling, threads;
> + cpumask_var_t to_visit;
> +
> + if (!alloc_cpumask_var(&to_visit, GFP_KERNEL)) {
> + pr_err("Failed to update the SMT info\n");
> + return;
> + }
> +
> + cpumask_or(to_visit, to_visit, cpu_possible_mask);
> +
> + /*
> + * Walk all the CPUs to find the largest thread number, in case we're
> + * on a heterogeneous platform with only part of the CPU cores support
> + * SMT.
> + *
> + * Get the thread number by checking the CPUs with same core id
> + * rather than checking the topology_sibling_cpumask(), since the
> + * sibling mask will not cover all the CPUs if there's CPU offline.
> + */
> + for_each_cpu(cpu, to_visit) {
> + threads = 1;
> +
> + cpumask_clear_cpu(cpu, to_visit);
> +
> + /* Invalid thread id, this CPU is not in a SMT core */
> + if (cpu_topology[cpu].thread_id == -1)
> + continue;
> +
> + for_each_cpu(sibling, to_visit) {
> + if (cpu_topology[sibling].thread_id != -1 &&
> + cpu_topology[cpu].core_id == cpu_topology[sibling].core_id)
> + threads++;
> +
> + cpumask_clear_cpu(sibling, to_visit);
> + }
> +
> + if (threads > topology_smt_num_threads)
> + topology_smt_num_threads = threads;
> + }
> +
> + free_cpumask_var(to_visit);
> +
> + /*
> + * We don't support CONFIG_SMT_NUM_THREADS_DYNAMIC so make the
> + * max_threads == num_threads.
> + */
> + cpu_smt_set_num_threads(topology_smt_num_threads, topology_smt_num_threads);
> +}

How is this going to affect non-arm64 systems? Will we now be doing
this double loop for all cpus in the systems (i.e. for 10's of thousands
on x86)?

And again, why is this not an issue on the current platforms that
already support CONFIG_HOTPLUG_SMT? What makes ARM64 so broken it
requires this manual intervention?

thanks,

greg k-h

2023-10-10 13:11:19

by Yicong Yang

Subject: Re: [PATCH v2] arch_topology: Support SMT control on arm64

On 2023/10/10 20:33, Greg KH wrote:
> On Tue, Oct 10, 2023 at 07:53:35PM +0800, Yicong Yang wrote:
>> From: Yicong Yang <[email protected]>
>>
>> The core CPU control framework supports runtime SMT control, which
>> is not yet supported on arm64. Besides the general vulnerability
>> concerns, we want this runtime control on our arm64 server for:
>
> But shouldn't this be part of UEFI? Why manually try to determine this
> at powerup in Linux?

We can disable/enable SMT by UEFI, but it lacks flexibility. With runtime
support we can disable/enable SMT on demand, rather than having it always enabled or always disabled.
In our case, it's mainly for the two reasons listed in the commit.
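
For reference, not part of this patch: with HOTPLUG_SMT selected the core
exposes the generic /sys/devices/system/cpu/smt/control node, so the runtime
toggle is just a write to that file. A minimal userspace sketch of the
intended usage (illustration only, needs root, assumes the standard sysfs
layout):

#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/sys/devices/system/cpu/smt/control", "w");

	if (!f) {
		perror("smt/control");
		return 1;
	}
	/* "off" offlines the secondary threads, "on" brings them back */
	fputs("off", f);
	return fclose(f) ? 1 : 0;
}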

>
>> - better single CPU performance in some cases
>> - saving overall power consumption
>>
>> This patch implements it in the following aspects:
>>
>> - implement the callbacks of the core
>> - update the SMT status after the topology is enumerated on arm64
>> - select HOTPLUG_SMT for arm64
>>
>> To disable SMT we'll offline all the secondary threads and
>> leave only the primary thread. Since we don't have a restriction
>> on primary thread selection, the first thread is chosen as the
>> primary thread in this implementation.
>>
>> Tests have been done on our real ACPI based arm64 server and on
>> ACPI/OF based QEMU VMs.
>>
>> Signed-off-by: Yicong Yang <[email protected]>
>> ---
>> Changes since v1:
>> - Avoid the complexity in SMT detection by visiting each CPU only once, addressing Sudeep's concern
>> Link: https://lore.kernel.org/all/[email protected]/
>>
>> arch/arm64/Kconfig | 1 +
>> drivers/base/arch_topology.c | 75 +++++++++++++++++++++++++++++++++++
>> include/linux/arch_topology.h | 11 +++++
>> 3 files changed, 87 insertions(+)
>>
>> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
>> index 78f20e632712..339661ceabc8 100644
>> --- a/arch/arm64/Kconfig
>> +++ b/arch/arm64/Kconfig
>> @@ -233,6 +233,7 @@ config ARM64
>> select HAVE_KRETPROBES
>> select HAVE_GENERIC_VDSO
>> select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
>> + select HOTPLUG_SMT if SMP
>> select IRQ_DOMAIN
>> select IRQ_FORCED_THREADING
>> select KASAN_VMALLOC if KASAN
>> diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
>> index b741b5ba82bd..c5b453c2cd61 100644
>> --- a/drivers/base/arch_topology.c
>> +++ b/drivers/base/arch_topology.c
>> @@ -729,6 +729,75 @@ const struct cpumask *cpu_clustergroup_mask(int cpu)
>> return &cpu_topology[cpu].cluster_sibling;
>> }
>>
>> +#ifdef CONFIG_HOTPLUG_SMT
>> +static int topology_smt_num_threads = 1;
>> +
>> +void __init topology_smt_set_num_threads(void)
>> +{
>> + int cpu, sibling, threads;
>> + cpumask_var_t to_visit;
>> +
>> + if (!alloc_cpumask_var(&to_visit, GFP_KERNEL)) {
>> + pr_err("Failed to update the SMT info\n");
>> + return;
>> + }
>> +
>> + cpumask_or(to_visit, to_visit, cpu_possible_mask);
>> +
>> + /*
>> + * Walk all the CPUs to find the largest thread number, in case we're
>> + * on a heterogeneous platform with only part of the CPU cores support
>> + * SMT.
>> + *
>> + * Get the thread number by checking the CPUs with same core id
>> + * rather than checking the topology_sibling_cpumask(), since the
>> + * sibling mask will not cover all the CPUs if there's CPU offline.
>> + */
>> + for_each_cpu(cpu, to_visit) {
>> + threads = 1;
>> +
>> + cpumask_clear_cpu(cpu, to_visit);
>> +
>> + /* Invalid thread id, this CPU is not in a SMT core */
>> + if (cpu_topology[cpu].thread_id == -1)
>> + continue;
>> +
>> + for_each_cpu(sibling, to_visit) {
>> + if (cpu_topology[sibling].thread_id != -1 &&
>> + cpu_topology[cpu].core_id == cpu_topology[sibling].core_id)
>> + threads++;
>> +
>> + cpumask_clear_cpu(sibling, to_visit);
>> + }
>> +
>> + if (threads > topology_smt_num_threads)
>> + topology_smt_num_threads = threads;
>> + }
>> +
>> + free_cpumask_var(to_visit);
>> +
>> + /*
>> + * We don't support CONFIG_SMT_NUM_THREADS_DYNAMIC so make the
>> + * max_threads == num_threads.
>> + */
>> + cpu_smt_set_num_threads(topology_smt_num_threads, topology_smt_num_threads);
>> +}
>
> How is this going to affect non-arm64 systems? Will we now be doing
> this double loop for all cpus in the systems (i.e. for 10's of thousands
> on x86)?

x86 builds the topology in its arch code. Currently the users of this driver
are arm64, arm, riscv and parisc, which select GENERIC_ARCH_TOPOLOGY.
An arch needs to select CONFIG_HOTPLUG_SMT for this feature and this patch only
adds arm64 support, so the others won't be affected.

> And again, why is this not an issue on the current platforms that
> already support CONFIG_HOTPLUG_SMT? What makes ARM64 so broken it
> requires this manual intervention?
>

Currently I see x86 and powerpc support CONFIG_HOTPLUG_SMT in mainline.
For x86 they build the topology and detect the SMT support in the arch
code; it seems they can get the SMT number directly from CPUID
enumeration, refer to arch/x86/kernel/cpu/common.c:detect_ht_early(). For
powerpc I see they read the thread number from the firmware (OF)
and assume all the SMT cores have the same thread number, refer to
arch/powerpc/kernel/setup-common.c:smp_setup_cpu_maps(). Please correct
me if there are any mistakes.
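
As an aside, to illustrate why x86 can do this purely in arch code: the
threads-per-core count is enumerable straight from CPUID (leaf 0xB, subleaf 0
is the SMT level). A rough x86-only userspace sketch, illustration only and
not the kernel's detect_ht_early():

#include <stdio.h>
#include <cpuid.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	if (!__get_cpuid_count(0x0b, 0, &eax, &ebx, &ecx, &edx)) {
		puts("CPUID leaf 0xB not supported");
		return 1;
	}
	/* EBX[15:0] = logical processors at this level; subleaf 0 is the SMT level */
	printf("threads per core: %u\n", ebx & 0xffff);
	return 0;
}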

Back to arm64, there are no system registers to directly indicate the thread
number of each SMT core in the system like x86, nor do we have the count
directly from the firmware (OF/ACPI). I cannot assume all the physical cores have
the same thread number, since we may have SMT cores and non-SMT cores on a
heterogeneous platform. We can only know the SMT information after
parsing the firmware information.

So in this implementation I scan the CPUs to find the biggest SMT thread
number rather than use the thread number of the first CPU, in case we're on a
heterogeneous platform. It also handles both ACPI and OF based platforms.
Sudeep also suggested detecting the SMT thread number while parsing the PPTT
or the device tree, but that means handling it separately: do the detection in
the OF scanning code in arch_topology and again in the PPTT scanning code in
the arch code. If we agree that's better, I can switch to that. Any other
suggestions are appreciated.

Thanks.

2023-10-10 14:08:59

by Greg Kroah-Hartman

Subject: Re: [PATCH v2] arch_topology: Support SMT control on arm64

On Tue, Oct 10, 2023 at 09:10:48PM +0800, Yicong Yang wrote:
> On 2023/10/10 20:33, Greg KH wrote:
> > On Tue, Oct 10, 2023 at 07:53:35PM +0800, Yicong Yang wrote:
> >> From: Yicong Yang <[email protected]>
> >>
> >> The core CPU control framework supports runtime SMT control, which
> >> is not yet supported on arm64. Besides the general vulnerability
> >> concerns, we want this runtime control on our arm64 server for:
> >
> > But shouldn't this be part of UEFI? Why manually try to determine this
> > at powerup in Linux?
>
> We can disable/enable SMT by UEFI, but it lacks flexibility. With runtime
> support we can disable/enable SMT on demand, rather than having it always enabled or always disabled.
> In our case, it's mainly for the two reasons listed in the commit.

Runtime is fine, it's the initial detection that I have questions about,
see below.

> > And again, why is this not an issue on the current platforms that
> > already support CONFIG_HOTPLUG_SMT? What makes ARM64 so broken it
> > requires this manual intervention?
> >
>
> Currently I see x86 and powerpc support CONFIG_HOTPLUG_SMT in mainline.
> For x86 they build the topology and detect the SMT support in the arch
> code; it seems they can get the SMT number directly from CPUID
> enumeration, refer to arch/x86/kernel/cpu/common.c:detect_ht_early(). For
> powerpc I see they read the thread number from the firmware (OF)
> and assume all the SMT cores have the same thread number, refer to
> arch/powerpc/kernel/setup-common.c:smp_setup_cpu_maps(). Please correct
> me if there are any mistakes.
>
> Back to arm64, there are no system registers to directly indicate the thread
> number of each SMT core in the system like x86, nor do we have the count
> directly from the firmware (OF/ACPI).

UEFI/ACPI should be showing you this information, as that's how x86 gets
the information, right? If not, it needs to be added as part of the
specification.

> I cannot assume all the physical cores have
> the same thread number, since we may have SMT cores and non-SMT cores on a
> heterogeneous platform. We can only know the SMT information after
> parsing the firmware information.

Ugh, heterogeneous platforms, why??? Anyway, I didn't think about that,
but even then, the firmware should be giving you this information in a
standard way, otherwise it's not following the UEFI/ACPI specification
from what I can tell.

thanks,

greg k-h

2023-10-11 09:33:25

by Jonathan Cameron

Subject: Re: [PATCH v2] arch_topology: Support SMT control on arm64

On Tue, 10 Oct 2023 16:08:25 +0200
Greg KH <[email protected]> wrote:

> On Tue, Oct 10, 2023 at 09:10:48PM +0800, Yicong Yang wrote:
> > On 2023/10/10 20:33, Greg KH wrote:
> > > On Tue, Oct 10, 2023 at 07:53:35PM +0800, Yicong Yang wrote:
> > >> From: Yicong Yang <[email protected]>
> > >>
> > >> The core CPU control framework supports runtime SMT control, which
> > >> is not yet supported on arm64. Besides the general vulnerability
> > >> concerns, we want this runtime control on our arm64 server for:
> > >
> > > But shouldn't this be part of UEFI? Why manually try to determine this
> > > at powerup in Linux?
> >
> > We can disable/enable SMT by UEFI, but it lacks flexibility. With runtime
> > support we can disable/enable SMT on demand, rather than having it always enabled or always disabled.
> > In our case, it's mainly for the two reasons listed in the commit.
>
> Runtime is fine, it's the initial detection that I have questions about,
> see below.
>
> > > And again, why is this not an issue on the current platforms that
> > > already support CONFIG_HOTPLUG_SMT? What makes ARM64 so broken it
> > > requires this manual intervention?
> > >
> >
> > Currently I see x86 and powerpc support CONFIG_HOTPLUG_SMT in mainline.
> > For x86 they build the topology and detect the SMT support in the arch
> > code; it seems they can get the SMT number directly from CPUID
> > enumeration, refer to arch/x86/kernel/cpu/common.c:detect_ht_early(). For
> > powerpc I see they read the thread number from the firmware (OF)
> > and assume all the SMT cores have the same thread number, refer to
> > arch/powerpc/kernel/setup-common.c:smp_setup_cpu_maps(). Please correct
> > me if there are any mistakes.
> >
> > Back to arm64, there are no system registers to directly indicate the thread
> > number of each SMT core in the system like x86, nor do we have the count
> > directly from the firmware (OF/ACPI).
>
> UEFI/ACPI should be showing you this information, as that's how x86 gets
> the information, right? If not, it needs to be added as part of the
> specification.
>
> > I cannot assume all the physical cores have
> > the same thread number, since we may have SMT cores and non-SMT cores on a
> > heterogeneous platform. We can only know the SMT information after
> > parsing the firmware information.
>
> Ugh, heterogeneous platforms, why??? Anyway, I didn't think about that,
> but even then, the firmware should be giving you this information in a
> standard way, otherwise it's not following the UEFI/ACPI specification
> from what I can tell.

It can be established from the ACPI PPTT, but it's pretty indirect. You'd need
to search all leaf nodes of the tree, find the Processor Hierarchy Nodes
tagged as "Processor is a thread", then walk up from those to find common
ancestors. So ultimately I think you end up doing some form of multiple
pass. I'm not sure it ends up simpler than what yangyicong had here.

There may be better algorithms than the simple nested loop here, though,
with better scalability.
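
One possible shape, as a rough userspace sketch only (made-up topology data,
not a patch): collect the (package_id, core_id) keys of the SMT-capable CPUs
in one pass, sort them, and take the longest run of equal keys as the maximum
thread count, so the whole thing is O(n log n) rather than a nested walk over
the CPU mask:

#include <stdio.h>
#include <stdlib.h>

struct key { int package_id, core_id; };

static int cmp_key(const void *a, const void *b)
{
	const struct key *x = a, *y = b;

	if (x->package_id != y->package_id)
		return x->package_id - y->package_id;
	return x->core_id - y->core_id;
}

int main(void)
{
	/* {thread_id, core_id, package_id} for a pretend 8-CPU system */
	const int topo[][3] = {
		{ 0, 0, 0 }, { 1, 0, 0 }, { 2, 0, 0 }, { 3, 0, 0 },	/* SMT-4 core */
		{ -1, 1, 0 },						/* non-SMT core */
		{ 0, 0, 1 }, { 1, 0, 1 },				/* SMT-2 core */
		{ -1, 1, 1 },						/* non-SMT core */
	};
	const int nr_cpus = sizeof(topo) / sizeof(topo[0]);
	struct key keys[nr_cpus];
	int nr_keys = 0, max_threads = 1, run = 1;

	/* One linear pass: keep only CPUs that are threads of an SMT core */
	for (int cpu = 0; cpu < nr_cpus; cpu++) {
		if (topo[cpu][0] == -1)
			continue;
		keys[nr_keys].core_id = topo[cpu][1];
		keys[nr_keys].package_id = topo[cpu][2];
		nr_keys++;
	}

	/* Sort by (package, core); equal neighbours are threads of one core */
	qsort(keys, nr_keys, sizeof(keys[0]), cmp_key);

	for (int i = 1; i < nr_keys; i++) {
		run = cmp_key(&keys[i - 1], &keys[i]) ? 1 : run + 1;
		if (run > max_threads)
			max_threads = run;
	}

	printf("max SMT threads per core: %d\n", max_threads);	/* prints 4 */
	return 0;
}

In the kernel the keys would come from cpu_topology[], and the sort could be
avoided entirely with a per-core count, but the idea is the same.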

Jonathan






>
> thanks,
>
> greg k-h

2023-10-12 16:00:48

by kernel test robot

Subject: Re: [PATCH v2] arch_topology: Support SMT control on arm64

Hi Yicong,

kernel test robot noticed the following build warnings:

[auto build test WARNING on arm64/for-next/core]
[also build test WARNING on driver-core/driver-core-testing driver-core/driver-core-next driver-core/driver-core-linus soc/for-next linus/master v6.6-rc5 next-20231012]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url: https://github.com/intel-lab-lkp/linux/commits/Yicong-Yang/arch_topology-Support-SMT-control-on-arm64/20231010-195926
base: https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-next/core
patch link: https://lore.kernel.org/r/20231010115335.13862-1-yangyicong%40huawei.com
patch subject: [PATCH v2] arch_topology: Support SMT control on arm64
config: arm64-randconfig-004-20231012 (https://download.01.org/0day-ci/archive/20231012/[email protected]/config)
compiler: aarch64-linux-gcc (GCC) 13.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231012/[email protected]/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <[email protected]>
| Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/

All warnings (new ones prefixed by >>):

In file included from include/linux/cpumask.h:12,
from include/linux/smp.h:13,
from include/linux/lockdep.h:14,
from include/linux/spinlock.h:63,
from include/linux/mmzone.h:8,
from include/linux/gfp.h:7,
from include/linux/slab.h:16,
from include/linux/resource_ext.h:11,
from include/linux/acpi.h:13,
from drivers/base/arch_topology.c:9:
In function 'bitmap_or',
inlined from 'cpumask_or' at include/linux/cpumask.h:582:2,
inlined from 'topology_smt_set_num_threads' at drivers/base/arch_topology.c:745:2:
>> include/linux/bitmap.h:330:24: warning: 'to_visit' is used uninitialized [-Wuninitialized]
330 | *dst = *src1 | *src2;
| ^~~~~
drivers/base/arch_topology.c: In function 'topology_smt_set_num_threads':
drivers/base/arch_topology.c:738:23: note: 'to_visit' declared here
738 | cpumask_var_t to_visit;
| ^~~~~~~~


vim +/to_visit +330 include/linux/bitmap.h

^1da177e4c3f41 Linus Torvalds 2005-04-16 325
^1da177e4c3f41 Linus Torvalds 2005-04-16 326 static inline void bitmap_or(unsigned long *dst, const unsigned long *src1,
2f9305eb31097f Rasmus Villemoes 2014-08-06 327 const unsigned long *src2, unsigned int nbits)
^1da177e4c3f41 Linus Torvalds 2005-04-16 328 {
4b0bc0bca83f3f Rusty Russell 2008-12-30 329 if (small_const_nbits(nbits))
^1da177e4c3f41 Linus Torvalds 2005-04-16 @330 *dst = *src1 | *src2;
^1da177e4c3f41 Linus Torvalds 2005-04-16 331 else
^1da177e4c3f41 Linus Torvalds 2005-04-16 332 __bitmap_or(dst, src1, src2, nbits);
^1da177e4c3f41 Linus Torvalds 2005-04-16 333 }
^1da177e4c3f41 Linus Torvalds 2005-04-16 334

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

2023-10-12 16:11:33

by kernel test robot

Subject: Re: [PATCH v2] arch_topology: Support SMT control on arm64

Hi Yicong,

kernel test robot noticed the following build errors:

[auto build test ERROR on arm64/for-next/core]
[also build test ERROR on driver-core/driver-core-testing driver-core/driver-core-next driver-core/driver-core-linus soc/for-next linus/master v6.6-rc5 next-20231012]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url: https://github.com/intel-lab-lkp/linux/commits/Yicong-Yang/arch_topology-Support-SMT-control-on-arm64/20231010-195926
base: https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-next/core
patch link: https://lore.kernel.org/r/20231010115335.13862-1-yangyicong%40huawei.com
patch subject: [PATCH v2] arch_topology: Support SMT control on arm64
config: arm64-allnoconfig (https://download.01.org/0day-ci/archive/20231012/[email protected]/config)
compiler: aarch64-linux-gcc (GCC) 13.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231012/[email protected]/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <[email protected]>
| Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/

All errors (new ones prefixed by >>):

kernel/cpu.c: In function 'cpuhp_smt_disable':
>> kernel/cpu.c:2687:23: error: implicit declaration of function 'cpu_down_maps_locked' [-Werror=implicit-function-declaration]
2687 | ret = cpu_down_maps_locked(cpu, CPUHP_OFFLINE);
| ^~~~~~~~~~~~~~~~~~~~
cc1: some warnings being treated as errors


vim +/cpu_down_maps_locked +2687 kernel/cpu.c

dc8d37ed304eee Arnd Bergmann 2019-12-10 2672
dc8d37ed304eee Arnd Bergmann 2019-12-10 2673 int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
dc8d37ed304eee Arnd Bergmann 2019-12-10 2674 {
dc8d37ed304eee Arnd Bergmann 2019-12-10 2675 int cpu, ret = 0;
dc8d37ed304eee Arnd Bergmann 2019-12-10 2676
dc8d37ed304eee Arnd Bergmann 2019-12-10 2677 cpu_maps_update_begin();
dc8d37ed304eee Arnd Bergmann 2019-12-10 2678 for_each_online_cpu(cpu) {
dc8d37ed304eee Arnd Bergmann 2019-12-10 2679 if (topology_is_primary_thread(cpu))
dc8d37ed304eee Arnd Bergmann 2019-12-10 2680 continue;
38253464bc821d Michael Ellerman 2023-07-05 2681 /*
38253464bc821d Michael Ellerman 2023-07-05 2682 * Disable can be called with CPU_SMT_ENABLED when changing
38253464bc821d Michael Ellerman 2023-07-05 2683 * from a higher to lower number of SMT threads per core.
38253464bc821d Michael Ellerman 2023-07-05 2684 */
38253464bc821d Michael Ellerman 2023-07-05 2685 if (ctrlval == CPU_SMT_ENABLED && cpu_smt_thread_allowed(cpu))
38253464bc821d Michael Ellerman 2023-07-05 2686 continue;
dc8d37ed304eee Arnd Bergmann 2019-12-10 @2687 ret = cpu_down_maps_locked(cpu, CPUHP_OFFLINE);
dc8d37ed304eee Arnd Bergmann 2019-12-10 2688 if (ret)
dc8d37ed304eee Arnd Bergmann 2019-12-10 2689 break;
dc8d37ed304eee Arnd Bergmann 2019-12-10 2690 /*
dc8d37ed304eee Arnd Bergmann 2019-12-10 2691 * As this needs to hold the cpu maps lock it's impossible
dc8d37ed304eee Arnd Bergmann 2019-12-10 2692 * to call device_offline() because that ends up calling
dc8d37ed304eee Arnd Bergmann 2019-12-10 2693 * cpu_down() which takes cpu maps lock. cpu maps lock
dc8d37ed304eee Arnd Bergmann 2019-12-10 2694 * needs to be held as this might race against in kernel
dc8d37ed304eee Arnd Bergmann 2019-12-10 2695 * abusers of the hotplug machinery (thermal management).
dc8d37ed304eee Arnd Bergmann 2019-12-10 2696 *
dc8d37ed304eee Arnd Bergmann 2019-12-10 2697 * So nothing would update device:offline state. That would
dc8d37ed304eee Arnd Bergmann 2019-12-10 2698 * leave the sysfs entry stale and prevent onlining after
dc8d37ed304eee Arnd Bergmann 2019-12-10 2699 * smt control has been changed to 'off' again. This is
dc8d37ed304eee Arnd Bergmann 2019-12-10 2700 * called under the sysfs hotplug lock, so it is properly
dc8d37ed304eee Arnd Bergmann 2019-12-10 2701 * serialized against the regular offline usage.
dc8d37ed304eee Arnd Bergmann 2019-12-10 2702 */
dc8d37ed304eee Arnd Bergmann 2019-12-10 2703 cpuhp_offline_cpu_device(cpu);
dc8d37ed304eee Arnd Bergmann 2019-12-10 2704 }
dc8d37ed304eee Arnd Bergmann 2019-12-10 2705 if (!ret)
dc8d37ed304eee Arnd Bergmann 2019-12-10 2706 cpu_smt_control = ctrlval;
dc8d37ed304eee Arnd Bergmann 2019-12-10 2707 cpu_maps_update_done();
dc8d37ed304eee Arnd Bergmann 2019-12-10 2708 return ret;
dc8d37ed304eee Arnd Bergmann 2019-12-10 2709 }
dc8d37ed304eee Arnd Bergmann 2019-12-10 2710

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki