2023-04-25 20:00:52

by Tony Battersby

Subject: [PATCH RFC] x86/cpu: fix intermittent lockup on poweroff

In stop_this_cpu(), make sure the CPUID leaf exists before accessing it.
This fixes a lockup on poweroff that occurs about 50% of the time, caused
by the wrong branch being taken randomly on some CPUs (seen on a Supermicro
X8DTH-6F with Intel Xeon X5650).

Fixes: 08f253ec3767 ("x86/cpu: Clear SME feature flag when not in use")
Cc: <[email protected]> # 5.18+
Signed-off-by: Tony Battersby <[email protected]>
---

NOTE: I don't have any AMD CPUs to test, so I was unable to fully test
this patch. Could someone with an AMD CPU that supports SME please test
this and make sure it calls native_wbinvd()?


arch/x86/kernel/process.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index b650cde3f64d..26aa32e8f636 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -754,13 +754,15 @@ bool xen_set_default_idle(void)

void __noreturn stop_this_cpu(void *dummy)
{
+ struct cpuinfo_x86 *c = this_cpu_ptr(&cpu_info);
+
local_irq_disable();
/*
* Remove this CPU:
*/
set_cpu_online(smp_processor_id(), false);
disable_local_APIC();
- mcheck_cpu_clear(this_cpu_ptr(&cpu_info));
+ mcheck_cpu_clear(c);

/*
* Use wbinvd on processors that support SME. This provides support
@@ -774,7 +776,8 @@ void __noreturn stop_this_cpu(void *dummy)
* Test the CPUID bit directly because the machine might've cleared
* X86_FEATURE_SME due to cmdline options.
*/
- if (cpuid_eax(0x8000001f) & BIT(0))
+ if (c->extended_cpuid_level >= 0x8000001f &&
+ (cpuid_eax(0x8000001f) & BIT(0)))
native_wbinvd();
for (;;) {
/*
--
2.25.1


2023-04-25 20:06:40

by Dave Hansen

Subject: Re: [PATCH RFC] x86/cpu: fix intermittent lockup on poweroff

On 4/25/23 12:26, Tony Battersby wrote:
> - if (cpuid_eax(0x8000001f) & BIT(0))
> + if (c->extended_cpuid_level >= 0x8000001f &&
> + (cpuid_eax(0x8000001f) & BIT(0)))
> native_wbinvd();

Oh, so the existing code is running into the

> If a value entered for CPUID.EAX is higher than the maximum input
> value for basic or extended function for that processor then the data
> for the highest basic information leaf is returned
behavior. It's basically looking at BIT(0) of some random extended
leaf. Probably 0x80000008 based on your 'cpuid -r' output.


2023-04-25 20:44:47

by Dave Hansen

Subject: Re: [PATCH RFC] x86/cpu: fix intermittent lockup on poweroff

On 4/25/23 13:03, Dave Hansen wrote:
> On 4/25/23 12:26, Tony Battersby wrote:
>> - if (cpuid_eax(0x8000001f) & BIT(0))
>> + if (c->extended_cpuid_level >= 0x8000001f &&
>> + (cpuid_eax(0x8000001f) & BIT(0)))
>> native_wbinvd();
> Oh, so the existing code is running into the
>
>> If a value entered for CPUID.EAX is higher than the maximum input
>> value for basic or extended function for that processor then the data
>> for the highest basic information leaf is returned
> behavior. It's basically looking at BIT(0) of some random extended
> leaf. Probably 0x80000008 based on your 'cpuid -r' output.

Whoops, 0x80000008 isn't a "basic information leaf". If 'cpuid -r'
dumps all the basic leaves, that would mean the "highest basic
information leaf" is 0x0000000b:

> 0x0000000b 0x00: eax=0x00000001 ebx=0x00000002 ecx=0x00000100 edx=0x00000000

which does have BIT(0) set.

So, that at least explains how WBINVD gets called in the first place.
But (as tglx noted on IRC) it doesn't really explain the lockup. WBINVD
should work everywhere and it won't #UD or something because the CPUID
check was botched.
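
A minimal user-space sketch of the check under discussion, assuming GCC's
<cpuid.h> helpers (illustrative only, not part of any patch in this thread;
the file name and messages are made up). It queries the maximum extended
leaf first, which is exactly the guard the RFC patch adds in the kernel:

/* cpuid-sme-check.c - build with: gcc -O2 cpuid-sme-check.c */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
        unsigned int max_ext, eax, ebx, ecx, edx;

        /* CPUID leaf 0x80000000 reports the highest extended leaf in EAX. */
        max_ext = __get_cpuid_max(0x80000000, NULL);
        printf("max extended leaf: 0x%08x\n", max_ext);

        if (max_ext < 0x8000001f) {
                /* On Intel, reading 0x8000001f here would silently return
                 * the data of the highest basic leaf, so EAX bit 0 is noise. */
                printf("leaf 0x8000001f not implemented, skipping SME check\n");
                return 0;
        }

        __get_cpuid(0x8000001f, &eax, &ebx, &ecx, &edx);
        printf("SME supported: %s\n", (eax & 1) ? "yes" : "no");
        return 0;
}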

2023-04-25 21:07:21

by Thomas Gleixner

Subject: Re: [PATCH RFC] x86/cpu: fix intermittent lockup on poweroff

On Tue, Apr 25 2023 at 13:03, Dave Hansen wrote:

> On 4/25/23 12:26, Tony Battersby wrote:
>> - if (cpuid_eax(0x8000001f) & BIT(0))
>> + if (c->extended_cpuid_level >= 0x8000001f &&
>> + (cpuid_eax(0x8000001f) & BIT(0)))
>> native_wbinvd();
>
> Oh, so the existing code is running into the
>
>> If a value entered for CPUID.EAX is higher than the maximum input
>> value for basic or extended function for that processor then the data
>> for the highest basic information leaf is returned
> behavior. It's basically looking at BIT(0) of some random extended
> leaf. Probably 0x80000008 based on your 'cpuid -r' output.

Right, accessing that leaf without checking whether it exists is wrong,
but that does not explain the hang itself.

The only consequence of looking at bit 0 of some random other leaf is
that all CPUs which run stop_this_cpu() issue WBINVD in parallel, which
is slow but should not be a fatal issue.

Tony observed this is a 50% chance to hang, which means this is a timing
issue.

Now there are two things to investigate:

1) Does the system go south if enough CPUs issue WBINVD concurrently?

That should be trivial to analyze by enforcing concurrency on a
WBINVD in an IPI via a global synchronization bit or such (see the
sketch after this list).

2) The first thing stop_this_cpu() does is to clear its own bit in the
CPU online mask.

The CPU which controls shutdown/reboot waits for num_online_cpus()
to drop down to one, which means it can proceed _before_ the other
CPUs have actually reached HALT.

That's not a new thing. It has been that way forever. Just the
WBINVD might cause enough delay to create problems.

That should be trivial to analyze too by just waiting on the
control side for e.g 100ms after num_online_cpus() dropped down to
one.
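
A rough, untested sketch of what the forced-concurrency test from #1 could
look like (the function and variable names are invented here purely for
illustration; this is not a patch anyone posted in the thread):

#include <linux/atomic.h>
#include <linux/smp.h>
#include <asm/special_insns.h>	/* native_wbinvd() */

static atomic_t wbinvd_gate;

static void concurrent_wbinvd_ipi(void *unused)
{
        /* Rendezvous: wait until every CPU has arrived, then WBINVD together. */
        atomic_dec(&wbinvd_gate);
        while (atomic_read(&wbinvd_gate) > 0)
                cpu_relax();
        native_wbinvd();
}

static void run_concurrent_wbinvd_test(void)
{
        atomic_set(&wbinvd_gate, num_online_cpus());
        /*
         * wait=0: the calling CPU runs the handler itself right after
         * sending the IPIs, and that is what finally releases the gate,
         * so it must not sit waiting for the remote handlers to finish.
         */
        on_each_cpu(concurrent_wbinvd_ipi, NULL, 0);
}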

The patch itself is correct as is, but it does not explain the
underlying problem. There is a real serious issue underneath.

Thanks,

tglx

2023-04-25 21:31:51

by Borislav Petkov

Subject: Re: [PATCH RFC] x86/cpu: fix intermittent lockup on poweroff

On Tue, Apr 25, 2023 at 01:34:48PM -0700, Dave Hansen wrote:
> > 0x0000000b 0x00: eax=0x00000001 ebx=0x00000002 ecx=0x00000100 edx=0x00000000
>
> which does have BIT(0) set.

Yeah, or the last basic leaf

0x0000000b 0x01: eax=0x00000005 ebx=0x0000000c ecx=0x00000201 edx=0x00000000

but that one has bit 0 set too, so yeah, patch is correct. I'll queue it
after -rc1.

But Tony, please change your commit message to say something like this:

"Check the maximum supported extended CPUID level because on Intel,
querying an invalid extended CPUID leaf returns the values of the
maximum basic CPUID leaf and if bit 0 in EAX is set, the check falsely
evaluates to true and WBINVD is wrongly executed where it shouldn't."

Thx.

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2023-04-25 22:42:53

by Dave Hansen

Subject: Re: [PATCH RFC] x86/cpu: fix intermittent lockup on poweroff

On 4/25/23 14:05, Thomas Gleixner wrote:
> The only consequence of looking at bit 0 of some random other leaf is
> that all CPUs which run stop_this_cpu() issue WBINVD in parallel, which
> is slow but should not be a fatal issue.
>
> Tony observed this is a 50% chance to hang, which means this is a timing
> issue.

I _think_ the system in question is a dual-socket Westmere. I don't see
any obvious errata that we could pin this on:

> https://www.intel.com/content/dam/www/public/us/en/documents/specification-updates/xeon-5600-specification-update.pdf

Andi Kleen had an interesting theory. WBINVD is a pretty expensive
operation. It's possible that it has some degenerative behavior when
it's called on a *bunch* of CPUs all at once (which this path can do).
If the instruction takes too long, it could trigger one of the CPU's
internal lockup detectors and trigger a machine check. At that point,
all hell breaks loose.

I don't know the cache coherency protocol well enough to say for sure,
but I wonder if there's a storm of cache coherency traffic as all those
lines get written back. One of the CPUs gets starved from making enough
forward progress and trips a CPU-internal watchdog.

Andi also says that it _should_ log something in the machine check banks
when this happens so there should be at least some kind of breadcrumb.

Either way, I'm hoping this hand waving satiates tglx's morbid curiosity
about hardware that came out from before I even worked at Intel. ;)

2023-04-25 23:05:42

by Thomas Gleixner

Subject: Re: [PATCH RFC] x86/cpu: fix intermittent lockup on poweroff

On Tue, Apr 25 2023 at 15:29, Dave Hansen wrote:
> On 4/25/23 14:05, Thomas Gleixner wrote:
>> The only consequence of looking at bit 0 of some random other leaf is
>> that all CPUs which run stop_this_cpu() issue WBINVD in parallel, which
>> is slow but should not be a fatal issue.
>>
>> Tony observed this is a 50% chance to hang, which means this is a timing
>> issue.
>
> I _think_ the system in question is a dual-socket Westmere. I don't see
> any obvious errata that we could pin this on:
>
>> https://www.intel.com/content/dam/www/public/us/en/documents/specification-updates/xeon-5600-specification-update.pdf
>
> Andi Kleen had an interesting theory. WBINVD is a pretty expensive
> operation. It's possible that it has some degenerative behavior when
> it's called on a *bunch* of CPUs all at once (which this path can do).
> If the instruction takes too long, it could trigger one of the CPU's
> internal lockup detectors and trigger a machine check. At that point,
> all hell breaks loose.
>
> I don't know the cache coherency protocol well enough to say for sure,
> but I wonder if there's a storm of cache coherency traffic as all those
> lines get written back. One of the CPUs gets starved from making enough
> forward progress and trips a CPU-internal watchdog.
>
> Andi also says that it _should_ log something in the machine check banks
> when this happens so there should be at least some kind of breadcrumb.
>
> Either way, I'm hoping this hand waving satiates tglx's morbid curiosity
> about hardware that came out from before I even worked at Intel. ;)

No, it does not. :)

There is no reason to believe that this is just a problem of CPUs which
were released a long time ago.

If there is an issue with concurrent WBINVD then this needs to be
addressed independently of Tony's observations.

Aside from that, the fact that the control CPU is allowed to make progress
based on the early clearing of the CPU online bit could still explain the
wreckage just based on timing.

The reason why I insist on a proper analysis is definitely not morbid
curiosity. The real reason is that I fundamentally hate problems being
handwaved away.

It's a matter of fact that all problems which are not root caused keep
coming back and not necessarily in debuggable ways. Tony's 50% case is
golden compared to the once in a blue moon issues.

I outlined the debug options already. So just throw them at the problem
instead of indulging in handwaving theories.

Thanks,

tglx

2023-04-26 00:16:22

by H. Peter Anvin

Subject: Re: [PATCH RFC] x86/cpu: fix intermittent lockup on poweroff

On April 25, 2023 3:29:49 PM PDT, Dave Hansen <[email protected]> wrote:
>On 4/25/23 14:05, Thomas Gleixner wrote:
>> The only consequence of looking at bit 0 of some random other leaf is
>> that all CPUs which run stop_this_cpu() issue WBINVD in parallel, which
>> is slow but should not be a fatal issue.
>>
>> Tony observed this is a 50% chance to hang, which means this is a timing
>> issue.
>
>I _think_ the system in question is a dual-socket Westmere. I don't see
>any obvious errata that we could pin this on:
>
>> https://www.intel.com/content/dam/www/public/us/en/documents/specification-updates/xeon-5600-specification-update.pdf
>
>Andi Kleen had an interesting theory. WBINVD is a pretty expensive
>operation. It's possible that it has some degenerative behavior when
>it's called on a *bunch* of CPUs all at once (which this path can do).
>If the instruction takes too long, it could trigger one of the CPU's
>internal lockup detectors and trigger a machine check. At that point,
>all hell breaks loose.
>
>I don't know the cache coherency protocol well enough to say for sure,
>but I wonder if there's a storm of cache coherency traffic as all those
>lines get written back. One of the CPUs gets starved from making enough
>forward progress and trips a CPU-internal watchdog.
>
>Andi also says that it _should_ log something in the machine check banks
>when this happens so there should be at least some kind of breadcrumb.
>
>Either way, I'm hoping this hand waving satiates tglx's morbid curiosity
>about hardware that came out from before I even worked at Intel. ;)

"Pretty expensive" doesn't really cover it. It is by far the longest time an x86 CPU can block out all outside events.

2023-04-26 14:58:02

by Tony Battersby

Subject: Re: [PATCH RFC] x86/cpu: fix intermittent lockup on poweroff

On 4/25/23 17:05, Thomas Gleixner wrote:
> On Tue, Apr 25 2023 at 13:03, Dave Hansen wrote:
>
>> On 4/25/23 12:26, Tony Battersby wrote:
>>> - if (cpuid_eax(0x8000001f) & BIT(0))
>>> + if (c->extended_cpuid_level >= 0x8000001f &&
>>> + (cpuid_eax(0x8000001f) & BIT(0)))
>>> native_wbinvd();
>> Oh, so the existing code is running into the
>>
>>> If a value entered for CPUID.EAX is higher than the maximum input
>>> value for basic or extended function for that processor then the data
>>> for the highest basic information leaf is returned
>> behavior. It's basically looking at BIT(0) of some random extended
>> leaf. Probably 0x80000008 based on your 'cpuid -r' output.
> Right, accessing that leaf without checking whether it exists is wrong,
> but that does not explain the hang itself.
>
> The only consequence of looking at bit 0 of some random other leaf is
> that all CPUs which run stop_this_cpu() issue WBINVD in parallel, which
> is slow but should not be a fatal issue.
>
> Tony observed this is a 50% chance to hang, which means this is a timing
> issue.
>
> Now there are two things to investigate:
>
> 1) Does the system go south if enough CPUs issue WBINVD concurrently?
>
> That should be trivial to analyze by enforcing concurrency on a
> WBINVD in an IPI via a global synchronization bit or such
>
> 2) The first thing stop_this_cpu() does is to clear its own bit in the
> CPU online mask.
>
> The CPU which controls shutdown/reboot waits for num_online_cpus()
> to drop down to one, which means it can proceed _before_ the other
> CPUs have actually reached HALT.
>
> That's not a new thing. It has been that way forever. Just the
> WBINVD might cause enough delay to create problems.
>
> That should be trivial to analyze too by just waiting on the
> control side for e.g 100ms after num_online_cpus() dropped down to
> one.
>
For test #1, I have never used IPI before, so I would have to look into
how to do that.  Or you could send me a patch to test if you still want
the test done.  But test #2 produced results, so maybe it is not necessary.

For test #2, I re-enabled native_wbinvd() by reverting the patch that I
sent, and then I applied the following patch:

diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
index 375b33ecafa2..1a9b225c85b6 100644
--- a/arch/x86/kernel/smp.c
+++ b/arch/x86/kernel/smp.c
@@ -212,6 +212,7 @@ static void native_stop_other_cpus(int wait)
udelay(1);
}

+ mdelay(100);
local_irq_save(flags);
disable_local_APIC();
mcheck_cpu_clear(this_cpu_ptr(&cpu_info));

With that I got a successful power-off 10 times in a row.

Let me know if there is anything else I can test.

I will resend my patch later with a different description.

Tony Battersby
Cybernetics


2023-04-26 16:46:15

by Thomas Gleixner

Subject: Re: [PATCH RFC] x86/cpu: fix intermittent lockup on poweroff

Tony!

On Wed, Apr 26 2023 at 10:45, Tony Battersby wrote:
> On 4/25/23 17:05, Thomas Gleixner wrote:
> For test #1, I have never used IPI before, so I would have to look into
> how to do that.  Or you could send me a patch to test if you still want
> the test done.  But test #2 produced results, so maybe it is not
> necessary.

I think we can spare that exercise.

> For test #2, I re-enabled native_wbinvd() by reverting the patch that I
> sent, and then I applied the following patch:
>
> diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
> index 375b33ecafa2..1a9b225c85b6 100644
> --- a/arch/x86/kernel/smp.c
> +++ b/arch/x86/kernel/smp.c
> @@ -212,6 +212,7 @@ static void native_stop_other_cpus(int wait)
> udelay(1);
> }
>
> + mdelay(100);
> local_irq_save(flags);
> disable_local_APIC();
> mcheck_cpu_clear(this_cpu_ptr(&cpu_info));
>
> With that I got a successful power-off 10 times in a row.

Thanks for trying this!

The problem really seems to be that the control CPU goes off before the
other CPUs have finished and depending on timing that causes the
wreckage. Otherwise the mdelay(100) would not have helped at all.

But looking at it, that num_online_cpus() == 1 check in
stop_other_cpus() is fragile as hell independent of that wbinvd() issue.

Something like the completely untested below should cure that.

Thanks,

tglx
---
arch/x86/include/asm/cpu.h | 2 ++
arch/x86/kernel/process.c | 10 ++++++++++
arch/x86/kernel/smp.c | 15 ++++++++++++---
3 files changed, 24 insertions(+), 3 deletions(-)

--- a/arch/x86/include/asm/cpu.h
+++ b/arch/x86/include/asm/cpu.h
@@ -98,4 +98,6 @@ extern u64 x86_read_arch_cap_msr(void);
int intel_find_matching_signature(void *mc, unsigned int csig, int cpf);
int intel_microcode_sanity_check(void *mc, bool print_err, int hdr_type);

+extern atomic_t stop_cpus_count;
+
#endif /* _ASM_X86_CPU_H */
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -752,6 +752,8 @@ bool xen_set_default_idle(void)
}
#endif

+atomic_t stop_cpus_count;
+
void __noreturn stop_this_cpu(void *dummy)
{
local_irq_disable();
@@ -776,6 +778,14 @@ void __noreturn stop_this_cpu(void *dumm
*/
if (cpuid_eax(0x8000001f) & BIT(0))
native_wbinvd();
+
+ /*
+ * native_stop_other_cpus() will write to @stop_cpus_count after
+ * observing that it went down to zero, which will invalidate the
+ * cacheline on this CPU.
+ */
+ atomic_dec(&stop_cpus_count);
+
for (;;) {
/*
* Use native_halt() so that memory contents don't change
--- a/arch/x86/kernel/smp.c
+++ b/arch/x86/kernel/smp.c
@@ -27,6 +27,7 @@
#include <asm/mmu_context.h>
#include <asm/proto.h>
#include <asm/apic.h>
+#include <asm/cpu.h>
#include <asm/idtentry.h>
#include <asm/nmi.h>
#include <asm/mce.h>
@@ -171,6 +172,8 @@ static void native_stop_other_cpus(int w
if (atomic_cmpxchg(&stopping_cpu, -1, safe_smp_processor_id()) != -1)
return;

+ atomic_set(&stop_cpus_count, num_online_cpus() - 1);
+
/* sync above data before sending IRQ */
wmb();

@@ -183,12 +186,12 @@ static void native_stop_other_cpus(int w
* CPUs reach shutdown state.
*/
timeout = USEC_PER_SEC;
- while (num_online_cpus() > 1 && timeout--)
+ while (atomic_read(&stop_cpus_count) > 0 && timeout--)
udelay(1);
}

/* if the REBOOT_VECTOR didn't work, try with the NMI */
- if (num_online_cpus() > 1) {
+ if (atomic_read(&stop_cpus_count) > 0) {
/*
* If NMI IPI is enabled, try to register the stop handler
* and send the IPI. In any case try to wait for the other
@@ -208,7 +211,7 @@ static void native_stop_other_cpus(int w
* one or more CPUs do not reach shutdown state.
*/
timeout = USEC_PER_MSEC * 10;
- while (num_online_cpus() > 1 && (wait || timeout--))
+ while (atomic_read(&stop_cpus_count) > 0 && (wait || timeout--))
udelay(1);
}

@@ -216,6 +219,12 @@ static void native_stop_other_cpus(int w
disable_local_APIC();
mcheck_cpu_clear(this_cpu_ptr(&cpu_info));
local_irq_restore(flags);
+
+ /*
+ * Ensure that the cache line is invalidated on the other CPUs. See
+ * comment vs. SME in stop_this_cpu().
+ */
+ atomic_set(&stop_cpus_count, INT_MAX);
}

/*

2023-04-26 17:53:34

by Tom Lendacky

Subject: Re: [PATCH RFC] x86/cpu: fix intermittent lockup on poweroff

On 4/26/23 12:37, Tony Battersby wrote:
> On 4/26/23 12:37, Thomas Gleixner wrote:
>> The problem really seems to be that the control CPU goes off before the
>> other CPUs have finished and depending on timing that causes the
>> wreckage. Otherwise the mdelay(100) would not have helped at all.
>>
>> But looking at it, that num_online_cpus() == 1 check in
>> stop_other_cpus() is fragile as hell independent of that wbinvd() issue.
>>
>> Something like the completely untested below should cure that.
>>
>> Thanks,
>>
>> tglx
>> ---
>> arch/x86/include/asm/cpu.h | 2 ++
>> arch/x86/kernel/process.c | 10 ++++++++++
>> arch/x86/kernel/smp.c | 15 ++++++++++++---
>> 3 files changed, 24 insertions(+), 3 deletions(-)
>>
>> --- a/arch/x86/include/asm/cpu.h
>> +++ b/arch/x86/include/asm/cpu.h
>> @@ -98,4 +98,6 @@ extern u64 x86_read_arch_cap_msr(void);
>> int intel_find_matching_signature(void *mc, unsigned int csig, int cpf);
>> int intel_microcode_sanity_check(void *mc, bool print_err, int hdr_type);
>>
>> +extern atomic_t stop_cpus_count;
>> +
>> #endif /* _ASM_X86_CPU_H */
>> --- a/arch/x86/kernel/process.c
>> +++ b/arch/x86/kernel/process.c
>> @@ -752,6 +752,8 @@ bool xen_set_default_idle(void)
>> }
>> #endif
>>
>> +atomic_t stop_cpus_count;
>> +
>> void __noreturn stop_this_cpu(void *dummy)
>> {
>> local_irq_disable();
>> @@ -776,6 +778,14 @@ void __noreturn stop_this_cpu(void *dumm
>> */
>> if (cpuid_eax(0x8000001f) & BIT(0))
>> native_wbinvd();
>> +
>> + /*
>> + * native_stop_other_cpus() will write to @stop_cpus_count after
>> + * observing that it went down to zero, which will invalidate the
>> + * cacheline on this CPU.
>> + */
>> + atomic_dec(&stop_cpus_count);

This is probably going to pull in a cache line and cause the problem the
native_wbinvd() is trying to avoid.

Thanks,
Tom

>> +
>> for (;;) {
>> /*
>> * Use native_halt() so that memory contents don't change
>> --- a/arch/x86/kernel/smp.c
>> +++ b/arch/x86/kernel/smp.c
>> @@ -27,6 +27,7 @@
>> #include <asm/mmu_context.h>
>> #include <asm/proto.h>
>> #include <asm/apic.h>
>> +#include <asm/cpu.h>
>> #include <asm/idtentry.h>
>> #include <asm/nmi.h>
>> #include <asm/mce.h>
>> @@ -171,6 +172,8 @@ static void native_stop_other_cpus(int w
>> if (atomic_cmpxchg(&stopping_cpu, -1, safe_smp_processor_id()) != -1)
>> return;
>>
>> + atomic_set(&stop_cpus_count, num_online_cpus() - 1);
>> +
>> /* sync above data before sending IRQ */
>> wmb();
>>
>> @@ -183,12 +186,12 @@ static void native_stop_other_cpus(int w
>> * CPUs reach shutdown state.
>> */
>> timeout = USEC_PER_SEC;
>> - while (num_online_cpus() > 1 && timeout--)
>> + while (atomic_read(&stop_cpus_count) > 0 && timeout--)
>> udelay(1);
>> }
>>
>> /* if the REBOOT_VECTOR didn't work, try with the NMI */
>> - if (num_online_cpus() > 1) {
>> + if (atomic_read(&stop_cpus_count) > 0) {
>> /*
>> * If NMI IPI is enabled, try to register the stop handler
>> * and send the IPI. In any case try to wait for the other
>> @@ -208,7 +211,7 @@ static void native_stop_other_cpus(int w
>> * one or more CPUs do not reach shutdown state.
>> */
>> timeout = USEC_PER_MSEC * 10;
>> - while (num_online_cpus() > 1 && (wait || timeout--))
>> + while (atomic_read(&stop_cpus_count) > 0 && (wait || timeout--))
>> udelay(1);
>> }
>>
>> @@ -216,6 +219,12 @@ static void native_stop_other_cpus(int w
>> disable_local_APIC();
>> mcheck_cpu_clear(this_cpu_ptr(&cpu_info));
>> local_irq_restore(flags);
>> +
>> + /*
>> + * Ensure that the cache line is invalidated on the other CPUs. See
>> + * comment vs. SME in stop_this_cpu().
>> + */
>> + atomic_set(&stop_cpus_count, INT_MAX);
>> }
>>
>> /*
>>
> Tested-by: Tony Battersby <[email protected]>
>
> 10 successful poweroffs in a row with wbinvd() enabled.  As I mentioned
> before though, I don't have an AMD CPU to test the SME cache
> invalidation logic.
>
> I will reply with my patch with an updated title and description.
>
> Tony
>
>

2023-04-26 18:10:57

by Tony Battersby

Subject: Re: [PATCH RFC] x86/cpu: fix intermittent lockup on poweroff

On 4/26/23 12:37, Thomas Gleixner wrote:
> The problem really seems to be that the control CPU goes off before the
> other CPUs have finished and depending on timing that causes the
> wreckage. Otherwise the mdelay(100) would not have helped at all.
>
> But looking at it, that num_online_cpus() == 1 check in
> stop_other_cpus() is fragile as hell independent of that wbinvd() issue.
>
> Something like the completely untested below should cure that.
>
> Thanks,
>
> tglx
> ---
> arch/x86/include/asm/cpu.h | 2 ++
> arch/x86/kernel/process.c | 10 ++++++++++
> arch/x86/kernel/smp.c | 15 ++++++++++++---
> 3 files changed, 24 insertions(+), 3 deletions(-)
>
> --- a/arch/x86/include/asm/cpu.h
> +++ b/arch/x86/include/asm/cpu.h
> @@ -98,4 +98,6 @@ extern u64 x86_read_arch_cap_msr(void);
> int intel_find_matching_signature(void *mc, unsigned int csig, int cpf);
> int intel_microcode_sanity_check(void *mc, bool print_err, int hdr_type);
>
> +extern atomic_t stop_cpus_count;
> +
> #endif /* _ASM_X86_CPU_H */
> --- a/arch/x86/kernel/process.c
> +++ b/arch/x86/kernel/process.c
> @@ -752,6 +752,8 @@ bool xen_set_default_idle(void)
> }
> #endif
>
> +atomic_t stop_cpus_count;
> +
> void __noreturn stop_this_cpu(void *dummy)
> {
> local_irq_disable();
> @@ -776,6 +778,14 @@ void __noreturn stop_this_cpu(void *dumm
> */
> if (cpuid_eax(0x8000001f) & BIT(0))
> native_wbinvd();
> +
> + /*
> + * native_stop_other_cpus() will write to @stop_cpus_count after
> + * observing that it went down to zero, which will invalidate the
> + * cacheline on this CPU.
> + */
> + atomic_dec(&stop_cpus_count);
> +
> for (;;) {
> /*
> * Use native_halt() so that memory contents don't change
> --- a/arch/x86/kernel/smp.c
> +++ b/arch/x86/kernel/smp.c
> @@ -27,6 +27,7 @@
> #include <asm/mmu_context.h>
> #include <asm/proto.h>
> #include <asm/apic.h>
> +#include <asm/cpu.h>
> #include <asm/idtentry.h>
> #include <asm/nmi.h>
> #include <asm/mce.h>
> @@ -171,6 +172,8 @@ static void native_stop_other_cpus(int w
> if (atomic_cmpxchg(&stopping_cpu, -1, safe_smp_processor_id()) != -1)
> return;
>
> + atomic_set(&stop_cpus_count, num_online_cpus() - 1);
> +
> /* sync above data before sending IRQ */
> wmb();
>
> @@ -183,12 +186,12 @@ static void native_stop_other_cpus(int w
> * CPUs reach shutdown state.
> */
> timeout = USEC_PER_SEC;
> - while (num_online_cpus() > 1 && timeout--)
> + while (atomic_read(&stop_cpus_count) > 0 && timeout--)
> udelay(1);
> }
>
> /* if the REBOOT_VECTOR didn't work, try with the NMI */
> - if (num_online_cpus() > 1) {
> + if (atomic_read(&stop_cpus_count) > 0) {
> /*
> * If NMI IPI is enabled, try to register the stop handler
> * and send the IPI. In any case try to wait for the other
> @@ -208,7 +211,7 @@ static void native_stop_other_cpus(int w
> * one or more CPUs do not reach shutdown state.
> */
> timeout = USEC_PER_MSEC * 10;
> - while (num_online_cpus() > 1 && (wait || timeout--))
> + while (atomic_read(&stop_cpus_count) > 0 && (wait || timeout--))
> udelay(1);
> }
>
> @@ -216,6 +219,12 @@ static void native_stop_other_cpus(int w
> disable_local_APIC();
> mcheck_cpu_clear(this_cpu_ptr(&cpu_info));
> local_irq_restore(flags);
> +
> + /*
> + * Ensure that the cache line is invalidated on the other CPUs. See
> + * comment vs. SME in stop_this_cpu().
> + */
> + atomic_set(&stop_cpus_count, INT_MAX);
> }
>
> /*
>
Tested-by: Tony Battersby <[email protected]>

10 successful poweroffs in a row with wbinvd() enabled.  As I mentioned
before though, I don't have an AMD CPU to test the SME cache
invalidation logic.

I will reply with my patch with an updated title and description.

Tony


2023-04-26 18:20:03

by Dave Hansen

Subject: Re: [PATCH RFC] x86/cpu: fix intermittent lockup on poweroff

On 4/26/23 10:51, Tom Lendacky wrote:
>>> +    /*
>>> +     * native_stop_other_cpus() will write to @stop_cpus_count after
>>> +     * observing that it went down to zero, which will invalidate the
>>> +     * cacheline on this CPU.
>>> +     */
>>> +    atomic_dec(&stop_cpus_count);
>
> This is probably going to pull in a cache line and cause the problem the
> native_wbinvd() is trying to avoid.

Is one _more_ cacheline really the problem?

Or is having _any_ cacheline pulled in a problem? What about the text
page containing the WBINVD? How about all the page table pages that are
needed to resolve %RIP to a physical address?

What about the mds_idle_clear_cpu_buffers() code that snuck into
native_halt()?

> ffffffff810ede4c: 0f 09 wbinvd
> ffffffff810ede4e: 8b 05 e4 3b a7 02 mov 0x2a73be4(%rip),%eax # ffffffff83b61a38 <mds_idle_clear>
> ffffffff810ede54: 85 c0 test %eax,%eax
> ffffffff810ede56: 7e 07 jle ffffffff810ede5f <stop_this_cpu+0x9f>
> ffffffff810ede58: 0f 00 2d b1 75 13 01 verw 0x11375b1(%rip) # ffffffff82225410 <ds.6688>
> ffffffff810ede5f: f4 hlt
> ffffffff810ede60: eb ec jmp ffffffff810ede4e <stop_this_cpu+0x8e>
> ffffffff810ede62: e8 59 40 1a 00 callq ffffffff81291ec0 <trace_hardirqs_off>
> ffffffff810ede67: eb 85 jmp ffffffff810eddee <stop_this_cpu+0x2e>
> ffffffff810ede69: 0f 1f 80 00 00 00 00 nopl 0x0(%rax)

2023-04-26 19:22:51

by Tom Lendacky

Subject: Re: [PATCH RFC] x86/cpu: fix intermittent lockup on poweroff



On 4/26/23 13:15, Dave Hansen wrote:
> On 4/26/23 10:51, Tom Lendacky wrote:
>>>> +    /*
>>>> +     * native_stop_other_cpus() will write to @stop_cpus_count after
>>>> +     * observing that it went down to zero, which will invalidate the
>>>> +     * cacheline on this CPU.
>>>> +     */
>>>> +    atomic_dec(&stop_cpus_count);
>>
>> This is probably going to pull in a cache line and cause the problem the
>> native_wbinvd() is trying to avoid.
>
> Is one _more_ cacheline really the problem?

The answer is it depends. If the cacheline ends up modified/dirty, then it
can be a problem.

>
> Or is having _any_ cacheline pulled in a problem? What about the text
> page containing the WBINVD? How about all the page table pages that are
> needed to resolve %RIP to a physical address?

It's been a while since I looked into all this, but text and page table
pages didn't present any problems because they weren't modified; stack
memory, however, was. Doing a plain wbinvd() resulted in calls to the
paravirt support, and stack data from the call to wbinvd() ended up in some
page structs in the kexec kernel (applicable to Zen 1 and Zen 2). Using
native_wbinvd() eliminated the stack data changes after the WBINVD and
didn't end up with any corruption following a kexec.
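
For reference, a simplified sketch of the distinction (close to, but not a
verbatim copy of, the kernel's special_insns.h): the native variant is the
bare instruction with no call involved, while the plain wbinvd() wrapper
under CONFIG_PARAVIRT goes through an indirect pv_ops call, and that call
sequence is what left modified stack cache lines behind:

static __always_inline void native_wbinvd(void)
{
        /* A single instruction; nothing gets pushed to (or dirties) the stack. */
        asm volatile("wbinvd" : : : "memory");
}
/* By contrast, wbinvd() with CONFIG_PARAVIRT expands to an indirect call
 * into pv_ops, so the call/return itself touches stack memory after the
 * cache flush has already run. */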

>
> What about the mds_idle_clear_cpu_buffers() code that snuck into
> native_halt()?

Luckily that is all inline and using a static branch which isn't enabled
for AMD and should just jmp to the hlt, so no modified cache lines.

Thanks,
Tom

>
>> ffffffff810ede4c: 0f 09 wbinvd
>> ffffffff810ede4e: 8b 05 e4 3b a7 02 mov 0x2a73be4(%rip),%eax # ffffffff83b61a38 <mds_idle_clear>
>> ffffffff810ede54: 85 c0 test %eax,%eax
>> ffffffff810ede56: 7e 07 jle ffffffff810ede5f <stop_this_cpu+0x9f>
>> ffffffff810ede58: 0f 00 2d b1 75 13 01 verw 0x11375b1(%rip) # ffffffff82225410 <ds.6688>
>> ffffffff810ede5f: f4 hlt
>> ffffffff810ede60: eb ec jmp ffffffff810ede4e <stop_this_cpu+0x8e>
>> ffffffff810ede62: e8 59 40 1a 00 callq ffffffff81291ec0 <trace_hardirqs_off>
>> ffffffff810ede67: eb 85 jmp ffffffff810eddee <stop_this_cpu+0x2e>
>> ffffffff810ede69: 0f 1f 80 00 00 00 00 nopl 0x0(%rax)
>

2023-04-26 20:04:01

by Thomas Gleixner

Subject: Re: [PATCH RFC] x86/cpu: fix intermittent lockup on poweroff

On Wed, Apr 26 2023 at 12:51, Tom Lendacky wrote:
> On 4/26/23 12:37, Tony Battersby wrote:
>>> + /*
>>> + * native_stop_other_cpus() will write to @stop_cpus_count after
>>> + * observing that it went down to zero, which will invalidate the
>>> + * cacheline on this CPU.
>>> + */
>>> + atomic_dec(&stop_cpus_count);
>
> This is probably going to pull in a cache line and cause the problem the
> native_wbinvd() is trying to avoid.

The comment above this atomic_dec() explains why this is _not_ a
problem. Here is the counterpart in native_stop_other_cpus():

>>> @@ -216,6 +219,12 @@ static void native_stop_other_cpus(int w
>>> disable_local_APIC();
>>> mcheck_cpu_clear(this_cpu_ptr(&cpu_info));
>>> local_irq_restore(flags);
>>> +
>>> + /*
>>> + * Ensure that the cache line is invalidated on the other CPUs. See
>>> + * comment vs. SME in stop_this_cpu().
>>> + */
>>> + atomic_set(&stop_cpus_count, INT_MAX);

That happens _after_ all the other CPUs did the atomic_dec() as the
control CPU waits for it to become 0.

As this makes the cacheline exclusive on the control CPU the dirty
cacheline on the CPU which did the last atomic_dec() is invalidated.

As the atomic_dec() is obviously serialized via the lock prefix there
can be only one dirty copy on some other CPU at the time when the
control CPU writes to it.

After that the only dirty copy is on the control CPU, no?

Thanks,

tglx

2023-04-26 22:07:18

by Andi Kleen

Subject: Re: [PATCH RFC] x86/cpu: fix intermittent lockup on poweroff

> > > This is probably going to pull in a cache line and cause the problem the
> > > native_wbinvd() is trying to avoid.
> >
> > Is one _more_ cacheline really the problem?
>
> The answer is it depends. If the cacheline ends up modified/dirty, then it
> can be a problem.

I haven't followed this all in detail, but if any dirty cache line is a
problem, you probably need to make sure that any possible NMI user
(like perf or watchdogs) is disabled at this point; otherwise you could
still get NMIs here.

I don't think perf currently has a mechanism to do that other
than to offline the CPU.

Also there are of course machine checks and SMIs that could still happen,
but I guess there's nothing you could do about them.

-Andi

2023-04-26 23:28:55

by Thomas Gleixner

Subject: Re: [PATCH RFC] x86/cpu: fix intermittent lockup on poweroff

Andi!

On Wed, Apr 26 2023 at 15:02, Andi Kleen wrote:
>> > > This is probably going to pull in a cache line and cause the problem the
>> > > native_wbinvd() is trying to avoid.
>> >
>> > Is one _more_ cacheline really the problem?
>>
>> The answer is it depends. If the cacheline ends up modified/dirty, then it
>> can be a problem.
>
> I haven't followed this all in detail, but if any dirty cache line is a
> problem, you probably need to make sure that any possible NMI user
> (like perf or watchdogs) is disabled at this point; otherwise you could
> still get NMIs here.
>
> I don't think perf currently has a mechanism to do that other
> than to offline the CPU.

stop_this_cpu()
  disable_local_APIC()
    apic_soft_disable()
      clear_local_APIC()
        v = apic_read(APIC_LVTPC);
        apic_write(APIC_LVTPC, v | APIC_LVT_MASKED);

So after that point the PMU can't raise NMIs anymore which includes the
default perf based NMI watchdog, no?

External NMIs are a different problem, but they kinda fall into the same
category as:

> Also there are of course machine checks and SMIs that could still happen,
> but I guess there's nothing you could do about them.

Though external NMIs could be disabled via outb(0x70,...) if paranoid
enough. Albeit if there is an external NMI based watchdog raising the
NMI during this kexec() scenario then the outcome is probably as
undefined as in the MCE case independent of the SEV dirty cacheline
concern.

Thanks,

tglx

Subject: [tip: x86/core] x86/smp: Make stop_other_cpus() more robust

The following commit has been merged into the x86/core branch of tip:

Commit-ID: 1f5e7eb7868e42227ac426c96d437117e6e06e8e
Gitweb: https://git.kernel.org/tip/1f5e7eb7868e42227ac426c96d437117e6e06e8e
Author: Thomas Gleixner <[email protected]>
AuthorDate: Wed, 26 Apr 2023 18:37:00 +02:00
Committer: Thomas Gleixner <[email protected]>
CommitterDate: Tue, 20 Jun 2023 14:51:46 +02:00

x86/smp: Make stop_other_cpus() more robust

Tony reported intermittent lockups on poweroff. His analysis identified the
wbinvd() in stop_this_cpu() as the culprit. This was added to ensure that
on SME enabled machines a kexec() does not leave any stale data in the
caches when switching from encrypted to non-encrypted mode or vice versa.

That wbinvd() is conditional on the SME feature bit which is read directly
from CPUID. But that readout does not check whether the CPUID leaf is
available or not. If it's not available the CPU will return the value of
the highest supported leaf instead. Depending on the content the "SME" bit
might be set or not.

That's incorrect but harmless. Making the CPUID readout conditional makes
the observed hangs go away, but it does not fix the underlying problem:

    CPU0                                    CPU1

     stop_other_cpus()
       send_IPIs(REBOOT);                   stop_this_cpu()
       while (num_online_cpus() > 1);         set_online(false);
       proceed... -> hang
                                              wbinvd()

WBINVD is an expensive operation and if multiple CPUs issue it at the same
time the resulting delays are even larger.

But CPU0 already observed num_online_cpus() going down to 1 and proceeds
which causes the system to hang.

This issue exists independent of WBINVD, but the delays caused by WBINVD
make it more prominent.

Make this more robust by adding a cpumask which is initialized to the
online CPU mask before sending the IPIs and CPUs clear their bit in
stop_this_cpu() after the WBINVD completed. Check for that cpumask to
become empty in stop_other_cpus() instead of watching num_online_cpus().

The cpumask cannot plug all holes either, but it's better than a raw
counter and allows restricting the NMI fallback IPI to be sent only to the
CPUs which have not reported within the timeout window.

Fixes: 08f253ec3767 ("x86/cpu: Clear SME feature flag when not in use")
Reported-by: Tony Battersby <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Borislav Petkov (AMD) <[email protected]>
Reviewed-by: Ashok Raj <[email protected]>
Cc: [email protected]
Link: https://lore.kernel.org/all/[email protected]
Link: https://lore.kernel.org/r/87h6r770bv.ffs@tglx

---
arch/x86/include/asm/cpu.h | 2 +-
arch/x86/kernel/process.c | 23 ++++++++++++--
arch/x86/kernel/smp.c | 62 ++++++++++++++++++++++++-------------
3 files changed, 64 insertions(+), 23 deletions(-)

diff --git a/arch/x86/include/asm/cpu.h b/arch/x86/include/asm/cpu.h
index 78796b9..9ba3c3d 100644
--- a/arch/x86/include/asm/cpu.h
+++ b/arch/x86/include/asm/cpu.h
@@ -98,4 +98,6 @@ extern u64 x86_read_arch_cap_msr(void);
int intel_find_matching_signature(void *mc, unsigned int csig, int cpf);
int intel_microcode_sanity_check(void *mc, bool print_err, int hdr_type);

+extern struct cpumask cpus_stop_mask;
+
#endif /* _ASM_X86_CPU_H */
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index dac41a0..05924bc 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -759,13 +759,23 @@ bool xen_set_default_idle(void)
}
#endif

+struct cpumask cpus_stop_mask;
+
void __noreturn stop_this_cpu(void *dummy)
{
+ unsigned int cpu = smp_processor_id();
+
local_irq_disable();
+
/*
- * Remove this CPU:
+ * Remove this CPU from the online mask and disable it
+ * unconditionally. This might be redundant in case that the reboot
+ * vector was handled late and stop_other_cpus() sent an NMI.
+ *
+ * According to SDM and APM NMIs can be accepted even after soft
+ * disabling the local APIC.
*/
- set_cpu_online(smp_processor_id(), false);
+ set_cpu_online(cpu, false);
disable_local_APIC();
mcheck_cpu_clear(this_cpu_ptr(&cpu_info));

@@ -783,6 +793,15 @@ void __noreturn stop_this_cpu(void *dummy)
*/
if (cpuid_eax(0x8000001f) & BIT(0))
native_wbinvd();
+
+ /*
+ * This brings a cache line back and dirties it, but
+ * native_stop_other_cpus() will overwrite cpus_stop_mask after it
+ * observed that all CPUs reported stop. This write will invalidate
+ * the related cache line on this CPU.
+ */
+ cpumask_clear_cpu(cpu, &cpus_stop_mask);
+
for (;;) {
/*
* Use native_halt() so that memory contents don't change
diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
index 375b33e..935bc65 100644
--- a/arch/x86/kernel/smp.c
+++ b/arch/x86/kernel/smp.c
@@ -27,6 +27,7 @@
#include <asm/mmu_context.h>
#include <asm/proto.h>
#include <asm/apic.h>
+#include <asm/cpu.h>
#include <asm/idtentry.h>
#include <asm/nmi.h>
#include <asm/mce.h>
@@ -146,31 +147,43 @@ static int register_stop_handler(void)

static void native_stop_other_cpus(int wait)
{
- unsigned long flags;
- unsigned long timeout;
+ unsigned int cpu = smp_processor_id();
+ unsigned long flags, timeout;

if (reboot_force)
return;

- /*
- * Use an own vector here because smp_call_function
- * does lots of things not suitable in a panic situation.
- */
+ /* Only proceed if this is the first CPU to reach this code */
+ if (atomic_cmpxchg(&stopping_cpu, -1, cpu) != -1)
+ return;

/*
- * We start by using the REBOOT_VECTOR irq.
- * The irq is treated as a sync point to allow critical
- * regions of code on other cpus to release their spin locks
- * and re-enable irqs. Jumping straight to an NMI might
- * accidentally cause deadlocks with further shutdown/panic
- * code. By syncing, we give the cpus up to one second to
- * finish their work before we force them off with the NMI.
+ * 1) Send an IPI on the reboot vector to all other CPUs.
+ *
+ * The other CPUs should react on it after leaving critical
+ * sections and re-enabling interrupts. They might still hold
+ * locks, but there is nothing which can be done about that.
+ *
+ * 2) Wait for all other CPUs to report that they reached the
+ * HLT loop in stop_this_cpu()
+ *
+ * 3) If #2 timed out send an NMI to the CPUs which did not
+ * yet report
+ *
+ * 4) Wait for all other CPUs to report that they reached the
+ * HLT loop in stop_this_cpu()
+ *
+ * #3 can obviously race against a CPU reaching the HLT loop late.
+ * That CPU will have reported already and the "have all CPUs
+ * reached HLT" condition will be true despite the fact that the
+ * other CPU is still handling the NMI. Again, there is no
+ * protection against that as "disabled" APICs still respond to
+ * NMIs.
*/
- if (num_online_cpus() > 1) {
- /* did someone beat us here? */
- if (atomic_cmpxchg(&stopping_cpu, -1, safe_smp_processor_id()) != -1)
- return;
+ cpumask_copy(&cpus_stop_mask, cpu_online_mask);
+ cpumask_clear_cpu(cpu, &cpus_stop_mask);

+ if (!cpumask_empty(&cpus_stop_mask)) {
/* sync above data before sending IRQ */
wmb();

@@ -183,12 +196,12 @@ static void native_stop_other_cpus(int wait)
* CPUs reach shutdown state.
*/
timeout = USEC_PER_SEC;
- while (num_online_cpus() > 1 && timeout--)
+ while (!cpumask_empty(&cpus_stop_mask) && timeout--)
udelay(1);
}

/* if the REBOOT_VECTOR didn't work, try with the NMI */
- if (num_online_cpus() > 1) {
+ if (!cpumask_empty(&cpus_stop_mask)) {
/*
* If NMI IPI is enabled, try to register the stop handler
* and send the IPI. In any case try to wait for the other
@@ -200,7 +213,8 @@ static void native_stop_other_cpus(int wait)

pr_emerg("Shutting down cpus with NMI\n");

- apic_send_IPI_allbutself(NMI_VECTOR);
+ for_each_cpu(cpu, &cpus_stop_mask)
+ apic->send_IPI(cpu, NMI_VECTOR);
}
/*
* Don't wait longer than 10 ms if the caller didn't
@@ -208,7 +222,7 @@ static void native_stop_other_cpus(int wait)
* one or more CPUs do not reach shutdown state.
*/
timeout = USEC_PER_MSEC * 10;
- while (num_online_cpus() > 1 && (wait || timeout--))
+ while (!cpumask_empty(&cpus_stop_mask) && (wait || timeout--))
udelay(1);
}

@@ -216,6 +230,12 @@ static void native_stop_other_cpus(int wait)
disable_local_APIC();
mcheck_cpu_clear(this_cpu_ptr(&cpu_info));
local_irq_restore(flags);
+
+ /*
+ * Ensure that the cpus_stop_mask cache lines are invalidated on
+ * the other CPUs. See comment vs. SME in stop_this_cpu().
+ */
+ cpumask_clear(&cpus_stop_mask);
}

/*

Subject: [tip: x86/core] x86/smp: Dont access non-existing CPUID leaf

The following commit has been merged into the x86/core branch of tip:

Commit-ID: 9b040453d4440659f33dc6f0aa26af418ebfe70b
Gitweb: https://git.kernel.org/tip/9b040453d4440659f33dc6f0aa26af418ebfe70b
Author: Tony Battersby <[email protected]>
AuthorDate: Thu, 15 Jun 2023 22:33:52 +02:00
Committer: Thomas Gleixner <[email protected]>
CommitterDate: Tue, 20 Jun 2023 14:51:46 +02:00

x86/smp: Dont access non-existing CPUID leaf

stop_this_cpu() tests CPUID leaf 0x8000001f::EAX unconditionally. Intel
CPUs return the content of the highest supported leaf when a non-existing
leaf is read, while AMD CPUs return all zeros for unsupported leaves.

So the result of the test on Intel CPUs is a lottery.

While harmless it's incorrect and causes the conditional wbinvd() to be
issued where not required.

Check whether the leaf is supported before reading it.

[ tglx: Adjusted changelog ]

Fixes: 08f253ec3767 ("x86/cpu: Clear SME feature flag when not in use")
Signed-off-by: Tony Battersby <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Mario Limonciello <[email protected]>
Reviewed-by: Borislav Petkov (AMD) <[email protected]>
Cc: [email protected]
Link: https://lore.kernel.org/r/[email protected]
Link: https://lore.kernel.org/r/[email protected]

---
arch/x86/kernel/process.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 05924bc..ff9b80a 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -763,6 +763,7 @@ struct cpumask cpus_stop_mask;

void __noreturn stop_this_cpu(void *dummy)
{
+ struct cpuinfo_x86 *c = this_cpu_ptr(&cpu_info);
unsigned int cpu = smp_processor_id();

local_irq_disable();
@@ -777,7 +778,7 @@ void __noreturn stop_this_cpu(void *dummy)
*/
set_cpu_online(cpu, false);
disable_local_APIC();
- mcheck_cpu_clear(this_cpu_ptr(&cpu_info));
+ mcheck_cpu_clear(c);

/*
* Use wbinvd on processors that support SME. This provides support
@@ -791,7 +792,7 @@ void __noreturn stop_this_cpu(void *dummy)
* Test the CPUID bit directly because the machine might've cleared
* X86_FEATURE_SME due to cmdline options.
*/
- if (cpuid_eax(0x8000001f) & BIT(0))
+ if (c->extended_cpuid_level >= 0x8000001f && (cpuid_eax(0x8000001f) & BIT(0)))
native_wbinvd();

/*