2020-03-20 19:44:06

by Wei Huang

Subject: [PATCH V2 1/1] x86/mce/amd: Add PPIN support for AMD MCE

Newer AMD CPUs support a feature called protected processor identification
number (PPIN). This feature can be detected via CPUID_Fn80000008_EBX[23].
However, CPUID alone is not enough to read the processor serial number:
MSR_AMD_PPIN_CTL also needs to be configured properly. If for any reason
MSR_AMD_PPIN_CTL[PPIN_EN] cannot be turned on, such as when it is disabled
in BIOS, the X86_FEATURE_AMD_PPIN capability bit has to be cleared.

When the X86_FEATURE_AMD_PPIN capability is available, MCE can read the
serial number to keep track of the source of MCE errors.

Co-developed-by: Smita Koralahalli Channabasappa <[email protected]>
Signed-off-by: Smita Koralahalli Channabasappa <[email protected]>
Signed-off-by: Wei Huang <[email protected]>
Acked-by: Tony Luck <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Yazen Ghannam <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: linux-edac <[email protected]>
Cc: x86-ml <[email protected]>
---
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/kernel/cpu/amd.c | 26 ++++++++++++++++++++++++++
arch/x86/kernel/cpu/mce/core.c | 2 ++
3 files changed, 29 insertions(+)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index f3327cb56edf..4b263ffb793b 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -299,6 +299,7 @@
#define X86_FEATURE_AMD_IBRS (13*32+14) /* "" Indirect Branch Restricted Speculation */
#define X86_FEATURE_AMD_STIBP (13*32+15) /* "" Single Thread Indirect Branch Predictors */
#define X86_FEATURE_AMD_STIBP_ALWAYS_ON (13*32+17) /* "" Single Thread Indirect Branch Predictors always-on preferred */
+#define X86_FEATURE_AMD_PPIN (13*32+23) /* Protected Processor Inventory Number */
#define X86_FEATURE_AMD_SSBD (13*32+24) /* "" Speculative Store Bypass Disable */
#define X86_FEATURE_VIRT_SSBD (13*32+25) /* Virtualized Speculative Store Bypass Disable */
#define X86_FEATURE_AMD_SSB_NO (13*32+26) /* "" Speculative Store Bypass is fixed in hardware. */
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index 1f875fbe1384..9176db4be69b 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -394,6 +394,31 @@ static void amd_detect_cmp(struct cpuinfo_x86 *c)
per_cpu(cpu_llc_id, cpu) = c->phys_proc_id;
}

+static void amd_detect_ppin(struct cpuinfo_x86 *c)
+{
+ unsigned long long val;
+
+ if (cpu_has(c, X86_FEATURE_AMD_PPIN)) {
+ /* Turn off now until MSR is properly configured */
+ clear_cpu_cap(c, X86_FEATURE_AMD_PPIN);
+
+ if (rdmsrl_safe(MSR_AMD_PPIN_CTL, &val))
+ return;
+
+ if ((val & 3UL) == 1UL)
+ return;
+
+ if (!(val & 2UL)) {
+ wrmsrl_safe(MSR_AMD_PPIN_CTL, val | 2UL);
+ rdmsrl_safe(MSR_AMD_PPIN_CTL, &val);
+ }
+
+ /* MSR_AMD_PPIN_CTL[PPIN_EN] bit is 1, turn feature back on */
+ if (val & 2UL)
+ set_cpu_cap(c, X86_FEATURE_AMD_PPIN);
+ }
+}
+
u16 amd_get_nb_id(int cpu)
{
return per_cpu(cpu_llc_id, cpu);
@@ -941,6 +966,7 @@ static void init_amd(struct cpuinfo_x86 *c)
amd_detect_cmp(c);
amd_get_topology(c);
srat_detect_node(c);
+ amd_detect_ppin(c);

init_amd_cacheinfo(c);

diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 2c4f949611e4..57347e899575 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -142,6 +142,8 @@ void mce_setup(struct mce *m)

if (this_cpu_has(X86_FEATURE_INTEL_PPIN))
rdmsrl(MSR_PPIN, m->ppin);
+ else if (this_cpu_has(X86_FEATURE_AMD_PPIN))
+ rdmsrl(MSR_AMD_PPIN, m->ppin);

m->microcode = boot_cpu_data.microcode;
}
--
2.24.1
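[Editor's note: the 1UL/2UL masks in amd_detect_ppin() are the two low bits of MSR_AMD_PPIN_CTL; the bit names below (LockOut, PPIN_EN) are assumed from the Intel analogue, since the patch itself only uses raw masks. A minimal userspace model of the decision the function makes:]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Assumed bit names; the patch only uses the raw 1UL/2UL masks. */
#define PPIN_CTL_LOCKOUT 0x1ULL	/* bit 0: further writes to the MSR are ignored */
#define PPIN_CTL_PPIN_EN 0x2ULL	/* bit 1: MSR_AMD_PPIN may be read              */

/*
 * Given the initial MSR_AMD_PPIN_CTL value, decide whether PPIN ends up
 * readable, mirroring the checks in amd_detect_ppin():
 *  - locked with enable clear -> permanently off, give up
 *  - enable clear, not locked -> set PPIN_EN (models the wrmsrl_safe() step)
 *  - enable set               -> PPIN usable
 */
static bool ppin_usable(uint64_t ctl)
{
	if ((ctl & (PPIN_CTL_LOCKOUT | PPIN_CTL_PPIN_EN)) == PPIN_CTL_LOCKOUT)
		return false;

	if (!(ctl & PPIN_CTL_PPIN_EN))
		ctl |= PPIN_CTL_PPIN_EN;

	return (ctl & PPIN_CTL_PPIN_EN) != 0;
}
```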


2020-03-20 21:11:46

by Tony Luck

Subject: RE: [PATCH V2 1/1] x86/mce/amd: Add PPIN support for AMD MCE

Acked-by: Tony Luck <[email protected]>

Still true for this new version.

-Tony

2020-03-21 09:39:41

by Borislav Petkov

Subject: Re: [PATCH V2 1/1] x86/mce/amd: Add PPIN support for AMD MCE

On Fri, Mar 20, 2020 at 02:43:05PM -0500, Wei Huang wrote:
> Newer AMD CPUs support a feature called protected processor identification
> number (PPIN). This feature can be detected via CPUID_Fn80000008_EBX[23].
> However, CPUID alone is not enough to read the processor serial number:
> MSR_AMD_PPIN_CTL also needs to be configured properly. If for any reason
> MSR_AMD_PPIN_CTL[PPIN_EN] cannot be turned on, such as when it is disabled
> in BIOS, the X86_FEATURE_AMD_PPIN capability bit has to be cleared.
>
> When the X86_FEATURE_AMD_PPIN capability is available, MCE can read the
> serial number to keep track of the source of MCE errors.
>
> Co-developed-by: Smita Koralahalli Channabasappa <[email protected]>
> Signed-off-by: Smita Koralahalli Channabasappa <[email protected]>
> Signed-off-by: Wei Huang <[email protected]>
> Acked-by: Tony Luck <[email protected]>
> Cc: Borislav Petkov <[email protected]>
> Cc: Yazen Ghannam <[email protected]>
> Cc: Thomas Gleixner <[email protected]>
> Cc: Ingo Molnar <[email protected]>
> Cc: "H. Peter Anvin" <[email protected]>
> Cc: linux-edac <[email protected]>
> Cc: x86-ml <[email protected]>
> ---
> arch/x86/include/asm/cpufeatures.h | 1 +
> arch/x86/kernel/cpu/amd.c | 26 ++++++++++++++++++++++++++
> arch/x86/kernel/cpu/mce/core.c | 2 ++
> 3 files changed, 29 insertions(+)
>
> diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
> index f3327cb56edf..4b263ffb793b 100644
> --- a/arch/x86/include/asm/cpufeatures.h
> +++ b/arch/x86/include/asm/cpufeatures.h
> @@ -299,6 +299,7 @@
> #define X86_FEATURE_AMD_IBRS (13*32+14) /* "" Indirect Branch Restricted Speculation */
> #define X86_FEATURE_AMD_STIBP (13*32+15) /* "" Single Thread Indirect Branch Predictors */
> #define X86_FEATURE_AMD_STIBP_ALWAYS_ON (13*32+17) /* "" Single Thread Indirect Branch Predictors always-on preferred */
> +#define X86_FEATURE_AMD_PPIN (13*32+23) /* Protected Processor Inventory Number */
> #define X86_FEATURE_AMD_SSBD (13*32+24) /* "" Speculative Store Bypass Disable */
> #define X86_FEATURE_VIRT_SSBD (13*32+25) /* Virtualized Speculative Store Bypass Disable */
> #define X86_FEATURE_AMD_SSB_NO (13*32+26) /* "" Speculative Store Bypass is fixed in hardware. */
> diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
> index 1f875fbe1384..9176db4be69b 100644
> --- a/arch/x86/kernel/cpu/amd.c
> +++ b/arch/x86/kernel/cpu/amd.c
> @@ -394,6 +394,31 @@ static void amd_detect_cmp(struct cpuinfo_x86 *c)
> per_cpu(cpu_llc_id, cpu) = c->phys_proc_id;
> }
>
> +static void amd_detect_ppin(struct cpuinfo_x86 *c)
> +{
> + unsigned long long val;
> +
> + if (cpu_has(c, X86_FEATURE_AMD_PPIN)) {

Flip check:

if (!cpu_has...
return;

> + /* Turn off now until MSR is properly configured */
> + clear_cpu_cap(c, X86_FEATURE_AMD_PPIN);

What for? You can make the final decision at the end, once, instead of
toggling that bit here.

> +
> + if (rdmsrl_safe(MSR_AMD_PPIN_CTL, &val))
> + return;
> +
> + if ((val & 3UL) == 1UL)
> + return;
> +
> + if (!(val & 2UL)) {
> + wrmsrl_safe(MSR_AMD_PPIN_CTL, val | 2UL);
> + rdmsrl_safe(MSR_AMD_PPIN_CTL, &val);
> + }

Those need comments explaining what they do, like in the Intel variant.

> +
> + /* MSR_AMD_PPIN_CTL[PPIN_EN] bit is 1, turn feature back on */
> + if (val & 2UL)
> + set_cpu_cap(c, X86_FEATURE_AMD_PPIN);

No, keep the feature bit set and clear it only once you determine it
should be cleared, instead of clearing and setting it again.
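[Editor's note: folding the three review comments into one flow, a possible reshaped amd_detect_ppin() is sketched below. To keep the sketch compilable outside the kernel, rdmsrl_safe()/wrmsrl_safe() and the cap-bit helpers are replaced by a fake MSR and a bool return (an assumption; a real V3 would use the kernel helpers and clear_cpu_cap()):]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Fake MSR standing in for MSR_AMD_PPIN_CTL; bit 0 locks out writes. */
static uint64_t ppin_ctl_msr;

static int rdmsrl_model(uint64_t *val) { *val = ppin_ctl_msr; return 0; }

static void wrmsrl_model(uint64_t val)
{
	if (!(ppin_ctl_msr & 1ULL))	/* locked: write silently ignored */
		ppin_ctl_msr = val;
}

/*
 * Restructured per the review: bail out early when the CPUID bit is
 * absent, keep the feature bit set on success, and fail exactly once
 * on any error path (returning false models clear_cpu_cap()).
 */
static bool amd_detect_ppin_model(bool cpuid_has_ppin)
{
	uint64_t val;

	if (!cpuid_has_ppin)
		return false;

	if (rdmsrl_model(&val))
		return false;		/* MSR read faulted */

	if ((val & 3ULL) == 1ULL)
		return false;		/* PPIN locked in disabled mode */

	if (!(val & 2ULL)) {		/* PPIN_EN clear: try enabling it */
		wrmsrl_model(val | 2ULL);
		rdmsrl_model(&val);
	}

	return (val & 2ULL) != 0;	/* enabled: keep X86_FEATURE_AMD_PPIN */
}
```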

Thx.

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette