2023-06-22 10:18:39

by Borislav Petkov

Subject: [PATCH] x86/barrier: Do not serialize MSR accesses on AMD

From: "Borislav Petkov (AMD)" <[email protected]>

AMD does not have the requirement for a synchronization barrier when
accessing a certain group of MSRs. Do not incur that unnecessary
penalty there.

While at it, move to processor.h to avoid include hell. Untangling that
file properly is a matter for another day.

Some notes on the performance aspect of why this is relevant, courtesy
of Kishon VijayAbraham <[email protected]>:

On an AMD Zen4 system with 96 cores, a modified ipi-bench[1] run in a VM
shows the x2AVIC IPI rate is 3% to 4% lower than the AVIC IPI rate. The
ipi-bench is modified so that the IPIs are sent between two vCPUs in the
same CCX. This also requires pinning each vCPU to a physical core to
prevent any latencies. This simulates the use case of pinning vCPUs to
the threads of a single CCX to avoid interrupt IPI latency.

In order to avoid run-to-run variance (for both x2AVIC and AVIC), the
following configuration is applied:

1) Disable Power States in the BIOS (to prevent the system from going to
a lower power state)

2) Run the system at a fixed frequency of 2500 MHz (to prevent the system
from raising the frequency when the load increases)

With the above configuration:

*) Performance measured using ipi-bench for AVIC:
Average Latency: 1124.98ns [Time to send IPI from one vCPU to another vCPU]

Cumulative throughput: 42.6759M/s [Total number of IPIs sent in a second from
48 vCPUs simultaneously]

*) Performance measured using ipi-bench for x2AVIC:
Average Latency: 1172.42ns [Time to send IPI from one vCPU to another vCPU]

Cumulative throughput: 40.9432M/s [Total number of IPIs sent in a second from
48 vCPUs simultaneously]

From the above, x2AVIC latency is ~4% higher than AVIC. However, x2AVIC
performance is expected to be better than or equivalent to AVIC. Upon
analyzing the perf captures, it is observed that significant time is
spent in weak_wrmsr_fence() invoked by x2apic_send_IPI().

With the fix to skip weak_wrmsr_fence():

*) Performance measured using ipi-bench for x2AVIC:
Average Latency: 1117.44ns [Time to send IPI from one vCPU to another vCPU]

Cumulative throughput: 42.9608M/s [Total number of IPIs sent in a second from
48 vCPUs simultaneously]

Comparing the performance of x2AVIC with and without the fix, it can be
seen that the performance improves by ~4%.

Performance captured using an unmodified ipi-bench with the 'mesh-ipi'
option, with and without weak_wrmsr_fence(), on a Zen4 system also showed
a significant performance improvement without weak_wrmsr_fence(). The
'mesh-ipi' option ignores CCX or CCD boundaries and just picks random
vCPUs.

Average throughput (10 iterations) with weak_wrmsr_fence(),
Cumulative throughput: 4933374 IPI/s

Average throughput (10 iterations) without weak_wrmsr_fence(),
Cumulative throughput: 6355156 IPI/s

[1] https://github.com/bytedance/kvm-utils/tree/master/microbenchmark/ipi-bench

Signed-off-by: Borislav Petkov (AMD) <[email protected]>
---
arch/x86/include/asm/barrier.h | 18 ------------------
arch/x86/include/asm/processor.h | 19 +++++++++++++++++++
2 files changed, 19 insertions(+), 18 deletions(-)

diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
index 35389b2af88e..0216f63a366b 100644
--- a/arch/x86/include/asm/barrier.h
+++ b/arch/x86/include/asm/barrier.h
@@ -81,22 +81,4 @@ do { \

#include <asm-generic/barrier.h>

-/*
- * Make previous memory operations globally visible before
- * a WRMSR.
- *
- * MFENCE makes writes visible, but only affects load/store
- * instructions. WRMSR is unfortunately not a load/store
- * instruction and is unaffected by MFENCE. The LFENCE ensures
- * that the WRMSR is not reordered.
- *
- * Most WRMSRs are full serializing instructions themselves and
- * do not require this barrier. This is only required for the
- * IA32_TSC_DEADLINE and X2APIC MSRs.
- */
-static inline void weak_wrmsr_fence(void)
-{
- asm volatile("mfence; lfence" : : : "memory");
-}
-
#endif /* _ASM_X86_BARRIER_H */
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index b216ac80ebcc..983406342484 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -735,4 +735,23 @@ bool arch_is_platform_page(u64 paddr);
#define arch_is_platform_page arch_is_platform_page
#endif

+/*
+ * Make previous memory operations globally visible before
+ * a WRMSR.
+ *
+ * MFENCE makes writes visible, but only affects load/store
+ * instructions. WRMSR is unfortunately not a load/store
+ * instruction and is unaffected by MFENCE. The LFENCE ensures
+ * that the WRMSR is not reordered.
+ *
+ * Most WRMSRs are full serializing instructions themselves and
+ * do not require this barrier. This is only required for the
+ * IA32_TSC_DEADLINE and X2APIC MSRs.
+ */
+static inline void weak_wrmsr_fence(void)
+{
+ if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD)
+ asm volatile("mfence; lfence" : : : "memory");
+}
+
#endif /* _ASM_X86_PROCESSOR_H */
--
2.35.1



2023-07-03 13:22:42

by Peter Zijlstra

Subject: Re: [PATCH] x86/barrier: Do not serialize MSR accesses on AMD

On Thu, Jun 22, 2023 at 11:52:12AM +0200, Borislav Petkov wrote:
> From: "Borislav Petkov (AMD)" <[email protected]>
>
> AMD does not have the requirement for a synchronization barrier when
> accessing a certain group of MSRs. Do not incur that unnecessary
> penalty there.

So you're saying that AMD tsc_deadline and x2apic MSRs *do* imply
ordering constraints unlike the Intel ones?

Can we pls haz a document link for that, also a comment?

> Signed-off-by: Borislav Petkov (AMD) <[email protected]>
> ---
> arch/x86/include/asm/barrier.h | 18 ------------------
> arch/x86/include/asm/processor.h | 19 +++++++++++++++++++
> 2 files changed, 19 insertions(+), 18 deletions(-)

Moving this code while changing it meant I had to look at it _3_ times
before I spotted you changed it :/

> diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
> index b216ac80ebcc..983406342484 100644
> --- a/arch/x86/include/asm/processor.h
> +++ b/arch/x86/include/asm/processor.h
> @@ -735,4 +735,23 @@ bool arch_is_platform_page(u64 paddr);
> #define arch_is_platform_page arch_is_platform_page
> #endif
>
> +/*
> + * Make previous memory operations globally visible before
> + * a WRMSR.
> + *
> + * MFENCE makes writes visible, but only affects load/store
> + * instructions. WRMSR is unfortunately not a load/store
> + * instruction and is unaffected by MFENCE. The LFENCE ensures
> + * that the WRMSR is not reordered.
> + *
> + * Most WRMSRs are full serializing instructions themselves and
> + * do not require this barrier. This is only required for the
> + * IA32_TSC_DEADLINE and X2APIC MSRs.
> + */
> +static inline void weak_wrmsr_fence(void)
> +{
> + if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD)
> + asm volatile("mfence; lfence" : : : "memory");

Both instructions are 3 bytes, a 6 byte nop would be better, no?

asm volatile (ALTERNATIVE("mfence; lfence;", "", X86_FEATURE_AMD));

or something ?

> +}
> +
> #endif /* _ASM_X86_PROCESSOR_H */
> --
> 2.35.1
>

2023-07-04 07:53:08

by Borislav Petkov

Subject: Re: [PATCH] x86/barrier: Do not serialize MSR accesses on AMD

On Mon, Jul 03, 2023 at 02:54:19PM +0200, Peter Zijlstra wrote:
> So you're saying that AMD tsc_deadline and x2apic MSRs *do* imply
> ordering constraints unlike the Intel ones?

Yah, that's the default situation. Only those two - TSC_DEADLINE and the
x2APIC MSRs - and only on *Intel* - are special.

> Can we pls haz a document link for that, also a comment?

Why document the default? The SDM already documents this exception.
For everything else WRMSR is serializing.

> Moving this code while changing it meant I had to look at it _3_ times
> before I spotted you changed it :/

I figured it is a simple enough patch - no need for a separate,
movement-only one.

> Both instructions are 3 bytes, a 6 byte nop would be better, no?

Why? You wanna save the branch insn when sending IPIs through the
x2APIC? Does that really matter? I doubt it...

> asm volatile (ALTERNATIVE("mfence; lfence;", "", X86_FEATURE_AMD));

There's no X86_FEATURE_AMD :)

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2023-07-04 09:34:37

by Peter Zijlstra

Subject: Re: [PATCH] x86/barrier: Do not serialize MSR accesses on AMD

On Tue, Jul 04, 2023 at 09:46:31AM +0200, Borislav Petkov wrote:
> On Mon, Jul 03, 2023 at 02:54:19PM +0200, Peter Zijlstra wrote:
> > So you're saying that AMD tsc_deadline and x2apic MSRs *do* imply
> > ordering constraints unlike the Intel ones?
>
> Yah, that's the default situation. Only those two - TSC_DEADLINE and the
> x2APIC MSRs - and only on *Intel* - are special.

So they are normal MSRs like all the others? AMD doesn't have any
exceptions for MSRs, they're all the same?

> > Both instructions are 3 bytes, a 6 byte nop would be better, no?
>
> Why? You wanna save the branch insn when sending IPIs through the
> x2APIC? Does that really matter? I doubt it...

Dunno, code density, speculation, many reasons to avoid jumps :-)

> > asm volatile (ALTERNATIVE("mfence; lfence;", "", X86_FEATURE_AMD));
>
> There's no X86_FEATURE_AMD :)

I know, but that's easily fixed.

2023-07-04 09:34:43

by Borislav Petkov

Subject: Re: [PATCH] x86/barrier: Do not serialize MSR accesses on AMD

On Tue, Jul 04, 2023 at 11:01:32AM +0200, Peter Zijlstra wrote:
> So they are normal MSRs like all the others? AMD doesn't have any
> exceptions for MSRs, they're all the same?

Yap, as far as I know.

> Dunno, code density, speculation, many reasons to avoid jumps :-)

Looking at x2apic_send_IPI asm:

cmpb $2, boot_cpu_data+1(%rip) #, boot_cpu_data.x86_vendor
# arch/x86/kernel/apic/x2apic_phys.c:44: u32 dest = per_cpu(x86_cpu_to_apicid, cpu);
movq __per_cpu_offset(,%rdi,8), %rdx # __per_cpu_offset[cpu_7(D)], __per_cpu_offset[cpu_7(D)]
movzwl (%rdx,%rax), %edx # *_8,
# ./arch/x86/include/asm/processor.h:753: if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD)
je .L114 #,
# ./arch/x86/include/asm/processor.h:754: asm volatile("mfence; lfence" : : : "memory");
#APP
# 754 "./arch/x86/include/asm/processor.h" 1
mfence; lfence

So gcc already does mix unrelated insns so that they can all go in
parallel. So it is a

CMP RIP-relative
JE

So yeah, I guess, on the one hand we want to avoid conditional jumps
but, on the other, sprinkling alternatives everywhere without a good
reason is a waste. Especially if this branch is going to be
predicted-taken most of the time and it wouldn't matter.

So I'm still not convinced. We could measure it on my Coffeelake box
which says

"Switched APIC routing to cluster x2apic."

but I don't think it'll be visible.

> > > asm volatile (ALTERNATIVE("mfence; lfence;", "", X86_FEATURE_AMD));
> >
> > There's no X86_FEATURE_AMD :)
>
> I know, but that's easily fixed.

Yeah, there's X86_VENDOR_AMD too. I can see the confusion ensue.

I'm wondering if we could make:

alternative("mfence; lfence;", "", X86_VENDOR_AMD);

work...

Might come in handy in the future.
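
The rough idea: steal a flag bit to mean "->cpuid holds an X86_VENDOR_*
value instead of a feature bit" and have apply_alternatives() compare it
against boot_cpu_data.x86_vendor. Something like this, completely
untested and with made-up names:

	#define ALT_FLAG_VENDOR		BIT(1)
	#define ALT_VENDOR(vendor)	((ALT_FLAG_VENDOR << ALT_FLAGS_SHIFT) | (vendor))

	/* in apply_alternatives(), replacing the plain feature check: */
	if (a->flags & ALT_FLAG_VENDOR) {
		/* Patch in the replacement only when running on that vendor. */
		if (boot_cpu_data.x86_vendor != a->cpuid) {
			optimize_nops(instr, a->instrlen);
			continue;
		}
	} else if (!boot_cpu_has(a->cpuid) == !(a->flags & ALT_FLAG_NOT)) {
		optimize_nops(instr, a->instrlen);
		continue;
	}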

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2023-10-27 15:33:52

by Borislav Petkov

Subject: Re: [PATCH] x86/barrier: Do not serialize MSR accesses on AMD

On Tue, Jul 04, 2023 at 11:22:22AM +0200, Borislav Petkov wrote:
> I'm wondering if we could make:
>
> alternative("mfence; lfence;", "", X86_VENDOR_AMD);
>
> work...

Seems to work in my guest here and it doesn't look half as bad. See
replies to this message.

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2023-10-27 15:34:42

by Borislav Petkov

Subject: [PATCH 1/2] x86/alternative: Add per-vendor patching

From: "Borislav Petkov (AMD)" <[email protected]>
Date: Fri, 27 Oct 2023 13:34:12 +0200

Add the capability to apply alternatives not based on a CPU feature but
on the current vendor the machine is running on.

Signed-off-by: Borislav Petkov (AMD) <[email protected]>
---
arch/x86/include/asm/alternative.h | 7 ++++++-
arch/x86/kernel/alternative.c | 14 +++++++++-----
2 files changed, 15 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/alternative.h b/arch/x86/include/asm/alternative.h
index 65f79092c9d9..76750f8b5aba 100644
--- a/arch/x86/include/asm/alternative.h
+++ b/arch/x86/include/asm/alternative.h
@@ -8,8 +8,13 @@

#define ALT_FLAGS_SHIFT 16

-#define ALT_FLAG_NOT (1 << 0)
+/* Negate the tested feature flag */
+#define ALT_FLAG_NOT BIT(0)
+/* Check X86_VENDOR_* instead of a feature flag */
+#define ALT_FLAG_VENDOR BIT(1)
+
#define ALT_NOT(feature) ((ALT_FLAG_NOT << ALT_FLAGS_SHIFT) | (feature))
+#define ALT_VENDOR(x86_vendor) ((ALT_FLAG_VENDOR << ALT_FLAGS_SHIFT) | (x86_vendor))

#ifndef __ASSEMBLY__

diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 73be3931e4f0..8b67b5c6090e 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -433,19 +433,23 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start,

/*
* Patch if either:
+ * - running on the respective x86 vendor
* - feature is present
* - feature not present but ALT_FLAG_NOT is set to mean,
* patch if feature is *NOT* present.
*/
- if (!boot_cpu_has(a->cpuid) == !(a->flags & ALT_FLAG_NOT)) {
+ if (a->flags & ALT_FLAG_VENDOR) {
+ if (boot_cpu_data.x86_vendor != a->cpuid) {
+ optimize_nops(instr, a->instrlen);
+ continue;
+ }
+ } else if (!boot_cpu_has(a->cpuid) == !(a->flags & ALT_FLAG_NOT)) {
optimize_nops(instr, a->instrlen);
continue;
}

- DPRINTK(ALT, "feat: %s%d*32+%d, old: (%pS (%px) len: %d), repl: (%px, len: %d)",
- (a->flags & ALT_FLAG_NOT) ? "!" : "",
- a->cpuid >> 5,
- a->cpuid & 0x1f,
+ DPRINTK(ALT, "feat: 0x%x, flags: 0x%x, old: (%pS (%px) len: %d), repl: (%px, len: %d)",
+ a->cpuid, a->flags,
instr, instr, a->instrlen,
replacement, a->replacementlen);

--
2.42.0.rc0.25.ga82fb66fed25


--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2023-10-27 15:35:32

by Borislav Petkov

Subject: [PATCH 2/2] x86/barrier: Do not serialize MSR accesses on AMD

From: "Borislav Petkov (AMD)" <[email protected]>
Date: Fri, 27 Oct 2023 14:24:16 +0200

AMD does not have the requirement for a synchronization barrier when
accessing a certain group of MSRs. Do not incur that unnecessary
penalty there.

While at it, move to processor.h to avoid include hell. Untangling that
file properly is a matter for another day.

Some notes on the performance aspect of why this is relevant, courtesy
of Kishon VijayAbraham <[email protected]>:

On an AMD Zen4 system with 96 cores, a modified ipi-bench[1] run in a VM
shows the x2AVIC IPI rate is 3% to 4% lower than the AVIC IPI rate. The
ipi-bench is modified so that the IPIs are sent between two vCPUs in the
same CCX. This also requires pinning each vCPU to a physical core to
prevent any latencies. This simulates the use case of pinning vCPUs to
the threads of a single CCX to avoid interrupt IPI latency.

In order to avoid run-to-run variance (for both x2AVIC and AVIC), the
following configuration is applied:

1) Disable Power States in the BIOS (to prevent the system from going to
a lower power state)

2) Run the system at a fixed frequency of 2500 MHz (to prevent the system
from raising the frequency when the load increases)

With the above configuration:

*) Performance measured using ipi-bench for AVIC:
Average Latency: 1124.98ns [Time to send IPI from one vCPU to another vCPU]

Cumulative throughput: 42.6759M/s [Total number of IPIs sent in a second from
48 vCPUs simultaneously]

*) Performance measured using ipi-bench for x2AVIC:
Average Latency: 1172.42ns [Time to send IPI from one vCPU to another vCPU]

Cumulative throughput: 40.9432M/s [Total number of IPIs sent in a second from
48 vCPUs simultaneously]

From the above, x2AVIC latency is ~4% higher than AVIC. However, x2AVIC
performance is expected to be better than or equivalent to AVIC. Upon
analyzing the perf captures, it is observed that significant time is
spent in weak_wrmsr_fence() invoked by x2apic_send_IPI().

With the fix to skip weak_wrmsr_fence():

*) Performance measured using ipi-bench for x2AVIC:
Average Latency: 1117.44ns [Time to send IPI from one vCPU to another vCPU]

Cumulative throughput: 42.9608M/s [Total number of IPIs sent in a second from
48 vCPUs simultaneously]

Comparing the performance of x2AVIC with and without the fix, it can be
seen that the performance improves by ~4%.

Performance captured using an unmodified ipi-bench with the 'mesh-ipi'
option, with and without weak_wrmsr_fence(), on a Zen4 system also showed
a significant performance improvement without weak_wrmsr_fence(). The
'mesh-ipi' option ignores CCX or CCD boundaries and just picks random
vCPUs.

Average throughput (10 iterations) with weak_wrmsr_fence(),
Cumulative throughput: 4933374 IPI/s

Average throughput (10 iterations) without weak_wrmsr_fence(),
Cumulative throughput: 6355156 IPI/s

[1] https://github.com/bytedance/kvm-utils/tree/master/microbenchmark/ipi-bench

Signed-off-by: Borislav Petkov (AMD) <[email protected]>
---
arch/x86/include/asm/barrier.h | 18 ------------------
arch/x86/include/asm/processor.h | 18 ++++++++++++++++++
2 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
index 35389b2af88e..0216f63a366b 100644
--- a/arch/x86/include/asm/barrier.h
+++ b/arch/x86/include/asm/barrier.h
@@ -81,22 +81,4 @@ do { \

#include <asm-generic/barrier.h>

-/*
- * Make previous memory operations globally visible before
- * a WRMSR.
- *
- * MFENCE makes writes visible, but only affects load/store
- * instructions. WRMSR is unfortunately not a load/store
- * instruction and is unaffected by MFENCE. The LFENCE ensures
- * that the WRMSR is not reordered.
- *
- * Most WRMSRs are full serializing instructions themselves and
- * do not require this barrier. This is only required for the
- * IA32_TSC_DEADLINE and X2APIC MSRs.
- */
-static inline void weak_wrmsr_fence(void)
-{
- asm volatile("mfence; lfence" : : : "memory");
-}
-
#endif /* _ASM_X86_BARRIER_H */
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 4b130d894cb6..a43ff8d0ad24 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -752,4 +752,22 @@ enum mds_mitigations {

extern bool gds_ucode_mitigated(void);

+/*
+ * Make previous memory operations globally visible before
+ * a WRMSR.
+ *
+ * MFENCE makes writes visible, but only affects load/store
+ * instructions. WRMSR is unfortunately not a load/store
+ * instruction and is unaffected by MFENCE. The LFENCE ensures
+ * that the WRMSR is not reordered.
+ *
+ * Most WRMSRs are full serializing instructions themselves and
+ * do not require this barrier. This is only required for the
+ * IA32_TSC_DEADLINE and X2APIC MSRs.
+ */
+static inline void weak_wrmsr_fence(void)
+{
+ alternative("mfence; lfence", "", ALT_VENDOR(X86_VENDOR_AMD));
+}
+
#endif /* _ASM_X86_PROCESSOR_H */
--
2.42.0.rc0.25.ga82fb66fed25

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2023-10-27 18:12:24

by Peter Zijlstra

Subject: Re: [PATCH 1/2] x86/alternative: Add per-vendor patching

On Fri, Oct 27, 2023 at 05:34:18PM +0200, Borislav Petkov wrote:
> From: "Borislav Petkov (AMD)" <[email protected]>
> Date: Fri, 27 Oct 2023 13:34:12 +0200
>
> Add the capability to apply alternatives not based on a CPU feature but
> on the current vendor the machine is running on.
>
> Signed-off-by: Borislav Petkov (AMD) <[email protected]>
> ---
> arch/x86/include/asm/alternative.h | 7 ++++++-
> arch/x86/kernel/alternative.c | 14 +++++++++-----
> 2 files changed, 15 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/include/asm/alternative.h b/arch/x86/include/asm/alternative.h
> index 65f79092c9d9..76750f8b5aba 100644
> --- a/arch/x86/include/asm/alternative.h
> +++ b/arch/x86/include/asm/alternative.h
> @@ -8,8 +8,13 @@
>
> #define ALT_FLAGS_SHIFT 16
>
> -#define ALT_FLAG_NOT (1 << 0)
> +/* Negate the tested feature flag */
> +#define ALT_FLAG_NOT BIT(0)
> +/* Check X86_VENDOR_* instead of a feature flag */
> +#define ALT_FLAG_VENDOR BIT(1)
> +
> #define ALT_NOT(feature) ((ALT_FLAG_NOT << ALT_FLAGS_SHIFT) | (feature))
> +#define ALT_VENDOR(x86_vendor) ((ALT_FLAG_VENDOR << ALT_FLAGS_SHIFT) | (x86_vendor))
>
> #ifndef __ASSEMBLY__
>
> diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
> index 73be3931e4f0..8b67b5c6090e 100644
> --- a/arch/x86/kernel/alternative.c
> +++ b/arch/x86/kernel/alternative.c
> @@ -433,19 +433,23 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start,
>
> /*
> * Patch if either:
> + * - running on the respective x86 vendor
> * - feature is present
> * - feature not present but ALT_FLAG_NOT is set to mean,
> * patch if feature is *NOT* present.
> */
> - if (!boot_cpu_has(a->cpuid) == !(a->flags & ALT_FLAG_NOT)) {
> + if (a->flags & ALT_FLAG_VENDOR) {
> + if (boot_cpu_data.x86_vendor != a->cpuid) {
> + optimize_nops(instr, a->instrlen);
> + continue;
> + }
> + } else if (!boot_cpu_has(a->cpuid) == !(a->flags & ALT_FLAG_NOT)) {
> optimize_nops(instr, a->instrlen);
> continue;
> }

But what if I want to do ALT_FLAG_VENDOR | ALT_FLAG_NOT?

/me runs

2023-10-27 18:37:32

by Borislav Petkov

Subject: Re: [PATCH 1/2] x86/alternative: Add per-vendor patching

On Fri, Oct 27, 2023 at 08:11:47PM +0200, Peter Zijlstra wrote:
> But what if I want to do ALT_FLAG_VENDOR | ALT_FLAG_NOT?

I knew you were gonna say that...

You sit down and you implement it. :-P

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2023-10-27 18:57:04

by Peter Zijlstra

Subject: Re: [PATCH 2/2] x86/barrier: Do not serialize MSR accesses on AMD

On Fri, Oct 27, 2023 at 05:34:58PM +0200, Borislav Petkov wrote:

> +static inline void weak_wrmsr_fence(void)
> +{
> + alternative("mfence; lfence", "", ALT_VENDOR(X86_VENDOR_AMD));
> +}

Well, you see, AFAICT the non-serializing MSRs thing is an Intel thing,
so everything !Intel wants this gone, no?

Definitely the Hygon thing wants this right along with AMD, because
that's basically AMD, no?

2023-10-27 19:18:14

by Borislav Petkov

Subject: Re: [PATCH 2/2] x86/barrier: Do not serialize MSR accesses on AMD

On Fri, Oct 27, 2023 at 08:56:41PM +0200, Peter Zijlstra wrote:
> Well, you see, AFAICT the non-serializing MSRs thing is an Intel thing,
> so everything !Intel wants this gone, no?
>
> Definitely the Hygon thing wants this right along with AMD, because
> that's basically AMD, no?

Because of

ce4e240c279a ("x86: add x2apic_wrmsr_fence() to x2apic flush tlb paths")

and it being there since 2009 and getting called unconditionally.

Hygon sure, but the other vendors? I can't even test on some.

Thus the more conservative approach here.

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2023-10-27 19:29:26

by Borislav Petkov

Subject: Re: [PATCH 2/2] x86/barrier: Do not serialize MSR accesses on AMD

On Fri, Oct 27, 2023 at 09:16:33PM +0200, Borislav Petkov wrote:
> Thus the more conservative approach here.

And on a second thought, I don't need any of that new stuff - I simply
need a synthetic flag which says "MSRs need fencing" and set it on
everything but AMD and Hygon. And we've solved this type of issue
gazillion times already - why am I reinventing the wheel?!

:-\

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2023-10-27 20:09:42

by Andrew Cooper

Subject: Re: [PATCH 2/2] x86/barrier: Do not serialize MSR accesses on AMD

On 27/10/2023 8:29 pm, Borislav Petkov wrote:
> On Fri, Oct 27, 2023 at 09:16:33PM +0200, Borislav Petkov wrote:
>> Thus the more conservative approach here.
> And on a second thought, I don't need any of that new stuff - I simply
> need a synthetic flag which says "MSRs need fencing" and set it on
> everything but AMD and Hygon. And we've solved this type of issue
> gazillion times already - why am I reinventing the wheel?!

Quoteth the APM (rev 3.41, June 2023):

16.11.2 WRMSR / RDMSR serialization for x2APIC Register

The WRMSR instruction is used to write the APIC register set in x2APIC
mode. Normally WRMSR is a serializing instruction, however when accessing
x2APIC registers, the serializing aspect of WRMSR is relaxed to allow for
more efficient access to those registers. Consequently, a WRMSR write to
an x2APIC register may complete before older store operations are
complete and have become globally visible. When strong ordering of an
x2APIC write access is required with respect to preceding memory
operations, software can insert a serializing instruction (such as
MFENCE) before the WRMSR instruction.

So which is right?  This commit message, or the APM?  (and yes, if
you're waiting on an APM update then the commit message should at least
note that one is coming.)


But, to the issue at hand.

There are other non-serialising MSRs on AMD CPUs, including the FS/GS
base MSRs on more modern parts which is enumerated in 8000_0021.eax[1].

So there isn't a boolean "MSRs need fencing, yes/no".  It is vendor
*and* model specific as to whether a particular MSR is serialising or
non-serialising.

*And* it's vendor specific as to what the fencing sequence is.  Intel
require mfence;lfence, while on AMD, mfence suffices.

Most MSR writes don't want to be architecturally serialising, and Intel
are introducing WRMSRNS for this purpose.  But ICR writes *do* need to
be ordered with respect to stores becoming globally visible, and it was
an error for MSR_X2APIC_ICR to be specified as non-serialising.


The only sanity-preserving (pseudo) API for this is something like:

vendor_msr_fence = { mfence;lfence (Intel) | mfence (AMD, Hygon) | ... }

and for each MSR separately, something like:

ALTERNATIVE("", vendor_msr_fence, $VENDOR_NEEDS_MSR_$X_FENCE);

because that's the only one which properly separates "what fence to use"
and "do I need to fence this MSR on the current vendor".

~Andrew

2023-10-27 20:24:04

by Borislav Petkov

Subject: Re: [PATCH 2/2] x86/barrier: Do not serialize MSR accesses on AMD

On Fri, Oct 27, 2023 at 09:09:05PM +0100, Andrew Cooper wrote:
> There are other non-serialising MSRs on AMD CPUs, including the FS/GS
> base MSRs on more modern parts which is enumerated in 8000_0021.eax[1].
>
> So there isn't a boolean "MSRs need fencing, yes/no". 

Well, I was implying that "MSRs need fencing" refers to this particular
use case where weak_wrmsr_fence() is used - IA32_TSC_DEADLINE and X2APIC
MSRs.

So the feature bit should be named something more specific:

X86_FEATURE_APIC_TSC_MSRS_NEED_FENCING

or so.

If we have to do something for the other case, yes, we will have to
either generalize this or add yet another flag.

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2023-10-29 10:36:26

by Borislav Petkov

Subject: [PATCH -v3] x86/barrier: Do not serialize MSR accesses on AMD

On Fri, Oct 27, 2023 at 10:23:28PM +0200, Borislav Petkov wrote:
> So the feature bit should be named something more specific:
>
> X86_FEATURE_APIC_TSC_MSRS_NEED_FENCING
>
> or so.

Plain and simple:

---
From: "Borislav Petkov (AMD)" <[email protected]>
Date: Fri, 27 Oct 2023 14:24:16 +0200

AMD does not have the requirement for a synchronization barrier when
accessing a certain group of MSRs. Do not incur that unnecessary
penalty there.

While at it, move to processor.h to avoid include hell. Untangling that
file properly is a matter for another day.

Some notes on the performance aspect of why this is relevant, courtesy
of Kishon VijayAbraham <[email protected]>:

On an AMD Zen4 system with 96 cores, a modified ipi-bench[1] run in a VM
shows the x2AVIC IPI rate is 3% to 4% lower than the AVIC IPI rate. The
ipi-bench is modified so that the IPIs are sent between two vCPUs in the
same CCX. This also requires pinning each vCPU to a physical core to
prevent any latencies. This simulates the use case of pinning vCPUs to
the threads of a single CCX to avoid interrupt IPI latency.

In order to avoid run-to-run variance (for both x2AVIC and AVIC), the
following configuration is applied:

1) Disable Power States in the BIOS (to prevent the system from going to
a lower power state)

2) Run the system at a fixed frequency of 2500 MHz (to prevent the system
from raising the frequency when the load increases)

With the above configuration:

*) Performance measured using ipi-bench for AVIC:
Average Latency: 1124.98ns [Time to send IPI from one vCPU to another vCPU]

Cumulative throughput: 42.6759M/s [Total number of IPIs sent in a second from
48 vCPUs simultaneously]

*) Performance measured using ipi-bench for x2AVIC:
Average Latency: 1172.42ns [Time to send IPI from one vCPU to another vCPU]

Cumulative throughput: 40.9432M/s [Total number of IPIs sent in a second from
48 vCPUs simultaneously]

From the above, x2AVIC latency is ~4% higher than AVIC. However, x2AVIC
performance is expected to be better than or equivalent to AVIC. Upon
analyzing the perf captures, it is observed that significant time is
spent in weak_wrmsr_fence() invoked by x2apic_send_IPI().

With the fix to skip weak_wrmsr_fence():

*) Performance measured using ipi-bench for x2AVIC:
Average Latency: 1117.44ns [Time to send IPI from one vCPU to another vCPU]

Cumulative throughput: 42.9608M/s [Total number of IPIs sent in a second from
48 vCPUs simultaneously]

Comparing the performance of x2AVIC with and without the fix, it can be
seen that the performance improves by ~4%.

Performance captured using an unmodified ipi-bench with the 'mesh-ipi'
option, with and without weak_wrmsr_fence(), on a Zen4 system also showed
a significant performance improvement without weak_wrmsr_fence(). The
'mesh-ipi' option ignores CCX or CCD boundaries and just picks random
vCPUs.

Average throughput (10 iterations) with weak_wrmsr_fence(),
Cumulative throughput: 4933374 IPI/s

Average throughput (10 iterations) without weak_wrmsr_fence(),
Cumulative throughput: 6355156 IPI/s

[1] https://github.com/bytedance/kvm-utils/tree/master/microbenchmark/ipi-bench

Signed-off-by: Borislav Petkov (AMD) <[email protected]>
---
arch/x86/include/asm/barrier.h | 18 ------------------
arch/x86/include/asm/cpufeatures.h | 2 +-
arch/x86/include/asm/processor.h | 18 ++++++++++++++++++
arch/x86/kernel/cpu/amd.c | 3 +++
arch/x86/kernel/cpu/common.c | 7 +++++++
arch/x86/kernel/cpu/hygon.c | 3 +++
6 files changed, 32 insertions(+), 19 deletions(-)

diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
index 35389b2af88e..0216f63a366b 100644
--- a/arch/x86/include/asm/barrier.h
+++ b/arch/x86/include/asm/barrier.h
@@ -81,22 +81,4 @@ do { \

#include <asm-generic/barrier.h>

-/*
- * Make previous memory operations globally visible before
- * a WRMSR.
- *
- * MFENCE makes writes visible, but only affects load/store
- * instructions. WRMSR is unfortunately not a load/store
- * instruction and is unaffected by MFENCE. The LFENCE ensures
- * that the WRMSR is not reordered.
- *
- * Most WRMSRs are full serializing instructions themselves and
- * do not require this barrier. This is only required for the
- * IA32_TSC_DEADLINE and X2APIC MSRs.
- */
-static inline void weak_wrmsr_fence(void)
-{
- asm volatile("mfence; lfence" : : : "memory");
-}
-
#endif /* _ASM_X86_BARRIER_H */
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 58cb9495e40f..0091f1008314 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -308,10 +308,10 @@
#define X86_FEATURE_SMBA (11*32+21) /* "" Slow Memory Bandwidth Allocation */
#define X86_FEATURE_BMEC (11*32+22) /* "" Bandwidth Monitoring Event Configuration */
#define X86_FEATURE_USER_SHSTK (11*32+23) /* Shadow stack support for user mode applications */
-
#define X86_FEATURE_SRSO (11*32+24) /* "" AMD BTB untrain RETs */
#define X86_FEATURE_SRSO_ALIAS (11*32+25) /* "" AMD BTB untrain RETs through aliasing */
#define X86_FEATURE_IBPB_ON_VMEXIT (11*32+26) /* "" Issue an IBPB only on VMEXIT */
+#define X86_FEATURE_APIC_MSRS_FENCE (11*32+27) /* "" IA32_TSC_DEADLINE and X2APIC MSRs need fencing */

/* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */
#define X86_FEATURE_AVX_VNNI (12*32+ 4) /* AVX VNNI instructions */
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 4b130d894cb6..061aa86b4662 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -752,4 +752,22 @@ enum mds_mitigations {

extern bool gds_ucode_mitigated(void);

+/*
+ * Make previous memory operations globally visible before
+ * a WRMSR.
+ *
+ * MFENCE makes writes visible, but only affects load/store
+ * instructions. WRMSR is unfortunately not a load/store
+ * instruction and is unaffected by MFENCE. The LFENCE ensures
+ * that the WRMSR is not reordered.
+ *
+ * Most WRMSRs are full serializing instructions themselves and
+ * do not require this barrier. This is only required for the
+ * IA32_TSC_DEADLINE and X2APIC MSRs.
+ */
+static inline void weak_wrmsr_fence(void)
+{
+ alternative("mfence; lfence", "", ALT_NOT(X86_FEATURE_APIC_MSRS_FENCE));
+}
+
#endif /* _ASM_X86_PROCESSOR_H */
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index a7eab05e5f29..841e21213668 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -1162,6 +1162,9 @@ static void init_amd(struct cpuinfo_x86 *c)
if (!cpu_has(c, X86_FEATURE_HYPERVISOR) &&
cpu_has_amd_erratum(c, amd_erratum_1485))
msr_set_bit(MSR_ZEN4_BP_CFG, MSR_ZEN4_BP_CFG_SHARED_BTB_FIX_BIT);
+
+ /* AMD CPUs don't need fencing after x2APIC/TSC_DEADLINE MSR writes. */
+ clear_cpu_cap(c, X86_FEATURE_APIC_MSRS_FENCE);
}

#ifdef CONFIG_X86_32
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 9058da9ae011..4d4b87c6885d 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1856,6 +1856,13 @@ static void identify_cpu(struct cpuinfo_x86 *c)
c->topo.apicid = apic->phys_pkg_id(c->topo.initial_apicid, 0);
#endif

+
+ /*
+ * Set default APIC and TSC_DEADLINE MSR fencing flag. AMD and
+ * Hygon will clear it in ->c_init() below.
+ */
+ set_cpu_cap(c, X86_FEATURE_APIC_MSRS_FENCE);
+
/*
* Vendor-specific initialization. In this section we
* canonicalize the feature flags, meaning if there are
diff --git a/arch/x86/kernel/cpu/hygon.c b/arch/x86/kernel/cpu/hygon.c
index 6f247d66758d..f0cd95502faa 100644
--- a/arch/x86/kernel/cpu/hygon.c
+++ b/arch/x86/kernel/cpu/hygon.c
@@ -354,6 +354,9 @@ static void init_hygon(struct cpuinfo_x86 *c)
set_cpu_bug(c, X86_BUG_SYSRET_SS_ATTRS);

check_null_seg_clears_base(c);
+
+ /* Hygon CPUs don't need fencing after x2APIC/TSC_DEADLINE MSR writes. */
+ clear_cpu_cap(c, X86_FEATURE_APIC_MSRS_FENCE);
}

static void cpu_detect_tlb_hygon(struct cpuinfo_x86 *c)
--
2.42.0.rc0.25.ga82fb66fed25

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2023-11-02 11:09:24

by Borislav Petkov

Subject: Re: [PATCH 2/2] x86/barrier: Do not serialize MSR accesses on AMD

On Fri, Oct 27, 2023 at 09:09:05PM +0100, Andrew Cooper wrote:
> Quoteth the APM (rev 3.41, June 2023):
>
> 16.11.2 WRMSR / RDMSR serialization for x2APIC Register
>
> The WRMSR instruction is used to write the APIC register set in x2APIC
> mode. Normally WRMSR is a serializing instruction, however when accessing
> x2APIC registers, the serializing aspect of WRMSR is relaxed to allow for
> more efficient access to those registers. Consequently, a WRMSR write to
> an x2APIC register may complete before older store operations are
> complete and have become globally visible. When strong ordering of an
> x2APIC write access is required with respect to preceding memory
> operations, software can insert a serializing instruction (such as
> MFENCE) before the WRMSR instruction.
>
> So which is right?  This commit message, or the APM?  (and yes, if
> you're waiting on an APM update then the commit message should at least
> note that one is coming.)

To clarify this one: there will be a CPUID bit which states that an
MFENCE is not needed ATM and it'll be added to the APM. I'll add a note
about it to the commit message too.

Thx.

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette