2023-10-24 08:08:25

by Pawan Gupta

Subject: [PATCH v2 0/6] Delay VERW

v2:
- Removed the extra EXEC_VERW macro layers. (Sean)
- Move NOPL before VERW. (Sean)
- s/USER_CLEAR_CPU_BUFFERS/CLEAR_CPU_BUFFERS/. (Josh/Dave)
- Removed the comments before CLEAR_CPU_BUFFERS. (Josh)
- Remove CLEAR_CPU_BUFFERS from NMI returning to kernel and document the
reason. (Josh/Dave)
- Reformat comment in md_clear_update_mitigation(). (Josh)
- Squash "x86/bugs: Cleanup mds_user_clear" patch. (Nikolay)
- s/GUEST_CLEAR_CPU_BUFFERS/CLEAR_CPU_BUFFERS/. (Josh)
- Added a patch from Sean to use EFLAGS.CF for VMLAUNCH/VMRESUME
selection. This facilitates a single CLEAR_CPU_BUFFERS location for both
VMLAUNCH and VMRESUME. (Sean)

v1: https://lore.kernel.org/r/[email protected]

Hi,

The legacy instruction VERW was overloaded by some processors to clear
micro-architectural CPU buffers as a mitigation for CPU bugs. This series
moves VERW execution to a later point in the exit-to-user path. This is
needed because in some cases it may be possible for kernel data to be
accessed after the VERW in arch_exit_to_user_mode(). Such accesses may put
data into MDS-affected CPU buffers, for example:

1. Kernel data accessed by an NMI between VERW and return-to-user can
remain in CPU buffers (since NMI returning to kernel does not
execute VERW to clear CPU buffers).
2. Alyssa reported that after VERW is executed,
CONFIG_GCC_PLUGIN_STACKLEAK=y scrubs the stack used by a system
call. Memory accesses during stack scrubbing can move kernel stack
contents into CPU buffers.
3. When caller-saved registers are restored after a return from the
function executing VERW, the kernel stack accesses can remain in
CPU buffers (since they occur after VERW).

Although these cases are less practical to exploit, moving VERW closer
to the ring transition reduces the attack surface.
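
To make the new placement concrete, below is a simplified sketch (based on
the entry_64 changes in patch 2, not a literal copy of the entry code) of
the 64-bit SYSCALL return path with this series applied:

	/* user register state has already been restored at this point */
	swapgs
	CLEAR_CPU_BUFFERS	/* VERW; no kernel data is accessed after this */
	sysretq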

Overview of the series:

Patch 1: Prepares VERW macros for use in asm.
Patch 2: Adds macros to 64-bit entry/exit points.
Patch 3: Adds macros to 32-bit entry/exit points.
Patch 4: Enables the new macros.
Patch 5: Uses CFLAGS.CF for VMLAUNCH/VMRESUME selection.
Patch 6: Adds macro to VMenter.

Below is some performance data collected with v1 on a Skylake client,
compared with the previous implementation:

Baseline: v6.6-rc5

| Test | Configuration | Relative |
| ------------------ | ---------------------- | -------- |
| build-linux-kernel | defconfig | 1.00 |
| hackbench | 32 - Process | 1.02 |
| nginx | Short Connection - 500 | 1.01 |

Signed-off-by: Pawan Gupta <[email protected]>
---
Pawan Gupta (5):
x86/bugs: Add asm helpers for executing VERW
x86/entry_64: Add VERW just before userspace transition
x86/entry_32: Add VERW just before userspace transition
x86/bugs: Use ALTERNATIVE() instead of mds_user_clear static key
KVM: VMX: Move VERW closer to VMentry for MDS mitigation

Sean Christopherson (1):
KVM: VMX: Use BT+JNC, i.e. EFLAGS.CF to select VMRESUME vs. VMLAUNCH

Documentation/arch/x86/mds.rst | 39 ++++++++++++++++++++++++++----------
arch/x86/entry/entry_32.S | 3 +++
arch/x86/entry/entry_64.S | 11 ++++++++++
arch/x86/entry/entry_64_compat.S | 1 +
arch/x86/include/asm/cpufeatures.h | 2 +-
arch/x86/include/asm/entry-common.h | 1 -
arch/x86/include/asm/nospec-branch.h | 31 +++++++++++++++++-----------
arch/x86/kernel/cpu/bugs.c | 15 ++++++--------
arch/x86/kernel/nmi.c | 2 --
arch/x86/kvm/vmx/run_flags.h | 7 +++++--
arch/x86/kvm/vmx/vmenter.S | 10 ++++++---
arch/x86/kvm/vmx/vmx.c | 10 ++++++---
12 files changed, 88 insertions(+), 44 deletions(-)
---
base-commit: 05d3ef8bba77c1b5f98d941d8b2d4aeab8118ef1
change-id: 20231011-delay-verw-d0474986b2c3

Best regards,
--
Thanks,
Pawan



2023-10-24 08:08:39

by Pawan Gupta

Subject: [PATCH v2 1/6] x86/bugs: Add asm helpers for executing VERW

MDS mitigation requires clearing the CPU buffers before returning to
user. This needs to be done late in the exit-to-user path. The current
location of VERW leaves a possibility of kernel data ending up in CPU
buffers for memory accesses done after VERW, such as:

1. Kernel data accessed by an NMI between VERW and return-to-user can
remain in CPU buffers (since an NMI returning to the kernel does not
execute VERW to clear CPU buffers).
2. Alyssa reported that after VERW is executed,
CONFIG_GCC_PLUGIN_STACKLEAK=y scrubs the stack used by a system
call. Memory accesses during stack scrubbing can move kernel stack
contents into CPU buffers.
3. When caller-saved registers are restored after a return from the
function executing VERW, the kernel stack accesses can remain in
CPU buffers (since they occur after VERW).

To fix this, VERW needs to be moved very late in the exit-to-user path.

In preparation for moving VERW to entry/exit asm code, create macros
that can be used in asm. Also make them depend on a new feature flag
X86_FEATURE_CLEAR_CPU_BUF.

Reported-by: Alyssa Milburn <[email protected]>
Signed-off-by: Pawan Gupta <[email protected]>
---
arch/x86/include/asm/cpufeatures.h | 2 +-
arch/x86/include/asm/nospec-branch.h | 19 +++++++++++++++++++
2 files changed, 20 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 58cb9495e40f..f21fc0f12737 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -308,10 +308,10 @@
#define X86_FEATURE_SMBA (11*32+21) /* "" Slow Memory Bandwidth Allocation */
#define X86_FEATURE_BMEC (11*32+22) /* "" Bandwidth Monitoring Event Configuration */
#define X86_FEATURE_USER_SHSTK (11*32+23) /* Shadow stack support for user mode applications */
-
#define X86_FEATURE_SRSO (11*32+24) /* "" AMD BTB untrain RETs */
#define X86_FEATURE_SRSO_ALIAS (11*32+25) /* "" AMD BTB untrain RETs through aliasing */
#define X86_FEATURE_IBPB_ON_VMEXIT (11*32+26) /* "" Issue an IBPB only on VMEXIT */
+#define X86_FEATURE_CLEAR_CPU_BUF (11*32+27) /* "" Clear CPU buffers */

/* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */
#define X86_FEATURE_AVX_VNNI (12*32+ 4) /* AVX VNNI instructions */
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index c55cc243592e..c269ee74682c 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -329,6 +329,25 @@
#endif
.endm

+/*
+ * Macro to execute the VERW instruction to mitigate transient data sampling
+ * attacks such as MDS. On affected systems a microcode update overloaded the
+ * VERW instruction to also clear the CPU buffers. VERW clobbers EFLAGS.ZF.
+ *
+ * Note: Only the memory operand variant of VERW clears the CPU buffers. To
+ * handle the case when VERW is executed after user registers are restored,
+ * use RIP-relative addressing to point the memory operand at part of a NOPL
+ * instruction that contains __KERNEL_DS.
+ */
+.macro CLEAR_CPU_BUFFERS
+ ALTERNATIVE "jmp .Lskip_verw_\@;", "jmp .Ldo_verw_\@", X86_FEATURE_CLEAR_CPU_BUF
+ /* nopl __KERNEL_DS(%rax) */
+ .byte 0x0f, 0x1f, 0x80, 0x00, 0x00;
+.Lverw_arg_\@: .word __KERNEL_DS;
+.Ldo_verw_\@: verw _ASM_RIP(.Lverw_arg_\@);
+.Lskip_verw_\@:
+.endm
+
#else /* __ASSEMBLY__ */

#define ANNOTATE_RETPOLINE_SAFE \

--
2.34.1
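
For reference, the VERW argument in the macro above is embedded in the
displacement of a NOPL instruction, so the selector word sits in the
instruction stream as part of a valid instruction rather than as stray data
bytes. A rough byte-level view of what the macro emits (illustrative only,
__KERNEL_DS shown symbolically):

	0f 1f 80 00 00 <DS lo> <DS hi>	/* nopl disp32(%rax): the trailing
					   .word __KERNEL_DS supplies the high
					   half of the 32-bit displacement */

.Lverw_arg_\@ labels those last two bytes, and the patched-in VERW reads
them via _ASM_RIP() as its memory (selector) operand.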


2023-10-24 08:09:13

by Pawan Gupta

Subject: [PATCH v2 3/6] x86/entry_32: Add VERW just before userspace transition

As done for entry_64, add support for executing VERW late in the
exit-to-user path for 32-bit mode.

Signed-off-by: Pawan Gupta <[email protected]>
---
arch/x86/entry/entry_32.S | 3 +++
1 file changed, 3 insertions(+)

diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index 6e6af42e044a..74a4358c7f45 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -885,6 +885,7 @@ SYM_FUNC_START(entry_SYSENTER_32)
BUG_IF_WRONG_CR3 no_user_check=1
popfl
popl %eax
+ CLEAR_CPU_BUFFERS

/*
* Return back to the vDSO, which will pop ecx and edx.
@@ -954,6 +955,7 @@ restore_all_switch_stack:

/* Restore user state */
RESTORE_REGS pop=4 # skip orig_eax/error_code
+ CLEAR_CPU_BUFFERS
.Lirq_return:
/*
* ARCH_HAS_MEMBARRIER_SYNC_CORE rely on IRET core serialization
@@ -1146,6 +1148,7 @@ SYM_CODE_START(asm_exc_nmi)

/* Not on SYSENTER stack. */
call exc_nmi
+ CLEAR_CPU_BUFFERS
jmp .Lnmi_return

.Lnmi_from_sysenter_stack:

--
2.34.1


2023-10-24 08:09:12

by Pawan Gupta

Subject: [PATCH v2 4/6] x86/bugs: Use ALTERNATIVE() instead of mds_user_clear static key

The VERW mitigation at exit-to-user is enabled via the static branch
mds_user_clear. This static branch is never toggled after boot, and can
be safely replaced with an ALTERNATIVE(), which is convenient to use in
asm.

Switch to ALTERNATIVE() to use the VERW mitigation late in the
exit-to-user path. Also remove the now-redundant VERW in exc_nmi() and
arch_exit_to_user_mode().

Signed-off-by: Pawan Gupta <[email protected]>
---
Documentation/arch/x86/mds.rst | 39 ++++++++++++++++++++++++++----------
arch/x86/include/asm/entry-common.h | 1 -
arch/x86/include/asm/nospec-branch.h | 12 -----------
arch/x86/kernel/cpu/bugs.c | 15 ++++++--------
arch/x86/kernel/nmi.c | 2 --
arch/x86/kvm/vmx/vmx.c | 2 +-
6 files changed, 35 insertions(+), 36 deletions(-)

diff --git a/Documentation/arch/x86/mds.rst b/Documentation/arch/x86/mds.rst
index e73fdff62c0a..34b9e476078c 100644
--- a/Documentation/arch/x86/mds.rst
+++ b/Documentation/arch/x86/mds.rst
@@ -95,6 +95,9 @@ The kernel provides a function to invoke the buffer clearing:

mds_clear_cpu_buffers()

+Also, the macro CLEAR_CPU_BUFFERS is meant to be used in ASM late in the
+exit-to-user path. This macro works for cases where GPRs can't be clobbered.
+
The mitigation is invoked on kernel/userspace, hypervisor/guest and C-state
(idle) transitions.

@@ -138,17 +141,31 @@ Mitigation points

When transitioning from kernel to user space the CPU buffers are flushed
on affected CPUs when the mitigation is not disabled on the kernel
- command line. The migitation is enabled through the static key
- mds_user_clear.
-
- The mitigation is invoked in prepare_exit_to_usermode() which covers
- all but one of the kernel to user space transitions. The exception
- is when we return from a Non Maskable Interrupt (NMI), which is
- handled directly in do_nmi().
-
- (The reason that NMI is special is that prepare_exit_to_usermode() can
- enable IRQs. In NMI context, NMIs are blocked, and we don't want to
- enable IRQs with NMIs blocked.)
+ command line. The mitigation is enabled through the feature flag
+ X86_FEATURE_CLEAR_CPU_BUF.
+
+ The mitigation is invoked just before transitioning to userspace after
+ user registers are restored. This is done to minimize the window in
+ which kernel data could be accessed after VERW, e.g. via an NMI arriving
+ after VERW.
+
+ Corner case not handled
+ ^^^^^^^^^^^^^^^^^^^^^^^
+ Interrupts returning to kernel don't clear CPU buffers since the
+ exit-to-user path is expected to do that anyway. But there could be
+ a case when an NMI is generated in the kernel after the exit-to-user path
+ has cleared the buffers. This case is not handled and NMIs returning to
+ the kernel don't clear CPU buffers because:
+
+ 1. It is rare to get an NMI after VERW, but before returning to userspace.
+ 2. For an unprivileged user, there is no known way to make that NMI
+ less rare or target it.
+ 3. It would take a large number of these precisely-timed NMIs to mount
+ an actual attack. There's presumably not enough bandwidth.
+ 4. The NMI in question occurs after a VERW, i.e. when user state is
+ restored and most interesting data is already scrubbed. What's left
+ is only the data that the NMI touches, and that may or may not be of
+ any interest.


2. C-State transition
diff --git a/arch/x86/include/asm/entry-common.h b/arch/x86/include/asm/entry-common.h
index ce8f50192ae3..7e523bb3d2d3 100644
--- a/arch/x86/include/asm/entry-common.h
+++ b/arch/x86/include/asm/entry-common.h
@@ -91,7 +91,6 @@ static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,

static __always_inline void arch_exit_to_user_mode(void)
{
- mds_user_clear_cpu_buffers();
amd_clear_divider();
}
#define arch_exit_to_user_mode arch_exit_to_user_mode
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index c269ee74682c..214d68956d50 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -557,7 +557,6 @@ DECLARE_STATIC_KEY_FALSE(switch_to_cond_stibp);
DECLARE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);

-DECLARE_STATIC_KEY_FALSE(mds_user_clear);
DECLARE_STATIC_KEY_FALSE(mds_idle_clear);

DECLARE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush);
@@ -589,17 +588,6 @@ static __always_inline void mds_clear_cpu_buffers(void)
asm volatile("verw %[ds]" : : [ds] "m" (ds) : "cc");
}

-/**
- * mds_user_clear_cpu_buffers - Mitigation for MDS and TAA vulnerability
- *
- * Clear CPU buffers if the corresponding static key is enabled
- */
-static __always_inline void mds_user_clear_cpu_buffers(void)
-{
- if (static_branch_likely(&mds_user_clear))
- mds_clear_cpu_buffers();
-}
-
/**
* mds_idle_clear_cpu_buffers - Mitigation for MDS vulnerability
*
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 10499bcd4e39..00aab0c0937f 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -111,9 +111,6 @@ DEFINE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
/* Control unconditional IBPB in switch_mm() */
DEFINE_STATIC_KEY_FALSE(switch_mm_always_ibpb);

-/* Control MDS CPU buffer clear before returning to user space */
-DEFINE_STATIC_KEY_FALSE(mds_user_clear);
-EXPORT_SYMBOL_GPL(mds_user_clear);
/* Control MDS CPU buffer clear before idling (halt, mwait) */
DEFINE_STATIC_KEY_FALSE(mds_idle_clear);
EXPORT_SYMBOL_GPL(mds_idle_clear);
@@ -252,7 +249,7 @@ static void __init mds_select_mitigation(void)
if (!boot_cpu_has(X86_FEATURE_MD_CLEAR))
mds_mitigation = MDS_MITIGATION_VMWERV;

- static_branch_enable(&mds_user_clear);
+ setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);

if (!boot_cpu_has(X86_BUG_MSBDS_ONLY) &&
(mds_nosmt || cpu_mitigations_auto_nosmt()))
@@ -356,7 +353,7 @@ static void __init taa_select_mitigation(void)
* For guests that can't determine whether the correct microcode is
* present on host, enable the mitigation for UCODE_NEEDED as well.
*/
- static_branch_enable(&mds_user_clear);
+ setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);

if (taa_nosmt || cpu_mitigations_auto_nosmt())
cpu_smt_disable(false);
@@ -424,7 +421,7 @@ static void __init mmio_select_mitigation(void)
*/
if (boot_cpu_has_bug(X86_BUG_MDS) || (boot_cpu_has_bug(X86_BUG_TAA) &&
boot_cpu_has(X86_FEATURE_RTM)))
- static_branch_enable(&mds_user_clear);
+ setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
else
static_branch_enable(&mmio_stale_data_clear);

@@ -484,12 +481,12 @@ static void __init md_clear_update_mitigation(void)
if (cpu_mitigations_off())
return;

- if (!static_key_enabled(&mds_user_clear))
+ if (!boot_cpu_has(X86_FEATURE_CLEAR_CPU_BUF))
goto out;

/*
- * mds_user_clear is now enabled. Update MDS, TAA and MMIO Stale Data
- * mitigation, if necessary.
+ * X86_FEATURE_CLEAR_CPU_BUF is now enabled. Update MDS, TAA and MMIO
+ * Stale Data mitigation, if necessary.
*/
if (mds_mitigation == MDS_MITIGATION_OFF &&
boot_cpu_has_bug(X86_BUG_MDS)) {
diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c
index a0c551846b35..ebfff8dca661 100644
--- a/arch/x86/kernel/nmi.c
+++ b/arch/x86/kernel/nmi.c
@@ -551,8 +551,6 @@ DEFINE_IDTENTRY_RAW(exc_nmi)
if (this_cpu_dec_return(nmi_state))
goto nmi_restart;

- if (user_mode(regs))
- mds_user_clear_cpu_buffers();
if (IS_ENABLED(CONFIG_NMI_CHECK_CPU)) {
WRITE_ONCE(nsp->idt_seq, nsp->idt_seq + 1);
WARN_ON_ONCE(nsp->idt_seq & 0x1);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 72e3943f3693..24e8694b83fc 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7229,7 +7229,7 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
/* L1D Flush includes CPU buffer clear to mitigate MDS */
if (static_branch_unlikely(&vmx_l1d_should_flush))
vmx_l1d_flush(vcpu);
- else if (static_branch_unlikely(&mds_user_clear))
+ else if (cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF))
mds_clear_cpu_buffers();
else if (static_branch_unlikely(&mmio_stale_data_clear) &&
kvm_arch_has_assigned_device(vcpu->kvm))

--
2.34.1


2023-10-24 08:09:23

by Pawan Gupta

Subject: [PATCH v2 5/6] KVM: VMX: Use BT+JNC, i.e. EFLAGS.CF to select VMRESUME vs. VMLAUNCH

From: Sean Christopherson <[email protected]>

Use EFLAGS.CF instead of EFLAGS.ZF to track whether to use VMRESUME versus
VMLAUNCH. Freeing up EFLAGS.ZF will allow doing VERW, which clobbers ZF,
for MDS mitigations as late as possible without needing to duplicate VERW
for both paths.
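
The constraint being addressed can be sketched roughly as follows (this
combines the bit test here with the CLEAR_CPU_BUFFERS placement added in
the next patch; simplified, not the literal vmenter.S code):

	bt	$VMX_RUN_VMRESUME_SHIFT, %ebx	/* sets CF, leaves ZF alone */
	/* ... guest GPRs are loaded here, flags must survive ... */
	CLEAR_CPU_BUFFERS			/* VERW clobbers ZF, CF is untouched */
	jnc	.Lvmlaunch			/* CF still selects VMLAUNCH vs. VMRESUME */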

Signed-off-by: Sean Christopherson <[email protected]>
Signed-off-by: Pawan Gupta <[email protected]>
---
arch/x86/kvm/vmx/run_flags.h | 7 +++++--
arch/x86/kvm/vmx/vmenter.S | 6 +++---
2 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/vmx/run_flags.h b/arch/x86/kvm/vmx/run_flags.h
index edc3f16cc189..6a9bfdfbb6e5 100644
--- a/arch/x86/kvm/vmx/run_flags.h
+++ b/arch/x86/kvm/vmx/run_flags.h
@@ -2,7 +2,10 @@
#ifndef __KVM_X86_VMX_RUN_FLAGS_H
#define __KVM_X86_VMX_RUN_FLAGS_H

-#define VMX_RUN_VMRESUME (1 << 0)
-#define VMX_RUN_SAVE_SPEC_CTRL (1 << 1)
+#define VMX_RUN_VMRESUME_SHIFT 0
+#define VMX_RUN_SAVE_SPEC_CTRL_SHIFT 1
+
+#define VMX_RUN_VMRESUME BIT(VMX_RUN_VMRESUME_SHIFT)
+#define VMX_RUN_SAVE_SPEC_CTRL BIT(VMX_RUN_SAVE_SPEC_CTRL_SHIFT)

#endif /* __KVM_X86_VMX_RUN_FLAGS_H */
diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
index be275a0410a8..b3b13ec04bac 100644
--- a/arch/x86/kvm/vmx/vmenter.S
+++ b/arch/x86/kvm/vmx/vmenter.S
@@ -139,7 +139,7 @@ SYM_FUNC_START(__vmx_vcpu_run)
mov (%_ASM_SP), %_ASM_AX

/* Check if vmlaunch or vmresume is needed */
- test $VMX_RUN_VMRESUME, %ebx
+ bt $VMX_RUN_VMRESUME_SHIFT, %ebx

/* Load guest registers. Don't clobber flags. */
mov VCPU_RCX(%_ASM_AX), %_ASM_CX
@@ -161,8 +161,8 @@ SYM_FUNC_START(__vmx_vcpu_run)
/* Load guest RAX. This kills the @regs pointer! */
mov VCPU_RAX(%_ASM_AX), %_ASM_AX

- /* Check EFLAGS.ZF from 'test VMX_RUN_VMRESUME' above */
- jz .Lvmlaunch
+ /* Check EFLAGS.CF from the VMX_RUN_VMRESUME bit test above. */
+ jnc .Lvmlaunch

/*
* After a successful VMRESUME/VMLAUNCH, control flow "magically"

--
2.34.1


2023-10-24 08:09:44

by Pawan Gupta

Subject: [PATCH v2 2/6] x86/entry_64: Add VERW just before userspace transition

The mitigation for MDS is to use the VERW instruction to clear any secrets
in CPU buffers. Data from memory accesses done after VERW execution can
still remain in CPU buffers. It is safer to execute VERW late in the
return-to-user path to minimize the window in which kernel data can end up
in CPU buffers. There are not many kernel secrets to be had after
SWITCH_TO_USER_CR3.

Add support for deploying the VERW mitigation after user register state is
restored. This helps minimize the chances of kernel data ending up in
CPU buffers after executing VERW.

Note that the mitigation at the new location is not yet enabled.

Corner case not handled
=======================
Interrupts returning to kernel don't clear CPU buffers since the
exit-to-user path is expected to do that anyway. But there could be
a case when an NMI is generated in the kernel after the exit-to-user path
has cleared the buffers. This case is not handled and NMIs returning to
the kernel don't clear CPU buffers because:

1. It is rare to get an NMI after VERW, but before returning to userspace.
2. For an unprivileged user, there is no known way to make that NMI
less rare or target it.
3. It would take a large number of these precisely-timed NMIs to mount
an actual attack. There's presumably not enough bandwidth.
4. The NMI in question occurs after a VERW, i.e. when user state is
restored and most interesting data is already scrubbed. What's left
is only the data that the NMI touches, and that may or may not be of
any interest.

Suggested-by: Dave Hansen <[email protected]>
Signed-off-by: Pawan Gupta <[email protected]>
---
arch/x86/entry/entry_64.S | 11 +++++++++++
arch/x86/entry/entry_64_compat.S | 1 +
2 files changed, 12 insertions(+)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 43606de22511..9f97a8bd11e8 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -223,6 +223,7 @@ syscall_return_via_sysret:
SYM_INNER_LABEL(entry_SYSRETQ_unsafe_stack, SYM_L_GLOBAL)
ANNOTATE_NOENDBR
swapgs
+ CLEAR_CPU_BUFFERS
sysretq
SYM_INNER_LABEL(entry_SYSRETQ_end, SYM_L_GLOBAL)
ANNOTATE_NOENDBR
@@ -663,6 +664,7 @@ SYM_INNER_LABEL(swapgs_restore_regs_and_return_to_usermode, SYM_L_GLOBAL)
/* Restore RDI. */
popq %rdi
swapgs
+ CLEAR_CPU_BUFFERS
jmp .Lnative_iret


@@ -774,6 +776,8 @@ native_irq_return_ldt:
*/
popq %rax /* Restore user RAX */

+ CLEAR_CPU_BUFFERS
+
/*
* RSP now points to an ordinary IRET frame, except that the page
* is read-only and RSP[31:16] are preloaded with the userspace
@@ -1502,6 +1506,12 @@ nmi_restore:
std
movq $0, 5*8(%rsp) /* clear "NMI executing" */

+ /*
+ * Skip CLEAR_CPU_BUFFERS here, since it only helps in rare cases like
+ * NMI in kernel after user state is restored. For an unprivileged user
+ * these conditions are hard to meet.
+ */
+
/*
* iretq reads the "iret" frame and exits the NMI stack in a
* single instruction. We are returning to kernel mode, so this
@@ -1520,6 +1530,7 @@ SYM_CODE_START(ignore_sysret)
UNWIND_HINT_END_OF_STACK
ENDBR
mov $-ENOSYS, %eax
+ CLEAR_CPU_BUFFERS
sysretl
SYM_CODE_END(ignore_sysret)
#endif
diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
index 70150298f8bd..245697eb8485 100644
--- a/arch/x86/entry/entry_64_compat.S
+++ b/arch/x86/entry/entry_64_compat.S
@@ -271,6 +271,7 @@ SYM_INNER_LABEL(entry_SYSRETL_compat_unsafe_stack, SYM_L_GLOBAL)
xorl %r9d, %r9d
xorl %r10d, %r10d
swapgs
+ CLEAR_CPU_BUFFERS
sysretl
SYM_INNER_LABEL(entry_SYSRETL_compat_end, SYM_L_GLOBAL)
ANNOTATE_NOENDBR

--
2.34.1


2023-10-24 08:10:00

by Pawan Gupta

Subject: [PATCH v2 6/6] KVM: VMX: Move VERW closer to VMentry for MDS mitigation

During VMentry, VERW is executed to mitigate MDS. After VERW, any memory
access such as a register push onto the stack may put host data into
MDS-affected CPU buffers. A guest can then use MDS to sample host data.

Although the likelihood of secrets surviving in registers at the current
VERW callsite is low, it can't be ruled out. Harden the MDS mitigation by
moving VERW later in the VMentry path.

Note that the VERW for the MMIO Stale Data mitigation is unchanged because
of the complexity of a per-guest conditional VERW, which is not easy to
handle that late in asm with no GPRs available. If the CPU is also affected
by MDS, VERW is unconditionally executed late in asm regardless of the
guest having MMIO access.

Signed-off-by: Pawan Gupta <[email protected]>
---
arch/x86/kvm/vmx/vmenter.S | 4 ++++
arch/x86/kvm/vmx/vmx.c | 10 +++++++---
2 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
index b3b13ec04bac..c566035938cc 100644
--- a/arch/x86/kvm/vmx/vmenter.S
+++ b/arch/x86/kvm/vmx/vmenter.S
@@ -1,6 +1,7 @@
/* SPDX-License-Identifier: GPL-2.0 */
#include <linux/linkage.h>
#include <asm/asm.h>
+#include <asm/segment.h>
#include <asm/bitsperlong.h>
#include <asm/kvm_vcpu_regs.h>
#include <asm/nospec-branch.h>
@@ -161,6 +162,9 @@ SYM_FUNC_START(__vmx_vcpu_run)
/* Load guest RAX. This kills the @regs pointer! */
mov VCPU_RAX(%_ASM_AX), %_ASM_AX

+ /* Clobbers EFLAGS.ZF */
+ CLEAR_CPU_BUFFERS
+
/* Check EFLAGS.CF from the VMX_RUN_VMRESUME bit test above. */
jnc .Lvmlaunch

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 24e8694b83fc..e2234c0643e9 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -7226,13 +7226,17 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,

guest_state_enter_irqoff();

- /* L1D Flush includes CPU buffer clear to mitigate MDS */
+ /*
+ * L1D Flush includes a CPU buffer clear to mitigate MDS, but the VERW
+ * mitigation for MDS is done late in VMentry and is still
+ * executed in spite of the L1D Flush. This is because an extra VERW
+ * should not matter much after the big-hammer L1D Flush.
+ */
if (static_branch_unlikely(&vmx_l1d_should_flush))
vmx_l1d_flush(vcpu);
- else if (cpu_feature_enabled(X86_FEATURE_CLEAR_CPU_BUF))
- mds_clear_cpu_buffers();
else if (static_branch_unlikely(&mmio_stale_data_clear) &&
kvm_arch_has_assigned_device(vcpu->kvm))
+ /* MMIO mitigation is mutually exclusive to MDS mitigation later in asm */
mds_clear_cpu_buffers();

vmx_disable_fb_clear(vmx);

--
2.34.1


2023-10-24 10:37:19

by Peter Zijlstra

Subject: Re: [PATCH v2 1/6] x86/bugs: Add asm helpers for executing VERW

On Tue, Oct 24, 2023 at 01:08:21AM -0700, Pawan Gupta wrote:

> +.macro CLEAR_CPU_BUFFERS
> + ALTERNATIVE "jmp .Lskip_verw_\@;", "jmp .Ldo_verw_\@", X86_FEATURE_CLEAR_CPU_BUF
> + /* nopl __KERNEL_DS(%rax) */
> + .byte 0x0f, 0x1f, 0x80, 0x00, 0x00;
> +.Lverw_arg_\@: .word __KERNEL_DS;
> +.Ldo_verw_\@: verw _ASM_RIP(.Lverw_arg_\@);
> +.Lskip_verw_\@:
> +.endm

Why can't this be:

ALTERNATIVE "", "verw _ASM_RIP(mds_verw_sel)", X86_FEATURE_CLEAR_CPU_BUF

And have that mds_verw_sel thing be out-of-line ? That gives much better
code for the case where we don't need this.

2023-10-24 12:26:55

by Matthew Wilcox

Subject: Re: [PATCH v2 0/6] Delay VERW

On Tue, Oct 24, 2023 at 01:08:14AM -0700, Pawan Gupta wrote:
> Legacy instruction VERW was overloaded by some processors to clear

Can you raise a bug against the SDM? The VERR/VERW instruction is
out-of-order alphabetically; my copy of Volume 2 from June 2023 has it
placed between VEXPANDPS and VEXTRACTF128.

2023-10-24 16:35:56

by Pawan Gupta

Subject: Re: [PATCH v2 1/6] x86/bugs: Add asm helpers for executing VERW

On Tue, Oct 24, 2023 at 12:36:01PM +0200, Peter Zijlstra wrote:
> On Tue, Oct 24, 2023 at 01:08:21AM -0700, Pawan Gupta wrote:
>
> > +.macro CLEAR_CPU_BUFFERS
> > + ALTERNATIVE "jmp .Lskip_verw_\@;", "jmp .Ldo_verw_\@", X86_FEATURE_CLEAR_CPU_BUF
> > + /* nopl __KERNEL_DS(%rax) */
> > + .byte 0x0f, 0x1f, 0x80, 0x00, 0x00;
> > +.Lverw_arg_\@: .word __KERNEL_DS;
> > +.Ldo_verw_\@: verw _ASM_RIP(.Lverw_arg_\@);
> > +.Lskip_verw_\@:
> > +.endm
>
> Why can't this be:
>
> ALTERNATIVE "". "verw _ASM_RIP(mds_verw_sel)", X86_FEATURE_CLEAR_CPU_BUF
>
> And have that mds_verw_sel thing be out-of-line ?

I haven't done it this way because it's a tad fragile, as it depends on
modules being within 4GB of the kernel.

> That gives much better code for the case where we don't need this.

If this is the preferred way, let me test this and roll a new revision.

2023-10-24 16:36:59

by Peter Zijlstra

Subject: Re: [PATCH v2 1/6] x86/bugs: Add asm helpers for executing VERW

On Tue, Oct 24, 2023 at 09:35:15AM -0700, Pawan Gupta wrote:
> On Tue, Oct 24, 2023 at 12:36:01PM +0200, Peter Zijlstra wrote:
> > On Tue, Oct 24, 2023 at 01:08:21AM -0700, Pawan Gupta wrote:
> >
> > > +.macro CLEAR_CPU_BUFFERS
> > > + ALTERNATIVE "jmp .Lskip_verw_\@;", "jmp .Ldo_verw_\@", X86_FEATURE_CLEAR_CPU_BUF
> > > + /* nopl __KERNEL_DS(%rax) */
> > > + .byte 0x0f, 0x1f, 0x80, 0x00, 0x00;
> > > +.Lverw_arg_\@: .word __KERNEL_DS;
> > > +.Ldo_verw_\@: verw _ASM_RIP(.Lverw_arg_\@);
> > > +.Lskip_verw_\@:
> > > +.endm
> >
> > Why can't this be:
> >
> > ALTERNATIVE "". "verw _ASM_RIP(mds_verw_sel)", X86_FEATURE_CLEAR_CPU_BUF
> >
> > And have that mds_verw_sel thing be out-of-line ?
>
> I haven't done this way because its a tad bit fragile as it depends on
> modules being within 4GB of kernel.

We 100% rely on that *everywhere*, nothing fragile about it.

2023-10-24 16:51:29

by Pawan Gupta

Subject: Re: [PATCH v2 1/6] x86/bugs: Add asm helpers for executing VERW

On Tue, Oct 24, 2023 at 06:36:21PM +0200, Peter Zijlstra wrote:
> On Tue, Oct 24, 2023 at 09:35:15AM -0700, Pawan Gupta wrote:
> > On Tue, Oct 24, 2023 at 12:36:01PM +0200, Peter Zijlstra wrote:
> > > On Tue, Oct 24, 2023 at 01:08:21AM -0700, Pawan Gupta wrote:
> > >
> > > > +.macro CLEAR_CPU_BUFFERS
> > > > + ALTERNATIVE "jmp .Lskip_verw_\@;", "jmp .Ldo_verw_\@", X86_FEATURE_CLEAR_CPU_BUF
> > > > + /* nopl __KERNEL_DS(%rax) */
> > > > + .byte 0x0f, 0x1f, 0x80, 0x00, 0x00;
> > > > +.Lverw_arg_\@: .word __KERNEL_DS;
> > > > +.Ldo_verw_\@: verw _ASM_RIP(.Lverw_arg_\@);
> > > > +.Lskip_verw_\@:
> > > > +.endm
> > >
> > > Why can't this be:
> > >
> > > ALTERNATIVE "". "verw _ASM_RIP(mds_verw_sel)", X86_FEATURE_CLEAR_CPU_BUF
> > >
> > > And have that mds_verw_sel thing be out-of-line ?
> >
> > I haven't done this way because its a tad bit fragile as it depends on
> > modules being within 4GB of kernel.
>
> We 100% rely on that *everywhere*, nothing fragile about it.

I didn't know that, doing it this way then. Thanks.

2023-10-24 17:02:20

by Pawan Gupta

Subject: Re: [PATCH v2 0/6] Delay VERW

On Tue, Oct 24, 2023 at 01:26:00PM +0100, Matthew Wilcox wrote:
> On Tue, Oct 24, 2023 at 01:08:14AM -0700, Pawan Gupta wrote:
> > Legacy instruction VERW was overloaded by some processors to clear
>
> Can you raise a bug against the SDM? The VERR/VERW instruction is
> out-of-order alphabetically; my copy of Volume 2 from June 2023 has it
> placed between VEXPANDPS and VEXTRACTF128.

:)

Thanks for reporting; I have notified the relevant people. Hopefully
this will be fixed in the next SDM release.

2023-10-24 17:03:49

by Peter Zijlstra

Subject: Re: [PATCH v2 1/6] x86/bugs: Add asm helpers for executing VERW

On Tue, Oct 24, 2023 at 09:45:20AM -0700, Pawan Gupta wrote:

> > > modules being within 4GB of kernel.

FWIW, it's 2G: it's an s32 displacement, so the highest address can
jump 2G down, while the lowest address can jump 2G up, leaving a 2G
directly addressable range.

And yeah, we ensure kernel text and modules are inside that 2G range.
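
In concrete terms, a RIP-relative reference such as the one suggested above
is encoded with a signed 32-bit displacement, so the target has to lie
within roughly +/-2GiB of the referencing instruction; keeping kernel text
and modules in one 2G region guarantees that. Illustrative example:

	verw	_ASM_RIP(mds_verw_sel)	/* the disp32 is signed, so
					   mds_verw_sel must lie within the
					   same 2G window as this code */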

2023-10-24 17:31:10

by Pawan Gupta

Subject: Re: [PATCH v2 1/6] x86/bugs: Add asm helpers for executing VERW

On Tue, Oct 24, 2023 at 07:02:48PM +0200, Peter Zijlstra wrote:
> On Tue, Oct 24, 2023 at 09:45:20AM -0700, Pawan Gupta wrote:
>
> > > > modules being within 4GB of kernel.
>
> FWIW, it's 2G, it's a s32 displacement, the highest most address can
> jump 2g down, while the lowest most address can jump 2g up. Leaving a 2G
> directly addressable range.
>
> And yeah, we ensure kernel text and modules are inside that 2G range.

Ah, okay. Thanks for the info.

2023-10-24 18:29:14

by H. Peter Anvin

Subject: Re: [PATCH v2 1/6] x86/bugs: Add asm helpers for executing VERW

On October 24, 2023 10:02:48 AM PDT, Peter Zijlstra <[email protected]> wrote:
>On Tue, Oct 24, 2023 at 09:45:20AM -0700, Pawan Gupta wrote:
>
>> > > modules being within 4GB of kernel.
>
>FWIW, it's 2G, it's a s32 displacement, the highest most address can
>jump 2g down, while the lowest most address can jump 2g up. Leaving a 2G
>directly addressable range.
>
>And yeah, we ensure kernel text and modules are inside that 2G range.

To be specific, we don't require that it is located at any particular *physical* addresses, but all modules including the root module are remapped into the [-2GiB,0) range. If we didn't do that, modules would have to be compiled with the pic memory model rather than the kernel memory model which is what they currently are. This would add substantial overhead due to the need for a GOT (the PLT is optional if all symbols are resolved at load time.)

The kernel is different from user space objects since it is always fully loaded into physical memory and is never paged or shared. Therefore, inline relocations, which break sharing and create dirty pages in user space, have zero execution cost in the kernel; the only overhead to modules other than load time (including the runtime linking) is that modules can't realistically be mapped using large page entries.

2023-10-24 18:49:22

by Luck, Tony

Subject: RE: [PATCH v2 1/6] x86/bugs: Add asm helpers for executing VERW

> the only overhead to modules other than load time (including the runtime linking) is that modules can't realistically be mapped using large page entries.

If there were some significant win for using large pages, couldn't the
kernel pre-allocate some 2MB pages in the [-2GiB,0) range? Boot parameter
for how many (perhaps two for separate code/data pages). First few loaded
modules get to use that space until it is all gone.

It would all be quite messy if those modules were later unloaded/reloaded
... so there would have to be some compelling benchmarks to justify
the complexity.

That's probably why Peter said "can't realistically".

-Tony

2023-10-24 19:15:44

by H. Peter Anvin

Subject: RE: [PATCH v2 1/6] x86/bugs: Add asm helpers for executing VERW

On October 24, 2023 11:49:07 AM PDT, "Luck, Tony" <[email protected]> wrote:
>> the only overhead to modules other than load time (including the runtime linking) is that modules can't realistically be mapped using large page entries.
>
>If there were some significant win for using large pages, couldn't the
>kernel pre-allocate some 2MB pages in the [-2GiB,0) range? Boot parameter
>for how many (perhaps two for separate code/data pages). First few loaded
>modules get to use that space until it is all gone.
>
>It would all be quite messy if those modules were later unloaded/reloaded
>... so there would have to be some compelling benchmarks to justify
>the complexity.
>
>That's probably why Peter said "can't realistically".
>
>-Tony
>

Sure it could, but it would mean the kernel is sitting on an average of 6 MB of unusable memory. It would also mean that unloaded modules would create holes in that memory which would have to be managed.

2023-10-24 19:40:44

by Luck, Tony

Subject: RE: [PATCH v2 1/6] x86/bugs: Add asm helpers for executing VERW

> Sure it could, but it would mean the kernel is sitting on an average of 6 MB of unusable memory. It would also mean that unloaded modules would create holes in that memory which would have to be managed.

On my Fedora38 desktop:

$ lsmod | awk '{ bytes += $2 } END {print bytes/(1024*1024)}'
21.0859

Lots more than 6MB memory already essentially pinned by loaded modules.

$ head -3 /proc/meminfo
MemTotal: 65507344 kB
MemFree: 56762336 kB
MemAvailable: 63358552 kB

Pinning 20 or so Mbytes isn't going to make a dent in that free memory.

Managing the holes for unloading/reloading modules adds some complexity ... but shouldn't be awful.

If this code managed at finer granularity than "page", it would save some memory.

$ lsmod | wc -l
123

All those modules rounding text/data up to 4K boundaries is wasting a bunch of it.

-Tony


2023-10-24 20:31:06

by H. Peter Anvin

Subject: RE: [PATCH v2 1/6] x86/bugs: Add asm helpers for executing VERW

On October 24, 2023 12:40:02 PM PDT, "Luck, Tony" <[email protected]> wrote:
>> Sure it could, but it would mean the kernel is sitting on an average of 6 MB of unusable memory. It would also mean that unloaded modules would create holes in that memory which would have to be managed.
>
>On my Fedora38 desktop:
>
>$ lsmod | awk '{ bytes += $2 } END {print bytes/(1024*1024)}'
>21.0859
>
>Lots more than 6MB memory already essentially pinned by loaded modules.
>
>$ head -3 /proc/meminfo
>MemTotal: 65507344 kB
>MemFree: 56762336 kB
>MemAvailable: 63358552 kB
>
>Pinning 20 or so Mbytes isn't going to make a dent in that free memory.
>
>Managing the holes for unloading/reloading modules adds some complexity ... but shouldn't be awful.
>
>If this code managed at finer granularity than "page", it would save some memory.
>
>$ lsmod | wc -l
>123
>
>All those modules rounding text/data up to 4K boundaries is wasting a bunch of it.
>
>-Tony
>
>
>

Sure, but is it worth the effort?

2023-10-24 22:31:30

by Peter Zijlstra

Subject: Re: [PATCH v2 1/6] x86/bugs: Add asm helpers for executing VERW

On Tue, Oct 24, 2023 at 11:27:36AM -0700, H. Peter Anvin wrote:

> the only overhead to modules other than load time (including the
> runtime linking) is that modules can't realistically be mapped using
> large page entries.

Song is working on that. Currently the module allocator is split between
all (5 IIRC) different page permissions required; the next step is using
large page pools for those.

2023-10-24 22:31:55

by Peter Zijlstra

Subject: Re: [PATCH v2 1/6] x86/bugs: Add asm helpers for executing VERW

On Tue, Oct 24, 2023 at 06:49:07PM +0000, Luck, Tony wrote:
> > the only overhead to modules other than load time (including the runtime linking) is that modules can't realistically be mapped using large page entries.
>
> If there were some significant win for using large pages, couldn't the

There is. The 4k TLBs really hurt. Thomas and I ran into that when
doing the retbleed call depth crud. Similarly, Song ran into it with BPF;
they really want the eBPF JIT to write to large pages.

2023-10-25 04:02:18

by Pawan Gupta

Subject: Re: [PATCH v2 1/6] x86/bugs: Add asm helpers for executing VERW

On Tue, Oct 24, 2023 at 12:36:01PM +0200, Peter Zijlstra wrote:
> On Tue, Oct 24, 2023 at 01:08:21AM -0700, Pawan Gupta wrote:
>
> > +.macro CLEAR_CPU_BUFFERS
> > + ALTERNATIVE "jmp .Lskip_verw_\@;", "jmp .Ldo_verw_\@", X86_FEATURE_CLEAR_CPU_BUF
> > + /* nopl __KERNEL_DS(%rax) */
> > + .byte 0x0f, 0x1f, 0x80, 0x00, 0x00;
> > +.Lverw_arg_\@: .word __KERNEL_DS;
> > +.Ldo_verw_\@: verw _ASM_RIP(.Lverw_arg_\@);
> > +.Lskip_verw_\@:
> > +.endm
>
> Why can't this be:
>
> ALTERNATIVE "". "verw _ASM_RIP(mds_verw_sel)", X86_FEATURE_CLEAR_CPU_BUF
>
> And have that mds_verw_sel thing be out-of-line ? That gives much better
> code for the case where we don't need this.

Overall, the code generated with this approach is much better. But in my
testing I am seeing an issue with runtime patching in 32-bit mode when
mitigations are off. Instead of NOPs I am seeing a random instruction. I
don't see any issue in 64-bit mode.

config1: mitigations=on, 32-bit mode, post-boot

entry_SYSENTER_32:
...
0xc1a3748e <+222>: pop %eax
0xc1a3748f <+223>: verw 0xc1a38240
0xc1a37496 <+230>: sti
0xc1a37497 <+231>: sysexit

---------------------------------------------

config2: mitigations=off, 32-bit mode, post-boot

entry_SYSENTER_32:
...
0xc1a3748e <+222>: pop %eax
0xc1a3748f <+223>: lea 0x0(%esi,%eiz,1),%esi <---- Doesn't look right
0xc1a37496 <+230>: sti
0xc1a37497 <+231>: sysexit

---------------------------------------------

config3: 32-bit mode, pre-boot objdump

entry_SYSENTER_32:
...
c8e: 58 pop %eax
c8f: 90 nop
c90: 90 nop
c91: 90 nop
c92: 90 nop
c93: 90 nop
c94: 90 nop
c95: 90 nop
c96: fb sti
c97: 0f 35 sysexit

These tests were done with below patch:

-----8<-----
From: Pawan Gupta <[email protected]>
Date: Mon, 23 Oct 2023 15:04:56 -0700
Subject: [PATCH] x86/bugs: Add asm helpers for executing VERW

MDS mitigation requires clearing the CPU buffers before returning to
user. This needs to be done late in the exit-to-user path. The current
location of VERW leaves a possibility of kernel data ending up in CPU
buffers for memory accesses done after VERW, such as:

1. Kernel data accessed by an NMI between VERW and return-to-user can
remain in CPU buffers (since an NMI returning to the kernel does not
execute VERW to clear CPU buffers).
2. Alyssa reported that after VERW is executed,
CONFIG_GCC_PLUGIN_STACKLEAK=y scrubs the stack used by a system
call. Memory accesses during stack scrubbing can move kernel stack
contents into CPU buffers.
3. When caller-saved registers are restored after a return from the
function executing VERW, the kernel stack accesses can remain in
CPU buffers (since they occur after VERW).

To fix this, VERW needs to be moved very late in the exit-to-user path.

In preparation for moving VERW to entry/exit asm code, create macros
that can be used in asm. Also make them depend on a new feature flag
X86_FEATURE_CLEAR_CPU_BUF.

Reported-by: Alyssa Milburn <[email protected]>
Suggested-by: Andrew Cooper <[email protected]>
Signed-off-by: Pawan Gupta <[email protected]>
---
arch/x86/include/asm/cpufeatures.h | 2 +-
arch/x86/include/asm/nospec-branch.h | 24 ++++++++++++++++++++++++
2 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 58cb9495e40f..f21fc0f12737 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -308,10 +308,10 @@
#define X86_FEATURE_SMBA (11*32+21) /* "" Slow Memory Bandwidth Allocation */
#define X86_FEATURE_BMEC (11*32+22) /* "" Bandwidth Monitoring Event Configuration */
#define X86_FEATURE_USER_SHSTK (11*32+23) /* Shadow stack support for user mode applications */
-
#define X86_FEATURE_SRSO (11*32+24) /* "" AMD BTB untrain RETs */
#define X86_FEATURE_SRSO_ALIAS (11*32+25) /* "" AMD BTB untrain RETs through aliasing */
#define X86_FEATURE_IBPB_ON_VMEXIT (11*32+26) /* "" Issue an IBPB only on VMEXIT */
+#define X86_FEATURE_CLEAR_CPU_BUF (11*32+27) /* "" Clear CPU buffers */

/* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */
#define X86_FEATURE_AVX_VNNI (12*32+ 4) /* AVX VNNI instructions */
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index c55cc243592e..ed8218e2d9a7 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -13,6 +13,7 @@
#include <asm/unwind_hints.h>
#include <asm/percpu.h>
#include <asm/current.h>
+#include <asm/segment.h>

/*
* Call depth tracking for Intel SKL CPUs to address the RSB underflow
@@ -329,6 +330,29 @@
#endif
.endm

+/*
+ * Macros to execute the VERW instruction, which mitigates transient data
+ * sampling attacks such as MDS. On affected systems a microcode update
+ * overloaded VERW to also clear the CPU buffers. VERW clobbers EFLAGS.ZF.
+ *
+ * Note: Only the memory operand variant of VERW clears the CPU buffers.
+ */
+.pushsection .rodata
+.align 64
+mds_verw_sel:
+ .word __KERNEL_DS
+ .byte 0xcc
+.align 64
+.popsection
+
+.macro EXEC_VERW
+ verw _ASM_RIP(mds_verw_sel)
+.endm
+
+.macro CLEAR_CPU_BUFFERS
+ ALTERNATIVE "", __stringify(EXEC_VERW), X86_FEATURE_CLEAR_CPU_BUF
+.endm
+
#else /* __ASSEMBLY__ */

#define ANNOTATE_RETPOLINE_SAFE \
--
2.34.1

2023-10-25 06:57:05

by Peter Zijlstra

Subject: Re: [PATCH v2 1/6] x86/bugs: Add asm helpers for executing VERW

On Tue, Oct 24, 2023 at 09:00:29PM -0700, Pawan Gupta wrote:

> config1: mitigations=on, 32-bit mode, post-boot
>
> entry_SYSENTER_32:
> ...
> 0xc1a3748e <+222>: pop %eax
> 0xc1a3748f <+223>: verw 0xc1a38240
> 0xc1a37496 <+230>: sti
> 0xc1a37497 <+231>: sysexit
>
> ---------------------------------------------
>
> config2: mitigations=off, 32-bit mode, post-boot
>
> entry_SYSENTER_32:
> ...
> 0xc1a3748e <+222>: pop %eax
> 0xc1a3748f <+223>: lea 0x0(%esi,%eiz,1),%esi <---- Doesn't look right
> 0xc1a37496 <+230>: sti
> 0xc1a37497 <+231>: sysexit
>
> ---------------------------------------------
>
> config3: 32-bit mode, pre-boot objdump
>
> entry_SYSENTER_32:
> ...
> c8e: 58 pop %eax
> c8f: 90 nop
> c90: 90 nop
> c91: 90 nop
> c92: 90 nop
> c93: 90 nop
> c94: 90 nop
> c95: 90 nop
> c96: fb sti
> c97: 0f 35 sysexit
>

If you look at arch/x86/include/asm/nops.h, you'll find (for 32bit):

* 7: leal 0x0(%esi,%eiz,1),%esi

Which reads as:

load-effective-address of %esi[0] into %esi

which is, of course, just %esi.

You can also get this from GAS using:

.nops 7

which results in:

0: 8d b4 26 00 00 00 00 lea 0x0(%esi,%eiz,1),%esi

It is basically abusing bizarro x86 instruction encoding rules to create
a 7 byte nop without using NOPL. If you really want to know, volume 2
of the SDM has a ton of stuff on instruction encoding, also the interweb
:-)

2023-10-25 06:59:02

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH v2 1/6] x86/bugs: Add asm helpers for executing VERW

On Tue, Oct 24, 2023 at 09:00:29PM -0700, Pawan Gupta wrote:

> diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
> index c55cc243592e..ed8218e2d9a7 100644
> --- a/arch/x86/include/asm/nospec-branch.h
> +++ b/arch/x86/include/asm/nospec-branch.h
> @@ -13,6 +13,7 @@
> #include <asm/unwind_hints.h>
> #include <asm/percpu.h>
> #include <asm/current.h>
> +#include <asm/segment.h>
>
> /*
> * Call depth tracking for Intel SKL CPUs to address the RSB underflow
> @@ -329,6 +330,29 @@
> #endif
> .endm
>
> +/*
> + * Macros to execute the VERW instruction, which mitigates transient data
> + * sampling attacks such as MDS. On affected systems a microcode update
> + * overloaded VERW to also clear the CPU buffers. VERW clobbers EFLAGS.ZF.
> + *
> + * Note: Only the memory operand variant of VERW clears the CPU buffers.
> + */
> +.pushsection .rodata
> +.align 64
> +mds_verw_sel:
> + .word __KERNEL_DS
> + .byte 0xcc
> +.align 64
> +.popsection

This should not be in a header file; you'll get an instance of this per
translation unit, which is not what you want.

> +
> +.macro EXEC_VERW
> + verw _ASM_RIP(mds_verw_sel)
> +.endm
> +
> +.macro CLEAR_CPU_BUFFERS
> + ALTERNATIVE "", __stringify(EXEC_VERW), X86_FEATURE_CLEAR_CPU_BUF
> +.endm
> +
> #else /* __ASSEMBLY__ */
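
One way to address this, as a sketch only (symbol and file placement are
illustrative, and the symbol would also need exporting if the macro is used
from modules such as KVM), is to emit the selector once from a single .S
file and keep only the reference in the header:

	/* defined exactly once in an arch/x86 assembly file */
	.pushsection .rodata
	.align 64
	.globl mds_verw_sel
	mds_verw_sel:
		.word __KERNEL_DS
	.popsection

	/* nospec-branch.h then only references the symbol */
	.macro CLEAR_CPU_BUFFERS
		ALTERNATIVE "", "verw _ASM_RIP(mds_verw_sel)", X86_FEATURE_CLEAR_CPU_BUF
	.endm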

2023-10-25 08:11:36

by Chao Gao

Subject: Re: [PATCH v2 6/6] KVM: VMX: Move VERW closer to VMentry for MDS mitigation

On Tue, Oct 24, 2023 at 01:08:53AM -0700, Pawan Gupta wrote:
>During VMentry VERW is executed to mitigate MDS. After VERW, any memory
>access like register push onto stack may put host data in MDS affected
>CPU buffers. A guest can then use MDS to sample host data.
>
>Although likelihood of secrets surviving in registers at current VERW
>callsite is less, but it can't be ruled out. Harden the MDS mitigation
>by moving the VERW mitigation late in VMentry path.
>
>Note that VERW for MMIO Stale Data mitigation is unchanged because of
>the complexity of per-guest conditional VERW which is not easy to handle
>that late in asm with no GPRs available. If the CPU is also affected by
>MDS, VERW is unconditionally executed late in asm regardless of guest
>having MMIO access.
>
>Signed-off-by: Pawan Gupta <[email protected]>
>---
> arch/x86/kvm/vmx/vmenter.S | 4 ++++
> arch/x86/kvm/vmx/vmx.c | 10 +++++++---
> 2 files changed, 11 insertions(+), 3 deletions(-)
>
>diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
>index b3b13ec04bac..c566035938cc 100644
>--- a/arch/x86/kvm/vmx/vmenter.S
>+++ b/arch/x86/kvm/vmx/vmenter.S
>@@ -1,6 +1,7 @@
> /* SPDX-License-Identifier: GPL-2.0 */
> #include <linux/linkage.h>
> #include <asm/asm.h>
>+#include <asm/segment.h>

This header is already included a few lines below:

#include <asm/nospec-branch.h>
#include <asm/percpu.h>
#include <asm/segment.h> <---

> #include <asm/bitsperlong.h>
> #include <asm/kvm_vcpu_regs.h>
> #include <asm/nospec-branch.h>

2023-10-25 08:16:13

by Nikolay Borisov

Subject: Re: [PATCH v2 5/6] KVM: VMX: Use BT+JNC, i.e. EFLAGS.CF to select VMRESUME vs. VMLAUNCH



On 24.10.23 г. 11:08 ч., Pawan Gupta wrote:
> From: Sean Christopherson <[email protected]>
>
> Use EFLAGS.CF instead of EFLAGS.ZF to track whether to use VMRESUME versus
> VMLAUNCH. Freeing up EFLAGS.ZF will allow doing VERW, which clobbers ZF,
> for MDS mitigations as late as possible without needing to duplicate VERW
> for both paths.
>
> Signed-off-by: Sean Christopherson <[email protected]>
> Signed-off-by: Pawan Gupta <[email protected]>

Reviewed-by: Nikolay Borisov <[email protected]>

2023-10-25 15:09:39

by Pawan Gupta

Subject: Re: [PATCH v2 1/6] x86/bugs: Add asm helpers for executing VERW

On Wed, Oct 25, 2023 at 08:56:10AM +0200, Peter Zijlstra wrote:
> > config3: 32-bit mode, pre-boot objdump
> >
> > entry_SYSENTER_32:
> > ...
> > c8e: 58 pop %eax
> > c8f: 90 nop
> > c90: 90 nop
> > c91: 90 nop
> > c92: 90 nop
> > c93: 90 nop
> > c94: 90 nop
> > c95: 90 nop
> > c96: fb sti
> > c97: 0f 35 sysexit
> >
>
> If you look at arch/x86/include/asm/nops.h, you'll find (for 32bit):
>
> * 7: leal 0x0(%esi,%eiz,1),%esi
>
> Which reads as:
>
> load-effective-address of %esi[0] into %esi

Wow, never imagined that this would be one of the magician's tricks. I
will go read up on why it is better than NOPL.

2023-10-25 15:10:35

by Pawan Gupta

Subject: Re: [PATCH v2 1/6] x86/bugs: Add asm helpers for executing VERW

On Wed, Oct 25, 2023 at 08:58:18AM +0200, Peter Zijlstra wrote:
> > +.pushsection .rodata
> > +.align 64
> > +mds_verw_sel:
> > + .word __KERNEL_DS
> > + .byte 0xcc
> > +.align 64
> > +.popsection
>
> This should not be in a header file, you'll get an instance of this per
> translation unit, not what you want.

Agh, sorry I missed it.

2023-10-25 15:16:46

by Pawan Gupta

Subject: Re: [PATCH v2 6/6] KVM: VMX: Move VERW closer to VMentry for MDS mitigation

On Wed, Oct 25, 2023 at 03:47:37PM +0800, Chao Gao wrote:
> >diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
> >index b3b13ec04bac..c566035938cc 100644
> >--- a/arch/x86/kvm/vmx/vmenter.S
> >+++ b/arch/x86/kvm/vmx/vmenter.S
> >@@ -1,6 +1,7 @@
> > /* SPDX-License-Identifier: GPL-2.0 */
> > #include <linux/linkage.h>
> > #include <asm/asm.h>
> >+#include <asm/segment.h>
>
> This header is already included a few lines below:

Thanks, will remove it.

2023-10-25 22:08:57

by kernel test robot

Subject: Re: [PATCH v2 4/6] x86/bugs: Use ALTERNATIVE() instead of mds_user_clear static key

Hi Pawan,

kernel test robot noticed the following build warnings:

[auto build test WARNING on 05d3ef8bba77c1b5f98d941d8b2d4aeab8118ef1]

url: https://github.com/intel-lab-lkp/linux/commits/Pawan-Gupta/x86-bugs-Add-asm-helpers-for-executing-VERW/20231024-161029
base: 05d3ef8bba77c1b5f98d941d8b2d4aeab8118ef1
patch link: https://lore.kernel.org/r/20231024-delay-verw-v2-4-f1881340c807%40linux.intel.com
patch subject: [PATCH v2 4/6] x86/bugs: Use ALTERNATIVE() instead of mds_user_clear static key
reproduce: (https://download.01.org/0day-ci/archive/20231026/[email protected]/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <[email protected]>
| Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/

All warnings (new ones prefixed by >>):

>> Documentation/arch/x86/mds.rst:153: WARNING: Unexpected section title.

vim +153 Documentation/arch/x86/mds.rst

141
142 When transitioning from kernel to user space the CPU buffers are flushed
143 on affected CPUs when the mitigation is not disabled on the kernel
144 command line. The mitigation is enabled through the feature flag
145 X86_FEATURE_CLEAR_CPU_BUF.
146
147 The mitigation is invoked just before transitioning to userspace after
148 user registers are restored. This is done to minimize the window in
149 which kernel data could be accessed after VERW e.g. via an NMI after
150 VERW.
151
152 Corner case not handled
> 153 ^^^^^^^^^^^^^^^^^^^^^^^
154 Interrupts returning to kernel don't clear CPUs buffers since the
155 exit-to-user path is expected to do that anyways. But, there could be
156 a case when an NMI is generated in kernel after the exit-to-user path
157 has cleared the buffers. This case is not handled and NMI returning to
158 kernel don't clear CPU buffers because:
159
160 1. It is rare to get an NMI after VERW, but before returning to userspace.
161 2. For an unprivileged user, there is no known way to make that NMI
162 less rare or target it.
163 3. It would take a large number of these precisely-timed NMIs to mount
164 an actual attack. There's presumably not enough bandwidth.
165 4. The NMI in question occurs after a VERW, i.e. when user state is
166 restored and most interesting data is already scrubbed. Whats left
167 is only the data that NMI touches, and that may or may not be of
168 any interest.
169
170

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki