2022-04-22 21:42:16

by Sean Christopherson

Subject: [PATCH] KVM: x86/mmu: Use enable_mmio_caching to track if MMIO caching is enabled

Clear enable_mmio_caching if hardware can't support MMIO caching and use
the dedicated flag to detect if MMIO caching is enabled instead of
assuming shadow_mmio_value==0 means MMIO caching is disabled. TDX will
use a zero value even when caching is enabled, and is_mmio_spte() isn't
so hot that it needs to avoid an extra memory access, i.e. there's no
reason to be super clever. And the clever approach may not even be more
performant, e.g. gcc-11 lands the extra check on a non-zero value inline,
but puts the enable_mmio_caching check out-of-line, i.e. it avoids the few
extra uops for non-MMIO SPTEs.

Cc: Isaku Yamahata <[email protected]>
Cc: Kai Huang <[email protected]>
Signed-off-by: Sean Christopherson <[email protected]>
---
arch/x86/kvm/mmu/mmu.c | 2 +-
arch/x86/kvm/mmu/spte.c | 5 ++++-
arch/x86/kvm/mmu/spte.h | 4 +++-
3 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 69a30d6d1e2b..01bbe7744342 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2991,7 +2991,7 @@ static bool handle_abnormal_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fa
* touching the shadow page tables as attempting to install an
* MMIO SPTE will just be an expensive nop.
*/
- if (unlikely(!shadow_mmio_value)) {
+ if (unlikely(!enable_mmio_caching)) {
*ret_val = RET_PF_EMULATE;
return true;
}
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 4739b53c9734..eedfc599a457 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -19,7 +19,7 @@
#include <asm/memtype.h>
#include <asm/vmx.h>

-static bool __read_mostly enable_mmio_caching = true;
+bool __read_mostly enable_mmio_caching = true;
module_param_named(mmio_caching, enable_mmio_caching, bool, 0444);

u64 __read_mostly shadow_host_writable_mask;
@@ -351,6 +351,9 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask)
if (WARN_ON(mmio_value && (REMOVED_SPTE & mmio_mask) == mmio_value))
mmio_value = 0;

+ if (!mmio_value)
+ enable_mmio_caching = false;
+
shadow_mmio_value = mmio_value;
shadow_mmio_mask = mmio_mask;
shadow_mmio_access_mask = access_mask;
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 73f12615416f..ad8ce3c5d083 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -5,6 +5,8 @@

#include "mmu_internal.h"

+extern bool __read_mostly enable_mmio_caching;
+
/*
* A MMU present SPTE is backed by actual memory and may or may not be present
* in hardware. E.g. MMIO SPTEs are not considered present. Use bit 11, as it
@@ -210,7 +212,7 @@ extern u8 __read_mostly shadow_phys_bits;
static inline bool is_mmio_spte(u64 spte)
{
return (spte & shadow_mmio_mask) == shadow_mmio_value &&
- likely(shadow_mmio_value);
+ likely(enable_mmio_caching);
}

static inline bool is_shadow_present_pte(u64 pte)

base-commit: 150866cd0ec871c765181d145aa0912628289c8a
--
2.36.0.rc0.470.gd361397f0d-goog


2022-07-28 14:55:29

by Michael Roth

Subject: Re: Possible 5.19 regression for systems with 52-bit physical address support

Hi Sean,

With this patch applied, MMIO caching ends up disabled on AMD
processors that support 52-bit physical addresses. This breaks SEV-ES
and SNP, since they rely on the MMIO reserved bit to generate the
appropriate NAE MMIO exit event.

The failure can also be reproduced on Milan by disabling mmio_caching
via the KVM module parameter.
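
(For reference, that's mmio_caching=0 when loading kvm.ko, or
kvm.mmio_caching=0 on the kernel command line, since the parameter is
registered via module_param_named() in spte.c.)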

In the case of AMD, guests use a separate physical address range, and
so there are still reserved bits available to make use of MMIO caching.
This adjustment happens in svm_adjust_mmio_mask(), but since the
enable_mmio_caching flag is already false by that point, any attempt to
update the masks gets ignored by kvm_mmu_set_mmio_spte_mask().
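
For context, the SVM-side adjustment looks roughly like the following
(condensed from memory, so please double-check against the actual
svm.c):

static __init void svm_adjust_mmio_mask(void)
{
	unsigned int enc_bit, mask_bit;
	u64 msr, mask;

	/* Nothing to adjust if memory encryption isn't supported/enabled. */
	if (cpuid_eax(0x80000000) < 0x8000001f)
		return;

	rdmsrl(MSR_AMD64_SYSCFG, msr);
	if (!(msr & MSR_AMD64_SYSCFG_MEM_ENCRYPT))
		return;

	enc_bit = cpuid_ebx(0x8000001f) & 0x3f;
	mask_bit = boot_cpu_data.x86_phys_bits;

	/* The C-bit steals a guest physical address bit. */
	if (enc_bit == mask_bit)
		mask_bit++;

	/*
	 * Bits at and above the reduced guest physical address width are
	 * reserved even on a host with 52-bit MAXPHYADDR, so they can hold
	 * the MMIO value; only clear the mask if no reserved bits remain.
	 */
	mask = (mask_bit < 52) ? rsvd_bits(mask_bit, 51) | PT_PRESENT_MASK : 0;

	kvm_mmu_set_mmio_spte_mask(mask, mask, PT_WRITABLE_MASK | PT_USER_MASK);
}

The point being that even when the host supports 52 physical address
bits, the encryption bit reduces the guest physical address width, so
the SVM code can still carve out reserved bits for MMIO caching.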

Would adding a 'force' parameter to kvm_mmu_set_mmio_spte_mask() that
svm_adjust_mmio_mask() can set to override enable_mmio_caching be a
reasonable fix, or should we take a different approach? A rough sketch
of what I have in mind is below.
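
Something like the following, e.g. (untested, and the parameter name
and placement are just guesses on my part):

void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask,
				bool force)
{
	BUG_ON((u64)(unsigned)access_mask != access_mask);

	/*
	 * Let vendor code re-enable MMIO caching: the generic setup may
	 * have cleared enable_mmio_caching because no reserved bits were
	 * usable, but svm_adjust_mmio_mask() knows better for SEV-ES/SNP.
	 */
	if (force && mmio_value)
		enable_mmio_caching = true;

	if (!enable_mmio_caching)
		mmio_value = 0;

	/*
	 * Disable MMIO caching if the MMIO value collides with the bits
	 * that are used to hold the relocated/removed SPTE.
	 */
	if (WARN_ON(mmio_value && (REMOVED_SPTE & mmio_mask) == mmio_value))
		mmio_value = 0;

	if (!mmio_value)
		enable_mmio_caching = false;

	shadow_mmio_value = mmio_value;
	shadow_mmio_mask = mmio_mask;
	shadow_mmio_access_mask = access_mask;
}

All other callers would pass force=false, and this would still need to
respect an explicit mmio_caching=0 from the admin, which the sketch
glosses over.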

Thanks!

-Mike