2023-02-08 20:54:37

by Sean Christopherson

Subject: [PATCH 0/2] KVM: VMX: Stub out enable_evmcs static key

Stub out the enable_evmcs static key for CONFIG_HYPERV=n. gcc (as of
gcc-12) isn't clever enough to elide the nop placeholder when there's no
code guarded by a static branch. With gcc-12, because of the vast number
of VMCS accesses, eliminating the nops reduces the size of kvm-intel.ko by
~7.5% (200KiB).

Patch 1 is a tangentially related cleanup.

Applies on `[email protected]:kvm-x86/linux.git vmx`.

Sean Christopherson (2):
  KVM: nVMX: Move EVMCS1_SUPPORTED_* macros to hyperv.c
  KVM: VMX: Stub out enable_evmcs static key for CONFIG_HYPERV=n

 arch/x86/kvm/vmx/hyperv.c  | 107 ++++++++++++++++++++++++++++++++++-
 arch/x86/kvm/vmx/hyperv.h  | 115 +++----------------------------------
 arch/x86/kvm/vmx/vmx.c     |  15 +++--
 arch/x86/kvm/vmx/vmx_ops.h |  22 +++----
 4 files changed, 132 insertions(+), 127 deletions(-)


base-commit: 93827a0a36396f2fd6368a54a020f420c8916e9b
--
2.39.1.519.gcb327c4b5f-goog



2023-02-08 20:54:41

by Sean Christopherson

Subject: [PATCH 1/2] KVM: nVMX: Move EVMCS1_SUPPORTED_* macros to hyperv.c

Move the macros that define the set of VMCS controls that are supported
by eVMCS1 from hyperv.h to hyperv.c, i.e. make them "private". The
macros should never be consumed directly by KVM at large, as the "final"
set of supported controls depends on guest CPUID.

No functional change intended.

Signed-off-by: Sean Christopherson <[email protected]>
---
arch/x86/kvm/vmx/hyperv.c | 105 ++++++++++++++++++++++++++++++++++++++
arch/x86/kvm/vmx/hyperv.h | 105 --------------------------------------
2 files changed, 105 insertions(+), 105 deletions(-)

diff --git a/arch/x86/kvm/vmx/hyperv.c b/arch/x86/kvm/vmx/hyperv.c
index 22daca752797..b6748055c586 100644
--- a/arch/x86/kvm/vmx/hyperv.c
+++ b/arch/x86/kvm/vmx/hyperv.c
@@ -13,6 +13,111 @@

#define CC KVM_NESTED_VMENTER_CONSISTENCY_CHECK

+/*
+ * Enlightened VMCSv1 doesn't support these:
+ *
+ * POSTED_INTR_NV = 0x00000002,
+ * GUEST_INTR_STATUS = 0x00000810,
+ * APIC_ACCESS_ADDR = 0x00002014,
+ * POSTED_INTR_DESC_ADDR = 0x00002016,
+ * EOI_EXIT_BITMAP0 = 0x0000201c,
+ * EOI_EXIT_BITMAP1 = 0x0000201e,
+ * EOI_EXIT_BITMAP2 = 0x00002020,
+ * EOI_EXIT_BITMAP3 = 0x00002022,
+ * GUEST_PML_INDEX = 0x00000812,
+ * PML_ADDRESS = 0x0000200e,
+ * VM_FUNCTION_CONTROL = 0x00002018,
+ * EPTP_LIST_ADDRESS = 0x00002024,
+ * VMREAD_BITMAP = 0x00002026,
+ * VMWRITE_BITMAP = 0x00002028,
+ *
+ * TSC_MULTIPLIER = 0x00002032,
+ * PLE_GAP = 0x00004020,
+ * PLE_WINDOW = 0x00004022,
+ * VMX_PREEMPTION_TIMER_VALUE = 0x0000482E,
+ *
+ * Currently unsupported in KVM:
+ * GUEST_IA32_RTIT_CTL = 0x00002814,
+ */
+#define EVMCS1_SUPPORTED_PINCTRL \
+ (PIN_BASED_ALWAYSON_WITHOUT_TRUE_MSR | \
+ PIN_BASED_EXT_INTR_MASK | \
+ PIN_BASED_NMI_EXITING | \
+ PIN_BASED_VIRTUAL_NMIS)
+
+#define EVMCS1_SUPPORTED_EXEC_CTRL \
+ (CPU_BASED_ALWAYSON_WITHOUT_TRUE_MSR | \
+ CPU_BASED_HLT_EXITING | \
+ CPU_BASED_CR3_LOAD_EXITING | \
+ CPU_BASED_CR3_STORE_EXITING | \
+ CPU_BASED_UNCOND_IO_EXITING | \
+ CPU_BASED_MOV_DR_EXITING | \
+ CPU_BASED_USE_TSC_OFFSETTING | \
+ CPU_BASED_MWAIT_EXITING | \
+ CPU_BASED_MONITOR_EXITING | \
+ CPU_BASED_INVLPG_EXITING | \
+ CPU_BASED_RDPMC_EXITING | \
+ CPU_BASED_INTR_WINDOW_EXITING | \
+ CPU_BASED_CR8_LOAD_EXITING | \
+ CPU_BASED_CR8_STORE_EXITING | \
+ CPU_BASED_RDTSC_EXITING | \
+ CPU_BASED_TPR_SHADOW | \
+ CPU_BASED_USE_IO_BITMAPS | \
+ CPU_BASED_MONITOR_TRAP_FLAG | \
+ CPU_BASED_USE_MSR_BITMAPS | \
+ CPU_BASED_NMI_WINDOW_EXITING | \
+ CPU_BASED_PAUSE_EXITING | \
+ CPU_BASED_ACTIVATE_SECONDARY_CONTROLS)
+
+#define EVMCS1_SUPPORTED_2NDEXEC \
+ (SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE | \
+ SECONDARY_EXEC_WBINVD_EXITING | \
+ SECONDARY_EXEC_ENABLE_VPID | \
+ SECONDARY_EXEC_ENABLE_EPT | \
+ SECONDARY_EXEC_UNRESTRICTED_GUEST | \
+ SECONDARY_EXEC_DESC | \
+ SECONDARY_EXEC_ENABLE_RDTSCP | \
+ SECONDARY_EXEC_ENABLE_INVPCID | \
+ SECONDARY_EXEC_XSAVES | \
+ SECONDARY_EXEC_RDSEED_EXITING | \
+ SECONDARY_EXEC_RDRAND_EXITING | \
+ SECONDARY_EXEC_TSC_SCALING | \
+ SECONDARY_EXEC_ENABLE_USR_WAIT_PAUSE | \
+ SECONDARY_EXEC_PT_USE_GPA | \
+ SECONDARY_EXEC_PT_CONCEAL_VMX | \
+ SECONDARY_EXEC_BUS_LOCK_DETECTION | \
+ SECONDARY_EXEC_NOTIFY_VM_EXITING | \
+ SECONDARY_EXEC_ENCLS_EXITING)
+
+#define EVMCS1_SUPPORTED_3RDEXEC (0ULL)
+
+#define EVMCS1_SUPPORTED_VMEXIT_CTRL \
+ (VM_EXIT_ALWAYSON_WITHOUT_TRUE_MSR | \
+ VM_EXIT_SAVE_DEBUG_CONTROLS | \
+ VM_EXIT_ACK_INTR_ON_EXIT | \
+ VM_EXIT_HOST_ADDR_SPACE_SIZE | \
+ VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL | \
+ VM_EXIT_SAVE_IA32_PAT | \
+ VM_EXIT_LOAD_IA32_PAT | \
+ VM_EXIT_SAVE_IA32_EFER | \
+ VM_EXIT_LOAD_IA32_EFER | \
+ VM_EXIT_CLEAR_BNDCFGS | \
+ VM_EXIT_PT_CONCEAL_PIP | \
+ VM_EXIT_CLEAR_IA32_RTIT_CTL)
+
+#define EVMCS1_SUPPORTED_VMENTRY_CTRL \
+ (VM_ENTRY_ALWAYSON_WITHOUT_TRUE_MSR | \
+ VM_ENTRY_LOAD_DEBUG_CONTROLS | \
+ VM_ENTRY_IA32E_MODE | \
+ VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL | \
+ VM_ENTRY_LOAD_IA32_PAT | \
+ VM_ENTRY_LOAD_IA32_EFER | \
+ VM_ENTRY_LOAD_BNDCFGS | \
+ VM_ENTRY_PT_CONCEAL_PIP | \
+ VM_ENTRY_LOAD_IA32_RTIT_CTL)
+
+#define EVMCS1_SUPPORTED_VMFUNC (0)
+
DEFINE_STATIC_KEY_FALSE(enable_evmcs);

#define EVMCS1_OFFSET(x) offsetof(struct hv_enlightened_vmcs, x)
diff --git a/arch/x86/kvm/vmx/hyperv.h b/arch/x86/kvm/vmx/hyperv.h
index 78d17667e7ec..1299143d00df 100644
--- a/arch/x86/kvm/vmx/hyperv.h
+++ b/arch/x86/kvm/vmx/hyperv.h
@@ -22,111 +22,6 @@ DECLARE_STATIC_KEY_FALSE(enable_evmcs);

#define KVM_EVMCS_VERSION 1

-/*
- * Enlightened VMCSv1 doesn't support these:
- *
- * POSTED_INTR_NV = 0x00000002,
- * GUEST_INTR_STATUS = 0x00000810,
- * APIC_ACCESS_ADDR = 0x00002014,
- * POSTED_INTR_DESC_ADDR = 0x00002016,
- * EOI_EXIT_BITMAP0 = 0x0000201c,
- * EOI_EXIT_BITMAP1 = 0x0000201e,
- * EOI_EXIT_BITMAP2 = 0x00002020,
- * EOI_EXIT_BITMAP3 = 0x00002022,
- * GUEST_PML_INDEX = 0x00000812,
- * PML_ADDRESS = 0x0000200e,
- * VM_FUNCTION_CONTROL = 0x00002018,
- * EPTP_LIST_ADDRESS = 0x00002024,
- * VMREAD_BITMAP = 0x00002026,
- * VMWRITE_BITMAP = 0x00002028,
- *
- * TSC_MULTIPLIER = 0x00002032,
- * PLE_GAP = 0x00004020,
- * PLE_WINDOW = 0x00004022,
- * VMX_PREEMPTION_TIMER_VALUE = 0x0000482E,
- *
- * Currently unsupported in KVM:
- * GUEST_IA32_RTIT_CTL = 0x00002814,
- */
-#define EVMCS1_SUPPORTED_PINCTRL \
- (PIN_BASED_ALWAYSON_WITHOUT_TRUE_MSR | \
- PIN_BASED_EXT_INTR_MASK | \
- PIN_BASED_NMI_EXITING | \
- PIN_BASED_VIRTUAL_NMIS)
-
-#define EVMCS1_SUPPORTED_EXEC_CTRL \
- (CPU_BASED_ALWAYSON_WITHOUT_TRUE_MSR | \
- CPU_BASED_HLT_EXITING | \
- CPU_BASED_CR3_LOAD_EXITING | \
- CPU_BASED_CR3_STORE_EXITING | \
- CPU_BASED_UNCOND_IO_EXITING | \
- CPU_BASED_MOV_DR_EXITING | \
- CPU_BASED_USE_TSC_OFFSETTING | \
- CPU_BASED_MWAIT_EXITING | \
- CPU_BASED_MONITOR_EXITING | \
- CPU_BASED_INVLPG_EXITING | \
- CPU_BASED_RDPMC_EXITING | \
- CPU_BASED_INTR_WINDOW_EXITING | \
- CPU_BASED_CR8_LOAD_EXITING | \
- CPU_BASED_CR8_STORE_EXITING | \
- CPU_BASED_RDTSC_EXITING | \
- CPU_BASED_TPR_SHADOW | \
- CPU_BASED_USE_IO_BITMAPS | \
- CPU_BASED_MONITOR_TRAP_FLAG | \
- CPU_BASED_USE_MSR_BITMAPS | \
- CPU_BASED_NMI_WINDOW_EXITING | \
- CPU_BASED_PAUSE_EXITING | \
- CPU_BASED_ACTIVATE_SECONDARY_CONTROLS)
-
-#define EVMCS1_SUPPORTED_2NDEXEC \
- (SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE | \
- SECONDARY_EXEC_WBINVD_EXITING | \
- SECONDARY_EXEC_ENABLE_VPID | \
- SECONDARY_EXEC_ENABLE_EPT | \
- SECONDARY_EXEC_UNRESTRICTED_GUEST | \
- SECONDARY_EXEC_DESC | \
- SECONDARY_EXEC_ENABLE_RDTSCP | \
- SECONDARY_EXEC_ENABLE_INVPCID | \
- SECONDARY_EXEC_XSAVES | \
- SECONDARY_EXEC_RDSEED_EXITING | \
- SECONDARY_EXEC_RDRAND_EXITING | \
- SECONDARY_EXEC_TSC_SCALING | \
- SECONDARY_EXEC_ENABLE_USR_WAIT_PAUSE | \
- SECONDARY_EXEC_PT_USE_GPA | \
- SECONDARY_EXEC_PT_CONCEAL_VMX | \
- SECONDARY_EXEC_BUS_LOCK_DETECTION | \
- SECONDARY_EXEC_NOTIFY_VM_EXITING | \
- SECONDARY_EXEC_ENCLS_EXITING)
-
-#define EVMCS1_SUPPORTED_3RDEXEC (0ULL)
-
-#define EVMCS1_SUPPORTED_VMEXIT_CTRL \
- (VM_EXIT_ALWAYSON_WITHOUT_TRUE_MSR | \
- VM_EXIT_SAVE_DEBUG_CONTROLS | \
- VM_EXIT_ACK_INTR_ON_EXIT | \
- VM_EXIT_HOST_ADDR_SPACE_SIZE | \
- VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL | \
- VM_EXIT_SAVE_IA32_PAT | \
- VM_EXIT_LOAD_IA32_PAT | \
- VM_EXIT_SAVE_IA32_EFER | \
- VM_EXIT_LOAD_IA32_EFER | \
- VM_EXIT_CLEAR_BNDCFGS | \
- VM_EXIT_PT_CONCEAL_PIP | \
- VM_EXIT_CLEAR_IA32_RTIT_CTL)
-
-#define EVMCS1_SUPPORTED_VMENTRY_CTRL \
- (VM_ENTRY_ALWAYSON_WITHOUT_TRUE_MSR | \
- VM_ENTRY_LOAD_DEBUG_CONTROLS | \
- VM_ENTRY_IA32E_MODE | \
- VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL | \
- VM_ENTRY_LOAD_IA32_PAT | \
- VM_ENTRY_LOAD_IA32_EFER | \
- VM_ENTRY_LOAD_BNDCFGS | \
- VM_ENTRY_PT_CONCEAL_PIP | \
- VM_ENTRY_LOAD_IA32_RTIT_CTL)
-
-#define EVMCS1_SUPPORTED_VMFUNC (0)
-
struct evmcs_field {
u16 offset;
u16 clean_field;
--
2.39.1.519.gcb327c4b5f-goog


2023-02-08 20:54:51

by Sean Christopherson

Subject: [PATCH 2/2] KVM: VMX: Stub out enable_evmcs static key for CONFIG_HYPERV=n

Wrap enable_evmcs in a helper and stub it out when CONFIG_HYPERV=n in
order to eliminate the static branch nop placeholders. clang-14 is clever
enough to elide the nop, but gcc-12 is not. Stubbing out the key reduces
the size of kvm-intel.ko by ~7.5% (200KiB) when compiled with gcc-12
(there are a _lot_ of VMCS accesses throughout KVM).

Signed-off-by: Sean Christopherson <[email protected]>
---
 arch/x86/kvm/vmx/hyperv.c  |  4 ++--
 arch/x86/kvm/vmx/hyperv.h  | 10 ++++++++--
 arch/x86/kvm/vmx/vmx.c     | 15 +++++++--------
 arch/x86/kvm/vmx/vmx_ops.h | 22 +++++++++++-----------
 4 files changed, 28 insertions(+), 23 deletions(-)

diff --git a/arch/x86/kvm/vmx/hyperv.c b/arch/x86/kvm/vmx/hyperv.c
index b6748055c586..274fbd38c64e 100644
--- a/arch/x86/kvm/vmx/hyperv.c
+++ b/arch/x86/kvm/vmx/hyperv.c
@@ -118,8 +118,6 @@

#define EVMCS1_SUPPORTED_VMFUNC (0)

-DEFINE_STATIC_KEY_FALSE(enable_evmcs);
-
#define EVMCS1_OFFSET(x) offsetof(struct hv_enlightened_vmcs, x)
#define EVMCS1_FIELD(number, name, clean_field)[ROL16(number, 6)] = \
{EVMCS1_OFFSET(name), clean_field}
@@ -611,6 +609,8 @@ int nested_evmcs_check_controls(struct vmcs12 *vmcs12)
}

#if IS_ENABLED(CONFIG_HYPERV)
+DEFINE_STATIC_KEY_FALSE(enable_evmcs);
+
/*
* KVM on Hyper-V always uses the latest known eVMCSv1 revision, the assumption
* is: in case a feature has corresponding fields in eVMCS described and it was
diff --git a/arch/x86/kvm/vmx/hyperv.h b/arch/x86/kvm/vmx/hyperv.h
index 1299143d00df..a0b6d05dba5d 100644
--- a/arch/x86/kvm/vmx/hyperv.h
+++ b/arch/x86/kvm/vmx/hyperv.h
@@ -16,8 +16,6 @@

struct vmcs_config;

-DECLARE_STATIC_KEY_FALSE(enable_evmcs);
-
#define current_evmcs ((struct hv_enlightened_vmcs *)this_cpu_read(current_vmcs))

#define KVM_EVMCS_VERSION 1
@@ -69,6 +67,13 @@ static inline u64 evmcs_read_any(struct hv_enlightened_vmcs *evmcs,

#if IS_ENABLED(CONFIG_HYPERV)

+DECLARE_STATIC_KEY_FALSE(enable_evmcs);
+
+static __always_inline bool is_evmcs_enabled(void)
+{
+ return static_branch_unlikely(&enable_evmcs);
+}
+
static __always_inline int get_evmcs_offset(unsigned long field,
u16 *clean_field)
{
@@ -158,6 +163,7 @@ static inline void evmcs_load(u64 phys_addr)

void evmcs_sanitize_exec_ctrls(struct vmcs_config *vmcs_conf);
#else /* !IS_ENABLED(CONFIG_HYPERV) */
+static __always_inline bool is_evmcs_enabled(void) { return false; }
static __always_inline void evmcs_write64(unsigned long field, u64 value) {}
static __always_inline void evmcs_write32(unsigned long field, u32 value) {}
static __always_inline void evmcs_write16(unsigned long field, u16 value) {}
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 33614ee2cd67..9f0098c9ad64 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -595,7 +595,7 @@ static void hv_reset_evmcs(void)
{
struct hv_vp_assist_page *vp_ap;

- if (!static_branch_unlikely(&enable_evmcs))
+ if (!is_evmcs_enabled())
return;

/*
@@ -2818,8 +2818,7 @@ static int vmx_hardware_enable(void)
* This can happen if we hot-added a CPU but failed to allocate
* VP assist page for it.
*/
- if (static_branch_unlikely(&enable_evmcs) &&
- !hv_get_vp_assist_page(cpu))
+ if (is_evmcs_enabled() && !hv_get_vp_assist_page(cpu))
return -EFAULT;

intel_pt_handle_vmx(1);
@@ -2871,7 +2870,7 @@ struct vmcs *alloc_vmcs_cpu(bool shadow, int cpu, gfp_t flags)
memset(vmcs, 0, vmcs_config.size);

/* KVM supports Enlightened VMCS v1 only */
- if (static_branch_unlikely(&enable_evmcs))
+ if (is_evmcs_enabled())
vmcs->hdr.revision_id = KVM_EVMCS_VERSION;
else
vmcs->hdr.revision_id = vmcs_config.revision_id;
@@ -2966,7 +2965,7 @@ static __init int alloc_kvm_area(void)
* still be marked with revision_id reported by
* physical CPU.
*/
- if (static_branch_unlikely(&enable_evmcs))
+ if (is_evmcs_enabled())
vmcs->hdr.revision_id = vmcs_config.revision_id;

per_cpu(vmxarea, cpu) = vmcs;
@@ -3936,7 +3935,7 @@ static void vmx_msr_bitmap_l01_changed(struct vcpu_vmx *vmx)
* 'Enlightened MSR Bitmap' feature L0 needs to know that MSR
* bitmap has changed.
*/
- if (IS_ENABLED(CONFIG_HYPERV) && static_branch_unlikely(&enable_evmcs)) {
+ if (is_evmcs_enabled()) {
struct hv_enlightened_vmcs *evmcs = (void *)vmx->vmcs01.vmcs;

if (evmcs->hv_enlightenments_control.msr_bitmap)
@@ -7313,7 +7312,7 @@ static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu)
vmx_vcpu_enter_exit(vcpu, __vmx_vcpu_run_flags(vmx));

/* All fields are clean at this point */
- if (static_branch_unlikely(&enable_evmcs)) {
+ if (is_evmcs_enabled()) {
current_evmcs->hv_clean_fields |=
HV_VMX_ENLIGHTENED_CLEAN_FIELD_ALL;

@@ -7443,7 +7442,7 @@ static int vmx_vcpu_create(struct kvm_vcpu *vcpu)
* feature only for vmcs01, KVM currently isn't equipped to realize any
* performance benefits from enabling it for vmcs02.
*/
- if (IS_ENABLED(CONFIG_HYPERV) && static_branch_unlikely(&enable_evmcs) &&
+ if (is_evmcs_enabled() &&
(ms_hyperv.nested_features & HV_X64_NESTED_MSR_BITMAP)) {
struct hv_enlightened_vmcs *evmcs = (void *)vmx->vmcs01.vmcs;

diff --git a/arch/x86/kvm/vmx/vmx_ops.h b/arch/x86/kvm/vmx/vmx_ops.h
index db95bde52998..6b072db47fdc 100644
--- a/arch/x86/kvm/vmx/vmx_ops.h
+++ b/arch/x86/kvm/vmx/vmx_ops.h
@@ -147,7 +147,7 @@ static __always_inline unsigned long __vmcs_readl(unsigned long field)
static __always_inline u16 vmcs_read16(unsigned long field)
{
vmcs_check16(field);
- if (static_branch_unlikely(&enable_evmcs))
+ if (is_evmcs_enabled())
return evmcs_read16(field);
return __vmcs_readl(field);
}
@@ -155,7 +155,7 @@ static __always_inline u16 vmcs_read16(unsigned long field)
static __always_inline u32 vmcs_read32(unsigned long field)
{
vmcs_check32(field);
- if (static_branch_unlikely(&enable_evmcs))
+ if (is_evmcs_enabled())
return evmcs_read32(field);
return __vmcs_readl(field);
}
@@ -163,7 +163,7 @@ static __always_inline u32 vmcs_read32(unsigned long field)
static __always_inline u64 vmcs_read64(unsigned long field)
{
vmcs_check64(field);
- if (static_branch_unlikely(&enable_evmcs))
+ if (is_evmcs_enabled())
return evmcs_read64(field);
#ifdef CONFIG_X86_64
return __vmcs_readl(field);
@@ -175,7 +175,7 @@ static __always_inline u64 vmcs_read64(unsigned long field)
static __always_inline unsigned long vmcs_readl(unsigned long field)
{
vmcs_checkl(field);
- if (static_branch_unlikely(&enable_evmcs))
+ if (is_evmcs_enabled())
return evmcs_read64(field);
return __vmcs_readl(field);
}
@@ -222,7 +222,7 @@ static __always_inline void __vmcs_writel(unsigned long field, unsigned long val
static __always_inline void vmcs_write16(unsigned long field, u16 value)
{
vmcs_check16(field);
- if (static_branch_unlikely(&enable_evmcs))
+ if (is_evmcs_enabled())
return evmcs_write16(field, value);

__vmcs_writel(field, value);
@@ -231,7 +231,7 @@ static __always_inline void vmcs_write16(unsigned long field, u16 value)
static __always_inline void vmcs_write32(unsigned long field, u32 value)
{
vmcs_check32(field);
- if (static_branch_unlikely(&enable_evmcs))
+ if (is_evmcs_enabled())
return evmcs_write32(field, value);

__vmcs_writel(field, value);
@@ -240,7 +240,7 @@ static __always_inline void vmcs_write32(unsigned long field, u32 value)
static __always_inline void vmcs_write64(unsigned long field, u64 value)
{
vmcs_check64(field);
- if (static_branch_unlikely(&enable_evmcs))
+ if (is_evmcs_enabled())
return evmcs_write64(field, value);

__vmcs_writel(field, value);
@@ -252,7 +252,7 @@ static __always_inline void vmcs_write64(unsigned long field, u64 value)
static __always_inline void vmcs_writel(unsigned long field, unsigned long value)
{
vmcs_checkl(field);
- if (static_branch_unlikely(&enable_evmcs))
+ if (is_evmcs_enabled())
return evmcs_write64(field, value);

__vmcs_writel(field, value);
@@ -262,7 +262,7 @@ static __always_inline void vmcs_clear_bits(unsigned long field, u32 mask)
{
BUILD_BUG_ON_MSG(__builtin_constant_p(field) && ((field) & 0x6000) == 0x2000,
"vmcs_clear_bits does not support 64-bit fields");
- if (static_branch_unlikely(&enable_evmcs))
+ if (is_evmcs_enabled())
return evmcs_write32(field, evmcs_read32(field) & ~mask);

__vmcs_writel(field, __vmcs_readl(field) & ~mask);
@@ -272,7 +272,7 @@ static __always_inline void vmcs_set_bits(unsigned long field, u32 mask)
{
BUILD_BUG_ON_MSG(__builtin_constant_p(field) && ((field) & 0x6000) == 0x2000,
"vmcs_set_bits does not support 64-bit fields");
- if (static_branch_unlikely(&enable_evmcs))
+ if (is_evmcs_enabled())
return evmcs_write32(field, evmcs_read32(field) | mask);

__vmcs_writel(field, __vmcs_readl(field) | mask);
@@ -289,7 +289,7 @@ static inline void vmcs_load(struct vmcs *vmcs)
{
u64 phys_addr = __pa(vmcs);

- if (static_branch_unlikely(&enable_evmcs))
+ if (is_evmcs_enabled())
return evmcs_load(phys_addr);

vmx_asm1(vmptrld, "m"(phys_addr), vmcs, phys_addr);
--
2.39.1.519.gcb327c4b5f-goog


2023-02-09 13:09:47

by Vitaly Kuznetsov

Subject: Re: [PATCH 1/2] KVM: nVMX: Move EVMCS1_SUPPORTED_* macros to hyperv.c

Sean Christopherson <[email protected]> writes:

> Move the macros that define the set of VMCS controls that are supported
> by eVMCS1 from hyperv.h to hyperv.c, i.e. make them "private". The
> macros should never be consumed directly by KVM at-large since the "final"
> set of supported controls depends on guest CPUID.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <[email protected]>
> [...]

Reviewed-by: Vitaly Kuznetsov <[email protected]>

--
Vitaly


2023-02-09 13:15:12

by Vitaly Kuznetsov

Subject: Re: [PATCH 2/2] KVM: VMX: Stub out enable_evmcs static key for CONFIG_HYPERV=n

Sean Christopherson <[email protected]> writes:

> Wrap enable_evmcs in a helper and stub it out when CONFIG_HYPERV=n in
> order to eliminate the static branch nop placeholders. clang-14 is clever
> enough to elide the nop, but gcc-12 is not. Stubbing out the key reduces
> the size of kvm-intel.ko by ~7.5% (200KiB) when compiled with gcc-12
> (there are a _lot_ of VMCS accesses throughout KVM).
>
> Signed-off-by: Sean Christopherson <[email protected]>
> [...]
> diff --git a/arch/x86/kvm/vmx/hyperv.h b/arch/x86/kvm/vmx/hyperv.h
> index 1299143d00df..a0b6d05dba5d 100644
> --- a/arch/x86/kvm/vmx/hyperv.h
> +++ b/arch/x86/kvm/vmx/hyperv.h
> @@ -16,8 +16,6 @@
>
> struct vmcs_config;
>
> -DECLARE_STATIC_KEY_FALSE(enable_evmcs);
> -
> #define current_evmcs ((struct hv_enlightened_vmcs *)this_cpu_read(current_vmcs))
>
> #define KVM_EVMCS_VERSION 1
> @@ -69,6 +67,13 @@ static inline u64 evmcs_read_any(struct hv_enlightened_vmcs *evmcs,
>
> #if IS_ENABLED(CONFIG_HYPERV)
>
> +DECLARE_STATIC_KEY_FALSE(enable_evmcs);
> +
> +static __always_inline bool is_evmcs_enabled(void)
> +{
> + return static_branch_unlikely(&enable_evmcs);
> +}

I have a suggestion. While the 'is_evmcs_enabled' name is certainly not
worse than 'enable_evmcs', it may still be confusing as it's not clear
which eVMCS is meant: are we running a guest using eVMCS or using eVMCS
ourselves? So what if we rename this to a very explicit 'is_kvm_on_hyperv()'
and hide the implementation details (i.e. 'evmcs') inside?

> [...]
> @@ -240,7 +240,7 @@ static __always_inline void vmcs_write32(unsigned long field, u32 value)
> static __always_inline void vmcs_write64(unsigned long field, u64 value)
> {
> vmcs_check64(field);
> - if (static_branch_unlikely(&enable_evmcs))
> + if (is_evmcs_enabled())
> return evmcs_write64(field, value);
>
> __vmcs_writel(field, value);
> @@ -252,7 +252,7 @@ static __always_inline void vmcs_write64(unsigned long field, u64 value)
> static __always_inline void vmcs_writel(unsigned long field, unsigned long value)
> {
> vmcs_checkl(field);
> - if (static_branch_unlikely(&enable_evmcs))
> + if (is_evmcs_enabled())
> return evmcs_write64(field, value);
>
> __vmcs_writel(field, value);
> @@ -262,7 +262,7 @@ static __always_inline void vmcs_clear_bits(unsigned long field, u32 mask)
> {
> BUILD_BUG_ON_MSG(__builtin_constant_p(field) && ((field) & 0x6000) == 0x2000,
> "vmcs_clear_bits does not support 64-bit fields");
> - if (static_branch_unlikely(&enable_evmcs))
> + if (is_evmcs_enabled())
> return evmcs_write32(field, evmcs_read32(field) & ~mask);
>
> __vmcs_writel(field, __vmcs_readl(field) & ~mask);
> @@ -272,7 +272,7 @@ static __always_inline void vmcs_set_bits(unsigned long field, u32 mask)
> {
> BUILD_BUG_ON_MSG(__builtin_constant_p(field) && ((field) & 0x6000) == 0x2000,
> "vmcs_set_bits does not support 64-bit fields");
> - if (static_branch_unlikely(&enable_evmcs))
> + if (is_evmcs_enabled())
> return evmcs_write32(field, evmcs_read32(field) | mask);
>
> __vmcs_writel(field, __vmcs_readl(field) | mask);
> @@ -289,7 +289,7 @@ static inline void vmcs_load(struct vmcs *vmcs)
> {
> u64 phys_addr = __pa(vmcs);
>
> - if (static_branch_unlikely(&enable_evmcs))
> + if (is_evmcs_enabled())
> return evmcs_load(phys_addr);
>
> vmx_asm1(vmptrld, "m"(phys_addr), vmcs, phys_addr);

With or without the change:

Reviewed-by: Vitaly Kuznetsov <[email protected]>

--
Vitaly


2023-02-09 13:57:41

by Paolo Bonzini

Subject: Re: [PATCH 2/2] KVM: VMX: Stub out enable_evmcs static key for CONFIG_HYPERV=n

On 2/9/23 14:13, Vitaly Kuznetsov wrote:
>> +static __always_inline bool is_evmcs_enabled(void)
>> +{
>> + return static_branch_unlikely(&enable_evmcs);
>> +}
> I have a suggestion. While 'is_evmcs_enabled' name is certainly not
> worse than 'enable_evmcs', it may still be confusing as it's not clear
> which eVMCS is meant: are we running a guest using eVMCS or using eVMCS
> ourselves? So what if we rename this to a very explicit 'is_kvm_on_hyperv()'
> and hide the implementation details (i.e. 'evmcs') inside?

I prefer keeping eVMCS in the name, but I agree a better name could be
something like kvm_uses_evmcs()?

Paolo


2023-02-10 01:13:38

by Sean Christopherson

Subject: Re: [PATCH 2/2] KVM: VMX: Stub out enable_evmcs static key for CONFIG_HYPERV=n

On Thu, Feb 09, 2023, Paolo Bonzini wrote:
> On 2/9/23 14:13, Vitaly Kuznetsov wrote:
> > > +static __always_inline bool is_evmcs_enabled(void)
> > > +{
> > > + return static_branch_unlikely(&enable_evmcs);
> > > +}
> > I have a suggestion. While 'is_evmcs_enabled' name is certainly not
> > worse than 'enable_evmcs', it may still be confusing as it's not clear
> > which eVMCS is meant: are we running a guest using eVMCS or using eVMCS
> > ourselves? So what if we rename this to a very explicit 'is_kvm_on_hyperv()'
> > and hide the implementation details (i.e. 'evmcs') inside?
>
> I prefer keeping eVMCS in the name,

+1, IIUC KVM can run on Hyper-V without eVMCS being enabled.

> but I agree a better name could be something like kvm_uses_evmcs()?

kvm_is_using_evmcs()?

2023-02-10 09:56:10

by Vitaly Kuznetsov

Subject: Re: [PATCH 2/2] KVM: VMX: Stub out enable_evmcs static key for CONFIG_HYPERV=n

Sean Christopherson <[email protected]> writes:

> On Thu, Feb 09, 2023, Paolo Bonzini wrote:
>> On 2/9/23 14:13, Vitaly Kuznetsov wrote:
>> > > +static __always_inline bool is_evmcs_enabled(void)
>> > > +{
>> > > + return static_branch_unlikely(&enable_evmcs);
>> > > +}
>> > I have a suggestion. While 'is_evmcs_enabled' name is certainly not
>> > worse than 'enable_evmcs', it may still be confusing as it's not clear
>> > which eVMCS is meant: are we running a guest using eVMCS or using eVMCS
>> > ourselves? So what if we rename this to a very explicit 'is_kvm_on_hyperv()'
>> > and hide the implementation details (i.e. 'evmcs') inside?
>>
>> I prefer keeping eVMCS in the name,
>
> +1, IIUC KVM can run on Hyper-V without eVMCS being enabled.
>
>> but I agree a better name could be something like kvm_uses_evmcs()?
>
> kvm_is_using_evmcs()?
>

Sounds good to me!

--
Vitaly