From: Vitaly Kuznetsov
To: Paolo Bonzini
Cc: kvm@vger.kernel.org, x86@kernel.org, Radim Krčmář, "K. Y.
Srinivasan", Haiyang Zhang, Stephen Hemminger, "Michael Kelley (EOSG)",
	Mohammed Gamal, Cathy Avery, Bandan Das, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3 7/7] x86/kvm: use Enlightened VMCS when running on Hyper-V
References: <20180309140249.2840-1-vkuznets@redhat.com>
	<20180309140249.2840-8-vkuznets@redhat.com>
	<2747cc75-b549-61bb-9c1b-0f554a49b536@redhat.com>
Date: Thu, 15 Mar 2018 10:56:34 +0100
In-Reply-To: <2747cc75-b549-61bb-9c1b-0f554a49b536@redhat.com> (Paolo Bonzini's
	message of "Wed, 14 Mar 2018 15:53:32 +0100")
Message-ID: <87fu51br1p.fsf@vitty.brq.redhat.com>

Paolo Bonzini writes:

> On 09/03/2018 15:02, Vitaly Kuznetsov wrote:
>> Enlightened VMCS is just a structure in memory, the main benefit
>> besides avoiding somewhat slower VMREAD/VMWRITE is using clean field
>> mask: we tell the underlying hypervisor which fields were modified
>> since VMEXIT so there's no need to inspect them all.
>>
>> Tight CPUID loop test shows significant speedup:
>> Before: 18890 cycles
>> After: 8304 cycles
>>
>> Static key is being used to avoid performance penalty for non-Hyper-V
>> deployments. Tests show we add around 3 (three) CPU cycles on each
>> VMEXIT (1077.5 cycles before, 1080.7 cycles after for the same CPUID
>> loop on bare metal).
>> We can probably avoid one test/jmp in vmx_vcpu_run()
>> but I don't see a clean way to use static key in assembly.
>
> If you want to live dangerously, you can use text_poke_early to change
> the vmwrite to mov. It's just a single instruction, so it's probably
> not too hard.
>

I'd say it's not worth it ...

>> Signed-off-by: Vitaly Kuznetsov
>> ---
>> Changes since v2:
>> - define KVM_EVMCS_VERSION [Radim Krčmář]
>> - WARN_ONCE in get_evmcs_offset[,_cf] [Radim Krčmář]
>> - add evmcs_sanitize_exec_ctrls() and use it in hardware_setup() and
>>   dump_vmcs() [Radim Krčmář]
>> ---
>>  arch/x86/kvm/vmx.c | 625 ++++++++++++++++++++++++++++++++++++++++++++++++++++-
>>  1 file changed, 615 insertions(+), 10 deletions(-)
>>
>> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
>> index 051dab74e4e9..44b6efa7d54e 100644
>> --- a/arch/x86/kvm/vmx.c
>> +++ b/arch/x86/kvm/vmx.c
>> @@ -53,6 +53,7 @@
>>  #include
>>  #include
>>  #include
>> +#include
>>
>>  #include "trace.h"
>>  #include "pmu.h"
>> @@ -1000,6 +1001,484 @@ static const u32 vmx_msr_index[] = {
>>  	MSR_EFER, MSR_TSC_AUX, MSR_STAR,
>>  };
>>
>> +DEFINE_STATIC_KEY_FALSE(enable_evmcs);
>> +
>> +#define current_evmcs ((struct hv_enlightened_vmcs *)this_cpu_read(current_vmcs))
>> +
>> +#if IS_ENABLED(CONFIG_HYPERV)
>> +static bool __read_mostly enlightened_vmcs = true;
>> +module_param(enlightened_vmcs, bool, 0444);
>> +
>> +#define KVM_EVMCS_VERSION 1
>> +
>> +#define EVMCS1_OFFSET(x) offsetof(struct hv_enlightened_vmcs, x)
>> +#define EVMCS1_FIELD(number, name, clean_mask) [ROL16(number, 6)] = \
>> +	(u32)EVMCS1_OFFSET(name) | ((u32)clean_mask << 16)
>> +
>> +/*
>> + * Lower 16 bits encode offset of the field in struct hv_enlightened_vmcs,
>> + * upper 16 bits hold clean field mask.
>> + */
>> +static const u32 vmcs_field_to_evmcs_1[] = {
>> +	/* 64 bit rw */
>> +	EVMCS1_FIELD(GUEST_RIP, guest_rip,
>> +		     HV_VMX_ENLIGHTENED_CLEAN_FIELD_NONE),
>
> Maybe we should use a single "#include"d file (like vmx_shadow_fields.h)
> and share it between HV-on-KVM and KVM-on-HV.
>

... Actually, yes, looking at 13k+ lines of code in vmx.c makes me think
it's time we start doing something about it :-)

>> +	EVMCS1_FIELD(VM_EXIT_MSR_STORE_ADDR, vm_exit_msr_store_addr,
>> +		     HV_VMX_ENLIGHTENED_CLEAN_FIELD_ALL),
>> +	EVMCS1_FIELD(VM_EXIT_MSR_LOAD_ADDR, vm_exit_msr_load_addr,
>> +		     HV_VMX_ENLIGHTENED_CLEAN_FIELD_ALL),
>> +	EVMCS1_FIELD(VM_ENTRY_MSR_LOAD_ADDR, vm_entry_msr_load_addr,
>> +		     HV_VMX_ENLIGHTENED_CLEAN_FIELD_ALL),
>> +	EVMCS1_FIELD(CR3_TARGET_VALUE0, cr3_target_value0,
>> +		     HV_VMX_ENLIGHTENED_CLEAN_FIELD_ALL),
>> +	EVMCS1_FIELD(CR3_TARGET_VALUE1, cr3_target_value1,
>> +		     HV_VMX_ENLIGHTENED_CLEAN_FIELD_ALL),
>> +	EVMCS1_FIELD(CR3_TARGET_VALUE2, cr3_target_value2,
>> +		     HV_VMX_ENLIGHTENED_CLEAN_FIELD_ALL),
>> +	EVMCS1_FIELD(CR3_TARGET_VALUE3, cr3_target_value3,
>> +		     HV_VMX_ENLIGHTENED_CLEAN_FIELD_ALL),
>
> We shouldn't use these on Hyper-V, should we (that is, shouldn't the
> WARN below fire if you try---and so why include them at all)?
>

True, these shouldn't be used and that's why there's no clean field
assigned to them. They, however, do have a corresponding eVMCS field. I
will try removing them in next version.
>> +
>> +static inline u16 get_evmcs_offset(unsigned long field)
>> +{
>> +	unsigned int index = ROL16(field, 6);
>> +
>> +	if (index >= ARRAY_SIZE(vmcs_field_to_evmcs_1)) {
>> +		WARN_ONCE(1, "kvm: reading unsupported EVMCS field %lx\n",
>> +			  field);
>> +		return 0;
>> +	}
>> +
>> +	return (u16)vmcs_field_to_evmcs_1[index];
>> +}
>> +
>> +static inline u16 get_evmcs_offset_cf(unsigned long field, u16 *clean_field)
>> +{
>> +	unsigned int index = ROL16(field, 6);
>> +	u32 evmcs_field;
>> +
>> +	if (index >= ARRAY_SIZE(vmcs_field_to_evmcs_1)) {
>> +		WARN_ONCE(1, "kvm: writing unsupported EVMCS field %lx\n",
>> +			  field);
>> +		return 0;
>> +	}
>> +
>> +	evmcs_field = vmcs_field_to_evmcs_1[index];
>> +
>> +	*clean_field = evmcs_field >> 16;
>> +
>> +	return (u16)evmcs_field;
>> +}
>
> You can mark this __always_inline, and make it
>
> 	if (clean_field)
> 		*clean_field = evmcs_field >> 16;
>
> or alternatively, use a two-element struct and do
>
> 	evmcs_field = &vmcs_field_to_evmcs_1[index];
> 	if (clean_field)
> 		*clean_field = evmcs_field->clean_field;
> 	return evmcs_field->offset;
>
> Also, if you return int and make the WARN_ONCE case return -ENOENT, GCC
> should be able to optimize out the "if (!offset)" (which becomes "if
> (offset < 0)") in the callers. Nitpicking, but...
>

Ok, good suggestion, I'll try.

>> +static void vmcs_load_enlightened(u64 phys_addr)
>> +{
>> +	struct hv_vp_assist_page *vp_ap =
>> +		hv_get_vp_assist_page(smp_processor_id());
>> +
>> +	vp_ap->current_nested_vmcs = phys_addr;
>> +	vp_ap->enlighten_vmentry = 1;
>> +}
>
> evmcs_load?
>

Works for me,

>> +static void evmcs_sanitize_exec_ctrls(u32 *cpu_based_2nd_exec_ctrl,
>> +				       u32 *pin_based_exec_ctrl)
>> +{
>> +	*pin_based_exec_ctrl &= ~PIN_BASED_POSTED_INTR;
>> +	*cpu_based_2nd_exec_ctrl &= ~SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY;
>> +	*cpu_based_2nd_exec_ctrl &= ~SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
>> +	*cpu_based_2nd_exec_ctrl &= ~SECONDARY_EXEC_APIC_REGISTER_VIRT;
>> +	*cpu_based_2nd_exec_ctrl &= ~SECONDARY_EXEC_ENABLE_PML;
>> +	*cpu_based_2nd_exec_ctrl &= ~SECONDARY_EXEC_ENABLE_VMFUNC;
>> +	*cpu_based_2nd_exec_ctrl &= ~SECONDARY_EXEC_SHADOW_VMCS;
>> +	*cpu_based_2nd_exec_ctrl &= ~SECONDARY_EXEC_TSC_SCALING;
>> +	*cpu_based_2nd_exec_ctrl &= ~SECONDARY_EXEC_PAUSE_LOOP_EXITING;
>> +	*pin_based_exec_ctrl &= ~PIN_BASED_VMX_PREEMPTION_TIMER;
>
> How can these be set?
>

They cannot if Hyper-V behaves, but Radim didn't want to trust it -- so
the suggestion was to forcefully disable unsupported controls.

>> @@ -3596,6 +4104,14 @@ static int hardware_enable(void)
>>  	if (cr4_read_shadow() & X86_CR4_VMXE)
>>  		return -EBUSY;
>>
>> +	/*
>> +	 * This can happen if we hot-added a CPU but failed to allocate
>> +	 * VP assist page for it.
>> +	 */
>> +	if (static_branch_unlikely(&enable_evmcs) &&
>> +	    !hv_get_vp_assist_page(cpu))
>> +		return -EFAULT;
>
> -ENODEV? Maybe add a printk, because this is really rare.
>

Ok,

>>  	INIT_LIST_HEAD(&per_cpu(loaded_vmcss_on_cpu, cpu));
>>  	INIT_LIST_HEAD(&per_cpu(blocked_vcpu_on_cpu, cpu));
>>  	spin_lock_init(&per_cpu(blocked_vcpu_on_cpu_lock, cpu));
>> @@ -3829,7 +4345,12 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf)
>>  	vmcs_conf->size = vmx_msr_high & 0x1fff;
>>  	vmcs_conf->order = get_order(vmcs_conf->size);
>>  	vmcs_conf->basic_cap = vmx_msr_high & ~0x1fff;
>> -	vmcs_conf->revision_id = vmx_msr_low;
>> +
>> +	/* KVM supports Enlightened VMCS v1 only */
>> +	if (static_branch_unlikely(&enable_evmcs))
>> +		vmcs_conf->revision_id = KVM_EVMCS_VERSION;
>> +	else
>> +		vmcs_conf->revision_id = vmx_msr_low;
>>
>>  	vmcs_conf->pin_based_exec_ctrl = _pin_based_exec_control;
>>  	vmcs_conf->cpu_based_exec_ctrl = _cpu_based_exec_control;
>> @@ -6990,6 +7511,17 @@ static __init int hardware_setup(void)
>>  		goto out;
>>  	}
>>
>> +	if (static_branch_unlikely(&enable_evmcs)) {
>> +		evmcs_sanitize_exec_ctrls(&vmcs_config.cpu_based_2nd_exec_ctrl,
>> +					  &vmcs_config.pin_based_exec_ctrl);
>
> Why not do it in setup_vmcs_config after the vmcs_conf->vmentry_ctrl
> assignment (and pass &vmcs_config, which there is "vmcs_conf", directly
> to the function)? And if sanitizing clears the bits in vmentry_ctl and
> vmexit_ctl, there's no need to clear cpu_has_load_perf_global_ctrl.
>

Ok, if we decide to keep 'sanitization' in place.
>> +		/*
>> +		 * Enlightened VMCSv1 doesn't support these:
>> +		 * GUEST_IA32_PERF_GLOBAL_CTRL	= 0x00002808,
>> +		 * HOST_IA32_PERF_GLOBAL_CTRL	= 0x00002c04,
>> +		 */
>> +		cpu_has_load_perf_global_ctrl = false;
>> +	}
>> +
>>  	if (boot_cpu_has(X86_FEATURE_NX))
>>  		kvm_enable_efer_bits(EFER_NX);
>>
>> @@ -8745,6 +9277,10 @@ static void dump_vmcs(void)
>>  	if (cpu_has_secondary_exec_ctrls())
>>  		secondary_exec_control = vmcs_read32(SECONDARY_VM_EXEC_CONTROL);
>>
>> +	if (static_branch_unlikely(&enable_evmcs))
>> +		evmcs_sanitize_exec_ctrls(&secondary_exec_control,
>> +					  &pin_based_exec_ctrl);
>
> This is wrong, we're reading the VMCS so the values must already be
> sanitized (and if not, that's the bug and we want dump_vmcs to print the
> "wrong" values).

The problem is that we vmcs_read these fields later in the function and
this will now WARN(). Initially, there was no WARN() for non-existent
fields so this could work (we would just print zeroes for unsupported
fields). Maybe an additional WARN_ON() is not a big deal here. In
reality, these controls should never be set.
>
>>  	pr_err("*** Guest State ***\n");
>>  	pr_err("CR0: actual=0x%016lx, shadow=0x%016lx, gh_mask=%016lx\n",
>>  	       vmcs_readl(GUEST_CR0), vmcs_readl(CR0_READ_SHADOW),
>> @@ -8784,7 +9320,8 @@ static void dump_vmcs(void)
>>  	pr_err("DebugCtl = 0x%016llx  DebugExceptions = 0x%016lx\n",
>>  	       vmcs_read64(GUEST_IA32_DEBUGCTL),
>>  	       vmcs_readl(GUEST_PENDING_DBG_EXCEPTIONS));
>> -	if (vmentry_ctl & VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL)
>> +	if (cpu_has_load_perf_global_ctrl &&
>> +	    vmentry_ctl & VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL)
>>  		pr_err("PerfGlobCtl = 0x%016llx\n",
>>  		       vmcs_read64(GUEST_IA32_PERF_GLOBAL_CTRL));
>>  	if (vmentry_ctl & VM_ENTRY_LOAD_BNDCFGS)
>> @@ -8820,7 +9357,8 @@ static void dump_vmcs(void)
>>  	pr_err("EFER = 0x%016llx  PAT = 0x%016llx\n",
>>  	       vmcs_read64(HOST_IA32_EFER),
>>  	       vmcs_read64(HOST_IA32_PAT));
>> -	if (vmexit_ctl & VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL)
>> +	if (cpu_has_load_perf_global_ctrl &&
>> +	    vmexit_ctl & VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL)
>>  		pr_err("PerfGlobCtl = 0x%016llx\n",
>>  		       vmcs_read64(HOST_IA32_PERF_GLOBAL_CTRL));
>>
>> @@ -9397,7 +9935,7 @@ static void vmx_arm_hv_timer(struct kvm_vcpu *vcpu)
>>  static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
>>  {
>>  	struct vcpu_vmx *vmx = to_vmx(vcpu);
>> -	unsigned long cr3, cr4;
>> +	unsigned long cr3, cr4, evmcs_rsp;
>>
>>  	/* Record the guest's net vcpu time for enforced NMI injections. */
>>  	if (unlikely(!enable_vnmi &&
>> @@ -9463,6 +10001,10 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
>>  		native_wrmsrl(MSR_IA32_SPEC_CTRL, vmx->spec_ctrl);
>>
>>  	vmx->__launched = vmx->loaded_vmcs->launched;
>> +
>> +	evmcs_rsp = static_branch_unlikely(&enable_evmcs) ?
>> +		(unsigned long)&current_evmcs->host_rsp : 0;
>
> (If you use text_poke_early, you can do this assignment unconditionally,
> since it's just a single lea instruction).
>

Something to take a look at)

>> @@ -9604,6 +10152,11 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
>>  	/* Eliminate branch target predictions from guest mode */
>>  	vmexit_fill_RSB();
>>
>> +	/* All fields are clean at this point */
>> +	if (static_branch_unlikely(&enable_evmcs))
>> +		current_evmcs->hv_clean_fields |=
>> +			HV_VMX_ENLIGHTENED_CLEAN_FIELD_ALL;
>> +
>>  	/* MSR_IA32_DEBUGCTLMSR is zeroed on vmexit. Restore it if needed */
>>  	if (vmx->host_debugctlmsr)
>>  		update_debugctlmsr(vmx->host_debugctlmsr);
>> @@ -12419,7 +12972,36 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
>>
>>  static int __init vmx_init(void)
>>  {
>> -	int r = kvm_init(&vmx_x86_ops, sizeof(struct vcpu_vmx),
>> +	int r;
>> +
>> +#if IS_ENABLED(CONFIG_HYPERV)
>> +	/*
>> +	 * Enlightened VMCS usage should be recommended and the host needs
>> +	 * to support eVMCS v1 or above. We can also disable eVMCS support
>> +	 * with module parameter.
>> +	 */
>> +	if (enlightened_vmcs &&
>> +	    ms_hyperv.hints & HV_X64_ENLIGHTENED_VMCS_RECOMMENDED &&
>> +	    (ms_hyperv.nested_features & HV_X64_ENLIGHTENED_VMCS_VERSION) >=
>> +	    KVM_EVMCS_VERSION) {
>> +		int cpu;
>> +
>> +		/* Check that we have assist pages on all online CPUs */
>> +		for_each_online_cpu(cpu) {
>> +			if (!hv_get_vp_assist_page(cpu)) {
>> +				enlightened_vmcs = false;
>> +				break;
>> +			}
>> +		}
>> +		if (enlightened_vmcs) {
>> +			pr_info("KVM: vmx: using Hyper-V Enlightened VMCS\n");
>> +			static_branch_enable(&enable_evmcs);
>> +		}
>> +	}
>
> A bit nicer to clear enlightened_vmcs in the "else" branch?

Yes, as a precaution, why not. (But we should solely rely on
'enable_evmcs' later on).

>
> That's it. Nice work!
>
> Paolo
>

>> +#endif
>> +
>> +	r = kvm_init(&vmx_x86_ops, sizeof(struct vcpu_vmx),
>>  		     __alignof__(struct vcpu_vmx), THIS_MODULE);
>>  	if (r)
>>  		return r;
>> @@ -12440,6 +13022,29 @@ static void __exit vmx_exit(void)
>>  #endif
>>
>>  	kvm_exit();
>> +
>> +#if IS_ENABLED(CONFIG_HYPERV)
>> +	if (static_branch_unlikely(&enable_evmcs)) {
>> +		int cpu;
>> +		struct hv_vp_assist_page *vp_ap;
>> +		/*
>> +		 * Reset everything to support using non-enlightened VMCS
>> +		 * access later (e.g. when we reload the module with
>> +		 * enlightened_vmcs=0)
>> +		 */
>> +		for_each_online_cpu(cpu) {
>> +			vp_ap = hv_get_vp_assist_page(cpu);
>> +
>> +			if (!vp_ap)
>> +				continue;
>> +
>> +			vp_ap->current_nested_vmcs = 0;
>> +			vp_ap->enlighten_vmentry = 0;
>> +		}
>> +
>> +		static_branch_disable(&enable_evmcs);
>> +	}
>> +#endif
>>  }
>>
>>  module_init(vmx_init)
>>

--
Vitaly