From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
    erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack
Subject: [PATCH v10 039/108] KVM: VMX: Introduce test mode related to EPT violation VE
Date: Sat, 29 Oct 2022 23:22:40 -0700
Message-Id: <8b3101711c5291246653efb50cc2975863d3a8ab.1667110240.git.isaku.yamahata@intel.com>

From: Isaku Yamahata <isaku.yamahata@intel.com>

To support TDX, KVM is enhanced to operate with #VE.  For TDX, KVM programs
#VE to be injected conditionally and sets the #VE suppress bit in EPT
entries.  For the VMX case, #VE isn't used.
If a #VE happens for VMX, it's a bug.  To be defensive (i.e. to test that the
VMX case isn't broken), introduce the module option ept_violation_ve_test;
when it is set, intercept #VE and treat an unexpected #VE as an error.

Suggested-by: Paolo Bonzini
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/include/asm/vmx.h | 12 +++++++
 arch/x86/kvm/vmx/vmcs.h    |  5 +++
 arch/x86/kvm/vmx/vmx.c     | 69 +++++++++++++++++++++++++++++++++++++-
 arch/x86/kvm/vmx/vmx.h     |  6 +++-
 4 files changed, 90 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
index cdbf12c1a83c..752d53652007 100644
--- a/arch/x86/include/asm/vmx.h
+++ b/arch/x86/include/asm/vmx.h
@@ -68,6 +68,7 @@
 #define SECONDARY_EXEC_ENCLS_EXITING		VMCS_CONTROL_BIT(ENCLS_EXITING)
 #define SECONDARY_EXEC_RDSEED_EXITING		VMCS_CONTROL_BIT(RDSEED_EXITING)
 #define SECONDARY_EXEC_ENABLE_PML		VMCS_CONTROL_BIT(PAGE_MOD_LOGGING)
+#define SECONDARY_EXEC_EPT_VIOLATION_VE		VMCS_CONTROL_BIT(EPT_VIOLATION_VE)
 #define SECONDARY_EXEC_PT_CONCEAL_VMX		VMCS_CONTROL_BIT(PT_CONCEAL_VMX)
 #define SECONDARY_EXEC_XSAVES			VMCS_CONTROL_BIT(XSAVES)
 #define SECONDARY_EXEC_MODE_BASED_EPT_EXEC	VMCS_CONTROL_BIT(MODE_BASED_EPT_EXEC)
@@ -223,6 +224,8 @@ enum vmcs_field {
 	VMREAD_BITMAP_HIGH              = 0x00002027,
 	VMWRITE_BITMAP                  = 0x00002028,
 	VMWRITE_BITMAP_HIGH             = 0x00002029,
+	VE_INFORMATION_ADDRESS		= 0x0000202A,
+	VE_INFORMATION_ADDRESS_HIGH	= 0x0000202B,
 	XSS_EXIT_BITMAP                 = 0x0000202C,
 	XSS_EXIT_BITMAP_HIGH            = 0x0000202D,
 	ENCLS_EXITING_BITMAP		= 0x0000202E,
@@ -628,4 +631,13 @@ enum vmx_l1d_flush_state {
 
 extern enum vmx_l1d_flush_state l1tf_vmx_mitigation;
 
+struct vmx_ve_information {
+	u32 exit_reason;
+	u32 delivery;
+	u64 exit_qualification;
+	u64 guest_linear_address;
+	u64 guest_physical_address;
+	u16 eptp_index;
+};
+
 #endif
diff --git a/arch/x86/kvm/vmx/vmcs.h b/arch/x86/kvm/vmx/vmcs.h
index ac290a44a693..9277676057a7 100644
--- a/arch/x86/kvm/vmx/vmcs.h
+++ b/arch/x86/kvm/vmx/vmcs.h
@@ -140,6 +140,11 @@ static inline bool is_nm_fault(u32 intr_info)
 	return is_exception_n(intr_info, NM_VECTOR);
 }
 
+static inline bool is_ve_fault(u32 intr_info)
+{
+	return is_exception_n(intr_info, VE_VECTOR);
+}
+
 /* Undocumented: icebp/int1 */
 static inline bool is_icebp(u32 intr_info)
 {
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index f890191e8580..dd3fde9d3c32 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -126,6 +126,9 @@ module_param(error_on_inconsistent_vmcs_config, bool, 0444);
 static bool __read_mostly dump_invalid_vmcs = 0;
 module_param(dump_invalid_vmcs, bool, 0644);
 
+static bool __read_mostly ept_violation_ve_test;
+module_param(ept_violation_ve_test, bool, 0444);
+
 #define MSR_BITMAP_MODE_X2APIC		1
 #define MSR_BITMAP_MODE_X2APIC_APICV	2
 
@@ -783,6 +786,13 @@ void vmx_update_exception_bitmap(struct kvm_vcpu *vcpu)
 
 	eb = (1u << PF_VECTOR) | (1u << UD_VECTOR) | (1u << MC_VECTOR) |
 	     (1u << DB_VECTOR) | (1u << AC_VECTOR);
+	/*
+	 * #VE isn't used for VMX, but for TDX.  To test against unexpected
+	 * change related to #VE for VMX, intercept unexpected #VE and warn on
+	 * it.
+	 */
+	if (ept_violation_ve_test)
+		eb |= 1u << VE_VECTOR;
 	/*
 	 * Guest access to VMware backdoor ports could legitimately
 	 * trigger #GP because of TSS I/O permission bitmap.
@@ -2644,6 +2654,9 @@ static int setup_vmcs_config(struct vmcs_config *vmcs_conf,
 					 &_cpu_based_2nd_exec_control))
 			return -EIO;
 	}
+	if (!ept_violation_ve_test)
+		_cpu_based_2nd_exec_control &= ~SECONDARY_EXEC_EPT_VIOLATION_VE;
+
 #ifndef CONFIG_X86_64
 	if (!(_cpu_based_2nd_exec_control &
 	      SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES))
@@ -2668,6 +2681,7 @@ static int setup_vmcs_config(struct vmcs_config *vmcs_conf,
 			return -EIO;
 		vmx_cap->ept = 0;
+		_cpu_based_2nd_exec_control &= ~SECONDARY_EXEC_EPT_VIOLATION_VE;
 	}
 	if (!(_cpu_based_2nd_exec_control & SECONDARY_EXEC_ENABLE_VPID) &&
 	    vmx_cap->vpid) {
@@ -4510,6 +4524,7 @@ static u32 vmx_secondary_exec_control(struct vcpu_vmx *vmx)
 		exec_control &= ~SECONDARY_EXEC_ENABLE_VPID;
 	if (!enable_ept) {
 		exec_control &= ~SECONDARY_EXEC_ENABLE_EPT;
+		exec_control &= ~SECONDARY_EXEC_EPT_VIOLATION_VE;
 		enable_unrestricted_guest = 0;
 	}
 	if (!enable_unrestricted_guest)
@@ -4637,8 +4652,40 @@ static void init_vmcs(struct vcpu_vmx *vmx)
 
 	exec_controls_set(vmx, vmx_exec_control(vmx));
 
-	if (cpu_has_secondary_exec_ctrls())
+	if (cpu_has_secondary_exec_ctrls()) {
 		secondary_exec_controls_set(vmx, vmx_secondary_exec_control(vmx));
+		if (secondary_exec_controls_get(vmx) &
+		    SECONDARY_EXEC_EPT_VIOLATION_VE) {
+			if (!vmx->ve_info) {
+				/* ve_info must be page aligned. */
+				struct page *page;
+
+				BUILD_BUG_ON(sizeof(*vmx->ve_info) > PAGE_SIZE);
+				page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);
+				if (page)
+					vmx->ve_info = page_to_virt(page);
+			}
+			if (vmx->ve_info) {
+				/*
+				 * Allow #VE delivery. CPU sets this field to
+				 * 0xFFFFFFFF on #VE delivery.  Another #VE can
+				 * occur only if software clears the field.
+				 */
+				vmx->ve_info->delivery = 0;
+				vmcs_write64(VE_INFORMATION_ADDRESS,
+					     __pa(vmx->ve_info));
+			} else {
+				/*
+				 * Because SECONDARY_EXEC_EPT_VIOLATION_VE is
+				 * used only when ept_violation_ve_test is true,
+				 * it's okay to go with the bit disabled.
+				 */
+				pr_err("Failed to allocate ve_info. disabling EPT_VIOLATION_VE.\n");
+				secondary_exec_controls_clearbit(vmx,
+						SECONDARY_EXEC_EPT_VIOLATION_VE);
+			}
+		}
+	}
 
 	if (cpu_has_tertiary_exec_ctrls())
 		tertiary_exec_controls_set(vmx, vmx_tertiary_exec_control(vmx));
@@ -5118,6 +5165,12 @@ static int handle_exception_nmi(struct kvm_vcpu *vcpu)
 	if (is_invalid_opcode(intr_info))
 		return handle_ud(vcpu);
 
+	/*
+	 * #VE isn't supposed to happen.  Although vcpu can send
+	 */
+	if (KVM_BUG_ON(is_ve_fault(intr_info), vcpu->kvm))
+		return -EIO;
+
 	error_code = 0;
 	if (intr_info & INTR_INFO_DELIVER_CODE_MASK)
 		error_code = vmcs_read32(VM_EXIT_INTR_ERROR_CODE);
@@ -6306,6 +6359,18 @@ void dump_vmcs(struct kvm_vcpu *vcpu)
 	if (secondary_exec_control & SECONDARY_EXEC_ENABLE_VPID)
 		pr_err("Virtual processor ID = 0x%04x\n",
 		       vmcs_read16(VIRTUAL_PROCESSOR_ID));
+	if (secondary_exec_control & SECONDARY_EXEC_EPT_VIOLATION_VE) {
+		struct vmx_ve_information *ve_info;
+
+		pr_err("VE info address = 0x%016llx\n",
+		       vmcs_read64(VE_INFORMATION_ADDRESS));
+		ve_info = __va(vmcs_read64(VE_INFORMATION_ADDRESS));
+		pr_err("ve_info: 0x%08x 0x%08x 0x%016llx 0x%016llx 0x%016llx 0x%04x\n",
+		       ve_info->exit_reason, ve_info->delivery,
+		       ve_info->exit_qualification,
+		       ve_info->guest_linear_address,
+		       ve_info->guest_physical_address, ve_info->eptp_index);
+	}
 }
 
 /*
@@ -7302,6 +7367,8 @@ void vmx_vcpu_free(struct kvm_vcpu *vcpu)
 	free_vpid(vmx->vpid);
 	nested_vmx_free_vcpu(vcpu);
 	free_loaded_vmcs(vmx->loaded_vmcs);
+	if (vmx->ve_info)
+		free_page((unsigned long)vmx->ve_info);
 }
 
 int vmx_vcpu_create(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
index d49d0ace9fb8..1813caeb24d8 100644
--- a/arch/x86/kvm/vmx/vmx.h
+++ b/arch/x86/kvm/vmx/vmx.h
@@ -359,6 +359,9 @@ struct vcpu_vmx {
 		DECLARE_BITMAP(read, MAX_POSSIBLE_PASSTHROUGH_MSRS);
 		DECLARE_BITMAP(write, MAX_POSSIBLE_PASSTHROUGH_MSRS);
 	} shadow_msr_intercept;
+
+	/* ve_info must be page aligned. */
+	struct vmx_ve_information *ve_info;
 };
 
 struct kvm_vmx {
@@ -570,7 +573,8 @@ static inline u8 vmx_get_rvi(void)
 	 SECONDARY_EXEC_ENABLE_VMFUNC |				\
 	 SECONDARY_EXEC_BUS_LOCK_DETECTION |			\
 	 SECONDARY_EXEC_NOTIFY_VM_EXITING |			\
-	 SECONDARY_EXEC_ENCLS_EXITING)
+	 SECONDARY_EXEC_ENCLS_EXITING |				\
+	 SECONDARY_EXEC_EPT_VIOLATION_VE)
 
 #define KVM_REQUIRED_VMX_TERTIARY_VM_EXEC_CONTROL 0
 #define KVM_OPTIONAL_VMX_TERTIARY_VM_EXEC_CONTROL			\
-- 
2.25.1
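
As an aside (not part of the patch): init_vmcs() above points VE_INFORMATION_ADDRESS
at a zeroed, page-aligned buffer and relies on the #VE delivery protocol described in
its comment, i.e. the CPU writes 0xFFFFFFFF into the "delivery" word of
struct vmx_ve_information when it delivers a #VE and delivers another #VE only after
software clears that word.  The sketch below is a minimal, ordinary user-space C
program that just models that re-arm step; the struct mirrors the one added to
arch/x86/include/asm/vmx.h, while ve_info_area, handle_ve() and the sample field
values are hypothetical names chosen only for illustration.

#include <stdint.h>
#include <stdio.h>

/* Mirrors struct vmx_ve_information from the patch (illustrative copy). */
struct ve_info_area {
	uint32_t exit_reason;
	uint32_t delivery;	/* CPU sets to 0xFFFFFFFF while a #VE is pending handling */
	uint64_t exit_qualification;
	uint64_t guest_linear_address;
	uint64_t guest_physical_address;
	uint16_t eptp_index;
};

/* Model of a guest #VE handler: consume the recorded info, then re-arm. */
static void handle_ve(struct ve_info_area *ve)
{
	printf("#VE: reason=%u qual=0x%llx gpa=0x%llx\n",
	       ve->exit_reason,
	       (unsigned long long)ve->exit_qualification,
	       (unsigned long long)ve->guest_physical_address);

	/* Clearing "delivery" is what allows the CPU to deliver the next #VE. */
	ve->delivery = 0;
}

int main(void)
{
	/* Pretend the CPU just delivered an EPT-violation #VE (VMX exit reason 48). */
	struct ve_info_area ve = {
		.exit_reason = 48,
		.delivery = 0xFFFFFFFFu,
		.exit_qualification = 0x181,
		.guest_physical_address = 0x12345000,
	};

	handle_ve(&ve);
	return ve.delivery == 0 ? 0 : 1;
}

With the patch applied, this test mode would be enabled by loading kvm_intel with
ept_violation_ve_test=1; the parameter is 0444, so it cannot be toggled at runtime,
and any #VE that then reaches handle_exception_nmi() is reported as a KVM bug via
KVM_BUG_ON().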