From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
	erdemaktas@google.com, Sean Christopherson, Sagi Shahar,
	David Matlack, Kai Huang, Zhi Wang
Subject: [PATCH v13 100/113] KVM: TDX: Silently discard SMI request
Date: Sun, 12 Mar 2023 10:57:04 -0700
Message-Id: <752cc1c8003c9133eb1f034107652f3457b4eab1.1678643052.git.isaku.yamahata@intel.com>

From: Isaku Yamahata <isaku.yamahata@intel.com>

TDX doesn't support system-management mode (SMM) or the system-management
interrupt (SMI) in guest TDs.  Because guest state (vCPU state and guest
memory) is protected, any change to it, including injecting an SMI or
switching a vCPU into SMM, would have to go through the TDX module APIs,
and the TDX module provides no interface for the VMM to inject an SMI into
a guest TD or to switch a guest vCPU into SMM.

When KVM or the device model (e.g. QEMU) requests SMM or an SMI for a
guest TD, there are two options: 1) silently ignore the request, or
2) return a meaningful error.  For simplicity, implement option 1) and
silently discard the request.
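Purely as an illustration (not part of this patch), and assuming a vCPU fd
obtained through the usual KVM ioctls, the effect seen from userspace is
that an SMI request against a TDX vCPU still "succeeds" but does nothing:

	#include <sys/ioctl.h>
	#include <stdio.h>
	#include <linux/kvm.h>

	/*
	 * Ask KVM to inject an SMI into the vCPU.  With this patch a TDX
	 * vCPU accepts the ioctl (return value 0), but the SMI is silently
	 * discarded instead of being delivered to the guest.
	 */
	static void request_smi(int vcpu_fd)
	{
		if (ioctl(vcpu_fd, KVM_SMI, 0) < 0)
			perror("KVM_SMI");
	}

One consequence of option 1) is that an existing VMM sees no error on this
path and therefore needs no TDX-specific handling for it.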
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/smm.h         |  7 +++++-
 arch/x86/kvm/vmx/main.c    | 45 ++++++++++++++++++++++++++++++++++----
 arch/x86/kvm/vmx/tdx.c     | 29 ++++++++++++++++++++++++
 arch/x86/kvm/vmx/x86_ops.h | 12 ++++++++++
 4 files changed, 88 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/smm.h b/arch/x86/kvm/smm.h
index a1cf2ac5bd78..bc77902f5c18 100644
--- a/arch/x86/kvm/smm.h
+++ b/arch/x86/kvm/smm.h
@@ -142,7 +142,12 @@ union kvm_smram {
 
 static inline int kvm_inject_smi(struct kvm_vcpu *vcpu)
 {
-	kvm_make_request(KVM_REQ_SMI, vcpu);
+	/*
+	 * If SMM isn't supported (e.g. TDX), silently discard SMI request.
+	 * Assume that SMM supported = MSR_IA32_SMBASE supported.
+	 */
+	if (static_call(kvm_x86_has_emulated_msr)(vcpu->kvm, MSR_IA32_SMBASE))
+		kvm_make_request(KVM_REQ_SMI, vcpu);
 	return 0;
 }
 
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 9901d4400b7b..a01efaa10bbc 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -295,6 +295,43 @@ static void vt_msr_filter_changed(struct kvm_vcpu *vcpu)
 	vmx_msr_filter_changed(vcpu);
 }
 
+#ifdef CONFIG_KVM_SMM
+static int vt_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
+{
+	if (is_td_vcpu(vcpu))
+		return tdx_smi_allowed(vcpu, for_injection);
+
+	return vmx_smi_allowed(vcpu, for_injection);
+}
+
+static int vt_enter_smm(struct kvm_vcpu *vcpu, union kvm_smram *smram)
+{
+	if (unlikely(is_td_vcpu(vcpu)))
+		return tdx_enter_smm(vcpu, smram);
+
+	return vmx_enter_smm(vcpu, smram);
+}
+
+static int vt_leave_smm(struct kvm_vcpu *vcpu, const union kvm_smram *smram)
+{
+	if (unlikely(is_td_vcpu(vcpu)))
+		return tdx_leave_smm(vcpu, smram);
+
+	return vmx_leave_smm(vcpu, smram);
+}
+
+static void vt_enable_smi_window(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu)) {
+		tdx_enable_smi_window(vcpu);
+		return;
+	}
+
+	/* RSM will cause a vmexit anyway. */
+	vmx_enable_smi_window(vcpu);
+}
+#endif
+
 static void vt_apicv_post_state_restore(struct kvm_vcpu *vcpu)
 {
 	struct pi_desc *pi = vcpu_to_pi_desc(vcpu);
@@ -680,10 +717,10 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.setup_mce = vmx_setup_mce,
 
 #ifdef CONFIG_KVM_SMM
-	.smi_allowed = vmx_smi_allowed,
-	.enter_smm = vmx_enter_smm,
-	.leave_smm = vmx_leave_smm,
-	.enable_smi_window = vmx_enable_smi_window,
+	.smi_allowed = vt_smi_allowed,
+	.enter_smm = vt_enter_smm,
+	.leave_smm = vt_leave_smm,
+	.enable_smi_window = vt_enable_smi_window,
 #endif
 
 	.can_emulate_instruction = vmx_can_emulate_instruction,
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 10bbac208a9c..5c6f8b73b820 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1829,6 +1829,35 @@ int tdx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 	return 1;
 }
 
+#ifdef CONFIG_KVM_SMM
+int tdx_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
+{
+	/* SMI isn't supported for TDX. */
+	WARN_ON_ONCE(1);
+	return false;
+}
+
+int tdx_enter_smm(struct kvm_vcpu *vcpu, union kvm_smram *smram)
+{
+	/* smi_allowed() is always false for TDX as above. */
+	WARN_ON_ONCE(1);
+	return 0;
+}
+
+int tdx_leave_smm(struct kvm_vcpu *vcpu, const union kvm_smram *smram)
+{
+	WARN_ON_ONCE(1);
+	return 0;
+}
+
+void tdx_enable_smi_window(struct kvm_vcpu *vcpu)
+{
+	/* SMI isn't supported for TDX. Silently discard SMI request. */
+	WARN_ON_ONCE(1);
+	vcpu->arch.smi_pending = false;
+}
+#endif
+
 int tdx_dev_ioctl(void __user *argp)
 {
 	struct kvm_tdx_capabilities __user *user_caps;
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index 4e0befb9d530..59d74f4a4b63 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -227,4 +227,16 @@ static inline int tdx_sept_tlb_remote_flush(struct kvm *kvm) { return 0; }
 static inline void tdx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int root_level) {}
 #endif
 
+#if defined(CONFIG_INTEL_TDX_HOST) && defined(CONFIG_KVM_SMM)
+int tdx_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection);
+int tdx_enter_smm(struct kvm_vcpu *vcpu, union kvm_smram *smram);
+int tdx_leave_smm(struct kvm_vcpu *vcpu, const union kvm_smram *smram);
+void tdx_enable_smi_window(struct kvm_vcpu *vcpu);
+#else
+static inline int tdx_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection) { return false; }
+static inline int tdx_enter_smm(struct kvm_vcpu *vcpu, union kvm_smram *smram) { return 0; }
+static inline int tdx_leave_smm(struct kvm_vcpu *vcpu, const union kvm_smram *smram) { return 0; }
+static inline void tdx_enable_smi_window(struct kvm_vcpu *vcpu) {}
+#endif
+
 #endif /* __KVM_X86_VMX_X86_OPS_H */
-- 
2.25.1