From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
    erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack,
    Kai Huang, Zhi Wang, chen.bo@intel.com
Subject: [PATCH v14 099/113] KVM: TDX: Silently discard SMI request
Date: Sun, 28 May 2023 21:20:21 -0700
X-Mailer: git-send-email 2.25.1

From: Isaku Yamahata <isaku.yamahata@intel.com>

TDX doesn't support system-management mode (SMM) or system-management
interrupt (SMI) in guest TDs.
Because guest state (vCPU state and memory) is protected, the VMM must go
through the TDX module APIs to change guest state, e.g. to inject an SMI or
to switch the vCPU mode into SMM.  However, the TDX module provides no API
for the VMM to inject an SMI into a guest TD, nor one to switch the guest
vCPU mode into SMM.

KVM has two options when it gets an SMM or SMI request for a guest TD from
the device model (e.g. QEMU): 1) silently ignore the request, or 2) return
a meaningful error.  For simplicity, implement option 1).

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
(A brief userspace sketch of the resulting behavior is appended after the
patch.)

 arch/x86/kvm/smm.h         |  7 +++++-
 arch/x86/kvm/vmx/main.c    | 45 ++++++++++++++++++++++++++++++++++----
 arch/x86/kvm/vmx/tdx.c     | 29 ++++++++++++++++++++++++
 arch/x86/kvm/vmx/x86_ops.h | 12 ++++++++++
 4 files changed, 88 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/smm.h b/arch/x86/kvm/smm.h
index a1cf2ac5bd78..bc77902f5c18 100644
--- a/arch/x86/kvm/smm.h
+++ b/arch/x86/kvm/smm.h
@@ -142,7 +142,12 @@ union kvm_smram {
 
 static inline int kvm_inject_smi(struct kvm_vcpu *vcpu)
 {
-	kvm_make_request(KVM_REQ_SMI, vcpu);
+	/*
+	 * If SMM isn't supported (e.g. TDX), silently discard SMI request.
+	 * Assume that SMM supported = MSR_IA32_SMBASE supported.
+	 */
+	if (static_call(kvm_x86_has_emulated_msr)(vcpu->kvm, MSR_IA32_SMBASE))
+		kvm_make_request(KVM_REQ_SMI, vcpu);
 	return 0;
 }
 
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 3a3ef7a77047..faa965c069dd 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -294,6 +294,43 @@ static void vt_msr_filter_changed(struct kvm_vcpu *vcpu)
 	vmx_msr_filter_changed(vcpu);
 }
 
+#ifdef CONFIG_KVM_SMM
+static int vt_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
+{
+	if (is_td_vcpu(vcpu))
+		return tdx_smi_allowed(vcpu, for_injection);
+
+	return vmx_smi_allowed(vcpu, for_injection);
+}
+
+static int vt_enter_smm(struct kvm_vcpu *vcpu, union kvm_smram *smram)
+{
+	if (unlikely(is_td_vcpu(vcpu)))
+		return tdx_enter_smm(vcpu, smram);
+
+	return vmx_enter_smm(vcpu, smram);
+}
+
+static int vt_leave_smm(struct kvm_vcpu *vcpu, const union kvm_smram *smram)
+{
+	if (unlikely(is_td_vcpu(vcpu)))
+		return tdx_leave_smm(vcpu, smram);
+
+	return vmx_leave_smm(vcpu, smram);
+}
+
+static void vt_enable_smi_window(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu)) {
+		tdx_enable_smi_window(vcpu);
+		return;
+	}
+
+	/* RSM will cause a vmexit anyway. */
+	vmx_enable_smi_window(vcpu);
+}
+#endif
+
 static void vt_apicv_post_state_restore(struct kvm_vcpu *vcpu)
 {
 	struct pi_desc *pi = vcpu_to_pi_desc(vcpu);
@@ -679,10 +716,10 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.setup_mce = vmx_setup_mce,
 
 #ifdef CONFIG_KVM_SMM
-	.smi_allowed = vmx_smi_allowed,
-	.enter_smm = vmx_enter_smm,
-	.leave_smm = vmx_leave_smm,
-	.enable_smi_window = vmx_enable_smi_window,
+	.smi_allowed = vt_smi_allowed,
+	.enter_smm = vt_enter_smm,
+	.leave_smm = vt_leave_smm,
+	.enable_smi_window = vt_enable_smi_window,
 #endif
 
 	.can_emulate_instruction = vmx_can_emulate_instruction,
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 28ab274a83c8..905fb8a1e1ea 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1856,6 +1856,35 @@ int tdx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 	}
 }
 
+#ifdef CONFIG_KVM_SMM
+int tdx_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
+{
+	/* SMI isn't supported for TDX. */
+	WARN_ON_ONCE(1);
+	return false;
+}
+
+int tdx_enter_smm(struct kvm_vcpu *vcpu, union kvm_smram *smram)
+{
+	/* smi_allowed() is always false for TDX as above. */
+	WARN_ON_ONCE(1);
+	return 0;
+}
+
+int tdx_leave_smm(struct kvm_vcpu *vcpu, const union kvm_smram *smram)
+{
+	WARN_ON_ONCE(1);
+	return 0;
+}
+
+void tdx_enable_smi_window(struct kvm_vcpu *vcpu)
+{
+	/* SMI isn't supported for TDX. Silently discard SMI request. */
+	WARN_ON_ONCE(1);
+	vcpu->arch.smi_pending = false;
+}
+#endif
+
 static int tdx_get_capabilities(struct kvm_tdx_cmd *cmd)
 {
 	struct kvm_tdx_capabilities __user *user_caps;
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index ba5cadb0135e..38a7a0075450 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -218,4 +218,16 @@ static inline int tdx_sept_flush_remote_tlbs(struct kvm *kvm) { return 0; }
 static inline void tdx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int root_level) {}
 #endif
 
+#if defined(CONFIG_INTEL_TDX_HOST) && defined(CONFIG_KVM_SMM)
+int tdx_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection);
+int tdx_enter_smm(struct kvm_vcpu *vcpu, union kvm_smram *smram);
+int tdx_leave_smm(struct kvm_vcpu *vcpu, const union kvm_smram *smram);
+void tdx_enable_smi_window(struct kvm_vcpu *vcpu);
+#else
+static inline int tdx_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection) { return false; }
+static inline int tdx_enter_smm(struct kvm_vcpu *vcpu, union kvm_smram *smram) { return 0; }
+static inline int tdx_leave_smm(struct kvm_vcpu *vcpu, const union kvm_smram *smram) { return 0; }
+static inline void tdx_enable_smi_window(struct kvm_vcpu *vcpu) {}
+#endif
+
 #endif /* __KVM_X86_VMX_X86_OPS_H */
-- 
2.25.1
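
A brief userspace sketch of the resulting behavior (an illustration added
for review, not part of the patch): kvm_inject_smi() still returns 0 after
filtering, so the KVM_SMI ioctl succeeds on a TD vCPU even though the SMI
is discarded.  The vcpu_fd below is assumed to come from the usual
KVM_CREATE_VM/KVM_CREATE_VCPU sequence.

	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* Ask KVM to queue an SMI for the given vCPU. */
	static int request_smi(int vcpu_fd)
	{
		int ret = ioctl(vcpu_fd, KVM_SMI);

		if (ret)
			perror("KVM_SMI");
		/*
		 * For a TDX vCPU, ret is 0 even though no SMI will be
		 * delivered; option 1) above trades an error report for
		 * simplicity.
		 */
		return ret;
	}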