From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
    erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack
Subject: [PATCH v11 101/113] KVM: TDX: Silently discard SMI request
Date: Thu, 12 Jan 2023 08:32:49 -0800
Message-Id: <219cf79a3325263a0f236f740106b9ce7b9fd455.1673539699.git.isaku.yamahata@intel.com>
X-Mailer: git-send-email 2.25.1

From: Isaku Yamahata <isaku.yamahata@intel.com>

TDX doesn't support system-management mode (SMM) and system-management
interrupt (SMI) in guest TDs.
Because guest state (vCPU state, memory state) is protected, the VMM must
go through the TDX module APIs to change guest state, which is what
injecting an SMI and switching the vCPU mode into SMM would require.
However, the TDX module doesn't provide a way for the VMM to inject an SMI
into a guest TD, nor a way to switch the guest vCPU mode into SMM.

We have two options in KVM when handling an SMM or SMI request for the
guest TD from the guest or the device model (e.g. QEMU): 1) silently
ignore the request, or 2) return a meaningful error.  For simplicity, we
implement option 1).

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/smm.h         |  7 +++++-
 arch/x86/kvm/vmx/main.c    | 45 ++++++++++++++++++++++++++++++++++----
 arch/x86/kvm/vmx/tdx.c     | 29 ++++++++++++++++++++++++
 arch/x86/kvm/vmx/x86_ops.h | 12 ++++++++++
 4 files changed, 88 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/smm.h b/arch/x86/kvm/smm.h
index a1cf2ac5bd78..bc77902f5c18 100644
--- a/arch/x86/kvm/smm.h
+++ b/arch/x86/kvm/smm.h
@@ -142,7 +142,12 @@ union kvm_smram {
 
 static inline int kvm_inject_smi(struct kvm_vcpu *vcpu)
 {
-	kvm_make_request(KVM_REQ_SMI, vcpu);
+	/*
+	 * If SMM isn't supported (e.g. TDX), silently discard SMI request.
+	 * Assume that SMM supported = MSR_IA32_SMBASE supported.
+	 */
+	if (static_call(kvm_x86_has_emulated_msr)(vcpu->kvm, MSR_IA32_SMBASE))
+		kvm_make_request(KVM_REQ_SMI, vcpu);
 	return 0;
 }
 
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index cb79d64a2058..3651c32e6cad 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -219,6 +219,43 @@ static void vt_msr_filter_changed(struct kvm_vcpu *vcpu)
 	vmx_msr_filter_changed(vcpu);
 }
 
+#ifdef CONFIG_KVM_SMM
+static int vt_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
+{
+	if (is_td_vcpu(vcpu))
+		return tdx_smi_allowed(vcpu, for_injection);
+
+	return vmx_smi_allowed(vcpu, for_injection);
+}
+
+static int vt_enter_smm(struct kvm_vcpu *vcpu, union kvm_smram *smram)
+{
+	if (unlikely(is_td_vcpu(vcpu)))
+		return tdx_enter_smm(vcpu, smram);
+
+	return vmx_enter_smm(vcpu, smram);
+}
+
+static int vt_leave_smm(struct kvm_vcpu *vcpu, const union kvm_smram *smram)
+{
+	if (unlikely(is_td_vcpu(vcpu)))
+		return tdx_leave_smm(vcpu, smram);
+
+	return vmx_leave_smm(vcpu, smram);
+}
+
+static void vt_enable_smi_window(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu)) {
+		tdx_enable_smi_window(vcpu);
+		return;
+	}
+
+	/* RSM will cause a vmexit anyway. */
+	vmx_enable_smi_window(vcpu);
+}
+#endif
+
 static void vt_apicv_post_state_restore(struct kvm_vcpu *vcpu)
 {
 	struct pi_desc *pi = vcpu_to_pi_desc(vcpu);
@@ -586,10 +623,10 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.setup_mce = vmx_setup_mce,
 
 #ifdef CONFIG_KVM_SMM
-	.smi_allowed = vmx_smi_allowed,
-	.enter_smm = vmx_enter_smm,
-	.leave_smm = vmx_leave_smm,
-	.enable_smi_window = vmx_enable_smi_window,
+	.smi_allowed = vt_smi_allowed,
+	.enter_smm = vt_enter_smm,
+	.leave_smm = vt_leave_smm,
+	.enable_smi_window = vt_enable_smi_window,
 #endif
 
 	.can_emulate_instruction = vmx_can_emulate_instruction,
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 2f3206551c48..778d170b7549 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1818,6 +1818,35 @@ int tdx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 	return 1;
 }
 
+#ifdef CONFIG_KVM_SMM
+int tdx_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
+{
+	/* SMI isn't supported for TDX. */
+	WARN_ON_ONCE(1);
+	return false;
+}
+
+int tdx_enter_smm(struct kvm_vcpu *vcpu, union kvm_smram *smram)
+{
+	/* smi_allowed() is always false for TDX as above. */
+	WARN_ON_ONCE(1);
+	return 0;
+}
+
+int tdx_leave_smm(struct kvm_vcpu *vcpu, const union kvm_smram *smram)
+{
+	WARN_ON_ONCE(1);
+	return 0;
+}
+
+void tdx_enable_smi_window(struct kvm_vcpu *vcpu)
+{
+	/* SMI isn't supported for TDX. Silently discard SMI request. */
+	WARN_ON_ONCE(1);
+	vcpu->arch.smi_pending = false;
+}
+#endif
+
 int tdx_dev_ioctl(void __user *argp)
 {
 	struct kvm_tdx_capabilities __user *user_caps;
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index 3b747fb5bc20..d6c592d06baa 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -225,4 +225,16 @@ static inline int tdx_sept_tlb_remote_flush(struct kvm *kvm) { return 0; }
 static inline void tdx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int root_level) {}
 #endif
 
+#if defined(CONFIG_INTEL_TDX_HOST) && defined(CONFIG_KVM_SMM)
+int tdx_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection);
+int tdx_enter_smm(struct kvm_vcpu *vcpu, union kvm_smram *smram);
+int tdx_leave_smm(struct kvm_vcpu *vcpu, const union kvm_smram *smram);
+void tdx_enable_smi_window(struct kvm_vcpu *vcpu);
+#else
+static inline int tdx_smi_allowed(struct kvm_vcpu *vcpu, bool for_injection) { return false; }
+static inline int tdx_enter_smm(struct kvm_vcpu *vcpu, union kvm_smram *smram) { return 0; }
+static inline int tdx_leave_smm(struct kvm_vcpu *vcpu, const union kvm_smram *smram) { return 0; }
+static inline void tdx_enable_smi_window(struct kvm_vcpu *vcpu) {}
+#endif
+
 #endif /* __KVM_X86_VMX_X86_OPS_H */
-- 
2.25.1
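
A minimal, standalone sketch of the silent-discard behavior that the
kvm_inject_smi() change above implements: queue the SMI only when the
backend reports SMBASE emulation, otherwise drop it while still returning
success.  The struct vcpu, has_emulated_smbase() and inject_smi() names
below are simplified stand-ins, not the real KVM definitions.

/*
 * Standalone illustration of the "silently discard" pattern from the
 * kvm_inject_smi() hunk.  All names here are hypothetical stand-ins.
 */
#include <stdbool.h>
#include <stdio.h>

struct vcpu {
	bool is_td;		/* TDX guest vCPU: SMM is not supported */
	bool smi_pending;	/* stand-in for a queued KVM_REQ_SMI */
};

/* Stand-in for kvm_x86_has_emulated_msr(kvm, MSR_IA32_SMBASE). */
static bool has_emulated_smbase(const struct vcpu *vcpu)
{
	return !vcpu->is_td;
}

/* Mirrors the patched kvm_inject_smi(): queue the SMI only if SMM exists. */
static int inject_smi(struct vcpu *vcpu)
{
	if (has_emulated_smbase(vcpu))
		vcpu->smi_pending = true;

	/* Otherwise the request is dropped and the caller still sees 0. */
	return 0;
}

int main(void)
{
	struct vcpu vmx_vcpu = { .is_td = false };
	struct vcpu tdx_vcpu = { .is_td = true };

	inject_smi(&vmx_vcpu);
	inject_smi(&tdx_vcpu);

	printf("VMX vCPU SMI pending: %d\n", vmx_vcpu.smi_pending);	/* 1 */
	printf("TDX vCPU SMI pending: %d\n", tdx_vcpu.smi_pending);	/* 0 */
	return 0;
}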