From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
    erdemaktas@google.com, Sean Christopherson, Sagi Shahar
Subject: [PATCH v9 080/105] KVM: TDX: Implement methods to inject NMI
Date: Fri, 30 Sep 2022 03:18:14 -0700
Message-Id: <471ee3c28b309ba10434c30ac6947a070b18c790.1664530908.git.isaku.yamahata@intel.com>

From: Isaku Yamahata

The TDX vcpu control structure defines a single PEND_NMI bit: the VMM
requests NMI injection simply by setting the bit, without needing to
know the vcpu's NMI state.  Because the vcpu state is protected, the
VMM cannot observe the NMI state of a TDX vcpu; the TDX module handles
the actual injection and the NMI state transitions.  Add the NMI
methods and treat an NMI as always injectable.

Signed-off-by: Isaku Yamahata
Reviewed-by: Paolo Bonzini
---
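Reviewer aid, not part of the patch to be applied: below is a
standalone userspace C sketch modeling the PEND_NMI contract described
in the commit message. struct model_vcpu and both helpers are
hypothetical names invented for illustration, not code from this
series. It shows the point made in vt_get_nmi_mask(): injection is
just setting one bit, so injecting twice before the next vcpu entry
delivers a single NMI, and pretending NMIs are always unmasked only
changes whether KVM or the TDX module drops the extra one.

#include <stdbool.h>
#include <stdio.h>

/* Models the TD_VCPU_PEND_NMI field in the vcpu control structure. */
struct model_vcpu {
	bool pend_nmi;
	int nmis_delivered;
};

/* VMM side, analogous to tdx_inject_nmi(): set the bit, nothing else. */
static void vmm_inject_nmi(struct model_vcpu *vcpu)
{
	/* Setting an already-set bit is a no-op; the extra NMI is dropped. */
	vcpu->pend_nmi = true;
}

/* TDX-module side: on vcpu entry, deliver at most one pending NMI. */
static void tdx_module_vcpu_enter(struct model_vcpu *vcpu)
{
	if (vcpu->pend_nmi) {
		vcpu->pend_nmi = false;
		vcpu->nmis_delivered++;
	}
}

int main(void)
{
	struct model_vcpu vcpu = { false, 0 };

	/* Two injections with no vcpu entry in between... */
	vmm_inject_nmi(&vcpu);
	vmm_inject_nmi(&vcpu);
	tdx_module_vcpu_enter(&vcpu);

	/* ...collapse into a single delivered NMI. */
	printf("NMIs delivered: %d\n", vcpu.nmis_delivered);
	return 0;
}

Compiling and running the sketch prints "NMIs delivered: 1".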
 arch/x86/kvm/vmx/main.c    | 62 +++++++++++++++++++++++++++++++++++---
 arch/x86/kvm/vmx/tdx.c     |  5 +++
 arch/x86/kvm/vmx/x86_ops.h |  2 ++
 3 files changed, 64 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 7dae3a1999eb..a2417c7f7ad7 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -255,6 +255,58 @@ static void vt_flush_tlb_guest(struct kvm_vcpu *vcpu)
 	vmx_flush_tlb_guest(vcpu);
 }
 
+static void vt_inject_nmi(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu))
+		return tdx_inject_nmi(vcpu);
+
+	vmx_inject_nmi(vcpu);
+}
+
+static int vt_nmi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
+{
+	/*
+	 * The TDX module manages NMI windows and NMI reinjection, and hides
+	 * NMI blocking; all KVM can do is throw an NMI over the wall.
+	 */
+	if (is_td_vcpu(vcpu))
+		return true;
+
+	return vmx_nmi_allowed(vcpu, for_injection);
+}
+
+static bool vt_get_nmi_mask(struct kvm_vcpu *vcpu)
+{
+	/*
+	 * Assume NMIs are always unmasked.  KVM could query PEND_NMI and treat
+	 * NMIs as masked if a previous NMI is still pending, but SEAMCALLs are
+	 * expensive and the end result is unchanged as the only relevant usage
+	 * of get_nmi_mask() is to limit the number of pending NMIs, i.e. it
+	 * only changes whether KVM or the TDX module drops an NMI.
+	 */
+	if (is_td_vcpu(vcpu))
+		return false;
+
+	return vmx_get_nmi_mask(vcpu);
+}
+
+static void vt_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked)
+{
+	if (is_td_vcpu(vcpu))
+		return;
+
+	vmx_set_nmi_mask(vcpu, masked);
+}
+
+static void vt_enable_nmi_window(struct kvm_vcpu *vcpu)
+{
+	/* Refer to the comment in vt_get_nmi_mask(). */
+	if (is_td_vcpu(vcpu))
+		return;
+
+	vmx_enable_nmi_window(vcpu);
+}
+
 static void vt_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa,
 			    int pgd_level)
 {
@@ -410,14 +462,14 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.get_interrupt_shadow = vt_get_interrupt_shadow,
 	.patch_hypercall = vmx_patch_hypercall,
 	.inject_irq = vt_inject_irq,
-	.inject_nmi = vmx_inject_nmi,
+	.inject_nmi = vt_inject_nmi,
 	.queue_exception = vmx_queue_exception,
 	.cancel_injection = vt_cancel_injection,
 	.interrupt_allowed = vt_interrupt_allowed,
-	.nmi_allowed = vmx_nmi_allowed,
-	.get_nmi_mask = vmx_get_nmi_mask,
-	.set_nmi_mask = vmx_set_nmi_mask,
-	.enable_nmi_window = vmx_enable_nmi_window,
+	.nmi_allowed = vt_nmi_allowed,
+	.get_nmi_mask = vt_get_nmi_mask,
+	.set_nmi_mask = vt_set_nmi_mask,
+	.enable_nmi_window = vt_enable_nmi_window,
 	.enable_irq_window = vt_enable_irq_window,
 	.update_cr8_intercept = vmx_update_cr8_intercept,
 	.set_virtual_apic_mode = vmx_set_virtual_apic_mode,

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index fa309acf05de..5e994d581d87 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -670,6 +670,11 @@ fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu)
 	return EXIT_FASTPATH_NONE;
 }
 
+void tdx_inject_nmi(struct kvm_vcpu *vcpu)
+{
+	td_management_write8(to_tdx(vcpu), TD_VCPU_PEND_NMI, 1);
+}
+
 void tdx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int pgd_level)
 {
 	td_vmcs_write64(to_tdx(vcpu), SHARED_EPT_POINTER, root_hpa & PAGE_MASK);

diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index c6dda5f6acda..fb630d17ccd1 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -156,6 +156,7 @@ bool tdx_protected_apic_has_interrupt(struct kvm_vcpu *vcpu);
 void tdx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
 			   int trig_mode, int vector);
+void tdx_inject_nmi(struct kvm_vcpu *vcpu);
 
 int tdx_vm_ioctl(struct kvm *kvm, void __user *argp);
 int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp);
@@ -188,6 +189,7 @@ static inline bool tdx_protected_apic_has_interrupt(struct kvm_vcpu *vcpu) { ret
 static inline void tdx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
 					 int trig_mode, int vector) {}
+static inline void tdx_inject_nmi(struct kvm_vcpu *vcpu) {}
 
 static inline int tdx_vm_ioctl(struct kvm *kvm, void __user *argp) { return -EOPNOTSUPP; }
 static inline int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp) { return -EOPNOTSUPP; }
-- 
2.25.1