From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
	Jim Mattson, erdemaktas@google.com, Connor Kuehl, Sean Christopherson
Subject: [RFC PATCH v5 064/104] KVM: TDX: Implement TDX vcpu enter/exit path
Date: Fri, 4 Mar 2022 11:49:20 -0800

From: Isaku Yamahata <isaku.yamahata@intel.com>

This patch implements running a TDX vcpu.  Once a vcpu runs on a logical
processor (LP), the TDX vcpu becomes associated with that LP.  When the TDX
vcpu moves to another LP, its state on the previous LP must be flushed.  When
the TDX vcpu is destroyed, that flush must be completed and the CPU caches
flushed as well.
Track which LP the TDX vcpu last ran on and flush it as necessary.

Do nothing on the sched_in event, as TDX doesn't support pause-loop exiting.

TDX vcpu execution requires the PMU debug store to be restored after
returning to KVM, because the TDX module unconditionally resets the value.
To reuse the existing code, export perf_restore_debug_store.

Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/kvm/vmx/main.c    | 10 +++++++++-
 arch/x86/kvm/vmx/tdx.c     | 34 ++++++++++++++++++++++++++++++++++
 arch/x86/kvm/vmx/tdx.h     | 33 +++++++++++++++++++++++++++++++++
 arch/x86/kvm/vmx/x86_ops.h |  2 ++
 arch/x86/kvm/x86.c         |  1 +
 5 files changed, 79 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index f571b07c2aae..2e5a7a72d560 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -89,6 +89,14 @@ static void vt_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 	return vmx_vcpu_reset(vcpu, init_event);
 }
 
+static fastpath_t vt_vcpu_run(struct kvm_vcpu *vcpu)
+{
+	if (is_td_vcpu(vcpu))
+		return tdx_vcpu_run(vcpu);
+
+	return vmx_vcpu_run(vcpu);
+}
+
 static void vt_flush_tlb_all(struct kvm_vcpu *vcpu)
 {
 	if (is_td_vcpu(vcpu))
@@ -200,7 +208,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.tlb_flush_guest = vt_flush_tlb_guest,
 
 	.vcpu_pre_run = vmx_vcpu_pre_run,
-	.run = vmx_vcpu_run,
+	.run = vt_vcpu_run,
 	.handle_exit = vmx_handle_exit,
 	.skip_emulated_instruction = vmx_skip_emulated_instruction,
 	.update_emulated_instruction = vmx_update_emulated_instruction,
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 85d5f961d97e..ebe4f9bf19e7 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -10,6 +10,9 @@
 #include "vmx.h"
 #include "x86.h"
 
+#include <trace/events/kvm.h>
+#include "trace.h"
+
 #undef pr_fmt
 #define pr_fmt(fmt) "tdx: " fmt
 
@@ -509,6 +512,37 @@ void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 	vcpu->kvm->vm_bugged = true;
 }
 
+u64 __tdx_vcpu_run(hpa_t tdvpr, void *regs, u32 regs_mask);
+
+static noinstr void tdx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
+					struct vcpu_tdx *tdx)
+{
+	guest_enter_irqoff();
+	tdx->exit_reason.full = __tdx_vcpu_run(tdx->tdvpr.pa, vcpu->arch.regs, 0);
+	guest_exit_irqoff();
+}
+
+fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu)
+{
+	struct vcpu_tdx *tdx = to_tdx(vcpu);
+
+	if (unlikely(vcpu->kvm->vm_bugged)) {
+		tdx->exit_reason.full = TDX_NON_RECOVERABLE_VCPU;
+		return EXIT_FASTPATH_NONE;
+	}
+
+	trace_kvm_entry(vcpu);
+
+	tdx_vcpu_enter_exit(vcpu, tdx);
+
+	vcpu->arch.regs_avail &= ~VMX_REGS_LAZY_LOAD_SET;
+	trace_kvm_exit(vcpu, KVM_ISA_VMX);
+
+	if (tdx->exit_reason.error || tdx->exit_reason.non_recoverable)
+		return EXIT_FASTPATH_NONE;
+	return EXIT_FASTPATH_NONE;
+}
+
 void tdx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int pgd_level)
 {
 	td_vmcs_write64(to_tdx(vcpu), SHARED_EPT_POINTER, root_hpa & PAGE_MASK);
diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
index bf9865a88991..e950404ce5de 100644
--- a/arch/x86/kvm/vmx/tdx.h
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -44,12 +44,45 @@ struct kvm_tdx {
 	spinlock_t seamcall_lock;
 };
 
+union tdx_exit_reason {
+	struct {
+		/* 31:0 mirror the VMX Exit Reason format */
+		u64 basic		: 16;
+		u64 reserved16		: 1;
+		u64 reserved17		: 1;
+		u64 reserved18		: 1;
+		u64 reserved19		: 1;
+		u64 reserved20		: 1;
+		u64 reserved21		: 1;
+		u64 reserved22		: 1;
+		u64 reserved23		: 1;
+		u64 reserved24		: 1;
+		u64 reserved25		: 1;
+		u64 bus_lock_detected	: 1;
+		u64 enclave_mode	: 1;
+		u64 smi_pending_mtf	: 1;
+		u64 smi_from_vmx_root	: 1;
+		u64 reserved30		: 1;
+		u64 failed_vmentry	: 1;
+
+		/* 63:32 are TDX specific */
+		u64 details_l1		: 8;
+		u64 class		: 8;
+		u64 reserved61_48	: 14;
+		u64 non_recoverable	: 1;
+		u64 error		: 1;
+	};
+	u64 full;
+};
+
 struct vcpu_tdx {
 	struct kvm_vcpu	vcpu;
 
 	struct tdx_td_page tdvpr;
 	struct tdx_td_page *tdvpx;
 
+	union tdx_exit_reason exit_reason;
+
 	bool initialized;
 };
 
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index 922a3799336e..44404dd25737 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -140,6 +140,7 @@ void tdx_vm_free(struct kvm *kvm);
 int tdx_vcpu_create(struct kvm_vcpu *vcpu);
 void tdx_vcpu_free(struct kvm_vcpu *vcpu);
 void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event);
+fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu);
 
 int tdx_vm_ioctl(struct kvm *kvm, void __user *argp);
 int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp);
@@ -160,6 +161,7 @@ static inline void tdx_vm_free(struct kvm *kvm) {}
 static inline int tdx_vcpu_create(struct kvm_vcpu *vcpu) { return -EOPNOTSUPP; }
 static inline void tdx_vcpu_free(struct kvm_vcpu *vcpu) {}
 static inline void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event) {}
+static inline fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu) { return EXIT_FASTPATH_NONE; }
 
 static inline int tdx_vm_ioctl(struct kvm *kvm, void __user *argp) { return -EOPNOTSUPP; }
 static inline int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp) { return -EOPNOTSUPP; }
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index da411bcd8cbc..66400810d54f 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -300,6 +300,7 @@ const struct kvm_stats_header kvm_vcpu_stats_header = {
 };
 
 u64 __read_mostly host_xcr0;
+EXPORT_SYMBOL_GPL(host_xcr0);
 u64 __read_mostly supported_xcr0;
 EXPORT_SYMBOL_GPL(supported_xcr0);
-- 
2.25.1
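
Editor's note: the commit message above says the PMU debug store must be
restored after returning to KVM because the TDX module unconditionally
resets it, and that perf_restore_debug_store is exported so the existing
perf helper can be reused.  Below is a minimal sketch of how a follow-up
change might wire that in; the placement in tdx_vcpu_run() is an assumption
for illustration and is not part of this patch.

	/*
	 * Illustrative sketch only, not part of this patch.  Assumes a later
	 * change calls the exported perf helper right after the TD exit.
	 */
	fastpath_t tdx_vcpu_run(struct kvm_vcpu *vcpu)
	{
		struct vcpu_tdx *tdx = to_tdx(vcpu);

		if (unlikely(vcpu->kvm->vm_bugged)) {
			tdx->exit_reason.full = TDX_NON_RECOVERABLE_VCPU;
			return EXIT_FASTPATH_NONE;
		}

		trace_kvm_entry(vcpu);

		/* SEAMCALL into the TDX module; fills tdx->exit_reason. */
		tdx_vcpu_enter_exit(vcpu, tdx);

		/*
		 * Per the commit message, the TDX module unconditionally
		 * resets the PMU debug store, so restore the host value
		 * before perf relies on it again (hypothetical call site).
		 */
		perf_restore_debug_store();

		vcpu->arch.regs_avail &= ~VMX_REGS_LAZY_LOAD_SET;
		trace_kvm_exit(vcpu, KVM_ISA_VMX);

		if (tdx->exit_reason.error || tdx->exit_reason.non_recoverable)
			return EXIT_FASTPATH_NONE;
		return EXIT_FASTPATH_NONE;
	}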