From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
    erdemaktas@google.com, Sean Christopherson, Sagi Shahar, Sean Christopherson
Subject: [PATCH v9 024/105] KVM: TDX: Do TDX specific vcpu initialization
Date: Fri, 30 Sep 2022 03:17:18 -0700
Message-Id: <944c214b72337343c7b1a87e816fdc4c3b077a99.1664530907.git.isaku.yamahata@intel.com>

From: Sean Christopherson

A TD guest vcpu needs to be configured before it is ready to run, which
requires additional information from the device model (e.g. qemu): one
64-bit value is passed to the vcpu's RCX as an initial value.

Repurpose KVM_MEMORY_ENCRYPT_OP to be vcpu-scoped as well, and add a new
sub-command, KVM_TDX_INIT_VCPU, under it for this additional vcpu
configuration.

Add a kvm_x86_ops callback for vCPU-scoped KVM_MEMORY_ENCRYPT_OP
operations and wire the new KVM_TDX_INIT_VCPU sub-command to it for
further vcpu initialization.

Signed-off-by: Sean Christopherson
Signed-off-by: Isaku Yamahata
---
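For illustration only (not part of this patch): a minimal sketch of how a
VMM might drive the new vCPU-scoped ioctl from userspace, where vcpu_fd is
the file descriptor returned by KVM_CREATE_VCPU. The struct kvm_tdx_cmd
field names mirror how tdx_vcpu_ioctl() below consumes them; the helper
name, includes, and error handling are illustrative assumptions.

/*
 * Hypothetical VMM-side helper, assuming UAPI headers that already carry
 * enum kvm_tdx_cmd_id and struct kvm_tdx_cmd from this series.
 */
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int tdx_init_vcpu(int vcpu_fd, uint64_t initial_rcx)
{
        struct kvm_tdx_cmd cmd;

        memset(&cmd, 0, sizeof(cmd));
        cmd.id = KVM_TDX_INIT_VCPU;
        cmd.data = initial_rcx;  /* ends up in the guest's RCX via TDH.VP.INIT */

        /* flags, error and unused must stay zero or the kernel returns -EINVAL. */
        return ioctl(vcpu_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
}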
 arch/x86/include/asm/kvm-x86-ops.h    |   1 +
 arch/x86/include/asm/kvm_host.h       |   1 +
 arch/x86/include/uapi/asm/kvm.h       |   1 +
 arch/x86/kvm/vmx/main.c               |   9 ++
 arch/x86/kvm/vmx/tdx.c                | 166 ++++++++++++++++++--------
 arch/x86/kvm/vmx/tdx.h                |   4 +
 arch/x86/kvm/vmx/x86_ops.h            |   2 +
 arch/x86/kvm/x86.c                    |   6 +
 tools/arch/x86/include/uapi/asm/kvm.h |   1 +
 9 files changed, 139 insertions(+), 52 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 78b6b2c4d596..104a34b44e94 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -120,6 +120,7 @@ KVM_X86_OP(leave_smm)
 KVM_X86_OP(enable_smi_window)
 KVM_X86_OP_OPTIONAL(dev_mem_enc_ioctl)
 KVM_X86_OP_OPTIONAL(mem_enc_ioctl)
+KVM_X86_OP_OPTIONAL(vcpu_mem_enc_ioctl)
 KVM_X86_OP_OPTIONAL(mem_enc_register_region)
 KVM_X86_OP_OPTIONAL(mem_enc_unregister_region)
 KVM_X86_OP_OPTIONAL(vm_copy_enc_context_from)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index aa43733746c6..531f04e36904 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1622,6 +1622,7 @@ struct kvm_x86_ops {
 
         int (*dev_mem_enc_ioctl)(void __user *argp);
         int (*mem_enc_ioctl)(struct kvm *kvm, void __user *argp);
+        int (*vcpu_mem_enc_ioctl)(struct kvm_vcpu *vcpu, void __user *argp);
         int (*mem_enc_register_region)(struct kvm *kvm, struct kvm_enc_region *argp);
         int (*mem_enc_unregister_region)(struct kvm *kvm, struct kvm_enc_region *argp);
         int (*vm_copy_enc_context_from)(struct kvm *kvm, unsigned int source_fd);
diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index e231ba752788..801b78b957fa 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -539,6 +539,7 @@ struct kvm_pmu_event_filter {
 enum kvm_tdx_cmd_id {
         KVM_TDX_CAPABILITIES = 0,
         KVM_TDX_INIT_VM,
+        KVM_TDX_INIT_VCPU,
 
         KVM_TDX_CMD_NR_MAX,
 };
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index b49d3f58dc4f..fe927aaee114 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -103,6 +103,14 @@ static int vt_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
         return tdx_vm_ioctl(kvm, argp);
 }
 
+static int vt_vcpu_mem_enc_ioctl(struct kvm_vcpu *vcpu, void __user *argp)
+{
+        if (!is_td_vcpu(vcpu))
+                return -EINVAL;
+
+        return tdx_vcpu_ioctl(vcpu, argp);
+}
+
 struct kvm_x86_ops vt_x86_ops __initdata = {
         .name = "kvm_intel",
 
@@ -246,6 +254,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 
         .dev_mem_enc_ioctl = tdx_dev_ioctl,
         .mem_enc_ioctl = vt_mem_enc_ioctl,
+        .vcpu_mem_enc_ioctl = vt_vcpu_mem_enc_ioctl,
 };
 
 struct kvm_x86_init_ops vt_init_ops __initdata = {
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 0fa4746f0450..10b0ac09bd00 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -70,6 +70,11 @@ static inline bool is_hkid_assigned(struct kvm_tdx *kvm_tdx)
         return kvm_tdx->hkid > 0;
 }
 
+static inline bool is_td_finalized(struct kvm_tdx *kvm_tdx)
+{
+        return kvm_tdx->finalized;
+}
+
 static void tdx_clear_page(unsigned long page)
 {
         const void *zero_page = (const void *) __va(page_to_phys(ZERO_PAGE(0)));
@@ -293,31 +298,12 @@ int tdx_vm_init(struct kvm *kvm)
 
 int tdx_vcpu_create(struct kvm_vcpu *vcpu)
 {
-        struct vcpu_tdx *tdx = to_tdx(vcpu);
-        int ret, i;
-
         /* TDX only supports x2APIC, which requires an in-kernel local APIC. */
         if (!vcpu->arch.apic)
                 return -EINVAL;
 
         fpstate_set_confidential(&vcpu->arch.guest_fpu);
 
-        ret = tdx_alloc_td_page(&tdx->tdvpr);
-        if (ret)
-                return ret;
-
-        tdx->tdvpx = kcalloc(tdx_caps.tdvpx_nr_pages, sizeof(*tdx->tdvpx),
-                             GFP_KERNEL_ACCOUNT);
-        if (!tdx->tdvpx) {
-                ret = -ENOMEM;
-                goto free_tdvpr;
-        }
-        for (i = 0; i < tdx_caps.tdvpx_nr_pages; i++) {
-                ret = tdx_alloc_td_page(&tdx->tdvpx[i]);
-                if (ret)
-                        goto free_tdvpx;
-        }
-
         vcpu->arch.efer = EFER_SCE | EFER_LME | EFER_LMA | EFER_NX;
 
         vcpu->arch.cr0_guest_owned_bits = -1ul;
@@ -329,17 +315,6 @@ int tdx_vcpu_create(struct kvm_vcpu *vcpu)
                 !(to_kvm_tdx(vcpu->kvm)->attributes & TDX_TD_ATTRIBUTE_DEBUG);
 
         return 0;
-
-free_tdvpx:
-        /* @i points at the TDVPX page that failed allocation. */
-        for (--i; i >= 0; i--)
-                free_page(tdx->tdvpx[i].va);
-        kfree(tdx->tdvpx);
-        tdx->tdvpx = NULL;
-free_tdvpr:
-        free_page(tdx->tdvpr.va);
-
-        return ret;
 }
 
 void tdx_vcpu_free(struct kvm_vcpu *vcpu)
@@ -362,34 +337,14 @@ void tdx_vcpu_free(struct kvm_vcpu *vcpu)
 
 void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 {
-        struct kvm_tdx *kvm_tdx = to_kvm_tdx(vcpu->kvm);
-        struct vcpu_tdx *tdx = to_tdx(vcpu);
         struct msr_data apic_base_msr;
-        u64 err;
-        int i;
 
         /* TDX doesn't support INIT event. */
         if (WARN_ON_ONCE(init_event))
                 goto td_bugged;
-        if (WARN_ON_ONCE(is_td_vcpu_created(tdx)))
+        if (WARN_ON_ONCE(is_td_vcpu_created(to_tdx(vcpu))))
                 goto td_bugged;
 
-        err = tdh_vp_create(kvm_tdx->tdr.pa, tdx->tdvpr.pa);
-        if (WARN_ON_ONCE(err)) {
-                pr_tdx_error(TDH_VP_CREATE, err, NULL);
-                goto td_bugged;
-        }
-        tdx_mark_td_page_added(&tdx->tdvpr);
-
-        for (i = 0; i < tdx_caps.tdvpx_nr_pages; i++) {
-                err = tdh_vp_addcx(tdx->tdvpr.pa, tdx->tdvpx[i].pa);
-                if (WARN_ON_ONCE(err)) {
-                        pr_tdx_error(TDH_VP_ADDCX, err, NULL);
-                        goto td_bugged;
-                }
-                tdx_mark_td_page_added(&tdx->tdvpx[i]);
-        }
-
         if (!vcpu->arch.cpuid_entries) {
                 /*
                  * On cpu creation, cpuid entry is blank.  Forcibly enable
@@ -409,6 +364,8 @@ void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
                 vcpu->arch.cpuid_entries = e;
                 vcpu->arch.cpuid_nent = 1;
         }
+
+        /* TDX requires X2APIC. */
         apic_base_msr.data = APIC_DEFAULT_PHYS_BASE | LAPIC_MODE_X2APIC;
         if (kvm_vcpu_is_reset_bsp(vcpu))
                 apic_base_msr.data |= MSR_IA32_APICBASE_BSP;
@@ -416,7 +373,10 @@ void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
         if (WARN_ON_ONCE(kvm_set_apic_base(vcpu, &apic_base_msr)))
                 goto td_bugged;
 
-        vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
+        /*
+         * Don't update mp_state to runnable because more initialization
+         * is needed by KVM_TDX_INIT_VCPU.
+         */
 
         return;
 
@@ -860,6 +820,108 @@ int tdx_vm_ioctl(struct kvm *kvm, void __user *argp)
         return r;
 }
 
+static int tdx_td_vcpu_init(struct kvm_vcpu *vcpu, u64 vcpu_rcx)
+{
+        struct kvm_tdx *kvm_tdx = to_kvm_tdx(vcpu->kvm);
+        struct vcpu_tdx *tdx = to_tdx(vcpu);
+        int ret, i;
+        u64 err;
+
+        if (is_td_vcpu_created(tdx))
+                return -EINVAL;
+
+        ret = tdx_alloc_td_page(&tdx->tdvpr);
+        if (ret)
+                return ret;
+
+        tdx->tdvpx = kcalloc(tdx_caps.tdvpx_nr_pages, sizeof(*tdx->tdvpx),
+                             GFP_KERNEL_ACCOUNT);
+        if (!tdx->tdvpx) {
+                ret = -ENOMEM;
+                goto free_tdvpr;
+        }
+        for (i = 0; i < tdx_caps.tdvpx_nr_pages; i++) {
+                ret = tdx_alloc_td_page(&tdx->tdvpx[i]);
+                if (ret)
+                        goto free_tdvpx;
+        }
+
+        err = tdh_vp_create(kvm_tdx->tdr.pa, tdx->tdvpr.pa);
+        if (WARN_ON_ONCE(err)) {
+                ret = -EIO;
+                pr_tdx_error(TDH_VP_CREATE, err, NULL);
+                goto td_bugged;
+        }
+        tdx_mark_td_page_added(&tdx->tdvpr);
+
+        for (i = 0; i < tdx_caps.tdvpx_nr_pages; i++) {
+                err = tdh_vp_addcx(tdx->tdvpr.pa, tdx->tdvpx[i].pa);
+                if (WARN_ON_ONCE(err)) {
+                        ret = -EIO;
+                        pr_tdx_error(TDH_VP_ADDCX, err, NULL);
+                        goto td_bugged;
+                }
+                tdx_mark_td_page_added(&tdx->tdvpx[i]);
+        }
+
+        err = tdh_vp_init(tdx->tdvpr.pa, vcpu_rcx);
+        if (WARN_ON_ONCE(err)) {
+                ret = -EIO;
+                pr_tdx_error(TDH_VP_INIT, err, NULL);
+                goto td_bugged;
+        }
+
+        vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
+
+        return 0;
+
+td_bugged:
+        vcpu->kvm->vm_bugged = true;
+        return ret;
+
+free_tdvpx:
+        /* @i points at the TDVPX page that failed allocation. */
+        for (--i; i >= 0; i--)
+                free_page(tdx->tdvpx[i].va);
+        kfree(tdx->tdvpx);
+        tdx->tdvpx = NULL;
+free_tdvpr:
+        free_page(tdx->tdvpr.va);
+
+        return ret;
+}
+
+int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp)
+{
+        struct kvm_tdx *kvm_tdx = to_kvm_tdx(vcpu->kvm);
+        struct vcpu_tdx *tdx = to_tdx(vcpu);
+        struct kvm_tdx_cmd cmd;
+        int ret;
+
+        if (tdx->vcpu_initialized)
+                return -EINVAL;
+
+        if (!is_td_initialized(vcpu->kvm) || is_td_finalized(kvm_tdx))
+                return -EINVAL;
+
+        if (copy_from_user(&cmd, argp, sizeof(cmd)))
+                return -EFAULT;
+
+        if (cmd.error || cmd.unused)
+                return -EINVAL;
+
+        /* Currently only KVM_TDX_INIT_VCPU is defined for vcpu operation. */
+        if (cmd.flags || cmd.id != KVM_TDX_INIT_VCPU)
+                return -EINVAL;
+
+        ret = tdx_td_vcpu_init(vcpu, (u64)cmd.data);
+        if (ret)
+                return ret;
+
+        tdx->vcpu_initialized = true;
+        return 0;
+}
+
 static int __init tdx_module_setup(void)
 {
         const struct tdsysinfo_struct *tdsysinfo;
diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
index b1906dc2f0f9..4ce236a0cab2 100644
--- a/arch/x86/kvm/vmx/tdx.h
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -23,6 +23,8 @@ struct kvm_tdx {
         u64 xfam;
         int hkid;
 
+        bool finalized;
+
         u64 tsc_offset;
 };
 
@@ -32,6 +34,8 @@ struct vcpu_tdx {
         struct tdx_td_page tdvpr;
         struct tdx_td_page *tdvpx;
 
+        bool vcpu_initialized;
+
         /*
          * Dummy to make pmu_intel not corrupt memory.
          * TODO: Support PMU for TDX.  Future work.
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index 1febdc8dfe9f..37c74f325b97 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -146,6 +146,7 @@ void tdx_vcpu_free(struct kvm_vcpu *vcpu);
 void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event);
 
 int tdx_vm_ioctl(struct kvm *kvm, void __user *argp);
+int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp);
 #else
 static inline int tdx_hardware_setup(struct kvm_x86_ops *x86_ops) { return 0; }
 static inline bool tdx_is_vm_type_supported(unsigned long type) { return false; }
@@ -163,6 +164,7 @@ static inline void tdx_vcpu_free(struct kvm_vcpu *vcpu) {}
 static inline void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event) {}
 
 static inline int tdx_vm_ioctl(struct kvm *kvm, void __user *argp) { return -EOPNOTSUPP; }
+static inline int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp) { return -EOPNOTSUPP; }
 #endif
 
 #endif /* __KVM_X86_VMX_X86_OPS_H */
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index e2cbeeec9d6a..e1c35cbe0b77 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5920,6 +5920,12 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
         case KVM_SET_DEVICE_ATTR:
                 r = kvm_vcpu_ioctl_device_attr(vcpu, ioctl, argp);
                 break;
+        case KVM_MEMORY_ENCRYPT_OP:
+                r = -ENOTTY;
+                if (!kvm_x86_ops.vcpu_mem_enc_ioctl)
+                        goto out;
+                r = kvm_x86_ops.vcpu_mem_enc_ioctl(vcpu, argp);
+                break;
         default:
                 r = -EINVAL;
         }
diff --git a/tools/arch/x86/include/uapi/asm/kvm.h b/tools/arch/x86/include/uapi/asm/kvm.h
index 531a0033e530..35e3b4aa2e96 100644
--- a/tools/arch/x86/include/uapi/asm/kvm.h
+++ b/tools/arch/x86/include/uapi/asm/kvm.h
@@ -539,6 +539,7 @@ struct kvm_pmu_event_filter {
 enum kvm_tdx_cmd_id {
         KVM_TDX_CAPABILITIES = 0,
         KVM_TDX_INIT_VM,
+        KVM_TDX_INIT_VCPU,
 
         KVM_TDX_CMD_NR_MAX,
 };

-- 
2.25.1