From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
    erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack,
    Kai Huang, Zhi Wang, chen.bo@intel.com, hang.yuan@intel.com,
    tina.zhang@intel.com, Sean Christopherson
Subject: [PATCH v16 026/116] KVM: TDX: Do TDX specific vcpu initialization
Date: Mon, 16 Oct 2023 09:13:38 -0700
Message-Id: <111c5bb46ef4cade884481124e5a6e5b39b11f6b.1697471314.git.isaku.yamahata@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To:
References:
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Isaku Yamahata <isaku.yamahata@intel.com>

A TD guest vcpu needs TDX-specific initialization before it can run.
Repurpose KVM_MEMORY_ENCRYPT_OP to the vcpu scope, add a new sub-command
KVM_TDX_INIT_VCPU, and implement the callback for it.

Signed-off-by: Sean Christopherson
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
 arch/x86/include/asm/kvm-x86-ops.h    |   1 +
 arch/x86/include/asm/kvm_host.h       |   1 +
 arch/x86/include/uapi/asm/kvm.h       |   1 +
 arch/x86/kvm/vmx/main.c               |   9 ++
 arch/x86/kvm/vmx/tdx.c                | 180 +++++++++++++++++++++++++-
 arch/x86/kvm/vmx/tdx.h                |   7 +
 arch/x86/kvm/vmx/x86_ops.h            |   4 +
 arch/x86/kvm/x86.c                    |   6 +
 tools/arch/x86/include/uapi/asm/kvm.h |   1 +
 9 files changed, 208 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index e1adeb7d4e26..d740a3f19d96 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -126,6 +126,7 @@ KVM_X86_OP(leave_smm)
 KVM_X86_OP(enable_smi_window)
 #endif
 KVM_X86_OP(mem_enc_ioctl)
+KVM_X86_OP_OPTIONAL(vcpu_mem_enc_ioctl)
 KVM_X86_OP_OPTIONAL(mem_enc_register_region)
 KVM_X86_OP_OPTIONAL(mem_enc_unregister_region)
 KVM_X86_OP_OPTIONAL(vm_copy_enc_context_from)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 24d9a9ab338d..a21c060ffc68 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1737,6 +1737,7 @@ struct kvm_x86_ops {
 #endif
 
 	int (*mem_enc_ioctl)(struct kvm *kvm, void __user *argp);
+	int (*vcpu_mem_enc_ioctl)(struct kvm_vcpu *vcpu, void __user *argp);
 	int (*mem_enc_register_region)(struct kvm *kvm, struct kvm_enc_region *argp);
 	int (*mem_enc_unregister_region)(struct kvm *kvm, struct kvm_enc_region *argp);
 	int (*vm_copy_enc_context_from)(struct kvm *kvm, unsigned int source_fd);
diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index 7112546bd1d0..311a7894b712 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -571,6 +571,7 @@ struct kvm_pmu_event_filter {
 enum kvm_tdx_cmd_id {
 	KVM_TDX_CAPABILITIES = 0,
 	KVM_TDX_INIT_VM,
+	KVM_TDX_INIT_VCPU,
 
 	KVM_TDX_CMD_NR_MAX,
 };
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 7a9ee3ec0785..5d88d2196822 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -142,6 +142,14 @@ static int vt_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
 	return tdx_vm_ioctl(kvm, argp);
 }
 
+static int vt_vcpu_mem_enc_ioctl(struct kvm_vcpu *vcpu, void __user *argp)
+{
+	if (!is_td_vcpu(vcpu))
+		return -EINVAL;
+
+	return tdx_vcpu_ioctl(vcpu, argp);
+}
+
 #define VMX_REQUIRED_APICV_INHIBITS			\
 	(BIT(APICV_INHIBIT_REASON_DISABLE)|		\
 	 BIT(APICV_INHIBIT_REASON_ABSENT) |		\
@@ -299,6 +307,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.vcpu_deliver_sipi_vector = kvm_vcpu_deliver_sipi_vector,
 
 	.mem_enc_ioctl = vt_mem_enc_ioctl,
+	.vcpu_mem_enc_ioctl = vt_vcpu_mem_enc_ioctl,
 };
 
 struct kvm_x86_init_ops vt_init_ops __initdata = {
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index b7f8ac4b9f95..c1a8560981a3 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -49,6 +49,7 @@ int tdx_vm_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap)
 
 struct tdx_info {
 	u8 nr_tdcs_pages;
+	u8 nr_tdvpx_pages;
 };
 
 /* Info about the TDX module. */
@@ -71,6 +72,11 @@ static __always_inline hpa_t set_hkid_to_hpa(hpa_t pa, u16 hkid)
 	return pa | ((hpa_t)hkid << boot_cpu_data.x86_phys_bits);
 }
 
+static inline bool is_td_vcpu_created(struct vcpu_tdx *tdx)
+{
+	return tdx->tdvpr_pa;
+}
+
 static inline bool is_td_created(struct kvm_tdx *kvm_tdx)
 {
 	return kvm_tdx->tdr_pa;
@@ -88,6 +94,11 @@ static inline bool is_hkid_assigned(struct kvm_tdx *kvm_tdx)
 	return kvm_tdx->hkid > 0;
 }
 
+static inline bool is_td_finalized(struct kvm_tdx *kvm_tdx)
+{
+	return kvm_tdx->finalized;
+}
+
 static void tdx_clear_page(unsigned long page_pa)
 {
 	const void *zero_page = (const void *) __va(page_to_phys(ZERO_PAGE(0)));
@@ -371,7 +382,32 @@ int tdx_vcpu_create(struct kvm_vcpu *vcpu)
 
 void tdx_vcpu_free(struct kvm_vcpu *vcpu)
 {
-	/* This is stub for now. More logic will come. */
+	struct vcpu_tdx *tdx = to_tdx(vcpu);
+	int i;
+
+	/*
+	 * This method can be called when vcpu allocation/initialization
+	 * failed. So it's possible that hkid, tdvpx and tdvpr are not assigned
+	 * yet.
+	 */
+	if (is_hkid_assigned(to_kvm_tdx(vcpu->kvm))) {
+		WARN_ON_ONCE(tdx->tdvpx_pa);
+		WARN_ON_ONCE(tdx->tdvpr_pa);
+		return;
+	}
+
+	if (tdx->tdvpx_pa) {
+		for (i = 0; i < tdx_info.nr_tdvpx_pages; i++) {
+			if (tdx->tdvpx_pa[i])
+				tdx_reclaim_td_page(tdx->tdvpx_pa[i]);
+		}
+		kfree(tdx->tdvpx_pa);
+		tdx->tdvpx_pa = NULL;
+	}
+	if (tdx->tdvpr_pa) {
+		tdx_reclaim_td_page(tdx->tdvpr_pa);
+		tdx->tdvpr_pa = 0;
+	}
 }
 
 void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
@@ -380,8 +416,13 @@ void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 	/* Ignore INIT silently because TDX doesn't support INIT event. */
 	if (init_event)
 		return;
+	if (KVM_BUG_ON(is_td_vcpu_created(to_tdx(vcpu)), vcpu->kvm))
+		return;
 
-	/* This is stub for now. More logic will come here. */
+	/*
+	 * Don't update mp_state to runnable because more initialization
+	 * is needed by TDX_VCPU_INIT.
+	 */
 }
 
 static int tdx_get_capabilities(struct kvm_tdx_cmd *cmd)
@@ -876,6 +917,136 @@ int tdx_vm_ioctl(struct kvm *kvm, void __user *argp)
 	return r;
 }
 
+/* VMM can pass one 64bit auxiliary data to vcpu via RCX for guest BIOS. */
+static int tdx_td_vcpu_init(struct kvm_vcpu *vcpu, u64 vcpu_rcx)
+{
+	struct kvm_tdx *kvm_tdx = to_kvm_tdx(vcpu->kvm);
+	struct vcpu_tdx *tdx = to_tdx(vcpu);
+	unsigned long *tdvpx_pa = NULL;
+	unsigned long tdvpr_pa;
+	unsigned long va;
+	int ret, i;
+	u64 err;
+
+	if (is_td_vcpu_created(tdx))
+		return -EINVAL;
+
+	/*
+	 * The vcpu_free method frees allocated pages. Avoid partial setup
+	 * that the method can't handle.
+	 */
+	va = __get_free_page(GFP_KERNEL_ACCOUNT);
+	if (!va)
+		return -ENOMEM;
+	tdvpr_pa = __pa(va);
+
+	tdvpx_pa = kcalloc(tdx_info.nr_tdvpx_pages, sizeof(*tdx->tdvpx_pa),
+			   GFP_KERNEL_ACCOUNT);
+	if (!tdvpx_pa) {
+		ret = -ENOMEM;
+		goto free_tdvpr;
+	}
+	for (i = 0; i < tdx_info.nr_tdvpx_pages; i++) {
+		va = __get_free_page(GFP_KERNEL_ACCOUNT);
+		if (!va) {
+			ret = -ENOMEM;
+			goto free_tdvpx;
+		}
+		tdvpx_pa[i] = __pa(va);
+	}
+
+	err = tdh_vp_create(kvm_tdx->tdr_pa, tdvpr_pa);
+	if (KVM_BUG_ON(err, vcpu->kvm)) {
+		ret = -EIO;
+		pr_tdx_error(TDH_VP_CREATE, err, NULL);
+		goto free_tdvpx;
+	}
+	tdx->tdvpr_pa = tdvpr_pa;
+
+	tdx->tdvpx_pa = tdvpx_pa;
+	for (i = 0; i < tdx_info.nr_tdvpx_pages; i++) {
+		err = tdh_vp_addcx(tdx->tdvpr_pa, tdvpx_pa[i]);
+		if (KVM_BUG_ON(err, vcpu->kvm)) {
+			pr_tdx_error(TDH_VP_ADDCX, err, NULL);
+			for (; i < tdx_info.nr_tdvpx_pages; i++) {
+				free_page((unsigned long)__va(tdvpx_pa[i]));
+				tdvpx_pa[i] = 0;
+			}
+			/* vcpu_free method frees TDVPX and TDR donated to TDX */
+			return -EIO;
+		}
+	}
+
+	err = tdh_vp_init(tdx->tdvpr_pa, vcpu_rcx);
+	if (KVM_BUG_ON(err, vcpu->kvm)) {
+		pr_tdx_error(TDH_VP_INIT, err, NULL);
+		return -EIO;
+	}
+
+	vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
+	return 0;
+
+free_tdvpx:
+	for (i = 0; i < tdx_info.nr_tdvpx_pages; i++) {
+		if (tdvpx_pa[i])
+			free_page((unsigned long)__va(tdvpx_pa[i]));
+		tdvpx_pa[i] = 0;
+	}
+	kfree(tdvpx_pa);
+	tdx->tdvpx_pa = NULL;
+free_tdvpr:
+	if (tdvpr_pa)
+		free_page((unsigned long)__va(tdvpr_pa));
+	tdx->tdvpr_pa = 0;
+
+	return ret;
+}
+
+int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp)
+{
+	struct msr_data apic_base_msr;
+	struct kvm_tdx *kvm_tdx = to_kvm_tdx(vcpu->kvm);
+	struct vcpu_tdx *tdx = to_tdx(vcpu);
+	struct kvm_tdx_cmd cmd;
+	int ret;
+
+	if (tdx->initialized)
+		return -EINVAL;
+
+	if (!is_hkid_assigned(kvm_tdx) || is_td_finalized(kvm_tdx))
+		return -EINVAL;
+
+	if (copy_from_user(&cmd, argp, sizeof(cmd)))
+		return -EFAULT;
+
+	if (cmd.error)
+		return -EINVAL;
+
+	/* Currently only KVM_TDX_INIT_VCPU is defined for vcpu operation. */
+	if (cmd.flags || cmd.id != KVM_TDX_INIT_VCPU)
+		return -EINVAL;
+
+	/*
+	 * As TDX requires X2APIC, set local apic mode to X2APIC. User space
+	 * VMM, e.g. qemu, is required to set CPUID[0x1].ecx.X2APIC=1 by
+	 * KVM_SET_CPUID2. Otherwise kvm_set_apic_base() will fail.
+	 */
+	apic_base_msr = (struct msr_data) {
+		.host_initiated = true,
+		.data = APIC_DEFAULT_PHYS_BASE | LAPIC_MODE_X2APIC |
+			(kvm_vcpu_is_reset_bsp(vcpu) ? MSR_IA32_APICBASE_BSP : 0),
+	};
+	if (kvm_set_apic_base(vcpu, &apic_base_msr))
+		return -EINVAL;
+
+	ret = tdx_td_vcpu_init(vcpu, (u64)cmd.data);
+	if (ret)
+		return ret;
+
+	tdx->initialized = true;
+	return 0;
+}
+
 static int __init tdx_module_setup(void)
 {
 	const struct tdsysinfo_struct *tdsysinfo;
@@ -894,6 +1065,11 @@ static int __init tdx_module_setup(void)
 	WARN_ON(tdsysinfo->num_cpuid_config > TDX_MAX_NR_CPUID_CONFIGS);
 	tdx_info = (struct tdx_info) {
 		.nr_tdcs_pages = tdsysinfo->tdcs_base_size / PAGE_SIZE,
+		/*
+		 * TDVPS = TDVPR(4K page) + TDVPX(multiple 4K pages).
+		 * -1 for TDVPR.
+		 */
+		.nr_tdvpx_pages = tdsysinfo->tdvps_base_size / PAGE_SIZE - 1,
 	};
 
 	return 0;
diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
index 173ed19207fb..961a7da2452a 100644
--- a/arch/x86/kvm/vmx/tdx.h
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -17,12 +17,19 @@ struct kvm_tdx {
 	u64 xfam;
 	int hkid;
 
+	bool finalized;
+
 	u64 tsc_offset;
 };
 
 struct vcpu_tdx {
 	struct kvm_vcpu vcpu;
 
+	unsigned long tdvpr_pa;
+	unsigned long *tdvpx_pa;
+
+	bool initialized;
+
 	/*
 	 * Dummy to make pmu_intel not corrupt memory.
 	 * TODO: Support PMU for TDX. Future work.
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index 9b01104946b8..88e117db748c 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -150,6 +150,8 @@ int tdx_vm_ioctl(struct kvm *kvm, void __user *argp);
 int tdx_vcpu_create(struct kvm_vcpu *vcpu);
 void tdx_vcpu_free(struct kvm_vcpu *vcpu);
 void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event);
+
+int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp);
 #else
 static inline int tdx_hardware_setup(struct kvm_x86_ops *x86_ops) { return -EOPNOTSUPP; }
 static inline void tdx_hardware_unsetup(void) {}
@@ -169,6 +171,8 @@ static inline int tdx_vm_ioctl(struct kvm *kvm, void __user *argp) { return -EOP
 static inline int tdx_vcpu_create(struct kvm_vcpu *vcpu) { return -EOPNOTSUPP; }
 static inline void tdx_vcpu_free(struct kvm_vcpu *vcpu) {}
 static inline void tdx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event) {}
+
+static inline int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp) { return -EOPNOTSUPP; }
 #endif
 
 #endif /* __KVM_X86_VMX_X86_OPS_H */
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 7aff6f88f575..69d9d15ef70d 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6086,6 +6086,12 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
 	case KVM_SET_DEVICE_ATTR:
 		r = kvm_vcpu_ioctl_device_attr(vcpu, ioctl, argp);
 		break;
+	case KVM_MEMORY_ENCRYPT_OP:
+		r = -ENOTTY;
+		if (!kvm_x86_ops.vcpu_mem_enc_ioctl)
+			goto out;
+		r = kvm_x86_ops.vcpu_mem_enc_ioctl(vcpu, argp);
+		break;
 	default:
 		r = -EINVAL;
 	}
diff --git a/tools/arch/x86/include/uapi/asm/kvm.h b/tools/arch/x86/include/uapi/asm/kvm.h
index 61ce7d174fcf..83bd9e3118d1 100644
--- a/tools/arch/x86/include/uapi/asm/kvm.h
+++ b/tools/arch/x86/include/uapi/asm/kvm.h
@@ -566,6 +566,7 @@ struct kvm_pmu_event_filter {
 enum kvm_tdx_cmd_id {
 	KVM_TDX_CAPABILITIES = 0,
 	KVM_TDX_INIT_VM,
+	KVM_TDX_INIT_VCPU,
 
 	KVM_TDX_CMD_NR_MAX,
 };
-- 
2.25.1
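
For context, here is a minimal userspace sketch of how a VMM might invoke the
new vcpu-scoped sub-command added by this patch. It is illustrative only and
not part of the patch: the struct kvm_tdx_cmd field layout (id/flags/data/error)
is assumed from the earlier KVM_TDX_INIT_VM patch in this series, vcpu_fd is a
hypothetical vCPU file descriptor from KVM_CREATE_VCPU, and CPUID is assumed to
have already been set via KVM_SET_CPUID2 with X2APIC enabled, as the kernel-side
comment requires.

    #include <stdint.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    static int tdx_init_vcpu(int vcpu_fd, uint64_t initial_rcx)
    {
            struct kvm_tdx_cmd cmd;

            memset(&cmd, 0, sizeof(cmd));
            cmd.id = KVM_TDX_INIT_VCPU;     /* new sub-command added by this patch */
            cmd.flags = 0;                  /* non-zero flags are rejected */
            cmd.data = initial_rcx;         /* 64-bit value passed to the vcpu via RCX */

            /* Issued on the vCPU fd, not the VM fd, now that the op is vcpu-scoped. */
            return ioctl(vcpu_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
    }

On success the ioctl returns 0 and the vcpu becomes runnable; per the
tdx_vcpu_ioctl() checks above, it fails with EINVAL if the vcpu was already
initialized, the TD has no HKID assigned, or the TD is already finalized.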