From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
    erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack,
    Kai Huang, Zhi Wang, chen.bo@intel.com, hang.yuan@intel.com,
    tina.zhang@intel.com, Sean Christopherson
Subject: [PATCH v15 011/115] KVM: TDX: Add C wrapper functions for SEAMCALLs to the TDX module
Date: Tue, 25 Jul 2023 15:13:22 -0700
Message-Id: <2d81a22fc1d641b3f66aec13e5d1ee13ad266857.1690322424.git.isaku.yamahata@intel.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Mailing-List: linux-kernel@vger.kernel.org

From: Isaku Yamahata <isaku.yamahata@intel.com>

A VMM interacts with the TDX module through a new instruction, SEAMCALL.
For example, a TDX VMM does not have full access to the VM control
structure that corresponds to the VMX VMCS; instead, it issues SEAMCALLs
to have the TDX module operate on the control structure on its behalf.
Export __seamcall and define C wrapper functions for the SEAMCALLs for
readability.

Some SEAMCALL APIs donate host pages to the TDX module or to a guest TD,
and the donated pages are encrypted.  Before donating a page, the VMM
must flush its cache lines to avoid cache-line aliasing.

Signed-off-by: Sean Christopherson
Signed-off-by: Isaku Yamahata
---
 arch/x86/include/asm/tdx.h       |   4 +
 arch/x86/kvm/vmx/tdx_ops.h       | 204 +++++++++++++++++++++++++++++++
 arch/x86/virt/vmx/tdx/seamcall.S |   2 +
 arch/x86/virt/vmx/tdx/tdx.h      |   3 -
 4 files changed, 210 insertions(+), 3 deletions(-)
 create mode 100644 arch/x86/kvm/vmx/tdx_ops.h
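
For illustration only (not part of this patch): a KVM-side caller that
donates a control page to a TD might use these wrappers roughly as in the
sketch below. The helper name tdx_alloc_add_control_page() and its
errno-based error handling are hypothetical; only tdh_mng_addcx() and the
u64 TDX status-code return convention come from this patch.

/*
 * Hypothetical usage sketch, not part of this patch: allocate one control
 * page and donate it to the TD identified by its TDR page.
 */
static int tdx_alloc_add_control_page(hpa_t tdr)
{
	struct page *page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);
	u64 err;

	if (!page)
		return -ENOMEM;

	/* The wrapper flushes the page's cache lines before the SEAMCALL. */
	err = tdh_mng_addcx(tdr, page_to_phys(page));
	if (err) {
		/* Non-zero is a TDX status code from the TDX module, not an errno. */
		__free_page(page);
		return -EIO;
	}

	return 0;
}
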
diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index ed84211fe190..bf5324b5ea01 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -90,12 +90,16 @@ int tdx_cpu_enable(void);
 int tdx_enable(void);
 void tdx_reset_memory(void);
 bool tdx_is_private_mem(unsigned long phys);
+u64 __seamcall(u64 op, u64 rcx, u64 rdx, u64 r8, u64 r9,
+	       struct tdx_module_output *out);
 #else	/* !CONFIG_INTEL_TDX_HOST */
 static inline bool platform_tdx_enabled(void) { return false; }
 static inline int tdx_cpu_enable(void) { return -ENODEV; }
 static inline int tdx_enable(void)  { return -ENODEV; }
 static inline void tdx_reset_memory(void) { }
 static inline bool tdx_is_private_mem(unsigned long phys) { return false; }
+static inline u64 __seamcall(u64 op, u64 rcx, u64 rdx, u64 r8, u64 r9,
+			     struct tdx_module_output *out) { return TDX_SEAMCALL_UD; };
 #endif	/* CONFIG_INTEL_TDX_HOST */
 
 #endif /* !__ASSEMBLY__ */
diff --git a/arch/x86/kvm/vmx/tdx_ops.h b/arch/x86/kvm/vmx/tdx_ops.h
new file mode 100644
index 000000000000..76eddecdca12
--- /dev/null
+++ b/arch/x86/kvm/vmx/tdx_ops.h
@@ -0,0 +1,204 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* constants/data definitions for TDX SEAMCALLs */
+
+#ifndef __KVM_X86_TDX_OPS_H
+#define __KVM_X86_TDX_OPS_H
+
+#include <linux/compiler.h>
+
+#include <asm/cacheflush.h>
+#include <asm/asm.h>
+#include <asm/kvm_host.h>
+
+#include "tdx_errno.h"
+#include "tdx_arch.h"
+#include "x86.h"
+
+static inline u64 tdx_seamcall(u64 op, u64 rcx, u64 rdx, u64 r8, u64 r9,
+			       struct tdx_module_output *out)
+{
+	u64 ret;
+
+	ret = __seamcall(op, rcx, rdx, r8, r9, out);
+	if (unlikely(ret == TDX_SEAMCALL_UD)) {
+		/*
+		 * SEAMCALLs fail with TDX_SEAMCALL_UD returned when VMX is off.
+		 * This can happen when the host gets rebooted or live
+		 * updated.  In this case, the instruction execution is ignored
+		 * as KVM is shut down, so the error code is suppressed.  Other
+		 * than this, the error is unexpected and the execution can't
+		 * continue as the TDX features rely on VMX to be on.
+		 */
+		kvm_spurious_fault();
+		return 0;
+	}
+	return ret;
+}
+
+static inline u64 tdh_mng_addcx(hpa_t tdr, hpa_t addr)
+{
+	clflush_cache_range(__va(addr), PAGE_SIZE);
+	return tdx_seamcall(TDH_MNG_ADDCX, addr, tdr, 0, 0, NULL);
+}
+
+static inline u64 tdh_mem_page_add(hpa_t tdr, gpa_t gpa, hpa_t hpa, hpa_t source,
+				   struct tdx_module_output *out)
+{
+	clflush_cache_range(__va(hpa), PAGE_SIZE);
+	return tdx_seamcall(TDH_MEM_PAGE_ADD, gpa, tdr, hpa, source, out);
+}
+
+static inline u64 tdh_mem_sept_add(hpa_t tdr, gpa_t gpa, int level, hpa_t page,
+				   struct tdx_module_output *out)
+{
+	clflush_cache_range(__va(page), PAGE_SIZE);
+	return tdx_seamcall(TDH_MEM_SEPT_ADD, gpa | level, tdr, page, 0, out);
+}
+
+static inline u64 tdh_mem_sept_remove(hpa_t tdr, gpa_t gpa, int level,
+				      struct tdx_module_output *out)
+{
+	return tdx_seamcall(TDH_MEM_SEPT_REMOVE, gpa | level, tdr, 0, 0, out);
+}
+
+static inline u64 tdh_vp_addcx(hpa_t tdvpr, hpa_t addr)
+{
+	clflush_cache_range(__va(addr), PAGE_SIZE);
+	return tdx_seamcall(TDH_VP_ADDCX, addr, tdvpr, 0, 0, NULL);
+}
+
+static inline u64 tdh_mem_page_relocate(hpa_t tdr, gpa_t gpa, hpa_t hpa,
+					struct tdx_module_output *out)
+{
+	clflush_cache_range(__va(hpa), PAGE_SIZE);
+	return tdx_seamcall(TDH_MEM_PAGE_RELOCATE, gpa, tdr, hpa, 0, out);
+}
+
+static inline u64 tdh_mem_page_aug(hpa_t tdr, gpa_t gpa, hpa_t hpa,
+				   struct tdx_module_output *out)
+{
+	clflush_cache_range(__va(hpa), PAGE_SIZE);
+	return tdx_seamcall(TDH_MEM_PAGE_AUG, gpa, tdr, hpa, 0, out);
+}
+
+static inline u64 tdh_mem_range_block(hpa_t tdr, gpa_t gpa, int level,
+				      struct tdx_module_output *out)
+{
+	return tdx_seamcall(TDH_MEM_RANGE_BLOCK, gpa | level, tdr, 0, 0, out);
+}
+
+static inline u64 tdh_mng_key_config(hpa_t tdr)
+{
+	return tdx_seamcall(TDH_MNG_KEY_CONFIG, tdr, 0, 0, 0, NULL);
+}
+
+static inline u64 tdh_mng_create(hpa_t tdr, int hkid)
+{
+	clflush_cache_range(__va(tdr), PAGE_SIZE);
+	return tdx_seamcall(TDH_MNG_CREATE, tdr, hkid, 0, 0, NULL);
+}
+
+static inline u64 tdh_vp_create(hpa_t tdr, hpa_t tdvpr)
+{
+	clflush_cache_range(__va(tdvpr), PAGE_SIZE);
+	return tdx_seamcall(TDH_VP_CREATE, tdvpr, tdr, 0, 0, NULL);
+}
+
+static inline u64 tdh_mng_rd(hpa_t tdr, u64 field, struct tdx_module_output *out)
+{
+	return tdx_seamcall(TDH_MNG_RD, tdr, field, 0, 0, out);
+}
+
+static inline u64 tdh_mr_extend(hpa_t tdr, gpa_t gpa,
+				struct tdx_module_output *out)
+{
+	return tdx_seamcall(TDH_MR_EXTEND, gpa, tdr, 0, 0, out);
+}
+
+static inline u64 tdh_mr_finalize(hpa_t tdr)
+{
+	return tdx_seamcall(TDH_MR_FINALIZE, tdr, 0, 0, 0, NULL);
+}
+
+static inline u64 tdh_vp_flush(hpa_t tdvpr)
+{
+	return tdx_seamcall(TDH_VP_FLUSH, tdvpr, 0, 0, 0, NULL);
+}
+
+static inline u64 tdh_mng_vpflushdone(hpa_t tdr)
+{
+	return tdx_seamcall(TDH_MNG_VPFLUSHDONE, tdr, 0, 0, 0, NULL);
+}
+
+static inline u64 tdh_mng_key_freeid(hpa_t tdr)
+{
+	return tdx_seamcall(TDH_MNG_KEY_FREEID, tdr, 0, 0, 0, NULL);
+}
+
+static inline u64 tdh_mng_init(hpa_t tdr, hpa_t td_params,
+			       struct tdx_module_output *out)
+{
+	return tdx_seamcall(TDH_MNG_INIT, tdr, td_params, 0, 0, out);
+}
+
+static inline u64 tdh_vp_init(hpa_t tdvpr, u64 rcx)
+{
+	return tdx_seamcall(TDH_VP_INIT, tdvpr, rcx, 0, 0, NULL);
+}
+
+static inline u64 tdh_vp_rd(hpa_t tdvpr, u64 field,
+			    struct tdx_module_output *out)
+{
+	return tdx_seamcall(TDH_VP_RD, tdvpr, field, 0, 0, out);
+}
+
+static inline u64 tdh_mng_key_reclaimid(hpa_t tdr)
+{
+	return tdx_seamcall(TDH_MNG_KEY_RECLAIMID, tdr, 0, 0, 0, NULL);
+}
+
+static inline u64 tdh_phymem_page_reclaim(hpa_t page,
+					   struct tdx_module_output *out)
+{
+	return tdx_seamcall(TDH_PHYMEM_PAGE_RECLAIM, page, 0, 0, 0, out);
+}
+
+static inline u64 tdh_mem_page_remove(hpa_t tdr, gpa_t gpa, int level,
+				      struct tdx_module_output *out)
+{
+	return tdx_seamcall(TDH_MEM_PAGE_REMOVE, gpa | level, tdr, 0, 0, out);
+}
+
+static inline u64 tdh_sys_lp_shutdown(void)
+{
+	return tdx_seamcall(TDH_SYS_LP_SHUTDOWN, 0, 0, 0, 0, NULL);
+}
+
+static inline u64 tdh_mem_track(hpa_t tdr)
+{
+	return tdx_seamcall(TDH_MEM_TRACK, tdr, 0, 0, 0, NULL);
+}
+
+static inline u64 tdh_mem_range_unblock(hpa_t tdr, gpa_t gpa, int level,
+					struct tdx_module_output *out)
+{
+	return tdx_seamcall(TDH_MEM_RANGE_UNBLOCK, gpa | level, tdr, 0, 0, out);
+}
+
+static inline u64 tdh_phymem_cache_wb(bool resume)
+{
+	return tdx_seamcall(TDH_PHYMEM_CACHE_WB, resume ? 1 : 0, 0, 0, 0, NULL);
+}
+
+static inline u64 tdh_phymem_page_wbinvd(hpa_t page)
+{
+	return tdx_seamcall(TDH_PHYMEM_PAGE_WBINVD, page, 0, 0, 0, NULL);
+}
+
+static inline u64 tdh_vp_wr(hpa_t tdvpr, u64 field, u64 val, u64 mask,
+			    struct tdx_module_output *out)
+{
+	return tdx_seamcall(TDH_VP_WR, tdvpr, field, val, mask, out);
+}
+
+#endif /* __KVM_X86_TDX_OPS_H */
diff --git a/arch/x86/virt/vmx/tdx/seamcall.S b/arch/x86/virt/vmx/tdx/seamcall.S
index f81be6b9c133..b90a7fe05494 100644
--- a/arch/x86/virt/vmx/tdx/seamcall.S
+++ b/arch/x86/virt/vmx/tdx/seamcall.S
@@ -1,5 +1,6 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 #include <linux/linkage.h>
+#include <asm/export.h>
 #include <asm/frame.h>
 
 #include "tdxcall.S"
@@ -50,3 +51,4 @@ SYM_FUNC_START(__seamcall)
 	FRAME_END
 	RET
 SYM_FUNC_END(__seamcall)
+EXPORT_SYMBOL_GPL(__seamcall)
diff --git a/arch/x86/virt/vmx/tdx/tdx.h b/arch/x86/virt/vmx/tdx/tdx.h
index 2fefd688924c..70315263d8d2 100644
--- a/arch/x86/virt/vmx/tdx/tdx.h
+++ b/arch/x86/virt/vmx/tdx/tdx.h
@@ -145,7 +145,4 @@ struct tdmr_info_list {
 	int max_tdmrs;	/* How many 'tdmr_info's are allocated */
 };
 
-struct tdx_module_output;
-u64 __seamcall(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9,
-	       struct tdx_module_output *out);
 #endif
-- 
2.25.1