Hi,
Intel Trusted Domain Extensions (TDX) protects guest VMs from malicious
host and certain physical attacks. This series provides support for
initializing TDX in the host kernel. KVM support for TDX is being
developed separately[1].
This series is based on Kirill's TDX guest series[2]. The reason is that
the host-side SEAMCALL implementation can share the TDCALL implementation
from the TDX guest series.
You can also find this series in the repo below on GitHub:
https://github.com/intel/tdx/tree/host-upstream
The code has been tested on a couple of TDX-capable machines. I would
consider it ready for review (from the overall design to the detailed
implementation). It would be highly appreciated if anyone could help
review this series. For Intel folks, I would appreciate acks if the
patches look good to you.
Thanks in advance.
Changelog history:
RFC (v1):
https://lore.kernel.org/all/e0ff030a49b252d91c789a89c303bb4206f85e3d.1646007267.git.kai.huang@intel.com/T/
- RFC (v1) -> v2:
- Rebased to Kirill's latest TDX guest code.
- Fixed two issues that are related to finding all RAM memory regions
based on e820.
- Minor improvement on comments and commit messages.
== Background ==
Intel Trusted Domain Extensions (TDX) protects guest VMs from malicious
host and certain physical attacks. To support TDX, a new CPU mode called
Secure Arbitration Mode (SEAM) is added to Intel processors.
SEAM is an extension to the existing VMX architecture. It defines a new
VMX root operation (SEAM VMX root) and a new VMX non-root operation (SEAM
VMX non-root).
SEAM VMX root operation is designed to host a CPU-attested, software
module called the 'TDX module' which implements functions to manage
crypto protected VMs called Trust Domains (TD). SEAM VMX root is also
designed to host a CPU-attested, software module called the 'Intel
Persistent SEAMLDR (Intel P-SEAMLDR)' to load and update the TDX module.
The host kernel transitions to either the P-SEAMLDR or the TDX module via
the new SEAMCALL instruction. SEAMCALLs are host-side interface functions
defined by the P-SEAMLDR and the TDX module around the new SEAMCALL
instruction. They are similar to hypercalls, except they are made by the
host kernel to the SEAM software modules.
TDX leverages Intel Multi-Key Total Memory Encryption (MKTME) to crypto
protect TD guests. TDX reserves part of the MKTME KeyID space as TDX
private KeyIDs, which can only be used by software running in SEAM. The
physical address bits used to encode a TDX private KeyID are treated as
reserved bits when not in SEAM operation. The partitioning of MKTME
KeyIDs and TDX private KeyIDs is configured by BIOS.
Before being able to manage TD guests, the TDX module must be loaded and
properly initialized using SEAMCALLs defined by the TDX architecture.
This series assumes both the P-SEAMLDR and the TDX module are loaded by
the BIOS before the kernel boots.
There's no CPUID or MSR to detect either the P-SEAMLDR or the TDX module.
Instead, they can be detected by calling the P-SEAMLDR's SEAMLDR.INFO
SEAMCALL. The success of this SEAMCALL means the P-SEAMLDR is loaded,
and the P-SEAMLDR information it returns further tells whether the TDX
module is loaded.
The TDX module is initialized in multiple steps:
1) Global initialization;
2) Logical-CPU scope initialization;
3) Enumerate the TDX module capabilities;
4) Configure the TDX module about usable memory ranges and
global KeyID information;
5) Package-scope configuration for the global KeyID;
6) Initialize TDX metadata for usable memory ranges based on 4).
Step 2) requires calling some SEAMCALL on all "BIOS-enabled" (in the MADT
table) logical cpus, otherwise step 4) will fail. Step 5) requires
calling SEAMCALL on at least one cpu in each package.
The TDX module can also be shut down at any time during the module's
lifetime, by calling SEAMCALL on all "BIOS-enabled" logical cpus.
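The six steps above can be sketched as a plain-C flow. Note the function
names here are illustrative stand-ins for the SEAMCALL-based steps, not
actual kernel interfaces:

```c
/*
 * Illustrative sketch of the TDX module initialization sequence
 * described above.  Each step is a stub standing in for the real
 * SEAMCALL wrapper; an error at any step aborts initialization.
 */
static int global_init(void)            { return 0; } /* 1) global init */
static int lp_init_all_cpus(void)       { return 0; } /* 2) per-logical-cpu init */
static int get_sysinfo_and_cmrs(void)   { return 0; } /* 3) enumerate capabilities */
static int config_tdmrs_and_keyid(void) { return 0; } /* 4) configure memory/KeyID */
static int config_keyid_per_pkg(void)   { return 0; } /* 5) per-package KeyID config */
static int init_tdmrs(void)             { return 0; } /* 6) initialize metadata */

static int init_tdx_module_sketch(void)
{
	int ret;

	ret = global_init();
	if (ret)
		return ret;
	ret = lp_init_all_cpus();
	if (ret)
		return ret;
	ret = get_sysinfo_and_cmrs();
	if (ret)
		return ret;
	ret = config_tdmrs_and_keyid();
	if (ret)
		return ret;
	ret = config_keyid_per_pkg();
	if (ret)
		return ret;
	return init_tdmrs();
}
```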
== Design Considerations ==
1. Lazy TDX module initialization on-demand by caller
None of the steps in the TDX module initialization process must be done
during kernel boot. This series doesn't initialize TDX at boot time, but
instead provides two functions to allow the caller to detect and
initialize TDX on demand:

        if (tdx_detect())
                goto no_tdx;
        if (tdx_init())
                goto no_tdx;
This approach has the following advantages:
1) Initializing the TDX module requires reserving ~1/256th of system RAM
as metadata. Enabling TDX on demand means this memory is only consumed
when TDX is truly needed (i.e. when KVM wants to create TD guests).
2) Both detecting and initializing the TDX module require calling
SEAMCALL. However, SEAMCALL requires the CPU to already be in VMX
operation (VMXON has been done). So far, KVM is the only user of TDX,
and it already handles VMXON/VMXOFF. Therefore, letting KVM initialize
TDX on-demand avoids handling VMXON/VMXOFF (which is not trivial) in the
core kernel. Also, in the long term, a reference-based VMXON/VMXOFF
approach will likely be needed since more kernel components will need to
handle VMXON/VMXOFF.
3) It is more flexible for supporting "TDX module runtime update" (not in
this series). After updating to a new module at runtime, the kernel needs
to go through the initialization process again. For the new module, it's
possible the metadata allocated for the old module cannot be reused and
needs to be re-allocated.
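The reference-based VMXON/VMXOFF approach mentioned in 2) might look
roughly like the sketch below. This is a userspace model with stubbed
VMXON/VMXOFF, no locking, and hypothetical names (`vmxon_get()`,
`vmxon_put()`), not code from this series:

```c
/*
 * Sketch of reference-counted VMXON/VMXOFF: the first user enables
 * VMX operation, the last user disables it.  A real implementation
 * would be per-cpu and hold a lock around the refcount.
 */
static int vmxon_refcount;

static void hw_vmxon(void)  { /* VMXON would be executed here */ }
static void hw_vmxoff(void) { /* VMXOFF would be executed here */ }

static void vmxon_get(void)
{
	if (vmxon_refcount++ == 0)
		hw_vmxon();	/* first user enters VMX operation */
}

static void vmxon_put(void)
{
	if (--vmxon_refcount == 0)
		hw_vmxoff();	/* last user leaves VMX operation */
}
```

With this, both KVM and a future TDX user could safely share VMX
operation instead of each assuming exclusive ownership.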
2. Kernel policy on TDX memory
The host kernel is responsible for choosing which memory regions can be
used as TDX memory, and for configuring those memory regions to the TDX
module via an array of "TD Memory Regions" (TDMRs), a data structure
defined by the TDX architecture.
The first generation of TDX essentially guarantees that all system RAM
memory regions (excluding the memory below 1MB) can be used as TDX
memory. To avoid having to modify the page allocator to distinguish TDX
and non-TDX allocation, this series chooses to use all system RAM as TDX
memory.
The e820 table is used to find all system RAM entries. Following
e820__memblock_setup(), both E820_TYPE_RAM and E820_TYPE_RESERVED_KERN
types are treated as TDX memory, and contiguous ranges in the same NUMA
node are merged together (similar to memblock_add()) before trimming the
non-page-aligned parts.
X86 legacy PMEMs (E820_TYPE_PRAM) are also unconditionally treated as TDX
memory, as underneath they are RAM and can potentially be used as TD
guest memory.
Memblock is not used to find all RAM regions because: 1) it is gone after
the kernel boots; 2) it doesn't include legacy PMEMs.
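The trimming step mentioned above can be modeled in userspace. This
sketch only mirrors the page-size rounding rule, not the actual kernel
helpers:

```c
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE 4096ULL

/*
 * Round the start of a merged region up, and its end down, to page
 * boundaries.  Return false when nothing page-sized remains (the
 * region is dropped), otherwise update the range in place.
 */
static bool trim_range(uint64_t *start, uint64_t *end)
{
	uint64_t s = (*start + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
	uint64_t e = *end & ~(PAGE_SIZE - 1);

	if (s >= e)
		return false;	/* sub-page sliver, trimmed away entirely */

	*start = s;
	*end = e;
	return true;
}
```

For example, [0x1234, 0x5678) trims to [0x2000, 0x5000), while a range
that doesn't span a full page, such as [0x1100, 0x1200), is dropped.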
3. Memory hotplug
The first generation of TDX architecturally doesn't support memory
hotplug. And the first generation of TDX-capable platforms don't support
physical memory hotplug. Since it physically cannot happen, this series
doesn't add any check in the ACPI memory hotplug code path to disable it.
A special case of memory hotplug is adding NVDIMM as system RAM using the
kmem driver. However, the first generation of TDX-capable platforms
cannot enable TDX and NVDIMM simultaneously, so in practice this cannot
happen either.
Another case is that an admin can use the 'memmap' kernel command line to
create legacy PMEMs and use them as TD guest memory, or theoretically,
can use the kmem driver to add them as system RAM. To avoid having to
change the memory hotplug code to prevent this from happening, this
series always includes legacy PMEMs when constructing TDMRs, so they are
also TDX memory.
4. CPU hotplug
The first generation of TDX architecturally doesn't support ACPI CPU
hotplug. All logical cpus are enabled by the BIOS in the MADT table.
Also, the first generation of TDX-capable platforms don't support ACPI
CPU hotplug either. Since this physically cannot happen, this series
doesn't add any check in the ACPI CPU hotplug code path to disable it.
Also, only TDX module initialization requires all BIOS-enabled cpus to be
online. After the initialization, any logical cpu can be brought down and
later brought up again. Therefore this series doesn't change logical CPU
hotplug either.
5. TDX interaction with kexec()
If TDX is ever enabled and/or used to run any TD guests, the cachelines
of TDX private memory used by the TDX module, including PAMTs, need to be
flushed before transitioning to the new kernel, otherwise they may
silently corrupt the new kernel. Similar to SME, this series flushes the
cache in stop_this_cpu().
The TDX module can be initialized only once during its lifetime. The
first generation of TDX doesn't have an interface to reset the TDX module
to an uninitialized state so that it can be initialized again.
This implies:
- If the old kernel fails to initialize TDX, the new kernel cannot
use TDX either, unless the new kernel fixes the bug which led to the
initialization failure in the old kernel and can resume from where
the old kernel stopped. This requires certain coordination between
the two kernels.
- If the old kernel has initialized TDX successfully, the new kernel
may be able to use TDX if the two kernels have exactly the same
configuration of the TDX module. It further requires the new kernel
to reserve the TDX metadata pages (allocated by the old kernel) in
its page allocator. It also requires coordination between the two
kernels. Furthermore, if kexec() is done while there are active TD
guests running, the new kernel cannot use TDX, because it's extremely
hard for the old kernel to pass all TDX private pages to the new
kernel.
Given that, this series doesn't support TDX after kexec() (unless the
old kernel didn't attempt to initialize TDX at all).
And this series doesn't shut down the TDX module but leaves it open
across kexec(). This is because shutting down the TDX module requires
the CPU to be in VMX operation, and there's no guarantee of this during
kexec(). Leaving the TDX module open is not ideal, but it is acceptable
since the new kernel won't be able to use TDX anyway (and therefore the
TDX module won't run at all).
[1] https://lore.kernel.org/lkml/772b20e270b3451aea9714260f2c40ddcc4afe80.1646422845.git.isaku.yamahata@intel.com/T/
[2] https://github.com/intel/tdx/tree/guest-upstream
Kai Huang (21):
x86/virt/tdx: Detect SEAM
x86/virt/tdx: Detect TDX private KeyIDs
x86/virt/tdx: Implement the SEAMCALL base function
x86/virt/tdx: Add skeleton for detecting and initializing TDX on
demand
x86/virt/tdx: Detect P-SEAMLDR and TDX module
x86/virt/tdx: Shut down TDX module in case of error
x86/virt/tdx: Do TDX module global initialization
x86/virt/tdx: Do logical-cpu scope TDX module initialization
x86/virt/tdx: Get information about TDX module and convertible memory
x86/virt/tdx: Add placeholder to cover all system RAM as TDX memory
x86/virt/tdx: Choose to use all system RAM as TDX memory
x86/virt/tdx: Create TDMRs to cover all system RAM
x86/virt/tdx: Allocate and set up PAMTs for TDMRs
x86/virt/tdx: Set up reserved areas for all TDMRs
x86/virt/tdx: Reserve TDX module global KeyID
x86/virt/tdx: Configure TDX module with TDMRs and global KeyID
x86/virt/tdx: Configure global KeyID on all packages
x86/virt/tdx: Initialize all TDMRs
x86: Flush cache of TDX private memory during kexec()
x86/virt/tdx: Add kernel command line to opt-in TDX host support
Documentation/x86: Add documentation for TDX host support
.../admin-guide/kernel-parameters.txt | 6 +
Documentation/x86/index.rst | 1 +
Documentation/x86/tdx_host.rst | 300 +++
arch/x86/Kconfig | 14 +
arch/x86/Makefile | 2 +
arch/x86/include/asm/tdx.h | 15 +
arch/x86/kernel/cpu/intel.c | 3 +
arch/x86/kernel/process.c | 15 +-
arch/x86/virt/Makefile | 2 +
arch/x86/virt/vmx/Makefile | 2 +
arch/x86/virt/vmx/seamcall.S | 52 +
arch/x86/virt/vmx/tdx.c | 1711 +++++++++++++++++
arch/x86/virt/vmx/tdx.h | 137 ++
13 files changed, 2259 insertions(+), 1 deletion(-)
create mode 100644 Documentation/x86/tdx_host.rst
create mode 100644 arch/x86/virt/Makefile
create mode 100644 arch/x86/virt/vmx/Makefile
create mode 100644 arch/x86/virt/vmx/seamcall.S
create mode 100644 arch/x86/virt/vmx/tdx.c
create mode 100644 arch/x86/virt/vmx/tdx.h
--
2.35.1
TDX provides increased levels of memory confidentiality and integrity.
This requires special hardware support for features like memory
encryption and storage of memory integrity checksums. Not all memory
satisfies these requirements.
As a result, TDX introduced the concept of a "Convertible Memory Region"
(CMR). During boot, the firmware builds a list of all of the memory
ranges which can provide the TDX security guarantees. The list of these
ranges, along with TDX module information, is available to the kernel by
querying the TDX module via the TDH.SYS.INFO SEAMCALL.
The host kernel can choose whether or not to use all convertible memory
regions as TDX memory. Before the TDX module is ready to create any TD
guests, all TDX memory regions that the host kernel intends to use must
be configured to the TDX module, using specific data structures defined
by the TDX architecture. Constructing those structures requires
information about both the TDX module and the Convertible Memory Regions.
Call TDH.SYS.INFO to get this information in preparation for constructing
those structures.
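The output sanity check this patch performs can be modeled in userspace
as follows. `check_sysinfo_output()` is a hypothetical name mirroring
the patch's logic, not a kernel function:

```c
#include <stdint.h>

#define TDSYSINFO_STRUCT_SIZE	1024
#define MAX_CMRS		32

/*
 * TDH.SYS.INFO reports the number of bytes written to the
 * TDSYSINFO_STRUCT buffer (in RDX) and the number of CMR entries
 * written (in R9).  Both must be non-zero and must fit within the
 * buffers the kernel passed in; anything else indicates a TDX
 * module (firmware) problem.
 */
static int check_sysinfo_output(uint64_t tdsysinfo_sz, uint64_t cmr_num)
{
	if (!tdsysinfo_sz || tdsysinfo_sz > TDSYSINFO_STRUCT_SIZE)
		return -1;
	if (!cmr_num || cmr_num > MAX_CMRS)
		return -1;
	return 0;
}
```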
Signed-off-by: Kai Huang <[email protected]>
---
arch/x86/virt/vmx/tdx.c | 127 ++++++++++++++++++++++++++++++++++++++++
arch/x86/virt/vmx/tdx.h | 61 +++++++++++++++++++
2 files changed, 188 insertions(+)
diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c
index 4b0c285d844b..eb585bc5edda 100644
--- a/arch/x86/virt/vmx/tdx.c
+++ b/arch/x86/virt/vmx/tdx.c
@@ -80,6 +80,11 @@ static DEFINE_MUTEX(tdx_module_lock);
static struct p_seamldr_info p_seamldr_info;
+/* Base address of CMR array needs to be 512-byte aligned. */
+static struct cmr_info tdx_cmr_array[MAX_CMRS] __aligned(CMR_INFO_ARRAY_ALIGNMENT);
+static int tdx_cmr_num;
+static struct tdsysinfo_struct tdx_sysinfo;
+
static bool __seamrr_enabled(void)
{
return (seamrr_mask & SEAMRR_ENABLED_BITS) == SEAMRR_ENABLED_BITS;
@@ -468,6 +473,123 @@ static int tdx_module_init_cpus(void)
return seamcall_on_each_cpu(&sc);
}
+static inline bool cmr_valid(struct cmr_info *cmr)
+{
+ return !!cmr->size;
+}
+
+static void print_cmrs(struct cmr_info *cmr_array, int cmr_num,
+ const char *name)
+{
+ int i;
+
+ for (i = 0; i < cmr_num; i++) {
+ struct cmr_info *cmr = &cmr_array[i];
+
+ pr_info("%s : [0x%llx, 0x%llx)\n", name,
+ cmr->base, cmr->base + cmr->size);
+ }
+}
+
+static int sanitize_cmrs(struct cmr_info *cmr_array, int cmr_num)
+{
+ int i, j;
+
+ /*
+ * Intel TDX module spec, 20.7.3 CMR_INFO:
+ *
+ * TDH.SYS.INFO leaf function returns a MAX_CMRS (32) entry
+ * array of CMR_INFO entries. The CMRs are sorted from the
+ * lowest base address to the highest base address, and they
+ * are non-overlapping.
+ *
+ * This implies that BIOS may generate invalid empty entries
+ * if total CMRs are less than 32. Skip them manually.
+ */
+ for (i = 0; i < cmr_num; i++) {
+ struct cmr_info *cmr = &cmr_array[i];
+ struct cmr_info *prev_cmr = NULL;
+
+ /* Skip further invalid CMRs */
+ if (!cmr_valid(cmr))
+ break;
+
+ if (i > 0)
+ prev_cmr = &cmr_array[i - 1];
+
+ /*
+ * It is a TDX firmware bug if CMRs are not
+ * in address ascending order.
+ */
+ if (prev_cmr && ((prev_cmr->base + prev_cmr->size) >
+ cmr->base)) {
+ pr_err("Firmware bug: CMRs not in address ascending order.\n");
+ return -EFAULT;
+ }
+ }
+
+ /*
+ * Also a sane BIOS should never generate invalid CMR(s) between
+ * two valid CMRs. Sanity check this and simply return error in
+ * this case.
+ */
+ for (j = i; j < cmr_num; j++)
+ if (cmr_valid(&cmr_array[j])) {
+ pr_err("Firmware bug: invalid CMR(s) among valid CMRs.\n");
+ return -EFAULT;
+ }
+
+ /*
+ * Trim all tail invalid empty CMRs. BIOS should generate at
+ * least one valid CMR, otherwise it's a TDX firmware bug.
+ */
+ tdx_cmr_num = i;
+ if (!tdx_cmr_num) {
+ pr_err("Firmware bug: No valid CMR.\n");
+ return -EFAULT;
+ }
+
+ /* Print kernel sanitized CMRs */
+ print_cmrs(tdx_cmr_array, tdx_cmr_num, "Kernel-sanitized-CMR");
+
+ return 0;
+}
+
+static int tdx_get_sysinfo(void)
+{
+ struct tdx_module_output out;
+ u64 tdsysinfo_sz, cmr_num;
+ int ret;
+
+ BUILD_BUG_ON(sizeof(struct tdsysinfo_struct) != TDSYSINFO_STRUCT_SIZE);
+
+ ret = seamcall(TDH_SYS_INFO, __pa(&tdx_sysinfo), TDSYSINFO_STRUCT_SIZE,
+ __pa(tdx_cmr_array), MAX_CMRS, NULL, &out);
+ if (ret)
+ return ret;
+
+ /*
+ * If TDH.SYS.INFO succeeds, RDX contains the actual bytes
+ * written to @tdx_sysinfo and R9 contains the actual entries
+ * written to @tdx_cmr_array. Sanity check them.
+ */
+ tdsysinfo_sz = out.rdx;
+ cmr_num = out.r9;
+ if (WARN_ON_ONCE((tdsysinfo_sz > sizeof(tdx_sysinfo)) || !tdsysinfo_sz ||
+ (cmr_num > MAX_CMRS) || !cmr_num))
+ return -EFAULT;
+
+ pr_info("TDX module: vendor_id 0x%x, major_version %u, minor_version %u, build_date %u, build_num %u\n",
+ tdx_sysinfo.vendor_id, tdx_sysinfo.major_version,
+ tdx_sysinfo.minor_version, tdx_sysinfo.build_date,
+ tdx_sysinfo.build_num);
+
+ /* Print BIOS provided CMRs */
+ print_cmrs(tdx_cmr_array, cmr_num, "BIOS-CMR");
+
+ return sanitize_cmrs(tdx_cmr_array, cmr_num);
+}
+
static int init_tdx_module(void)
{
int ret;
@@ -482,6 +604,11 @@ static int init_tdx_module(void)
if (ret)
goto out;
+ /* Get TDX module information and CMRs */
+ ret = tdx_get_sysinfo();
+ if (ret)
+ goto out;
+
/*
* Return -EFAULT until all steps of TDX module
* initialization are done.
diff --git a/arch/x86/virt/vmx/tdx.h b/arch/x86/virt/vmx/tdx.h
index b8cfdd6e12f3..2f21c45df6ac 100644
--- a/arch/x86/virt/vmx/tdx.h
+++ b/arch/x86/virt/vmx/tdx.h
@@ -29,6 +29,66 @@ struct p_seamldr_info {
u8 reserved2[88];
} __packed __aligned(P_SEAMLDR_INFO_ALIGNMENT);
+struct cmr_info {
+ u64 base;
+ u64 size;
+} __packed;
+
+#define MAX_CMRS 32
+#define CMR_INFO_ARRAY_ALIGNMENT 512
+
+struct cpuid_config {
+ u32 leaf;
+ u32 sub_leaf;
+ u32 eax;
+ u32 ebx;
+ u32 ecx;
+ u32 edx;
+} __packed;
+
+#define TDSYSINFO_STRUCT_SIZE 1024
+#define TDSYSINFO_STRUCT_ALIGNMENT 1024
+
+struct tdsysinfo_struct {
+ /* TDX-SEAM Module Info */
+ u32 attributes;
+ u32 vendor_id;
+ u32 build_date;
+ u16 build_num;
+ u16 minor_version;
+ u16 major_version;
+ u8 reserved0[14];
+ /* Memory Info */
+ u16 max_tdmrs;
+ u16 max_reserved_per_tdmr;
+ u16 pamt_entry_size;
+ u8 reserved1[10];
+ /* Control Struct Info */
+ u16 tdcs_base_size;
+ u8 reserved2[2];
+ u16 tdvps_base_size;
+ u8 tdvps_xfam_dependent_size;
+ u8 reserved3[9];
+ /* TD Capabilities */
+ u64 attributes_fixed0;
+ u64 attributes_fixed1;
+ u64 xfam_fixed0;
+ u64 xfam_fixed1;
+ u8 reserved4[32];
+ u32 num_cpuid_config;
+ /*
+ * The actual number of CPUID_CONFIG depends on above
+ * 'num_cpuid_config'. The size of 'struct tdsysinfo_struct'
+ * is 1024B defined by TDX architecture. Use a union with
+ * specific padding to make 'sizeof(struct tdsysinfo_struct)'
+ * equal to 1024.
+ */
+ union {
+ struct cpuid_config cpuid_configs[0];
+ u8 reserved5[892];
+ };
+} __packed __aligned(TDSYSINFO_STRUCT_ALIGNMENT);
+
/*
* P-SEAMLDR SEAMCALL leaf function
*/
@@ -38,6 +98,7 @@ struct p_seamldr_info {
/*
* TDX module SEAMCALL leaf functions
*/
+#define TDH_SYS_INFO 32
#define TDH_SYS_INIT 33
#define TDH_SYS_LP_INIT 35
#define TDH_SYS_LP_SHUTDOWN 44
--
2.35.1
After the TDX usable memory regions are constructed in an array of TDMRs
and the global KeyID is reserved, configure them to the TDX module. The
configuration is done via TDH.SYS.CONFIG, which is one call and can be
done on any logical cpu.
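The allocation-size trick used for the TDMR physical-address array can
be illustrated with a small userspace model. The macro values mirror
the patch; `tdmr_pa_array_size()` is a hypothetical helper:

```c
#include <stdint.h>

#define TDMR_INFO_PA_ARRAY_ALIGNMENT	512

#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((uint64_t)(a) - 1))

/*
 * The array of TDMR_INFO physical addresses must be 512-byte
 * aligned.  Rounding the allocation size up to 512 bytes works
 * because kmalloc() buffers of a power-of-2 size >= 512 are
 * naturally 512-byte aligned.
 */
static uint64_t tdmr_pa_array_size(int tdmr_num)
{
	return ALIGN_UP((uint64_t)tdmr_num * sizeof(uint64_t),
			TDMR_INFO_PA_ARRAY_ALIGNMENT);
}
```

For instance, 3 TDMRs need 24 bytes of pointers but allocate 512, and
65 TDMRs (520 bytes) round up to 1024.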
Signed-off-by: Kai Huang <[email protected]>
---
arch/x86/virt/vmx/tdx.c | 42 +++++++++++++++++++++++++++++++++++++++++
arch/x86/virt/vmx/tdx.h | 2 ++
2 files changed, 44 insertions(+)
diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c
index f70eee7df0c1..e03dc3e420db 100644
--- a/arch/x86/virt/vmx/tdx.c
+++ b/arch/x86/virt/vmx/tdx.c
@@ -1280,6 +1280,42 @@ static int construct_tdmrs(struct tdmr_info **tdmr_array, int *tdmr_num)
return ret;
}
+static int config_tdx_module(struct tdmr_info **tdmr_array, int tdmr_num,
+ u64 global_keyid)
+{
+ u64 *tdmr_pa_array;
+ int i, array_sz;
+ int ret;
+
+ /*
+ * TDMR_INFO entries are configured to the TDX module via an
+ * array of the physical address of each TDMR_INFO. TDX requires
+ * the array itself to be 512-byte aligned. Round up the array
+ * size to 512 bytes so the buffer allocated by kzalloc() meets
+ * the alignment requirement.
+ */
+ array_sz = ALIGN(tdmr_num * sizeof(u64), TDMR_INFO_PA_ARRAY_ALIGNMENT);
+ tdmr_pa_array = kzalloc(array_sz, GFP_KERNEL);
+ if (!tdmr_pa_array)
+ return -ENOMEM;
+
+ for (i = 0; i < tdmr_num; i++)
+ tdmr_pa_array[i] = __pa(tdmr_array[i]);
+
+ /*
+ * TDH.SYS.CONFIG fails when TDH.SYS.LP.INIT is not done on all
+ * BIOS-enabled cpus. tdx_init() only disables CPU hotplug but
+ * doesn't do early check whether all BIOS-enabled cpus are
+ * online, so TDH.SYS.CONFIG can fail here.
+ */
+ ret = seamcall(TDH_SYS_CONFIG, __pa(tdmr_pa_array), tdmr_num,
+ global_keyid, 0, NULL, NULL);
+ /* Free the array as it is not required any more. */
+ kfree(tdmr_pa_array);
+
+ return ret;
+}
+
static int init_tdx_module(void)
{
struct tdmr_info **tdmr_array;
@@ -1325,11 +1361,17 @@ static int init_tdx_module(void)
*/
tdx_global_keyid = tdx_keyid_start;
+ /* Config the TDX module with TDMRs and global KeyID */
+ ret = config_tdx_module(tdmr_array, tdmr_num, tdx_global_keyid);
+ if (ret)
+ goto out_free_pamts;
+
/*
* Return -EFAULT until all steps of TDX module
* initialization are done.
*/
ret = -EFAULT;
+out_free_pamts:
/*
* Free PAMTs allocated in construct_tdmrs() when TDX module
* initialization fails.
diff --git a/arch/x86/virt/vmx/tdx.h b/arch/x86/virt/vmx/tdx.h
index 05bf9fe6bd00..d8e2800397af 100644
--- a/arch/x86/virt/vmx/tdx.h
+++ b/arch/x86/virt/vmx/tdx.h
@@ -95,6 +95,7 @@ struct tdmr_reserved_area {
} __packed;
#define TDMR_INFO_ALIGNMENT 512
+#define TDMR_INFO_PA_ARRAY_ALIGNMENT 512
struct tdmr_info {
u64 base;
@@ -125,6 +126,7 @@ struct tdmr_info {
#define TDH_SYS_INIT 33
#define TDH_SYS_LP_INIT 35
#define TDH_SYS_LP_SHUTDOWN 44
+#define TDH_SYS_CONFIG 45
struct tdx_module_output;
u64 __seamcall(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9,
--
2.35.1
As one step of initializing the TDX module, the memory regions that the
TDX module can use must be configured to it via an array of 'TD Memory
Regions' (TDMR). The kernel is responsible for choosing which memory
regions to be used as TDX memory and building the array of TDMRs to
cover those memory regions.
The first generation of TDX-capable platforms basically guarantees that
all system RAM regions present during machine boot are Convertible Memory
Regions (excluding the memory below 1MB) and can be used by TDX. The
memory pages allocated to TD guests can be any pages managed by the page
allocator. To avoid having to modify the page allocator to distinguish
TDX and non-TDX memory allocation, adopt a simple policy to use all
system RAM regions as TDX memory. The low 1MB pages are excluded from
TDX memory since they are not in CMRs on some platforms (those pages are
reserved at boot time and won't be managed by the page allocator anyway).
This policy could be revised later if future TDX generations break
the guarantee or when the size of the metadata (~1/256th of the size of
the TDX usable memory) becomes a concern. At that time a CMR-aware
page allocator may be necessary.
Also, on the first generation of TDX-capable machines, the system RAM
ranges discovered during boot time are all the memory regions that the
kernel can use during its runtime. This is because the first generation
of TDX architecturally doesn't support ACPI memory hotplug (CMRs are
generated during machine boot and are static during the machine's
runtime). Also, the first generation of TDX-capable platforms doesn't
support TDX and ACPI memory hotplug at the same time on a single machine.
Another case of memory hotplug is that a user may use NVDIMM as system
RAM via the kmem driver. But the first generation of TDX-capable
machines doesn't support TDX and NVDIMM simultaneously, so in practice
this cannot happen. One special case is that a user may use the 'memmap'
kernel command line to reserve part of system RAM as x86 legacy PMEMs,
and can theoretically add them as system RAM via the kmem driver. This
can be resolved by always treating legacy PMEMs as TDX memory.
Implement a helper to loop over all RAM entries in the e820 table to find
all system RAM ranges, as a preparation to convert all of them to TDX
memory. Use 'e820_table', rather than 'e820_table_firmware', to honor
the 'mem' and 'memmap' command lines. Following e820__memblock_setup(),
both E820_TYPE_RAM and E820_TYPE_RESERVED_KERN types are treated as TDX
memory, and contiguous ranges in the same NUMA node are merged together.
One difference is, as mentioned above, x86 legacy PMEMs (E820_TYPE_PRAM)
are also always treated as TDX memory. They are RAM underneath, and they
could be used as TD guest memory. Always including them as TDX memory
also avoids having to modify the memory hotplug code to handle adding
them as system RAM via the kmem driver.
To begin with, sanity check that all memory regions found in e820 are
fully covered by CMRs and can be used as TDX memory.
Signed-off-by: Kai Huang <[email protected]>
---
arch/x86/Kconfig | 1 +
arch/x86/virt/vmx/tdx.c | 228 +++++++++++++++++++++++++++++++++++++++-
2 files changed, 228 insertions(+), 1 deletion(-)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index f4c5481cca46..2d3f983e3582 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1961,6 +1961,7 @@ config INTEL_TDX_HOST
default n
depends on CPU_SUP_INTEL
depends on X86_64
+ select NUMA_KEEP_MEMINFO if NUMA
help
Intel Trust Domain Extensions (TDX) protects guest VMs from malicious
host and certain physical attacks. This option enables necessary TDX
diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c
index 1571bf192dde..e5206599f558 100644
--- a/arch/x86/virt/vmx/tdx.c
+++ b/arch/x86/virt/vmx/tdx.c
@@ -14,11 +14,13 @@
#include <linux/smp.h>
#include <linux/atomic.h>
#include <linux/slab.h>
+#include <linux/math.h>
#include <asm/msr-index.h>
#include <asm/msr.h>
#include <asm/cpufeature.h>
#include <asm/cpufeatures.h>
#include <asm/virtext.h>
+#include <asm/e820/api.h>
#include <asm/tdx.h>
#include "tdx.h"
@@ -591,6 +593,222 @@ static int tdx_get_sysinfo(void)
return sanitize_cmrs(tdx_cmr_array, cmr_num);
}
+/* Check whether one e820 entry is RAM and could be used as TDX memory */
+static bool e820_entry_is_ram(struct e820_entry *entry)
+{
+ /*
+ * Besides E820_TYPE_RAM, E820_TYPE_RESERVED_KERN type entries
+ * are also treated as TDX memory as they are also added to
+ * memblock.memory in e820__memblock_setup().
+ *
+ * E820_TYPE_SOFT_RESERVED type entries are excluded as they are
+ * marked as reserved and are not later freed to page allocator
+ * (only part of kernel image, initrd, etc are freed to page
+ * allocator).
+ *
+ * Also unconditionally treat x86 legacy PMEMs (E820_TYPE_PRAM)
+ * as TDX memory since they are RAM underneath, and could be used
+ * as TD guest memory.
+ */
+ return (entry->type == E820_TYPE_RAM) ||
+ (entry->type == E820_TYPE_RESERVED_KERN) ||
+ (entry->type == E820_TYPE_PRAM);
+}
+
+/*
+ * The low memory below 1MB is not covered by CMRs on some TDX platforms.
+ * In practice, this range cannot be used for guest memory because it is
+ * not managed by the page allocator due to boot-time reservation. Just
+ * skip the low 1MB so this range won't be treated as TDX memory.
+ *
+ * Return true if the e820 entry is completely skipped, in which case
+ * caller should ignore this entry. Otherwise the actual memory range
+ * after skipping the low 1MB is returned via @start and @end.
+ */
+static bool e820_entry_skip_lowmem(struct e820_entry *entry, u64 *start,
+ u64 *end)
+{
+ u64 _start = entry->addr;
+ u64 _end = entry->addr + entry->size;
+
+ if (_start < SZ_1M)
+ _start = SZ_1M;
+
+ *start = _start;
+ *end = _end;
+
+ return _start >= _end;
+}
+
+/*
+ * Trim away non-page-aligned memory at the beginning and the end for a
+ * given region. Return true when there are still pages remaining after
+ * trimming, and the trimmed region is returned via @start and @end.
+ */
+static bool e820_entry_trim(u64 *start, u64 *end)
+{
+ u64 s, e;
+
+ s = round_up(*start, PAGE_SIZE);
+ e = round_down(*end, PAGE_SIZE);
+
+ if (s >= e)
+ return false;
+
+ *start = s;
+ *end = e;
+
+ return true;
+}
+
+/*
+ * Get the next memory region (excluding low 1MB) in e820. @idx points
+ * to the entry to start to walk with. Multiple memory regions in the
+ * same NUMA node that are contiguous are merged together (following
+ * e820__memblock_setup()). The merged range is returned via @start and
+ * @end. After return, @idx points to the next entry of the last RAM
+ * entry that has been walked, or table->nr_entries (indicating all
+ * entries in the e820 table have been walked).
+ */
+static void e820_next_mem(struct e820_table *table, int *idx, u64 *start,
+ u64 *end)
+{
+ u64 rs, re;
+ int rnid, i;
+
+again:
+ rs = re = 0;
+ for (i = *idx; i < table->nr_entries; i++) {
+ struct e820_entry *entry = &table->entries[i];
+ u64 s, e;
+ int nid;
+
+ if (!e820_entry_is_ram(entry))
+ continue;
+
+ if (e820_entry_skip_lowmem(entry, &s, &e))
+ continue;
+
+ /*
+ * Found the first RAM entry. Record it and keep
+ * looping to find other RAM entries that can be
+ * merged.
+ */
+ if (!rs) {
+ rs = s;
+ re = e;
+ rnid = phys_to_target_node(rs);
+ if (WARN_ON_ONCE(rnid == NUMA_NO_NODE))
+ rnid = 0;
+ continue;
+ }
+
+ /*
+ * Try to merge with previous RAM entry. E820 entries
+ * are not necessarily page aligned. For instance, the
+ * setup_data elements in boot_params are marked as
+ * E820_TYPE_RESERVED_KERN, and they may not be page
+ * aligned. In e820__memblock_setup() all adjancent
+ * memory regions within the same NUMA node are merged to
+ * a single one, and the non-page-aligned parts (at the
+ * beginning and the end) are trimmed. Follow the same
+ * rule here.
+ */
+ nid = phys_to_target_node(s);
+ if (WARN_ON_ONCE(nid == NUMA_NO_NODE))
+ nid = 0;
+ if ((nid == rnid) && (s == re)) {
+ /* Merge with previous range and update the end */
+ re = e;
+ continue;
+ }
+
+ /*
+ * Stop if current entry cannot be merged with previous
+ * one (or more) entries.
+ */
+ break;
+ }
+
+ /*
+ * @i is either the RAM entry that cannot be merged with previous
+ * one (or more) entries, or table->nr_entries.
+ */
+ *idx = i;
+ /*
+ * Trim non-page-aligned parts of [@rs, @re), which is either a
+ * valid memory region, or empty. If there's nothing left after
+ * trimming and there are still entries that have not been
+ * walked, continue to walk.
+ */
+ if (!e820_entry_trim(&rs, &re) && i < table->nr_entries)
+ goto again;
+
+ *start = rs;
+ *end = re;
+}
+
+/*
+ * Helper to loop all e820 RAM entries with low 1MB excluded
+ * in a given e820 table.
+ */
+#define _e820_for_each_mem(_table, _i, _start, _end) \
+ for ((_i) = 0, e820_next_mem((_table), &(_i), &(_start), &(_end)); \
+ (_start) < (_end); \
+ e820_next_mem((_table), &(_i), &(_start), &(_end)))
+
+/*
+ * Helper to loop all e820 RAM entries with low 1MB excluded
+ * in kernel modified 'e820_table' to honor 'mem' and 'memmap' kernel
+ * command lines.
+ */
+#define e820_for_each_mem(_i, _start, _end) \
+ _e820_for_each_mem(e820_table, _i, _start, _end)
+
+/* Check whether the first range is a subrange of the second */
+static bool is_subrange(u64 r1_start, u64 r1_end, u64 r2_start, u64 r2_end)
+{
+ return r1_start >= r2_start && r1_end <= r2_end;
+}
+
+/* Check whether address range is covered by any CMR or not. */
+static bool range_covered_by_cmr(struct cmr_info *cmr_array, int cmr_num,
+ u64 start, u64 end)
+{
+ int i;
+
+ for (i = 0; i < cmr_num; i++) {
+ struct cmr_info *cmr = &cmr_array[i];
+
+ if (is_subrange(start, end, cmr->base, cmr->base + cmr->size))
+ return true;
+ }
+
+ return false;
+}
+
+/* Sanity check whether all e820 RAM entries are fully covered by CMRs. */
+static int e820_check_against_cmrs(void)
+{
+ u64 start, end;
+ int i;
+
+ /*
+ * Loop over e820_table to find all RAM entries and check
+ * whether they are all fully covered by any CMR.
+ */
+ e820_for_each_mem(i, start, end) {
+ if (!range_covered_by_cmr(tdx_cmr_array, tdx_cmr_num,
+ start, end)) {
+ pr_err("[0x%llx, 0x%llx) is not fully convertible memory\n",
+ start, end);
+ return -EFAULT;
+ }
+ }
+
+ return 0;
+}
+
static void free_tdmrs(struct tdmr_info **tdmr_array, int tdmr_num)
{
int i;
@@ -606,8 +824,16 @@ static void free_tdmrs(struct tdmr_info **tdmr_array, int tdmr_num)
static int construct_tdmrs(struct tdmr_info **tdmr_array, int *tdmr_num)
{
+ int ret;
+
+ ret = e820_check_against_cmrs();
+ if (ret)
+ goto err;
+
/* Return -EFAULT until constructing TDMRs is done */
- return -EFAULT;
+ ret = -EFAULT;
+err:
+ return ret;
}
static int init_tdx_module(void)
--
2.35.1
The kernel configures TDX usable memory regions to the TDX module via
an array of "TD Memory Region" (TDMR). Each TDMR entry (TDMR_INFO)
contains the information of the base/size of a memory region, the
base/size of the associated Physical Address Metadata Table (PAMT) and
a list of reserved areas in the region.
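For illustration, the TDMR_INFO layout described above can be sketched as a
C struct. This is a hedged sketch only: the field names and ordering are
assumptions (the real definition lives in the series' tdx.h), but it shows
why the reserved-area array begins at byte offset 64, matching the
TDMR_RSVD_START constant used later in the patch:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct tdmr_reserved_area {
	uint64_t offset;	/* offset of the reserved range within the TDMR */
	uint64_t size;
};

/* Hypothetical sketch of TDMR_INFO; field names and order are assumptions. */
struct tdmr_info {
	uint64_t base;		/* base of the TDMR, 1G aligned */
	uint64_t size;		/* size of the TDMR, in 1G granularity */
	uint64_t pamt_1g_base;
	uint64_t pamt_1g_size;
	uint64_t pamt_2m_base;
	uint64_t pamt_2m_size;
	uint64_t pamt_4k_base;
	uint64_t pamt_4k_size;
	/* variable-length tail; begins at byte offset 64 (TDMR_RSVD_START) */
	struct tdmr_reserved_area reserved_areas[];
};
```

Eight u64 fields put the flexible reserved-area array exactly at offset 64.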
Create a number of TDMRs according to the verified e820 RAM entries.
As the first step, only set up the base/size information for each TDMR.
A TDMR must be 1G aligned and its size must be in 1G granularity.  This
implies that one TDMR could cover multiple e820 RAM entries.  If a RAM
entry spans a 1G boundary and the former part is already covered by the
previous TDMR, just create a new TDMR for the latter part.
TDX only supports a limited number of TDMRs (currently 64).  Abort the
TDMR construction process when the number of TDMRs exceeds this limit.
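The splitting rule above can be modeled in a few lines of userspace C.
This is a sketch only; model_create_tdmrs() is a hypothetical helper, not
the kernel function, and it models just the base/size step:

```c
#include <assert.h>
#include <stdint.h>

#define TDMR_ALIGNMENT	(1ULL << 30)	/* TDMRs are 1G aligned */
#define ALIGN_DOWN_1G(x) ((x) & ~(TDMR_ALIGNMENT - 1))
#define ALIGN_UP_1G(x)	 ALIGN_DOWN_1G((x) + TDMR_ALIGNMENT - 1)

struct range { uint64_t base, size; };

/*
 * Model of the TDMR creation rule: cover @nr RAM ranges with 1G-aligned
 * TDMRs, reusing the previous TDMR when it already covers (part of) a
 * range.  Returns the number of TDMRs created, or -1 if @max is exceeded.
 */
int model_create_tdmrs(const struct range *ram, int nr,
		       struct range *tdmr, int max)
{
	int i, n = 0;

	for (i = 0; i < nr; i++) {
		uint64_t start = ALIGN_DOWN_1G(ram[i].base);
		uint64_t end = ALIGN_UP_1G(ram[i].base + ram[i].size);

		if (n) {
			uint64_t prev_end = tdmr[n - 1].base + tdmr[n - 1].size;

			/* RAM entry already fully covered: nothing to do */
			if (end <= prev_end)
				continue;
			/* Skip the part already covered by the previous TDMR */
			if (start < prev_end)
				start = prev_end;
		}
		if (n >= max)
			return -1;	/* too many TDMRs */
		tdmr[n].base = start;
		tdmr[n].size = end - start;
		n++;
	}
	return n;
}
```

For example, RAM at [1M, 1G) and a small entry just above 1G produce two
1G TDMRs with no overlap.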
Signed-off-by: Kai Huang <[email protected]>
---
arch/x86/virt/vmx/tdx.c | 138 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 138 insertions(+)
diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c
index e5206599f558..1939b64d23e8 100644
--- a/arch/x86/virt/vmx/tdx.c
+++ b/arch/x86/virt/vmx/tdx.c
@@ -54,6 +54,18 @@
((u32)(((_keyid_part) & 0xffffffffull) + 1))
#define TDX_KEYID_NUM(_keyid_part) ((u32)((_keyid_part) >> 32))
+/* TDMR must be 1G aligned */
+#define TDMR_ALIGNMENT BIT_ULL(30)
+#define TDMR_PFN_ALIGNMENT (TDMR_ALIGNMENT >> PAGE_SHIFT)
+
+/* Align up and down the address to TDMR boundary */
+#define TDMR_ALIGN_DOWN(_addr) ALIGN_DOWN((_addr), TDMR_ALIGNMENT)
+#define TDMR_ALIGN_UP(_addr) ALIGN((_addr), TDMR_ALIGNMENT)
+
+/* TDMR's start and end address */
+#define TDMR_START(_tdmr) ((_tdmr)->base)
+#define TDMR_END(_tdmr) ((_tdmr)->base + (_tdmr)->size)
+
/*
* TDX module status during initialization
*/
@@ -809,6 +821,44 @@ static int e820_check_against_cmrs(void)
return 0;
}
+/* The starting offset of reserved areas within TDMR_INFO */
+#define TDMR_RSVD_START 64
+
+static struct tdmr_info *__alloc_tdmr(void)
+{
+ int tdmr_sz;
+
+ /*
+ * TDMR_INFO's actual size depends on maximum number of reserved
+ * areas that one TDMR supports.
+ */
+ tdmr_sz = TDMR_RSVD_START + tdx_sysinfo.max_reserved_per_tdmr *
+ sizeof(struct tdmr_reserved_area);
+
+ /*
+	 * TDX requires TDMR_INFO to be 512-byte aligned.  Always align up
+	 * the TDMR_INFO size to 512 bytes so the memory allocated via kzalloc()
+ * can meet the alignment requirement.
+ */
+ tdmr_sz = ALIGN(tdmr_sz, TDMR_INFO_ALIGNMENT);
+
+ return kzalloc(tdmr_sz, GFP_KERNEL);
+}
+
+/* Create a new TDMR at given index in the TDMR array */
+static struct tdmr_info *alloc_tdmr(struct tdmr_info **tdmr_array, int idx)
+{
+ struct tdmr_info *tdmr;
+
+ if (WARN_ON_ONCE(tdmr_array[idx]))
+ return NULL;
+
+ tdmr = __alloc_tdmr();
+ tdmr_array[idx] = tdmr;
+
+ return tdmr;
+}
+
static void free_tdmrs(struct tdmr_info **tdmr_array, int tdmr_num)
{
int i;
@@ -822,6 +872,89 @@ static void free_tdmrs(struct tdmr_info **tdmr_array, int tdmr_num)
}
}
+/*
+ * Create TDMRs to cover all RAM entries in e820_table. The created
+ * TDMRs are saved to @tdmr_array and @tdmr_num is set to the actual
+ * number of TDMRs. All entries in @tdmr_array must be initially NULL.
+ */
+static int create_tdmrs(struct tdmr_info **tdmr_array, int *tdmr_num)
+{
+ struct tdmr_info *tdmr;
+ u64 start, end;
+ int i, tdmr_idx;
+ int ret = 0;
+
+ tdmr_idx = 0;
+ tdmr = alloc_tdmr(tdmr_array, 0);
+ if (!tdmr)
+ return -ENOMEM;
+ /*
+ * Loop over all RAM entries in e820 and create TDMRs to cover
+ * them. To keep it simple, always try to use one TDMR to cover
+ * one RAM entry.
+ */
+ e820_for_each_mem(e820_table, i, start, end) {
+ start = TDMR_ALIGN_DOWN(start);
+ end = TDMR_ALIGN_UP(end);
+
+ /*
+ * If the current TDMR's size hasn't been initialized, it
+	 * is a newly allocated TDMR to cover the new RAM entry.
+ * Otherwise the current TDMR already covers the previous
+ * RAM entry. In the latter case, check whether the
+ * current RAM entry has been fully or partially covered
+ * by the current TDMR, since TDMR is 1G aligned.
+ */
+ if (tdmr->size) {
+ /*
+ * Loop to next RAM entry if the current entry
+ * is already fully covered by the current TDMR.
+ */
+ if (end <= TDMR_END(tdmr))
+ continue;
+
+ /*
+ * If part of current RAM entry has already been
+ * covered by current TDMR, skip the already
+ * covered part.
+ */
+ if (start < TDMR_END(tdmr))
+ start = TDMR_END(tdmr);
+
+ /*
+ * Create a new TDMR to cover the current RAM
+ * entry, or the remaining part of it.
+ */
+ tdmr_idx++;
+ if (tdmr_idx >= tdx_sysinfo.max_tdmrs) {
+ ret = -E2BIG;
+ goto err;
+ }
+ tdmr = alloc_tdmr(tdmr_array, tdmr_idx);
+ if (!tdmr) {
+ ret = -ENOMEM;
+ goto err;
+ }
+ }
+
+ tdmr->base = start;
+ tdmr->size = end - start;
+ }
+
+ /* @tdmr_idx is always the index of last valid TDMR. */
+ *tdmr_num = tdmr_idx + 1;
+
+ return 0;
+err:
+ /*
+ * Clean up already allocated TDMRs in case of error. @tdmr_idx
+ * indicates the last TDMR that wasn't created successfully,
+	 * so only the first @tdmr_idx TDMRs need to be freed.
+ */
+ free_tdmrs(tdmr_array, tdmr_idx);
+ return ret;
+}
+
static int construct_tdmrs(struct tdmr_info **tdmr_array, int *tdmr_num)
{
int ret;
@@ -830,8 +963,13 @@ static int construct_tdmrs(struct tdmr_info **tdmr_array, int *tdmr_num)
if (ret)
goto err;
+ ret = create_tdmrs(tdmr_array, tdmr_num);
+ if (ret)
+ goto err;
+
/* Return -EFAULT until constructing TDMRs is done */
ret = -EFAULT;
+ free_tdmrs(tdmr_array, *tdmr_num);
err:
return ret;
}
--
2.35.1
Intel Trusted Domain Extensions (TDX) protects guest VMs from malicious
host and certain physical attacks. To support TDX, a new CPU mode called
Secure Arbitration Mode (SEAM) is added to Intel processors.
SEAM is an extension to the VMX architecture to define a new VMX root
operation (SEAM VMX root) and a new VMX non-root operation (SEAM VMX
non-root). SEAM VMX root operation is designed to host a CPU-attested
software module called the 'TDX module' which implements functions to
manage crypto-protected VMs called Trust Domains (TD).  It is also
designed to host a CPU-attested software module called the
'Intel Persistent SEAMLDR (Intel P-SEAMLDR)' to load and update the TDX
module.
Software modules in SEAM VMX root run in a memory region defined by the
SEAM range register (SEAMRR).  So the first step in detecting Intel TDX
is to detect a valid SEAMRR.
The presence of SEAMRR is reported via a new SEAMRR bit (15) of the
IA32_MTRRCAP MSR. The SEAMRR range registers consist of a pair of MSRs:
IA32_SEAMRR_PHYS_BASE and IA32_SEAMRR_PHYS_MASK
BIOS is expected to configure SEAMRR with the same value across all
cores. In case of BIOS misconfiguration, detect and compare SEAMRR
on all cpus.
To start to support TDX, create a new arch/x86/virt/vmx/ for non-KVM
host kernel virtualization support for Intel platforms, and create a new
tdx.c under it for TDX host kernel support.
TDX also leverages Intel Multi-Key Total Memory Encryption (MKTME) to
crypto protect TD guests. Part of MKTME KeyIDs are reserved as "TDX
private KeyID" or "TDX KeyIDs" for short. Similar to detecting SEAMRR,
detecting TDX private KeyIDs also needs to be done on all cpus to detect
any BIOS misconfiguration.
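For illustration, the KeyID partitioning decode boils down to the
arithmetic below, mirroring the TDX_KEYID_START()/TDX_KEYID_NUM() macros
added by this series (the interpretation of the low/high 32 bits of the
partitioning value is an assumption based on those macros):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of decoding a KeyID-partitioning MSR value: the low 32 bits
 * hold the number of MKTME KeyIDs (TDX private KeyIDs start right after
 * them), and the high 32 bits hold the number of TDX private KeyIDs.
 */
uint32_t tdx_keyid_start(uint64_t keyid_part)
{
	return (uint32_t)((keyid_part & 0xffffffffull) + 1);
}

uint32_t tdx_keyid_num(uint64_t keyid_part)
{
	return (uint32_t)(keyid_part >> 32);
}
```

With 63 MKTME KeyIDs and 64 TDX private KeyIDs, the first TDX KeyID is 64.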
Add a function to detect all TDX preliminaries (SEAMRR, TDX private
KeyIDs) for a given cpu when it is brought up. As the first step,
detect the validity of SEAMRR.
Also add a new Kconfig option CONFIG_INTEL_TDX_HOST to opt in to TDX host
kernel support (to distinguish it from TDX guest kernel support).
Signed-off-by: Kai Huang <[email protected]>
---
arch/x86/Kconfig | 12 +++++
arch/x86/Makefile | 2 +
arch/x86/include/asm/tdx.h | 9 ++++
arch/x86/kernel/cpu/intel.c | 3 ++
arch/x86/virt/Makefile | 2 +
arch/x86/virt/vmx/Makefile | 2 +
arch/x86/virt/vmx/tdx.c | 102 ++++++++++++++++++++++++++++++++++++
7 files changed, 132 insertions(+)
create mode 100644 arch/x86/virt/Makefile
create mode 100644 arch/x86/virt/vmx/Makefile
create mode 100644 arch/x86/virt/vmx/tdx.c
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index fb2706f7f04a..f4c5481cca46 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1956,6 +1956,18 @@ config X86_SGX
If unsure, say N.
+config INTEL_TDX_HOST
+ bool "Intel Trust Domain Extensions (TDX) host support"
+ default n
+ depends on CPU_SUP_INTEL
+ depends on X86_64
+ help
+ Intel Trust Domain Extensions (TDX) protects guest VMs from malicious
+ host and certain physical attacks. This option enables necessary TDX
+ support in host kernel to run protected VMs.
+
+ If unsure, say N.
+
config EFI
bool "EFI runtime service support"
depends on ACPI
diff --git a/arch/x86/Makefile b/arch/x86/Makefile
index e84cdd409b64..83a6a5a2e244 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
@@ -238,6 +238,8 @@ head-y += arch/x86/kernel/platform-quirks.o
libs-y += arch/x86/lib/
+core-y += arch/x86/virt/
+
# drivers-y are linked after core-y
drivers-$(CONFIG_MATH_EMULATION) += arch/x86/math-emu/
drivers-$(CONFIG_PCI) += arch/x86/pci/
diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index 6a97d42b0de9..605d87ab580e 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -11,6 +11,8 @@
#ifndef __ASSEMBLY__
+#include <asm/processor.h>
+
/*
* Used to gather the output registers values of the TDCALL and SEAMCALL
* instructions when requesting services from the TDX module.
@@ -78,5 +80,12 @@ static inline long tdx_kvm_hypercall(unsigned int nr, unsigned long p1,
return -ENODEV;
}
#endif /* CONFIG_INTEL_TDX_GUEST && CONFIG_KVM_GUEST */
+
+#ifdef CONFIG_INTEL_TDX_HOST
+void tdx_detect_cpu(struct cpuinfo_x86 *c);
+#else
+static inline void tdx_detect_cpu(struct cpuinfo_x86 *c) { }
+#endif /* CONFIG_INTEL_TDX_HOST */
+
#endif /* !__ASSEMBLY__ */
#endif /* _ASM_X86_TDX_H */
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index 8321c43554a1..b142a640fb8e 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -26,6 +26,7 @@
#include <asm/resctrl.h>
#include <asm/numa.h>
#include <asm/thermal.h>
+#include <asm/tdx.h>
#ifdef CONFIG_X86_64
#include <linux/topology.h>
@@ -715,6 +716,8 @@ static void init_intel(struct cpuinfo_x86 *c)
if (cpu_has(c, X86_FEATURE_TME))
detect_tme(c);
+ tdx_detect_cpu(c);
+
init_intel_misc_features(c);
if (tsx_ctrl_state == TSX_CTRL_ENABLE)
diff --git a/arch/x86/virt/Makefile b/arch/x86/virt/Makefile
new file mode 100644
index 000000000000..1e36502cd738
--- /dev/null
+++ b/arch/x86/virt/Makefile
@@ -0,0 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0-only
+obj-y += vmx/
diff --git a/arch/x86/virt/vmx/Makefile b/arch/x86/virt/vmx/Makefile
new file mode 100644
index 000000000000..1bd688684716
--- /dev/null
+++ b/arch/x86/virt/vmx/Makefile
@@ -0,0 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0-only
+obj-$(CONFIG_INTEL_TDX_HOST) += tdx.o
diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c
new file mode 100644
index 000000000000..03f35c75f439
--- /dev/null
+++ b/arch/x86/virt/vmx/tdx.c
@@ -0,0 +1,102 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright(c) 2022 Intel Corporation.
+ *
+ * Intel Trusted Domain Extensions (TDX) support
+ */
+
+#define pr_fmt(fmt) "tdx: " fmt
+
+#include <linux/types.h>
+#include <linux/cpumask.h>
+#include <asm/msr-index.h>
+#include <asm/msr.h>
+#include <asm/cpufeature.h>
+#include <asm/cpufeatures.h>
+#include <asm/tdx.h>
+
+/* Support Intel Secure Arbitration Mode Range Registers (SEAMRR) */
+#define MTRR_CAP_SEAMRR BIT(15)
+
+/* Core-scope Intel SEAMRR base and mask registers. */
+#define MSR_IA32_SEAMRR_PHYS_BASE 0x00001400
+#define MSR_IA32_SEAMRR_PHYS_MASK 0x00001401
+
+#define SEAMRR_PHYS_BASE_CONFIGURED BIT_ULL(3)
+#define SEAMRR_PHYS_MASK_ENABLED BIT_ULL(11)
+#define SEAMRR_PHYS_MASK_LOCKED BIT_ULL(10)
+
+#define SEAMRR_ENABLED_BITS \
+ (SEAMRR_PHYS_MASK_ENABLED | SEAMRR_PHYS_MASK_LOCKED)
+
+/* BIOS must configure SEAMRR registers for all cores consistently */
+static u64 seamrr_base, seamrr_mask;
+
+static bool __seamrr_enabled(void)
+{
+ return (seamrr_mask & SEAMRR_ENABLED_BITS) == SEAMRR_ENABLED_BITS;
+}
+
+static void detect_seam_bsp(struct cpuinfo_x86 *c)
+{
+ u64 mtrrcap, base, mask;
+
+ /* SEAMRR is reported via MTRRcap */
+ if (!boot_cpu_has(X86_FEATURE_MTRR))
+ return;
+
+ rdmsrl(MSR_MTRRcap, mtrrcap);
+ if (!(mtrrcap & MTRR_CAP_SEAMRR))
+ return;
+
+ rdmsrl(MSR_IA32_SEAMRR_PHYS_BASE, base);
+ if (!(base & SEAMRR_PHYS_BASE_CONFIGURED)) {
+ pr_info("SEAMRR base is not configured by BIOS\n");
+ return;
+ }
+
+ rdmsrl(MSR_IA32_SEAMRR_PHYS_MASK, mask);
+ if ((mask & SEAMRR_ENABLED_BITS) != SEAMRR_ENABLED_BITS) {
+ pr_info("SEAMRR is not enabled by BIOS\n");
+ return;
+ }
+
+ seamrr_base = base;
+ seamrr_mask = mask;
+}
+
+static void detect_seam_ap(struct cpuinfo_x86 *c)
+{
+ u64 base, mask;
+
+ /*
+ * Don't bother to detect this AP if SEAMRR is not
+ * enabled after earlier detections.
+ */
+ if (!__seamrr_enabled())
+ return;
+
+ rdmsrl(MSR_IA32_SEAMRR_PHYS_BASE, base);
+ rdmsrl(MSR_IA32_SEAMRR_PHYS_MASK, mask);
+
+ if (base == seamrr_base && mask == seamrr_mask)
+ return;
+
+ pr_err("Inconsistent SEAMRR configuration by BIOS\n");
+ /* Mark SEAMRR as disabled. */
+ seamrr_base = 0;
+ seamrr_mask = 0;
+}
+
+static void detect_seam(struct cpuinfo_x86 *c)
+{
+ if (c == &boot_cpu_data)
+ detect_seam_bsp(c);
+ else
+ detect_seam_ap(c);
+}
+
+void tdx_detect_cpu(struct cpuinfo_x86 *c)
+{
+ detect_seam(c);
+}
--
2.35.1
Add <Documentation/x86/tdx_host.rst> for TDX host support.
Signed-off-by: Kai Huang <[email protected]>
---
Documentation/x86/index.rst | 1 +
Documentation/x86/tdx_host.rst | 300 +++++++++++++++++++++++++++++++++
2 files changed, 301 insertions(+)
create mode 100644 Documentation/x86/tdx_host.rst
diff --git a/Documentation/x86/index.rst b/Documentation/x86/index.rst
index 382e53ca850a..145fc251fbfc 100644
--- a/Documentation/x86/index.rst
+++ b/Documentation/x86/index.rst
@@ -25,6 +25,7 @@ x86-specific Documentation
intel_txt
amd-memory-encryption
tdx
+ tdx_host
pti
mds
microcode
diff --git a/Documentation/x86/tdx_host.rst b/Documentation/x86/tdx_host.rst
new file mode 100644
index 000000000000..a843ede9d45c
--- /dev/null
+++ b/Documentation/x86/tdx_host.rst
@@ -0,0 +1,300 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=========================================================
+Intel Trusted Domain Extensions (TDX) host kernel support
+=========================================================
+
+Intel Trusted Domain Extensions (TDX) protects guest VMs from malicious
+host and certain physical attacks. To support TDX, a new CPU mode called
+Secure Arbitration Mode (SEAM) is added to Intel processors.
+
+SEAM is an extension to the VMX architecture to define a new VMX root
+operation called 'SEAM VMX root' and a new VMX non-root operation called
+'SEAM VMX non-root'.  Collectively, the SEAM VMX root and SEAM VMX non-root
+execution modes are called operation in SEAM.
+
+SEAM VMX root operation is designed to host a CPU-attested, software
+module called 'Intel TDX module' to manage virtual machine (VM) guests
+called Trust Domains (TD). The TDX module implements the functions to
+build, tear down, and start execution of TD VMs. SEAM VMX root is also
+designed to additionally host a CPU-attested, software module called the
+'Intel Persistent SEAMLDR (Intel P-SEAMLDR)' module to load and update
+the Intel TDX module.
+
+The software in SEAM VMX root runs in the memory region defined by the
+SEAM range register (SEAMRR). Access to this range is restricted to SEAM
+VMX root operation. Code fetches outside of SEAMRR when in SEAM VMX root
+operation are meant to be disallowed and lead to an unbreakable shutdown.
+
+TDX leverages Intel Multi-Key Total Memory Encryption (MKTME) to crypto
+protect TD guests. TDX reserves part of MKTME KeyID space as TDX private
+KeyIDs, which can only be used by software running in SEAM.  The physical
+address bits reserved for encoding TDX private KeyID are treated as
+reserved bits when not in SEAM operation. The partitioning of MKTME
+KeyIDs and TDX private KeyIDs is configured by BIOS.
+
+The host kernel transitions to either the P-SEAMLDR or the TDX module via
+the new SEAMCALL instruction.  SEAMCALLs are host-side interface functions
+defined by the P-SEAMLDR and the TDX module around the SEAMCALL instruction.
+They are similar to hypercalls, except they are made by the host kernel to
+the SEAM software modules.
+
+Before being able to manage TD guests, the TDX module must be loaded
+into SEAMRR and properly initialized using SEAMCALLs defined by TDX
+architecture. The current implementation assumes both P-SEAMLDR and
+TDX module are loaded by BIOS before the kernel boots.
+
+Detection and Initialization
+----------------------------
+
+The presence of SEAMRR is reported via a new SEAMRR bit (15) of the
+IA32_MTRRCAP MSR. The SEAMRR range registers consist of a pair of MSRs:
+IA32_SEAMRR_PHYS_BASE (0x1400) and IA32_SEAMRR_PHYS_MASK (0x1401).
+SEAMRR is enabled when bit 3 of IA32_SEAMRR_PHYS_BASE is set and
+bits 10 and 11 of IA32_SEAMRR_PHYS_MASK are set.
+
+However, there is no CPUID or MSR for querying the presence of the TDX
+module or the P-SEAMLDR. SEAMCALL fails with VMfailInvalid when SEAM
+software is not loaded, so SEAMCALL can be used to detect P-SEAMLDR and
+the TDX module.  The SEAMLDR.INFO SEAMCALL is used to detect both.
+Success of the SEAMCALL means the P-SEAMLDR is loaded, and the P-SEAMLDR
+information returned by the SEAMCALL further tells whether the TDX module
+is loaded or not.
+
+The user can check whether the TDX module is initialized via dmesg:
+
+ [..] tdx: P-SEAMLDR: version 0x0, vendor_id: 0x8086, build_date: 20211209, build_num 160, major 1, minor 0
+ [..] tdx: TDX module detected.
+ [..] tdx: TDX module: vendor_id 0x8086, major_version 1, minor_version 0, build_date 20211209, build_num 160
+ [..] tdx: TDX module initialized.
+
+Initializing TDX takes time (in seconds) and additional memory space (for
+metadata). Both are affected by the size of total usable memory which the
+TDX module is configured with. In particular, the TDX metadata consumes
+~1/256 of TDX usable memory. This leads to a non-negligible burden as the
+current implementation simply treats all E820 RAM ranges as TDX usable
+memory (all system RAM meets the security requirements on the first
+generation of TDX-capable platforms).
+
+Therefore, the kernel uses lazy TDX initialization to avoid this burden
+for all users on a TDX-capable platform.  The software component (e.g.
+KVM) which wants to use TDX is expected to call the two helpers below to
+detect and initialize the TDX module when TDX is truly needed:
+
+ if (tdx_detect())
+ goto no_tdx;
+ if (tdx_init())
+ goto no_tdx;
+
+TDX detection and initialization are done via SEAMCALLs, which require the
+CPU to be in VMX operation.  The caller of the above two helpers should
+ensure that condition.
+
+Currently, KVM is the only user of TDX, and it already handles
+entering/leaving VMX operation.  Letting KVM initialize TDX on demand
+avoids having to handle entering/leaving VMX operation, which isn't
+trivial, in the core kernel.
+
+In addition, a new kernel parameter 'tdx_host={on/off}' allows the admin
+to force-disable the TDX capability.
+
+TDX Memory Management
+---------------------
+
+The TDX architecture manages TDX memory via the data structures below:
+
+1) Convertible Memory Regions (CMRs)
+
+TDX provides increased levels of memory confidentiality and integrity.
+This requires special hardware support for features like memory
+encryption and storage of memory integrity checksums. A CMR represents a
+memory range that meets those requirements and can be used as TDX memory.
+The list of CMRs can be queried from TDX module.
+
+2) TD Memory Regions (TDMRs)
+
+The TDX module manages TDX usable memory via TD Memory Regions (TDMR).
+Each TDMR holds its base and size, the base and size of its metadata
+(PAMT), and an array of reserved areas to hold the memory region's
+address holes and PAMTs.  A TDMR must be 1G aligned and in 1G granularity.
+
+The host kernel is responsible for choosing which convertible memory
+regions (residing in CMRs) to use as TDX memory, constructing a list of
+TDMRs to cover all those memory regions, and configuring the TDMRs to the
+TDX module.
+
+3) Physical Address Metadata Tables (PAMTs)
+
+This metadata essentially serves as the 'struct page' for the TDX module,
+recording things like which TD guest 'owns' a given page of memory. Each
+TDMR has a dedicated PAMT.
+
+PAMT is not reserved by the hardware upfront and must be allocated by the
+kernel and given to the TDX module. PAMT for a given TDMR doesn't have
+to be within that TDMR, but a PAMT must be within one CMR. Additionally,
+if a PAMT overlaps with a TDMR, the overlapping part must be marked as
+reserved in that particular TDMR.
+
+Kernel Policy of TDX Memory
+---------------------------
+
+The first generation of TDX essentially guarantees that all system RAM
+memory regions (excluding the memory below 1MB) are covered by CMRs.
+Currently, to avoid having to modify the page allocator to support both
+TDX and non-TDX allocation, the kernel chooses to use all system RAM as
+TDX memory.  A list of TDMRs is constructed based on all RAM entries in
+the e820 table and configured to the TDX module.
+
+Limitations
+-----------
+
+1. Constructing TDMRs
+
+Currently, the kernel tries to create one TDMR for each RAM entry in
+e820.  'e820_table' is used to find all RAM entries to honor the 'mem' and
+'memmap' kernel command lines.  However, the 'memmap' command line may
+result in many discrete RAM entries.  TDX architecturally only supports a
+limited number of TDMRs (currently 64), so in this case constructing TDMRs
+may fail due to exceeding that maximum.  The user is responsible for
+avoiding this; otherwise TDX may not be available.  This can be further
+enhanced by supporting merging of adjacent TDMRs.
+
+2. PAMT allocation
+
+Currently, the kernel allocates PAMT for each TDMR separately using
+alloc_contig_pages(). alloc_contig_pages() only guarantees the PAMT is
+allocated from a given NUMA node, but doesn't have control over
+allocating PAMT from a given TDMR range. This may result in all PAMTs
+on one NUMA node being within one single TDMR. PAMTs overlapping with
+a given TDMR must be put into the TDMR's reserved areas too.  However, TDX
+only supports a limited number of reserved areas per TDMR (currently 16),
+thus too many PAMTs in one NUMA node may result in TDMR construction
+failure due to exceeding the TDMR's maximum number of reserved areas.
+
+The user is responsible for not creating too many discrete RAM entries
+on one NUMA node, which may result in having too many TDMRs on one node
+and eventually in TDMR construction failure due to exceeding the maximum
+number of reserved areas.  This can be further enhanced by supporting
+per-NUMA-node PAMT allocation, which could reduce the number of PAMTs to
+one per node.
+
+3. TDMR initialization
+
+Currently, the kernel initializes TDMRs one by one.  This may take a couple
+of seconds to finish on large memory systems (TBs).  This can be further
+enhanced by initializing different TDMRs in parallel on multiple
+cpus.
+
+4. CPU hotplug
+
+The first generation of TDX architecturally doesn't support ACPI CPU
+hotplug. All logical cpus are enabled by BIOS in MADT table. Also, the
+first generation of TDX-capable platforms don't support ACPI CPU hotplug
+either.  Since this physically cannot happen, the kernel currently doesn't
+have any check in the ACPI CPU hotplug code path to disable it.
+
+Also, only TDX module initialization requires that all BIOS-enabled cpus
+be online.  After the initialization, any logical cpu can be brought down
+and brought back online again later.  Therefore this series doesn't
+change logical CPU hotplug either.
+
+This can be enhanced when any future generation of TDX starts to support
+ACPI cpu hotplug.
+
+5. Memory hotplug
+
+The first generation of TDX architecturally doesn't support memory
+hotplug.  The CMRs are generated by BIOS during boot and are fixed
+during the machine's runtime.
+
+Also, the first generation of TDX-capable platforms don't support ACPI
+memory hotplug.  Since it physically cannot happen, the kernel currently
+doesn't have any check in the ACPI memory hotplug code path to disable it.
+
+A special case of memory hotplug is adding NVDIMM as system RAM using
+kmem driver. However the first generation of TDX-capable platforms
+cannot turn on TDX and NVDIMM simultaneously, so in practice this cannot
+happen either.
+
+Another case is that the admin can use the 'memmap' kernel command line
+to create legacy PMEMs and use them as TD guest memory or, theoretically,
+use the kmem driver to add them as system RAM.  The current implementation
+always includes legacy PMEMs when constructing TDMRs so they are also TDX
+memory.
+So legacy PMEMs can either be used as TD guest memory directly or can be
+converted to system RAM via kmem driver.
+
+This can be enhanced when a future generation of TDX starts to support ACPI
+memory hotplug, or NVDIMM and TDX can be enabled simultaneously on the
+same platform.
+
+6. Online CPUs
+
+TDX initialization includes a step where a certain SEAMCALL must be made
+on every BIOS-enabled CPU (with an ACPI MADT entry marked as enabled).
+Otherwise, the initialization process aborts at a later step.
+
+The user should avoid using boot parameters (such as maxcpus, nr_cpus,
+possible_cpus) or offlining CPUs before initializing TDX. Doing so will
+lead to a mismatch between online CPUs and BIOS-enabled CPUs, resulting
+in TDX module initialization failure.
+
+It is ok to offline CPUs after TDX initialization is completed.
+
+7. Kexec
+
+The TDX module can be initialized only once during its lifetime. The
+first generation of TDX doesn't have an interface to reset the TDX module
+to an uninitialized state so it can be initialized again.
+
+This implies:
+
+  - If the old kernel fails to initialize TDX, the new kernel cannot
+    use TDX either, unless the new kernel fixes the bug which leads to
+ initialization failure in the old kernel and can resume from where
+ the old kernel stops. This requires certain coordination between
+ the two kernels.
+
+ - If the old kernel has initialized TDX successfully, the new kernel
+ may be able to use TDX if the two kernels have exactly the same
+ configurations on the TDX module. It further requires the new kernel
+ to reserve the TDX metadata pages (allocated by the old kernel) in
+ its page allocator. It also requires coordination between the two
+ kernels. Furthermore, if kexec() is done when there are active TD
+ guests running, the new kernel cannot use TDX because it's extremely
+ hard for the old kernel to pass all TDX private pages to the new
+ kernel.
+
+Given that, the current implementation doesn't support TDX after kexec()
+(unless the old kernel hasn't initialized TDX at all).
+
+The current implementation doesn't shut down the TDX module but leaves it
+open during kexec().  This is because shutting down the TDX module requires
+the CPU to be in VMX operation, and there's no guarantee of this during
+kexec().  Leaving the TDX module open is not ideal, but it is OK since the
+new kernel won't be able to use TDX anyway (therefore the TDX module won't
+run at all).
+
+This can be further enhanced when the core kernel (non-KVM) can handle
+VMXON.
+
+If TDX is ever enabled and/or used to run any TD guests, the cachelines
+of TDX private memory, including PAMTs, used by the TDX module need to be
+flushed before transitioning to the new kernel; otherwise they may silently
+corrupt the new kernel.  Similar to SME, the current implementation
+flushes the cache in stop_this_cpu().
+
+8. Initialization error
+
+Currently, any error that happens during TDX initialization moves the TDX
+module to the SHUTDOWN state. No SEAMCALL is allowed in this state, and
+the TDX module cannot be re-initialized without a hard reset.
+
+This can be further enhanced to treat some errors as recoverable errors
+and let the caller retry later. A more detailed state machine can be
+added to record the internal state of TDX module, and the initialization
+can resume from that state in the next try.
+
+Specifically, there are three cases that can be treated as recoverable
+errors: 1) -ENOMEM (i.e. due to PAMT allocation); 2) TDH.SYS.CONFIG error
+due to TDH.SYS.LP.INIT not having been called on all cpus (i.e. due to
+offline cpus); 3) -EPERM when the caller doesn't guarantee all cpus are
+in VMX operation.
--
2.35.1
In order to provide crypto protection to guests, the TDX module uses
additional metadata to record things like which guest "owns" a given
page of memory. This metadata, referred to as the Physical Address Metadata
Table (PAMT), essentially serves as the 'struct page' for the TDX
module. PAMTs are not reserved by hardware upfront. They must be
allocated by the kernel and then given to the TDX module.
TDX supports 3 page sizes: 4K, 2M, and 1G. Each "TD Memory Region"
(TDMR) has 3 PAMTs to track the 3 supported page sizes respectively.
Each PAMT must be a physically contiguous area from the Convertible
Memory Regions (CMR). However, the PAMTs which track pages in one TDMR
do not need to reside within that TDMR but can be anywhere in CMRs.
If one PAMT overlaps with any TDMR, the overlapping part must be
reported as a reserved area in that particular TDMR.
Use alloc_contig_pages() since PAMT must be a physically contiguous area
and it may be potentially large (~1/256th of the size of the given TDMR).
The current version of TDX supports at most 16 reserved areas per TDMR
to cover both PAMTs and potential memory holes within the TDMR. If many
PAMTs are allocated within a single TDMR, 16 reserved areas may not be
sufficient to cover all of them.
Adopt the following policies when allocating PAMTs for a given TDMR:
- Allocate three PAMTs of the TDMR in one contiguous chunk to minimize
the total number of reserved areas consumed for PAMTs.
- Try to first allocate PAMT from the local node of the TDMR for better
NUMA locality.
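The size of the combined chunk mirrors the per-page-size calculation the
patch adds (__tdmr_get_pamt_sz()).  Below is a userspace sketch, assuming a
16-byte PAMT entry size (the real value is reported by the TDX module in
its system information):

```c
#include <assert.h>
#include <stdint.h>

#define TDX_PG_MAX	3	/* 4K, 2M, 1G */
#define TDX_HPAGE_SHIFT	9	/* each page size is 512x the previous */
#define PAGE_SHIFT	12
#define PAGE_ALIGN_UP(x) (((x) + 4095ULL) & ~4095ULL)

/*
 * Combined size of the three PAMTs for one TDMR: one entry per 4K, 2M
 * and 1G page in the TDMR, each PAMT rounded up to 4K.
 */
uint64_t tdmr_pamt_size(uint64_t tdmr_size, uint64_t entry_size)
{
	uint64_t sz = 0;
	int pgsz;

	for (pgsz = 0; pgsz < TDX_PG_MAX; pgsz++)
		sz += PAGE_ALIGN_UP((tdmr_size >>
				     (TDX_HPAGE_SHIFT * pgsz + PAGE_SHIFT)) *
				    entry_size);
	return sz;
}
```

For a 1G TDMR this gives 4MB for the 4K PAMT plus small 2M/1G PAMTs, i.e.
slightly over 1/256 of the TDMR size, matching the overhead figure quoted
in the documentation patch.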
Signed-off-by: Kai Huang <[email protected]>
---
arch/x86/Kconfig | 1 +
arch/x86/virt/vmx/tdx.c | 167 +++++++++++++++++++++++++++++++++++++++-
2 files changed, 167 insertions(+), 1 deletion(-)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 2d3f983e3582..193ba089d353 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1962,6 +1962,7 @@ config INTEL_TDX_HOST
depends on CPU_SUP_INTEL
depends on X86_64
select NUMA_KEEP_MEMINFO if NUMA
+ depends on CONTIG_ALLOC
help
Intel Trust Domain Extensions (TDX) protects guest VMs from malicious
host and certain physical attacks. This option enables necessary TDX
diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c
index 1939b64d23e8..c58c99b94c72 100644
--- a/arch/x86/virt/vmx/tdx.c
+++ b/arch/x86/virt/vmx/tdx.c
@@ -21,6 +21,7 @@
#include <asm/cpufeatures.h>
#include <asm/virtext.h>
#include <asm/e820/api.h>
+#include <asm/pgtable.h>
#include <asm/tdx.h>
#include "tdx.h"
@@ -66,6 +67,16 @@
#define TDMR_START(_tdmr) ((_tdmr)->base)
#define TDMR_END(_tdmr) ((_tdmr)->base + (_tdmr)->size)
+/* Page sizes supported by TDX */
+enum tdx_page_sz {
+ TDX_PG_4K = 0,
+ TDX_PG_2M,
+ TDX_PG_1G,
+ TDX_PG_MAX,
+};
+
+#define TDX_HPAGE_SHIFT 9
+
/*
* TDX module status during initialization
*/
@@ -893,7 +904,7 @@ static int create_tdmrs(struct tdmr_info **tdmr_array, int *tdmr_num)
* them. To keep it simple, always try to use one TDMR to cover
* one RAM entry.
*/
- e820_for_each_mem(e820_table, i, start, end) {
+ e820_for_each_mem(i, start, end) {
start = TDMR_ALIGN_DOWN(start);
end = TDMR_ALIGN_UP(end);
@@ -955,6 +966,148 @@ static int create_tdmrs(struct tdmr_info **tdmr_array, int *tdmr_num)
return ret;
}
+/* Calculate PAMT size given a TDMR and a page size */
+static unsigned long __tdmr_get_pamt_sz(struct tdmr_info *tdmr,
+ enum tdx_page_sz pgsz)
+{
+ unsigned long pamt_sz;
+
+ pamt_sz = (tdmr->size >> ((TDX_HPAGE_SHIFT * pgsz) + PAGE_SHIFT)) *
+ tdx_sysinfo.pamt_entry_size;
+ /* PAMT size must be 4K aligned */
+ pamt_sz = ALIGN(pamt_sz, PAGE_SIZE);
+
+ return pamt_sz;
+}
+
+/* Calculate the size of all PAMTs for a TDMR */
+static unsigned long tdmr_get_pamt_sz(struct tdmr_info *tdmr)
+{
+ enum tdx_page_sz pgsz;
+ unsigned long pamt_sz;
+
+ pamt_sz = 0;
+ for (pgsz = TDX_PG_4K; pgsz < TDX_PG_MAX; pgsz++)
+ pamt_sz += __tdmr_get_pamt_sz(tdmr, pgsz);
+
+ return pamt_sz;
+}
+
+/*
+ * Locate the NUMA node containing the start of the given TDMR's first
+ * RAM entry. The given TDMR may also cover memory in other NUMA nodes.
+ */
+static int tdmr_get_nid(struct tdmr_info *tdmr)
+{
+ u64 start, end;
+ int i;
+
+ /* Find the first RAM entry covered by the TDMR */
+ e820_for_each_mem(i, start, end)
+ if (end > TDMR_START(tdmr))
+ break;
+
+ /*
+ * One TDMR must cover at least one (or partial) RAM entry,
+ * otherwise it is a kernel bug. WARN_ON() in this case.
+ */
+ if (WARN_ON_ONCE((start >= end) || start >= TDMR_END(tdmr)))
+ return 0;
+
+ /*
+ * The first RAM entry may be partially covered by the previous
+ * TDMR. In this case, use TDMR's start to find the NUMA node.
+ */
+ if (start < TDMR_START(tdmr))
+ start = TDMR_START(tdmr);
+
+ return phys_to_target_node(start);
+}
+
+static int tdmr_setup_pamt(struct tdmr_info *tdmr)
+{
+ unsigned long tdmr_pamt_base, pamt_base[TDX_PG_MAX];
+ unsigned long pamt_sz[TDX_PG_MAX];
+ unsigned long pamt_npages;
+ struct page *pamt;
+ enum tdx_page_sz pgsz;
+ int nid;
+
+ /*
+ * Allocate one chunk of physically contiguous memory for all
+ * PAMTs. This helps minimize the PAMT's use of reserved areas
+ * in overlapped TDMRs.
+ */
+ nid = tdmr_get_nid(tdmr);
+ pamt_npages = tdmr_get_pamt_sz(tdmr) >> PAGE_SHIFT;
+ pamt = alloc_contig_pages(pamt_npages, GFP_KERNEL, nid,
+ &node_online_map);
+ if (!pamt)
+ return -ENOMEM;
+
+ /* Calculate PAMT base and size for all supported page sizes. */
+ tdmr_pamt_base = page_to_pfn(pamt) << PAGE_SHIFT;
+ for (pgsz = TDX_PG_4K; pgsz < TDX_PG_MAX; pgsz++) {
+ unsigned long sz = __tdmr_get_pamt_sz(tdmr, pgsz);
+
+ pamt_base[pgsz] = tdmr_pamt_base;
+ pamt_sz[pgsz] = sz;
+
+ tdmr_pamt_base += sz;
+ }
+
+ tdmr->pamt_4k_base = pamt_base[TDX_PG_4K];
+ tdmr->pamt_4k_size = pamt_sz[TDX_PG_4K];
+ tdmr->pamt_2m_base = pamt_base[TDX_PG_2M];
+ tdmr->pamt_2m_size = pamt_sz[TDX_PG_2M];
+ tdmr->pamt_1g_base = pamt_base[TDX_PG_1G];
+ tdmr->pamt_1g_size = pamt_sz[TDX_PG_1G];
+
+ return 0;
+}
+
+static void tdmr_free_pamt(struct tdmr_info *tdmr)
+{
+ unsigned long pamt_pfn, pamt_sz;
+
+ pamt_pfn = tdmr->pamt_4k_base >> PAGE_SHIFT;
+ pamt_sz = tdmr->pamt_4k_size + tdmr->pamt_2m_size + tdmr->pamt_1g_size;
+
+ /* Do nothing if PAMT hasn't been allocated for this TDMR */
+ if (!pamt_sz)
+ return;
+
+ if (WARN_ON(!pamt_pfn))
+ return;
+
+ free_contig_range(pamt_pfn, pamt_sz >> PAGE_SHIFT);
+}
+
+static void tdmrs_free_pamt_all(struct tdmr_info **tdmr_array, int tdmr_num)
+{
+ int i;
+
+ for (i = 0; i < tdmr_num; i++)
+ tdmr_free_pamt(tdmr_array[i]);
+}
+
+/* Allocate and set up PAMTs for all TDMRs */
+static int tdmrs_setup_pamt_all(struct tdmr_info **tdmr_array, int tdmr_num)
+{
+ int i, ret;
+
+ for (i = 0; i < tdmr_num; i++) {
+ ret = tdmr_setup_pamt(tdmr_array[i]);
+ if (ret)
+ goto err;
+ }
+
+ return 0;
+err:
+ tdmrs_free_pamt_all(tdmr_array, tdmr_num);
+ return -ENOMEM;
+}
+
static int construct_tdmrs(struct tdmr_info **tdmr_array, int *tdmr_num)
{
int ret;
@@ -967,8 +1120,14 @@ static int construct_tdmrs(struct tdmr_info **tdmr_array, int *tdmr_num)
if (ret)
goto err;
+ ret = tdmrs_setup_pamt_all(tdmr_array, *tdmr_num);
+ if (ret)
+ goto err_free_tdmrs;
+
/* Return -EFAULT until constructing TDMRs is done */
ret = -EFAULT;
+ tdmrs_free_pamt_all(tdmr_array, *tdmr_num);
+err_free_tdmrs:
free_tdmrs(tdmr_array, *tdmr_num);
err:
return ret;
@@ -1018,6 +1177,12 @@ static int init_tdx_module(void)
* initialization are done.
*/
ret = -EFAULT;
+ /*
+ * Free PAMTs allocated in construct_tdmrs() when TDX module
+ * initialization fails.
+ */
+ if (ret)
+ tdmrs_free_pamt_all(tdmr_array, tdmr_num);
out_free_tdmrs:
/*
* TDMRs are only used during initializing TDX module. Always
--
2.35.1
Do the TDX module global initialization which requires calling
TDH.SYS.INIT once on any logical cpu.
Signed-off-by: Kai Huang <[email protected]>
---
arch/x86/virt/vmx/tdx.c | 11 ++++++++++-
arch/x86/virt/vmx/tdx.h | 1 +
2 files changed, 11 insertions(+), 1 deletion(-)
diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c
index d87af534db51..45e7404b5d81 100644
--- a/arch/x86/virt/vmx/tdx.c
+++ b/arch/x86/virt/vmx/tdx.c
@@ -463,11 +463,20 @@ static int __tdx_detect(void)
static int init_tdx_module(void)
{
+ int ret;
+
+ /* TDX module global initialization */
+ ret = seamcall(TDH_SYS_INIT, 0, 0, 0, 0, NULL, NULL);
+ if (ret)
+ goto out;
+
/*
* Return -EFAULT until all steps of TDX module
* initialization are done.
*/
- return -EFAULT;
+ ret = -EFAULT;
+out:
+ return ret;
}
static void shutdown_tdx_module(void)
diff --git a/arch/x86/virt/vmx/tdx.h b/arch/x86/virt/vmx/tdx.h
index dcc1f6dfe378..f0983b1936d8 100644
--- a/arch/x86/virt/vmx/tdx.h
+++ b/arch/x86/virt/vmx/tdx.h
@@ -38,6 +38,7 @@ struct p_seamldr_info {
/*
* TDX module SEAMCALL leaf functions
*/
+#define TDH_SYS_INIT 33
#define TDH_SYS_LP_SHUTDOWN 44
struct tdx_module_output;
--
2.35.1
Pre-TDX Intel hardware has support for a memory encryption architecture
called MKTME. The memory encryption hardware underpinning MKTME is also
used for Intel TDX. TDX ends up "stealing" some of the physical address
space from the MKTME architecture for crypto protection to VMs.
A new MSR (IA32_MKTME_KEYID_PARTITIONING) helps to enumerate how MKTME-
enumerated "KeyID" space is distributed between TDX and legacy MKTME.
KeyIDs reserved for TDX are called 'TDX private KeyIDs' or 'TDX KeyIDs'
for short.
The new MSR is per package and BIOS is responsible for partitioning
MKTME KeyIDs and TDX KeyIDs consistently among all packages.
Detect TDX private KeyIDs as preparation for initializing TDX. Similar
to detecting SEAMRR, perform the detection on all cpus to catch any
potential BIOS misconfiguration among packages.
Signed-off-by: Kai Huang <[email protected]>
---
arch/x86/virt/vmx/tdx.c | 72 +++++++++++++++++++++++++++++++++++++++++
1 file changed, 72 insertions(+)
diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c
index 03f35c75f439..ba2210001ea8 100644
--- a/arch/x86/virt/vmx/tdx.c
+++ b/arch/x86/virt/vmx/tdx.c
@@ -29,9 +29,28 @@
#define SEAMRR_ENABLED_BITS \
(SEAMRR_PHYS_MASK_ENABLED | SEAMRR_PHYS_MASK_LOCKED)
+/*
+ * Intel Trusted Domain CPU Architecture Extension spec:
+ *
+ * IA32_MKTME_KEYID_PARTITIONING:
+ *
+ * Bit [31:0]: number of MKTME KeyIDs.
+ * Bit [63:32]: number of TDX private KeyIDs.
+ *
+ * TDX private KeyIDs start after the last MKTME KeyID.
+ */
+#define MSR_IA32_MKTME_KEYID_PARTITIONING 0x00000087
+
+#define TDX_KEYID_START(_keyid_part) \
+ ((u32)(((_keyid_part) & 0xffffffffull) + 1))
+#define TDX_KEYID_NUM(_keyid_part) ((u32)((_keyid_part) >> 32))
+
/* BIOS must configure SEAMRR registers for all cores consistently */
static u64 seamrr_base, seamrr_mask;
+static u32 tdx_keyid_start;
+static u32 tdx_keyid_num;
+
static bool __seamrr_enabled(void)
{
return (seamrr_mask & SEAMRR_ENABLED_BITS) == SEAMRR_ENABLED_BITS;
@@ -96,7 +115,60 @@ static void detect_seam(struct cpuinfo_x86 *c)
detect_seam_ap(c);
}
+static void detect_tdx_keyids_bsp(struct cpuinfo_x86 *c)
+{
+ u64 keyid_part;
+
+ /* TDX is built on MKTME, which is based on TME */
+ if (!boot_cpu_has(X86_FEATURE_TME))
+ return;
+
+ if (rdmsrl_safe(MSR_IA32_MKTME_KEYID_PARTITIONING, &keyid_part))
+ return;
+
+ /* If MSR value is 0, TDX is not enabled by BIOS. */
+ if (!keyid_part)
+ return;
+
+ tdx_keyid_num = TDX_KEYID_NUM(keyid_part);
+ if (!tdx_keyid_num)
+ return;
+
+ tdx_keyid_start = TDX_KEYID_START(keyid_part);
+}
+
+static void detect_tdx_keyids_ap(struct cpuinfo_x86 *c)
+{
+ u64 keyid_part;
+
+ /*
+ * Don't bother to detect this AP if TDX KeyIDs are
+ * not detected or cleared after earlier detections.
+ */
+ if (!tdx_keyid_num)
+ return;
+
+ rdmsrl(MSR_IA32_MKTME_KEYID_PARTITIONING, keyid_part);
+
+ if ((tdx_keyid_start == TDX_KEYID_START(keyid_part)) &&
+ (tdx_keyid_num == TDX_KEYID_NUM(keyid_part)))
+ return;
+
+ pr_err("Inconsistent TDX KeyID configuration among packages by BIOS\n");
+ tdx_keyid_start = 0;
+ tdx_keyid_num = 0;
+}
+
+static void detect_tdx_keyids(struct cpuinfo_x86 *c)
+{
+ if (c == &boot_cpu_data)
+ detect_tdx_keyids_bsp(c);
+ else
+ detect_tdx_keyids_ap(c);
+}
+
void tdx_detect_cpu(struct cpuinfo_x86 *c)
{
detect_seam(c);
+ detect_tdx_keyids(c);
}
--
2.35.1
TDX provides increased levels of memory confidentiality and integrity.
This requires special hardware support for features like memory
encryption and storage of memory integrity checksums. Not all memory
satisfies these requirements.
As a result, TDX introduced the concept of a "Convertible Memory Region"
(CMR). During boot, the firmware builds a list of all of the memory
ranges which can provide the TDX security guarantees. The list of these
ranges, along with TDX module information, is available to the kernel by
querying the TDX module.
In order to provide crypto protection to TD guests, the TDX architecture
also needs additional metadata to record things like which TD guest
"owns" a given page of memory. This metadata essentially serves as the
'struct page' for the TDX module. The space for this metadata is not
reserved by the hardware upfront and must be allocated by the kernel
and given to the TDX module.
Since this metadata consumes space, the VMM can choose whether or not to
allocate it for a given area of convertible memory. If it chooses not
to, the memory cannot receive TDX protections and can not be used by TDX
guests as private memory.
For every memory region that the VMM wants to use as TDX memory, it sets
up a "TD Memory Region" (TDMR). Each TDMR represents a physically
contiguous convertible range and must also have its own physically
contiguous metadata table, referred to as a Physical Address Metadata
Table (PAMT), to track status for each page in the TDMR range.
Unlike a CMR, each TDMR requires 1G granularity and alignment. To
support physical RAM areas that don't meet those strict requirements,
each TDMR permits a number of internal "reserved areas" which can be
placed over memory holes. If PAMT metadata is placed within a TDMR it
must be covered by one of these reserved areas.
Let's summarize the concepts:
CMR - Firmware-enumerated physical ranges that support TDX. CMRs are
4K aligned.
TDMR - Physical address range which is chosen by the kernel to support
TDX. 1G granularity and alignment required. Each TDMR has
reserved areas into which memory holes and overlapping PAMTs
can be put.
PAMT - Physically contiguous TDX metadata. One table for each page size
per TDMR. Roughly 1/256th of TDMR in size. 256G TDMR = ~1G
PAMT.
As one step of initializing the TDX module, the memory regions that TDX
module can use must be configured to the TDX module via an array of
TDMRs.
Constructing TDMRs to build the TDX memory consists of the following steps:
1) Create TDMRs to cover all memory regions that TDX module can use;
2) Allocate and set up PAMT for each TDMR;
3) Set up reserved areas for each TDMR.
Add a placeholder, right after getting the TDX module and CMR
information, to construct TDMRs by doing the above steps, as
preparation for configuring the TDX module. Always free the TDMRs at
the end of initialization (whether it succeeds or not), as TDMRs are
only used during initialization.
Signed-off-by: Kai Huang <[email protected]>
---
arch/x86/virt/vmx/tdx.c | 47 +++++++++++++++++++++++++++++++++++++++++
arch/x86/virt/vmx/tdx.h | 23 ++++++++++++++++++++
2 files changed, 70 insertions(+)
diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c
index eb585bc5edda..1571bf192dde 100644
--- a/arch/x86/virt/vmx/tdx.c
+++ b/arch/x86/virt/vmx/tdx.c
@@ -13,6 +13,7 @@
#include <linux/cpu.h>
#include <linux/smp.h>
#include <linux/atomic.h>
+#include <linux/slab.h>
#include <asm/msr-index.h>
#include <asm/msr.h>
#include <asm/cpufeature.h>
@@ -590,8 +591,29 @@ static int tdx_get_sysinfo(void)
return sanitize_cmrs(tdx_cmr_array, cmr_num);
}
+static void free_tdmrs(struct tdmr_info **tdmr_array, int tdmr_num)
+{
+ int i;
+
+ for (i = 0; i < tdmr_num; i++) {
+ struct tdmr_info *tdmr = tdmr_array[i];
+
+ /* kfree() works with NULL */
+ kfree(tdmr);
+ tdmr_array[i] = NULL;
+ }
+}
+
+static int construct_tdmrs(struct tdmr_info **tdmr_array, int *tdmr_num)
+{
+ /* Return -EFAULT until constructing TDMRs is done */
+ return -EFAULT;
+}
+
static int init_tdx_module(void)
{
+ struct tdmr_info **tdmr_array;
+ int tdmr_num;
int ret;
/* TDX module global initialization */
@@ -609,11 +631,36 @@ static int init_tdx_module(void)
if (ret)
goto out;
+ /*
+ * Prepare enough space to hold pointers of TDMRs (TDMR_INFO).
+ * TDX requires TDMR_INFO being 512 aligned. Each TDMR is
+ * allocated individually within construct_tdmrs() to meet
+ * this requirement.
+ */
+ tdmr_array = kcalloc(tdx_sysinfo.max_tdmrs, sizeof(struct tdmr_info *),
+ GFP_KERNEL);
+ if (!tdmr_array) {
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ /* Construct TDMRs to build TDX memory */
+ ret = construct_tdmrs(tdmr_array, &tdmr_num);
+ if (ret)
+ goto out_free_tdmrs;
+
/*
* Return -EFAULT until all steps of TDX module
* initialization are done.
*/
ret = -EFAULT;
+out_free_tdmrs:
+ /*
+ * TDMRs are only used during initializing TDX module. Always
+ * free them no matter the initialization was successful or not.
+ */
+ free_tdmrs(tdmr_array, tdmr_num);
+ kfree(tdmr_array);
out:
return ret;
}
diff --git a/arch/x86/virt/vmx/tdx.h b/arch/x86/virt/vmx/tdx.h
index 2f21c45df6ac..05bf9fe6bd00 100644
--- a/arch/x86/virt/vmx/tdx.h
+++ b/arch/x86/virt/vmx/tdx.h
@@ -89,6 +89,29 @@ struct tdsysinfo_struct {
};
} __packed __aligned(TDSYSINFO_STRUCT_ALIGNMENT);
+struct tdmr_reserved_area {
+ u64 offset;
+ u64 size;
+} __packed;
+
+#define TDMR_INFO_ALIGNMENT 512
+
+struct tdmr_info {
+ u64 base;
+ u64 size;
+ u64 pamt_1g_base;
+ u64 pamt_1g_size;
+ u64 pamt_2m_base;
+ u64 pamt_2m_size;
+ u64 pamt_4k_base;
+ u64 pamt_4k_size;
+ /*
+ * Actual number of reserved areas depends on
+ * 'struct tdsysinfo_struct'::max_reserved_per_tdmr.
+ */
+ struct tdmr_reserved_area reserved_areas[0];
+} __packed __aligned(TDMR_INFO_ALIGNMENT);
+
/*
* P-SEAMLDR SEAMCALL leaf function
*/
--
2.35.1
The P-SEAMLDR (persistent SEAM loader) is the first software module that
runs in SEAM VMX root, responsible for loading and updating the TDX
module. Both the P-SEAMLDR and the TDX module are expected to be loaded
before the host kernel boots.
There is no CPUID or MSR to detect whether the P-SEAMLDR or the TDX
module has been loaded. SEAMCALL instruction fails with VMfailInvalid
when the target SEAM software module is not loaded, so SEAMCALL can be
used to detect whether the P-SEAMLDR and the TDX module are loaded.
Detect the P-SEAMLDR and the TDX module by calling SEAMLDR.INFO SEAMCALL
to get the P-SEAMLDR information. If the SEAMCALL succeeds, the
P-SEAMLDR information further tells whether the TDX module is loaded or
not.
Also add a wrapper of __seamcall() to make SEAMCALLs to the P-SEAMLDR
and the TDX module with additional defensive checks on SEAMRR and
CR4.VMXE, since both detecting and initializing the TDX module require
the caller of TDX to handle VMXON.
Signed-off-by: Kai Huang <[email protected]>
---
arch/x86/virt/vmx/tdx.c | 175 +++++++++++++++++++++++++++++++++++++++-
arch/x86/virt/vmx/tdx.h | 31 +++++++
2 files changed, 205 insertions(+), 1 deletion(-)
diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c
index a3517e221578..b04f792f1e65 100644
--- a/arch/x86/virt/vmx/tdx.c
+++ b/arch/x86/virt/vmx/tdx.c
@@ -15,7 +15,9 @@
#include <asm/msr.h>
#include <asm/cpufeature.h>
#include <asm/cpufeatures.h>
+#include <asm/virtext.h>
#include <asm/tdx.h>
+#include "tdx.h"
/* Support Intel Secure Arbitration Mode Range Registers (SEAMRR) */
#define MTRR_CAP_SEAMRR BIT(15)
@@ -74,6 +76,8 @@ static enum tdx_module_status_t tdx_module_status;
/* Prevent concurrent attempts on TDX detection and initialization */
static DEFINE_MUTEX(tdx_module_lock);
+static struct p_seamldr_info p_seamldr_info;
+
static bool __seamrr_enabled(void)
{
return (seamrr_mask & SEAMRR_ENABLED_BITS) == SEAMRR_ENABLED_BITS;
@@ -229,6 +233,160 @@ static bool tdx_keyid_sufficient(void)
return tdx_keyid_num >= 2;
}
+/*
+ * All error codes of both the P-SEAMLDR and the TDX module SEAMCALLs
+ * have bit 63 set if SEAMCALL fails.
+ */
+#define SEAMCALL_LEAF_ERROR(_ret) ((_ret) & BIT_ULL(63))
+
+/**
+ * seamcall - make SEAMCALL to the P-SEAMLDR or the TDX module with
+ * additional check on SEAMRR and CR4.VMXE
+ *
+ * @fn: SEAMCALL leaf number.
+ * @rcx: Input operand RCX.
+ * @rdx: Input operand RDX.
+ * @r8: Input operand R8.
+ * @r9: Input operand R9.
+ * @seamcall_ret: SEAMCALL completion status (can be NULL).
+ * @out: Additional output operands (can be NULL).
+ *
+ * Wrapper of __seamcall() to make SEAMCALL to the P-SEAMLDR or the TDX
+ * module with additional defensive checks on SEAMRR and CR4.VMXE. The
+ * caller must ensure SEAMRR is enabled and the CPU is already in VMX
+ * operation before calling this function.
+ *
+ * Unlike __seamcall(), it returns kernel error code instead of SEAMCALL
+ * completion status, which is returned via @seamcall_ret if desired.
+ *
+ * Return:
+ *
+ * * -ENODEV: SEAMCALL failed with VMfailInvalid, or SEAMRR is not enabled.
+ * * -EPERM: CR4.VMXE is not enabled
+ * * -EFAULT: SEAMCALL failed
+ * * 0: SEAMCALL succeeded
+ */
+static int seamcall(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9,
+ u64 *seamcall_ret, struct tdx_module_output *out)
+{
+ u64 ret;
+
+ if (WARN_ON_ONCE(!seamrr_enabled()))
+ return -ENODEV;
+
+ /*
+ * SEAMCALL instruction requires CPU being already in VMX
+ * operation (VMXON has been done), otherwise it causes #UD.
+ * Sanity check whether CR4.VMXE has been enabled.
+ *
+ * Note VMX being enabled in CR4 doesn't mean CPU is already
+ * in VMX operation, but unfortunately there's no way to do
+ * such check. However in practice enabling CR4.VMXE and
+ * doing VMXON are done together (for now) so in practice it
+ * checks whether VMXON has been done.
+ *
+ * Preemption is disabled during the CR4.VMXE check and the
+ * actual SEAMCALL so VMX doesn't get disabled by other threads
+ * due to scheduling.
+ */
+ preempt_disable();
+ if (WARN_ON_ONCE(!cpu_vmx_enabled())) {
+ preempt_enable_no_resched();
+ return -EPERM;
+ }
+
+ ret = __seamcall(fn, rcx, rdx, r8, r9, out);
+
+ preempt_enable_no_resched();
+
+ /*
+ * Convert SEAMCALL error code to kernel error code:
+ * - -ENODEV: VMfailInvalid
+ * - -EFAULT: SEAMCALL failed
+ * - 0: SEAMCALL was successful
+ */
+ if (ret == TDX_SEAMCALL_VMFAILINVALID)
+ return -ENODEV;
+
+ /* Save the completion status if caller wants to use it */
+ if (seamcall_ret)
+ *seamcall_ret = ret;
+
+ /*
+ * TDX module SEAMCALLs may also return non-zero completion
+ * status codes but w/o bit 63 set. Those codes are treated
+ * as additional information/warning while the SEAMCALL is
+ * treated as completed successfully. Return 0 in this case.
+ * Caller can use @seamcall_ret to get the additional code
+ * when it is desired.
+ */
+ if (SEAMCALL_LEAF_ERROR(ret)) {
+ pr_err("SEAMCALL leaf %llu failed: 0x%llx\n", fn, ret);
+ return -EFAULT;
+ }
+
+ return 0;
+}
+
+static inline bool p_seamldr_ready(void)
+{
+ return !!p_seamldr_info.p_seamldr_ready;
+}
+
+static inline bool tdx_module_ready(void)
+{
+ /*
+ * SEAMLDR_INFO.SEAM_READY indicates whether TDX module
+ * is (loaded and) ready for SEAMCALL.
+ */
+ return p_seamldr_ready() && !!p_seamldr_info.seam_ready;
+}
+
+/*
+ * Detect whether the P-SEAMLDR has been loaded by calling SEAMLDR.INFO
+ * SEAMCALL to get the P-SEAMLDR information, which further tells whether
+ * the TDX module has been loaded and is ready for SEAMCALL. The caller
+ * must only call this function when the CPU is already in VMX operation.
+ */
+static int detect_p_seamldr(void)
+{
+ int ret;
+
+ /*
+ * SEAMCALL fails with VMfailInvalid when SEAM software is not
+ * loaded, in which case seamcall() returns -ENODEV. Use this
+ * to detect the P-SEAMLDR.
+ *
+ * Note the P-SEAMLDR SEAMCALL also fails with VMfailInvalid when
+ * the P-SEAMLDR is already busy with another SEAMCALL. But this
+ * won't happen here as this function is only called once.
+ */
+ ret = seamcall(P_SEAMCALL_SEAMLDR_INFO, __pa(&p_seamldr_info),
+ 0, 0, 0, NULL, NULL);
+ if (ret) {
+ if (ret == -ENODEV)
+ pr_info("P-SEAMLDR is not loaded.\n");
+ else
+ pr_info("Failed to detect P-SEAMLDR.\n");
+
+ return ret;
+ }
+
+ /*
+ * If SEAMLDR.INFO was successful, it must be ready for SEAMCALL.
+ * Otherwise it's either kernel or firmware bug.
+ */
+ if (WARN_ON_ONCE(!p_seamldr_ready()))
+ return -ENODEV;
+
+ pr_info("P-SEAMLDR: version 0x%x, vendor_id: 0x%x, build_date: %u, build_num %u, major %u, minor %u\n",
+ p_seamldr_info.version, p_seamldr_info.vendor_id,
+ p_seamldr_info.build_date, p_seamldr_info.build_num,
+ p_seamldr_info.major, p_seamldr_info.minor);
+
+ return 0;
+}
+
static int __tdx_detect(void)
{
/* The TDX module is not loaded if SEAMRR is disabled */
@@ -247,7 +405,22 @@ static int __tdx_detect(void)
goto no_tdx_module;
}
- /* Return -ENODEV until the TDX module is detected */
+ /*
+ * For simplicity any error during detect_p_seamldr() marks
+ * TDX module as not loaded.
+ */
+ if (detect_p_seamldr())
+ goto no_tdx_module;
+
+ if (!tdx_module_ready()) {
+ pr_info("TDX module is not loaded.\n");
+ goto no_tdx_module;
+ }
+
+ pr_info("TDX module detected.\n");
+ tdx_module_status = TDX_MODULE_LOADED;
+ return 0;
+
no_tdx_module:
tdx_module_status = TDX_MODULE_NONE;
return -ENODEV;
diff --git a/arch/x86/virt/vmx/tdx.h b/arch/x86/virt/vmx/tdx.h
index 9d5b6f554c20..6990c93198b3 100644
--- a/arch/x86/virt/vmx/tdx.h
+++ b/arch/x86/virt/vmx/tdx.h
@@ -3,6 +3,37 @@
#define _X86_VIRT_TDX_H
#include <linux/types.h>
+#include <linux/compiler.h>
+
+/*
+ * TDX architectural data structures
+ */
+
+#define P_SEAMLDR_INFO_ALIGNMENT 256
+
+struct p_seamldr_info {
+ u32 version;
+ u32 attributes;
+ u32 vendor_id;
+ u32 build_date;
+ u16 build_num;
+ u16 minor;
+ u16 major;
+ u8 reserved0[2];
+ u32 acm_x2apicid;
+ u8 reserved1[4];
+ u8 seaminfo[128];
+ u8 seam_ready;
+ u8 seam_debug;
+ u8 p_seamldr_ready;
+ u8 reserved2[88];
+} __packed __aligned(P_SEAMLDR_INFO_ALIGNMENT);
+
+/*
+ * P-SEAMLDR SEAMCALL leaf function
+ */
+#define P_SEAMLDR_SEAMCALL_BASE BIT_ULL(63)
+#define P_SEAMCALL_SEAMLDR_INFO (P_SEAMLDR_SEAMCALL_BASE | 0x0)
struct tdx_module_output;
u64 __seamcall(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9,
--
2.35.1
Logical-cpu scope initialization requires calling TDH.SYS.LP.INIT on all
BIOS-enabled cpus, otherwise the TDH.SYS.CONFIG SEAMCALL will fail.
TDH.SYS.LP.INIT can be called concurrently on all cpus.
Following global initialization, do the logical-cpu scope initialization
by calling TDH.SYS.LP.INIT on all online cpus. Whether all BIOS-enabled
cpus are online is not checked here for simplicity. The user of TDX
should guarantee all BIOS-enabled cpus are online.
Signed-off-by: Kai Huang <[email protected]>
---
arch/x86/virt/vmx/tdx.c | 12 ++++++++++++
arch/x86/virt/vmx/tdx.h | 1 +
2 files changed, 13 insertions(+)
diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c
index 45e7404b5d81..4b0c285d844b 100644
--- a/arch/x86/virt/vmx/tdx.c
+++ b/arch/x86/virt/vmx/tdx.c
@@ -461,6 +461,13 @@ static int __tdx_detect(void)
return -ENODEV;
}
+static int tdx_module_init_cpus(void)
+{
+ struct seamcall_ctx sc = { .fn = TDH_SYS_LP_INIT };
+
+ return seamcall_on_each_cpu(&sc);
+}
+
static int init_tdx_module(void)
{
int ret;
@@ -470,6 +477,11 @@ static int init_tdx_module(void)
if (ret)
goto out;
+ /* Logical-cpu scope initialization */
+ ret = tdx_module_init_cpus();
+ if (ret)
+ goto out;
+
/*
* Return -EFAULT until all steps of TDX module
* initialization are done.
diff --git a/arch/x86/virt/vmx/tdx.h b/arch/x86/virt/vmx/tdx.h
index f0983b1936d8..b8cfdd6e12f3 100644
--- a/arch/x86/virt/vmx/tdx.h
+++ b/arch/x86/virt/vmx/tdx.h
@@ -39,6 +39,7 @@ struct p_seamldr_info {
* TDX module SEAMCALL leaf functions
*/
#define TDH_SYS_INIT 33
+#define TDH_SYS_LP_INIT 35
#define TDH_SYS_LP_SHUTDOWN 44
struct tdx_module_output;
--
2.35.1
As the last step of constructing TDMRs, create reserved area information
for the memory region holes in each TDMR. If any PAMT (or part of it)
resides within a particular TDMR, also mark it as reserved.
All reserved areas in each TDMR must be in ascending address order, as
required by the TDX architecture.
Signed-off-by: Kai Huang <[email protected]>
---
arch/x86/virt/vmx/tdx.c | 148 +++++++++++++++++++++++++++++++++++++++-
1 file changed, 146 insertions(+), 2 deletions(-)
diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c
index c58c99b94c72..f6345af29414 100644
--- a/arch/x86/virt/vmx/tdx.c
+++ b/arch/x86/virt/vmx/tdx.c
@@ -15,6 +15,7 @@
#include <linux/atomic.h>
#include <linux/slab.h>
#include <linux/math.h>
+#include <linux/sort.h>
#include <asm/msr-index.h>
#include <asm/msr.h>
#include <asm/cpufeature.h>
@@ -1108,6 +1109,145 @@ static int tdmrs_setup_pamt_all(struct tdmr_info **tdmr_array, int tdmr_num)
return -ENOMEM;
}
+static int tdmr_add_rsvd_area(struct tdmr_info *tdmr, int *p_idx,
+ u64 addr, u64 size)
+{
+ struct tdmr_reserved_area *rsvd_areas = tdmr->reserved_areas;
+ int idx = *p_idx;
+
+ /* Reserved area must be 4K aligned in offset and size */
+ if (WARN_ON(addr & ~PAGE_MASK || size & ~PAGE_MASK))
+ return -EINVAL;
+
+ /* Cannot exceed maximum reserved areas supported by TDX */
+ if (idx >= tdx_sysinfo.max_reserved_per_tdmr)
+ return -E2BIG;
+
+ rsvd_areas[idx].offset = addr - tdmr->base;
+ rsvd_areas[idx].size = size;
+
+ *p_idx = idx + 1;
+
+ return 0;
+}
+
+/* Compare function called by sort() for TDMR reserved areas */
+static int rsvd_area_cmp_func(const void *a, const void *b)
+{
+ struct tdmr_reserved_area *r1 = (struct tdmr_reserved_area *)a;
+ struct tdmr_reserved_area *r2 = (struct tdmr_reserved_area *)b;
+
+ if (r1->offset + r1->size <= r2->offset)
+ return -1;
+ if (r1->offset >= r2->offset + r2->size)
+ return 1;
+
+ /* Reserved areas cannot overlap. Caller should guarantee. */
+ WARN_ON(1);
+ return -1;
+}
+
+/* Set up reserved areas for a TDMR, including memory holes and PAMTs */
+static int tdmr_setup_rsvd_areas(struct tdmr_info *tdmr,
+ struct tdmr_info **tdmr_array,
+ int tdmr_num)
+{
+ u64 start, end, prev_end;
+ int rsvd_idx, i, ret = 0;
+
+ /* Mark holes between e820 RAM entries as reserved */
+ rsvd_idx = 0;
+ prev_end = TDMR_START(tdmr);
+ e820_for_each_mem(i, start, end) {
+ /* Break if this entry is after the TDMR */
+ if (start >= TDMR_END(tdmr))
+ break;
+
+ /* Exclude entries before this TDMR */
+ if (end < TDMR_START(tdmr))
+ continue;
+
+ /*
+ * Skip if no hole exists before this entry. "<=" is
+ * used because one e820 entry might span two TDMRs.
+ * In that case the start address of this entry is
+ * smaller than the start address of the second TDMR.
+ */
+ if (start <= prev_end) {
+ prev_end = end;
+ continue;
+ }
+
+ /* Add the hole before this e820 entry */
+ ret = tdmr_add_rsvd_area(tdmr, &rsvd_idx, prev_end,
+ start - prev_end);
+ if (ret)
+ return ret;
+
+ prev_end = end;
+ }
+
+ /* Add the hole after the last RAM entry if it exists. */
+ if (prev_end < TDMR_END(tdmr)) {
+ ret = tdmr_add_rsvd_area(tdmr, &rsvd_idx, prev_end,
+ TDMR_END(tdmr) - prev_end);
+ if (ret)
+ return ret;
+ }
+
+ /*
+ * Walk over all TDMRs to find out whether any PAMT falls into
+ * the given TDMR. If yes, mark it as reserved too.
+ */
+ for (i = 0; i < tdmr_num; i++) {
+ struct tdmr_info *tmp = tdmr_array[i];
+ u64 pamt_start, pamt_end;
+
+ pamt_start = tmp->pamt_4k_base;
+ pamt_end = pamt_start + tmp->pamt_4k_size +
+ tmp->pamt_2m_size + tmp->pamt_1g_size;
+
+ /* Skip PAMTs outside of the given TDMR */
+ if ((pamt_end <= TDMR_START(tdmr)) ||
+ (pamt_start >= TDMR_END(tdmr)))
+ continue;
+
+ /* Only mark the part within the TDMR as reserved */
+ if (pamt_start < TDMR_START(tdmr))
+ pamt_start = TDMR_START(tdmr);
+ if (pamt_end > TDMR_END(tdmr))
+ pamt_end = TDMR_END(tdmr);
+
+ ret = tdmr_add_rsvd_area(tdmr, &rsvd_idx, pamt_start,
+ pamt_end - pamt_start);
+ if (ret)
+ return ret;
+ }
+
+ /* TDX requires reserved areas listed in address ascending order */
+ sort(tdmr->reserved_areas, rsvd_idx, sizeof(struct tdmr_reserved_area),
+ rsvd_area_cmp_func, NULL);
+
+ return 0;
+}
+
+static int tdmrs_setup_rsvd_areas_all(struct tdmr_info **tdmr_array,
+ int tdmr_num)
+{
+ int i;
+
+ for (i = 0; i < tdmr_num; i++) {
+ int ret;
+
+ ret = tdmr_setup_rsvd_areas(tdmr_array[i], tdmr_array,
+ tdmr_num);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
static int construct_tdmrs(struct tdmr_info **tdmr_array, int *tdmr_num)
{
int ret;
@@ -1124,8 +1264,12 @@ static int construct_tdmrs(struct tdmr_info **tdmr_array, int *tdmr_num)
if (ret)
goto err_free_tdmrs;
- /* Return -EFAULT until constructing TDMRs is done */
- ret = -EFAULT;
+ ret = tdmrs_setup_rsvd_areas_all(tdmr_array, *tdmr_num);
+ if (ret)
+ goto err_free_pamts;
+
+ return 0;
+err_free_pamts:
tdmrs_free_pamt_all(tdmr_array, *tdmr_num);
err_free_tdmrs:
free_tdmrs(tdmr_array, *tdmr_num);
--
2.35.1
Enabling TDX consumes additional memory (used by TDX as metadata) and
additional initialization time. Introduce a kernel command line
parameter to allow opting in to TDX host kernel support only when the
user truly wants to use TDX.
Signed-off-by: Kai Huang <[email protected]>
---
Documentation/admin-guide/kernel-parameters.txt | 6 ++++++
arch/x86/virt/vmx/tdx.c | 14 ++++++++++++++
2 files changed, 20 insertions(+)
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index f5a27f067db9..9f85cafd0c2d 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -5707,6 +5707,12 @@
tdfx= [HW,DRM]
+ tdx_host= [X86-64, TDX]
+ Format: {on|off}
+ on: Enable TDX host kernel support
+ off: Disable TDX host kernel support
+ Default is off.
+
test_suspend= [SUSPEND][,N]
Specify "mem" (for Suspend-to-RAM) or "standby" (for
standby suspend) or "freeze" (for suspend type freeze)
diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c
index d9ad8dc7111e..2022f9c019b8 100644
--- a/arch/x86/virt/vmx/tdx.c
+++ b/arch/x86/virt/vmx/tdx.c
@@ -116,6 +116,16 @@ static struct tdsysinfo_struct tdx_sysinfo;
/* TDX global KeyID to protect TDX metadata */
static u32 tdx_global_keyid;
+static bool enable_tdx_host;
+
+static int __init tdx_host_setup(char *s)
+{
+ if (!strcmp(s, "on"))
+ enable_tdx_host = true;
+ return 1;
+}
+__setup("tdx_host=", tdx_host_setup);
+
static bool __seamrr_enabled(void)
{
return (seamrr_mask & SEAMRR_ENABLED_BITS) == SEAMRR_ENABLED_BITS;
@@ -501,6 +511,10 @@ static int detect_p_seamldr(void)
static int __tdx_detect(void)
{
+ /* Disabled by kernel command line */
+ if (!enable_tdx_host)
+ goto no_tdx_module;
+
/* The TDX module is not loaded if SEAMRR is disabled */
if (!seamrr_enabled()) {
pr_info("SEAMRR not enabled.\n");
--
2.35.1
If TDX is ever enabled and/or used to run any TD guests, the cachelines
of TDX private memory used by the TDX module, including PAMTs, need to
be flushed before transitioning to the new kernel; otherwise they may
silently corrupt the new kernel.
The TDX module can only be initialized once during its lifetime. TDX
does not provide an interface to reset the TDX module to an
uninitialized state so that it can be initialized again. If the old
kernel has enabled TDX, the new
kernel won't be able to use TDX again. Therefore, ideally the old
kernel should shut down the TDX module if it is ever initialized so that
no SEAMCALLs can be made to it again.
However, shutting down the TDX module requires a SEAMCALL, which in turn
requires the CPU to be in VMX operation (VMXON has been done).
Currently, only KVM enters/leaves VMX operation, so there's no guarantee
that all cpus are in VMX operation during kexec(). Therefore, this
implementation doesn't shut down the TDX module; it only flushes the
cache and leaves the TDX module open.
And it's fine to leave the module open. If the new kernel wants to use
TDX, it needs to go through the initialization process, and it will fail
at the first SEAMCALL because the TDX module is not in the uninitialized
state. If the new kernel doesn't want to use TDX, then the TDX module
won't run at all.
Following the implementation of SME support, use wbinvd() to flush the
cache in stop_this_cpu(). Introduce a new function platform_has_tdx()
to check whether the platform is TDX-capable, and do wbinvd() only when
it returns true. platform_has_tdx() returns true when SEAMRR is enabled
and there are enough TDX private KeyIDs to run at least one TD guest
(both of which are detected at boot time). TDX is enabled on demand at
runtime, and its initialization has a state machine protected by a
mutex to prevent multiple callers from initializing TDX in parallel.
Getting the TDX module state requires holding the mutex, but
stop_this_cpu() runs in interrupt context, so just check whether the
platform supports TDX and flush the cache.
Signed-off-by: Kai Huang <[email protected]>
---
arch/x86/include/asm/tdx.h | 2 ++
arch/x86/kernel/process.c | 15 ++++++++++++++-
arch/x86/virt/vmx/tdx.c | 14 ++++++++++++++
3 files changed, 30 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index b526d41c4bbf..24f2b7e8b280 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -85,10 +85,12 @@ static inline long tdx_kvm_hypercall(unsigned int nr, unsigned long p1,
void tdx_detect_cpu(struct cpuinfo_x86 *c);
int tdx_detect(void);
int tdx_init(void);
+bool platform_has_tdx(void);
#else
static inline void tdx_detect_cpu(struct cpuinfo_x86 *c) { }
static inline int tdx_detect(void) { return -ENODEV; }
static inline int tdx_init(void) { return -ENODEV; }
+static inline bool platform_has_tdx(void) { return false; }
#endif /* CONFIG_INTEL_TDX_HOST */
#endif /* !__ASSEMBLY__ */
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 71aa12082370..bf3d1c9cb00c 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -766,8 +766,21 @@ void stop_this_cpu(void *dummy)
* without the encryption bit, they don't race each other when flushed
* and potentially end up with the wrong entry being committed to
* memory.
+ *
+ * In case of kexec, similar to SME, if TDX is ever enabled, the
+ * cachelines of TDX private memory (including PAMTs) used by TDX
+ * module need to be flushed before transiting to the new kernel,
+ * otherwise they may silently corrupt the new kernel.
+ *
+ * Note TDX is enabled on demand at runtime, and enabling TDX has a
+ * state machine protected with a mutex to prevent concurrent calls
+ * from multiple callers. Holding the mutex is required to get the
+ * TDX enabling status, but this function runs in interrupt context.
+ * So to make it simple, always flush cache when platform supports
+ * TDX (detected at boot time), regardless whether TDX is truly
+ * enabled by kernel.
*/
- if (boot_cpu_has(X86_FEATURE_SME))
+ if (boot_cpu_has(X86_FEATURE_SME) || platform_has_tdx())
native_wbinvd();
for (;;) {
/*
diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c
index f2b9c98191ed..d9ad8dc7111e 100644
--- a/arch/x86/virt/vmx/tdx.c
+++ b/arch/x86/virt/vmx/tdx.c
@@ -1681,3 +1681,17 @@ int tdx_init(void)
return ret;
}
EXPORT_SYMBOL_GPL(tdx_init);
+
+/**
+ * platform_has_tdx - Whether platform supports TDX
+ *
+ * Check whether platform supports TDX (i.e. TDX is enabled in BIOS),
+ * regardless whether TDX is truly enabled by kernel.
+ *
+ * Return true if SEAMRR is enabled, and there are sufficient TDX private
+ * KeyIDs to run TD guests.
+ */
+bool platform_has_tdx(void)
+{
+ return seamrr_enabled() && tdx_keyid_sufficient();
+}
--
2.35.1
Before the TDX module can use the global KeyID to access TDX metadata,
the key of the global KeyID must be configured on all physical packages
via TDH.SYS.KEY.CONFIG. This SEAMCALL cannot run concurrently on
different cpus since it exclusively acquires the TDX module.
Implement a helper to run a SEAMCALL on one (any) cpu for each physical
package in a serialized way, and use it to run TDH.SYS.KEY.CONFIG on
all packages.
The TDX module uses the global KeyID to initialize its metadata (PAMTs).
Before the TDX module can do that, all cachelines of the PAMTs must be
flushed; otherwise they may later silently corrupt the PAMTs
initialized by the TDX module.
Use wbinvd to flush the cache, as the PAMTs can be potentially large
(~1/256th of system RAM).
Flush the cache before configuring the global KeyID on all packages, as
suggested by the TDX specification. In practice, the current generation
of TDX doesn't use the global KeyID in TDH.SYS.KEY.CONFIG, so flushing
the cache could also be done after the global KeyID has been configured
on all packages. But a future generation of TDX may change this
behaviour, so just follow the specification's suggestion and flush the
cache before configuring the global KeyID on all packages.
Signed-off-by: Kai Huang <[email protected]>
---
arch/x86/virt/vmx/tdx.c | 94 ++++++++++++++++++++++++++++++++++++++++-
arch/x86/virt/vmx/tdx.h | 1 +
2 files changed, 94 insertions(+), 1 deletion(-)
diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c
index e03dc3e420db..39b1b7d0417d 100644
--- a/arch/x86/virt/vmx/tdx.c
+++ b/arch/x86/virt/vmx/tdx.c
@@ -23,6 +23,7 @@
#include <asm/virtext.h>
#include <asm/e820/api.h>
#include <asm/pgtable.h>
+#include <asm/smp.h>
#include <asm/tdx.h>
#include "tdx.h"
@@ -398,6 +399,47 @@ static int seamcall_on_each_cpu(struct seamcall_ctx *sc)
return atomic_read(&sc->err);
}
+/*
+ * Call the SEAMCALL on one (any) cpu for each physical package in
+ * serialized way. Note for serialized calls 'seamcall_ctx::err'
+ * doesn't have to be atomic, but for simplicity just reuse it
+ * instead of adding a new one.
+ *
+ * Return -ENXIO if IPI SEAMCALL wasn't run on any cpu, or -EFAULT
+ * when SEAMCALL fails, or -EPERM when the cpu where SEAMCALL runs
+ * on is not in VMX operation. In case of -EFAULT, the error code
+ * of SEAMCALL is in 'struct seamcall_ctx::seamcall_ret'.
+ */
+static int seamcall_on_each_package_serialized(struct seamcall_ctx *sc)
+{
+ cpumask_var_t packages;
+ int cpu, ret;
+
+ if (!zalloc_cpumask_var(&packages, GFP_KERNEL))
+ return -ENOMEM;
+
+ for_each_online_cpu(cpu) {
+ if (cpumask_test_and_set_cpu(topology_physical_package_id(cpu),
+ packages))
+ continue;
+
+ ret = smp_call_function_single(cpu, seamcall_smp_call_function,
+ sc, true);
+ if (ret)
+ return ret;
+
+ /*
+ * Doesn't have to use atomic_read(), but it doesn't
+ * hurt either.
+ */
+ ret = atomic_read(&sc->err);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
static inline bool p_seamldr_ready(void)
{
return !!p_seamldr_info.p_seamldr_ready;
@@ -1316,6 +1358,18 @@ static int config_tdx_module(struct tdmr_info **tdmr_array, int tdmr_num,
return ret;
}
+static int config_global_keyid(u64 global_keyid)
+{
+ struct seamcall_ctx sc = { .fn = TDH_SYS_KEY_CONFIG };
+
+ /*
+ * TDH.SYS.KEY.CONFIG may fail with entropy error (which is
+ * a recoverable error). Assume this is exceedingly rare and
+ * just return error if encountered instead of retrying.
+ */
+ return seamcall_on_each_package_serialized(&sc);
+}
+
static int init_tdx_module(void)
{
struct tdmr_info **tdmr_array;
@@ -1366,6 +1420,37 @@ static int init_tdx_module(void)
if (ret)
goto out_free_pamts;
+ /*
+ * The same physical address associated with different KeyIDs
+ * has separate cachelines. Before using the new KeyID to access
+ * some memory, the cachelines associated with the old KeyID must
+ * be flushed, otherwise they may later silently corrupt the data
+ * written with the new KeyID. After cachelines associated with
+ * the old KeyID are flushed, CPU speculative fetch using the old
+ * KeyID is OK since the prefetched cachelines won't be consumed
+ * by CPU core.
+ *
+ * TDX module initializes PAMTs using the global KeyID to crypto
+ * protect them from malicious host. Before that, the PAMTs are
+ * used by kernel (with KeyID 0) and the cachelines associated
+ * with the PAMTs must be flushed. Given PAMTs are potentially
+ * large (~1/256th of system RAM), just use WBINVD on all cpus to
+ * flush the cache.
+ *
+ * In practice, the current generation of TDX doesn't use the
+ * global KeyID in TDH.SYS.KEY.CONFIG. Therefore in practice,
+ * the cachelines can be flushed after configuring the global
+ * KeyID on all pkgs is done. But the future generation of TDX
+ * may change this, so just follow the suggestion of TDX spec to
+ * flush cache before TDH.SYS.KEY.CONFIG.
+ */
+ wbinvd_on_all_cpus();
+
+ /* Config the key of global KeyID on all packages */
+ ret = config_global_keyid(tdx_global_keyid);
+ if (ret)
+ goto out_free_pamts;
+
/*
* Return -EFAULT until all steps of TDX module
* initialization are done.
@@ -1376,8 +1461,15 @@ static int init_tdx_module(void)
* Free PAMTs allocated in construct_tdmrs() when TDX module
* initialization fails.
*/
- if (ret)
+ if (ret) {
+ /*
+ * Part of PAMTs may already have been initialized by
+ * TDX module. Flush cache before returning them back
+ * to kernel.
+ */
+ wbinvd_on_all_cpus();
tdmrs_free_pamt_all(tdmr_array, tdmr_num);
+ }
out_free_tdmrs:
/*
* TDMRs are only used during initializing TDX module. Always
diff --git a/arch/x86/virt/vmx/tdx.h b/arch/x86/virt/vmx/tdx.h
index d8e2800397af..bba8cabea4bb 100644
--- a/arch/x86/virt/vmx/tdx.h
+++ b/arch/x86/virt/vmx/tdx.h
@@ -122,6 +122,7 @@ struct tdmr_info {
/*
* TDX module SEAMCALL leaf functions
*/
+#define TDH_SYS_KEY_CONFIG 31
#define TDH_SYS_INFO 32
#define TDH_SYS_INIT 33
#define TDH_SYS_LP_INIT 35
--
2.35.1
Secure Arbitration Mode (SEAM) is an extension of VMX architecture. It
defines a new VMX root operation (SEAM VMX root) and a new VMX non-root
operation (SEAM VMX non-root) which isolate from legacy VMX root and VMX
non-root mode.
A CPU-attested software module (called the 'TDX module') runs in SEAM
VMX root to manage the crypto protected VMs running in SEAM VMX non-root.
SEAM VMX root is also used to host another CPU-attested software module
(called the 'P-SEAMLDR') to load and update the TDX module.
Host kernel transits to either the P-SEAMLDR or the TDX module via the
new SEAMCALL instruction. SEAMCALLs are host-side interface functions
defined by the P-SEAMLDR and the TDX module around the new SEAMCALL
instruction. They are similar to a hypercall, except they are made by
host kernel to the SEAM software.
SEAMCALLs use an ABI different from the x86-64 system-v ABI. Instead,
they share the same ABI with the TDCALL. %rax is used to carry both the
SEAMCALL leaf function number (input) and the completion status code
(output). Additional GPRs (%rcx, %rdx, %r8->%r11) may be further used
as both input and output operands in individual leaf SEAMCALLs.
Implement a C function __seamcall() to do SEAMCALL using the assembly
macro used by __tdx_module_call() (the implementation of TDCALL). The
only exception not covered here is TDENTER leaf function which takes
all GPRs and XMM0-XMM15 as both input and output. The caller of TDENTER
should implement its own logic to call TDENTER directly instead of using
this function.
SEAMCALL instruction is essentially a VMExit from VMX root to SEAM VMX
root, and it can fail with VMfailInvalid, for instance, when the SEAM
software module is not loaded. The C function __seamcall() returns
TDX_SEAMCALL_VMFAILINVALID, which doesn't conflict with any actual error
code of SEAMCALLs, to uniquely represent this case.
Signed-off-by: Kai Huang <[email protected]>
---
arch/x86/virt/vmx/Makefile | 2 +-
arch/x86/virt/vmx/seamcall.S | 52 ++++++++++++++++++++++++++++++++++++
arch/x86/virt/vmx/tdx.h | 11 ++++++++
3 files changed, 64 insertions(+), 1 deletion(-)
create mode 100644 arch/x86/virt/vmx/seamcall.S
create mode 100644 arch/x86/virt/vmx/tdx.h
diff --git a/arch/x86/virt/vmx/Makefile b/arch/x86/virt/vmx/Makefile
index 1bd688684716..fd577619620e 100644
--- a/arch/x86/virt/vmx/Makefile
+++ b/arch/x86/virt/vmx/Makefile
@@ -1,2 +1,2 @@
# SPDX-License-Identifier: GPL-2.0-only
-obj-$(CONFIG_INTEL_TDX_HOST) += tdx.o
+obj-$(CONFIG_INTEL_TDX_HOST) += tdx.o seamcall.o
diff --git a/arch/x86/virt/vmx/seamcall.S b/arch/x86/virt/vmx/seamcall.S
new file mode 100644
index 000000000000..f31a717c00e0
--- /dev/null
+++ b/arch/x86/virt/vmx/seamcall.S
@@ -0,0 +1,52 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#include <linux/linkage.h>
+#include <asm/frame.h>
+
+#include "../tdxcall.S"
+
+/*
+ * __seamcall() - Host-side interface functions to SEAM software module
+ * (the P-SEAMLDR or the TDX module)
+ *
+ * Transform function call register arguments into the SEAMCALL register
+ * ABI. Return TDX_SEAMCALL_VMFAILINVALID, or the completion status of
+ * the SEAMCALL. Additional output operands are saved in @out (if it is
+ * provided by caller).
+ *
+ *-------------------------------------------------------------------------
+ * SEAMCALL ABI:
+ *-------------------------------------------------------------------------
+ * Input Registers:
+ *
+ * RAX - SEAMCALL Leaf number.
+ * RCX,RDX,R8-R9 - SEAMCALL Leaf specific input registers.
+ *
+ * Output Registers:
+ *
+ * RAX - SEAMCALL completion status code.
+ * RCX,RDX,R8-R11 - SEAMCALL Leaf specific output registers.
+ *
+ *-------------------------------------------------------------------------
+ *
+ * __seamcall() function ABI:
+ *
+ * @fn (RDI) - SEAMCALL Leaf number, moved to RAX
+ * @rcx (RSI) - Input parameter 1, moved to RCX
+ * @rdx (RDX) - Input parameter 2, moved to RDX
+ * @r8 (RCX) - Input parameter 3, moved to R8
+ * @r9 (R8) - Input parameter 4, moved to R9
+ *
+ * @out (R9) - struct tdx_module_output pointer
+ * stored temporarily in R12 (not
+ * used by the P-SEAMLDR or the TDX
+ * module). It can be NULL.
+ *
+ * Return (via RAX) the completion status of the SEAMCALL, or
+ * TDX_SEAMCALL_VMFAILINVALID.
+ */
+SYM_FUNC_START(__seamcall)
+ FRAME_BEGIN
+ TDX_MODULE_CALL host=1
+ FRAME_END
+ ret
+SYM_FUNC_END(__seamcall)
diff --git a/arch/x86/virt/vmx/tdx.h b/arch/x86/virt/vmx/tdx.h
new file mode 100644
index 000000000000..9d5b6f554c20
--- /dev/null
+++ b/arch/x86/virt/vmx/tdx.h
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _X86_VIRT_TDX_H
+#define _X86_VIRT_TDX_H
+
+#include <linux/types.h>
+
+struct tdx_module_output;
+u64 __seamcall(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9,
+ struct tdx_module_output *out);
+
+#endif
--
2.35.1
Initialize TDMRs via TDH.SYS.TDMR.INIT as the last step to complete the
TDX initialization.
All TDMRs need to be initialized via the TDH.SYS.TDMR.INIT SEAMCALL
before the TDX memory can be used to run any TD guest. The SEAMCALL
internally uses the global KeyID to initialize the PAMTs, in order to
crypto-protect them from the malicious host kernel. TDH.SYS.TDMR.INIT
can be done on any cpu.
The time to initialize a TDMR is proportional to its size. To avoid a
long latency in one SEAMCALL, TDH.SYS.TDMR.INIT only initializes an
(implementation-specific) subset of the PAMT entries of one TDMR in one
invocation. The caller is responsible for calling TDH.SYS.TDMR.INIT
iteratively until all PAMT entries of the requested TDMR are
initialized.
The current implementation initializes TDMRs one by one. It takes
~100ms on a 2-socket machine with 2.2GHz CPUs and 64GB memory when the
system is idle. Each TDH.SYS.TDMR.INIT takes ~7us on average.
TDX does allow different TDMRs to be initialized concurrently on
multiple CPUs. This parallel scheme could be introduced later when the
total initialization time becomes a real concern, e.g. on a platform
with a much bigger memory size.
Signed-off-by: Kai Huang <[email protected]>
---
arch/x86/virt/vmx/tdx.c | 75 ++++++++++++++++++++++++++++++++++++++---
arch/x86/virt/vmx/tdx.h | 1 +
2 files changed, 71 insertions(+), 5 deletions(-)
diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c
index 39b1b7d0417d..f2b9c98191ed 100644
--- a/arch/x86/virt/vmx/tdx.c
+++ b/arch/x86/virt/vmx/tdx.c
@@ -1370,6 +1370,65 @@ static int config_global_keyid(u64 global_keyid)
return seamcall_on_each_package_serialized(&sc);
}
+/* Initialize one TDMR */
+static int init_tdmr(struct tdmr_info *tdmr)
+{
+ u64 next;
+
+ /*
+ * Initializing PAMT entries might be time-consuming (in
+ * proportion to the size of the requested TDMR). To avoid long
+ * latency in one SEAMCALL, TDH.SYS.TDMR.INIT only initializes
+ * an (implementation-defined) subset of PAMT entries in one
+ * invocation.
+ *
+ * Call TDH.SYS.TDMR.INIT iteratively until all PAMT entries
+ * of the requested TDMR are initialized (if next-to-initialize
+ * address matches the end address of the TDMR).
+ */
+ do {
+ struct tdx_module_output out;
+ int ret;
+
+ ret = seamcall(TDH_SYS_TDMR_INIT, tdmr->base, 0, 0, 0,
+ NULL, &out);
+ if (ret)
+ return ret;
+ /*
+ * RDX contains 'next-to-initialize' address if
+ * TDH.SYS.TDMR.INIT succeeded.
+ */
+ next = out.rdx;
+ if (need_resched())
+ cond_resched();
+ } while (next < tdmr->base + tdmr->size);
+
+ return 0;
+}
+
+/* Initialize all TDMRs */
+static int init_tdmrs(struct tdmr_info **tdmr_array, int tdmr_num)
+{
+ int i;
+
+ /*
+ * Initialize TDMRs one-by-one for simplicity, though the TDX
+ * architecture does allow different TDMRs to be initialized in
+ * parallel on multiple CPUs. Parallel initialization could
+ * be added later when the time spent in the serialized scheme
+ * becomes a real concern.
+ */
+ for (i = 0; i < tdmr_num; i++) {
+ int ret;
+
+ ret = init_tdmr(tdmr_array[i]);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
static int init_tdx_module(void)
{
struct tdmr_info **tdmr_array;
@@ -1451,11 +1510,12 @@ static int init_tdx_module(void)
if (ret)
goto out_free_pamts;
- /*
- * Return -EFAULT until all steps of TDX module
- * initialization are done.
- */
- ret = -EFAULT;
+ /* Initialize TDMRs to complete the TDX module initialization */
+ ret = init_tdmrs(tdmr_array, tdmr_num);
+ if (ret)
+ goto out_free_pamts;
+
+ tdx_module_status = TDX_MODULE_INITIALIZED;
out_free_pamts:
/*
* Free PAMTs allocated in construct_tdmrs() when TDX module
@@ -1478,6 +1538,11 @@ static int init_tdx_module(void)
free_tdmrs(tdmr_array, tdmr_num);
kfree(tdmr_array);
out:
+ if (ret)
+ pr_info("Failed to initialize TDX module.\n");
+ else
+ pr_info("TDX module initialized.\n");
+
return ret;
}
diff --git a/arch/x86/virt/vmx/tdx.h b/arch/x86/virt/vmx/tdx.h
index bba8cabea4bb..212f83374c0a 100644
--- a/arch/x86/virt/vmx/tdx.h
+++ b/arch/x86/virt/vmx/tdx.h
@@ -126,6 +126,7 @@ struct tdmr_info {
#define TDH_SYS_INFO 32
#define TDH_SYS_INIT 33
#define TDH_SYS_LP_INIT 35
+#define TDH_SYS_TDMR_INIT 36
#define TDH_SYS_LP_SHUTDOWN 44
#define TDH_SYS_CONFIG 45
--
2.35.1
> From: Kai Huang <[email protected]>
> Sent: Sunday, March 13, 2022 6:50 PM
>
> @@ -715,6 +716,8 @@ static void init_intel(struct cpuinfo_x86 *c)
> if (cpu_has(c, X86_FEATURE_TME))
> detect_tme(c);
>
> + tdx_detect_cpu(c);
> +
TDX is not reported as an x86 feature, and the majority of detection
and initialization is done on demand in this series
(as explained in patch04). Why are SEAM (and later keyid) so different
that they must be detected at the early boot phase?
Thanks
Kevin
> From: Kai Huang <[email protected]>
> Sent: Sunday, March 13, 2022 6:50 PM
>
> Secure Arbitration Mode (SEAM) is an extension of VMX architecture. It
> defines a new VMX root operation (SEAM VMX root) and a new VMX non-
> root
> operation (SEAM VMX non-root) which isolate from legacy VMX root and
> VMX
> non-root mode.
s/isolate/are isolated/
>
> A CPU-attested software module (called the 'TDX module') runs in SEAM
> VMX root to manage the crypto protected VMs running in SEAM VMX non-
> root.
> SEAM VMX root is also used to host another CPU-attested software module
> (called the 'P-SEAMLDR') to load and update the TDX module.
>
> Host kernel transits to either the P-SEAMLDR or the TDX module via the
> new SEAMCALL instruction. SEAMCALLs are host-side interface functions
> defined by the P-SEAMLDR and the TDX module around the new SEAMCALL
> instruction. They are similar to a hypercall, except they are made by
"SEAMCALLs are ... functions ... around the new SEAMCALL instruction"
This is confusing. Probably just:
"SEAMCALL functions are defined and handled by the P-SEAMLDR and
the TDX module"
> host kernel to the SEAM software.
>
> SEAMCALLs use an ABI different from the x86-64 system-v ABI. Instead,
> they share the same ABI with the TDCALL. %rax is used to carry both the
> SEAMCALL leaf function number (input) and the completion status code
> (output). Additional GPRs (%rcx, %rdx, %r8->%r11) may be further used
> as both input and output operands in individual leaf SEAMCALLs.
>
> Implement a C function __seamcall() to do SEAMCALL using the assembly
> macro used by __tdx_module_call() (the implementation of TDCALL). The
> only exception not covered here is TDENTER leaf function which takes
> all GPRs and XMM0-XMM15 as both input and output. The caller of TDENTER
> should implement its own logic to call TDENTER directly instead of using
> this function.
>
> SEAMCALL instruction is essentially a VMExit from VMX root to SEAM VMX
> root, and it can fail with VMfailInvalid, for instance, when the SEAM
> software module is not loaded. The C function __seamcall() returns
> TDX_SEAMCALL_VMFAILINVALID, which doesn't conflict with any actual error
> code of SEAMCALLs, to uniquely represent this case.
SEAMCALL is TDX-specific, isn't it? If yes, there is no need to have both
TDX and SEAMCALL in one macro, i.e. the above can be SEAMCALL_VMFAILINVALID.
Thanks
Kevin
On Sun, Mar 13, 2022 at 11:49:53PM +1300,
Kai Huang <[email protected]> wrote:
> diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c
> index 1939b64d23e8..c58c99b94c72 100644
> --- a/arch/x86/virt/vmx/tdx.c
> +++ b/arch/x86/virt/vmx/tdx.c
> @@ -21,6 +21,7 @@
> #include <asm/cpufeatures.h>
> #include <asm/virtext.h>
> #include <asm/e820/api.h>
> +#include <asm/pgtable.h>
> #include <asm/tdx.h>
> #include "tdx.h"
>
> @@ -66,6 +67,16 @@
> #define TDMR_START(_tdmr) ((_tdmr)->base)
> #define TDMR_END(_tdmr) ((_tdmr)->base + (_tdmr)->size)
>
> +/* Page sizes supported by TDX */
> +enum tdx_page_sz {
> + TDX_PG_4K = 0,
> + TDX_PG_2M,
> + TDX_PG_1G,
> + TDX_PG_MAX,
> +};
> +
> +#define TDX_HPAGE_SHIFT 9
> +
> /*
> * TDX module status during initialization
> */
> @@ -893,7 +904,7 @@ static int create_tdmrs(struct tdmr_info **tdmr_array, int *tdmr_num)
> * them. To keep it simple, always try to use one TDMR to cover
> * one RAM entry.
> */
> - e820_for_each_mem(e820_table, i, start, end) {
> + e820_for_each_mem(i, start, end) {
This patch doesn't change e820_for_each_mem. This hunk should go into the
previous patch?
thanks,
--
Isaku Yamahata <[email protected]>
On Sun, Mar 13, 2022 at 11:49:57PM +1300,
Kai Huang <[email protected]> wrote:
> diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c
> index e03dc3e420db..39b1b7d0417d 100644
> --- a/arch/x86/virt/vmx/tdx.c
> +++ b/arch/x86/virt/vmx/tdx.c
> @@ -23,6 +23,7 @@
> #include <asm/virtext.h>
> #include <asm/e820/api.h>
> #include <asm/pgtable.h>
> +#include <asm/smp.h>
> #include <asm/tdx.h>
> #include "tdx.h"
>
> @@ -398,6 +399,47 @@ static int seamcall_on_each_cpu(struct seamcall_ctx *sc)
> return atomic_read(&sc->err);
> }
>
> +/*
> + * Call the SEAMCALL on one (any) cpu for each physical package in
> + * serialized way. Note for serialized calls 'seamcall_ctx::err'
> + * doesn't have to be atomic, but for simplicity just reuse it
> + * instead of adding a new one.
> + *
> + * Return -ENXIO if IPI SEAMCALL wasn't run on any cpu, or -EFAULT
> + * when SEAMCALL fails, or -EPERM when the cpu where SEAMCALL runs
> + * on is not in VMX operation. In case of -EFAULT, the error code
> + * of SEAMCALL is in 'struct seamcall_ctx::seamcall_ret'.
> + */
> +static int seamcall_on_each_package_serialized(struct seamcall_ctx *sc)
> +{
> + cpumask_var_t packages;
> + int cpu, ret;
> +
> + if (!zalloc_cpumask_var(&packages, GFP_KERNEL))
> + return -ENOMEM;
Memory leak. This should be freed before returning.
> + for_each_online_cpu(cpu) {
> + if (cpumask_test_and_set_cpu(topology_physical_package_id(cpu),
> + packages))
> + continue;
> +
> + ret = smp_call_function_single(cpu, seamcall_smp_call_function,
> + sc, true);
> + if (ret)
> + return ret;
> +
> + /*
> + * Doesn't have to use atomic_read(), but it doesn't
> + * hurt either.
> + */
> + ret = atomic_read(&sc->err);
> + if (ret)
> + return ret;
> + }
> +
> + return 0;
> +}
> +
> static inline bool p_seamldr_ready(void)
> {
> return !!p_seamldr_info.p_seamldr_ready;
> @@ -1316,6 +1358,18 @@ static int config_tdx_module(struct tdmr_info **tdmr_array, int tdmr_num,
> return ret;
> }
>
> +static int config_global_keyid(u64 global_keyid)
global_keyid argument isn't used. Is global variable tdx_global_keyid used?
> +{
> + struct seamcall_ctx sc = { .fn = TDH_SYS_KEY_CONFIG };
> +
> + /*
> + * TDH.SYS.KEY.CONFIG may fail with entropy error (which is
> + * a recoverable error). Assume this is exceedingly rare and
> + * just return error if encountered instead of retrying.
> + */
> + return seamcall_on_each_package_serialized(&sc);
> +}
> +
--
Isaku Yamahata <[email protected]>
On Sun, Mar 13, 2022 at 11:49:49PM +1300,
Kai Huang <[email protected]> wrote:
> TDX provides increased levels of memory confidentiality and integrity.
> This requires special hardware support for features like memory
> encryption and storage of memory integrity checksums. Not all memory
> satisfies these requirements.
>
> As a result, TDX introduced the concept of a "Convertible Memory Region"
> (CMR). During boot, the firmware builds a list of all of the memory
> ranges which can provide the TDX security guarantees. The list of these
> ranges, along with TDX module information, is available to the kernel by
> querying the TDX module via TDH.SYS.INFO SEAMCALL.
>
> Host kernel can choose whether or not to use all convertible memory
> regions as TDX memory. Before TDX module is ready to create any TD
> guests, all TDX memory regions that host kernel intends to use must be
> configured to the TDX module, using specific data structures defined by
> TDX architecture. Constructing those structures requires information of
> both TDX module and the Convertible Memory Regions. Call TDH.SYS.INFO
> to get this information as preparation to construct those structures.
>
> Signed-off-by: Kai Huang <[email protected]>
> ---
> arch/x86/virt/vmx/tdx.c | 127 ++++++++++++++++++++++++++++++++++++++++
> arch/x86/virt/vmx/tdx.h | 61 +++++++++++++++++++
> 2 files changed, 188 insertions(+)
>
> diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c
> index 4b0c285d844b..eb585bc5edda 100644
> --- a/arch/x86/virt/vmx/tdx.c
> +++ b/arch/x86/virt/vmx/tdx.c
> @@ -80,6 +80,11 @@ static DEFINE_MUTEX(tdx_module_lock);
>
> static struct p_seamldr_info p_seamldr_info;
>
> +/* Base address of CMR array needs to be 512 bytes aligned. */
> +static struct cmr_info tdx_cmr_array[MAX_CMRS] __aligned(CMR_INFO_ARRAY_ALIGNMENT);
> +static int tdx_cmr_num;
> +static struct tdsysinfo_struct tdx_sysinfo;
> +
> static bool __seamrr_enabled(void)
> {
> return (seamrr_mask & SEAMRR_ENABLED_BITS) == SEAMRR_ENABLED_BITS;
> @@ -468,6 +473,123 @@ static int tdx_module_init_cpus(void)
> return seamcall_on_each_cpu(&sc);
> }
>
> +static inline bool cmr_valid(struct cmr_info *cmr)
> +{
> + return !!cmr->size;
> +}
> +
> +static void print_cmrs(struct cmr_info *cmr_array, int cmr_num,
> + const char *name)
> +{
> + int i;
> +
> + for (i = 0; i < cmr_num; i++) {
> + struct cmr_info *cmr = &cmr_array[i];
> +
> + pr_info("%s : [0x%llx, 0x%llx)\n", name,
> + cmr->base, cmr->base + cmr->size);
> + }
> +}
> +
> +static int sanitize_cmrs(struct cmr_info *cmr_array, int cmr_num)
> +{
> + int i, j;
> +
> + /*
> + * Intel TDX module spec, 20.7.3 CMR_INFO:
> + *
> + * TDH.SYS.INFO leaf function returns a MAX_CMRS (32) entry
> + * array of CMR_INFO entries. The CMRs are sorted from the
> + * lowest base address to the highest base address, and they
> + * are non-overlapping.
> + *
> + * This implies that BIOS may generate invalid empty entries
> + * if total CMRs are less than 32. Skip them manually.
> + */
> + for (i = 0; i < cmr_num; i++) {
> + struct cmr_info *cmr = &cmr_array[i];
> + struct cmr_info *prev_cmr = NULL;
> +
> + /* Skip further invalid CMRs */
> + if (!cmr_valid(cmr))
> + break;
> +
> + if (i > 0)
> + prev_cmr = &cmr_array[i - 1];
> +
> + /*
> + * It is a TDX firmware bug if CMRs are not
> + * in address ascending order.
> + */
> + if (prev_cmr && ((prev_cmr->base + prev_cmr->size) >
> + cmr->base)) {
> + pr_err("Firmware bug: CMRs not in address ascending order.\n");
> + return -EFAULT;
> + }
> + }
> +
> + /*
> + * Also a sane BIOS should never generate invalid CMR(s) between
> + * two valid CMRs. Sanity check this and simply return error in
> + * this case.
> + */
> + for (j = i; j < cmr_num; j++)
> + if (cmr_valid(&cmr_array[j])) {
> + pr_err("Firmware bug: invalid CMR(s) among valid CMRs.\n");
> + return -EFAULT;
> + }
This check doesn't make sense because the i-for loop above has a break.
> +
> + /*
> + * Trim all tail invalid empty CMRs. BIOS should generate at
> + * least one valid CMR, otherwise it's a TDX firmware bug.
> + */
> + tdx_cmr_num = i;
> + if (!tdx_cmr_num) {
> + pr_err("Firmware bug: No valid CMR.\n");
> + return -EFAULT;
> + }
Something is strange here.
Probably we'd like to check it by decrementing:
for (i = cmr_num; i >= 0; i--)
if (!cmr_valid()) // if last invalid cmr
tdx_cmr_num
// more check. overlapping
--
Isaku Yamahata <[email protected]>
On Thu, 2022-03-24 at 11:18 -0700, [email protected] wrote:
> On Sun, Mar 13, 2022 at 11:49:57PM +1300,
> Kai Huang <[email protected]> wrote:
>
> > diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c
> > index e03dc3e420db..39b1b7d0417d 100644
> > --- a/arch/x86/virt/vmx/tdx.c
> > +++ b/arch/x86/virt/vmx/tdx.c
> > @@ -23,6 +23,7 @@
> > #include <asm/virtext.h>
> > #include <asm/e820/api.h>
> > #include <asm/pgtable.h>
> > +#include <asm/smp.h>
> > #include <asm/tdx.h>
> > #include "tdx.h"
> >
> > @@ -398,6 +399,47 @@ static int seamcall_on_each_cpu(struct seamcall_ctx *sc)
> > return atomic_read(&sc->err);
> > }
> >
> > +/*
> > + * Call the SEAMCALL on one (any) cpu for each physical package in
> > + * serialized way. Note for serialized calls 'seamcall_ctx::err'
> > + * doesn't have to be atomic, but for simplicity just reuse it
> > + * instead of adding a new one.
> > + *
> > + * Return -ENXIO if IPI SEAMCALL wasn't run on any cpu, or -EFAULT
> > + * when SEAMCALL fails, or -EPERM when the cpu where SEAMCALL runs
> > + * on is not in VMX operation. In case of -EFAULT, the error code
> > + * of SEAMCALL is in 'struct seamcall_ctx::seamcall_ret'.
> > + */
> > +static int seamcall_on_each_package_serialized(struct seamcall_ctx *sc)
> > +{
> > + cpumask_var_t packages;
> > + int cpu, ret;
> > +
> > + if (!zalloc_cpumask_var(&packages, GFP_KERNEL))
> > + return -ENOMEM;
>
> Memory leak. This should be freed before returning.
>
>
> > + for_each_online_cpu(cpu) {
> > + if (cpumask_test_and_set_cpu(topology_physical_package_id(cpu),
> > + packages))
> > + continue;
> > +
> > + ret = smp_call_function_single(cpu, seamcall_smp_call_function,
> > + sc, true);
> > + if (ret)
> > + return ret;
> > +
> > + /*
> > + * Doesn't have to use atomic_read(), but it doesn't
> > + * hurt either.
> > + */
> > + ret = atomic_read(&sc->err);
> > + if (ret)
> > + return ret;
> > + }
> > +
> > + return 0;
> > +}
> > +
> > static inline bool p_seamldr_ready(void)
> > {
> > return !!p_seamldr_info.p_seamldr_ready;
> > @@ -1316,6 +1358,18 @@ static int config_tdx_module(struct tdmr_info **tdmr_array, int tdmr_num,
> > return ret;
> > }
> >
> > +static int config_global_keyid(u64 global_keyid)
>
> global_keyid argument isn't used. Is global variable tdx_global_keyid used?
>
>
> > +{
> > + struct seamcall_ctx sc = { .fn = TDH_SYS_KEY_CONFIG };
> > +
> > + /*
> > + * TDH.SYS.KEY.CONFIG may fail with entropy error (which is
> > + * a recoverable error). Assume this is exceedingly rare and
> > + * just return error if encountered instead of retrying.
> > + */
> > + return seamcall_on_each_package_serialized(&sc);
> > +}
> > +
Both will be fixed. Thanks!
--
Thanks,
-Kai
On Thu, 2022-03-24 at 11:06 -0700, Isaku Yamahata wrote:
> On Sun, Mar 13, 2022 at 11:49:53PM +1300,
> Kai Huang <[email protected]> wrote:
>
> > diff --git a/arch/x86/virt/vmx/tdx.c b/arch/x86/virt/vmx/tdx.c
> > index 1939b64d23e8..c58c99b94c72 100644
> > --- a/arch/x86/virt/vmx/tdx.c
> > +++ b/arch/x86/virt/vmx/tdx.c
> > @@ -21,6 +21,7 @@
> > #include <asm/cpufeatures.h>
> > #include <asm/virtext.h>
> > #include <asm/e820/api.h>
> > +#include <asm/pgtable.h>
> > #include <asm/tdx.h>
> > #include "tdx.h"
> >
> > @@ -66,6 +67,16 @@
> > #define TDMR_START(_tdmr) ((_tdmr)->base)
> > #define TDMR_END(_tdmr) ((_tdmr)->base + (_tdmr)->size)
> >
> > +/* Page sizes supported by TDX */
> > +enum tdx_page_sz {
> > + TDX_PG_4K = 0,
> > + TDX_PG_2M,
> > + TDX_PG_1G,
> > + TDX_PG_MAX,
> > +};
> > +
> > +#define TDX_HPAGE_SHIFT 9
> > +
> > /*
> > * TDX module status during initialization
> > */
> > @@ -893,7 +904,7 @@ static int create_tdmrs(struct tdmr_info **tdmr_array, int *tdmr_num)
> > * them. To keep it simple, always try to use one TDMR to cover
> > * one RAM entry.
> > */
> > - e820_for_each_mem(e820_table, i, start, end) {
> > + e820_for_each_mem(i, start, end) {
>
> This patch doesn't change e820_for_each_mem. This hunk should go into the
> previous patch?
>
> thanks,
>
Exactly, my mistake. Thanks for catching it.
--
Thanks,
-Kai
> > +
> > +static int sanitize_cmrs(struct cmr_info *cmr_array, int cmr_num)
> > +{
> > + int i, j;
> > +
> > + /*
> > + * Intel TDX module spec, 20.7.3 CMR_INFO:
> > + *
> > + * TDH.SYS.INFO leaf function returns a MAX_CMRS (32) entry
> > + * array of CMR_INFO entries. The CMRs are sorted from the
> > + * lowest base address to the highest base address, and they
> > + * are non-overlapping.
> > + *
> > + * This implies that BIOS may generate invalid empty entries
> > + * if total CMRs are less than 32. Skip them manually.
> > + */
> > + for (i = 0; i < cmr_num; i++) {
> > + struct cmr_info *cmr = &cmr_array[i];
> > + struct cmr_info *prev_cmr = NULL;
> > +
> > + /* Skip further invalid CMRs */
> > + if (!cmr_valid(cmr))
> > + break;
> > +
> > + if (i > 0)
> > + prev_cmr = &cmr_array[i - 1];
> > +
> > + /*
> > + * It is a TDX firmware bug if CMRs are not
> > + * in address ascending order.
> > + */
> > + if (prev_cmr && ((prev_cmr->base + prev_cmr->size) >
> > + cmr->base)) {
> > + pr_err("Firmware bug: CMRs not in address ascending order.\n");
> > + return -EFAULT;
> > + }
> > + }
> > +
> > + /*
> > + * Also a sane BIOS should never generate invalid CMR(s) between
> > + * two valid CMRs. Sanity check this and simply return error in
> > + * this case.
> > + */
> > + for (j = i; j < cmr_num; j++)
> > + if (cmr_valid(&cmr_array[j])) {
> > + pr_err("Firmware bug: invalid CMR(s) among valid CMRs.\n");
> > + return -EFAULT;
> > + }
>
> This check doesn't make sense because above i-for loop has break.
The break in the above i-for loop hits at the first invalid CMR entry. Yes, "j =
i" will double-check that first invalid entry, but that should be no problem.
Or we can change it to "j = i + 1" to skip the first invalid CMR entry.
Does this make sense?
>
> > +
> > + /*
> > + * Trim all tail invalid empty CMRs. BIOS should generate at
> > + * least one valid CMR, otherwise it's a TDX firmware bug.
> > + */
> > + tdx_cmr_num = i;
> > + if (!tdx_cmr_num) {
> > + pr_err("Firmware bug: No valid CMR.\n");
> > + return -EFAULT;
> > + }
>
> Something strange.
> Probably we'd like to check it by decrementing.
> for (i = cmr_num; i >= 0; i--)
> if (!cmr_valid()) // if last invalid cmr
> tdx_cmr_num
> // more check. overlapping
>
I don't see how this looks strange. As replied above, "i" is the index of the
first invalid CMR entry (or cmr_num, i.e. the array size), so "tdx_cmr_num = i"
gives the total number of valid CMRs, correct?
On Wed, 2022-03-23 at 16:35 +1300, Tian, Kevin wrote:
> > From: Kai Huang <[email protected]>
> > Sent: Sunday, March 13, 2022 6:50 PM
> >
> > Secure Arbitration Mode (SEAM) is an extension of VMX architecture. It
> > defines a new VMX root operation (SEAM VMX root) and a new VMX non-
> > root
> > operation (SEAM VMX non-root) which isolate from legacy VMX root and
> > VMX
> > non-root mode.
>
> s/isolate/are isolated/
OK thanks.
>
> >
> > A CPU-attested software module (called the 'TDX module') runs in SEAM
> > VMX root to manage the crypto protected VMs running in SEAM VMX non-
> > root.
> > SEAM VMX root is also used to host another CPU-attested software module
> > (called the 'P-SEAMLDR') to load and update the TDX module.
> >
> > Host kernel transits to either the P-SEAMLDR or the TDX module via the
> > new SEAMCALL instruction. SEAMCALLs are host-side interface functions
> > defined by the P-SEAMLDR and the TDX module around the new SEAMCALL
> > instruction. They are similar to a hypercall, except they are made by
>
> "SEAMCALLs are ... functions ... around the new SEAMCALL instruction"
>
> This is confusing. Probably just:
May I ask why it is confusing?
>
> "SEAMCALL functions are defined and handled by the P-SEAMLDR and
> the TDX module"
>
> > host kernel to the SEAM software.
> >
> > SEAMCALLs use an ABI different from the x86-64 system-v ABI. Instead,
> > they share the same ABI with the TDCALL. %rax is used to carry both the
> > SEAMCALL leaf function number (input) and the completion status code
> > (output). Additional GPRs (%rcx, %rdx, %r8->%r11) may be further used
> > as both input and output operands in individual leaf SEAMCALLs.
> >
> > Implement a C function __seamcall() to do SEAMCALL using the assembly
> > macro used by __tdx_module_call() (the implementation of TDCALL). The
> > only exception not covered here is TDENTER leaf function which takes
> > all GPRs and XMM0-XMM15 as both input and output. The caller of TDENTER
> > should implement its own logic to call TDENTER directly instead of using
> > this function.
> >
> > SEAMCALL instruction is essentially a VMExit from VMX root to SEAM VMX
> > root, and it can fail with VMfailInvalid, for instance, when the SEAM
> > software module is not loaded. The C function __seamcall() returns
> > TDX_SEAMCALL_VMFAILINVALID, which doesn't conflict with any actual error
> > code of SEAMCALLs, to uniquely represent this case.
>
> SEAMCALL is TDX specific, isn't it? If so, there is no need to have both
> TDX and SEAMCALL in the macro name, i.e. the above can be SEAMCALL_VMFAILINVALID.
This is defined in the TDX guest series; I just use it:
https://lore.kernel.org/lkml/[email protected]/T/#md0b1aa563bd003ab625de159612a0d07e3ded7cb
--
Thanks,
-Kai
> From: Huang, Kai <[email protected]>
> Sent: Monday, March 28, 2022 11:55 AM
>
> On Wed, 2022-03-23 at 16:21 +1300, Tian, Kevin wrote:
> > > From: Kai Huang <[email protected]>
> > > Sent: Sunday, March 13, 2022 6:50 PM
> > >
> > > @@ -715,6 +716,8 @@ static void init_intel(struct cpuinfo_x86 *c)
> > > if (cpu_has(c, X86_FEATURE_TME))
> > > detect_tme(c);
> > >
> > > + tdx_detect_cpu(c);
> > > +
> >
> > TDX is not reported as a x86 feature. and the majority of detection
> > and initialization have been conducted on demand in this series
> > (as explained in patch04). Why is SEAM (and latter keyid) so different
> > to be detected at early boot phase?
> >
> > Thanks
> > Kevin
>
> Hi Kevin,
>
> Sorry for late reply. I was out last week.
>
> SEAMRR and TDX KeyIDs are configured by BIOS and they are static during
> machine's runtime. On the other hand, TDX module can be updated and
> reinitialized at runtime (not supported in this series but will be supported in
> the future). Theoretically, even P-SEAMLDR can be updated at runtime
> (although
> I think unlikely to be supported in Linux). Therefore I think detecting
> SEAMRR
> and TDX KeyIDs at boot fits better.
If that info is static, it's perfectly fine to defer detecting it until it is
required... and the following are not solid cases (e.g. exposing SEAM alone
doesn't tell the availability of TDX), but let's also hear opinions from
others.
>
> It is also more flexible from some other perspectives I think: 1) There was a
> request to add X86_FEATURE_SEAM bit and expose it to /proc/cpuinfo. I
> didn't add
> it because I didn't think the use case was solid. But in case someone has
> some
> use case in the future we can add it, and detecting SEAMRR during boot fits
> this
> more. 2) There was a request to expose TDX KeyID info via sysfs so userspace
> can
> know how many TDs can be created. It's not done in this series but it will be
> done at some time in the future. Detecting KeyIDs at boot allows this info
> being
> able to be exposed via sysfs at early stage, providing more flexibility to
> userspace.
>
> At last, currently in this series the patch to handle kexec checks whether
> SEAMRR and TDX KeyIDs are enabled and then flush cache (of course this is
> open
> to discussion). Detecting them at boot fits better I think.
>
> --
> Thanks,
> -Kai
>
On Wed, 2022-03-23 at 16:21 +1300, Tian, Kevin wrote:
> > From: Kai Huang <[email protected]>
> > Sent: Sunday, March 13, 2022 6:50 PM
> >
> > @@ -715,6 +716,8 @@ static void init_intel(struct cpuinfo_x86 *c)
> > if (cpu_has(c, X86_FEATURE_TME))
> > detect_tme(c);
> >
> > + tdx_detect_cpu(c);
> > +
>
> TDX is not reported as a x86 feature. and the majority of detection
> and initialization have been conducted on demand in this series
> (as explained in patch04). Why is SEAM (and latter keyid) so different
> to be detected at early boot phase?
>
> Thanks
> Kevin
Hi Kevin,
Sorry for the late reply; I was out last week.
SEAMRR and TDX KeyIDs are configured by BIOS and are static during the
machine's runtime. On the other hand, the TDX module can be updated and
reinitialized at runtime (not supported in this series but to be supported in
the future). Theoretically, even the P-SEAMLDR can be updated at runtime
(although I think that is unlikely to be supported in Linux). Therefore I think
detecting SEAMRR and TDX KeyIDs at boot fits better.
I think it is also more flexible from some other perspectives: 1) There was a
request to add an X86_FEATURE_SEAM bit and expose it in /proc/cpuinfo. I didn't
add it because I didn't think the use case was solid, but in case someone has a
use case in the future we can add it, and detecting SEAMRR during boot fits
this better. 2) There was a request to expose TDX KeyID info via sysfs so
userspace can know how many TDs can be created. It's not done in this series
but will be done at some point in the future. Detecting KeyIDs at boot allows
this info to be exposed via sysfs at an early stage, providing more flexibility
to userspace.
Lastly, in this series the patch to handle kexec checks whether SEAMRR and TDX
KeyIDs are enabled and then flushes the cache (of course this is open to
discussion). Detecting them at boot fits better, I think.
--
Thanks,
-Kai
> From: Huang, Kai <[email protected]>
> Sent: Monday, March 28, 2022 9:42 AM
>
>
> >
> > >
> > > A CPU-attested software module (called the 'TDX module') runs in SEAM
> > > VMX root to manage the crypto protected VMs running in SEAM VMX
> non-
> > > root.
> > > SEAM VMX root is also used to host another CPU-attested software
> module
> > > (called the 'P-SEAMLDR') to load and update the TDX module.
> > >
> > > Host kernel transits to either the P-SEAMLDR or the TDX module via the
> > > new SEAMCALL instruction. SEAMCALLs are host-side interface functions
> > > defined by the P-SEAMLDR and the TDX module around the new
> SEAMCALL
> > > instruction. They are similar to a hypercall, except they are made by
> >
> > "SEAMCALLs are ... functions ... around the new SEAMCALL instruction"
> >
> > This is confusing. Probably just:
>
> May I ask why is it confusing?
SEAMCALL is an instruction. One of its arguments carries the function
number.
On 3/28/22 13:22, Isaku Yamahata wrote:
>>>> + /*
>>>> + * Also a sane BIOS should never generate invalid CMR(s) between
>>>> + * two valid CMRs. Sanity check this and simply return error in
>>>> + * this case.
>>>> + */
>>>> + for (j = i; j < cmr_num; j++)
>>>> + if (cmr_valid(&cmr_array[j])) {
>>>> + pr_err("Firmware bug: invalid CMR(s) among valid CMRs.\n");
>>>> + return -EFAULT;
>>>> + }
>>> This check doesn't make sense because above i-for loop has break.
>> The break in above i-for loop will hit at the first invalid CMR entry. Yes "j =
>> i" will make double check on this invalid CMR entry, but it should have no
>> problem. Or we can change to "j = i + 1" to skip the first invalid CMR entry.
>>
>> Does this make sense?
> It makes sense. Somehow I missed j = i. I scratch my review.
You can also take it as something you might want to refactor, add
comments, or work on better variable names. If it confused one person,
it will confuse more in the future.
On Mon, 2022-03-28 at 21:16 +1300, Tian, Kevin wrote:
> > From: Huang, Kai <[email protected]>
> > Sent: Monday, March 28, 2022 9:42 AM
> >
> >
> > >
> > > >
> > > > A CPU-attested software module (called the 'TDX module') runs in SEAM
> > > > VMX root to manage the crypto protected VMs running in SEAM VMX
> > non-
> > > > root.
> > > > SEAM VMX root is also used to host another CPU-attested software
> > module
> > > > (called the 'P-SEAMLDR') to load and update the TDX module.
> > > >
> > > > Host kernel transits to either the P-SEAMLDR or the TDX module via the
> > > > new SEAMCALL instruction. SEAMCALLs are host-side interface functions
> > > > defined by the P-SEAMLDR and the TDX module around the new
> > SEAMCALL
> > > > instruction. They are similar to a hypercall, except they are made by
> > >
> > > "SEAMCALLs are ... functions ... around the new SEAMCALL instruction"
> > >
> > > This is confusing. Probably just:
> >
> > May I ask why is it confusing?
>
> SEAMCALL is an instruction. One of its arguments carries the function
> number.
>
To confirm, are you saying the word "SEAMCALLs" is confusing, and we should use
"SEAMCALL leaf functions" instead?
--
Thanks,
-Kai
On Mon, Mar 28, 2022 at 02:30:05PM +1300,
Kai Huang <[email protected]> wrote:
>
> > > +
> > > +static int sanitize_cmrs(struct cmr_info *cmr_array, int cmr_num)
> > > +{
> > > + int i, j;
> > > +
> > > + /*
> > > + * Intel TDX module spec, 20.7.3 CMR_INFO:
> > > + *
> > > + * TDH.SYS.INFO leaf function returns a MAX_CMRS (32) entry
> > > + * array of CMR_INFO entries. The CMRs are sorted from the
> > > + * lowest base address to the highest base address, and they
> > > + * are non-overlapping.
> > > + *
> > > + * This implies that BIOS may generate invalid empty entries
> > > + * if total CMRs are less than 32. Skip them manually.
> > > + */
> > > + for (i = 0; i < cmr_num; i++) {
> > > + struct cmr_info *cmr = &cmr_array[i];
> > > + struct cmr_info *prev_cmr = NULL;
> > > +
> > > + /* Skip further invalid CMRs */
> > > + if (!cmr_valid(cmr))
> > > + break;
> > > +
> > > + if (i > 0)
> > > + prev_cmr = &cmr_array[i - 1];
> > > +
> > > + /*
> > > + * It is a TDX firmware bug if CMRs are not
> > > + * in address ascending order.
> > > + */
> > > + if (prev_cmr && ((prev_cmr->base + prev_cmr->size) >
> > > + cmr->base)) {
> > > + pr_err("Firmware bug: CMRs not in address ascending order.\n");
> > > + return -EFAULT;
> > > + }
> > > + }
> > > +
> > > + /*
> > > + * Also a sane BIOS should never generate invalid CMR(s) between
> > > + * two valid CMRs. Sanity check this and simply return error in
> > > + * this case.
> > > + */
> > > + for (j = i; j < cmr_num; j++)
> > > + if (cmr_valid(&cmr_array[j])) {
> > > + pr_err("Firmware bug: invalid CMR(s) among valid CMRs.\n");
> > > + return -EFAULT;
> > > + }
> >
> > This check doesn't make sense because above i-for loop has break.
>
> The break in above i-for loop will hit at the first invalid CMR entry. Yes "j =
> i" will make double check on this invalid CMR entry, but it should have no
> problem. Or we can change to "j = i + 1" to skip the first invalid CMR entry.
>
> Does this make sense?
It makes sense. Somehow I missed "j = i". Scratch my review.
--
Isaku Yamahata <[email protected]>
On Mon, 2022-03-28 at 13:30 -0700, Dave Hansen wrote:
> On 3/28/22 13:22, Isaku Yamahata wrote:
> > > > > + /*
> > > > > + * Also a sane BIOS should never generate invalid CMR(s) between
> > > > > + * two valid CMRs. Sanity check this and simply return error in
> > > > > + * this case.
> > > > > + */
> > > > > + for (j = i; j < cmr_num; j++)
> > > > > + if (cmr_valid(&cmr_array[j])) {
> > > > > + pr_err("Firmware bug: invalid CMR(s) among valid CMRs.\n");
> > > > > + return -EFAULT;
> > > > > + }
> > > > This check doesn't make sense because above i-for loop has break.
> > > The break in above i-for loop will hit at the first invalid CMR entry. Yes "j =
> > > i" will make double check on this invalid CMR entry, but it should have no
> > > problem. Or we can change to "j = i + 1" to skip the first invalid CMR entry.
> > >
> > > Does this make sense?
> > It makes sense. Somehow I missed j = i. I scratch my review.
>
> You can also take it as something you might want to refactor, add
> comments, or work on better variable names. If it confused one person,
> it will confuse more in the future.
Hi Dave,
OK, I'll think over how I can improve it. Thanks for the advice.
Btw, if you have time, could you help review this series? Or could you take a
look at whether the overall design is OK, for instance the design limitations
described in the cover letter?
--
Thanks,
-Kai
On 3/28/22 16:40, Kai Huang wrote:
> Btw if you have time, could you help to review this series? Or could you take a
> look at whether the overall design is OK, for instance, the design limitations
> described in the cover letter?
Kai, it's on my list, but it's a long list.
If you want to help, there are at least two other *big* TDX patch sets
out there that need eyeballs:
> https://lore.kernel.org/all/[email protected]/
and
> https://lore.kernel.org/all/[email protected]/
On Mon, 2022-03-28 at 16:44 -0700, Dave Hansen wrote:
> On 3/28/22 16:40, Kai Huang wrote:
> > Btw if you have time, could you help to review this series? Or could you take a
> > look at whether the overall design is OK, for instance, the design limitations
> > described in the cover letter?
>
> Kai, it's on my list, but it's a long list.
>
> If you want to help, there are at least two other *big* TDX patch sets
> out there that need eyeballs:
>
> > https://lore.kernel.org/all/[email protected]/
>
> and
>
> > https://lore.kernel.org/all/[email protected]/
>
Yes, I am aware of them. I'll comment if I think it can help speed things up.
Thanks for the info!
--
Thanks,
-Kai
On Mon, Mar 28, 2022 at 08:10:47AM +0000,
"Tian, Kevin" <[email protected]> wrote:
> > From: Huang, Kai <[email protected]>
> > Sent: Monday, March 28, 2022 11:55 AM
> >
> > SEAMRR and TDX KeyIDs are configured by BIOS and they are static during
> > machine's runtime. On the other hand, TDX module can be updated and
> > reinitialized at runtime (not supported in this series but will be supported in
> > the future). Theoretically, even P-SEAMLDR can be updated at runtime
> > (although
> > I think unlikely to be supported in Linux). Therefore I think detecting
> > SEAMRR
> > and TDX KeyIDs at boot fits better.
>
> If those info are static it's perfectly fine to detect them until they are
> required... and following are not solid cases (e.g. just exposing SEAM
> alone doesn't tell the availability of TDX) but let's also hear the opinions
> from others.
One use case is the cloud. If the TDX module is initialized dynamically at
runtime, the cloud management system wants to know whether the physical machine
is TDX-capable in addition to whether the TDX module is initialized, and also
how many TDs can be run on the machine even when the TDX module is not
initialized yet. The management system will schedule TDs based on that
information.
--
Isaku Yamahata <[email protected]>
On Tue, 2022-03-29 at 10:52 -0700, Isaku Yamahata wrote:
> On Mon, Mar 28, 2022 at 08:10:47AM +0000,
> "Tian, Kevin" <[email protected]> wrote:
>
> > > From: Huang, Kai <[email protected]>
> > > Sent: Monday, March 28, 2022 11:55 AM
> > >
> > > SEAMRR and TDX KeyIDs are configured by BIOS and they are static during
> > > machine's runtime. On the other hand, TDX module can be updated and
> > > reinitialized at runtime (not supported in this series but will be supported in
> > > the future). Theoretically, even P-SEAMLDR can be updated at runtime
> > > (although
> > > I think unlikely to be supported in Linux). Therefore I think detecting
> > > SEAMRR
> > > and TDX KeyIDs at boot fits better.
> >
> > If those info are static it's perfectly fine to detect them until they are
> > required... and following are not solid cases (e.g. just exposing SEAM
> > alone doesn't tell the availability of TDX) but let's also hear the opinions
> > from others.
>
> One use case is cloud use case. If TDX module is initialized dynamically at
> runtime, cloud management system wants to know if the physical machine is
> capable of TDX in addition to if TDX module is initialized. Also how many TDs
> can be run on the machine even when TDX module is not initialized yet. The
> management system will schedule TDs based on those information.
Thanks Isaku. I'll keep the current way for now.
--
Thanks,
-Kai