2023-02-13 12:00:29

by Huang, Kai

Subject: [PATCH v9 00/18] TDX host kernel support

Intel Trust Domain Extensions (TDX) protects guest VMs from a malicious
host and certain physical attacks. The TDX specs are available in [1].

This series is the initial support to enable TDX with the minimal code to
allow KVM to create and run TDX guests. KVM support for TDX is being
developed separately [2]. A new "userspace inaccessible memfd" approach
to support TDX private memory is also being developed [3]. KVM will
only support the new "userspace inaccessible memfd" as TDX guest memory.

This series doesn't aim to support all functionalities, and doesn't aim
to resolve all things perfectly. For example, memory hotplug is handled
in a simple way (please refer to the "Kernel policy on TDX memory" and
"Memory hotplug" sections below).

(For memory hotplug, sorry for broadcasting widely, but I cc'ed
[email protected] following Kirill's suggestion so MM experts can also
help to provide comments.)

And TDX module metadata allocation just uses alloc_contig_pages() to
allocate a large chunk at runtime, thus it can fail. It is imperfect now
but _will_ be improved in the future.

Also, the patch to add the new kernel command line tdx="force" isn't
included in this initial version, as Dave suggested it isn't mandatory.
But I _will_ add one once this initial version gets merged.

All other optimizations will be posted as follow-up once this initial
TDX support is upstreamed.

Hi Dave, Peter, Thomas, Dan (and Intel reviewers),

It turns out we won't remove TDH.SYS.LP.INIT (and TDH.SYS.INIT) as was
done in v8. But TDH.SYS.LP.INIT will be relaxed so that it can be called
on one cpu at any time (after TDH.SYS.INIT) before any other SEAMCALL is
made on that cpu. This new version (v9) is written based on this new
behaviour.

But I haven't tested the new behaviour because the TDX module isn't
ready. Instead, I tested with all cpus online when initializing the
TDX module. The CPU hotplug path isn't really tested, although I did a
basic test in which I offlined some cpus after module initialization,
onlined them again, and LP.INIT was skipped successfully for them.

However, I believe there should be no issue with the new module. I will
test and report back once the new module is ready.

I would appreciate it if folks could review this presumptive series anyway.

And I would appreciate reviewed-by or acked-by tags if the patches look
good to you.

----- Changelog history: ------

- v8 -> v9:

- Added patches to handle TDH.SYS.INIT and TDH.SYS.LP.INIT back.
- For other changes, please refer to the changelog history in individual
patches.

v8: https://lore.kernel.org/lkml/[email protected]/

- v7 -> v8:

- 200+ LOC removed (from 1800+ -> 1600+).
- Removed patches to do TDH.SYS.INIT and TDH.SYS.LP.INIT
(Dave/Peter/Thomas).
- Removed patch to shut down TDX module (Sean).
- For memory hotplug, changed to reject non-TDX memory in the memory
notifier instead of in arch_add_memory() (Dan/David).
- Simplified the "skeleton patch" as a result of removing the
TDH.SYS.LP.INIT patch.
- Refined changelog/comments for most of the patches (to tell better
story, remove silly comments, etc) (Dave).
- Added new 'struct tdmr_info_list' struct, and changed all TDMR related
patches to use it (Dave).
- Effectively merged patch "Reserve TDX module global KeyID" and
"Configure TDX module with TDMRs and global KeyID", and removed the
static variable 'tdx_global_keyid', following Dave's suggestion on
making tdx_sysinfo local variable.
- For detailed changes please see individual patch changelog history.

v7: https://lore.kernel.org/lkml/[email protected]/T/

- v6 -> v7:
- Added memory hotplug support.
- Changed when the list of "TDX-usable" memory regions is chosen, from
kernel boot time to TDX module initialization time.
- Addressed comments received in previous versions. (Andi/Dave).
- Improved the commit message and the comments of the kexec() support
patch, and the patch now handles returning PAMTs back to the kernel when
TDX module initialization fails. Please also see the "kexec()" section
below.
- Changed the documentation patch accordingly.
- For all others please see individual patch changelog history.

v6: https://lore.kernel.org/lkml/[email protected]/T/

- v5 -> v6:

- Removed ACPI CPU/memory hotplug patches. (Intel internal discussion)
- Removed patch to disable driver-managed memory hotplug (Intel
internal discussion).
- Added one patch to introduce enum type for TDX supported page size
level to replace the hard-coded values in TDX guest code (Dave).
- Added one patch to make TDX depends on X2APIC being enabled (Dave).
- Added one patch to build all boot-time present memory regions as TDX
memory during kernel boot.
- Added Reviewed-by from others to some patches.
- For all others please see individual patch changelog history.

v5: https://lore.kernel.org/lkml/[email protected]/T/

- v4 -> v5:

This is essentially a resend of v4. Sorry, I forgot to consult
get_maintainer.pl when sending out v4, so I forgot to add the linux-acpi
and linux-mm mailing lists and the relevant people for the 4 new patches.

There are also very minor code and commit message updates from v4:

- Rebased to latest tip/x86/tdx.
- Fixed a checkpatch issue that I missed in v4.
- Removed an obsoleted comment that I missed in patch 6.
- Very minor update to the commit message of patch 12.

For other changes to individual patches since v3, please refer to the
changelog history of individual patches (I just used v3 -> v5 since
there's basically no code change in v4).

v4: https://lore.kernel.org/lkml/98c84c31d8f062a0b50a69ef4d3188bc259f2af2.1654025431.git.kai.huang@intel.com/T/

- v3 -> v4 (addressed Dave's comments, and other comments from others):

- Simplified SEAMRR and TDX keyID detection.
- Added patches to handle ACPI CPU hotplug.
- Added patches to handle ACPI memory hotplug and driver managed memory
hotplug.
- Removed tdx_detect() and only use a single tdx_init().
- Removed detecting TDX module via P-SEAMLDR.
- Changed from using e820 to using memblock to convert system RAM to TDX
memory.
- Excluded legacy PMEM from TDX memory.
- Removed the boot-time command line to disable TDX patch.
- Addressed comments for other individual patches (please see individual
patches).
- Improved the documentation patch based on the new implementation.

v3: https://lore.kernel.org/lkml/[email protected]/T/

- V2 -> v3:

- Addressed comments from Isaku.
- Fixed memory leak and unnecessary function argument in the patch to
configure the key for the global keyid (patch 17).
- Enhanced a little bit to the patch to get TDX module and CMR
information (patch 09).
- Fixed an unintended change in the patch to allocate PAMT (patch 13).
- Addressed comments from Kevin:
- Slight improvement to the commit message of patch 03.
- Removed WARN_ON_ONCE() in the check of cpus_booted_once_mask in
seamrr_enabled() (patch 04).
- Changed documentation patch to add TDX host kernel support materials
to Documentation/x86/tdx.rst together with TDX guest stuff, instead
of a standalone file (patch 21).
- Very minor improvement in commit messages.

v2: https://lore.kernel.org/lkml/[email protected]/T/

- RFC (v1) -> v2:
- Rebased to Kirill's latest TDX guest code.
- Fixed two issues that are related to finding all RAM memory regions
based on e820.
- Minor improvement on comments and commit messages.

v1: https://lore.kernel.org/lkml/[email protected]/T/

== Background ==

TDX introduces a new CPU mode called Secure Arbitration Mode (SEAM)
and a new isolated range pointed to by the SEAM Range Register (SEAMRR).
A CPU-attested software module called 'the TDX module' runs in the new
isolated range as a trusted hypervisor to create/run protected VMs.

TDX also leverages Intel Multi-Key Total Memory Encryption (MKTME) to
provide crypto-protection to the VMs. TDX reserves part of MKTME KeyIDs
as TDX private KeyIDs, which are only accessible within the SEAM mode.

TDX is different from AMD SEV/SEV-ES/SEV-SNP, which uses a dedicated
secure processor to provide crypto-protection. The firmware running on
the secure processor plays a similar role to the TDX module.

The host kernel communicates with SEAM software via a new SEAMCALL
instruction. This is conceptually similar to a guest->host hypercall,
except it is made from the host to SEAM software instead.

Before being able to manage TD guests, the TDX module must be loaded
and properly initialized. This series assumes the TDX module is loaded
by BIOS before the kernel boots.

How to initialize the TDX module is described in the TDX module 1.0
specification, chapter "13. Intel TDX Module Lifecycle: Enumeration,
Initialization and Shutdown".

== Design Considerations ==

1. Initialize the TDX module at runtime

There are basically two ways the TDX module could be initialized: either
in early boot, or at runtime before the first TDX guest is run. This
series implements the runtime initialization.

This series adds a function tdx_enable() to allow the caller to initialize
TDX at runtime:

        if (tdx_enable())
                goto no_tdx;
        // TDX is ready to create TD guests.

This approach has the following pros:

1) Initializing the TDX module requires reserving ~1/256th of system RAM
as metadata (e.g. roughly 4GB on a 1TB machine). Enabling TDX on demand
means this memory is only consumed when TDX is truly needed (i.e. when
KVM wants to create TD guests).

2) SEAMCALL requires the CPU to already be in VMX operation (VMXON has
been done). So far, KVM is the only user of TDX, and it already handles
VMXON. Letting KVM initialize TDX avoids handling VMXON in the core
kernel (see the sketch after this list).

3) It is more flexible for supporting "TDX module runtime update" (not in
this series). After updating to a new module at runtime, the kernel needs
to go through the initialization process again.
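
For illustration only, below is a rough sketch of the expected caller-side
flow under this on-demand approach. Only tdx_enable() comes from this
series; the hardware_enable_all()/hardware_disable_all() names merely
stand in for KVM's existing VMXON handling:

        static int kvm_enable_tdx(void)
        {
                int ret;

                /* Put all online cpus into VMX operation (VMXON) first */
                ret = hardware_enable_all();
                if (ret)
                        return ret;

                /* Then initialize the TDX module on demand */
                ret = tdx_enable();
                if (ret)
                        hardware_disable_all();

                return ret;
        }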

2. CPU hotplug

The TDX module requires one SEAMCALL (TDH.SYS.LP.INIT) to do per-cpu
module initialization on each cpu before any other SEAMCALL can be made
on that cpu, including those involved in the module initialization
itself.

Currently the kernel simply guarantees all online cpus are "TDX-runnable"
(TDH.SYS.LP.INIT has been done successfully on them). During module
initialization, the SEAMCALL is done on all online cpus and CPU hotplug
is disabled for the entire module initialization. If any of them fails,
TDX is disabled. For CPU hotplug, the kernel provides another function,
tdx_cpu_online(), for the user of TDX (KVM for now) to call in its own
CPU online callback, and to reject onlining the cpu if the SEAMCALL
fails (a sketch follows at the end of this section).

TDX doesn't support physical (ACPI) CPU hotplug. A non-buggy BIOS should
never support hotpluggable CPU devices and/or deliver ACPI CPU hotplug
events to the kernel. This series doesn't handle physical (ACPI) CPU
hotplug at all but depends on the BIOS to behave correctly.

Note TDX works with CPU logical online/offline, thus this series still
allows logical CPU online/offline.
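
For illustration only, the TDX user's CPU online callback could look
roughly like below (the hardware_enable() name is a placeholder for KVM's
existing per-cpu VMXON helper; only tdx_cpu_online() comes from this
series):

        static int kvm_online_cpu(unsigned int cpu)
        {
                int ret;

                /* Do VMXON on this cpu first (KVM's existing duty) */
                ret = hardware_enable();
                if (ret)
                        return ret;

                /* Refuse to online the cpu if per-cpu TDX init fails */
                return tdx_cpu_online(cpu);
        }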

3. Kernel policy on TDX memory

The TDX module reports a list of "Convertible Memory Regions" (CMRs) to
indicate which memory regions are TDX-capable. The TDX architecture
allows the VMM to designate specific convertible memory regions as usable
for TDX private memory.

The initial support of TDX guests will only allocate TDX private memory
from the global page allocator. This series chooses to designate _all_
system RAM in the core-mm at the time of initializing the TDX module as
TDX memory to guarantee all pages in the page allocator are TDX pages.

4. Memory Hotplug

After the kernel passes all "TDX-usable" memory regions to the TDX
module, the set of "TDX-usable" memory regions is fixed for the module's
runtime. No more "TDX-usable" memory can be added to the TDX module
after that.

To achieve the above guarantee that all pages in the page allocator are
TDX pages, this series simply chooses to reject any non-TDX-usable memory
in memory hotplug, as sketched below.
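
As a rough illustration only (the actual code lives in the memory hotplug
handling later in this series; the is_tdx_memory() helper name here is
just an assumption), the rejection boils down to a memory notifier like:

        static int tdx_memory_notifier(struct notifier_block *nb,
                                       unsigned long action, void *v)
        {
                struct memory_notify *mn = v;

                if (action != MEM_GOING_ONLINE)
                        return NOTIFY_OK;

                /*
                 * Reject onlining any memory range that is not covered
                 * by the "TDX-usable" regions given to the TDX module.
                 */
                return is_tdx_memory(mn->start_pfn, mn->start_pfn + mn->nr_pages) ?
                                NOTIFY_OK : NOTIFY_BAD;
        }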

This simple policy _will_ be enhanced after the first submission.

A better solution, suggested by Kirill, is similar to the per-node memory
encryption flag series [4]. We can allow adding/onlining non-TDX memory
to separate NUMA nodes so that both "TDX-capable" nodes and
"non-TDX-capable" nodes can co-exist. The new TDX flag can be exposed to
userspace via sysfs so userspace can bind TDX guests to "TDX-capable"
nodes via NUMA ABIs.

5. Physical Memory Hotplug

Note TDX assumes convertible memory is always physically present during
the machine's runtime. A non-buggy BIOS should never support hot-removal
of any convertible memory. This implementation doesn't handle ACPI memory
removal but depends on the BIOS to behave correctly.

6. Kexec()

There are two problems in terms of using kexec() to boot to a new kernel
when the old kernel has enabled TDX: 1) Part of the memory pages are
still TDX private pages; 2) There might be dirty cachelines associated
with TDX private pages.

The first problem doesn't matter. KeyID 0 doesn't have an integrity
check. Even if the new kernel wants to use a non-zero KeyID, it needs to
convert the memory to that KeyID first, and such conversion works from
any KeyID.

However, the old kernel needs to guarantee there are no dirty cachelines
left behind before booting to the new kernel, to avoid silent corruption
from later cacheline writeback (Intel hardware doesn't guarantee cache
coherency across different KeyIDs).

This series just uses wbinvd() to flush the cache in stop_this_cpu(),
following AMD SME.
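
A hedged sketch of the relevant hunk in stop_this_cpu() (the exact SME
check in the actual kernel differs slightly; this is illustrative only):

        /*
         * Flush caches if SME is active or if the BIOS has enabled TDX,
         * so no dirty cachelines of (TDX private) memory are left behind
         * across kexec().
         */
        if (cpu_feature_enabled(X86_FEATURE_SME) || platform_tdx_enabled())
                native_wbinvd();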




Kai Huang (18):
x86/tdx: Define TDX supported page sizes as macros
x86/virt/tdx: Detect TDX during kernel boot
x86/virt/tdx: Make INTEL_TDX_HOST depend on X86_X2APIC
x86/virt/tdx: Add skeleton to initialize TDX on demand
x86/virt/tdx: Add SEAMCALL infrastructure
x86/virt/tdx: Do TDX module global initialization
x86/virt/tdx: Do TDX module per-cpu initialization
x86/virt/tdx: Get information about TDX module and TDX-capable memory
x86/virt/tdx: Use all system memory when initializing TDX module as
TDX memory
x86/virt/tdx: Add placeholder to construct TDMRs to cover all TDX
memory regions
x86/virt/tdx: Fill out TDMRs to cover all TDX memory regions
x86/virt/tdx: Allocate and set up PAMTs for TDMRs
x86/virt/tdx: Designate reserved areas for all TDMRs
x86/virt/tdx: Configure TDX module with the TDMRs and global KeyID
x86/virt/tdx: Configure global KeyID on all packages
x86/virt/tdx: Initialize all TDMRs
x86/virt/tdx: Flush cache in kexec() when TDX is enabled
Documentation/x86: Add documentation for TDX host support

Documentation/x86/tdx.rst | 176 +++-
arch/x86/Kconfig | 15 +
arch/x86/Makefile | 2 +
arch/x86/coco/tdx/tdx.c | 6 +-
arch/x86/include/asm/msr-index.h | 3 +
arch/x86/include/asm/tdx.h | 25 +
arch/x86/kernel/process.c | 7 +-
arch/x86/kernel/setup.c | 2 +
arch/x86/virt/Makefile | 2 +
arch/x86/virt/vmx/Makefile | 2 +
arch/x86/virt/vmx/tdx/Makefile | 2 +
arch/x86/virt/vmx/tdx/seamcall.S | 52 ++
arch/x86/virt/vmx/tdx/tdx.c | 1440 ++++++++++++++++++++++++++++++
arch/x86/virt/vmx/tdx/tdx.h | 161 ++++
arch/x86/virt/vmx/tdx/tdxcall.S | 19 +-
15 files changed, 1897 insertions(+), 17 deletions(-)
create mode 100644 arch/x86/virt/Makefile
create mode 100644 arch/x86/virt/vmx/Makefile
create mode 100644 arch/x86/virt/vmx/tdx/Makefile
create mode 100644 arch/x86/virt/vmx/tdx/seamcall.S
create mode 100644 arch/x86/virt/vmx/tdx/tdx.c
create mode 100644 arch/x86/virt/vmx/tdx/tdx.h


base-commit: 1e70c680375aa33cca97bff0bca68c0f82f5023c
--
2.39.1



2023-02-13 12:00:47

by Huang, Kai

Subject: [PATCH v9 01/18] x86/tdx: Define TDX supported page sizes as macros

TDX supports 4K, 2M and 1G page sizes. The corresponding values are
defined by the TDX module spec and used as part of the TDX module ABI.
Currently, they are used in try_accept_one() when the TDX guest tries to
accept a page, but try_accept_one() uses hard-coded magic values.

Define the TDX supported page sizes as macros and get rid of the
hard-coded values in try_accept_one(). TDX host support will need to use
them too.

Signed-off-by: Kai Huang <[email protected]>
Reviewed-by: Kirill A. Shutemov <[email protected]>
Reviewed-by: Dave Hansen <[email protected]>
---

v8 -> v9:
- Added Dave's Reviewed-by

v7 -> v8:
- Improved the comment of TDX supported page sizes macros (Dave)

v6 -> v7:
- Removed the helper to convert kernel page level to TDX page level.
- Changed to use macro to define TDX supported page sizes.

---
arch/x86/coco/tdx/tdx.c | 6 +++---
arch/x86/include/asm/tdx.h | 5 +++++
2 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
index b593009b30ab..e27c3cd97fcb 100644
--- a/arch/x86/coco/tdx/tdx.c
+++ b/arch/x86/coco/tdx/tdx.c
@@ -777,13 +777,13 @@ static bool try_accept_one(phys_addr_t *start, unsigned long len,
*/
switch (pg_level) {
case PG_LEVEL_4K:
- page_size = 0;
+ page_size = TDX_PS_4K;
break;
case PG_LEVEL_2M:
- page_size = 1;
+ page_size = TDX_PS_2M;
break;
case PG_LEVEL_1G:
- page_size = 2;
+ page_size = TDX_PS_1G;
break;
default:
return false;
diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index 28d889c9aa16..25fd6070dc0b 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -20,6 +20,11 @@

#ifndef __ASSEMBLY__

+/* TDX supported page sizes from the TDX module ABI. */
+#define TDX_PS_4K 0
+#define TDX_PS_2M 1
+#define TDX_PS_1G 2
+
/*
* Used to gather the output registers values of the TDCALL and SEAMCALL
* instructions when requesting services from the TDX module.
--
2.39.1


2023-02-13 12:00:58

by Huang, Kai

Subject: [PATCH v9 02/18] x86/virt/tdx: Detect TDX during kernel boot

Intel Trust Domain Extensions (TDX) protects guest VMs from a malicious
host and certain physical attacks. A CPU-attested software module
called 'the TDX module' runs inside a new isolated memory range as a
trusted hypervisor to manage and run protected VMs.

Pre-TDX Intel hardware has support for a memory encryption architecture
called MKTME. The memory encryption hardware underpinning MKTME is also
used for Intel TDX. TDX ends up "stealing" some of the physical address
space from the MKTME architecture for crypto-protection to VMs. The
BIOS is responsible for partitioning the "KeyID" space between legacy
MKTME and TDX. The KeyIDs reserved for TDX are called 'TDX private
KeyIDs' or 'TDX KeyIDs' for short.

TDX doesn't trust the BIOS. During machine boot, TDX verifies the TDX
private KeyIDs are consistently and correctly programmed by the BIOS
across all CPU packages before it enables TDX on any CPU core. A valid
TDX private KeyID range on the BSP indicates TDX has been enabled by the
BIOS; otherwise the BIOS is buggy.

The TDX module is expected to be loaded by the BIOS when it enables TDX,
but the kernel needs to properly initialize it before it can be used to
create and run any TDX guests. The TDX module will be initialized by
the KVM subsystem when KVM wants to use TDX.

Add a new early_initcall(tdx_init) to detect TDX by detecting TDX
private KeyIDs. Also add a function to report whether TDX has been
enabled by the BIOS. Similar to AMD SME, kexec() will use it to determine
whether a cache flush is needed.

The TDX module itself requires one TDX KeyID as the 'TDX global KeyID'
to protect its metadata. Each TDX guest also needs a TDX KeyID for its
own protection. Just use the first TDX KeyID as the global KeyID and
leave the rest for TDX guests. If no TDX KeyID is left for TDX guests,
disable TDX as initializing the TDX module alone is useless.

To start to support TDX, create a new arch/x86/virt/vmx/tdx/tdx.c for
TDX host kernel support. Add a new Kconfig option CONFIG_INTEL_TDX_HOST
to opt in to TDX host kernel support (to distinguish it from TDX guest
kernel support). So far, only KVM uses TDX. Make the new config option
depend on KVM_INTEL.

Signed-off-by: Kai Huang <[email protected]>
Reviewed-by: Kirill A. Shutemov <[email protected]>
---

v8 -> v9:
- Moved MSR macro from local tdx.h to <asm/msr-index.h> (Dave).
- Moved reserving the TDX global KeyID from later patch to here.
- Changed 'tdx_keyid_start' and 'nr_tdx_keyids' to
'tdx_guest_keyid_start' and 'tdx_nr_guest_keyids' to represent the
KeyIDs that can be used by guests (Dave).
- Slight changelog update according to above changes.

v7 -> v8: (address Dave's comments)
- Improved changelog:
- "KVM user" -> "The TDX module will be initialized by KVM when ..."
- Changed "tdx_int" part to "Just say what this patch is doing"
- Fixed the last sentence of "kexec()" paragraph
- detect_tdx() -> record_keyid_partitioning()
- Improved how to calculate tdx_keyid_start.
- tdx_keyid_num -> nr_tdx_keyids.
- Improved dmesg printing.
- Add comment to clear_tdx().

v6 -> v7:
- No change.

v5 -> v6:
- Removed SEAMRR detection to make code simpler.
- Removed the 'default N' in the KVM_TDX_HOST Kconfig (Kirill).
- Changed to use 'obj-y' in arch/x86/virt/vmx/tdx/Makefile (Kirill).

---
arch/x86/Kconfig | 12 ++++
arch/x86/Makefile | 2 +
arch/x86/include/asm/msr-index.h | 3 +
arch/x86/include/asm/tdx.h | 7 +++
arch/x86/virt/Makefile | 2 +
arch/x86/virt/vmx/Makefile | 2 +
arch/x86/virt/vmx/tdx/Makefile | 2 +
arch/x86/virt/vmx/tdx/tdx.c | 105 +++++++++++++++++++++++++++++++
8 files changed, 135 insertions(+)
create mode 100644 arch/x86/virt/Makefile
create mode 100644 arch/x86/virt/vmx/Makefile
create mode 100644 arch/x86/virt/vmx/tdx/Makefile
create mode 100644 arch/x86/virt/vmx/tdx/tdx.c

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 3604074a878b..fc010973a6ff 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1952,6 +1952,18 @@ config X86_SGX

If unsure, say N.

+config INTEL_TDX_HOST
+ bool "Intel Trust Domain Extensions (TDX) host support"
+ depends on CPU_SUP_INTEL
+ depends on X86_64
+ depends on KVM_INTEL
+ help
+ Intel Trust Domain Extensions (TDX) protects guest VMs from malicious
+ host and certain physical attacks. This option enables necessary TDX
+ support in host kernel to run protected VMs.
+
+ If unsure, say N.
+
config EFI
bool "EFI runtime service support"
depends on ACPI
diff --git a/arch/x86/Makefile b/arch/x86/Makefile
index 9cf07322875a..972b5a64ce38 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
@@ -252,6 +252,8 @@ archheaders:

libs-y += arch/x86/lib/

+core-y += arch/x86/virt/
+
# drivers-y are linked after core-y
drivers-$(CONFIG_MATH_EMULATION) += arch/x86/math-emu/
drivers-$(CONFIG_PCI) += arch/x86/pci/
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 37ff47552bcb..952374ddb167 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -512,6 +512,9 @@
#define MSR_RELOAD_PMC0 0x000014c1
#define MSR_RELOAD_FIXED_CTR0 0x00001309

+/* KeyID partitioning between MKTME and TDX */
+#define MSR_IA32_MKTME_KEYID_PARTITIONING 0x00000087
+
/*
* AMD64 MSRs. Not complete. See the architecture manual for a more
* complete list.
diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index 25fd6070dc0b..4dfe2e794411 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -94,5 +94,12 @@ static inline long tdx_kvm_hypercall(unsigned int nr, unsigned long p1,
return -ENODEV;
}
#endif /* CONFIG_INTEL_TDX_GUEST && CONFIG_KVM_GUEST */
+
+#ifdef CONFIG_INTEL_TDX_HOST
+bool platform_tdx_enabled(void);
+#else /* !CONFIG_INTEL_TDX_HOST */
+static inline bool platform_tdx_enabled(void) { return false; }
+#endif /* CONFIG_INTEL_TDX_HOST */
+
#endif /* !__ASSEMBLY__ */
#endif /* _ASM_X86_TDX_H */
diff --git a/arch/x86/virt/Makefile b/arch/x86/virt/Makefile
new file mode 100644
index 000000000000..1e36502cd738
--- /dev/null
+++ b/arch/x86/virt/Makefile
@@ -0,0 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0-only
+obj-y += vmx/
diff --git a/arch/x86/virt/vmx/Makefile b/arch/x86/virt/vmx/Makefile
new file mode 100644
index 000000000000..feebda21d793
--- /dev/null
+++ b/arch/x86/virt/vmx/Makefile
@@ -0,0 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0-only
+obj-$(CONFIG_INTEL_TDX_HOST) += tdx/
diff --git a/arch/x86/virt/vmx/tdx/Makefile b/arch/x86/virt/vmx/tdx/Makefile
new file mode 100644
index 000000000000..93ca8b73e1f1
--- /dev/null
+++ b/arch/x86/virt/vmx/tdx/Makefile
@@ -0,0 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0-only
+obj-y += tdx.o
diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
new file mode 100644
index 000000000000..a600b5d0879d
--- /dev/null
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -0,0 +1,105 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright(c) 2023 Intel Corporation.
+ *
+ * Intel Trusted Domain Extensions (TDX) support
+ */
+
+#define pr_fmt(fmt) "tdx: " fmt
+
+#include <linux/types.h>
+#include <linux/cache.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/printk.h>
+#include <asm/msr-index.h>
+#include <asm/msr.h>
+#include <asm/tdx.h>
+
+static u32 tdx_global_keyid __ro_after_init;
+static u32 tdx_guest_keyid_start __ro_after_init;
+static u32 tdx_nr_guest_keyids __ro_after_init;
+
+/*
+ * Use tdx_global_keyid to indicate that TDX is uninitialized.
+ * This is used in TDX initialization error paths to take it from
+ * initialized -> uninitialized.
+ */
+static void __init clear_tdx(void)
+{
+ tdx_global_keyid = 0;
+}
+
+static int __init record_keyid_partitioning(u32 *tdx_keyid_start,
+ u32 *nr_tdx_keyids)
+{
+ u32 _nr_mktme_keyids, _tdx_keyid_start, _nr_tdx_keyids;
+ int ret;
+
+ /*
+ * IA32_MKTME_KEYID_PARTITIONING:
+ * Bit [31:0]: Number of MKTME KeyIDs.
+ * Bit [63:32]: Number of TDX private KeyIDs.
+ */
+ ret = rdmsr_safe(MSR_IA32_MKTME_KEYID_PARTITIONING, &_nr_mktme_keyids,
+ &_nr_tdx_keyids);
+ if (ret)
+ return -ENODEV;
+
+ if (!_nr_tdx_keyids)
+ return -ENODEV;
+
+ /* TDX KeyIDs start after the last MKTME KeyID. */
+ _tdx_keyid_start = _nr_mktme_keyids + 1;
+
+ *tdx_keyid_start = _tdx_keyid_start;
+ *nr_tdx_keyids = _nr_tdx_keyids;
+
+ return 0;
+}
+
+static int __init tdx_init(void)
+{
+ u32 tdx_keyid_start, nr_tdx_keyids;
+ int err;
+
+ err = record_keyid_partitioning(&tdx_keyid_start, &nr_tdx_keyids);
+ if (err)
+ return err;
+
+ pr_info("BIOS enabled: private KeyID range [%u, %u)\n",
+ tdx_keyid_start, tdx_keyid_start + nr_tdx_keyids);
+
+ /*
+ * The TDX module itself requires one 'TDX global KeyID' to
+ * protect its metadata. Just use the first one.
+ */
+ tdx_global_keyid = tdx_keyid_start;
+ tdx_keyid_start++;
+ nr_tdx_keyids--;
+
+ /*
+ * If there's no more TDX KeyID left, KVM won't be able to run
+ * any TDX guest. Disable TDX in this case as initializing the
+ * TDX module alone is meaningless.
+ */
+ if (!nr_tdx_keyids) {
+ pr_info("initialization failed: too few private KeyIDs available.\n");
+ goto no_tdx;
+ }
+
+ tdx_guest_keyid_start = tdx_keyid_start;
+ tdx_nr_guest_keyids = nr_tdx_keyids;
+
+ return 0;
+no_tdx:
+ clear_tdx();
+ return -ENODEV;
+}
+early_initcall(tdx_init);
+
+/* Return whether the BIOS has enabled TDX */
+bool platform_tdx_enabled(void)
+{
+ return !!tdx_global_keyid;
+}
--
2.39.1


2023-02-13 12:01:04

by Huang, Kai

Subject: [PATCH v9 03/18] x86/virt/tdx: Make INTEL_TDX_HOST depend on X86_X2APIC

TDX capable platforms are locked to X2APIC mode and cannot fall back to
the legacy xAPIC mode when TDX is enabled by the BIOS. TDX host support
requires x2APIC. Make INTEL_TDX_HOST depend on X86_X2APIC.

Link: https://lore.kernel.org/lkml/[email protected]/
Signed-off-by: Kai Huang <[email protected]>
Reviewed-by: Dave Hansen <[email protected]>
---

v8 -> v9:
- Added Dave's Reviewed-by.

v7 -> v8: (Dave)
- Only make INTEL_TDX_HOST depend on X86_X2APIC but removed other code
- Rewrote the changelog.

v6 -> v7:
- Changed to use "Link" for the two lore links to get rid of checkpatch
warning.

---
arch/x86/Kconfig | 1 +
1 file changed, 1 insertion(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index fc010973a6ff..6dd5d5586099 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1957,6 +1957,7 @@ config INTEL_TDX_HOST
depends on CPU_SUP_INTEL
depends on X86_64
depends on KVM_INTEL
+ depends on X86_X2APIC
help
Intel Trust Domain Extensions (TDX) protects guest VMs from malicious
host and certain physical attacks. This option enables necessary TDX
--
2.39.1


2023-02-13 12:01:20

by Huang, Kai

Subject: [PATCH v9 04/18] x86/virt/tdx: Add skeleton to initialize TDX on demand

Before the TDX module can be used to create and run TDX guests, it must
be loaded and properly initialized. The TDX module is expected to be
loaded by the BIOS, and to be initialized by the kernel.

TDX introduces a new CPU mode: Secure Arbitration Mode (SEAM). The host
kernel communicates with the TDX module via a new SEAMCALL instruction.
The TDX module implements a set of SEAMCALL leaf functions to allow the
host kernel to initialize it.

The TDX module can be initialized only once in its lifetime. Instead
of always initializing it at boot time, this implementation chooses an
"on demand" approach to defer initializing TDX until there is a real
need (e.g. when requested by KVM). This approach has the following pros:

1) It avoids consuming the memory that must be allocated by the kernel and
given to the TDX module as metadata (~1/256th of the TDX-usable memory),
and also saves the CPU cycles of initializing the TDX module (and the
metadata) when TDX is not used at all.

2) The TDX module design allows it to be updated while the system is
running. The update procedure shares quite a few steps with this "on
demand" initialization mechanism. The hope is that much of "on demand"
mechanism can be shared with a future "update" mechanism. A boot-time
TDX module implementation would not be able to share much code with the
update mechanism.

3) Loading the TDX module requires VMX to be enabled. Currently, only
the kernel KVM code mucks with VMX enabling. If the TDX module were to
be initialized separately from KVM (like at boot), the boot code would
need to be taught how to muck with VMX enabling and KVM would need to be
taught how to cope with that. Making KVM itself responsible for TDX
initialization lets the rest of the kernel stay blissfully unaware of
VMX.

Add a placeholder tdx_enable() to initialize the TDX module on demand.
The TODO list will be pared down as functionality is added.

Use a state machine protected by a mutex to make sure the initialization
will only be done once, as tdx_enable() can be called multiple times
(e.g. the KVM module can be reloaded) and may be called concurrently by
other kernel components in the future.

Also introduce a local tdx.h to hold all TDX architectural and kernel
defined structures and declarations used by module initialization.

Signed-off-by: Kai Huang <[email protected]>
Reviewed-by: Chao Gao <[email protected]>
---

v8 -> v9:
- Removed detailed TODO list in the changelog (Dave).
- Added back steps to do module global initialization and per-cpu
initialization in the TODO list comment.
- Moved the 'enum tdx_module_status_t' from tdx.c to local tdx.h

v7 -> v8:
- Refined changelog (Dave).
- Removed "all BIOS-enabled cpus" related code (Peter/Thomas/Dave).
- Add a "TODO list" comment in init_tdx_module() to list all steps of
initializing the TDX Module to tell the story (Dave).
- Made tdx_enable() universally return -EINVAL, and removed nonsense
comments (Dave).
- Simplified __tdx_enable() to only handle success or failure.
- TDX_MODULE_SHUTDOWN -> TDX_MODULE_ERROR
- Removed TDX_MODULE_NONE (not loaded) as it is not necessary.
- Improved comments (Dave).
- Pointed out 'tdx_module_status' is a software thing (Dave).

v6 -> v7:
- No change.

v5 -> v6:
- Added code to set status to TDX_MODULE_NONE if TDX module is not
loaded (Chao)
- Added Chao's Reviewed-by.
- Improved comments around cpus_read_lock().

- v3->v5 (no feedback on v4):
- Removed the check that SEAMRR and TDX KeyID have been detected on
all present cpus.
- Removed tdx_detect().
- Added num_online_cpus() to MADT-enabled CPUs check within the CPU
hotplug lock and return early with error message.
- Improved dmesg printing for TDX module detection and initialization.

---
arch/x86/include/asm/tdx.h | 2 +
arch/x86/virt/vmx/tdx/tdx.c | 89 +++++++++++++++++++++++++++++++++++++
arch/x86/virt/vmx/tdx/tdx.h | 12 +++++
3 files changed, 103 insertions(+)
create mode 100644 arch/x86/virt/vmx/tdx/tdx.h

diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index 4dfe2e794411..4a3ee64c1ca7 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -97,8 +97,10 @@ static inline long tdx_kvm_hypercall(unsigned int nr, unsigned long p1,

#ifdef CONFIG_INTEL_TDX_HOST
bool platform_tdx_enabled(void);
+int tdx_enable(void);
#else /* !CONFIG_INTEL_TDX_HOST */
static inline bool platform_tdx_enabled(void) { return false; }
+static inline int tdx_enable(void) { return -EINVAL; }
#endif /* CONFIG_INTEL_TDX_HOST */

#endif /* !__ASSEMBLY__ */
diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index a600b5d0879d..f5a20d56097c 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -12,14 +12,20 @@
#include <linux/init.h>
#include <linux/errno.h>
#include <linux/printk.h>
+#include <linux/mutex.h>
#include <asm/msr-index.h>
#include <asm/msr.h>
#include <asm/tdx.h>
+#include "tdx.h"

static u32 tdx_global_keyid __ro_after_init;
static u32 tdx_guest_keyid_start __ro_after_init;
static u32 tdx_nr_guest_keyids __ro_after_init;

+static enum tdx_module_status_t tdx_module_status;
+/* Prevent concurrent attempts on TDX module initialization */
+static DEFINE_MUTEX(tdx_module_lock);
+
/*
* Use tdx_global_keyid to indicate that TDX is uninitialized.
* This is used in TDX initialization error paths to take it from
@@ -103,3 +109,86 @@ bool platform_tdx_enabled(void)
{
return !!tdx_global_keyid;
}
+
+static int init_tdx_module(void)
+{
+ /*
+ * TODO:
+ *
+ * - TDX module global initialization.
+ * - TDX module per-cpu initialization.
+ * - Get TDX module information and TDX-capable memory regions.
+ * - Build the list of TDX-usable memory regions.
+ * - Construct a list of "TD Memory Regions" (TDMRs) to cover
+ * all TDX-usable memory regions.
+ * - Configure the TDMRs and the global KeyID to the TDX module.
+ * - Configure the global KeyID on all packages.
+ * - Initialize all TDMRs.
+ *
+ * Return error before all steps are done.
+ */
+ return -EINVAL;
+}
+
+static int __tdx_enable(void)
+{
+ int ret;
+
+ ret = init_tdx_module();
+ if (ret) {
+ pr_err("initialization failed (%d)\n", ret);
+ tdx_module_status = TDX_MODULE_ERROR;
+ /*
+ * Just return one universal error code.
+ * For now the caller cannot recover anyway.
+ */
+ return -EINVAL;
+ }
+
+ pr_info("TDX module initialized.\n");
+ tdx_module_status = TDX_MODULE_INITIALIZED;
+
+ return 0;
+}
+
+/**
+ * tdx_enable - Enable TDX to be ready to run TDX guests
+ *
+ * Initialize the TDX module to enable TDX. After this function, the TDX
+ * module is ready to create and run TDX guests.
+ *
+ * This function assumes all online cpus are already in VMX operation.
+ * This function can be called in parallel by multiple callers.
+ *
+ * Return 0 if TDX is enabled successfully, otherwise error.
+ */
+int tdx_enable(void)
+{
+ int ret;
+
+ if (!platform_tdx_enabled()) {
+ pr_err_once("initialization failed: TDX is disabled.\n");
+ return -EINVAL;
+ }
+
+ mutex_lock(&tdx_module_lock);
+
+ switch (tdx_module_status) {
+ case TDX_MODULE_UNKNOWN:
+ ret = __tdx_enable();
+ break;
+ case TDX_MODULE_INITIALIZED:
+ /* Already initialized, great, tell the caller. */
+ ret = 0;
+ break;
+ default:
+ /* Failed to initialize in the previous attempts */
+ ret = -EINVAL;
+ break;
+ }
+
+ mutex_unlock(&tdx_module_lock);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(tdx_enable);
diff --git a/arch/x86/virt/vmx/tdx/tdx.h b/arch/x86/virt/vmx/tdx/tdx.h
new file mode 100644
index 000000000000..881cca276956
--- /dev/null
+++ b/arch/x86/virt/vmx/tdx/tdx.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _X86_VIRT_TDX_H
+#define _X86_VIRT_TDX_H
+
+/* Kernel defined TDX module status during module initialization. */
+enum tdx_module_status_t {
+ TDX_MODULE_UNKNOWN,
+ TDX_MODULE_INITIALIZED,
+ TDX_MODULE_ERROR
+};
+
+#endif
--
2.39.1


2023-02-13 12:01:37

by Huang, Kai

Subject: [PATCH v9 05/18] x86/virt/tdx: Add SEAMCALL infrastructure

TDX introduces a new CPU mode: Secure Arbitration Mode (SEAM). This
mode runs only the TDX module itself or other code to load the TDX
module.

The host kernel communicates with SEAM software via a new SEAMCALL
instruction. This is conceptually similar to a guest->host hypercall,
except it is made from the host to SEAM software instead. The TDX
module establishes a new SEAMCALL ABI which allows the host to
initialize the module and to manage VMs.

Add infrastructure to make SEAMCALLs. The SEAMCALL ABI is very similar
to the TDCALL ABI and leverages much TDCALL infrastructure.

The SEAMCALL instruction causes #GP when TDX isn't enabled by the BIOS,
and #UD when the CPU is not in VMX operation. The current TDX_MODULE_CALL
macro doesn't handle either of them. There's no way to check whether the
CPU is in VMX operation or not.

Initializing the TDX module is done at runtime on demand, and it depends
on the caller to ensure the CPU is in VMX operation before making a
SEAMCALL. To avoid getting an Oops when the caller mistakenly tries to
initialize the TDX module while the CPU is not in VMX operation, extend
the TDX_MODULE_CALL macro to handle #UD (and opportunistically #GP since
they share the same assembly).

Introduce two new TDX error codes for #UD and #GP respectively so the
caller can distinguish them. Also, opportunistically put the new TDX
error codes and the existing TDX_SEAMCALL_VMFAILINVALID under the
INTEL_TDX_HOST Kconfig option as they are only used when it is on.

Any failure during the module initialization is not recoverable for now.
Print out an error message when a SEAMCALL fails, depending on the error
code, to help the user understand what went wrong.

Signed-off-by: Kai Huang <[email protected]>
---

v8 -> v9:
- Changed patch title (Dave).
- Enhanced seamcall() to include the cpu id in the error message when a
SEAMCALL fails.

v7 -> v8:
- Improved changelog (Dave):
- Trim down some sentences (Dave).
- Removed __seamcall() and seamcall() function name and changed
accordingly (Dave).
- Improved the sentence explaining why to handle #GP (Dave).
- Added code to print out an error message in seamcall(), following
the idea that tdx_enable() returns a universal error and prints out an
error message to make clear what's going wrong (Dave). Also mention
this in the changelog.

v6 -> v7:
- No change.

v5 -> v6:
- Added code to handle #UD and #GP (Dave).
- Moved the seamcall() wrapper function to this patch, and used a
temporary __always_unused to avoid compile warning (Dave).

- v3 -> v5 (no feedback on v4):
- Explicitly tell TDX_SEAMCALL_VMFAILINVALID is returned if the
SEAMCALL itself fails.
- Improve the changelog.

---
arch/x86/include/asm/tdx.h | 9 +++++
arch/x86/virt/vmx/tdx/Makefile | 2 +-
arch/x86/virt/vmx/tdx/seamcall.S | 52 +++++++++++++++++++++++++++
arch/x86/virt/vmx/tdx/tdx.c | 60 ++++++++++++++++++++++++++++++++
arch/x86/virt/vmx/tdx/tdx.h | 5 +++
arch/x86/virt/vmx/tdx/tdxcall.S | 19 ++++++++--
6 files changed, 144 insertions(+), 3 deletions(-)
create mode 100644 arch/x86/virt/vmx/tdx/seamcall.S

diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index 4a3ee64c1ca7..5c5ecfddb15b 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -8,6 +8,10 @@
#include <asm/ptrace.h>
#include <asm/shared/tdx.h>

+#ifdef CONFIG_INTEL_TDX_HOST
+
+#include <asm/trapnr.h>
+
/*
* SW-defined error codes.
*
@@ -18,6 +22,11 @@
#define TDX_SW_ERROR (TDX_ERROR | GENMASK_ULL(47, 40))
#define TDX_SEAMCALL_VMFAILINVALID (TDX_SW_ERROR | _UL(0xFFFF0000))

+#define TDX_SEAMCALL_GP (TDX_SW_ERROR | X86_TRAP_GP)
+#define TDX_SEAMCALL_UD (TDX_SW_ERROR | X86_TRAP_UD)
+
+#endif
+
#ifndef __ASSEMBLY__

/* TDX supported page sizes from the TDX module ABI. */
diff --git a/arch/x86/virt/vmx/tdx/Makefile b/arch/x86/virt/vmx/tdx/Makefile
index 93ca8b73e1f1..38d534f2c113 100644
--- a/arch/x86/virt/vmx/tdx/Makefile
+++ b/arch/x86/virt/vmx/tdx/Makefile
@@ -1,2 +1,2 @@
# SPDX-License-Identifier: GPL-2.0-only
-obj-y += tdx.o
+obj-y += tdx.o seamcall.o
diff --git a/arch/x86/virt/vmx/tdx/seamcall.S b/arch/x86/virt/vmx/tdx/seamcall.S
new file mode 100644
index 000000000000..f81be6b9c133
--- /dev/null
+++ b/arch/x86/virt/vmx/tdx/seamcall.S
@@ -0,0 +1,52 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#include <linux/linkage.h>
+#include <asm/frame.h>
+
+#include "tdxcall.S"
+
+/*
+ * __seamcall() - Host-side interface functions to SEAM software module
+ * (the P-SEAMLDR or the TDX module).
+ *
+ * Transform function call register arguments into the SEAMCALL register
+ * ABI. Return TDX_SEAMCALL_VMFAILINVALID if the SEAMCALL itself fails,
+ * or the completion status of the SEAMCALL leaf function. Additional
+ * output operands are saved in @out (if it is provided by the caller).
+ *
+ *-------------------------------------------------------------------------
+ * SEAMCALL ABI:
+ *-------------------------------------------------------------------------
+ * Input Registers:
+ *
+ * RAX - SEAMCALL Leaf number.
+ * RCX,RDX,R8-R9 - SEAMCALL Leaf specific input registers.
+ *
+ * Output Registers:
+ *
+ * RAX - SEAMCALL completion status code.
+ * RCX,RDX,R8-R11 - SEAMCALL Leaf specific output registers.
+ *
+ *-------------------------------------------------------------------------
+ *
+ * __seamcall() function ABI:
+ *
+ * @fn (RDI) - SEAMCALL Leaf number, moved to RAX
+ * @rcx (RSI) - Input parameter 1, moved to RCX
+ * @rdx (RDX) - Input parameter 2, moved to RDX
+ * @r8 (RCX) - Input parameter 3, moved to R8
+ * @r9 (R8) - Input parameter 4, moved to R9
+ *
+ * @out (R9) - struct tdx_module_output pointer
+ * stored temporarily in R12 (not
+ * used by the P-SEAMLDR or the TDX
+ * module). It can be NULL.
+ *
+ * Return (via RAX) the completion status of the SEAMCALL, or
+ * TDX_SEAMCALL_VMFAILINVALID.
+ */
+SYM_FUNC_START(__seamcall)
+ FRAME_BEGIN
+ TDX_MODULE_CALL host=1
+ FRAME_END
+ RET
+SYM_FUNC_END(__seamcall)
diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index f5a20d56097c..5ae3d71b70b4 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -110,6 +110,66 @@ bool platform_tdx_enabled(void)
return !!tdx_global_keyid;
}

+/*
+ * Wrapper of __seamcall() to convert SEAMCALL leaf function error code
+ * to kernel error code. @seamcall_ret and @out contain the SEAMCALL
+ * leaf function return code and the additional output respectively if
+ * not NULL.
+ */
+static int __always_unused seamcall(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9,
+ u64 *seamcall_ret,
+ struct tdx_module_output *out)
+{
+ int cpu, ret = 0;
+ u64 sret;
+
+ /* Need a stable CPU id for printing error message */
+ cpu = get_cpu();
+
+ sret = __seamcall(fn, rcx, rdx, r8, r9, out);
+
+ /* Save SEAMCALL return code if the caller wants it */
+ if (seamcall_ret)
+ *seamcall_ret = sret;
+
+ /* SEAMCALL was successful */
+ if (!sret)
+ goto out;
+
+ switch (sret) {
+ case TDX_SEAMCALL_GP:
+ /*
+ * tdx_enable() has already checked that BIOS has
+ * enabled TDX at the very beginning before going
+ * forward. It's likely a firmware bug if the
+ * SEAMCALL still caused #GP.
+ */
+ pr_err_once("[firmware bug]: TDX is not enabled by BIOS.\n");
+ ret = -ENODEV;
+ break;
+ case TDX_SEAMCALL_VMFAILINVALID:
+ pr_err_once("TDX module is not loaded.\n");
+ ret = -ENODEV;
+ break;
+ case TDX_SEAMCALL_UD:
+ pr_err_once("SEAMCALL failed: CPU %d is not in VMX operation.\n",
+ cpu);
+ ret = -EINVAL;
+ break;
+ default:
+ pr_err_once("SEAMCALL failed: CPU %d: leaf %llu, error 0x%llx.\n",
+ cpu, fn, sret);
+ if (out)
+ pr_err_once("additional output: rcx 0x%llx, rdx 0x%llx, r8 0x%llx, r9 0x%llx, r10 0x%llx, r11 0x%llx.\n",
+ out->rcx, out->rdx, out->r8,
+ out->r9, out->r10, out->r11);
+ ret = -EIO;
+ }
+out:
+ put_cpu();
+ return ret;
+}
+
static int init_tdx_module(void)
{
/*
diff --git a/arch/x86/virt/vmx/tdx/tdx.h b/arch/x86/virt/vmx/tdx/tdx.h
index 881cca276956..931a50f0f44c 100644
--- a/arch/x86/virt/vmx/tdx/tdx.h
+++ b/arch/x86/virt/vmx/tdx/tdx.h
@@ -2,6 +2,8 @@
#ifndef _X86_VIRT_TDX_H
#define _X86_VIRT_TDX_H

+#include <linux/types.h>
+
/* Kernel defined TDX module status during module initialization. */
enum tdx_module_status_t {
TDX_MODULE_UNKNOWN,
@@ -9,4 +11,7 @@ enum tdx_module_status_t {
TDX_MODULE_ERROR
};

+struct tdx_module_output;
+u64 __seamcall(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9,
+ struct tdx_module_output *out);
#endif
diff --git a/arch/x86/virt/vmx/tdx/tdxcall.S b/arch/x86/virt/vmx/tdx/tdxcall.S
index 49a54356ae99..757b0c34be10 100644
--- a/arch/x86/virt/vmx/tdx/tdxcall.S
+++ b/arch/x86/virt/vmx/tdx/tdxcall.S
@@ -1,6 +1,7 @@
/* SPDX-License-Identifier: GPL-2.0 */
#include <asm/asm-offsets.h>
#include <asm/tdx.h>
+#include <asm/asm.h>

/*
* TDCALL and SEAMCALL are supported in Binutils >= 2.36.
@@ -45,6 +46,7 @@
/* Leave input param 2 in RDX */

.if \host
+1:
seamcall
/*
* SEAMCALL instruction is essentially a VMExit from VMX root
@@ -57,10 +59,23 @@
* This value will never be used as actual SEAMCALL error code as
* it is from the Reserved status code class.
*/
- jnc .Lno_vmfailinvalid
+ jnc .Lseamcall_out
mov $TDX_SEAMCALL_VMFAILINVALID, %rax
-.Lno_vmfailinvalid:
+ jmp .Lseamcall_out
+2:
+ /*
+ * SEAMCALL caused #GP or #UD. By reaching here %eax contains
+ * the trap number. Convert the trap number to the TDX error
+ * code by setting TDX_SW_ERROR to the high 32-bits of %rax.
+ *
+ * Note cannot OR TDX_SW_ERROR directly to %rax as OR instruction
+ * only accepts 32-bit immediate at most.
+ */
+ mov $TDX_SW_ERROR, %r12
+ orq %r12, %rax

+ _ASM_EXTABLE_FAULT(1b, 2b)
+.Lseamcall_out:
.else
tdcall
.endif
--
2.39.1


2023-02-13 12:01:43

by Huang, Kai

Subject: [PATCH v9 06/18] x86/virt/tdx: Do TDX module global initialization

Start to work through the "multi-steps" of initializing the TDX module
as listed in the skeleton infrastructure. Do the first step: module
global initialization, which is one SEAMCALL on any logical cpu.

Signed-off-by: Kai Huang <[email protected]>
Reviewed-by: Isaku Yamahata <[email protected]>
---

v8 -> v9:
- Added this patch back.

---
arch/x86/virt/vmx/tdx/tdx.c | 11 ++++++++++-
arch/x86/virt/vmx/tdx/tdx.h | 12 ++++++++++++
2 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index 5ae3d71b70b4..79cee28c51b5 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -172,10 +172,19 @@ static int __always_unused seamcall(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9,

static int init_tdx_module(void)
{
+ int ret;
+
+ /*
+ * TDX module global initialization. All '0's are just
+ * unused parameters.
+ */
+ ret = seamcall(TDH_SYS_INIT, 0, 0, 0, 0, NULL, NULL);
+ if (ret)
+ return ret;
+
/*
* TODO:
*
- * - TDX module global initialization.
* - TDX module per-cpu initialization.
* - Get TDX module information and TDX-capable memory regions.
* - Build the list of TDX-usable memory regions.
diff --git a/arch/x86/virt/vmx/tdx/tdx.h b/arch/x86/virt/vmx/tdx/tdx.h
index 931a50f0f44c..55472e085bc8 100644
--- a/arch/x86/virt/vmx/tdx/tdx.h
+++ b/arch/x86/virt/vmx/tdx/tdx.h
@@ -4,6 +4,18 @@

#include <linux/types.h>

+/*
+ * This file contains both macros and data structures defined by the TDX
+ * architecture and Linux defined software data structures and functions.
+ * The two should not be mixed together for better readability. The
+ * architectural definitions come first.
+ */
+
+ /*
+ * TDX module SEAMCALL leaf functions
+ */
+#define TDH_SYS_INIT 33
+
/* Kernel defined TDX module status during module initialization. */
enum tdx_module_status_t {
TDX_MODULE_UNKNOWN,
--
2.39.1


2023-02-13 12:02:10

by Huang, Kai

Subject: [PATCH v9 07/18] x86/virt/tdx: Do TDX module per-cpu initialization

After the SEAMCALL to do TDX module global initialization, a SEAMCALL to
do per-cpu initialization (TDH.SYS.LP.INIT) must be done on one logical
cpu before any other SEAMCALLs can be made on that cpu, including those
involved in the future steps of the module initialization.

To keep things simple, this implementation just chooses to guarantee all
online cpus are "TDX-runnable" (TDH.SYS.LP.INIT has been successfully
done on them). If the kernel were to allow one cpu to be online while
TDH.SYS.LP.INIT failed on it, the kernel would need to track a cpumask
of "TDX-runnable" cpus, know which task is "TDX workload" and guarantee
such task can only be scheduled to "TDX-runnable" cpus. For example,
the kernel would need to reject in sched_setaffinity() if the userspace
tries to bind TDX task to any "non-TDX-runnable" cpu.

To guarantee all online cpus are "TDX-runnable", disable CPU hotplug
during module initialization and do TDH.SYS.LP.INIT for all online cpus
before any further steps of module initialization. In CPU hotplug, do
TDH.SYS.LP.INIT in the CPU online callback when TDX has been enabled, and
reject onlining the cpu if the SEAMCALL fails.

Currently only KVM handles VMXON. Similar to tdx_enable(), only provide
a new helper tdx_cpu_online() but make KVM itself responsible for doing
VMXON and calling tdx_cpu_online() in its own CPU online callback.

Note tdx_enable() can be called multiple times by KVM because KVM module
can be unloaded and reloaded. New cpus may become online while KVM is
unloaded, and in this case TDH.SYS.LP.INIT won't be called for those new
online cpus because KVM's CPU online callback is removed when KVM is
unloaded. To make sure all online cpus are "TDX-runnable", always do
the per-cpu initialization for all online cpus in tdx_enable() even the
module has been initialized.

Similar to the per-cpu module initialization, a later step to configure
the key for the global KeyID needs to call a SEAMCALL on one cpu for each
CPU package. The difference is that this SEAMCALL cannot run in parallel
on different cpus, while TDH.SYS.LP.INIT can. To avoid duplicated code,
add a helper to call a SEAMCALL on all online cpus one by one, with a
skip function to check whether to skip certain cpus, and use that helper
to do the per-cpu initialization.

Signed-off-by: Kai Huang <[email protected]>
---

v8 -> v9:
- Added this patch back.
- Handled the relaxed new behaviour of TDH.SYS.LP.INIT

---
arch/x86/include/asm/tdx.h | 2 +
arch/x86/virt/vmx/tdx/tdx.c | 210 +++++++++++++++++++++++++++++++++++-
arch/x86/virt/vmx/tdx/tdx.h | 1 +
3 files changed, 208 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index 5c5ecfddb15b..2b2efaa4bc0e 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -107,9 +107,11 @@ static inline long tdx_kvm_hypercall(unsigned int nr, unsigned long p1,
#ifdef CONFIG_INTEL_TDX_HOST
bool platform_tdx_enabled(void);
int tdx_enable(void);
+int tdx_cpu_online(unsigned int cpu);
#else /* !CONFIG_INTEL_TDX_HOST */
static inline bool platform_tdx_enabled(void) { return false; }
static inline int tdx_enable(void) { return -EINVAL; }
+static inline int tdx_cpu_online(unsigned int cpu) { return 0; }
#endif /* CONFIG_INTEL_TDX_HOST */

#endif /* !__ASSEMBLY__ */
diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index 79cee28c51b5..23b2db28726f 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -13,6 +13,8 @@
#include <linux/errno.h>
#include <linux/printk.h>
#include <linux/mutex.h>
+#include <linux/cpumask.h>
+#include <linux/cpu.h>
#include <asm/msr-index.h>
#include <asm/msr.h>
#include <asm/tdx.h>
@@ -26,6 +28,10 @@ static enum tdx_module_status_t tdx_module_status;
/* Prevent concurrent attempts on TDX module initialization */
static DEFINE_MUTEX(tdx_module_lock);

+/* TDX-runnable cpus. Protected by cpu_hotplug_lock. */
+static cpumask_t __cpu_tdx_mask;
+static cpumask_t *cpu_tdx_mask = &__cpu_tdx_mask;
+
/*
* Use tdx_global_keyid to indicate that TDX is uninitialized.
* This is used in TDX initialization error paths to take it from
@@ -170,6 +176,63 @@ static int __always_unused seamcall(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9,
return ret;
}

+/*
+ * Call @func on all online cpus one by one but skip those cpus
+ * when @skip_func is valid and returns true for them.
+ */
+static int tdx_on_each_cpu_cond(int (*func)(void *), void *func_data,
+ bool (*skip_func)(int cpu, void *),
+ void *skip_data)
+{
+ int cpu;
+
+ for_each_online_cpu(cpu) {
+ int ret;
+
+ if (skip_func && skip_func(cpu, skip_data))
+ continue;
+
+ /*
+ * SEAMCALL can be time consuming. Call the @func on
+ * remote cpu via smp_call_on_cpu() instead of
+ * smp_call_function_single() to avoid busy waiting.
+ */
+ ret = smp_call_on_cpu(cpu, func, func_data, true);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
+static int seamcall_lp_init(void)
+{
+ /* All '0's are just unused parameters */
+ return seamcall(TDH_SYS_LP_INIT, 0, 0, 0, 0, NULL, NULL);
+}
+
+static int smp_func_module_lp_init(void *data)
+{
+ int ret, cpu = smp_processor_id();
+
+ ret = seamcall_lp_init();
+ if (!ret)
+ cpumask_set_cpu(cpu, cpu_tdx_mask);
+
+ return ret;
+}
+
+static bool skip_func_module_lp_init_done(int cpu, void *data)
+{
+ return cpumask_test_cpu(cpu, cpu_tdx_mask);
+}
+
+static int module_lp_init_online_cpus(void)
+{
+ return tdx_on_each_cpu_cond(smp_func_module_lp_init, NULL,
+ skip_func_module_lp_init_done, NULL);
+}
+
static int init_tdx_module(void)
{
int ret;
@@ -182,10 +245,26 @@ static int init_tdx_module(void)
if (ret)
return ret;

+ /*
+ * TDX module per-cpu initialization SEAMCALL must be done on
+ * one cpu before any other SEAMCALLs can be made on that cpu,
+ * including those involved in further steps to initialize the
+ * TDX module.
+ *
+ * To make sure further SEAMCALLs can be done successfully w/o
+ * having to consider preemption, disable CPU hotplug during
+ * rest of module initialization and do per-cpu initialization
+ * for all online cpus.
+ */
+ cpus_read_lock();
+
+ ret = module_lp_init_online_cpus();
+ if (ret)
+ goto out;
+
/*
* TODO:
*
- * - TDX module per-cpu initialization.
* - Get TDX module information and TDX-capable memory regions.
* - Build the list of TDX-usable memory regions.
* - Construct a list of "TD Memory Regions" (TDMRs) to cover
@@ -196,7 +275,17 @@ static int init_tdx_module(void)
*
* Return error before all steps are done.
*/
- return -EINVAL;
+ ret = -EINVAL;
+out:
+ /*
+ * Clear @cpu_tdx_mask if module initialization fails before
+ * CPU hotplug is re-enabled. tdx_cpu_online() uses it to check
+ * whether the initialization has been successful or not.
+ */
+ if (ret)
+ cpumask_clear(cpu_tdx_mask);
+ cpus_read_unlock();
+ return ret;
}

static int __tdx_enable(void)
@@ -220,13 +309,72 @@ static int __tdx_enable(void)
return 0;
}

+/*
+ * Disable TDX module after it has been initialized successfully.
+ */
+static void disable_tdx_module(void)
+{
+ /*
+ * TODO: module clean up in reverse to steps in
+ * init_tdx_module(). Remove this comment after
+ * all steps are done.
+ */
+ cpumask_clear(cpu_tdx_mask);
+}
+
+static int tdx_module_init_online_cpus(void)
+{
+ int ret;
+
+ /*
+ * Make sure no cpu can become online to prevent
+ * race against tdx_cpu_online().
+ */
+ cpus_read_lock();
+
+ /*
+ * Do per-cpu initialization for any new online cpus.
+ * If any fails, disable TDX.
+ */
+ ret = module_lp_init_online_cpus();
+ if (ret)
+ disable_tdx_module();
+
+ cpus_read_unlock();
+
+ return ret;
+
+}
+static int __tdx_enable_online_cpus(void)
+{
+ if (tdx_module_init_online_cpus()) {
+ /*
+ * SEAMCALL failure has already printed
+ * meaningful error message.
+ */
+ tdx_module_status = TDX_MODULE_ERROR;
+
+ /*
+ * Just return one universal error code.
+ * For now the caller cannot recover anyway.
+ */
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
/**
* tdx_enable - Enable TDX to be ready to run TDX guests
*
* Initialize the TDX module to enable TDX. After this function, the TDX
- * module is ready to create and run TDX guests.
+ * module is ready to create and run TDX guests on all online cpus.
+ *
+ * This function internally calls cpus_read_lock()/unlock() to prevent
+ * any cpu from going online and offline.
*
* This function assumes all online cpus are already in VMX operation.
+ *
* This function can be called in parallel by multiple callers.
*
* Return 0 if TDX is enabled successfully, otherwise error.
@@ -247,8 +395,17 @@ int tdx_enable(void)
ret = __tdx_enable();
break;
case TDX_MODULE_INITIALIZED:
- /* Already initialized, great, tell the caller. */
- ret = 0;
+ /*
+ * The previous call of __tdx_enable() may only have
+ * initialized part of present cpus during module
+ * initialization, and new cpus may have become online
+ * since then.
+ *
+ * To make sure all online cpus are TDX-runnable, always
+ * do per-cpu initialization for all online cpus here
+ * even the module has been initialized.
+ */
+ ret = __tdx_enable_online_cpus();
break;
default:
/* Failed to initialize in the previous attempts */
@@ -261,3 +418,46 @@ int tdx_enable(void)
return ret;
}
EXPORT_SYMBOL_GPL(tdx_enable);
+
+/**
+ * tdx_cpu_online - Enable TDX on a hotplugged local cpu
+ *
+ * @cpu: the cpu to be brought up.
+ *
+ * Do TDX module per-cpu initialization for a hotplugged cpu to make
+ * it TDX-runnable. All online cpus are initialized during module
+ * initialization.
+ *
+ * This function must be called from CPU hotplug callback which holds
+ * write lock of cpu_hotplug_lock.
+ *
+ * This function assumes local cpu is already in VMX operation.
+ */
+int tdx_cpu_online(unsigned int cpu)
+{
+ int ret;
+
+ /*
+ * @cpu_tdx_mask is updated in tdx_enable() and is protected
+ * by cpus_read_lock()/unlock(). If it is empty, TDX module
+ * either hasn't been initialized, or TDX didn't get enabled
+ * successfully.
+ *
+ * In either case, do nothing but return success.
+ */
+ if (cpumask_empty(cpu_tdx_mask))
+ return 0;
+
+ WARN_ON_ONCE(cpu != smp_processor_id());
+
+ /* Already done */
+ if (cpumask_test_cpu(cpu, cpu_tdx_mask))
+ return 0;
+
+ ret = seamcall_lp_init();
+ if (!ret)
+ cpumask_set_cpu(cpu, cpu_tdx_mask);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(tdx_cpu_online);
diff --git a/arch/x86/virt/vmx/tdx/tdx.h b/arch/x86/virt/vmx/tdx/tdx.h
index 55472e085bc8..30413d7fbee8 100644
--- a/arch/x86/virt/vmx/tdx/tdx.h
+++ b/arch/x86/virt/vmx/tdx/tdx.h
@@ -15,6 +15,7 @@
* TDX module SEAMCALL leaf functions
*/
#define TDH_SYS_INIT 33
+#define TDH_SYS_LP_INIT 35

/* Kernel defined TDX module status during module initialization. */
enum tdx_module_status_t {
--
2.39.1


2023-02-13 12:02:16

by Huang, Kai

[permalink] [raw]
Subject: [PATCH v9 08/18] x86/virt/tdx: Get information about TDX module and TDX-capable memory

TDX provides increased levels of memory confidentiality and integrity.
This requires special hardware support for features like memory
encryption and storage of memory integrity checksums. Not all memory
satisfies these requirements.

As a result, TDX introduced the concept of a "Convertible Memory Region"
(CMR). During boot, the firmware builds a list of all of the memory
ranges which can provide the TDX security guarantees.

CMRs tell the kernel which memory is TDX compatible. The kernel takes
CMRs (plus a little more metadata) and constructs "TD Memory Regions"
(TDMRs). TDMRs let the kernel grant TDX protections to some or all of
the CMR areas.

The TDX module also reports, in the structure 'tdsysinfo_struct', the
information necessary for the kernel to build TDMRs and run TDX guests.
The list of CMRs, along with the TDX module information, is available
to the kernel by querying the TDX module.

As a preparation to construct TDMRs, get the TDX module information and
the list of CMRs. Print out the CMRs to help the user decode which
memory regions are TDX convertible.

The 'tdsysinfo_struct' is fairly large (1024 bytes) and contains a lot
of info about the TDX module. Fully define the entire structure, but
only use the fields necessary to build the TDMRs and pr_info() some
basics about the module. The rest of the fields will get used by KVM.

For now both 'tdsysinfo_struct' and CMRs are only used during the module
initialization. But because they are both relatively big, declare them
inside the module initialization function but as static variables.
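
The patch below wraps 'tdsysinfo_struct' in a padded, aligned buffer via
DECLARE_PADDED_STRUCT(). As an illustration (not part of the patch
itself), here is a minimal userspace sketch of the same union pattern;
the structure members are a made-up subset, and only the 1024-byte
size/alignment values mirror TDSYSINFO_STRUCT_SIZE and
TDSYSINFO_STRUCT_ALIGNMENT:

#include <stdint.h>
#include <stdio.h>

/* Mirrors TDSYSINFO_STRUCT_SIZE/ALIGNMENT from the patch. */
#define SYSINFO_SIZE	1024
#define SYSINFO_ALIGN	1024

/* Illustrative subset of the real 'tdsysinfo_struct'. */
struct sysinfo_demo {
	uint32_t attributes;
	uint32_t vendor_id;
	uint32_t build_date;
	uint16_t build_num;
	uint16_t minor_version;
	uint16_t major_version;
	/* ... the real structure has many more members ... */
};

/*
 * Same union trick as DECLARE_PADDED_STRUCT(): the object is at least
 * SYSINFO_SIZE bytes and SYSINFO_ALIGN aligned no matter how small the
 * C structure itself is, so the full padded buffer can be handed to
 * TDH.SYS.INFO, which may write up to that size.
 */
static struct {
	union {
		struct sysinfo_demo info;
		uint8_t padding[SYSINFO_SIZE];
	};
} sysinfo_padded __attribute__((aligned(SYSINFO_ALIGN)));

int main(void)
{
	printf("object size %zu, alignment %zu\n",
	       sizeof(sysinfo_padded), (size_t)__alignof__(sysinfo_padded));
	/* The kernel passes the physical address of 'info' to the SEAMCALL. */
	return 0;
}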

Signed-off-by: Kai Huang <[email protected]>
Reviewed-by: Isaku Yamahata <[email protected]>
---

v8 -> v9:
- Removed "start to trransit out ..." part in changelog since this patch
is no longer the first step anymore.
- Changed to declare 'tdsysinfo' and 'cmr_array' as local static, and
changed changelog accordingly (Dave).
- Improved changelog to explain why to declare 'tdsysinfo_struct' in
full but only use a few members of them (Dave).

v7 -> v8: (Dave)
- Improved changelog to tell this is the first patch to transit out the
"multi-steps" init_tdx_module().
- Removed all CMR check/trim code but to depend on later SEAMCALL.
- Variable 'vertical alignment' in print TDX module information.
- Added DECLARE_PADDED_STRUCT() for padded structure.
- Made tdx_sysinfo and tdx_cmr_array[] to be function local variable
(and rename them accordingly), and added -Wframe-larger-than=4096 flag
to silence the build warning.

v6 -> v7:
- Simplified the check of CMRs due to the fact that TDX actually
verifies CMRs (that are passed by the BIOS) before enabling TDX.
- Changed the function name from check_cmrs() -> trim_empty_cmrs().
- Added CMR page aligned check so that later patch can just get the PFN
using ">> PAGE_SHIFT".

v5 -> v6:
- Added to also print TDX module's attribute (Isaku).
- Removed all arguments in tdx_get_sysinfo() to use static variables
of 'tdx_sysinfo' and 'tdx_cmr_array' directly as they are all used
directly in other functions in later patches.
- Added Isaku's Reviewed-by.

- v3 -> v5 (no feedback on v4):
- Renamed sanitize_cmrs() to check_cmrs().
- Removed unnecessary sanity check against tdx_sysinfo and tdx_cmr_array
actual size returned by TDH.SYS.INFO.
- Changed -EFAULT to -EINVAL in couple places.
- Added comments around tdx_sysinfo and tdx_cmr_array saying they are
used by TDH.SYS.INFO ABI.
- Changed to pass 'tdx_sysinfo' and 'tdx_cmr_array' as function
arguments in tdx_get_sysinfo().
- Changed to only print BIOS-CMR when check_cmrs() fails.

---
arch/x86/virt/vmx/tdx/tdx.c | 65 ++++++++++++++++++++++++++-
arch/x86/virt/vmx/tdx/tdx.h | 88 +++++++++++++++++++++++++++++++++++++
2 files changed, 152 insertions(+), 1 deletion(-)

diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index 23b2db28726f..ae8e59294b46 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -17,6 +17,7 @@
#include <linux/cpu.h>
#include <asm/msr-index.h>
#include <asm/msr.h>
+#include <asm/page.h>
#include <asm/tdx.h>
#include "tdx.h"

@@ -233,8 +234,67 @@ static int module_lp_init_online_cpus(void)
skip_func_module_lp_init_done, NULL);
}

+static inline bool is_cmr_empty(struct cmr_info *cmr)
+{
+ return !cmr->size;
+}
+
+static void print_cmrs(struct cmr_info *cmr_array, int nr_cmrs)
+{
+ int i;
+
+ for (i = 0; i < nr_cmrs; i++) {
+ struct cmr_info *cmr = &cmr_array[i];
+
+ /*
+ * The array of CMRs reported via TDH.SYS.INFO can
+ * contain tail empty CMRs. Don't print them.
+ */
+ if (is_cmr_empty(cmr))
+ break;
+
+ pr_info("CMR: [0x%llx, 0x%llx)\n", cmr->base,
+ cmr->base + cmr->size);
+ }
+}
+
+/*
+ * Get the TDX module information (TDSYSINFO_STRUCT) and the array of
+ * CMRs, and save them to @sysinfo and @cmr_array. @sysinfo must have
+ * been padded to have enough room to save the TDSYSINFO_STRUCT.
+ */
+static int tdx_get_sysinfo(struct tdsysinfo_struct *sysinfo,
+ struct cmr_info *cmr_array)
+{
+ struct tdx_module_output out;
+ u64 sysinfo_pa, cmr_array_pa;
+ int ret;
+
+ sysinfo_pa = __pa(sysinfo);
+ cmr_array_pa = __pa(cmr_array);
+ ret = seamcall(TDH_SYS_INFO, sysinfo_pa, TDSYSINFO_STRUCT_SIZE,
+ cmr_array_pa, MAX_CMRS, NULL, &out);
+ if (ret)
+ return ret;
+
+ pr_info("TDX module: atributes 0x%x, vendor_id 0x%x, major_version %u, minor_version %u, build_date %u, build_num %u",
+ sysinfo->attributes, sysinfo->vendor_id,
+ sysinfo->major_version, sysinfo->minor_version,
+ sysinfo->build_date, sysinfo->build_num);
+
+ /* R9 contains the actual entries written to the CMR array. */
+ print_cmrs(cmr_array, out.r9);
+
+ return 0;
+}
+
static int init_tdx_module(void)
{
+ static DECLARE_PADDED_STRUCT(tdsysinfo_struct, tdsysinfo,
+ TDSYSINFO_STRUCT_SIZE, TDSYSINFO_STRUCT_ALIGNMENT);
+ static struct cmr_info cmr_array[MAX_CMRS]
+ __aligned(CMR_INFO_ARRAY_ALIGNMENT);
+ struct tdsysinfo_struct *sysinfo = &PADDED_STRUCT(tdsysinfo);
int ret;

/*
@@ -262,10 +322,13 @@ static int init_tdx_module(void)
if (ret)
goto out;

+ ret = tdx_get_sysinfo(sysinfo, cmr_array);
+ if (ret)
+ goto out;
+
/*
* TODO:
*
- * - Get TDX module information and TDX-capable memory regions.
* - Build the list of TDX-usable memory regions.
* - Construct a list of "TD Memory Regions" (TDMRs) to cover
* all TDX-usable memory regions.
diff --git a/arch/x86/virt/vmx/tdx/tdx.h b/arch/x86/virt/vmx/tdx/tdx.h
index 30413d7fbee8..e32d9920b3a7 100644
--- a/arch/x86/virt/vmx/tdx/tdx.h
+++ b/arch/x86/virt/vmx/tdx/tdx.h
@@ -3,6 +3,94 @@
#define _X86_VIRT_TDX_H

#include <linux/types.h>
+#include <linux/stddef.h>
+#include <linux/compiler_attributes.h>
+
+/*
+ * This file contains both macros and data structures defined by the TDX
+ * architecture and Linux defined software data structures and functions.
+ * The two should not be mixed together for better readability. The
+ * architectural definitions come first.
+ */
+
+/*
+ * TDX module SEAMCALL leaf functions
+ */
+#define TDH_SYS_INFO 32
+
+struct cmr_info {
+ u64 base;
+ u64 size;
+} __packed;
+
+#define MAX_CMRS 32
+#define CMR_INFO_ARRAY_ALIGNMENT 512
+
+struct cpuid_config {
+ u32 leaf;
+ u32 sub_leaf;
+ u32 eax;
+ u32 ebx;
+ u32 ecx;
+ u32 edx;
+} __packed;
+
+#define DECLARE_PADDED_STRUCT(type, name, size, alignment) \
+ struct type##_padded { \
+ union { \
+ struct type name; \
+ u8 padding[size]; \
+ }; \
+ } name##_padded __aligned(alignment)
+
+#define PADDED_STRUCT(name) (name##_padded.name)
+
+#define TDSYSINFO_STRUCT_SIZE 1024
+#define TDSYSINFO_STRUCT_ALIGNMENT 1024
+
+/*
+ * The size of this structure itself is flexible. The actual structure
+ * passed to TDH.SYS.INFO must be padded to TDSYSINFO_STRUCT_SIZE and be
+ * aligned to TDSYSINFO_STRUCT_ALIGNMENT using DECLARE_PADDED_STRUCT().
+ */
+struct tdsysinfo_struct {
+ /* TDX-SEAM Module Info */
+ u32 attributes;
+ u32 vendor_id;
+ u32 build_date;
+ u16 build_num;
+ u16 minor_version;
+ u16 major_version;
+ u8 reserved0[14];
+ /* Memory Info */
+ u16 max_tdmrs;
+ u16 max_reserved_per_tdmr;
+ u16 pamt_entry_size;
+ u8 reserved1[10];
+ /* Control Struct Info */
+ u16 tdcs_base_size;
+ u8 reserved2[2];
+ u16 tdvps_base_size;
+ u8 tdvps_xfam_dependent_size;
+ u8 reserved3[9];
+ /* TD Capabilities */
+ u64 attributes_fixed0;
+ u64 attributes_fixed1;
+ u64 xfam_fixed0;
+ u64 xfam_fixed1;
+ u8 reserved4[32];
+ u32 num_cpuid_config;
+ /*
+ * The actual number of CPUID_CONFIG depends on above
+ * 'num_cpuid_config'.
+ */
+ DECLARE_FLEX_ARRAY(struct cpuid_config, cpuid_configs);
+} __packed;
+
+/*
+ * Do not put any hardware-defined TDX structure representations below
+ * this comment!
+ */

/*
* This file contains both macros and data structures defined by the TDX
--
2.39.1


2023-02-13 12:02:39

by Huang, Kai

[permalink] [raw]
Subject: [PATCH v9 09/18] x86/virt/tdx: Use all system memory when initializing TDX module as TDX memory

As a step of initializing the TDX module, the kernel needs to tell the
TDX module which memory regions can be used by the TDX module as TDX
guest memory.

TDX reports a list of "Convertible Memory Regions" (CMRs) to tell the
kernel which memory is TDX compatible. The kernel needs to build a list
of memory regions (out of CMRs) as "TDX-usable" memory and pass them to
the TDX module. Once this is done, those "TDX-usable" memory regions
are fixed during module's lifetime.

To keep things simple, assume that all TDX-protected memory will come
from the page allocator. Make sure all pages in the page allocator
*are* TDX-usable memory.

As TDX-usable memory is a fixed configuration, take a snapshot of the
memory configuration from memblocks at the time of module initialization
(memblocks are modified on memory hotplug). This snapshot is used to
enable TDX support for *this* memory configuration only. Use a memory
hotplug notifier to ensure that no other RAM can be added outside of
this configuration.
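
Not part of the patch, but as a rough userspace model of that policy:
the check done at MEM_GOING_ONLINE time boils down to "is the new pfn
range fully contained in one of the snapshotted ranges". The ranges and
pfns below are made up purely for illustration:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Made-up snapshot of "TDX-usable" pfn ranges, standing in for the
 * tdx_memlist that the patch builds from memblock. */
struct range { unsigned long start_pfn, end_pfn; };
static const struct range tdx_ranges[] = {
	{ 0x000100, 0x080000 },		/* e.g. 1MB .. 2GB  */
	{ 0x100000, 0x180000 },		/* e.g. 4GB .. 6GB  */
};

/* Same idea as is_tdx_memory(): the whole to-be-onlined block must be
 * contained in a single snapshotted range, otherwise reject it. */
static bool range_is_tdx(unsigned long start_pfn, unsigned long end_pfn)
{
	for (size_t i = 0; i < sizeof(tdx_ranges) / sizeof(tdx_ranges[0]); i++)
		if (start_pfn >= tdx_ranges[i].start_pfn &&
		    end_pfn <= tdx_ranges[i].end_pfn)
			return true;
	return false;
}

int main(void)
{
	/* Onlining RAM inside the snapshot is allowed ... */
	printf("inside:  %d\n", range_is_tdx(0x1000, 0x2000));
	/* ... hot-adding e.g. kmem-backed CXL/NVDIMM outside it is not. */
	printf("outside: %d\n", range_is_tdx(0x200000, 0x210000));
	return 0;
}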

For this approach to work, all memblock memory regions at the time of
module initialization must be TDX convertible memory; otherwise module
initialization will fail in a later SEAMCALL when those regions are
passed to the module. This holds when all boot-time "system RAM" is
TDX convertible memory and no non-TDX-convertible memory is hot-added
to the core-mm before module initialization.

For instance, on the first generation of TDX machines, neither CXL
memory nor NVDIMM is TDX convertible memory. Using the kmem driver to
hot-add any CXL memory or NVDIMM to the core-mm before module
initialization will cause module initialization to fail. The SEAMCALL
error code will be available in the dmesg to help the user understand
the failure.

Signed-off-by: Kai Huang <[email protected]>
---

v8 -> v9:
- Replace "The initial support ..." with timeless sentence in both
changelog and comments(Dave).
- Fix run-on sentence in changelog, and senstence to explain why to
stash off memblock (Dave).
- Tried to improve why to choose this approach and how it work in
changelog based on Dave's suggestion.
- Many other comments enhancement (Dave).

v7 -> v8:
- Trimmed down changelog (Dave).
- Changed to use PHYS_PFN() and PFN_PHYS() throughout this series
(Ying).
- Moved memory hotplug handling from add_arch_memory() to
memory_notifier (Dan/David).
- Removed 'nid' from 'struct tdx_memblock' to later patch (Dave).
- {build|free}_tdx_memory() -> {build|}free_tdx_memlist() (Dave).
- Removed pfn_covered_by_cmr() check as no code to trim CMRs now.
- Improve the comment around first 1MB (Dave).
- Added a comment around reserve_real_mode() to point out TDX code
relies on first 1MB being reserved (Ying).
- Added comment to explain why the new online memory range cannot
cross multiple TDX memory blocks (Dave).
- Improved other comments (Dave).

---
arch/x86/Kconfig | 1 +
arch/x86/kernel/setup.c | 2 +
arch/x86/virt/vmx/tdx/tdx.c | 159 +++++++++++++++++++++++++++++++++++-
arch/x86/virt/vmx/tdx/tdx.h | 6 ++
4 files changed, 167 insertions(+), 1 deletion(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 6dd5d5586099..f23bc540778a 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1958,6 +1958,7 @@ config INTEL_TDX_HOST
depends on X86_64
depends on KVM_INTEL
depends on X86_X2APIC
+ select ARCH_KEEP_MEMBLOCK
help
Intel Trust Domain Extensions (TDX) protects guest VMs from malicious
host and certain physical attacks. This option enables necessary TDX
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 88188549647c..a8a119a9b48c 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -1165,6 +1165,8 @@ void __init setup_arch(char **cmdline_p)
*
* Moreover, on machines with SandyBridge graphics or in setups that use
* crashkernel the entire 1M is reserved anyway.
+ *
+ * Note the host kernel TDX also requires the first 1MB being reserved.
*/
x86_platform.realmode_reserve();

diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index ae8e59294b46..5101b636a9b0 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -15,6 +15,13 @@
#include <linux/mutex.h>
#include <linux/cpumask.h>
#include <linux/cpu.h>
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <linux/memblock.h>
+#include <linux/memory.h>
+#include <linux/minmax.h>
+#include <linux/sizes.h>
+#include <linux/pfn.h>
#include <asm/msr-index.h>
#include <asm/msr.h>
#include <asm/page.h>
@@ -33,6 +40,9 @@ static DEFINE_MUTEX(tdx_module_lock);
static cpumask_t __cpu_tdx_mask;
static cpumask_t *cpu_tdx_mask = &__cpu_tdx_mask;

+/* All TDX-usable memory regions. Protected by mem_hotplug_lock. */
+static LIST_HEAD(tdx_memlist);
+
/*
* Use tdx_global_keyid to indicate that TDX is uninitialized.
* This is used in TDX initialization error paths to take it from
@@ -71,6 +81,51 @@ static int __init record_keyid_partitioning(u32 *tdx_keyid_start,
return 0;
}

+static bool is_tdx_memory(unsigned long start_pfn, unsigned long end_pfn)
+{
+ struct tdx_memblock *tmb;
+
+ /* Empty list means TDX isn't enabled. */
+ if (list_empty(&tdx_memlist))
+ return true;
+
+ /*
+ * This check assumes that the start_pfn<->end_pfn range does not
+ * cross multiple @tdx_memlist entries. A single memory online
+ * event across multiple memblocks (from which @tdx_memlist
+ * entries are derived at the time of module initialization) is
+ * not possible. This is because memory offline/online is done
+ * on granularity of 'struct memory_block', and the hotpluggable
+ * memory region (one memblock) must be multiple of memory_block.
+ */
+ list_for_each_entry(tmb, &tdx_memlist, list) {
+ if (start_pfn >= tmb->start_pfn && end_pfn <= tmb->end_pfn)
+ return true;
+ }
+ return false;
+}
+
+static int tdx_memory_notifier(struct notifier_block *nb, unsigned long action,
+ void *v)
+{
+ struct memory_notify *mn = v;
+
+ if (action != MEM_GOING_ONLINE)
+ return NOTIFY_OK;
+
+ /*
+ * The TDX memory configuration is static and can not be
+ * changed. Reject onlining any memory which is outside of
+ * the static configuration whether it supports TDX or not.
+ */
+ return is_tdx_memory(mn->start_pfn, mn->start_pfn + mn->nr_pages) ?
+ NOTIFY_OK : NOTIFY_BAD;
+}
+
+static struct notifier_block tdx_memory_nb = {
+ .notifier_call = tdx_memory_notifier,
+};
+
static int __init tdx_init(void)
{
u32 tdx_keyid_start, nr_tdx_keyids;
@@ -101,6 +156,13 @@ static int __init tdx_init(void)
goto no_tdx;
}

+ err = register_memory_notifier(&tdx_memory_nb);
+ if (err) {
+ pr_info("initialization failed: register_memory_notifier() failed (%d)\n",
+ err);
+ goto no_tdx;
+ }
+
tdx_guest_keyid_start = tdx_keyid_start;
tdx_nr_guest_keyids = nr_tdx_keyids;

@@ -288,6 +350,79 @@ static int tdx_get_sysinfo(struct tdsysinfo_struct *sysinfo,
return 0;
}

+/*
+ * Add a memory region as a TDX memory block. The caller must make sure
+ * all memory regions are added in address ascending order and don't
+ * overlap.
+ */
+static int add_tdx_memblock(struct list_head *tmb_list, unsigned long start_pfn,
+ unsigned long end_pfn)
+{
+ struct tdx_memblock *tmb;
+
+ tmb = kmalloc(sizeof(*tmb), GFP_KERNEL);
+ if (!tmb)
+ return -ENOMEM;
+
+ INIT_LIST_HEAD(&tmb->list);
+ tmb->start_pfn = start_pfn;
+ tmb->end_pfn = end_pfn;
+
+ /* @tmb_list is protected by mem_hotplug_lock */
+ list_add_tail(&tmb->list, tmb_list);
+ return 0;
+}
+
+static void free_tdx_memlist(struct list_head *tmb_list)
+{
+ /* @tmb_list is protected by mem_hotplug_lock */
+ while (!list_empty(tmb_list)) {
+ struct tdx_memblock *tmb = list_first_entry(tmb_list,
+ struct tdx_memblock, list);
+
+ list_del(&tmb->list);
+ kfree(tmb);
+ }
+}
+
+/*
+ * Ensure that all memblock memory regions are convertible to TDX
+ * memory. Once this has been established, stash the memblock
+ * ranges off in a secondary structure because memblock is modified
+ * in memory hotplug while TDX memory regions are fixed.
+ */
+static int build_tdx_memlist(struct list_head *tmb_list)
+{
+ unsigned long start_pfn, end_pfn;
+ int i, ret;
+
+ for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, NULL) {
+ /*
+ * The first 1MB is not reported as TDX convertible memory.
+ * Although the first 1MB is always reserved and won't end up
+ * in the page allocator, it is still in memblock's memory
+ * regions. Skip them manually to exclude them as TDX memory.
+ */
+ start_pfn = max(start_pfn, PHYS_PFN(SZ_1M));
+ if (start_pfn >= end_pfn)
+ continue;
+
+ /*
+ * Add the memory regions as TDX memory. The regions in
+ * memblock are already guaranteed to be in address
+ * ascending order and to not overlap.
+ */
+ ret = add_tdx_memblock(tmb_list, start_pfn, end_pfn);
+ if (ret)
+ goto err;
+ }
+
+ return 0;
+err:
+ free_tdx_memlist(tmb_list);
+ return ret;
+}
+
static int init_tdx_module(void)
{
static DECLARE_PADDED_STRUCT(tdsysinfo_struct, tdsysinfo,
@@ -326,10 +461,25 @@ static int init_tdx_module(void)
if (ret)
goto out;

+ /*
+ * To keep things simple, assume that all TDX-protected memory
+ * will come from the page allocator. Make sure all pages in the
+ * page allocator are TDX-usable memory.
+ *
+ * Build the list of "TDX-usable" memory regions which cover all
+ * pages in the page allocator to guarantee that. Do it while
+ * holding mem_hotplug_lock read-lock as the memory hotplug code
+ * path reads the @tdx_memlist to reject any new memory.
+ */
+ get_online_mems();
+
+ ret = build_tdx_memlist(&tdx_memlist);
+ if (ret)
+ goto out;
+
/*
* TODO:
*
- * - Build the list of TDX-usable memory regions.
* - Construct a list of "TD Memory Regions" (TDMRs) to cover
* all TDX-usable memory regions.
* - Configure the TDMRs and the global KeyID to the TDX module.
@@ -340,6 +490,12 @@ static int init_tdx_module(void)
*/
ret = -EINVAL;
out:
+ /*
+ * @tdx_memlist is written here and read at memory hotplug time.
+ * Lock out memory hotplug code while building it.
+ */
+ put_online_mems();
+
/*
* Clear @cpu_tdx_mask if module initialization fails before
* CPU hotplug is re-enabled. tdx_cpu_online() uses it to check
@@ -382,6 +538,7 @@ static void disable_tdx_module(void)
* init_tdx_module(). Remove this comment after
* all steps are done.
*/
+ free_tdx_memlist(&tdx_memlist);
cpumask_clear(cpu_tdx_mask);
}

diff --git a/arch/x86/virt/vmx/tdx/tdx.h b/arch/x86/virt/vmx/tdx/tdx.h
index e32d9920b3a7..edb1d697347f 100644
--- a/arch/x86/virt/vmx/tdx/tdx.h
+++ b/arch/x86/virt/vmx/tdx/tdx.h
@@ -112,6 +112,12 @@ enum tdx_module_status_t {
TDX_MODULE_ERROR
};

+struct tdx_memblock {
+ struct list_head list;
+ unsigned long start_pfn;
+ unsigned long end_pfn;
+};
+
struct tdx_module_output;
u64 __seamcall(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9,
struct tdx_module_output *out);
--
2.39.1


2023-02-13 12:02:42

by Huang, Kai

[permalink] [raw]
Subject: [PATCH v9 10/18] x86/virt/tdx: Add placeholder to construct TDMRs to cover all TDX memory regions

After the kernel selects all TDX-usable memory regions, the kernel needs
to pass those regions to the TDX module via data structure "TD Memory
Region" (TDMR).

Add a placeholder to construct a list of TDMRs (in multiple steps) to
cover all TDX-usable memory regions.

=== Long Version ===

TDX provides increased levels of memory confidentiality and integrity.
This requires special hardware support for features like memory
encryption and storage of memory integrity checksums. Not all memory
satisfies these requirements.

As a result, TDX introduced the concept of a "Convertible Memory Region"
(CMR). During boot, the firmware builds a list of all of the memory
ranges which can provide the TDX security guarantees. The list of these
ranges is available to the kernel by querying the TDX module.

The TDX architecture needs additional metadata to record things like
which TD guest "owns" a given page of memory. This metadata essentially
serves as the 'struct page' for the TDX module. The space for this
metadata is not reserved by the hardware up front and must be allocated
by the kernel and given to the TDX module.

Since this metadata consumes space, the VMM can choose whether or not to
allocate it for a given area of convertible memory. If it chooses not
to, the memory cannot receive TDX protections and can not be used by TDX
guests as private memory.

For every memory region that the VMM wants to use as TDX memory, it sets
up a "TD Memory Region" (TDMR). Each TDMR represents a physically
contiguous convertible range and must also have its own physically
contiguous metadata table, referred to as a Physical Address Metadata
Table (PAMT), to track status for each page in the TDMR range.

Unlike a CMR, each TDMR requires 1G granularity and alignment. To
support physical RAM areas that don't meet those strict requirements,
each TDMR permits a number of internal "reserved areas" which can be
placed over memory holes. If PAMT metadata is placed within a TDMR it
must be covered by one of these reserved areas.

Let's summarize the concepts:

CMR - Firmware-enumerated physical ranges that support TDX. CMRs are
4K aligned.
TDMR - Physical address range which is chosen by the kernel to support
TDX. 1G granularity and alignment required. Each TDMR has
reserved areas where TDX memory holes and overlapping PAMTs can
be represented.
PAMT - Physically contiguous TDX metadata. One table for each page size
per TDMR. Roughly 1/256th of TDMR in size. 256G TDMR = ~1G
PAMT.

As one step of initializing the TDX module, the kernel configures
TDX-usable memory regions by passing a list of TDMRs to the TDX module.

Constructing the list of TDMRs consists of the following steps:

1) Fill out TDMRs to cover all memory regions that the TDX module will
use for TD memory.
2) Allocate and set up PAMT for each TDMR.
3) Designate reserved areas for each TDMR.

Add a placeholder to construct TDMRs to do the above steps. To keep
things simple, just allocate enough space up front to hold the maximum
number of TDMRs.
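
For a rough feel of how big that up-front allocation is (illustration
only, not part of the patch), the sizing follows tdmr_size_single()
below: one aligned 'tdmr_info' slot, sized for the maximum number of
reserved areas, times the maximum number of TDMRs. The example values
for max_tdmrs and max_reserved_per_tdmr are made up; the real ones come
from 'tdsysinfo_struct':

#include <stdint.h>
#include <stdio.h>

#define TDMR_INFO_ALIGNMENT	512	/* from the patch below */
#define ALIGN_UP(x, a)		(((x) + (a) - 1) & ~((uint64_t)(a) - 1))

int main(void)
{
	/* Example values only; the real ones are reported by the TDX
	 * module in 'tdsysinfo_struct'. */
	uint64_t max_tdmrs = 64;
	uint64_t max_reserved_per_tdmr = 16;

	/* 'tdmr_info' has 8 u64 fields before the reserved-area array;
	 * each reserved area is two u64s (offset + size). */
	uint64_t tdmr_sz = 8 * sizeof(uint64_t) +
			   max_reserved_per_tdmr * 2 * sizeof(uint64_t);
	tdmr_sz = ALIGN_UP(tdmr_sz, TDMR_INFO_ALIGNMENT);

	printf("one tdmr_info: %llu bytes, whole list: %llu bytes\n",
	       (unsigned long long)tdmr_sz,
	       (unsigned long long)(tdmr_sz * max_tdmrs));
	return 0;
}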

Although the TDX module does not use the TDMRs anymore after module
initialization, still keep them around, as they are needed to find the
PAMTs when TDX gets disabled after module initialization.

Signed-off-by: Kai Huang <[email protected]>
Reviewed-by: Isaku Yamahata <[email protected]>
---

v8 -> v9:
- Changes around 'struct tdmr_info_list' (Dave):
- Moved the declaration from tdx.c to tdx.h.
- Renamed 'first_tdmr' to 'tdmrs'.
- 'nr_tdmrs' -> 'nr_consumed_tdmrs'.
- Changed 'tdmrs' to 'void *'.
- Improved comments for all structure members.
- Added a missing empty line in alloc_tdmr_list() (Dave).

v7 -> v8:
- Improved changelog to tell this is one step of "TODO list" in
init_tdx_module().
- Other changelog improvement suggested by Dave (with "Create TDMRs" to
"Fill out TDMRs" to align with the code).
- Added a "TODO list" comment to lay out the steps to construct TDMRs,
following the same idea of "TODO list" in tdx_module_init().
- Introduced 'struct tdmr_info_list' (Dave)
- Further added additional members (tdmr_sz/max_tdmrs/nr_tdmrs) to
simplify getting TDMR by given index, and reduce passing arguments
around functions.
- Added alloc_tdmr_list()/free_tdmr_list() accordingly, which internally
uses tdmr_size_single() (Dave).
- tdmr_num -> nr_tdmrs (Dave).

v6 -> v7:
- Improved commit message to explain 'int' overflow cannot happen
in cal_tdmr_size() and alloc_tdmr_array(). -- Andy/Dave.

v5 -> v6:
- construct_tdmrs_memblock() -> construct_tdmrs() as 'tdx_memblock' is
used instead of memblock.
- Added Isaku's Reviewed-by.

- v3 -> v5 (no feedback on v4):
- Moved calculating TDMR size to this patch.
- Changed to use alloc_pages_exact() to allocate buffer for all TDMRs
once, instead of allocating each TDMR individually.
- Removed "crypto protection" in the changelog.
- -EFAULT -> -EINVAL in couple of places.

---
arch/x86/virt/vmx/tdx/tdx.c | 97 ++++++++++++++++++++++++++++++++++++-
arch/x86/virt/vmx/tdx/tdx.h | 32 ++++++++++++
2 files changed, 127 insertions(+), 2 deletions(-)

diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index 5101b636a9b0..f604e3399d03 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -22,6 +22,7 @@
#include <linux/minmax.h>
#include <linux/sizes.h>
#include <linux/pfn.h>
+#include <linux/align.h>
#include <asm/msr-index.h>
#include <asm/msr.h>
#include <asm/page.h>
@@ -43,6 +44,9 @@ static cpumask_t *cpu_tdx_mask = &__cpu_tdx_mask;
/* All TDX-usable memory regions. Protected by mem_hotplug_lock. */
static LIST_HEAD(tdx_memlist);

+/* The list of TDMRs passed to TDX module */
+struct tdmr_info_list tdx_tdmr_list;
+
/*
* Use tdx_global_keyid to indicate that TDX is uninitialized.
* This is used in TDX initialization error paths to take it from
@@ -423,6 +427,80 @@ static int build_tdx_memlist(struct list_head *tmb_list)
return ret;
}

+/* Calculate the actual TDMR size */
+static int tdmr_size_single(u16 max_reserved_per_tdmr)
+{
+ int tdmr_sz;
+
+ /*
+ * The actual size of TDMR depends on the maximum
+ * number of reserved areas.
+ */
+ tdmr_sz = sizeof(struct tdmr_info);
+ tdmr_sz += sizeof(struct tdmr_reserved_area) * max_reserved_per_tdmr;
+
+ return ALIGN(tdmr_sz, TDMR_INFO_ALIGNMENT);
+}
+
+static int alloc_tdmr_list(struct tdmr_info_list *tdmr_list,
+ struct tdsysinfo_struct *sysinfo)
+{
+ size_t tdmr_sz, tdmr_array_sz;
+ void *tdmr_array;
+
+ tdmr_sz = tdmr_size_single(sysinfo->max_reserved_per_tdmr);
+ tdmr_array_sz = tdmr_sz * sysinfo->max_tdmrs;
+
+ /*
+ * To keep things simple, allocate all TDMRs together.
+ * The buffer needs to be physically contiguous to make
+ * sure each TDMR is physically contiguous.
+ */
+ tdmr_array = alloc_pages_exact(tdmr_array_sz,
+ GFP_KERNEL | __GFP_ZERO);
+ if (!tdmr_array)
+ return -ENOMEM;
+
+ tdmr_list->tdmrs = tdmr_array;
+
+ /*
+ * Keep the size of TDMR to find the target TDMR
+ * at a given index in the TDMR list.
+ */
+ tdmr_list->tdmr_sz = tdmr_sz;
+ tdmr_list->max_tdmrs = sysinfo->max_tdmrs;
+ tdmr_list->nr_consumed_tdmrs = 0;
+
+ return 0;
+}
+
+static void free_tdmr_list(struct tdmr_info_list *tdmr_list)
+{
+ free_pages_exact(tdmr_list->tdmrs,
+ tdmr_list->max_tdmrs * tdmr_list->tdmr_sz);
+}
+
+/*
+ * Construct a list of TDMRs on the preallocated space in @tdmr_list
+ * to cover all TDX memory regions in @tmb_list based on the TDX module
+ * information in @sysinfo.
+ */
+static int construct_tdmrs(struct list_head *tmb_list,
+ struct tdmr_info_list *tdmr_list,
+ struct tdsysinfo_struct *sysinfo)
+{
+ /*
+ * TODO:
+ *
+ * - Fill out TDMRs to cover all TDX memory regions.
+ * - Allocate and set up PAMTs for each TDMR.
+ * - Designate reserved areas for each TDMR.
+ *
+ * Return -EINVAL until constructing TDMRs is done
+ */
+ return -EINVAL;
+}
+
static int init_tdx_module(void)
{
static DECLARE_PADDED_STRUCT(tdsysinfo_struct, tdsysinfo,
@@ -477,11 +555,19 @@ static int init_tdx_module(void)
if (ret)
goto out;

+ /* Allocate enough space for constructing TDMRs */
+ ret = alloc_tdmr_list(&tdx_tdmr_list, sysinfo);
+ if (ret)
+ goto out_free_tdx_mem;
+
+ /* Cover all TDX-usable memory regions in TDMRs */
+ ret = construct_tdmrs(&tdx_memlist, &tdx_tdmr_list, sysinfo);
+ if (ret)
+ goto out_free_tdmrs;
+
/*
* TODO:
*
- * - Construct a list of "TD Memory Regions" (TDMRs) to cover
- * all TDX-usable memory regions.
* - Configure the TDMRs and the global KeyID to the TDX module.
* - Configure the global KeyID on all packages.
* - Initialize all TDMRs.
@@ -489,6 +575,12 @@ static int init_tdx_module(void)
* Return error before all steps are done.
*/
ret = -EINVAL;
+out_free_tdmrs:
+ if (ret)
+ free_tdmr_list(&tdx_tdmr_list);
+out_free_tdx_mem:
+ if (ret)
+ free_tdx_memlist(&tdx_memlist);
out:
/*
* @tdx_memlist is written here and read at memory hotplug time.
@@ -538,6 +630,7 @@ static void disable_tdx_module(void)
* init_tdx_module(). Remove this comment after
* all steps are done.
*/
+ free_tdmr_list(&tdx_tdmr_list);
free_tdx_memlist(&tdx_memlist);
cpumask_clear(cpu_tdx_mask);
}
diff --git a/arch/x86/virt/vmx/tdx/tdx.h b/arch/x86/virt/vmx/tdx/tdx.h
index edb1d697347f..66c7617b357c 100644
--- a/arch/x86/virt/vmx/tdx/tdx.h
+++ b/arch/x86/virt/vmx/tdx/tdx.h
@@ -87,6 +87,29 @@ struct tdsysinfo_struct {
DECLARE_FLEX_ARRAY(struct cpuid_config, cpuid_configs);
} __packed;

+struct tdmr_reserved_area {
+ u64 offset;
+ u64 size;
+} __packed;
+
+#define TDMR_INFO_ALIGNMENT 512
+
+struct tdmr_info {
+ u64 base;
+ u64 size;
+ u64 pamt_1g_base;
+ u64 pamt_1g_size;
+ u64 pamt_2m_base;
+ u64 pamt_2m_size;
+ u64 pamt_4k_base;
+ u64 pamt_4k_size;
+ /*
+ * Actual number of reserved areas depends on
+ * 'struct tdsysinfo_struct'::max_reserved_per_tdmr.
+ */
+ DECLARE_FLEX_ARRAY(struct tdmr_reserved_area, reserved_areas);
+} __packed __aligned(TDMR_INFO_ALIGNMENT);
+
/*
* Do not put any hardware-defined TDX structure representations below
* this comment!
@@ -118,6 +141,15 @@ struct tdx_memblock {
unsigned long end_pfn;
};

+struct tdmr_info_list {
+ void *tdmrs; /* Flexible array to hold 'tdmr_info's */
+ int nr_consumed_tdmrs; /* How many 'tdmr_info's are in use */
+
+ /* Metadata for finding target 'tdmr_info' and freeing @tdmrs */
+ int tdmr_sz; /* Size of one 'tdmr_info' */
+ int max_tdmrs; /* How many 'tdmr_info's are allocated */
+};
+
struct tdx_module_output;
u64 __seamcall(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9,
struct tdx_module_output *out);
--
2.39.1


2023-02-13 12:03:11

by Huang, Kai

[permalink] [raw]
Subject: [PATCH v9 11/18] x86/virt/tdx: Fill out TDMRs to cover all TDX memory regions

Start transitioning the "multi-steps" TODO into real code: construct a
list of "TD Memory Regions" (TDMRs) to cover all TDX-usable memory
regions.

The kernel configures TDX-usable memory regions by passing a list of
"TD Memory Regions" (TDMRs) to the TDX module. Each TDMR contains the
base/size of a memory region, the base/size of the associated Physical
Address Metadata Table (PAMT), and a list of reserved areas in the
region.

Do the first step: fill out a number of TDMRs to cover all TDX memory
regions. To keep it simple, always try to use one TDMR for each memory
region. For this first step, only set up the base/size of each TDMR.

Each TDMR must be 1G aligned and the size must be in 1G granularity.
This implies that one TDMR could cover multiple memory regions. If a
memory region spans the 1GB boundary and the former part is already
covered by the previous TDMR, just use a new TDMR for the remaining
part.
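
As a walk-through of those rules (userspace sketch, not the kernel
code), the following runs two made-up memory regions through the same
1G rounding and "skip the already covered part" logic used by
fill_out_tdmrs() below:

#include <stdint.h>
#include <stdio.h>

#define TDMR_ALIGNMENT		(1ULL << 30)	/* 1G */
#define ALIGN_DOWN_1G(x)	((x) & ~(TDMR_ALIGNMENT - 1))
#define ALIGN_UP_1G(x)		(((x) + TDMR_ALIGNMENT - 1) & ~(TDMR_ALIGNMENT - 1))

struct region { uint64_t start, end; };		/* physical addresses */

int main(void)
{
	/* Made-up layout: the second region rounds down into the 1G
	 * range already covered by the first TDMR. */
	struct region regions[] = {
		{ 0x00100000, 0x50000000 },	/*   1MB .. 1.25GB */
		{ 0x60000000, 0xa0000000 },	/* 1.5GB .. 2.5GB  */
	};
	uint64_t tdmr_end = 0;
	int nr_tdmrs = 0;

	for (int i = 0; i < 2; i++) {
		uint64_t start = ALIGN_DOWN_1G(regions[i].start);
		uint64_t end = ALIGN_UP_1G(regions[i].end);

		if (nr_tdmrs && end <= tdmr_end)
			continue;		/* already fully covered */
		if (nr_tdmrs && start < tdmr_end)
			start = tdmr_end;	/* skip the covered part */

		printf("TDMR%d: [0x%llx, 0x%llx)\n", nr_tdmrs,
		       (unsigned long long)start, (unsigned long long)end);
		tdmr_end = end;
		nr_tdmrs++;
	}
	return 0;
}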

TDX only supports a limited number of TDMRs. Disable TDX if all TDMRs
are consumed but there are still memory regions to cover.

There are fancier things that could be done like trying to merge
adjacent TDMRs. This would allow more pathological memory layouts to be
supported. But, current systems are not even close to exhausting the
existing TDMR resources in practice. For now, keep it simple.

Signed-off-by: Kai Huang <[email protected]>
---

v8 -> v9:

- Added the last paragraph in the changelog (Dave).
- Removed unnecessary type cast in tdmr_entry() (Dave).

---
arch/x86/virt/vmx/tdx/tdx.c | 94 ++++++++++++++++++++++++++++++++++++-
1 file changed, 93 insertions(+), 1 deletion(-)

diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index f604e3399d03..5ff346871b4b 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -480,6 +480,93 @@ static void free_tdmr_list(struct tdmr_info_list *tdmr_list)
tdmr_list->max_tdmrs * tdmr_list->tdmr_sz);
}

+/* Get the TDMR from the list at the given index. */
+static struct tdmr_info *tdmr_entry(struct tdmr_info_list *tdmr_list,
+ int idx)
+{
+ int tdmr_info_offset = tdmr_list->tdmr_sz * idx;
+
+ return (void *)tdmr_list->tdmrs + tdmr_info_offset;
+}
+
+#define TDMR_ALIGNMENT BIT_ULL(30)
+#define TDMR_PFN_ALIGNMENT (TDMR_ALIGNMENT >> PAGE_SHIFT)
+#define TDMR_ALIGN_DOWN(_addr) ALIGN_DOWN((_addr), TDMR_ALIGNMENT)
+#define TDMR_ALIGN_UP(_addr) ALIGN((_addr), TDMR_ALIGNMENT)
+
+static inline u64 tdmr_end(struct tdmr_info *tdmr)
+{
+ return tdmr->base + tdmr->size;
+}
+
+/*
+ * Take the memory referenced in @tmb_list and populate the
+ * preallocated @tdmr_list, following all the special alignment
+ * and size rules for TDMR.
+ */
+static int fill_out_tdmrs(struct list_head *tmb_list,
+ struct tdmr_info_list *tdmr_list)
+{
+ struct tdx_memblock *tmb;
+ int tdmr_idx = 0;
+
+ /*
+ * Loop over TDX memory regions and fill out TDMRs to cover them.
+ * To keep it simple, always try to use one TDMR to cover one
+ * memory region.
+ *
+ * In practice TDX1.0 supports 64 TDMRs, which is big enough to
+ * cover all memory regions in reality if the admin doesn't use
+ * 'memmap' to create a bunch of discrete memory regions. When
+ * there's a real problem, enhancement can be done to merge TDMRs
+ * to reduce the final number of TDMRs.
+ */
+ list_for_each_entry(tmb, tmb_list, list) {
+ struct tdmr_info *tdmr = tdmr_entry(tdmr_list, tdmr_idx);
+ u64 start, end;
+
+ start = TDMR_ALIGN_DOWN(PFN_PHYS(tmb->start_pfn));
+ end = TDMR_ALIGN_UP(PFN_PHYS(tmb->end_pfn));
+
+ /*
+ * A valid size indicates the current TDMR has already
+ * been filled out to cover the previous memory region(s).
+ */
+ if (tdmr->size) {
+ /*
+ * Loop to the next if the current memory region
+ * has already been fully covered.
+ */
+ if (end <= tdmr_end(tdmr))
+ continue;
+
+ /* Otherwise, skip the already covered part. */
+ if (start < tdmr_end(tdmr))
+ start = tdmr_end(tdmr);
+
+ /*
+ * Create a new TDMR to cover the current memory
+ * region, or the remaining part of it.
+ */
+ tdmr_idx++;
+ if (tdmr_idx >= tdmr_list->max_tdmrs) {
+ pr_warn("initialization failed: TDMRs exhausted.\n");
+ return -ENOSPC;
+ }
+
+ tdmr = tdmr_entry(tdmr_list, tdmr_idx);
+ }
+
+ tdmr->base = start;
+ tdmr->size = end - start;
+ }
+
+ /* @tdmr_idx is always the index of last valid TDMR. */
+ tdmr_list->nr_consumed_tdmrs = tdmr_idx + 1;
+
+ return 0;
+}
+
/*
* Construct a list of TDMRs on the preallocated space in @tdmr_list
* to cover all TDX memory regions in @tmb_list based on the TDX module
@@ -489,10 +576,15 @@ static int construct_tdmrs(struct list_head *tmb_list,
struct tdmr_info_list *tdmr_list,
struct tdsysinfo_struct *sysinfo)
{
+ int ret;
+
+ ret = fill_out_tdmrs(tmb_list, tdmr_list);
+ if (ret)
+ return ret;
+
/*
* TODO:
*
- * - Fill out TDMRs to cover all TDX memory regions.
* - Allocate and set up PAMTs for each TDMR.
* - Designate reserved areas for each TDMR.
*
--
2.39.1


2023-02-13 12:03:14

by Huang, Kai

[permalink] [raw]
Subject: [PATCH v9 12/18] x86/virt/tdx: Allocate and set up PAMTs for TDMRs

The TDX module uses additional metadata to record things like which
guest "owns" a given page of memory. This metadata, referred as
Physical Address Metadata Table (PAMT), essentially serves as the
'struct page' for the TDX module. PAMTs are not reserved by hardware
up front. They must be allocated by the kernel and then given to the
TDX module during module initialization.

TDX supports 3 page sizes: 4K, 2M, and 1G. Each "TD Memory Region"
(TDMR) has 3 PAMTs to track the 3 supported page sizes. Each PAMT must
be a physically contiguous area from a Convertible Memory Region (CMR).
However, the PAMTs which track pages in one TDMR do not need to reside
within that TDMR but can be anywhere in CMRs. If one PAMT overlaps with
any TDMR, the overlapping part must be reported as a reserved area in
that particular TDMR.
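
For a feel of the sizes involved, here is a small userspace sketch of
the per-page-size PAMT calculation done by tdmr_get_pamt_sz() below.
The 16-byte PAMT entry size is only an assumption for the example (the
real value comes from 'tdsysinfo_struct'.pamt_entry_size); with it, the
4K PAMT alone is ~1/256th of the TDMR, matching the estimate given
earlier in the series:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE	4096ULL
#define SZ_2M		(2ULL << 20)
#define SZ_1G		(1ULL << 30)
#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
	uint64_t tdmr_size = 256ULL << 30;	/* a 256GB TDMR */
	uint64_t entry_size = 16;		/* assumed pamt_entry_size */

	/* One PAMT entry per page, for each supported page size. */
	uint64_t pamt_4k = ALIGN_UP(tdmr_size / PAGE_SIZE * entry_size, PAGE_SIZE);
	uint64_t pamt_2m = ALIGN_UP(tdmr_size / SZ_2M * entry_size, PAGE_SIZE);
	uint64_t pamt_1g = ALIGN_UP(tdmr_size / SZ_1G * entry_size, PAGE_SIZE);

	printf("4K PAMT: %llu MB, 2M PAMT: %llu KB, 1G PAMT: %llu KB\n",
	       (unsigned long long)(pamt_4k >> 20),
	       (unsigned long long)(pamt_2m >> 10),
	       (unsigned long long)(pamt_1g >> 10));
	printf("total:   %llu MB for a %llu GB TDMR\n",
	       (unsigned long long)((pamt_4k + pamt_2m + pamt_1g) >> 20),
	       (unsigned long long)(tdmr_size >> 30));
	return 0;
}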

Use alloc_contig_pages() since PAMT must be a physically contiguous area
and it may be potentially large (~1/256th of the size of the given TDMR).
The downside is that alloc_contig_pages() may fail at runtime. One
(bad) mitigation is to launch a TDX guest early during system boot so
that those PAMTs are allocated early, but the only real fix is to add a
boot option to allocate or reserve PAMTs during kernel boot.

This is imperfect but will be improved later.

TDX only supports a limited number of reserved areas per TDMR to cover
both PAMTs and memory holes within the given TDMR. If many PAMTs are
allocated within a single TDMR, the reserved areas may not be sufficient
to cover all of them.

Adopt the following policies when allocating PAMTs for a given TDMR:

- Allocate three PAMTs of the TDMR in one contiguous chunk to minimize
the total number of reserved areas consumed for PAMTs.
- Try to first allocate PAMT from the local node of the TDMR for better
NUMA locality.
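
The second policy is what tdmr_get_nid() below implements. A rough
userspace model of that lookup, with made-up memory blocks and node IDs
(illustration only):

#include <stddef.h>
#include <stdio.h>

/* Made-up TDX memory blocks: [start_pfn, end_pfn) plus a NUMA node,
 * standing in for the 'tdx_memblock' list. */
struct tmb { unsigned long start_pfn, end_pfn; int nid; };
static const struct tmb tmbs[] = {
	{ 0x000100, 0x200000, 0 },	/* node 0 */
	{ 0x200000, 0x400000, 1 },	/* node 1 */
};

/* Same idea as tdmr_get_nid(): use the node of the first block that
 * ends after the TDMR starts, fall back to node 0. */
static int pick_node(unsigned long tdmr_base_pfn)
{
	for (size_t i = 0; i < sizeof(tmbs) / sizeof(tmbs[0]); i++)
		if (tmbs[i].end_pfn > tdmr_base_pfn)
			return tmbs[i].nid;
	return 0;
}

int main(void)
{
	printf("TDMR starting at pfn 0x40000  -> node %d\n", pick_node(0x40000));
	printf("TDMR starting at pfn 0x240000 -> node %d\n", pick_node(0x240000));
	return 0;
}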

Also dump out how many pages are allocated for PAMTs when the TDX module
is initialized successfully. This helps answer the eternal "where did
all my memory go?" questions.

Signed-off-by: Kai Huang <[email protected]>
Reviewed-by: Isaku Yamahata <[email protected]>
Reviewed-by: Dave Hansen <[email protected]>
---

Hi Dave,

I added your Reviewed-by, but let me know if you want me to remove since
this patch contains one more chunk introduced by the new "per-cpu
initialization" handling.

v8 -> v9:
- Added TDX_PS_NR macro instead of open-coding (Dave).
- Better alignment of 'pamt_entry_size' in tdmr_set_up_pamt() (Dave).
- Changed to print out PAMTs in "KBs" instead of "pages" (Dave).
- Added Dave's Reviewed-by.

v7 -> v8: (Dave)
- Changelog:
- Added a sentence to state PAMT allocation will be improved.
- Others suggested by Dave.
- Moved 'nid' of 'struct tdx_memblock' to this patch.
- Improved comments around tdmr_get_nid().
- WARN_ON_ONCE() -> pr_warn() in tdmr_get_nid().
- Other changes due to 'struct tdmr_info_list'.

v6 -> v7:
- Changes due to using macros instead of 'enum' for TDX supported page
sizes.

v5 -> v6:
- Rebase due to using 'tdx_memblock' instead of memblock.
- 'int pamt_entry_nr' -> 'unsigned long nr_pamt_entries' (Dave/Sagis).
- Improved comment around tdmr_get_nid() (Dave).
- Improved comment in tdmr_set_up_pamt() around breaking the PAMT
into PAMTs for 4K/2M/1G (Dave).
- tdmrs_get_pamt_pages() -> tdmrs_count_pamt_pages() (Dave).

- v3 -> v5 (no feedback on v4):
- Used memblock to get the NUMA node for given TDMR.
- Removed tdmr_get_pamt_sz() helper but use open-code instead.
- Changed to use 'switch .. case..' for each TDX supported page size in
tdmr_get_pamt_sz() (the original __tdmr_get_pamt_sz()).
- Added printing out memory used for PAMT allocation when TDX module is
initialized successfully.
- Explained downside of alloc_contig_pages() in changelog.
- Addressed other minor comments.

---
arch/x86/Kconfig | 1 +
arch/x86/virt/vmx/tdx/tdx.c | 217 +++++++++++++++++++++++++++++++++++-
arch/x86/virt/vmx/tdx/tdx.h | 1 +
3 files changed, 214 insertions(+), 5 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index f23bc540778a..2a4d4097c5e6 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1959,6 +1959,7 @@ config INTEL_TDX_HOST
depends on KVM_INTEL
depends on X86_X2APIC
select ARCH_KEEP_MEMBLOCK
+ depends on CONTIG_ALLOC
help
Intel Trust Domain Extensions (TDX) protects guest VMs from malicious
host and certain physical attacks. This option enables necessary TDX
diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index 5ff346871b4b..ce71a3ef86d3 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -360,7 +360,7 @@ static int tdx_get_sysinfo(struct tdsysinfo_struct *sysinfo,
* overlap.
*/
static int add_tdx_memblock(struct list_head *tmb_list, unsigned long start_pfn,
- unsigned long end_pfn)
+ unsigned long end_pfn, int nid)
{
struct tdx_memblock *tmb;

@@ -371,6 +371,7 @@ static int add_tdx_memblock(struct list_head *tmb_list, unsigned long start_pfn,
INIT_LIST_HEAD(&tmb->list);
tmb->start_pfn = start_pfn;
tmb->end_pfn = end_pfn;
+ tmb->nid = nid;

/* @tmb_list is protected by mem_hotplug_lock */
list_add_tail(&tmb->list, tmb_list);
@@ -398,9 +399,9 @@ static void free_tdx_memlist(struct list_head *tmb_list)
static int build_tdx_memlist(struct list_head *tmb_list)
{
unsigned long start_pfn, end_pfn;
- int i, ret;
+ int i, nid, ret;

- for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, NULL) {
+ for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid) {
/*
* The first 1MB is not reported as TDX convertible memory.
* Although the first 1MB is always reserved and won't end up
@@ -416,7 +417,7 @@ static int build_tdx_memlist(struct list_head *tmb_list)
* memblock has already guaranteed they are in address
* ascending order and don't overlap.
*/
- ret = add_tdx_memblock(tmb_list, start_pfn, end_pfn);
+ ret = add_tdx_memblock(tmb_list, start_pfn, end_pfn, nid);
if (ret)
goto err;
}
@@ -567,6 +568,202 @@ static int fill_out_tdmrs(struct list_head *tmb_list,
return 0;
}

+/*
+ * Calculate PAMT size given a TDMR and a page size. The returned
+ * PAMT size is always aligned up to 4K page boundary.
+ */
+static unsigned long tdmr_get_pamt_sz(struct tdmr_info *tdmr, int pgsz,
+ u16 pamt_entry_size)
+{
+ unsigned long pamt_sz, nr_pamt_entries;
+
+ switch (pgsz) {
+ case TDX_PS_4K:
+ nr_pamt_entries = tdmr->size >> PAGE_SHIFT;
+ break;
+ case TDX_PS_2M:
+ nr_pamt_entries = tdmr->size >> PMD_SHIFT;
+ break;
+ case TDX_PS_1G:
+ nr_pamt_entries = tdmr->size >> PUD_SHIFT;
+ break;
+ default:
+ WARN_ON_ONCE(1);
+ return 0;
+ }
+
+ pamt_sz = nr_pamt_entries * pamt_entry_size;
+ /* TDX requires PAMT size must be 4K aligned */
+ pamt_sz = ALIGN(pamt_sz, PAGE_SIZE);
+
+ return pamt_sz;
+}
+
+/*
+ * Locate a NUMA node which should hold the allocation of the @tdmr
+ * PAMT. This node will have some memory covered by the TDMR. The
+ * relative amount of memory covered is not considered.
+ */
+static int tdmr_get_nid(struct tdmr_info *tdmr, struct list_head *tmb_list)
+{
+ struct tdx_memblock *tmb;
+
+ /*
+ * A TDMR must cover at least part of one TMB. That TMB will end
+ * after the TDMR begins. But, that TMB may have started before
+ * the TDMR. Find the next 'tmb' that _ends_ after this TDMR
+ * begins. Ignore 'tmb' start addresses. They are irrelevant.
+ */
+ list_for_each_entry(tmb, tmb_list, list) {
+ if (tmb->end_pfn > PHYS_PFN(tdmr->base))
+ return tmb->nid;
+ }
+
+ /*
+ * Fall back to allocating the TDMR's metadata from node 0 when
+ * no TDX memory block can be found. This should never happen
+ * since TDMRs originate from TDX memory blocks.
+ */
+ pr_warn("TDMR [0x%llx, 0x%llx): unable to find local NUMA node for PAMT allocation, fallback to use node 0.\n",
+ tdmr->base, tdmr_end(tdmr));
+ return 0;
+}
+
+#define TDX_PS_NR (TDX_PS_1G + 1)
+
+/*
+ * Allocate PAMTs from the local NUMA node of some memory in @tmb_list
+ * within @tdmr, and set up PAMTs for @tdmr.
+ */
+static int tdmr_set_up_pamt(struct tdmr_info *tdmr,
+ struct list_head *tmb_list,
+ u16 pamt_entry_size)
+{
+ unsigned long pamt_base[TDX_PS_NR];
+ unsigned long pamt_size[TDX_PS_NR];
+ unsigned long tdmr_pamt_base;
+ unsigned long tdmr_pamt_size;
+ struct page *pamt;
+ int pgsz, nid;
+
+ nid = tdmr_get_nid(tdmr, tmb_list);
+
+ /*
+ * Calculate the PAMT size for each TDX supported page size
+ * and the total PAMT size.
+ */
+ tdmr_pamt_size = 0;
+ for (pgsz = TDX_PS_4K; pgsz <= TDX_PS_1G ; pgsz++) {
+ pamt_size[pgsz] = tdmr_get_pamt_sz(tdmr, pgsz,
+ pamt_entry_size);
+ tdmr_pamt_size += pamt_size[pgsz];
+ }
+
+ /*
+ * Allocate one chunk of physically contiguous memory for all
+ * PAMTs. This helps minimize the PAMT's use of reserved areas
+ * in overlapped TDMRs.
+ */
+ pamt = alloc_contig_pages(tdmr_pamt_size >> PAGE_SHIFT, GFP_KERNEL,
+ nid, &node_online_map);
+ if (!pamt)
+ return -ENOMEM;
+
+ /*
+ * Break the contiguous allocation back up into the
+ * individual PAMTs for each page size.
+ */
+ tdmr_pamt_base = page_to_pfn(pamt) << PAGE_SHIFT;
+ for (pgsz = TDX_PS_4K; pgsz <= TDX_PS_1G; pgsz++) {
+ pamt_base[pgsz] = tdmr_pamt_base;
+ tdmr_pamt_base += pamt_size[pgsz];
+ }
+
+ tdmr->pamt_4k_base = pamt_base[TDX_PS_4K];
+ tdmr->pamt_4k_size = pamt_size[TDX_PS_4K];
+ tdmr->pamt_2m_base = pamt_base[TDX_PS_2M];
+ tdmr->pamt_2m_size = pamt_size[TDX_PS_2M];
+ tdmr->pamt_1g_base = pamt_base[TDX_PS_1G];
+ tdmr->pamt_1g_size = pamt_size[TDX_PS_1G];
+
+ return 0;
+}
+
+static void tdmr_get_pamt(struct tdmr_info *tdmr, unsigned long *pamt_pfn,
+ unsigned long *pamt_npages)
+{
+ unsigned long pamt_base, pamt_sz;
+
+ /*
+ * The PAMT was allocated in one contiguous unit. The 4K PAMT
+ * should always point to the beginning of that allocation.
+ */
+ pamt_base = tdmr->pamt_4k_base;
+ pamt_sz = tdmr->pamt_4k_size + tdmr->pamt_2m_size + tdmr->pamt_1g_size;
+
+ *pamt_pfn = PHYS_PFN(pamt_base);
+ *pamt_npages = pamt_sz >> PAGE_SHIFT;
+}
+
+static void tdmr_free_pamt(struct tdmr_info *tdmr)
+{
+ unsigned long pamt_pfn, pamt_npages;
+
+ tdmr_get_pamt(tdmr, &pamt_pfn, &pamt_npages);
+
+ /* Do nothing if PAMT hasn't been allocated for this TDMR */
+ if (!pamt_npages)
+ return;
+
+ if (WARN_ON_ONCE(!pamt_pfn))
+ return;
+
+ free_contig_range(pamt_pfn, pamt_npages);
+}
+
+static void tdmrs_free_pamt_all(struct tdmr_info_list *tdmr_list)
+{
+ int i;
+
+ for (i = 0; i < tdmr_list->nr_consumed_tdmrs; i++)
+ tdmr_free_pamt(tdmr_entry(tdmr_list, i));
+}
+
+/* Allocate and set up PAMTs for all TDMRs */
+static int tdmrs_set_up_pamt_all(struct tdmr_info_list *tdmr_list,
+ struct list_head *tmb_list,
+ u16 pamt_entry_size)
+{
+ int i, ret = 0;
+
+ for (i = 0; i < tdmr_list->nr_consumed_tdmrs; i++) {
+ ret = tdmr_set_up_pamt(tdmr_entry(tdmr_list, i), tmb_list,
+ pamt_entry_size);
+ if (ret)
+ goto err;
+ }
+
+ return 0;
+err:
+ tdmrs_free_pamt_all(tdmr_list);
+ return ret;
+}
+
+static unsigned long tdmrs_count_pamt_pages(struct tdmr_info_list *tdmr_list)
+{
+ unsigned long pamt_npages = 0;
+ int i;
+
+ for (i = 0; i < tdmr_list->nr_consumed_tdmrs; i++) {
+ unsigned long pfn, npages;
+
+ tdmr_get_pamt(tdmr_entry(tdmr_list, i), &pfn, &npages);
+ pamt_npages += npages;
+ }
+
+ return pamt_npages;
+}
+
/*
* Construct a list of TDMRs on the preallocated space in @tdmr_list
* to cover all TDX memory regions in @tmb_list based on the TDX module
@@ -582,10 +779,13 @@ static int construct_tdmrs(struct list_head *tmb_list,
if (ret)
return ret;

+ ret = tdmrs_set_up_pamt_all(tdmr_list, tmb_list,
+ sysinfo->pamt_entry_size);
+ if (ret)
+ return ret;
/*
* TODO:
*
- * - Allocate and set up PAMTs for each TDMR.
* - Designate reserved areas for each TDMR.
*
* Return -EINVAL until constructing TDMRs is done
@@ -666,7 +866,13 @@ static int init_tdx_module(void)
*
* Return error before all steps are done.
*/
+
ret = -EINVAL;
+ if (ret)
+ tdmrs_free_pamt_all(&tdx_tdmr_list);
+ else
+ pr_info("%lu KBs allocated for PAMT.\n",
+ tdmrs_count_pamt_pages(&tdx_tdmr_list) * 4);
out_free_tdmrs:
if (ret)
free_tdmr_list(&tdx_tdmr_list);
@@ -722,6 +928,7 @@ static void disable_tdx_module(void)
* init_tdx_module(). Remove this comment after
* all steps are done.
*/
+ tdmrs_free_pamt_all(&tdx_tdmr_list);
free_tdmr_list(&tdx_tdmr_list);
free_tdx_memlist(&tdx_memlist);
cpumask_clear(cpu_tdx_mask);
diff --git a/arch/x86/virt/vmx/tdx/tdx.h b/arch/x86/virt/vmx/tdx/tdx.h
index 66c7617b357c..7348ffdfc287 100644
--- a/arch/x86/virt/vmx/tdx/tdx.h
+++ b/arch/x86/virt/vmx/tdx/tdx.h
@@ -139,6 +139,7 @@ struct tdx_memblock {
struct list_head list;
unsigned long start_pfn;
unsigned long end_pfn;
+ int nid;
};

struct tdmr_info_list {
--
2.39.1


2023-02-13 12:03:28

by Huang, Kai

[permalink] [raw]
Subject: [PATCH v9 13/18] x86/virt/tdx: Designate reserved areas for all TDMRs

As the last step of constructing TDMRs, populate reserved areas for all
TDMRs. For each TDMR, put all memory holes within this TDMR into its
reserved areas. And for all PAMTs which overlap with this TDMR, put
the overlapping parts into reserved areas too.
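
As a toy illustration of the hole-walking part (userspace sketch, not
the kernel code), the following shows how two made-up memory blocks
inside one 1G TDMR produce three reserved areas: the gap before the
first block, the gap between the two blocks, and the tail up to the
TDMR end. The overlapping PAMT parts are handled by a similar second
pass in the patch:

#include <stdint.h>
#include <stdio.h>

struct range { uint64_t start, end; };	/* physical addresses */

int main(void)
{
	/* Made-up example: one 1G TDMR covering two memory blocks. */
	uint64_t tdmr_base = 0, tdmr_end = 1ULL << 30;
	struct range blocks[] = {
		{ 0x00100000, 0x20000000 },	/*   1MB .. 512MB */
		{ 0x28000000, 0x3c000000 },	/* 640MB .. 960MB */
	};
	uint64_t prev_end = tdmr_base;

	for (int i = 0; i < 2; i++) {
		if (blocks[i].start > prev_end)	/* hole before this block */
			printf("reserved: offset 0x%llx size 0x%llx\n",
			       (unsigned long long)(prev_end - tdmr_base),
			       (unsigned long long)(blocks[i].start - prev_end));
		prev_end = blocks[i].end;
	}
	if (prev_end < tdmr_end)		/* hole after the last block */
		printf("reserved: offset 0x%llx size 0x%llx\n",
		       (unsigned long long)(prev_end - tdmr_base),
		       (unsigned long long)(tdmr_end - prev_end));
	return 0;
}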

Signed-off-by: Kai Huang <[email protected]>
Reviewed-by: Isaku Yamahata <[email protected]>
---

v8 -> v9:
- Added comment around 'tdmr_add_rsvd_area()' to point out it doesn't do
optimization to save reserved areas. (Dave).

v7 -> v8: (Dave)
- "set_up" -> "populate" in function name change (Dave).
- Improved comment suggested by Dave.
- Other changes due to 'struct tdmr_info_list'.

v6 -> v7:
- No change.

v5 -> v6:
- Rebase due to using 'tdx_memblock' instead of memblock.
- Split tdmr_set_up_rsvd_areas() into two functions to handle memory
hole and PAMT respectively.
- Added Isaku's Reviewed-by.


---
arch/x86/virt/vmx/tdx/tdx.c | 220 ++++++++++++++++++++++++++++++++++--
1 file changed, 212 insertions(+), 8 deletions(-)

diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index ce71a3ef86d3..34aed2aa9f4b 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -23,6 +23,7 @@
#include <linux/sizes.h>
#include <linux/pfn.h>
#include <linux/align.h>
+#include <linux/sort.h>
#include <asm/msr-index.h>
#include <asm/msr.h>
#include <asm/page.h>
@@ -764,6 +765,210 @@ static unsigned long tdmrs_count_pamt_pages(struct tdmr_info_list *tdmr_list)
return pamt_npages;
}

+static int tdmr_add_rsvd_area(struct tdmr_info *tdmr, int *p_idx, u64 addr,
+ u64 size, u16 max_reserved_per_tdmr)
+{
+ struct tdmr_reserved_area *rsvd_areas = tdmr->reserved_areas;
+ int idx = *p_idx;
+
+ /* Reserved area must be 4K aligned in offset and size */
+ if (WARN_ON(addr & ~PAGE_MASK || size & ~PAGE_MASK))
+ return -EINVAL;
+
+ if (idx >= max_reserved_per_tdmr) {
+ pr_warn("initialization failed: TDMR [0x%llx, 0x%llx): reserved areas exhausted.\n",
+ tdmr->base, tdmr_end(tdmr));
+ return -ENOSPC;
+ }
+
+ /*
+ * Consume one reserved area per call. Make no effort to
+ * optimize or reduce the number of reserved areas which are
+ * consumed by contiguous reserved areas, for instance.
+ */
+ rsvd_areas[idx].offset = addr - tdmr->base;
+ rsvd_areas[idx].size = size;
+
+ *p_idx = idx + 1;
+
+ return 0;
+}
+
+/*
+ * Go through @tmb_list to find holes between memory areas. If any of
+ * those holes fall within @tdmr, set up a TDMR reserved area to cover
+ * the hole.
+ */
+static int tdmr_populate_rsvd_holes(struct list_head *tmb_list,
+ struct tdmr_info *tdmr,
+ int *rsvd_idx,
+ u16 max_reserved_per_tdmr)
+{
+ struct tdx_memblock *tmb;
+ u64 prev_end;
+ int ret;
+
+ /*
+ * Start looking for reserved blocks at the
+ * beginning of the TDMR.
+ */
+ prev_end = tdmr->base;
+ list_for_each_entry(tmb, tmb_list, list) {
+ u64 start, end;
+
+ start = PFN_PHYS(tmb->start_pfn);
+ end = PFN_PHYS(tmb->end_pfn);
+
+ /* Break if this region is after the TDMR */
+ if (start >= tdmr_end(tdmr))
+ break;
+
+ /* Exclude regions before this TDMR */
+ if (end < tdmr->base)
+ continue;
+
+ /*
+ * Skip over memory areas that
+ * have already been dealt with.
+ */
+ if (start <= prev_end) {
+ prev_end = end;
+ continue;
+ }
+
+ /* Add the hole before this region */
+ ret = tdmr_add_rsvd_area(tdmr, rsvd_idx, prev_end,
+ start - prev_end,
+ max_reserved_per_tdmr);
+ if (ret)
+ return ret;
+
+ prev_end = end;
+ }
+
+ /* Add the hole after the last region if it exists. */
+ if (prev_end < tdmr_end(tdmr)) {
+ ret = tdmr_add_rsvd_area(tdmr, rsvd_idx, prev_end,
+ tdmr_end(tdmr) - prev_end,
+ max_reserved_per_tdmr);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
+/*
+ * Go through @tdmr_list to find all PAMTs. If any of those PAMTs
+ * overlaps with @tdmr, set up a TDMR reserved area to cover the
+ * overlapping part.
+ */
+static int tdmr_populate_rsvd_pamts(struct tdmr_info_list *tdmr_list,
+ struct tdmr_info *tdmr,
+ int *rsvd_idx,
+ u16 max_reserved_per_tdmr)
+{
+ int i, ret;
+
+ for (i = 0; i < tdmr_list->nr_consumed_tdmrs; i++) {
+ struct tdmr_info *tmp = tdmr_entry(tdmr_list, i);
+ unsigned long pamt_start_pfn, pamt_npages;
+ u64 pamt_start, pamt_end;
+
+ tdmr_get_pamt(tmp, &pamt_start_pfn, &pamt_npages);
+ /* Each TDMR must already have PAMT allocated */
+ WARN_ON_ONCE(!pamt_npages || !pamt_start_pfn);
+
+ pamt_start = PFN_PHYS(pamt_start_pfn);
+ pamt_end = PFN_PHYS(pamt_start_pfn + pamt_npages);
+
+ /* Skip PAMTs outside of the given TDMR */
+ if ((pamt_end <= tdmr->base) ||
+ (pamt_start >= tdmr_end(tdmr)))
+ continue;
+
+ /* Only mark the part within the TDMR as reserved */
+ if (pamt_start < tdmr->base)
+ pamt_start = tdmr->base;
+ if (pamt_end > tdmr_end(tdmr))
+ pamt_end = tdmr_end(tdmr);
+
+ ret = tdmr_add_rsvd_area(tdmr, rsvd_idx, pamt_start,
+ pamt_end - pamt_start,
+ max_reserved_per_tdmr);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
+/* Compare function called by sort() for TDMR reserved areas */
+static int rsvd_area_cmp_func(const void *a, const void *b)
+{
+ struct tdmr_reserved_area *r1 = (struct tdmr_reserved_area *)a;
+ struct tdmr_reserved_area *r2 = (struct tdmr_reserved_area *)b;
+
+ if (r1->offset + r1->size <= r2->offset)
+ return -1;
+ if (r1->offset >= r2->offset + r2->size)
+ return 1;
+
+ /* Reserved areas cannot overlap. The caller must guarantee. */
+ WARN_ON_ONCE(1);
+ return -1;
+}
+
+/*
+ * Populate reserved areas for the given @tdmr, including memory holes
+ * (via @tmb_list) and PAMTs (via @tdmr_list).
+ */
+static int tdmr_populate_rsvd_areas(struct tdmr_info *tdmr,
+ struct list_head *tmb_list,
+ struct tdmr_info_list *tdmr_list,
+ u16 max_reserved_per_tdmr)
+{
+ int ret, rsvd_idx = 0;
+
+ ret = tdmr_populate_rsvd_holes(tmb_list, tdmr, &rsvd_idx,
+ max_reserved_per_tdmr);
+ if (ret)
+ return ret;
+
+ ret = tdmr_populate_rsvd_pamts(tdmr_list, tdmr, &rsvd_idx,
+ max_reserved_per_tdmr);
+ if (ret)
+ return ret;
+
+ /* TDX requires reserved areas listed in address ascending order */
+ sort(tdmr->reserved_areas, rsvd_idx, sizeof(struct tdmr_reserved_area),
+ rsvd_area_cmp_func, NULL);
+
+ return 0;
+}
+
+/*
+ * Populate reserved areas for all TDMRs in @tdmr_list, including memory
+ * holes (via @tmb_list) and PAMTs.
+ */
+static int tdmrs_populate_rsvd_areas_all(struct tdmr_info_list *tdmr_list,
+ struct list_head *tmb_list,
+ u16 max_reserved_per_tdmr)
+{
+ int i;
+
+ for (i = 0; i < tdmr_list->nr_consumed_tdmrs; i++) {
+ int ret;
+
+ ret = tdmr_populate_rsvd_areas(tdmr_entry(tdmr_list, i),
+ tmb_list, tdmr_list, max_reserved_per_tdmr);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
/*
* Construct a list of TDMRs on the preallocated space in @tdmr_list
* to cover all TDX memory regions in @tmb_list based on the TDX module
@@ -783,14 +988,13 @@ static int construct_tdmrs(struct list_head *tmb_list,
sysinfo->pamt_entry_size);
if (ret)
return ret;
- /*
- * TODO:
- *
- * - Designate reserved areas for each TDMR.
- *
- * Return -EINVAL until constructing TDMRs is done
- */
- return -EINVAL;
+
+ ret = tdmrs_populate_rsvd_areas_all(tdmr_list, tmb_list,
+ sysinfo->max_reserved_per_tdmr);
+ if (ret)
+ tdmrs_free_pamt_all(tdmr_list);
+
+ return ret;
}

static int init_tdx_module(void)
--
2.39.1


2023-02-13 12:03:47

by Huang, Kai

[permalink] [raw]
Subject: [PATCH v9 14/18] x86/virt/tdx: Configure TDX module with the TDMRs and global KeyID

The TDX module uses a private KeyID as the "global KeyID" for mapping
things like the PAMT and other TDX metadata. This KeyID has already
been reserved when detecting TDX during the kernel early boot.

After the list of "TD Memory Regions" (TDMRs) has been constructed to
cover all TDX-usable memory regions, the next step is to pass them to
the TDX module together with the global KeyID.

Signed-off-by: Kai Huang <[email protected]>
Reviewed-by: Isaku Yamahata <[email protected]>
---

v8 -> v9:
- Removed 'i.e.' in changelog and removed the passive voice (Dave).
- Improved changelog (Dave).
- Changed the way to allocate aligned PA array (Dave).
- Moved reserving the TDX global KeyID to the second patch, and also
changed 'tdx_keyid_start' and 'nr_tdx_keyid' to guest's KeyIDs in
that patch (Dave).

v7 -> v8:
- Merged "Reserve TDX module global KeyID" patch to this patch, and
removed 'tdx_global_keyid' but use 'tdx_keyid_start' directly.
- Changed changelog accordingly.
- Changed how to allocate aligned array (Dave).

---
arch/x86/virt/vmx/tdx/tdx.c | 41 ++++++++++++++++++++++++++++++++++++-
arch/x86/virt/vmx/tdx/tdx.h | 2 ++
2 files changed, 42 insertions(+), 1 deletion(-)

diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index 34aed2aa9f4b..5a4163d40f58 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -24,6 +24,7 @@
#include <linux/pfn.h>
#include <linux/align.h>
#include <linux/sort.h>
+#include <linux/log2.h>
#include <asm/msr-index.h>
#include <asm/msr.h>
#include <asm/page.h>
@@ -997,6 +998,39 @@ static int construct_tdmrs(struct list_head *tmb_list,
return ret;
}

+static int config_tdx_module(struct tdmr_info_list *tdmr_list, u64 global_keyid)
+{
+ u64 *tdmr_pa_array;
+ size_t array_sz;
+ int i, ret;
+
+ /*
+ * TDMRs are passed to the TDX module via an array of physical
+ * addresses of each TDMR. The array itself also has certain
+ * alignment requirement.
+ */
+ array_sz = tdmr_list->nr_consumed_tdmrs * sizeof(u64);
+ array_sz = roundup_pow_of_two(array_sz);
+ if (array_sz < TDMR_INFO_PA_ARRAY_ALIGNMENT)
+ array_sz = TDMR_INFO_PA_ARRAY_ALIGNMENT;
+
+ tdmr_pa_array = kzalloc(array_sz, GFP_KERNEL);
+ if (!tdmr_pa_array)
+ return -ENOMEM;
+
+ for (i = 0; i < tdmr_list->nr_consumed_tdmrs; i++)
+ tdmr_pa_array[i] = __pa(tdmr_entry(tdmr_list, i));
+
+ ret = seamcall(TDH_SYS_CONFIG, __pa(tdmr_pa_array),
+ tdmr_list->nr_consumed_tdmrs,
+ global_keyid, 0, NULL, NULL);
+
+ /* Free the array as it is not required anymore. */
+ kfree(tdmr_pa_array);
+
+ return ret;
+}
+
static int init_tdx_module(void)
{
static DECLARE_PADDED_STRUCT(tdsysinfo_struct, tdsysinfo,
@@ -1061,10 +1095,14 @@ static int init_tdx_module(void)
if (ret)
goto out_free_tdmrs;

+ /* Pass the TDMRs and the global KeyID to the TDX module */
+ ret = config_tdx_module(&tdx_tdmr_list, tdx_global_keyid);
+ if (ret)
+ goto out_free_pamts;
+
/*
* TODO:
*
- * - Configure the TDMRs and the global KeyID to the TDX module.
* - Configure the global KeyID on all packages.
* - Initialize all TDMRs.
*
@@ -1072,6 +1110,7 @@ static int init_tdx_module(void)
*/

ret = -EINVAL;
+out_free_pamts:
if (ret)
tdmrs_free_pamt_all(&tdx_tdmr_list);
else
diff --git a/arch/x86/virt/vmx/tdx/tdx.h b/arch/x86/virt/vmx/tdx/tdx.h
index 7348ffdfc287..7b34ac257b9a 100644
--- a/arch/x86/virt/vmx/tdx/tdx.h
+++ b/arch/x86/virt/vmx/tdx/tdx.h
@@ -17,6 +17,7 @@
* TDX module SEAMCALL leaf functions
*/
#define TDH_SYS_INFO 32
+#define TDH_SYS_CONFIG 45

struct cmr_info {
u64 base;
@@ -93,6 +94,7 @@ struct tdmr_reserved_area {
} __packed;

#define TDMR_INFO_ALIGNMENT 512
+#define TDMR_INFO_PA_ARRAY_ALIGNMENT 512

struct tdmr_info {
u64 base;
--
2.39.1


2023-02-13 12:03:50

by Huang, Kai

[permalink] [raw]
Subject: [PATCH v9 15/18] x86/virt/tdx: Configure global KeyID on all packages

After the list of TDMRs and the global KeyID are configured to the TDX
module, the kernel needs to configure the key of the global KeyID on all
packages using TDH.SYS.KEY.CONFIG.

Just use the helper, which conditionally calls a function on all online
cpus, to configure the global KeyID on all packages: loop over all online
cpus, keep track of which packages have already been configured, and skip
the cpus belonging to those packages.

To keep things simple, this implementation takes no affirmative steps to
online cpus to make sure there's at least one cpu for each package. The
caller (i.e. KVM) can ensure success by making sure that is the case.

Intel hardware doesn't guarantee cache coherency across different
KeyIDs. The PAMTs are transitioning from being used by the kernel
mapping (KeyID 0) to the TDX module's "global KeyID" mapping.

This means that the kernel must flush any dirty KeyID-0 PAMT cachelines
before the TDX module uses the global KeyID to access the PAMT.
Otherwise, if those dirty cachelines were written back, they would
corrupt the TDX module's metadata. Aside: This corruption would be
detected by the memory integrity hardware on the next read of the memory
with the global KeyID. The result would likely be fatal to the system
but would not impact TDX security.

Following the TDX module specification, flush cache before configuring
the global KeyID on all packages. Given the PAMT size can be large
(~1/256th of system RAM), just use WBINVD on all CPUs to flush.

Note if any TDH.SYS.KEY.CONFIG fails, the TDX module may already have
used the global KeyID to write to some PAMTs. Therefore, use WBINVD to
flush the cache before freeing the PAMTs back to the kernel.

Signed-off-by: Kai Huang <[email protected]>
Reviewed-by: Isaku Yamahata <[email protected]>
---

v8 -> v9:
- Improved changelog (Dave).
- Improved comments to explain the function to configure global KeyID
"takes no affirmative action to online any cpu". (Dave).
- Improved other comments suggested by Dave.

v7 -> v8: (Dave)
- Changelog changes:
- Point out this is the step of "multi-steps" of init_tdx_module().
- Removed MOVDIR64B part.
- Other changes due to removing TDH.SYS.SHUTDOWN and TDH.SYS.LP.INIT.
- Changed to loop over online cpus and use smp_call_function_single()
directly as the patch to shut down TDX module has been removed.
- Removed MOVDIR64B part in comment.

v6 -> v7:
- Improved changelog and comment to explain why MOVDIR64B isn't used
when returning PAMTs back to the kernel.

---
arch/x86/virt/vmx/tdx/tdx.c | 92 ++++++++++++++++++++++++++++++++++---
arch/x86/virt/vmx/tdx/tdx.h | 1 +
2 files changed, 86 insertions(+), 7 deletions(-)

diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index 5a4163d40f58..ff6f2c9d9838 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -1031,6 +1031,61 @@ static int config_tdx_module(struct tdmr_info_list *tdmr_list, u64 global_keyid)
return ret;
}

+static int smp_func_config_global_keyid(void *data)
+{
+ cpumask_var_t packages = data;
+ int pkg, ret;
+
+ pkg = topology_physical_package_id(smp_processor_id());
+
+ /*
+ * TDH.SYS.KEY.CONFIG may fail with entropy error (which is a
+ * recoverable error). Assume this is exceedingly rare and
+ * just return error if encountered instead of retrying.
+ *
+ * All '0's are just unused parameters.
+ */
+ ret = seamcall(TDH_SYS_KEY_CONFIG, 0, 0, 0, 0, NULL, NULL);
+ if (!ret)
+ cpumask_set_cpu(pkg, packages);
+
+ return ret;
+}
+
+static bool skip_func_config_global_keyid(int cpu, void *data)
+{
+ cpumask_var_t packages = data;
+
+ return cpumask_test_cpu(topology_physical_package_id(cpu),
+ packages);
+}
+
+/*
+ * Attempt to configure the global KeyID on all physical packages.
+ *
+ * This requires running code on at least one CPU in each package. If a
+ * package has no online CPUs, that code will not run and TDX module
+ * initialization (TDMR initialization) will fail.
+ *
+ * This code takes no affirmative steps to online CPUs. Callers (aka.
+ * KVM) can ensure success by ensuring sufficient CPUs are online for
+ * this to succeed.
+ */
+static int config_global_keyid(void)
+{
+ cpumask_var_t packages;
+ int ret;
+
+ if (!zalloc_cpumask_var(&packages, GFP_KERNEL))
+ return -ENOMEM;
+
+ ret = tdx_on_each_cpu_cond(smp_func_config_global_keyid, packages,
+ skip_func_config_global_keyid, packages);
+
+ free_cpumask_var(packages);
+ return ret;
+}
+
static int init_tdx_module(void)
{
static DECLARE_PADDED_STRUCT(tdsysinfo_struct, tdsysinfo,
@@ -1100,10 +1155,24 @@ static int init_tdx_module(void)
if (ret)
goto out_free_pamts;

+ /*
+ * Hardware doesn't guarantee cache coherency across different
+ * KeyIDs. The kernel needs to flush PAMT's dirty cachelines
+ * (associated with KeyID 0) before the TDX module can use the
+ * global KeyID to access the PAMT. Given PAMTs are potentially
+ * large (~1/256th of system RAM), just use WBINVD on all cpus
+ * to flush the cache.
+ */
+ wbinvd_on_all_cpus();
+
+ /* Config the key of global KeyID on all packages */
+ ret = config_global_keyid();
+ if (ret)
+ goto out_free_pamts;
+
/*
* TODO:
*
- * - Configure the global KeyID on all packages.
* - Initialize all TDMRs.
*
* Return error before all steps are done.
@@ -1111,8 +1180,18 @@ static int init_tdx_module(void)

ret = -EINVAL;
out_free_pamts:
- if (ret)
+ if (ret) {
+ /*
+ * Part of PAMT may already have been initialized by the
+ * TDX module. Flush cache before returning PAMT back
+ * to the kernel.
+ *
+ * No need to worry about integrity checks here. KeyID
+ * 0 has integrity checking disabled.
+ */
+ wbinvd_on_all_cpus();
tdmrs_free_pamt_all(&tdx_tdmr_list);
+ }
else
pr_info("%lu KBs allocated for PAMT.\n",
tdmrs_count_pamt_pages(&tdx_tdmr_list) * 4);
@@ -1166,11 +1245,7 @@ static int __tdx_enable(void)
*/
static void disable_tdx_module(void)
{
- /*
- * TODO: module clean up in reverse to steps in
- * init_tdx_module(). Remove this comment after
- * all steps are done.
- */
+ wbinvd_on_all_cpus();
tdmrs_free_pamt_all(&tdx_tdmr_list);
free_tdmr_list(&tdx_tdmr_list);
free_tdx_memlist(&tdx_memlist);
@@ -1228,6 +1303,9 @@ static int __tdx_enable_online_cpus(void)
* This function internally calls cpus_read_lock()/unlock() to prevent
* any cpu from going online and offline.
*
+ * This function requires there's at least one online cpu for each CPU
+ * package to succeed.
+ *
* This function assumes all online cpus are already in VMX operation.
*
* This function can be called in parallel by multiple callers.
diff --git a/arch/x86/virt/vmx/tdx/tdx.h b/arch/x86/virt/vmx/tdx/tdx.h
index 7b34ac257b9a..ca4e2edbf4bc 100644
--- a/arch/x86/virt/vmx/tdx/tdx.h
+++ b/arch/x86/virt/vmx/tdx/tdx.h
@@ -16,6 +16,7 @@
/*
* TDX module SEAMCALL leaf functions
*/
+#define TDH_SYS_KEY_CONFIG 31
#define TDH_SYS_INFO 32
#define TDH_SYS_CONFIG 45

--
2.39.1


2023-02-13 12:04:00

by Huang, Kai

[permalink] [raw]
Subject: [PATCH v9 16/18] x86/virt/tdx: Initialize all TDMRs

After the global KeyID has been configured on all packages, initialize
all TDMRs to make the TDX-usable memory regions that were passed to the
TDX module actually usable.

This is the last step of initializing the TDX module.

Initializing TDMRs can be time consuming on large memory systems as it
involves initializing all metadata entries for all pages that can be
used by TDX guests. Initializing different TDMRs can be parallelized.
For now, to keep it simple, just initialize all TDMRs one by one. It can
be enhanced in the future.

Signed-off-by: Kai Huang <[email protected]>
Reviewed-by: Isaku Yamahata <[email protected]>
---

v8 -> v9:
- Improved changelog to explain why initializing TDMRs can take a long
time (Dave).
- Improved comments around 'next-to-initialize' address (Dave).

v7 -> v8: (Dave)
- Changelog:
- explicitly call out this is the last step of TDX module initialization.
- Trimmed down changelog by removing SEAMCALL name and details.
- Removed/trimmed down unnecessary comments.
- Other changes due to 'struct tdmr_info_list'.

v6 -> v7:
- Removed need_resched() check. -- Andi.

---
arch/x86/virt/vmx/tdx/tdx.c | 60 ++++++++++++++++++++++++++++++++-----
arch/x86/virt/vmx/tdx/tdx.h | 1 +
2 files changed, 53 insertions(+), 8 deletions(-)

diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index ff6f2c9d9838..c291fbd29bb0 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -1086,6 +1086,56 @@ static int config_global_keyid(void)
return ret;
}

+static int init_tdmr(struct tdmr_info *tdmr)
+{
+ u64 next;
+
+ /*
+ * Initializing a TDMR can be time consuming. To avoid long
+ * SEAMCALLs, the TDX module may only initialize a part of the
+ * TDMR in each call.
+ */
+ do {
+ struct tdx_module_output out;
+ int ret;
+
+ /* All 0's are unused parameters, they mean nothing. */
+ ret = seamcall(TDH_SYS_TDMR_INIT, tdmr->base, 0, 0, 0, NULL,
+ &out);
+ if (ret)
+ return ret;
+ /*
+ * RDX contains 'next-to-initialize' address if
+ * TDH.SYS.TDMR.INIT did not fully complete and
+ * should be retried.
+ */
+ next = out.rdx;
+ cond_resched();
+ /* Keep making SEAMCALLs until the TDMR is done */
+ } while (next < tdmr->base + tdmr->size);
+
+ return 0;
+}
+
+static int init_tdmrs(struct tdmr_info_list *tdmr_list)
+{
+ int i;
+
+ /*
+ * This operation is costly. It can be parallelized,
+ * but keep it simple for now.
+ */
+ for (i = 0; i < tdmr_list->nr_consumed_tdmrs; i++) {
+ int ret;
+
+ ret = init_tdmr(tdmr_entry(tdmr_list, i));
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
static int init_tdx_module(void)
{
static DECLARE_PADDED_STRUCT(tdsysinfo_struct, tdsysinfo,
@@ -1170,15 +1220,9 @@ static int init_tdx_module(void)
if (ret)
goto out_free_pamts;

- /*
- * TODO:
- *
- * - Initialize all TDMRs.
- *
- * Return error before all steps are done.
- */
+ /* Initialize TDMRs to complete the TDX module initialization */
+ ret = init_tdmrs(&tdx_tdmr_list);

- ret = -EINVAL;
out_free_pamts:
if (ret) {
/*
diff --git a/arch/x86/virt/vmx/tdx/tdx.h b/arch/x86/virt/vmx/tdx/tdx.h
index ca4e2edbf4bc..4e312c7f9553 100644
--- a/arch/x86/virt/vmx/tdx/tdx.h
+++ b/arch/x86/virt/vmx/tdx/tdx.h
@@ -18,6 +18,7 @@
*/
#define TDH_SYS_KEY_CONFIG 31
#define TDH_SYS_INFO 32
+#define TDH_SYS_TDMR_INIT 36
#define TDH_SYS_CONFIG 45

struct cmr_info {
--
2.39.1


2023-02-13 12:04:22

by Huang, Kai

[permalink] [raw]
Subject: [PATCH v9 17/18] x86/virt/tdx: Flush cache in kexec() when TDX is enabled

There are two problems in terms of using kexec() to boot to a new kernel
when the old kernel has enabled TDX: 1) Part of the memory pages are
still TDX private pages; 2) There might be dirty cachelines associated
with TDX private pages.

The first problem doesn't matter. KeyID 0 doesn't have integrity checking.
Even if the new kernel wants to use any non-zero KeyID, it needs to convert
the memory to that KeyID, and such conversion would work from any KeyID.

However the old kernel needs to guarantee there's no dirty cacheline
left behind before booting to the new kernel to avoid silent corruption
from later cacheline writeback (Intel hardware doesn't guarantee cache
coherency across different KeyIDs).

There are two things that the old kernel needs to do to achieve that:

1) Stop accessing TDX private memory mappings:
a. Stop making TDX module SEAMCALLs (TDX global KeyID);
b. Stop TDX guests from running (per-guest TDX KeyID).
2) Flush any cachelines from previous TDX private KeyID writes.

For 2), use wbinvd() to flush the cache in stop_this_cpu(), following the
existing SME support. This way 1) happens for free as well, since there is
no TDX activity between the wbinvd() and the native_halt().

Theoretically, the cache flush is only needed when the TDX module has been
initialized. However, initializing the TDX module is done on demand at
runtime, and it takes a mutex to read the module status. Just check whether
TDX is enabled by the BIOS instead, and flush the cache if so.

Signed-off-by: Kai Huang <[email protected]>
Reviewed-by: Isaku Yamahata <[email protected]>
---

v8 -> v9:
- Various changelog enhancement and fix (Dave).
- Improved comment (Dave).

v7 -> v8:
- Changelog:
- Removed "leave TDX module open" part due to shut down patch has been
removed.

v6 -> v7:
- Improved changelog to explain why don't convert TDX private pages back
to normal.

---
arch/x86/kernel/process.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 40d156a31676..5876dda412c7 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -765,8 +765,13 @@ void __noreturn stop_this_cpu(void *dummy)
*
* Test the CPUID bit directly because the machine might've cleared
* X86_FEATURE_SME due to cmdline options.
+ *
+ * The TDX module or guests might have left dirty cachelines
+ * behind. Flush them to avoid corruption from later writeback.
+ * Note that this flushes on all systems where TDX is possible,
+ * but does not actually check that TDX was in use.
*/
- if (cpuid_eax(0x8000001f) & BIT(0))
+ if (cpuid_eax(0x8000001f) & BIT(0) || platform_tdx_enabled())
native_wbinvd();
for (;;) {
/*
--
2.39.1


2023-02-13 12:04:26

by Huang, Kai

[permalink] [raw]
Subject: [PATCH v9 18/18] Documentation/x86: Add documentation for TDX host support

Add documentation for TDX host kernel support. There is already one
file, Documentation/x86/tdx.rst, containing documentation for TDX guest
internals. Reuse it for TDX host kernel support as well.

Introduce a new top-level section "TDX Guest Support" and move the
existing material under it, and add a new section for TDX host kernel
support.

Signed-off-by: Kai Huang <[email protected]>
---
Documentation/x86/tdx.rst | 176 +++++++++++++++++++++++++++++++++++---
1 file changed, 165 insertions(+), 11 deletions(-)

diff --git a/Documentation/x86/tdx.rst b/Documentation/x86/tdx.rst
index dc8d9fd2c3f7..8a84d7646bc3 100644
--- a/Documentation/x86/tdx.rst
+++ b/Documentation/x86/tdx.rst
@@ -10,6 +10,160 @@ encrypting the guest memory. In TDX, a special module running in a special
mode sits between the host and the guest and manages the guest/host
separation.

+TDX Host Kernel Support
+=======================
+
+TDX introduces a new CPU mode called Secure Arbitration Mode (SEAM) and
+a new isolated range pointed to by the SEAM Range Register (SEAMRR). A
+CPU-attested software module called 'the TDX module' runs inside the new
+isolated range to provide the functionality to manage and run protected
+VMs.
+
+TDX also leverages Intel Multi-Key Total Memory Encryption (MKTME) to
+provide crypto-protection to the VMs. TDX reserves part of MKTME KeyIDs
+as TDX private KeyIDs, which are only accessible within the SEAM mode.
+BIOS is responsible for partitioning legacy MKTME KeyIDs and TDX KeyIDs.
+
+Before the TDX module can be used to create and run protected VMs, it
+must be loaded into the isolated range and properly initialized. The TDX
+architecture doesn't require the BIOS to load the TDX module, but the
+kernel assumes it is loaded by the BIOS.
+
+TDX boot-time detection
+-----------------------
+
+The kernel detects TDX by detecting TDX private KeyIDs during kernel
+boot. Below dmesg shows when TDX is enabled by BIOS::
+
+ [..] tdx: BIOS enabled: private KeyID range: [16, 64).
+
+TDX module detection and initialization
+---------------------------------------
+
+There is no CPUID or MSR to detect the TDX module. The kernel detects it
+by initializing it.
+
+The kernel talks to the TDX module via the new SEAMCALL instruction. The
+TDX module implements SEAMCALL leaf functions to allow the kernel to
+initialize it.
+
+Initializing the TDX module consumes roughly ~1/256th of system RAM as
+'metadata' for the TDX memory. It also takes additional CPU time to
+initialize that metadata along with the TDX module itself. Neither is
+trivial. The kernel initializes the TDX module at runtime on demand.
+The caller needs to call tdx_enable() to initialize the TDX module::
+
+ ret = tdx_enable();
+ if (ret)
+ goto no_tdx;
+ // TDX is ready to use
+
+One step of initializing the TDX module requires at least one online cpu
+for each package. The caller needs to guarantee this, otherwise the
+initialization will fail.
+
+Making a SEAMCALL requires the CPU to already be in VMX operation (VMXON
+has been done). For now tdx_enable() doesn't handle VMXON internally,
+but depends on the caller to guarantee that. So far only KVM calls
+tdx_enable(), and KVM already handles VMXON.
+
+Users can consult dmesg to check the presence of the TDX module, and
+whether it has been initialized.
+
+If the TDX module is not loaded, dmesg shows the following::
+
+ [..] tdx: TDX module is not loaded.
+
+If the TDX module is initialized successfully, dmesg shows something
+like below::
+
+ [..] tdx: TDX module: attributes 0x0, vendor_id 0x8086, major_version 1, minor_version 0, build_date 20211209, build_num 160
+ [..] tdx: 262668 KBs allocated for PAMT.
+ [..] tdx: TDX module initialized.
+
+If the TDX module fails to initialize, dmesg shows something like
+below::
+
+ [..] tdx: initialization failed ...
+
+TDX Interaction with Other Kernel Components
+--------------------------------------------
+
+TDX Memory Policy
+~~~~~~~~~~~~~~~~~
+
+TDX reports a list of "Convertible Memory Regions" (CMRs) to tell the
+kernel which memory is TDX compatible. The kernel needs to build a list
+of memory regions (out of CMRs) as "TDX-usable" memory and pass those
+regions to the TDX module. Once this is done, those "TDX-usable" memory
+regions are fixed during the module's lifetime.
+
+To keep things simple, currently the kernel simply guarantees all pages
+in the page allocator are TDX memory. Specifically, the kernel uses all
+system memory in the core-mm at the time of initializing the TDX module
+as TDX memory, and in the meantime, refuses to online any non-TDX-memory
+in the memory hotplug.
+
+This can be enhanced in the future, e.g. by allowing adding non-TDX
+memory to a separate NUMA node. In this case, the "TDX-capable" nodes
+and the "non-TDX-capable" nodes can co-exist, but the kernel/userspace
+needs to guarantee memory pages for TDX guests are always allocated from
+the "TDX-capable" nodes.
+
+Physical Memory Hotplug
+~~~~~~~~~~~~~~~~~~~~~~~
+
+Note TDX assumes convertible memory is always physically present during
+the machine's runtime. A non-buggy BIOS should never support hot-removal of
+any convertible memory. This implementation doesn't handle ACPI memory
+removal but depends on the BIOS to behave correctly.
+
+CPU Hotplug
+~~~~~~~~~~~
+
+The TDX module requires one SEAMCALL (TDH.SYS.LP.INIT) to do per-cpu module
+initialization on a cpu before any other SEAMCALLs can be made on that
+cpu, including those made during module initialization.
+
+Currently the kernel simply guarantees all online cpus are "TDX-runnable"
+(TDH.SYS.LP.INIT has been done successfully on them). During module
+initialization, the SEAMCALL is done on all online cpus, and CPU hotplug
+is disabled for the entire module initialization. If any SEAMCALL fails,
+TDX is disabled. For CPU hotplug, the kernel provides another function,
+tdx_cpu_online(), for the user of TDX (KVM for now) to call in its own
+CPU online callback, and to reject onlining the cpu if the SEAMCALL fails.
+
+TDX doesn't support physical (ACPI) CPU hotplug. During machine boot,
+TDX verifies all boot-time present logical CPUs are TDX compatible before
+enabling TDX. A non-buggy BIOS should never support hot-add/removal of
+physical CPUs. Currently the kernel doesn't handle physical CPU hotplug,
+but depends on the BIOS to behave correctly.
+
+Note TDX works with logical CPU online/offline, thus the kernel still
+allows offlining a logical CPU and onlining it again.
+
+Kexec()
+~~~~~~~
+
+There are two problems in terms of using kexec() to boot to a new kernel
+when the old kernel has enabled TDX: 1) Part of the memory pages are
+still TDX private pages; 2) There might be dirty cachelines associated
+with TDX private pages.
+
+The first problem doesn't matter. KeyID 0 doesn't have integrity checking.
+Even if the new kernel wants to use any non-zero KeyID, it needs to convert
+the memory to that KeyID, and such conversion would work from any KeyID.
+
+However the old kernel needs to guarantee there's no dirty cacheline
+left behind before booting to the new kernel to avoid silent corruption
+from later cacheline writeback (Intel hardware doesn't guarantee cache
+coherency across different KeyIDs).
+
+Similar to AMD SME, the kernel just uses wbinvd() to flush cache before
+booting to the new kernel.
+
+TDX Guest Support
+=================
Since the host cannot directly access guest registers or memory, much
normal functionality of a hypervisor must be moved into the guest. This is
implemented using a Virtualization Exception (#VE) that is handled by the
@@ -20,7 +174,7 @@ TDX includes new hypercall-like mechanisms for communicating from the
guest to the hypervisor or the TDX module.

New TDX Exceptions
-==================
+------------------

TDX guests behave differently from bare-metal and traditional VMX guests.
In TDX guests, otherwise normal instructions or memory accesses can cause
@@ -30,7 +184,7 @@ Instructions marked with an '*' conditionally cause exceptions. The
details for these instructions are discussed below.

Instruction-based #VE
----------------------
+~~~~~~~~~~~~~~~~~~~~~

- Port I/O (INS, OUTS, IN, OUT)
- HLT
@@ -41,7 +195,7 @@ Instruction-based #VE
- CPUID*

Instruction-based #GP
----------------------
+~~~~~~~~~~~~~~~~~~~~~

- All VMX instructions: INVEPT, INVVPID, VMCLEAR, VMFUNC, VMLAUNCH,
VMPTRLD, VMPTRST, VMREAD, VMRESUME, VMWRITE, VMXOFF, VMXON
@@ -52,7 +206,7 @@ Instruction-based #GP
- RDMSR*,WRMSR*

RDMSR/WRMSR Behavior
---------------------
+~~~~~~~~~~~~~~~~~~~~

MSR access behavior falls into three categories:

@@ -73,7 +227,7 @@ trapping and handling in the TDX module. Other than possibly being slow,
these MSRs appear to function just as they would on bare metal.

CPUID Behavior
---------------
+~~~~~~~~~~~~~~

For some CPUID leaves and sub-leaves, the virtualized bit fields of CPUID
return values (in guest EAX/EBX/ECX/EDX) are configurable by the
@@ -93,7 +247,7 @@ not know how to handle. The guest kernel may ask the hypervisor for the
value with a hypercall.

#VE on Memory Accesses
-======================
+----------------------

There are essentially two classes of TDX memory: private and shared.
Private memory receives full TDX protections. Its content is protected
@@ -107,7 +261,7 @@ entries. This helps ensure that a guest does not place sensitive
information in shared memory, exposing it to the untrusted hypervisor.

#VE on Shared Memory
---------------------
+~~~~~~~~~~~~~~~~~~~~

Access to shared mappings can cause a #VE. The hypervisor ultimately
controls whether a shared memory access causes a #VE, so the guest must be
@@ -127,7 +281,7 @@ be careful not to access device MMIO regions unless it is also prepared to
handle a #VE.

#VE on Private Pages
---------------------
+~~~~~~~~~~~~~~~~~~~~

An access to private mappings can also cause a #VE. Since all kernel
memory is also private memory, the kernel might theoretically need to
@@ -145,7 +299,7 @@ The hypervisor is permitted to unilaterally move accepted pages to a
to handle the exception.

Linux #VE handler
-=================
+-----------------

Just like page faults or #GP's, #VE exceptions can be either handled or be
fatal. Typically, an unhandled userspace #VE results in a SIGSEGV.
@@ -167,7 +321,7 @@ While the block is in place, any #VE is elevated to a double fault (#DF)
which is not recoverable.

MMIO handling
-=============
+-------------

In non-TDX VMs, MMIO is usually implemented by giving a guest access to a
mapping which will cause a VMEXIT on access, and then the hypervisor
@@ -189,7 +343,7 @@ MMIO access via other means (like structure overlays) may result in an
oops.

Shared Memory Conversions
-=========================
+-------------------------

All TDX guest memory starts out as private at boot. This memory can not
be accessed by the hypervisor. However, some kernel users like device
--
2.39.1


2023-02-13 17:48:46

by Dave Hansen

[permalink] [raw]
Subject: Re: [PATCH v9 05/18] x86/virt/tdx: Add SEAMCALL infrastructure

On 2/13/23 03:59, Kai Huang wrote:
> diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
> index 4a3ee64c1ca7..5c5ecfddb15b 100644
> --- a/arch/x86/include/asm/tdx.h
> +++ b/arch/x86/include/asm/tdx.h
> @@ -8,6 +8,10 @@
> #include <asm/ptrace.h>
> #include <asm/shared/tdx.h>
>
> +#ifdef CONFIG_INTEL_TDX_HOST
...
> +#define TDX_SEAMCALL_GP (TDX_SW_ERROR | X86_TRAP_GP)
> +#define TDX_SEAMCALL_UD (TDX_SW_ERROR | X86_TRAP_UD)
> +
> +#endif

All these kinds of header #ifdefs just make it harder to write code in
.c files without matching #ifdefs. Think of code like this completely
made-up example:

if (!tdx_enable()) {
// Success! Make a seamcall:
int something = tdx_seamcall();
if (something == TDX_SEAMCALL_UD)
// oh no!
}

tdx_enable() can never return 0 if CONFIG_INTEL_TDX_HOST=n, so the
entire if() block is optimized away by the compiler. *BUT*, if you've
#ifdef'd away TDX_SEAMCALL_UD, you'll get a compile error. People
usually fix the compile error like this:

if (!tdx_enable()) {
#ifdef CONFIG_INTEL_TDX_HOST
// Success! Make a seamcall:
int something = tdx_seamcall();
if (something == TDX_SEAMCALL_UD)
// oh no!
#endif
}

Which isn't great.

Defining things unconditionally in header files is *FINE*, as long as
the #ifdefs are there somewhere to make the code go away at compile time.
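
For example, something like the below (just a sketch, not the exact hunk
I'm asking for; the -ENODEV stub value is only illustrative) keeps the
error codes visible to the compiler while the real TDX code still
compiles away when CONFIG_INTEL_TDX_HOST=n:

#define TDX_SEAMCALL_GP		(TDX_SW_ERROR | X86_TRAP_GP)
#define TDX_SEAMCALL_UD		(TDX_SW_ERROR | X86_TRAP_UD)

#ifdef CONFIG_INTEL_TDX_HOST
int tdx_enable(void);
#else
static inline int tdx_enable(void) { return -ENODEV; }
#endif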

Please post an updated (and tested) patch as a reply to this.

2023-02-13 17:59:33

by Dave Hansen

[permalink] [raw]
Subject: Re: [PATCH v9 07/18] x86/virt/tdx: Do TDX module per-cpu initialization

On 2/13/23 03:59, Kai Huang wrote:
> To avoid duplicated code, add a
> helper to call SEAMCALL on all online cpus one by one but with a skip
> function to check whether to skip certain cpus, and use that helper to
> do the per-cpu initialization.
...
> +/*
> + * Call @func on all online cpus one by one but skip those cpus
> + * when @skip_func is valid and returns true for them.
> + */
> +static int tdx_on_each_cpu_cond(int (*func)(void *), void *func_data,
> + bool (*skip_func)(int cpu, void *),
> + void *skip_data)

I only see one caller of this. Where is the duplicated code?

2023-02-13 18:09:41

by Dave Hansen

[permalink] [raw]
Subject: Re: [PATCH v9 07/18] x86/virt/tdx: Do TDX module per-cpu initialization

On 2/13/23 03:59, Kai Huang wrote:
> @@ -247,8 +395,17 @@ int tdx_enable(void)
> ret = __tdx_enable();
> break;
> case TDX_MODULE_INITIALIZED:
> - /* Already initialized, great, tell the caller. */
> - ret = 0;
> + /*
> + * The previous call of __tdx_enable() may only have
> + * initialized part of present cpus during module
> + * initialization, and new cpus may have become online
> + * since then.
> + *
> + * To make sure all online cpus are TDX-runnable, always
> + * do per-cpu initialization for all online cpus here
> + * even the module has been initialized.
> + */
> + ret = __tdx_enable_online_cpus();

I'm missing something here. CPUs get initialized through either:

1. __tdx_enable(), for the CPUs around at the time
2. tdx_cpu_online(), for hotplugged CPUs after __tdx_enable()

But, this is a third class. CPUs that came online after #1, but which
got missed by #2. How can that happen?

2023-02-13 21:14:03

by Huang, Kai

[permalink] [raw]
Subject: RE: [PATCH v9 07/18] x86/virt/tdx: Do TDX module per-cpu initialization

> On 2/13/23 03:59, Kai Huang wrote:
> > @@ -247,8 +395,17 @@ int tdx_enable(void)
> > ret = __tdx_enable();
> > break;
> > case TDX_MODULE_INITIALIZED:
> > - /* Already initialized, great, tell the caller. */
> > - ret = 0;
> > + /*
> > + * The previous call of __tdx_enable() may only have
> > + * initialized part of present cpus during module
> > + * initialization, and new cpus may have become online
> > + * since then.
> > + *
> > + * To make sure all online cpus are TDX-runnable, always
> > + * do per-cpu initialization for all online cpus here
> > + * even the module has been initialized.
> > + */
> > + ret = __tdx_enable_online_cpus();
>
> I'm missing something here. CPUs get initialized through either:
>
> 1. __tdx_enable(), for the CPUs around at the time 2. tdx_cpu_online(), for
> hotplugged CPUs after __tdx_enable()
>
> But, this is a third class. CPUs that came online after #1, but which got missed
> by #2. How can that happen?

(Replying via Microsoft Outlook because my Evolution suddenly stopped working after updating Fedora.)

Currently we depend on KVM's CPU hotplug callback to call tdx_cpu_online(). The problem is that KVM's callback can go away when the KVM module gets unloaded.

For example:

1) KVM module loaded when CPU 0, 1, 2 are online, CPU 3, 4, 5 are offline.
2) __tdx_enable() gets called. LP.INIT is done on CPU 0, 1, 2.
3) KVM gets unloaded. Its CPU hotplug callbacks are removed too.
4) CPU 3 becomes online. In this case, tdx_cpu_online() is not called for it as KVM's CPU hotplug callback is gone.

So later if KVM gets loaded again, we need to go through __tdx_enable_online_cpus() to do LP.INIT for CPU 3 as it's already online.

Perhaps I didn't explain clearly in the comment. Below is the updated one:

/*
* The previous call of __tdx_enable() may only have
* initialized part of present cpus during module
* initialization, and new cpus may have become online
* since then w/o doing per-cpu initialization.
*
* For example, a new CPU can become online when KVM is
* unloaded, in which case tdx_cpu_enable() is not called since
* KVM's CPU online callback has been removed.
*
* To make sure all online cpus are TDX-runnable, always
* do per-cpu initialization for all online cpus here
* even the module has been initialized.
*/

2023-02-13 21:19:40

by Huang, Kai

[permalink] [raw]
Subject: RE: [PATCH v9 07/18] x86/virt/tdx: Do TDX module per-cpu initialization

> On 2/13/23 03:59, Kai Huang wrote:
> > To avoid duplicated code, add a
> > helper to call SEAMCALL on all online cpus one by one but with a skip
> > function to check whether to skip certain cpus, and use that helper to
> > do the per-cpu initialization.
> ...
> > +/*
> > + * Call @func on all online cpus one by one but skip those cpus
> > + * when @skip_func is valid and returns true for them.
> > + */
> > +static int tdx_on_each_cpu_cond(int (*func)(void *), void *func_data,
> > + bool (*skip_func)(int cpu, void *),
> > + void *skip_data)
>
> I only see one caller of this. Where is the duplicated code?

The other caller is in patch 15 (x86/virt/tdx: Configure global KeyID on all packages).

I kinda mentioned this in the changelog:

" Similar to the per-cpu module initialization, a later step to config the key for the global KeyID..."

If we don't have this helper, then we can end up having the below loop in two functions:

for_each_online_cpu(cpu) {
if (should_skip(cpu))
continue;

// call @func on @cpu.
}

2023-02-13 21:22:15

by Huang, Kai

[permalink] [raw]
Subject: RE: [PATCH v9 05/18] x86/virt/tdx: Add SEAMCALL infrastructure

> On 2/13/23 03:59, Kai Huang wrote:
> > diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
> > index 4a3ee64c1ca7..5c5ecfddb15b 100644
> > --- a/arch/x86/include/asm/tdx.h
> > +++ b/arch/x86/include/asm/tdx.h
> > @@ -8,6 +8,10 @@
> > #include <asm/ptrace.h>
> > #include <asm/shared/tdx.h>
> >
> > +#ifdef CONFIG_INTEL_TDX_HOST
> ...
> > +#define TDX_SEAMCALL_GP (TDX_SW_ERROR |
> X86_TRAP_GP)
> > +#define TDX_SEAMCALL_UD (TDX_SW_ERROR |
> X86_TRAP_UD)
> > +
> > +#endif
>
> All these kinds of header #ifdefs do it make it harder to write code in .c files
> without matching #ifdefs. Think of code like this completely made up example:
>
> if (!tdx_enable()) {
> // Success! Make a seamcall:
> int something = tdx_seamcall();
> if (something == TDX_SEAMCALL_UD)
> // oh no!
> }
>
> tdx_enable() can never return 0 if CONFIG_INTEL_TDX_HOST=n, so the entire if()
> block is optimized away by the compiler. *BUT*, if you've #ifdef'd away
> TDX_SEAMCALL_UD, you'll get a compile error. People usually fix the compile
> error like this:
>
> if (!tdx_enable()) {
> #ifdef CONFIG_INTEL_TDX_HOST
> // Success! Make a seamcall:
> int something = tdx_seamcall();
> if (something == TDX_SEAMCALL_UD)
> // oh no!
> #endif
> }
>
> Which isn't great.
>
> Defining things unconditionally in header files is *FINE*, as long as the #ifdefs
> are there somewhere to make the code go away at compile time.

Thanks for the explanation above!

>
> Please post an updated (and tested) patch as a reply to this.

Will do.

2023-02-13 22:28:53

by Dave Hansen

[permalink] [raw]
Subject: Re: [PATCH v9 07/18] x86/virt/tdx: Do TDX module per-cpu initialization

On 2/13/23 13:13, Huang, Kai wrote:
> Perhaps I didn't explain clearly in the comment. Below is the updated one:
>
> /*
> * The previous call of __tdx_enable() may only have
> * initialized part of present cpus during module
> * initialization, and new cpus may have become online
> * since then w/o doing per-cpu initialization.
> *
> * For example, a new CPU can become online when KVM is
> * unloaded, in which case tdx_cpu_enable() is not called since
> * KVM's CPU online callback has been removed.
> *
> * To make sure all online cpus are TDX-runnable, always
> * do per-cpu initialization for all online cpus here
> * even the module has been initialized.
> */

This is voodoo.

I want a TDX-specific hotplug CPU handler. Period. Please make that
happen. Put that code in this patch. That handler should:

1. Run after the KVM handler (if present)
2. See if VMX is on
3. If VMX is on:
3a. Run smp_func_module_lp_init(), else
3b. Mark the CPU as needing smp_func_module_lp_init()

Then, in the 'case TDX_MODULE_INITIALIZED:', you call a function to
iterate over the cpumask that was generated in 3b.

That makes the handoff *EXPLICIT*. You know exactly which CPUs need
what done to them. A CPU hotplug either explicitly involves doing the
work to make TDX work on the CPU, or explicitly defers the work to a
specific later time in a specific later piece of code.

2023-02-13 22:39:20

by Dave Hansen

[permalink] [raw]
Subject: Re: [PATCH v9 05/18] x86/virt/tdx: Add SEAMCALL infrastructure

On 2/13/23 03:59, Kai Huang wrote:
> SEAMCALL instruction causes #GP when TDX isn't BIOS enabled, and #UD
> when CPU is not in VMX operation. The current TDX_MODULE_CALL macro
> doesn't handle any of them. There's no way to check whether the CPU is
> in VMX operation or not.

Really? ... *REALLY*?

Like, there's no possible way for the kernel to record whether it has
executed VMXON or not?

I think what you're saying here is that there's no architecturally
visible flag that tells you whether in spot #1 or #2 in the following code:

static int kvm_cpu_vmxon(u64 vmxon_pointer)
{
u64 msr;

cr4_set_bits(X86_CR4_VMXE);
// spot #1
asm_volatile_goto("1: vmxon %[vmxon_pointer]\n\t"
_ASM_EXTABLE(1b, %l[fault])
: : [vmxon_pointer] "m"(vmxon_pointer)
: : fault);
// spot #2

That's _maybe_ technically correct (I don't know enough about VMX
enabling to tell you). But, what I *DO* know is that it's nonsense to
say that it's impossible in the *kernel* to tell whether we're on a CPU
that's successfully executed VMXON or not.

kvm_cpu_vmxon() has two paths through it:

1. Successfully executes VMXON and leaves with X86_CR4_VMXE=1
2. Fails VMXON and leaves with X86_CR4_VMXE=0

Guess what? CR4 is rather architecturally visible. From what I can
tell, it's *ENTIRELY* plausible to assume that X86_CR4_VMXE==1 means
that VMXON has been done. Even if that's wrong, it's only a cpumask and
a cpumask_set() away from becoming plausible. Like so:

static int kvm_cpu_vmxon(u64 vmxon_pointer)
{
u64 msr;

cr4_set_bits(X86_CR4_VMXE);

asm_volatile_goto("1: vmxon %[vmxon_pointer]\n\t"
_ASM_EXTABLE(1b, %l[fault])
: : [vmxon_pointer] "m"(vmxon_pointer)
: : fault);
// set cpumask bit here
return 0;

fault:
// clear cpu bit here
cr4_clear_bits(X86_CR4_VMXE);

return -EFAULT;
}

How many design decisions down the line in this series were predicated
on the idea that:

There's no way to check whether the CPU is
in VMX operation or not.

?

2023-02-13 22:43:24

by Dave Hansen

[permalink] [raw]
Subject: Re: [PATCH v9 07/18] x86/virt/tdx: Do TDX module per-cpu initialization

On 2/13/23 13:19, Huang, Kai wrote:
>> On 2/13/23 03:59, Kai Huang wrote:
>>> To avoid duplicated code, add a
>>> helper to call SEAMCALL on all online cpus one by one but with a skip
>>> function to check whether to skip certain cpus, and use that helper to
>>> do the per-cpu initialization.
>> ...
>>> +/*
>>> + * Call @func on all online cpus one by one but skip those cpus
>>> + * when @skip_func is valid and returns true for them.
>>> + */
>>> +static int tdx_on_each_cpu_cond(int (*func)(void *), void *func_data,
>>> + bool (*skip_func)(int cpu, void *),
>>> + void *skip_data)
>> I only see one caller of this. Where is the duplicated code?
> The other caller is in patch 15 (x86/virt/tdx: Configure global KeyID on all packages).
>
> I kinda mentioned this in the changelog:
>
> " Similar to the per-cpu module initialization, a later step to config the key for the global KeyID..."
>
> If we don't have this helper, then we can end up with having below loop in two functions:
>
> for_each_online_cpu(cpu) {
> if (should_skip(cpu))
> continue;
>
> // call @func on @cpu.
> }

I don't think saving two lines of actual code is worth the opacity that
results from this abstraction.

2023-02-13 23:22:49

by Huang, Kai

[permalink] [raw]
Subject: Re: [PATCH v9 05/18] x86/virt/tdx: Add SEAMCALL infrastructure

On Mon, 2023-02-13 at 14:39 -0800, Dave Hansen wrote:
> On 2/13/23 03:59, Kai Huang wrote:
> > SEAMCALL instruction causes #GP when TDX isn't BIOS enabled, and #UD
> > when CPU is not in VMX operation. The current TDX_MODULE_CALL macro
> > doesn't handle any of them. There's no way to check whether the CPU is
> > in VMX operation or not.
>
> Really? ... *REALLY*?
>
> Like, there's no possible way for the kernel to record whether it has
> executed VMXON or not?
>
> I think what you're saying here is that there's no architecturally
> visible flag that tells you whether in spot #1 or #2 in the following code:
>
> static int kvm_cpu_vmxon(u64 vmxon_pointer)
> {
> u64 msr;
>
> cr4_set_bits(X86_CR4_VMXE);
> // spot #1
> asm_volatile_goto("1: vmxon %[vmxon_pointer]\n\t"
> _ASM_EXTABLE(1b, %l[fault])
> : : [vmxon_pointer] "m"(vmxon_pointer)
> : : fault);
> // spot #2
>

Yes, I was talking about an architectural flag rather than a kernel-defined
software tracking mechanism.

> That's _maybe_ technically correct (I don't know enough about VMX
> enabling to tell you). But, what I *DO* know is that it's nonsense to
> say that it's impossible in the *kernel* to tell whether we're on a CPU
> that's successfully executed VMXON or not.
>
> kvm_cpu_vmxon() has two paths through it:
>
> 1. Successfully executes VMXON and leaves with X86_CR4_VMXE=1
> 2. Fails VMXON and leaves with X86_CR4_VMXE=0
>
> Guess what? CR4 is rather architecturally visible. From what I can
> tell, it's *ENTIRELY* plausible to assume that X86_CR4_VMXE==1 means
> that VMXON has been done.  
>

Yes, the CR4.VMXE bit can be used to check. This is what KVM does.

Architecturally the CR4.VMXE bit only indicates whether VMX is enabled, not
whether VMXON has been done, but in the current kernel implementation they
are always done together.

So checking CR4 is fine.

> Even if that's wrong, it's only a cpumask and
> a cpumask_set() away from becoming plausible. Like so:
>
> static int kvm_cpu_vmxon(u64 vmxon_pointer)
> {
> u64 msr;
>
> cr4_set_bits(X86_CR4_VMXE);
>
> asm_volatile_goto("1: vmxon %[vmxon_pointer]\n\t"
> _ASM_EXTABLE(1b, %l[fault])
> : : [vmxon_pointer] "m"(vmxon_pointer)
> : : fault);
> // set cpumask bit here
> return 0;
>
> fault:
> // clear cpu bit here
> cr4_clear_bits(X86_CR4_VMXE);
>
> return -EFAULT;
> }
>
> How many design decisions down the line in this series were predicated
> on the idea that:
>
> There's no way to check whether the CPU is
> in VMX operation or not.
>
> ?

Only the assembly code to handle TDX_SEAMCALL_UD in this patch is.

Whether we have a definitive way to _check_ whether VMXON has been done
doesn't matter. What impacts the design decisions is that the (non-KVM)
kernel doesn't support doing VMXON and we depend on KVM to do that (which
is also a design decision).

We can remove the assembly code which returns TDX_SEAMCALL_{UD|GP} and replace
it with the below check in seamcall():

static int seamcall(...)
{
	cpu = get_cpu();

	if (!(cr4_read_shadow() & X86_CR4_VMXE)) {
		WARN_ONCE(1, "VMXON isn't done for cpu %d\n", cpu);
		ret = -EINVAL;
		goto out;
	}

	...

out:
	put_cpu();
	return ret;
}

But this was actually discussed in v5, in which IIUC you preferred having
the assembly code return an additional TDX_SEAMCALL_UD rather than having
the above CR4 check:

https://lore.kernel.org/all/[email protected]/


2023-02-13 23:43:55

by Huang, Kai

[permalink] [raw]
Subject: Re: [PATCH v9 07/18] x86/virt/tdx: Do TDX module per-cpu initialization

On Mon, 2023-02-13 at 14:28 -0800, Dave Hansen wrote:
> On 2/13/23 13:13, Huang, Kai wrote:
> > Perhaps I didn't explain clearly in the comment. Below is the updated one:
> >
> > /*
> > * The previous call of __tdx_enable() may only have
> > * initialized part of present cpus during module
> > * initialization, and new cpus may have become online
> > * since then w/o doing per-cpu initialization.
> > *
> > * For example, a new CPU can become online when KVM is
> > * unloaded, in which case tdx_cpu_enable() is not called since
> > * KVM's CPU online callback has been removed.
> > *
> > * To make sure all online cpus are TDX-runnable, always
> > * do per-cpu initialization for all online cpus here
> > * even the module has been initialized.
> > */
>
> This is voodoo.
>
> I want a TDX-specific hotplug CPU handler. Period. Please make that
> happen.  
>

Yes 100% agreed.

> Put that code in this patch. That handler should:
>
> 1. Run after the KVM handler (if present)
> 2. See if VMX is on
> 3. If VMX is on:
> 3a. Run smp_func_module_lp_init(), else
> 3b. Mark the CPU as needing smp_func_module_lp_init()
>
> Then, in the 'case TDX_MODULE_INITIALIZED:', you call a function to
> iterate over the cpumask that was generated in 3b.
>
> That makes the handoff *EXPLICIT*. You know exactly which CPUs need
> what done to them. A CPU hotplug either explicitly involves doing the
> work to make TDX work on the CPU, or explicitly defers the work to a
> specific later time in a specific later piece of code.

In 3b. we don't need to "explicitly mark the CPU as needing
smp_func_module_lp_init()". We already have __cpu_tdx_mask to track whether
LP.INIT has been done on a cpu, and we can use that to determine:

Any online cpu which isn't set in __cpu_tdx_mask needs to do LP.INIT in
tdx_enable().

And the function module_lp_init_online_cpus() already handles that, and it can
be called directly in the tdx_enable() path (as shown in this patch).

I'll do the above as you suggested, but just use __cpu_tdx_mask as explained above.

( My main concern is "Run after the KVM handler" seems a little bit hacky to me.
Logically, it's more reasonable to have the TDX callback _before_ KVM's but not
_after_. If any user (KVM) has done tdx_enable() successfully, the TDX code
should give the user a "TDX-runnable" cpu before user (KVM)'s own callback is
involved. Anyway as mentioned above, I'll do above as you suggested.)

2023-02-13 23:52:16

by Dave Hansen

[permalink] [raw]
Subject: Re: [PATCH v9 07/18] x86/virt/tdx: Do TDX module per-cpu initialization

On 2/13/23 15:43, Huang, Kai wrote:
> ( My main concern is "Run after the KVM handler" seems a little bit hacky to me.
> Logically, it's more reasonable to have the TDX callback _before_ KVM's but not
> _after_. If any user (KVM) has done tdx_enable() successfully, the TDX code
> should give the user a "TDX-runnable" cpu before user (KVM)'s own callback is
> involved. Anyway as mentioned above, I'll do above as you suggested.)

I was assuming that the KVM callback is what does VMXON for a given
logical CPU. If that were the case, you'd need to do the TDX stuff
*AFTER* VMXON.

Am I wrong?



2023-02-14 00:02:59

by Huang, Kai

[permalink] [raw]
Subject: Re: [PATCH v9 07/18] x86/virt/tdx: Do TDX module per-cpu initialization

On Mon, 2023-02-13 at 14:43 -0800, Dave Hansen wrote:
> On 2/13/23 13:19, Huang, Kai wrote:
> > > On 2/13/23 03:59, Kai Huang wrote:
> > > > To avoid duplicated code, add a
> > > > helper to call SEAMCALL on all online cpus one by one but with a skip
> > > > function to check whether to skip certain cpus, and use that helper to
> > > > do the per-cpu initialization.
> > > ...
> > > > +/*
> > > > + * Call @func on all online cpus one by one but skip those cpus
> > > > + * when @skip_func is valid and returns true for them.
> > > > + */
> > > > +static int tdx_on_each_cpu_cond(int (*func)(void *), void *func_data,
> > > > + bool (*skip_func)(int cpu, void *),
> > > > + void *skip_data)
> > > I only see one caller of this. Where is the duplicated code?
> > The other caller is in patch 15 (x86/virt/tdx: Configure global KeyID on all packages).
> >
> > I kinda mentioned this in the changelog:
> >
> > " Similar to the per-cpu module initialization, a later step to config the key for the global KeyID..."
> >
> > If we don't have this helper, then we can end up with having below loop in two functions:
> >
> > for_each_online_cpu(cpu) {
> > if (should_skip(cpu))
> > continue;
> >
> > // call @func on @cpu.
> > }
>
> I don't think saving two lines of actual code is worth the opacity that
> results from this abstraction.

Alright, thanks for the suggestion. I'll remove this tdx_on_each_cpu_cond() and
do it directly.

But just checking:

LP.INIT can actually be called in parallel on different cpus (doesn't have to,
of course), so we can actually just use on_each_cpu_cond() for LP.INIT:

on_each_cpu_cond(should_skip_cpu, smp_func_module_lp_init, NULL, true);

But IIUC Peter doesn't like using IPI and prefers using via work:

https://lore.kernel.org/lkml/[email protected]/

So I used smp_call_on_cpu() here, which only calls @func on one cpu, not on a
cpumask. For LP.INIT, ideally we could have something like:

schedule_on_cpu(struct cpumask *cpus, work_func_t func);

to call @func on a set of cpus, but that doesn't exist now, and I don't think
it's worth introducing it?

So, should I use on_each_cpu_cond(), or use smp_call_on_cpu() here?
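
For reference, the smp_call_on_cpu() variant I have in mind looks roughly
like below (just a sketch; cpu_tdx_mask and smp_func_module_lp_init() are
names from this series):

	for_each_online_cpu(cpu) {
		/* Skip cpus that have already done LP.INIT */
		if (cpumask_test_cpu(cpu, cpu_tdx_mask))
			continue;

		/* Run smp_func_module_lp_init() in a kworker on @cpu */
		ret = smp_call_on_cpu(cpu, smp_func_module_lp_init, NULL, true);
		if (ret)
			break;
	}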

(btw, TDH.SYS.KEY.CONFIG must be done one cpu at a time as it cannot run in
parallel on multiple cpus, so I'll use smp_call_on_cpu().)

2023-02-14 00:09:15

by Huang, Kai

[permalink] [raw]
Subject: Re: [PATCH v9 07/18] x86/virt/tdx: Do TDX module per-cpu initialization

On Mon, 2023-02-13 at 15:52 -0800, Dave Hansen wrote:
> On 2/13/23 15:43, Huang, Kai wrote:
> > ( My main concern is "Run after the KVM handler" seems a little bit hacky to me.
> > Logically, it's more reasonable to have the TDX callback _before_ KVM's but not
> > _after_. If any user (KVM) has done tdx_enable() successfully, the TDX code
> > should give the user a "TDX-runnable" cpu before user (KVM)'s own callback is
> > involved. Anyway as mentioned above, I'll do above as you suggested.)
>
> I was assuming that the KVM callback is what does VMXON for a given
> logical CPU. If that were the case, you'd need to do the TDX stuff
> *AFTER* VMXON.
>
> Am I wrong?
>
>

You are right.

What I meant was: because we choose to not support VMXON in the (non-KVM)
kernel, we need/have to put TDX's callback after KVM's. Otherwise, perhaps a
better way is to put TDX's callback before KVM's. But maybe it's an arguable
"perhaps", so let's just do TDX's callback after KVM's as you suggested.

2023-02-14 03:31:39

by Huang, Ying

[permalink] [raw]
Subject: Re: [PATCH v9 09/18] x86/virt/tdx: Use all system memory when initializing TDX module as TDX memory

Kai Huang <[email protected]> writes:

> As a step of initializing the TDX module, the kernel needs to tell the
> TDX module which memory regions can be used by the TDX module as TDX
> guest memory.
>
> TDX reports a list of "Convertible Memory Region" (CMR) to tell the
> kernel which memory is TDX compatible. The kernel needs to build a list
> of memory regions (out of CMRs) as "TDX-usable" memory and pass them to
> the TDX module. Once this is done, those "TDX-usable" memory regions
> are fixed during module's lifetime.
>
> To keep things simple, assume that all TDX-protected memory will come
> from the page allocator. Make sure all pages in the page allocator
> *are* TDX-usable memory.
>
> As TDX-usable memory is a fixed configuration, take a snapshot of the
> memory configuration from memblocks at the time of module initialization
> (memblocks are modified on memory hotplug). This snapshot is used to
> enable TDX support for *this* memory configuration only. Use a memory
> hotplug notifier to ensure that no other RAM can be added outside of
> this configuration.
>
> This approach requires all memblock memory regions at the time of module
> initialization to be TDX convertible memory to work, otherwise module
> initialization will fail in a later SEAMCALL when passing those regions
> to the module. This approach works when all boot-time "system RAM" are
> TDX convertible memory, and no non-TDX-convertible memory is hot-added
> to the core-mm before module initialization.
>
> For instance, on the first generation of TDX machines, both CXL memory
> and NVDIMM are not TDX convertible memory. Using kmem driver to hot-add
> any CXL memory or NVDIMM to the core-mm before module initialization
> will result in module fail to initialize. The SEAMCALL error code will
> be available in the dmesg to help user to understand the failure.
>
> Signed-off-by: Kai Huang <[email protected]>

Looks good to me! Thanks!

Reviewed-by: "Huang, Ying" <[email protected]>

> ---
>
> v8 -> v9:
> - Replace "The initial support ..." with timeless sentence in both
> changelog and comments(Dave).
> - Fix run-on sentence in changelog, and senstence to explain why to
> stash off memblock (Dave).
> - Tried to improve why to choose this approach and how it work in
> changelog based on Dave's suggestion.
> - Many other comments enhancement (Dave).
>
> v7 -> v8:
> - Trimed down changelog (Dave).
> - Changed to use PHYS_PFN() and PFN_PHYS() throughout this series
> (Ying).
> - Moved memory hotplug handling from add_arch_memory() to
> memory_notifier (Dan/David).
> - Removed 'nid' from 'struct tdx_memblock' to later patch (Dave).
> - {build|free}_tdx_memory() -> {build|}free_tdx_memlist() (Dave).
> - Removed pfn_covered_by_cmr() check as no code to trim CMRs now.
> - Improve the comment around first 1MB (Dave).
> - Added a comment around reserve_real_mode() to point out TDX code
> relies on first 1MB being reserved (Ying).
> - Added comment to explain why the new online memory range cannot
> cross multiple TDX memory blocks (Dave).
> - Improved other comments (Dave).
>
> ---
> arch/x86/Kconfig | 1 +
> arch/x86/kernel/setup.c | 2 +
> arch/x86/virt/vmx/tdx/tdx.c | 159 +++++++++++++++++++++++++++++++++++-
> arch/x86/virt/vmx/tdx/tdx.h | 6 ++
> 4 files changed, 167 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index 6dd5d5586099..f23bc540778a 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -1958,6 +1958,7 @@ config INTEL_TDX_HOST
> depends on X86_64
> depends on KVM_INTEL
> depends on X86_X2APIC
> + select ARCH_KEEP_MEMBLOCK
> help
> Intel Trust Domain Extensions (TDX) protects guest VMs from malicious
> host and certain physical attacks. This option enables necessary TDX
> diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
> index 88188549647c..a8a119a9b48c 100644
> --- a/arch/x86/kernel/setup.c
> +++ b/arch/x86/kernel/setup.c
> @@ -1165,6 +1165,8 @@ void __init setup_arch(char **cmdline_p)
> *
> * Moreover, on machines with SandyBridge graphics or in setups that use
> * crashkernel the entire 1M is reserved anyway.
> + *
> + * Note the host kernel TDX also requires the first 1MB being reserved.
> */
> x86_platform.realmode_reserve();
>
> diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
> index ae8e59294b46..5101b636a9b0 100644
> --- a/arch/x86/virt/vmx/tdx/tdx.c
> +++ b/arch/x86/virt/vmx/tdx/tdx.c
> @@ -15,6 +15,13 @@
> #include <linux/mutex.h>
> #include <linux/cpumask.h>
> #include <linux/cpu.h>
> +#include <linux/list.h>
> +#include <linux/slab.h>
> +#include <linux/memblock.h>
> +#include <linux/memory.h>
> +#include <linux/minmax.h>
> +#include <linux/sizes.h>
> +#include <linux/pfn.h>
> #include <asm/msr-index.h>
> #include <asm/msr.h>
> #include <asm/page.h>
> @@ -33,6 +40,9 @@ static DEFINE_MUTEX(tdx_module_lock);
> static cpumask_t __cpu_tdx_mask;
> static cpumask_t *cpu_tdx_mask = &__cpu_tdx_mask;
>
> +/* All TDX-usable memory regions. Protected by mem_hotplug_lock. */
> +static LIST_HEAD(tdx_memlist);
> +
> /*
> * Use tdx_global_keyid to indicate that TDX is uninitialized.
> * This is used in TDX initialization error paths to take it from
> @@ -71,6 +81,51 @@ static int __init record_keyid_partitioning(u32 *tdx_keyid_start,
> return 0;
> }
>
> +static bool is_tdx_memory(unsigned long start_pfn, unsigned long end_pfn)
> +{
> + struct tdx_memblock *tmb;
> +
> + /* Empty list means TDX isn't enabled. */
> + if (list_empty(&tdx_memlist))
> + return true;
> +
> + /*
> + * This check assumes that the start_pfn<->end_pfn range does not
> + * cross multiple @tdx_memlist entries. A single memory online
> + * event across multiple memblocks (from which @tdx_memlist
> + * entries are derived at the time of module initialization) is
> + * not possible. This is because memory offline/online is done
> + * on the granularity of 'struct memory_block', and a hotpluggable
> + * memory region (one memblock) must be a multiple of memory_block.
> + */
> + list_for_each_entry(tmb, &tdx_memlist, list) {
> + if (start_pfn >= tmb->start_pfn && end_pfn <= tmb->end_pfn)
> + return true;
> + }
> + return false;
> +}
> +
> +static int tdx_memory_notifier(struct notifier_block *nb, unsigned long action,
> + void *v)
> +{
> + struct memory_notify *mn = v;
> +
> + if (action != MEM_GOING_ONLINE)
> + return NOTIFY_OK;
> +
> + /*
> + * The TDX memory configuration is static and can not be
> + * changed. Reject onlining any memory which is outside of
> + * the static configuration whether it supports TDX or not.
> + */
> + return is_tdx_memory(mn->start_pfn, mn->start_pfn + mn->nr_pages) ?
> + NOTIFY_OK : NOTIFY_BAD;
> +}
> +
> +static struct notifier_block tdx_memory_nb = {
> + .notifier_call = tdx_memory_notifier,
> +};
> +
> static int __init tdx_init(void)
> {
> u32 tdx_keyid_start, nr_tdx_keyids;
> @@ -101,6 +156,13 @@ static int __init tdx_init(void)
> goto no_tdx;
> }
>
> + err = register_memory_notifier(&tdx_memory_nb);
> + if (err) {
> + pr_info("initialization failed: register_memory_notifier() failed (%d)\n",
> + err);
> + goto no_tdx;
> + }
> +
> tdx_guest_keyid_start = tdx_keyid_start;
> tdx_nr_guest_keyids = nr_tdx_keyids;
>
> @@ -288,6 +350,79 @@ static int tdx_get_sysinfo(struct tdsysinfo_struct *sysinfo,
> return 0;
> }
>
> +/*
> + * Add a memory region as a TDX memory block. The caller must make sure
> + * all memory regions are added in address ascending order and don't
> + * overlap.
> + */
> +static int add_tdx_memblock(struct list_head *tmb_list, unsigned long start_pfn,
> + unsigned long end_pfn)
> +{
> + struct tdx_memblock *tmb;
> +
> + tmb = kmalloc(sizeof(*tmb), GFP_KERNEL);
> + if (!tmb)
> + return -ENOMEM;
> +
> + INIT_LIST_HEAD(&tmb->list);
> + tmb->start_pfn = start_pfn;
> + tmb->end_pfn = end_pfn;
> +
> + /* @tmb_list is protected by mem_hotplug_lock */
> + list_add_tail(&tmb->list, tmb_list);
> + return 0;
> +}
> +
> +static void free_tdx_memlist(struct list_head *tmb_list)
> +{
> + /* @tmb_list is protected by mem_hotplug_lock */
> + while (!list_empty(tmb_list)) {
> + struct tdx_memblock *tmb = list_first_entry(tmb_list,
> + struct tdx_memblock, list);
> +
> + list_del(&tmb->list);
> + kfree(tmb);
> + }
> +}
> +
> +/*
> + * Ensure that all memblock memory regions are convertible to TDX
> + * memory. Once this has been established, stash the memblock
> + * ranges off in a secondary structure because memblock is modified
> + * in memory hotplug while TDX memory regions are fixed.
> + */
> +static int build_tdx_memlist(struct list_head *tmb_list)
> +{
> + unsigned long start_pfn, end_pfn;
> + int i, ret;
> +
> + for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, NULL) {
> + /*
> + * The first 1MB is not reported as TDX convertible memory.
> + * Although the first 1MB is always reserved and won't end up
> + * in the page allocator, it is still in memblock's memory
> + * regions. Skip it manually to exclude it from TDX memory.
> + */
> + start_pfn = max(start_pfn, PHYS_PFN(SZ_1M));
> + if (start_pfn >= end_pfn)
> + continue;
> +
> + /*
> + * Add the memory regions as TDX memory. The regions in
> + * memblock are already guaranteed to be in address-ascending
> + * order and to not overlap.
> + */
> + ret = add_tdx_memblock(tmb_list, start_pfn, end_pfn);
> + if (ret)
> + goto err;
> + }
> +
> + return 0;
> +err:
> + free_tdx_memlist(tmb_list);
> + return ret;
> +}
> +
> static int init_tdx_module(void)
> {
> static DECLARE_PADDED_STRUCT(tdsysinfo_struct, tdsysinfo,
> @@ -326,10 +461,25 @@ static int init_tdx_module(void)
> if (ret)
> goto out;
>
> + /*
> + * To keep things simple, assume that all TDX-protected memory
> + * will come from the page allocator. Make sure all pages in the
> + * page allocator are TDX-usable memory.
> + *
> + * Build the list of "TDX-usable" memory regions which cover all
> + * pages in the page allocator to guarantee that. Do it while
> + * holding mem_hotplug_lock read-lock as the memory hotplug code
> + * path reads the @tdx_memlist to reject any new memory.
> + */
> + get_online_mems();
> +
> + ret = build_tdx_memlist(&tdx_memlist);
> + if (ret)
> + goto out;
> +
> /*
> * TODO:
> *
> - * - Build the list of TDX-usable memory regions.
> * - Construct a list of "TD Memory Regions" (TDMRs) to cover
> * all TDX-usable memory regions.
> * - Configure the TDMRs and the global KeyID to the TDX module.
> @@ -340,6 +490,12 @@ static int init_tdx_module(void)
> */
> ret = -EINVAL;
> out:
> + /*
> + * @tdx_memlist is written here and read at memory hotplug time.
> + * Lock out memory hotplug code while building it.
> + */
> + put_online_mems();
> +
> /*
> * Clear @cpu_tdx_mask if module initialization fails before
> * CPU hotplug is re-enabled. tdx_cpu_online() uses it to check
> @@ -382,6 +538,7 @@ static void disable_tdx_module(void)
> * init_tdx_module(). Remove this comment after
> * all steps are done.
> */
> + free_tdx_memlist(&tdx_memlist);
> cpumask_clear(cpu_tdx_mask);
> }
>
> diff --git a/arch/x86/virt/vmx/tdx/tdx.h b/arch/x86/virt/vmx/tdx/tdx.h
> index e32d9920b3a7..edb1d697347f 100644
> --- a/arch/x86/virt/vmx/tdx/tdx.h
> +++ b/arch/x86/virt/vmx/tdx/tdx.h
> @@ -112,6 +112,12 @@ enum tdx_module_status_t {
> TDX_MODULE_ERROR
> };
>
> +struct tdx_memblock {
> + struct list_head list;
> + unsigned long start_pfn;
> + unsigned long end_pfn;
> +};
> +
> struct tdx_module_output;
> u64 __seamcall(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9,
> struct tdx_module_output *out);

2023-02-14 08:24:58

by Huang, Kai

[permalink] [raw]
Subject: Re: [PATCH v9 09/18] x86/virt/tdx: Use all system memory when initializing TDX module as TDX memory

On Tue, 2023-02-14 at 11:30 +0800, Huang, Ying wrote:
> Looks good to me!  Thanks!
>
> Reviewed-by: "Huang, Ying" <[email protected]>

Thanks!

2023-02-14 08:58:40

by Huang, Kai

[permalink] [raw]
Subject: Re: [PATCH v9 05/18] x86/virt/tdx: Add SEAMCALL infrastructure

On Mon, 2023-02-13 at 23:22 +0000, Huang, Kai wrote:
> On Mon, 2023-02-13 at 14:39 -0800, Dave Hansen wrote:
> > On 2/13/23 03:59, Kai Huang wrote:
> > > SEAMCALL instruction causes #GP when TDX isn't BIOS enabled, and #UD
> > > when CPU is not in VMX operation. The current TDX_MODULE_CALL macro
> > > doesn't handle any of them. There's no way to check whether the CPU is
> > > in VMX operation or not.
> >
> > Really? ... *REALLY*?
> >
> > Like, there's no possible way for the kernel to record whether it has
> > executed VMXON or not?
> >
> > I think what you're saying here is that there's no architecturally
> > visible flag that tells you whether in spot #1 or #2 in the following code:
> >
> > static int kvm_cpu_vmxon(u64 vmxon_pointer)
> > {
> > u64 msr;
> >
> > cr4_set_bits(X86_CR4_VMXE);
> > // spot #1
> > asm_volatile_goto("1: vmxon %[vmxon_pointer]\n\t"
> > _ASM_EXTABLE(1b, %l[fault])
> > : : [vmxon_pointer] "m"(vmxon_pointer)
> > : : fault);
> > // spot #2
> >
>
> Yes, I was talking about an architectural flag rather than a kernel-defined
> software tracking mechanism.
>
> > That's _maybe_ technically correct (I don't know enough about VMX
> > enabling to tell you). But, what I *DO* know is that it's nonsense to
> > say that it's impossible in the *kernel* to tell whether we're on a CPU
> > that's successfully executed VMXON or not.
> >
> > kvm_cpu_vmxon() has two paths through it:
> >
> > 1. Successfully executes VMXON and leaves with X86_CR4_VMXE=1
> > 2. Fails VMXON and leaves with X86_CR4_VMXE=0
> >
> > Guess what? CR4 is rather architecturally visible. From what I can
> > tell, it's *ENTIRELY* plausible to assume that X86_CR4_VMXE==1 means
> > that VMXON has been done.  
> >
>
> Yes, the CR4.VMXE bit can be used to check. This is what KVM does.
>
> Architecturally the CR4.VMXE bit only indicates whether VMX is enabled, not
> whether VMXON has been done, but in the current kernel implementation they are
> always done together.
>
> So checking CR4 is fine.
>
> > Even if that's wrong, it's only a cpumask and
> > a cpumask_set() away from becoming plausible. Like so:
> >
> > static int kvm_cpu_vmxon(u64 vmxon_pointer)
> > {
> > u64 msr;
> >
> > cr4_set_bits(X86_CR4_VMXE);
> >
> > asm_volatile_goto("1: vmxon %[vmxon_pointer]\n\t"
> > _ASM_EXTABLE(1b, %l[fault])
> > : : [vmxon_pointer] "m"(vmxon_pointer)
> > : : fault);
> > // set cpumask bit here
> > return 0;
> >
> > fault:
> > // clear cpu bit here
> > cr4_clear_bits(X86_CR4_VMXE);
> >
> > return -EFAULT;
> > }
> >
> > How many design decisions down the line in this series were predicated
> > on the idea that:
> >
> > There's no way to check whether the CPU is
> > in VMX operation or not.
> >
> > ?
>
> Only the assembly code to handle TDX_SEAMCALL_UD in this patch is.
>
> Whether we have a definitive way to _check_ whether VMXON has been done doesn't
> matter. What impacts the design decisions is that the (non-KVM) kernel doesn't
> support doing VMXON and we depend on KVM to do that (which is also a design
> decision).
>
> We can remove the assembly code which returns TDX_SEAMCALL_{UD|GP} and replace
> it with the below check in seamcall():
>
> static int seamcall(...)
> {
> cpu = get_cpu();
>
> if (!(cr4_read_shadow() & X86_CR4_VMXE)) {
> WARN_ONCE(1, "VMXON isn't done for cpu ...\n");
> ret = -EINVAL;
> goto out;
> }
>
> ...
>
> out:
> put_cpu();
> return ret;
> }
>
> But this was actually discussed in v5, in which IIUC you preferred having
> the assembly code return an additional TDX_SEAMCALL_UD rather than having the
> above CR4 check:
>
> https://lore.kernel.org/all/[email protected]/
>
>

Hmm, I replied too quickly. If we need to consider other non-KVM kernel
components mistakenly calling tdx_enable() without doing VMXON on all online
cpus first, there is one issue with using CR4.VMXE to check whether VMXON has
been done (or even whether VMX has been enabled) in my above pseudo seamcall()
implementation.

The problem is that the above seamcall() code isn't IRQ-safe between the
CR4.VMXE check and the actual SEAMCALL.

KVM does VMXON/VMXOFF for all online cpus via IPI:

// when first VM is created
on_each_cpu(hardware_enable, NULL, 1);

// when last VM is destroyed
on_each_cpu(hardware_disable, NULL, 1);

Consider this case:

1) KVM does VMXON for all online cpus (a VM created)
2) Another kernel component is calling tdx_enable()
3) KVM does VMXOFF for all online cpus (last VM is destroyed)

When 2) and 3) happen in parallel on different cpus, below race can happen:

CPU 0 (CR4.VMXE enabled)                CPU 1 (CR4.VMXE enabled)

non-KVM thread calling seamcall()       KVM thread doing VMXOFF via IPI

Check CR4.VMXE <- pass
                                        on_each_cpu(hardware_disable)

                                        send IPI to CPU 0 to do VMXOFF
        <-------
// Interrupted
// IRQ handler to do VMXOFF

VMXOFF
clear CR4.VMXE

// IRQ done.
// Resume to seamcall()

SEAMCALL <-- #UD

So we do need to handle #UD in the assembly if we want tdx_enable() to be safe
in general (i.e. it doesn't cause an Oops even when mistakenly used outside of
KVM).

However, in the TDX CPU online callback, checking CR4.VMXE to know whether VMXON
has been done is fine, since KVM will never send an IPI to those "to-be-online"
cpus.
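
For illustration only, below is a minimal sketch of what such a CR4.VMXE check
in the CPU online callback could look like. The callback name and the LP.INIT
helper are hypothetical, not the actual patch code:

static int tdx_cpu_online_example(unsigned int cpu)
{
        /*
         * This runs on the CPU that is coming online.  KVM never sends
         * the VMXOFF IPI to a CPU that is still going online, so the
         * CR4.VMXE value read here cannot be cleared underneath this
         * check and no #UD handling is needed.
         */
        if (!(cr4_read_shadow() & X86_CR4_VMXE))
                return 0;       /* VMXON not done yet; skip LP.INIT */

        return tdx_lp_init_this_cpu();  /* hypothetical LP.INIT helper */
}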

2023-02-14 15:27:19

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH v9 07/18] x86/virt/tdx: Do TDX module per-cpu initialization

On Tue, Feb 14, 2023 at 12:59:14AM +1300, Kai Huang wrote:
> +/*
> + * Call @func on all online cpus one by one but skip those cpus
> + * when @skip_func is valid and returns true for them.
> + */
> +static int tdx_on_each_cpu_cond(int (*func)(void *), void *func_data,
> + bool (*skip_func)(int cpu, void *),
> + void *skip_data)
> +{
> + int cpu;
> +
> + for_each_online_cpu(cpu) {
> + int ret;
> +
> + if (skip_func && skip_func(cpu, skip_data))
> + continue;
> +
> + /*
> + * SEAMCALL can be time consuming. Call the @func on
> + * remote cpu via smp_call_on_cpu() instead of
> + * smp_call_function_single() to avoid busy waiting.
> + */
> + ret = smp_call_on_cpu(cpu, func, func_data, true);
> + if (ret)
> + return ret;
> + }
> +
> + return 0;
> +}

schedule_on_each_cpu() ?
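
For reference, schedule_on_each_cpu() runs a work function synchronously on
every online CPU via the system workqueue, but it has no skip/condition hook.
A rough usage sketch (the work function and wrapper names are made up for the
example):

#include <linux/workqueue.h>

/* Hypothetical per-cpu work; the real code would do LP.INIT here. */
static void lp_init_work_fn(struct work_struct *work)
{
        /* runs in process context on the CPU the work was scheduled on */
}

static int lp_init_all_cpus(void)
{
        /* Queues lp_init_work_fn() on every online CPU and waits for all. */
        return schedule_on_each_cpu(lp_init_work_fn);
}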

2023-02-14 15:27:21

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH v9 05/18] x86/virt/tdx: Add SEAMCALL infrastructure

On Tue, Feb 14, 2023 at 12:59:12AM +1300, Kai Huang wrote:
> +/*
> + * Wrapper of __seamcall() to convert SEAMCALL leaf function error code
> + * to kernel error code. @seamcall_ret and @out contain the SEAMCALL
> + * leaf function return code and the additional output respectively if
> + * not NULL.
> + */
> +static int __always_unused seamcall(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9,
> + u64 *seamcall_ret,
> + struct tdx_module_output *out)
> +{
> + int cpu, ret = 0;
> + u64 sret;
> +
> + /* Need a stable CPU id for printing error message */
> + cpu = get_cpu();
> +
> + sret = __seamcall(fn, rcx, rdx, r8, r9, out);
> +
> + /* Save SEAMCALL return code if the caller wants it */
> + if (seamcall_ret)
> + *seamcall_ret = sret;
> +
> + /* SEAMCALL was successful */
> + if (!sret)
> + goto out;

I'm thinking you want if (likely(!sret)), here. That whole switch thing
should end up in cold storage.

> +
> + switch (sret) {
> + case TDX_SEAMCALL_GP:
> + /*
> + * tdx_enable() has already checked that BIOS has
> + * enabled TDX at the very beginning before going
> + * forward. It's likely a firmware bug if the
> + * SEAMCALL still caused #GP.
> + */
> + pr_err_once("[firmware bug]: TDX is not enabled by BIOS.\n");
> + ret = -ENODEV;
> + break;
> + case TDX_SEAMCALL_VMFAILINVALID:
> + pr_err_once("TDX module is not loaded.\n");
> + ret = -ENODEV;
> + break;
> + case TDX_SEAMCALL_UD:
> + pr_err_once("SEAMCALL failed: CPU %d is not in VMX operation.\n",
> + cpu);
> + ret = -EINVAL;
> + break;
> + default:
> + pr_err_once("SEAMCALL failed: CPU %d: leaf %llu, error 0x%llx.\n",
> + cpu, fn, sret);
> + if (out)
> + pr_err_once("additional output: rcx 0x%llx, rdx 0x%llx, r8 0x%llx, r9 0x%llx, r10 0x%llx, r11 0x%llx.\n",
> + out->rcx, out->rdx, out->r8,
> + out->r9, out->r10, out->r11);
> + ret = -EIO;
> + }
> +out:
> + put_cpu();
> + return ret;
> +}

2023-02-14 16:02:21

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH v9 07/18] x86/virt/tdx: Do TDX module per-cpu initialization

On Mon, Feb 13, 2023 at 10:07:30AM -0800, Dave Hansen wrote:
> On 2/13/23 03:59, Kai Huang wrote:
> > @@ -247,8 +395,17 @@ int tdx_enable(void)
> > ret = __tdx_enable();
> > break;
> > case TDX_MODULE_INITIALIZED:
> > - /* Already initialized, great, tell the caller. */
> > - ret = 0;
> > + /*
> > + * The previous call of __tdx_enable() may only have
> > + * initialized part of present cpus during module
> > + * initialization, and new cpus may have become online
> > + * since then.
> > + *
> > + * To make sure all online cpus are TDX-runnable, always
> > + * do per-cpu initialization for all online cpus here
> > + * even the module has been initialized.
> > + */
> > + ret = __tdx_enable_online_cpus();
>
> I'm missing something here. CPUs get initialized through either:
>
> 1. __tdx_enable(), for the CPUs around at the time
> 2. tdx_cpu_online(), for hotplugged CPUs after __tdx_enable()
>
> But, this is a third class. CPUs that came online after #1, but which
> got missed by #2. How can that happen?

offline CPUs, start TDX crap, online CPUs.

2023-02-14 16:07:19

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH v9 07/18] x86/virt/tdx: Do TDX module per-cpu initialization

On Tue, Feb 14, 2023 at 12:02:22AM +0000, Huang, Kai wrote:
> On Mon, 2023-02-13 at 14:43 -0800, Dave Hansen wrote:
> > On 2/13/23 13:19, Huang, Kai wrote:
> > > > On 2/13/23 03:59, Kai Huang wrote:
> > > > > To avoid duplicated code, add a
> > > > > helper to call SEAMCALL on all online cpus one by one but with a skip
> > > > > function to check whether to skip certain cpus, and use that helper to
> > > > > do the per-cpu initialization.
> > > > ...
> > > > > +/*
> > > > > + * Call @func on all online cpus one by one but skip those cpus
> > > > > + * when @skip_func is valid and returns true for them.
> > > > > + */
> > > > > +static int tdx_on_each_cpu_cond(int (*func)(void *), void *func_data,
> > > > > + bool (*skip_func)(int cpu, void *),
> > > > > + void *skip_data)
> > > > I only see one caller of this. Where is the duplicated code?
> > > The other caller is in patch 15 (x86/virt/tdx: Configure global KeyID on all packages).
> > >
> > > I kinda mentioned this in the changelog:
> > >
> > > " Similar to the per-cpu module initialization, a later step to config the key for the global KeyID..."
> > >
> > > If we don't have this helper, then we can end up with having below loop in two functions:
> > >
> > > for_each_online_cpu(cpu) {
> > > if (should_skip(cpu))
> > > continue;
> > >
> > > // call @func on @cpu.
> > > }
> >
> > I don't think saving two lines of actual code is worth the opacity that
> > results from this abstraction.
>
> Alright, thanks for the suggestion. I'll remove this tdx_on_each_cpu_cond() and
> do it directly.
>
> But just checking:
>
> LP.INIT can actually be called in parallel on different cpus (doesn't have to,
> of course), so we can actually just use on_each_cpu_cond() for LP.INIT:
>
> on_each_cpu_cond(should_skip_cpu, smp_func_module_lp_init, NULL, true);
>
> But IIUC Peter doesn't like using IPI and prefers using via work:
>
> https://lore.kernel.org/lkml/[email protected]/
>
> So I used smp_call_on_cpu() here, which only calls @func on one cpu, but not a
> cpumask. For LP.INIT ideally we can have something like:
>
> schedule_on_cpu(struct cpumask *cpus, work_func_t func);
>
> to call @func on a cpu set, but that doesn't exist now, and I don't think it's
> worth to introduce it?

schedule_on_each_cpu() exists and can easily be extended to take a cond
function if you so please.


2023-02-14 17:03:26

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH v9 04/18] x86/virt/tdx: Add skeleton to initialize TDX on demand

On Tue, Feb 14, 2023 at 12:59:11AM +1300, Kai Huang wrote:
> Use a state machine protected by mutex to make sure the initialization
> will only be done once, as tdx_enable() can be called multiple times
> (i.e. KVM module can be reloaded) and be called concurrently by other
> kernel components in the future.

I still object to doing tdx_enable() at kvm module load.

kvm.ko gets loaded unconditionally on boot, even if I then never use
kvm.

This stuff needs to be done when an actual VM is created, not before.

2023-02-14 17:23:57

by Dave Hansen

[permalink] [raw]
Subject: Re: [PATCH v9 04/18] x86/virt/tdx: Add skeleton to initialize TDX on demand

On 2/14/23 04:46, Peter Zijlstra wrote:
> On Tue, Feb 14, 2023 at 12:59:11AM +1300, Kai Huang wrote:
>> Use a state machine protected by mutex to make sure the initialization
>> will only be done once, as tdx_enable() can be called multiple times
>> (i.e. KVM module can be reloaded) and be called concurrently by other
>> kernel components in the future.
> I still object to doing tdx_enable() at kvm module load.
>
> kvm.ko gets loaded unconditionally on boot, even if I then never use
> kvm.
>
> This stuff needs to be done when an actual VM is created, not before.

The actual implementation of this is hidden over in the KVM side of
this. But, tdx_enable() and all of this jazz should not be called on
kvm.ko load. It'll happen when KVM tries to start the first TDX VM.

I think what Kai was thinking of was *this* sequence:

1. insmod kvm.ko
2. Start a TDX guest, tdx_enable() gets run
3. rmmod kvm
4. insmod kvm.ko (again)
5. Start another TDX guest, run tdx_enable() (again)

The rmmod/insmod pair is what triggers the second call of tdx_enable().
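
To make the "called multiple times, initialized only once" behaviour concrete,
here is a simplified sketch of the mutex-protected state machine the changelog
describes (illustrative and condensed, not the exact patch code):

static DEFINE_MUTEX(tdx_module_lock);
static enum tdx_module_status_t tdx_module_status;

int tdx_enable(void)
{
        int ret;

        mutex_lock(&tdx_module_lock);

        switch (tdx_module_status) {
        case TDX_MODULE_UNKNOWN:
                /* The first caller actually initializes the module. */
                ret = init_tdx_module();
                tdx_module_status = ret ? TDX_MODULE_ERROR :
                                          TDX_MODULE_INITIALIZED;
                break;
        case TDX_MODULE_INITIALIZED:
                /* Repeated calls (e.g. after rmmod/insmod of kvm) succeed. */
                ret = 0;
                break;
        default:
                /* A previous attempt failed; don't retry. */
                ret = -EINVAL;
                break;
        }

        mutex_unlock(&tdx_module_lock);
        return ret;
}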

2023-02-14 17:27:51

by Dave Hansen

[permalink] [raw]
Subject: Re: [PATCH v9 05/18] x86/virt/tdx: Add SEAMCALL infrastructure

On 2/14/23 00:57, Huang, Kai wrote:
> Consider this case:
>
> 1) KVM does VMXON for all online cpus (a VM created)
> 2) Another kernel component is calling tdx_enable()
> 3) KVM does VMXOFF for all online cpus (last VM is destroyed)

Doctor, it hurts when I...

Then let's just call tdx_enable() from other kernel components.

Kai, I'm worried that this is, again, making things more complicated
than they have to be.

2023-02-14 21:02:34

by Huang, Kai

[permalink] [raw]
Subject: Re: [PATCH v9 05/18] x86/virt/tdx: Add SEAMCALL infrastructure

On Tue, 2023-02-14 at 13:42 +0100, Peter Zijlstra wrote:
> On Tue, Feb 14, 2023 at 12:59:12AM +1300, Kai Huang wrote:
> > +/*
> > + * Wrapper of __seamcall() to convert SEAMCALL leaf function error code
> > + * to kernel error code. @seamcall_ret and @out contain the SEAMCALL
> > + * leaf function return code and the additional output respectively if
> > + * not NULL.
> > + */
> > +static int __always_unused seamcall(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9,
> > + u64 *seamcall_ret,
> > + struct tdx_module_output *out)
> > +{
> > + int cpu, ret = 0;
> > + u64 sret;
> > +
> > + /* Need a stable CPU id for printing error message */
> > + cpu = get_cpu();
> > +
> > + sret = __seamcall(fn, rcx, rdx, r8, r9, out);
> > +
> > + /* Save SEAMCALL return code if the caller wants it */
> > + if (seamcall_ret)
> > + *seamcall_ret = sret;
> > +
> > + /* SEAMCALL was successful */
> > + if (!sret)
> > + goto out;
>
> I'm thinking you want if (likely(!sret)), here. That whole switch thing
> should end up in cold storage.
>

Thanks Peter. Will do.

> > +
> > + switch (sret) {
> > + case TDX_SEAMCALL_GP:
> > + /*
> > + * tdx_enable() has already checked that BIOS has
> > + * enabled TDX at the very beginning before going
> > + * forward. It's likely a firmware bug if the
> > + * SEAMCALL still caused #GP.
> > + */
> > + pr_err_once("[firmware bug]: TDX is not enabled by BIOS.\n");
> > + ret = -ENODEV;
> > + break;
> > + case TDX_SEAMCALL_VMFAILINVALID:
> > + pr_err_once("TDX module is not loaded.\n");
> > + ret = -ENODEV;
> > + break;
> > + case TDX_SEAMCALL_UD:
> > + pr_err_once("SEAMCALL failed: CPU %d is not in VMX operation.\n",
> > + cpu);
> > + ret = -EINVAL;
> > + break;
> > + default:
> > + pr_err_once("SEAMCALL failed: CPU %d: leaf %llu, error 0x%llx.\n",
> > + cpu, fn, sret);
> > + if (out)
> > + pr_err_once("additional output: rcx 0x%llx, rdx 0x%llx, r8 0x%llx, r9 0x%llx, r10 0x%llx, r11 0x%llx.\n",
> > + out->rcx, out->rdx, out->r8,
> > + out->r9, out->r10, out->r11);
> > + ret = -EIO;
> > + }
> > +out:
> > + put_cpu();
> > + return ret;
> > +}
>

2023-02-14 21:08:45

by Huang, Kai

[permalink] [raw]
Subject: Re: [PATCH v9 04/18] x86/virt/tdx: Add skeleton to initialize TDX on demand

On Tue, 2023-02-14 at 09:23 -0800, Dave Hansen wrote:
> On 2/14/23 04:46, Peter Zijlstra wrote:
> > On Tue, Feb 14, 2023 at 12:59:11AM +1300, Kai Huang wrote:
> > > Use a state machine protected by mutex to make sure the initialization
> > > will only be done once, as tdx_enable() can be called multiple times
> > > (i.e. KVM module can be reloaded) and be called concurrently by other
> > > kernel components in the future.
> > I still object to doing tdx_enable() at kvm module load.
> >
> > kvm.ko gets loaded unconditionally on boot, even if I then never use
> > kvm.
> >
> > This stuff needs to be done when an actual VM is created, not before.
>
> The actual implementation of this is hidden over in the KVM side of
> this. But, tdx_enable() and all of this jazz should not be called on
> kvm.ko load. It'll happen when KVM tries to start the first TDX VM.
>
> I think what Kai was thinking of was *this* sequence:
>
> 1. insmod kvm.ko
> 2. Start a TDX guest, tdx_enable() gets run
> 3. rmmod kvm
> 4. insmod kvm.ko (again)
> 5. Start another TDX guest, run tdx_enable() (again)
>
> The rmmod/insmod pair is what triggers the second call of tdx_enable().

Yes. The point is tdx_enable() can get called multiple times.

We can discuss more about when to enable TDX on the KVM side, and I don't want
to speak for KVM maintainers, but this is actually not that relevant to this
series.

In the changelog, I just said: 

"...initialize TDX until there is a real need (e.g when requested by KVM)".  

I didn't say exactly when KVM will call this.

2023-02-14 22:18:05

by Huang, Kai

[permalink] [raw]
Subject: Re: [PATCH v9 05/18] x86/virt/tdx: Add SEAMCALL infrastructure

On Tue, 2023-02-14 at 09:27 -0800, Dave Hansen wrote:
> On 2/14/23 00:57, Huang, Kai wrote:
> > Consider this case:
> >
> > 1) KVM does VMXON for all online cpus (a VM created)
> > 2) Another kernel component is calling tdx_enable()
> > 3) KVM does VMXOFF for all online cpus (last VM is destroyed)
>
> Doctor, it hurts when I...
>
> Then let's just call tdx_enable() from other kernel components.
>
> Kai, I'm worried that this is, again, making things more complicated
> than they have to be.

The handling of #UD/#GP itself only takes ~10 LoC. All that complicated logic
comes from the fact that we depend on the caller of TDX to ensure VMXON has
been done.

AFAICT we have the below options:

1) Don't support VMXON in the core-kernel, then:
1.a Handle #UD/#GP in assembly as shown in this patch; or
1.b Disable interrupts from the CR4.VMXE check until the SEAMCALL is done in
seamcall() (a rough sketch of this variant follows after this list).

2) Support VMXON in the core-kernel (by moving VMXON from KVM to core x86),
then we get rid of all of the above. We explicitly do VMXON (if it hasn't been
done) inside tdx_enable() to make sure SEAMCALL doesn't cause #UD. No #UD/#GP
handling is needed in assembly. No interrupt disabling in seamcall().

(Well, #GP can theoretically still happen if the BIOS is buggy; we can keep the
assembly code change if it's better -- just ~10 LoC.)
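
To make option 1.b concrete, here is a rough sketch (illustrative only; the
error mapping is simplified, and __seamcall() is the low-level helper from this
series):

static int seamcall_irqoff(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9,
                           u64 *seamcall_ret, struct tdx_module_output *out)
{
        unsigned long flags;
        u64 sret;

        /*
         * Keep the CR4.VMXE check and the SEAMCALL in one IRQ-off
         * region so a remote VMXOFF IPI cannot clear CR4.VMXE in
         * between and turn the SEAMCALL into a #UD.
         */
        local_irq_save(flags);

        if (!(cr4_read_shadow() & X86_CR4_VMXE)) {
                local_irq_restore(flags);
                return -EINVAL; /* VMXON hasn't been done on this cpu */
        }

        sret = __seamcall(fn, rcx, rdx, r8, r9, out);

        local_irq_restore(flags);

        if (seamcall_ret)
                *seamcall_ret = sret;

        return sret ? -EIO : 0;
}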

Supporting VMXON in the core-kernel also has other advantages:

1) We can get rid of the logic to always try to do LP.INIT for all online cpus.
LP.INIT can just be done: a) during module initialization; b) in the TDX CPU
hotplug callback.

2) The TDX CPU hotplug callback can just do VMXON and LP.INIT. No CR4.VMXE
check is needed. And it can be put before KVM's (all TDX users') hotplug
callback.

The downside of supporting VMXON in the core-kernel:

1) Need patch(es) to change KVM, so those patches need to be reviewed by KVM
maintainers.
2) No other cons.

Logically, supporting VMXON in the core-kernel makes things simpler. And in
the long term, I _think_ we will need it to support future TDX features.

The effort to support VMXON in the core-kernel would be ~300 LOC. I can already
utilize some old patches, but I need to polish those patches and do some
testing.

What's your thinking?

2023-02-14 22:53:42

by Huang, Kai

[permalink] [raw]
Subject: Re: [PATCH v9 07/18] x86/virt/tdx: Do TDX module per-cpu initialization


> >
> > But just checking:
> >
> > LP.INIT can actually be called in parallel on different cpus (doesn't have to,
> > of course), so we can actually just use on_each_cpu_cond() for LP.INIT:
> >
> > on_each_cpu_cond(should_skip_cpu, smp_func_module_lp_init, NULL, true);
> >
> > But IIUC Peter doesn't like using IPI and prefers using via work:
> >
> > https://lore.kernel.org/lkml/[email protected]/
> >
> > So I used smp_call_on_cpu() here, which only calls @func on one cpu, but not a
> > cpumask. For LP.INIT ideally we can have something like:
> >
> > schedule_on_cpu(struct cpumask *cpus, work_func_t func);
> >
> > to call @func on a cpu set, but that doesn't exist now, and I don't think it's
> > worth to introduce it?
>
> schedule_on_each_cpu() exists and can easily be extended to take a cond
> function if you so please.
>

Sure. I just tried to do. There are two minor things:

1) should I just use smp_cond_func_t directly as the cond function?
2) schedule_on_each_cpu() takes cpus_read_lock() internally. However in my
case, tdx_enable() already takes that so I need a _locked_ version.

How does below look like? (Not tested)

+/**
+ * schedule_on_each_cpu_cond_locked - execute a function synchronously
+ * on each online CPU for which the
+ * condition function returns positive
+ * @func: the function to call
+ * @cond_func: the condition function to call
+ * @cond_data: the data passed to the condition function
+ *
+ * schedule_on_each_cpu_cond_locked() executes @func on each online CPU
+ * when @cond_func returns positive for that cpu, using the system
+ * workqueue and blocks until all CPUs have completed.
+ *
+ * schedule_on_each_cpu_cond_locked() doesn't hold read lock of CPU
+ * hotplug lock but depend on the caller to do.
+ *
+ * schedule_on_each_cpu_cond_locked() is very slow.
+ *
+ * Return:
+ * 0 on success, -errno on failure.
+ */
+int schedule_on_each_cpu_cond_locked(work_func_t func,
+ smp_cond_func_t cond_func,
+ void *cond_data)
+{
+ int cpu;
+ struct work_struct __percpu *works;
+
+ works = alloc_percpu(struct work_struct);
+ if (!works)
+ return -ENOMEM;
+
+ for_each_online_cpu(cpu) {
+ struct work_struct *work = per_cpu_ptr(works, cpu);
+
+ if (cond_func && !cond_func(cpu, cond_data))
+ continue;
+
+ INIT_WORK(work, func);
+ schedule_work_on(cpu, work);
+ }
+
+ for_each_online_cpu(cpu)
+ flush_work(per_cpu_ptr(works, cpu));
+
+ free_percpu(works);
+ return 0;
+}
+
+/**
+ * schedule_on_each_cpu_cond - execute a function synchronously on each
+ * online CPU for which the condition
+ * function returns positive
+ * @func: the function to call
+ * @cond_func: the condition function to call
+ * @cond_data: the data passed to the condition function
+ *
+ * schedule_on_each_cpu_cond() executes @func on each online CPU
+ * when @cond_func returns positive for that cpu, using the system
+ * workqueue and blocks until all CPUs have completed.
+ *
+ * schedule_on_each_cpu_cond() is very slow.
+ *
+ * Return:
+ * 0 on success, -errno on failure.
+ */
+int schedule_on_each_cpu_cond(work_func_t func,
+ smp_cond_func_t cond_func,
+ void *cond_data)
+{
+ int ret;
+
+ cpus_read_lock();
+
+ ret = schedule_on_each_cpu_cond_locked(func, cond_func, cond_data);
+
+ cpus_read_unlock();
+
+ return ret;
+}

2023-02-15 09:16:49

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH v9 07/18] x86/virt/tdx: Do TDX module per-cpu initialization

On Tue, Feb 14, 2023 at 10:53:26PM +0000, Huang, Kai wrote:

> Sure. I just tried to do. There are two minor things:
>
> 1) should I just use smp_cond_func_t directly as the cond function?

Yeah, might as well I suppose...

> 2) schedule_on_each_cpu() takes cpus_read_lock() internally. However in my
> case, tdx_enable() already takes that so I need a _locked_ version.
>
> How does below look like? (Not tested)
>
> +/**
> + * schedule_on_each_cpu_cond_locked - execute a function synchronously
> + * on each online CPU for which the
> + * condition function returns positive
> + * @func: the function to call
> + * @cond_func: the condition function to call
> + * @cond_data: the data passed to the condition function
> + *
> + * schedule_on_each_cpu_cond_locked() executes @func on each online CPU
> + * when @cond_func returns positive for that cpu, using the system
> + * workqueue and blocks until all CPUs have completed.
> + *
> + * schedule_on_each_cpu_cond_locked() doesn't hold read lock of CPU
> + * hotplug lock but depend on the caller to do.
> + *
> + * schedule_on_each_cpu_cond_locked() is very slow.
> + *
> + * Return:
> + * 0 on success, -errno on failure.
> + */
> +int schedule_on_each_cpu_cond_locked(work_func_t func,
> + smp_cond_func_t cond_func,
> + void *cond_data)
> +{
> + int cpu;
> + struct work_struct __percpu *works;
> +
> + works = alloc_percpu(struct work_struct);
> + if (!works)
> + return -ENOMEM;
> +
> + for_each_online_cpu(cpu) {
> + struct work_struct *work = per_cpu_ptr(works, cpu);
> +
> + if (cond_func && !cond_func(cpu, cond_data))
> + continue;
> +
> + INIT_WORK(work, func);
> + schedule_work_on(cpu, work);
> + }
> +
> + for_each_online_cpu(cpu)

I think you need to skip some flushes too. Given we skip setting
work->func, this will go WARN, see __flush_work().

> + flush_work(per_cpu_ptr(works, cpu));
> +
> + free_percpu(works);
> + return 0;
> +}
> +
> +/**
> + * schedule_on_each_cpu_cond - execute a function synchronously on each
> + * online CPU for which the condition
> + * function returns positive
> + * @func: the function to call
> + * @cond_func: the condition function to call
> + * @cond_data: the data passed to the condition function
> + *
> + * schedule_on_each_cpu_cond() executes @func on each online CPU
> + * when @cond_func returns positive for that cpu, using the system
> + * workqueue and blocks until all CPUs have completed.
> + *
> + * schedule_on_each_cpu_cond() is very slow.
> + *
> + * Return:
> + * 0 on success, -errno on failure.
> + */
> +int schedule_on_each_cpu_cond(work_func_t func,
> + smp_cond_func_t cond_func,
> + void *cond_data)
> +{
> + int ret;
> +
> + cpus_read_lock();
> +
> + ret = schedule_on_each_cpu_cond_locked(func, cond_func, cond_data);
> +
> + cpus_read_unlock();
> +
> + return ret;
> +}

Also, re-implement schedule_on_each_cpu() using the above to save a
bunch of duplication:

int schedule_on_each_cpu(work_func_t func)
{
return schedule_on_each_cpu_cond(func, NULL, NULL);
}


That said, I find it jarring that the schedule_on*() family doesn't have
a void* argument to the function, like the smp_*() family has. So how
about something like the below (equally untested). It preserves the
current semantics, but allows a work function to cast to schedule_work
and access ->info if it so desires.


diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index a0143dd24430..5e97111322b2 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -103,6 +103,11 @@ struct work_struct {
#endif
};

+struct schedule_work {
+ struct work_struct work;
+ void *info;
+};
+
#define WORK_DATA_INIT() ATOMIC_LONG_INIT((unsigned long)WORK_STRUCT_NO_POOL)
#define WORK_DATA_STATIC_INIT() \
ATOMIC_LONG_INIT((unsigned long)(WORK_STRUCT_NO_POOL | WORK_STRUCT_STATIC))
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 07895deca271..c73bb8860bbc 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -51,6 +51,7 @@
#include <linux/sched/isolation.h>
#include <linux/nmi.h>
#include <linux/kvm_para.h>
+#include <linux/smp.h>

#include "workqueue_internal.h"

@@ -3302,43 +3303,64 @@ bool cancel_delayed_work_sync(struct delayed_work *dwork)
}
EXPORT_SYMBOL(cancel_delayed_work_sync);

-/**
- * schedule_on_each_cpu - execute a function synchronously on each online CPU
- * @func: the function to call
- *
- * schedule_on_each_cpu() executes @func on each online CPU using the
- * system workqueue and blocks until all CPUs have completed.
- * schedule_on_each_cpu() is very slow.
- *
- * Return:
- * 0 on success, -errno on failure.
- */
-int schedule_on_each_cpu(work_func_t func)
+int schedule_on_each_cpu_cond_locked(work_func_t func, smp_cond_func_t cond_func, void *info)
{
+ struct schedule_work __percpu *works;
int cpu;
- struct work_struct __percpu *works;

- works = alloc_percpu(struct work_struct);
+ works = alloc_percpu(struct schedule_work);
if (!works)
return -ENOMEM;

- cpus_read_lock();
-
for_each_online_cpu(cpu) {
- struct work_struct *work = per_cpu_ptr(works, cpu);
+ struct schedule_work *work = per_cpu_ptr(works, cpu);

- INIT_WORK(work, func);
- schedule_work_on(cpu, work);
+ if (cond_func && !cond_func(cpu, info))
+ continue;
+
+ INIT_WORK(&work->work, func);
+ work->info = info;
+ schedule_work_on(cpu, &work->work);
}

- for_each_online_cpu(cpu)
- flush_work(per_cpu_ptr(works, cpu));
+ for_each_online_cpu(cpu) {
+ struct schedule_work *work = per_cpu_ptr(works, cpu);
+
+ if (work->work.func)
+ flush_work(&work->work);
+ }

- cpus_read_unlock();
free_percpu(works);
return 0;
}

+int schedule_on_each_cpu_cond(work_func_t func, smp_cond_func_t cond_func, void *info)
+{
+ int ret;
+
+ cpus_read_lock();
+ ret = schedule_on_each_cpu_cond_locked(func, cond_func, info);
+ cpus_read_unlock();
+
+ return ret;
+}
+
+/**
+ * schedule_on_each_cpu - execute a function synchronously on each online CPU
+ * @func: the function to call
+ *
+ * schedule_on_each_cpu() executes @func on each online CPU using the
+ * system workqueue and blocks until all CPUs have completed.
+ * schedule_on_each_cpu() is very slow.
+ *
+ * Return:
+ * 0 on success, -errno on failure.
+ */
+int schedule_on_each_cpu(work_func_t func)
+{
+ return schedule_on_each_cpu_cond(func, NULL, NULL);
+}
+
/**
* execute_in_process_context - reliably execute the routine with user context
* @fn: the function to execute

2023-02-15 09:47:05

by Huang, Kai

[permalink] [raw]
Subject: Re: [PATCH v9 07/18] x86/virt/tdx: Do TDX module per-cpu initialization


> > +
> > + for_each_online_cpu(cpu)
>
> I think you need to skip some flushes too. Given we skip setting
> work->func, this will go WARN, see __flush_work().

I missed that. Thanks for pointing it out.

>
> > + flush_work(per_cpu_ptr(works, cpu));
> > +
>

[...]

> That said, I find it jarring that the schedule_on*() family doesn't have
> a void* argument to the function, like the smp_*() family has. So how
> about something like the below (equally untested). It preserves the
> current semantics, but allows a work function to cast to schedule_work
> and access ->info if it so desires.

Yes agreed. Your code below looks indeed better. Thanks!

Would you mind sending me a patch so I can include it in this series, or would
you mind getting it merged into tip/x86/tdx (or another branch, I am not sure)
so I can rebase?

Appreciate your help!

>
>
> diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
> index a0143dd24430..5e97111322b2 100644
> --- a/include/linux/workqueue.h
> +++ b/include/linux/workqueue.h
> @@ -103,6 +103,11 @@ struct work_struct {
> #endif
> };
>
> +struct schedule_work {
> + struct work_struct work;
> + void *info;
> +};
> +
> #define WORK_DATA_INIT() ATOMIC_LONG_INIT((unsigned long)WORK_STRUCT_NO_POOL)
> #define WORK_DATA_STATIC_INIT() \
> ATOMIC_LONG_INIT((unsigned long)(WORK_STRUCT_NO_POOL | WORK_STRUCT_STATIC))
> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> index 07895deca271..c73bb8860bbc 100644
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -51,6 +51,7 @@
> #include <linux/sched/isolation.h>
> #include <linux/nmi.h>
> #include <linux/kvm_para.h>
> +#include <linux/smp.h>
>
> #include "workqueue_internal.h"
>
> @@ -3302,43 +3303,64 @@ bool cancel_delayed_work_sync(struct delayed_work *dwork)
> }
> EXPORT_SYMBOL(cancel_delayed_work_sync);
>
> -/**
> - * schedule_on_each_cpu - execute a function synchronously on each online CPU
> - * @func: the function to call
> - *
> - * schedule_on_each_cpu() executes @func on each online CPU using the
> - * system workqueue and blocks until all CPUs have completed.
> - * schedule_on_each_cpu() is very slow.
> - *
> - * Return:
> - * 0 on success, -errno on failure.
> - */
> -int schedule_on_each_cpu(work_func_t func)
> +int schedule_on_each_cpu_cond_locked(work_func_t func, smp_cond_func_t cond_func, void *info)
> {
> + struct schedule_work __percpu *works;
> int cpu;
> - struct work_struct __percpu *works;
>
> - works = alloc_percpu(struct work_struct);
> + works = alloc_percpu(struct schedule_work);
> if (!works)
> return -ENOMEM;
>
> - cpus_read_lock();
> -
> for_each_online_cpu(cpu) {
> - struct work_struct *work = per_cpu_ptr(works, cpu);
> + struct schedule_work *work = per_cpu_ptr(works, cpu);
>
> - INIT_WORK(work, func);
> - schedule_work_on(cpu, work);
> + if (cond_func && !cond_func(cpu, info))
> + continue;
> +
> + INIT_WORK(&work->work, func);
> + work->info = info;
> + schedule_work_on(cpu, &work->work);
> }
>
> - for_each_online_cpu(cpu)
> - flush_work(per_cpu_ptr(works, cpu));
> + for_each_online_cpu(cpu) {
> + struct schedule_work *work = per_cpu_ptr(works, cpu);
> +
> + if (work->work.func)
> + flush_work(&work->work);
> + }
>
> - cpus_read_unlock();
> free_percpu(works);
> return 0;
> }
>
> +int schedule_on_each_cpu_cond(work_func_t func, smp_cond_func_t cond_func, void *info)
> +{
> + int ret;
> +
> + cpus_read_lock();
> + ret = schedule_on_each_cpu_cond_locked(func, cond_func, info);
> + cpus_read_unlock();
> +
> + return ret;
> +}
> +
> +/**
> + * schedule_on_each_cpu - execute a function synchronously on each online CPU
> + * @func: the function to call
> + *
> + * schedule_on_each_cpu() executes @func on each online CPU using the
> + * system workqueue and blocks until all CPUs have completed.
> + * schedule_on_each_cpu() is very slow.
> + *
> + * Return:
> + * 0 on success, -errno on failure.
> + */
> +int schedule_on_each_cpu(work_func_t func)
> +{
> + return schedule_on_each_cpu_cond(func, NULL, NULL);
> +}
> +
> /**
> * execute_in_process_context - reliably execute the routine with user context
> * @fn: the function to execute

2023-02-15 13:25:37

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH v9 07/18] x86/virt/tdx: Do TDX module per-cpu initialization

On Wed, Feb 15, 2023 at 09:46:10AM +0000, Huang, Kai wrote:
> Yes agreed. Your code below looks indeed better. Thanks!
>
> Would you mind sending me a patch so I can include it in this series, or would
> you mind getting it merged into tip/x86/tdx (or another branch, I am not sure)
> so I can rebase?

Just take the patch, add your comments and test it.. enjoy! :-)

2023-02-15 21:38:13

by Huang, Kai

[permalink] [raw]
Subject: Re: [PATCH v9 07/18] x86/virt/tdx: Do TDX module per-cpu initialization

On Wed, 2023-02-15 at 14:25 +0100, Peter Zijlstra wrote:
> On Wed, Feb 15, 2023 at 09:46:10AM +0000, Huang, Kai wrote:
> > Yes agreed. Your code below looks indeed better. Thanks!
> >
> > Would you mind sending me a patch so I can include it in this series, or would
> > you mind getting it merged into tip/x86/tdx (or another branch, I am not sure)
> > so I can rebase?
>
> Just take the patch, add your comments and test it.. enjoy! :-)

Thank you! I'll at least add your Suggested-by :)

2023-03-06 14:31:36

by Huang, Kai

[permalink] [raw]
Subject: Re: [PATCH v9 07/18] x86/virt/tdx: Do TDX module per-cpu initialization

On Wed, 2023-02-15 at 21:37 +0000, Huang, Kai wrote:
> On Wed, 2023-02-15 at 14:25 +0100, Peter Zijlstra wrote:
> > On Wed, Feb 15, 2023 at 09:46:10AM +0000, Huang, Kai wrote:
> > > Yes agreed. Your code below looks indeed better. Thanks!
> > >
> > > Would you mind sending me a patch so I can include it in this series, or would
> > > you mind getting it merged into tip/x86/tdx (or another branch, I am not sure)
> > > so I can rebase?
> >
> > Just take the patch, add your comments and test it.. enjoy! :-)
>
> Thank you! I'll at least add your Suggested-by :)

Hi Peter,

After discussing with Kirill, I changed how the per-cpu initialization is
handled, and in the new version (v10, just sent) I don't need
schedule_on_each_cpu_cond_locked() anymore, because I essentially moved such
handling out of the TDX host series to KVM. I'll use your patch if the review
finds we still need to handle it. Thanks!