Currently kexec() support and TDX host are mutually exclusive in the
Kconfig. This series adds TDX host kexec support so that they can
work together and be enabled at the same time in the Kconfig.
v3 -> v4:
- Updated changelog and comments of patch 1/2 per comments from
Kirill and Tom (see specific patch for details).
v3: https://lore.kernel.org/linux-kernel/[email protected]/
v2 -> v3:
- Change to only do WBINVD for bare-metal, as Kirill/Tom pointed out
WBINVD in TDX guests and SEV-ES/SEV-SNP guests triggers #VE.
v2: https://lore.kernel.org/linux-kernel/[email protected]/
v1 -> v2:
- Do unconditional WBINVD during kexec() -- Boris
- Change to cover crash kexec() -- Rick
- Add a new patch (the last one) with a mechanism to reset all TDX private
pages, needed to cover crash kexec().
- Other code improvements -- Dave
- Rebase to latest tip/master.
v1: https://lore.kernel.org/linux-kernel/[email protected]/
Hi Dave, Kirill, Sean, Paolo,
The last patch provides a new mechanism to handle all other TDX private
pages when they become possible to exist, e.g., when KVM is ready to run
TDX guests. It covers both normal kexec and crash kexec. Strictly
speaking, it is not mandatory for this series, though. I would
appreciate it if you could help review.
Hi Tom, Ashish,
This series touches AMD SME code too, and I don't have an AMD machine to
test on. I would appreciate it if you could help review and/or test.
Kai Huang (5):
x86/kexec: do unconditional WBINVD for bare-metal in stop_this_cpu()
x86/kexec: do unconditional WBINVD for bare-metal in relocate_kernel()
x86/kexec: Reset TDX private memory on platforms with TDX erratum
x86/virt/tdx: Remove the !KEXEC_CORE dependency
x86/virt/tdx: Add TDX memory reset notifier to reset other private
pages
arch/x86/Kconfig | 1 -
arch/x86/include/asm/kexec.h | 2 +-
arch/x86/include/asm/tdx.h | 16 +++++
arch/x86/kernel/machine_kexec_64.c | 29 ++++++--
arch/x86/kernel/process.c | 19 +++--
arch/x86/kernel/relocate_kernel_64.S | 19 +++--
arch/x86/virt/vmx/tdx/tdx.c | 100 +++++++++++++++++++++++++++
7 files changed, 165 insertions(+), 21 deletions(-)
base-commit: 1e0fd81e4f32a8a383c05d27a672d742b45c1088
--
2.43.2
TL;DR:
Change to do unconditional WBINVD in stop_this_cpu() for bare metal
to cover kexec support for both AMD SME and Intel TDX. There _was_ an
issue that previously prevented doing so, but it has since been fixed.
Long version:
Both AMD SME and Intel TDX can leave caches in an incoherent state due
to memory encryption, which can lead to silent memory corruption during
kexec. To address this issue, it is necessary to flush the caches
before jumping to the second kernel.
Currently, the kernel only performs WBINVD in stop_this_cpu() when SME
is supported by hardware. To support TDX, instead of adding one more
vendor-specific check, it is proposed to perform unconditional WBINVD.
Kexec() is a slow path, and the additional WBINVD is acceptable for the
sake of simplicity and maintainability.
It is important to note that WBINVD should only be done for bare-metal
scenarios, as TDX guests and SEV-ES/SEV-SNP guests may not handle the
unexpected exception (#VE or #VC) caused by WBINVD.
Note:
Historically, there _was_ an issue that prevented doing unconditional
WBINVD, but it has since been fixed.
When SME kexec() support was initially added in commit
bba4ed011a52: ("x86/mm, kexec: Allow kexec to be used with SME")
WBINVD was done unconditionally. However, issues were later reported
where different Intel systems would hang or reset due to that commit.
To address this, a later commit
f23d74f6c66c: ("x86/mm: Rework wbinvd, hlt operation in stop_this_cpu()")
changed the code to only do WBINVD when the hardware supports SME.
While this commit made the reported issues go away, it didn't pinpoint
the root cause. It also missed a corner case[*], which eventually led
to revealing the root cause and the final fix by commit
1f5e7eb7868e: ("x86/smp: Make stop_other_cpus() more robust")
See [1][2] for more information.
Further testing of unconditional WBINVD on top of the above fix, on the
problematic machines where the issues were originally reported,
confirmed the issues couldn't be reproduced.
See [3][4] for more information.
Therefore, it is safe to do unconditional WBINVD for bare-metal now.
[*] The commit didn't check whether the CPUID leaf is available or not.
Querying an unsupported CPUID leaf on Intel returns garbage, resulting
in an unintended WBINVD which caused the issue (and the follow-up
analysis revealed the final root cause). The corner case was
independently fixed by commit
9b040453d444: ("x86/smp: Dont access non-existing CPUID leaf")
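For reference, the fixed check boils down to validating the maximum
extended leaf before querying it (a sketch of the resulting code, not
the verbatim commit):

	/*
	 * cpuid_eax(0x8000001f) returns garbage when the CPU's maximum
	 * extended leaf is below 0x8000001f, so check the maximum leaf
	 * first, then test the SME bit before doing WBINVD.
	 */
	if (c->extended_cpuid_level >= 0x8000001f &&
	    (cpuid_eax(0x8000001f) & BIT(0)))
		native_wbinvd();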
[1]: https://lore.kernel.org/lkml/CALu+AoQKmeixJdkO07t7BtttN7v3RM4_aBKi642bQ3fTBbSAVg@mail.gmail.com/T/#m300f3f9790850b5daa20a71abcc200ae8d94a12a
[2]: https://lore.kernel.org/lkml/CALu+AoQKmeixJdkO07t7BtttN7v3RM4_aBKi642bQ3fTBbSAVg@mail.gmail.com/T/#ma7263a7765483db0dabdeef62a1110940e634846
[3]: https://lore.kernel.org/lkml/CALu+AoQKmeixJdkO07t7BtttN7v3RM4_aBKi642bQ3fTBbSAVg@mail.gmail.com/T/#mc043191f2ff860d649c8466775dc61ac1e0ae320
[4]: https://lore.kernel.org/lkml/CALu+AoQKmeixJdkO07t7BtttN7v3RM4_aBKi642bQ3fTBbSAVg@mail.gmail.com/T/#md23f1a8f6afcc59fa2b0ac1967f18e418e24347c
Signed-off-by: Kai Huang <[email protected]>
Suggested-by: Borislav Petkov <[email protected]>
Cc: Tom Lendacky <[email protected]>
Cc: Dave Young <[email protected]>
---
v3 -> v4:
- Update part of changelog based on Kirill's version (with minor tweak).
- Use "exception (#VE or #VC)" for TDX and SEV-ES/SEV-SNP in changelog
and comments. (Kirill, Tom)
- Point out "WBINVD is not necessary for TDX and SEV-ES/SEV-SNP guests"
in the comment. (Tom)
v2 -> v3:
- Change to only do WBINVD for bare metal
---
arch/x86/kernel/process.c | 19 +++++++++----------
1 file changed, 9 insertions(+), 10 deletions(-)
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index b8441147eb5e..d3c904bfe874 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -813,18 +813,17 @@ void __noreturn stop_this_cpu(void *dummy)
mcheck_cpu_clear(c);
/*
- * Use wbinvd on processors that support SME. This provides support
- * for performing a successful kexec when going from SME inactive
- * to SME active (or vice-versa). The cache must be cleared so that
- * if there are entries with the same physical address, both with and
- * without the encryption bit, they don't race each other when flushed
- * and potentially end up with the wrong entry being committed to
- * memory.
+ * The kernel could leave caches in incoherent state on SME/TDX
+ * capable platforms. Flush cache to avoid silent memory
+ * corruption for these platforms.
*
- * Test the CPUID bit directly because the machine might've cleared
- * X86_FEATURE_SME due to cmdline options.
+ * stop_this_cpu() isn't a fast path, just do WBINVD for bare-metal
+ * to cover both SME and TDX. It isn't necessary to perform WBINVD
+ * in a guest and performing one could result in an exception (#VE
+ * or #VC) for a TDX or SEV-ES/SEV-SNP guest that the guest may
+ * not be able to handle (e.g., TDX guest panics if it sees #VE).
*/
- if (c->extended_cpuid_level >= 0x8000001f && (cpuid_eax(0x8000001f) & BIT(0)))
+ if (!boot_cpu_has(X86_FEATURE_HYPERVISOR))
native_wbinvd();
/*
--
2.43.2
Both SME and TDX can leave caches in incoherent state due to memory
encryption. During kexec, the caches must be flushed before jumping to
the second kernel to avoid silent memory corruption to the second kernel.
During kexec, the WBINVD in stop_this_cpu() flushes caches for all
remote cpus when they are being stopped. For SME, the WBINVD in
relocate_kernel() flushes the cache for the last running cpu (which is
executing the kexec).
Similarly, to support kexec for TDX host, after stopping all remote cpus
with cache flushed, the kernel needs to flush cache for the last running
cpu.
Use the existing WBINVD in relocate_kernel() to cover TDX host as well.
However, instead of sprinkling around vendor-specific checks, just do
unconditional WBINVD to cover both SME and TDX. Kexec is not a fast path
so having one additional WBINVD for platforms w/o SME/TDX is acceptable.
But only do WBINVD for bare-metal because TDX guests and SEV-ES/SEV-SNP
guests will get unexpected (and yet unnecessary) exception (#VE or #VC)
which the kernel is unable to handle at this stage.
Signed-off-by: Kai Huang <[email protected]>
Reviewed-by: Kirill A. Shutemov <[email protected]>
Cc: Tom Lendacky <[email protected]>
Cc: Dave Young <[email protected]>
---
v3 -> v4:
- Use "exception (#VE or #VC)" for TDX and SEV-ES/SEV-SNP in changelog
and comments. (Kirill, Tom)
- "Save the bare_metal" -> "Save the bare_metal flag" (Tom)
- Point out "WBINVD is not necessary for TDX and SEV-ES/SEV-SNP guests"
in the comment. (Tom)
v2 -> v3:
- Change to only do WBINVD for bare metal
---
arch/x86/include/asm/kexec.h | 2 +-
arch/x86/kernel/machine_kexec_64.c | 2 +-
arch/x86/kernel/relocate_kernel_64.S | 19 +++++++++++++++----
3 files changed, 17 insertions(+), 6 deletions(-)
diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
index 91ca9a9ee3a2..455f8a6c66a9 100644
--- a/arch/x86/include/asm/kexec.h
+++ b/arch/x86/include/asm/kexec.h
@@ -128,7 +128,7 @@ relocate_kernel(unsigned long indirection_page,
unsigned long page_list,
unsigned long start_address,
unsigned int preserve_context,
- unsigned int host_mem_enc_active);
+ unsigned int bare_metal);
#endif
#define ARCH_HAS_KIMAGE_ARCH
diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
index b180d8e497c3..a454477b7b4c 100644
--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -358,7 +358,7 @@ void machine_kexec(struct kimage *image)
(unsigned long)page_list,
image->start,
image->preserve_context,
- cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT));
+ !boot_cpu_has(X86_FEATURE_HYPERVISOR));
#ifdef CONFIG_KEXEC_JUMP
if (image->preserve_context)
diff --git a/arch/x86/kernel/relocate_kernel_64.S b/arch/x86/kernel/relocate_kernel_64.S
index 56cab1bb25f5..6e1590b24e41 100644
--- a/arch/x86/kernel/relocate_kernel_64.S
+++ b/arch/x86/kernel/relocate_kernel_64.S
@@ -50,7 +50,7 @@ SYM_CODE_START_NOALIGN(relocate_kernel)
* %rsi page_list
* %rdx start address
* %rcx preserve_context
- * %r8 host_mem_enc_active
+ * %r8 bare_metal
*/
/* Save the CPU context, used for jumping back */
@@ -78,7 +78,7 @@ SYM_CODE_START_NOALIGN(relocate_kernel)
pushq $0
popfq
- /* Save SME active flag */
+ /* Save the bare_metal flag */
movq %r8, %r12
/*
@@ -160,9 +160,20 @@ SYM_CODE_START_LOCAL_NOALIGN(identity_mapped)
movq %r9, %cr3
/*
- * If SME is active, there could be old encrypted cache line
+ * The kernel could leave caches in incoherent state on SME/TDX
+ * capable platforms. Just do unconditional WBINVD to avoid
+ * silent memory corruption to the new kernel for these platforms.
+ *
+ * For SME, need to flush cache here before copying the kernel.
+ * When it is active, there could be old encrypted cache line
* entries that will conflict with the now unencrypted memory
- * used by kexec. Flush the caches before copying the kernel.
+ * used by kexec.
+ *
+ * Do WBINVD for bare-metal only to cover both SME and TDX. It
+ * isn't necessary to perform a WBINVD in a guest and performing
+ * one could result in an exception (#VE or #VC) for a TDX or
+ * SEV-ES/SEV-SNP guest that can crash the guest since, at this
+ * stage, the kernel has torn down the IDT.
*/
testq %r12, %r12
jz 1f
--
2.43.2
TL;DR:
On the platforms with TDX "partial write machine check" erratum, during
kexec, convert TDX private memory back to normal before jumping to the
second kernel to avoid the second kernel seeing potential unexpected
machine check.
Long version:
The first few generations of TDX hardware have an erratum. A partial
write to a TDX private memory cacheline will silently "poison" the
line. Subsequent reads will consume the poison and generate a machine
check. According to the TDX hardware spec, neither of these things
should have happened.
== Background ==
Virtually all kernel memory access operations happen in full
cachelines. In practice, writing a "byte" of memory usually reads a 64
byte cacheline of memory, modifies it, then writes the whole line back.
Those operations do not trigger this problem.
This problem is triggered by "partial" writes, where a write transaction
of less than a cacheline lands at the memory controller. The CPU does
these via non-temporal write instructions (like MOVNTI), or through
UC/WC memory mappings. The issue can also be triggered away from the
CPU by devices doing partial writes via DMA.
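As a rough illustration (hypothetical snippet, not code from this
series), a sub-cacheline non-temporal store is enough to generate such
a partial write transaction:

	/*
	 * Illustrative only: a 4-byte MOVNTI store bypasses the cache
	 * and can reach the memory controller as a sub-cacheline write,
	 * i.e. the kind of "partial write" this erratum is about.
	 */
	static inline void partial_write_example(u32 *dst, u32 val)
	{
		asm volatile("movnti %1, %0" : "=m" (*dst) : "r" (val));
	}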
== Problem ==
A fast warm reset doesn't reset TDX private memory. Kexec() can also
boot into the new kernel directly. Thus, if the old kernel has left any
TDX private pages on a platform with this erratum, the new kernel
might get an unexpected machine check.
Note that w/o this erratum any kernel read/write on TDX private memory
should never cause a machine check, thus it's OK for the old kernel to
leave TDX private pages as is.
== Solution ==
In short, with this erratum, the kernel needs to explicitly convert all
TDX private pages back to normal to give the new kernel a clean slate
after kexec(). The BIOS is also expected to disable fast warm reset as
a workaround to this erratum, thus this implementation doesn't try to
reset TDX private memory for the reboot case in the kernel but depends
on the BIOS to enable the workaround.
Convert TDX private pages back to normal (using MOVDIR64B to clear these
pages) after all remote cpus have been stopped and cache flushing has
been done on all cpus, when no further TDX activity can happen.
Do it in machine_kexec() to cover both normal kexec and crash kexec.
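For illustration, the conversion amounts to overwriting every cacheline
of the private range with a 64-byte direct store. A hedged sketch,
assuming the kernel's movdir64b() helper (the function name and exact
form below are placeholders, not necessarily what the patch uses):

	/*
	 * Sketch only: overwrite each cacheline of a physical range with
	 * MOVDIR64B.  A full-cacheline direct store avoids generating
	 * partial writes while converting the pages back to normal.
	 */
	static void reset_tdx_private_range(unsigned long phys, unsigned long size)
	{
		static const u8 zero[64] __aligned(64);
		unsigned long end = phys + size;

		for (; phys < end; phys += 64)
			movdir64b(__va(phys), zero);

		/* MOVDIR64B stores are weakly ordered; fence before moving on. */
		mb();
	}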
For now TDX private memory can only be PAMT pages. It would be ideal to
cover all types of TDX private memory here, but there are practical
problems to do so:
1) There's no existing infrastructure to track TDX private pages;
2) It's not feasible to query the TDX module about page type, because
VMX, which is required to make SEAMCALLs, has already been disabled;
3) Even if it is feasible to query the TDX module, the result may not be
accurate. E.g., the remote CPU could be stopped right before
MOVDIR64B.
One temporary solution is to blindly convert all memory pages, but that
is problematic too, because not all pages are mapped as writable in the
direct mapping. It could be done by switching to the identity mapping
created for kexec(), or to a new page table, but the complexity seems
like overkill.
Therefore, rather than doing something dramatic, only reset PAMT pages
here.
Leave resetting other TDX private pages as future work for when they
become possible to exist.
Signed-off-by: Kai Huang <[email protected]>
Reviewed-by: Kirill A. Shutemov <[email protected]>
---
v3 -> v4:
- No change
v2 -> v3:
- No change
v1 -> v2:
- Remove using reboot notifier to stop TDX module as it doesn't
cover crash kexec. Change to use a variable with barrier instead.
(Rick)
- Introduce kexec_save_processor_start() to make code better, and
make the comment around calling site of tdx_reset_memory() more
concise. (Dave)
- Mention cache for all other cpus have been flushed around
native_wbinvd() in tdx_reset_memory(). (Dave)
- Remove the extended alternatives discussion from the comment, but leave
it in the changelog. Point out what the current code does and point out
the risk. (Dave)
---
arch/x86/include/asm/tdx.h | 2 +
arch/x86/kernel/machine_kexec_64.c | 27 ++++++++--
arch/x86/virt/vmx/tdx/tdx.c | 79 ++++++++++++++++++++++++++++++
3 files changed, 104 insertions(+), 4 deletions(-)
diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index eba178996d84..ed3ac9a8a079 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -116,11 +116,13 @@ static inline u64 sc_retry(sc_func_t func, u64 fn,
int tdx_cpu_enable(void);
int tdx_enable(void);
const char *tdx_dump_mce_info(struct mce *m);
+void tdx_reset_memory(void);
#else
static inline void tdx_init(void) { }
static inline int tdx_cpu_enable(void) { return -ENODEV; }
static inline int tdx_enable(void) { return -ENODEV; }
static inline const char *tdx_dump_mce_info(struct mce *m) { return NULL; }
+static inline void tdx_reset_memory(void) { }
#endif /* CONFIG_INTEL_TDX_HOST */
#endif /* !__ASSEMBLY__ */
diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
index a454477b7b4c..ba5a66bf724e 100644
--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -28,6 +28,7 @@
#include <asm/setup.h>
#include <asm/set_memory.h>
#include <asm/cpu.h>
+#include <asm/tdx.h>
#ifdef CONFIG_ACPI
/*
@@ -288,6 +289,14 @@ void machine_kexec_cleanup(struct kimage *image)
free_transition_pgtable(image);
}
+static void kexec_save_processor_start(struct kimage *image)
+{
+#ifdef CONFIG_KEXEC_JUMP
+ if (image->preserve_context)
+ save_processor_state();
+#endif
+}
+
/*
* Do not allocate memory (or fail in any way) in machine_kexec().
* We are past the point of no return, committed to rebooting now.
@@ -298,10 +307,20 @@ void machine_kexec(struct kimage *image)
void *control_page;
int save_ftrace_enabled;
-#ifdef CONFIG_KEXEC_JUMP
- if (image->preserve_context)
- save_processor_state();
-#endif
+ kexec_save_processor_start(image);
+
+ /*
+ * Convert TDX private memory back to normal (when needed) to
+ * avoid the second kernel potentially seeing unexpected machine
+ * check.
+ *
+ * However skip this when preserve_context is on. By reaching
+ * here, TDX (if ever got enabled by the kernel) has survived
+ * from the suspend when preserve_context is on, and it can
+ * continue to work after jumping back from the second kernel.
+ */
+ if (!image->preserve_context)
+ tdx_reset_memory();
save_ftrace_enabled = __ftrace_enabled_save();
diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index 49a1c6890b55..7f5d388c5461 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -52,6 +52,8 @@ static DEFINE_MUTEX(tdx_module_lock);
/* All TDX-usable memory regions. Protected by mem_hotplug_lock. */
static LIST_HEAD(tdx_memlist);
+static bool tdx_may_have_private_memory __read_mostly;
+
typedef void (*sc_err_func_t)(u64 fn, u64 err, struct tdx_module_args *args);
static inline void seamcall_err(u64 fn, u64 err, struct tdx_module_args *args)
@@ -1096,6 +1098,18 @@ static int init_tdmrs(struct tdmr_info_list *tdmr_list)
return 0;
}
+static void mark_may_have_private_memory(bool may)
+{
+ tdx_may_have_private_memory = may;
+
+ /*
+ * Ensure update to tdx_may_have_private_memory is visible to all
+ * cpus. This ensures when any remote cpu reads it as true, the
+ * 'tdx_tdmr_list' must be stable for reading PAMTs.
+ */
+ smp_wmb();
+}
+
static int init_tdx_module(void)
{
struct tdx_tdmr_sysinfo tdmr_sysinfo;
@@ -1141,6 +1155,12 @@ static int init_tdx_module(void)
if (ret)
goto err_reset_pamts;
+ /*
+ * Starting from this point, the system may have TDX private
+ * memory.
+ */
+ mark_may_have_private_memory(true);
+
/* Initialize TDMRs to complete the TDX module initialization */
ret = init_tdmrs(&tdx_tdmr_list);
if (ret)
@@ -1172,6 +1192,7 @@ static int init_tdx_module(void)
* as suggested by the TDX spec.
*/
tdmrs_reset_pamt_all(&tdx_tdmr_list);
+ mark_may_have_private_memory(false);
err_free_pamts:
tdmrs_free_pamt_all(&tdx_tdmr_list);
err_free_tdmrs:
@@ -1489,3 +1510,61 @@ void __init tdx_init(void)
check_tdx_erratum();
}
+
+void tdx_reset_memory(void)
+{
+ if (!boot_cpu_has(X86_FEATURE_TDX_HOST_PLATFORM))
+ return;
+
+ /*
+ * Converting TDX private pages back to normal must be done
+ * when there's no TDX activity anymore on all remote cpus.
+ * Verify this is only called when all remote cpus have
+ * been stopped.
+ */
+ WARN_ON_ONCE(num_online_cpus() != 1);
+
+ /*
+ * Kernel read/write to TDX private memory doesn't cause
+ * machine check on hardware w/o this erratum.
+ */
+ if (!boot_cpu_has_bug(X86_BUG_TDX_PW_MCE))
+ return;
+
+ /*
+ * Nothing to convert if it's not possible to have any TDX
+ * private pages.
+ */
+ if (!tdx_may_have_private_memory)
+ return;
+
+ /*
+ * Ensure the 'tdx_tdmr_list' is stable for reading PAMTs
+ * when tdx_may_have_private_memory reads true, paired with
+ * the smp_wmb() in mark_may_have_private_memory().
+ */
+ smp_rmb();
+
+ /*
+ * All remote cpus have been stopped, and their caches have
+ * been flushed in stop_this_cpu(). Now flush cache for the
+ * last running cpu _before_ converting TDX private pages.
+ */
+ native_wbinvd();
+
+ /*
+ * It's ideal to cover all types of TDX private pages here, but
+ * currently there's no unified way to tell whether a given page
+ * is TDX private page or not.
+ *
+ * Just convert PAMT pages now, as currently TDX private pages
+ * can only be PAMT pages.
+ *
+ * TODO:
+ *
+ * This leaves all other types of TDX private pages undealt
+ * with. They must be handled in _some_ way when they become
+ * possible to exist.
+ */
+ tdmrs_reset_pamt_all(&tdx_tdmr_list);
+}
--
2.43.2
Now TDX host can work with kexec(). Remove the !KEXEC_CORE dependency.
Signed-off-by: Kai Huang <[email protected]>
---
arch/x86/Kconfig | 1 -
1 file changed, 1 deletion(-)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 2dac256b6e8d..3761f55c41ab 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1968,7 +1968,6 @@ config INTEL_TDX_HOST
depends on X86_X2APIC
select ARCH_KEEP_MEMBLOCK
depends on CONTIG_ALLOC
- depends on !KEXEC_CORE
depends on X86_MCE
help
Intel Trust Domain Extensions (TDX) protects guest VMs from malicious
--
2.43.2
TL;DR:
To cover both normal kexec and crash kexec, add a TDX specific memory
reset notifier to let "in-kernel TDX users" use their own way to convert
TDX private pages (that they manage respectively) in tdx_reset_memory().
Long version:
On the platforms with TDX "partial write machine check" erratum, during
kexec, the kernel needs to convert TDX private memory back to normal
before jumping to the second kernel to avoid the second kernel seeing
potential machine check.
For now tdx_reset_memory() only resets PAMT pages. KVM will be the
first in-kernel TDX user to support running TDX guests, and by then
other TDX private pages will start to exist. They need to be covered
too.
Currently the kernel doesn't have a unified way to tell whether a given
page is a TDX private page or not. One choice is to add such a unified
way, and there are a couple of options to do it:
1) Use a bitmap, or Xarray, etc to track TDX private page for all PFNs;
2) Use a "software-only" bit in the direct-mapping PTE to mark a given
page is TDX private page;
3) Use a new flag in 'struct page' to mark TDX private page;
4) ... potential other ways.
Option 1) consumes additional memory. E.g., if using a bitmap, the
overhead is "number of total RAM pages / 8" bytes.
Option 2) would cause splitting large-page mappings into 4K mappings in
the direct mapping when a page is allocated as a TDX private page, and
cause additional TLB flushes, etc. It's not ideal for such a use case.
Option 3) apparently contradicts the effort to reduce the use of the
flags of 'struct page'.
None of above is ideal.
Therefore, instead of providing a unified way to tell whether a given
page is TDX private page or not, leave "resetting TDX private pages" to
the "in-kernel user" of TDX.
This is motivated by the fact that KVM is already maintaining an Xarray
to track "memory attributes (e.g., private or shared)" for each GFN for
each guest. Thus KVM can use its own way to find all TDX private pages
that it manages and convert them back to normal.
For normal kexec the reboot notifier could be used, but it doesn't
cover crash kexec.
Add a TDX specific memory reset notifier to achieve this. The in-kernel
TDX users will need to register their own notifiers to reset TDX private
pages. Call these notifiers in tdx_reset_memory() right before
resetting PAMT pages.
KVM will be the first user of this notifier. Export the "register" and
"unregister" APIs for KVM to use.
Signed-off-by: Kai Huang <[email protected]>
---
arch/x86/include/asm/tdx.h | 14 ++++++++++++
arch/x86/virt/vmx/tdx/tdx.c | 45 +++++++++++++++++++++++++++----------
2 files changed, 47 insertions(+), 12 deletions(-)
diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index ed3ac9a8a079..7c2c0a0b9754 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -117,12 +117,26 @@ int tdx_cpu_enable(void);
int tdx_enable(void);
const char *tdx_dump_mce_info(struct mce *m);
void tdx_reset_memory(void);
+
+struct notifier_block;
+
+int tdx_register_memory_reset_notifier(struct notifier_block *nb);
+void tdx_unregister_memory_reset_notifier(struct notifier_block *nb);
#else
static inline void tdx_init(void) { }
static inline int tdx_cpu_enable(void) { return -ENODEV; }
static inline int tdx_enable(void) { return -ENODEV; }
static inline const char *tdx_dump_mce_info(struct mce *m) { return NULL; }
static inline void tdx_reset_memory(void) { }
+
+struct notifier_block;
+
+static inline int tdx_register_memory_reset_notifier(struct notifier_block *nb)
+{
+ return -EOPNOTSUPP;
+}
+static inline void tdx_unregister_memory_reset_notifier(
+ struct notifier_block *nb) { }
#endif /* CONFIG_INTEL_TDX_HOST */
#endif /* !__ASSEMBLY__ */
diff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c
index 7f5d388c5461..af62fbffcd96 100644
--- a/arch/x86/virt/vmx/tdx/tdx.c
+++ b/arch/x86/virt/vmx/tdx/tdx.c
@@ -27,6 +27,7 @@
#include <linux/log2.h>
#include <linux/acpi.h>
#include <linux/suspend.h>
+#include <linux/notifier.h>
#include <asm/page.h>
#include <asm/special_insns.h>
#include <asm/msr-index.h>
@@ -54,6 +55,8 @@ static LIST_HEAD(tdx_memlist);
static bool tdx_may_have_private_memory __read_mostly;
+static BLOCKING_NOTIFIER_HEAD(tdx_memory_reset_chain);
+
typedef void (*sc_err_func_t)(u64 fn, u64 err, struct tdx_module_args *args);
static inline void seamcall_err(u64 fn, u64 err, struct tdx_module_args *args)
@@ -1511,6 +1514,27 @@ void __init tdx_init(void)
check_tdx_erratum();
}
+int tdx_register_memory_reset_notifier(struct notifier_block *nb)
+{
+ return blocking_notifier_chain_register(&tdx_memory_reset_chain, nb);
+}
+EXPORT_SYMBOL_GPL(tdx_register_memory_reset_notifier);
+
+void tdx_unregister_memory_reset_notifier(struct notifier_block *nb)
+{
+ blocking_notifier_chain_unregister(&tdx_memory_reset_chain, nb);
+}
+EXPORT_SYMBOL_GPL(tdx_unregister_memory_reset_notifier);
+
+static int notify_reset_memory(void)
+{
+ int ret;
+
+ ret = blocking_notifier_call_chain(&tdx_memory_reset_chain, 0, NULL);
+
+ return notifier_to_errno(ret);
+}
+
void tdx_reset_memory(void)
{
if (!boot_cpu_has(X86_FEATURE_TDX_HOST_PLATFORM))
@@ -1553,18 +1577,15 @@ void tdx_reset_memory(void)
native_wbinvd();
/*
- * It's ideal to cover all types of TDX private pages here, but
- * currently there's no unified way to tell whether a given page
- * is TDX private page or not.
- *
- * Just convert PAMT pages now, as currently TDX private pages
- * can only be PAMT pages.
- *
- * TODO:
- *
- * This leaves all other types of TDX private pages undealt
- * with. They must be handled in _some_ way when they become
- * possible to exist.
+ * Tell all in-kernel TDX users to reset TDX private pages
+ * that they manage.
+ */
+ if (notify_reset_memory())
+ pr_err("Failed to reset all TDX private pages.\n");
+
+ /*
+ * The only remaining TDX private pages are PAMT pages.
+ * Reset them.
*/
tdmrs_reset_pamt_all(&tdx_tdmr_list);
}
--
2.43.2
On 4/18/24 06:48, Kai Huang wrote:
> Both SME and TDX can leave caches in incoherent state due to memory
> encryption. During kexec, the caches must be flushed before jumping to
> the second kernel to avoid silent memory corruption to the second kernel.
>
> During kexec, the WBINVD in stop_this_cpu() flushes caches for all
> remote cpus when they are being stopped. For SME, the WBINVD in
> relocate_kernel() flushes the cache for the last running cpu (which is
> executing the kexec).
>
> Similarly, to support kexec for TDX host, after stopping all remote cpus
> with cache flushed, the kernel needs to flush cache for the last running
> cpu.
>
> Use the existing WBINVD in relocate_kernel() to cover TDX host as well.
>
> However, instead of sprinkling around vendor-specific checks, just do
> unconditional WBINVD to cover both SME and TDX. Kexec is not a fast path
> so having one additional WBINVD for platforms w/o SME/TDX is acceptable.
>
> But only do WBINVD for bare-metal because TDX guests and SEV-ES/SEV-SNP
> guests will get unexpected (and yet unnecessary) exception (#VE or #VC)
> which the kernel is unable to handle at this stage.
>
> Signed-off-by: Kai Huang <[email protected]>
> Reviewed-by: Kirill A. Shutemov <[email protected]>
> Cc: Tom Lendacky <[email protected]>
> Cc: Dave Young <[email protected]>
Reviewed-by: Tom Lendacky <[email protected]>
> ---
On 19/04/2024 1:47 am, Tom Lendacky wrote:
> On 4/18/24 06:48, Kai Huang wrote:
>>
>
> ...
>
>> Signed-off-by: Kai Huang <[email protected]>
>> Suggested-by: Borislav Petkov <[email protected]>
>> Cc: Tom Lendacky <[email protected]>
>> Cc: Dave Young <[email protected]>
>
> Reviewed-by: Tom Lendacky <[email protected]>
>
Thanks Tom!
On Thu, 2024-04-18 at 23:48 +1200, Kai Huang wrote:
> Currently kexec() support and TDX host are mutually exclusive in the
> Kconfig. This series adds TDX host kexec support so that they can
> work together and be enabled at the same time in the Kconfig.
>
Hi Maintainers,
I would appreciate it if you could help take a look. Thanks!
On 18/04/2024 11:48 pm, Kai Huang wrote:
> TL;DR:
>
> Change to do unconditional WBINVD in stop_this_cpu() for bare metal
> to cover kexec support for both AMD SME and Intel TDX. There _was_ an
> issue that previously prevented doing so, but it has since been fixed.
>
> Long version:
>
> Both AMD SME and Intel TDX can leave caches in an incoherent state due
> to memory encryption, which can lead to silent memory corruption during
> kexec. To address this issue, it is necessary to flush the caches
> before jumping to the second kernel.
>
> Currently, the kernel only performs WBINVD in stop_this_cpu() when SME
> is supported by hardware. To support TDX, instead of adding one more
> vendor-specific check, it is proposed to perform unconditional WBINVD.
> Kexec() is a slow path, and the additional WBINVD is acceptable for the
> sake of simplicity and maintainability.
>
Hi Tom,
May I ask how does SME work with kdump in crash_kexec(). Looking at the
code, AFAICT the crash_kexec() path doesn't use stop_this_cpu() to stop
all other cpus. Instead, kdump_nmi_shootdown_cpus() is called to send
NMI to remote cpus and crash_nmi_callback() is invoked to stop them.
But the crash_nmi_callback() doesn't invoke WBINVD for SME AFAICT. It
does call the kdump_nmi_callback() callback where a WBINVD is performed
for the SNP host:
void kdump_sev_callback(void)
{
/*
* Do wbinvd() on remote CPUs when SNP is enabled in order to
* safely do SNP_SHUTDOWN on the local CPU.
*/
if (cc_platform_has(CC_ATTR_HOST_SEV_SNP))
wbinvd();
}
So if I read correctly, what's the reason the WBINVD is skipped for SME
in case of crash_kexec()?
On Thu, Apr 18, 2024 at 11:48:05PM +1200,
Kai Huang <[email protected]> wrote:
> TL;DR:
>
> To cover both normal kexec and crash kexec, add a TDX specific memory
> reset notifier to let "in-kernel TDX users" use their own way to convert
> TDX private pages (that they manage respectively) in tdx_reset_memory().
>
> Long version:
>
> On the platforms with TDX "partial write machine check" erratum, during
> kexec, the kernel needs to convert TDX private memory back to normal
> before jumping to the second kernel to avoid the second kernel seeing
> potential machine check.
>
> For now tdx_reset_memory() only resets PAMT pages. KVM will be the
> first in-kernel TDX user to support running TDX guests, and by then
> other TDX private pages will start to exist. They need to be covered
> too.
>
> Currently the kernel doesn't have a unified way to tell whether a given
> page is a TDX private page or not. One choice is to add such a unified
> way, and there are a couple of options to do it:
>
> 1) Use a bitmap, or Xarray, etc to track TDX private page for all PFNs;
> 2) Use a "software-only" bit in the direct-mapping PTE to mark a given
> page is TDX private page;
> 3) Use a new flag in 'struct page' to mark TDX private page;
> 4) ... potential other ways.
>
> Option 1) consumes additional memory. E.g., if using a bitmap, the
> overhead is "number of total RAM pages / 8" bytes.
>
> Option 2) would cause splitting large-page mappings into 4K mappings in
> the direct mapping when a page is allocated as a TDX private page, and
> cause additional TLB flushes, etc. It's not ideal for such a use case.
>
> Option 3) apparently contradicts the effort to reduce the use of the
> flags of 'struct page'.
>
> None of above is ideal.
>
> Therefore, instead of providing a unified way to tell whether a given
> page is TDX private page or not, leave "resetting TDX private pages" to
> the "in-kernel user" of TDX.
>
> This is motivated by the fact that KVM is already maintaining an Xarray
> to track "memory attributes (e.g., private or shared)" for each GFN for
> each guest. Thus KVM can use its own way to find all TDX private pages
> that it manages and convert them back to normal.
>
> For normal kexec the reboot notifier could be used, but it doesn't
> cover crash kexec.
If we do so, KVM needs to traverse complex data structures. And we can
still miss TDX private pages that are in a transitional state between
private and shared. Also, we would have to maintain the logic whenever
the data structures change, whether due to KVM code changes or TDX
module data structure updates for future features.
Here are other options. What do you think?
Option 5) refuse normal kexec when the TDX module is initialized.
Force the user to do a system reset instead. For the kdump case, we know
that TDX private pages reside in the area of the crashed kernel. The
kdump kernel will reboot through the BIOS after capturing the crash image.
Pro: No need to traverse complex data structures
Pro: Keep room for future client support without complication
Con: LINUX_REBOOT_CMD_KEXEC can fail
Option 5.1) refuse normal kexec if TDX guests are running.
The admin needs to kill the TDX guests before kexec. Perhaps update the
systemd unit file.
Option 6) Clear all the pages instead of selected ones.
We can use the e820 entries passed to the second kernel.
Pro: No dependency on in-kernel TDX users.
Pro: Works for possible future TDX module data structure changes
Con: It may take a long time if the memory size is large.
If TDX guests use most of the available memory, this is inevitable.
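A very rough sketch of what option 6 could look like (illustration
only, reusing the hypothetical full-cacheline clearing helper sketched
earlier in the thread):

	/*
	 * Walk the e820 table passed to the kernel and clear every RAM
	 * range with full-cacheline (e.g. MOVDIR64B) stores.
	 */
	int i;

	for (i = 0; i < e820_table->nr_entries; i++) {
		struct e820_entry *entry = &e820_table->entries[i];

		if (entry->type == E820_TYPE_RAM)
			reset_tdx_private_range(entry->addr, entry->size);
	}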
--
Isaku Yamahata <[email protected]>
On 5/22/24 21:49, Huang, Kai wrote:
> On 18/04/2024 11:48 pm, Kai Huang wrote:
>> TL;DR:
>>
>> Change to do unconditional WBINVD in stop_this_cpu() for bare metal
>> to cover kexec support for both AMD SME and Intel TDX. There _was_ an
>> issue that previously prevented doing so, but it has since been fixed.
>>
>> Long version:
>>
>> Both AMD SME and Intel TDX can leave caches in an incoherent state due
>> to memory encryption, which can lead to silent memory corruption during
>> kexec. To address this issue, it is necessary to flush the caches
>> before jumping to the second kernel.
>>
>> Currently, the kernel only performs WBINVD in stop_this_cpu() when SME
>> is supported by hardware. To support TDX, instead of adding one more
>> vendor-specific check, it is proposed to perform unconditional WBINVD.
>> Kexec() is a slow path, and the additional WBINVD is acceptable for the
>> sake of simplicity and maintainability.
>>
>
> Hi Tom,
>
> May I ask how does SME work with kdump in crash_kexec(). Looking at the
> code, AFAICT the crash_kexec() path doesn't use stop_this_cpu() to stop
> all other cpus. Instead, kdump_nmi_shootdown_cpus() is called to send NMI
> to remote cpus and crash_nmi_callback() is invoked to stop them.
>
> But the crash_nmi_callback() doesn't invoke WBINVD for SME AFAICT. It
> does call the kdump_nmi_callback() callback where a WBINVD is performed
> for the SNP host:
>
> void kdump_sev_callback(void)
> {
> /*
> * Do wbinvd() on remote CPUs when SNP is enabled in order to
> * safely do SNP_SHUTDOWN on the local CPU.
> */
> if (cc_platform_has(CC_ATTR_HOST_SEV_SNP))
> wbinvd();
> }
>
> So if I read correctly, what's the reason the WBINVD is skipped for SME in
> case of crash_kexec()?
The system is rebooted after a crash and doesn't continue directly on into
a new kernel.
Thanks,
Tom
On 1/06/2024 8:45 am, Tom Lendacky wrote:
> On 5/22/24 21:49, Huang, Kai wrote:
>> On 18/04/2024 11:48 pm, Kai Huang wrote:
>>> TL;DR:
>>>
>>> Change to do unconditional WBINVD in stop_this_cpu() for bare metal
>>> to cover kexec support for both AMD SME and Intel TDX. There _was_ an
>>> issue that previously prevented doing so, but it has since been fixed.
>>>
>>> Long version:
>>>
>>> Both AMD SME and Intel TDX can leave caches in an incoherent state due
>>> to memory encryption, which can lead to silent memory corruption during
>>> kexec. To address this issue, it is necessary to flush the caches
>>> before jumping to the second kernel.
>>>
>>> Currently, the kernel only performs WBINVD in stop_this_cpu() when SME
>>> is supported by hardware. To support TDX, instead of adding one more
>>> vendor-specific check, it is proposed to perform unconditional WBINVD.
>>> Kexec() is a slow path, and the additional WBINVD is acceptable for the
>>> sake of simplicity and maintainability.
>>>
>>
>> Hi Tom,
>>
>> May I ask how does SME work with kdump in crash_kexec(). Looking at
>> the code, AFAICT the crash_kexec() path doesn't use stop_this_cpu() to
>> stop all other cpus. Instead, kdump_nmi_shootdown_cpus() is called to
>> send NMI to remote cpus and crash_nmi_callback() is invoked to stop them.
>>
>> But the crash_nmi_callback() doesn't invoke WBINVD for SME AFAICT. It
>> does call the kdump_nmi_callback() callback where a WBINVD is
>> performed for the SNP host:
>>
>> void kdump_sev_callback(void)
>> {
>> /*
>> * Do wbinvd() on remote CPUs when SNP is enabled in order to
>> * safely do SNP_SHUTDOWN on the local CPU.
>> */
>> if (cc_platform_has(CC_ATTR_HOST_SEV_SNP))
>> wbinvd();
>> }
>>
>> So if I read correctly, what's the reason the WBINVD is skipped for
>> SME in case of crash_kexec()?
>
> The system is rebooted after a crash and doesn't continue directly on
> into a new kernel.
>
How about the kdump kernel itself? Would the stale cachelines
potentially corrupt it?
And how about /proc/vmcore, which reflects the system RAM used by the
first, crashed, kernel? Is it OK to have stale cachelines for it?