2024-02-12 10:46:54

by Kirill A. Shutemov

[permalink] [raw]
Subject: [PATCHv7 00/16] x86/tdx: Add kexec support

The patchset adds the bits and pieces needed to get kexec (and crashkernel)
working on TDX guests.

The last patch implements CPU offlining according to the approved ACPI
spec change proposal[1]. It unlocks kexec with all CPUs visible in the target
kernel. It requires BIOS-side enabling. If that support is missing, we fall
back to booting the second kernel with a single CPU.

Please review. I would be glad for any feedback.

[1] https://lore.kernel.org/all/13356251.uLZWGnKmhe@kreacher

v7:
- Call enc_kexec_stop_conversion() and enc_kexec_unshare_mem() after shutting
down IO-APIC, lapic and hpet. It meets AMD requirements.
- Minor style changes;
- Add Acked/Reviewed-bys;
v6:
- Rebased to v6.8-rc1;
- Provide default noop callbacks for .enc_kexec_stop_conversion and
.enc_kexec_unshare_mem;
- Split off patch that introduces .enc_kexec_* callbacks;
- asm_acpi_mp_play_dead(): program CR3 directly from RSI, no MOV to RAX
required;
- Restructure how smp_ops.stop_this_cpu() is hooked up in crash_nmi_callback();
- kvmclock patch got merged via KVM tree;
v5:
- Rename smp_ops.crash_play_dead to smp_ops.stop_this_cpu and use it in
stop_this_cpu();
- Split off enc_kexec_stop_conversion() from enc_kexec_unshare_mem();
- Introduce kernel_ident_mapping_free();
- Add explicit include for alternatives and stringify.
- Add barrier() after setting conversion_allowed to false;
- Mark cpu_hotplug_offline_disabled __ro_after_init;
- Print error if failed to hand over CPU to BIOS;
- Update comments and commit messages;
v4:
- Fix build for !KEXEC_CORE;
- Cleaner ALTERNATIVE use;
- Update commit messages and comments;
- Add Reviewed-bys;
v3:
- Rework acpi_mp_crash_stop_other_cpus() to avoid invoking hotplug state
machine;
- Free page tables if reset vector setup failed;
- Change asm_acpi_mp_play_dead() to pass reset vector and PGD as arguments;
- Mark acpi_mp_* variables as static and __ro_after_init;
- Use u32 for apicid;
- Disable CPU offlining if reset vector setup failed;
- Rename madt.S -> madt_playdead.S;
- Mark tdx_kexec_unshare_mem() as static;
- Rebase onto up-to-date tip/master;
- Whitespace fixes;
- Reorder patches;
- Add Reviewed-bys;
- Update comments and commit messages;
v2:
- Rework how unsharing hooks into the kexec codepath;
- Rework kvmclock_disable() fix based on Sean's;
- s/cpu_hotplug_not_supported()/cpu_hotplug_disable_offlining()/;
- use play_dead_common() to implement acpi_mp_play_dead();
- cond_resched() in tdx_shared_memory_show();
- s/target kernel/second kernel/;
- Update commit messages and comments;

Kirill A. Shutemov (16):
x86/acpi: Extract ACPI MADT wakeup code into a separate file
x86/apic: Mark acpi_mp_wake_* variables as __ro_after_init
cpu/hotplug: Add support for declaring CPU offlining not supported
cpu/hotplug, x86/acpi: Disable CPU offlining for ACPI MADT wakeup
x86/kexec: Keep CR4.MCE set during kexec for TDX guest
x86/mm: Make x86_platform.guest.enc_status_change_*() return errno
x86/mm: Return correct level from lookup_address() if pte is none
x86/tdx: Account shared memory
x86/mm: Add callbacks to prepare encrypted memory for kexec
x86/tdx: Convert shared memory back to private on kexec
x86/mm: Make e820_end_ram_pfn() cover E820_TYPE_ACPI ranges
x86/acpi: Rename fields in acpi_madt_multiproc_wakeup structure
x86/acpi: Do not attempt to bring up secondary CPUs in kexec case
x86/smp: Add smp_ops.stop_this_cpu() callback
x86/mm: Introduce kernel_ident_mapping_free()
x86/acpi: Add support for CPU offlining for ACPI MADT wakeup method

arch/x86/Kconfig | 7 +
arch/x86/coco/core.c | 1 -
arch/x86/coco/tdx/tdx.c | 209 ++++++++++++++++++-
arch/x86/hyperv/ivm.c | 9 +-
arch/x86/include/asm/acpi.h | 7 +
arch/x86/include/asm/init.h | 3 +
arch/x86/include/asm/pgtable_types.h | 1 +
arch/x86/include/asm/smp.h | 1 +
arch/x86/include/asm/x86_init.h | 6 +-
arch/x86/kernel/acpi/Makefile | 11 +-
arch/x86/kernel/acpi/boot.c | 86 +-------
arch/x86/kernel/acpi/madt_playdead.S | 28 +++
arch/x86/kernel/acpi/madt_wakeup.c | 292 +++++++++++++++++++++++++++
arch/x86/kernel/crash.c | 6 +
arch/x86/kernel/e820.c | 9 +-
arch/x86/kernel/process.c | 7 +
arch/x86/kernel/reboot.c | 18 ++
arch/x86/kernel/relocate_kernel_64.S | 5 +
arch/x86/kernel/x86_init.c | 8 +-
arch/x86/mm/ident_map.c | 73 +++++++
arch/x86/mm/mem_encrypt_amd.c | 8 +-
arch/x86/mm/pat/set_memory.c | 17 +-
include/acpi/actbl2.h | 19 +-
include/linux/cc_platform.h | 10 -
include/linux/cpu.h | 2 +
kernel/cpu.c | 12 +-
26 files changed, 715 insertions(+), 140 deletions(-)
create mode 100644 arch/x86/kernel/acpi/madt_playdead.S
create mode 100644 arch/x86/kernel/acpi/madt_wakeup.c

--
2.43.0



2024-02-12 10:47:10

by Kirill A. Shutemov

[permalink] [raw]
Subject: [PATCHv7 03/16] cpu/hotplug: Add support for declaring CPU offlining not supported

The ACPI MADT mailbox wakeup method doesn't allow a CPU to be offlined
after it has been woken up.

Currently, CPU offlining is prevented based on the confidential
computing attribute which is set for Intel TDX. But TDX is not the only
possible user of the wakeup method: MADT wakeup can be implemented
outside of a confidential computing environment. Offlining support is a
property of the wakeup method, not of the CoCo implementation.

Introduce cpu_hotplug_disable_offlining() that can be called to indicate
that CPU offlining should be disabled.

This function is going to replace CC_ATTR_HOTPLUG_DISABLED for the ACPI
MADT wakeup method.
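
As an illustration (not part of the patch), a wakeup-method driver that
cannot offline CPUs would call the helper once during enumeration. The
sketch below is hypothetical; example_wakeup_init() is a made-up name:

    #include <linux/cpu.h>

    static int __init example_wakeup_init(void)
    {
            /*
             * Declare that CPU offlining is not supported. Any later
             * offline attempt fails with -EOPNOTSUPP instead of looking
             * like a transient failure.
             */
            cpu_hotplug_disable_offlining();
            return 0;
    }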

Signed-off-by: Kirill A. Shutemov <[email protected]>
Reviewed-by: Thomas Gleixner <[email protected]>
---
include/linux/cpu.h | 2 ++
kernel/cpu.c | 13 ++++++++++++-
2 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/include/linux/cpu.h b/include/linux/cpu.h
index dcb89c987164..aa89ef93a884 100644
--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -139,6 +139,7 @@ extern void cpus_read_lock(void);
extern void cpus_read_unlock(void);
extern int cpus_read_trylock(void);
extern void lockdep_assert_cpus_held(void);
+extern void cpu_hotplug_disable_offlining(void);
extern void cpu_hotplug_disable(void);
extern void cpu_hotplug_enable(void);
void clear_tasks_mm_cpumask(int cpu);
@@ -154,6 +155,7 @@ static inline void cpus_read_lock(void) { }
static inline void cpus_read_unlock(void) { }
static inline int cpus_read_trylock(void) { return true; }
static inline void lockdep_assert_cpus_held(void) { }
+static inline void cpu_hotplug_disable_offlining(void) { }
static inline void cpu_hotplug_disable(void) { }
static inline void cpu_hotplug_enable(void) { }
static inline int remove_cpu(unsigned int cpu) { return -EPERM; }
diff --git a/kernel/cpu.c b/kernel/cpu.c
index e6ec3ba4950b..7c28a07afe8b 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -484,6 +484,8 @@ static int cpu_hotplug_disabled;

DEFINE_STATIC_PERCPU_RWSEM(cpu_hotplug_lock);

+static bool cpu_hotplug_offline_disabled __ro_after_init;
+
void cpus_read_lock(void)
{
percpu_down_read(&cpu_hotplug_lock);
@@ -543,6 +545,14 @@ static void lockdep_release_cpus_lock(void)
rwsem_release(&cpu_hotplug_lock.dep_map, _THIS_IP_);
}

+/* Declare CPU offlining not supported */
+void cpu_hotplug_disable_offlining(void)
+{
+ cpu_maps_update_begin();
+ cpu_hotplug_offline_disabled = true;
+ cpu_maps_update_done();
+}
+
/*
* Wait for currently running CPU hotplug operations to complete (if any) and
* disable future CPU hotplug (from sysfs). The 'cpu_add_remove_lock' protects
@@ -1522,7 +1532,8 @@ static int cpu_down_maps_locked(unsigned int cpu, enum cpuhp_state target)
* If the platform does not support hotplug, report it explicitly to
* differentiate it from a transient offlining failure.
*/
- if (cc_platform_has(CC_ATTR_HOTPLUG_DISABLED))
+ if (cc_platform_has(CC_ATTR_HOTPLUG_DISABLED) ||
+ cpu_hotplug_offline_disabled)
return -EOPNOTSUPP;
if (cpu_hotplug_disabled)
return -EBUSY;
--
2.43.0


2024-02-12 10:47:54

by Kirill A. Shutemov

[permalink] [raw]
Subject: [PATCHv7 07/16] x86/mm: Return correct level from lookup_address() if pte is none

lookup_address() only returns the correct page table level for an entry
if the entry is not none.

Make the helper always return the correct 'level'. This allows
implementing an iterator over kernel page tables using lookup_address().
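
For illustration, a minimal iterator built on the new semantics could
look like the sketch below (hypothetical code assuming this patch is
applied; walk_kernel_mapping() is a made-up name):

    static void walk_kernel_mapping(unsigned long addr, unsigned long end)
    {
            while (addr < end) {
                    unsigned int level;
                    pte_t *pte;

                    pte = lookup_address(addr, &level);
                    if (pte)
                            /* ... visit the mapping ... */;

                    /*
                     * 'level' is valid even if the entry is none, so the
                     * walk can skip the whole non-present region at once.
                     */
                    addr += page_level_size(level);
            }
    }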

Add one more entry to enum pg_level to indicate the size of the VA
covered by one PGD entry in 5-level paging mode.

Signed-off-by: Kirill A. Shutemov <[email protected]>
Reviewed-by: Rick Edgecombe <[email protected]>
---
arch/x86/include/asm/pgtable_types.h | 1 +
arch/x86/mm/pat/set_memory.c | 8 ++++----
2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 0b748ee16b3d..3f648ffdfbe5 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -548,6 +548,7 @@ enum pg_level {
PG_LEVEL_2M,
PG_LEVEL_1G,
PG_LEVEL_512G,
+ PG_LEVEL_256T,
PG_LEVEL_NUM
};

diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index f92da8c9a86d..3612e3167147 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -666,32 +666,32 @@ pte_t *lookup_address_in_pgd(pgd_t *pgd, unsigned long address,
pud_t *pud;
pmd_t *pmd;

- *level = PG_LEVEL_NONE;
+ *level = PG_LEVEL_256T;

if (pgd_none(*pgd))
return NULL;

+ *level = PG_LEVEL_512G;
p4d = p4d_offset(pgd, address);
if (p4d_none(*p4d))
return NULL;

- *level = PG_LEVEL_512G;
if (p4d_large(*p4d) || !p4d_present(*p4d))
return (pte_t *)p4d;

+ *level = PG_LEVEL_1G;
pud = pud_offset(p4d, address);
if (pud_none(*pud))
return NULL;

- *level = PG_LEVEL_1G;
if (pud_large(*pud) || !pud_present(*pud))
return (pte_t *)pud;

+ *level = PG_LEVEL_2M;
pmd = pmd_offset(pud, address);
if (pmd_none(*pmd))
return NULL;

- *level = PG_LEVEL_2M;
if (pmd_large(*pmd) || !pmd_present(*pmd))
return (pte_t *)pmd;

--
2.43.0


2024-02-12 10:48:26

by Kirill A. Shutemov

[permalink] [raw]
Subject: [PATCHv7 08/16] x86/tdx: Account shared memory

The kernel will convert all shared memory back to private during kexec.
The direct mapping page tables will provide information on which memory
is shared.

It is extremely important to convert all shared memory. If a page is
missed, it will cause the second kernel to crash when it accesses it.

Keep track of the number of shared pages. This will allow for
cross-checking against the shared information in the direct mapping and
reporting if the shared bit is lost.

Include a debugfs interface that allows for the check to be performed at
any point.
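
For example, reading the file (created under arch_debugfs_dir, i.e.
/sys/kernel/debug/x86/tdx_shared_memory) is expected to produce output
along these lines; the numbers here are made up:

    Number of shared pages in kernel page tables:             4096
    Number of pages accounted as shared:                      4096

If the two numbers diverge, a shared bit was lost somewhere.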

Signed-off-by: Kirill A. Shutemov <[email protected]>
---
arch/x86/coco/tdx/tdx.c | 69 +++++++++++++++++++++++++++++++++++++++++
1 file changed, 69 insertions(+)

diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
index 26fa47db5782..fd212c9bad89 100644
--- a/arch/x86/coco/tdx/tdx.c
+++ b/arch/x86/coco/tdx/tdx.c
@@ -5,6 +5,7 @@
#define pr_fmt(fmt) "tdx: " fmt

#include <linux/cpufeature.h>
+#include <linux/debugfs.h>
#include <linux/export.h>
#include <linux/io.h>
#include <asm/coco.h>
@@ -38,6 +39,13 @@

#define TDREPORT_SUBTYPE_0 0

+static atomic_long_t nr_shared;
+
+static inline bool pte_decrypted(pte_t pte)
+{
+ return cc_mkdec(pte_val(pte)) == pte_val(pte);
+}
+
/* Called from __tdx_hypercall() for unrecoverable failure */
noinstr void __noreturn __tdx_hypercall_failed(void)
{
@@ -821,6 +829,11 @@ static int tdx_enc_status_change_finish(unsigned long vaddr, int numpages,
if (!enc && !tdx_enc_status_changed(vaddr, numpages, enc))
return -EIO;

+ if (enc)
+ atomic_long_sub(numpages, &nr_shared);
+ else
+ atomic_long_add(numpages, &nr_shared);
+
return 0;
}

@@ -896,3 +909,59 @@ void __init tdx_early_init(void)

pr_info("Guest detected\n");
}
+
+#ifdef CONFIG_DEBUG_FS
+static int tdx_shared_memory_show(struct seq_file *m, void *p)
+{
+ unsigned long addr, end;
+ unsigned long found = 0;
+
+ addr = PAGE_OFFSET;
+ end = PAGE_OFFSET + get_max_mapped();
+
+ while (addr < end) {
+ unsigned long size;
+ unsigned int level;
+ pte_t *pte;
+
+ pte = lookup_address(addr, &level);
+ size = page_level_size(level);
+
+ if (pte && pte_decrypted(*pte))
+ found += size / PAGE_SIZE;
+
+ addr += size;
+
+ cond_resched();
+ }
+
+ seq_printf(m, "Number of shared pages in kernel page tables: %16lu\n",
+ found);
+ seq_printf(m, "Number of pages accounted as shared: %16ld\n",
+ atomic_long_read(&nr_shared));
+ return 0;
+}
+
+static int tdx_shared_memory_open(struct inode *inode, struct file *file)
+{
+ return single_open(file, tdx_shared_memory_show, NULL);
+}
+
+static const struct file_operations tdx_shared_memory_fops = {
+ .open = tdx_shared_memory_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = single_release,
+};
+
+static __init int debug_tdx_shared_memory(void)
+{
+ if (!cpu_feature_enabled(X86_FEATURE_TDX_GUEST))
+ return 0;
+
+ debugfs_create_file("tdx_shared_memory", 0400, arch_debugfs_dir,
+ NULL, &tdx_shared_memory_fops);
+ return 0;
+}
+fs_initcall(debug_tdx_shared_memory);
+#endif
--
2.43.0


2024-02-12 10:49:07

by Kirill A. Shutemov

[permalink] [raw]
Subject: [PATCHv7 10/16] x86/tdx: Convert shared memory back to private on kexec

TDX guests allocate shared buffers to perform I/O. This is done by
allocating pages normally from the buddy allocator and converting them
to shared with set_memory_decrypted().

The second kernel has no idea what memory is converted this way. It only
sees E820_TYPE_RAM.

Accessing shared memory via a private mapping is fatal. It leads to an
unrecoverable TD exit.

On kexec, walk the direct mapping and convert all shared memory back to
private. This makes all RAM private again, so the second kernel can use
it normally.

The conversion occurs in two steps: stopping new conversions and
unsharing all memory. In the case of normal kexec, the stopping of
conversions takes place while scheduling is still functioning. This
allows for waiting until any ongoing conversions are finished. The
second step is carried out when all CPUs except one are inactive and
interrupts are disabled. This prevents any conflicts with code that may
access shared memory.
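
The serialization between stopping and in-flight conversions boils down
to a flag-plus-counter handshake. In sketch form (simplified from the
code below):

    /* conversion path (prepare/finish) */
    atomic_inc(&conversions_in_progress);
    if (!conversion_allowed) {              /* re-check after the inc */
            atomic_dec(&conversions_in_progress);
            return -EBUSY;
    }
    /* ... convert ... */
    atomic_dec(&conversions_in_progress);

    /* kexec path */
    conversion_allowed = false;
    barrier();                              /* flag before counter read */
    while (atomic_read(&conversions_in_progress))
            udelay(1);                      /* bounded to 30s below */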

Signed-off-by: Kirill A. Shutemov <[email protected]>
Reviewed-by: Rick Edgecombe <[email protected]>
---
arch/x86/coco/tdx/tdx.c | 124 +++++++++++++++++++++++++++++++++++++++-
1 file changed, 122 insertions(+), 2 deletions(-)

diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
index fd212c9bad89..bb77a927a831 100644
--- a/arch/x86/coco/tdx/tdx.c
+++ b/arch/x86/coco/tdx/tdx.c
@@ -6,8 +6,10 @@

#include <linux/cpufeature.h>
#include <linux/debugfs.h>
+#include <linux/delay.h>
#include <linux/export.h>
#include <linux/io.h>
+#include <linux/kexec.h>
#include <asm/coco.h>
#include <asm/tdx.h>
#include <asm/vmx.h>
@@ -15,6 +17,7 @@
#include <asm/insn.h>
#include <asm/insn-eval.h>
#include <asm/pgtable.h>
+#include <asm/set_memory.h>

/* MMIO direction */
#define EPT_READ 0
@@ -41,6 +44,9 @@

static atomic_long_t nr_shared;

+static atomic_t conversions_in_progress;
+static bool conversion_allowed = true;
+
static inline bool pte_decrypted(pte_t pte)
{
return cc_mkdec(pte_val(pte)) == pte_val(pte);
@@ -726,6 +732,14 @@ static bool tdx_tlb_flush_required(bool private)

static bool tdx_cache_flush_required(void)
{
+ /*
+ * Avoid issuing CLFLUSH on set_memory_decrypted() if conversions are
+ * stopped. Otherwise it can race with tdx_kexec_unshare_mem() and
+ * trigger an implicit conversion to shared.
+ */
+ if (!conversion_allowed)
+ return false;
+
/*
* AMD SME/SEV can avoid cache flushing if HW enforces cache coherence.
* TDX doesn't have such capability.
@@ -809,12 +823,25 @@ static bool tdx_enc_status_changed(unsigned long vaddr, int numpages, bool enc)
static int tdx_enc_status_change_prepare(unsigned long vaddr, int numpages,
bool enc)
{
+ atomic_inc(&conversions_in_progress);
+
+ /*
+ * Check after bumping conversions_in_progress to serialize
+ * against tdx_kexec_stop_conversion().
+ */
+ if (!conversion_allowed) {
+ atomic_dec(&conversions_in_progress);
+ return -EBUSY;
+ }
+
/*
* Only handle shared->private conversion here.
* See the comment in tdx_early_init().
*/
- if (enc && !tdx_enc_status_changed(vaddr, numpages, enc))
+ if (enc && !tdx_enc_status_changed(vaddr, numpages, enc)) {
+ atomic_dec(&conversions_in_progress);
return -EIO;
+ }

return 0;
}
@@ -826,17 +853,107 @@ static int tdx_enc_status_change_finish(unsigned long vaddr, int numpages,
* Only handle private->shared conversion here.
* See the comment in tdx_early_init().
*/
- if (!enc && !tdx_enc_status_changed(vaddr, numpages, enc))
+ if (!enc && !tdx_enc_status_changed(vaddr, numpages, enc)) {
+ atomic_dec(&conversions_in_progress);
return -EIO;
+ }

if (enc)
atomic_long_sub(numpages, &nr_shared);
else
atomic_long_add(numpages, &nr_shared);

+ atomic_dec(&conversions_in_progress);
+
return 0;
}

+static void tdx_kexec_stop_conversion(bool crash)
+{
+ /* Stop new private<->shared conversions */
+ conversion_allowed = false;
+
+ /*
+ * Make sure conversion_allowed is cleared before checking
+ * conversions_in_progress.
+ */
+ barrier();
+
+ /*
+ * Crash kernel reaches here with interrupts disabled: can't wait for
+ * conversions to finish.
+ *
+ * If a race happened, just report it and proceed.
+ */
+ if (!crash) {
+ unsigned long timeout;
+
+ /*
+ * Wait for in-flight conversions to complete.
+ *
+ * Do not wait more than 30 seconds.
+ */
+ timeout = 30 * USEC_PER_SEC;
+ while (atomic_read(&conversions_in_progress) && timeout--)
+ udelay(1);
+ }
+
+ if (atomic_read(&conversions_in_progress))
+ pr_warn("Failed to finish shared<->private conversions\n");
+}
+
+static void tdx_kexec_unshare_mem(void)
+{
+ unsigned long addr, end;
+ long found = 0, shared;
+
+ /*
+ * Walk direct mapping and convert all shared memory back to private.
+ */
+
+ addr = PAGE_OFFSET;
+ end = PAGE_OFFSET + get_max_mapped();
+
+ while (addr < end) {
+ unsigned long size;
+ unsigned int level;
+ pte_t *pte;
+
+ pte = lookup_address(addr, &level);
+ size = page_level_size(level);
+
+ if (pte && pte_decrypted(*pte)) {
+ int pages = size / PAGE_SIZE;
+
+ /*
+ * Touching memory with shared bit set triggers implicit
+ * conversion to shared.
+ *
+ * Make sure nobody touches the shared range from
+ * now on.
+ */
+ set_pte(pte, __pte(0));
+
+ if (!tdx_enc_status_changed(addr, pages, true)) {
+ pr_err("Failed to unshare range %#lx-%#lx\n",
+ addr, addr + size);
+ }
+
+ found += pages;
+ }
+
+ addr += size;
+ }
+
+ __flush_tlb_all();
+
+ shared = atomic_long_read(&nr_shared);
+ if (shared != found) {
+ pr_err("shared page accounting is off\n");
+ pr_err("nr_shared = %ld, nr_found = %ld\n", shared, found);
+ }
+}
+
void __init tdx_early_init(void)
{
struct tdx_module_args args = {
@@ -896,6 +1013,9 @@ void __init tdx_early_init(void)
x86_platform.guest.enc_cache_flush_required = tdx_cache_flush_required;
x86_platform.guest.enc_tlb_flush_required = tdx_tlb_flush_required;

+ x86_platform.guest.enc_kexec_stop_conversion = tdx_kexec_stop_conversion;
+ x86_platform.guest.enc_kexec_unshare_mem = tdx_kexec_unshare_mem;
+
/*
* TDX intercepts the RDMSR to read the X2APIC ID in the parallel
* bringup low level code. That raises #VE which cannot be handled
--
2.43.0


2024-02-12 10:49:24

by Kirill A. Shutemov

[permalink] [raw]
Subject: [PATCHv7 06/16] x86/mm: Make x86_platform.guest.enc_status_change_*() return errno

TDX is going to have more than one reason to fail
enc_status_change_prepare().

Change the callback to return an errno instead of assuming -EIO;
enc_status_change_finish() is changed too, to keep the interface symmetric.

Signed-off-by: Kirill A. Shutemov <[email protected]>
---
arch/x86/coco/tdx/tdx.c | 20 +++++++++++---------
arch/x86/hyperv/ivm.c | 9 +++------
arch/x86/include/asm/x86_init.h | 4 ++--
arch/x86/kernel/x86_init.c | 4 ++--
arch/x86/mm/mem_encrypt_amd.c | 8 ++++----
arch/x86/mm/pat/set_memory.c | 9 +++++----
6 files changed, 27 insertions(+), 27 deletions(-)

diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
index c1cb90369915..26fa47db5782 100644
--- a/arch/x86/coco/tdx/tdx.c
+++ b/arch/x86/coco/tdx/tdx.c
@@ -798,28 +798,30 @@ static bool tdx_enc_status_changed(unsigned long vaddr, int numpages, bool enc)
return true;
}

-static bool tdx_enc_status_change_prepare(unsigned long vaddr, int numpages,
- bool enc)
+static int tdx_enc_status_change_prepare(unsigned long vaddr, int numpages,
+ bool enc)
{
/*
* Only handle shared->private conversion here.
* See the comment in tdx_early_init().
*/
- if (enc)
- return tdx_enc_status_changed(vaddr, numpages, enc);
- return true;
+ if (enc && !tdx_enc_status_changed(vaddr, numpages, enc))
+ return -EIO;
+
+ return 0;
}

-static bool tdx_enc_status_change_finish(unsigned long vaddr, int numpages,
+static int tdx_enc_status_change_finish(unsigned long vaddr, int numpages,
bool enc)
{
/*
* Only handle private->shared conversion here.
* See the comment in tdx_early_init().
*/
- if (!enc)
- return tdx_enc_status_changed(vaddr, numpages, enc);
- return true;
+ if (!enc && !tdx_enc_status_changed(vaddr, numpages, enc))
+ return -EIO;
+
+ return 0;
}

void __init tdx_early_init(void)
diff --git a/arch/x86/hyperv/ivm.c b/arch/x86/hyperv/ivm.c
index 7dcbf153ad72..49b4f427268f 100644
--- a/arch/x86/hyperv/ivm.c
+++ b/arch/x86/hyperv/ivm.c
@@ -510,13 +510,12 @@ static int hv_mark_gpa_visibility(u16 count, const u64 pfn[],
* with host. This function works as wrap of hv_mark_gpa_visibility()
* with memory base and size.
*/
-static bool hv_vtom_set_host_visibility(unsigned long kbuffer, int pagecount, bool enc)
+static int hv_vtom_set_host_visibility(unsigned long kbuffer, int pagecount, bool enc)
{
enum hv_mem_host_visibility visibility = enc ?
VMBUS_PAGE_NOT_VISIBLE : VMBUS_PAGE_VISIBLE_READ_WRITE;
u64 *pfn_array;
int ret = 0;
- bool result = true;
int i, pfn;

pfn_array = kmalloc(HV_HYP_PAGE_SIZE, GFP_KERNEL);
@@ -530,17 +529,15 @@ static bool hv_vtom_set_host_visibility(unsigned long kbuffer, int pagecount, bo
if (pfn == HV_MAX_MODIFY_GPA_REP_COUNT || i == pagecount - 1) {
ret = hv_mark_gpa_visibility(pfn, pfn_array,
visibility);
- if (ret) {
- result = false;
+ if (ret)
goto err_free_pfn_array;
- }
pfn = 0;
}
}

err_free_pfn_array:
kfree(pfn_array);
- return result;
+ return ret;
}

static bool hv_vtom_tlb_flush_required(bool private)
diff --git a/arch/x86/include/asm/x86_init.h b/arch/x86/include/asm/x86_init.h
index c878616a18b8..c9503fe2d13a 100644
--- a/arch/x86/include/asm/x86_init.h
+++ b/arch/x86/include/asm/x86_init.h
@@ -150,8 +150,8 @@ struct x86_init_acpi {
* @enc_cache_flush_required Returns true if a cache flush is needed before changing page encryption status
*/
struct x86_guest {
- bool (*enc_status_change_prepare)(unsigned long vaddr, int npages, bool enc);
- bool (*enc_status_change_finish)(unsigned long vaddr, int npages, bool enc);
+ int (*enc_status_change_prepare)(unsigned long vaddr, int npages, bool enc);
+ int (*enc_status_change_finish)(unsigned long vaddr, int npages, bool enc);
bool (*enc_tlb_flush_required)(bool enc);
bool (*enc_cache_flush_required)(void);
};
diff --git a/arch/x86/kernel/x86_init.c b/arch/x86/kernel/x86_init.c
index a37ebd3b4773..f0f54e109eb9 100644
--- a/arch/x86/kernel/x86_init.c
+++ b/arch/x86/kernel/x86_init.c
@@ -131,8 +131,8 @@ struct x86_cpuinit_ops x86_cpuinit = {

static void default_nmi_init(void) { };

-static bool enc_status_change_prepare_noop(unsigned long vaddr, int npages, bool enc) { return true; }
-static bool enc_status_change_finish_noop(unsigned long vaddr, int npages, bool enc) { return true; }
+static int enc_status_change_prepare_noop(unsigned long vaddr, int npages, bool enc) { return 0; }
+static int enc_status_change_finish_noop(unsigned long vaddr, int npages, bool enc) { return 0; }
static bool enc_tlb_flush_required_noop(bool enc) { return false; }
static bool enc_cache_flush_required_noop(void) { return false; }
static bool is_private_mmio_noop(u64 addr) {return false; }
diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c
index 70b91de2e053..d314e577836d 100644
--- a/arch/x86/mm/mem_encrypt_amd.c
+++ b/arch/x86/mm/mem_encrypt_amd.c
@@ -283,7 +283,7 @@ static void enc_dec_hypercall(unsigned long vaddr, unsigned long size, bool enc)
#endif
}

-static bool amd_enc_status_change_prepare(unsigned long vaddr, int npages, bool enc)
+static int amd_enc_status_change_prepare(unsigned long vaddr, int npages, bool enc)
{
/*
* To maintain the security guarantees of SEV-SNP guests, make sure
@@ -292,11 +292,11 @@ static bool amd_enc_status_change_prepare(unsigned long vaddr, int npages, bool
if (cc_platform_has(CC_ATTR_GUEST_SEV_SNP) && !enc)
snp_set_memory_shared(vaddr, npages);

- return true;
+ return 0;
}

/* Return 0 unconditionally: the return value doesn't matter for the SEV side */
-static bool amd_enc_status_change_finish(unsigned long vaddr, int npages, bool enc)
+static int amd_enc_status_change_finish(unsigned long vaddr, int npages, bool enc)
{
/*
* After memory is mapped encrypted in the page table, validate it
@@ -308,7 +308,7 @@ static bool amd_enc_status_change_finish(unsigned long vaddr, int npages, bool e
if (!cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT))
enc_dec_hypercall(vaddr, npages << PAGE_SHIFT, enc);

- return true;
+ return 0;
}

static void __init __set_clr_pte_enc(pte_t *kpte, int level, bool enc)
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index e9b448d1b1b7..f92da8c9a86d 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -2152,8 +2152,9 @@ static int __set_memory_enc_pgtable(unsigned long addr, int numpages, bool enc)
cpa_flush(&cpa, x86_platform.guest.enc_cache_flush_required());

/* Notify hypervisor that we are about to set/clr encryption attribute. */
- if (!x86_platform.guest.enc_status_change_prepare(addr, numpages, enc))
- return -EIO;
+ ret = x86_platform.guest.enc_status_change_prepare(addr, numpages, enc);
+ if (ret)
+ return ret;

ret = __change_page_attr_set_clr(&cpa, 1);

@@ -2168,8 +2169,8 @@ static int __set_memory_enc_pgtable(unsigned long addr, int numpages, bool enc)

/* Notify hypervisor that we have successfully set/clr encryption attribute. */
if (!ret) {
- if (!x86_platform.guest.enc_status_change_finish(addr, numpages, enc))
- ret = -EIO;
+ ret = x86_platform.guest.enc_status_change_finish(addr,
+ numpages, enc);
}

return ret;
--
2.43.0


2024-02-12 10:49:35

by Kirill A. Shutemov

[permalink] [raw]
Subject: [PATCHv7 14/16] x86/smp: Add smp_ops.stop_this_cpu() callback

If the helper is defined, it is called instead of halt() to stop the CPU
at the end of stop_this_cpu() and on crash CPU shutdown.

ACPI MADT will use it to hand the CPU over to the BIOS so that it can
be woken up again after kexec.
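
For illustration, a platform hooks it up like this (hypothetical sketch;
the example_* names are made up, the real user is the MADT wakeup code
later in the series):

    static void example_stop_this_cpu(void)
    {
            /* Hand the CPU over to the firmware; never returns. */
            example_hand_cpu_to_bios();
    }

    static void __init example_smp_setup(void)
    {
            smp_ops.stop_this_cpu = example_stop_this_cpu;
    }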

Signed-off-by: Kirill A. Shutemov <[email protected]>
Acked-by: Kai Huang <[email protected]>
---
arch/x86/include/asm/smp.h | 1 +
arch/x86/kernel/process.c | 7 +++++++
arch/x86/kernel/reboot.c | 6 ++++++
3 files changed, 14 insertions(+)

diff --git a/arch/x86/include/asm/smp.h b/arch/x86/include/asm/smp.h
index 4fab2ed454f3..390d53fd34f9 100644
--- a/arch/x86/include/asm/smp.h
+++ b/arch/x86/include/asm/smp.h
@@ -38,6 +38,7 @@ struct smp_ops {
int (*cpu_disable)(void);
void (*cpu_die)(unsigned int cpu);
void (*play_dead)(void);
+ void (*stop_this_cpu)(void);

void (*send_call_func_ipi)(const struct cpumask *mask);
void (*send_call_func_single_ipi)(int cpu);
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index ab49ade31b0d..00c1b957476d 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -835,6 +835,13 @@ void __noreturn stop_this_cpu(void *dummy)
*/
cpumask_clear_cpu(cpu, &cpus_stop_mask);

+#ifdef CONFIG_SMP
+ if (smp_ops.stop_this_cpu) {
+ smp_ops.stop_this_cpu();
+ unreachable();
+ }
+#endif
+
for (;;) {
/*
* Use native_halt() so that memory contents don't change
diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
index 0574d4ad6b41..0a75efe579c0 100644
--- a/arch/x86/kernel/reboot.c
+++ b/arch/x86/kernel/reboot.c
@@ -880,6 +880,12 @@ static int crash_nmi_callback(unsigned int val, struct pt_regs *regs)
cpu_emergency_disable_virtualization();

atomic_dec(&waiting_for_crash_ipi);
+
+ if (smp_ops.stop_this_cpu) {
+ smp_ops.stop_this_cpu();
+ unreachable();
+ }
+
/* Assume hlt works */
halt();
for (;;)
--
2.43.0


2024-02-12 10:49:39

by Kirill A. Shutemov

[permalink] [raw]
Subject: [PATCHv7 09/16] x86/mm: Add callbacks to prepare encrypted memory for kexec

AMD SEV and Intel TDX guests allocate shared buffers for performing I/O.
This is done by allocating pages normally from the buddy allocator and
then converting them to shared using set_memory_decrypted().

On kexec, the second kernel is unaware of which memory has been
converted in this manner. It only sees E820_TYPE_RAM. Accessing shared
memory as private is fatal.

Therefore, the memory state must be reset to its original state before
starting the new kernel with kexec.

The process of converting shared memory back to private occurs in two
steps:

- enc_kexec_stop_conversion() stops new conversions.

- enc_kexec_unshare_mem() unshares all existing shared memory, reverting
it back to private.
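
A guest platform opts in by filling the new x86_platform.guest hooks.
As a sketch (mirroring what the TDX patch later in this series does):

    x86_platform.guest.enc_kexec_stop_conversion = tdx_kexec_stop_conversion;
    x86_platform.guest.enc_kexec_unshare_mem = tdx_kexec_unshare_mem;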

Signed-off-by: Kirill A. Shutemov <[email protected]>
Reviewed-by: Nikolay Borisov <[email protected]>
---
arch/x86/include/asm/x86_init.h | 2 ++
arch/x86/kernel/crash.c | 6 ++++++
arch/x86/kernel/reboot.c | 12 ++++++++++++
arch/x86/kernel/x86_init.c | 4 ++++
4 files changed, 24 insertions(+)

diff --git a/arch/x86/include/asm/x86_init.h b/arch/x86/include/asm/x86_init.h
index c9503fe2d13a..3196ff20a29e 100644
--- a/arch/x86/include/asm/x86_init.h
+++ b/arch/x86/include/asm/x86_init.h
@@ -154,6 +154,8 @@ struct x86_guest {
int (*enc_status_change_finish)(unsigned long vaddr, int npages, bool enc);
bool (*enc_tlb_flush_required)(bool enc);
bool (*enc_cache_flush_required)(void);
+ void (*enc_kexec_stop_conversion)(bool crash);
+ void (*enc_kexec_unshare_mem)(void);
};

/**
diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c
index b6b044356f1b..3001f4857ed7 100644
--- a/arch/x86/kernel/crash.c
+++ b/arch/x86/kernel/crash.c
@@ -124,6 +124,12 @@ void native_machine_crash_shutdown(struct pt_regs *regs)
#ifdef CONFIG_HPET_TIMER
hpet_disable();
#endif
+
+ if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT)) {
+ x86_platform.guest.enc_kexec_stop_conversion(true);
+ x86_platform.guest.enc_kexec_unshare_mem();
+ }
+
crash_save_cpu(regs, safe_smp_processor_id());
}

diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
index 830425e6d38e..0574d4ad6b41 100644
--- a/arch/x86/kernel/reboot.c
+++ b/arch/x86/kernel/reboot.c
@@ -12,6 +12,7 @@
#include <linux/delay.h>
#include <linux/objtool.h>
#include <linux/pgtable.h>
+#include <linux/kexec.h>
#include <acpi/reboot.h>
#include <asm/io.h>
#include <asm/apic.h>
@@ -716,6 +717,14 @@ static void native_machine_emergency_restart(void)

void native_machine_shutdown(void)
{
+ /*
+ * Call enc_kexec_stop_conversion() while all CPUs are still active and
+ * interrupts are enabled. This will allow all in-flight memory
+ * conversions to finish cleanly.
+ */
+ if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT) && kexec_in_progress)
+ x86_platform.guest.enc_kexec_stop_conversion(false);
+
/* Stop the cpus and apics */
#ifdef CONFIG_X86_IO_APIC
/*
@@ -752,6 +761,9 @@ void native_machine_shutdown(void)
#ifdef CONFIG_X86_64
x86_platform.iommu_shutdown();
#endif
+
+ if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT) && kexec_in_progress)
+ x86_platform.guest.enc_kexec_unshare_mem();
}

static void __machine_emergency_restart(int emergency)
diff --git a/arch/x86/kernel/x86_init.c b/arch/x86/kernel/x86_init.c
index f0f54e109eb9..b95206ebc621 100644
--- a/arch/x86/kernel/x86_init.c
+++ b/arch/x86/kernel/x86_init.c
@@ -135,6 +135,8 @@ static int enc_status_change_prepare_noop(unsigned long vaddr, int npages, bool
static int enc_status_change_finish_noop(unsigned long vaddr, int npages, bool enc) { return 0; }
static bool enc_tlb_flush_required_noop(bool enc) { return false; }
static bool enc_cache_flush_required_noop(void) { return false; }
+static void enc_kexec_stop_conversion_noop(bool crash) {}
+static void enc_kexec_unshare_mem_noop(void) {}
static bool is_private_mmio_noop(u64 addr) {return false; }

struct x86_platform_ops x86_platform __ro_after_init = {
@@ -158,6 +160,8 @@ struct x86_platform_ops x86_platform __ro_after_init = {
.enc_status_change_finish = enc_status_change_finish_noop,
.enc_tlb_flush_required = enc_tlb_flush_required_noop,
.enc_cache_flush_required = enc_cache_flush_required_noop,
+ .enc_kexec_stop_conversion = enc_kexec_stop_conversion_noop,
+ .enc_kexec_unshare_mem = enc_kexec_unshare_mem_noop,
},
};

--
2.43.0


2024-02-12 10:49:41

by Kirill A. Shutemov

[permalink] [raw]
Subject: [PATCHv7 13/16] x86/acpi: Do not attempt to bring up secondary CPUs in kexec case

ACPI MADT doesn't allow a CPU to be offlined after it has been onlined.
This limits kexec: the second kernel won't be able to use more than one
CPU.

To prevent a kexec kernel from onlining secondary CPUs, invalidate the
mailbox address in the ACPI MADT wakeup structure, which prevents the
kexec kernel from using it.

This is safe as the booting kernel has the mailbox address cached
already and acpi_wakeup_cpu() uses the cached value to bring up the
secondary CPUs.

Note: This is a Linux specific convention and not covered by the
ACPI specification.

Signed-off-by: Kirill A. Shutemov <[email protected]>
Reviewed-by: Kai Huang <[email protected]>
Reviewed-by: Kuppuswamy Sathyanarayanan <[email protected]>
---
arch/x86/kernel/acpi/madt_wakeup.c | 29 ++++++++++++++++++++++++++++-
1 file changed, 28 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/acpi/madt_wakeup.c b/arch/x86/kernel/acpi/madt_wakeup.c
index 004801b9b151..30820f9de5af 100644
--- a/arch/x86/kernel/acpi/madt_wakeup.c
+++ b/arch/x86/kernel/acpi/madt_wakeup.c
@@ -14,6 +14,11 @@ static struct acpi_madt_multiproc_wakeup_mailbox *acpi_mp_wake_mailbox __ro_afte

static int acpi_wakeup_cpu(u32 apicid, unsigned long start_ip)
{
+ if (!acpi_mp_wake_mailbox_paddr) {
+ pr_warn_once("No MADT mailbox: cannot bringup secondary CPUs. Booting with kexec?\n");
+ return -EOPNOTSUPP;
+ }
+
/*
* Remap mailbox memory only for the first call to acpi_wakeup_cpu().
*
@@ -64,6 +69,28 @@ static int acpi_wakeup_cpu(u32 apicid, unsigned long start_ip)
return 0;
}

+static void acpi_mp_disable_offlining(struct acpi_madt_multiproc_wakeup *mp_wake)
+{
+ cpu_hotplug_disable_offlining();
+
+ /*
+ * ACPI MADT doesn't allow a CPU to be offlined after it has been onlined.
+ * This limits kexec: the second kernel won't be able to use more than one CPU.
+ *
+ * To prevent a kexec kernel from onlining secondary CPUs, invalidate the
+ * mailbox address in the ACPI MADT wakeup structure, which prevents the
+ * kexec kernel from using it.
+ *
+ * This is safe as the booting kernel has the mailbox address cached
+ * already and acpi_wakeup_cpu() uses the cached value to bring up the
+ * secondary CPUs.
+ *
+ * Note: This is a Linux specific convention and not covered by the
+ * ACPI specification.
+ */
+ mp_wake->mailbox_address = 0;
+}
+
int __init acpi_parse_mp_wake(union acpi_subtable_headers *header,
const unsigned long end)
{
@@ -77,7 +104,7 @@ int __init acpi_parse_mp_wake(union acpi_subtable_headers *header,

acpi_mp_wake_mailbox_paddr = mp_wake->mailbox_address;

- cpu_hotplug_disable_offlining();
+ acpi_mp_disable_offlining(mp_wake);

apic_update_callback(wakeup_secondary_cpu_64, acpi_wakeup_cpu);

--
2.43.0


2024-02-12 10:50:15

by Kirill A. Shutemov

[permalink] [raw]
Subject: [PATCHv7 15/16] x86/mm: Introduce kernel_ident_mapping_free()

The helper complements kernel_ident_mapping_init(): it frees the
identity mapping that was previously allocated. It will be used in the
error path to free a partially allocated mapping or if the mapping is no
longer needed.

The caller provides a struct x86_mapping_info with the free_pgt_page()
callback hooked up and the pgd_t to free.
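
A hedged usage sketch, mirroring the error path added later in the
series (start/end stand in for the real range):

    struct x86_mapping_info info = {
            .alloc_pgt_page = alloc_pgt_page,
            .free_pgt_page  = free_pgt_page,
            .page_flag      = __PAGE_KERNEL_LARGE_EXEC,
    };
    pgd_t *pgd = alloc_pgt_page(NULL);

    if (!pgd)
            return -ENOMEM;

    if (kernel_ident_mapping_init(&info, pgd, start, end)) {
            /* Frees the partially built identity mapping */
            kernel_ident_mapping_free(&info, pgd);
            return -ENOMEM;
    }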

Signed-off-by: Kirill A. Shutemov <[email protected]>
Acked-by: Kai Huang <[email protected]>
---
arch/x86/include/asm/init.h | 3 ++
arch/x86/mm/ident_map.c | 73 +++++++++++++++++++++++++++++++++++++
2 files changed, 76 insertions(+)

diff --git a/arch/x86/include/asm/init.h b/arch/x86/include/asm/init.h
index cc9ccf61b6bd..14d72727d7ee 100644
--- a/arch/x86/include/asm/init.h
+++ b/arch/x86/include/asm/init.h
@@ -6,6 +6,7 @@

struct x86_mapping_info {
void *(*alloc_pgt_page)(void *); /* allocate buf for page table */
+ void (*free_pgt_page)(void *, void *); /* free buf for page table */
void *context; /* context for alloc_pgt_page */
unsigned long page_flag; /* page flag for PMD or PUD entry */
unsigned long offset; /* ident mapping offset */
@@ -16,4 +17,6 @@ struct x86_mapping_info {
int kernel_ident_mapping_init(struct x86_mapping_info *info, pgd_t *pgd_page,
unsigned long pstart, unsigned long pend);

+void kernel_ident_mapping_free(struct x86_mapping_info *info, pgd_t *pgd);
+
#endif /* _ASM_X86_INIT_H */
diff --git a/arch/x86/mm/ident_map.c b/arch/x86/mm/ident_map.c
index 968d7005f4a7..3996af7b4abf 100644
--- a/arch/x86/mm/ident_map.c
+++ b/arch/x86/mm/ident_map.c
@@ -4,6 +4,79 @@
* included by both the compressed kernel and the regular kernel.
*/

+static void free_pte(struct x86_mapping_info *info, pmd_t *pmd)
+{
+ pte_t *pte = pte_offset_kernel(pmd, 0);
+
+ info->free_pgt_page(pte, info->context);
+}
+
+static void free_pmd(struct x86_mapping_info *info, pud_t *pud)
+{
+ pmd_t *pmd = pmd_offset(pud, 0);
+ int i;
+
+ for (i = 0; i < PTRS_PER_PMD; i++) {
+ if (!pmd_present(pmd[i]))
+ continue;
+
+ if (pmd_leaf(pmd[i]))
+ continue;
+
+ free_pte(info, &pmd[i]);
+ }
+
+ info->free_pgt_page(pmd, info->context);
+}
+
+static void free_pud(struct x86_mapping_info *info, p4d_t *p4d)
+{
+ pud_t *pud = pud_offset(p4d, 0);
+ int i;
+
+ for (i = 0; i < PTRS_PER_PUD; i++) {
+ if (!pud_present(pud[i]))
+ continue;
+
+ if (pud_leaf(pud[i]))
+ continue;
+
+ free_pmd(info, &pud[i]);
+ }
+
+ info->free_pgt_page(pud, info->context);
+}
+
+static void free_p4d(struct x86_mapping_info *info, pgd_t *pgd)
+{
+ p4d_t *p4d = p4d_offset(pgd, 0);
+ int i;
+
+ for (i = 0; i < PTRS_PER_P4D; i++) {
+ if (!p4d_present(p4d[i]))
+ continue;
+
+ free_pud(info, &p4d[i]);
+ }
+
+ if (pgtable_l5_enabled())
+ info->free_pgt_page(pgd, info->context);
+}
+
+void kernel_ident_mapping_free(struct x86_mapping_info *info, pgd_t *pgd)
+{
+ int i;
+
+ for (i = 0; i < PTRS_PER_PGD; i++) {
+ if (!pgd_present(pgd[i]))
+ continue;
+
+ free_p4d(info, &pgd[i]);
+ }
+
+ info->free_pgt_page(pgd, info->context);
+}
+
static void ident_pmd_init(struct x86_mapping_info *info, pmd_t *pmd_page,
unsigned long addr, unsigned long end)
{
--
2.43.0


2024-02-12 12:50:19

by Kirill A. Shutemov

[permalink] [raw]
Subject: [PATCHv7 12/16] x86/acpi: Rename fields in acpi_madt_multiproc_wakeup structure

To prepare for the addition of support for MADT wakeup structure version
1, it is necessary to provide more appropriate names for the fields in
the structure.

The field 'mailbox_version' is renamed to 'version'. This field
signifies the version of the structure and the related protocols, rather
than the version of the mailbox. This field has not been utilized in the
code thus far.

The field 'base_address' is renamed to 'mailbox_address' to clarify the
kind of address it represents. In version 1, the structure includes the
reset vector address. Clear and distinct naming helps to prevent any
confusion.

Signed-off-by: Kirill A. Shutemov <[email protected]>
Reviewed-by: Kai Huang <[email protected]>
Reviewed-by: Kuppuswamy Sathyanarayanan <[email protected]>
---
arch/x86/kernel/acpi/madt_wakeup.c | 2 +-
include/acpi/actbl2.h | 4 ++--
2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/acpi/madt_wakeup.c b/arch/x86/kernel/acpi/madt_wakeup.c
index d222be8d7a07..004801b9b151 100644
--- a/arch/x86/kernel/acpi/madt_wakeup.c
+++ b/arch/x86/kernel/acpi/madt_wakeup.c
@@ -75,7 +75,7 @@ int __init acpi_parse_mp_wake(union acpi_subtable_headers *header,

acpi_table_print_madt_entry(&header->common);

- acpi_mp_wake_mailbox_paddr = mp_wake->base_address;
+ acpi_mp_wake_mailbox_paddr = mp_wake->mailbox_address;

cpu_hotplug_disable_offlining();

diff --git a/include/acpi/actbl2.h b/include/acpi/actbl2.h
index 9775384d61c6..e1a395af7591 100644
--- a/include/acpi/actbl2.h
+++ b/include/acpi/actbl2.h
@@ -1117,9 +1117,9 @@ struct acpi_madt_generic_translator {

struct acpi_madt_multiproc_wakeup {
struct acpi_subtable_header header;
- u16 mailbox_version;
+ u16 version;
u32 reserved; /* reserved - must be zero */
- u64 base_address;
+ u64 mailbox_address;
};

#define ACPI_MULTIPROC_WAKEUP_MB_OS_SIZE 2032
--
2.43.0


2024-02-12 13:16:30

by Kirill A. Shutemov

[permalink] [raw]
Subject: [PATCHv7 05/16] x86/kexec: Keep CR4.MCE set during kexec for TDX guest

TDX guests are not allowed to clear CR4.MCE. An attempt to clear it
leads to a #VE.

Use alternatives to keep the flag set during kexec for TDX guests.

The change doesn't affect non-TDX-guest environments.

Signed-off-by: Kirill A. Shutemov <[email protected]>
Reviewed-by: Kai Huang <[email protected]>
---
arch/x86/kernel/relocate_kernel_64.S | 5 +++++
1 file changed, 5 insertions(+)

diff --git a/arch/x86/kernel/relocate_kernel_64.S b/arch/x86/kernel/relocate_kernel_64.S
index 56cab1bb25f5..e144bcf60cbe 100644
--- a/arch/x86/kernel/relocate_kernel_64.S
+++ b/arch/x86/kernel/relocate_kernel_64.S
@@ -5,6 +5,8 @@
*/

#include <linux/linkage.h>
+#include <linux/stringify.h>
+#include <asm/alternative.h>
#include <asm/page_types.h>
#include <asm/kexec.h>
#include <asm/processor-flags.h>
@@ -145,12 +147,15 @@ SYM_CODE_START_LOCAL_NOALIGN(identity_mapped)
* Set cr4 to a known state:
* - physical address extension enabled
* - 5-level paging, if it was enabled before
+ * - Machine check exception on TDX guest. Clearing MCE is not allowed
+ * in TDX guests.
*/
movl $X86_CR4_PAE, %eax
testq $X86_CR4_LA57, %r13
jz 1f
orl $X86_CR4_LA57, %eax
1:
+ ALTERNATIVE "", __stringify(orl $X86_CR4_MCE, %eax), X86_FEATURE_TDX_GUEST
movq %rax, %cr4

jmp 1f
--
2.43.0


2024-02-12 13:46:56

by Kirill A. Shutemov

[permalink] [raw]
Subject: [PATCHv7 11/16] x86/mm: Make e820_end_ram_pfn() cover E820_TYPE_ACPI ranges

e820__end_of_ram_pfn() is used to calculate max_pfn which, among other
things, guides where direct mapping ends. Any memory above max_pfn is
not going to be present in the direct mapping.

e820__end_of_ram_pfn() finds the end of RAM based on the highest
E820_TYPE_RAM range, but it doesn't include E820_TYPE_ACPI ranges in the
calculation.

Despite the name, E820_TYPE_ACPI covers not only ACPI data, but also
EFI tables, and might be required by the kernel to function properly.

Usually the problem is hidden because there is some E820_TYPE_RAM memory
above E820_TYPE_ACPI. But crashkernel only presents pre-allocated crash
memory as E820_TYPE_RAM on boot. If the pre-allocated range is small, it
can end up below the last E820_TYPE_ACPI range.
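
For example, with a (made-up) crashkernel layout like the one below,
max_pfn would stop at the end of the usable range and the ACPI range
above it would be left out of the direct mapping:

    BIOS-e820: [mem 0x0000000000001000-0x000000002fffffff] usable
    BIOS-e820: [mem 0x0000000030000000-0x00000000300fffff] ACPI data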

Modify e820__end_of_ram_pfn() and e820__end_of_low_ram_pfn() to cover
E820_TYPE_ACPI memory.

The problem was discovered while debugging kexec for TDX guests. A TDX
guest uses E820_TYPE_ACPI to store the unaccepted memory bitmap and
passes it between the kernels on kexec.

Signed-off-by: Kirill A. Shutemov <[email protected]>
---
arch/x86/kernel/e820.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c
index fb8cf953380d..99c80680dc9e 100644
--- a/arch/x86/kernel/e820.c
+++ b/arch/x86/kernel/e820.c
@@ -827,7 +827,7 @@ u64 __init e820__memblock_alloc_reserved(u64 size, u64 align)
/*
* Find the highest page frame number we have available
*/
-static unsigned long __init e820_end_pfn(unsigned long limit_pfn, enum e820_type type)
+static unsigned long __init e820_end_ram_pfn(unsigned long limit_pfn)
{
int i;
unsigned long last_pfn = 0;
@@ -838,7 +838,8 @@ static unsigned long __init e820_end_pfn(unsigned long limit_pfn, enum e820_type
unsigned long start_pfn;
unsigned long end_pfn;

- if (entry->type != type)
+ if (entry->type != E820_TYPE_RAM &&
+ entry->type != E820_TYPE_ACPI)
continue;

start_pfn = entry->addr >> PAGE_SHIFT;
@@ -864,12 +865,12 @@ static unsigned long __init e820_end_pfn(unsigned long limit_pfn, enum e820_type

unsigned long __init e820__end_of_ram_pfn(void)
{
- return e820_end_pfn(MAX_ARCH_PFN, E820_TYPE_RAM);
+ return e820_end_ram_pfn(MAX_ARCH_PFN);
}

unsigned long __init e820__end_of_low_ram_pfn(void)
{
- return e820_end_pfn(1UL << (32 - PAGE_SHIFT), E820_TYPE_RAM);
+ return e820_end_ram_pfn(1UL << (32 - PAGE_SHIFT));
}

static void __init early_panic(char *msg)
--
2.43.0


2024-02-12 14:29:32

by Kirill A. Shutemov

[permalink] [raw]
Subject: [PATCHv7 16/16] x86/acpi: Add support for CPU offlining for ACPI MADT wakeup method

MADT Multiprocessor Wakeup structure version 1 brings support for CPU
offlining: the BIOS provides a reset vector that the CPU has to jump to
for offlining itself. The new TEST mailbox command can be used to test
whether the CPU offlined itself, which means the BIOS has control over
the CPU and can online it again via the ACPI MADT wakeup method.
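
The TEST handshake is a write-and-poll on the mailbox. In sketch form
(the full version is in acpi_mp_cpu_die() below):

    acpi_mp_wake_mailbox->apic_id = apicid;
    smp_store_release(&acpi_mp_wake_mailbox->command,
                      ACPI_MP_WAKE_COMMAND_TEST);

    /* BIOS clears 'command' once it has taken the CPU back */
    while (READ_ONCE(acpi_mp_wake_mailbox->command) && --timeout)
            udelay(1);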

Add CPU offlining support for the ACPI MADT wakeup method by
implementing custom cpu_die(), play_dead() and stop_this_cpu() SMP
operations.

CPU offlining makes it possible to hand over secondary CPUs across
kexec, not limiting the second kernel to a single CPU.

The change conforms to the approved ACPI spec change proposal. See the
Link.

Signed-off-by: Kirill A. Shutemov <[email protected]>
Link: https://lore.kernel.org/all/13356251.uLZWGnKmhe@kreacher
Acked-by: Kai Huang <[email protected]>
Reviewed-by: Kuppuswamy Sathyanarayanan <[email protected]>
---
arch/x86/include/asm/acpi.h | 2 +
arch/x86/kernel/acpi/Makefile | 2 +-
arch/x86/kernel/acpi/madt_playdead.S | 28 ++++
arch/x86/kernel/acpi/madt_wakeup.c | 184 ++++++++++++++++++++++++++-
include/acpi/actbl2.h | 15 ++-
5 files changed, 227 insertions(+), 4 deletions(-)
create mode 100644 arch/x86/kernel/acpi/madt_playdead.S

diff --git a/arch/x86/include/asm/acpi.h b/arch/x86/include/asm/acpi.h
index 2625b915ae7f..021cafa214c2 100644
--- a/arch/x86/include/asm/acpi.h
+++ b/arch/x86/include/asm/acpi.h
@@ -81,6 +81,8 @@ union acpi_subtable_headers;
int __init acpi_parse_mp_wake(union acpi_subtable_headers *header,
const unsigned long end);

+void asm_acpi_mp_play_dead(u64 reset_vector, u64 pgd_pa);
+
/*
* Check if the CPU can handle C2 and deeper
*/
diff --git a/arch/x86/kernel/acpi/Makefile b/arch/x86/kernel/acpi/Makefile
index 8c7329c88a75..37b1f28846de 100644
--- a/arch/x86/kernel/acpi/Makefile
+++ b/arch/x86/kernel/acpi/Makefile
@@ -4,7 +4,7 @@ obj-$(CONFIG_ACPI) += boot.o
obj-$(CONFIG_ACPI_SLEEP) += sleep.o wakeup_$(BITS).o
obj-$(CONFIG_ACPI_APEI) += apei.o
obj-$(CONFIG_ACPI_CPPC_LIB) += cppc.o
-obj-$(CONFIG_X86_ACPI_MADT_WAKEUP) += madt_wakeup.o
+obj-$(CONFIG_X86_ACPI_MADT_WAKEUP) += madt_wakeup.o madt_playdead.o

ifneq ($(CONFIG_ACPI_PROCESSOR),)
obj-y += cstate.o
diff --git a/arch/x86/kernel/acpi/madt_playdead.S b/arch/x86/kernel/acpi/madt_playdead.S
new file mode 100644
index 000000000000..4e498d28cdc8
--- /dev/null
+++ b/arch/x86/kernel/acpi/madt_playdead.S
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#include <linux/linkage.h>
+#include <asm/nospec-branch.h>
+#include <asm/page_types.h>
+#include <asm/processor-flags.h>
+
+ .text
+ .align PAGE_SIZE
+
+/*
+ * asm_acpi_mp_play_dead() - Hand over control of the CPU to the BIOS
+ *
+ * rdi: Address of the ACPI MADT MPWK ResetVector
+ * rsi: PGD of the identity mapping
+ */
+SYM_FUNC_START(asm_acpi_mp_play_dead)
+ /* Turn off global entries. Following CR3 write will flush them. */
+ movq %cr4, %rdx
+ andq $~(X86_CR4_PGE), %rdx
+ movq %rdx, %cr4
+
+ /* Switch to identity mapping */
+ movq %rsi, %cr3
+
+ /* Jump to reset vector */
+ ANNOTATE_RETPOLINE_SAFE
+ jmp *%rdi
+SYM_FUNC_END(asm_acpi_mp_play_dead)
diff --git a/arch/x86/kernel/acpi/madt_wakeup.c b/arch/x86/kernel/acpi/madt_wakeup.c
index 30820f9de5af..6cfe762be28b 100644
--- a/arch/x86/kernel/acpi/madt_wakeup.c
+++ b/arch/x86/kernel/acpi/madt_wakeup.c
@@ -1,10 +1,19 @@
// SPDX-License-Identifier: GPL-2.0-or-later
#include <linux/acpi.h>
#include <linux/cpu.h>
+#include <linux/delay.h>
#include <linux/io.h>
+#include <linux/kexec.h>
+#include <linux/memblock.h>
+#include <linux/pgtable.h>
+#include <linux/sched/hotplug.h>
#include <asm/apic.h>
#include <asm/barrier.h>
+#include <asm/init.h>
+#include <asm/intel_pt.h>
+#include <asm/nmi.h>
#include <asm/processor.h>
+#include <asm/reboot.h>

/* Physical address of the Multiprocessor Wakeup Structure mailbox */
static u64 acpi_mp_wake_mailbox_paddr __ro_after_init;
@@ -12,6 +21,154 @@ static u64 acpi_mp_wake_mailbox_paddr __ro_after_init;
/* Virtual address of the Multiprocessor Wakeup Structure mailbox */
static struct acpi_madt_multiproc_wakeup_mailbox *acpi_mp_wake_mailbox __ro_after_init;

+static u64 acpi_mp_pgd __ro_after_init;
+static u64 acpi_mp_reset_vector_paddr __ro_after_init;
+
+static void acpi_mp_stop_this_cpu(void)
+{
+ asm_acpi_mp_play_dead(acpi_mp_reset_vector_paddr, acpi_mp_pgd);
+}
+
+static void acpi_mp_play_dead(void)
+{
+ play_dead_common();
+ asm_acpi_mp_play_dead(acpi_mp_reset_vector_paddr, acpi_mp_pgd);
+}
+
+static void acpi_mp_cpu_die(unsigned int cpu)
+{
+ u32 apicid = per_cpu(x86_cpu_to_apicid, cpu);
+ unsigned long timeout;
+
+ /*
+ * Use TEST mailbox command to prove that BIOS got control over
+ * the CPU before declaring it dead.
+ *
+ * BIOS has to clear 'command' field of the mailbox.
+ */
+ acpi_mp_wake_mailbox->apic_id = apicid;
+ smp_store_release(&acpi_mp_wake_mailbox->command,
+ ACPI_MP_WAKE_COMMAND_TEST);
+
+ /* Don't wait longer than a second. */
+ timeout = USEC_PER_SEC;
+ while (READ_ONCE(acpi_mp_wake_mailbox->command) && --timeout)
+ udelay(1);
+
+ if (!timeout)
+ pr_err("Failed to hand over CPU %d to BIOS\n", cpu);
+}
+
+/* The argument is required to match type of x86_mapping_info::alloc_pgt_page */
+static void __init *alloc_pgt_page(void *dummy)
+{
+ return memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+}
+
+static void __init free_pgt_page(void *pgt, void *dummy)
+{
+ return memblock_free(pgt, PAGE_SIZE);
+}
+
+/*
+ * Make sure asm_acpi_mp_play_dead() is present in the identity mapping at
+ * the same place as in the kernel page tables. asm_acpi_mp_play_dead() switches
+ * to the identity mapping and the function has to be present at the same spot in
+ * the virtual address space before and after switching page tables.
+ */
+static int __init init_transition_pgtable(pgd_t *pgd)
+{
+ pgprot_t prot = PAGE_KERNEL_EXEC_NOENC;
+ unsigned long vaddr, paddr;
+ p4d_t *p4d;
+ pud_t *pud;
+ pmd_t *pmd;
+ pte_t *pte;
+
+ vaddr = (unsigned long)asm_acpi_mp_play_dead;
+ pgd += pgd_index(vaddr);
+ if (!pgd_present(*pgd)) {
+ p4d = (p4d_t *)alloc_pgt_page(NULL);
+ if (!p4d)
+ return -ENOMEM;
+ set_pgd(pgd, __pgd(__pa(p4d) | _KERNPG_TABLE));
+ }
+ p4d = p4d_offset(pgd, vaddr);
+ if (!p4d_present(*p4d)) {
+ pud = (pud_t *)alloc_pgt_page(NULL);
+ if (!pud)
+ return -ENOMEM;
+ set_p4d(p4d, __p4d(__pa(pud) | _KERNPG_TABLE));
+ }
+ pud = pud_offset(p4d, vaddr);
+ if (!pud_present(*pud)) {
+ pmd = (pmd_t *)alloc_pgt_page(NULL);
+ if (!pmd)
+ return -ENOMEM;
+ set_pud(pud, __pud(__pa(pmd) | _KERNPG_TABLE));
+ }
+ pmd = pmd_offset(pud, vaddr);
+ if (!pmd_present(*pmd)) {
+ pte = (pte_t *)alloc_pgt_page(NULL);
+ if (!pte)
+ return -ENOMEM;
+ set_pmd(pmd, __pmd(__pa(pte) | _KERNPG_TABLE));
+ }
+ pte = pte_offset_kernel(pmd, vaddr);
+
+ paddr = __pa(vaddr);
+ set_pte(pte, pfn_pte(paddr >> PAGE_SHIFT, prot));
+
+ return 0;
+}
+
+static int __init acpi_mp_setup_reset(u64 reset_vector)
+{
+ struct x86_mapping_info info = {
+ .alloc_pgt_page = alloc_pgt_page,
+ .free_pgt_page = free_pgt_page,
+ .page_flag = __PAGE_KERNEL_LARGE_EXEC,
+ .kernpg_flag = _KERNPG_TABLE_NOENC,
+ };
+ pgd_t *pgd;
+
+ pgd = alloc_pgt_page(NULL);
+ if (!pgd)
+ return -ENOMEM;
+
+ for (int i = 0; i < nr_pfn_mapped; i++) {
+ unsigned long mstart, mend;
+
+ mstart = pfn_mapped[i].start << PAGE_SHIFT;
+ mend = pfn_mapped[i].end << PAGE_SHIFT;
+ if (kernel_ident_mapping_init(&info, pgd, mstart, mend)) {
+ kernel_ident_mapping_free(&info, pgd);
+ return -ENOMEM;
+ }
+ }
+
+ if (kernel_ident_mapping_init(&info, pgd,
+ PAGE_ALIGN_DOWN(reset_vector),
+ PAGE_ALIGN(reset_vector + 1))) {
+ kernel_ident_mapping_free(&info, pgd);
+ return -ENOMEM;
+ }
+
+ if (init_transition_pgtable(pgd)) {
+ kernel_ident_mapping_free(&info, pgd);
+ return -ENOMEM;
+ }
+
+ smp_ops.play_dead = acpi_mp_play_dead;
+ smp_ops.stop_this_cpu = acpi_mp_stop_this_cpu;
+ smp_ops.cpu_die = acpi_mp_cpu_die;
+
+ acpi_mp_reset_vector_paddr = reset_vector;
+ acpi_mp_pgd = __pa(pgd);
+
+ return 0;
+}
+
static int acpi_wakeup_cpu(u32 apicid, unsigned long start_ip)
{
if (!acpi_mp_wake_mailbox_paddr) {
@@ -97,14 +254,37 @@ int __init acpi_parse_mp_wake(union acpi_subtable_headers *header,
struct acpi_madt_multiproc_wakeup *mp_wake;

mp_wake = (struct acpi_madt_multiproc_wakeup *)header;
- if (BAD_MADT_ENTRY(mp_wake, end))
+
+ /*
+ * Cannot use the standard BAD_MADT_ENTRY() to sanity check the @mp_wake
+ * entry. 'sizeof (struct acpi_madt_multiproc_wakeup)' can be larger
+ * than the actual size of the MP wakeup entry in ACPI table because the
+ * 'reset_vector' is only available in the V1 MP wakeup structure.
+ */
+ if (!mp_wake)
+ return -EINVAL;
+ if (end - (unsigned long)mp_wake < ACPI_MADT_MP_WAKEUP_SIZE_V0)
+ return -EINVAL;
+ if (mp_wake->header.length < ACPI_MADT_MP_WAKEUP_SIZE_V0)
return -EINVAL;

acpi_table_print_madt_entry(&header->common);

acpi_mp_wake_mailbox_paddr = mp_wake->mailbox_address;

- acpi_mp_disable_offlining(mp_wake);
+ if (mp_wake->version >= ACPI_MADT_MP_WAKEUP_VERSION_V1 &&
+ mp_wake->header.length >= ACPI_MADT_MP_WAKEUP_SIZE_V1) {
+ if (acpi_mp_setup_reset(mp_wake->reset_vector)) {
+ pr_warn("Failed to setup MADT reset vector\n");
+ acpi_mp_disable_offlining(mp_wake);
+ }
+ } else {
+ /*
+ * CPU offlining requires version 1 of the ACPI MADT wakeup
+ * structure.
+ */
+ acpi_mp_disable_offlining(mp_wake);
+ }

apic_update_callback(wakeup_secondary_cpu_64, acpi_wakeup_cpu);

diff --git a/include/acpi/actbl2.h b/include/acpi/actbl2.h
index e1a395af7591..2aedda70ef88 100644
--- a/include/acpi/actbl2.h
+++ b/include/acpi/actbl2.h
@@ -1120,8 +1120,20 @@ struct acpi_madt_multiproc_wakeup {
u16 version;
u32 reserved; /* reserved - must be zero */
u64 mailbox_address;
+ u64 reset_vector;
};

+/* Values for Version field above */
+
+enum acpi_madt_multiproc_wakeup_version {
+ ACPI_MADT_MP_WAKEUP_VERSION_NONE = 0,
+ ACPI_MADT_MP_WAKEUP_VERSION_V1 = 1,
+ ACPI_MADT_MP_WAKEUP_VERSION_RESERVED = 2, /* 2 and greater are reserved */
+};
+
+#define ACPI_MADT_MP_WAKEUP_SIZE_V0 16
+#define ACPI_MADT_MP_WAKEUP_SIZE_V1 24
+
#define ACPI_MULTIPROC_WAKEUP_MB_OS_SIZE 2032
#define ACPI_MULTIPROC_WAKEUP_MB_FIRMWARE_SIZE 2048

@@ -1134,7 +1146,8 @@ struct acpi_madt_multiproc_wakeup_mailbox {
u8 reserved_firmware[ACPI_MULTIPROC_WAKEUP_MB_FIRMWARE_SIZE]; /* reserved for firmware use */
};

-#define ACPI_MP_WAKE_COMMAND_WAKEUP 1
+#define ACPI_MP_WAKE_COMMAND_WAKEUP 1
+#define ACPI_MP_WAKE_COMMAND_TEST 2

/* 17: CPU Core Interrupt Controller (ACPI 6.5) */

--
2.43.0


2024-02-12 14:56:06

by Kirill A. Shutemov

[permalink] [raw]
Subject: [PATCHv7 04/16] cpu/hotplug, x86/acpi: Disable CPU offlining for ACPI MADT wakeup

ACPI MADT doesn't allow a CPU to be offlined after it has been woken up.

Currently, CPU hotplug is prevented based on the confidential computing
attribute which is set for Intel TDX. But TDX is not the only possible
user of the wakeup method.

Disable CPU offlining on ACPI MADT wakeup enumeration.

Signed-off-by: Kirill A. Shutemov <[email protected]>
Reviewed-by: Thomas Gleixner <[email protected]>
---
arch/x86/coco/core.c | 1 -
arch/x86/kernel/acpi/madt_wakeup.c | 3 +++
include/linux/cc_platform.h | 10 ----------
kernel/cpu.c | 3 +--
4 files changed, 4 insertions(+), 13 deletions(-)

diff --git a/arch/x86/coco/core.c b/arch/x86/coco/core.c
index eeec9986570e..f07c3bb7deab 100644
--- a/arch/x86/coco/core.c
+++ b/arch/x86/coco/core.c
@@ -20,7 +20,6 @@ static bool noinstr intel_cc_platform_has(enum cc_attr attr)
{
switch (attr) {
case CC_ATTR_GUEST_UNROLL_STRING_IO:
- case CC_ATTR_HOTPLUG_DISABLED:
case CC_ATTR_GUEST_MEM_ENCRYPT:
case CC_ATTR_MEM_ENCRYPT:
return true;
diff --git a/arch/x86/kernel/acpi/madt_wakeup.c b/arch/x86/kernel/acpi/madt_wakeup.c
index cf79ea6f3007..d222be8d7a07 100644
--- a/arch/x86/kernel/acpi/madt_wakeup.c
+++ b/arch/x86/kernel/acpi/madt_wakeup.c
@@ -1,5 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-or-later
#include <linux/acpi.h>
+#include <linux/cpu.h>
#include <linux/io.h>
#include <asm/apic.h>
#include <asm/barrier.h>
@@ -76,6 +77,8 @@ int __init acpi_parse_mp_wake(union acpi_subtable_headers *header,

acpi_mp_wake_mailbox_paddr = mp_wake->base_address;

+ cpu_hotplug_disable_offlining();
+
apic_update_callback(wakeup_secondary_cpu_64, acpi_wakeup_cpu);

return 0;
diff --git a/include/linux/cc_platform.h b/include/linux/cc_platform.h
index cb0d6cd1c12f..d08dd65b5c43 100644
--- a/include/linux/cc_platform.h
+++ b/include/linux/cc_platform.h
@@ -80,16 +80,6 @@ enum cc_attr {
* using AMD SEV-SNP features.
*/
CC_ATTR_GUEST_SEV_SNP,
-
- /**
- * @CC_ATTR_HOTPLUG_DISABLED: Hotplug is not supported or disabled.
- *
- * The platform/OS is running as a guest/virtual machine does not
- * support CPU hotplug feature.
- *
- * Examples include TDX Guest.
- */
- CC_ATTR_HOTPLUG_DISABLED,
};

#ifdef CONFIG_ARCH_HAS_CC_PLATFORM
diff --git a/kernel/cpu.c b/kernel/cpu.c
index 7c28a07afe8b..3ffdbb1f7bb4 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -1532,8 +1532,7 @@ static int cpu_down_maps_locked(unsigned int cpu, enum cpuhp_state target)
* If the platform does not support hotplug, report it explicitly to
* differentiate it from a transient offlining failure.
*/
- if (cc_platform_has(CC_ATTR_HOTPLUG_DISABLED) ||
- cpu_hotplug_offline_disabled)
+ if (cpu_hotplug_offline_disabled)
return -EOPNOTSUPP;
if (cpu_hotplug_disabled)
return -EBUSY;
--
2.43.0


2024-02-19 05:12:52

by Baoquan He

[permalink] [raw]
Subject: Re: [PATCHv7 07/16] x86/mm: Return correct level from lookup_address() if pte is none

On 02/12/24 at 12:44pm, Kirill A. Shutemov wrote:
> lookup_address() only returns correct page table level for the entry if
> the entry is not none.
>
> Make the helper to always return correct 'level'. It allows to implement
> iterator over kernel page tables using lookup_address().
>
> Add one more entry into enum pg_level to indicate size of VA covered by
> one PGD entry in 5-level paging mode.
>
> Signed-off-by: Kirill A. Shutemov <[email protected]>
> Reviewed-by: Rick Edgecombe <[email protected]>
> ---
> arch/x86/include/asm/pgtable_types.h | 1 +
> arch/x86/mm/pat/set_memory.c | 8 ++++----
> 2 files changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
> index 0b748ee16b3d..3f648ffdfbe5 100644
> --- a/arch/x86/include/asm/pgtable_types.h
> +++ b/arch/x86/include/asm/pgtable_types.h
> @@ -548,6 +548,7 @@ enum pg_level {
> PG_LEVEL_2M,
> PG_LEVEL_1G,
> PG_LEVEL_512G,
> + PG_LEVEL_256T,
> PG_LEVEL_NUM
> };
>
> diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> index f92da8c9a86d..3612e3167147 100644
> --- a/arch/x86/mm/pat/set_memory.c
> +++ b/arch/x86/mm/pat/set_memory.c
> @@ -666,32 +666,32 @@ pte_t *lookup_address_in_pgd(pgd_t *pgd, unsigned long address,

LGTM,

Reviewed-by: Baoquan He <[email protected]>

By the way, we may need update the code comment above function
lookup_address_in_pgd() and function lookup_address() since they don't
reflect the latest behaviour of them.

> pud_t *pud;
> pmd_t *pmd;
>
> - *level = PG_LEVEL_NONE;
> + *level = PG_LEVEL_256T;
>
> if (pgd_none(*pgd))
> return NULL;
>
> + *level = PG_LEVEL_512G;
> p4d = p4d_offset(pgd, address);
> if (p4d_none(*p4d))
> return NULL;
>
> - *level = PG_LEVEL_512G;
> if (p4d_large(*p4d) || !p4d_present(*p4d))
> return (pte_t *)p4d;
>
> + *level = PG_LEVEL_1G;
> pud = pud_offset(p4d, address);
> if (pud_none(*pud))
> return NULL;
>
> - *level = PG_LEVEL_1G;
> if (pud_large(*pud) || !pud_present(*pud))
> return (pte_t *)pud;
>
> + *level = PG_LEVEL_2M;
> pmd = pmd_offset(pud, address);
> if (pmd_none(*pmd))
> return NULL;
>
> - *level = PG_LEVEL_2M;
> if (pmd_large(*pmd) || !pmd_present(*pmd))
> return (pte_t *)pmd;
>
> --
> 2.43.0
>


2024-02-19 13:52:58

by Kirill A. Shutemov

[permalink] [raw]
Subject: Re: [PATCHv7 07/16] x86/mm: Return correct level from lookup_address() if pte is none

On Mon, Feb 19, 2024 at 01:12:32PM +0800, Baoquan He wrote:
> On 02/12/24 at 12:44pm, Kirill A. Shutemov wrote:
> > lookup_address() only returns correct page table level for the entry if
> > the entry is not none.
> >
> > Make the helper to always return correct 'level'. It allows to implement
> > iterator over kernel page tables using lookup_address().
> >
> > Add one more entry into enum pg_level to indicate size of VA covered by
> > one PGD entry in 5-level paging mode.
> >
> > Signed-off-by: Kirill A. Shutemov <[email protected]>
> > Reviewed-by: Rick Edgecombe <[email protected]>
> > ---
> > arch/x86/include/asm/pgtable_types.h | 1 +
> > arch/x86/mm/pat/set_memory.c | 8 ++++----
> > 2 files changed, 5 insertions(+), 4 deletions(-)
> >
> > diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
> > index 0b748ee16b3d..3f648ffdfbe5 100644
> > --- a/arch/x86/include/asm/pgtable_types.h
> > +++ b/arch/x86/include/asm/pgtable_types.h
> > @@ -548,6 +548,7 @@ enum pg_level {
> > PG_LEVEL_2M,
> > PG_LEVEL_1G,
> > PG_LEVEL_512G,
> > + PG_LEVEL_256T,
> > PG_LEVEL_NUM
> > };
> >
> > diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> > index f92da8c9a86d..3612e3167147 100644
> > --- a/arch/x86/mm/pat/set_memory.c
> > +++ b/arch/x86/mm/pat/set_memory.c
> > @@ -666,32 +666,32 @@ pte_t *lookup_address_in_pgd(pgd_t *pgd, unsigned long address,
>
> LGTM,
>
> Reviewed-by: Baoquan He <[email protected]>
>
> By the way, we may need update the code comment above function
> lookup_address_in_pgd() and function lookup_address() since they don't
> reflect the latest behaviour of them.

I am not sure which part of the comment you think doesn't reflect the
behaviour. From my PoV, the changed code matches the comment more
closely than the original.

Hm?
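
For what it's worth, the iterator the commit message refers to looks
roughly like this (simplified sketch; the real user is the kexec
unshare loop later in the series):

	unsigned long addr = PAGE_OFFSET;
	unsigned long end = PAGE_OFFSET + get_max_mapped();

	while (addr < end) {
		unsigned int level;
		pte_t *pte = lookup_address(addr, &level);

		/*
		 * 'level' is now valid even if the entry is none, so
		 * holes in the direct mapping can be stepped over at
		 * the right granularity instead of 4k at a time.
		 */
		if (pte && !pte_none(*pte)) {
			/* ... act on the mapping ... */
		}

		addr += page_level_size(level);
	}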

--
Kiryl Shutsemau / Kirill A. Shutemov

2024-02-20 01:18:43

by Kalra, Ashish

[permalink] [raw]
Subject: [PATCH 0/2] x86/snp: Add kexec support

From: Ashish Kalra <[email protected]>

The patchset adds bits and pieces to get kexec (and crashkernel) working
on SNP guests.

This patchset requires [1] for chained guest kexec to work correctly.

[1]: https://lore.kernel.org/lkml/[email protected]/

Ashish Kalra (2):
x86/mm: Do not zap PMD entry mapping unaccepted memory table during
kdump.
x86/snp: Convert shared memory back to private on kexec

arch/x86/include/asm/probe_roms.h | 1 +
arch/x86/include/asm/sev.h | 8 ++
arch/x86/kernel/probe_roms.c | 16 +++
arch/x86/kernel/sev.c | 211 ++++++++++++++++++++++++++++++
arch/x86/mm/init_64.c | 4 +-
arch/x86/mm/mem_encrypt_amd.c | 18 ++-
6 files changed, 256 insertions(+), 2 deletions(-)

--
2.34.1


2024-02-20 01:19:00

by Kalra, Ashish

[permalink] [raw]
Subject: [PATCH 1/2] x86/mm: Do not zap PMD entry mapping unaccepted memory table during kdump.

From: Ashish Kalra <[email protected]>

During crashkernel boot, only pre-allocated crash memory is presented as
E820_TYPE_RAM. This can cause the PMD entry mapping the unaccepted
memory table to be zapped during phys_pmd_init(), as SNP/TDX guests use
E820_TYPE_ACPI to store the unaccepted memory table and pass it between
the kernels on kexec/kdump.

E820_TYPE_ACPI covers not only ACPI data, but also EFI tables, and might
be required by the kernel to function properly.

The problem was discovered while debugging kdump for an SNP guest. The
unaccepted memory table, stored as E820_TYPE_ACPI and passed between the
kernels on kdump, was getting zapped because the PMD entry mapping it
lies above the E820_TYPE_RAM range of the reserved crashkernel memory.

Signed-off-by: Ashish Kalra <[email protected]>
---
arch/x86/mm/init_64.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index a0dffaca6d2b..207c6dddde0c 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -524,7 +524,9 @@ phys_pmd_init(pmd_t *pmd_page, unsigned long paddr, unsigned long paddr_end,
!e820__mapped_any(paddr & PMD_MASK, paddr_next,
E820_TYPE_RAM) &&
!e820__mapped_any(paddr & PMD_MASK, paddr_next,
- E820_TYPE_RESERVED_KERN))
+ E820_TYPE_RESERVED_KERN) &&
+ !e820__mapped_any(paddr & PMD_MASK, paddr_next,
+ E820_TYPE_ACPI))
set_pmd_init(pmd, __pmd(0), init);
continue;
}
--
2.34.1


2024-02-20 01:19:16

by Kalra, Ashish

[permalink] [raw]
Subject: [PATCH 2/2] x86/snp: Convert shared memory back to private on kexec

From: Ashish Kalra <[email protected]>

SNP guests allocate shared buffers to perform I/O. This is done by
allocating pages normally from the buddy allocator and converting them
to shared with set_memory_decrypted().

The second kernel has no idea what memory is converted this way. It only
sees E820_TYPE_RAM.

Accessing shared memory via a private mapping will cause unrecoverable
RMP page-faults.

On kexec, walk the direct mapping and convert all shared memory back to
private. This makes all RAM private again, and the second kernel may use
it normally. Additionally, for SNP guests, convert all bss decrypted
section pages back to private and switch ROM regions back to shared so
that their revalidation does not fail during kexec kernel boot.

The conversion occurs in two steps: stopping new conversions and
unsharing all memory. In the case of normal kexec, the stopping of
conversions takes place while scheduling is still functioning. This
allows for waiting until any ongoing conversions are finished. The
second step is carried out when all CPUs except one are inactive and
interrupts are disabled. This prevents any conflicts with code that may
access shared memory.
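
In code terms the ordering is roughly as follows (illustrative sketch,
call sites simplified):

	/* normal kexec path, sketch */
	x86_platform.guest.enc_kexec_stop_conversion(false);	/* scheduler still up: can wait */
	/* ... all other CPUs parked, interrupts disabled ... */
	x86_platform.guest.enc_kexec_unshare_mem();		/* single CPU: walk and convert */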

Signed-off-by: Ashish Kalra <[email protected]>
---
arch/x86/include/asm/probe_roms.h | 1 +
arch/x86/include/asm/sev.h | 8 ++
arch/x86/kernel/probe_roms.c | 16 +++
arch/x86/kernel/sev.c | 211 ++++++++++++++++++++++++++++++
arch/x86/mm/mem_encrypt_amd.c | 18 ++-
5 files changed, 253 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/probe_roms.h b/arch/x86/include/asm/probe_roms.h
index 1c7f3815bbd6..d50b67dbff33 100644
--- a/arch/x86/include/asm/probe_roms.h
+++ b/arch/x86/include/asm/probe_roms.h
@@ -6,4 +6,5 @@ struct pci_dev;
extern void __iomem *pci_map_biosrom(struct pci_dev *pdev);
extern void pci_unmap_biosrom(void __iomem *rom);
extern size_t pci_biosrom_size(struct pci_dev *pdev);
+extern void snp_kexec_unprep_rom_memory(void);
#endif
diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
index 5b4a1ce3d368..dd236d7e9407 100644
--- a/arch/x86/include/asm/sev.h
+++ b/arch/x86/include/asm/sev.h
@@ -81,6 +81,10 @@ extern void vc_no_ghcb(void);
extern void vc_boot_ghcb(void);
extern bool handle_vc_boot_ghcb(struct pt_regs *regs);

+extern atomic_t conversions_in_progress;
+extern bool conversion_allowed;
+extern unsigned long pg_level_to_pfn(int level, pte_t *kpte, pgprot_t *ret_prot);
+
/* PVALIDATE return codes */
#define PVALIDATE_FAIL_SIZEMISMATCH 6

@@ -213,6 +217,8 @@ int snp_issue_guest_request(u64 exit_code, struct snp_req_data *input, struct sn
void snp_accept_memory(phys_addr_t start, phys_addr_t end);
u64 snp_get_unsupported_features(u64 status);
u64 sev_get_status(void);
+void snp_kexec_unshare_mem(void);
+void snp_kexec_stop_conversion(bool crash);
#else
static inline void sev_es_ist_enter(struct pt_regs *regs) { }
static inline void sev_es_ist_exit(void) { }
@@ -241,6 +247,8 @@ static inline int snp_issue_guest_request(u64 exit_code, struct snp_req_data *in
static inline void snp_accept_memory(phys_addr_t start, phys_addr_t end) { }
static inline u64 snp_get_unsupported_features(u64 status) { return 0; }
static inline u64 sev_get_status(void) { return 0; }
+static inline void snp_kexec_unshare_mem(void) {}
+static inline void snp_kexec_stop_conversion(bool crash) {}
#endif

#endif
diff --git a/arch/x86/kernel/probe_roms.c b/arch/x86/kernel/probe_roms.c
index 319fef37d9dc..457f1e5c8d00 100644
--- a/arch/x86/kernel/probe_roms.c
+++ b/arch/x86/kernel/probe_roms.c
@@ -177,6 +177,22 @@ size_t pci_biosrom_size(struct pci_dev *pdev)
}
EXPORT_SYMBOL(pci_biosrom_size);

+void snp_kexec_unprep_rom_memory(void)
+{
+ unsigned long vaddr, npages, sz;
+
+ /*
+ * Switch back ROM regions to shared so that their validation
+ * does not fail during kexec kernel boot.
+ */
+ vaddr = (unsigned long)__va(video_rom_resource.start);
+ sz = (system_rom_resource.end + 1) - video_rom_resource.start;
+ npages = PAGE_ALIGN(sz) >> PAGE_SHIFT;
+
+ snp_set_memory_shared(vaddr, npages);
+}
+EXPORT_SYMBOL(snp_kexec_unprep_rom_memory);
+
#define ROMSIGNATURE 0xaa55

static int __init romsignature(const unsigned char *rom)
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index c67285824e82..765ab83129eb 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -23,6 +23,9 @@
#include <linux/platform_device.h>
#include <linux/io.h>
#include <linux/psp-sev.h>
+#include <linux/pagewalk.h>
+#include <linux/cacheflush.h>
+#include <linux/delay.h>
#include <uapi/linux/sev-guest.h>

#include <asm/cpu_entry_area.h>
@@ -40,6 +43,7 @@
#include <asm/apic.h>
#include <asm/cpuid.h>
#include <asm/cmdline.h>
+#include <asm/probe_roms.h>

#define DR7_RESET_VALUE 0x400

@@ -71,6 +75,13 @@ static struct ghcb *boot_ghcb __section(".data");
/* Bitmap of SEV features supported by the hypervisor */
static u64 sev_hv_features __ro_after_init;

+/* Last address to be switched to private during kexec */
+static unsigned long last_address_shd_kexec;
+
+static bool crash_requested;
+atomic_t conversions_in_progress;
+bool conversion_allowed = true;
+
/* #VC handler runtime per-CPU data */
struct sev_es_runtime_data {
struct ghcb ghcb_page;
@@ -906,6 +917,206 @@ void snp_accept_memory(phys_addr_t start, phys_addr_t end)
set_pages_state(vaddr, npages, SNP_PAGE_STATE_PRIVATE);
}

+static inline bool pte_decrypted(pte_t pte)
+{
+ return cc_mkdec(pte_val(pte)) == pte_val(pte);
+}
+
+static int set_pte_enc(pte_t *kpte, int level, void *va)
+{
+ pgprot_t old_prot, new_prot;
+ unsigned long pfn, pa, size;
+ pte_t new_pte;
+
+ pfn = pg_level_to_pfn(level, kpte, &old_prot);
+ if (!pfn)
+ return 0;
+
+ new_prot = old_prot;
+ pgprot_val(new_prot) |= _PAGE_ENC;
+ pa = pfn << PAGE_SHIFT;
+ size = page_level_size(level);
+
+ /*
+ * Change the physical page attribute from C=0 to C=1. Flush the
+ * caches to ensure that data gets accessed with the correct C-bit.
+ */
+ clflush_cache_range(va, size);
+
+ /* Change the page encryption mask. */
+ new_pte = pfn_pte(pfn, new_prot);
+ set_pte_atomic(kpte, new_pte);
+
+ return 1;
+}
+
+static int unshare_pte(pte_t *pte, unsigned long addr, int pages, int level)
+{
+ struct sev_es_runtime_data *data;
+ struct ghcb *ghcb;
+
+ data = this_cpu_read(runtime_data);
+ ghcb = &data->ghcb_page;
+
+ /*
+ * check for GHCB for being part of a PMD range.
+ */
+ if ((unsigned long)ghcb >= addr &&
+ (unsigned long)ghcb <= (addr + (pages * PAGE_SIZE))) {
+ /*
+ * setup last address to be made private so that this GHCB
+ * is made private at the end of unshared loop so that RMP
+ * does not possibly getting PSMASHed from using the
+ * MSR protocol.
+ */
+ pr_debug("setting boot_ghcb to NULL for this cpu ghcb\n");
+ last_address_shd_kexec = addr;
+ return 1;
+ }
+ if (!set_pte_enc(pte, level, (void *)addr))
+ return 0;
+ snp_set_memory_private(addr, pages);
+
+ return 1;
+}
+
+static void unshare_all_memory(bool unmap)
+{
+ unsigned long addr, end;
+
+ /*
+ * Walk direct mapping and convert all shared memory back to private,
+ */
+
+ addr = PAGE_OFFSET;
+ end = PAGE_OFFSET + get_max_mapped();
+
+ while (addr < end) {
+ unsigned long size;
+ unsigned int level;
+ pte_t *pte;
+
+ pte = lookup_address(addr, &level);
+ size = page_level_size(level);
+
+ /*
+ * pte_none() check is required to skip physical memory holes in the direct mapping.
+ */
+ if (pte && pte_decrypted(*pte) && !pte_none(*pte)) {
+ int pages = size / PAGE_SIZE;
+
+ if (!unshare_pte(pte, addr, pages, level)) {
+ pr_err("Failed to unshare range %#lx-%#lx\n",
+ addr, addr + size);
+ }
+
+ }
+
+ addr += size;
+ }
+ __flush_tlb_all();
+
+}
+
+static void unshare_all_bss_decrypted_memory(void)
+{
+ unsigned long vaddr, vaddr_end;
+ unsigned long size;
+ unsigned int level;
+ unsigned int npages;
+ pte_t *pte;
+
+ vaddr = (unsigned long)__start_bss_decrypted;
+ vaddr_end = (unsigned long)__start_bss_decrypted_unused;
+ npages = (vaddr_end - vaddr) >> PAGE_SHIFT;
+ for (; vaddr < vaddr_end; vaddr += PAGE_SIZE) {
+ pte = lookup_address(vaddr, &level);
+ if (!pte || !pte_decrypted(*pte) || pte_none(*pte))
+ continue;
+
+ size = page_level_size(level);
+ set_pte_enc(pte, level, (void *)vaddr);
+ }
+ vaddr = (unsigned long)__start_bss_decrypted;
+ snp_set_memory_private(vaddr, npages);
+}
+
+void snp_kexec_stop_conversion(bool crash)
+{
+ /* Stop new private<->shared conversions */
+ conversion_allowed = false;
+ crash_requested = crash;
+
+ /*
+ * Make sure conversion_allowed is cleared before checking
+ * conversions_in_progress.
+ */
+ barrier();
+
+ /*
+ * Crash kernel reaches here with interrupts disabled: can't wait for
+ * conversions to finish.
+ *
+ * If race happened, just report and proceed.
+ */
+ if (!crash) {
+ unsigned long timeout;
+
+ /*
+ * Wait for in-flight conversions to complete.
+ *
+ * Do not wait more than 30 seconds.
+ */
+ timeout = 30 * USEC_PER_SEC;
+ while (atomic_read(&conversions_in_progress) && timeout--)
+ udelay(1);
+ }
+
+ if (atomic_read(&conversions_in_progress))
+ pr_warn("Failed to finish shared<->private conversions\n");
+}
+
+void snp_kexec_unshare_mem(void)
+{
+ if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP))
+ return;
+
+ /*
+ * Switch back any specific memory regions such as option
+ * ROM regions back to shared so that (re)validation does
+ * not fail when kexec kernel boots.
+ */
+ snp_kexec_unprep_rom_memory();
+
+ unshare_all_memory(true);
+
+ unshare_all_bss_decrypted_memory();
+
+ if (last_address_shd_kexec) {
+ unsigned long size;
+ unsigned int level;
+ pte_t *pte;
+
+ /*
+ * Switch to using the MSR protocol to change this cpu's
+ * GHCB to private.
+ */
+ boot_ghcb = NULL;
+ /*
+ * All the per-cpu GHCBs have been switched back to private,
+ * so can't do any more GHCB calls to the hypervisor beyond
+ * this point till the kexec kernel starts running.
+ */
+ sev_cfg.ghcbs_initialized = false;
+
+ pr_debug("boot ghcb 0x%lx\n", last_address_shd_kexec);
+ pte = lookup_address(last_address_shd_kexec, &level);
+ size = page_level_size(level);
+ set_pte_enc(pte, level, (void *)last_address_shd_kexec);
+ snp_set_memory_private(last_address_shd_kexec, (size / PAGE_SIZE));
+ }
+}
+
static int snp_set_vmsa(void *va, bool vmsa)
{
u64 attrs;
diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c
index d314e577836d..87b6475358ad 100644
--- a/arch/x86/mm/mem_encrypt_amd.c
+++ b/arch/x86/mm/mem_encrypt_amd.c
@@ -214,7 +214,7 @@ void __init sme_map_bootdata(char *real_mode_data)
__sme_early_map_unmap_mem(__va(cmdline_paddr), COMMAND_LINE_SIZE, true);
}

-static unsigned long pg_level_to_pfn(int level, pte_t *kpte, pgprot_t *ret_prot)
+unsigned long pg_level_to_pfn(int level, pte_t *kpte, pgprot_t *ret_prot)
{
unsigned long pfn = 0;
pgprot_t prot;
@@ -285,6 +285,17 @@ static void enc_dec_hypercall(unsigned long vaddr, unsigned long size, bool enc)

static int amd_enc_status_change_prepare(unsigned long vaddr, int npages, bool enc)
{
+ atomic_inc(&conversions_in_progress);
+
+ /*
+ * Check after bumping conversions_in_progress to serialize
+ * against snp_kexec_stop_conversion().
+ */
+ if (!conversion_allowed) {
+ atomic_dec(&conversions_in_progress);
+ return -EBUSY;
+ }
+
/*
* To maintain the security guarantees of SEV-SNP guests, make sure
* to invalidate the memory before encryption attribute is cleared.
@@ -308,6 +319,8 @@ static int amd_enc_status_change_finish(unsigned long vaddr, int npages, bool en
if (!cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT))
enc_dec_hypercall(vaddr, npages << PAGE_SHIFT, enc);

+ atomic_dec(&conversions_in_progress);
+
return 0;
}

@@ -468,6 +481,9 @@ void __init sme_early_init(void)
x86_platform.guest.enc_tlb_flush_required = amd_enc_tlb_flush_required;
x86_platform.guest.enc_cache_flush_required = amd_enc_cache_flush_required;

+ x86_platform.guest.enc_kexec_stop_conversion = snp_kexec_stop_conversion;
+ x86_platform.guest.enc_kexec_unshare_mem = snp_kexec_unshare_mem;
+
/*
* AMD-SEV-ES intercepts the RDMSR to read the X2APIC ID in the
* parallel bringup low level code. That raises #VC which cannot be
--
2.34.1


2024-02-20 10:26:09

by Baoquan He

[permalink] [raw]
Subject: Re: [PATCHv7 07/16] x86/mm: Return correct level from lookup_address() if pte is none

On 02/19/24 at 03:52pm, Kirill A. Shutemov wrote:
> On Mon, Feb 19, 2024 at 01:12:32PM +0800, Baoquan He wrote:
> > On 02/12/24 at 12:44pm, Kirill A. Shutemov wrote:
> > > lookup_address() only returns correct page table level for the entry if
> > > the entry is not none.
> > >
> > > Make the helper to always return correct 'level'. It allows to implement
> > > iterator over kernel page tables using lookup_address().
> > >
> > > Add one more entry into enum pg_level to indicate size of VA covered by
> > > one PGD entry in 5-level paging mode.
> > >
> > > Signed-off-by: Kirill A. Shutemov <[email protected]>
> > > Reviewed-by: Rick Edgecombe <[email protected]>
> > > ---
> > > arch/x86/include/asm/pgtable_types.h | 1 +
> > > arch/x86/mm/pat/set_memory.c | 8 ++++----
> > > 2 files changed, 5 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
> > > index 0b748ee16b3d..3f648ffdfbe5 100644
> > > --- a/arch/x86/include/asm/pgtable_types.h
> > > +++ b/arch/x86/include/asm/pgtable_types.h
> > > @@ -548,6 +548,7 @@ enum pg_level {
> > > PG_LEVEL_2M,
> > > PG_LEVEL_1G,
> > > PG_LEVEL_512G,
> > > + PG_LEVEL_256T,
> > > PG_LEVEL_NUM
> > > };
> > >
> > > diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> > > index f92da8c9a86d..3612e3167147 100644
> > > --- a/arch/x86/mm/pat/set_memory.c
> > > +++ b/arch/x86/mm/pat/set_memory.c
> > > @@ -666,32 +666,32 @@ pte_t *lookup_address_in_pgd(pgd_t *pgd, unsigned long address,
> >
> > LGTM,
> >
> > Reviewed-by: Baoquan He <[email protected]>
> >
> > By the way, we may need update the code comment above function
> > lookup_address_in_pgd() and function lookup_address() since they don't
> > reflect the latest behaviour of them.
>
> I am not sure what part of the comment you see doesn't reflect the
> behaviour. From my PoV, changed code matches the comment closer that
> original.

Oh, I didn't make it clear. I mean update the code comment for
lookup_address(), and add code comment for lookup_address_in_pgd() to
mention the level thing. Maybe it's a chance to do that.

===1>
*
* Lookup the page table entry for a virtual address. Return a pointer
* to the entry and the level of the mapping.
*
* Note: We return pud and pmd either when the entry is marked large
~~~~~~~~~~~ seems we return p4d too
* or when the present bit is not set. Otherwise we would return a
* pointer to a nonexisting mapping.
~~~~~~~~~~~~~~~ NULL?
*/
pte_t *lookup_address(unsigned long address, unsigned int *level)
{
return lookup_address_in_pgd(pgd_offset_k(address), address, level);
}
EXPORT_SYMBOL_GPL(lookup_address);
===

===2>
/*
* Lookup the page table entry for a virtual address in a specific pgd.
* Return a pointer to the entry and the level of the mapping.
~~ also could return NULL if it's none entry. And do we need to
mention the level thing?
*/
pte_t *lookup_address_in_pgd(pgd_t *pgd, unsigned long address,
unsigned int *level)
..
}


2024-02-20 12:37:05

by Kirill A. Shutemov

[permalink] [raw]
Subject: Re: [PATCHv7 07/16] x86/mm: Return correct level from lookup_address() if pte is none

On Tue, Feb 20, 2024 at 06:25:43PM +0800, Baoquan He wrote:
> > I am not sure which part of the comment you think doesn't reflect the
> > behaviour. From my PoV, the changed code matches the comment more
> > closely than the original.
>
> Oh, I didn't make it clear. I mean update the code comment for
> lookup_address(), and add code comment for lookup_address_in_pgd() to
> mention the level thing. Maybe it's a chance to do that.
>
> ===1>
> *
> * Lookup the page table entry for a virtual address. Return a pointer
> * to the entry and the level of the mapping.
> *
> * Note: We return pud and pmd either when the entry is marked large
> ~~~~~~~~~~~ seems we return p4d too
> * or when the present bit is not set. Otherwise we would return a
> * pointer to a nonexisting mapping.
> ~~~~~~~~~~~~~~~ NULL?
> */
> pte_t *lookup_address(unsigned long address, unsigned int *level)
> {
> return lookup_address_in_pgd(pgd_offset_k(address), address, level);
> }
> EXPORT_SYMBOL_GPL(lookup_address);
> ===
>
> ===2>
> /*
> * Lookup the page table entry for a virtual address in a specific pgd.
> * Return a pointer to the entry and the level of the mapping.
> ~~ also could return NULL if it's none entry. And do we need to
> mention the level thing?
> */
> pte_t *lookup_address_in_pgd(pgd_t *pgd, unsigned long address,
> unsigned int *level)
> ...
> }
>

What about this fixup:

diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 3612e3167147..425ff6e192e6 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -657,7 +657,8 @@ static inline pgprot_t verify_rwx(pgprot_t old, pgprot_t new, unsigned long star

/*
* Lookup the page table entry for a virtual address in a specific pgd.
- * Return a pointer to the entry and the level of the mapping.
+ * Return a pointer to the entry and the level of the mapping (or NULL if
+ * the entry is none) and level of the entry.
*/
pte_t *lookup_address_in_pgd(pgd_t *pgd, unsigned long address,
unsigned int *level)
@@ -704,9 +705,8 @@ pte_t *lookup_address_in_pgd(pgd_t *pgd, unsigned long address,
* Lookup the page table entry for a virtual address. Return a pointer
* to the entry and the level of the mapping.
*
- * Note: We return pud and pmd either when the entry is marked large
- * or when the present bit is not set. Otherwise we would return a
- * pointer to a nonexisting mapping.
+ * Note: the function returns p4d, pud and pmd either when the entry is marked
+ * large or when the present bit is not set. Otherwise it returns NULL.
*/
pte_t *lookup_address(unsigned long address, unsigned int *level)
{
--
Kiryl Shutsemau / Kirill A. Shutemov

2024-02-20 12:42:21

by Kirill A. Shutemov

[permalink] [raw]
Subject: Re: [PATCH 1/2] x86/mm: Do not zap PMD entry mapping unaccepted memory table during kdump.

On Tue, Feb 20, 2024 at 01:18:29AM +0000, Ashish Kalra wrote:
> From: Ashish Kalra <[email protected]>
>
> During crashkernel boot, only pre-allocated crash memory is presented as
> E820_TYPE_RAM. This can cause the PMD entry mapping the unaccepted
> memory table to be zapped during phys_pmd_init(), as SNP/TDX guests use
> E820_TYPE_ACPI to store the unaccepted memory table and pass it between
> the kernels on kexec/kdump.
>
> E820_TYPE_ACPI covers not only ACPI data, but also EFI tables, and might
> be required by the kernel to function properly.
>
> The problem was discovered while debugging kdump for an SNP guest. The
> unaccepted memory table, stored as E820_TYPE_ACPI and passed between the
> kernels on kdump, was getting zapped because the PMD entry mapping it
> lies above the E820_TYPE_RAM range of the reserved crashkernel memory.
>
> Signed-off-by: Ashish Kalra <[email protected]>
> ---
> arch/x86/mm/init_64.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
> index a0dffaca6d2b..207c6dddde0c 100644
> --- a/arch/x86/mm/init_64.c
> +++ b/arch/x86/mm/init_64.c
> @@ -524,7 +524,9 @@ phys_pmd_init(pmd_t *pmd_page, unsigned long paddr, unsigned long paddr_end,
> !e820__mapped_any(paddr & PMD_MASK, paddr_next,
> E820_TYPE_RAM) &&
> !e820__mapped_any(paddr & PMD_MASK, paddr_next,
> - E820_TYPE_RESERVED_KERN))
> + E820_TYPE_RESERVED_KERN) &&
> + !e820__mapped_any(paddr & PMD_MASK, paddr_next,
> + E820_TYPE_ACPI))
> set_pmd_init(pmd, __pmd(0), init);
> continue;

Why do you single out phys_pmd_init()? I think it has to be addressed for
all page table levels as we do for E820_TYPE_RAM and E820_TYPE_RESERVED_KERN.

--
Kiryl Shutsemau / Kirill A. Shutemov

2024-02-20 19:13:41

by Kalra, Ashish

[permalink] [raw]
Subject: Re: [PATCH 1/2] x86/mm: Do not zap PMD entry mapping unaccepted memory table during kdump.

Hi Kirill,

On 2/20/2024 6:42 AM, Kirill A. Shutemov wrote:
> On Tue, Feb 20, 2024 at 01:18:29AM +0000, Ashish Kalra wrote:
>> From: Ashish Kalra <[email protected]>
>>
>> During crashkernel boot, only pre-allocated crash memory is presented as
>> E820_TYPE_RAM. This can cause the PMD entry mapping the unaccepted
>> memory table to be zapped during phys_pmd_init(), as SNP/TDX guests use
>> E820_TYPE_ACPI to store the unaccepted memory table and pass it between
>> the kernels on kexec/kdump.
>>
>> E820_TYPE_ACPI covers not only ACPI data, but also EFI tables, and might
>> be required by the kernel to function properly.
>>
>> The problem was discovered while debugging kdump for an SNP guest. The
>> unaccepted memory table, stored as E820_TYPE_ACPI and passed between the
>> kernels on kdump, was getting zapped because the PMD entry mapping it
>> lies above the E820_TYPE_RAM range of the reserved crashkernel memory.
>>
>> Signed-off-by: Ashish Kalra <[email protected]>
>> ---
>> arch/x86/mm/init_64.c | 4 +++-
>> 1 file changed, 3 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
>> index a0dffaca6d2b..207c6dddde0c 100644
>> --- a/arch/x86/mm/init_64.c
>> +++ b/arch/x86/mm/init_64.c
>> @@ -524,7 +524,9 @@ phys_pmd_init(pmd_t *pmd_page, unsigned long paddr, unsigned long paddr_end,
>> !e820__mapped_any(paddr & PMD_MASK, paddr_next,
>> E820_TYPE_RAM) &&
>> !e820__mapped_any(paddr & PMD_MASK, paddr_next,
>> - E820_TYPE_RESERVED_KERN))
>> + E820_TYPE_RESERVED_KERN) &&
>> + !e820__mapped_any(paddr & PMD_MASK, paddr_next,
>> + E820_TYPE_ACPI))
>> set_pmd_init(pmd, __pmd(0), init);
>> continue;
> Why do you single out phys_pmd_init()? I think it has to be addressed for
> all page table levels as we do for E820_TYPE_RAM and E820_TYPE_RESERVED_KERN.

I believe I only discovered the issue with PMD entries (phys_pmd_init())
because of the crashkernel reserved memory size and the E820_TYPE_ACPI
physical memory range mapping on my test system, but you are right, this
fix needs to be done for all page table levels, and I will also add the
fix in phys_pte_init(), phys_pud_init() and phys_p4d_init().

Thanks, Ashish
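
A consolidated check shared by all four levels could look like this
(hypothetical helper name, for illustration):

	static bool e820__any_mapped_type(u64 start, u64 end)
	{
		return e820__mapped_any(start, end, E820_TYPE_RAM) ||
		       e820__mapped_any(start, end, E820_TYPE_RESERVED_KERN) ||
		       e820__mapped_any(start, end, E820_TYPE_ACPI);
	}

with phys_pte_init(), phys_pmd_init(), phys_pud_init() and
phys_p4d_init() all calling it before zapping an entry.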


2024-02-21 02:38:05

by Baoquan He

[permalink] [raw]
Subject: Re: [PATCHv7 07/16] x86/mm: Return correct level from lookup_address() if pte is none

On 02/20/24 at 02:36pm, Kirill A. Shutemov wrote:
> On Tue, Feb 20, 2024 at 06:25:43PM +0800, Baoquan He wrote:
> > > I am not sure which part of the comment you think doesn't reflect the
> > > behaviour. From my PoV, the changed code matches the comment more
> > > closely than the original.
> >
> > Oh, I didn't make it clear. I mean update the code comment for
> > lookup_address(), and add code comment for lookup_address_in_pgd() to
> > mention the level thing. Maybe it's a chance to do that.
> >
> > ===1>
> > *
> > * Lookup the page table entry for a virtual address. Return a pointer
> > * to the entry and the level of the mapping.
> > *
> > * Note: We return pud and pmd either when the entry is marked large
> > ~~~~~~~~~~~ seems we return p4d too
> > * or when the present bit is not set. Otherwise we would return a
> > * pointer to a nonexisting mapping.
> > ~~~~~~~~~~~~~~~ NULL?
> > */
> > pte_t *lookup_address(unsigned long address, unsigned int *level)
> > {
> > return lookup_address_in_pgd(pgd_offset_k(address), address, level);
> > }
> > EXPORT_SYMBOL_GPL(lookup_address);
> > ===
> >
> > ===2>
> > /*
> > * Lookup the page table entry for a virtual address in a specific pgd.
> > * Return a pointer to the entry and the level of the mapping.
> > ~~ also could return NULL if it's none entry. And do we need to
> > mention the level thing?
> > */
> > pte_t *lookup_address_in_pgd(pgd_t *pgd, unsigned long address,
> > unsigned int *level)
> > ...
> > }
> >
>
> What about this fixup:

Some nitpicks.
>
> diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> index 3612e3167147..425ff6e192e6 100644
> --- a/arch/x86/mm/pat/set_memory.c
> +++ b/arch/x86/mm/pat/set_memory.c
> @@ -657,7 +657,8 @@ static inline pgprot_t verify_rwx(pgprot_t old, pgprot_t new, unsigned long star
>
> /*
> * Lookup the page table entry for a virtual address in a specific pgd.
> - * Return a pointer to the entry and the level of the mapping.
> + * Return a pointer to the entry and the level of the mapping (or NULL if
> + * the entry is none) and level of the entry.
^ this right parenthesis may need to be moved to the end.


=======
* Return a pointer to the entry and the level of the mapping (or NULL if
* the entry is none and level of the entry).
=======

> */
> pte_t *lookup_address_in_pgd(pgd_t *pgd, unsigned long address,
> unsigned int *level)
> @@ -704,9 +705,8 @@ pte_t *lookup_address_in_pgd(pgd_t *pgd, unsigned long address,
> * Lookup the page table entry for a virtual address. Return a pointer
> * to the entry and the level of the mapping.
> *
> - * Note: We return pud and pmd either when the entry is marked large
> - * or when the present bit is not set. Otherwise we would return a
> - * pointer to a nonexisting mapping.
> + * Note: the function returns p4d, pud and pmd either when the entry is marked
~~~
^ s/and/or/
> + * large or when the present bit is not set. Otherwise it returns NULL.
> */
> pte_t *lookup_address(unsigned long address, unsigned int *level)
> {
> --
> Kiryl Shutsemau / Kirill A. Shutemov
>


2024-02-21 14:16:55

by Kirill A. Shutemov

[permalink] [raw]
Subject: Re: [PATCHv7 07/16] x86/mm: Return correct level from lookup_address() if pte is none

On Wed, Feb 21, 2024 at 10:37:29AM +0800, Baoquan He wrote:
> > diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> > index 3612e3167147..425ff6e192e6 100644
> > --- a/arch/x86/mm/pat/set_memory.c
> > +++ b/arch/x86/mm/pat/set_memory.c
> > @@ -657,7 +657,8 @@ static inline pgprot_t verify_rwx(pgprot_t old, pgprot_t new, unsigned long star
> >
> > /*
> > * Lookup the page table entry for a virtual address in a specific pgd.
> > - * Return a pointer to the entry and the level of the mapping.
> > + * Return a pointer to the entry and the level of the mapping (or NULL if
> > + * the entry is none) and level of the entry.
> ^ this right parenthesis may need to be moved to the end.
>
>
> =======
> * Return a pointer to the entry and the level of the mapping (or NULL if
> * the entry is none and level of the entry).
> =======

Emm.. I like my variant more. We return the level regardless of whether
the entry is none or not. I don't see a reason to repeat it twice.

> > */
> > pte_t *lookup_address_in_pgd(pgd_t *pgd, unsigned long address,
> > unsigned int *level)
> > @@ -704,9 +705,8 @@ pte_t *lookup_address_in_pgd(pgd_t *pgd, unsigned long address,
> > * Lookup the page table entry for a virtual address. Return a pointer
> > * to the entry and the level of the mapping.
> > *
> > - * Note: We return pud and pmd either when the entry is marked large
> > - * or when the present bit is not set. Otherwise we would return a
> > - * pointer to a nonexisting mapping.
> > + * Note: the function returns p4d, pud and pmd either when the entry is marked
> ~~~
> ^ s/and/or/

Fair enough.

--
Kiryl Shutsemau / Kirill A. Shutemov

2024-02-21 20:47:21

by Tom Lendacky

[permalink] [raw]
Subject: Re: [PATCH 2/2] x86/snp: Convert shared memory back to private on kexec

On 2/19/24 19:18, Ashish Kalra wrote:
> From: Ashish Kalra <[email protected]>
>
> SNP guests allocate shared buffers to perform I/O. This is done by
> allocating pages normally from the buddy allocator and converting them
> to shared with set_memory_decrypted().
>
> The second kernel has no idea what memory is converted this way. It only
> sees E820_TYPE_RAM.
>
> Accessing shared memory via a private mapping will cause unrecoverable
> RMP page-faults.
>
> On kexec, walk the direct mapping and convert all shared memory back to
> private. This makes all RAM private again, and the second kernel may use
> it normally. Additionally, for SNP guests, convert all bss decrypted
> section pages back to private and switch ROM regions back to shared so
> that their revalidation does not fail during kexec kernel boot.
>
> The conversion occurs in two steps: stopping new conversions and
> unsharing all memory. In the case of normal kexec, the stopping of
> conversions takes place while scheduling is still functioning. This
> allows for waiting until any ongoing conversions are finished. The
> second step is carried out when all CPUs except one are inactive and
> interrupts are disabled. This prevents any conflicts with code that may
> access shared memory.

This seems like this patch should be broken down into multiple patches,
with the final patch setting x86_platform.guest.enc_kexec_stop_conversion
and x86_platform.guest.enc_kexec_unshare_mem.

>
> Signed-off-by: Ashish Kalra <[email protected]>
> ---
> arch/x86/include/asm/probe_roms.h | 1 +
> arch/x86/include/asm/sev.h | 8 ++
> arch/x86/kernel/probe_roms.c | 16 +++
> arch/x86/kernel/sev.c | 211 ++++++++++++++++++++++++++++++
> arch/x86/mm/mem_encrypt_amd.c | 18 ++-
> 5 files changed, 253 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/include/asm/probe_roms.h b/arch/x86/include/asm/probe_roms.h
> index 1c7f3815bbd6..d50b67dbff33 100644
> --- a/arch/x86/include/asm/probe_roms.h
> +++ b/arch/x86/include/asm/probe_roms.h
> @@ -6,4 +6,5 @@ struct pci_dev;
> extern void __iomem *pci_map_biosrom(struct pci_dev *pdev);
> extern void pci_unmap_biosrom(void __iomem *rom);
> extern size_t pci_biosrom_size(struct pci_dev *pdev);
> +extern void snp_kexec_unprep_rom_memory(void);
> #endif
> diff --git a/arch/x86/include/asm/sev.h b/arch/x86/include/asm/sev.h
> index 5b4a1ce3d368..dd236d7e9407 100644
> --- a/arch/x86/include/asm/sev.h
> +++ b/arch/x86/include/asm/sev.h
> @@ -81,6 +81,10 @@ extern void vc_no_ghcb(void);
> extern void vc_boot_ghcb(void);
> extern bool handle_vc_boot_ghcb(struct pt_regs *regs);
>
> +extern atomic_t conversions_in_progress;
> +extern bool conversion_allowed;
> +extern unsigned long pg_level_to_pfn(int level, pte_t *kpte, pgprot_t *ret_prot);
> +
> /* PVALIDATE return codes */
> #define PVALIDATE_FAIL_SIZEMISMATCH 6
>
> @@ -213,6 +217,8 @@ int snp_issue_guest_request(u64 exit_code, struct snp_req_data *input, struct sn
> void snp_accept_memory(phys_addr_t start, phys_addr_t end);
> u64 snp_get_unsupported_features(u64 status);
> u64 sev_get_status(void);
> +void snp_kexec_unshare_mem(void);
> +void snp_kexec_stop_conversion(bool crash);
> #else
> static inline void sev_es_ist_enter(struct pt_regs *regs) { }
> static inline void sev_es_ist_exit(void) { }
> @@ -241,6 +247,8 @@ static inline int snp_issue_guest_request(u64 exit_code, struct snp_req_data *in
> static inline void snp_accept_memory(phys_addr_t start, phys_addr_t end) { }
> static inline u64 snp_get_unsupported_features(u64 status) { return 0; }
> static inline u64 sev_get_status(void) { return 0; }
> +static inline void snp_kexec_unshare_mem(void) {}
> +static inline void snp_kexec_stop_conversion(bool crash) {}
> #endif
>
> #endif
> diff --git a/arch/x86/kernel/probe_roms.c b/arch/x86/kernel/probe_roms.c
> index 319fef37d9dc..457f1e5c8d00 100644
> --- a/arch/x86/kernel/probe_roms.c
> +++ b/arch/x86/kernel/probe_roms.c
> @@ -177,6 +177,22 @@ size_t pci_biosrom_size(struct pci_dev *pdev)
> }
> EXPORT_SYMBOL(pci_biosrom_size);
>
> +void snp_kexec_unprep_rom_memory(void)
> +{
> + unsigned long vaddr, npages, sz;
> +
> + /*
> + * Switch back ROM regions to shared so that their validation
> + * does not fail during kexec kernel boot.
> + */
> + vaddr = (unsigned long)__va(video_rom_resource.start);
> + sz = (system_rom_resource.end + 1) - video_rom_resource.start;
> + npages = PAGE_ALIGN(sz) >> PAGE_SHIFT;
> +
> + snp_set_memory_shared(vaddr, npages);
> +}
> +EXPORT_SYMBOL(snp_kexec_unprep_rom_memory);
> +
> #define ROMSIGNATURE 0xaa55
>
> static int __init romsignature(const unsigned char *rom)
> diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
> index c67285824e82..765ab83129eb 100644
> --- a/arch/x86/kernel/sev.c
> +++ b/arch/x86/kernel/sev.c
> @@ -23,6 +23,9 @@
> #include <linux/platform_device.h>
> #include <linux/io.h>
> #include <linux/psp-sev.h>
> +#include <linux/pagewalk.h>
> +#include <linux/cacheflush.h>
> +#include <linux/delay.h>
> #include <uapi/linux/sev-guest.h>
>
> #include <asm/cpu_entry_area.h>
> @@ -40,6 +43,7 @@
> #include <asm/apic.h>
> #include <asm/cpuid.h>
> #include <asm/cmdline.h>
> +#include <asm/probe_roms.h>
>
> #define DR7_RESET_VALUE 0x400
>
> @@ -71,6 +75,13 @@ static struct ghcb *boot_ghcb __section(".data");
> /* Bitmap of SEV features supported by the hypervisor */
> static u64 sev_hv_features __ro_after_init;
>
> +/* Last address to be switched to private during kexec */
> +static unsigned long last_address_shd_kexec;

Maybe kexec_last_address_to_make_private ? Or just something that makes a
bit more sense.

> +
> +static bool crash_requested;
> +atomic_t conversions_in_progress;
> +bool conversion_allowed = true;
> +
> /* #VC handler runtime per-CPU data */
> struct sev_es_runtime_data {
> struct ghcb ghcb_page;
> @@ -906,6 +917,206 @@ void snp_accept_memory(phys_addr_t start, phys_addr_t end)
> set_pages_state(vaddr, npages, SNP_PAGE_STATE_PRIVATE);
> }
>
> +static inline bool pte_decrypted(pte_t pte)
> +{
> + return cc_mkdec(pte_val(pte)) == pte_val(pte);
> +}
> +

This is duplicated in TDX code, arch/x86/coco/tdx/tdx.c, looks like
something that can go in a header file, maybe mem_encrypt.h.

> +static int set_pte_enc(pte_t *kpte, int level, void *va)
> +{
> + pgprot_t old_prot, new_prot;
> + unsigned long pfn, pa, size;
> + pte_t new_pte;
> +
> + pfn = pg_level_to_pfn(level, kpte, &old_prot);
> + if (!pfn)
> + return 0;

Not sure this matters... a zero PFN is a valid PFN, it's just marked not
present. This seems a bit overdone to me; see the more compact version
below to check whether it works.

> +
> + new_prot = old_prot;
> + pgprot_val(new_prot) |= _PAGE_ENC;
> + pa = pfn << PAGE_SHIFT;
> + size = page_level_size(level);
> +
> + /*
> + * Change the physical page attribute from C=0 to C=1. Flush the
> + * caches to ensure that data gets accessed with the correct C-bit.
> + */
> + clflush_cache_range(va, size);
> +
> + /* Change the page encryption mask. */
> + new_pte = pfn_pte(pfn, new_prot);
> + set_pte_atomic(kpte, new_pte);
> +
> + return 1;
> +}

static bool set_pte_enc(pte_t *kpte, int level, void *va)
{
pte_t new_pte;

if (pte_none(*kpte))
return false;

if (pte_present(*kpte))
clflush_cache_range(va, page_level_size(level));

new_pte = cc_mkenc(*kpte);
set_pte_atomic(kpte, new_pte);

return true;
}

> +
> +static int unshare_pte(pte_t *pte, unsigned long addr, int pages, int level)

Maybe a better name is make_pte_private ?

And if you are only returning 0 or 1, it begs to be a bool.

> +{
> + struct sev_es_runtime_data *data;
> + struct ghcb *ghcb;
> +
> + data = this_cpu_read(runtime_data);
> + ghcb = &data->ghcb_page;
> +
> + /*
> + * check for GHCB for being part of a PMD range.
> + */

Single line comment.

> + if ((unsigned long)ghcb >= addr &&
> + (unsigned long)ghcb <= (addr + (pages * PAGE_SIZE))) {
> + /*
> + * setup last address to be made private so that this GHCB
> + * is made private at the end of unshared loop so that RMP
> + * does not possibly getting PSMASHed from using the
> + * MSR protocol.
> + */

Please clarify this comment a bit more... it's a bit hard to follow.

> + pr_debug("setting boot_ghcb to NULL for this cpu ghcb\n");
> + last_address_shd_kexec = addr;
> + return 1;
> + }

Add a blank line here.

> + if (!set_pte_enc(pte, level, (void *)addr))
> + return 0;

Add a blank line here.

> + snp_set_memory_private(addr, pages);
> +
> + return 1;
> +}
> +
> +static void unshare_all_memory(bool unmap)

Unused input, looks like this can be removed.

> +{
> + unsigned long addr, end;
> +
> + /*
> + * Walk direct mapping and convert all shared memory back to private,
> + */
> +
> + addr = PAGE_OFFSET;
> + end = PAGE_OFFSET + get_max_mapped();
> +
> + while (addr < end) {
> + unsigned long size;
> + unsigned int level;
> + pte_t *pte;
> +
> + pte = lookup_address(addr, &level);
> + size = page_level_size(level);
> +
> + /*
> + * pte_none() check is required to skip physical memory holes in the direct mapping.
> + */
> + if (pte && pte_decrypted(*pte) && !pte_none(*pte)) {
> + int pages = size / PAGE_SIZE;
> +
> + if (!unshare_pte(pte, addr, pages, level)) {
> + pr_err("Failed to unshare range %#lx-%#lx\n",
> + addr, addr + size);
> + }
> +
> + }
> +
> + addr += size;
> + }
> + __flush_tlb_all();

This is also mostly in the TDX code and begs to be made common and not
copied... please figure out a way to do the "different" things through a
registered callback or such.
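
Something along these lines, perhaps (sketch, names hypothetical):

	/*
	 * Common walker, e.g. in arch/x86/mm/mem_encrypt.c; SNP/TDX
	 * supply only the callback that makes one mapping private.
	 */
	void enc_unshare_all_memory(bool (*make_private)(pte_t *pte, unsigned long addr,
							 int pages, int level))
	{
		unsigned long addr = PAGE_OFFSET;
		unsigned long end = PAGE_OFFSET + get_max_mapped();

		while (addr < end) {
			unsigned int level;
			pte_t *pte = lookup_address(addr, &level);
			unsigned long size = page_level_size(level);

			if (pte && pte_decrypted(*pte) && !pte_none(*pte)) {
				if (!make_private(pte, addr, size / PAGE_SIZE, level))
					pr_err("Failed to unshare range %#lx-%#lx\n",
					       addr, addr + size);
			}

			addr += size;
		}

		__flush_tlb_all();
	}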

> +
> +}
> +
> +static void unshare_all_bss_decrypted_memory(void)
> +{
> + unsigned long vaddr, vaddr_end;
> + unsigned long size;
> + unsigned int level;
> + unsigned int npages;
> + pte_t *pte;
> +
> + vaddr = (unsigned long)__start_bss_decrypted;
> + vaddr_end = (unsigned long)__start_bss_decrypted_unused;
> + npages = (vaddr_end - vaddr) >> PAGE_SHIFT;
> + for (; vaddr < vaddr_end; vaddr += PAGE_SIZE) {
> + pte = lookup_address(vaddr, &level);
> + if (!pte || !pte_decrypted(*pte) || pte_none(*pte))
> + continue;
> +
> + size = page_level_size(level);
> + set_pte_enc(pte, level, (void *)vaddr);
> + }
> + vaddr = (unsigned long)__start_bss_decrypted;
> + snp_set_memory_private(vaddr, npages);
> +}
> +
> +void snp_kexec_stop_conversion(bool crash)
> +{
> + /* Stop new private<->shared conversions */
> + conversion_allowed = false;
> + crash_requested = crash;
> +
> + /*
> + * Make sure conversion_allowed is cleared before checking
> + * conversions_in_progress.
> + */
> + barrier();

This should be smp_wmb().

> +
> + /*
> + * Crash kernel reaches here with interrupts disabled: can't wait for
> + * conversions to finish.
> + *
> + * If race happened, just report and proceed.
> + */
> + if (!crash) {
> + unsigned long timeout;
> +
> + /*
> + * Wait for in-flight conversions to complete.
> + *
> + * Do not wait more than 30 seconds.
> + */
> + timeout = 30 * USEC_PER_SEC;
> + while (atomic_read(&conversions_in_progress) && timeout--)
> + udelay(1);
> + }
> +
> + if (atomic_read(&conversions_in_progress))
> + pr_warn("Failed to finish shared<->private conversions\n");
> +}

Again, same code as in TDX (except for the crash_requested, but I don't
see that used anywhere), so please make it common.

> +
> +void snp_kexec_unshare_mem(void)
> +{
> + if (!cc_platform_has(CC_ATTR_GUEST_SEV_SNP))
> + return;
> +
> + /*
> + * Switch back any specific memory regions such as option
> + * ROM regions back to shared so that (re)validation does
> + * not fail when kexec kernel boots.
> + */
> + snp_kexec_unprep_rom_memory();
> +
> + unshare_all_memory(true);
> +
> + unshare_all_bss_decrypted_memory();
> +
> + if (last_address_shd_kexec) {
> + unsigned long size;
> + unsigned int level;
> + pte_t *pte;
> +
> + /*
> + * Switch to using the MSR protocol to change this cpu's
> + * GHCB to private.
> + */
> + boot_ghcb = NULL;
> + /*
> + * All the per-cpu GHCBs have been switched back to private,
> + * so can't do any more GHCB calls to the hypervisor beyond
> + * this point till the kexec kernel starts running.
> + */
> + sev_cfg.ghcbs_initialized = false;

Maybe combine the two comments above into a single comment and then keep
the two assignments together.

> +
> + pr_debug("boot ghcb 0x%lx\n", last_address_shd_kexec);
> + pte = lookup_address(last_address_shd_kexec, &level);
> + size = page_level_size(level);
> + set_pte_enc(pte, level, (void *)last_address_shd_kexec);
> + snp_set_memory_private(last_address_shd_kexec, (size / PAGE_SIZE));
> + }
> +}
> +
> static int snp_set_vmsa(void *va, bool vmsa)
> {
> u64 attrs;
> diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c
> index d314e577836d..87b6475358ad 100644
> --- a/arch/x86/mm/mem_encrypt_amd.c
> +++ b/arch/x86/mm/mem_encrypt_amd.c
> @@ -214,7 +214,7 @@ void __init sme_map_bootdata(char *real_mode_data)
> __sme_early_map_unmap_mem(__va(cmdline_paddr), COMMAND_LINE_SIZE, true);
> }
>
> -static unsigned long pg_level_to_pfn(int level, pte_t *kpte, pgprot_t *ret_prot)
> +unsigned long pg_level_to_pfn(int level, pte_t *kpte, pgprot_t *ret_prot)

This change shouldn't be needed anymore if you modify the set_pte_enc()
function.

> {
> unsigned long pfn = 0;
> pgprot_t prot;
> @@ -285,6 +285,17 @@ static void enc_dec_hypercall(unsigned long vaddr, unsigned long size, bool enc)
>
> static int amd_enc_status_change_prepare(unsigned long vaddr, int npages, bool enc)
> {
> + atomic_inc(&conversions_in_progress);
> +
> + /*
> + * Check after bumping conversions_in_progress to serialize
> + * against snp_kexec_stop_conversion().
> + */
> + if (!conversion_allowed) {
> + atomic_dec(&conversions_in_progress);
> + return -EBUSY;
> + }

Duplicate code, please move it to a common file along with the
variables, such as arch/x86/mm/mem_encrypt.c?
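
E.g. (sketch, hypothetical helper names):

	/* arch/x86/mm/mem_encrypt.c -- shared by SNP and TDX */
	int enc_conversion_begin(void)
	{
		atomic_inc(&conversions_in_progress);

		/* Check after bumping to serialize against the stop path */
		if (!conversion_allowed) {
			atomic_dec(&conversions_in_progress);
			return -EBUSY;
		}

		return 0;
	}

	void enc_conversion_end(void)
	{
		atomic_dec(&conversions_in_progress);
	}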

> +
> /*
> * To maintain the security guarantees of SEV-SNP guests, make sure
> * to invalidate the memory before encryption attribute is cleared.
> @@ -308,6 +319,8 @@ static int amd_enc_status_change_finish(unsigned long vaddr, int npages, bool en
> if (!cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT))
> enc_dec_hypercall(vaddr, npages << PAGE_SHIFT, enc);
>
> + atomic_dec(&conversions_in_progress);

Ditto here.

Thanks,
Tom

> +
> return 0;
> }
>
> @@ -468,6 +481,9 @@ void __init sme_early_init(void)
> x86_platform.guest.enc_tlb_flush_required = amd_enc_tlb_flush_required;
> x86_platform.guest.enc_cache_flush_required = amd_enc_cache_flush_required;
>
> + x86_platform.guest.enc_kexec_stop_conversion = snp_kexec_stop_conversion;
> + x86_platform.guest.enc_kexec_unshare_mem = snp_kexec_unshare_mem;
> +
> /*
> * AMD-SEV-ES intercepts the RDMSR to read the X2APIC ID in the
> * parallel bringup low level code. That raises #VC which cannot be

2024-02-22 11:02:19

by Baoquan He

[permalink] [raw]
Subject: Re: [PATCHv7 07/16] x86/mm: Return correct level from lookup_address() if pte is none

On 02/21/24 at 04:15pm, Kirill A. Shutemov wrote:
> On Wed, Feb 21, 2024 at 10:37:29AM +0800, Baoquan He wrote:
> > > diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> > > index 3612e3167147..425ff6e192e6 100644
> > > --- a/arch/x86/mm/pat/set_memory.c
> > > +++ b/arch/x86/mm/pat/set_memory.c
> > > @@ -657,7 +657,8 @@ static inline pgprot_t verify_rwx(pgprot_t old, pgprot_t new, unsigned long star
> > >
> > > /*
> > > * Lookup the page table entry for a virtual address in a specific pgd.
> > > - * Return a pointer to the entry and the level of the mapping.
> > > + * Return a pointer to the entry and the level of the mapping (or NULL if
> > > + * the entry is none) and level of the entry.
> > ^ this right parenthesis may need to be moved to the end.
> >
> >
> > =======
> > * Return a pointer to the entry and the level of the mapping (or NULL if
> > * the entry is none and level of the entry).
> > =======
>
> Emm.. I like my variant more. We return the level regardless of whether
> the entry is none or not. I don't see a reason to repeat it twice.


* Lookup the page table entry for a virtual address in a specific pgd.
* Return a pointer to the entry and the level of the mapping (or NULL if
* the entry is none) and level of the entry.

Hmm, I am confused. Why do we need to stress the level of the mapping
and the level of the entry? Wondering what the difference is. I must be
missing something.


2024-02-22 13:37:51

by Kirill A. Shutemov

[permalink] [raw]
Subject: Re: [PATCH 2/2] x86/snp: Convert shared memory back to private on kexec

On Wed, Feb 21, 2024 at 02:35:13PM -0600, Tom Lendacky wrote:
> > @@ -906,6 +917,206 @@ void snp_accept_memory(phys_addr_t start, phys_addr_t end)
> > set_pages_state(vaddr, npages, SNP_PAGE_STATE_PRIVATE);
> > }
> > +static inline bool pte_decrypted(pte_t pte)
> > +{
> > + return cc_mkdec(pte_val(pte)) == pte_val(pte);
> > +}
> > +
>
> This is duplicated in TDX code, arch/x86/coco/tdx/tdx.c, looks like
> something that can go in a header file, maybe mem_encrypt.h.
>

I think <asm/pgtable.h> is a better fit.
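
I.e. something like (sketch):

	/* <asm/pgtable.h> */
	static inline bool pte_decrypted(pte_t pte)
	{
		return cc_mkdec(pte_val(pte)) == pte_val(pte);
	}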

> > +void snp_kexec_stop_conversion(bool crash)
> > +{
> > + /* Stop new private<->shared conversions */
> > + conversion_allowed = false;
> > + crash_requested = crash;
> > +
> > + /*
> > + * Make sure conversion_allowed is cleared before checking
> > + * conversions_in_progress.
> > + */
> > + barrier();
>
> This should be smp_wmb().
>

Why?

--
Kiryl Shutsemau / Kirill A. Shutemov

2024-02-22 13:58:57

by Tom Lendacky

[permalink] [raw]
Subject: Re: [PATCH 2/2] x86/snp: Convert shared memory back to private on kexec

On 2/22/24 04:50, Kirill A. Shutemov wrote:
> On Wed, Feb 21, 2024 at 02:35:13PM -0600, Tom Lendacky wrote:
>>> @@ -906,6 +917,206 @@ void snp_accept_memory(phys_addr_t start, phys_addr_t end)
>>> set_pages_state(vaddr, npages, SNP_PAGE_STATE_PRIVATE);
>>> }
>>> +static inline bool pte_decrypted(pte_t pte)
>>> +{
>>> + return cc_mkdec(pte_val(pte)) == pte_val(pte);
>>> +}
>>> +
>>
>> This is duplicated in TDX code, arch/x86/coco/tdx/tdx.c, looks like
>> something that can go in a header file, maybe mem_encrypt.h.
>>
>
> I think <asm/pgtable.h> is a better fit.
>
>>> +void snp_kexec_stop_conversion(bool crash)
>>> +{
>>> + /* Stop new private<->shared conversions */
>>> + conversion_allowed = false;
>>> + crash_requested = crash;
>>> +
>>> + /*
>>> + * Make sure conversion_allowed is cleared before checking
>>> + * conversions_in_progress.
>>> + */
>>> + barrier();
>>
>> This should be smp_wmb().
>>
>
> Why?

IIUC, this is because conversions_in_progress can be set on another thread
and so this needs an smp barrier. In this case, smp_wmb() just ends up
being barrier(), but to me it is clearer this way. Just my opinion, though.
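
Condensed, the two racing paths are (from the patch, comments added):

	/* kexec side: snp_kexec_stop_conversion() */
	conversion_allowed = false;
	smp_wmb();	/* compiles down to barrier() on x86, but documents
			 * that another CPU reads this flag */
	if (atomic_read(&conversions_in_progress)) {
		/* wait (normal kexec) or just warn (crash kernel) */
	}

	/* conversion side: amd_enc_status_change_prepare() */
	atomic_inc(&conversions_in_progress);
	if (!conversion_allowed) {	/* checked only after the bump */
		atomic_dec(&conversions_in_progress);
		return -EBUSY;
	}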

Thanks,
Tom


>

2024-02-22 14:04:46

by Kirill A. Shutemov

[permalink] [raw]
Subject: Re: [PATCHv7 07/16] x86/mm: Return correct level from lookup_address() if pte is none

On Thu, Feb 22, 2024 at 07:01:41PM +0800, Baoquan He wrote:
> On 02/21/24 at 04:15pm, Kirill A. Shutemov wrote:
> > On Wed, Feb 21, 2024 at 10:37:29AM +0800, Baoquan He wrote:
> > > > diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> > > > index 3612e3167147..425ff6e192e6 100644
> > > > --- a/arch/x86/mm/pat/set_memory.c
> > > > +++ b/arch/x86/mm/pat/set_memory.c
> > > > @@ -657,7 +657,8 @@ static inline pgprot_t verify_rwx(pgprot_t old, pgprot_t new, unsigned long star
> > > >
> > > > /*
> > > > * Lookup the page table entry for a virtual address in a specific pgd.
> > > > - * Return a pointer to the entry and the level of the mapping.
> > > > + * Return a pointer to the entry and the level of the mapping (or NULL if
> > > > + * the entry is none) and level of the entry.
> > > ^ this right parenthesis may need to be moved to the end.
> > >
> > >
> > > =======
> > > * Return a pointer to the entry and the level of the mapping (or NULL if
> > > * the entry is none and level of the entry).
> > > =======
> >
> > Emm.. I like my variant more. We return the level regardless of whether
> > the entry is none or not. I don't see a reason to repeat it twice.
>
>
> * Lookup the page table entry for a virtual address in a specific pgd.
> * Return a pointer to the entry and the level of the mapping (or NULL if
> * the entry is none) and level of the entry.
>
> Hmm, I am confused. Why do we need to stress the level of the mapping
> and the level of the entry? Wondering what the difference is. I must be
> missing something.

My bad. This is what I meant to write:

* Lookup the page table entry for a virtual address in a specific pgd.
* Return a pointer to the entry (or NULL if the entry does not exist) and
* the level of the entry.

--
Kiryl Shutsemau / Kirill A. Shutemov

2024-02-22 15:38:35

by Baoquan He

[permalink] [raw]
Subject: Re: [PATCHv7 07/16] x86/mm: Return correct level from lookup_address() if pte is none

On 02/22/24 at 04:04pm, Kirill A. Shutemov wrote:
> On Thu, Feb 22, 2024 at 07:01:41PM +0800, Baoquan He wrote:
> > On 02/21/24 at 04:15pm, Kirill A. Shutemov wrote:
> > > On Wed, Feb 21, 2024 at 10:37:29AM +0800, Baoquan He wrote:
> > > > > diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> > > > > index 3612e3167147..425ff6e192e6 100644
> > > > > --- a/arch/x86/mm/pat/set_memory.c
> > > > > +++ b/arch/x86/mm/pat/set_memory.c
> > > > > @@ -657,7 +657,8 @@ static inline pgprot_t verify_rwx(pgprot_t old, pgprot_t new, unsigned long star
> > > > >
> > > > > /*
> > > > > * Lookup the page table entry for a virtual address in a specific pgd.
> > > > > - * Return a pointer to the entry and the level of the mapping.
> > > > > + * Return a pointer to the entry and the level of the mapping (or NULL if
> > > > > + * the entry is none) and level of the entry.
> > > > ^ this right parenthesis may need be moved to the end.
> > > >
> > > >
> > > > =======
> > > > * Return a pointer to the entry and the level of the mapping (or NULL if
> > > > * the entry is none and level of the entry).
> > > > =======
> > >
> > > Emm.. I like my variant more. We return the level regardless of whether
> > > the entry is none or not. I don't see a reason to repeat it twice.
> >
> >
> > * Lookup the page table entry for a virtual address in a specific pgd.
> > * Return a pointer to the entry and the level of the mapping (or NULL if
> > * the entry is none) and level of the entry.
> >
> > Hmm, I am confused. Why do we need to stress the level of the mapping
> > and the level of the entry? Wondering what the difference is. I must be
> > missing something.
>
> My bad. This is the way I meant to write it:
>
> * Lookup the page table entry for a virtual address in a specific pgd.
> * Return a pointer to the entry (or NULL if the entry does not exist) and
> * the level of the entry.

ACK. Thanks.


2024-02-22 22:07:04

by Thomas Gleixner

[permalink] [raw]
Subject: Re: [PATCHv7 05/16] x86/kexec: Keep CR4.MCE set during kexec for TDX guest

On Mon, Feb 12 2024 at 12:44, Kirill A. Shutemov wrote:

> TDX guests are not allowed to clear CR4.MCE. An attempt to clear it
> leads to a #VE.
>
> Use alternatives to keep the flag during kexec for TDX guests.
>
> The change doesn't affect non-TDX-guest environments.
>
> Signed-off-by: Kirill A. Shutemov <[email protected]>
> Reviewed-by: Kai Huang <[email protected]>

Reviewed-by: Thomas Gleixner <[email protected]>
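The idea in the patch can be approximated in C as follows (a sketch only;
the actual change uses ALTERNATIVE in the kexec assembly, so this is purely
illustrative):

	/*
	 * kexec normally wipes CR4, but a TDX guest takes a #VE if
	 * CR4.MCE is ever cleared, so the bit must survive the wipe.
	 */
	unsigned long cr4 = 0;

	if (cpu_feature_enabled(X86_FEATURE_TDX_GUEST))
		cr4 |= X86_CR4_MCE;

	native_write_cr4(cr4);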

2024-02-23 10:26:53

by Thomas Gleixner

[permalink] [raw]
Subject: Re: [PATCHv7 14/16] x86/smp: Add smp_ops.stop_this_cpu() callback

On Mon, Feb 12 2024 at 12:44, Kirill A. Shutemov wrote:

> If the helper is defined, it is called instead of halt() to stop the CPU
> at the end of stop_this_cpu() and on crash CPU shutdown.
>
> ACPI MADT will use it to hand over the CPU to BIOS in order to be able
> to wake it up again after kexec.
>
> Signed-off-by: Kirill A. Shutemov <[email protected]>
> Acked-by: Kai Huang <[email protected]>

Reviewed-by: Thomas Gleixner <[email protected]>
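A simplified sketch of how the callback slots into the CPU stop path (based
on the description above, not the exact hunk):

void stop_this_cpu(void *dummy)
{
	/* ... existing per-CPU teardown ... */

	/*
	 * If the platform provides a way to park the CPU (e.g. hand it
	 * back to the BIOS via the ACPI MADT mailbox), use it instead
	 * of hlt so the CPU can be woken up again after kexec.
	 */
	if (smp_ops.stop_this_cpu) {
		smp_ops.stop_this_cpu();
		unreachable();
	}

	for (;;)
		native_halt();
}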

2024-02-23 10:27:52

by Thomas Gleixner

[permalink] [raw]
Subject: Re: [PATCHv7 12/16] x86/acpi: Rename fields in acpi_madt_multiproc_wakeup structure

On Mon, Feb 12 2024 at 12:44, Kirill A. Shutemov wrote:
> To prepare for the addition of support for MADT wakeup structure version
> 1, it is necessary to provide more appropriate names for the fields in
> the structure.
>
> The field 'mailbox_version' is renamed to 'version'. This field signifies
> the version of the structure and the related protocols, rather than the
> version of the mailbox. This field has not been utilized in the code
> thus far.
>
> The field 'base_address' is renamed to 'mailbox_address' to clarify the
> kind of address it represents. In version 1, the structure includes the
> reset vector address. Clear and distinct naming helps to prevent any
> confusion.
>
> Signed-off-by: Kirill A. Shutemov <[email protected]>
> Reviewed-by: Kai Huang <[email protected]>
> Reviewed-by: Kuppuswamy Sathyanarayanan <[email protected]>

Reviewed-by: Thomas Gleixner <[email protected]>

2024-02-23 10:32:00

by Thomas Gleixner

[permalink] [raw]
Subject: Re: [PATCHv7 16/16] x86/acpi: Add support for CPU offlining for ACPI MADT wakeup method

On Mon, Feb 12 2024 at 12:44, Kirill A. Shutemov wrote:
> MADT Multiprocessor Wakeup structure version 1 brings support for CPU
> offlining: BIOS provides a reset vector where the CPU has to jump to
> for offlining itself. The new TEST mailbox command can be used to test
> whether the CPU offlined itself, which means the BIOS has control over
> the CPU and can online it again via the ACPI MADT wakeup method.
>
> Add CPU offlining support for the ACPI MADT wakeup method by implementing
> custom cpu_die(), play_dead() and stop_this_cpu() SMP operations.
>
> CPU offlining makes it possible to hand over secondary CPUs across kexec,
> so the second kernel is not limited to a single CPU.
>
> The change conforms to the approved ACPI spec change proposal. See the
> Link.
>
> Signed-off-by: Kirill A. Shutemov <[email protected]>
> Link: https://lore.kernel.org/all/13356251.uLZWGnKmhe@kreacher
> Acked-by: Kai Huang <[email protected]>
> Reviewed-by: Kuppuswamy Sathyanarayanan <[email protected]>

Reviewed-by: Thomas Gleixner <[email protected]>

2024-02-23 10:43:52

by Thomas Gleixner

[permalink] [raw]
Subject: Re: [PATCHv7 13/16] x86/acpi: Do not attempt to bring up secondary CPUs in kexec case

On Mon, Feb 12 2024 at 12:44, Kirill A. Shutemov wrote:
> ACPI MADT doesn't allow a CPU to be offlined after it has been onlined.
> This limits kexec: the second kernel won't be able to use more than one
> CPU.
>
> To prevent a kexec kernel from onlining secondary CPUs, invalidate the
> mailbox address in the ACPI MADT wakeup structure, which prevents the
> kexec kernel from using it.
>
> This is safe as the booting kernel has the mailbox address cached
> already and acpi_wakeup_cpu() uses the cached value to bring up the
> secondary CPUs.
>
> Note: This is a Linux-specific convention and not covered by the
> ACPI specification.
>
> Signed-off-by: Kirill A. Shutemov <[email protected]>
> Reviewed-by: Kai Huang <[email protected]>
> Reviewed-by: Kuppuswamy Sathyanarayanan <[email protected]>

Reviewed-by: Thomas Gleixner <[email protected]>
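The convention described in the commit message can be sketched roughly as
follows (the function name and the choice of 0 as the invalid value are
assumptions of this sketch, not taken from the hunk):

static void acpi_mp_disable_offlining(struct acpi_madt_multiproc_wakeup *mp_wake)
{
	/*
	 * Zap the mailbox address so a kexec'ed kernel cannot use the
	 * wakeup method. The booting kernel keeps using its cached copy.
	 */
	mp_wake->mailbox_address = 0;
}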

2024-02-23 18:27:06

by Dave Hansen

[permalink] [raw]
Subject: Re: [PATCHv7 06/16] x86/mm: Make x86_platform.guest.enc_status_change_*() return errno

On 2/12/24 02:44, Kirill A. Shutemov wrote:
> TDX is going to have more than one reason to fail
> enc_status_change_prepare().
>
> Change the callback to return errno instead of assuming -EIO;
> enc_status_change_finish() changed too to keep the interface symmetric.

Good riddance to the bools.

Reviewed-by: Dave Hansen <[email protected]>
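For reference, the direction of the interface change (a sketch of the
prototypes; see the patch itself for the exact diff):

/* Before: bool return, the reason for failure is lost */
bool (*enc_status_change_prepare)(unsigned long vaddr, int npages, bool enc);

/* After: int errno, so callers can tell failure modes apart */
int (*enc_status_change_prepare)(unsigned long vaddr, int npages, bool enc);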

2024-02-23 18:47:48

by Dave Hansen

[permalink] [raw]
Subject: Re: [PATCHv7 07/16] x86/mm: Return correct level from lookup_address() if pte is none

On 2/12/24 02:44, Kirill A. Shutemov wrote:
> lookup_address() only returns the correct page table level for the entry
> if the entry is not none.

Currently, lookup_address() returns two things:
1. A "pte_t" (which might be a p[g4um]d_t)
2. The 'level' of the page tables where the "pte_t" was found
(returned via a pointer)

If no pte_t is found, 'level' is essentially garbage.

> Make the helper always return a correct 'level'. It allows implementing an
> iterator over kernel page tables using lookup_address().

One nit with this description: What's "correct" isn't immediately
obvious to me. It wasn't exactly incorrect before. I think it would be
better to say:

Always fill out the level. For NULL "pte_t"s, fill in the level
where the p*d_none() entry was found mirroring the "found"
behavior.

Always filling out the level allows using lookup_address() to
iterate over kernel page tables.

> Add one more entry into enum pg_level to indicate size of VA covered by
> one PGD entry in 5-level paging mode.

Needs some 'the's:

Add one more entry into enum pg_level to indicate the size of the VA
covered by one PGD entry in 5-level paging mode.

With that fixed:

Reviewed-by: Dave Hansen <[email protected]>

2024-02-23 18:59:03

by Dave Hansen

[permalink] [raw]
Subject: Re: [PATCHv7 07/16] x86/mm: Return correct level from lookup_address() if pte is none

On 2/23/24 10:45, Dave Hansen wrote:
> Always filling out the level allows using lookup_address() to
> iterate over kernel page tables.

This doesn't parse very well. How about this instead:

Always filling out the level allows using lookup_address() to
precisely skip over holes when walking kernel page tables.

I think that more accurately captures what you're doing with it in the
next patch.
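Concretely, the walk this enables looks something like the following (a
minimal sketch; the real user is the kexec unshare path later in the
series):

	unsigned long addr = PAGE_OFFSET;
	unsigned long end = PAGE_OFFSET + get_max_mapped();

	while (addr < end) {
		unsigned int level;
		pte_t *pte = lookup_address(addr, &level);

		/*
		 * With the patch, 'level' is valid even when no entry
		 * exists, so a hole can be skipped in one step instead
		 * of 4k at a time.
		 */
		if (pte) {
			/* ... inspect *pte ... */
		}

		addr += page_level_size(level);
	}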

2024-02-23 19:08:30

by Dave Hansen

[permalink] [raw]
Subject: Re: [PATCHv7 08/16] x86/tdx: Account shared memory

On 2/12/24 02:44, Kirill A. Shutemov wrote:
> The kernel will convert all shared memory back to private during kexec.
> The direct mapping page tables will provide information on which memory
> is shared.
>
> It is extremely important to convert all shared memory. If a page is
> missed, it will cause the second kernel to crash when it accesses it.
>
> Keep track of the number of shared pages. This will allow for
> cross-checking against the shared information in the direct mapping and
> reporting if the shared bit is lost.
>
> Include a debugfs interface that allows for the check to be performed at
> any point.

When I read this, I thought you were going to do some automatic
checking. Could you make it more clear here that it's 100% up to the
user to figure out if the numbers in debugfs match and whether there's a
problem? This would also be a great place to mention that the whole
thing is racy.

> +static atomic_long_t nr_shared;
> +
> +static inline bool pte_decrypted(pte_t pte)
> +{
> + return cc_mkdec(pte_val(pte)) == pte_val(pte);
> +}

Name this pte_is_decrypted(), please.

> /* Called from __tdx_hypercall() for unrecoverable failure */
> noinstr void __noreturn __tdx_hypercall_failed(void)
> {
> @@ -821,6 +829,11 @@ static int tdx_enc_status_change_finish(unsigned long vaddr, int numpages,
> if (!enc && !tdx_enc_status_changed(vaddr, numpages, enc))
> return -EIO;
>
> + if (enc)
> + atomic_long_sub(numpages, &nr_shared);
> + else
> + atomic_long_add(numpages, &nr_shared);
> +
> return 0;
> }
>
> @@ -896,3 +909,59 @@ void __init tdx_early_init(void)
>
> pr_info("Guest detected\n");
> }
> +
> +#ifdef CONFIG_DEBUG_FS
> +static int tdx_shared_memory_show(struct seq_file *m, void *p)
> +{
> + unsigned long addr, end;
> + unsigned long found = 0;
> +
> + addr = PAGE_OFFSET;
> + end = PAGE_OFFSET + get_max_mapped();
> +
> + while (addr < end) {
> + unsigned long size;
> + unsigned int level;
> + pte_t *pte;
> +
> + pte = lookup_address(addr, &level);
> + size = page_level_size(level);
> +
> + if (pte && pte_decrypted(*pte))
> + found += size / PAGE_SIZE;
> +
> + addr += size;
> +
> + cond_resched();
> + }

This is totally racy, right? Nothing prevents the PTE from
flip-flopping all over the place.

> + seq_printf(m, "Number of shared pages in kernel page tables: %16lu\n",
> + found);
> + seq_printf(m, "Number of pages accounted as shared: %16ld\n",
> + atomic_long_read(&nr_shared));
> + return 0;
> +}

Ditto with 'nr_shared'. There's nothing to say that the page table walk
has anything to do with 'nr_shared' by the time we get down here.

That's not _fatal_ for a debug interface, but the pitfalls need to at
least be discussed. Better yet would be to make sure this and the cpa
code don't stomp on each other.

2024-02-23 19:39:19

by Dave Hansen

[permalink] [raw]
Subject: Re: [PATCHv7 10/16] x86/tdx: Convert shared memory back to private on kexec

On 2/12/24 02:44, Kirill A. Shutemov wrote:
> +static void tdx_kexec_stop_conversion(bool crash)
> +{
> + /* Stop new private<->shared conversions */
> + conversion_allowed = false;
> +
> + /*
> + * Make sure conversion_allowed is cleared before checking
> + * conversions_in_progress.
> + */
> + barrier();
> +
> + /*
> + * Crash kernel reaches here with interrupts disabled: can't wait for
> + * conversions to finish.
> + *
> + * If a race happened, just report and proceed.
> + */
> + if (!crash) {
> + unsigned long timeout;
> +
> + /*
> + * Wait for in-flight conversions to complete.
> + *
> + * Do not wait more than 30 seconds.
> + */
> + timeout = 30 * USEC_PER_SEC;
> + while (atomic_read(&conversions_in_progress) && timeout--)
> + udelay(1);
> + }
> +
> + if (atomic_read(&conversions_in_progress))
> + pr_warn("Failed to finish shared<->private conversions\n");
> +}

I'd really prefer we find a way to do this with actual locks, especially
'conversion_allowed'.

This is _awfully_ close to being able to be handled by a rwsem where the
readers are the converters and tdx_kexec_stop_conversion() takes a write.



2024-02-23 19:50:03

by Dave Hansen

[permalink] [raw]
Subject: Re: [PATCHv7 11/16] x86/mm: Make e820_end_ram_pfn() cover E820_TYPE_ACPI ranges

On 2/12/24 02:44, Kirill A. Shutemov wrote:
> Despite the name, E820_TYPE_ACPI covers not only ACPI data, but also EFI
> tables and might be required by the kernel to function properly.

Lovely. You learn something new every day.

Reviewed-by: Dave Hansen <[email protected]>

2024-02-25 14:59:03

by Kirill A. Shutemov

[permalink] [raw]
Subject: Re: [PATCHv7 10/16] x86/tdx: Convert shared memory back to private on kexec

On Fri, Feb 23, 2024 at 11:39:07AM -0800, Dave Hansen wrote:
> On 2/12/24 02:44, Kirill A. Shutemov wrote:
> > +static void tdx_kexec_stop_conversion(bool crash)
> > +{
> > + /* Stop new private<->shared conversions */
> > + conversion_allowed = false;
> > +
> > + /*
> > + * Make sure conversion_allowed is cleared before checking
> > + * conversions_in_progress.
> > + */
> > + barrier();
> > +
> > + /*
> > + * Crash kernel reaches here with interrupts disabled: can't wait for
> > + * conversions to finish.
> > + *
> > + * If a race happened, just report and proceed.
> > + */
> > + if (!crash) {
> > + unsigned long timeout;
> > +
> > + /*
> > + * Wait for in-flight conversions to complete.
> > + *
> > + * Do not wait more than 30 seconds.
> > + */
> > + timeout = 30 * USEC_PER_SEC;
> > + while (atomic_read(&conversions_in_progress) && timeout--)
> > + udelay(1);
> > + }
> > +
> > + if (atomic_read(&conversions_in_progress))
> > + pr_warn("Failed to finish shared<->private conversions\n");
> > +}
>
> I'd really prefer we find a way to do this with actual locks, especially
> 'conversion_allowed'.
>
> This is _awfully_ close to being able to be handled by a rwsem where the
> readers are the converters and tdx_kexec_stop_conversion() takes a write.

Okay, here's what I came up with. It needs more testing.

Any comments?

diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
index fd212c9bad89..5eb0dac33f37 100644
--- a/arch/x86/coco/tdx/tdx.c
+++ b/arch/x86/coco/tdx/tdx.c
@@ -6,8 +6,10 @@

#include <linux/cpufeature.h>
#include <linux/debugfs.h>
+#include <linux/delay.h>
#include <linux/export.h>
#include <linux/io.h>
+#include <linux/kexec.h>
#include <asm/coco.h>
#include <asm/tdx.h>
#include <asm/vmx.h>
@@ -15,6 +17,7 @@
#include <asm/insn.h>
#include <asm/insn-eval.h>
#include <asm/pgtable.h>
+#include <asm/set_memory.h>

/* MMIO direction */
#define EPT_READ 0
@@ -837,6 +840,65 @@ static int tdx_enc_status_change_finish(unsigned long vaddr, int numpages,
return 0;
}

+static void tdx_kexec_stop_conversion(bool crash)
+{
+ /* Stop new private<->shared conversions */
+ if (!stop_memory_enc_conversion(!crash))
+ pr_warn("Failed to finish shared<->private conversions\n");
+}
+
+static void tdx_kexec_unshare_mem(void)
+{
+ unsigned long addr, end;
+ long found = 0, shared;
+
+ /*
+ * Walk the direct mapping and convert all shared memory back to private.
+ */
+
+ addr = PAGE_OFFSET;
+ end = PAGE_OFFSET + get_max_mapped();
+
+ while (addr < end) {
+ unsigned long size;
+ unsigned int level;
+ pte_t *pte;
+
+ pte = lookup_address(addr, &level);
+ size = page_level_size(level);
+
+ if (pte && pte_decrypted(*pte)) {
+ int pages = size / PAGE_SIZE;
+
+ /*
+ * Touching memory with shared bit set triggers implicit
+ * conversion to shared.
+ *
+ * Make sure nobody touches the shared range from
+ * now on.
+ */
+ set_pte(pte, __pte(0));
+
+ if (!tdx_enc_status_changed(addr, pages, true)) {
+ pr_err("Failed to unshare range %#lx-%#lx\n",
+ addr, addr + size);
+ }
+
+ found += pages;
+ }
+
+ addr += size;
+ }
+
+ __flush_tlb_all();
+
+ shared = atomic_long_read(&nr_shared);
+ if (shared != found) {
+ pr_err("shared page accounting is off\n");
+ pr_err("nr_shared = %ld, nr_found = %ld\n", shared, found);
+ }
+}
+
void __init tdx_early_init(void)
{
struct tdx_module_args args = {
@@ -896,6 +958,9 @@ void __init tdx_early_init(void)
x86_platform.guest.enc_cache_flush_required = tdx_cache_flush_required;
x86_platform.guest.enc_tlb_flush_required = tdx_tlb_flush_required;

+ x86_platform.guest.enc_kexec_stop_conversion = tdx_kexec_stop_conversion;
+ x86_platform.guest.enc_kexec_unshare_mem = tdx_kexec_unshare_mem;
+
/*
* TDX intercepts the RDMSR to read the X2APIC ID in the parallel
* bringup low level code. That raises #VE which cannot be handled
diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
index a5e89641bd2d..9d4a8e548820 100644
--- a/arch/x86/include/asm/set_memory.h
+++ b/arch/x86/include/asm/set_memory.h
@@ -48,8 +48,11 @@ int set_memory_wc(unsigned long addr, int numpages);
int set_memory_wb(unsigned long addr, int numpages);
int set_memory_np(unsigned long addr, int numpages);
int set_memory_4k(unsigned long addr, int numpages);
+
+bool stop_memory_enc_conversion(bool wait);
int set_memory_encrypted(unsigned long addr, int numpages);
int set_memory_decrypted(unsigned long addr, int numpages);
+
int set_memory_np_noalias(unsigned long addr, int numpages);
int set_memory_nonglobal(unsigned long addr, int numpages);
int set_memory_global(unsigned long addr, int numpages);
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 0d2267ad4e0e..e074b2aca970 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -2176,12 +2176,32 @@ static int __set_memory_enc_pgtable(unsigned long addr, int numpages, bool enc)
return ret;
}

+static DECLARE_RWSEM(mem_enc_lock);
+
+bool stop_memory_enc_conversion(bool wait)
+{
+ if (!wait)
+ return down_write_trylock(&mem_enc_lock);
+
+ down_write(&mem_enc_lock);
+
+ return true;
+}
+
static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
{
- if (cc_platform_has(CC_ATTR_MEM_ENCRYPT))
- return __set_memory_enc_pgtable(addr, numpages, enc);
+ int ret = 0;

- return 0;
+ if (cc_platform_has(CC_ATTR_MEM_ENCRYPT)) {
+ if (!down_read_trylock(&mem_enc_lock))
+ return -EBUSY;
+
+ ret = __set_memory_enc_pgtable(addr, numpages, enc);
+
+ up_read(&mem_enc_lock);
+ }
+
+ return ret;
}

int set_memory_encrypted(unsigned long addr, int numpages)
--
Kiryl Shutsemau / Kirill A. Shutemov
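With this draft, a converter that loses the race against kexec simply gets
an error back; for illustration (caller shown as an example only):

	int ret = set_memory_decrypted(vaddr, numpages);
	if (ret == -EBUSY) {
		/*
		 * Kexec already holds mem_enc_lock for write: no
		 * further private<->shared conversions are allowed.
		 */
	}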

2024-02-25 15:54:55

by Kirill A. Shutemov

[permalink] [raw]
Subject: Re: [PATCHv7 08/16] x86/tdx: Account shared memory

On Fri, Feb 23, 2024 at 11:08:18AM -0800, Dave Hansen wrote:
> On 2/12/24 02:44, Kirill A. Shutemov wrote:
> > The kernel will convert all shared memory back to private during kexec.
> > The direct mapping page tables will provide information on which memory
> > is shared.
> >
> > It is extremely important to convert all shared memory. If a page is
> > missed, it will cause the second kernel to crash when it accesses it.
> >
> > Keep track of the number of shared pages. This will allow for
> > cross-checking against the shared information in the direct mapping and
> > reporting if the shared bit is lost.
> >
> > Include a debugfs interface that allows for the check to be performed at
> > any point.
>
> When I read this, I thought you were going to do some automatic
> checking. Could you make it more clear here that it's 100% up to the
> user to figure out if the numbers in debugfs match and whether there's a
> problem? This would also be a great place to mention that the whole
> thing is racy.

What about this:

Include a debugfs interface to dump the number of shared pages in the
direct mapping and the expected number. There is no serialization
against memory conversion. The numbers might not match if access to the
debugfs interface races with the conversion.

> > +static atomic_long_t nr_shared;
> > +
> > +static inline bool pte_decrypted(pte_t pte)
> > +{
> > + return cc_mkdec(pte_val(pte)) == pte_val(pte);
> > +}
>
> Name this pte_is_decrypted(), please.

But why? pte_decrypted() is consistent with the other pte helpers: pte_none(),
pte_present(), pte_dirty(), ...

> > /* Called from __tdx_hypercall() for unrecoverable failure */
> > noinstr void __noreturn __tdx_hypercall_failed(void)
> > {
> > @@ -821,6 +829,11 @@ static int tdx_enc_status_change_finish(unsigned long vaddr, int numpages,
> > if (!enc && !tdx_enc_status_changed(vaddr, numpages, enc))
> > return -EIO;
> >
> > + if (enc)
> > + atomic_long_sub(numpages, &nr_shared);
> > + else
> > + atomic_long_add(numpages, &nr_shared);
> > +
> > return 0;
> > }
> >
> > @@ -896,3 +909,59 @@ void __init tdx_early_init(void)
> >
> > pr_info("Guest detected\n");
> > }
> > +
> > +#ifdef CONFIG_DEBUG_FS
> > +static int tdx_shared_memory_show(struct seq_file *m, void *p)
> > +{
> > + unsigned long addr, end;
> > + unsigned long found = 0;
> > +
> > + addr = PAGE_OFFSET;
> > + end = PAGE_OFFSET + get_max_mapped();
> > +
> > + while (addr < end) {
> > + unsigned long size;
> > + unsigned int level;
> > + pte_t *pte;
> > +
> > + pte = lookup_address(addr, &level);
> > + size = page_level_size(level);
> > +
> > + if (pte && pte_decrypted(*pte))
> > + found += size / PAGE_SIZE;
> > +
> > + addr += size;
> > +
> > + cond_resched();
> > + }
>
> This is totally racy, right? Nothing prevents the PTE from
> flip-flopping all over the place.

Yes.

> > + seq_printf(m, "Number of shared pages in kernel page tables: %16lu\n",
> > + found);
> > + seq_printf(m, "Number of pages accounted as shared: %16ld\n",
> > + atomic_long_read(&nr_shared));
> > + return 0;
> > +}
>
> Ditto with 'nr_shared'. There's nothing to say that the page table walk
> has anything to do with 'nr_shared' by the time we get down here.
>
> That's not _fatal_ for a debug interface, but the pitfalls need to at
> least be discussed. Better yet would be to make sure this and the cpa
> code don't stomp on each other.

Serializing is cumbersome here. I can also just drop the interface.

--
Kiryl Shutsemau / Kirill A. Shutemov

2024-02-25 17:41:31

by Dave Hansen

[permalink] [raw]
Subject: Re: [PATCHv7 08/16] x86/tdx: Account shared memory

On 2/25/24 07:54, Kirill A. Shutemov wrote:
> Serializing is cumbersome here. I can also just drop the interface.

Just drop it for now. We can come back after the fact and debate how to
do the debugging.

2024-02-26 14:08:03

by Dave Hansen

[permalink] [raw]
Subject: Re: [PATCHv7 10/16] x86/tdx: Convert shared memory back to private on kexec

On 2/25/24 06:58, Kirill A. Shutemov wrote:
> On Fri, Feb 23, 2024 at 11:39:07AM -0800, Dave Hansen wrote:
..
>> I'd really prefer we find a way to do this with actual locks, especially
>> 'conversion_allowed'.
>>
>> This is _awfully_ close to being able to be handled by a rwsem where the
>> readers are the converters and tdx_kexec_stop_conversion() takes a write.
>
> Okay, here's what I came up with. It needs more testing.
>
> Any comments?

Looks a heck of a lot more straightforward to me.

> +static void tdx_kexec_stop_conversion(bool crash)
> +{
> + /* Stop new private<->shared conversions */
> + if (!stop_memory_enc_conversion(!crash))
> + pr_warn("Failed to finish shared<->private conversions\n");
> +}

FWIW, this is one of those places that could use a temporary variable to
help explain what is going on:

bool wait_for_lock = !crash;

/*
* ... explain why it doesn't or shouldn't take the lock here
*/
if (!stop_memory_enc_conversion(wait_for_lock))
...

This makes it understandable without looking at the called function.

> +bool stop_memory_enc_conversion(bool wait)
> +{
> + if (!wait)
> + return down_write_trylock(&mem_enc_lock);
> +
> + down_write(&mem_enc_lock);
> +
> + return true;
> +}

This also needs a comment about why the lock isn't being released.
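Something like the following would cover that point (a sketch of the
requested comment on the draft above, nothing else changed):

bool stop_memory_enc_conversion(bool wait)
{
	/*
	 * The write lock is intentionally never released: once
	 * conversions are stopped for kexec, they must never resume.
	 */
	if (!wait)
		return down_write_trylock(&mem_enc_lock);

	down_write(&mem_enc_lock);

	return true;
}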

2024-02-26 16:15:44

by Kirill A. Shutemov

[permalink] [raw]
Subject: Re: [PATCHv7 10/16] x86/tdx: Convert shared memory back to private on kexec

On Sun, Feb 25, 2024 at 04:58:46PM +0200, Kirill A. Shutemov wrote:
> On Fri, Feb 23, 2024 at 11:39:07AM -0800, Dave Hansen wrote:
> > On 2/12/24 02:44, Kirill A. Shutemov wrote:
> > > +static void tdx_kexec_stop_conversion(bool crash)
> > > +{
> > > + /* Stop new private<->shared conversions */
> > > + conversion_allowed = false;
> > > +
> > > + /*
> > > + * Make sure conversion_allowed is cleared before checking
> > > + * conversions_in_progress.
> > > + */
> > > + barrier();
> > > +
> > > + /*
> > > + * Crash kernel reaches here with interrupts disabled: can't wait for
> > > + * conversions to finish.
> > > + *
> > > + * If a race happened, just report and proceed.
> > > + */
> > > + if (!crash) {
> > > + unsigned long timeout;
> > > +
> > > + /*
> > > + * Wait for in-flight conversions to complete.
> > > + *
> > > + * Do not wait more than 30 seconds.
> > > + */
> > > + timeout = 30 * USEC_PER_SEC;
> > > + while (atomic_read(&conversions_in_progress) && timeout--)
> > > + udelay(1);
> > > + }
> > > +
> > > + if (atomic_read(&conversions_in_progress))
> > > + pr_warn("Failed to finish shared<->private conversions\n");
> > > +}
> >
> > I'd really prefer we find a way to do this with actual locks, especially
> > 'conversion_allowed'.
> >
> > This is _awfully_ close to being able to be handled by a rwsem where the
> > readers are the converters and tdx_kexec_stop_conversion() takes a write.
>
> Okay, here's what I came up with.

I didn't see any problems during testing.

#include <linux/delay.h> has to be dropped, but otherwise the patch is
fine by me.

Any feedback?

--
Kiryl Shutsemau / Kirill A. Shutemov