2021-09-29 14:59:25

by Joerg Roedel

Subject: [PATCH v2 0/4] x86/mm: Fix some issues with using trampoline_pgd

From: Joerg Roedel <[email protected]>

Hi,

here are a couple of fixes and documentation improvements for the
kernel's use of the trampoline_pgd. The first patch adds a comment to
document that the trampoline_pgd aliases kernel page-tables in the
user address range, establishing global TLB entries for these
addresses.

The next two patches add global TLB flushes when switching to and from
the trampoline_pgd. The last patch extends the trampoline_pgd to cover
the whole kernel address range. This is needed to make sure the stack
and the real_mode_header don't get unmapped when switching to the
trampoline_pgd.

Please review.

Thanks,

Joerg

Joerg Roedel (4):
x86/realmode: Add comment for Global bit usage in trampoline_pgd
x86/mm/64: Flush global TLB on AP bringup
x86/mm: Flush global TLB when switching to trampoline page-table
x86/64/mm: Map all kernel memory into trampoline_pgd

arch/x86/include/asm/realmode.h | 1 +
arch/x86/kernel/cpu/common.c | 6 ++++++
arch/x86/kernel/reboot.c | 12 ++----------
arch/x86/mm/init.c | 5 +++++
arch/x86/realmode/init.c | 31 ++++++++++++++++++++++++++++++-
5 files changed, 44 insertions(+), 11 deletions(-)


base-commit: 5816b3e6577eaa676ceb00a848f0fd65fe2adc29
--
2.33.0


2021-09-29 17:24:00

by Joerg Roedel

Subject: [PATCH v2 3/4] x86/mm: Flush global TLB when switching to trampoline page-table

From: Joerg Roedel <[email protected]>

Move the switching code into a function so that it can be re-used, and
add a global TLB flush. This makes sure that any use of memory which is
not mapped in the trampoline page-table is reliably caught.

Signed-off-by: Joerg Roedel <[email protected]>
---
arch/x86/include/asm/realmode.h | 1 +
arch/x86/kernel/reboot.c | 12 ++----------
arch/x86/realmode/init.c | 19 +++++++++++++++++++
3 files changed, 22 insertions(+), 10 deletions(-)

diff --git a/arch/x86/include/asm/realmode.h b/arch/x86/include/asm/realmode.h
index 5db5d083c873..331474b150f1 100644
--- a/arch/x86/include/asm/realmode.h
+++ b/arch/x86/include/asm/realmode.h
@@ -89,6 +89,7 @@ static inline void set_real_mode_mem(phys_addr_t mem)
}

void reserve_real_mode(void);
+void load_trampoline_pgtable(void);

#endif /* __ASSEMBLY__ */

diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
index 0a40df66a40d..fa700b46588e 100644
--- a/arch/x86/kernel/reboot.c
+++ b/arch/x86/kernel/reboot.c
@@ -113,17 +113,9 @@ void __noreturn machine_real_restart(unsigned int type)
spin_unlock(&rtc_lock);

/*
- * Switch back to the initial page table.
+ * Switch to the trampoline page table.
*/
-#ifdef CONFIG_X86_32
- load_cr3(initial_page_table);
-#else
- write_cr3(real_mode_header->trampoline_pgd);
-
- /* Exiting long mode will fail if CR4.PCIDE is set. */
- if (boot_cpu_has(X86_FEATURE_PCID))
- cr4_clear_bits(X86_CR4_PCIDE);
-#endif
+ load_trampoline_pgtable();

/* Jump to the identity-mapped low memory code */
#ifdef CONFIG_X86_32
diff --git a/arch/x86/realmode/init.c b/arch/x86/realmode/init.c
index 31b5856010cb..0cfe1046cec9 100644
--- a/arch/x86/realmode/init.c
+++ b/arch/x86/realmode/init.c
@@ -17,6 +17,25 @@ u32 *trampoline_cr4_features;
/* Hold the pgd entry used on booting additional CPUs */
pgd_t trampoline_pgd_entry;

+void load_trampoline_pgtable(void)
+{
+#ifdef CONFIG_X86_32
+ load_cr3(initial_page_table);
+#else
+ /* Exiting long mode will fail if CR4.PCIDE is set. */
+ if (boot_cpu_has(X86_FEATURE_PCID))
+ cr4_clear_bits(X86_CR4_PCIDE);
+
+ write_cr3(real_mode_header->trampoline_pgd);
+#endif
+
+ /*
+ * Flush global TLB entries to catch any bugs where code running on the
+ * trampoline_pgd uses memory not mapped into the trampoline page-table.
+ */
+ __flush_tlb_all();
+}
+
void __init reserve_real_mode(void)
{
phys_addr_t mem;
--
2.33.0

2021-09-29 17:24:00

by Joerg Roedel

Subject: [PATCH v2 2/4] x86/mm/64: Flush global TLB on AP bringup

From: Joerg Roedel <[email protected]>

The AP bringup code uses the trampoline_pgd page-table, which
establishes global mappings in the user range of the address space.
Flush the global TLB entries after CR4 is set up for the AP to make sure
no stale entries remain in the TLB.
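
As background (not part of this patch): flushing global TLB entries on x86
needs more than a CR3 write; it boils down to toggling CR4.PGE, or using
INVPCID where available, which is roughly what __flush_tlb_all() ends up
doing when global pages are enabled. Below is a minimal, illustrative sketch
of the CR4.PGE-toggle idea only; the example_* helpers are hypothetical and
the kernel's real code adds feature checks and prefers INVPCID where
supported:

	/*
	 * Illustration only: any change to CR4.PGE invalidates all TLB
	 * entries, including global ones.
	 */
	static inline unsigned long example_read_cr4(void)
	{
		unsigned long cr4;

		asm volatile("mov %%cr4, %0" : "=r" (cr4));
		return cr4;
	}

	static inline void example_write_cr4(unsigned long cr4)
	{
		asm volatile("mov %0, %%cr4" : : "r" (cr4) : "memory");
	}

	static void example_flush_global_tlb(void)
	{
		unsigned long cr4 = example_read_cr4();

		example_write_cr4(cr4 & ~(1UL << 7));	/* clear CR4.PGE: flushes all TLB entries */
		example_write_cr4(cr4);			/* restore the original CR4 value */
	}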

Signed-off-by: Joerg Roedel <[email protected]>
---
arch/x86/kernel/cpu/common.c | 6 ++++++
1 file changed, 6 insertions(+)

diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 0f8885949e8c..0f71ea2e5680 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -436,6 +436,12 @@ void cr4_init(void)

/* Initialize cr4 shadow for this CPU. */
this_cpu_write(cpu_tlbstate.cr4, cr4);
+
+ /*
+ * Flush any global TLB entries that might be left from the
+ * trampoline_pgd.
+ */
+ __flush_tlb_all();
}

/*
--
2.33.0

2021-09-29 17:24:28

by Joerg Roedel

Subject: [PATCH v2 4/4] x86/64/mm: Map all kernel memory into trampoline_pgd

From: Joerg Roedel <[email protected]>

The trampoline_pgd only maps the 0xffffff8000000000-0xffffffffffffffff
range of kernel memory (with 4-level paging). This range contains the
kernel's text+data+bss mappings and the module mapping space, but not the
direct mapping or the vmalloc area.

This is enough to get an application processor out of real-mode, but
for code that switches back to real-mode, the trampoline_pgd is missing
important parts of the address space. For example, consider this code
from arch/x86/kernel/reboot.c, function machine_real_restart() for a
64-bit kernel:

#ifdef CONFIG_X86_32
	load_cr3(initial_page_table);
#else
	write_cr3(real_mode_header->trampoline_pgd);

	/* Exiting long mode will fail if CR4.PCIDE is set. */
	if (boot_cpu_has(X86_FEATURE_PCID))
		cr4_clear_bits(X86_CR4_PCIDE);
#endif

	/* Jump to the identity-mapped low memory code */
#ifdef CONFIG_X86_32
	asm volatile("jmpl *%0" : :
		     "rm" (real_mode_header->machine_real_restart_asm),
		     "a" (type));
#else
	asm volatile("ljmpl *%0" : :
		     "m" (real_mode_header->machine_real_restart_asm),
		     "D" (type));
#endif

The code switches to the trampoline_pgd, which unmaps the direct mapping
and also the kernel stack. The call to cr4_clear_bits() will find no
stack and crash the machine. The real_mode_header pointer below points
into the direct mapping, and dereferencing it also causes a crash.

The only reason this does not always crash is that kernel mappings are
global, so the CR3 switch does not flush them from the TLB. But if these
mappings are not already in the TLB, the above code will crash before it
can jump to the real-mode stub.

Extend the trampoline_pgd to contain all kernel mappings to prevent
these crashes and to make code which runs on this page-table more
robust.
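
For reference (not part of the patch text): the per-PGD-entry copy in the
diff below works on whole PGD slots, each covering 512 GiB with 4-level
paging, starting at the slot that holds __PAGE_OFFSET. A small stand-alone
sketch of that index arithmetic; the example_* names and constants are
illustrative and assume the default, non-KASLR 4-level layout:

	#include <stdio.h>

	/* Illustrative constants for the default 4-level x86-64 layout (no KASLR). */
	#define EXAMPLE_PAGE_OFFSET	0xffff888000000000UL	/* start of the direct mapping */
	#define EXAMPLE_PGDIR_SHIFT	39			/* each PGD entry maps 512 GiB */
	#define EXAMPLE_PTRS_PER_PGD	512

	static unsigned long example_pgd_index(unsigned long addr)
	{
		return (addr >> EXAMPLE_PGDIR_SHIFT) & (EXAMPLE_PTRS_PER_PGD - 1);
	}

	int main(void)
	{
		unsigned long first = example_pgd_index(EXAMPLE_PAGE_OFFSET);

		/*
		 * The patch copies init_top_pgt[first..511] into trampoline_pgd,
		 * covering the direct mapping, vmalloc space and the kernel image,
		 * instead of only entry 511 as before.
		 */
		printf("copy PGD entries %lu..%d\n", first, EXAMPLE_PTRS_PER_PGD - 1);
		return 0;
	}

This prints "copy PGD entries 273..511" for the default __PAGE_OFFSET; KASLR
can move the start of the direct mapping, which is why the patch uses
pgd_index(__PAGE_OFFSET) rather than a fixed first index.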

Cc: [email protected]
Signed-off-by: Joerg Roedel <[email protected]>
---
arch/x86/realmode/init.c | 12 +++++++++++-
1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/x86/realmode/init.c b/arch/x86/realmode/init.c
index 0cfe1046cec9..792cb9ca9b29 100644
--- a/arch/x86/realmode/init.c
+++ b/arch/x86/realmode/init.c
@@ -91,6 +91,7 @@ static void __init setup_real_mode(void)
#ifdef CONFIG_X86_64
u64 *trampoline_pgd;
u64 efer;
+ int i;
#endif

base = (unsigned char *)real_mode_header;
@@ -147,8 +148,17 @@ static void __init setup_real_mode(void)
trampoline_header->flags = 0;

trampoline_pgd = (u64 *) __va(real_mode_header->trampoline_pgd);
+
+ /*
+ * Map all of kernel memory into the trampoline PGD so that it includes
+ * the direct mapping and vmalloc space. This is needed to keep the
+ * stack and real_mode_header mapped when switching to this page table.
+ */
+ for (i = pgd_index(__PAGE_OFFSET); i < PTRS_PER_PGD; i++)
+ trampoline_pgd[i] = init_top_pgt[i].pgd;
+
+ /* Map the real mode stub as virtual == physical */
trampoline_pgd[0] = trampoline_pgd_entry.pgd;
- trampoline_pgd[511] = init_top_pgt[511].pgd;
#endif

sme_sev_setup_real_mode(trampoline_header);
--
2.33.0