2022-09-21 14:56:23

by Ard Biesheuvel

Subject: [PATCH v2 00/16] x86: head_64.S spring cleaning

After doing some cleanup work on the EFI code in head_64.S, the mixed
mode code in particular, I noticed that the memory encryption pieces
could use some attention as well, so I cleaned those up too.

I have been sitting on these patches since November, waiting for the
right time to post them, i.e., after the SEV/SNP review had finished.
This has yet to happen, so I'm posting them now instead. Please feel
free to disregard for the time being, and propose a suitable timeframe
to repost them if this is likely to conflict with ongoing work.

Changes since v1:
- at Boris's request, split the patches into smaller ones that are
easier to review

Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Michael Roth <[email protected]>

Ard Biesheuvel (16):
x86/compressed: efi-mixed: rename efi_thunk_64.S to efi-mixed.S
x86/compressed: efi-mixed: move 32-bit entrypoint code into .text
section
x86/compressed: efi-mixed: move bootargs parsing out of 32-bit startup
code
x86/compressed: efi-mixed: move efi32_pe_entry into .text section
x86/compressed: efi-mixed: move efi32_entry out of head_64.S
x86/compressed: efi-mixed: move efi32_pe_entry() out of head_64.S
x86/compressed: efi: merge multiple definitions of image_offset into
one
x86/compressed: efi-mixed: simplify IDT/GDT preserve/restore
x86/compressed: avoid touching ECX in startup32_set_idt_entry()
x86/compressed: pull global variable ref up into startup32_load_idt()
x86/compressed: move startup32_load_idt() into .text section
x86/compressed: move startup32_load_idt() out of head_64.S
x86/compressed: move startup32_check_sev_cbit() into .text
x86/compressed: move startup32_check_sev_cbit() out of head_64.S
x86/compressed: adhere to calling convention in
get_sev_encryption_bit()
x86/compressed: only build mem_encrypt.S if AMD_MEM_ENCRYPT=y

arch/x86/boot/compressed/Makefile | 8 +-
arch/x86/boot/compressed/efi_mixed.S | 337 ++++++++++++++++++++
arch/x86/boot/compressed/efi_thunk_64.S | 195 -----------
arch/x86/boot/compressed/head_32.S | 4 -
arch/x86/boot/compressed/head_64.S | 299 +----------------
arch/x86/boot/compressed/mem_encrypt.S | 152 ++++++++-
drivers/firmware/efi/libstub/x86-stub.c | 2 +-
7 files changed, 496 insertions(+), 501 deletions(-)
create mode 100644 arch/x86/boot/compressed/efi_mixed.S
delete mode 100644 arch/x86/boot/compressed/efi_thunk_64.S

--
2.35.1


2022-09-21 14:56:24

by Ard Biesheuvel

Subject: [PATCH v2 02/16] x86/compressed: efi-mixed: move 32-bit entrypoint code into .text section

Move the code that stores the arguments passed to the EFI entrypoint
into the .text section, so that it can be moved into a separate
compilation unit in a subsequent patch.

Signed-off-by: Ard Biesheuvel <[email protected]>
---
arch/x86/boot/compressed/head_64.S | 34 ++++++++++++--------
1 file changed, 20 insertions(+), 14 deletions(-)

diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index d33f060900d2..1ba2fc2357e6 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -303,24 +303,28 @@ SYM_FUNC_START(efi32_stub_entry)
popl %ecx
popl %edx
popl %esi
+ jmp efi32_entry
+SYM_FUNC_END(efi32_stub_entry)

+ .text
+SYM_FUNC_START_LOCAL(efi32_entry)
call 1f
-1: pop %ebp
- subl $ rva(1b), %ebp
-
- movl %esi, rva(efi32_boot_args+8)(%ebp)
-SYM_INNER_LABEL(efi32_pe_stub_entry, SYM_L_LOCAL)
- movl %ecx, rva(efi32_boot_args)(%ebp)
- movl %edx, rva(efi32_boot_args+4)(%ebp)
- movb $0, rva(efi_is64)(%ebp)
+1: pop %ebx

/* Save firmware GDTR and code/data selectors */
- sgdtl rva(efi32_boot_gdt)(%ebp)
- movw %cs, rva(efi32_boot_cs)(%ebp)
- movw %ds, rva(efi32_boot_ds)(%ebp)
+ sgdtl (efi32_boot_gdt - 1b)(%ebx)
+ movw %cs, (efi32_boot_cs - 1b)(%ebx)
+ movw %ds, (efi32_boot_ds - 1b)(%ebx)

/* Store firmware IDT descriptor */
- sidtl rva(efi32_boot_idt)(%ebp)
+ sidtl (efi32_boot_idt - 1b)(%ebx)
+
+ /* Store boot arguments */
+ leal (efi32_boot_args - 1b)(%ebx), %ebx
+ movl %ecx, 0(%ebx)
+ movl %edx, 4(%ebx)
+ movl %esi, 8(%ebx)
+ movb $0x0, 12(%ebx) // efi_is64

/* Disable paging */
movl %cr0, %eax
@@ -328,7 +332,8 @@ SYM_INNER_LABEL(efi32_pe_stub_entry, SYM_L_LOCAL)
movl %eax, %cr0

jmp startup_32
-SYM_FUNC_END(efi32_stub_entry)
+SYM_FUNC_END(efi32_entry)
+ __HEAD
#endif

.code64
@@ -831,7 +836,8 @@ SYM_FUNC_START(efi32_pe_entry)
*/
subl %esi, %ebx
movl %ebx, rva(image_offset)(%ebp) // save image_offset
- jmp efi32_pe_stub_entry
+ xorl %esi, %esi
+ jmp efi32_entry

2: popl %edi // restore callee-save registers
popl %ebx
--
2.35.1

2022-09-21 14:56:45

by Ard Biesheuvel

Subject: [PATCH v2 04/16] x86/compressed: efi-mixed: move efi32_pe_entry into .text section

Move efi32_pe_entry() into the .text section, so that it can be moved
out of head_64.S and into a separate compilation unit in a subsequent
patch.

Signed-off-by: Ard Biesheuvel <[email protected]>
---
arch/x86/boot/compressed/head_64.S | 11 +++++------
1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index b51f0e107c2e..9ae6ddccd3ef 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -757,7 +757,7 @@ SYM_DATA(efi_is64, .byte 1)
#define BS32_handle_protocol 88 // offsetof(efi_boot_services_32_t, handle_protocol)
#define LI32_image_base 32 // offsetof(efi_loaded_image_32_t, image_base)

- __HEAD
+ .text
.code32
SYM_FUNC_START(efi32_pe_entry)
/*
@@ -779,12 +779,11 @@ SYM_FUNC_START(efi32_pe_entry)

call 1f
1: pop %ebx
- subl $ rva(1b), %ebx

/* Get the loaded image protocol pointer from the image handle */
leal -4(%ebp), %eax
pushl %eax // &loaded_image
- leal rva(loaded_image_proto)(%ebx), %eax
+ leal (loaded_image_proto - 1b)(%ebx), %eax
pushl %eax // pass the GUID address
pushl 8(%ebp) // pass the image handle

@@ -813,13 +812,13 @@ SYM_FUNC_START(efi32_pe_entry)
movl 12(%ebp), %edx // sys_table
movl -4(%ebp), %esi // loaded_image
movl LI32_image_base(%esi), %esi // loaded_image->image_base
- movl %ebx, %ebp // startup_32 for efi32_pe_stub_entry
+ leal (startup_32 - 1b)(%ebx), %ebp // runtime address of startup_32
/*
* We need to set the image_offset variable here since startup_32() will
* use it before we get to the 64-bit efi_pe_entry() in C code.
*/
- subl %esi, %ebx
- movl %ebx, rva(image_offset)(%ebp) // save image_offset
+ subl %esi, %ebp // calculate image_offset
+ movl %ebp, (image_offset - 1b)(%ebx) // save image_offset
xorl %esi, %esi
jmp efi32_entry

--
2.35.1

2022-09-21 14:57:10

by Ard Biesheuvel

Subject: [PATCH v2 09/16] x86/compressed: avoid touching ECX in startup32_set_idt_entry()

Avoid touching register %ecx in startup32_set_idt_entry() by folding
the MOV, SHL and ORL instructions into a single ORL that no longer
requires a temporary register.

This permits ECX to be used as a function argument in a subsequent
patch.

Signed-off-by: Ard Biesheuvel <[email protected]>
---
arch/x86/boot/compressed/head_64.S | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index 90b119fbef58..3db7e4a634b0 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -733,7 +733,6 @@ SYM_DATA_END_LABEL(boot32_idt, SYM_L_GLOBAL, boot32_idt_end)
*/
SYM_FUNC_START(startup32_set_idt_entry)
push %ebx
- push %ecx

/* IDT entry address to %ebx */
leal rva(boot32_idt)(%ebp), %ebx
@@ -742,10 +741,8 @@ SYM_FUNC_START(startup32_set_idt_entry)

/* Build IDT entry, lower 4 bytes */
movl %eax, %edx
- andl $0x0000ffff, %edx # Target code segment offset [15:0]
- movl $__KERNEL32_CS, %ecx # Target code segment selector
- shl $16, %ecx
- orl %ecx, %edx
+ andl $0x0000ffff, %edx # Target code segment offset [15:0]
+ orl $(__KERNEL32_CS << 16), %edx # Target code segment selector

/* Store lower 4 bytes to IDT */
movl %edx, (%ebx)
@@ -758,7 +755,6 @@ SYM_FUNC_START(startup32_set_idt_entry)
/* Store upper 4 bytes to IDT */
movl %edx, 4(%ebx)

- pop %ecx
pop %ebx
RET
SYM_FUNC_END(startup32_set_idt_entry)
--
2.35.1

2022-09-21 14:57:17

by Ard Biesheuvel

Subject: [PATCH v2 11/16] x86/compressed: move startup32_load_idt() into .text section

Convert startup32_load_idt() into an ordinary function and move it into
the .text section. This involves turning the rva() immediates into
offsets derived from a local label, and preserving/restoring %ebp and
%ebx as per the calling convention.

Also move the #ifdef to the only existing call site. This makes it clear
that the function is not called at all if support for memory encryption
is not compiled in.

Signed-off-by: Ard Biesheuvel <[email protected]>
---
arch/x86/boot/compressed/head_64.S | 29 ++++++++++++++------
1 file changed, 20 insertions(+), 9 deletions(-)

diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index a1f893dd5bbf..b4b2b76ed1af 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -118,7 +118,9 @@ SYM_FUNC_START(startup_32)
1:

/* Setup Exception handling for SEV-ES */
+#ifdef CONFIG_AMD_MEM_ENCRYPT
call startup32_load_idt
+#endif

/* Make sure cpu supports long mode. */
call verify_cpu
@@ -719,7 +721,7 @@ SYM_DATA_END_LABEL(boot32_idt, SYM_L_GLOBAL, boot32_idt_end)
#endif

#ifdef CONFIG_AMD_MEM_ENCRYPT
- __HEAD
+ .text
.code32
/*
* Write an IDT entry into boot32_idt
@@ -752,24 +754,32 @@ SYM_FUNC_START_LOCAL(startup32_set_idt_entry)

RET
SYM_FUNC_END(startup32_set_idt_entry)
-#endif

SYM_FUNC_START(startup32_load_idt)
-#ifdef CONFIG_AMD_MEM_ENCRYPT
- leal rva(boot32_idt)(%ebp), %ecx
+ push %ebp
+ push %ebx
+
+ call 1f
+1: pop %ebp
+
+ leal (boot32_idt - 1b)(%ebp), %ebx

/* #VC handler */
- leal rva(startup32_vc_handler)(%ebp), %eax
+ leal (startup32_vc_handler - 1b)(%ebp), %eax
movl $X86_TRAP_VC, %edx
+ movl %ebx, %ecx
call startup32_set_idt_entry

/* Load IDT */
- leal rva(boot32_idt)(%ebp), %eax
- movl %eax, rva(boot32_idt_desc+2)(%ebp)
- lidt rva(boot32_idt_desc)(%ebp)
-#endif
+ leal (boot32_idt_desc - 1b)(%ebp), %ecx
+ movl %ebx, 2(%ecx)
+ lidt (%ecx)
+
+ pop %ebx
+ pop %ebp
RET
SYM_FUNC_END(startup32_load_idt)
+#endif

/*
* Check for the correct C-bit position when the startup_32 boot-path is used.
@@ -788,6 +798,7 @@ SYM_FUNC_END(startup32_load_idt)
* succeed. An incorrect C-bit position will map all memory unencrypted, so that
* the compare will use the encrypted random data and fail.
*/
+ __HEAD
SYM_FUNC_START(startup32_check_sev_cbit)
#ifdef CONFIG_AMD_MEM_ENCRYPT
pushl %eax
--
2.35.1

2022-09-21 14:57:44

by Ard Biesheuvel

Subject: [PATCH v2 12/16] x86/compressed: move startup32_load_idt() out of head_64.S

Now that startup32_load_idt() has been refactored into an ordinary
callable function, move it into mem_encrypt.S where it belongs.

Signed-off-by: Ard Biesheuvel <[email protected]>
---
arch/x86/boot/compressed/head_64.S | 74 --------------------
arch/x86/boot/compressed/mem_encrypt.S | 72 ++++++++++++++++++-
2 files changed, 71 insertions(+), 75 deletions(-)

diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index b4b2b76ed1af..abb5a650a816 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -707,80 +707,6 @@ SYM_DATA_START(boot_idt)
.endr
SYM_DATA_END_LABEL(boot_idt, SYM_L_GLOBAL, boot_idt_end)

-#ifdef CONFIG_AMD_MEM_ENCRYPT
-SYM_DATA_START(boot32_idt_desc)
- .word boot32_idt_end - boot32_idt - 1
- .long 0
-SYM_DATA_END(boot32_idt_desc)
- .balign 8
-SYM_DATA_START(boot32_idt)
- .rept 32
- .quad 0
- .endr
-SYM_DATA_END_LABEL(boot32_idt, SYM_L_GLOBAL, boot32_idt_end)
-#endif
-
-#ifdef CONFIG_AMD_MEM_ENCRYPT
- .text
- .code32
-/*
- * Write an IDT entry into boot32_idt
- *
- * Parameters:
- *
- * %eax: Handler address
- * %edx: Vector number
- * %ecx: IDT address
- */
-SYM_FUNC_START_LOCAL(startup32_set_idt_entry)
- /* IDT entry address to %ecx */
- leal (%ecx, %edx, 8), %ecx
-
- /* Build IDT entry, lower 4 bytes */
- movl %eax, %edx
- andl $0x0000ffff, %edx # Target code segment offset [15:0]
- orl $(__KERNEL32_CS << 16), %edx # Target code segment selector
-
- /* Store lower 4 bytes to IDT */
- movl %edx, (%ecx)
-
- /* Build IDT entry, upper 4 bytes */
- movl %eax, %edx
- andl $0xffff0000, %edx # Target code segment offset [31:16]
- orl $0x00008e00, %edx # Present, Type 32-bit Interrupt Gate
-
- /* Store upper 4 bytes to IDT */
- movl %edx, 4(%ecx)
-
- RET
-SYM_FUNC_END(startup32_set_idt_entry)
-
-SYM_FUNC_START(startup32_load_idt)
- push %ebp
- push %ebx
-
- call 1f
-1: pop %ebp
-
- leal (boot32_idt - 1b)(%ebp), %ebx
-
- /* #VC handler */
- leal (startup32_vc_handler - 1b)(%ebp), %eax
- movl $X86_TRAP_VC, %edx
- movl %ebx, %ecx
- call startup32_set_idt_entry
-
- /* Load IDT */
- leal (boot32_idt_desc - 1b)(%ebp), %ecx
- movl %ebx, 2(%ecx)
- lidt (%ecx)
-
- pop %ebx
- pop %ebp
- RET
-SYM_FUNC_END(startup32_load_idt)
-#endif
-
/*
* Check for the correct C-bit position when the startup_32 boot-path is used.
*
diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
index a73e4d783cae..6747e5e4c696 100644
--- a/arch/x86/boot/compressed/mem_encrypt.S
+++ b/arch/x86/boot/compressed/mem_encrypt.S
@@ -12,6 +12,8 @@
#include <asm/processor-flags.h>
#include <asm/msr.h>
#include <asm/asm-offsets.h>
+#include <asm/segment.h>
+#include <asm/trapnr.h>

.text
.code32
@@ -98,7 +100,7 @@ SYM_CODE_START_LOCAL(sev_es_req_cpuid)
jmp 1b
SYM_CODE_END(sev_es_req_cpuid)

-SYM_CODE_START(startup32_vc_handler)
+SYM_CODE_START_LOCAL(startup32_vc_handler)
pushl %eax
pushl %ebx
pushl %ecx
@@ -184,6 +186,63 @@ SYM_CODE_START(startup32_vc_handler)
jmp .Lfail
SYM_CODE_END(startup32_vc_handler)

+/*
+ * Write an IDT entry into boot32_idt
+ *
+ * Parameters:
+ *
+ * %eax: Handler address
+ * %edx: Vector number
+ * %ecx: IDT address
+ */
+SYM_FUNC_START_LOCAL(startup32_set_idt_entry)
+ /* IDT entry address to %ecx */
+ leal (%ecx, %edx, 8), %ecx
+
+ /* Build IDT entry, lower 4 bytes */
+ movl %eax, %edx
+ andl $0x0000ffff, %edx # Target code segment offset [15:0]
+ orl $(__KERNEL32_CS << 16), %edx # Target code segment selector
+
+ /* Store lower 4 bytes to IDT */
+ movl %edx, (%ecx)
+
+ /* Build IDT entry, upper 4 bytes */
+ movl %eax, %edx
+ andl $0xffff0000, %edx # Target code segment offset [31:16]
+ orl $0x00008e00, %edx # Present, Type 32-bit Interrupt Gate
+
+ /* Store upper 4 bytes to IDT */
+ movl %edx, 4(%ecx)
+
+ RET
+SYM_FUNC_END(startup32_set_idt_entry)
+
+SYM_FUNC_START(startup32_load_idt)
+ push %ebp
+ push %ebx
+
+ call 1f
+1: pop %ebp
+
+ leal (boot32_idt - 1b)(%ebp), %ebx
+
+ /* #VC handler */
+ leal (startup32_vc_handler - 1b)(%ebp), %eax
+ movl $X86_TRAP_VC, %edx
+ movl %ebx, %ecx
+ call startup32_set_idt_entry
+
+ /* Load IDT */
+ leal (boot32_idt_desc - 1b)(%ebp), %ecx
+ movl %ebx, 2(%ecx)
+ lidt (%ecx)
+
+ pop %ebx
+ pop %ebp
+ RET
+SYM_FUNC_END(startup32_load_idt)
+
.code64

#include "../../kernel/sev_verify_cbit.S"
@@ -195,4 +254,15 @@ SYM_CODE_END(startup32_vc_handler)
SYM_DATA(sme_me_mask, .quad 0)
SYM_DATA(sev_status, .quad 0)
SYM_DATA(sev_check_data, .quad 0)
+
+SYM_DATA_START_LOCAL(boot32_idt)
+ .rept 32
+ .quad 0
+ .endr
+SYM_DATA_END(boot32_idt)
+
+SYM_DATA_START_LOCAL(boot32_idt_desc)
+ .word . - boot32_idt - 1
+ .long 0
+SYM_DATA_END(boot32_idt_desc)
#endif
--
2.35.1

2022-09-21 14:58:12

by Ard Biesheuvel

Subject: [PATCH v2 13/16] x86/compressed: move startup32_check_sev_cbit() into .text

Move startup32_check_sev_cbit() into the .text section and turn it into
an ordinary function using the ordinary 32-bit calling convention,
instead of saving/restoring the registers that are known to be live at
the only call site. This improves maintainability, and makes it possible
to move this function out of head_64.S and into a separate compilation
unit that is specific to memory encryption.

Note that this requires the call site to be moved before the mixed mode
check, as %eax will be live otherwise.

Signed-off-by: Ard Biesheuvel <[email protected]>
---
arch/x86/boot/compressed/head_64.S | 35 +++++++++++---------
1 file changed, 19 insertions(+), 16 deletions(-)

diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index abb5a650a816..639f688e4949 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -251,6 +251,11 @@ SYM_FUNC_START(startup_32)
movl $__BOOT_TSS, %eax
ltr %ax

+#ifdef CONFIG_AMD_MEM_ENCRYPT
+ /* Check if the C-bit position is correct when SEV is active */
+ call startup32_check_sev_cbit
+#endif
+
/*
* Setup for the jump to 64bit mode
*
@@ -268,8 +273,6 @@ SYM_FUNC_START(startup_32)
leal rva(startup_64_mixedmode)(%ebp), %eax
1:
#endif
- /* Check if the C-bit position is correct when SEV is active */
- call startup32_check_sev_cbit

pushl $__KERNEL_CS
pushl %eax
@@ -724,16 +727,17 @@ SYM_DATA_END_LABEL(boot_idt, SYM_L_GLOBAL, boot_idt_end)
* succeed. An incorrect C-bit position will map all memory unencrypted, so that
* the compare will use the encrypted random data and fail.
*/
- __HEAD
-SYM_FUNC_START(startup32_check_sev_cbit)
#ifdef CONFIG_AMD_MEM_ENCRYPT
- pushl %eax
+ .text
+SYM_FUNC_START(startup32_check_sev_cbit)
pushl %ebx
- pushl %ecx
- pushl %edx
+ pushl %ebp
+
+ call 0f
+0: popl %ebp

/* Check for non-zero sev_status */
- movl rva(sev_status)(%ebp), %eax
+ movl (sev_status - 0b)(%ebp), %eax
testl %eax, %eax
jz 4f

@@ -748,17 +752,18 @@ SYM_FUNC_START(startup32_check_sev_cbit)
jnc 2b

/* Store to memory and keep it in the registers */
- movl %eax, rva(sev_check_data)(%ebp)
- movl %ebx, rva(sev_check_data+4)(%ebp)
+ leal (sev_check_data - 0b)(%ebp), %ebp
+ movl %eax, 0(%ebp)
+ movl %ebx, 4(%ebp)

/* Enable paging to see if encryption is active */
movl %cr0, %edx /* Backup %cr0 in %edx */
movl $(X86_CR0_PG | X86_CR0_PE), %ecx /* Enable Paging and Protected mode */
movl %ecx, %cr0

- cmpl %eax, rva(sev_check_data)(%ebp)
+ cmpl %eax, 0(%ebp)
jne 3f
- cmpl %ebx, rva(sev_check_data+4)(%ebp)
+ cmpl %ebx, 4(%ebp)
jne 3f

movl %edx, %cr0 /* Restore previous %cr0 */
@@ -770,13 +775,11 @@ SYM_FUNC_START(startup32_check_sev_cbit)
jmp 3b

4:
- popl %edx
- popl %ecx
+ popl %ebp
popl %ebx
- popl %eax
-#endif
RET
SYM_FUNC_END(startup32_check_sev_cbit)
+#endif

/*
* Stack and heap for uncompression
--
2.35.1

2022-09-21 14:59:16

by Ard Biesheuvel

Subject: [PATCH v2 16/16] x86/compressed: only build mem_encrypt.S if AMD_MEM_ENCRYPT=y

Avoid building the mem_encrypt.o object if memory encryption support is
not enabled to begin with.

Signed-off-by: Ard Biesheuvel <[email protected]>
---
arch/x86/boot/compressed/Makefile | 2 +-
arch/x86/boot/compressed/mem_encrypt.S | 2 --
2 files changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index d6dbb46696a2..9aad9ddcf3b4 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -99,7 +99,7 @@ vmlinux-objs-$(CONFIG_RANDOMIZE_BASE) += $(obj)/kaslr.o
ifdef CONFIG_X86_64
vmlinux-objs-y += $(obj)/ident_map_64.o
vmlinux-objs-y += $(obj)/idt_64.o $(obj)/idt_handlers_64.o
- vmlinux-objs-y += $(obj)/mem_encrypt.o
+ vmlinux-objs-$(CONFIG_AMD_MEM_ENCRYPT) += $(obj)/mem_encrypt.o
vmlinux-objs-y += $(obj)/pgtable_64.o
vmlinux-objs-$(CONFIG_AMD_MEM_ENCRYPT) += $(obj)/sev.o
endif
diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
index e69674588a31..32f7cc8a8625 100644
--- a/arch/x86/boot/compressed/mem_encrypt.S
+++ b/arch/x86/boot/compressed/mem_encrypt.S
@@ -307,7 +307,6 @@ SYM_FUNC_END(startup32_check_sev_cbit)

.data

-#ifdef CONFIG_AMD_MEM_ENCRYPT
.balign 8
SYM_DATA(sme_me_mask, .quad 0)
SYM_DATA(sev_status, .quad 0)
@@ -323,4 +322,3 @@ SYM_DATA_START_LOCAL(boot32_idt_desc)
.word . - boot32_idt - 1
.long 0
SYM_DATA_END(boot32_idt_desc)
-#endif
--
2.35.1

2022-09-21 15:06:51

by Ard Biesheuvel

Subject: [PATCH v2 03/16] x86/compressed: efi-mixed: move bootargs parsing out of 32-bit startup code

Move the logic that chooses between the different EFI entrypoints out of
the 32-bit boot path, and into a 64-bit helper that can perform the same
task much more cleanly. While at it, document the mixed mode boot flow
in a code comment.

Signed-off-by: Ard Biesheuvel <[email protected]>
---
arch/x86/boot/compressed/efi_mixed.S | 43 ++++++++++++++++++++
arch/x86/boot/compressed/head_64.S | 24 ++---------
2 files changed, 47 insertions(+), 20 deletions(-)

diff --git a/arch/x86/boot/compressed/efi_mixed.S b/arch/x86/boot/compressed/efi_mixed.S
index 67e7edcdfea8..77e77c3ea393 100644
--- a/arch/x86/boot/compressed/efi_mixed.S
+++ b/arch/x86/boot/compressed/efi_mixed.S
@@ -22,6 +22,49 @@

.code64
.text
+/*
+ * When booting in 64-bit mode on 32-bit EFI firmware, startup_64_mixedmode()
+ * is the first thing that runs after switching to long mode. Depending on
+ * whether the EFI handover protocol or the compat entry point was used to
+ * enter the kernel, it will either branch to the 64-bit EFI handover
+ * entrypoint at offset 0x390 in the image, or to the 64-bit EFI PE/COFF
+ * entrypoint efi_pe_entry(). In the former case, the bootloader must provide a
+ * struct bootparams pointer as the third argument, so the presence of such a
+ * pointer is used to disambiguate.
+ *
+ * +--------------+
+ * +------------------+ +------------+ +------>| efi_pe_entry |
+ * | efi32_pe_entry |---->| | | +-----------+--+
+ * +------------------+ | | +------+---------------+ |
+ * | startup_32 |---->| startup_64_mixedmode | |
+ * +------------------+ | | +------+---------------+ V
+ * | efi32_stub_entry |---->| | | +------------------+
+ * +------------------+ +------------+ +---->| efi64_stub_entry |
+ * +-------------+----+
+ * +------------+ +----------+ |
+ * | startup_64 |<----| efi_main |<--------------+
+ * +------------+ +----------+
+ */
+SYM_FUNC_START(startup_64_mixedmode)
+ lea efi32_boot_args(%rip), %rdx
+ mov 0(%rdx), %edi
+ mov 4(%rdx), %esi
+ mov 8(%rdx), %edx // saved bootparams pointer
+ test %edx, %edx
+ jnz efi64_stub_entry
+ /*
+ * efi_pe_entry uses MS calling convention, which requires 32 bytes of
+ * shadow space on the stack even if all arguments are passed in
+ * registers. We also need an additional 8 bytes for the space that
+ * would be occupied by the return address, and this also results in
+ * the correct stack alignment for entry.
+ */
+ sub $40, %rsp
+ mov %rdi, %rcx // MS calling convention
+ mov %rsi, %rdx
+ jmp efi_pe_entry
+SYM_FUNC_END(startup_64_mixedmode)
+
SYM_FUNC_START(__efi64_thunk)
push %rbp
push %rbx
diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index 1ba2fc2357e6..b51f0e107c2e 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -261,25 +261,9 @@ SYM_FUNC_START(startup_32)
*/
leal rva(startup_64)(%ebp), %eax
#ifdef CONFIG_EFI_MIXED
- movl rva(efi32_boot_args)(%ebp), %edi
- testl %edi, %edi
- jz 1f
- leal rva(efi64_stub_entry)(%ebp), %eax
- movl rva(efi32_boot_args+4)(%ebp), %esi
- movl rva(efi32_boot_args+8)(%ebp), %edx // saved bootparams pointer
- testl %edx, %edx
- jnz 1f
- /*
- * efi_pe_entry uses MS calling convention, which requires 32 bytes of
- * shadow space on the stack even if all arguments are passed in
- * registers. We also need an additional 8 bytes for the space that
- * would be occupied by the return address, and this also results in
- * the correct stack alignment for entry.
- */
- subl $40, %esp
- leal rva(efi_pe_entry)(%ebp), %eax
- movl %edi, %ecx // MS calling convention
- movl %esi, %edx
+ cmpb $1, rva(efi_is64)(%ebp)
+ je 1f
+ leal rva(startup_64_mixedmode)(%ebp), %eax
1:
#endif
/* Check if the C-bit position is correct when SEV is active */
@@ -766,7 +750,7 @@ SYM_DATA_END_LABEL(boot32_idt, SYM_L_GLOBAL, boot32_idt_end)
SYM_DATA(image_offset, .long 0)
#endif
#ifdef CONFIG_EFI_MIXED
-SYM_DATA_LOCAL(efi32_boot_args, .long 0, 0, 0)
+SYM_DATA(efi32_boot_args, .long 0, 0, 0)
SYM_DATA(efi_is64, .byte 1)

#define ST32_boottime 60 // offsetof(efi_system_table_32_t, boottime)
--
2.35.1

2022-09-21 15:14:50

by Ard Biesheuvel

Subject: [PATCH v2 10/16] x86/compressed: pull global variable ref up into startup32_load_idt()

In preparation for moving startup32_load_idt() out of head_64.S and
turning it into an ordinary function using the ordinary 32-bit calling
convention, pull the global variable reference to boot32_idt up into
startup32_load_idt(). This way, startup32_set_idt_entry() does not need
to discover its own runtime physical address, which will no longer be
correlated with startup_32 once this code is moved into .text.

While at it, give startup32_set_idt_entry() static linkage.

Signed-off-by: Ard Biesheuvel <[email protected]>
---
arch/x86/boot/compressed/head_64.S | 20 ++++++++------------
1 file changed, 8 insertions(+), 12 deletions(-)

diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index 3db7e4a634b0..a1f893dd5bbf 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -728,16 +728,11 @@ SYM_DATA_END_LABEL(boot32_idt, SYM_L_GLOBAL, boot32_idt_end)
*
* %eax: Handler address
* %edx: Vector number
- *
- * Physical offset is expected in %ebp
+ * %ecx: IDT address
*/
-SYM_FUNC_START(startup32_set_idt_entry)
- push %ebx
-
- /* IDT entry address to %ebx */
- leal rva(boot32_idt)(%ebp), %ebx
- shl $3, %edx
- addl %edx, %ebx
+SYM_FUNC_START_LOCAL(startup32_set_idt_entry)
+ /* IDT entry address to %ecx */
+ leal (%ecx, %edx, 8), %ecx

/* Build IDT entry, lower 4 bytes */
movl %eax, %edx
@@ -745,7 +740,7 @@ SYM_FUNC_START(startup32_set_idt_entry)
orl $(__KERNEL32_CS << 16), %edx # Target code segment selector

/* Store lower 4 bytes to IDT */
- movl %edx, (%ebx)
+ movl %edx, (%ecx)

/* Build IDT entry, upper 4 bytes */
movl %eax, %edx
@@ -753,15 +748,16 @@ SYM_FUNC_START(startup32_set_idt_entry)
orl $0x00008e00, %edx # Present, Type 32-bit Interrupt Gate

/* Store upper 4 bytes to IDT */
- movl %edx, 4(%ebx)
+ movl %edx, 4(%ecx)

- pop %ebx
RET
SYM_FUNC_END(startup32_set_idt_entry)
#endif

SYM_FUNC_START(startup32_load_idt)
#ifdef CONFIG_AMD_MEM_ENCRYPT
+ leal rva(boot32_idt)(%ebp), %ecx
+
/* #VC handler */
leal rva(startup32_vc_handler)(%ebp), %eax
movl $X86_TRAP_VC, %edx
--
2.35.1

2022-09-21 15:15:34

by Ard Biesheuvel

Subject: [PATCH v2 14/16] x86/compressed: move startup32_check_sev_cbit() out of head_64.S

Now that the startup32_check_sev_cbit() routine can execute from
anywhere and behaves like an ordinary function, we no longer need to
keep it in head_64.S.

Signed-off-by: Ard Biesheuvel <[email protected]>
---
arch/x86/boot/compressed/head_64.S | 71 --------------------
arch/x86/boot/compressed/mem_encrypt.S | 68 +++++++++++++++++++
2 files changed, 68 insertions(+), 71 deletions(-)

diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index 639f688e4949..232cd3fa3e84 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -710,77 +710,6 @@ SYM_DATA_START(boot_idt)
.endr
SYM_DATA_END_LABEL(boot_idt, SYM_L_GLOBAL, boot_idt_end)

-/*
- * Check for the correct C-bit position when the startup_32 boot-path is used.
- *
- * The check makes use of the fact that all memory is encrypted when paging is
- * disabled. The function creates 64 bits of random data using the RDRAND
- * instruction. RDRAND is mandatory for SEV guests, so always available. If the
- * hypervisor violates that the kernel will crash right here.
- *
- * The 64 bits of random data are stored to a memory location and at the same
- * time kept in the %eax and %ebx registers. Since encryption is always active
- * when paging is off the random data will be stored encrypted in main memory.
- *
- * Then paging is enabled. When the C-bit position is correct all memory is
- * still mapped encrypted and comparing the register values with memory will
- * succeed. An incorrect C-bit position will map all memory unencrypted, so that
- * the compare will use the encrypted random data and fail.
- */
-#ifdef CONFIG_AMD_MEM_ENCRYPT
- .text
-SYM_FUNC_START(startup32_check_sev_cbit)
- pushl %ebx
- pushl %ebp
-
- call 0f
-0: popl %ebp
-
- /* Check for non-zero sev_status */
- movl (sev_status - 0b)(%ebp), %eax
- testl %eax, %eax
- jz 4f
-
- /*
- * Get two 32-bit random values - Don't bail out if RDRAND fails
- * because it is better to prevent forward progress if no random value
- * can be gathered.
- */
-1: rdrand %eax
- jnc 1b
-2: rdrand %ebx
- jnc 2b
-
- /* Store to memory and keep it in the registers */
- leal (sev_check_data - 0b)(%ebp), %ebp
- movl %eax, 0(%ebp)
- movl %ebx, 4(%ebp)
-
- /* Enable paging to see if encryption is active */
- movl %cr0, %edx /* Backup %cr0 in %edx */
- movl $(X86_CR0_PG | X86_CR0_PE), %ecx /* Enable Paging and Protected mode */
- movl %ecx, %cr0
-
- cmpl %eax, 0(%ebp)
- jne 3f
- cmpl %ebx, 4(%ebp)
- jne 3f
-
- movl %edx, %cr0 /* Restore previous %cr0 */
-
- jmp 4f
-
-3: /* Check failed - hlt the machine */
- hlt
- jmp 3b
-
-4:
- popl %ebp
- popl %ebx
- RET
-SYM_FUNC_END(startup32_check_sev_cbit)
-#endif
-
/*
* Stack and heap for uncompression
*/
diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
index 6747e5e4c696..14cf04a1ed09 100644
--- a/arch/x86/boot/compressed/mem_encrypt.S
+++ b/arch/x86/boot/compressed/mem_encrypt.S
@@ -243,6 +243,74 @@ SYM_FUNC_START(startup32_load_idt)
RET
SYM_FUNC_END(startup32_load_idt)

+/*
+ * Check for the correct C-bit position when the startup_32 boot-path is used.
+ *
+ * The check makes use of the fact that all memory is encrypted when paging is
+ * disabled. The function creates 64 bits of random data using the RDRAND
+ * instruction. RDRAND is mandatory for SEV guests, so always available. If the
+ * hypervisor violates that the kernel will crash right here.
+ *
+ * The 64 bits of random data are stored to a memory location and at the same
+ * time kept in the %eax and %ebx registers. Since encryption is always active
+ * when paging is off the random data will be stored encrypted in main memory.
+ *
+ * Then paging is enabled. When the C-bit position is correct all memory is
+ * still mapped encrypted and comparing the register values with memory will
+ * succeed. An incorrect C-bit position will map all memory unencrypted, so that
+ * the compare will use the encrypted random data and fail.
+ */
+SYM_FUNC_START(startup32_check_sev_cbit)
+ pushl %ebx
+ pushl %ebp
+
+ call 0f
+0: popl %ebp
+
+ /* Check for non-zero sev_status */
+ movl (sev_status - 0b)(%ebp), %eax
+ testl %eax, %eax
+ jz 4f
+
+ /*
+ * Get two 32-bit random values - Don't bail out if RDRAND fails
+ * because it is better to prevent forward progress if no random value
+ * can be gathered.
+ */
+1: rdrand %eax
+ jnc 1b
+2: rdrand %ebx
+ jnc 2b
+
+ /* Store to memory and keep it in the registers */
+ leal (sev_check_data - 0b)(%ebp), %ebp
+ movl %eax, 0(%ebp)
+ movl %ebx, 4(%ebp)
+
+ /* Enable paging to see if encryption is active */
+ movl %cr0, %edx /* Backup %cr0 in %edx */
+ movl $(X86_CR0_PG | X86_CR0_PE), %ecx /* Enable Paging and Protected mode */
+ movl %ecx, %cr0
+
+ cmpl %eax, 0(%ebp)
+ jne 3f
+ cmpl %ebx, 4(%ebp)
+ jne 3f
+
+ movl %edx, %cr0 /* Restore previous %cr0 */
+
+ jmp 4f
+
+3: /* Check failed - hlt the machine */
+ hlt
+ jmp 3b
+
+4:
+ popl %ebp
+ popl %ebx
+ RET
+SYM_FUNC_END(startup32_check_sev_cbit)
+
.code64

#include "../../kernel/sev_verify_cbit.S"
--
2.35.1

2022-09-21 15:23:32

by Ard Biesheuvel

Subject: [PATCH v2 15/16] x86/compressed: adhere to calling convention in get_sev_encryption_bit()

Make get_sev_encryption_bit() follow the ordinary i386 calling
convention, and only call it if CONFIG_AMD_MEM_ENCRYPT is actually
enabled. This clarifies the calling code, and makes it more
maintainable.

Signed-off-by: Ard Biesheuvel <[email protected]>
---
arch/x86/boot/compressed/head_64.S | 5 +++--
arch/x86/boot/compressed/mem_encrypt.S | 10 ----------
2 files changed, 3 insertions(+), 12 deletions(-)

diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index 232cd3fa3e84..a7bbc8d73a08 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -180,12 +180,13 @@ SYM_FUNC_START(startup_32)
*/
/*
* If SEV is active then set the encryption mask in the page tables.
- * This will insure that when the kernel is copied and decompressed
+ * This will ensure that when the kernel is copied and decompressed
* it will be done so encrypted.
*/
- call get_sev_encryption_bit
xorl %edx, %edx
#ifdef CONFIG_AMD_MEM_ENCRYPT
+ call get_sev_encryption_bit
+ xorl %edx, %edx
testl %eax, %eax
jz 1f
subl $32, %eax /* Encryption bit is always above bit 31 */
diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
index 14cf04a1ed09..e69674588a31 100644
--- a/arch/x86/boot/compressed/mem_encrypt.S
+++ b/arch/x86/boot/compressed/mem_encrypt.S
@@ -18,12 +18,7 @@
.text
.code32
SYM_FUNC_START(get_sev_encryption_bit)
- xor %eax, %eax
-
-#ifdef CONFIG_AMD_MEM_ENCRYPT
push %ebx
- push %ecx
- push %edx

movl $0x80000000, %eax /* CPUID to check the highest leaf */
cpuid
@@ -54,12 +49,7 @@ SYM_FUNC_START(get_sev_encryption_bit)
xor %eax, %eax

.Lsev_exit:
- pop %edx
- pop %ecx
pop %ebx
-
-#endif /* CONFIG_AMD_MEM_ENCRYPT */
-
RET
SYM_FUNC_END(get_sev_encryption_bit)

--
2.35.1

2022-09-21 15:25:44

by Ard Biesheuvel

Subject: [PATCH v2 08/16] x86/compressed: efi-mixed: simplify IDT/GDT preserve/restore

Tweak the asm and remove some redundant instructions. While at it,
fix the associated comment for style and correctness.

Signed-off-by: Ard Biesheuvel <[email protected]>
---
arch/x86/boot/compressed/efi_mixed.S | 20 +++++++-------------
1 file changed, 7 insertions(+), 13 deletions(-)

diff --git a/arch/x86/boot/compressed/efi_mixed.S b/arch/x86/boot/compressed/efi_mixed.S
index 838514f7685a..e5b8f1d2310c 100644
--- a/arch/x86/boot/compressed/efi_mixed.S
+++ b/arch/x86/boot/compressed/efi_mixed.S
@@ -96,24 +96,20 @@ SYM_FUNC_START(__efi64_thunk)

leaq 0x20(%rsp), %rbx
sgdt (%rbx)
-
- addq $16, %rbx
- sidt (%rbx)
+ sidt 16(%rbx)

leaq 1f(%rip), %rbp

/*
- * Switch to IDT and GDT with 32-bit segments. This is the firmware GDT
- * and IDT that was installed when the kernel started executing. The
- * pointers were saved by the efi32_entry() routine below.
+ * Switch to IDT and GDT with 32-bit segments. These are the firmware
+ * GDT and IDT that were installed when the kernel started executing.
+ * The pointers were saved by the efi32_entry() routine below.
*
* Pass the saved DS selector to the 32-bit code, and use far return to
* restore the saved CS selector.
*/
- leaq efi32_boot_idt(%rip), %rax
- lidt (%rax)
- leaq efi32_boot_gdt(%rip), %rax
- lgdt (%rax)
+ lidt efi32_boot_idt(%rip)
+ lgdt efi32_boot_gdt(%rip)

movzwl efi32_boot_ds(%rip), %edx
movzwq efi32_boot_cs(%rip), %rax
@@ -187,9 +183,7 @@ SYM_FUNC_START_LOCAL(efi_enter32)
*/
cli

- lidtl (%ebx)
- subl $16, %ebx
-
+ lidtl 16(%ebx)
lgdtl (%ebx)

movl %cr4, %eax
--
2.35.1

2022-09-21 15:25:57

by Ard Biesheuvel

Subject: [PATCH v2 07/16] x86/compressed: efi: merge multiple definitions of image_offset into one

There is no need for head_32.S and head_64.S to each declare a copy of
the global 'image_offset' variable, so drop those copies and make the
extern C declaration the definition.

Signed-off-by: Ard Biesheuvel <[email protected]>
---
arch/x86/boot/compressed/head_32.S | 4 ----
arch/x86/boot/compressed/head_64.S | 4 ----
drivers/firmware/efi/libstub/x86-stub.c | 2 +-
3 files changed, 1 insertion(+), 9 deletions(-)

diff --git a/arch/x86/boot/compressed/head_32.S b/arch/x86/boot/compressed/head_32.S
index 3b354eb9516d..6589ddd4cfaf 100644
--- a/arch/x86/boot/compressed/head_32.S
+++ b/arch/x86/boot/compressed/head_32.S
@@ -208,10 +208,6 @@ SYM_DATA_START_LOCAL(gdt)
.quad 0x00cf92000000ffff /* __KERNEL_DS */
SYM_DATA_END_LABEL(gdt, SYM_L_LOCAL, gdt_end)

-#ifdef CONFIG_EFI_STUB
-SYM_DATA(image_offset, .long 0)
-#endif
-
/*
* Stack and heap for uncompression
*/
diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index 8da2396a35a8..90b119fbef58 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -718,10 +718,6 @@ SYM_DATA_START(boot32_idt)
SYM_DATA_END_LABEL(boot32_idt, SYM_L_GLOBAL, boot32_idt_end)
#endif

-#ifdef CONFIG_EFI_STUB
-SYM_DATA(image_offset, .long 0)
-#endif
-
#ifdef CONFIG_AMD_MEM_ENCRYPT
__HEAD
.code32
diff --git a/drivers/firmware/efi/libstub/x86-stub.c b/drivers/firmware/efi/libstub/x86-stub.c
index 05ae8bcc9d67..9083ccc1d46b 100644
--- a/drivers/firmware/efi/libstub/x86-stub.c
+++ b/drivers/firmware/efi/libstub/x86-stub.c
@@ -23,7 +23,7 @@

const efi_system_table_t *efi_system_table;
const efi_dxe_services_table_t *efi_dxe_table;
-extern u32 image_offset;
+u32 image_offset;
static efi_loaded_image_t *image = NULL;

static efi_status_t
--
2.35.1

2022-09-21 15:26:10

by Ard Biesheuvel

Subject: [PATCH v2 05/16] x86/compressed: efi-mixed: move efi32_entry out of head_64.S

Move the efi32_entry() routine out of head_64.S and into efi_mixed.S,
which reduces clutter in the complicated startup routines. It also
permits the linkage of some symbols it uses to be made local.

Signed-off-by: Ard Biesheuvel <[email protected]>
---
arch/x86/boot/compressed/efi_mixed.S | 44 +++++++++++++++-----
arch/x86/boot/compressed/head_64.S | 32 --------------
2 files changed, 34 insertions(+), 42 deletions(-)

diff --git a/arch/x86/boot/compressed/efi_mixed.S b/arch/x86/boot/compressed/efi_mixed.S
index 77e77c3ea393..5007a44cd966 100644
--- a/arch/x86/boot/compressed/efi_mixed.S
+++ b/arch/x86/boot/compressed/efi_mixed.S
@@ -105,7 +105,7 @@ SYM_FUNC_START(__efi64_thunk)
/*
* Switch to IDT and GDT with 32-bit segments. This is the firmware GDT
* and IDT that was installed when the kernel started executing. The
- * pointers were saved at the EFI stub entry point in head_64.S.
+ * pointers were saved by the efi32_entry() routine below.
*
* Pass the saved DS selector to the 32-bit code, and use far return to
* restore the saved CS selector.
@@ -217,22 +217,46 @@ SYM_FUNC_START_LOCAL(efi_enter32)
lret
SYM_FUNC_END(efi_enter32)

+SYM_FUNC_START(efi32_entry)
+ call 1f
+1: pop %ebx
+
+ /* Save firmware GDTR and code/data selectors */
+ sgdtl (efi32_boot_gdt - 1b)(%ebx)
+ movw %cs, (efi32_boot_cs - 1b)(%ebx)
+ movw %ds, (efi32_boot_ds - 1b)(%ebx)
+
+ /* Store firmware IDT descriptor */
+ sidtl (efi32_boot_idt - 1b)(%ebx)
+
+ /* Store boot arguments */
+ leal (efi32_boot_args - 1b)(%ebx), %ebx
+ movl %ecx, 0(%ebx)
+ movl %edx, 4(%ebx)
+ movl %esi, 8(%ebx)
+ movb $0x0, 12(%ebx) // efi_is64
+
+ /* Disable paging */
+ movl %cr0, %eax
+ btrl $X86_CR0_PG_BIT, %eax
+ movl %eax, %cr0
+
+ jmp startup_32
+SYM_FUNC_END(efi32_entry)
+
.data
.balign 8
-SYM_DATA_START(efi32_boot_gdt)
+SYM_DATA_START_LOCAL(efi32_boot_gdt)
.word 0
.quad 0
SYM_DATA_END(efi32_boot_gdt)

-SYM_DATA_START(efi32_boot_idt)
+SYM_DATA_START_LOCAL(efi32_boot_idt)
.word 0
.quad 0
SYM_DATA_END(efi32_boot_idt)

-SYM_DATA_START(efi32_boot_cs)
- .word 0
-SYM_DATA_END(efi32_boot_cs)
-
-SYM_DATA_START(efi32_boot_ds)
- .word 0
-SYM_DATA_END(efi32_boot_ds)
+SYM_DATA_LOCAL(efi32_boot_cs, .word 0)
+SYM_DATA_LOCAL(efi32_boot_ds, .word 0)
+SYM_DATA_LOCAL(efi32_boot_args, .long 0, 0, 0)
+SYM_DATA(efi_is64, .byte 1)
diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index 9ae6ddccd3ef..be95d5685717 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -289,35 +289,6 @@ SYM_FUNC_START(efi32_stub_entry)
popl %esi
jmp efi32_entry
SYM_FUNC_END(efi32_stub_entry)
-
- .text
-SYM_FUNC_START_LOCAL(efi32_entry)
- call 1f
-1: pop %ebx
-
- /* Save firmware GDTR and code/data selectors */
- sgdtl (efi32_boot_gdt - 1b)(%ebx)
- movw %cs, (efi32_boot_cs - 1b)(%ebx)
- movw %ds, (efi32_boot_ds - 1b)(%ebx)
-
- /* Store firmware IDT descriptor */
- sidtl (efi32_boot_idt - 1b)(%ebx)
-
- /* Store boot arguments */
- leal (efi32_boot_args - 1b)(%ebx), %ebx
- movl %ecx, 0(%ebx)
- movl %edx, 4(%ebx)
- movl %esi, 8(%ebx)
- movb $0x0, 12(%ebx) // efi_is64
-
- /* Disable paging */
- movl %cr0, %eax
- btrl $X86_CR0_PG_BIT, %eax
- movl %eax, %cr0
-
- jmp startup_32
-SYM_FUNC_END(efi32_entry)
- __HEAD
#endif

.code64
@@ -750,9 +721,6 @@ SYM_DATA_END_LABEL(boot32_idt, SYM_L_GLOBAL, boot32_idt_end)
SYM_DATA(image_offset, .long 0)
#endif
#ifdef CONFIG_EFI_MIXED
-SYM_DATA(efi32_boot_args, .long 0, 0, 0)
-SYM_DATA(efi_is64, .byte 1)
-
#define ST32_boottime 60 // offsetof(efi_system_table_32_t, boottime)
#define BS32_handle_protocol 88 // offsetof(efi_boot_services_32_t, handle_protocol)
#define LI32_image_base 32 // offsetof(efi_loaded_image_32_t, image_base)
--
2.35.1

2022-09-21 15:26:40

by Ard Biesheuvel

Subject: [PATCH v2 06/16] x86/compressed: efi-mixed: move efi32_pe_entry() out of head_64.S

Move the implementation of efi32_pe_entry() into efi_mixed.S, which is a
more suitable location that only gets built if EFI mixed mode is
actually enabled.

Signed-off-by: Ard Biesheuvel <[email protected]>
---
arch/x86/boot/compressed/efi_mixed.S | 81 ++++++++++++++++++
arch/x86/boot/compressed/head_64.S | 86 +-------------------
2 files changed, 82 insertions(+), 85 deletions(-)

diff --git a/arch/x86/boot/compressed/efi_mixed.S b/arch/x86/boot/compressed/efi_mixed.S
index 5007a44cd966..838514f7685a 100644
--- a/arch/x86/boot/compressed/efi_mixed.S
+++ b/arch/x86/boot/compressed/efi_mixed.S
@@ -244,6 +244,87 @@ SYM_FUNC_START(efi32_entry)
jmp startup_32
SYM_FUNC_END(efi32_entry)

+#define ST32_boottime 60 // offsetof(efi_system_table_32_t, boottime)
+#define BS32_handle_protocol 88 // offsetof(efi_boot_services_32_t, handle_protocol)
+#define LI32_image_base 32 // offsetof(efi_loaded_image_32_t, image_base)
+
+/*
+ * efi_status_t efi32_pe_entry(efi_handle_t image_handle,
+ * efi_system_table_32_t *sys_table)
+ */
+SYM_FUNC_START(efi32_pe_entry)
+ pushl %ebp
+ movl %esp, %ebp
+ pushl %eax // dummy push to allocate loaded_image
+
+ pushl %ebx // save callee-save registers
+ pushl %edi
+
+ call verify_cpu // check for long mode support
+ testl %eax, %eax
+ movl $0x80000003, %eax // EFI_UNSUPPORTED
+ jnz 2f
+
+ call 1f
+1: pop %ebx
+
+ /* Get the loaded image protocol pointer from the image handle */
+ leal -4(%ebp), %eax
+ pushl %eax // &loaded_image
+ leal (loaded_image_proto - 1b)(%ebx), %eax
+ pushl %eax // pass the GUID address
+ pushl 8(%ebp) // pass the image handle
+
+ /*
+ * Note the alignment of the stack frame.
+ * sys_table
+ * handle <-- 16-byte aligned on entry by ABI
+ * return address
+ * frame pointer
+ * loaded_image <-- local variable
+ * saved %ebx <-- 16-byte aligned here
+ * saved %edi
+ * &loaded_image
+ * &loaded_image_proto
+ * handle <-- 16-byte aligned for call to handle_protocol
+ */
+
+ movl 12(%ebp), %eax // sys_table
+ movl ST32_boottime(%eax), %eax // sys_table->boottime
+ call *BS32_handle_protocol(%eax) // sys_table->boottime->handle_protocol
+ addl $12, %esp // restore argument space
+ testl %eax, %eax
+ jnz 2f
+
+ movl 8(%ebp), %ecx // image_handle
+ movl 12(%ebp), %edx // sys_table
+ movl -4(%ebp), %esi // loaded_image
+ movl LI32_image_base(%esi), %esi // loaded_image->image_base
+ leal (startup_32 - 1b)(%ebx), %ebp // runtime address of startup_32
+ /*
+ * We need to set the image_offset variable here since startup_32() will
+ * use it before we get to the 64-bit efi_pe_entry() in C code.
+ */
+ subl %esi, %ebp // calculate image_offset
+ movl %ebp, (image_offset - 1b)(%ebx) // save image_offset
+ xorl %esi, %esi
+ jmp efi32_entry
+
+2: popl %edi // restore callee-save registers
+ popl %ebx
+ leave
+ RET
+SYM_FUNC_END(efi32_pe_entry)
+
+ .section ".rodata"
+ /* EFI loaded image protocol GUID */
+ .balign 4
+SYM_DATA_START_LOCAL(loaded_image_proto)
+ .long 0x5b1b31a1
+ .word 0x9562, 0x11d2
+ .byte 0x8e, 0x3f, 0x00, 0xa0, 0xc9, 0x69, 0x72, 0x3b
+SYM_DATA_END(loaded_image_proto)
+
.data
.balign 8
SYM_DATA_START_LOCAL(efi32_boot_gdt)
diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index be95d5685717..8da2396a35a8 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -673,6 +673,7 @@ SYM_FUNC_START_LOCAL_NOALIGN(.Lno_longmode)
jmp 1b
SYM_FUNC_END(.Lno_longmode)

+ .globl verify_cpu
#include "../../kernel/verify_cpu.S"

.data
@@ -720,91 +721,6 @@ SYM_DATA_END_LABEL(boot32_idt, SYM_L_GLOBAL, boot32_idt_end)
#ifdef CONFIG_EFI_STUB
SYM_DATA(image_offset, .long 0)
#endif
-#ifdef CONFIG_EFI_MIXED
-#define ST32_boottime 60 // offsetof(efi_system_table_32_t, boottime)
-#define BS32_handle_protocol 88 // offsetof(efi_boot_services_32_t, handle_protocol)
-#define LI32_image_base 32 // offsetof(efi_loaded_image_32_t, image_base)
-
- .text
- .code32
-SYM_FUNC_START(efi32_pe_entry)
-/*
- * efi_status_t efi32_pe_entry(efi_handle_t image_handle,
- * efi_system_table_32_t *sys_table)
- */
-
- pushl %ebp
- movl %esp, %ebp
- pushl %eax // dummy push to allocate loaded_image
-
- pushl %ebx // save callee-save registers
- pushl %edi
-
- call verify_cpu // check for long mode support
- testl %eax, %eax
- movl $0x80000003, %eax // EFI_UNSUPPORTED
- jnz 2f
-
- call 1f
-1: pop %ebx
-
- /* Get the loaded image protocol pointer from the image handle */
- leal -4(%ebp), %eax
- pushl %eax // &loaded_image
- leal (loaded_image_proto - 1b)(%ebx), %eax
- pushl %eax // pass the GUID address
- pushl 8(%ebp) // pass the image handle
-
- /*
- * Note the alignment of the stack frame.
- * sys_table
- * handle <-- 16-byte aligned on entry by ABI
- * return address
- * frame pointer
- * loaded_image <-- local variable
- * saved %ebx <-- 16-byte aligned here
- * saved %edi
- * &loaded_image
- * &loaded_image_proto
- * handle <-- 16-byte aligned for call to handle_protocol
- */
-
- movl 12(%ebp), %eax // sys_table
- movl ST32_boottime(%eax), %eax // sys_table->boottime
- call *BS32_handle_protocol(%eax) // sys_table->boottime->handle_protocol
- addl $12, %esp // restore argument space
- testl %eax, %eax
- jnz 2f
-
- movl 8(%ebp), %ecx // image_handle
- movl 12(%ebp), %edx // sys_table
- movl -4(%ebp), %esi // loaded_image
- movl LI32_image_base(%esi), %esi // loaded_image->image_base
- leal (startup_32 - 1b)(%ebx), %ebp // runtime address of startup_32
- /*
- * We need to set the image_offset variable here since startup_32() will
- * use it before we get to the 64-bit efi_pe_entry() in C code.
- */
- subl %esi, %ebp // calculate image_offset
- movl %ebp, (image_offset - 1b)(%ebx) // save image_offset
- xorl %esi, %esi
- jmp efi32_entry
-
-2: popl %edi // restore callee-save registers
- popl %ebx
- leave
- RET
-SYM_FUNC_END(efi32_pe_entry)
-
- .section ".rodata"
- /* EFI loaded image protocol GUID */
- .balign 4
-SYM_DATA_START_LOCAL(loaded_image_proto)
- .long 0x5b1b31a1
- .word 0x9562, 0x11d2
- .byte 0x8e, 0x3f, 0x00, 0xa0, 0xc9, 0x69, 0x72, 0x3b
-SYM_DATA_END(loaded_image_proto)
-#endif

#ifdef CONFIG_AMD_MEM_ENCRYPT
__HEAD
--
2.35.1

2022-10-06 10:59:40

by Ard Biesheuvel

Subject: Re: [PATCH v2 02/16] x86/compressed: efi-mixed: move 32-bit entrypoint code into .text section

On Thu, 6 Oct 2022 at 12:42, Borislav Petkov <[email protected]> wrote:
>
> On Wed, Sep 21, 2022 at 04:54:08PM +0200, Ard Biesheuvel wrote:
> > Move the code that stores the arguments passed to the EFI entrypoint
> > into the .text section, so that it can be moved into a separate
> > compilation unit in a subsequent patch.
> >
> > Signed-off-by: Ard Biesheuvel <[email protected]>
> > ---
> > arch/x86/boot/compressed/head_64.S | 34 ++++++++++++--------
> > 1 file changed, 20 insertions(+), 14 deletions(-)
> >
> > diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
> > index d33f060900d2..1ba2fc2357e6 100644
> > --- a/arch/x86/boot/compressed/head_64.S
> > +++ b/arch/x86/boot/compressed/head_64.S
> > @@ -303,24 +303,28 @@ SYM_FUNC_START(efi32_stub_entry)
> > popl %ecx
> > popl %edx
> > popl %esi
> > + jmp efi32_entry
> > +SYM_FUNC_END(efi32_stub_entry)
> >
> > + .text
> > +SYM_FUNC_START_LOCAL(efi32_entry)
> > call 1f
> > -1: pop %ebp
> > - subl $ rva(1b), %ebp
> > -
> > - movl %esi, rva(efi32_boot_args+8)(%ebp)
> > -SYM_INNER_LABEL(efi32_pe_stub_entry, SYM_L_LOCAL)
> > - movl %ecx, rva(efi32_boot_args)(%ebp)
> > - movl %edx, rva(efi32_boot_args+4)(%ebp)
> > - movb $0, rva(efi_is64)(%ebp)
> > +1: pop %ebx
>
> I'm guessing according to the EFI mixed mode calling convention, %ebx is
> not a live register which gets overwritten here...?
>
> Looking at efi32_pe_entry() from where this is called, %ebx looks live.
>
> What am I missing?
>

efi32_pe_entry() preserves and restores the caller's value of %ebx,
because from there, we might actually return control to the firmware.
The value it keeps in %ebx itself is not live when it jumps to
efi32_entry - it stores its value into image_offset, which is reloaded
from memory at a later point.

efi32_stub_entry() is the 'EFI handover protocol' entry point, which
cannot return to the firmware (and we discard the return address
already) so %ebx can be clobbered.
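
To illustrate, here is the tail of efi32_pe_entry() as it reads with
this patch applied, with liveness annotations added (the annotations
are mine, not part of the patch):

        subl    %esi, %ebx                    // compute image_offset in %ebx
        movl    %ebx, rva(image_offset)(%ebp) // stored to memory; the value
        xorl    %esi, %esi                    // in %ebx is dead from here on
        jmp     efi32_entry

2:      popl    %edi                          // error path: we may return to
        popl    %ebx                          // the firmware, so the caller's
        leave                                 // %ebx is restored
        RET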

2022-10-06 11:04:32

by Borislav Petkov

Subject: Re: [PATCH v2 03/16] x86/compressed: efi-mixed: move bootargs parsing out of 32-bit startup code

On Wed, Sep 21, 2022 at 04:54:09PM +0200, Ard Biesheuvel wrote:
> Move the logic that chooses between the different EFI entrypoints out of
> the 32-bit boot path, and into a 64-bit helper that can perform the same
> task much more cleanly. While at it, document the mixed mode boot flow
> in a code comment.
>
> Signed-off-by: Ard Biesheuvel <[email protected]>
> ---
> arch/x86/boot/compressed/efi_mixed.S | 43 ++++++++++++++++++++
> arch/x86/boot/compressed/head_64.S | 24 ++---------
> 2 files changed, 47 insertions(+), 20 deletions(-)
>
> diff --git a/arch/x86/boot/compressed/efi_mixed.S b/arch/x86/boot/compressed/efi_mixed.S
> index 67e7edcdfea8..77e77c3ea393 100644
> --- a/arch/x86/boot/compressed/efi_mixed.S
> +++ b/arch/x86/boot/compressed/efi_mixed.S
> @@ -22,6 +22,49 @@
>
> .code64
> .text
> +/*
> + * When booting in 64-bit mode on 32-bit EFI firmware, startup_64_mixedmode()
> + * is the first thing that runs after switching to long mode. Depending on
> + * whether the EFI handover protocol or the compat entry point was used to
> + * enter the kernel, it will either branch to the 64-bit EFI handover
> + * entrypoint at offset 0x390 in the image, or to the 64-bit EFI PE/COFF
> + * entrypoint efi_pe_entry(). In the former case, the bootloader must provide a
> + * struct bootparams pointer as the third argument, so the presence of such a
> + * pointer is used to disambiguate.
> + *
> + * +--------------+
> + * +------------------+ +------------+ +------>| efi_pe_entry |
> + * | efi32_pe_entry |---->| | | +-----------+--+
> + * +------------------+ | | +------+---------------+ |
> + * | startup_32 |---->| startup_64_mixedmode | |
> + * +------------------+ | | +------+---------------+ V
> + * | efi32_stub_entry |---->| | | +------------------+
> + * +------------------+ +------------+ +---->| efi64_stub_entry |
> + * +-------------+----+
> + * +------------+ +----------+ |
> + * | startup_64 |<----| efi_main |<--------------+
> + * +------------+ +----------+
> + */

That is much appreciated.

Questions:

- is this whole handover ABI documented somewhere?

- efi32_pe_entry() is the 32-bit PE/COFF entry point? I.e., that is
called by a 32-bit EFI fw when the kernel is a PE/COFF executable?

But then Documentation/admin-guide/efi-stub.rst talks about the EFI stub
and exactly that. Hmm, so what is efi32_pe_entry() then?

> +SYM_FUNC_START(startup_64_mixedmode)

... mixed_mode

I guess.

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2022-10-06 11:20:44

by Borislav Petkov

Subject: Re: [PATCH v2 02/16] x86/compressed: efi-mixed: move 32-bit entrypoint code into .text section

On Wed, Sep 21, 2022 at 04:54:08PM +0200, Ard Biesheuvel wrote:
> Move the code that stores the arguments passed to the EFI entrypoint
> into the .text section, so that it can be moved into a separate
> compilation unit in a subsequent patch.
>
> Signed-off-by: Ard Biesheuvel <[email protected]>
> ---
> arch/x86/boot/compressed/head_64.S | 34 ++++++++++++--------
> 1 file changed, 20 insertions(+), 14 deletions(-)
>
> diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
> index d33f060900d2..1ba2fc2357e6 100644
> --- a/arch/x86/boot/compressed/head_64.S
> +++ b/arch/x86/boot/compressed/head_64.S
> @@ -303,24 +303,28 @@ SYM_FUNC_START(efi32_stub_entry)
> popl %ecx
> popl %edx
> popl %esi
> + jmp efi32_entry
> +SYM_FUNC_END(efi32_stub_entry)
>
> + .text
> +SYM_FUNC_START_LOCAL(efi32_entry)
> call 1f
> -1: pop %ebp
> - subl $ rva(1b), %ebp
> -
> - movl %esi, rva(efi32_boot_args+8)(%ebp)
> -SYM_INNER_LABEL(efi32_pe_stub_entry, SYM_L_LOCAL)
> - movl %ecx, rva(efi32_boot_args)(%ebp)
> - movl %edx, rva(efi32_boot_args+4)(%ebp)
> - movb $0, rva(efi_is64)(%ebp)
> +1: pop %ebx

I'm guessing according to the EFI mixed mode calling convention, %ebx is
not a live register which gets overwritten here...?

Looking at efi32_pe_entry() from where this is called, %ebx looks live.

What am I missing?

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2022-10-06 11:44:35

by Ard Biesheuvel

Subject: Re: [PATCH v2 02/16] x86/compressed: efi-mixed: move 32-bit entrypoint code into .text section

On Thu, 6 Oct 2022 at 13:13, Borislav Petkov <[email protected]> wrote:
>
> On Thu, Oct 06, 2022 at 12:56:09PM +0200, Ard Biesheuvel wrote:
> > efi32_pe_entry() preserves and restores the caller's value of %ebx,
> > because from there, we might actually return control to the firmware.
> > The value it keeps in %ebx itself is not live when it jumps to
> > efi32_entry - it stores its value into image_offset, which is reloaded
> > from memory at a later point.
>
> Hmm, might be prudent to have a comment there because it is using %ebx a
> couple of insns before the JMP:
>
> subl %esi, %ebx
> ^^^^
> movl %ebx, rva(image_offset)(%ebp) // save image_offset
>
> <--- I think you mean that after this, %ebx is not needed anymore?
>

Exactly.

> xorl %esi, %esi
> jmp efi32_entry
>
> 2: popl %edi // restore callee-save registers
> popl %ebx
>
> and this restores its original value ofc.
>
> > efi32_stub_entry() is the 'EFI handover protocol' entry point, which
> > cannot return to the firmware (and we discard the return address
> > already) so %ebx can be clobbered.
>
> That info would be good to have in a comment above it.
>

Fair enough.
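
Something along these lines, perhaps (only a sketch, not part of the
posted series):

        /*
         * efi32_stub_entry() is the EFI handover protocol entry point. The
         * firmware return address was already discarded, and we never
         * return to the firmware from here, so %ebx and the other
         * callee-saved registers may be clobbered freely.
         */
        SYM_FUNC_START(efi32_stub_entry)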

2022-10-06 11:44:36

by Borislav Petkov

Subject: Re: [PATCH v2 02/16] x86/compressed: efi-mixed: move 32-bit entrypoint code into .text section

On Thu, Oct 06, 2022 at 12:56:09PM +0200, Ard Biesheuvel wrote:
> efi32_pe_entry() preserves and restores the caller's value of %ebx,
> because from there, we might actually return control to the firmware.
> The value it keeps in %ebx itself is not live when it jumps to
> efi32_entry - it stores its value into image_offset, which is reloaded
> from memory at a later point.

Hmm, might be prudent to have a comment there because it is using %ebx a
couple of insns before the JMP:

subl %esi, %ebx
^^^^
movl %ebx, rva(image_offset)(%ebp) // save image_offset

<--- I think you mean that after this, %ebx is not needed anymore?

xorl %esi, %esi
jmp efi32_entry

2: popl %edi // restore callee-save registers
popl %ebx

and this restores its original value ofc.

> efi32_stub_entry() is the 'EFI handover protocol' entry point, which
> cannot return to the firmware (and we discard the return address
> already) so %ebx can be clobbered.

That info would be good to have in a comment above it.

Thx.

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2022-10-06 11:46:16

by Ard Biesheuvel

Subject: Re: [PATCH v2 03/16] x86/compressed: efi-mixed: move bootargs parsing out of 32-bit startup code

On Thu, 6 Oct 2022 at 13:03, Borislav Petkov <[email protected]> wrote:
>
> On Wed, Sep 21, 2022 at 04:54:09PM +0200, Ard Biesheuvel wrote:
> > Move the logic that chooses between the different EFI entrypoints out of
> > the 32-bit boot path, and into a 64-bit helper that can perform the same
> > task much more cleanly. While at it, document the mixed mode boot flow
> > in a code comment.
> >
> > Signed-off-by: Ard Biesheuvel <[email protected]>
> > ---
> > arch/x86/boot/compressed/efi_mixed.S | 43 ++++++++++++++++++++
> > arch/x86/boot/compressed/head_64.S | 24 ++---------
> > 2 files changed, 47 insertions(+), 20 deletions(-)
> >
> > diff --git a/arch/x86/boot/compressed/efi_mixed.S b/arch/x86/boot/compressed/efi_mixed.S
> > index 67e7edcdfea8..77e77c3ea393 100644
> > --- a/arch/x86/boot/compressed/efi_mixed.S
> > +++ b/arch/x86/boot/compressed/efi_mixed.S
> > @@ -22,6 +22,49 @@
> >
> > .code64
> > .text
> > +/*
> > + * When booting in 64-bit mode on 32-bit EFI firmware, startup_64_mixedmode()
> > + * is the first thing that runs after switching to long mode. Depending on
> > + * whether the EFI handover protocol or the compat entry point was used to
> > + * enter the kernel, it will either branch to the 64-bit EFI handover
> > + * entrypoint at offset 0x390 in the image, or to the 64-bit EFI PE/COFF
> > + * entrypoint efi_pe_entry(). In the former case, the bootloader must provide a
> > + * struct bootparams pointer as the third argument, so the presence of such a
> > + * pointer is used to disambiguate.
> > + *
> > + *                                                              +--------------+
> > + *  +------------------+     +------------+            +------>| efi_pe_entry |
> > + *  | efi32_pe_entry   |---->|            |            |       +-----------+--+
> > + *  +------------------+     |            |     +------+---------------+   |
> > + *                           | startup_32 |---->| startup_64_mixedmode |   |
> > + *  +------------------+     |            |     +------+---------------+   V
> > + *  | efi32_stub_entry |---->|            |            |       +------------------+
> > + *  +------------------+     +------------+            +------>| efi64_stub_entry |
> > + *                                                             +-------------+----+
> > + *                           +------------+     +----------+                 |
> > + *                           | startup_64 |<----| efi_main |<----------------+
> > + *                           +------------+     +----------+
> > + */
>
> That is much appreciated.
>
> Questions:
>
> - is this whole handover ABI documented somewhere?
>

Documentation/x86/boot.rst has a section on this (at the end), but we
should really stop using it. It is only implemented by out-of-tree
GRUB at the moment (last time I checked) and leaking all those struct
bootparams specific details into every bootloader is not great,
especially the ones that intend to be generic and boot any EFI OS on
any EFI arch.


> - efi32_pe_entry() is the 32-bit PE/COFF entry point? I.e., that is
> called by a 32-bit EFI fw when the kernel is a PE/COFF executable?
>

Yes. But I should note that this is actually something that goes
outside of the EFI spec as well: 32-bit firmware can /load/ 64-bit
PE/COFF binaries but not *start* them.

Commit 97aa276579b28b86f4a3e235b50762c0191c2ac3 has some more
background. This is currently implemented by 32-bit OVMF and
systemd-boot.

> But then Documentation/admin-guide/efi-stub.rst talks about the EFI stub
> and exactly that. Hmm, so what is efi32_pe_entry() then?
>

That is the same thing. The EFI stub is what enables the kernel (or
decompressor) to masquerade as a PE/COFF executable.

In short, every EFI stub kernel on every architecture has a native
PE/COFF entry point that calls the EFI stub, and the EFI stub does the
arch-specific bootloader work and boots it.

In addition, the x86_64 EFI stub kernel has an extra, non-native
PE/COFF entry point, which is exposed in a way that is not covered by
the EFI spec, but which allows Linux-specific loaders such as
systemd-boot to boot such kernels on 32-bit firmware without having to
do the whole struct bootparams dance in the bootloader.
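
Concretely, that disambiguation boils down to just a few instructions.
Below is a minimal sketch, assuming the efi32_boot_args layout from the
quoted patch (image handle, system table and bootparams pointer at
offsets 0, 4 and 8); the label name is made up and the real code lives
in efi_mixed.S:

	.text
	.code64
startup_64_mixedmode_sketch:
	movl	efi32_boot_args(%rip), %edi	// image handle
	movl	efi32_boot_args+4(%rip), %esi	// EFI system table
	movl	efi32_boot_args+8(%rip), %edx	// struct bootparams pointer
	testl	%edx, %edx			// was one provided?
	jnz	efi64_stub_entry		// yes: EFI handover protocol
	subq	$40, %rsp			// shadow space + alignment for
	movq	%rdi, %rcx			// the MS calling convention
	movq	%rsi, %rdx			// used by efi_pe_entry()
	jmp	efi_pe_entry			// no bootparams: PE/COFF entry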

2022-10-06 12:35:36

by Ard Biesheuvel

[permalink] [raw]
Subject: Re: [PATCH v2 02/16] x86/compressed: efi-mixed: move 32-bit entrypoint code into .text section

On Thu, 6 Oct 2022 at 13:19, Ard Biesheuvel <[email protected]> wrote:
>
> On Thu, 6 Oct 2022 at 13:13, Borislav Petkov <[email protected]> wrote:
> >
> > On Thu, Oct 06, 2022 at 12:56:09PM +0200, Ard Biesheuvel wrote:
> > > efi32_pe_entry() preserves and restores the caller's value of %ebx,
> > > because from there, we might actually return control to the firmware.
> > > The value it keeps in %ebx itself is not live when it jumps to
> > > efi32_entry - it stores its value into image_offset, which is reloaded
> > > from memory at a later point.
> >
> > Hmm, might be prudent to have a comment there because it is using %ebx a
> > couple of insns before the JMP:
> >
> > subl %esi, %ebx
> > ^^^^
> > movl %ebx, rva(image_offset)(%ebp) // save image_offset
> >
> > <--- I think you mean that after this, %ebx is not needed anymore?
> >
>
> Exactly.
>
> > xorl %esi, %esi
> > jmp efi32_entry
> >
> > 2: popl %edi // restore callee-save registers
> > popl %ebx
> >
> > and this restores its original value ofc.
> >
> > > efi32_stub_entry() is the 'EFI handover protocol' entry point, which
> > > cannot return to the firmware (and we discard the return address
> > > already) so %ebx can be clobbered.
> >
> > That info would be good to have in a comment above it.
> >
>
> Fair enough.

I'll add the below in the next revision

--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -307,6 +307,19 @@ SYM_FUNC_START(efi32_stub_entry)
SYM_FUNC_END(efi32_stub_entry)

.text
+/*
+ * This is the common EFI stub entry point for mixed mode.
+ *
+ * Arguments: %ecx image handle
+ * %edx EFI system table pointer
+ * %esi struct bootparams pointer (or NULL when not using
+ * the EFI handover protocol)
+ *
+ * Since this is the point of no return for ordinary execution, no registers
+ * are considered live except for the function parameters. [Note that the EFI
+ * stub may still exit and return to the firmware using the Exit() EFI boot
+ * service.]
+ */
SYM_FUNC_START_LOCAL(efi32_entry)
call 1f
1: pop %ebx
@@ -837,7 +850,8 @@ SYM_FUNC_START(efi32_pe_entry)
subl %esi, %ebx
movl %ebx, rva(image_offset)(%ebp) // save image_offset
xorl %esi, %esi
- jmp efi32_entry
+ jmp efi32_entry // pass %ecx, %edx, %esi
+ // no other registers remain live

2: popl %edi // restore callee-save registers
popl %ebx
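
Taken together, the %ebx discipline under discussion looks roughly like
the sketch below. Bodies are elided and the label is made up, so treat
this as an illustration of the convention rather than the literal
head_64.S code:

	.text
	.code32
efi32_pe_entry_sketch:
	pushl	%ebp			// preserve callee-save registers up
	pushl	%ebx			// front: the error path returns to
					// the firmware
	// ... probe the CPU, jump to 2f with an EFI status on failure ...
	// ... %ebx = runtime address of startup_32, %esi = image base ...
	subl	%esi, %ebx		// calculate image_offset
	movl	%ebx, image_offset	// %ebx holds no live value after
					// this store (real code uses an
					// rva()-relative address)
	xorl	%esi, %esi		// no bootparams: compat PE path
	jmp	efi32_entry		// only %ecx, %edx, %esi remain live
2:	popl	%ebx			// restore the caller's %ebx/%ebp and
	popl	%ebp			// hand the status in %eax back to
	ret				// the firmware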

2022-10-07 09:07:53

by Borislav Petkov

[permalink] [raw]
Subject: Re: [PATCH v2 02/16] x86/compressed: efi-mixed: move 32-bit entrypoint code into .text section

On Thu, Oct 06, 2022 at 02:27:54PM +0200, Ard Biesheuvel wrote:
> I'll add the below in the next revision

LGTM.

Thx.

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2022-10-07 09:38:22

by Borislav Petkov

[permalink] [raw]
Subject: Re: [PATCH v2 03/16] x86/compressed: efi-mixed: move bootargs parsing out of 32-bit startup code

On Thu, Oct 06, 2022 at 01:29:55PM +0200, Ard Biesheuvel wrote:
> Documentation/x86/boot.rst has a section on this (at the end),

Ah, and I really like that NOTE at the end.

> but we should really stop using it. It is only implemented by
> out-of-tree GRUB at the moment (last time I checked) and leaking all
> those struct bootparams specific details into every bootloader is not
> great, especially the ones that intend to be generic and boot any EFI
> OS on any EFI arch.

I'm all for making early asm code simpler so yes, can we start removing
it?

Dunno, maybe ifdef around it with a Kconfig option which is default off
and see who complains...

> That is the same thing. The EFI stub is what enables the kernel (or
> decompressor) to masquerade as a PE/COFF executable.
>
> In short, every EFI stub kernel on every architecture has a native
> PE/COFF entry point that calls the EFI stub, and the EFI stub does the
> arch-specific bootloader work and boots it.

Right.

> In addition, the x86_64 EFI stub kernel has an extra, non-native
> PE/COFF entry point, which is exposed in a way that is not covered by
> the EFI spec, but which allows Linux-specific loaders such as
> systemd-boot to boot such kernels on 32-bit firmware without having to
> do the whole struct bootparams dance in the bootloader.

Ok, thanks for explaining.

I like the simplification and obviating the need for the bootloader to
do any dancing before loading the kernel.

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2022-11-17 16:14:48

by Borislav Petkov

[permalink] [raw]
Subject: Re: [PATCH v2 04/16] x86/compressed: efi-mixed: move efi32_pe_entry into .text section

On Wed, Sep 21, 2022 at 04:54:10PM +0200, Ard Biesheuvel wrote:
> /*
> * We need to set the image_offset variable here since startup_32() will
> * use it before we get to the 64-bit efi_pe_entry() in C code.
> */
> - subl %esi, %ebx
> - movl %ebx, rva(image_offset)(%ebp) // save image_offset
> + subl %esi, %ebp // calculate image_offset
> + movl %ebp, (image_offset - 1b)(%ebx) // save image_offset

All looks ok, just one question: what was the reason for that
image_offset thing?

I see:

1887c9b653f9 ("efi/x86: Decompress at start of PE image load address")

It says that if the kernel is loaded as a PE executable using
LoadImage() we don't know where that image will be loaded each time so
we're saving that offset for later when relocating (or not) the kernel?

All part of those improvements:

https://lore.kernel.org/all/[email protected]/

Am I close?

I.e., that image_offset is purely a kernel thing and not something EFI
LoadImage's inner workings mandate...? It doesn't seem so from where I'm
standing but lemme doublecheck still.

Thx.

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2022-11-17 16:55:57

by Ard Biesheuvel

[permalink] [raw]
Subject: Re: [PATCH v2 04/16] x86/compressed: efi-mixed: move efi32_pe_entry into .text section

On Thu, 17 Nov 2022 at 16:57, Borislav Petkov <[email protected]> wrote:
>
> On Wed, Sep 21, 2022 at 04:54:10PM +0200, Ard Biesheuvel wrote:
> > /*
> > * We need to set the image_offset variable here since startup_32() will
> > * use it before we get to the 64-bit efi_pe_entry() in C code.
> > */
> > - subl %esi, %ebx
> > - movl %ebx, rva(image_offset)(%ebp) // save image_offset
> > + subl %esi, %ebp // calculate image_offset
> > + movl %ebp, (image_offset - 1b)(%ebx) // save image_offset
>
> All looks ok, just one question: what was the reason for that
> image_offset thing?
>
> I see:
>
> 1887c9b653f9 ("efi/x86: Decompress at start of PE image load address")
>
> It says that if the kernel is loaded as a PE executable using
> LoadImage() we don't know where that image will be loaded each time so
> we're saving that offset for later when relocating (or not) the kernel?
>
> All part of those improvements:
>
> https://lore.kernel.org/all/[email protected]/
>
> Am I close?
>

Yes.

The x86 boot protocol does not require that the setup data block comes
right before the image; the kernel just receives its address in %esi.

When doing PE boot, this is guaranteed, and so we can reuse the memory
before the image.

> I.e., that image_offset is purely a kernel thing and not something EFI
> LoadImage's inner workings mandate...? It doesn't seem so from where I'm
> standing but lemme doublecheck still.
>

No, this has nothing to do with EFI in particular, only with how the
x86 boot image is constructed and wrapped into a PE/COFF executable.
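
The arithmetic itself is tiny. A minimal sketch, assuming the register
roles from the quoted hunk (%ebp holds the runtime address of
startup_32, %esi the base at which the firmware loaded the PE image):

	.text
	.code32
	// image_offset = startup_32 - image_base; in the PE boot case the
	// setup data block occupies the memory in between, which is what
	// lets the stub reuse it.
	subl	%esi, %ebp		// calculate image_offset
	movl	%ebp, image_offset	// real code uses a label-relative
					// store, e.g.
					// (image_offset - 1b)(%ebx)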

2022-11-17 17:21:10

by Borislav Petkov

[permalink] [raw]
Subject: Re: [PATCH v2 04/16] x86/compressed: efi-mixed: move efi32_pe_entry into .text section

On Thu, Nov 17, 2022 at 05:06:37PM +0100, Ard Biesheuvel wrote:
> No, this has nothing to do with EFI in particular, only with how the
> x86 boot image is constructed and wrapped into a PE/COFF executable.

Ok, thanks.

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2022-11-18 17:13:16

by Borislav Petkov

[permalink] [raw]
Subject: Re: [PATCH v2 11/16] x86/compressed: move startup32_load_idt() into .text section

On Wed, Sep 21, 2022 at 04:54:17PM +0200, Ard Biesheuvel wrote:
> Convert startup32_load_idt() into an ordinary function and move it into
> the .text section. This involves turning the rva() immediates into ones
> derived from a local label, and preserving/restoring %ebp and %ebx
> as per the calling convention.
>
> Also move the #ifdef to the only existing call site. This makes it clear
> that the function call does nothing if support for memory encryption is
> not compiled in.

I'm not crazy about all that ifdeffery in there but this will need a
serious look.

Btw, you can drop one, diff ontop:

---
diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index bc5bd639217e..a862fd8ac0bd 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -726,9 +726,7 @@ SYM_DATA_START(boot32_idt)
.quad 0
.endr
SYM_DATA_END_LABEL(boot32_idt, SYM_L_GLOBAL, boot32_idt_end)
-#endif

-#ifdef CONFIG_AMD_MEM_ENCRYPT
.text
.code32
/*

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2022-11-18 19:24:52

by Borislav Petkov

[permalink] [raw]
Subject: Re: [PATCH v2 00/16] x86: head_64.S spring cleaning

On Wed, Sep 21, 2022 at 04:54:06PM +0200, Ard Biesheuvel wrote:
> arch/x86/boot/compressed/Makefile | 8 +-
> arch/x86/boot/compressed/efi_mixed.S | 337 ++++++++++++++++++++
> arch/x86/boot/compressed/efi_thunk_64.S | 195 -----------
> arch/x86/boot/compressed/head_32.S | 4 -
> arch/x86/boot/compressed/head_64.S | 299 +----------------
> arch/x86/boot/compressed/mem_encrypt.S | 152 ++++++++-
> drivers/firmware/efi/libstub/x86-stub.c | 2 +-
> 7 files changed, 496 insertions(+), 501 deletions(-)
> create mode 100644 arch/x86/boot/compressed/efi_mixed.S
> delete mode 100644 arch/x86/boot/compressed/efi_thunk_64.S

Ok, it all looks ok to me.

You could send me a refreshed version ontop of latest tip/master, after
having tested the EFI side and I'll test the memory encryption side.

If there's no fallout, I think we could queue this.

Thx.

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2022-11-19 01:14:39

by Ard Biesheuvel

[permalink] [raw]
Subject: Re: [PATCH v2 00/16] x86: head_64.S spring cleaning

On Fri, 18 Nov 2022 at 19:26, Borislav Petkov <[email protected]> wrote:
>
> On Wed, Sep 21, 2022 at 04:54:06PM +0200, Ard Biesheuvel wrote:
> > arch/x86/boot/compressed/Makefile | 8 +-
> > arch/x86/boot/compressed/efi_mixed.S | 337 ++++++++++++++++++++
> > arch/x86/boot/compressed/efi_thunk_64.S | 195 -----------
> > arch/x86/boot/compressed/head_32.S | 4 -
> > arch/x86/boot/compressed/head_64.S | 299 +----------------
> > arch/x86/boot/compressed/mem_encrypt.S | 152 ++++++++-
> > drivers/firmware/efi/libstub/x86-stub.c | 2 +-
> > 7 files changed, 496 insertions(+), 501 deletions(-)
> > create mode 100644 arch/x86/boot/compressed/efi_mixed.S
> > delete mode 100644 arch/x86/boot/compressed/efi_thunk_64.S
>
> Ok, it all looks ok to me.
>
> You could send me a refreshed version ontop of latest tip/master, after
> having tested the EFI side and I'll test the memory encryption side.
>
> If there's no fallout, I think we could queue this.
>

Sounds good.