2020-10-20 12:21:46

by Joerg Roedel

Subject: [PATCH v2 0/5] x86/sev-es: Mitigate some HV attack vectors

From: Joerg Roedel <[email protected]>

Hi,

here are some enhancements to the SEV(-ES) code in the Linux kernel to
let it protect itself against some newly discovered hypervisor attacks.
Three attacks are addressed here:

1) The hypervisor does not present the SEV-enabled bit via CPUID.

2) The hypervisor presents a wrong C-bit position via CPUID.

3) An encrypted RAM page is mapped as MMIO in the nested
page-table, causing #VC exceptions and possibly leaking the
data to the hypervisor or allowing data/code injection from
the hypervisor.

The attacks are described in more detail in this paper:

https://arxiv.org/abs/2010.07094
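
For background, the C-bit position itself is reported by the hypervisor
in CPUID leaf 0x8000001f, EBX bits 5:0, and is turned into a page-table
encryption mask. A minimal C sketch of that derivation (illustrative
only, the names are not the kernel's) shows why attack 2) works: a wrong
position yields a mask that sets a meaningless PTE bit, so pages the
guest believes are encrypted end up readable by the hypervisor.

	#include <stdint.h>

	/*
	 * Sketch: derive the page-table encryption mask from the C-bit
	 * position the hypervisor reports in CPUID 0x8000001f EBX[5:0].
	 */
	static uint64_t sev_encryption_mask(uint32_t ebx)
	{
		uint32_t cbit_pos = ebx & 0x3f;

		return 1ULL << cbit_pos;
	}

	/*
	 * A PTE only requests hardware encryption if the mask hits the
	 * real C-bit; with a lied-about position the page stays shared.
	 */
	static uint64_t encrypted_pte(uint64_t paddr, uint64_t flags,
				      uint64_t enc_mask)
	{
		return paddr | flags | enc_mask;
	}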

Please review.

Thanks,

Joerg

Changes to v1:

- Disable CR4.PGE during C-bit test

- Do not save/restore caller-saved registers in
set_sev_encryption_mask()

Joerg Roedel (5):
x86/boot/compressed/64: Introduce sev_status
x86/boot/compressed/64: Add CPUID sanity check to early #VC handler
x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path
x86/head/64: Check SEV encryption before switching to kernel
page-table
x86/sev-es: Do not support MMIO to/from encrypted memory

arch/x86/boot/compressed/ident_map_64.c | 1 +
arch/x86/boot/compressed/mem_encrypt.S | 14 +++-
arch/x86/boot/compressed/misc.h | 2 +
arch/x86/kernel/head_64.S | 14 +++-
arch/x86/kernel/sev-es-shared.c | 26 +++++++
arch/x86/kernel/sev-es.c | 20 ++++--
arch/x86/kernel/sev_verify_cbit.S | 91 +++++++++++++++++++++++++
arch/x86/mm/mem_encrypt.c | 1 +
8 files changed, 160 insertions(+), 9 deletions(-)
create mode 100644 arch/x86/kernel/sev_verify_cbit.S

--
2.28.0


2020-10-20 12:22:10

by Joerg Roedel

Subject: [PATCH v2 3/5] x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path

From: Joerg Roedel <[email protected]>

Check whether the hypervisor reported the correct C-bit when running as
an SEV guest. A wrong C-bit position could be used to leak sensitive
data from the guest to the hypervisor.

The check function is in arch/x86/kernel/sev_verify_cbit.S so that it
can be re-used in the running kernel image.
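
For reviewers, here is a rough C restatement of the algorithm
implemented below (a sketch only: read_cr3()/write_cr3(),
read_cr4()/write_cr4(), rdrand64() and halt_forever() stand in for the
privileged instructions, and the real code must stay in assembly
because switching %cr3 can unmap the stack):

	/* sev_check_data must itself live in encrypted memory */
	extern u64 sme_me_mask, sev_status, sev_check_data;

	unsigned long verify_cbit_sketch(unsigned long new_cr3)
	{
		u64 val, old_cr3, cr4;
		bool ok;

		/* Bail out when not running as an SEV guest */
		if (!sme_me_mask || !sev_status)
			return new_cr3;

		/* Global TLB entries would survive the %cr3 switch */
		cr4 = read_cr4();
		write_cr4(cr4 & ~X86_CR4_PGE);

		/* RDRAND is mandatory for SEV guests - retry on failure */
		while (!rdrand64(&val))
			;

		sev_check_data = val;		/* write via current mapping */

		old_cr3 = read_cr3();
		write_cr3(new_cr3);
		ok = (sev_check_data == val);	/* re-read via new mapping */
		write_cr3(old_cr3);
		write_cr4(cr4);

		if (!ok)
			halt_forever();		/* wrong C-bit - stop the boot */

		return new_cr3;
	}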

Signed-off-by: Joerg Roedel <[email protected]>
---
arch/x86/boot/compressed/ident_map_64.c | 1 +
arch/x86/boot/compressed/mem_encrypt.S | 4 ++
arch/x86/boot/compressed/misc.h | 2 +
arch/x86/kernel/sev_verify_cbit.S | 91 +++++++++++++++++++++++++
4 files changed, 98 insertions(+)
create mode 100644 arch/x86/kernel/sev_verify_cbit.S

diff --git a/arch/x86/boot/compressed/ident_map_64.c b/arch/x86/boot/compressed/ident_map_64.c
index 063a60edcf99..73abba3312a7 100644
--- a/arch/x86/boot/compressed/ident_map_64.c
+++ b/arch/x86/boot/compressed/ident_map_64.c
@@ -153,6 +153,7 @@ void initialize_identity_maps(void)
* into cr3.
*/
add_identity_map((unsigned long)_head, (unsigned long)_end);
+ sev_verify_cbit(top_level_pgt);
write_cr3(top_level_pgt);
}

diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
index 2192b3bd78d8..7409f2343d38 100644
--- a/arch/x86/boot/compressed/mem_encrypt.S
+++ b/arch/x86/boot/compressed/mem_encrypt.S
@@ -68,6 +68,9 @@ SYM_FUNC_START(get_sev_encryption_bit)
SYM_FUNC_END(get_sev_encryption_bit)

.code64
+
+#include "../../kernel/sev_verify_cbit.S"
+
SYM_FUNC_START(set_sev_encryption_mask)
#ifdef CONFIG_AMD_MEM_ENCRYPT
push %rbp
@@ -105,4 +108,5 @@ SYM_FUNC_END(set_sev_encryption_mask)
.balign 8
SYM_DATA(sme_me_mask, .quad 0)
SYM_DATA(sev_status, .quad 0)
+SYM_DATA(sev_check_data, .quad 0)
#endif
diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
index 6d31f1b4c4d1..53f4848ad392 100644
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -159,4 +159,6 @@ void boot_page_fault(void);
void boot_stage1_vc(void);
void boot_stage2_vc(void);

+void sev_verify_cbit(unsigned long cr3);
+
#endif /* BOOT_COMPRESSED_MISC_H */
diff --git a/arch/x86/kernel/sev_verify_cbit.S b/arch/x86/kernel/sev_verify_cbit.S
new file mode 100644
index 000000000000..3f7153607956
--- /dev/null
+++ b/arch/x86/kernel/sev_verify_cbit.S
@@ -0,0 +1,91 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * sev_verify_cbit.S - Code for verification of the C-bit position reported
+ * by the Hypervisor when running with SEV enabled.
+ *
+ * Copyright (c) 2020 Joerg Roedel ([email protected])
+ *
+ * Implements sev_verify_cbit() which is called before switching to a new
+ * long-mode page-table at boot.
+ *
+ * It verifies that the C-bit position is correct by writing a random value to
+ * an encrypted memory location while on the current page-table. Then it
+ * switches to the new page-table to verify the memory content is still the
+ * same. After that it switches back to the current page-table and when the
+ * check succeeded it returns. If the check failed the code invalidates the
+ * stack pointer and goes into a hlt loop. The stack-pointer is invalidated to
+ * make sure no interrupt or exception can get the CPU out of the hlt loop.
+ *
+ * New page-table pointer is expected in %rdi (first parameter)
+ *
+ */
+SYM_FUNC_START(sev_verify_cbit)
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+ /* First check if a C-bit was detected */
+ movq sme_me_mask(%rip), %r10
+ testq %r10, %r10
+ jz 3f
+
+ /* sme_me_mask != 0 could mean SME or SEV - Check also for SEV */
+ movq sev_status(%rip), %r10
+ testq %r10, %r10
+ jz 3f
+
+ /* Save CR4 in %r12 */
+ pushq %r12
+ movq %cr4, %r12
+
+ /* Disable Global Pages */
+ pushq %r12
+ andq $(~X86_CR4_PGE), %r12
+ movq %r12, %cr4
+ popq %r12
+
+ /*
+ * Verified that we are running under SEV - now get a random value using
+ * RDRAND. This instruction is mandatory when running as an SEV guest.
+ *
+ * Don't bail out of the loop if RDRAND returns errors. It is better to
+ * prevent forward progress than to work with a non-random value here.
+ */
+1: rdrand %r10
+ jnc 1b
+
+ /* Store value to memory and keep it in %r10 */
+ movq %r10, sev_check_data(%rip)
+
+ /* Backup current %cr3 value to restore it later */
+ movq %cr3, %r11
+
+ /* Switch to new %cr3 - This might unmap the stack */
+ movq %rdi, %cr3
+
+ /*
+ * Compare value in %r10 with memory location - If C-Bit is incorrect
+ * this would read the encrypted data and make the check fail.
+ */
+ cmpq %r10, sev_check_data(%rip)
+
+ /* Restore old %cr3 */
+ movq %r11, %cr3
+
+ /* Restore previous CR4 and %r12 */
+ movq %r12, %cr4
+ popq %r12
+
+ /* Check CMPQ result */
+ je 3f
+
+ /*
+ * The check failed - prevent any forward progress to avoid ROP
+ * attacks, invalidate the stack and go into a hlt loop.
+ */
+ xorq %rsp, %rsp
+ subq $0x1000, %rsp
+2: hlt
+ jmp 2b
+3:
+#endif
+ ret
+SYM_FUNC_END(sev_verify_cbit)
+
--
2.28.0

2020-10-20 14:14:52

by Arvind Sankar

Subject: Re: [PATCH v2 3/5] x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path

On Tue, Oct 20, 2020 at 02:18:54PM +0200, Joerg Roedel wrote:
> From: Joerg Roedel <[email protected]>
>
> Check whether the hypervisor reported the correct C-bit when running as
> an SEV guest. A wrong C-bit position could be used to leak sensitive
> data from the guest to the hypervisor.
>
[...]
> +SYM_FUNC_START(sev_verify_cbit)
> +#ifdef CONFIG_AMD_MEM_ENCRYPT
> + /* First check if a C-bit was detected */
> + movq sme_me_mask(%rip), %r10
> + testq %r10, %r10
> + jz 3f
> +
> + /* sme_me_mask != 0 could mean SME or SEV - Check also for SEV */
> + movq sev_status(%rip), %r10
> + testq %r10, %r10
> + jz 3f
> +
> + /* Save CR4 in %r12 */
> + pushq %r12
> + movq %cr4, %r12
[...]
> + /* Backup current %cr3 value to restore it later */
> + movq %cr3, %r11
> +
> + /* Switch to new %cr3 - This might unmap the stack */
> + movq %rdi, %cr3
[...]

Why use r10-r12 rather than the caller-saved registers? Even for the head
code, where you need to preserve the cr3 value, you can just return it in
rax?

2020-10-21 02:22:01

by Joerg Roedel

Subject: [PATCH v2 2/5] x86/boot/compressed/64: Add CPUID sanity check to early #VC handler

From: Joerg Roedel <[email protected]>

The early #VC handler, which doesn't have a GHCB, can only handle CPUID
exit codes. It is needed by the early boot code to handle #VC
exceptions raised in verify_cpu() and to get the position of the
C-bit.

But the CPUID information comes from the hypervisor, which is untrusted
and might return results that trick the guest into the no-SEV boot path
with no C-bit set in the page-tables. All data written to memory would
then be unencrypted and could leak sensitive data to the hypervisor.

Add sanity checks to the early #VC handler to make sure the hypervisor
cannot pretend that SEV is disabled.
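
Condensed into a single predicate, the added checks look roughly like
this (a sketch - the patch below open-codes them in do_vc_no_ghcb(), and
vc_cpuid_result_sane() is not a real kernel function):

	/*
	 * Sketch: reject CPUID results that would steer an SEV-ES guest
	 * into the no-SEV path. fn is the requested CPUID function,
	 * ax/cx are the hypervisor-provided result registers.
	 */
	static bool vc_cpuid_result_sane(u32 fn, u32 ax, u32 cx)
	{
		if (fn == 1 && !(cx & BIT(31)))
			return false;	/* Hypervisor bit must be set */
		if (fn == 0x80000000 && ax < 0x8000001f)
			return false;	/* leaf 0x8000001f must exist */
		if (fn == 0x8000001f && !(ax & BIT(1)))
			return false;	/* SEV feature bit must be set */

		return true;
	}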

Signed-off-by: Joerg Roedel <[email protected]>
---
arch/x86/kernel/sev-es-shared.c | 26 ++++++++++++++++++++++++++
1 file changed, 26 insertions(+)

diff --git a/arch/x86/kernel/sev-es-shared.c b/arch/x86/kernel/sev-es-shared.c
index 5f83ccaab877..48bb14563dcd 100644
--- a/arch/x86/kernel/sev-es-shared.c
+++ b/arch/x86/kernel/sev-es-shared.c
@@ -178,6 +178,32 @@ void __init do_vc_no_ghcb(struct pt_regs *regs, unsigned long exit_code)
goto fail;
regs->dx = val >> 32;

+ /*
+ * This is a #VC handler and #VC is only raised when SEV-ES is active,
+ * which means SEV must be active too. Do sanity checks on the CPUID
+ * results to make sure the hypervisor does not trick the kernel into
+ * the no-SEV path. This could map sensitive data unencrypted and make
+ * it accessible to the hypervisor.
+ *
+ * In particular, check for:
+ * - Hypervisor CPUID bit
+ * - Availability of CPUID leaf 0x8000001f
+ * - SEV CPUID bit
+ *
+ * The hypervisor might still report the wrong C-bit position, but this
+ * can't be checked here.
+ */
+
+ if (fn == 1 && !(regs->cx & BIT(31)))
+ /* Hypervisor Bit */
+ goto fail;
+ else if (fn == 0x80000000 && regs->ax < 0x8000001f)
+ /* SEV Leaf check */
+ goto fail;
+ else if (fn == 0x8000001f && !(regs->ax & BIT(1)))
+ /* SEV Bit */
+ goto fail;
+
/* Skip over the CPUID two-byte opcode */
regs->ip += 2;

--
2.28.0

2020-10-21 02:24:02

by Joerg Roedel

[permalink] [raw]
Subject: [PATCH v2 4/5] x86/head/64: Check SEV encryption before switching to kernel page-table

From: Joerg Roedel <[email protected]>

When SEV is enabled the kernel requests the C-bit position again from
the hypervisor to build its own page-table. Since the hypervisor is an
untrusted source, the C-bit position needs to be verified before the
kernel page-table is used.

Call the sev_verify_cbit() function before writing CR3.

Signed-off-by: Joerg Roedel <[email protected]>
---
arch/x86/kernel/head_64.S | 14 +++++++++++++-
arch/x86/mm/mem_encrypt.c | 1 +
2 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index 7eb2a1c87969..c6f4562359a5 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -161,7 +161,18 @@ SYM_INNER_LABEL(secondary_startup_64_no_verify, SYM_L_GLOBAL)

/* Setup early boot stage 4-/5-level pagetables. */
addq phys_base(%rip), %rax
- movq %rax, %cr3
+
+ /*
+ * For SEV guests: Verify that the C-bit is correct. A malicious
+ * hypervisor could lie about the C-bit position to perform a ROP
+ * attack on the guest by writing to the unencrypted stack and wait for
+ * the next RET instruction.
+ */
+ movq %rax, %rdi
+ call sev_verify_cbit
+
+ /* Switch to new page-table */
+ movq %rdi, %cr3

/* Ensure I am executing from virtual addresses */
movq $1f, %rax
@@ -279,6 +290,7 @@ SYM_INNER_LABEL(secondary_startup_64_no_verify, SYM_L_GLOBAL)
SYM_CODE_END(secondary_startup_64)

#include "verify_cpu.S"
+#include "sev_verify_cbit.S"

#ifdef CONFIG_HOTPLUG_CPU
/*
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index ebb7edc8bc0a..bd9b62af2e3d 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -39,6 +39,7 @@
*/
u64 sme_me_mask __section(.data) = 0;
u64 sev_status __section(.data) = 0;
+u64 sev_check_data __section(.data) = 0;
EXPORT_SYMBOL(sme_me_mask);
DEFINE_STATIC_KEY_FALSE(sev_enable_key);
EXPORT_SYMBOL_GPL(sev_enable_key);
--
2.28.0

2020-10-21 06:32:18

by Arvind Sankar

Subject: Re: [PATCH v2 3/5] x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path

On Tue, Oct 20, 2020 at 05:48:12PM +0200, Joerg Roedel wrote:
> On Tue, Oct 20, 2020 at 10:12:59AM -0400, Arvind Sankar wrote:
> > On Tue, Oct 20, 2020 at 02:18:54PM +0200, Joerg Roedel wrote:
> > Why use r10-r12 rather than the caller-saved registers? Even for the head
> > code, where you need to preserve the cr3 value, you can just return it in
> > rax?
>
> It can surely be optimized, but that would make the code less robust. This
> function is only called from assembly, so the standard x86-64 calling
> conventions might not be followed strictly. I think it's better to make
> as few assumptions as possible about the calling code to avoid
> regressions. Changes to the head code are not necessarily tested with
> SEV/SEV-ES guests by developers.
>
> Regards,
>
> Joerg

This is called from both assembly and C, but anyway, you're already
assuming r10 and r11 can be clobbered safely, and you just took out the
save/restores in set_sev_encryption_mask, which is actually called only
from assembly.

2020-10-21 15:47:48

by Joerg Roedel

Subject: Re: [PATCH v2 3/5] x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path

On Tue, Oct 20, 2020 at 10:12:59AM -0400, Arvind Sankar wrote:
> On Tue, Oct 20, 2020 at 02:18:54PM +0200, Joerg Roedel wrote:
> Why use r10-r12 rather than the caller-saved registers? Even for the head
> code, where you need to preserve the cr3 value, you can just return it in
> rax?

It can surely be optimized, but that would make the code less robust. This
function is only called from assembly, so the standard x86-64 calling
conventions might not be followed strictly. I think it's better to make
as few assumptions as possible about the calling code to avoid
regressions. Changes to the head code are not necessarily tested with
SEV/SEV-ES guests by developers.

Regards,

Joerg

2020-10-22 00:40:54

by Joerg Roedel

Subject: Re: [PATCH v2 3/5] x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path

On Tue, Oct 20, 2020 at 12:04:28PM -0400, Arvind Sankar wrote:
> This is called from both assembly and C, but anyway, you're already
> assuming r10 and r11 can be clobbered safely, and you just took out the
> save/restores in set_sev_encryption_mask, which is actually called only
> from assembly.

Alright, maybe I was a bit too cautious. I changed the CR4 handling to
use %r8 and %r9 instead, which are also caller-clobbered.

Regards,

Joerg