2021-02-10 10:34:25

by Joerg Roedel

Subject: [PATCH 6/7] x86/boot/compressed/64: Check SEV encryption in 32-bit boot-path

From: Joerg Roedel <[email protected]>

Check whether the hypervisor reported the correct C-bit when running as
an SEV guest. A wrong C-bit position could be used to leak sensitive
data from the guest to the hypervisor.
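
For reference, the C-bit position comes from CPUID 0x8000001f (EBX bits
5:0), which is what get_sev_encryption_bit() returns in this path. A
minimal user-space C sketch of the mask derivation (illustrative only,
not kernel code):

	#include <cpuid.h>
	#include <stdio.h>

	int main(void)
	{
		unsigned int eax, ebx, ecx, edx;

		/* CPUID 0x8000001f: AMD memory encryption capabilities.
		 * EBX[5:0] is the position of the C bit in a page-table
		 * entry. */
		if (!__get_cpuid(0x8000001f, &eax, &ebx, &ecx, &edx))
			return 1;	/* leaf not available */

		unsigned int cbit = ebx & 0x3f;

		if (cbit < 32)
			return 1;	/* SEV guarantees the bit is above 31 */

		printf("C-bit position: %u\n", cbit);
		/* The C bit is always above bit 31, so the 32-bit path only
		 * needs the high dword of the mask - hence the 'subl $32'
		 * and 'bts' below. */
		printf("mask high dword: 0x%08x\n", 1U << (cbit - 32));
		return 0;
	}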

Signed-off-by: Joerg Roedel <[email protected]>
---
arch/x86/boot/compressed/head_64.S | 80 ++++++++++++++++++++++++++
arch/x86/boot/compressed/mem_encrypt.S | 1 +
2 files changed, 81 insertions(+)

diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index eadaa0a082b8..047af1cba041 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -185,11 +185,18 @@ SYM_FUNC_START(startup_32)
*/
call get_sev_encryption_bit
xorl %edx, %edx
+#ifdef CONFIG_AMD_MEM_ENCRYPT
testl %eax, %eax
jz 1f
subl $32, %eax /* Encryption bit is always above bit 31 */
bts %eax, %edx /* Set encryption mask for page tables */
+ /*
+ * Store the sme_me_mask as an indicator that SEV is active. It will be
+ * set again in startup_64().
+ */
+ movl %edx, rva(sme_me_mask+4)(%ebp)
1:
+#endif

/* Initialize Page tables to 0 */
leal rva(pgtable)(%ebx), %edi
@@ -274,6 +281,9 @@ SYM_FUNC_START(startup_32)
movl %esi, %edx
1:
#endif
+ /* Check if the C-bit position is correct when SEV is active */
+ call sev_startup32_cbit_check
+
pushl $__KERNEL_CS
pushl %eax

@@ -870,6 +880,76 @@ SYM_FUNC_START(startup32_load_idt)
ret
SYM_FUNC_END(startup32_load_idt)
#endif
+
+/*
+ * Check for the correct C-bit position when the startup_32 boot-path is used.
+ *
+ * The check makes use of the fact that all memory is encrypted when paging is
+ * disabled. The function creates 64 bits of random data using the RDRAND
+ * instruction. RDRAND is mandatory for SEV guests, so it is always available;
+ * if the hypervisor violates that requirement, the kernel will crash here.
+ *
+ * The 64 bits of random data are stored to a memory location and at the same
+ * time kept in the %eax and %ebx registers. Since encryption is always active
+ * when paging is off, the random data will be stored encrypted in main memory.
+ *
+ * Then paging is enabled. When the C-bit position is correct, all memory is
+ * still mapped encrypted and comparing the register values with memory will
+ * succeed. An incorrect C-bit position will map all memory unencrypted, so
+ * that the compare will use the encrypted random data and fail.
+ */
+SYM_FUNC_START(sev_startup32_cbit_check)
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+ pushl %eax
+ pushl %ebx
+ pushl %ecx
+ pushl %edx
+
+ /* Check for non-zero sev_status */
+ movl rva(sev_status)(%ebp), %eax
+ testl %eax, %eax
+ jz 4f
+
+ /*
+ * Get two 32-bit random values - don't bail out if RDRAND fails,
+ * because hanging here is preferable to making forward progress
+ * without a random value.
+ */
+1: rdrand %eax
+ jnc 1b
+2: rdrand %ebx
+ jnc 2b
+
+ /* Store to memory and keep it in the registers */
+ movl %eax, rva(sev_check_data)(%ebp)
+ movl %ebx, rva(sev_check_data+4)(%ebp)
+
+ /* Enable paging to see if encryption is active */
+ movl %cr0, %edx /* Backup %cr0 in %edx */
+ movl $(X86_CR0_PG | X86_CR0_PE), %ecx /* Enable Paging and Protected mode */
+ movl %ecx, %cr0
+
+ cmpl %eax, rva(sev_check_data)(%ebp)
+ jne 3f
+ cmpl %ebx, rva(sev_check_data+4)(%ebp)
+ jne 3f
+
+ movl %edx, %cr0 /* Restore previous %cr0 */
+
+ jmp 4f
+
+3: /* Check failed - hlt the machine */
+ hlt
+ jmp 3b
+
+4:
+ popl %edx
+ popl %ecx
+ popl %ebx
+ popl %eax
+#endif
+ ret
+SYM_FUNC_END(sev_startup32_cbit_check)
/*
* Stack and heap for uncompression
*/
diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
index 091502cde070..b80fed167903 100644
--- a/arch/x86/boot/compressed/mem_encrypt.S
+++ b/arch/x86/boot/compressed/mem_encrypt.S
@@ -7,6 +7,7 @@
* Author: Tom Lendacky <[email protected]>
*/

+#define rva(X) ((X) - startup_32)
#include <linux/linkage.h>

#include <asm/processor-flags.h>
--
2.30.0


2021-02-10 16:28:48

by Dave Hansen

Subject: Re: [PATCH 6/7] x86/boot/compressed/64: Check SEV encryption in 32-bit boot-path

On 2/10/21 2:21 AM, Joerg Roedel wrote:
> +1: rdrand %eax
> + jnc 1b
> +2: rdrand %ebx
> + jnc 2b
> +
> + /* Store to memory and keep it in the registers */
> + movl %eax, rva(sev_check_data)(%ebp)
> + movl %ebx, rva(sev_check_data+4)(%ebp)
> +
> + /* Enable paging to see if encryption is active */
> + movl %cr0, %edx /* Backup %cr0 in %edx */
> + movl $(X86_CR0_PG | X86_CR0_PE), %ecx /* Enable Paging and Protected mode */
> + movl %ecx, %cr0
> +
> + cmpl %eax, rva(sev_check_data)(%ebp)
> + jne 3f
> + cmpl %ebx, rva(sev_check_data+4)(%ebp)
> + jne 3f
> +
> + movl %edx, %cr0 /* Restore previous %cr0 */
> +
> + jmp 4f

This is all very cute. But, if this fails, it means that the .data
section is now garbage, right? I guess failing here is less
entertaining than trying to run the kernel with random garbage in .data,
but it doesn't make it very far either way, right?

Why bother with rdrand, though? Couldn't you just pick any old piece of
.data and compare before and after?

2021-02-10 16:50:41

by Dave Hansen

Subject: Re: [PATCH 6/7] x86/boot/compressed/64: Check SEV encryption in 32-bit boot-path

On 2/10/21 2:21 AM, Joerg Roedel wrote:
> + /* Store to memory and keep it in the registers */
> + movl %eax, rva(sev_check_data)(%ebp)
> + movl %ebx, rva(sev_check_data+4)(%ebp)
> +
> + /* Enable paging to see if encryption is active */
> + movl %cr0, %edx /* Backup %cr0 in %edx */
> + movl $(X86_CR0_PG | X86_CR0_PE), %ecx /* Enable Paging and Protected mode */
> + movl %ecx, %cr0
> +
> + cmpl %eax, rva(sev_check_data)(%ebp)
> + jne 3f
> + cmpl %ebx, rva(sev_check_data+4)(%ebp)
> + jne 3f

Also, I know that turning paging on is a *BIG* barrier. But, I didn't
think it has any effect on the caches.

I would expect that the underlying physical address of 'sev_check_data'
would change when paging gets enabled because paging sets the C bit.
So, how does the write of 'sev_check_data' get out of the caches and
into memory where it can be read back with the new physical address?

I think there's some bit of the SEV architecture that I'm missing.

2021-02-10 16:51:28

by Joerg Roedel

Subject: Re: [PATCH 6/7] x86/boot/compressed/64: Check SEV encryption in 32-bit boot-path

On Wed, Feb 10, 2021 at 08:25:11AM -0800, Dave Hansen wrote:
> This is all very cute. But, if this fails, it means that the .data
> section is now garbage, right? I guess failing here is less
> entertaining than trying to run the kernel with random garbage in .data,
> but it doesn't make it very far either way, right?

Yes, if this fails the .data section is garbage, and more importantly,
the .text section of the decompressed kernel image would be garbage too.
The kernel won't get very far, but could possibly be tricked into
releasing secrets to the hypervisor.

> Why bother with rdrand, though? Couldn't you just pick any old piece of
> .data and compare before and after?

It is important that the hypervisor can't predict what data will be
written. The data is written with paging off, so it is implicitly
encrypted. If the hypervisor knew the data, it could use the small time
window until the data is read back to remap the GPA to a page holding
the expected data.
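
To make the window concrete, here is a toy user-space C model of the
check (purely illustrative: SEV's AES encryption is reduced to an XOR
with a key the hypervisor cannot read, and rand() stands in for RDRAND):

	#include <stdint.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <time.h>

	static uint64_t mem;	/* one "physical" cell (ciphertext view) */
	static const uint64_t key = 0x5aa5f00fc33c9669ULL; /* hidden from the HV */

	/* With paging disabled, all accesses are treated as encrypted. */
	static void store_paging_off(uint64_t v) { mem = v ^ key; }

	/* With the C bit set, reads are decrypted; with it clear, they
	 * return the raw (encrypted) bytes. */
	static uint64_t load(int cbit) { return cbit ? mem ^ key : mem; }

	int main(void)
	{
		srand((unsigned int)time(NULL));
		uint64_t r = ((uint64_t)rand() << 32) | (uint64_t)rand();

		store_paging_off(r);	/* also kept in a "register" (r) */

		/* Correct C-bit position: the value round-trips. */
		printf("correct C bit: %s\n", load(1) == r ? "pass" : "halt");

		/* Wrong C-bit position: the compare sees ciphertext. */
		printf("wrong C bit:   %s\n", load(0) == r ? "pass" : "halt");

		/* With predictable data, the HV could remap the GPA to a
		 * page already holding the expected plaintext: */
		mem = r;	/* HV plants its (here: correct) guess */
		printf("HV guessed:    %s\n", load(0) == r ? "pass" : "halt");
		return 0;
	}

The third case is the attack the random data defeats: because the
hypervisor cannot predict the value, it cannot plant a matching page in
the window between the store and the compare.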

Regards,

Joerg

2021-02-10 20:48:06

by Tom Lendacky

Subject: Re: [PATCH 6/7] x86/boot/compressed/64: Check SEV encryption in 32-bit boot-path

On 2/10/21 10:47 AM, Dave Hansen wrote:
> On 2/10/21 2:21 AM, Joerg Roedel wrote:
>> + /* Store to memory and keep it in the registers */
>> + movl %eax, rva(sev_check_data)(%ebp)
>> + movl %ebx, rva(sev_check_data+4)(%ebp)
>> +
>> + /* Enable paging to see if encryption is active */
>> + movl %cr0, %edx /* Backup %cr0 in %edx */
>> + movl $(X86_CR0_PG | X86_CR0_PE), %ecx /* Enable Paging and Protected mode */
>> + movl %ecx, %cr0
>> +
>> + cmpl %eax, rva(sev_check_data)(%ebp)
>> + jne 3f
>> + cmpl %ebx, rva(sev_check_data+4)(%ebp)
>> + jne 3f
>
> Also, I know that turning paging on is a *BIG* barrier. But, I didn't
> think it has any effect on the caches.
>
> I would expect that the underlying physical address of 'sev_check_data'
> would change when paging gets enabled because paging sets the C bit.
> So, how does the write of 'sev_check_data' get out of the caches and
> into memory where it can be read back with the new physical address?
>
> I think there's some bit of the SEV architecture that I'm missing.

Non-paging memory accesses are always considered private (APM Volume 2,
15.34.4) and thus are cached with the C bit set.

Thanks,
Tom


2021-03-04 07:16:41

by Borislav Petkov

Subject: Re: [PATCH 6/7] x86/boot/compressed/64: Check SEV encryption in 32-bit boot-path

On Wed, Feb 10, 2021 at 11:21:34AM +0100, Joerg Roedel wrote:
> + /*
> + * Store the sme_me_mask as an indicator that SEV is active. It will be
> + * set again in startup_64().

So why bother? Or does something need it before that?

...

> +SYM_FUNC_START(sev_startup32_cbit_check)

s/sev_startup32_cbit_check/startup32_check_sev_cbit/

I guess.

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2021-03-09 10:06:29

by Joerg Roedel

Subject: Re: [PATCH 6/7] x86/boot/compressed/64: Check SEV encryption in 32-bit boot-path

On Tue, Mar 02, 2021 at 08:43:53PM +0100, Borislav Petkov wrote:
> On Wed, Feb 10, 2021 at 11:21:34AM +0100, Joerg Roedel wrote:
> > + /*
> > + * Store the sme_me_mask as an indicator that SEV is active. It will be
> > + * set again in startup_64().
>
> So why bother? Or does something need it before that?

This was actually a bug. startup32_check_sev_cbit() needs an indicator
to skip the check when SEV is not active. The value was set here in
sme_me_mask, but the function later checks sev_status.

I fixed it by setting sev_status to 1 here instead (a non-zero value
indicates that SEV is active).

Regards,

Joerg