2024-02-14 22:32:23

by Ross Philipson

Subject: [PATCH v8 00/15] x86: Trenchboot secure dynamic launch Linux kernel support

The larger focus of the TrenchBoot project (https://github.com/TrenchBoot) is to
enhance boot security and integrity in a unified manner. The first area of
focus has been on the Trusted Computing Group's Dynamic Launch for establishing
a hardware Root of Trust for Measurement, also known as DRTM (Dynamic Root of
Trust for Measurement). The project has been, and continues, working to provide
a unified means of Dynamic Launch that is cross-platform (Intel and AMD) and
cross-architecture (x86 and Arm), with our recent involvement in the upcoming
Arm DRTM specification. The order of introducing DRTM to the Linux kernel
follows the maturity of DRTM in the architectures. Intel's Trusted eXecution
Technology (TXT) is present today and only requires a preamble loader, e.g. a
boot loader, and an OS kernel that is TXT-aware. AMD's DRTM implementation has
been present since the introduction of AMD-V but requires an additional
AMD-specific component, referred to in the specification as the Secure Loader,
for which the TrenchBoot project has an active prototype in development.
Finally, Arm's implementation is in the specification development stage, and
the project is looking to support it when it becomes available.

This patchset provides detailed documentation of DRTM, the approach used for
adding the capability, and relevant API/ABI documentation. In addition to the
documentation, the patch set introduces Intel TXT support as the first
platform for Linux Secure Launch.

A quick note on terminology. The larger open source project itself is called
TrenchBoot, which is hosted on Github (links below). The kernel feature
enabling the use of Dynamic Launch technology is referred to as "Secure
Launch" within the kernel code. As such, the prefixes sl_/SL_ or
slaunch/SLAUNCH will be seen in the code. The early kernel entry stub code is
referred to as the SL stub.

The Secure Launch feature starts with patch #2. Patch #1 was authored by
Arvind Sankar. There is no further status on that patch at this point, but
Secure Launch depends on it, so it is included with the set.

Links:

The TrenchBoot project including documentation:

https://trenchboot.org

The TrenchBoot project on Github:

https://github.com/trenchboot

Intel TXT is documented in its own specification and in the SDM Instruction Set volume:

https://www.intel.com/content/dam/www/public/us/en/documents/guides/intel-txt-software-development-guide.pdf
https://software.intel.com/en-us/articles/intel-sdm

AMD SKINIT is documented in the System Programming manual:

https://www.amd.com/system/files/TechDocs/24593.pdf

The TrenchBoot project provides a quick start guide to help get a system
up and running with Secure Launch for Linux:

https://github.com/TrenchBoot/documentation/blob/master/QUICKSTART.md

Patch set based on commit:

torvalds/master/54be6c6c5ae8e0d93a6c4641cb7528eb0b6ba478

Thanks
Ross Philipson and Daniel P. Smith

Changes in v2:

- Modified 32b entry code to prevent causing relocations in the compressed
kernel.
- Dropped patches for compressed kernel TPM PCR extender.
- Modified event log code to insert log delimiter events and not rely
on TPM access.
- Stop extending PCRs in the early Secure Launch stub code.
- Removed Kconfig options for hash algorithms and use the algorithms the
ACM used.
- Match Secure Launch measurement algorithm use to those reported in the
TPM 2.0 event log.
- Read the TPM events out of the TPM and extend them into the PCRs using
the mainline TPM driver. This is done in the late initcall module.
- Allow use of alternate PCR 19 and 20 for post ACM measurements.
- Add Kconfig constraints needed by Secure Launch (disable KASLR
and add x2apic dependency).
- Fix testing of SL_FLAGS when determining if Secure Launch is active
and the architecture is TXT.
- Use SYM_DATA_START_LOCAL macros in early entry point code.
- Security audit changes:
- Validate buffers passed to MLE do not overlap the MLE and are
properly laid out.
- Validate buffers and memory regions used by the MLE are
protected by IOMMU PMRs.
- Force IOMMU to not use passthrough mode during a Secure Launch.
- Prevent KASLR use during a Secure Launch.

Changes in v3:

- Introduce x86 documentation patch to provide background, overview
and configuration/ABI information for the Secure Launch kernel
feature.
- Remove the IOMMU patch with special cases for disabling IOMMU
passthrough. Configuring the IOMMU is now a documentation matter
in the previously mentioned new patch.
- Remove special case KASLR disabling code. Configuring KASLR is now
a documentation matter in the previously mentioned new patch.
- Fix incorrect panic on TXT public register read.
- Properly handle and measure setup_indirect bootparams in the early
launch code.
- Use correct compressed kernel image base address when testing buffers
in the early launch stub code. This bug was introduced by the changes
to avoid relocation in the compressed kernel.
- Use CPUID feature bits instead of CPUID vendor strings to determine
if SMX mode is supported and the system is Intel.
- Remove early NMI re-enable on the BSP. This can be safely done later
on the BSP after an IDT is setup.

Changes in v4:
- Expand the cover letter to provide more context to the order that DRTM
support will be added.
- Removed debug tracing in the TPM request locality function and fixed
local variable declarations.
- Fixed missing break in default case in slmodule.c.
- Reworded commit messages in patches 1 and 2 per suggestions.

Changes in v5:
- Comprehensive documentation rewrite.
- Use boot param loadflags to communicate Secure Launch status to
kernel proper.
- Fix incorrect check of X86_FEATURE_BIT_SMX bit.
- Rename the alternate details and authorities PCR support.
- Refactor the securityfs directory and file setup in slmodule.c.
- Misc. cleanup from internal code reviews.
- Use reverse fir tree format for variables.

Changes in v6:
- Support for the new Secure Launch Resource Table that standardizes
the information passed and forms the ABI between the pre- and
post-launch code.
- Support for booting Linux through the EFI stub entry point and
then being able to do a Secure Launch once the EFI stub is done and
ExitBootServices (EBS) has been called.
- Updates to the documentation to reflect the previous two items listed.

Changes in v7:
- Switch to using MONITOR/MWAIT instead of NMIs to park the APs for
later bringup by the SMP code.
- Use static inline dummy functions instead of macros when the Secure
Launch feature is disabled.
- Move early SHA1 code to lib/crypto and pull it in from there.
- Numerous formatting fixes from comments on LKML.
- Remove efi-stub/DL stub patch temporarily for redesign/rework.

Changes in v8:
- Reintroduce efi-stub Linux kernel booting through the dynamic launch
stub (DL stub).
- Add new approach to setting localities > 0 through kernel and sysfs
interfaces in the TPM mainline driver.
- General code cleanup from v7 post comments.

Arvind Sankar (1):
x86/boot: Place kernel_info at a fixed offset

Daniel P. Smith (2):
x86: Add early SHA support for Secure Launch early measurements
x86: Secure Launch late initcall platform module

Ross Philipson (12):
Documentation/x86: Secure Launch kernel documentation
x86: Secure Launch Kconfig
x86: Secure Launch Resource Table header file
x86: Secure Launch main header file
x86: Secure Launch kernel early boot stub
x86: Secure Launch kernel late boot stub
x86: Secure Launch SMP bringup support
kexec: Secure Launch kexec SEXIT support
reboot: Secure Launch SEXIT support on reboot paths
tpm: Add ability to set the preferred locality the TPM chip uses
tpm: Add sysfs interface to allow setting and querying the preferred
locality
x86: EFI stub DRTM launch support for Secure Launch

Documentation/arch/x86/boot.rst | 21 +
Documentation/security/index.rst | 1 +
.../security/launch-integrity/index.rst | 11 +
.../security/launch-integrity/principles.rst | 320 ++++++++
.../secure_launch_details.rst | 584 +++++++++++++++
.../secure_launch_overview.rst | 226 ++++++
arch/x86/Kconfig | 12 +
arch/x86/boot/compressed/Makefile | 3 +
arch/x86/boot/compressed/early_sha1.c | 12 +
arch/x86/boot/compressed/early_sha256.c | 6 +
arch/x86/boot/compressed/head_64.S | 34 +
arch/x86/boot/compressed/kernel_info.S | 53 +-
arch/x86/boot/compressed/kernel_info.h | 12 +
arch/x86/boot/compressed/sl_main.c | 582 +++++++++++++++
arch/x86/boot/compressed/sl_stub.S | 705 ++++++++++++++++++
arch/x86/boot/compressed/vmlinux.lds.S | 6 +
arch/x86/include/asm/msr-index.h | 5 +
arch/x86/include/asm/realmode.h | 3 +
arch/x86/include/uapi/asm/bootparam.h | 1 +
arch/x86/kernel/Makefile | 2 +
arch/x86/kernel/asm-offsets.c | 20 +
arch/x86/kernel/reboot.c | 10 +
arch/x86/kernel/setup.c | 3 +
arch/x86/kernel/slaunch.c | 598 +++++++++++++++
arch/x86/kernel/slmodule.c | 511 +++++++++++++
arch/x86/kernel/smpboot.c | 58 +-
arch/x86/realmode/init.c | 3 +
arch/x86/realmode/rm/header.S | 3 +
arch/x86/realmode/rm/trampoline_64.S | 32 +
drivers/char/tpm/tpm-chip.c | 24 +-
drivers/char/tpm/tpm-interface.c | 15 +
drivers/char/tpm/tpm-sysfs.c | 30 +
drivers/char/tpm/tpm.h | 1 +
drivers/firmware/efi/libstub/x86-stub.c | 55 ++
drivers/iommu/intel/dmar.c | 4 +
include/crypto/sha1.h | 1 +
include/linux/slaunch.h | 542 ++++++++++++++
include/linux/slr_table.h | 270 +++++++
include/linux/tpm.h | 10 +
kernel/kexec_core.c | 4 +
lib/crypto/sha1.c | 81 ++
41 files changed, 4867 insertions(+), 7 deletions(-)
create mode 100644 Documentation/security/launch-integrity/index.rst
create mode 100644 Documentation/security/launch-integrity/principles.rst
create mode 100644 Documentation/security/launch-integrity/secure_launch_details.rst
create mode 100644 Documentation/security/launch-integrity/secure_launch_overview.rst
create mode 100644 arch/x86/boot/compressed/early_sha1.c
create mode 100644 arch/x86/boot/compressed/early_sha256.c
create mode 100644 arch/x86/boot/compressed/kernel_info.h
create mode 100644 arch/x86/boot/compressed/sl_main.c
create mode 100644 arch/x86/boot/compressed/sl_stub.S
create mode 100644 arch/x86/kernel/slaunch.c
create mode 100644 arch/x86/kernel/slmodule.c
create mode 100644 include/linux/slaunch.h
create mode 100644 include/linux/slr_table.h

--
2.39.3



2024-02-14 22:32:39

by Ross Philipson

Subject: [PATCH v8 01/15] x86/boot: Place kernel_info at a fixed offset

From: Arvind Sankar <[email protected]>

There are use cases for storing the offset of a symbol in kernel_info.
For example, the trenchboot series [0] needs to store the offset of the
Measured Launch Environment header in kernel_info.

Since commit (note: commit ID from tip/master)

commit 527afc212231 ("x86/boot: Check that there are no run-time relocations")

run-time relocations are not allowed in the compressed kernel, so simply
using the symbol in kernel_info, as

.long symbol

will cause a linker error because this is not position-independent.

With kernel_info being a separate object file and in a different section
from startup_32, there is no way to calculate the offset of a symbol
from the start of the image in a position-independent way.

To enable such use cases, put kernel_info into its own section which is
placed at a predetermined offset (KERNEL_INFO_OFFSET) via the linker
script. This will allow calculating the symbol offset in a
position-independent way, by adding the offset from the start of
kernel_info to KERNEL_INFO_OFFSET.

Ensure that kernel_info is aligned, and use the SYM_DATA.* macros
instead of bare labels. This stores the size of the kernel_info
structure in the ELF symbol table.
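
For illustration, a preamble loader can now locate kernel_info at a
build-time constant offset in the image. A minimal sketch follows (the
struct mirrors the header/size fields emitted by kernel_info.S; the
helper name is hypothetical and not part of this patch):

  #include <stdint.h>
  #include <string.h>

  /* Mirrors KERNEL_INFO_OFFSET for 64-bit kernels (kernel_info.h). */
  #define KERNEL_INFO_OFFSET 0x500

  struct kernel_info_hdr {
          char     header[4];     /* "LToP" */
          uint32_t size;          /* size of the fixed portion */
          uint32_t size_total;    /* size including variable-length data */
          uint32_t setup_type_max;
  };

  /* 'image' points at the start of the protected-mode kernel image. */
  static const struct kernel_info_hdr *find_kernel_info(const uint8_t *image)
  {
          const struct kernel_info_hdr *ki =
                  (const void *)(image + KERNEL_INFO_OFFSET);

          /* The fixed offset makes this an O(1) lookup; verify the magic. */
          if (memcmp(ki->header, "LToP", 4) != 0)
                  return NULL;

          return ki;
  }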

Signed-off-by: Arvind Sankar <[email protected]>
Cc: Ross Philipson <[email protected]>
Signed-off-by: Ross Philipson <[email protected]>
---
arch/x86/boot/compressed/kernel_info.S | 19 +++++++++++++++----
arch/x86/boot/compressed/kernel_info.h | 12 ++++++++++++
arch/x86/boot/compressed/vmlinux.lds.S | 6 ++++++
3 files changed, 33 insertions(+), 4 deletions(-)
create mode 100644 arch/x86/boot/compressed/kernel_info.h

diff --git a/arch/x86/boot/compressed/kernel_info.S b/arch/x86/boot/compressed/kernel_info.S
index f818ee8fba38..c18f07181dd5 100644
--- a/arch/x86/boot/compressed/kernel_info.S
+++ b/arch/x86/boot/compressed/kernel_info.S
@@ -1,12 +1,23 @@
/* SPDX-License-Identifier: GPL-2.0 */

+#include <linux/linkage.h>
#include <asm/bootparam.h>
+#include "kernel_info.h"

- .section ".rodata.kernel_info", "a"
+/*
+ * If a field needs to hold the offset of a symbol from the start
+ * of the image, use the macro below, eg
+ * .long rva(symbol)
+ * This will avoid creating run-time relocations, which are not
+ * allowed in the compressed kernel.
+ */
+
+#define rva(X) (((X) - kernel_info) + KERNEL_INFO_OFFSET)

- .global kernel_info
+ .section ".rodata.kernel_info", "a"

-kernel_info:
+ .balign 16
+SYM_DATA_START(kernel_info)
/* Header, Linux top (structure). */
.ascii "LToP"
/* Size. */
@@ -19,4 +30,4 @@ kernel_info:

kernel_info_var_len_data:
/* Empty for time being... */
-kernel_info_end:
+SYM_DATA_END_LABEL(kernel_info, SYM_L_LOCAL, kernel_info_end)
diff --git a/arch/x86/boot/compressed/kernel_info.h b/arch/x86/boot/compressed/kernel_info.h
new file mode 100644
index 000000000000..c127f84aec63
--- /dev/null
+++ b/arch/x86/boot/compressed/kernel_info.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef BOOT_COMPRESSED_KERNEL_INFO_H
+#define BOOT_COMPRESSED_KERNEL_INFO_H
+
+#ifdef CONFIG_X86_64
+#define KERNEL_INFO_OFFSET 0x500
+#else /* 32-bit */
+#define KERNEL_INFO_OFFSET 0x100
+#endif
+
+#endif /* BOOT_COMPRESSED_KERNEL_INFO_H */
diff --git a/arch/x86/boot/compressed/vmlinux.lds.S b/arch/x86/boot/compressed/vmlinux.lds.S
index 083ec6d7722a..718c52f3f1e6 100644
--- a/arch/x86/boot/compressed/vmlinux.lds.S
+++ b/arch/x86/boot/compressed/vmlinux.lds.S
@@ -7,6 +7,7 @@ OUTPUT_FORMAT(CONFIG_OUTPUT_FORMAT)

#include <asm/cache.h>
#include <asm/page_types.h>
+#include "kernel_info.h"

#ifdef CONFIG_X86_64
OUTPUT_ARCH(i386:x86-64)
@@ -27,6 +28,11 @@ SECTIONS
HEAD_TEXT
_ehead = . ;
}
+ .rodata.kernel_info KERNEL_INFO_OFFSET : {
+ *(.rodata.kernel_info)
+ }
+ ASSERT(ABSOLUTE(kernel_info) == KERNEL_INFO_OFFSET, "kernel_info at bad address!")
+
.rodata..compressed : {
*(.rodata..compressed)
}
--
2.39.3


2024-02-14 22:33:19

by Ross Philipson

Subject: [PATCH v8 05/15] x86: Secure Launch main header file

Introduce the main Secure Launch header file used in the early SL stub
and the early setup code.
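
As a usage sketch, the TXT heap accessors this header introduces chain
the per-table size fields to reach each heap table. For example, the
TPM 2.0 event log element can be located from an already-mapped heap
(the caller below is hypothetical; mapping and error handling omitted):

  #include <linux/slaunch.h>

  /* Hypothetical caller: 'heap' is a virtual mapping of the TXT heap. */
  static struct txt_heap_event_log_pointer2_1_element *
  example_find_tpm20_log(void *heap)
  {
          struct txt_os_sinit_data *os_sinit =
                  (struct txt_os_sinit_data *)txt_os_sinit_data_start(heap);

          /* Walk the extended data elements after the fixed fields. */
          return tpm20_find_log2_1_element(os_sinit);
  }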

Signed-off-by: Ross Philipson <[email protected]>
---
include/linux/slaunch.h | 542 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 542 insertions(+)
create mode 100644 include/linux/slaunch.h

diff --git a/include/linux/slaunch.h b/include/linux/slaunch.h
new file mode 100644
index 000000000000..da2988e32ada
--- /dev/null
+++ b/include/linux/slaunch.h
@@ -0,0 +1,542 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Main Secure Launch header file.
+ *
+ * Copyright (c) 2022, Oracle and/or its affiliates.
+ */
+
+#ifndef _LINUX_SLAUNCH_H
+#define _LINUX_SLAUNCH_H
+
+/*
+ * Secure Launch Defined State Flags
+ */
+#define SL_FLAG_ACTIVE 0x00000001
+#define SL_FLAG_ARCH_SKINIT 0x00000002
+#define SL_FLAG_ARCH_TXT 0x00000004
+
+/*
+ * Secure Launch CPU Type
+ */
+#define SL_CPU_AMD 1
+#define SL_CPU_INTEL 2
+
+#if IS_ENABLED(CONFIG_SECURE_LAUNCH)
+
+#define __SL32_CS 0x0008
+#define __SL32_DS 0x0010
+
+/*
+ * Intel Safer Mode Extensions (SMX)
+ *
+ * Intel SMX provides a programming interface to establish a Measured Launch
+ * Environment (MLE). The measurement and protection mechanisms are supported
+ * by the capabilities of an Intel Trusted Execution Technology (TXT) platform.
+ * SMX is the processor's programming interface in an Intel TXT platform.
+ *
+ * See Intel SDM Volume 2 - 6.1 "Safer Mode Extensions Reference"
+ */
+
+/*
+ * SMX GETSEC Leaf Functions
+ */
+#define SMX_X86_GETSEC_SEXIT 5
+#define SMX_X86_GETSEC_SMCTRL 7
+#define SMX_X86_GETSEC_WAKEUP 8
+
+/*
+ * Intel Trusted Execution Technology MMIO Registers Banks
+ */
+#define TXT_PUB_CONFIG_REGS_BASE 0xfed30000
+#define TXT_PRIV_CONFIG_REGS_BASE 0xfed20000
+#define TXT_NR_CONFIG_PAGES ((TXT_PUB_CONFIG_REGS_BASE - \
+ TXT_PRIV_CONFIG_REGS_BASE) >> PAGE_SHIFT)
+
+/*
+ * Intel Trusted Execution Technology (TXT) Registers
+ */
+#define TXT_CR_STS 0x0000
+#define TXT_CR_ESTS 0x0008
+#define TXT_CR_ERRORCODE 0x0030
+#define TXT_CR_CMD_RESET 0x0038
+#define TXT_CR_CMD_CLOSE_PRIVATE 0x0048
+#define TXT_CR_DIDVID 0x0110
+#define TXT_CR_VER_EMIF 0x0200
+#define TXT_CR_CMD_UNLOCK_MEM_CONFIG 0x0218
+#define TXT_CR_SINIT_BASE 0x0270
+#define TXT_CR_SINIT_SIZE 0x0278
+#define TXT_CR_MLE_JOIN 0x0290
+#define TXT_CR_HEAP_BASE 0x0300
+#define TXT_CR_HEAP_SIZE 0x0308
+#define TXT_CR_SCRATCHPAD 0x0378
+#define TXT_CR_CMD_OPEN_LOCALITY1 0x0380
+#define TXT_CR_CMD_CLOSE_LOCALITY1 0x0388
+#define TXT_CR_CMD_OPEN_LOCALITY2 0x0390
+#define TXT_CR_CMD_CLOSE_LOCALITY2 0x0398
+#define TXT_CR_CMD_SECRETS 0x08e0
+#define TXT_CR_CMD_NO_SECRETS 0x08e8
+#define TXT_CR_E2STS 0x08f0
+
+/* TXT default register value */
+#define TXT_REGVALUE_ONE 0x1ULL
+
+/* TXTCR_STS status bits */
+#define TXT_SENTER_DONE_STS BIT(0)
+#define TXT_SEXIT_DONE_STS BIT(1)
+
+/*
+ * SINIT/MLE Capabilities Field Bit Definitions
+ */
+#define TXT_SINIT_MLE_CAP_WAKE_GETSEC 0
+#define TXT_SINIT_MLE_CAP_WAKE_MONITOR 1
+
+/*
+ * OS/MLE Secure Launch Specific Definitions
+ */
+#define TXT_OS_MLE_STRUCT_VERSION 1
+#define TXT_OS_MLE_MAX_VARIABLE_MTRRS 32
+
+/*
+ * TXT Heap Table Enumeration
+ */
+#define TXT_BIOS_DATA_TABLE 1
+#define TXT_OS_MLE_DATA_TABLE 2
+#define TXT_OS_SINIT_DATA_TABLE 3
+#define TXT_SINIT_MLE_DATA_TABLE 4
+#define TXT_SINIT_TABLE_MAX TXT_SINIT_MLE_DATA_TABLE
+
+/*
+ * Secure Launch Defined Error Codes used in MLE-initiated TXT resets.
+ *
+ * TXT Specification
+ * Appendix I ACM Error Codes
+ */
+#define SL_ERROR_GENERIC 0xc0008001
+#define SL_ERROR_TPM_INIT 0xc0008002
+#define SL_ERROR_TPM_INVALID_LOG20 0xc0008003
+#define SL_ERROR_TPM_LOGGING_FAILED 0xc0008004
+#define SL_ERROR_REGION_STRADDLE_4GB 0xc0008005
+#define SL_ERROR_TPM_EXTEND 0xc0008006
+#define SL_ERROR_MTRR_INV_VCNT 0xc0008007
+#define SL_ERROR_MTRR_INV_DEF_TYPE 0xc0008008
+#define SL_ERROR_MTRR_INV_BASE 0xc0008009
+#define SL_ERROR_MTRR_INV_MASK 0xc000800a
+#define SL_ERROR_MSR_INV_MISC_EN 0xc000800b
+#define SL_ERROR_INV_AP_INTERRUPT 0xc000800c
+#define SL_ERROR_INTEGER_OVERFLOW 0xc000800d
+#define SL_ERROR_HEAP_WALK 0xc000800e
+#define SL_ERROR_HEAP_MAP 0xc000800f
+#define SL_ERROR_REGION_ABOVE_4GB 0xc0008010
+#define SL_ERROR_HEAP_INVALID_DMAR 0xc0008011
+#define SL_ERROR_HEAP_DMAR_SIZE 0xc0008012
+#define SL_ERROR_HEAP_DMAR_MAP 0xc0008013
+#define SL_ERROR_HI_PMR_BASE 0xc0008014
+#define SL_ERROR_HI_PMR_SIZE 0xc0008015
+#define SL_ERROR_LO_PMR_BASE 0xc0008016
+#define SL_ERROR_LO_PMR_MLE 0xc0008017
+#define SL_ERROR_INITRD_TOO_BIG 0xc0008018
+#define SL_ERROR_HEAP_ZERO_OFFSET 0xc0008019
+#define SL_ERROR_WAKE_BLOCK_TOO_SMALL 0xc000801a
+#define SL_ERROR_MLE_BUFFER_OVERLAP 0xc000801b
+#define SL_ERROR_BUFFER_BEYOND_PMR 0xc000801c
+#define SL_ERROR_OS_SINIT_BAD_VERSION 0xc000801d
+#define SL_ERROR_EVENTLOG_MAP 0xc000801e
+#define SL_ERROR_TPM_NUMBER_ALGS 0xc000801f
+#define SL_ERROR_TPM_UNKNOWN_DIGEST 0xc0008020
+#define SL_ERROR_TPM_INVALID_EVENT 0xc0008021
+#define SL_ERROR_INVALID_SLRT 0xc0008022
+#define SL_ERROR_SLRT_MISSING_ENTRY 0xc0008023
+#define SL_ERROR_SLRT_MAP 0xc0008024
+
+/*
+ * Secure Launch Defined Limits
+ */
+#define TXT_MAX_CPUS 512
+#define TXT_BOOT_STACK_SIZE 128
+
+/*
+ * Secure Launch event log entry type. The TXT specification defines the
+ * base event value as 0x400 for DRTM values.
+ */
+#define TXT_EVTYPE_BASE 0x400
+#define TXT_EVTYPE_SLAUNCH (TXT_EVTYPE_BASE + 0x102)
+#define TXT_EVTYPE_SLAUNCH_START (TXT_EVTYPE_BASE + 0x103)
+#define TXT_EVTYPE_SLAUNCH_END (TXT_EVTYPE_BASE + 0x104)
+
+/*
+ * Measured Launch PCRs
+ */
+#define SL_DEF_DLME_DETAIL_PCR17 17
+#define SL_DEF_DLME_AUTHORITY_PCR18 18
+#define SL_ALT_DLME_AUTHORITY_PCR19 19
+#define SL_ALT_DLME_DETAIL_PCR20 20
+
+/*
+ * MLE scratch area offsets
+ */
+#define SL_SCRATCH_AP_EBX 0
+#define SL_SCRATCH_AP_JMP_OFFSET 4
+#define SL_SCRATCH_AP_STACKS_OFFSET 8
+
+#ifndef __ASSEMBLY__
+
+#include <linux/io.h>
+#include <linux/tpm.h>
+#include <linux/tpm_eventlog.h>
+
+/*
+ * Secure Launch AP stack and monitor block
+ */
+struct sl_ap_stack_and_monitor {
+ u32 monitor;
+ u32 cache_pad[15];
+ u32 stack_pad[15];
+ u32 apicid;
+} __packed;
+
+/*
+ * Secure Launch AP wakeup information fetched in SMP boot code.
+ */
+struct sl_ap_wake_info {
+ u32 ap_wake_block;
+ u32 ap_wake_block_size;
+ u32 ap_jmp_offset;
+ u32 ap_stacks_offset;
+};
+
+/*
+ * TXT heap extended data elements.
+ */
+struct txt_heap_ext_data_element {
+ u32 type;
+ u32 size;
+ /* Data */
+} __packed;
+
+#define TXT_HEAP_EXTDATA_TYPE_END 0
+
+struct txt_heap_end_element {
+ u32 type;
+ u32 size;
+} __packed;
+
+#define TXT_HEAP_EXTDATA_TYPE_TPM_EVENT_LOG_PTR 5
+
+struct txt_heap_event_log_element {
+ u64 event_log_phys_addr;
+} __packed;
+
+#define TXT_HEAP_EXTDATA_TYPE_EVENT_LOG_POINTER2_1 8
+
+struct txt_heap_event_log_pointer2_1_element {
+ u64 phys_addr;
+ u32 allocated_event_container_size;
+ u32 first_record_offset;
+ u32 next_record_offset;
+} __packed;
+
+/*
+ * Secure Launch defined OS/MLE TXT Heap table
+ */
+struct txt_os_mle_data {
+ u32 version;
+ u32 boot_params_addr;
+ u64 slrt;
+ u64 txt_info;
+ u32 ap_wake_block;
+ u32 ap_wake_block_size;
+ u8 mle_scratch[64];
+} __packed;
+
+/*
+ * TXT specification defined BIOS data TXT Heap table
+ */
+struct txt_bios_data {
+ u32 version; /* Currently 5 for TPM 1.2 and 6 for TPM 2.0 */
+ u32 bios_sinit_size;
+ u64 reserved1;
+ u64 reserved2;
+ u32 num_logical_procs;
+ /* Versions >= 5 with updates in version 6 */
+ u32 sinit_flags;
+ u32 mle_flags;
+ /* Versions >= 4 */
+ /* Ext Data Elements */
+} __packed;
+
+/*
+ * TXT specification defined OS/SINIT TXT Heap table
+ */
+struct txt_os_sinit_data {
+ u32 version; /* Currently 6 for TPM 1.2 and 7 for TPM 2.0 */
+ u32 flags;
+ u64 mle_ptab;
+ u64 mle_size;
+ u64 mle_hdr_base;
+ u64 vtd_pmr_lo_base;
+ u64 vtd_pmr_lo_size;
+ u64 vtd_pmr_hi_base;
+ u64 vtd_pmr_hi_size;
+ u64 lcp_po_base;
+ u64 lcp_po_size;
+ u32 capabilities;
+ /* Version = 5 */
+ u64 efi_rsdt_ptr;
+ /* Versions >= 6 */
+ /* Ext Data Elements */
+} __packed;
+
+/*
+ * TXT specification defined SINIT/MLE TXT Heap table
+ */
+struct txt_sinit_mle_data {
+ u32 version; /* Current values are 6 through 9 */
+ /* Versions <= 8 */
+ u8 bios_acm_id[20];
+ u32 edx_senter_flags;
+ u64 mseg_valid;
+ u8 sinit_hash[20];
+ u8 mle_hash[20];
+ u8 stm_hash[20];
+ u8 lcp_policy_hash[20];
+ u32 lcp_policy_control;
+ /* Versions >= 7 */
+ u32 rlp_wakeup_addr;
+ u32 reserved;
+ u32 num_of_sinit_mdrs;
+ u32 sinit_mdrs_table_offset;
+ u32 sinit_vtd_dmar_table_size;
+ u32 sinit_vtd_dmar_table_offset;
+ /* Versions >= 8 */
+ u32 processor_scrtm_status;
+ /* Versions >= 9 */
+ /* Ext Data Elements */
+} __packed;
+
+/*
+ * TXT data reporting structure for memory types
+ */
+struct txt_sinit_memory_descriptor_record {
+ u64 address;
+ u64 length;
+ u8 type;
+ u8 reserved[7];
+} __packed;
+
+/*
+ * TXT data structure used by a responsive local processor (RLP) to start
+ * execution in response to a GETSEC[WAKEUP].
+ */
+struct smx_rlp_mle_join {
+ u32 rlp_gdt_limit;
+ u32 rlp_gdt_base;
+ u32 rlp_seg_sel; /* cs (ds, es, ss are seg_sel+8) */
+ u32 rlp_entry_point; /* phys addr */
+} __packed;
+
+/*
+ * TPM event log structures defined in both the TXT specification and
+ * the TCG documentation.
+ */
+#define TPM12_EVTLOG_SIGNATURE "TXT Event Container"
+
+struct tpm12_event_log_header {
+ char signature[20];
+ char reserved[12];
+ u8 container_ver_major;
+ u8 container_ver_minor;
+ u8 pcr_event_ver_major;
+ u8 pcr_event_ver_minor;
+ u32 container_size;
+ u32 pcr_events_offset;
+ u32 next_event_offset;
+ /* PCREvents[] */
+} __packed;
+
+/*
+ * Functions to extract data from the Intel TXT Heap Memory. The layout
+ * of the heap is as follows:
+ * +----------------------------+
+ * | Size Bios Data table (u64) |
+ * +----------------------------+
+ * | Bios Data table |
+ * +----------------------------+
+ * | Size OS MLE table (u64) |
+ * +----------------------------+
+ * | OS MLE table |
+ * +--------------------------- +
+ * | Size OS SINIT table (u64) |
+ * +----------------------------+
+ * | OS SINIT table |
+ * +----------------------------+
+ * | Size SINIT MLE table (u64) |
+ * +----------------------------+
+ * | SINIT MLE table |
+ * +----------------------------+
+ *
+ * NOTE: the table size fields include the 8 byte size field itself.
+ */
+static inline u64 txt_bios_data_size(void *heap)
+{
+ return *((u64 *)heap);
+}
+
+static inline void *txt_bios_data_start(void *heap)
+{
+ return heap + sizeof(u64);
+}
+
+static inline u64 txt_os_mle_data_size(void *heap)
+{
+ return *((u64 *)(heap + txt_bios_data_size(heap)));
+}
+
+static inline void *txt_os_mle_data_start(void *heap)
+{
+ return heap + txt_bios_data_size(heap) + sizeof(u64);
+}
+
+static inline u64 txt_os_sinit_data_size(void *heap)
+{
+ return *((u64 *)(heap + txt_bios_data_size(heap) +
+ txt_os_mle_data_size(heap)));
+}
+
+static inline void *txt_os_sinit_data_start(void *heap)
+{
+ return heap + txt_bios_data_size(heap) +
+ txt_os_mle_data_size(heap) + sizeof(u64);
+}
+
+static inline u64 txt_sinit_mle_data_size(void *heap)
+{
+ return *((u64 *)(heap + txt_bios_data_size(heap) +
+ txt_os_mle_data_size(heap) +
+ txt_os_sinit_data_size(heap)));
+}
+
+static inline void *txt_sinit_mle_data_start(void *heap)
+{
+ return heap + txt_bios_data_size(heap) +
+ txt_os_mle_data_size(heap) +
+ txt_os_sinit_data_size(heap) + sizeof(u64);
+}
+
+/*
+ * TPM event logging functions.
+ */
+static inline struct txt_heap_event_log_pointer2_1_element*
+tpm20_find_log2_1_element(struct txt_os_sinit_data *os_sinit_data)
+{
+ struct txt_heap_ext_data_element *ext_elem;
+
+ /* The extended element array is at the end of this table */
+ ext_elem = (struct txt_heap_ext_data_element *)
+ ((u8 *)os_sinit_data + sizeof(struct txt_os_sinit_data));
+
+ while (ext_elem->type != TXT_HEAP_EXTDATA_TYPE_END) {
+ if (ext_elem->type ==
+ TXT_HEAP_EXTDATA_TYPE_EVENT_LOG_POINTER2_1) {
+ return (struct txt_heap_event_log_pointer2_1_element *)
+ ((u8 *)ext_elem +
+ sizeof(struct txt_heap_ext_data_element));
+ }
+ ext_elem =
+ (struct txt_heap_ext_data_element *)
+ ((u8 *)ext_elem + ext_elem->size);
+ }
+
+ return NULL;
+}
+
+static inline int tpm12_log_event(void *evtlog_base, u32 evtlog_size,
+ u32 event_size, void *event)
+{
+ struct tpm12_event_log_header *evtlog =
+ (struct tpm12_event_log_header *)evtlog_base;
+
+ if (memcmp(evtlog->signature, TPM12_EVTLOG_SIGNATURE,
+ sizeof(TPM12_EVTLOG_SIGNATURE)))
+ return -EINVAL;
+
+ if (evtlog->container_size > evtlog_size)
+ return -EINVAL;
+
+ if (evtlog->next_event_offset + event_size > evtlog->container_size)
+ return -E2BIG;
+
+ memcpy(evtlog_base + evtlog->next_event_offset, event, event_size);
+ evtlog->next_event_offset += event_size;
+
+ return 0;
+}
+
+static inline int tpm20_log_event(struct txt_heap_event_log_pointer2_1_element *elem,
+ void *evtlog_base, u32 evtlog_size,
+ u32 event_size, void *event)
+{
+ struct tcg_pcr_event *header =
+ (struct tcg_pcr_event *)evtlog_base;
+
+ /* Has to be at least big enough for the signature */
+ if (header->event_size < sizeof(TCG_SPECID_SIG))
+ return -EINVAL;
+
+ if (memcmp((u8 *)header + sizeof(struct tcg_pcr_event),
+ TCG_SPECID_SIG, sizeof(TCG_SPECID_SIG)))
+ return -EINVAL;
+
+ if (elem->allocated_event_container_size > evtlog_size)
+ return -EINVAL;
+
+ if (elem->next_record_offset + event_size >
+ elem->allocated_event_container_size)
+ return -E2BIG;
+
+ memcpy(evtlog_base + elem->next_record_offset, event, event_size);
+ elem->next_record_offset += event_size;
+
+ return 0;
+}
+
+/*
+ * External functions available in the mainline kernel.
+ */
+void slaunch_setup_txt(void);
+void slaunch_fixup_jump_vector(void);
+u32 slaunch_get_flags(void);
+struct sl_ap_wake_info *slaunch_get_ap_wake_info(void);
+struct acpi_table_header *slaunch_get_dmar_table(struct acpi_table_header *dmar);
+void __noreturn slaunch_txt_reset(void __iomem *txt,
+ const char *msg, u64 error);
+extern void slaunch_finalize(int do_sexit);
+
+#endif /* !__ASSEMBLY__ */
+
+#else
+
+static inline void slaunch_setup_txt(void)
+{
+}
+
+static inline void slaunch_fixup_jump_vector(void)
+{
+}
+
+static inline u32 slaunch_get_flags(void)
+{
+ return 0;
+}
+
+static inline struct acpi_table_header *slaunch_get_dmar_table(struct acpi_table_header *dmar)
+{
+ return dmar;
+}
+
+static inline void slaunch_finalize(int do_sexit)
+{
+}
+
+#endif /* !IS_ENABLED(CONFIG_SECURE_LAUNCH) */
+
+#endif /* _LINUX_SLAUNCH_H */
--
2.39.3


2024-02-14 22:33:59

by Ross Philipson

Subject: [PATCH v8 14/15] x86: Secure Launch late initcall platform module

From: "Daniel P. Smith" <[email protected]>

The Secure Launch platform module is a late init module. During the
init call, the TPM event log is read and the measurements taken in the
early boot stub code are located. These measurements are extended
into the TPM PCRs using the mainline TPM kernel driver.

The platform module also registers securityfs nodes to allow access
to the TXT register fields on Intel, along with reading events from
and writing events to the late launch TPM log.
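
For illustration, once the nodes are registered they can be read from
userspace like regular files (a sketch, assuming securityfs is mounted
at its conventional /sys/kernel/security location):

  #include <stdio.h>

  int main(void)
  {
          char buf[32];
          /* TXT ERRORCODE register exposed by this module */
          FILE *f = fopen("/sys/kernel/security/slaunch/txt/errorcode", "r");

          if (!f)
                  return 1;
          if (fgets(buf, sizeof(buf), f))
                  printf("TXT errorcode: %s", buf);
          fclose(f);
          return 0;
  }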

Signed-off-by: Daniel P. Smith <[email protected]>
Signed-off-by: garnetgrimm <[email protected]>
Signed-off-by: Ross Philipson <[email protected]>
---
arch/x86/kernel/Makefile | 1 +
arch/x86/kernel/slmodule.c | 511 +++++++++++++++++++++++++++++++++++++
2 files changed, 512 insertions(+)
create mode 100644 arch/x86/kernel/slmodule.c

diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 5848ea310175..948346ff4595 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -75,6 +75,7 @@ obj-$(CONFIG_IA32_EMULATION) += tls.o
obj-y += step.o
obj-$(CONFIG_INTEL_TXT) += tboot.o
obj-$(CONFIG_SECURE_LAUNCH) += slaunch.o
+obj-$(CONFIG_SECURE_LAUNCH) += slmodule.o
obj-$(CONFIG_ISA_DMA_API) += i8237.o
obj-y += stacktrace.o
obj-y += cpu/
diff --git a/arch/x86/kernel/slmodule.c b/arch/x86/kernel/slmodule.c
new file mode 100644
index 000000000000..52269f24902e
--- /dev/null
+++ b/arch/x86/kernel/slmodule.c
@@ -0,0 +1,511 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Secure Launch late validation/setup, securityfs exposure and finalization.
+ *
+ * Copyright (c) 2022 Apertus Solutions, LLC
+ * Copyright (c) 2021 Assured Information Security, Inc.
+ * Copyright (c) 2022, Oracle and/or its affiliates.
+ *
+ * Co-developed-by: Garnet T. Grimm <[email protected]>
+ * Signed-off-by: Garnet T. Grimm <[email protected]>
+ * Signed-off-by: Daniel P. Smith <[email protected]>
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/fs.h>
+#include <linux/init.h>
+#include <linux/linkage.h>
+#include <linux/mm.h>
+#include <linux/io.h>
+#include <linux/uaccess.h>
+#include <linux/security.h>
+#include <linux/memblock.h>
+#include <asm/segment.h>
+#include <asm/sections.h>
+#include <crypto/sha2.h>
+#include <linux/slr_table.h>
+#include <linux/slaunch.h>
+
+/*
+ * The macro DECLARE_TXT_PUB_READ_U is used to read values from the TXT
+ * public registers as unsigned values.
+ */
+#define DECLARE_TXT_PUB_READ_U(size, fmt, msg_size) \
+static ssize_t txt_pub_read_u##size(unsigned int offset, \
+ loff_t *read_offset, \
+ size_t read_len, \
+ char __user *buf) \
+{ \
+ char msg_buffer[msg_size]; \
+ u##size reg_value = 0; \
+ void __iomem *txt; \
+ \
+ txt = ioremap(TXT_PUB_CONFIG_REGS_BASE, \
+ TXT_NR_CONFIG_PAGES * PAGE_SIZE); \
+ if (!txt) \
+ return -EFAULT; \
+ memcpy_fromio(&reg_value, txt + offset, sizeof(u##size)); \
+ iounmap(txt); \
+ snprintf(msg_buffer, msg_size, fmt, reg_value); \
+ return simple_read_from_buffer(buf, read_len, read_offset, \
+ &msg_buffer, msg_size); \
+}
+
+DECLARE_TXT_PUB_READ_U(8, "%#04x\n", 6);
+DECLARE_TXT_PUB_READ_U(32, "%#010x\n", 12);
+DECLARE_TXT_PUB_READ_U(64, "%#018llx\n", 20);
+
+#define DECLARE_TXT_FOPS(reg_name, reg_offset, reg_size) \
+static ssize_t txt_##reg_name##_read(struct file *flip, \
+ char __user *buf, size_t read_len, loff_t *read_offset) \
+{ \
+ return txt_pub_read_u##reg_size(reg_offset, read_offset, \
+ read_len, buf); \
+} \
+static const struct file_operations reg_name##_ops = { \
+ .read = txt_##reg_name##_read, \
+}
+
+DECLARE_TXT_FOPS(sts, TXT_CR_STS, 64);
+DECLARE_TXT_FOPS(ests, TXT_CR_ESTS, 8);
+DECLARE_TXT_FOPS(errorcode, TXT_CR_ERRORCODE, 32);
+DECLARE_TXT_FOPS(didvid, TXT_CR_DIDVID, 64);
+DECLARE_TXT_FOPS(e2sts, TXT_CR_E2STS, 64);
+DECLARE_TXT_FOPS(ver_emif, TXT_CR_VER_EMIF, 32);
+DECLARE_TXT_FOPS(scratchpad, TXT_CR_SCRATCHPAD, 64);
+
+/*
+ * Securityfs exposure
+ */
+struct memfile {
+ char *name;
+ void *addr;
+ size_t size;
+};
+
+static struct memfile sl_evtlog = {"eventlog", NULL, 0};
+static void *txt_heap;
+static struct txt_heap_event_log_pointer2_1_element *evtlog20;
+static DEFINE_MUTEX(sl_evt_log_mutex);
+
+static ssize_t sl_evtlog_read(struct file *file, char __user *buf,
+ size_t count, loff_t *pos)
+{
+ ssize_t size;
+
+ if (!sl_evtlog.addr)
+ return 0;
+
+ mutex_lock(&sl_evt_log_mutex);
+ size = simple_read_from_buffer(buf, count, pos, sl_evtlog.addr,
+ sl_evtlog.size);
+ mutex_unlock(&sl_evt_log_mutex);
+
+ return size;
+}
+
+static ssize_t sl_evtlog_write(struct file *file, const char __user *buf,
+ size_t datalen, loff_t *ppos)
+{
+ ssize_t result;
+ char *data;
+
+ if (!sl_evtlog.addr)
+ return 0;
+
+ /* No partial writes. */
+ result = -EINVAL;
+ if (*ppos != 0)
+ goto out;
+
+ data = memdup_user(buf, datalen);
+ if (IS_ERR(data)) {
+ result = PTR_ERR(data);
+ goto out;
+ }
+
+ mutex_lock(&sl_evt_log_mutex);
+ if (evtlog20)
+ result = tpm20_log_event(evtlog20, sl_evtlog.addr,
+ sl_evtlog.size, datalen, data);
+ else
+ result = tpm12_log_event(sl_evtlog.addr, sl_evtlog.size,
+ datalen, data);
+ mutex_unlock(&sl_evt_log_mutex);
+
+ kfree(data);
+out:
+ return result;
+}
+
+static const struct file_operations sl_evtlog_ops = {
+ .read = sl_evtlog_read,
+ .write = sl_evtlog_write,
+ .llseek = default_llseek,
+};
+
+struct sfs_file {
+ const char *name;
+ const struct file_operations *fops;
+};
+
+#define SL_TXT_ENTRY_COUNT 7
+static const struct sfs_file sl_txt_files[] = {
+ { "sts", &sts_ops },
+ { "ests", &ests_ops },
+ { "errorcode", &errorcode_ops },
+ { "didvid", &didvid_ops },
+ { "ver_emif", &ver_emif_ops },
+ { "scratchpad", &scratchpad_ops },
+ { "e2sts", &e2sts_ops }
+};
+
+/* sysfs file handles */
+static struct dentry *slaunch_dir;
+static struct dentry *event_file;
+static struct dentry *txt_dir;
+static struct dentry *txt_entries[SL_TXT_ENTRY_COUNT];
+
+static long slaunch_expose_securityfs(void)
+{
+ long ret = 0;
+ int i;
+
+ slaunch_dir = securityfs_create_dir("slaunch", NULL);
+ if (IS_ERR(slaunch_dir))
+ return PTR_ERR(slaunch_dir);
+
+ if (slaunch_get_flags() & SL_FLAG_ARCH_TXT) {
+ txt_dir = securityfs_create_dir("txt", slaunch_dir);
+ if (IS_ERR(txt_dir)) {
+ ret = PTR_ERR(txt_dir);
+ goto remove_slaunch;
+ }
+
+ for (i = 0; i < ARRAY_SIZE(sl_txt_files); i++) {
+ txt_entries[i] = securityfs_create_file(
+ sl_txt_files[i].name, 0440,
+ txt_dir, NULL,
+ sl_txt_files[i].fops);
+ if (IS_ERR(txt_entries[i])) {
+ ret = PTR_ERR(txt_entries[i]);
+ goto remove_files;
+ }
+ }
+ }
+
+ if (sl_evtlog.addr) {
+ event_file = securityfs_create_file(sl_evtlog.name, 0440,
+ slaunch_dir, NULL,
+ &sl_evtlog_ops);
+ if (IS_ERR(event_file)) {
+ ret = PTR_ERR(event_file);
+ goto remove_files;
+ }
+ }
+
+ return 0;
+
+remove_files:
+ if (slaunch_get_flags() & SL_FLAG_ARCH_TXT) {
+ while (--i >= 0)
+ securityfs_remove(txt_entries[i]);
+ securityfs_remove(txt_dir);
+ }
+
+remove_slaunch:
+ securityfs_remove(slaunch_dir);
+
+ return ret;
+}
+
+static void slaunch_teardown_securityfs(void)
+{
+ int i;
+
+ securityfs_remove(event_file);
+ if (sl_evtlog.addr) {
+ memunmap(sl_evtlog.addr);
+ sl_evtlog.addr = NULL;
+ }
+ sl_evtlog.size = 0;
+
+ if (slaunch_get_flags() & SL_FLAG_ARCH_TXT) {
+ for (i = 0; i < ARRAY_SIZE(sl_txt_files); i++)
+ securityfs_remove(txt_entries[i]);
+
+ securityfs_remove(txt_dir);
+
+ if (txt_heap) {
+ memunmap(txt_heap);
+ txt_heap = NULL;
+ }
+ }
+
+ securityfs_remove(slaunch_dir);
+}
+
+static void slaunch_intel_evtlog(void __iomem *txt)
+{
+ struct slr_entry_log_info *log_info;
+ struct txt_os_mle_data *params;
+ struct slr_table *slrt;
+ void *os_sinit_data;
+ u64 base, size;
+
+ memcpy_fromio(&base, txt + TXT_CR_HEAP_BASE, sizeof(base));
+ memcpy_fromio(&size, txt + TXT_CR_HEAP_SIZE, sizeof(size));
+
+ /* now map TXT heap */
+ txt_heap = memremap(base, size, MEMREMAP_WB);
+ if (!txt_heap)
+ slaunch_txt_reset(txt, "Error failed to memremap TXT heap\n",
+ SL_ERROR_HEAP_MAP);
+
+ params = (struct txt_os_mle_data *)txt_os_mle_data_start(txt_heap);
+
+ /* Get the SLRT and remap it */
+ slrt = memremap(params->slrt, sizeof(*slrt), MEMREMAP_WB);
+ if (!slrt)
+ slaunch_txt_reset(txt, "Error failed to memremap SLR Table\n",
+ SL_ERROR_SLRT_MAP);
+ size = slrt->size;
+ memunmap(slrt);
+
+ slrt = memremap(params->slrt, size, MEMREMAP_WB);
+ if (!slrt)
+ slaunch_txt_reset(txt, "Error failed to memremap SLR Table\n",
+ SL_ERROR_SLRT_MAP);
+
+ log_info = (struct slr_entry_log_info *)slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_LOG_INFO);
+ if (!log_info)
+ slaunch_txt_reset(txt, "Error SLRT missing logging info entry\n",
+ SL_ERROR_SLRT_MISSING_ENTRY);
+
+ sl_evtlog.size = log_info->size;
+ sl_evtlog.addr = memremap(log_info->addr, log_info->size,
+ MEMREMAP_WB);
+ if (!sl_evtlog.addr)
+ slaunch_txt_reset(txt, "Error failed to memremap TPM event log\n",
+ SL_ERROR_EVENTLOG_MAP);
+
+ memunmap(slrt);
+
+ /* Determine if this is TPM 1.2 or 2.0 event log */
+ if (memcmp(sl_evtlog.addr + sizeof(struct tcg_pcr_event),
+ TCG_SPECID_SIG, sizeof(TCG_SPECID_SIG)))
+ return; /* looks like it is not 2.0 */
+
+ /* For TPM 2.0 logs, the extended heap element must be located */
+ os_sinit_data = txt_os_sinit_data_start(txt_heap);
+
+ evtlog20 = tpm20_find_log2_1_element(os_sinit_data);
+
+ /*
+ * If this fails, things are in really bad shape. Any attempt to write
+ * events to the log will fail.
+ */
+ if (!evtlog20)
+ slaunch_txt_reset(txt, "Error failed to find TPM20 event log element\n",
+ SL_ERROR_TPM_INVALID_LOG20);
+}
+
+static void slaunch_tpm20_extend_event(struct tpm_chip *tpm, void __iomem *txt,
+ struct tcg_pcr_event2_head *event)
+{
+ u16 *alg_id_field = (u16 *)((u8 *)event + sizeof(struct tcg_pcr_event2_head));
+ struct tpm_digest *digests;
+ u8 *dptr;
+ u32 i, j;
+ int ret;
+
+ digests = kcalloc(tpm->nr_allocated_banks, sizeof(*digests),
+ GFP_KERNEL);
+ if (!digests)
+ slaunch_txt_reset(txt, "Failed to allocate array of digests\n",
+ SL_ERROR_GENERIC);
+
+ for (i = 0; i < tpm->nr_allocated_banks; i++)
+ digests[i].alg_id = tpm->allocated_banks[i].alg_id;
+
+ /* Early SL code ensured there was a max count of 2 digests */
+ for (i = 0; i < event->count; i++) {
+ dptr = (u8 *)alg_id_field + sizeof(u16);
+
+ for (j = 0; j < tpm->nr_allocated_banks; j++) {
+ if (digests[j].alg_id != *alg_id_field)
+ continue;
+
+ switch (digests[j].alg_id) {
+ case TPM_ALG_SHA256:
+ memcpy(&digests[j].digest[0], dptr,
+ SHA256_DIGEST_SIZE);
+ alg_id_field = (u16 *)((u8 *)alg_id_field +
+ SHA256_DIGEST_SIZE + sizeof(u16));
+ break;
+ case TPM_ALG_SHA1:
+ memcpy(&digests[j].digest[0], dptr,
+ SHA1_DIGEST_SIZE);
+ alg_id_field = (u16 *)((u8 *)alg_id_field +
+ SHA1_DIGEST_SIZE + sizeof(u16));
+ default:
+ break;
+ }
+ }
+ }
+
+ ret = tpm_pcr_extend(tpm, event->pcr_idx, digests);
+ if (ret) {
+ pr_err("Error extending TPM20 PCR, result: %d\n", ret);
+ slaunch_txt_reset(txt, "Failed to extend TPM20 PCR\n",
+ SL_ERROR_TPM_EXTEND);
+ }
+
+ kfree(digests);
+}
+
+static void slaunch_tpm20_extend(struct tpm_chip *tpm, void __iomem *txt)
+{
+ struct tcg_pcr_event *event_header;
+ struct tcg_pcr_event2_head *event;
+ int start = 0, end = 0, size;
+
+ event_header = (struct tcg_pcr_event *)(sl_evtlog.addr +
+ evtlog20->first_record_offset);
+
+ /* Skip first TPM 1.2 event to get to first TPM 2.0 event */
+ event = (struct tcg_pcr_event2_head *)((u8 *)event_header +
+ sizeof(struct tcg_pcr_event) +
+ event_header->event_size);
+
+ while ((void *)event < sl_evtlog.addr + evtlog20->next_record_offset) {
+ size = __calc_tpm2_event_size(event, event_header, false);
+ if (!size)
+ slaunch_txt_reset(txt, "TPM20 invalid event in event log\n",
+ SL_ERROR_TPM_INVALID_EVENT);
+
+ /*
+ * Marker events indicate where the Secure Launch early stub
+ * started and ended adding post launch events.
+ */
+ if (event->event_type == TXT_EVTYPE_SLAUNCH_END) {
+ end = 1;
+ break;
+ } else if (event->event_type == TXT_EVTYPE_SLAUNCH_START) {
+ start = 1;
+ goto next;
+ }
+
+ if (start)
+ slaunch_tpm20_extend_event(tpm, txt, event);
+
+next:
+ event = (struct tcg_pcr_event2_head *)((u8 *)event + size);
+ }
+
+ if (!start || !end)
+ slaunch_txt_reset(txt, "Missing start or end events for extending TPM20 PCRs\n",
+ SL_ERROR_TPM_EXTEND);
+}
+
+static void slaunch_tpm12_extend(struct tpm_chip *tpm, void __iomem *txt)
+{
+ struct tpm12_event_log_header *event_header;
+ struct tcg_pcr_event *event;
+ struct tpm_digest digest;
+ int start = 0, end = 0;
+ int size, ret;
+
+ event_header = (struct tpm12_event_log_header *)sl_evtlog.addr;
+ event = (struct tcg_pcr_event *)((u8 *)event_header +
+ sizeof(struct tpm12_event_log_header));
+
+ while ((void *)event < sl_evtlog.addr + event_header->next_event_offset) {
+ size = sizeof(struct tcg_pcr_event) + event->event_size;
+
+ /*
+ * Marker events indicate where the Secure Launch early stub
+ * started and ended adding post launch events.
+ */
+ if (event->event_type == TXT_EVTYPE_SLAUNCH_END) {
+ end = 1;
+ break;
+ } else if (event->event_type == TXT_EVTYPE_SLAUNCH_START) {
+ start = 1;
+ goto next;
+ }
+
+ if (start) {
+ memset(&digest.digest[0], 0, TPM_MAX_DIGEST_SIZE);
+ digest.alg_id = TPM_ALG_SHA1;
+ memcpy(&digest.digest[0], &event->digest[0],
+ SHA1_DIGEST_SIZE);
+
+ ret = tpm_pcr_extend(tpm, event->pcr_idx, &digest);
+ if (ret) {
+ pr_err("Error extending TPM12 PCR, result: %d\n", ret);
+ slaunch_txt_reset(txt, "Failed to extend TPM12 PCR\n",
+ SL_ERROR_TPM_EXTEND);
+ }
+ }
+
+next:
+ event = (struct tcg_pcr_event *)((u8 *)event + size);
+ }
+
+ if (!start || !end)
+ slaunch_txt_reset(txt, "Missing start or end events for extending TPM12 PCRs\n",
+ SL_ERROR_TPM_EXTEND);
+}
+
+static void slaunch_pcr_extend(void __iomem *txt)
+{
+ struct tpm_chip *tpm;
+
+ tpm = tpm_default_chip();
+ if (!tpm)
+ slaunch_txt_reset(txt, "Could not get default TPM chip\n",
+ SL_ERROR_TPM_INIT);
+
+ if (!tpm_preferred_locality(tpm, 2))
+ slaunch_txt_reset(txt, "Could not set TPM chip locality 2\n",
+ SL_ERROR_TPM_INIT);
+
+ if (evtlog20)
+ slaunch_tpm20_extend(tpm, txt);
+ else
+ slaunch_tpm12_extend(tpm, txt);
+
+ tpm_preferred_locality(tpm, 0);
+}
+
+static int __init slaunch_module_init(void)
+{
+ void __iomem *txt;
+
+ /* Check to see if Secure Launch happened */
+ if ((slaunch_get_flags() & (SL_FLAG_ACTIVE|SL_FLAG_ARCH_TXT)) !=
+ (SL_FLAG_ACTIVE | SL_FLAG_ARCH_TXT))
+ return 0;
+
+ txt = ioremap(TXT_PRIV_CONFIG_REGS_BASE, TXT_NR_CONFIG_PAGES *
+ PAGE_SIZE);
+ if (!txt)
+ panic("Error ioremap of TXT priv registers\n");
+
+ /* Only Intel TXT is supported at this point */
+ slaunch_intel_evtlog(txt);
+ slaunch_pcr_extend(txt);
+ iounmap(txt);
+
+ return slaunch_expose_securityfs();
+}
+
+static void __exit slaunch_module_exit(void)
+{
+ slaunch_teardown_securityfs();
+}
+
+late_initcall(slaunch_module_init);
+__exitcall(slaunch_module_exit);
--
2.39.3


2024-02-14 22:34:04

by Ross Philipson

Subject: [PATCH v8 08/15] x86: Secure Launch kernel late boot stub

The routine slaunch_setup_txt() is called out of the x86-specific
setup_arch() routine during early kernel boot. After determining what
platform is present, various operations specific to that platform
occur. This includes finalizing settings for the platform's late
launch and verifying that memory protections are in place.

For TXT, this code also reserves the original compressed kernel setup
area where the APs were left looping so that this memory cannot be used.
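
For context, the flags recorded by this code are what the rest of the
kernel uses to gate TXT-specific behavior. A minimal sketch of the
test pattern used throughout the series (the helper name here is
hypothetical):

  #include <linux/types.h>
  #include <linux/slaunch.h>

  /* True only when a Secure Launch occurred and the platform is TXT. */
  static bool slaunch_txt_active(void)
  {
          return (slaunch_get_flags() &
                  (SL_FLAG_ACTIVE | SL_FLAG_ARCH_TXT)) ==
                 (SL_FLAG_ACTIVE | SL_FLAG_ARCH_TXT);
  }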

Signed-off-by: Ross Philipson <[email protected]>
---
arch/x86/kernel/Makefile | 1 +
arch/x86/kernel/setup.c | 3 +
arch/x86/kernel/slaunch.c | 525 +++++++++++++++++++++++++++++++++++++
drivers/iommu/intel/dmar.c | 4 +
4 files changed, 533 insertions(+)
create mode 100644 arch/x86/kernel/slaunch.c

diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 0000325ab98f..5848ea310175 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -74,6 +74,7 @@ obj-$(CONFIG_X86_32) += tls.o
obj-$(CONFIG_IA32_EMULATION) += tls.o
obj-y += step.o
obj-$(CONFIG_INTEL_TXT) += tboot.o
+obj-$(CONFIG_SECURE_LAUNCH) += slaunch.o
obj-$(CONFIG_ISA_DMA_API) += i8237.o
obj-y += stacktrace.o
obj-y += cpu/
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 84201071dfac..69a2a6cdaa72 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -21,6 +21,7 @@
#include <linux/root_dev.h>
#include <linux/hugetlb.h>
#include <linux/tboot.h>
+#include <linux/slaunch.h>
#include <linux/usb/xhci-dbgp.h>
#include <linux/static_call.h>
#include <linux/swiotlb.h>
@@ -935,6 +936,8 @@ void __init setup_arch(char **cmdline_p)
early_gart_iommu_check();
#endif

+ slaunch_setup_txt();
+
/*
* partially used pages are not usable - thus
* we are rounding upwards:
diff --git a/arch/x86/kernel/slaunch.c b/arch/x86/kernel/slaunch.c
new file mode 100644
index 000000000000..1fae323e8d1b
--- /dev/null
+++ b/arch/x86/kernel/slaunch.c
@@ -0,0 +1,525 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Secure Launch late validation/setup and finalization support.
+ *
+ * Copyright (c) 2022, Oracle and/or its affiliates.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/linkage.h>
+#include <linux/mm.h>
+#include <linux/io.h>
+#include <linux/uaccess.h>
+#include <linux/security.h>
+#include <linux/memblock.h>
+#include <asm/segment.h>
+#include <asm/sections.h>
+#include <asm/tlbflush.h>
+#include <asm/e820/api.h>
+#include <asm/setup.h>
+#include <asm/realmode.h>
+#include <linux/slr_table.h>
+#include <linux/slaunch.h>
+
+static u32 sl_flags __ro_after_init;
+static struct sl_ap_wake_info ap_wake_info __ro_after_init;
+static u64 evtlog_addr __ro_after_init;
+static u32 evtlog_size __ro_after_init;
+static u64 vtd_pmr_lo_size __ro_after_init;
+
+/* This should be plenty of room */
+static u8 txt_dmar[PAGE_SIZE] __aligned(16);
+
+/*
+ * Get the Secure Launch flags that indicate what kind of launch is being done.
+ * E.g. a TXT launch is in progress or no Secure Launch is happening.
+ */
+u32 slaunch_get_flags(void)
+{
+ return sl_flags;
+}
+
+/*
+ * Return the AP wakeup information used in the SMP boot code to start up
+ * the APs that are parked using MONITOR/MWAIT.
+ */
+struct sl_ap_wake_info *slaunch_get_ap_wake_info(void)
+{
+ return &ap_wake_info;
+}
+
+/*
+ * On Intel platforms, TXT passes a safe copy of the DMAR ACPI table to the
+ * DRTM. The DRTM is supposed to use this instead of the one found in the
+ * ACPI tables.
+ */
+struct acpi_table_header *slaunch_get_dmar_table(struct acpi_table_header *dmar)
+{
+ /* The DMAR is only stashed and provided via TXT on Intel systems */
+ if (memcmp(txt_dmar, "DMAR", 4))
+ return dmar;
+
+ return (struct acpi_table_header *)(txt_dmar);
+}
+
+/*
+ * If running within a TXT established DRTM, this is the proper way to reset
+ * the system if a failure occurs or a security issue is found.
+ */
+void __noreturn slaunch_txt_reset(void __iomem *txt,
+ const char *msg, u64 error)
+{
+ u64 one = 1, val;
+
+ pr_err("%s", msg);
+
+ /*
+ * This performs a TXT reset with a sticky error code. The reads of
+ * TXT_CR_E2STS act as barriers.
+ */
+ memcpy_toio(txt + TXT_CR_ERRORCODE, &error, sizeof(error));
+ memcpy_fromio(&val, txt + TXT_CR_E2STS, sizeof(val));
+ memcpy_toio(txt + TXT_CR_CMD_NO_SECRETS, &one, sizeof(one));
+ memcpy_fromio(&val, txt + TXT_CR_E2STS, sizeof(val));
+ memcpy_toio(txt + TXT_CR_CMD_UNLOCK_MEM_CONFIG, &one, sizeof(one));
+ memcpy_fromio(&val, txt + TXT_CR_E2STS, sizeof(val));
+ memcpy_toio(txt + TXT_CR_CMD_RESET, &one, sizeof(one));
+
+ for ( ; ; )
+ asm volatile ("hlt");
+
+ unreachable();
+}
+
+/*
+ * The TXT heap is too big to map all at once with early_ioremap
+ * so it is done a table at a time.
+ */
+static void __init *txt_early_get_heap_table(void __iomem *txt, u32 type,
+ u32 bytes)
+{
+ u64 base, size, offset = 0;
+ void *heap;
+ int i;
+
+ if (type > TXT_SINIT_TABLE_MAX)
+ slaunch_txt_reset(txt, "Error invalid table type for early heap walk\n",
+ SL_ERROR_HEAP_WALK);
+
+ memcpy_fromio(&base, txt + TXT_CR_HEAP_BASE, sizeof(base));
+ memcpy_fromio(&size, txt + TXT_CR_HEAP_SIZE, sizeof(size));
+
+ /* Iterate over heap tables looking for table of "type" */
+ for (i = 0; i < type; i++) {
+ base += offset;
+ heap = early_memremap(base, sizeof(u64));
+ if (!heap)
+ slaunch_txt_reset(txt, "Error early_memremap of heap for heap walk\n",
+ SL_ERROR_HEAP_MAP);
+
+ offset = *((u64 *)heap);
+
+ /*
+ * After the first iteration, any offset of zero is invalid and
+ * implies the TXT heap is corrupted.
+ */
+ if (!offset)
+ slaunch_txt_reset(txt, "Error invalid 0 offset in heap walk\n",
+ SL_ERROR_HEAP_ZERO_OFFSET);
+
+ early_memunmap(heap, sizeof(u64));
+ }
+
+ /* Skip the size field at the head of each table */
+ base += sizeof(u64);
+ heap = early_memremap(base, bytes);
+ if (!heap)
+ slaunch_txt_reset(txt, "Error early_memremap of heap section\n",
+ SL_ERROR_HEAP_MAP);
+
+ return heap;
+}
+
+static void __init txt_early_put_heap_table(void *addr, unsigned long size)
+{
+ early_memunmap(addr, size);
+}
+
+/*
+ * TXT uses a special set of VTd registers to protect all of memory from DMA
+ * until the IOMMU can be programmed to protect memory. There is a low
+ * memory PMR that can protect all memory up to 4G. The high memory PMR
+ * can be set up to protect all memory beyond 4G. Validate that these
+ * values cover what is expected.
+ */
+static void __init slaunch_verify_pmrs(void __iomem *txt)
+{
+ struct txt_os_sinit_data *os_sinit_data;
+ u32 field_offset, err = 0;
+ const char *errmsg = "";
+ unsigned long last_pfn;
+
+ field_offset = offsetof(struct txt_os_sinit_data, lcp_po_base);
+ os_sinit_data = txt_early_get_heap_table(txt, TXT_OS_SINIT_DATA_TABLE,
+ field_offset);
+
+ /* Save a copy */
+ vtd_pmr_lo_size = os_sinit_data->vtd_pmr_lo_size;
+
+ last_pfn = e820__end_of_ram_pfn();
+
+ /*
+ * First make sure the hi PMR covers all memory above 4G. In the
+ * unlikely case where there is < 4G on the system, the hi PMR will
+ * not be set.
+ */
+ if (os_sinit_data->vtd_pmr_hi_base != 0x0ULL) {
+ if (os_sinit_data->vtd_pmr_hi_base != 0x100000000ULL) {
+ err = SL_ERROR_HI_PMR_BASE;
+ errmsg = "Error hi PMR base\n";
+ goto out;
+ }
+
+ if (PFN_PHYS(last_pfn) > os_sinit_data->vtd_pmr_hi_base +
+ os_sinit_data->vtd_pmr_hi_size) {
+ err = SL_ERROR_HI_PMR_SIZE;
+ errmsg = "Error hi PMR size\n";
+ goto out;
+ }
+ }
+
+ /*
+ * Lo PMR base should always be 0. This was already checked in
+ * early stub.
+ */
+
+ /*
+ * Check that if the kernel was loaded below 4G, that it is protected
+ * by the lo PMR. Note this is the decompressed kernel. The ACM would
+ * have ensured the compressed kernel (the MLE image) was protected.
+ */
+ if (__pa_symbol(_end) < 0x100000000ULL && __pa_symbol(_end) > os_sinit_data->vtd_pmr_lo_size) {
+ err = SL_ERROR_LO_PMR_MLE;
+ errmsg = "Error lo PMR does not cover MLE kernel\n";
+ }
+
+ /*
+ * Other regions of interest like boot param, AP wake block, cmdline
+ * already checked for PMR coverage in the early stub code.
+ */
+
+out:
+ txt_early_put_heap_table(os_sinit_data, field_offset);
+
+ if (err)
+ slaunch_txt_reset(txt, errmsg, err);
+}
+
+static void __init slaunch_txt_reserve_range(u64 base, u64 size)
+{
+ int type;
+
+ type = e820__get_entry_type(base, base + size - 1);
+ if (type == E820_TYPE_RAM) {
+ pr_info("memblock reserve base: %llx size: %llx\n", base, size);
+ memblock_reserve(base, size);
+ }
+}
+
+/*
+ * For Intel, certain regions of memory must be marked as reserved by putting
+ * them on the memblock reserved list if they are not already e820 reserved.
+ * This includes:
+ * - The TXT HEAP
+ * - The ACM area
+ * - The TXT private register bank
+ * - The MDR list sent to the MLE by the ACM (see TXT specification)
+ * (Normally the above are properly reserved by firmware but if it was not
+ * done, reserve them now)
+ * - The AP wake block
+ * - TPM log external to the TXT heap
+ *
+ * Also if the low PMR doesn't cover all memory < 4G, any RAM regions above
+ * the low PMR must be reserved too.
+ */
+static void __init slaunch_txt_reserve(void __iomem *txt)
+{
+ struct txt_sinit_memory_descriptor_record *mdr;
+ struct txt_sinit_mle_data *sinit_mle_data;
+ u64 base, size, heap_base, heap_size;
+ u32 mdrnum, mdroffset, mdrslen;
+ u32 field_offset, i;
+ void *mdrs;
+
+ base = TXT_PRIV_CONFIG_REGS_BASE;
+ size = TXT_PUB_CONFIG_REGS_BASE - TXT_PRIV_CONFIG_REGS_BASE;
+ slaunch_txt_reserve_range(base, size);
+
+ memcpy_fromio(&heap_base, txt + TXT_CR_HEAP_BASE, sizeof(heap_base));
+ memcpy_fromio(&heap_size, txt + TXT_CR_HEAP_SIZE, sizeof(heap_size));
+ slaunch_txt_reserve_range(heap_base, heap_size);
+
+ memcpy_fromio(&base, txt + TXT_CR_SINIT_BASE, sizeof(base));
+ memcpy_fromio(&size, txt + TXT_CR_SINIT_SIZE, sizeof(size));
+ slaunch_txt_reserve_range(base, size);
+
+ field_offset = offsetof(struct txt_sinit_mle_data,
+ sinit_vtd_dmar_table_size);
+ sinit_mle_data = txt_early_get_heap_table(txt, TXT_SINIT_MLE_DATA_TABLE,
+ field_offset);
+
+ mdrnum = sinit_mle_data->num_of_sinit_mdrs;
+ mdroffset = sinit_mle_data->sinit_mdrs_table_offset;
+
+ txt_early_put_heap_table(sinit_mle_data, field_offset);
+
+ if (!mdrnum)
+ goto nomdr;
+
+ mdrslen = mdrnum * sizeof(struct txt_sinit_memory_descriptor_record);
+
+ mdrs = txt_early_get_heap_table(txt, TXT_SINIT_MLE_DATA_TABLE,
+ mdroffset + mdrslen - 8);
+
+ mdr = mdrs + mdroffset - 8;
+
+ for (i = 0; i < mdrnum; i++, mdr++) {
+ /* Spec says some entries can have length 0, ignore them */
+ if (mdr->type > 0 && mdr->length > 0)
+ slaunch_txt_reserve_range(mdr->address, mdr->length);
+ }
+
+ txt_early_put_heap_table(mdrs, mdroffset + mdrslen - 8);
+
+nomdr:
+ slaunch_txt_reserve_range(ap_wake_info.ap_wake_block,
+ ap_wake_info.ap_wake_block_size);
+
+ /*
+ * Earlier checks ensured that the event log was properly situated
+ * either inside the TXT heap or outside. This is a check to see if the
+ * event log needs to be reserved. If it is in the TXT heap, it is
+ * already reserved.
+ */
+ if (evtlog_addr < heap_base || evtlog_addr > (heap_base + heap_size))
+ slaunch_txt_reserve_range(evtlog_addr, evtlog_size);
+
+ for (i = 0; i < e820_table->nr_entries; i++) {
+ base = e820_table->entries[i].addr;
+ size = e820_table->entries[i].size;
+ if (base >= vtd_pmr_lo_size && base < 0x100000000ULL)
+ slaunch_txt_reserve_range(base, size);
+ else if (base < vtd_pmr_lo_size && base + size > vtd_pmr_lo_size)
+ slaunch_txt_reserve_range(vtd_pmr_lo_size,
+ base + size - vtd_pmr_lo_size);
+ }
+}
+
+/*
+ * TXT stashes a safe copy of the DMAR ACPI table to prevent tampering.
+ * It is stored in the TXT heap. Fetch it from there and make it available
+ * to the IOMMU driver.
+ */
+static void __init slaunch_copy_dmar_table(void __iomem *txt)
+{
+ struct txt_sinit_mle_data *sinit_mle_data;
+ u32 field_offset, dmar_size, dmar_offset;
+ void *dmar;
+
+ field_offset = offsetof(struct txt_sinit_mle_data,
+ processor_scrtm_status);
+ sinit_mle_data = txt_early_get_heap_table(txt, TXT_SINIT_MLE_DATA_TABLE,
+ field_offset);
+
+ dmar_size = sinit_mle_data->sinit_vtd_dmar_table_size;
+ dmar_offset = sinit_mle_data->sinit_vtd_dmar_table_offset;
+
+ txt_early_put_heap_table(sinit_mle_data, field_offset);
+
+ if (!dmar_size || !dmar_offset)
+ slaunch_txt_reset(txt, "Error invalid DMAR table values\n",
+ SL_ERROR_HEAP_INVALID_DMAR);
+
+ if (unlikely(dmar_size > PAGE_SIZE))
+ slaunch_txt_reset(txt, "Error DMAR too big to store\n",
+ SL_ERROR_HEAP_DMAR_SIZE);
+
+ dmar = txt_early_get_heap_table(txt, TXT_SINIT_MLE_DATA_TABLE,
+ dmar_offset + dmar_size - 8);
+ if (!dmar)
+ slaunch_txt_reset(txt, "Error early_ioremap of DMAR\n",
+ SL_ERROR_HEAP_DMAR_MAP);
+
+ memcpy(txt_dmar, dmar + dmar_offset - 8, dmar_size);
+
+ txt_early_put_heap_table(dmar, dmar_offset + dmar_size - 8);
+}
+
+/*
+ * The location of the safe AP wake code block is stored in the TXT heap.
+ * Fetch needed values here in the early init code for later use in SMP
+ * startup.
+ *
+ * Also fetch the TPM event log values, which are stored in the SLRT.
+ * They will be put on the memblock reserve list later.
+ */
+static void __init slaunch_fetch_values(void __iomem *txt)
+{
+ struct txt_os_mle_data *os_mle_data;
+ struct slr_entry_log_info *log_info;
+ u8 *jmp_offset, *stacks_offset;
+ struct slr_table *slrt;
+ u32 size;
+
+ os_mle_data = txt_early_get_heap_table(txt, TXT_OS_MLE_DATA_TABLE,
+ sizeof(*os_mle_data));
+
+ ap_wake_info.ap_wake_block = os_mle_data->ap_wake_block;
+ ap_wake_info.ap_wake_block_size = os_mle_data->ap_wake_block_size;
+
+ jmp_offset = os_mle_data->mle_scratch + SL_SCRATCH_AP_JMP_OFFSET;
+ ap_wake_info.ap_jmp_offset = *((u32 *)jmp_offset);
+
+ stacks_offset = os_mle_data->mle_scratch + SL_SCRATCH_AP_STACKS_OFFSET;
+ ap_wake_info.ap_stacks_offset = *((u32 *)stacks_offset);
+
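+	/* Map just the SLRT header to learn its full size, then remap it whole */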
+ slrt = (struct slr_table *)early_memremap(os_mle_data->slrt, sizeof(*slrt));
+ if (!slrt)
+ slaunch_txt_reset(txt, "Error early_memremap of SLRT failed\n",
+ SL_ERROR_SLRT_MAP);
+
+ size = slrt->size;
+ early_memunmap(slrt, sizeof(*slrt));
+
+ slrt = (struct slr_table *)early_memremap(os_mle_data->slrt, size);
+ if (!slrt)
+ slaunch_txt_reset(txt, "Error early_memremap of SLRT failed\n",
+ SL_ERROR_SLRT_MAP);
+
+ log_info = (struct slr_entry_log_info *)slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_LOG_INFO);
+
+ if (!log_info)
+ slaunch_txt_reset(txt, "SLRT missing logging info entry\n",
+ SL_ERROR_SLRT_MISSING_ENTRY);
+
+ evtlog_addr = log_info->addr;
+ evtlog_size = log_info->size;
+
+ early_memunmap(slrt, size);
+
+ txt_early_put_heap_table(os_mle_data, sizeof(*os_mle_data));
+}
+
+/*
+ * Called to fix the long jump address for the waiting APs to vector to
+ * the correct startup location in the Secure Launch stub in the rmpiggy.
+ */
+void __init slaunch_fixup_jump_vector(void)
+{
+ struct sl_ap_wake_info *ap_wake_info;
+ u32 *ap_jmp_ptr;
+
+ if ((slaunch_get_flags() & (SL_FLAG_ACTIVE|SL_FLAG_ARCH_TXT)) !=
+ (SL_FLAG_ACTIVE | SL_FLAG_ARCH_TXT))
+ return;
+
+ ap_wake_info = slaunch_get_ap_wake_info();
+
+ ap_jmp_ptr = (u32 *)__va(ap_wake_info->ap_wake_block +
+ ap_wake_info->ap_jmp_offset);
+
+ *ap_jmp_ptr = real_mode_header->sl_trampoline_start32;
+
+ pr_info("TXT AP startup vector address updated\n");
+}
+
+/*
+ * Intel TXT specific late stub setup and validation called from within
+ * x86 specific setup_arch().
+ */
+void __init slaunch_setup_txt(void)
+{
+ u64 one = TXT_REGVALUE_ONE, val;
+ void __iomem *txt;
+
+ if (!boot_cpu_has(X86_FEATURE_SMX))
+ return;
+
+	/*
+	 * If booted through the Secure Launch entry point, the SLAUNCH
+	 * flag in loadflags will be set.
+	 */
+ if (!(boot_params.hdr.loadflags & SLAUNCH_FLAG))
+ return;
+
+ /*
+ * See if SENTER was done by reading the status register in the
+ * public space. If the public register space cannot be read, TXT may
+ * be disabled.
+ */
+ txt = early_ioremap(TXT_PUB_CONFIG_REGS_BASE,
+ TXT_NR_CONFIG_PAGES * PAGE_SIZE);
+ if (!txt)
+ panic("Error early_ioremap in TXT setup failed\n");
+
+ memcpy_fromio(&val, txt + TXT_CR_STS, sizeof(val));
+ early_iounmap(txt, TXT_NR_CONFIG_PAGES * PAGE_SIZE);
+
+ /* SENTER should have been done */
+ if (!(val & TXT_SENTER_DONE_STS))
+ panic("Error TXT.STS SENTER_DONE not set\n");
+
+ /* SEXIT should have been cleared */
+ if (val & TXT_SEXIT_DONE_STS)
+ panic("Error TXT.STS SEXIT_DONE set\n");
+
+ /* Now we want to use the private register space */
+ txt = early_ioremap(TXT_PRIV_CONFIG_REGS_BASE,
+ TXT_NR_CONFIG_PAGES * PAGE_SIZE);
+ if (!txt) {
+		/* This is really bad, nowhere to go from here */
+ panic("Error early_ioremap of TXT priv registers\n");
+ }
+
+ /*
+ * Try to read the Intel VID from the TXT private registers to see if
+ * TXT measured launch happened properly and the private space is
+ * available.
+ */
+ memcpy_fromio(&val, txt + TXT_CR_DIDVID, sizeof(val));
+ if ((val & 0xffff) != 0x8086) {
+ /*
+ * Can't do a proper TXT reset since it appears something is
+ * wrong even though SENTER happened and it should be in SMX
+ * mode.
+ */
+ panic("Invalid TXT vendor ID, not in SMX mode\n");
+ }
+
+ /* Set flags so subsequent code knows the status of the launch */
+ sl_flags |= (SL_FLAG_ACTIVE | SL_FLAG_ARCH_TXT);
+
+ /*
+ * Reading the proper DIDVID from the private register space means we
+ * are in SMX mode and private registers are open for read/write.
+ */
+
+ /* On Intel, have to handle TPM localities via TXT */
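+	/* Each command write is followed by an E2STS read that acts as a fence */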
+ memcpy_toio(txt + TXT_CR_CMD_SECRETS, &one, sizeof(one));
+ memcpy_fromio(&val, txt + TXT_CR_E2STS, sizeof(val));
+ memcpy_toio(txt + TXT_CR_CMD_OPEN_LOCALITY1, &one, sizeof(one));
+ memcpy_fromio(&val, txt + TXT_CR_E2STS, sizeof(val));
+
+ slaunch_fetch_values(txt);
+
+ slaunch_verify_pmrs(txt);
+
+ slaunch_txt_reserve(txt);
+
+ slaunch_copy_dmar_table(txt);
+
+ early_iounmap(txt, TXT_NR_CONFIG_PAGES * PAGE_SIZE);
+
+ pr_info("Intel TXT setup complete\n");
+}
diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
index 23cb80d62a9a..1eb0746b0c59 100644
--- a/drivers/iommu/intel/dmar.c
+++ b/drivers/iommu/intel/dmar.c
@@ -28,6 +28,7 @@
#include <linux/iommu.h>
#include <linux/numa.h>
#include <linux/limits.h>
+#include <linux/slaunch.h>
#include <asm/irq_remapping.h>

#include "iommu.h"
@@ -660,6 +661,9 @@ parse_dmar_table(void)
*/
dmar_tbl = tboot_get_dmar_table(dmar_tbl);

+ /* If Secure Launch is active, it has similar logic */
+ dmar_tbl = slaunch_get_dmar_table(dmar_tbl);
+
dmar = (struct acpi_table_dmar *)dmar_tbl;
if (!dmar)
return -ENODEV;
--
2.39.3


2024-02-14 22:34:14

by Ross Philipson

[permalink] [raw]
Subject: [PATCH v8 15/15] x86: EFI stub DRTM launch support for Secure Launch

This support allows the DRTM launch to be initiated after an EFI stub
launch of the Linux kernel is done. This is accomplished by providing a
handler to jump to when a Secure Launch is in progress; it must be
invoked after the EFI stub has called Exit Boot Services.
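
On a successful launch the handler never returns; it is entered with a
pointer to the bl_context structure as its first argument (the inline
asm below passes it in %rdi via the "D" constraint).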

Signed-off-by: Ross Philipson <[email protected]>
---
drivers/firmware/efi/libstub/x86-stub.c | 55 +++++++++++++++++++++++++
1 file changed, 55 insertions(+)

diff --git a/drivers/firmware/efi/libstub/x86-stub.c b/drivers/firmware/efi/libstub/x86-stub.c
index 0d510c9a06a4..4df2cf539194 100644
--- a/drivers/firmware/efi/libstub/x86-stub.c
+++ b/drivers/firmware/efi/libstub/x86-stub.c
@@ -9,6 +9,7 @@
#include <linux/efi.h>
#include <linux/pci.h>
#include <linux/stddef.h>
+#include <linux/slr_table.h>

#include <asm/efi.h>
#include <asm/e820/types.h>
@@ -810,6 +811,57 @@ static efi_status_t efi_decompress_kernel(unsigned long *kernel_entry)
return EFI_SUCCESS;
}

+static void efi_secure_launch(struct boot_params *boot_params)
+{
+ struct slr_entry_uefi_config *uefi_config;
+ struct slr_uefi_cfg_entry *uefi_entry;
+ struct slr_entry_dl_info *dlinfo;
+ efi_guid_t guid = SLR_TABLE_GUID;
+ struct slr_table *slrt;
+ u64 memmap_hi;
+ void *table;
+ u8 buf[64] = {0};
+
+ table = get_efi_config_table(guid);
+
+ /*
+	 * The presence of this table indicates that a Secure Launch
+	 * is being requested.
+ */
+ if (!table)
+ return;
+
+ slrt = (struct slr_table *)table;
+
+ if (slrt->magic != SLR_TABLE_MAGIC)
+ return;
+
+ /* Add config information to measure the UEFI memory map */
+ uefi_config = (struct slr_entry_uefi_config *)buf;
+ uefi_config->hdr.tag = SLR_ENTRY_UEFI_CONFIG;
+ uefi_config->hdr.size = sizeof(*uefi_config) + sizeof(*uefi_entry);
+ uefi_config->revision = SLR_UEFI_CONFIG_REVISION;
+ uefi_config->nr_entries = 1;
+ uefi_entry = (struct slr_uefi_cfg_entry *)(buf + sizeof(*uefi_config));
+ uefi_entry->pcr = 18;
+ uefi_entry->cfg = boot_params->efi_info.efi_memmap;
+ memmap_hi = boot_params->efi_info.efi_memmap_hi;
+ uefi_entry->cfg |= memmap_hi << 32;
+ uefi_entry->size = boot_params->efi_info.efi_memmap_size;
+ memcpy(&uefi_entry->evt_info[0], "Measured UEFI memory map",
+ strlen("Measured UEFI memory map"));
+
+ if (slr_add_entry(slrt, (struct slr_entry_hdr *)uefi_config))
+ return;
+
+ /* Jump through DL stub to initiate Secure Launch */
+ dlinfo = (struct slr_entry_dl_info *)
+ slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_DL_INFO);
+
+ asm volatile ("jmp *%%rax"
+ : : "a" (dlinfo->dl_handler), "D" (&dlinfo->bl_context));
+}
+
static void __noreturn enter_kernel(unsigned long kernel_addr,
struct boot_params *boot_params)
{
@@ -934,6 +986,9 @@ void __noreturn efi_stub_entry(efi_handle_t handle,
goto fail;
}

+ /* If a Secure Launch is in progress, this never returns */
+ efi_secure_launch(boot_params);
+
/*
* Call the SEV init code while still running with the firmware's
* GDT/IDT, so #VC exceptions will be handled by EFI.
--
2.39.3


2024-02-14 22:38:52

by Ross Philipson

[permalink] [raw]
Subject: [PATCH v8 09/15] x86: Secure Launch SMP bringup support

On Intel, the APs are left in a well-documented state after TXT performs
the late launch. Specifically, they cannot have #INIT asserted on them,
so a standard startup via INIT/SIPI/SIPI cannot be performed. Instead,
the early SL stub code uses MONITOR and MWAIT to park the APs. The
realmode/init.c code updates the jump address for the waiting APs with
the location of the Secure Launch entry point in the RM piggy after it
is loaded and fixed up. When an AP is woken by a write to its monitor,
it jumps to the Secure Launch entry point in the RM piggy, which mimics
what the real mode entry code would do and then jumps to the standard
RM piggy protected mode entry point.
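
For reference, a minimal C sketch of the parking idea (the real parking
code is assembly in the early Secure Launch stub and is not part of this
patch; the helper name, the flag-write wake and the use of the kernel's
__monitor()/__mwait() helpers are illustrative assumptions):

/* Illustrative sketch only -- not the actual stub code */
static void sl_park_ap(struct sl_ap_stack_and_monitor *slot)
{
	while (!READ_ONCE(slot->monitor)) {
		/* Arm the monitor on the wake flag's cache line */
		__monitor(&slot->monitor, 0, 0);
		if (READ_ONCE(slot->monitor))
			break;
		/* Sleep until the monitored line is written */
		__mwait(0, 0);
	}
	/* On wake, jump to the fixed-up trampoline (sl_trampoline_start32) */
}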

Signed-off-by: Ross Philipson <[email protected]>
---
arch/x86/include/asm/realmode.h | 3 ++
arch/x86/kernel/smpboot.c | 58 +++++++++++++++++++++++++++-
arch/x86/realmode/init.c | 3 ++
arch/x86/realmode/rm/header.S | 3 ++
arch/x86/realmode/rm/trampoline_64.S | 32 +++++++++++++++
5 files changed, 97 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/realmode.h b/arch/x86/include/asm/realmode.h
index 87e5482acd0d..339b48e2543d 100644
--- a/arch/x86/include/asm/realmode.h
+++ b/arch/x86/include/asm/realmode.h
@@ -38,6 +38,9 @@ struct real_mode_header {
#ifdef CONFIG_X86_64
u32 machine_real_restart_seg;
#endif
+#ifdef CONFIG_SECURE_LAUNCH
+ u32 sl_trampoline_start32;
+#endif
};

/* This must match data at realmode/rm/trampoline_{32,64}.S */
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 3f57ce68a3f1..74511be5b779 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -60,6 +60,7 @@
#include <linux/stackprotector.h>
#include <linux/cpuhotplug.h>
#include <linux/mc146818rtc.h>
+#include <linux/slaunch.h>

#include <asm/acpi.h>
#include <asm/cacheinfo.h>
@@ -987,6 +988,56 @@ int common_cpu_up(unsigned int cpu, struct task_struct *idle)
return 0;
}

+#ifdef CONFIG_SECURE_LAUNCH
+
+static bool slaunch_is_txt_launch(void)
+{
+ if ((slaunch_get_flags() & (SL_FLAG_ACTIVE|SL_FLAG_ARCH_TXT)) ==
+ (SL_FLAG_ACTIVE | SL_FLAG_ARCH_TXT))
+ return true;
+
+ return false;
+}
+
+/*
+ * TXT AP startup is quite different than normal. The APs cannot have #INIT
+ * asserted on them or receive SIPIs. The early Secure Launch code has parked
+ * the APs using monitor/mwait. This will wake the APs by writing the monitor
+ * and have them jump to the protected mode code in the rmpiggy where the rest
+ * of the SMP boot of the AP will proceed normally.
+ */
+static void slaunch_wakeup_cpu_from_txt(int cpu, int apicid)
+{
+ struct sl_ap_wake_info *ap_wake_info;
+ struct sl_ap_stack_and_monitor *stack_monitor = NULL;
+
+ ap_wake_info = slaunch_get_ap_wake_info();
+
+ stack_monitor = (struct sl_ap_stack_and_monitor *)__va(ap_wake_info->ap_wake_block +
+ ap_wake_info->ap_stacks_offset);
+
+	for (int i = TXT_MAX_CPUS - 1; i >= 0; i--) {
+ if (stack_monitor[i].apicid == apicid) {
+ /* Write the monitor */
+ stack_monitor[i].monitor = 1;
+ break;
+ }
+ }
+}
+
+#else
+
+static inline bool slaunch_is_txt_launch(void)
+{
+ return false;
+}
+
+static inline void slaunch_wakeup_cpu_from_txt(int cpu, int apicid)
+{
+}
+
+#endif /* !CONFIG_SECURE_LAUNCH */
+
/*
* NOTE - on most systems this is a PHYSICAL apic ID, but on multiquad
* (ie clustered apic addressing mode), this is a LOGICAL apic ID.
@@ -996,7 +1047,7 @@ int common_cpu_up(unsigned int cpu, struct task_struct *idle)
static int do_boot_cpu(u32 apicid, int cpu, struct task_struct *idle)
{
unsigned long start_ip = real_mode_header->trampoline_start;
- int ret;
+ int ret = 0;

#ifdef CONFIG_X86_64
/* If 64-bit wakeup method exists, use the 64-bit mode trampoline IP */
@@ -1041,12 +1092,15 @@ static int do_boot_cpu(u32 apicid, int cpu, struct task_struct *idle)

/*
* Wake up a CPU in difference cases:
+ * - Intel TXT DRTM launch uses its own method to wake the APs
* - Use a method from the APIC driver if one defined, with wakeup
* straight to 64-bit mode preferred over wakeup to RM.
* Otherwise,
* - Use an INIT boot APIC message
*/
- if (apic->wakeup_secondary_cpu_64)
+ if (slaunch_is_txt_launch())
+ slaunch_wakeup_cpu_from_txt(cpu, apicid);
+ else if (apic->wakeup_secondary_cpu_64)
ret = apic->wakeup_secondary_cpu_64(apicid, start_ip);
else if (apic->wakeup_secondary_cpu)
ret = apic->wakeup_secondary_cpu(apicid, start_ip);
diff --git a/arch/x86/realmode/init.c b/arch/x86/realmode/init.c
index f9bc444a3064..d95776cb30d3 100644
--- a/arch/x86/realmode/init.c
+++ b/arch/x86/realmode/init.c
@@ -4,6 +4,7 @@
#include <linux/memblock.h>
#include <linux/cc_platform.h>
#include <linux/pgtable.h>
+#include <linux/slaunch.h>

#include <asm/set_memory.h>
#include <asm/realmode.h>
@@ -210,6 +211,8 @@ void __init init_real_mode(void)

setup_real_mode();
set_real_mode_permissions();
+
+ slaunch_fixup_jump_vector();
}

static int __init do_init_real_mode(void)
diff --git a/arch/x86/realmode/rm/header.S b/arch/x86/realmode/rm/header.S
index 2eb62be6d256..3b5cbcbbfc90 100644
--- a/arch/x86/realmode/rm/header.S
+++ b/arch/x86/realmode/rm/header.S
@@ -37,6 +37,9 @@ SYM_DATA_START(real_mode_header)
#ifdef CONFIG_X86_64
.long __KERNEL32_CS
#endif
+#ifdef CONFIG_SECURE_LAUNCH
+ .long pa_sl_trampoline_start32
+#endif
SYM_DATA_END(real_mode_header)

/* End signature, used to verify integrity */
diff --git a/arch/x86/realmode/rm/trampoline_64.S b/arch/x86/realmode/rm/trampoline_64.S
index c9f76fae902e..526d449d5383 100644
--- a/arch/x86/realmode/rm/trampoline_64.S
+++ b/arch/x86/realmode/rm/trampoline_64.S
@@ -120,6 +120,38 @@ SYM_CODE_END(sev_es_trampoline_start)

.section ".text32","ax"
.code32
+#ifdef CONFIG_SECURE_LAUNCH
+ .balign 4
+SYM_CODE_START(sl_trampoline_start32)
+ /*
+ * The early secure launch stub AP wakeup code has taken care of all
+ * the vagaries of launching out of TXT. This bit just mimics what the
+ * 16b entry code does and jumps off to the real startup_32.
+ */
+ cli
+ wbinvd
+
+ /*
+ * The %ebx provided is not terribly useful since it is the physical
+ * address of tb_trampoline_start and not the base of the image.
+ * Use pa_real_mode_base, which is fixed up, to get a run time
+ * base register to use for offsets to location that do not have
+ * pa_ symbols.
+ */
+ movl $pa_real_mode_base, %ebx
+
+ LOCK_AND_LOAD_REALMODE_ESP lock_pa=1
+
+ lgdt tr_gdt(%ebx)
+ lidt tr_idt(%ebx)
+
+ movw $__KERNEL_DS, %dx # Data segment descriptor
+
+ /* Jump to where the 16b code would have jumped */
+ ljmpl $__KERNEL32_CS, $pa_startup_32
+SYM_CODE_END(sl_trampoline_start32)
+#endif
+
.balign 4
SYM_CODE_START(startup_32)
movl %edx, %ss
--
2.39.3


2024-02-14 22:38:54

by Ross Philipson

[permalink] [raw]
Subject: [PATCH v8 10/15] kexec: Secure Launch kexec SEXIT support

Prior to running the next kernel via kexec, the Secure Launch code
closes down the private SMX resources and does an SEXIT. This allows the
next kernel to start normally without any issues, e.g. when starting
the APs.
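
The teardown below follows the order TXT requires: set the NO-SECRETS
command, unlock the memory configuration, close the private register
space, and only then issue GETSEC[SEXIT] on CPU 0.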

Signed-off-by: Ross Philipson <[email protected]>
---
arch/x86/kernel/slaunch.c | 73 +++++++++++++++++++++++++++++++++++++++
kernel/kexec_core.c | 4 +++
2 files changed, 77 insertions(+)

diff --git a/arch/x86/kernel/slaunch.c b/arch/x86/kernel/slaunch.c
index 1fae323e8d1b..429e6d39e73b 100644
--- a/arch/x86/kernel/slaunch.c
+++ b/arch/x86/kernel/slaunch.c
@@ -523,3 +523,76 @@ void __init slaunch_setup_txt(void)

pr_info("Intel TXT setup complete\n");
}
+
+static inline void smx_getsec_sexit(void)
+{
+ asm volatile ("getsec\n"
+ : : "a" (SMX_X86_GETSEC_SEXIT));
+}
+
+/*
+ * Used during kexec and on reboot paths to finalize the TXT state
+ * and do an SEXIT exiting the DRTM and disabling SMX mode.
+ */
+void slaunch_finalize(int do_sexit)
+{
+ u64 one = TXT_REGVALUE_ONE, val;
+ void __iomem *config;
+
+ if ((slaunch_get_flags() & (SL_FLAG_ACTIVE | SL_FLAG_ARCH_TXT)) !=
+ (SL_FLAG_ACTIVE | SL_FLAG_ARCH_TXT))
+ return;
+
+ config = ioremap(TXT_PRIV_CONFIG_REGS_BASE, TXT_NR_CONFIG_PAGES *
+ PAGE_SIZE);
+ if (!config) {
+ pr_emerg("Error SEXIT failed to ioremap TXT private reqs\n");
+ return;
+ }
+
+ /* Clear secrets bit for SEXIT */
+ memcpy_toio(config + TXT_CR_CMD_NO_SECRETS, &one, sizeof(one));
+ memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(val));
+
+ /* Unlock memory configurations */
+ memcpy_toio(config + TXT_CR_CMD_UNLOCK_MEM_CONFIG, &one, sizeof(one));
+ memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(val));
+
+ /* Close the TXT private register space */
+ memcpy_toio(config + TXT_CR_CMD_CLOSE_PRIVATE, &one, sizeof(one));
+ memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(val));
+
+ /*
+ * Calls to iounmap are not being done because of the state of the
+ * system this late in the kexec process. Local IRQs are disabled and
+ * iounmap causes a TLB flush which in turn causes a warning. Leaving
+	 * these mappings is not an issue since the next kernel is going to
+ * completely re-setup memory management.
+ */
+
+ /* Map public registers and do a final read fence */
+ config = ioremap(TXT_PUB_CONFIG_REGS_BASE, TXT_NR_CONFIG_PAGES *
+ PAGE_SIZE);
+ if (!config) {
+ pr_emerg("Error SEXIT failed to ioremap TXT public reqs\n");
+ return;
+ }
+
+ memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(val));
+
+ pr_emerg("TXT clear secrets bit and unlock memory complete.\n");
+
+ if (!do_sexit)
+ return;
+
+ if (smp_processor_id() != 0)
+ panic("Error TXT SEXIT must be called on CPU 0\n");
+
+ /* In case SMX mode was disabled, enable it for SEXIT */
+ cr4_set_bits(X86_CR4_SMXE);
+
+ /* Do the SEXIT SMX operation */
+ smx_getsec_sexit();
+
+ pr_info("TXT SEXIT complete.\n");
+}
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index d08fc7b5db97..8036a731b1bb 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -40,6 +40,7 @@
#include <linux/hugetlb.h>
#include <linux/objtool.h>
#include <linux/kmsg_dump.h>
+#include <linux/slaunch.h>

#include <asm/page.h>
#include <asm/sections.h>
@@ -1268,6 +1269,9 @@ int kernel_kexec(void)
cpu_hotplug_enable();
pr_notice("Starting new kernel\n");
machine_shutdown();
+
+ /* Finalize TXT registers and do SEXIT */
+ slaunch_finalize(1);
}

kmsg_dump(KMSG_DUMP_SHUTDOWN);
--
2.39.3


2024-02-14 22:40:05

by Ross Philipson

[permalink] [raw]
Subject: [PATCH v8 13/15] tpm: Add sysfs interface to allow setting and querying the preferred locality

Expose a sysfs interface to allow user mode to set and query the preferred
locality for the TPM chip.
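
For example, assuming the chip registers as tpm0 (the path is
illustrative), the preferred locality could be queried with
"cat /sys/class/tpm/tpm0/preferred_locality" and set to 2 with
"echo 2 > /sys/class/tpm/tpm0/preferred_locality".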

Signed-off-by: Ross Philipson <[email protected]>
---
drivers/char/tpm/tpm-sysfs.c | 30 ++++++++++++++++++++++++++++++
1 file changed, 30 insertions(+)

diff --git a/drivers/char/tpm/tpm-sysfs.c b/drivers/char/tpm/tpm-sysfs.c
index 54c71473aa29..c43630d0996e 100644
--- a/drivers/char/tpm/tpm-sysfs.c
+++ b/drivers/char/tpm/tpm-sysfs.c
@@ -309,6 +309,34 @@ static ssize_t tpm_version_major_show(struct device *dev,
}
static DEVICE_ATTR_RO(tpm_version_major);

+static ssize_t preferred_locality_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct tpm_chip *chip = to_tpm_chip(dev);
+
+	return sysfs_emit(buf, "%d\n", chip->pref_locality);
+}
+
+static ssize_t preferred_locality_store(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct tpm_chip *chip = to_tpm_chip(dev);
+ unsigned int locality;
+
+ if (kstrtouint(buf, 0, &locality))
+ return -ERANGE;
+
+ if (locality >= TPM_MAX_LOCALITY)
+ return -ERANGE;
+
+	if (!tpm_chip_preferred_locality(chip, (int)locality))
+		return -EINVAL;
+
+	return count;
+}
+
+static DEVICE_ATTR_RW(preferred_locality);
+
static struct attribute *tpm1_dev_attrs[] = {
&dev_attr_pubek.attr,
&dev_attr_pcrs.attr,
@@ -321,11 +349,13 @@ static struct attribute *tpm1_dev_attrs[] = {
&dev_attr_durations.attr,
&dev_attr_timeouts.attr,
&dev_attr_tpm_version_major.attr,
+ &dev_attr_preferred_locality.attr,
NULL,
};

static struct attribute *tpm2_dev_attrs[] = {
&dev_attr_tpm_version_major.attr,
+ &dev_attr_preferred_locality.attr,
NULL
};

--
2.39.3


2024-02-15 07:57:16

by Ard Biesheuvel

[permalink] [raw]
Subject: Re: [PATCH v8 01/15] x86/boot: Place kernel_info at a fixed offset

On Wed, 14 Feb 2024 at 23:31, Ross Philipson <[email protected]> wrote:
>
> From: Arvind Sankar <[email protected]>
>
> There are use cases for storing the offset of a symbol in kernel_info.
> For example, the trenchboot series [0] needs to store the offset of the
> Measured Launch Environment header in kernel_info.
>

Why? Is this information consumed by the bootloader?

I'd like to get away from x86 specific hacks for boot code and boot
images, so I would like to explore if we can avoid kernel_info, or at
least expose it in a generic way. We might just add a 32-bit offset
somewhere in the first 64 bytes of the bootable image: this could
co-exist with EFI bootable images, and can be implemented on arm64,
RISC-V and LoongArch as well.

> Since commit (note: commit ID from tip/master)
>
> commit 527afc212231 ("x86/boot: Check that there are no run-time relocations")
>
> run-time relocations are not allowed in the compressed kernel, so simply
> using the symbol in kernel_info, as
>
> .long symbol
>
> will cause a linker error because this is not position-independent.
>
> With kernel_info being a separate object file and in a different section
> from startup_32, there is no way to calculate the offset of a symbol
> from the start of the image in a position-independent way.
>
> To enable such use cases, put kernel_info into its own section which is
> placed at a predetermined offset (KERNEL_INFO_OFFSET) via the linker
> script. This will allow calculating the symbol offset in a
> position-independent way, by adding the offset from the start of
> kernel_info to KERNEL_INFO_OFFSET.
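
As a concrete check with illustrative numbers: if a field sits 0x10
bytes into kernel_info and KERNEL_INFO_OFFSET is 0x500, rva() expands
to (field - kernel_info) + 0x500 = 0x510, a link-time constant, so no
run-time relocation is emitted.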
>
> Ensure that kernel_info is aligned, and use the SYM_DATA.* macros
> instead of bare labels. This stores the size of the kernel_info
> structure in the ELF symbol table.
>
> Signed-off-by: Arvind Sankar <[email protected]>
> Cc: Ross Philipson <[email protected]>
> Signed-off-by: Ross Philipson <[email protected]>
> ---
> arch/x86/boot/compressed/kernel_info.S | 19 +++++++++++++++----
> arch/x86/boot/compressed/kernel_info.h | 12 ++++++++++++
> arch/x86/boot/compressed/vmlinux.lds.S | 6 ++++++
> 3 files changed, 33 insertions(+), 4 deletions(-)
> create mode 100644 arch/x86/boot/compressed/kernel_info.h
>
> diff --git a/arch/x86/boot/compressed/kernel_info.S b/arch/x86/boot/compressed/kernel_info.S
> index f818ee8fba38..c18f07181dd5 100644
> --- a/arch/x86/boot/compressed/kernel_info.S
> +++ b/arch/x86/boot/compressed/kernel_info.S
> @@ -1,12 +1,23 @@
> /* SPDX-License-Identifier: GPL-2.0 */
>
> +#include <linux/linkage.h>
> #include <asm/bootparam.h>
> +#include "kernel_info.h"
>
> - .section ".rodata.kernel_info", "a"
> +/*
> + * If a field needs to hold the offset of a symbol from the start
> + * of the image, use the macro below, eg
> + * .long rva(symbol)
> + * This will avoid creating run-time relocations, which are not
> + * allowed in the compressed kernel.
> + */
> +
> +#define rva(X) (((X) - kernel_info) + KERNEL_INFO_OFFSET)
>
> - .global kernel_info
> + .section ".rodata.kernel_info", "a"
>
> -kernel_info:
> + .balign 16
> +SYM_DATA_START(kernel_info)
> /* Header, Linux top (structure). */
> .ascii "LToP"
> /* Size. */
> @@ -19,4 +30,4 @@ kernel_info:
>
> kernel_info_var_len_data:
> /* Empty for time being... */
> -kernel_info_end:
> +SYM_DATA_END_LABEL(kernel_info, SYM_L_LOCAL, kernel_info_end)
> diff --git a/arch/x86/boot/compressed/kernel_info.h b/arch/x86/boot/compressed/kernel_info.h
> new file mode 100644
> index 000000000000..c127f84aec63
> --- /dev/null
> +++ b/arch/x86/boot/compressed/kernel_info.h
> @@ -0,0 +1,12 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#ifndef BOOT_COMPRESSED_KERNEL_INFO_H
> +#define BOOT_COMPRESSED_KERNEL_INFO_H
> +
> +#ifdef CONFIG_X86_64
> +#define KERNEL_INFO_OFFSET 0x500
> +#else /* 32-bit */
> +#define KERNEL_INFO_OFFSET 0x100
> +#endif
> +
> +#endif /* BOOT_COMPRESSED_KERNEL_INFO_H */
> diff --git a/arch/x86/boot/compressed/vmlinux.lds.S b/arch/x86/boot/compressed/vmlinux.lds.S
> index 083ec6d7722a..718c52f3f1e6 100644
> --- a/arch/x86/boot/compressed/vmlinux.lds.S
> +++ b/arch/x86/boot/compressed/vmlinux.lds.S
> @@ -7,6 +7,7 @@ OUTPUT_FORMAT(CONFIG_OUTPUT_FORMAT)
>
> #include <asm/cache.h>
> #include <asm/page_types.h>
> +#include "kernel_info.h"
>
> #ifdef CONFIG_X86_64
> OUTPUT_ARCH(i386:x86-64)
> @@ -27,6 +28,11 @@ SECTIONS
> HEAD_TEXT
> _ehead = . ;
> }
> + .rodata.kernel_info KERNEL_INFO_OFFSET : {
> + *(.rodata.kernel_info)
> + }
> + ASSERT(ABSOLUTE(kernel_info) == KERNEL_INFO_OFFSET, "kernel_info at bad address!")
> +
> .rodata..compressed : {
> *(.rodata..compressed)
> }
> --
> 2.39.3
>

2024-02-15 08:41:10

by Ard Biesheuvel

[permalink] [raw]
Subject: Re: [PATCH v8 14/15] x86: Secure Launch late initcall platform module

On Wed, 14 Feb 2024 at 23:32, Ross Philipson <[email protected]> wrote:
>
> From: "Daniel P. Smith" <[email protected]>
>
> The Secure Launch platform module is a late init module. During the
> init call, the TPM event log is read and measurements taken in the
> early boot stub code are located. These measurements are extended
> into the TPM PCRs using the mainline TPM kernel driver.
>
> The platform module also registers the securityfs nodes to allow
> access to TXT register fields on Intel along with the fetching of
> and writing events to the late launch TPM log.
>
> Signed-off-by: Daniel P. Smith <[email protected]>
> Signed-off-by: garnetgrimm <[email protected]>
> Signed-off-by: Ross Philipson <[email protected]>

There is an awful amount of code that executes between the point where
the measurements are taken and the point where they are loaded into
the PCRs. All of this code could subvert the boot flow and hide this
fact, by replacing the actual taken measurement values with the known
'blessed' ones that will unseal the keys and/or phone home to do a
successful remote attestation.

At the very least, this should be documented somewhere. And if at all
possible, it should also be documented why this is ok, and to what
extent it limits the provided guarantees compared to a true D-RTM boot
where the early boot code measures straight into the TPMs before
proceeding.


> ---
> arch/x86/kernel/Makefile | 1 +
> arch/x86/kernel/slmodule.c | 511 +++++++++++++++++++++++++++++++++++++
> 2 files changed, 512 insertions(+)
> create mode 100644 arch/x86/kernel/slmodule.c
>
> diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
> index 5848ea310175..948346ff4595 100644
> --- a/arch/x86/kernel/Makefile
> +++ b/arch/x86/kernel/Makefile
> @@ -75,6 +75,7 @@ obj-$(CONFIG_IA32_EMULATION) += tls.o
> obj-y += step.o
> obj-$(CONFIG_INTEL_TXT) += tboot.o
> obj-$(CONFIG_SECURE_LAUNCH) += slaunch.o
> +obj-$(CONFIG_SECURE_LAUNCH) += slmodule.o
> obj-$(CONFIG_ISA_DMA_API) += i8237.o
> obj-y += stacktrace.o
> obj-y += cpu/
> diff --git a/arch/x86/kernel/slmodule.c b/arch/x86/kernel/slmodule.c
> new file mode 100644
> index 000000000000..52269f24902e
> --- /dev/null
> +++ b/arch/x86/kernel/slmodule.c
> @@ -0,0 +1,511 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Secure Launch late validation/setup, securityfs exposure and finalization.
> + *
> + * Copyright (c) 2022 Apertus Solutions, LLC
> + * Copyright (c) 2021 Assured Information Security, Inc.
> + * Copyright (c) 2022, Oracle and/or its affiliates.
> + *
> + * Co-developed-by: Garnet T. Grimm <[email protected]>
> + * Signed-off-by: Garnet T. Grimm <[email protected]>
> + * Signed-off-by: Daniel P. Smith <[email protected]>
> + */
> +
> +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
> +
> +#include <linux/fs.h>
> +#include <linux/init.h>
> +#include <linux/linkage.h>
> +#include <linux/mm.h>
> +#include <linux/io.h>
> +#include <linux/uaccess.h>
> +#include <linux/security.h>
> +#include <linux/memblock.h>
> +#include <asm/segment.h>
> +#include <asm/sections.h>
> +#include <crypto/sha2.h>
> +#include <linux/slr_table.h>
> +#include <linux/slaunch.h>
> +
> +/*
> + * The macro DECLARE_TXT_PUB_READ_U is used to read values from the TXT
> + * public registers as unsigned values.
> + */
> +#define DECLARE_TXT_PUB_READ_U(size, fmt, msg_size) \
> +static ssize_t txt_pub_read_u##size(unsigned int offset, \
> + loff_t *read_offset, \
> + size_t read_len, \
> + char __user *buf) \
> +{ \
> + char msg_buffer[msg_size]; \
> + u##size reg_value = 0; \
> + void __iomem *txt; \
> + \
> + txt = ioremap(TXT_PUB_CONFIG_REGS_BASE, \
> + TXT_NR_CONFIG_PAGES * PAGE_SIZE); \
> + if (!txt) \
> + return -EFAULT; \
> + memcpy_fromio(&reg_value, txt + offset, sizeof(u##size)); \
> + iounmap(txt); \
> + snprintf(msg_buffer, msg_size, fmt, reg_value); \
> + return simple_read_from_buffer(buf, read_len, read_offset, \
> + &msg_buffer, msg_size); \
> +}
> +
> +DECLARE_TXT_PUB_READ_U(8, "%#04x\n", 6);
> +DECLARE_TXT_PUB_READ_U(32, "%#010x\n", 12);
> +DECLARE_TXT_PUB_READ_U(64, "%#018llx\n", 20);
> +
> +#define DECLARE_TXT_FOPS(reg_name, reg_offset, reg_size) \
> +static ssize_t txt_##reg_name##_read(struct file *flip, \
> + char __user *buf, size_t read_len, loff_t *read_offset) \
> +{ \
> + return txt_pub_read_u##reg_size(reg_offset, read_offset, \
> + read_len, buf); \
> +} \
> +static const struct file_operations reg_name##_ops = { \
> + .read = txt_##reg_name##_read, \
> +}
> +
> +DECLARE_TXT_FOPS(sts, TXT_CR_STS, 64);
> +DECLARE_TXT_FOPS(ests, TXT_CR_ESTS, 8);
> +DECLARE_TXT_FOPS(errorcode, TXT_CR_ERRORCODE, 32);
> +DECLARE_TXT_FOPS(didvid, TXT_CR_DIDVID, 64);
> +DECLARE_TXT_FOPS(e2sts, TXT_CR_E2STS, 64);
> +DECLARE_TXT_FOPS(ver_emif, TXT_CR_VER_EMIF, 32);
> +DECLARE_TXT_FOPS(scratchpad, TXT_CR_SCRATCHPAD, 64);
> +
> +/*
> + * Securityfs exposure
> + */
> +struct memfile {
> + char *name;
> + void *addr;
> + size_t size;
> +};
> +
> +static struct memfile sl_evtlog = {"eventlog", NULL, 0};
> +static void *txt_heap;
> +static struct txt_heap_event_log_pointer2_1_element *evtlog20;
> +static DEFINE_MUTEX(sl_evt_log_mutex);
> +
> +static ssize_t sl_evtlog_read(struct file *file, char __user *buf,
> + size_t count, loff_t *pos)
> +{
> + ssize_t size;
> +
> + if (!sl_evtlog.addr)
> + return 0;
> +
> + mutex_lock(&sl_evt_log_mutex);
> + size = simple_read_from_buffer(buf, count, pos, sl_evtlog.addr,
> + sl_evtlog.size);
> + mutex_unlock(&sl_evt_log_mutex);
> +
> + return size;
> +}
> +
> +static ssize_t sl_evtlog_write(struct file *file, const char __user *buf,
> + size_t datalen, loff_t *ppos)
> +{
> + ssize_t result;
> + char *data;
> +
> + if (!sl_evtlog.addr)
> + return 0;
> +
> + /* No partial writes. */
> + result = -EINVAL;
> + if (*ppos != 0)
> + goto out;
> +
> + data = memdup_user(buf, datalen);
> + if (IS_ERR(data)) {
> + result = PTR_ERR(data);
> + goto out;
> + }
> +
> + mutex_lock(&sl_evt_log_mutex);
> + if (evtlog20)
> + result = tpm20_log_event(evtlog20, sl_evtlog.addr,
> + sl_evtlog.size, datalen, data);
> + else
> + result = tpm12_log_event(sl_evtlog.addr, sl_evtlog.size,
> + datalen, data);
> + mutex_unlock(&sl_evt_log_mutex);
> +
> + kfree(data);
> +out:
> + return result;
> +}
> +
> +static const struct file_operations sl_evtlog_ops = {
> + .read = sl_evtlog_read,
> + .write = sl_evtlog_write,
> + .llseek = default_llseek,
> +};
> +
> +struct sfs_file {
> + const char *name;
> + const struct file_operations *fops;
> +};
> +
> +#define SL_TXT_ENTRY_COUNT 7
> +static const struct sfs_file sl_txt_files[] = {
> + { "sts", &sts_ops },
> + { "ests", &ests_ops },
> + { "errorcode", &errorcode_ops },
> + { "didvid", &didvid_ops },
> + { "ver_emif", &ver_emif_ops },
> + { "scratchpad", &scratchpad_ops },
> + { "e2sts", &e2sts_ops }
> +};
> +
> +/* sysfs file handles */
> +static struct dentry *slaunch_dir;
> +static struct dentry *event_file;
> +static struct dentry *txt_dir;
> +static struct dentry *txt_entries[SL_TXT_ENTRY_COUNT];
> +
> +static long slaunch_expose_securityfs(void)
> +{
> + long ret = 0;
> + int i;
> +
> + slaunch_dir = securityfs_create_dir("slaunch", NULL);
> + if (IS_ERR(slaunch_dir))
> + return PTR_ERR(slaunch_dir);
> +
> + if (slaunch_get_flags() & SL_FLAG_ARCH_TXT) {
> + txt_dir = securityfs_create_dir("txt", slaunch_dir);
> + if (IS_ERR(txt_dir)) {
> + ret = PTR_ERR(txt_dir);
> + goto remove_slaunch;
> + }
> +
> + for (i = 0; i < ARRAY_SIZE(sl_txt_files); i++) {
> + txt_entries[i] = securityfs_create_file(
> + sl_txt_files[i].name, 0440,
> + txt_dir, NULL,
> + sl_txt_files[i].fops);
> + if (IS_ERR(txt_entries[i])) {
> + ret = PTR_ERR(txt_entries[i]);
> + goto remove_files;
> + }
> + }
> + }
> +
> + if (sl_evtlog.addr) {
> + event_file = securityfs_create_file(sl_evtlog.name, 0440,
> + slaunch_dir, NULL,
> + &sl_evtlog_ops);
> + if (IS_ERR(event_file)) {
> + ret = PTR_ERR(event_file);
> + goto remove_files;
> + }
> + }
> +
> + return 0;
> +
> +remove_files:
> + if (slaunch_get_flags() & SL_FLAG_ARCH_TXT) {
> + while (--i >= 0)
> + securityfs_remove(txt_entries[i]);
> + securityfs_remove(txt_dir);
> + }
> +
> +remove_slaunch:
> + securityfs_remove(slaunch_dir);
> +
> + return ret;
> +}
> +
> +static void slaunch_teardown_securityfs(void)
> +{
> + int i;
> +
> + securityfs_remove(event_file);
> + if (sl_evtlog.addr) {
> + memunmap(sl_evtlog.addr);
> + sl_evtlog.addr = NULL;
> + }
> + sl_evtlog.size = 0;
> +
> + if (slaunch_get_flags() & SL_FLAG_ARCH_TXT) {
> + for (i = 0; i < ARRAY_SIZE(sl_txt_files); i++)
> + securityfs_remove(txt_entries[i]);
> +
> + securityfs_remove(txt_dir);
> +
> + if (txt_heap) {
> + memunmap(txt_heap);
> + txt_heap = NULL;
> + }
> + }
> +
> + securityfs_remove(slaunch_dir);
> +}
> +
> +static void slaunch_intel_evtlog(void __iomem *txt)
> +{
> + struct slr_entry_log_info *log_info;
> + struct txt_os_mle_data *params;
> + struct slr_table *slrt;
> + void *os_sinit_data;
> + u64 base, size;
> +
> + memcpy_fromio(&base, txt + TXT_CR_HEAP_BASE, sizeof(base));
> + memcpy_fromio(&size, txt + TXT_CR_HEAP_SIZE, sizeof(size));
> +
> + /* now map TXT heap */
> + txt_heap = memremap(base, size, MEMREMAP_WB);
> + if (!txt_heap)
> + slaunch_txt_reset(txt, "Error failed to memremap TXT heap\n",
> + SL_ERROR_HEAP_MAP);
> +
> + params = (struct txt_os_mle_data *)txt_os_mle_data_start(txt_heap);
> +
> + /* Get the SLRT and remap it */
> + slrt = memremap(params->slrt, sizeof(*slrt), MEMREMAP_WB);
> + if (!slrt)
> + slaunch_txt_reset(txt, "Error failed to memremap SLR Table\n",
> + SL_ERROR_SLRT_MAP);
> + size = slrt->size;
> + memunmap(slrt);
> +
> + slrt = memremap(params->slrt, size, MEMREMAP_WB);
> + if (!slrt)
> + slaunch_txt_reset(txt, "Error failed to memremap SLR Table\n",
> + SL_ERROR_SLRT_MAP);
> +
> + log_info = (struct slr_entry_log_info *)slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_LOG_INFO);
> + if (!log_info)
> + slaunch_txt_reset(txt, "Error failed to memremap SLR Table\n",
> + SL_ERROR_SLRT_MISSING_ENTRY);
> +
> + sl_evtlog.size = log_info->size;
> + sl_evtlog.addr = memremap(log_info->addr, log_info->size,
> + MEMREMAP_WB);
> + if (!sl_evtlog.addr)
> + slaunch_txt_reset(txt, "Error failed to memremap TPM event log\n",
> + SL_ERROR_EVENTLOG_MAP);
> +
> + memunmap(slrt);
> +
> + /* Determine if this is TPM 1.2 or 2.0 event log */
> + if (memcmp(sl_evtlog.addr + sizeof(struct tcg_pcr_event),
> + TCG_SPECID_SIG, sizeof(TCG_SPECID_SIG)))
> + return; /* looks like it is not 2.0 */
> +
> + /* For TPM 2.0 logs, the extended heap element must be located */
> + os_sinit_data = txt_os_sinit_data_start(txt_heap);
> +
> + evtlog20 = tpm20_find_log2_1_element(os_sinit_data);
> +
> + /*
> + * If this fails, things are in really bad shape. Any attempt to write
> + * events to the log will fail.
> + */
> + if (!evtlog20)
> + slaunch_txt_reset(txt, "Error failed to find TPM20 event log element\n",
> + SL_ERROR_TPM_INVALID_LOG20);
> +}
> +
> +static void slaunch_tpm20_extend_event(struct tpm_chip *tpm, void __iomem *txt,
> + struct tcg_pcr_event2_head *event)
> +{
> + u16 *alg_id_field = (u16 *)((u8 *)event + sizeof(struct tcg_pcr_event2_head));
> + struct tpm_digest *digests;
> + u8 *dptr;
> + u32 i, j;
> + int ret;
> +
> + digests = kcalloc(tpm->nr_allocated_banks, sizeof(*digests),
> + GFP_KERNEL);
> + if (!digests)
> + slaunch_txt_reset(txt, "Failed to allocate array of digests\n",
> + SL_ERROR_GENERIC);
> +
> + for (i = 0; i < tpm->nr_allocated_banks; i++)
> + digests[i].alg_id = tpm->allocated_banks[i].alg_id;
> +
> + /* Early SL code ensured there was a max count of 2 digests */
> + for (i = 0; i < event->count; i++) {
> + dptr = (u8 *)alg_id_field + sizeof(u16);
> +
> + for (j = 0; j < tpm->nr_allocated_banks; j++) {
> + if (digests[j].alg_id != *alg_id_field)
> + continue;
> +
> + switch (digests[j].alg_id) {
> + case TPM_ALG_SHA256:
> + memcpy(&digests[j].digest[0], dptr,
> + SHA256_DIGEST_SIZE);
> + alg_id_field = (u16 *)((u8 *)alg_id_field +
> + SHA256_DIGEST_SIZE + sizeof(u16));
> + break;
> + case TPM_ALG_SHA1:
> + memcpy(&digests[j].digest[0], dptr,
> + SHA1_DIGEST_SIZE);
> + alg_id_field = (u16 *)((u8 *)alg_id_field +
> + SHA1_DIGEST_SIZE + sizeof(u16));
> + default:
> + break;
> + }
> + }
> + }
> +
> + ret = tpm_pcr_extend(tpm, event->pcr_idx, digests);
> + if (ret) {
> + pr_err("Error extending TPM20 PCR, result: %d\n", ret);
> + slaunch_txt_reset(txt, "Failed to extend TPM20 PCR\n",
> + SL_ERROR_TPM_EXTEND);
> + }
> +
> + kfree(digests);
> +}
> +
> +static void slaunch_tpm20_extend(struct tpm_chip *tpm, void __iomem *txt)
> +{
> + struct tcg_pcr_event *event_header;
> + struct tcg_pcr_event2_head *event;
> + int start = 0, end = 0, size;
> +
> + event_header = (struct tcg_pcr_event *)(sl_evtlog.addr +
> + evtlog20->first_record_offset);
> +
> + /* Skip first TPM 1.2 event to get to first TPM 2.0 event */
> + event = (struct tcg_pcr_event2_head *)((u8 *)event_header +
> + sizeof(struct tcg_pcr_event) +
> + event_header->event_size);
> +
> + while ((void *)event < sl_evtlog.addr + evtlog20->next_record_offset) {
> + size = __calc_tpm2_event_size(event, event_header, false);
> + if (!size)
> + slaunch_txt_reset(txt, "TPM20 invalid event in event log\n",
> + SL_ERROR_TPM_INVALID_EVENT);
> +
> + /*
> + * Marker events indicate where the Secure Launch early stub
> + * started and ended adding post launch events.
> + */
> + if (event->event_type == TXT_EVTYPE_SLAUNCH_END) {
> + end = 1;
> + break;
> + } else if (event->event_type == TXT_EVTYPE_SLAUNCH_START) {
> + start = 1;
> + goto next;
> + }
> +
> + if (start)
> + slaunch_tpm20_extend_event(tpm, txt, event);
> +
> +next:
> + event = (struct tcg_pcr_event2_head *)((u8 *)event + size);
> + }
> +
> + if (!start || !end)
> + slaunch_txt_reset(txt, "Missing start or end events for extending TPM20 PCRs\n",
> + SL_ERROR_TPM_EXTEND);
> +}
> +
> +static void slaunch_tpm12_extend(struct tpm_chip *tpm, void __iomem *txt)
> +{
> + struct tpm12_event_log_header *event_header;
> + struct tcg_pcr_event *event;
> + struct tpm_digest digest;
> + int start = 0, end = 0;
> + int size, ret;
> +
> + event_header = (struct tpm12_event_log_header *)sl_evtlog.addr;
> + event = (struct tcg_pcr_event *)((u8 *)event_header +
> + sizeof(struct tpm12_event_log_header));
> +
> + while ((void *)event < sl_evtlog.addr + event_header->next_event_offset) {
> + size = sizeof(struct tcg_pcr_event) + event->event_size;
> +
> + /*
> + * Marker events indicate where the Secure Launch early stub
> + * started and ended adding post launch events.
> + */
> + if (event->event_type == TXT_EVTYPE_SLAUNCH_END) {
> + end = 1;
> + break;
> + } else if (event->event_type == TXT_EVTYPE_SLAUNCH_START) {
> + start = 1;
> + goto next;
> + }
> +
> + if (start) {
> + memset(&digest.digest[0], 0, TPM_MAX_DIGEST_SIZE);
> + digest.alg_id = TPM_ALG_SHA1;
> + memcpy(&digest.digest[0], &event->digest[0],
> + SHA1_DIGEST_SIZE);
> +
> + ret = tpm_pcr_extend(tpm, event->pcr_idx, &digest);
> + if (ret) {
> + pr_err("Error extending TPM12 PCR, result: %d\n", ret);
> + slaunch_txt_reset(txt, "Failed to extend TPM12 PCR\n",
> + SL_ERROR_TPM_EXTEND);
> + }
> + }
> +
> +next:
> + event = (struct tcg_pcr_event *)((u8 *)event + size);
> + }
> +
> + if (!start || !end)
> + slaunch_txt_reset(txt, "Missing start or end events for extending TPM12 PCRs\n",
> + SL_ERROR_TPM_EXTEND);
> +}
> +
> +static void slaunch_pcr_extend(void __iomem *txt)
> +{
> + struct tpm_chip *tpm;
> +
> + tpm = tpm_default_chip();
> + if (!tpm)
> + slaunch_txt_reset(txt, "Could not get default TPM chip\n",
> + SL_ERROR_TPM_INIT);
> +
> + if (!tpm_preferred_locality(tpm, 2))
> + slaunch_txt_reset(txt, "Could not set TPM chip locality 2\n",
> + SL_ERROR_TPM_INIT);
> +
> + if (evtlog20)
> + slaunch_tpm20_extend(tpm, txt);
> + else
> + slaunch_tpm12_extend(tpm, txt);
> +
> + tpm_preferred_locality(tpm, 0);
> +}
> +
> +static int __init slaunch_module_init(void)
> +{
> + void __iomem *txt;
> +
> + /* Check to see if Secure Launch happened */
> + if ((slaunch_get_flags() & (SL_FLAG_ACTIVE|SL_FLAG_ARCH_TXT)) !=
> + (SL_FLAG_ACTIVE | SL_FLAG_ARCH_TXT))
> + return 0;
> +
> + txt = ioremap(TXT_PRIV_CONFIG_REGS_BASE, TXT_NR_CONFIG_PAGES *
> + PAGE_SIZE);
> + if (!txt)
> + panic("Error ioremap of TXT priv registers\n");
> +
> + /* Only Intel TXT is supported at this point */
> + slaunch_intel_evtlog(txt);
> + slaunch_pcr_extend(txt);
> + iounmap(txt);
> +
> + return slaunch_expose_securityfs();
> +}
> +
> +static void __exit slaunch_module_exit(void)
> +{
> + slaunch_teardown_securityfs();
> +}
> +
> +late_initcall(slaunch_module_init);
> +__exitcall(slaunch_module_exit);
> --
> 2.39.3
>

2024-02-15 09:05:34

by Ard Biesheuvel

[permalink] [raw]
Subject: Re: [PATCH v8 15/15] x86: EFI stub DRTM launch support for Secure Launch

On Wed, 14 Feb 2024 at 23:32, Ross Philipson <[email protected]> wrote:
>
> This support allows the DRTM launch to be initiated after an EFI stub
> launch of the Linux kernel is done. This is accomplished by providing
> a handler to jump to when a Secure Launch is in progress. This has to be
> called after the EFI stub does Exit Boot Services.
>
> Signed-off-by: Ross Philipson <[email protected]>
> ---
> drivers/firmware/efi/libstub/x86-stub.c | 55 +++++++++++++++++++++++++
> 1 file changed, 55 insertions(+)
>
> diff --git a/drivers/firmware/efi/libstub/x86-stub.c b/drivers/firmware/efi/libstub/x86-stub.c
> index 0d510c9a06a4..4df2cf539194 100644
> --- a/drivers/firmware/efi/libstub/x86-stub.c
> +++ b/drivers/firmware/efi/libstub/x86-stub.c
> @@ -9,6 +9,7 @@
> #include <linux/efi.h>
> #include <linux/pci.h>
> #include <linux/stddef.h>
> +#include <linux/slr_table.h>
>
> #include <asm/efi.h>
> #include <asm/e820/types.h>
> @@ -810,6 +811,57 @@ static efi_status_t efi_decompress_kernel(unsigned long *kernel_entry)
> return EFI_SUCCESS;
> }
>
> +static void efi_secure_launch(struct boot_params *boot_params)
> +{
> + struct slr_entry_uefi_config *uefi_config;
> + struct slr_uefi_cfg_entry *uefi_entry;
> + struct slr_entry_dl_info *dlinfo;
> + efi_guid_t guid = SLR_TABLE_GUID;
> + struct slr_table *slrt;
> + u64 memmap_hi;
> + void *table;
> + u8 buf[64] = {0};
> +

If you add a flex array to slr_entry_uefi_config as I suggested in
response to the other patch, we could simplify this substantially

static struct slr_entry_uefi_config cfg = {
.hdr.tag = SLR_ENTRY_UEFI_CONFIG,
.hdr.size = sizeof(cfg),
.revision = SLR_UEFI_CONFIG_REVISION,
.nr_entries = 1,
.entries[0] = {
.pcr = 18,
.evt_info = "Measured UEFI memory map",
},
};

cfg.entries[0].cfg = boot_params->efi_info.efi_memmap |
(u64)boot_params->efi_info.efi_memmap_hi << 32;
cfg.entries[0].size = boot_params->efi_info.efi_memmap_size;
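
For clarity, the flex array layout being suggested would look roughly
like this (field names taken from the series; the exact integer types
are assumptions):

struct slr_entry_uefi_config {
	struct slr_entry_hdr hdr;
	u16 revision;
	u16 nr_entries;
	struct slr_uefi_cfg_entry entries[];	/* flex array at the tail */
};

One wrinkle: with a flexible member, sizeof(cfg) no longer covers
entries[0], so hdr.size would still need a sizeof(*cfg.entries) term
per entry.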



> + table = get_efi_config_table(guid);
> +
> + /*
> + * The presence of this table indicated a Secure Launch
> + * is being requested.
> + */
> + if (!table)
> + return;
> +
> + slrt = (struct slr_table *)table;
> +
> + if (slrt->magic != SLR_TABLE_MAGIC)
> + return;
> +

slrt = (struct slr_table *)get_efi_config_table(guid);
if (!slrt || slrt->magic != SLR_TABLE_MAGIC)
return;

> + /* Add config information to measure the UEFI memory map */
> + uefi_config = (struct slr_entry_uefi_config *)buf;
> + uefi_config->hdr.tag = SLR_ENTRY_UEFI_CONFIG;
> + uefi_config->hdr.size = sizeof(*uefi_config) + sizeof(*uefi_entry);
> + uefi_config->revision = SLR_UEFI_CONFIG_REVISION;
> + uefi_config->nr_entries = 1;
> + uefi_entry = (struct slr_uefi_cfg_entry *)(buf + sizeof(*uefi_config));
> + uefi_entry->pcr = 18;
> + uefi_entry->cfg = boot_params->efi_info.efi_memmap;
> + memmap_hi = boot_params->efi_info.efi_memmap_hi;
> + uefi_entry->cfg |= memmap_hi << 32;
> + uefi_entry->size = boot_params->efi_info.efi_memmap_size;
> + memcpy(&uefi_entry->evt_info[0], "Measured UEFI memory map",
> + strlen("Measured UEFI memory map"));
> +

Drop all of this

> + if (slr_add_entry(slrt, (struct slr_entry_hdr *)uefi_config))

if (slr_add_entry(slrt, &uefi_config.hdr))


> + return;
> +
> + /* Jump through DL stub to initiate Secure Launch */
> + dlinfo = (struct slr_entry_dl_info *)
> + slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_DL_INFO);
> +
> + asm volatile ("jmp *%%rax"
> + : : "a" (dlinfo->dl_handler), "D" (&dlinfo->bl_context));

Fix the prototype and just do

dlinfo->dl_handler(&dlinfo->bl_context);
unreachable();


So in summary, this becomes

static void efi_secure_launch(struct boot_params *boot_params)
{
static struct slr_entry_uefi_config cfg = {
.hdr.tag = SLR_ENTRY_UEFI_CONFIG,
.hdr.size = sizeof(cfg),
.revision = SLR_UEFI_CONFIG_REVISION,
.nr_entries = 1,
.entries[0] = {
.pcr = 18,
.evt_info = "Measured UEFI memory map",
},
};
struct slr_entry_dl_info *dlinfo;
efi_guid_t guid = SLR_TABLE_GUID;
struct slr_table *slrt;

/*
* The presence of this table indicated a Secure Launch
* is being requested.
*/
slrt = (struct slr_table *)get_efi_config_table(guid);
if (!slrt || slrt->magic != SLR_TABLE_MAGIC)
return;

cfg.entries[0].cfg = boot_params->efi_info.efi_memmap |
(u64)boot_params->efi_info.efi_memmap_hi << 32;
cfg.entries[0].size = boot_params->efi_info.efi_memmap_size;

if (slr_add_entry(slrt, &cfg.hdr))
return;

/* Jump through DL stub to initiate Secure Launch */
dlinfo = (struct slr_entry_dl_info *)
slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_DL_INFO);

dlinfo->dl_handler(&dlinfo->bl_context);

unreachable();
}


> +}
> +
> static void __noreturn enter_kernel(unsigned long kernel_addr,
> struct boot_params *boot_params)
> {
> @@ -934,6 +986,9 @@ void __noreturn efi_stub_entry(efi_handle_t handle,
> goto fail;
> }
>
> + /* If a Secure Launch is in progress, this never returns */

if (IS_ENABLED(CONFIG_SECURE_LAUNCH))

> + efi_secure_launch(boot_params);
> +
> /*
> * Call the SEV init code while still running with the firmware's
> * GDT/IDT, so #VC exceptions will be handled by EFI.
> --
> 2.39.3
>

2024-02-17 07:32:45

by kernel test robot

[permalink] [raw]
Subject: Re: [PATCH v8 15/15] x86: EFI stub DRTM launch support for Secure Launch

Hi Ross,

kernel test robot noticed the following build errors:

[auto build test ERROR on char-misc/char-misc-testing]
[also build test ERROR on char-misc/char-misc-next char-misc/char-misc-linus herbert-cryptodev-2.6/master herbert-crypto-2.6/master linus/master v6.8-rc4 next-20240216]
[cannot apply to tip/x86/core]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url: https://github.com/intel-lab-lkp/linux/commits/Ross-Philipson/x86-boot-Place-kernel_info-at-a-fixed-offset/20240215-064712
base: char-misc/char-misc-testing
patch link: https://lore.kernel.org/r/20240214221847.2066632-16-ross.philipson%40oracle.com
patch subject: [PATCH v8 15/15] x86: EFI stub DRTM launch support for Secure Launch
config: i386-randconfig-052-20240215 (https://download.01.org/0day-ci/archive/20240217/[email protected]/config)
compiler: clang version 17.0.6 (https://github.com/llvm/llvm-project 6009708b4367171ccdbf4b5905cb6a803753fe18)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240217/[email protected]/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <[email protected]>
| Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/

All errors (new ones prefixed by >>):

>> drivers/firmware/efi/libstub/x86-stub.c:862:18: error: invalid input size for constraint 'a'
862 | : : "a" (dlinfo->dl_handler), "D" (&dlinfo->bl_context));
| ^
1 error generated.
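
(The failure is expected on 32-bit targets: dl_handler is a u64, which
cannot fit the 32-bit %eax register that constraint "a" names on i386.)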


vim +/a +862 drivers/firmware/efi/libstub/x86-stub.c

813
814 static void efi_secure_launch(struct boot_params *boot_params)
815 {
816 struct slr_entry_uefi_config *uefi_config;
817 struct slr_uefi_cfg_entry *uefi_entry;
818 struct slr_entry_dl_info *dlinfo;
819 efi_guid_t guid = SLR_TABLE_GUID;
820 struct slr_table *slrt;
821 u64 memmap_hi;
822 void *table;
823 u8 buf[64] = {0};
824
825 table = get_efi_config_table(guid);
826
827 /*
828 * The presence of this table indicated a Secure Launch
829 * is being requested.
830 */
831 if (!table)
832 return;
833
834 slrt = (struct slr_table *)table;
835
836 if (slrt->magic != SLR_TABLE_MAGIC)
837 return;
838
839 /* Add config information to measure the UEFI memory map */
840 uefi_config = (struct slr_entry_uefi_config *)buf;
841 uefi_config->hdr.tag = SLR_ENTRY_UEFI_CONFIG;
842 uefi_config->hdr.size = sizeof(*uefi_config) + sizeof(*uefi_entry);
843 uefi_config->revision = SLR_UEFI_CONFIG_REVISION;
844 uefi_config->nr_entries = 1;
845 uefi_entry = (struct slr_uefi_cfg_entry *)(buf + sizeof(*uefi_config));
846 uefi_entry->pcr = 18;
847 uefi_entry->cfg = boot_params->efi_info.efi_memmap;
848 memmap_hi = boot_params->efi_info.efi_memmap_hi;
849 uefi_entry->cfg |= memmap_hi << 32;
850 uefi_entry->size = boot_params->efi_info.efi_memmap_size;
851 memcpy(&uefi_entry->evt_info[0], "Measured UEFI memory map",
852 strlen("Measured UEFI memory map"));
853
854 if (slr_add_entry(slrt, (struct slr_entry_hdr *)uefi_config))
855 return;
856
857 /* Jump through DL stub to initiate Secure Launch */
858 dlinfo = (struct slr_entry_dl_info *)
859 slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_DL_INFO);
860
861 asm volatile ("jmp *%%rax"
> 862 : : "a" (dlinfo->dl_handler), "D" (&dlinfo->bl_context));
863 }
864

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

2024-02-17 07:54:48

by kernel test robot

[permalink] [raw]
Subject: Re: [PATCH v8 14/15] x86: Secure Launch late initcall platform module

Hi Ross,

kernel test robot noticed the following build errors:

[auto build test ERROR on char-misc/char-misc-testing]
[also build test ERROR on char-misc/char-misc-next char-misc/char-misc-linus herbert-cryptodev-2.6/master herbert-crypto-2.6/master linus/master v6.8-rc4 next-20240216]
[cannot apply to tip/x86/core]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url: https://github.com/intel-lab-lkp/linux/commits/Ross-Philipson/x86-boot-Place-kernel_info-at-a-fixed-offset/20240215-064712
base: char-misc/char-misc-testing
patch link: https://lore.kernel.org/r/20240214221847.2066632-15-ross.philipson%40oracle.com
patch subject: [PATCH v8 14/15] x86: Secure Launch late initcall platform module
config: x86_64-randconfig-r061-20240216 (https://download.01.org/0day-ci/archive/20240217/[email protected]/config)
compiler: gcc-12 (Debian 12.2.0-14) 12.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240217/[email protected]/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <[email protected]>
| Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/

All errors (new ones prefixed by >>):

arch/x86/kernel/slmodule.c: In function 'slaunch_pcr_extend':
>> arch/x86/kernel/slmodule.c:471:14: error: implicit declaration of function 'tpm_preferred_locality' [-Werror=implicit-function-declaration]
471 | if (!tpm_preferred_locality(tpm, 2))
| ^~~~~~~~~~~~~~~~~~~~~~
cc1: some warnings being treated as errors


vim +/tpm_preferred_locality +471 arch/x86/kernel/slmodule.c

461
462 static void slaunch_pcr_extend(void __iomem *txt)
463 {
464 struct tpm_chip *tpm;
465
466 tpm = tpm_default_chip();
467 if (!tpm)
468 slaunch_txt_reset(txt, "Could not get default TPM chip\n",
469 SL_ERROR_TPM_INIT);
470
> 471 if (!tpm_preferred_locality(tpm, 2))
472 slaunch_txt_reset(txt, "Could not set TPM chip locality 2\n",
473 SL_ERROR_TPM_INIT);
474
475 if (evtlog20)
476 slaunch_tpm20_extend(tpm, txt);
477 else
478 slaunch_tpm12_extend(tpm, txt);
479
480 tpm_preferred_locality(tpm, 0);
481 }
482

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

2024-02-17 20:07:53

by kernel test robot

[permalink] [raw]
Subject: Re: [PATCH v8 15/15] x86: EFI stub DRTM launch support for Secure Launch

Hi Ross,

kernel test robot noticed the following build errors:

[auto build test ERROR on char-misc/char-misc-testing]
[also build test ERROR on char-misc/char-misc-next char-misc/char-misc-linus herbert-cryptodev-2.6/master herbert-crypto-2.6/master linus/master v6.8-rc4 next-20240216]
[cannot apply to tip/x86/core]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url: https://github.com/intel-lab-lkp/linux/commits/Ross-Philipson/x86-boot-Place-kernel_info-at-a-fixed-offset/20240215-064712
base: char-misc/char-misc-testing
patch link: https://lore.kernel.org/r/20240214221847.2066632-16-ross.philipson%40oracle.com
patch subject: [PATCH v8 15/15] x86: EFI stub DRTM launch support for Secure Launch
config: i386-allmodconfig (https://download.01.org/0day-ci/archive/20240218/[email protected]/config)
compiler: gcc-12 (Debian 12.2.0-14) 12.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240218/[email protected]/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <[email protected]>
| Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/

All errors (new ones prefixed by >>):

In function 'efi_secure_launch',
inlined from 'efi_stub_entry' at drivers/firmware/efi/libstub/x86-stub.c:990:2:
>> drivers/firmware/efi/libstub/x86-stub.c:861:9: error: inconsistent operand constraints in an 'asm'
861 | asm volatile ("jmp *%%rax"
| ^~~


vim +/asm +861 drivers/firmware/efi/libstub/x86-stub.c

813
814 static void efi_secure_launch(struct boot_params *boot_params)
815 {
816 struct slr_entry_uefi_config *uefi_config;
817 struct slr_uefi_cfg_entry *uefi_entry;
818 struct slr_entry_dl_info *dlinfo;
819 efi_guid_t guid = SLR_TABLE_GUID;
820 struct slr_table *slrt;
821 u64 memmap_hi;
822 void *table;
823 u8 buf[64] = {0};
824
825 table = get_efi_config_table(guid);
826
827 /*
828 * The presence of this table indicated a Secure Launch
829 * is being requested.
830 */
831 if (!table)
832 return;
833
834 slrt = (struct slr_table *)table;
835
836 if (slrt->magic != SLR_TABLE_MAGIC)
837 return;
838
839 /* Add config information to measure the UEFI memory map */
840 uefi_config = (struct slr_entry_uefi_config *)buf;
841 uefi_config->hdr.tag = SLR_ENTRY_UEFI_CONFIG;
842 uefi_config->hdr.size = sizeof(*uefi_config) + sizeof(*uefi_entry);
843 uefi_config->revision = SLR_UEFI_CONFIG_REVISION;
844 uefi_config->nr_entries = 1;
845 uefi_entry = (struct slr_uefi_cfg_entry *)(buf + sizeof(*uefi_config));
846 uefi_entry->pcr = 18;
847 uefi_entry->cfg = boot_params->efi_info.efi_memmap;
848 memmap_hi = boot_params->efi_info.efi_memmap_hi;
849 uefi_entry->cfg |= memmap_hi << 32;
850 uefi_entry->size = boot_params->efi_info.efi_memmap_size;
851 memcpy(&uefi_entry->evt_info[0], "Measured UEFI memory map",
852 strlen("Measured UEFI memory map"));
853
854 if (slr_add_entry(slrt, (struct slr_entry_hdr *)uefi_config))
855 return;
856
857 /* Jump through DL stub to initiate Secure Launch */
858 dlinfo = (struct slr_entry_dl_info *)
859 slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_DL_INFO);
860
> 861 asm volatile ("jmp *%%rax"
862 : : "a" (dlinfo->dl_handler), "D" (&dlinfo->bl_context));
863 }
864
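
The i386 failure is unsurprising: the "a" constraint binds dl_handler to
a 32-bit register on i386, while the template names %rax, which exists
only on x86_64. Ard's suggestion in the reply below removes the inline
asm entirely by giving dl_handler a proper function pointer type; a
minimal sketch of that approach, with the structure layout assumed for
illustration rather than taken from the series' header:

struct slr_bl_context;	/* defined by the series in linux/slr_table.h */

struct slr_entry_dl_info {
	struct slr_entry_hdr hdr;
	/* other fields omitted; this layout is an assumption */
	struct slr_bl_context bl_context;
	void (*dl_handler)(struct slr_bl_context *bl_context);
};

static void __noreturn dl_stub_enter(struct slr_entry_dl_info *dlinfo)
{
	/* An ordinary C call replaces the 64-bit-only "jmp *%rax" */
	dlinfo->dl_handler(&dlinfo->bl_context);
	unreachable();
}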

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

2024-02-21 20:18:26

by Ross Philipson

[permalink] [raw]
Subject: Re: [PATCH v8 15/15] x86: EFI stub DRTM launch support for Secure Launch

On 2/15/24 1:01 AM, Ard Biesheuvel wrote:
> On Wed, 14 Feb 2024 at 23:32, Ross Philipson <[email protected]> wrote:
>>
>> This support allows the DRTM launch to be initiated after an EFI stub
>> launch of the Linux kernel is done. This is accomplished by providing
>> a handler to jump to when a Secure Launch is in progress. This has to be
>> called after the EFI stub does Exit Boot Services.
>>
>> Signed-off-by: Ross Philipson <[email protected]>
>> ---
>> drivers/firmware/efi/libstub/x86-stub.c | 55 +++++++++++++++++++++++++
>> 1 file changed, 55 insertions(+)
>>
>> diff --git a/drivers/firmware/efi/libstub/x86-stub.c b/drivers/firmware/efi/libstub/x86-stub.c
>> index 0d510c9a06a4..4df2cf539194 100644
>> --- a/drivers/firmware/efi/libstub/x86-stub.c
>> +++ b/drivers/firmware/efi/libstub/x86-stub.c
>> @@ -9,6 +9,7 @@
>> #include <linux/efi.h>
>> #include <linux/pci.h>
>> #include <linux/stddef.h>
>> +#include <linux/slr_table.h>
>>
>> #include <asm/efi.h>
>> #include <asm/e820/types.h>
>> @@ -810,6 +811,57 @@ static efi_status_t efi_decompress_kernel(unsigned long *kernel_entry)
>> return EFI_SUCCESS;
>> }
>>
>> +static void efi_secure_launch(struct boot_params *boot_params)
>> +{
>> + struct slr_entry_uefi_config *uefi_config;
>> + struct slr_uefi_cfg_entry *uefi_entry;
>> + struct slr_entry_dl_info *dlinfo;
>> + efi_guid_t guid = SLR_TABLE_GUID;
>> + struct slr_table *slrt;
>> + u64 memmap_hi;
>> + void *table;
>> + u8 buf[64] = {0};
>> +
>
> If you add a flex array to slr_entry_uefi_config as I suggested in
> response to the other patch, we could simplify this substantially

I feel like there was a reason we did not use flex arrays. We recall
having used them at one point and someone asking us to remove them; we
are still looking into it. But if we can go back to them, I will take
all the changes you recommended here.

Thanks
Ross

>
> static struct slr_entry_uefi_config cfg = {
> .hdr.tag = SLR_ENTRY_UEFI_CONFIG,
> .hdr.size = sizeof(cfg),
> .revision = SLR_UEFI_CONFIG_REVISION,
> .nr_entries = 1,
> .entries[0] = {
> .pcr = 18,
> .evt_info = "Measured UEFI memory map",
> },
> };
>
> cfg.entries[0].cfg = boot_params->efi_info.efi_memmap |
> (u64)boot_params->efi_info.efi_memmap_hi << 32;
> cfg.entries[0].size = boot_params->efi_info.efi_memmap_size;
>
>
>
>> + table = get_efi_config_table(guid);
>> +
>> + /*
>> + * The presence of this table indicates a Secure Launch
>> + * is being requested.
>> + */
>> + if (!table)
>> + return;
>> +
>> + slrt = (struct slr_table *)table;
>> +
>> + if (slrt->magic != SLR_TABLE_MAGIC)
>> + return;
>> +
>
> slrt = (struct slr_table *)get_efi_config_table(guid);
> if (!slrt || slrt->magic != SLR_TABLE_MAGIC)
> return;
>
>> + /* Add config information to measure the UEFI memory map */
>> + uefi_config = (struct slr_entry_uefi_config *)buf;
>> + uefi_config->hdr.tag = SLR_ENTRY_UEFI_CONFIG;
>> + uefi_config->hdr.size = sizeof(*uefi_config) + sizeof(*uefi_entry);
>> + uefi_config->revision = SLR_UEFI_CONFIG_REVISION;
>> + uefi_config->nr_entries = 1;
>> + uefi_entry = (struct slr_uefi_cfg_entry *)(buf + sizeof(*uefi_config));
>> + uefi_entry->pcr = 18;
>> + uefi_entry->cfg = boot_params->efi_info.efi_memmap;
>> + memmap_hi = boot_params->efi_info.efi_memmap_hi;
>> + uefi_entry->cfg |= memmap_hi << 32;
>> + uefi_entry->size = boot_params->efi_info.efi_memmap_size;
>> + memcpy(&uefi_entry->evt_info[0], "Measured UEFI memory map",
>> + strlen("Measured UEFI memory map"));
>> +
>
> Drop all of this
>
>> + if (slr_add_entry(slrt, (struct slr_entry_hdr *)uefi_config))
>
> if (slr_add_entry(slrt, &uefi_config.hdr))
>
>
>> + return;
>> +
>> + /* Jump through DL stub to initiate Secure Launch */
>> + dlinfo = (struct slr_entry_dl_info *)
>> + slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_DL_INFO);
>> +
>> + asm volatile ("jmp *%%rax"
>> + : : "a" (dlinfo->dl_handler), "D" (&dlinfo->bl_context));
>
> Fix the prototype and just do
>
> dlinfo->dl_handler(&dlinfo->bl_context);
> unreachable();
>
>
> So in summary, this becomes
>
> static void efi_secure_launch(struct boot_params *boot_params)
> {
> static struct slr_entry_uefi_config cfg = {
> .hdr.tag = SLR_ENTRY_UEFI_CONFIG,
> .hdr.size = sizeof(cfg),
> .revision = SLR_UEFI_CONFIG_REVISION,
> .nr_entries = 1,
> .entries[0] = {
> .pcr = 18,
> .evt_info = "Measured UEFI memory map",
> },
> };
> struct slr_entry_dl_info *dlinfo;
> efi_guid_t guid = SLR_TABLE_GUID;
> struct slr_table *slrt;
>
> /*
> * The presence of this table indicates a Secure Launch
> * is being requested.
> */
> slrt = (struct slr_table *)get_efi_config_table(guid);
> if (!slrt || slrt->magic != SLR_TABLE_MAGIC)
> return;
>
> cfg.entries[0].cfg = boot_params->efi_info.efi_memmap |
> (u64)boot_params->efi_info.efi_memmap_hi << 32;
> cfg.entries[0].size = boot_params->efi_info.efi_memmap_size;
>
> if (slr_add_entry(slrt, &cfg.hdr))
> return;
>
> /* Jump through DL stub to initiate Secure Launch */
> dlinfo = (struct slr_entry_dl_info *)
> slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_DL_INFO);
>
> dlinfo->dl_handler(&dlinfo->bl_context);
>
> unreachable();
> }
>
>
>> +}
>> +
>> static void __noreturn enter_kernel(unsigned long kernel_addr,
>> struct boot_params *boot_params)
>> {
>> @@ -934,6 +986,9 @@ void __noreturn efi_stub_entry(efi_handle_t handle,
>> goto fail;
>> }
>>
>> + /* If a Secure Launch is in progress, this never returns */
>
> if (IS_ENABLED(CONFIG_SECURE_LAUNCH))
>
>> + efi_secure_launch(boot_params);
>> +
>> /*
>> * Call the SEV init code while still running with the firmware's
>> * GDT/IDT, so #VC exceptions will be handled by EFI.
>> --
>> 2.39.3
>>
>
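
Ard's static initializer above presupposes roughly the following layout
for the UEFI config structures, with a flexible array member replacing
the open-coded buffer arithmetic. The field widths and the evt_info size
here are assumptions for illustration, not the series' actual header:

struct slr_uefi_cfg_entry {
	u16 pcr;		/* PCR to extend, e.g. 18 above */
	u64 cfg;		/* address of the config data to measure */
	u32 size;		/* size of the config data */
	char evt_info[32];	/* event log description string */
};

struct slr_entry_uefi_config {
	struct slr_entry_hdr hdr;
	u16 revision;
	u16 nr_entries;
	struct slr_uefi_cfg_entry entries[];	/* flexible array member */
};

Two details are worth noting: statically initializing entries[0] through
a flexible array member relies on a GCC extension that applies only to
objects with static storage duration, which is why the suggested cfg is
declared static; and sizeof(cfg) excludes the flexible array member, so
.hdr.size would need to be sizeof(cfg) + sizeof(cfg.entries[0]) to match
the accounting in the original code.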


2024-02-21 20:38:28

by H. Peter Anvin

[permalink] [raw]
Subject: Re: [PATCH v8 15/15] x86: EFI stub DRTM launch support for Secure Launch

On February 21, 2024 12:17:30 PM PST, [email protected] wrote:
>On 2/15/24 1:01 AM, Ard Biesheuvel wrote:
>> On Wed, 14 Feb 2024 at 23:32, Ross Philipson <[email protected]> wrote:
>>>
>>> [...]
>>
>> If you add a flex array to slr_entry_uefi_config as I suggested in
>> response to the other patch, we could simplify this substantially
>
>I feel like there was a reason we did not use flex arrays. We recall having used them at one point and someone asking us to remove them; we are still looking into it. But if we can go back to them, I will take all the changes you recommended here.
>
>Thanks
>Ross
>
>>
>> static struct slr_entry_uefi_config cfg = {
>> .hdr.tag = SLR_ENTRY_UEFI_CONFIG,
>> .hdr.size = sizeof(cfg),
>> .revision = SLR_UEFI_CONFIG_REVISION,
>> .nr_entries = 1,
>> .entries[0] = {
>> .pcr = 18,
>> .evt_info = "Measured UEFI memory map",
>> },
>> };
>>
>> cfg.entries[0].cfg = boot_params->efi_info.efi_memmap |
>> (u64)boot_params->efi_info.efi_memmap_hi << 32;
>> cfg.entries[0].size = boot_params->efi_info.efi_memmap_size;
>>
>>
>>
>>> + table = get_efi_config_table(guid);
>>> +
>>> + /*
>>> + * The presence of this table indicated a Secure Launch
>>> + * is being requested.
>>> + */
>>> + if (!table)
>>> + return;
>>> +
>>> + slrt = (struct slr_table *)table;
>>> +
>>> + if (slrt->magic != SLR_TABLE_MAGIC)
>>> + return;
>>> +
>>
>> slrt = (struct slr_table *)get_efi_config_table(guid);
>> if (!slrt || slrt->magic != SLR_TABLE_MAGIC)
>> return;
>>
>>> + /* Add config information to measure the UEFI memory map */
>>> + uefi_config = (struct slr_entry_uefi_config *)buf;
>>> + uefi_config->hdr.tag = SLR_ENTRY_UEFI_CONFIG;
>>> + uefi_config->hdr.size = sizeof(*uefi_config) + sizeof(*uefi_entry);
>>> + uefi_config->revision = SLR_UEFI_CONFIG_REVISION;
>>> + uefi_config->nr_entries = 1;
>>> + uefi_entry = (struct slr_uefi_cfg_entry *)(buf + sizeof(*uefi_config));
>>> + uefi_entry->pcr = 18;
>>> + uefi_entry->cfg = boot_params->efi_info.efi_memmap;
>>> + memmap_hi = boot_params->efi_info.efi_memmap_hi;
>>> + uefi_entry->cfg |= memmap_hi << 32;
>>> + uefi_entry->size = boot_params->efi_info.efi_memmap_size;
>>> + memcpy(&uefi_entry->evt_info[0], "Measured UEFI memory map",
>>> + strlen("Measured UEFI memory map"));
>>> +
>>
>> Drop all of this
>>
>>> + if (slr_add_entry(slrt, (struct slr_entry_hdr *)uefi_config))
>>
>> if (slr_add_entry(slrt, &uefi_config.hdr))
>>
>>
>>> + return;
>>> +
>>> + /* Jump through DL stub to initiate Secure Launch */
>>> + dlinfo = (struct slr_entry_dl_info *)
>>> + slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_DL_INFO);
>>> +
>>> + asm volatile ("jmp *%%rax"
>>> + : : "a" (dlinfo->dl_handler), "D" (&dlinfo->bl_context));
>>
>> Fix the prototype and just do
>>
>> dlinfo->dl_handler(&dlinfo->bl_context);
>> unreachable();
>>
>>
>> So in summary, this becomes
>>
>> static void efi_secure_launch(struct boot_params *boot_params)
>> {
>> static struct slr_entry_uefi_config cfg = {
>> .hdr.tag = SLR_ENTRY_UEFI_CONFIG,
>> .hdr.size = sizeof(cfg),
>> .revision = SLR_UEFI_CONFIG_REVISION,
>> .nr_entries = 1,
>> .entries[0] = {
>> .pcr = 18,
>> .evt_info = "Measured UEFI memory map",
>> },
>> };
>> struct slr_entry_dl_info *dlinfo;
>> efi_guid_t guid = SLR_TABLE_GUID;
>> struct slr_table *slrt;
>>
>> /*
>> * The presence of this table indicated a Secure Launch
>> * is being requested.
>> */
>> slrt = (struct slr_table *)get_efi_config_table(guid);
>> if (!slrt || slrt->magic != SLR_TABLE_MAGIC)
>> return;
>>
>> cfg.entries[0].cfg = boot_params->efi_info.efi_memmap |
>> (u64)boot_params->efi_info.efi_memmap_hi << 32;
>> cfg.entries[0].size = boot_params->efi_info.efi_memmap_size;
>>
>> if (slr_add_entry(slrt, &cfg.hdr))
>> return;
>>
>> /* Jump through DL stub to initiate Secure Launch */
>> dlinfo = (struct slr_entry_dl_info *)
>> slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_DL_INFO);
>>
>> dlinfo->dl_handler(&dlinfo->bl_context);
>>
>> unreachable();
>> }
>>
>>
>>> +}
>>> +
>>> static void __noreturn enter_kernel(unsigned long kernel_addr,
>>> struct boot_params *boot_params)
>>> {
>>> @@ -934,6 +986,9 @@ void __noreturn efi_stub_entry(efi_handle_t handle,
>>> goto fail;
>>> }
>>>
>>> + /* If a Secure Launch is in progress, this never returns */
>>
>> if (IS_ENABLED(CONFIG_SECURE_LAUNCH))
>>
>>> + efi_secure_launch(boot_params);
>>> +
>>> /*
>>> * Call the SEV init code while still running with the firmware's
>>> * GDT/IDT, so #VC exceptions will be handled by EFI.
>>> --
>>> 2.39.3
>>>
>>
>

Linux kernel code doesn't use VLAs because of the limited stack size; VLAs and alloca() make stack size tracking impossible. Although this code technically speaking runs in a different environment, it is easier to enforce the constraint globally.
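
To illustrate the constraint: the kernel builds with -Wvla so that every
function's frame size is known at compile time and can be checked with
-Wframe-larger-than=. A minimal sketch of the difference:

#include <linux/string.h>
#include <linux/types.h>

void vla_frame(unsigned int n)
{
	u8 buf[n];	/* VLA: frame size depends on runtime data (-Wvla) */

	memset(buf, 0, n);
}

void fixed_frame(void)
{
	u8 buf[64];	/* frame size fixed at compile time */

	memset(buf, 0, sizeof(buf));
}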

2024-02-23 09:37:22

by Ard Biesheuvel

[permalink] [raw]
Subject: Re: [PATCH v8 14/15] x86: Secure Launch late initcall platform module

On Thu, 22 Feb 2024 at 14:58, Daniel P. Smith
<[email protected]> wrote:
>
> On 2/15/24 03:40, Ard Biesheuvel wrote:
> > On Wed, 14 Feb 2024 at 23:32, Ross Philipson <[email protected]> wrote:
> >>
> >> From: "Daniel P. Smith" <[email protected]>
> >>
> >> The Secure Launch platform module is a late init module. During the
> >> init call, the TPM event log is read and measurements taken in the
> >> early boot stub code are located. These measurements are extended
> >> into the TPM PCRs using the mainline TPM kernel driver.
> >>
> >> The platform module also registers the securityfs nodes to allow
> >> access to TXT register fields on Intel along with the fetching of
> >> and writing events to the late launch TPM log.
> >>
> >> Signed-off-by: Daniel P. Smith <[email protected]>
> >> Signed-off-by: garnetgrimm <[email protected]>
> >> Signed-off-by: Ross Philipson <[email protected]>
> >
> > There is an awful amount of code that executes between the point where
> > the measurements are taken and the point where they are loaded into
> > the PCRs. All of this code could subvert the boot flow and hide this
> > fact, by replacing the actual taken measurement values with the known
> > 'blessed' ones that will unseal the keys and/or phone home to do a
> > successful remote attestation.
>
> To set context, in general the motivation to employ an RTM, Static or
> Dynamic, integrity solution is to enable external platform validation,
> aka attestation. These trust chains are constructed from the principle
> of measure and execute that rely on the presence of a RoT for Storage
> (RTS) and a RoT for Reporting (RTR). Under the TCG architecture adopted
> by x86 vendors and now recently by Arm, those roles are fulfilled by the
> TPM. With this context, let's lay out the trust assumptions being made here:
> 1. The CPU GETSEC instruction functions correctly
> 2. The IOMMU, and by extension the PMRs, functions correctly
> 3. The ACM authentication process functions correctly
> 4. The ACM functions correctly
> 5. The TPM interactions function correctly
> 6. The TPM functions correctly
>
> With this basis, let's explore your assertion here. The assertion breaks
> down into two scenarios. The first is that the at-rest kernel binary is
> corrupt, unintentionally (bug) or maliciously, either of which does not
> matter for the situation. For the sake of simplicity, corruption of the
> Linux kernel during loading or before the DRTM Event is considered an
> equivalent to corruption of the kernel at-rest. The second is that the
> kernel binary was corrupted in memory at some point after the DRTM event
> occurs.
>
> For both scenarios, the ACM will correctly configure the IOMMU PMRs to
> ensure the kernel can no longer be tampered with in memory. After which,
> the ACM will then accurately measure the kernel (bzImage) and safely
> store the measurement in the TPM.
>
> In the first scenario, the TPM will accurately report the kernel
> measurement in the attestation. The attestation authority will be able
> to detect if an invalid kernel was started and can take whatever
> remediation actions it may employ.
>
> In the second scenario, any attempt to corrupt the binary after the ACM
> has configured the IOMMU PMR will fail.
>
>

This protects the memory image from external masters after the
measurement has been taken.

So any external influences in the time window between taking the
measurements and loading them into the PCRs are out of scope here, I
guess?

Maybe it would help (or if I missed it - apologies) to include a
threat model here. I suppose physical tampering is out of scope?

> > At the very least, this should be documented somewhere. And if at all
> > possible, it should also be documented why this is ok, and to what
> > extent it limits the provided guarantees compared to a true D-RTM boot
> > where the early boot code measures straight into the TPMs before
> > proceeding.
>
> I can add a rendition of the above into the existing section of the
> documentation patch that already discusses separation of the measurement
> from the TPM recording code. As to the limits it incurs on the DRTM
> integrity, as explained above, I submit there are none.
>

Thanks for the elaborate explanation. And yes, please document this
with the changes.

2024-03-21 13:47:07

by Daniel P. Smith

[permalink] [raw]
Subject: Re: [PATCH v8 01/15] x86/boot: Place kernel_info at a fixed offset

Hi Ard!

On 2/15/24 02:56, Ard Biesheuvel wrote:
> On Wed, 14 Feb 2024 at 23:31, Ross Philipson <[email protected]> wrote:
>>
>> From: Arvind Sankar <[email protected]>
>>
>> There are use cases for storing the offset of a symbol in kernel_info.
>> For example, the trenchboot series [0] needs to store the offset of the
>> Measured Launch Environment header in kernel_info.
>>
>
> Why? Is this information consumed by the bootloader?

Yes, the bootloader needs a standardized means to find the offset of the
MLE header, which communicates a set of metadata needed by the DCE in
order to set up for and start the loaded kernel. Arm will also need to
provide a similar metadata structure and an alternative entry point (or
a complete rewrite of the existing entry point), as the current Arm
entry point is in direct conflict with the Arm DRTM specification.

> I'd like to get away from x86 specific hacks for boot code and boot
> images, so I would like to explore if we can avoid kernel_info, or at
> least expose it in a generic way. We might just add a 32-bit offset
> somewhere in the first 64 bytes of the bootable image: this could
> co-exist with EFI bootable images, and can be implemented on arm64,
> RISC-V and LoongArch as well.

With all due respect, I would not refer to boot params and the
kernel_info extension designed by the x86 maintainers as a hack. It is
the well-defined boot protocol for x86, just as Arm has its own boot
protocol around Device Tree.

We would gladly adopt a cross-arch, cross-image-type (zImage and
bzImage) means of embedding metadata about the kernel that can be
discovered by a bootloader. Otherwise, we are relegated to a per-arch,
per-image-type discovery mechanism. If you have any suggestions that are
cross-arch and cross-image-type that we could explore, we would be
grateful and willing to investigate how to adopt such a method.

V/r,
Daniel
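
For readers following along, a sketch of the discovery path being
defended here: a bootloader locates kernel_info through the boot
protocol's kernel_info offset field (0x268 in the setup header, protocol
2.15+) and the "LToP" magic, then reads whatever fields the series
defines from it. The mle_header_offset name below is a placeholder for
the series' actual field, and the sketch is freestanding C rather than
any particular bootloader's code:

#include <stdint.h>
#include <string.h>

struct kernel_info {
	uint32_t header;		/* "LToP" magic */
	uint32_t size;			/* size of this structure */
	uint32_t size_total;		/* size including variable-size data */
	uint32_t setup_type_max;
	uint32_t mle_header_offset;	/* placeholder for the MLE field */
};

/* 'image' points at a loaded bzImage */
static const struct kernel_info *find_kernel_info(const uint8_t *image)
{
	const struct kernel_info *ki;
	uint32_t offset;

	memcpy(&offset, image + 0x268, sizeof(offset));
	if (!offset)
		return NULL;

	ki = (const struct kernel_info *)(image + offset);
	return memcmp(&ki->header, "LToP", 4) ? NULL : ki;
}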

2024-03-21 14:12:53

by Daniel P. Smith

[permalink] [raw]
Subject: Re: [PATCH v8 14/15] x86: Secure Launch late initcall platform module

Hi Ard,

On 2/23/24 04:36, Ard Biesheuvel wrote:
> On Thu, 22 Feb 2024 at 14:58, Daniel P. Smith
> <[email protected]> wrote:
>>
>> On 2/15/24 03:40, Ard Biesheuvel wrote:
>> [...]
>
> This protects the memory image from external masters after the
> measurement has been taken.

It blocks access before the measurement is taken.

> So any external influences in the time window between taking the
> measurements and loading them into the PCRs are out of scope here, I
> guess?

Correct, as long as the assumption holds that the user configured the
kernel to program the IOMMU correctly after gaining control. In early versions
of this series the correct IOMMU configuration was enforced. This was
changed due to objections that the user should be free to configure the
system how they see fit, even if it results in an insecure system.
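
As a concrete illustration of the configuration choice being described,
keeping DMA remapping enforced on Intel hardware comes down to kernel
configuration and command line, for example:

	intel_iommu=on iommu=force

Whether these exact options satisfy the expectations of the series is
not stated in this thread; they are given only as the kind of setting a
user could disable, with the consequences Daniel describes.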

> Maybe it would help (or if I missed it - apologies) to include a
> threat model here. I suppose physical tampering is out of scope?

I can take a look at what other security capabilities have documented in
this area and provide a similar level of explanation.

I would not say physical tampering is out of scope; I would say it is
mitigated to the degree to which the TPM was designed to resist it.

>>> At the very least, this should be documented somewhere. And if at all
>>> possible, it should also be documented why this is ok, and to what
>>> extent it limits the provided guarantees compared to a true D-RTM boot
>>> where the early boot code measures straight into the TPMs before
>>> proceeding.
>>
>> I can add a rendition of the above into the existing section of the
>> documentation patch that already discusses separation of the measurement
>> from the TPM recording code. As to the limits it incurs on the DRTM
>> integrity, as explained above, I submit there are none.
>>
>
> Thanks for the elaborate explanation. And yes, please document this
> with the changes.

Ack.

V/r,
Daniel

2024-03-22 14:23:50

by H. Peter Anvin

[permalink] [raw]
Subject: Re: [PATCH v8 01/15] x86/boot: Place kernel_info at a fixed offset

On March 21, 2024 6:45:48 AM PDT, "Daniel P. Smith" <[email protected]> wrote:
>Hi Ard!
>
>[...]
>With all due respect, I would not refer to boot params and the kernel_info extension designed by the x86 maintainers as a hack. It is the well-defined boot protocol for x86, just as Arm has its own boot protocol around Device Tree.
>
>We would gladly adopt a cross-arch, cross-image-type (zImage and bzImage) means of embedding metadata about the kernel that can be discovered by a bootloader. Otherwise, we are relegated to a per-arch, per-image-type discovery mechanism. If you have any suggestions that are cross-arch and cross-image-type that we could explore, we would be grateful and willing to investigate how to adopt such a method.
>
>V/r,
>Daniel

To be fair, the way things are going, UEFI (i.e. PE/COFF) is becoming the new standard format. Yes, ELF would have been better, but...

2024-03-23 01:34:43

by Daniel P. Smith

[permalink] [raw]
Subject: Re: [PATCH v8 01/15] x86/boot: Place kernel_info at a fixed offset

On 3/22/24 10:18, H. Peter Anvin wrote:
> [...]
> To be fair, the way things are going, UEFI (i.e. PE/COFF) is becoming the new standard format. Yes, ELF would have been better, but...

Fully agree with the ELF sentiment. We started looking to see if PE/COFF
has something similar to an ELF NOTE, but figured this may already have
been solved for other cases. If it has not, and no suggestions surface,
then we will see what we can devise.
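
For comparison, the ELF NOTE mechanism referenced above is a small
(namesz, descsz, type) record plus name and descriptor emitted into a
.note section, which a loader can find without parsing the rest of the
image. A self-contained sketch, with the note name, type value, and
payload all hypothetical:

#include <stdint.h>

struct elf_note {
	uint32_t namesz;	/* strlen("TrenchBoot") + 1 */
	uint32_t descsz;	/* sizeof(uint32_t) */
	uint32_t type;		/* note type, chosen by the owner */
	char name[12];		/* "TrenchBoot\0", padded to 4 bytes */
	uint32_t desc;		/* e.g. an MLE header offset */
};

static const struct elf_note mle_note
	__attribute__((used, section(".note.trenchboot"), aligned(4))) = {
	.namesz = 11,
	.descsz = 4,
	.type = 1,		/* hypothetical type value */
	.name = "TrenchBoot",
	.desc = 0x1000,		/* hypothetical offset */
};

A PE/COFF analogue would need a similarly well-known section name that
bootloaders agree to scan for, which is essentially what the kernel_info
discussion above is trying to standardize.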