From: Ross Philipson <ross.philipson@oracle.com>
To: linux-kernel@vger.kernel.org, x86@kernel.org,
    iommu@lists.linux-foundation.org, linux-integrity@vger.kernel.org,
    linux-doc@vger.kernel.org
Cc: ross.philipson@oracle.com, dpsmith@apertussolutions.com,
    tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, hpa@zytor.com,
    luto@amacapital.net, trenchboot-devel@googlegroups.com
Subject: [PATCH v2 06/12] x86: Secure Launch kernel late boot stub
Date: Fri, 18 Jun 2021 12:12:51 -0400
Message-Id: <1624032777-7013-7-git-send-email-ross.philipson@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1624032777-7013-1-git-send-email-ross.philipson@oracle.com>
References: <1624032777-7013-1-git-send-email-ross.philipson@oracle.com>

The routine slaunch_setup() is called from the x86-specific setup_arch()
routine during early kernel boot. After determining which platform is
present, it performs the operations specific to that platform. This
includes finalizing the settings for the platform's late launch and
verifying that memory protections are in place. For TXT, this code also
reserves the original compressed kernel setup area where the APs were left
looping so that this memory cannot be used.
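For illustration only (not part of this patch): once slaunch_setup() has run,
later kernel code can check whether a measured launch is active through the
exported slaunch_get_flags() interface. A minimal sketch, assuming the
SL_FLAG_* definitions from this series are visible to the caller (the helper
name below is hypothetical):

	static bool txt_launch_active(void)
	{
		u32 flags = slaunch_get_flags();

		/* slaunch_setup_intel() sets both flags on a TXT launch */
		return (flags & (SL_FLAG_ACTIVE | SL_FLAG_ARCH_TXT)) ==
		       (SL_FLAG_ACTIVE | SL_FLAG_ARCH_TXT);
	}
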
Signed-off-by: Ross Philipson <ross.philipson@oracle.com>
---
 arch/x86/kernel/Makefile   |   1 +
 arch/x86/kernel/setup.c    |   3 +
 arch/x86/kernel/slaunch.c  | 472 +++++++++++++++++++++++++++++++++++++++++++++
 drivers/iommu/intel/dmar.c |   4 +
 4 files changed, 480 insertions(+)
 create mode 100644 arch/x86/kernel/slaunch.c

diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 0f66682..574e643 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -80,6 +80,7 @@ obj-$(CONFIG_X86_32)	+= tls.o
 obj-$(CONFIG_IA32_EMULATION)	+= tls.o
 obj-y				+= step.o
 obj-$(CONFIG_INTEL_TXT)		+= tboot.o
+obj-$(CONFIG_SECURE_LAUNCH)	+= slaunch.o
 obj-$(CONFIG_ISA_DMA_API)	+= i8237.o
 obj-$(CONFIG_STACKTRACE)	+= stacktrace.o
 obj-y				+= cpu/
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 1e72062..7f5d12a 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -18,6 +18,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -993,6 +994,8 @@ void __init setup_arch(char **cmdline_p)
 	early_gart_iommu_check();
 #endif
 
+	slaunch_setup();
+
 	/*
 	 * partially used pages are not usable - thus
 	 * we are rounding upwards:
diff --git a/arch/x86/kernel/slaunch.c b/arch/x86/kernel/slaunch.c
new file mode 100644
index 00000000..a24d384
--- /dev/null
+++ b/arch/x86/kernel/slaunch.c
@@ -0,0 +1,472 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Secure Launch late validation/setup and finalization support.
+ *
+ * Copyright (c) 2021, Oracle and/or its affiliates.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+static u32 sl_flags;
+static struct sl_ap_wake_info ap_wake_info;
+static u64 evtlog_addr;
+static u32 evtlog_size;
+static u64 vtd_pmr_lo_size;
+
+/* This should be plenty of room */
+static u8 txt_dmar[PAGE_SIZE] __aligned(16);
+
+u32 slaunch_get_flags(void)
+{
+	return sl_flags;
+}
+EXPORT_SYMBOL(slaunch_get_flags);
+
+struct sl_ap_wake_info *slaunch_get_ap_wake_info(void)
+{
+	return &ap_wake_info;
+}
+
+struct acpi_table_header *slaunch_get_dmar_table(struct acpi_table_header *dmar)
+{
+	/* The DMAR is only stashed and provided via TXT on Intel systems */
+	if (memcmp(txt_dmar, "DMAR", 4))
+		return dmar;
+
+	return (struct acpi_table_header *)(&txt_dmar[0]);
+}
+
+void __noreturn slaunch_txt_reset(void __iomem *txt,
+				  const char *msg, u64 error)
+{
+	u64 one = 1, val;
+
+	pr_err("%s", msg);
+
+	/*
+	 * This performs a TXT reset with a sticky error code. The reads of
+	 * TXT_CR_E2STS act as barriers.
+	 */
+	memcpy_toio(txt + TXT_CR_ERRORCODE, &error, sizeof(error));
+	memcpy_fromio(&val, txt + TXT_CR_E2STS, sizeof(val));
+	memcpy_toio(txt + TXT_CR_CMD_NO_SECRETS, &one, sizeof(one));
+	memcpy_fromio(&val, txt + TXT_CR_E2STS, sizeof(val));
+	memcpy_toio(txt + TXT_CR_CMD_UNLOCK_MEM_CONFIG, &one, sizeof(one));
+	memcpy_fromio(&val, txt + TXT_CR_E2STS, sizeof(val));
+	memcpy_toio(txt + TXT_CR_CMD_RESET, &one, sizeof(one));
+
+	for ( ; ; )
+		asm volatile ("hlt");
+
+	unreachable();
+}
+
+/*
+ * The TXT heap is too big to map all at once with early_ioremap
+ * so it is done a table at a time.
+ */
+static void __init *txt_early_get_heap_table(void __iomem *txt, u32 type,
+					     u32 bytes)
+{
+	void *heap;
+	u64 base, size, offset = 0;
+	int i;
+
+	if (type > TXT_SINIT_TABLE_MAX)
+		slaunch_txt_reset(txt,
+				  "Error invalid table type for early heap walk\n",
+				  SL_ERROR_HEAP_WALK);
+
+	memcpy_fromio(&base, txt + TXT_CR_HEAP_BASE, sizeof(base));
+	memcpy_fromio(&size, txt + TXT_CR_HEAP_SIZE, sizeof(size));
+
+	/* Iterate over heap tables looking for table of "type" */
+	for (i = 0; i < type; i++) {
+		base += offset;
+		heap = early_memremap(base, sizeof(u64));
+		if (!heap)
+			slaunch_txt_reset(txt,
+					  "Error early_memremap of heap for heap walk\n",
+					  SL_ERROR_HEAP_MAP);
+
+		offset = *((u64 *)heap);
+
+		/*
+		 * After the first iteration, any offset of zero is invalid and
+		 * implies the TXT heap is corrupted.
+		 */
+		if (!offset)
+			slaunch_txt_reset(txt,
+					  "Error invalid 0 offset in heap walk\n",
+					  SL_ERROR_HEAP_ZERO_OFFSET);
+
+		early_memunmap(heap, sizeof(u64));
+	}
+
+	/* Skip the size field at the head of each table */
+	base += sizeof(u64);
+	heap = early_memremap(base, bytes);
+	if (!heap)
+		slaunch_txt_reset(txt,
+				  "Error early_memremap of heap section\n",
+				  SL_ERROR_HEAP_MAP);
+
+	return heap;
+}
+
+static void __init txt_early_put_heap_table(void *addr, unsigned long size)
+{
+	early_memunmap(addr, size);
+}
+
+/*
+ * TXT uses a special set of VT-d registers to protect all of memory from DMA
+ * until the IOMMU can be programmed to protect memory. There is a low
+ * memory PMR that can protect all memory up to 4G. The high memory PMR can
+ * be set up to protect all memory beyond 4G. Validate that these values cover
+ * what is expected.
+ */
+static void __init slaunch_verify_pmrs(void __iomem *txt)
+{
+	struct txt_os_sinit_data *os_sinit_data;
+	unsigned long last_pfn;
+	u32 field_offset, err = 0;
+	const char *errmsg = "";
+
+	field_offset = offsetof(struct txt_os_sinit_data, lcp_po_base);
+	os_sinit_data = txt_early_get_heap_table(txt, TXT_OS_SINIT_DATA_TABLE,
+						 field_offset);
+
+	/* Save a copy */
+	vtd_pmr_lo_size = os_sinit_data->vtd_pmr_lo_size;
+
+	last_pfn = e820__end_of_ram_pfn();
+
+	/*
+	 * First make sure the hi PMR covers all memory above 4G. In the
+	 * unlikely case where there is < 4G on the system, the hi PMR will
+	 * not be set.
+	 */
+	if (os_sinit_data->vtd_pmr_hi_base != 0x0ULL) {
+		if (os_sinit_data->vtd_pmr_hi_base != 0x100000000ULL) {
+			err = SL_ERROR_HI_PMR_BASE;
+			errmsg = "Error hi PMR base\n";
+			goto out;
+		}
+
+		if (PFN_PHYS(last_pfn) > os_sinit_data->vtd_pmr_hi_base +
+		    os_sinit_data->vtd_pmr_hi_size) {
+			err = SL_ERROR_HI_PMR_SIZE;
+			errmsg = "Error hi PMR size\n";
+			goto out;
+		}
+	}
+
+	/*
+	 * Lo PMR base should always be 0. This was already checked in the
+	 * early stub.
+	 */
+
+	/*
+	 * Check that, if the kernel was loaded below 4G, it is protected
+	 * by the lo PMR. Note this is the decompressed kernel. The ACM would
+	 * have ensured the compressed kernel (the MLE image) was protected.
+	 */
+	if ((__pa_symbol(_end) < 0x100000000ULL) &&
+	    (__pa_symbol(_end) > os_sinit_data->vtd_pmr_lo_size)) {
+		err = SL_ERROR_LO_PMR_MLE;
+		errmsg = "Error lo PMR does not cover MLE kernel\n";
+	}
+
+	/*
+	 * Other regions of interest like boot params, AP wake block, cmdline
+	 * were already checked for PMR coverage in the early stub code.
+	 */
+
+out:
+	txt_early_put_heap_table(os_sinit_data, field_offset);
+
+	if (err)
+		slaunch_txt_reset(txt, errmsg, err);
+}
+
+static void __init slaunch_txt_reserve_range(u64 base, u64 size)
+{
+	int type;
+
+	type = e820__get_entry_type(base, base + size - 1);
+	if (type == E820_TYPE_RAM) {
+		pr_info("memblock reserve base: %llx size: %llx\n", base, size);
+		memblock_reserve(base, size);
+	}
+}
+
+/*
+ * For Intel, certain regions of memory must be marked as reserved by putting
+ * them on the memblock reserved list if they are not already e820 reserved.
+ * This includes:
+ *  - The TXT heap
+ *  - The ACM area
+ *  - The TXT private register bank
+ *  - The MDR list sent to the MLE by the ACM (see TXT specification)
+ *    (Normally the above are properly reserved by firmware but if it was not
+ *    done, reserve them now)
+ *  - The AP wake block
+ *  - TPM log external to the TXT heap
+ *
+ * Also, if the low PMR doesn't cover all memory < 4G, any RAM regions above
+ * the low PMR must be reserved too.
+ */
+static void __init slaunch_txt_reserve(void __iomem *txt)
+{
+	struct txt_sinit_memory_descriptor_record *mdr;
+	struct txt_sinit_mle_data *sinit_mle_data;
+	void *mdrs;
+	u64 base, size, heap_base, heap_size;
+	u32 field_offset, mdrnum, mdroffset, mdrslen, i;
+
+	base = TXT_PRIV_CONFIG_REGS_BASE;
+	size = TXT_PUB_CONFIG_REGS_BASE - TXT_PRIV_CONFIG_REGS_BASE;
+	slaunch_txt_reserve_range(base, size);
+
+	memcpy_fromio(&heap_base, txt + TXT_CR_HEAP_BASE, sizeof(heap_base));
+	memcpy_fromio(&heap_size, txt + TXT_CR_HEAP_SIZE, sizeof(heap_size));
+	slaunch_txt_reserve_range(heap_base, heap_size);
+
+	memcpy_fromio(&base, txt + TXT_CR_SINIT_BASE, sizeof(base));
+	memcpy_fromio(&size, txt + TXT_CR_SINIT_SIZE, sizeof(size));
+	slaunch_txt_reserve_range(base, size);
+
+	field_offset = offsetof(struct txt_sinit_mle_data,
+				sinit_vtd_dmar_table_size);
+	sinit_mle_data = txt_early_get_heap_table(txt, TXT_SINIT_MLE_DATA_TABLE,
+						  field_offset);
+
+	mdrnum = sinit_mle_data->num_of_sinit_mdrs;
+	mdroffset = sinit_mle_data->sinit_mdrs_table_offset;
+
+	txt_early_put_heap_table(sinit_mle_data, field_offset);
+
+	if (!mdrnum)
+		goto nomdr;
+
+	mdrslen = (mdrnum * sizeof(struct txt_sinit_memory_descriptor_record));
+
+	mdrs = txt_early_get_heap_table(txt, TXT_SINIT_MLE_DATA_TABLE,
+					mdroffset + mdrslen - 8);
+
+	mdr = (struct txt_sinit_memory_descriptor_record *)
+			(mdrs + mdroffset - 8);
+
+	for (i = 0; i < mdrnum; i++, mdr++) {
+		/* Spec says some entries can have length 0, ignore them */
+		if (mdr->type > 0 && mdr->length > 0)
+			slaunch_txt_reserve_range(mdr->address, mdr->length);
+	}
+
+	txt_early_put_heap_table(mdrs, mdroffset + mdrslen - 8);
+
+nomdr:
+	slaunch_txt_reserve_range(ap_wake_info.ap_wake_block,
+				  ap_wake_info.ap_wake_block_size);
+
+	/*
+	 * Earlier checks ensured that the event log was properly situated
+	 * either inside the TXT heap or outside. This is a check to see if the
+	 * event log needs to be reserved. If it is in the TXT heap, it is
+	 * already reserved.
+	 */
+	if (evtlog_addr < heap_base || evtlog_addr > (heap_base + heap_size))
+		slaunch_txt_reserve_range(evtlog_addr, evtlog_size);
+
+	for (i = 0; i < e820_table->nr_entries; i++) {
+		base = e820_table->entries[i].addr;
+		size = e820_table->entries[i].size;
+		if ((base >= vtd_pmr_lo_size) && (base < 0x100000000ULL))
+			slaunch_txt_reserve_range(base, size);
+		else if ((base < vtd_pmr_lo_size) &&
+			 (base + size > vtd_pmr_lo_size))
+			slaunch_txt_reserve_range(vtd_pmr_lo_size,
+						  base + size - vtd_pmr_lo_size);
+	}
+}
+
+/*
+ * TXT stashes a safe copy of the DMAR ACPI table to prevent tampering.
+ * It is stored in the TXT heap. Fetch it from there and make it available
+ * to the IOMMU driver.
+ */
+static void __init slaunch_copy_dmar_table(void __iomem *txt)
+{
+	struct txt_sinit_mle_data *sinit_mle_data;
+	void *dmar;
+	u32 field_offset, dmar_size, dmar_offset;
+
+	memset(&txt_dmar, 0, PAGE_SIZE);
+
+	field_offset = offsetof(struct txt_sinit_mle_data,
+				processor_scrtm_status);
+	sinit_mle_data = txt_early_get_heap_table(txt, TXT_SINIT_MLE_DATA_TABLE,
+						  field_offset);
+
+	dmar_size = sinit_mle_data->sinit_vtd_dmar_table_size;
+	dmar_offset = sinit_mle_data->sinit_vtd_dmar_table_offset;
+
+	txt_early_put_heap_table(sinit_mle_data, field_offset);
+
+	if (!dmar_size || !dmar_offset)
+		slaunch_txt_reset(txt,
+				  "Error invalid DMAR table values\n",
+				  SL_ERROR_HEAP_INVALID_DMAR);
+
+	if (unlikely(dmar_size > PAGE_SIZE))
+		slaunch_txt_reset(txt,
+				  "Error DMAR too big to store\n",
+				  SL_ERROR_HEAP_DMAR_SIZE);
+
+	dmar = txt_early_get_heap_table(txt, TXT_SINIT_MLE_DATA_TABLE,
+					dmar_offset + dmar_size - 8);
+	if (!dmar)
+		slaunch_txt_reset(txt,
+				  "Error early_ioremap of DMAR\n",
+				  SL_ERROR_HEAP_DMAR_MAP);
+
+	memcpy(&txt_dmar[0], dmar + dmar_offset - 8, dmar_size);
+
+	txt_early_put_heap_table(dmar, dmar_offset + dmar_size - 8);
+}
+
+/*
+ * The location of the safe AP wake code block is stored in the TXT heap.
+ * Fetch it here in the early init code for later use in SMP startup.
+ *
+ * Also get the TPM event log values that may have to be put on the
+ * memblock reserve list later.
+ */
+static void __init slaunch_fetch_os_mle_fields(void __iomem *txt)
+{
+	struct txt_os_mle_data *os_mle_data;
+	u8 *jmp_offset;
+
+	os_mle_data = txt_early_get_heap_table(txt, TXT_OS_MLE_DATA_TABLE,
+					       sizeof(*os_mle_data));
+
+	ap_wake_info.ap_wake_block = os_mle_data->ap_wake_block;
+	ap_wake_info.ap_wake_block_size = os_mle_data->ap_wake_block_size;
+
+	jmp_offset = os_mle_data->mle_scratch + SL_SCRATCH_AP_JMP_OFFSET;
+	ap_wake_info.ap_jmp_offset = *((u32 *)jmp_offset);
+
+	evtlog_addr = os_mle_data->evtlog_addr;
+	evtlog_size = os_mle_data->evtlog_size;
+
+	txt_early_put_heap_table(os_mle_data, sizeof(*os_mle_data));
+}
+
+/*
+ * Intel specific late stub setup and validation.
+ */
+static void __init slaunch_setup_intel(void)
+{
+	void __iomem *txt;
+	u64 one = TXT_REGVALUE_ONE, val;
+
+	/*
+	 * First see if SENTER was done and not by TBOOT by reading the status
+	 * register in the public space.
+	 */
+	txt = early_ioremap(TXT_PUB_CONFIG_REGS_BASE,
+			    TXT_NR_CONFIG_PAGES * PAGE_SIZE);
+	if (!txt) {
+		/* This is really bad, nowhere to go from here */
+		panic("Error early_ioremap of TXT pub registers\n");
+	}
+
+	memcpy_fromio(&val, txt + TXT_CR_STS, sizeof(val));
+	early_iounmap(txt, TXT_NR_CONFIG_PAGES * PAGE_SIZE);
+
+	/* Was SENTER done? */
+	if (!(val & TXT_SENTER_DONE_STS))
+		return;
+
+	/* Was it done by TBOOT? */
+	if (boot_params.tboot_addr)
+		return;
+
+	/* Now we want to use the private register space */
+	txt = early_ioremap(TXT_PRIV_CONFIG_REGS_BASE,
+			    TXT_NR_CONFIG_PAGES * PAGE_SIZE);
+	if (!txt) {
+		/* This is really bad, nowhere to go from here */
+		panic("Error early_ioremap of TXT priv registers\n");
+	}
+
+	/*
+	 * Try to read the Intel VID from the TXT private registers to see if
+	 * TXT measured launch happened properly and the private space is
+	 * available.
+	 */
+	memcpy_fromio(&val, txt + TXT_CR_DIDVID, sizeof(val));
+	if ((u16)(val & 0xffff) != 0x8086) {
+		/*
+		 * Can't do a proper TXT reset since it appears something is
+		 * wrong even though SENTER happened and it should be in SMX
+		 * mode.
+		 */
+		panic("Invalid TXT vendor ID, not in SMX mode\n");
+	}
+
+	/* Set flags so subsequent code knows the status of the launch */
+	sl_flags |= (SL_FLAG_ACTIVE|SL_FLAG_ARCH_TXT);
+
+	/*
+	 * Reading the proper DIDVID from the private register space means we
+	 * are in SMX mode and private registers are open for read/write.
+	 */
+
+	/* On Intel, have to handle TPM localities via TXT */
+	memcpy_toio(txt + TXT_CR_CMD_SECRETS, &one, sizeof(one));
+	memcpy_fromio(&val, txt + TXT_CR_E2STS, sizeof(val));
+	memcpy_toio(txt + TXT_CR_CMD_OPEN_LOCALITY1, &one, sizeof(one));
+	memcpy_fromio(&val, txt + TXT_CR_E2STS, sizeof(val));
+
+	slaunch_fetch_os_mle_fields(txt);
+
+	slaunch_verify_pmrs(txt);
+
+	slaunch_txt_reserve(txt);
+
+	slaunch_copy_dmar_table(txt);
+
+	early_iounmap(txt, TXT_NR_CONFIG_PAGES * PAGE_SIZE);
+
+	pr_info("Intel TXT setup complete\n");
+}
+
+void __init slaunch_setup(void)
+{
+	u32 vendor[4];
+
+	/* Get manufacturer string with CPUID 0 */
+	cpuid(0, &vendor[0], &vendor[1], &vendor[2], &vendor[3]);
+
+	/* Only Intel TXT is supported at this point */
+	if (vendor[1] == INTEL_CPUID_MFGID_EBX &&
+	    vendor[2] == INTEL_CPUID_MFGID_ECX &&
+	    vendor[3] == INTEL_CPUID_MFGID_EDX)
+		slaunch_setup_intel();
+}
diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
index 84057cb..ba7100c 100644
--- a/drivers/iommu/intel/dmar.c
+++ b/drivers/iommu/intel/dmar.c
@@ -29,6 +29,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -662,6 +663,9 @@ static inline int dmar_walk_dmar_table(struct acpi_table_dmar *dmar,
 	 */
 	dmar_tbl = tboot_get_dmar_table(dmar_tbl);
 
+	/* If Secure Launch is active, it has similar logic */
+	dmar_tbl = slaunch_get_dmar_table(dmar_tbl);
+
 	dmar = (struct acpi_table_dmar *)dmar_tbl;
 	if (!dmar)
 		return -ENODEV;
-- 
1.8.3.1