From: Nadav Amit <namit@vmware.com>
To: Ingo Molnar
Cc: linux-kernel@vger.kernel.org, "H. Peter Anvin", Thomas Gleixner,
    Borislav Petkov, Dave Hansen, Nadav Amit, Andy Lutomirski, Kees Cook,
    Peter Zijlstra, Dave Hansen, Masami Hiramatsu
Subject: [PATCH v4 06/10] x86/alternative: use temporary mm for text poking
Date: Sat, 10 Nov 2018 15:17:28 -0800
Message-ID: <20181110231732.15060-7-namit@vmware.com>
In-Reply-To: <20181110231732.15060-1-namit@vmware.com>
References: <20181110231732.15060-1-namit@vmware.com>

text_poke() can potentially compromise security, as it sets temporary
PTEs in the fixmap. These PTEs might be used to rewrite kernel code from
other cores, accidentally or maliciously, if an attacker gains the
ability to write to kernel memory. Moreover, since remote TLBs are not
flushed after the temporary PTEs are removed, the time window in which
the code is writable is not bounded if the fixmap PTEs - maliciously or
accidentally - are cached in the TLB.

To address these potential security hazards, we use a temporary mm for
patching the code, as sketched below.

Finally, text_poke() is also not conservative enough when mapping pages,
as it always tries to map two pages, even when a single one is
sufficient. So try to be more conservative, and do not map more than
needed.
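For reference while reviewing: the temporary-mm helpers used below were
introduced earlier in this series. The following is a minimal sketch of
their shape, assuming the v4 interface in
arch/x86/include/asm/mmu_context.h (illustrative, not part of this
patch):

typedef struct {
	struct mm_struct *prev;
} temporary_mm_state_t;

static inline temporary_mm_state_t use_temporary_mm(struct mm_struct *mm)
{
	temporary_mm_state_t state;

	/* The poking context runs with IRQs off, so loaded_mm is stable. */
	lockdep_assert_irqs_disabled();
	state.prev = this_cpu_read(cpu_tlbstate.loaded_mm);
	switch_mm_irqs_off(NULL, mm, current);

	return state;
}

static inline void unuse_temporary_mm(temporary_mm_state_t prev)
{
	lockdep_assert_irqs_disabled();
	/* Restore the previous mm; the CR3 write serializes the core. */
	switch_mm_irqs_off(NULL, prev.prev, current);
}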
Cc: Andy Lutomirski
Cc: Kees Cook
Cc: Peter Zijlstra
Cc: Dave Hansen
Cc: Masami Hiramatsu
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 arch/x86/include/asm/fixmap.h |   2 -
 arch/x86/kernel/alternative.c | 112 +++++++++++++++++++++++++++-------
 2 files changed, 89 insertions(+), 25 deletions(-)
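As a reviewer aid, here is the new poking sequence in condensed form;
this restates the diff below and assumes poking_mm and poking_addr were
set up by an earlier patch in this series:

	/* Map the target page into the CPU-private poking_mm. */
	ptep = get_locked_pte(poking_mm, poking_addr, &ptl);
	set_pte_at(poking_mm, poking_addr, ptep,
		   mk_pte(pages[0], PAGE_KERNEL));

	prev = use_temporary_mm(poking_mm);	/* switch CR3 to poking_mm */
	memcpy((u8 *)poking_addr + offset_in_page(addr), opcode, len);
	pte_clear(poking_mm, poking_addr, ptep);
	__flush_tlb_one_user(poking_addr);	/* local flush suffices */
	unuse_temporary_mm(prev);		/* serializing CR3 write */

	pte_unmap_unlock(ptep, ptl);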
diff --git a/arch/x86/include/asm/fixmap.h b/arch/x86/include/asm/fixmap.h
index 50ba74a34a37..9da8cccdf3fb 100644
--- a/arch/x86/include/asm/fixmap.h
+++ b/arch/x86/include/asm/fixmap.h
@@ -103,8 +103,6 @@ enum fixed_addresses {
 #ifdef CONFIG_PARAVIRT
 	FIX_PARAVIRT_BOOTMAP,
 #endif
-	FIX_TEXT_POKE1,	/* reserve 2 pages for text_poke() */
-	FIX_TEXT_POKE0, /* first page is last, because allocation is backward */
 #ifdef CONFIG_X86_INTEL_MID
 	FIX_LNW_VRTC,
 #endif
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index d3ae5c26e5a0..96607ef285c3 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -683,43 +684,108 @@ __ro_after_init unsigned long poking_addr;
 
 static int __text_poke(void *addr, const void *opcode, size_t len)
 {
+	bool cross_page_boundary = offset_in_page(addr) + len > PAGE_SIZE;
+	temporary_mm_state_t prev;
+	struct page *pages[2] = {NULL};
 	unsigned long flags;
-	char *vaddr;
-	struct page *pages[2];
-	int i, r = 0;
+	pte_t pte, *ptep;
+	spinlock_t *ptl;
+	int r = 0;
 
 	/*
-	 * While boot memory allocator is runnig we cannot use struct
-	 * pages as they are not yet initialized.
+	 * While boot memory allocator is running we cannot use struct pages as
+	 * they are not yet initialized.
 	 */
 	BUG_ON(!after_bootmem);
 
 	if (!core_kernel_text((unsigned long)addr)) {
 		pages[0] = vmalloc_to_page(addr);
-		pages[1] = vmalloc_to_page(addr + PAGE_SIZE);
+		if (cross_page_boundary)
+			pages[1] = vmalloc_to_page(addr + PAGE_SIZE);
 	} else {
 		pages[0] = virt_to_page(addr);
 		WARN_ON(!PageReserved(pages[0]));
-		pages[1] = virt_to_page(addr + PAGE_SIZE);
+		if (cross_page_boundary)
+			pages[1] = virt_to_page(addr + PAGE_SIZE);
 	}
-	if (!pages[0])
+
+	if (!pages[0] || (cross_page_boundary && !pages[1]))
 		return -EFAULT;
+
 	local_irq_save(flags);
-	set_fixmap(FIX_TEXT_POKE0, page_to_phys(pages[0]));
-	if (pages[1])
-		set_fixmap(FIX_TEXT_POKE1, page_to_phys(pages[1]));
-	vaddr = (char *)fix_to_virt(FIX_TEXT_POKE0);
-	memcpy(&vaddr[(unsigned long)addr & ~PAGE_MASK], opcode, len);
-	clear_fixmap(FIX_TEXT_POKE0);
-	if (pages[1])
-		clear_fixmap(FIX_TEXT_POKE1);
-	local_flush_tlb();
-	sync_core();
-	/* Could also do a CLFLUSH here to speed up CPU recovery; but
-	   that causes hangs on some VIA CPUs. */
-	for (i = 0; i < len; i++)
-		if (((char *)addr)[i] != ((char *)opcode)[i])
-			r = -EFAULT;
+
+	/*
+	 * The lock is not really needed, but it allows us to avoid
+	 * open-coding.
+	 */
+	ptep = get_locked_pte(poking_mm, poking_addr, &ptl);
+
+	/*
+	 * If we failed to allocate a PTE, fail. This should *never* happen,
+	 * since we preallocate the PTE.
+	 */
+	if (WARN_ON_ONCE(!ptep))
+		goto out;
+
+	pte = mk_pte(pages[0], PAGE_KERNEL);
+	set_pte_at(poking_mm, poking_addr, ptep, pte);
+
+	if (cross_page_boundary) {
+		pte = mk_pte(pages[1], PAGE_KERNEL);
+		set_pte_at(poking_mm, poking_addr + PAGE_SIZE, ptep + 1, pte);
+	}
+
+	/*
+	 * Loading the temporary mm behaves as a compiler barrier, which
+	 * guarantees that the PTE will be set by the time memcpy() is done.
+	 */
+	prev = use_temporary_mm(poking_mm);
+
+	kasan_disable_current();
+	memcpy((u8 *)poking_addr + offset_in_page(addr), opcode, len);
+	kasan_enable_current();
+
+	/*
+	 * Use a compiler barrier to ensure that the PTE is only cleared after
+	 * the memcpy instructions were issued.
+	 */
+	barrier();
+
+	pte_clear(poking_mm, poking_addr, ptep);
+
+	/*
+	 * __flush_tlb_one_user() performs a redundant TLB flush when PTI is
+	 * on, as it also flushes the corresponding "user" address space,
+	 * which does not exist.
+	 *
+	 * Poking, however, is already very inefficient since it does not try
+	 * to batch updates, so we ignore this problem for the time being.
+	 *
+	 * Since the PTEs do not exist in other kernel address spaces, we do
+	 * not use __flush_tlb_one_kernel(), which when PTI is on would cause
+	 * more unwarranted TLB flushes.
+	 *
+	 * There is a slight anomaly here: the PTE is supervisor-only and
+	 * (potentially) global, and we use __flush_tlb_one_user(), but this
+	 * should be fine.
+	 */
+	__flush_tlb_one_user(poking_addr);
+	if (cross_page_boundary) {
+		pte_clear(poking_mm, poking_addr + PAGE_SIZE, ptep + 1);
+		__flush_tlb_one_user(poking_addr + PAGE_SIZE);
+	}
+
+	/*
+	 * Loading the previous page-table hierarchy requires a serializing
+	 * instruction that already allows the core to see the updated version.
+	 * Xen-PV is assumed to serialize execution in a similar manner.
+	 */
+	unuse_temporary_mm(prev);
+
+	pte_unmap_unlock(ptep, ptl);
+out:
+	if (memcmp(addr, opcode, len))
+		r = -EFAULT;
+
 	local_irq_restore(flags);
 	return r;
 }
-- 
2.17.1
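
For context, a minimal sketch of how a caller drives this path once the
patch is applied; the helper name is hypothetical, but the one-byte INT3
write mirrors what kprobes does while holding text_mutex:

static void example_arm_breakpoint(void *ip)
{
	u8 int3 = 0xcc;	/* INT3 opcode */

	/*
	 * text_poke() maps the page backing 'ip' into poking_mm, writes
	 * the byte, and flushes the local TLB, as implemented above.
	 */
	text_poke(ip, &int3, sizeof(int3));
}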