From: Nadav Amit <namit@vmware.com>
To: Thomas Gleixner
Cc: Ingo Molnar, Arnd Bergmann, Dave Hansen, Nadav Amit, Andy Lutomirski,
    Kees Cook, Peter Zijlstra
Subject: [PATCH v2 5/6] x86/alternatives: use temporary mm for text poking
Date: Sun, 2 Sep 2018 10:32:23 -0700
Message-ID: <20180902173224.30606-6-namit@vmware.com>
In-Reply-To: <20180902173224.30606-1-namit@vmware.com>
References: <20180902173224.30606-1-namit@vmware.com>
text_poke() can potentially compromise security, as it sets temporary
PTEs in the fixmap. These PTEs might be used by other cores to rewrite
the kernel code, accidentally or maliciously, if an attacker gains the
ability to write to kernel memory. Moreover, since remote TLBs are not
flushed after the temporary PTEs are removed, the time window in which
the code is writable is not bounded if the fixmap PTEs - maliciously or
accidentally - are cached in the TLB.

To address these potential security hazards, we use a temporary mm for
patching the code. Unfortunately, the temporary mm cannot be
initialized early enough during init, and as a result
x86_late_time_init() needs to use text_poke() before the temporary mm
is available. text_poke() therefore keeps both poking versions - using
the fixmap and using the temporary mm - and chooses between them
accordingly. More adventurous developers can try to reorder the init
sequence, or use text_poke_early() instead of text_poke(), to remove
the use of the fixmap for patching completely.

Finally, text_poke() is also not conservative enough when mapping
pages, as it always maps two pages, even when a single one is
sufficient. So try to be more conservative, and do not map more pages
than needed.

Cc: Andy Lutomirski
Cc: Kees Cook
Cc: Peter Zijlstra
Cc: Dave Hansen
Reviewed-by: Masami Hiramatsu
Tested-by: Masami Hiramatsu
Signed-off-by: Nadav Amit
---
 arch/x86/kernel/alternative.c | 165 +++++++++++++++++++++++++++++-----
 1 file changed, 144 insertions(+), 21 deletions(-)

diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index e9be18245698..edca599c4479 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -11,6 +11,7 @@
 #include <linux/stop_machine.h>
 #include <linux/slab.h>
 #include <linux/kdebug.h>
+#include <linux/mmu_context.h>
 #include <asm/text-patching.h>
 #include <asm/alternative.h>
 #include <asm/sections.h>
@@ -674,6 +675,124 @@ void *__init_or_module text_poke_early(void *addr, const void *opcode,
 	return addr;
 }
 
+/**
+ * text_poke_fixmap - poke using the fixmap.
+ *
+ * Fallback function for poking the text using the fixmap. It is used during
+ * early boot and in the rare case in which initialization of safe poking
+ * fails.
+ *
+ * Poking in this manner should be avoided, since it allows other cores to
+ * use the fixmap entries, and can be exploited by an attacker to overwrite
+ * the code (assuming they gained write access through another bug).
+ */
+static void text_poke_fixmap(void *addr, const void *opcode, size_t len,
+			     struct page *pages[2])
+{
+	u8 *vaddr;
+
+	set_fixmap(FIX_TEXT_POKE0, page_to_phys(pages[0]));
+	if (pages[1])
+		set_fixmap(FIX_TEXT_POKE1, page_to_phys(pages[1]));
+	vaddr = (u8 *)fix_to_virt(FIX_TEXT_POKE0);
+	memcpy(vaddr + offset_in_page(addr), opcode, len);
+
+	/*
+	 * clear_fixmap() performs a TLB flush, so no additional TLB
+	 * flush is needed.
+	 */
+	clear_fixmap(FIX_TEXT_POKE0);
+	if (pages[1])
+		clear_fixmap(FIX_TEXT_POKE1);
+	sync_core();
+
+	/*
+	 * Could also do a CLFLUSH here to speed up CPU recovery; but
+	 * that causes hangs on some VIA CPUs.
+	 */
+}
+
+__ro_after_init struct mm_struct *poking_mm;
+__ro_after_init unsigned long poking_addr;
+
+/**
+ * text_poke_safe() - poke the text using a separate address space.
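+ * @addr: address to patch
+ * @opcode: opcode bytes to write
+ * @len: number of bytes to write
+ * @pages: pages backing @addr; pages[1] is only set when the patched
+ *	   range crosses a page boundary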
+ *
+ * This is the preferable way for patching the kernel after boot, as it does
+ * not allow other cores to accidentally or maliciously modify the code using
+ * the temporary PTEs.
+ */
+static void text_poke_safe(void *addr, const void *opcode, size_t len,
+			   struct page *pages[2])
+{
+	temporary_mm_state_t prev;
+	pte_t pte, *ptep;
+	spinlock_t *ptl;
+
+	/*
+	 * The lock is not really needed, but this allows us to avoid
+	 * open-coding.
+	 */
+	ptep = get_locked_pte(poking_mm, poking_addr, &ptl);
+
+	/*
+	 * If we failed to allocate a PTE, fail silently. The caller
+	 * (text_poke) will detect that the write failed when it compares
+	 * the memory with the new opcode.
+	 */
+	if (unlikely(!ptep))
+		return;
+
+	pte = mk_pte(pages[0], PAGE_KERNEL);
+	set_pte_at(poking_mm, poking_addr, ptep, pte);
+
+	if (pages[1]) {
+		pte = mk_pte(pages[1], PAGE_KERNEL);
+		set_pte_at(poking_mm, poking_addr + PAGE_SIZE, ptep + 1, pte);
+	}
+
+	/*
+	 * Loading the temporary mm behaves as a compiler barrier, which
+	 * guarantees that the PTEs will be set by the time memcpy() is
+	 * executed.
+	 */
+	prev = use_temporary_mm(poking_mm);
+
+	memcpy((u8 *)poking_addr + offset_in_page(addr), opcode, len);
+
+	/*
+	 * Use a compiler barrier to ensure that the PTEs are only cleared
+	 * after the memcpy() stores were issued.
+	 */
+	barrier();
+
+	pte_clear(poking_mm, poking_addr, ptep);
+
+	/*
+	 * __flush_tlb_one_user() performs a redundant TLB flush when PTI is
+	 * on, as it also flushes the corresponding "user" address space,
+	 * which does not exist.
+	 *
+	 * Poking, however, is already very inefficient since it does not try
+	 * to batch updates, so we ignore this problem for the time being.
+	 *
+	 * Since the PTEs do not exist in other kernel address spaces, we do
+	 * not use __flush_tlb_one_kernel(), which when PTI is on would cause
+	 * more unwarranted TLB flushes.
+	 */
+	__flush_tlb_one_user(poking_addr);
+	if (pages[1]) {
+		pte_clear(poking_mm, poking_addr + PAGE_SIZE, ptep + 1);
+		__flush_tlb_one_user(poking_addr + PAGE_SIZE);
+	}
+
+	/*
+	 * Loading the previous page-table hierarchy requires a serializing
+	 * instruction, which already allows the core to see the updated
+	 * version. Xen-PV is assumed to serialize execution in a similar
+	 * manner.
+	 */
+	unuse_temporary_mm(prev);
+
+	pte_unmap_unlock(ptep, ptl);
+}
+
 /**
  * text_poke - Update instructions on a live kernel
  * @addr: address to modify
@@ -692,41 +811,45 @@ void *__init_or_module text_poke_early(void *addr, const void *opcode,
  */
 void *text_poke(void *addr, const void *opcode, size_t len)
 {
+	bool cross_page_boundary = offset_in_page(addr) + len > PAGE_SIZE;
+	struct page *pages[2] = {0};
 	unsigned long flags;
-	char *vaddr;
-	struct page *pages[2];
-	int i;
 
 	/*
-	 * While boot memory allocator is runnig we cannot use struct
-	 * pages as they are not yet initialized.
+	 * While the boot memory allocator is running we cannot use struct
+	 * pages as they are not yet initialized.
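+	 * Callers patching earlier than that must use text_poke_early()
+	 * instead.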
 	 */
 	BUG_ON(!after_bootmem);
 
 	if (!core_kernel_text((unsigned long)addr)) {
 		pages[0] = vmalloc_to_page(addr);
-		pages[1] = vmalloc_to_page(addr + PAGE_SIZE);
+		if (cross_page_boundary)
+			pages[1] = vmalloc_to_page(addr + PAGE_SIZE);
 	} else {
 		pages[0] = virt_to_page(addr);
 		WARN_ON(!PageReserved(pages[0]));
-		pages[1] = virt_to_page(addr + PAGE_SIZE);
+		if (cross_page_boundary)
+			pages[1] = virt_to_page(addr + PAGE_SIZE);
 	}
 	BUG_ON(!pages[0]);
 	local_irq_save(flags);
-	set_fixmap(FIX_TEXT_POKE0, page_to_phys(pages[0]));
-	if (pages[1])
-		set_fixmap(FIX_TEXT_POKE1, page_to_phys(pages[1]));
-	vaddr = (char *)fix_to_virt(FIX_TEXT_POKE0);
-	memcpy(&vaddr[(unsigned long)addr & ~PAGE_MASK], opcode, len);
-	clear_fixmap(FIX_TEXT_POKE0);
-	if (pages[1])
-		clear_fixmap(FIX_TEXT_POKE1);
-	local_flush_tlb();
-	sync_core();
-	/* Could also do a CLFLUSH here to speed up CPU recovery; but
-	   that causes hangs on some VIA CPUs. */
-	for (i = 0; i < len; i++)
-		BUG_ON(((char *)addr)[i] != ((char *)opcode)[i]);
+
+	/*
+	 * During initial boot, it is hard to initialize poking_mm due to
+	 * dependencies in boot order.
+	 */
+	if (poking_mm)
+		text_poke_safe(addr, opcode, len, pages);
+	else
+		text_poke_fixmap(addr, opcode, len, pages);
+
+	/*
+	 * To be on the safe side, do the comparison before enabling IRQs, as
+	 * was done before. However, it makes more sense to allow callers to
+	 * deal with potential failures and not to panic so easily.
+	 */
+	BUG_ON(memcmp(addr, opcode, len));
+
 	local_irq_restore(flags);
 	return addr;
 }
-- 
2.17.1
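
P.S. For readers who want to experiment with the idea outside the
kernel, below is a minimal user-space analogue of the scheme - an
editor's sketch, not part of the patch. The executable mapping is never
writable; writes go through a short-lived writable alias of the same
physical memory, much like text_poke_safe() writes through the
temporary mm. It assumes x86-64 Linux with memfd_create(), and the
"fake-text" name and all identifiers are made up for the illustration.

	#define _GNU_SOURCE
	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		/* x86-64 machine code: mov eax, <imm32>; ret */
		static const unsigned char ret42[] = { 0xb8, 42, 0, 0, 0, 0xc3 };
		static const unsigned char ret43[] = { 0xb8, 43, 0, 0, 0, 0xc3 };
		long page = sysconf(_SC_PAGESIZE);
		int fd = memfd_create("fake-text", 0);	/* backing "physical" memory */
		unsigned char *text, *alias;

		if (fd < 0 || ftruncate(fd, page) < 0)
			return 1;

		/* The "kernel text" mapping: executable, never writable. */
		text = mmap(NULL, page, PROT_READ | PROT_EXEC, MAP_SHARED, fd, 0);
		/* Temporary writable alias of the same memory (the "poking mm"). */
		alias = mmap(NULL, page, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
		if (text == MAP_FAILED || alias == MAP_FAILED)
			return 1;

		memcpy(alias, ret42, sizeof(ret42));
		printf("%d\n", ((int (*)(void))text)());	/* prints 42 */

		/* "Poke" new code through the alias, then close the window. */
		memcpy(alias, ret43, sizeof(ret43));
		munmap(alias, page);

		printf("%d\n", ((int (*)(void))text)());	/* prints 43 */
		return 0;
	}

The analogy is loose - user space has no fixmap, and munmap() takes
care of the TLB flush - but it shows why confining the writable mapping
to a separate address space closes the window that the shared fixmap
leaves open.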