From: Nadav Amit <namit@vmware.com>
To: Ingo Molnar
Cc: "H. Peter Anvin", Thomas Gleixner, Borislav Petkov, Dave Hansen,
	Nadav Amit, Andy Lutomirski, Kees Cook, Peter Zijlstra,
	Dave Hansen, Masami Hiramatsu
Subject: [PATCH v3 6/7] x86/alternatives: use temporary mm for text poking
Date: Fri, 2 Nov 2018 16:29:45 -0700
Message-ID: <20181102232946.98461-7-namit@vmware.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20181102232946.98461-1-namit@vmware.com>
References: <20181102232946.98461-1-namit@vmware.com>
X-Mailing-List: linux-kernel@vger.kernel.org

text_poke() can potentially compromise security as it sets temporary
PTEs in the fixmap. These PTEs might be used to rewrite the kernel code
from other cores accidentally or maliciously, if an attacker gains the
ability to write onto kernel memory.

Moreover, since remote TLBs are not flushed after the temporary PTEs
are removed, the time window in which the code is writable is not
limited if the fixmap PTEs - maliciously or accidentally - are cached
in the TLB.

To address these potential security hazards, we use a temporary mm for
patching the code.

More adventurous developers can try to reorder the init sequence or use
text_poke_early() instead of text_poke() to remove the use of fixmap
for patching completely.

Finally, text_poke() is also not conservative enough when mapping
pages, as it always tries to map 2 pages, even when a single one is
sufficient. So try to be more conservative, and do not map more than
needed.
Cc: Andy Lutomirski
Cc: Kees Cook
Cc: Peter Zijlstra
Cc: Dave Hansen
Cc: Masami Hiramatsu
Signed-off-by: Nadav Amit
---
 arch/x86/include/asm/fixmap.h |   2 -
 arch/x86/kernel/alternative.c | 112 +++++++++++++++++++++++++++-------
 2 files changed, 91 insertions(+), 23 deletions(-)

diff --git a/arch/x86/include/asm/fixmap.h b/arch/x86/include/asm/fixmap.h
index 50ba74a34a37..9da8cccdf3fb 100644
--- a/arch/x86/include/asm/fixmap.h
+++ b/arch/x86/include/asm/fixmap.h
@@ -103,8 +103,6 @@ enum fixed_addresses {
 #ifdef CONFIG_PARAVIRT
 	FIX_PARAVIRT_BOOTMAP,
 #endif
-	FIX_TEXT_POKE1,	/* reserve 2 pages for text_poke() */
-	FIX_TEXT_POKE0, /* first page is last, because allocation is backward */
 #ifdef CONFIG_X86_INTEL_MID
 	FIX_LNW_VRTC,
 #endif
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 9ceae28db1af..1a40df4db450 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include <linux/mmu_context.h>
 #include
 #include
 #include
@@ -699,41 +700,110 @@ __ro_after_init unsigned long poking_addr;
  */
 void *text_poke(void *addr, const void *opcode, size_t len)
 {
-	unsigned long flags;
-	char *vaddr;
+	bool cross_page_boundary = offset_in_page(addr) + len > PAGE_SIZE;
+	temporary_mm_state_t prev;
 	struct page *pages[2];
-	int i;
+	unsigned long flags;
+	pte_t pte, *ptep;
+	spinlock_t *ptl;
 
 	/*
-	 * While boot memory allocator is runnig we cannot use struct
-	 * pages as they are not yet initialized.
+	 * While boot memory allocator is running we cannot use struct pages as
+	 * they are not yet initialized.
 	 */
 	BUG_ON(!after_bootmem);
 
 	if (!core_kernel_text((unsigned long)addr)) {
 		pages[0] = vmalloc_to_page(addr);
-		pages[1] = vmalloc_to_page(addr + PAGE_SIZE);
+		if (cross_page_boundary)
+			pages[1] = vmalloc_to_page(addr + PAGE_SIZE);
 	} else {
 		pages[0] = virt_to_page(addr);
 		WARN_ON(!PageReserved(pages[0]));
-		pages[1] = virt_to_page(addr + PAGE_SIZE);
+		if (cross_page_boundary)
+			pages[1] = virt_to_page(addr + PAGE_SIZE);
 	}
+
+	/* TODO: let the caller deal with a failure and fail gracefully. */
 	BUG_ON(!pages[0]);
+	BUG_ON(cross_page_boundary && !pages[1]);
 
 	local_irq_save(flags);
-	set_fixmap(FIX_TEXT_POKE0, page_to_phys(pages[0]));
-	if (pages[1])
-		set_fixmap(FIX_TEXT_POKE1, page_to_phys(pages[1]));
-	vaddr = (char *)fix_to_virt(FIX_TEXT_POKE0);
-	memcpy(&vaddr[(unsigned long)addr & ~PAGE_MASK], opcode, len);
-	clear_fixmap(FIX_TEXT_POKE0);
-	if (pages[1])
-		clear_fixmap(FIX_TEXT_POKE1);
-	local_flush_tlb();
-	sync_core();
-	/* Could also do a CLFLUSH here to speed up CPU recovery; but
-	   that causes hangs on some VIA CPUs. */
-	for (i = 0; i < len; i++)
-		BUG_ON(((char *)addr)[i] != ((char *)opcode)[i]);
+
+	/*
+	 * The lock is not really needed, but this allows to avoid open-coding.
+	 */
+	ptep = get_locked_pte(poking_mm, poking_addr, &ptl);
+
+	/*
+	 * If we failed to allocate a PTE, fail silently. The caller (text_poke)
+	 * will detect that the write failed when it compares the memory with
+	 * the new opcode.
+	 */
+	if (unlikely(!ptep))
+		goto out;
+
+	pte = mk_pte(pages[0], PAGE_KERNEL);
+	set_pte_at(poking_mm, poking_addr, ptep, pte);
+
+	if (cross_page_boundary) {
+		pte = mk_pte(pages[1], PAGE_KERNEL);
+		set_pte_at(poking_mm, poking_addr + PAGE_SIZE, ptep + 1, pte);
+	}
+
+	/*
+	 * Loading the temporary mm behaves as a compiler barrier, which
+	 * guarantees that the PTE will be set at the time memcpy() is done.
+	 */
+	prev = use_temporary_mm(poking_mm);
+
+	kasan_disable_current();
+	memcpy((u8 *)poking_addr + offset_in_page(addr), opcode, len);
+	kasan_enable_current();
+
+	/*
+	 * Ensure that the PTE is only cleared after the instructions of memcpy
+	 * were issued by using a compiler barrier.
+	 */
+	barrier();
+
+	pte_clear(poking_mm, poking_addr, ptep);
+
+	/*
+	 * __flush_tlb_one_user() performs a redundant TLB flush when PTI is on,
+	 * as it also flushes the corresponding "user" address spaces, which
+	 * does not exist.
+	 *
+	 * Poking, however, is already very inefficient since it does not try to
+	 * batch updates, so we ignore this problem for the time being.
+	 *
+	 * Since the PTEs do not exist in other kernel address-spaces, we do
+	 * not use __flush_tlb_one_kernel(), which when PTI is on would cause
+	 * more unwarranted TLB flushes.
+	 *
+	 * There is a slight anomaly here: the PTE is a supervisor-only and
+	 * (potentially) global and we use __flush_tlb_one_user() but this
+	 * should be fine.
+	 */
+	__flush_tlb_one_user(poking_addr);
+	if (cross_page_boundary) {
+		pte_clear(poking_mm, poking_addr + PAGE_SIZE, ptep + 1);
+		__flush_tlb_one_user(poking_addr + PAGE_SIZE);
+	}
+
+	/*
+	 * Loading the previous page-table hierarchy requires a serializing
+	 * instruction that already allows the core to see the updated version.
+	 * Xen-PV is assumed to serialize execution in a similar manner.
+	 */
+	unuse_temporary_mm(prev);
+
+	pte_unmap_unlock(ptep, ptl);
+out:
+	/*
+	 * TODO: allow the callers to deal with potential failures and do not
+	 * panic so easily.
+	 */
+	BUG_ON(memcmp(addr, opcode, len));
 	local_irq_restore(flags);
 	return addr;
 }
-- 
2.17.1