From: Andy Lutomirski
Date: Mon, 27 Aug 2018 11:45:38 -0700
Subject: Re: TLB flushes on fixmap changes
To: Nadav Amit
Cc: Masami Hiramatsu, Peter Zijlstra, Andy Lutomirski, Kees Cook,
    Linus Torvalds, Paolo Bonzini, Jiri Kosina, Will Deacon,
    Benjamin Herrenschmidt, Nick Piggin, the arch/x86 maintainers,
    Borislav Petkov, Rik van Riel, Jann Horn, Adin Scannell,
    Dave Hansen, Linux Kernel Mailing List, linux-mm, David Miller,
    Martin Schwidefsky, Michael Ellerman

On Mon, Aug 27, 2018 at 10:34 AM, Nadav Amit wrote:
> at 1:05 AM, Masami Hiramatsu wrote:
>
>> On Sun, 26 Aug 2018 20:26:09 -0700
>> Nadav Amit wrote:
>>
>>> at 8:03 PM, Masami Hiramatsu wrote:
>>>
>>>> On Sun, 26 Aug 2018 11:09:58 +0200
>>>> Peter Zijlstra wrote:
>>>>
>>>>> On Sat, Aug 25, 2018 at 09:21:22PM -0700, Andy Lutomirski wrote:
>>>>>> I just re-read text_poke().  It's, um, horrible.  Not only is the
>>>>>> implementation overcomplicated and probably buggy, but it's SLOOOOOW.
>>>>>> It's totally the wrong API -- poking one instruction at a time
>>>>>> basically can't be efficient on x86.  The API should either poke lots
>>>>>> of instructions at once or should be text_poke_begin(); ...;
>>>>>> text_poke_end();.
>>>>>
>>>>> I don't think anybody ever cared about performance here. Only
>>>>> correctness. That whole text_poke_bp() thing is entirely tricky.
>>>>
>>>> Agreed. Self modification is a special event.
>>>>
>>>>> FWIW, before text_poke_bp(), text_poke() would only be used from
>>>>> stop_machine, so all the other CPUs would be stuck busy-waiting with
>>>>> IRQs disabled. These days, yeah, that's lots more dodgy, but yes
>>>>> text_mutex should be serializing all that.
>>>>
>>>> I'm still not sure that a speculative page-table walk can be done
>>>> over the mutex. Also, if the fixmap area is for aliasing
>>>> pages (which are always mapped to memory), what kind of
>>>> security issue can happen?
>>>
>>> The PTE is accessible from other cores, so just as we assume for L1TF
>>> that every addressable memory might be cached in L1, we should assume
>>> any PTE might be cached in the TLB when it is present.
>>
>> Ok, so other cores can accidentally cache the PTE in the TLB (and there
>> is no way to shoot it down explicitly?)
>
> There is a way (although currently the code does not use it). But it
> seems that the consensus is that it is better to avoid it being mapped
> at all on remote cores.
>
>>> Although the mapping is for an alias, there are a couple of issues here.
>>> First, this alias mapping is writable, so it might allow an attacker to
>>> change the kernel code (following another initial attack).
>>
>> Combined with some buffer overflow, correct? If the attacker can already
>> write kernel data directly, he is in kernel mode.
>
> Right.
>
>>> Second, the alias mapping is
>>> never explicitly flushed. We may assume that once the original mapping is
>>> removed/changed, a full TLB flush would take place, but there is no
>>> guarantee it actually takes place.
>>
>> Hmm, does this mean a full TLB flush will not flush the alias mapping?
>> (or, the full TLB flush just doesn't work?)
>
> It will flush the alias mapping, but currently there is no such explicit
> flush.
>
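For concreteness, a minimal sketch of what such an explicit flush could
look like -- an illustration, not code from this thread.  It assumes
x86's FIX_TEXT_POKE0 fixmap slot and the stock fix_to_virt(),
clear_fixmap() and flush_tlb_kernel_range() helpers; on x86,
flush_tlb_kernel_range() shoots the range down on all CPUs, which is
what the remote-TLB concern above calls for:

#include <asm/fixmap.h>
#include <asm/tlbflush.h>

/*
 * Tear down the text_poke() fixmap alias and explicitly flush the
 * stale translation everywhere, instead of relying on some later
 * full flush to do it.
 */
static void text_poke_clear_alias(void)
{
	unsigned long addr = fix_to_virt(FIX_TEXT_POKE0);

	clear_fixmap(FIX_TEXT_POKE0);
	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
}
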
>>>> Anyway, from the viewpoint of kprobes, either a per-CPU fixmap or
>>>> changing CR3 sounds good to me. I think we don't even need per-CPU;
>>>> it can call a thread/function on a dedicated core (like the first
>>>> boot processor) and wait :) This may prevent leakage of the PTE
>>>> change to other cores.
>>>
>>> I implemented a per-CPU fixmap, but I think that it makes more sense to
>>> take PeterZ's approach and set an entry at the PGD level. A per-CPU
>>> fixmap either requires pre-populating various levels of the page-table
>>> hierarchy, or conditionally synchronizing whenever module memory is
>>> allocated, since they can share the same PGD, PUD & PMD. While usually
>>> the synchronization is not needed, the possibility that synchronization
>>> is needed complicates locking.
>>
>> Could you point me to which PeterZ approach you mean? I guess it would
>> make a clone of the PGD and use it for local page mapping (as a new mm).
>> If so, yes it sounds perfectly fine to me.
>
> The thread is too long. What I think is best is having a mapping at the
> PGD level. I'll try to give it a shot, and see what I get.
>
>>> Anyhow, having fixed addresses for the fixmap can be used to circumvent
>>> KASLR.
>>
>> I think text_poke doesn't mind using a random address :)
>>
>>> I don't think a dedicated core is needed. Anyhow there is a lock
>>> (text_mutex), so use_mm() can be used after acquiring the mutex.
>>
>> Hmm, use_mm() says:
>>
>> /*
>>  * use_mm
>>  *	Makes the calling kernel thread take on the specified
>>  *	mm context.
>>  *	(Note: this routine is intended to be called only
>>  *	from a kernel thread context)
>>  */
>>
>> So maybe we need a dedicated kernel thread for safety?
>
> Yes, it says so. But I am not sure it cannot be changed, at least for
> this specific use-case. Switching kernel threads just for patching seems
> to me like overkill.
>
> Let me see if I can get something half-reasonable doing so...

I don't understand at all how a kernel thread helps.  The useful bit is
to have a dedicated mm, which would involve setting up an mm_struct and
mapping the kernel and module text, EFI-style, in the user portion of
the mm.  But, to do the text_poke(), we'd just use the mm *without
calling use_mm*.  In other words, the following sequence should be
(almost) just fine:

typedef struct {
	struct mm_struct *prev;
} temporary_mm_state_t;

temporary_mm_state_t use_temporary_mm(struct mm_struct *mm)
{
	temporary_mm_state_t state;

	lockdep_assert_irqs_disabled();
	state.prev = this_cpu_read(cpu_tlbstate.loaded_mm);
	switch_mm_irqs_off(NULL, mm, current);
	return state;
}

void unuse_temporary_mm(temporary_mm_state_t prev)
{
	lockdep_assert_irqs_disabled();
	switch_mm_irqs_off(NULL, prev.prev, current);
}
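As a usage sketch of the helpers above -- poke_mm (a dedicated mm with
the scratch mapping already installed) and poke_under_temporary_mm()
are hypothetical names, and all the fixmap/page-table plumbing is
elided -- a text_poke()-style caller might look like:

#include <linux/memory.h>	/* text_mutex */
#include <linux/string.h>

/*
 * Patch "len" bytes at "addr" through the mapping that poke_mm
 * provides.  The temporary-mm helpers require IRQs off, and
 * text_mutex serializes all patchers.
 */
static void poke_under_temporary_mm(struct mm_struct *poke_mm,
				    void *addr, const void *opcode,
				    size_t len)
{
	temporary_mm_state_t prev;
	unsigned long flags;

	lockdep_assert_held(&text_mutex);

	local_irq_save(flags);
	prev = use_temporary_mm(poke_mm);

	/* The write goes through poke_mm's alias of the text page. */
	memcpy(addr, opcode, len);

	unuse_temporary_mm(prev);
	local_irq_restore(flags);
}
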
The only thing wrong with this that I can see is that it interacts
poorly with perf.  But perf is *already* busted in this regard.  The
following should fix it:

commit b62bff5a8406d252de752cfe75068d0b73b9cdf0
Author: Andy Lutomirski
Date:   Mon Aug 27 11:41:55 2018 -0700

    x86/nmi: Fix some races in NMI uaccess

    In NMI context, we might be in the middle of context switching or in
    the middle of switch_mm_irqs_off().  In either case, CR3 might not
    match current->mm, which could cause copy_from_user_nmi() and
    friends to read the wrong memory.

    Fix it by adding a new nmi_uaccess_okay() helper and checking it in
    copy_from_user_nmi() and in __copy_from_user_nmi()'s callers.

    Signed-off-by: Andy Lutomirski

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 5f4829f10129..dfb2f7c0d019 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -2465,7 +2465,7 @@ perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs

 	perf_callchain_store(entry, regs->ip);

-	if (!current->mm)
+	if (!nmi_uaccess_okay())
 		return;

 	if (perf_callchain_user32(regs, entry))

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 89a73bc31622..b23b2625793b 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -230,6 +230,22 @@ struct tlb_state {
 };
 DECLARE_PER_CPU_SHARED_ALIGNED(struct tlb_state, cpu_tlbstate);

+/*
+ * Blindly accessing user memory from NMI context can be dangerous
+ * if we're in the middle of switching the current user task or
+ * switching the loaded mm.  It can also be dangerous if we
+ * interrupted some kernel code that was temporarily using a
+ * different mm.
+ */
+static inline bool nmi_uaccess_okay(void)
+{
+	struct mm_struct *loaded_mm = this_cpu_read(cpu_tlbstate.loaded_mm);
+	struct mm_struct *current_mm = current->mm;
+
+	return current_mm && loaded_mm == current_mm &&
+		loaded_mm->pgd == __va(read_cr3_pa());
+}
+
 /* Initialize cr4 shadow for this CPU. */
 static inline void cr4_init_shadow(void)
 {

diff --git a/arch/x86/lib/usercopy.c b/arch/x86/lib/usercopy.c
index c8c6ad0d58b8..c5f758430be2 100644
--- a/arch/x86/lib/usercopy.c
+++ b/arch/x86/lib/usercopy.c
@@ -19,6 +19,9 @@ copy_from_user_nmi(void *to, const void __user *from, unsigned long n)
 	if (__range_not_ok(from, n, TASK_SIZE))
 		return n;

+	if (!nmi_uaccess_okay())
+		return n;
+
 	/*
 	 * Even though this function is typically called from NMI/IRQ context
 	 * disable pagefaults so that its behaviour is consistent even when

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 457b281b9339..f4b41d5a93dd 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -345,6 +345,9 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 		 */
 		trace_tlb_flush_rcuidle(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL);
 	} else {
+		/* Let NMI code know that CR3 may not match expectations. */
+		this_cpu_write(cpu_tlbstate.loaded_mm, NULL);
+
 		/* The new ASID is already up to date. */
 		load_new_mm_cr3(next->pgd, new_asid, false);

What do you all think?