Date: Fri, 31 Aug 2018 08:13:58 -0700
From: tip-bot for Andy Lutomirski
Message-ID: 
Cc: peterz@infradead.org, hpa@zytor.com, mingo@kernel.org, jannh@google.com,
    tglx@linutronix.de, nadav.amit@gmail.com, bp@alien8.de, luto@kernel.org,
    riel@surriel.com, linux-kernel@vger.kernel.org
Reply-To: mingo@kernel.org, hpa@zytor.com, peterz@infradead.org,
    riel@surriel.com, linux-kernel@vger.kernel.org, jannh@google.com,
    luto@kernel.org, tglx@linutronix.de,
    nadav.amit@gmail.com, bp@alien8.de
In-Reply-To: 
References: 
To: linux-tip-commits@vger.kernel.org
Subject: [tip:x86/urgent] x86/nmi: Fix NMI uaccess race against CR3 switching
Git-Commit-ID: 4012e77a903d114f915fc607d6d2ed54a3d6c9b1
X-Mailer: tip-git-log-daemon
Robot-ID: 
Robot-Unsubscribe: Contact to get blacklisted from these emails
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset=UTF-8
Content-Disposition: inline
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

Commit-ID:  4012e77a903d114f915fc607d6d2ed54a3d6c9b1
Gitweb:     https://git.kernel.org/tip/4012e77a903d114f915fc607d6d2ed54a3d6c9b1
Author:     Andy Lutomirski
AuthorDate: Wed, 29 Aug 2018 08:47:18 -0700
Committer:  Thomas Gleixner
CommitDate: Fri, 31 Aug 2018 17:08:22 +0200

x86/nmi: Fix NMI uaccess race against CR3 switching

An NMI can hit in the middle of context switching or in the middle of
switch_mm_irqs_off().  In either case, CR3 might not match current->mm,
which could cause copy_from_user_nmi() and friends to read the wrong
memory.

Fix it by adding a new nmi_uaccess_okay() helper and checking it in
copy_from_user_nmi() and in __copy_from_user_nmi()'s callers.

Signed-off-by: Andy Lutomirski 
Signed-off-by: Thomas Gleixner 
Reviewed-by: Rik van Riel 
Cc: Nadav Amit 
Cc: Borislav Petkov 
Cc: Jann Horn 
Cc: Peter Zijlstra 
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/dd956eba16646fd0b15c3c0741269dfd84452dac.1535557289.git.luto@kernel.org

---
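To see why the LOADED_MM_SWITCHING marker is needed, here is a small
single-threaded userspace model of the protocol. It is a sketch only:
every name in it (struct mm, fake_cr3, switch_mm_model, simulated_nmi)
is invented for illustration and is not kernel code. It fires a
simulated NMI at each step of the switch sequence and asserts the
invariant the patch is after: whenever the helper says uaccess is okay,
CR3 really matches current->mm.

#include <assert.h>
#include <stdio.h>

struct mm { int id; };

#define MM_SWITCHING ((struct mm *)1)  /* sentinel, like LOADED_MM_SWITCHING */

static struct mm *loaded_mm;   /* models cpu_tlbstate.loaded_mm */
static struct mm *fake_cr3;    /* models the CR3 register */
static struct mm *current_mm;  /* models current->mm */

/* Model of nmi_uaccess_okay(): the sentinel never equals current_mm,
 * so the switching window is always refused. */
static int nmi_uaccess_okay_model(void)
{
        return loaded_mm == current_mm;
}

/* A simulated NMI: if the helper says okay, CR3 must really match. */
static void simulated_nmi(const char *where)
{
        if (nmi_uaccess_okay_model())
                assert(fake_cr3 == current_mm);
        else
                printf("NMI %s: uaccess refused (safe)\n", where);
}

/* Model of the store sequence in switch_mm_irqs_off(). */
static void switch_mm_model(struct mm *next)
{
        loaded_mm = MM_SWITCHING;  /* announce the inconsistent window */
        simulated_nmi("after marking LOADED_MM_SWITCHING");

        fake_cr3 = next;           /* the CR3 write itself */
        simulated_nmi("after the CR3 write");

        loaded_mm = next;          /* publish consistent state again */
        simulated_nmi("after publishing loaded_mm");
}

int main(void)
{
        struct mm a = { 1 }, b = { 2 };

        current_mm = &a; loaded_mm = &a; fake_cr3 = &a;
        simulated_nmi("in steady state");

        /* Switch mms while current_mm still points at the old mm,
         * as when an NMI lands mid context switch. */
        switch_mm_model(&b);

        current_mm = &b;           /* the task side catches up */
        simulated_nmi("in steady state again");
        return 0;
}

The step that matters is the one right after the fake CR3 write:
without the sentinel, loaded_mm would still equal current_mm there
while CR3 already points at the next mm, and the assert would fire.
The sentinel makes the loaded_mm != current_mm test fail instead, so
the simulated NMI backs off, mirroring what nmi_uaccess_okay() does.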
 arch/x86/events/core.c          |  2 +-
 arch/x86/include/asm/tlbflush.h | 40 ++++++++++++++++++++++++++++++++++++++++
 arch/x86/lib/usercopy.c         |  5 +++++
 arch/x86/mm/tlb.c               |  7 +++++++
 4 files changed, 53 insertions(+), 1 deletion(-)

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 5f4829f10129..dfb2f7c0d019 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -2465,7 +2465,7 @@ perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs
 
 	perf_callchain_store(entry, regs->ip);
 
-	if (!current->mm)
+	if (!nmi_uaccess_okay())
 		return;
 
 	if (perf_callchain_user32(regs, entry))
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 29c9da6c62fc..58ce5288878e 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -175,8 +175,16 @@ struct tlb_state {
 	 * are on.  This means that it may not match current->active_mm,
 	 * which will contain the previous user mm when we're in lazy TLB
 	 * mode even if we've already switched back to swapper_pg_dir.
+	 *
+	 * During switch_mm_irqs_off(), loaded_mm will be set to
+	 * LOADED_MM_SWITCHING during the brief interrupts-off window
+	 * when CR3 and loaded_mm would otherwise be inconsistent.  This
+	 * is for nmi_uaccess_okay()'s benefit.
 	 */
 	struct mm_struct *loaded_mm;
+
+#define LOADED_MM_SWITCHING ((struct mm_struct *)1)
+
 	u16 loaded_mm_asid;
 	u16 next_asid;
 	/* last user mm's ctx id */
@@ -246,6 +254,38 @@ struct tlb_state {
 };
 DECLARE_PER_CPU_SHARED_ALIGNED(struct tlb_state, cpu_tlbstate);
 
+/*
+ * Blindly accessing user memory from NMI context can be dangerous
+ * if we're in the middle of switching the current user task or
+ * switching the loaded mm.  It can also be dangerous if we
+ * interrupted some kernel code that was temporarily using a
+ * different mm.
+ */
+static inline bool nmi_uaccess_okay(void)
+{
+	struct mm_struct *loaded_mm = this_cpu_read(cpu_tlbstate.loaded_mm);
+	struct mm_struct *current_mm = current->mm;
+
+	VM_WARN_ON_ONCE(!loaded_mm);
+
+	/*
+	 * The condition we want to check is
+	 * current_mm->pgd == __va(read_cr3_pa()).  This may be slow, though,
+	 * if we're running in a VM with shadow paging, and nmi_uaccess_okay()
+	 * is supposed to be reasonably fast.
+	 *
+	 * Instead, we check the almost equivalent but somewhat conservative
+	 * condition below, and we rely on the fact that switch_mm_irqs_off()
+	 * sets loaded_mm to LOADED_MM_SWITCHING before writing to CR3.
+	 */
+	if (loaded_mm != current_mm)
+		return false;
+
+	VM_WARN_ON_ONCE(current_mm->pgd != __va(read_cr3_pa()));
+
+	return true;
+}
+
 /* Initialize cr4 shadow for this CPU. */
 static inline void cr4_init_shadow(void)
 {
diff --git a/arch/x86/lib/usercopy.c b/arch/x86/lib/usercopy.c
index c8c6ad0d58b8..3f435d7fca5e 100644
--- a/arch/x86/lib/usercopy.c
+++ b/arch/x86/lib/usercopy.c
@@ -7,6 +7,8 @@
 #include <linux/uaccess.h>
 #include <linux/export.h>
 
+#include <asm/tlbflush.h>
+
 /*
  * We rely on the nested NMI work to allow atomic faults from the NMI path; the
  * nested NMI paths are careful to preserve CR2.
@@ -19,6 +21,9 @@ copy_from_user_nmi(void *to, const void __user *from, unsigned long n)
 	if (__range_not_ok(from, n, TASK_SIZE))
 		return n;
 
+	if (!nmi_uaccess_okay())
+		return n;
+
 	/*
 	 * Even though this function is typically called from NMI/IRQ context
 	 * disable pagefaults so that its behaviour is consistent even when
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 9517d1b2a281..e96b99eb800c 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -305,6 +305,10 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 
 		choose_new_asid(next, next_tlb_gen, &new_asid, &need_flush);
 
+		/* Let nmi_uaccess_okay() know that we're changing CR3. */
+		this_cpu_write(cpu_tlbstate.loaded_mm, LOADED_MM_SWITCHING);
+		barrier();
+
 		if (need_flush) {
 			this_cpu_write(cpu_tlbstate.ctxs[new_asid].ctx_id, next->context.ctx_id);
 			this_cpu_write(cpu_tlbstate.ctxs[new_asid].tlb_gen, next_tlb_gen);
@@ -335,6 +339,9 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 	if (next != &init_mm)
 		this_cpu_write(cpu_tlbstate.last_ctx_id, next->context.ctx_id);
 
+	/* Make sure we write CR3 before loaded_mm. */
+	barrier();
+
 	this_cpu_write(cpu_tlbstate.loaded_mm, next);
 	this_cpu_write(cpu_tlbstate.loaded_mm_asid, new_asid);
 }
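A note on the two barrier() calls above: they are compiler barriers
only, not SMP memory barriers, and that is enough here because an NMI
is delivered on the same CPU that is doing the switch, so the NMI
handler observes that CPU's stores in program order; the only reorder
that could break things is one done by the compiler. The C11 analogue
of this same-thread ordering is atomic_signal_fence(), and the sketch
below (all names invented, not kernel code) shows the same store
sequence expressed in portable terms.

#include <stdatomic.h>
#include <stdio.h>

#define MODEL_SWITCHING ((void *)1)

/* Stand-ins for the percpu loaded_mm variable and the CR3 register. */
static _Atomic(void *) loaded_mm_model;
static _Atomic(void *) cr3_model;

/* Same ordering as switch_mm_irqs_off(): mark, write "CR3", publish.
 * atomic_signal_fence() keeps the compiler from reordering these
 * stores relative to a same-thread signal handler (our "NMI"),
 * playing the role of the kernel's barrier(). */
static void publish_mm_switch(void *next)
{
        atomic_store_explicit(&loaded_mm_model, MODEL_SWITCHING,
                              memory_order_relaxed);
        atomic_signal_fence(memory_order_seq_cst);  /* like barrier() */
        atomic_store_explicit(&cr3_model, next, memory_order_relaxed);
        atomic_signal_fence(memory_order_seq_cst);  /* like barrier() */
        atomic_store_explicit(&loaded_mm_model, next, memory_order_relaxed);
}

int main(void)
{
        int dummy_mm;

        publish_mm_switch(&dummy_mm);
        printf("loaded_mm settled at %p\n",
               atomic_load(&loaded_mm_model));
        return 0;
}

The design choice this reflects: a full smp_wmb() or locked operation
on the CR3-switch path would cost cycles on every context switch,
while a compiler barrier is free at runtime and sufficient for a
same-CPU observer like the NMI handler.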