From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Andy Lutomirski,
    Thomas Gleixner, Rik van Riel, Nadav Amit, Borislav Petkov,
    Jann Horn, Peter Zijlstra
Subject: [PATCH 4.14 136/165] x86/nmi: Fix NMI uaccess race against CR3 switching
Date: Mon, 3 Sep 2018 18:57:02 +0200
Message-Id: <20180903165702.368309717@linuxfoundation.org>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20180903165655.003605184@linuxfoundation.org>
References: <20180903165655.003605184@linuxfoundation.org>
User-Agent: quilt/0.65
X-stable: review
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

4.14-stable review patch.
If anyone has any objections, please let me know.

------------------

From: Andy Lutomirski

commit 4012e77a903d114f915fc607d6d2ed54a3d6c9b1 upstream.

A NMI can hit in the middle of context switching or in the middle of
switch_mm_irqs_off().  In either case, CR3 might not match current->mm,
which could cause copy_from_user_nmi() and friends to read the wrong
memory.

Fix it by adding a new nmi_uaccess_okay() helper and checking it in
copy_from_user_nmi() and in __copy_from_user_nmi()'s callers.

Signed-off-by: Andy Lutomirski
Signed-off-by: Thomas Gleixner
Reviewed-by: Rik van Riel
Cc: Nadav Amit
Cc: Borislav Petkov
Cc: Jann Horn
Cc: Peter Zijlstra
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/dd956eba16646fd0b15c3c0741269dfd84452dac.1535557289.git.luto@kernel.org
Signed-off-by: Greg Kroah-Hartman

---
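The fix is easiest to see as a small state machine. Below is a minimal,
compilable userspace model of the protocol (an illustrative sketch, not
kernel code: the globals stand in for the per-CPU cpu_tlbstate fields and
for current->mm, and the CR3 write is reduced to a comment):

#include <stdbool.h>
#include <stdio.h>

struct mm_struct { int id; };

#define LOADED_MM_SWITCHING ((struct mm_struct *)1)
#define barrier() __asm__ __volatile__("" ::: "memory")

static struct mm_struct *loaded_mm;	/* models cpu_tlbstate.loaded_mm */
static struct mm_struct *current_mm;	/* models current->mm */

static bool nmi_uaccess_okay(void)
{
	/*
	 * Conservative check: anything other than an exact match fails,
	 * and the sentinel can never equal a real mm_struct pointer.
	 */
	return loaded_mm == current_mm;
}

static void switch_mm_irqs_off(struct mm_struct *next)
{
	loaded_mm = LOADED_MM_SWITCHING;	/* announce: CR3 about to change */
	barrier();
	/* ... the CR3 write would happen here ... */
	barrier();				/* CR3 write before loaded_mm */
	loaded_mm = next;
	current_mm = next;	/* in reality the scheduler updates current */
}

int main(void)
{
	struct mm_struct a = { 1 }, b = { 2 };

	loaded_mm = current_mm = &a;
	printf("steady state: %d\n", nmi_uaccess_okay());	/* 1 */

	/* An NMI landing mid-switch sees the sentinel and bails out. */
	loaded_mm = LOADED_MM_SWITCHING;
	printf("mid-switch:   %d\n", nmi_uaccess_okay());	/* 0 */

	switch_mm_irqs_off(&b);
	printf("after switch: %d\n", nmi_uaccess_okay());	/* 1 */
	return 0;
}

The sentinel works because (struct mm_struct *)1 can never equal a real mm
pointer, so any NMI that lands inside the switch window fails the
loaded_mm == current_mm comparison and refuses the access.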
 arch/x86/events/core.c          |    2 +-
 arch/x86/include/asm/tlbflush.h |   40 ++++++++++++++++++++++++++++++++++++++++
 arch/x86/lib/usercopy.c         |    5 +++++
 arch/x86/mm/tlb.c               |    7 +++++++
 4 files changed, 53 insertions(+), 1 deletion(-)

--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -2462,7 +2462,7 @@ perf_callchain_user(struct perf_callchai
 
 	perf_callchain_store(entry, regs->ip);
 
-	if (!current->mm)
+	if (!nmi_uaccess_okay())
 		return;
 
 	if (perf_callchain_user32(regs, entry))
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -175,8 +175,16 @@ struct tlb_state {
 	 * are on. This means that it may not match current->active_mm,
 	 * which will contain the previous user mm when we're in lazy TLB
 	 * mode even if we've already switched back to swapper_pg_dir.
+	 *
+	 * During switch_mm_irqs_off(), loaded_mm will be set to
+	 * LOADED_MM_SWITCHING during the brief interrupts-off window
+	 * when CR3 and loaded_mm would otherwise be inconsistent. This
+	 * is for nmi_uaccess_okay()'s benefit.
 	 */
 	struct mm_struct *loaded_mm;
+
+#define LOADED_MM_SWITCHING ((struct mm_struct *)1)
+
 	u16 loaded_mm_asid;
 	u16 next_asid;
 	/* last user mm's ctx id */
@@ -246,6 +254,38 @@ struct tlb_state {
 };
 DECLARE_PER_CPU_SHARED_ALIGNED(struct tlb_state, cpu_tlbstate);
 
+/*
+ * Blindly accessing user memory from NMI context can be dangerous
+ * if we're in the middle of switching the current user task or
+ * switching the loaded mm. It can also be dangerous if we
+ * interrupted some kernel code that was temporarily using a
+ * different mm.
+ */
+static inline bool nmi_uaccess_okay(void)
+{
+	struct mm_struct *loaded_mm = this_cpu_read(cpu_tlbstate.loaded_mm);
+	struct mm_struct *current_mm = current->mm;
+
+	VM_WARN_ON_ONCE(!loaded_mm);
+
+	/*
+	 * The condition we want to check is
+	 * current_mm->pgd == __va(read_cr3_pa()). This may be slow, though,
+	 * if we're running in a VM with shadow paging, and nmi_uaccess_okay()
+	 * is supposed to be reasonably fast.
+	 *
+	 * Instead, we check the almost equivalent but somewhat conservative
+	 * condition below, and we rely on the fact that switch_mm_irqs_off()
+	 * sets loaded_mm to LOADED_MM_SWITCHING before writing to CR3.
+	 */
+	if (loaded_mm != current_mm)
+		return false;
+
+	VM_WARN_ON_ONCE(current_mm->pgd != __va(read_cr3_pa()));
+
+	return true;
+}
+
 /* Initialize cr4 shadow for this CPU. */
 static inline void cr4_init_shadow(void)
 {
--- a/arch/x86/lib/usercopy.c
+++ b/arch/x86/lib/usercopy.c
@@ -7,6 +7,8 @@
 #include <linux/uaccess.h>
 #include <linux/export.h>
 
+#include <asm/tlbflush.h>
+
 /*
  * We rely on the nested NMI work to allow atomic faults from the NMI path; the
  * nested NMI paths are careful to preserve CR2.
@@ -19,6 +21,9 @@ copy_from_user_nmi(void *to, const void
 	if (__range_not_ok(from, n, TASK_SIZE))
 		return n;
 
+	if (!nmi_uaccess_okay())
+		return n;
+
 	/*
 	 * Even though this function is typically called from NMI/IRQ context
 	 * disable pagefaults so that its behaviour is consistent even when
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -292,6 +292,10 @@ void switch_mm_irqs_off(struct mm_struct
 
 		choose_new_asid(next, next_tlb_gen, &new_asid, &need_flush);
 
+		/* Let nmi_uaccess_okay() know that we're changing CR3. */
+		this_cpu_write(cpu_tlbstate.loaded_mm, LOADED_MM_SWITCHING);
+		barrier();
+
 		if (need_flush) {
 			this_cpu_write(cpu_tlbstate.ctxs[new_asid].ctx_id, next->context.ctx_id);
 			this_cpu_write(cpu_tlbstate.ctxs[new_asid].tlb_gen, next_tlb_gen);
@@ -322,6 +326,9 @@ void switch_mm_irqs_off(struct mm_struct
 	if (next != &init_mm)
 		this_cpu_write(cpu_tlbstate.last_ctx_id, next->context.ctx_id);
 
+	/* Make sure we write CR3 before loaded_mm. */
+	barrier();
+
 	this_cpu_write(cpu_tlbstate.loaded_mm, next);
 	this_cpu_write(cpu_tlbstate.loaded_mm_asid, new_asid);
 }
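
For completeness, the guard the patch installs at the top of
copy_from_user_nmi() follows the same shape as the existing
__range_not_ok() bailout. A self-contained sketch of that calling
convention (the *_model() names are hypothetical stand-ins, and memcpy()
replaces the real pagefault-disabled copy):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical stand-in; pretend an mm switch is in flight. */
static bool nmi_uaccess_okay_model(void)
{
	return false;
}

/* Like the kernel's copy_from_user_nmi(), returns bytes NOT copied. */
static unsigned long
copy_from_user_nmi_model(void *to, const void *from, unsigned long n)
{
	if (!nmi_uaccess_okay_model())
		return n;	/* CR3 may not match current->mm: refuse */

	memcpy(to, from, n);	/* stand-in for the pagefault-disabled copy */
	return 0;
}

int main(void)
{
	char src[8] = "data", dst[8] = { 0 };
	unsigned long left;

	left = copy_from_user_nmi_model(dst, src, sizeof(src));
	printf("bytes not copied: %lu\n", left);	/* 8: caller bails */
	return 0;
}

Returning n, meaning "n bytes not copied", is the same failure convention
copy_from_user() uses, which is why existing callers need no new error
handling for the added check.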