Date: Mon, 10 Sep 2018 11:23:33 +0200 (CEST)
From: Jiri Kosina
To: Thomas Gleixner, Ingo Molnar, Peter Zijlstra, Josh Poimboeuf,
    Andrea Arcangeli, "Woodhouse, David", Andi Kleen, Tim Chen,
    "Schaufler, Casey"
Cc: linux-kernel@vger.kernel.org, x86@kernel.org
Subject: [PATCH v5 1/2] x86/speculation: apply IBPB more strictly to avoid
 cross-process data leak
User-Agent: Alpine 2.21 (LSU 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-Mailing-List: linux-kernel@vger.kernel.org

From: Jiri Kosina

Currently, we are issuing IBPB only in cases when switching into a
non-dumpable process, the rationale being to protect such 'important and
security sensitive' processes (such as GPG) from data leak into a
different userspace process via
spectre v2.

This is however completely insufficient to provide proper
userspace-to-userspace spectrev2 protection, as any process can poison
branch buffers before being scheduled out, and the newly scheduled
process immediately becomes a spectrev2 victim.

In order to minimize the performance impact (for usecases that do
require spectrev2 protection), issue the barrier only in cases when
switching between processes where the victim can't be ptraced by the
potential attacker (as in such cases, the attacker doesn't have to
bother with branch buffers at all).

Fixes: 18bf3c3ea8 ("x86/speculation: Use Indirect Branch Prediction Barrier in context switch")
Originally-by: Tim Chen
Signed-off-by: Jiri Kosina
---
 arch/x86/mm/tlb.c      | 31 ++++++++++++++++++++-----------
 include/linux/ptrace.h |  4 ++++
 kernel/ptrace.c        | 12 ++++++++----
 3 files changed, 32 insertions(+), 15 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index e96b99eb800c..ed4444402441 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -7,6 +7,7 @@
 #include <linux/export.h>
 #include <linux/cpu.h>
 #include <linux/debugfs.h>
+#include <linux/ptrace.h>
 
 #include <asm/tlbflush.h>
 #include <asm/mmu_context.h>
@@ -180,6 +181,19 @@ static void sync_current_stack_to_mm(struct mm_struct *mm)
 	}
 }
 
+static bool ibpb_needed(struct task_struct *tsk, u64 last_ctx_id)
+{
+	/*
+	 * Check if the current (previous) task has access to the memory
+	 * of the @tsk (next) task. If access is denied, make sure to
+	 * issue an IBPB to stop user->user Spectre-v2 attacks.
+	 *
+	 * Note: __ptrace_may_access() returns 0 or -ERRNO.
+	 */
+	return (tsk && tsk->mm && tsk->mm->context.ctx_id != last_ctx_id &&
+		__ptrace_may_access(tsk, PTRACE_MODE_IBPB));
+}
+
 void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 			struct task_struct *tsk)
 {
@@ -262,18 +276,13 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 		 * one process from doing Spectre-v2 attacks on another.
 		 *
 		 * As an optimization, flush indirect branches only when
-		 * switching into processes that disable dumping. This
-		 * protects high value processes like gpg, without having
-		 * too high performance overhead. IBPB is *expensive*!
-		 *
-		 * This will not flush branches when switching into kernel
-		 * threads. It will also not flush if we switch to idle
-		 * thread and back to the same process. It will flush if we
-		 * switch to a different non-dumpable process.
+		 * switching into processes that can't be ptraced by the
+		 * current one (as in such a case, the attacker has a much
+		 * more convenient way to tamper with the next process than
+		 * branch buffer poisoning).
 		 */
-		if (tsk && tsk->mm &&
-		    tsk->mm->context.ctx_id != last_ctx_id &&
-		    get_dumpable(tsk->mm) != SUID_DUMP_USER)
+		if (static_cpu_has(X86_FEATURE_USE_IBPB) &&
+		    ibpb_needed(tsk, last_ctx_id))
 			indirect_branch_prediction_barrier();
 
 		if (IS_ENABLED(CONFIG_VMAP_STACK)) {
diff --git a/include/linux/ptrace.h b/include/linux/ptrace.h
index 4f36431c380b..983d3f5545a8 100644
--- a/include/linux/ptrace.h
+++ b/include/linux/ptrace.h
@@ -64,12 +64,15 @@ extern void exit_ptrace(struct task_struct *tracer, struct list_head *dead);
 #define PTRACE_MODE_NOAUDIT	0x04
 #define PTRACE_MODE_FSCREDS	0x08
 #define PTRACE_MODE_REALCREDS	0x10
+#define PTRACE_MODE_NOACCESS_CHK	0x20
 
 /* shorthands for READ/ATTACH and FSCREDS/REALCREDS combinations */
 #define PTRACE_MODE_READ_FSCREDS (PTRACE_MODE_READ | PTRACE_MODE_FSCREDS)
 #define PTRACE_MODE_READ_REALCREDS (PTRACE_MODE_READ | PTRACE_MODE_REALCREDS)
 #define PTRACE_MODE_ATTACH_FSCREDS (PTRACE_MODE_ATTACH | PTRACE_MODE_FSCREDS)
 #define PTRACE_MODE_ATTACH_REALCREDS (PTRACE_MODE_ATTACH | PTRACE_MODE_REALCREDS)
+#define PTRACE_MODE_IBPB (PTRACE_MODE_ATTACH | PTRACE_MODE_NOAUDIT \
+	| PTRACE_MODE_NOACCESS_CHK | PTRACE_MODE_REALCREDS)
 
 /**
  * ptrace_may_access - check whether the caller is permitted to access
@@ -86,6 +89,7 @@ extern void exit_ptrace(struct task_struct *tracer, struct list_head *dead);
  * process_vm_writev or ptrace (and should use the real credentials).
  */
 extern bool ptrace_may_access(struct task_struct *task, unsigned int mode);
+extern int __ptrace_may_access(struct task_struct *task, unsigned int mode);
 
 static inline int ptrace_reparented(struct task_struct *child)
 {
diff --git a/kernel/ptrace.c b/kernel/ptrace.c
index 21fec73d45d4..5c5e7cb597cd 100644
--- a/kernel/ptrace.c
+++ b/kernel/ptrace.c
@@ -268,7 +268,7 @@ static int ptrace_has_cap(struct user_namespace *ns, unsigned int mode)
 }
 
 /* Returns 0 on success, -errno on denial. */
-static int __ptrace_may_access(struct task_struct *task, unsigned int mode)
+int __ptrace_may_access(struct task_struct *task, unsigned int mode)
 {
 	const struct cred *cred = current_cred(), *tcred;
 	struct mm_struct *mm;
@@ -316,7 +316,8 @@ static int __ptrace_may_access(struct task_struct *task, unsigned int mode)
 	    gid_eq(caller_gid, tcred->sgid) &&
 	    gid_eq(caller_gid, tcred->gid))
 		goto ok;
-	if (ptrace_has_cap(tcred->user_ns, mode))
+	if (!(mode & PTRACE_MODE_NOACCESS_CHK) &&
+	    ptrace_has_cap(tcred->user_ns, mode))
 		goto ok;
 	rcu_read_unlock();
 	return -EPERM;
@@ -325,10 +326,13 @@ static int __ptrace_may_access(struct task_struct *task, unsigned int mode)
 	mm = task->mm;
 	if (mm &&
 	    ((get_dumpable(mm) != SUID_DUMP_USER) &&
-	     !ptrace_has_cap(mm->user_ns, mode)))
+	     ((mode & PTRACE_MODE_NOACCESS_CHK) ||
+	      !ptrace_has_cap(mm->user_ns, mode))))
 		return -EPERM;
 
-	return security_ptrace_access_check(task, mode);
+	if (!(mode & PTRACE_MODE_NOACCESS_CHK))
+		return security_ptrace_access_check(task, mode);
+	return 0;
 }
 
 bool ptrace_may_access(struct task_struct *task, unsigned int mode)
-- 
Jiri Kosina
SUSE Labs