From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Bhat" , "Matt Helsley (VMware)" , Alexey Makhalov , Bo Gan Subject: [PATCH 4.4 067/107] x86/speculation: Add prctl for Speculative Store Bypass mitigation Date: Mon, 23 Jul 2018 14:42:01 +0200 Message-Id: <20180723122416.639109622@linuxfoundation.org> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180723122413.003644357@linuxfoundation.org> References: <20180723122413.003644357@linuxfoundation.org> User-Agent: quilt/0.65 X-stable: review MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org 4.4-stable review patch. If anyone has any objections, please let me know. ------------------ From: Thomas Gleixner commit a73ec77ee17ec556fe7f165d00314cb7c047b1ac upstream Add prctl based control for Speculative Store Bypass mitigation and make it the default mitigation for Intel and AMD. Andi Kleen provided the following rationale (slightly redacted): There are multiple levels of impact of Speculative Store Bypass: 1) JITed sandbox. It cannot invoke system calls, but can do PRIME+PROBE and may have call interfaces to other code 2) Native code process. No protection inside the process at this level. 3) Kernel. 4) Between processes. The prctl tries to protect against case (1) doing attacks. If the untrusted code can do random system calls then control is already lost in a much worse way. So there needs to be system call protection in some way (using a JIT not allowing them or seccomp). Or rather if the process can subvert its environment somehow to do the prctl it can already execute arbitrary code, which is much worse than SSB. To put it differently, the point of the prctl is to not allow JITed code to read data it shouldn't read from its JITed sandbox. If it already has escaped its sandbox then it can already read everything it wants in its address space, and do much worse. The ability to control Speculative Store Bypass allows to enable the protection selectively without affecting overall system performance. Based on an initial patch from Tim Chen. Completely rewritten. Signed-off-by: Thomas Gleixner Reviewed-by: Konrad Rzeszutek Wilk Signed-off-by: David Woodhouse Signed-off-by: Greg Kroah-Hartman Signed-off-by: Srivatsa S. Bhat Reviewed-by: Matt Helsley (VMware) Reviewed-by: Alexey Makhalov Reviewed-by: Bo Gan Signed-off-by: Greg Kroah-Hartman --- Documentation/kernel-parameters.txt | 6 ++ arch/x86/include/asm/nospec-branch.h | 1 arch/x86/kernel/cpu/bugs.c | 83 ++++++++++++++++++++++++++++++----- 3 files changed, 79 insertions(+), 11 deletions(-) --- a/Documentation/kernel-parameters.txt +++ b/Documentation/kernel-parameters.txt @@ -3651,7 +3651,11 @@ bytes respectively. Such letter suffixes off - Unconditionally enable Speculative Store Bypass auto - Kernel detects whether the CPU model contains an implementation of Speculative Store Bypass and - picks the most appropriate mitigation + picks the most appropriate mitigation. + prctl - Control Speculative Store Bypass per thread + via prctl. Speculative Store Bypass is enabled + for a process by default. The state of the control + is inherited on fork. Not specifying this option is equivalent to spec_store_bypass_disable=auto. 
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -187,6 +187,7 @@ extern u64 x86_spec_ctrl_get_default(voi
 enum ssb_mitigation {
 	SPEC_STORE_BYPASS_NONE,
 	SPEC_STORE_BYPASS_DISABLE,
+	SPEC_STORE_BYPASS_PRCTL,
 };
 
 extern char __indirect_thunk_start[];
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -11,6 +11,8 @@
 #include <linux/utsname.h>
 #include <linux/cpu.h>
 #include <linux/module.h>
+#include <linux/nospec.h>
+#include <linux/prctl.h>
 
 #include <asm/nospec-branch.h>
 #include <asm/cmdline.h>
@@ -411,20 +413,23 @@ enum ssb_mitigation_cmd {
 	SPEC_STORE_BYPASS_CMD_NONE,
 	SPEC_STORE_BYPASS_CMD_AUTO,
 	SPEC_STORE_BYPASS_CMD_ON,
+	SPEC_STORE_BYPASS_CMD_PRCTL,
 };
 
 static const char *ssb_strings[] = {
 	[SPEC_STORE_BYPASS_NONE]	= "Vulnerable",
-	[SPEC_STORE_BYPASS_DISABLE]	= "Mitigation: Speculative Store Bypass disabled"
+	[SPEC_STORE_BYPASS_DISABLE]	= "Mitigation: Speculative Store Bypass disabled",
+	[SPEC_STORE_BYPASS_PRCTL]	= "Mitigation: Speculative Store Bypass disabled via prctl"
 };
 
 static const struct {
 	const char *option;
 	enum ssb_mitigation_cmd cmd;
 } ssb_mitigation_options[] = {
-	{ "auto", SPEC_STORE_BYPASS_CMD_AUTO }, /* Platform decides */
-	{ "on",   SPEC_STORE_BYPASS_CMD_ON },   /* Disable Speculative Store Bypass */
-	{ "off",  SPEC_STORE_BYPASS_CMD_NONE }, /* Don't touch Speculative Store Bypass */
+	{ "auto",  SPEC_STORE_BYPASS_CMD_AUTO },  /* Platform decides */
+	{ "on",    SPEC_STORE_BYPASS_CMD_ON },    /* Disable Speculative Store Bypass */
+	{ "off",   SPEC_STORE_BYPASS_CMD_NONE },  /* Don't touch Speculative Store Bypass */
+	{ "prctl", SPEC_STORE_BYPASS_CMD_PRCTL }, /* Disable Speculative Store Bypass via prctl */
 };
 
 static enum ssb_mitigation_cmd __init ssb_parse_cmdline(void)
@@ -474,14 +479,15 @@ static enum ssb_mitigation_cmd __init __
 
 	switch (cmd) {
 	case SPEC_STORE_BYPASS_CMD_AUTO:
-		/*
-		 * AMD platforms by default don't need SSB mitigation.
-		 */
-		if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD)
-			break;
+		/* Choose prctl as the default mode */
+		mode = SPEC_STORE_BYPASS_PRCTL;
+		break;
 	case SPEC_STORE_BYPASS_CMD_ON:
 		mode = SPEC_STORE_BYPASS_DISABLE;
 		break;
+	case SPEC_STORE_BYPASS_CMD_PRCTL:
+		mode = SPEC_STORE_BYPASS_PRCTL;
+		break;
 	case SPEC_STORE_BYPASS_CMD_NONE:
 		break;
 	}
@@ -492,7 +498,7 @@ static enum ssb_mitigation_cmd __init __
 	 *  - X86_FEATURE_RDS - CPU is able to turn off speculative store bypass
 	 *  - X86_FEATURE_SPEC_STORE_BYPASS_DISABLE - engage the mitigation
 	 */
-	if (mode != SPEC_STORE_BYPASS_NONE) {
+	if (mode == SPEC_STORE_BYPASS_DISABLE) {
 		setup_force_cpu_cap(X86_FEATURE_SPEC_STORE_BYPASS_DISABLE);
 		/*
 		 * Intel uses the SPEC CTRL MSR Bit(2) for this, while AMD uses
@@ -523,6 +529,63 @@ static void ssb_select_mitigation()
 
 #undef pr_fmt
 
+static int ssb_prctl_set(unsigned long ctrl)
+{
+	bool rds = !!test_tsk_thread_flag(current, TIF_RDS);
+
+	if (ssb_mode != SPEC_STORE_BYPASS_PRCTL)
+		return -ENXIO;
+
+	if (ctrl == PR_SPEC_ENABLE)
+		clear_tsk_thread_flag(current, TIF_RDS);
+	else
+		set_tsk_thread_flag(current, TIF_RDS);
+
+	if (rds != !!test_tsk_thread_flag(current, TIF_RDS))
+		speculative_store_bypass_update();
+
+	return 0;
+}
+
+static int ssb_prctl_get(void)
+{
+	switch (ssb_mode) {
+	case SPEC_STORE_BYPASS_DISABLE:
+		return PR_SPEC_DISABLE;
+	case SPEC_STORE_BYPASS_PRCTL:
+		if (test_tsk_thread_flag(current, TIF_RDS))
+			return PR_SPEC_PRCTL | PR_SPEC_DISABLE;
+		return PR_SPEC_PRCTL | PR_SPEC_ENABLE;
+	default:
+		if (boot_cpu_has_bug(X86_BUG_SPEC_STORE_BYPASS))
+			return PR_SPEC_ENABLE;
+		return PR_SPEC_NOT_AFFECTED;
+	}
+}
+
+int arch_prctl_spec_ctrl_set(unsigned long which, unsigned long ctrl)
+{
+	if (ctrl != PR_SPEC_ENABLE && ctrl != PR_SPEC_DISABLE)
+		return -ERANGE;
+
+	switch (which) {
+	case PR_SPEC_STORE_BYPASS:
+		return ssb_prctl_set(ctrl);
+	default:
+		return -ENODEV;
+	}
+}
+
+int arch_prctl_spec_ctrl_get(unsigned long which)
+{
+	switch (which) {
+	case PR_SPEC_STORE_BYPASS:
+		return ssb_prctl_get();
+	default:
+		return -ENODEV;
+	}
+}
+
 void x86_spec_ctrl_setup_ap(void)
 {
 	if (boot_cpu_has(X86_FEATURE_IBRS))
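
For reference, a minimal userspace sketch of how a task would use the
interface this backport wires up (illustrative only, not part of the patch).
It assumes the PR_SET_SPECULATION_CTRL / PR_GET_SPECULATION_CTRL prctls and
the PR_SPEC_* constants introduced by the companion "prctl: Add speculation
control prctls" patch in this series; the fallback defines below mirror that
patch and exist only for builds against older <linux/prctl.h> headers.

/*
 * Illustrative only: exercise the per-task speculation control prctl
 * exposed by this series.  The PR_* fallback values mirror the companion
 * "prctl: Add speculation control prctls" patch.
 */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/prctl.h>

#ifndef PR_SET_SPECULATION_CTRL
# define PR_GET_SPECULATION_CTRL	52
# define PR_SET_SPECULATION_CTRL	53
# define PR_SPEC_STORE_BYPASS		0
# define PR_SPEC_NOT_AFFECTED		0
# define PR_SPEC_PRCTL			(1UL << 0)
# define PR_SPEC_ENABLE			(1UL << 1)
# define PR_SPEC_DISABLE		(1UL << 2)
#endif

int main(void)
{
	/* Query the current Speculative Store Bypass state of this task. */
	int state = prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, 0, 0, 0);

	if (state < 0) {
		/* EINVAL typically means the kernel lacks these prctls. */
		fprintf(stderr, "PR_GET_SPECULATION_CTRL: %s\n", strerror(errno));
		return 1;
	}
	printf("SSB state before: 0x%x\n", state);

	/*
	 * Disable Speculative Store Bypass for this task, e.g. before
	 * running JITed, untrusted code.  This only succeeds when the
	 * kernel selected the prctl mitigation mode; otherwise
	 * ssb_prctl_set() above returns -ENXIO.
	 */
	if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS,
		  PR_SPEC_DISABLE, 0, 0) < 0) {
		fprintf(stderr, "PR_SET_SPECULATION_CTRL: %s\n", strerror(errno));
		return 1;
	}

	state = prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, 0, 0, 0);
	printf("SSB state after:  0x%x (expect PR_SPEC_PRCTL|PR_SPEC_DISABLE)\n", state);
	return 0;
}

With this patch the "auto" default becomes the prctl mode (equivalent to
booting with spec_store_bypass_disable=prctl), so the PR_SPEC_DISABLE
request above succeeds by default and fails with -ENXIO only when the
mitigation was forced on or off on the kernel command line.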