2018-09-26 01:21:52

by Tim Chen

Subject: [Patch v2 0/4] Provide options to enable spectre_v2 userspace-userspace protection

I have merged Tom's changes to extend the patchset for AMD cpus, and
also added a prctl option to control per process indirect branch
speculation per Peter's comments.

Tim

Changes:
v2:
1. Extend per process STIBP to AMD cpus
2. Add prctl option to control per process indirect branch speculation
3. Bug fixes and cleanups

This patchset provides an option to apply IBPB and STIBP mitigation
to only non-dumpable processes.

Jiri's patch to harden spectre_v2 makes IBPB and STIBP available for
general spectre v2 app to app mitigation. IBPB will be issued when
switching to an app that's not ptraceable by the previous
app, and STIBP will always be turned on.

However, leaving STIBP on all the time is expensive for certain
applications that have frequent indirect branches. One such application
is perlbench in the SpecInt Rate 2006 test suite, which shows a
21% reduction in throughput. Other applications like bzip2 in
the same test suite, with minimal indirect branches, show
only a 0.7% reduction in throughput. IBPB will also impose
overhead during context switches.

App to app exploits are in general difficult
due to address space layout randomization in apps and
the need to know an app's address space layout ahead of time.
Users may not wish to incur the app to app performance
overhead from IBPB and STIBP for general non security sensitive apps,
and may prefer to use these mitigations only for non-dumpable apps.
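
For example, an app can already place itself in the non-dumpable
category with the existing PR_SET_DUMPABLE prctl. A minimal userspace
sketch (illustrative only, not part of this patchset):

#include <stdio.h>
#include <sys/prctl.h>

int main(void)
{
        /* Mark this process non-dumpable so the lite mode mitigations apply. */
        if (prctl(PR_SET_DUMPABLE, 0, 0, 0, 0))
                perror("prctl(PR_SET_DUMPABLE)");

        /* ... security sensitive work runs with IBPB/STIBP protection ... */
        return 0;
}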

The first patch provides a lite option for spectre_v2 app to app
mitigation where IBPB is only issued for security sensitive
non-dumpable apps. The second patch extends this option
so that STIBP is likewise only issued for non-dumpable apps.
The third patch extends the per process STIBP update to AMD cpus.
The fourth patch adds a prctl option to control per process
indirect branch speculation.

Thomas Lendacky (1):
x86/speculation: Extend per process STIBP to AMD cpus.

Tim Chen (3):
x86/speculation: Option to select app to app mitigation for spectre_v2
x86/speculation: Provide application property based STIBP protection
x86/speculation: Add prctl to control indirect branch speculation per
process

Documentation/admin-guide/kernel-parameters.txt | 13 ++
Documentation/userspace-api/spec_ctrl.rst | 8 +
arch/x86/include/asm/msr-index.h | 3 +-
arch/x86/include/asm/nospec-branch.h | 9 ++
arch/x86/include/asm/spec-ctrl.h | 12 ++
arch/x86/include/asm/thread_info.h | 4 +-
arch/x86/kernel/cpu/bugs.c | 185 +++++++++++++++++++++++-
arch/x86/kernel/process.c | 58 ++++++--
arch/x86/mm/tlb.c | 21 ++-
fs/exec.c | 13 +-
include/linux/sched.h | 5 +
include/linux/sched/coredump.h | 2 +-
include/uapi/linux/prctl.h | 1 +
kernel/cred.c | 2 +-
kernel/sys.c | 2 +-
tools/include/uapi/linux/prctl.h | 1 +
16 files changed, 310 insertions(+), 29 deletions(-)

--
2.9.4



2018-09-26 01:19:26

by Tim Chen

Subject: [Patch v2 1/4] x86/speculation: Option to select app to app mitigation for spectre_v2

Jiri Kosina's patch makes IBPB and STIBP available for
general spectre v2 app to app mitigation. IBPB will be issued for
switching to an app that's not ptraceable by the previous
app and STIBP will be always turned on.

However, app to app exploit is in general difficult
due to address space layout randomization in apps and
the need to know an app's address space layout ahead of time.
Users may not wish to incur app to app performance
overhead from IBPB and STIBP for general non security sensitive apps.

This patch provides a lite option for spectre_v2 app to app
mitigation where IBPB is only issued for security sensitive
non-dumpable app.

The strict option will keep system at high security level
where IBPB and STIBP are used to defend all apps against
spectre_v2 app to app attack.

Signed-off-by: Tim Chen <[email protected]>
---
Documentation/admin-guide/kernel-parameters.txt | 11 +++
arch/x86/include/asm/nospec-branch.h | 9 +++
arch/x86/kernel/cpu/bugs.c | 95 +++++++++++++++++++++++--
arch/x86/mm/tlb.c | 19 +++--
4 files changed, 126 insertions(+), 8 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 64a3bf5..6243144 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4186,6 +4186,17 @@
Not specifying this option is equivalent to
spectre_v2=auto.

+ spectre_v2_app2app=
+ [X86] Control app to app mitigation of Spectre variant 2
+ (indirect branch speculation) vulnerability.
+
+ lite - only turn on mitigation for non-dumpable processes
+ strict - protect against attacks for all user processes
+ auto - let kernel decide lite or strict mode
+
+ Not specifying this option is equivalent to
+ spectre_v2_app2app=auto.
+
spec_store_bypass_disable=
[HW] Control Speculative Store Bypass (SSB) Disable mitigation
(Speculative Store Bypass vulnerability)
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index fd2a8c1..c59a6c4 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -3,6 +3,7 @@
#ifndef _ASM_X86_NOSPEC_BRANCH_H_
#define _ASM_X86_NOSPEC_BRANCH_H_

+#include <linux/static_key.h>
#include <asm/alternative.h>
#include <asm/alternative-asm.h>
#include <asm/cpufeatures.h>
@@ -217,6 +218,12 @@ enum spectre_v2_mitigation {
SPECTRE_V2_IBRS_ENHANCED,
};

+enum spectre_v2_app2app_mitigation {
+ SPECTRE_V2_APP2APP_NONE,
+ SPECTRE_V2_APP2APP_LITE,
+ SPECTRE_V2_APP2APP_STRICT,
+};
+
/* The Speculative Store Bypass disable variants */
enum ssb_mitigation {
SPEC_STORE_BYPASS_NONE,
@@ -228,6 +235,8 @@ enum ssb_mitigation {
extern char __indirect_thunk_start[];
extern char __indirect_thunk_end[];

+DECLARE_STATIC_KEY_FALSE(spectre_v2_app_lite);
+
/*
* On VMEXIT we must ensure that no RSB predictions learned in the guest
* can be followed in the host, by overwriting the RSB completely. Both
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index ee46dcb..c967012 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -133,6 +133,12 @@ enum spectre_v2_mitigation_cmd {
SPECTRE_V2_CMD_RETPOLINE_AMD,
};

+enum spectre_v2_app2app_mitigation_cmd {
+ SPECTRE_V2_APP2APP_CMD_AUTO,
+ SPECTRE_V2_APP2APP_CMD_LITE,
+ SPECTRE_V2_APP2APP_CMD_STRICT,
+};
+
static const char *spectre_v2_strings[] = {
[SPECTRE_V2_NONE] = "Vulnerable",
[SPECTRE_V2_RETPOLINE_MINIMAL] = "Vulnerable: Minimal generic ASM retpoline",
@@ -142,12 +148,24 @@ static const char *spectre_v2_strings[] = {
[SPECTRE_V2_IBRS_ENHANCED] = "Mitigation: Enhanced IBRS",
};

+static const char *spectre_v2_app2app_strings[] = {
+ [SPECTRE_V2_APP2APP_NONE] = "App-App Vulnerable",
+ [SPECTRE_V2_APP2APP_LITE] = "App-App Mitigation: Protect only non-dumpable process",
+ [SPECTRE_V2_APP2APP_STRICT] = "App-App Mitigation: Full app to app attack protection",
+};
+
+DEFINE_STATIC_KEY_FALSE(spectre_v2_app_lite);
+EXPORT_SYMBOL(spectre_v2_app_lite);
+
#undef pr_fmt
#define pr_fmt(fmt) "Spectre V2 : " fmt

static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init =
SPECTRE_V2_NONE;

+static enum spectre_v2_mitigation spectre_v2_app2app_enabled __ro_after_init =
+ SPECTRE_V2_APP2APP_NONE;
+
void
x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool setguest)
{
@@ -275,6 +293,46 @@ static const struct {
{ "auto", SPECTRE_V2_CMD_AUTO, false },
};

+static const struct {
+ const char *option;
+ enum spectre_v2_app2app_mitigation_cmd cmd;
+ bool secure;
+} app2app_mitigation_options[] = {
+ { "lite", SPECTRE_V2_APP2APP_CMD_LITE, false },
+ { "strict", SPECTRE_V2_APP2APP_CMD_STRICT, false },
+ { "auto", SPECTRE_V2_APP2APP_CMD_AUTO, false },
+};
+
+static enum spectre_v2_app2app_mitigation_cmd __init spectre_v2_parse_app2app_cmdline(void)
+{
+ char arg[20];
+ int ret, i;
+ enum spectre_v2_app2app_mitigation_cmd cmd = SPECTRE_V2_APP2APP_CMD_AUTO;
+
+ ret = cmdline_find_option(boot_command_line, "spectre_v2_app2app", arg, sizeof(arg));
+ if (ret < 0)
+ return SPECTRE_V2_APP2APP_CMD_AUTO;
+
+ for (i = 0; i < ARRAY_SIZE(app2app_mitigation_options); i++) {
+ if (!match_option(arg, ret, app2app_mitigation_options[i].option))
+ continue;
+ cmd = app2app_mitigation_options[i].cmd;
+ break;
+ }
+
+ if (i >= ARRAY_SIZE(app2app_mitigation_options)) {
+ pr_err("unknown app to app protection option (%s). Switching to AUTO select\n", arg);
+ return SPECTRE_V2_APP2APP_CMD_AUTO;
+ }
+
+ if (app2app_mitigation_options[i].secure)
+ spec2_print_if_secure(app2app_mitigation_options[i].option);
+ else
+ spec2_print_if_insecure(app2app_mitigation_options[i].option);
+
+ return cmd;
+}
+
static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
{
char arg[20];
@@ -325,6 +383,9 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)

static bool stibp_needed(void)
{
+ if (static_branch_unlikely(&spectre_v2_app_lite))
+ return false;
+
if (spectre_v2_enabled == SPECTRE_V2_NONE)
return false;

@@ -366,7 +427,9 @@ void arch_smt_update(void)
static void __init spectre_v2_select_mitigation(void)
{
enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
+ enum spectre_v2_app2app_mitigation_cmd app2app_cmd = spectre_v2_parse_app2app_cmdline();
enum spectre_v2_mitigation mode = SPECTRE_V2_NONE;
+ enum spectre_v2_app2app_mitigation app2app_mode = SPECTRE_V2_APP2APP_NONE;

/*
* If the CPU is not affected and the command line mode is NONE or AUTO
@@ -376,6 +439,17 @@ static void __init spectre_v2_select_mitigation(void)
(cmd == SPECTRE_V2_CMD_NONE || cmd == SPECTRE_V2_CMD_AUTO))
return;

+ switch (app2app_cmd) {
+ case SPECTRE_V2_APP2APP_CMD_LITE:
+ case SPECTRE_V2_APP2APP_CMD_AUTO:
+ app2app_mode = SPECTRE_V2_APP2APP_LITE;
+ break;
+
+ case SPECTRE_V2_APP2APP_CMD_STRICT:
+ app2app_mode = SPECTRE_V2_APP2APP_STRICT;
+ break;
+ }
+
switch (cmd) {
case SPECTRE_V2_CMD_NONE:
return;
@@ -427,6 +501,11 @@ static void __init spectre_v2_select_mitigation(void)
}

specv2_set_mode:
+ spectre_v2_app2app_enabled = app2app_mode;
+ pr_info("%s\n", spectre_v2_app2app_strings[app2app_mode]);
+ if (app2app_mode == SPECTRE_V2_APP2APP_LITE)
+ static_branch_enable(&spectre_v2_app_lite);
+
spectre_v2_enabled = mode;
pr_info("%s\n", spectre_v2_strings[mode]);

@@ -441,8 +520,8 @@ static void __init spectre_v2_select_mitigation(void)
setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
pr_info("Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch\n");

- /* Initialize Indirect Branch Prediction Barrier if supported */
- if (boot_cpu_has(X86_FEATURE_IBPB)) {
+ /* Initialize Indirect Branch Prediction Barrier if supported and not disabled */
+ if (boot_cpu_has(X86_FEATURE_IBPB) && app2app_mode != SPECTRE_V2_APP2APP_NONE) {
setup_force_cpu_cap(X86_FEATURE_USE_IBPB);
pr_info("Spectre v2 mitigation: Enabling Indirect Branch Prediction Barrier\n");
}
@@ -875,8 +954,16 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr

case X86_BUG_SPECTRE_V2:
mutex_lock(&spec_ctrl_mutex);
- ret = sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
- boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB" : "",
+ if (static_branch_unlikely(&spectre_v2_app_lite))
+ ret = sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
+ boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB-lite" : "",
+ boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
+ (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ? ", STIBP" : "",
+ boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
+ spectre_v2_module_string());
+ else
+ ret = sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
+ boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB-strict" : "",
boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
(x86_spec_ctrl_base & SPEC_CTRL_STIBP) ? ", STIBP" : "",
boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index ed44444..14522aa 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -184,14 +184,25 @@ static void sync_current_stack_to_mm(struct mm_struct *mm)
static bool ibpb_needed(struct task_struct *tsk, u64 last_ctx_id)
{
/*
- * Check if the current (previous) task has access to the memory
- * of the @tsk (next) task. If access is denied, make sure to
+ * For lite protection mode, we only protect the non-dumpable
+ * processes.
+ *
+ * Otherwise check if the current (previous) task has access to the memory
+ * of the @tsk (next) task for strict app to app protection.
+ * If access is denied, make sure to
* issue a IBPB to stop user->user Spectre-v2 attacks.
*
* Note: __ptrace_may_access() returns 0 or -ERRNO.
*/
- return (tsk && tsk->mm && tsk->mm->context.ctx_id != last_ctx_id &&
- __ptrace_may_access(tsk, PTRACE_MODE_IBPB));
+
+ /* skip IBPB if no context changes */
+ if (!tsk || !tsk->mm || tsk->mm->context.ctx_id == last_ctx_id)
+ return false;
+
+ if (static_branch_unlikely(&spectre_v2_app_lite))
+ return (get_dumpable(tsk->mm) != SUID_DUMP_USER);
+ else
+ return (__ptrace_may_access(tsk, PTRACE_MODE_IBPB));
}

void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
--
2.9.4


2018-09-26 01:19:28

by Tim Chen

Subject: [Patch v2 4/4] x86/speculation: Add prctl to control indirect branch speculation per process

To migitgate possible app to app attack from branch target buffer poisoning,
a new prctl is provided to control branch speculation for applications in
user app. The following interfaces are provided:

prctl(PR_SET_SPECULATION_CTRL, PR_INDIR_BRANCH, PR_SPEC_DISABLE, 0, 0);
- Disable branch target speculation to protect against app to app
style attack using IBPB and STIBP

prctl(PR_SET_SPECULATION_CTRL, PR_INDIR_BRANCH, PR_SPEC_ENABLE, 0, 0);
- Allow branch target speculation, no mitigation for Spectre V2

prctl(PR_GET_SPECULATION_CTRL, PR_INDIR_BRANCH, 0, 0, 0)
- Query the indirect branch speculation restriction on a process
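
A minimal usage sketch (assuming the new PR_INDIR_BRANCH definition
from this patch is visible in the uapi headers; the PR_SPEC_* constants
come from <linux/prctl.h>):

#include <stdio.h>
#include <sys/prctl.h>
#include <linux/prctl.h>

#ifndef PR_INDIR_BRANCH
#define PR_INDIR_BRANCH 1       /* added by this patch */
#endif

int main(void)
{
        /* Restrict indirect branch speculation for this task (IBPB + STIBP). */
        if (prctl(PR_SET_SPECULATION_CTRL, PR_INDIR_BRANCH, PR_SPEC_DISABLE, 0, 0))
                perror("PR_SET_SPECULATION_CTRL");

        /* Query the restriction; the return value is a PR_SPEC_* bitmask. */
        printf("state: 0x%x\n",
               (unsigned int)prctl(PR_GET_SPECULATION_CTRL, PR_INDIR_BRANCH, 0, 0, 0));
        return 0;
}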

Signed-off-by: Tim Chen <[email protected]>
---
Documentation/admin-guide/kernel-parameters.txt | 4 +-
Documentation/userspace-api/spec_ctrl.rst | 8 +++
arch/x86/kernel/cpu/bugs.c | 82 ++++++++++++++++++++++++-
arch/x86/mm/tlb.c | 30 ++-------
fs/exec.c | 13 +++-
include/linux/sched.h | 5 ++
include/linux/sched/coredump.h | 2 +-
include/uapi/linux/prctl.h | 1 +
kernel/cred.c | 2 +-
kernel/sys.c | 2 +-
tools/include/uapi/linux/prctl.h | 1 +
11 files changed, 117 insertions(+), 33 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 6243144..640ce9a 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4190,7 +4190,9 @@
[X86] Control app to app mitigation of Spectre variant 2
(indirect branch speculation) vulnerability.

- lite - only turn on mitigation for non-dumpable processes
+ lite - turn on mitigation for non-dumpable processes
+ or processes that has indirect branch restricted
+ via prctl's PR_SET_SPECULATION_CTRL option
strict - protect against attacks for all user processes
auto - let kernel decide lite or strict mode

diff --git a/Documentation/userspace-api/spec_ctrl.rst b/Documentation/userspace-api/spec_ctrl.rst
index 32f3d55..aa71e84 100644
--- a/Documentation/userspace-api/spec_ctrl.rst
+++ b/Documentation/userspace-api/spec_ctrl.rst
@@ -92,3 +92,11 @@ Speculation misfeature controls
* prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, PR_SPEC_ENABLE, 0, 0);
* prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, PR_SPEC_DISABLE, 0, 0);
* prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, PR_SPEC_FORCE_DISABLE, 0, 0);
+
+- PR_INDIR_BRANCH: Indirect Branch Speculation in Applications
+ (Mitigate Spectre V2 style user space app to app attack)
+
+ Invocations:
+ * prctl(PR_GET_SPECULATION_CTRL, PR_INDIR_BRANCH, 0, 0, 0);
+ * prctl(PR_SET_SPECULATION_CTRL, PR_INDIR_BRANCH, PR_SPEC_ENABLE, 0, 0);
+ * prctl(PR_SET_SPECULATION_CTRL, PR_INDIR_BRANCH, PR_SPEC_DISABLE, 0, 0);
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 052f1a5..2ec531f 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -14,6 +14,7 @@
#include <linux/module.h>
#include <linux/nospec.h>
#include <linux/prctl.h>
+#include <linux/sched/coredump.h>

#include <asm/spec-ctrl.h>
#include <asm/cmdline.h>
@@ -150,7 +151,7 @@ static const char *spectre_v2_strings[] = {

static const char *spectre_v2_app2app_strings[] = {
[SPECTRE_V2_APP2APP_NONE] = "App-App Vulnerable",
- [SPECTRE_V2_APP2APP_LITE] = "App-App Mitigation: Protect only non-dumpable process",
+ [SPECTRE_V2_APP2APP_LITE] = "App-App Mitigation: Protect non-dumpable or indir branch restricted process",
[SPECTRE_V2_APP2APP_STRICT] = "App-App Mitigation: Full app to app attack protection",
};

@@ -728,17 +729,74 @@ static int ssb_prctl_set(struct task_struct *task, unsigned long ctrl)
return 0;
}

+static int indir_branch_prctl_set(struct task_struct *task, unsigned long ctrl)
+{
+ bool update;
+
+ if (spectre_v2_app2app_enabled != SPECTRE_V2_APP2APP_LITE &&
+ spectre_v2_app2app_enabled != SPECTRE_V2_APP2APP_STRICT)
+ return -ENXIO;
+
+ switch (ctrl) {
+ case PR_SPEC_ENABLE:
+ if (spectre_v2_app2app_enabled == SPECTRE_V2_APP2APP_STRICT)
+ return -ENXIO;
+ if (get_dumpable(task->mm) != SUID_DUMP_USER)
+ return -ENXIO;
+ task_clear_spec_indir_branch_disable(task);
+ update = test_and_clear_tsk_thread_flag(task, TIF_STIBP);
+ break;
+ case PR_SPEC_DISABLE:
+ if (spectre_v2_app2app_enabled == SPECTRE_V2_APP2APP_STRICT)
+ return 0;
+ task_set_spec_indir_branch_disable(task);
+ update = !test_and_set_tsk_thread_flag(task, TIF_STIBP);
+ break;
+ default:
+ return -ERANGE;
+ }
+
+ /*
+ * If being set on non-current task, delay setting the CPU
+ * mitigation until it is next scheduled.
+ * Use speculative_store_bypass_update will update SPEC_CTRL MSR
+ */
+ if (task == current && update)
+ speculative_store_bypass_update_current();
+
+ return 0;
+}
+
int arch_prctl_spec_ctrl_set(struct task_struct *task, unsigned long which,
unsigned long ctrl)
{
switch (which) {
case PR_SPEC_STORE_BYPASS:
return ssb_prctl_set(task, ctrl);
+ case PR_INDIR_BRANCH:
+ return indir_branch_prctl_set(task, ctrl);
default:
return -ENODEV;
}
}

+void arch_set_dumpable(struct task_struct *tsk, struct mm_struct *mm, int value)
+{
+ if (!static_branch_unlikely(&spectre_v2_app_lite))
+ return;
+ if (!static_cpu_has(X86_FEATURE_STIBP))
+ return;
+
+ if ((unsigned) value != SUID_DUMP_USER) {
+ set_tsk_thread_flag(tsk, TIF_STIBP);
+ return;
+ }
+
+ if (!task_spec_indir_branch_disable(tsk)) {
+ clear_tsk_thread_flag(tsk, TIF_STIBP);
+ }
+}
+
#ifdef CONFIG_SECCOMP
void arch_seccomp_spec_mitigate(struct task_struct *task)
{
@@ -766,11 +824,33 @@ static int ssb_prctl_get(struct task_struct *task)
}
}

+static int indir_branch_prctl_get(struct task_struct *task)
+{
+ if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2))
+ return PR_SPEC_NOT_AFFECTED;
+
+ switch (spectre_v2_app2app_enabled) {
+ case SPECTRE_V2_APP2APP_NONE:
+ return PR_SPEC_ENABLE;
+ case SPECTRE_V2_APP2APP_LITE:
+ if (task_spec_indir_branch_disable(task) ||
+ get_dumpable(task->mm) != SUID_DUMP_USER)
+ return PR_SPEC_PRCTL | PR_SPEC_DISABLE;
+ return PR_SPEC_PRCTL | PR_SPEC_ENABLE;
+ case SPECTRE_V2_APP2APP_STRICT:
+ return PR_SPEC_PRCTL | PR_SPEC_DISABLE;
+ default:
+ return PR_SPEC_NOT_AFFECTED;
+ }
+}
+
int arch_prctl_spec_ctrl_get(struct task_struct *task, unsigned long which)
{
switch (which) {
case PR_SPEC_STORE_BYPASS:
return ssb_prctl_get(task);
+ case PR_INDIR_BRANCH:
+ return indir_branch_prctl_get(task);
default:
return -ENODEV;
}
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index b3d1daa..65329a7 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -184,8 +184,9 @@ static void sync_current_stack_to_mm(struct mm_struct *mm)
static bool ibpb_needed(struct task_struct *tsk, u64 last_ctx_id)
{
/*
- * For lite protection mode, we only protect the non-dumpable
- * processes.
+ * For lite protection mode, we protect processes
+ * where the user explicitly disable indirect branch
+ * speculation or mark the process as non-dumpable.
*
* Otherwise check if the current (previous) task has access to the memory
* of the @tsk (next) task for strict app to app protection.
@@ -200,30 +201,12 @@ static bool ibpb_needed(struct task_struct *tsk, u64 last_ctx_id)
return false;

if (static_branch_unlikely(&spectre_v2_app_lite))
- return (get_dumpable(tsk->mm) != SUID_DUMP_USER);
+ return (get_dumpable(tsk->mm) != SUID_DUMP_USER ||
+ task_thread_info(tsk)->flags & _TIF_STIBP);
else
return (__ptrace_may_access(tsk, PTRACE_MODE_IBPB));
}

-static void set_stibp(struct task_struct *tsk)
-{
- /*
- * For lite protection mode, we set STIBP only
- * for non-dumpable processes.
- */
-
- if (!static_branch_unlikely(&spectre_v2_app_lite))
- return;
-
- if (!tsk || !tsk->mm)
- return;
-
- if (get_dumpable(tsk->mm) != SUID_DUMP_USER)
- set_tsk_thread_flag(tsk, TIF_STIBP);
- else
- clear_tsk_thread_flag(tsk, TIF_STIBP);
-}
-
void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
struct task_struct *tsk)
{
@@ -315,9 +298,6 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
ibpb_needed(tsk, last_ctx_id))
indirect_branch_prediction_barrier();

- if (static_cpu_has(X86_FEATURE_STIBP))
- set_stibp(tsk);
-
if (IS_ENABLED(CONFIG_VMAP_STACK)) {
/*
* If our current stack is in vmalloc space and isn't
diff --git a/fs/exec.c b/fs/exec.c
index 1ebf6e5..89edadd 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -1362,9 +1362,9 @@ void setup_new_exec(struct linux_binprm * bprm)
if (bprm->interp_flags & BINPRM_FLAGS_ENFORCE_NONDUMP ||
!(uid_eq(current_euid(), current_uid()) &&
gid_eq(current_egid(), current_gid())))
- set_dumpable(current->mm, suid_dumpable);
+ set_dumpable(current, current->mm, suid_dumpable);
else
- set_dumpable(current->mm, SUID_DUMP_USER);
+ set_dumpable(current, current->mm, SUID_DUMP_USER);

arch_setup_new_exec();
perf_event_exec();
@@ -1940,10 +1940,15 @@ void set_binfmt(struct linux_binfmt *new)
}
EXPORT_SYMBOL(set_binfmt);

+void __weak arch_set_dumpable(struct task_struct *tsk, struct mm_struct *mm, int value)
+{
+ return;
+}
+
/*
* set_dumpable stores three-value SUID_DUMP_* into mm->flags.
*/
-void set_dumpable(struct mm_struct *mm, int value)
+void set_dumpable(struct task_struct *tsk, struct mm_struct *mm, int value)
{
unsigned long old, new;

@@ -1954,6 +1959,8 @@ void set_dumpable(struct mm_struct *mm, int value)
old = READ_ONCE(mm->flags);
new = (old & ~MMF_DUMPABLE_MASK) | value;
} while (cmpxchg(&mm->flags, old, new) != old);
+
+ arch_set_dumpable(tsk, mm, value);
}

SYSCALL_DEFINE3(execve,
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 977cb57..b0a78fd 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1439,6 +1439,7 @@ static inline bool is_percpu_thread(void)
#define PFA_SPREAD_SLAB 2 /* Spread some slab caches over cpuset */
#define PFA_SPEC_SSB_DISABLE 3 /* Speculative Store Bypass disabled */
#define PFA_SPEC_SSB_FORCE_DISABLE 4 /* Speculative Store Bypass force disabled*/
+#define PFA_SPEC_INDIR_BRANCH_DISABLE 5 /* Indirect branch speculation restricted in apps */

#define TASK_PFA_TEST(name, func) \
static inline bool task_##func(struct task_struct *p) \
@@ -1470,6 +1471,10 @@ TASK_PFA_CLEAR(SPEC_SSB_DISABLE, spec_ssb_disable)
TASK_PFA_TEST(SPEC_SSB_FORCE_DISABLE, spec_ssb_force_disable)
TASK_PFA_SET(SPEC_SSB_FORCE_DISABLE, spec_ssb_force_disable)

+TASK_PFA_TEST(SPEC_INDIR_BRANCH_DISABLE, spec_indir_branch_disable)
+TASK_PFA_SET(SPEC_INDIR_BRANCH_DISABLE, spec_indir_branch_disable)
+TASK_PFA_CLEAR(SPEC_INDIR_BRANCH_DISABLE, spec_indir_branch_disable)
+
static inline void
current_restore_flags(unsigned long orig_flags, unsigned long flags)
{
diff --git a/include/linux/sched/coredump.h b/include/linux/sched/coredump.h
index ec912d0..4284350 100644
--- a/include/linux/sched/coredump.h
+++ b/include/linux/sched/coredump.h
@@ -14,7 +14,7 @@
#define MMF_DUMPABLE_BITS 2
#define MMF_DUMPABLE_MASK ((1 << MMF_DUMPABLE_BITS) - 1)

-extern void set_dumpable(struct mm_struct *mm, int value);
+extern void set_dumpable(struct task_struct *tsk, struct mm_struct *mm, int value);
/*
* This returns the actual value of the suid_dumpable flag. For things
* that are using this for checking for privilege transitions, it must
diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
index c0d7ea0..06f71f6 100644
--- a/include/uapi/linux/prctl.h
+++ b/include/uapi/linux/prctl.h
@@ -212,6 +212,7 @@ struct prctl_mm_map {
#define PR_SET_SPECULATION_CTRL 53
/* Speculation control variants */
# define PR_SPEC_STORE_BYPASS 0
+# define PR_INDIR_BRANCH 1
/* Return and control values for PR_SET/GET_SPECULATION_CTRL */
# define PR_SPEC_NOT_AFFECTED 0
# define PR_SPEC_PRCTL (1UL << 0)
diff --git a/kernel/cred.c b/kernel/cred.c
index ecf0365..de3f486 100644
--- a/kernel/cred.c
+++ b/kernel/cred.c
@@ -446,7 +446,7 @@ int commit_creds(struct cred *new)
!gid_eq(old->fsgid, new->fsgid) ||
!cred_cap_issubset(old, new)) {
if (task->mm)
- set_dumpable(task->mm, suid_dumpable);
+ set_dumpable(task, task->mm, suid_dumpable);
task->pdeath_signal = 0;
smp_wmb();
}
diff --git a/kernel/sys.c b/kernel/sys.c
index cf5c675..78af561 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -2292,7 +2292,7 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
error = -EINVAL;
break;
}
- set_dumpable(me->mm, arg2);
+ set_dumpable(me, me->mm, arg2);
break;

case PR_SET_UNALIGN:
diff --git a/tools/include/uapi/linux/prctl.h b/tools/include/uapi/linux/prctl.h
index c0d7ea0..06f71f6 100644
--- a/tools/include/uapi/linux/prctl.h
+++ b/tools/include/uapi/linux/prctl.h
@@ -212,6 +212,7 @@ struct prctl_mm_map {
#define PR_SET_SPECULATION_CTRL 53
/* Speculation control variants */
# define PR_SPEC_STORE_BYPASS 0
+# define PR_INDIR_BRANCH 1
/* Return and control values for PR_SET/GET_SPECULATION_CTRL */
# define PR_SPEC_NOT_AFFECTED 0
# define PR_SPEC_PRCTL (1UL << 0)
--
2.9.4


2018-09-26 01:19:55

by Tim Chen

Subject: [Patch v2 3/4] x86/speculation: Extend per process STIBP to AMD cpus.

From: Thomas Lendacky <[email protected]>

We extend the app to app spectre v2 mitigation using STIBP
to the AMD cpus. We need to take care of special
cases for AMD cpu's update of SPEC_CTRL MSR to avoid double
writing of MSRs from update to SSBD and STIBP.
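
The rework relies on the XOR-of-flag-words idiom: (tifp ^ tifn) has
a bit set exactly for each flag that differs between the previous and
next task. A standalone illustration of the idiom (not kernel code;
_TIF_SSBD's value here is only illustrative):

#include <stdio.h>

#define _TIF_SSBD	(1UL << 5)	/* illustrative value */
#define _TIF_STIBP	(1UL << 9)	/* TIF_STIBP = 9 in this series */

int main(void)
{
        unsigned long tifp = _TIF_SSBD;              /* prev: SSBD on */
        unsigned long tifn = _TIF_SSBD | _TIF_STIBP; /* next: SSBD on, STIBP on */

        /* Only STIBP changed, so only one MSR update is needed. */
        printf("ssbd changed: %d, stibp changed: %d\n",
               !!((tifp ^ tifn) & _TIF_SSBD),
               !!((tifp ^ tifn) & _TIF_STIBP));
        return 0;
}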

Originally-by: Thomas Lendacky <[email protected]>
Signed-off-by: Tim Chen <[email protected]>
---
arch/x86/kernel/process.c | 48 +++++++++++++++++++++++++++++++++++++----------
1 file changed, 38 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index cb24014..4a3a672 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -399,6 +399,10 @@ static __always_inline void set_spec_ctrl_state(unsigned long tifn)
{
u64 msr = x86_spec_ctrl_base;

+ /*
+ * AMD cpu may have used a different method to update SSBD, so
+ * we need to be sure we are using the SPEC_CTRL MSR for SSBD.
+ */
if (static_cpu_has(X86_FEATURE_SSBD))
msr |= ssbd_tif_to_spec_ctrl(tifn);

@@ -408,20 +412,45 @@ static __always_inline void set_spec_ctrl_state(unsigned long tifn)
wrmsrl(MSR_IA32_SPEC_CTRL, msr);
}

-static __always_inline void __speculative_store_bypass_update(unsigned long tifn)
+static __always_inline void __speculative_store_bypass_update(unsigned long tifp,
+ unsigned long tifn)
{
- if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
- amd_set_ssb_virt_state(tifn);
- else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
- amd_set_core_ssb_state(tifn);
- else
- set_spec_ctrl_state(tifn);
+ bool stibp = !!((tifp ^ tifn) & _TIF_STIBP);
+ bool ssbd = !!((tifp ^ tifn) & _TIF_SSBD);
+
+ if (!ssbd && !stibp)
+ return;
+
+ if (ssbd) {
+ /*
+ * For AMD, try these methods first. The ssbd variable will
+ * reflect if the SPEC_CTRL MSR method is needed.
+ */
+ ssbd = false;
+
+ if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
+ amd_set_ssb_virt_state(tifn);
+ else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
+ amd_set_core_ssb_state(tifn);
+ else
+ ssbd = true;
+ }
+
+ /* Avoid a possible extra MSR write, recheck the flags */
+ if (!ssbd && !stibp)
+ return;
+
+ set_spec_ctrl_state(tifn);
}

void speculative_store_bypass_update(unsigned long tif)
{
+ /*
+ * On this path we're forcing the update, so use ~tif as the
+ * previous flags.
+ */
preempt_disable();
- __speculative_store_bypass_update(tif);
+ __speculative_store_bypass_update(~tif, tif);
preempt_enable();
}

@@ -457,8 +486,7 @@ void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
if ((tifp ^ tifn) & _TIF_NOCPUID)
set_cpuid_faulting(!!(tifn & _TIF_NOCPUID));

- if ((tifp ^ tifn) & (_TIF_SSBD | _TIF_STIBP))
- __speculative_store_bypass_update(tifn);
+ __speculative_store_bypass_update(tifp, tifn);
}

/*
--
2.9.4


2018-09-26 01:20:21

by Tim Chen

Subject: [Patch v2 2/4] x86/speculation: Provide application property based STIBP protection

This patch provides application property based spectre_v2
protection with STIBP against attacks from another app running on
a sibling hyper-thread. For security sensitive non-dumpable
apps, STIBP will be turned on before switching to them on Intel
processors vulnerable to spectre_v2.
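
The new stibp_tif_to_spec_ctrl() helper converts the STIBP thread
flag into the corresponding SPEC_CTRL MSR bit with a plain shift.
A minimal sketch of the arithmetic, using TIF_STIBP = 9 and
SPEC_CTRL_STIBP_SHIFT = 1 as defined in this patch:

#include <stdio.h>

#define SPEC_CTRL_STIBP_SHIFT	1
#define TIF_STIBP		9
#define _TIF_STIBP		(1UL << TIF_STIBP)

/* Mirrors the helper added below: move bit 9 down to bit 1. */
static unsigned long stibp_tif_to_spec_ctrl(unsigned long tifn)
{
        return (tifn & _TIF_STIBP) >> (TIF_STIBP - SPEC_CTRL_STIBP_SHIFT);
}

int main(void)
{
        /* Prints 0x2, i.e. SPEC_CTRL_STIBP (1 << 1). */
        printf("0x%lx\n", stibp_tif_to_spec_ctrl(_TIF_STIBP));
        return 0;
}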

Signed-off-by: Tim Chen <[email protected]>
---
arch/x86/include/asm/msr-index.h | 3 ++-
arch/x86/include/asm/spec-ctrl.h | 12 ++++++++++++
arch/x86/include/asm/thread_info.h | 4 +++-
arch/x86/kernel/cpu/bugs.c | 14 +++++++++++---
arch/x86/kernel/process.c | 14 ++++++++++----
arch/x86/mm/tlb.c | 22 ++++++++++++++++++++++
6 files changed, 60 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 4731f0c..0e43388 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -41,7 +41,8 @@

#define MSR_IA32_SPEC_CTRL 0x00000048 /* Speculation Control */
#define SPEC_CTRL_IBRS (1 << 0) /* Indirect Branch Restricted Speculation */
-#define SPEC_CTRL_STIBP (1 << 1) /* Single Thread Indirect Branch Predictors */
+#define SPEC_CTRL_STIBP_SHIFT 1 /* Single Thread Indirect Branch Predictor bit */
+#define SPEC_CTRL_STIBP (1 << SPEC_CTRL_STIBP_SHIFT) /* Single Thread Indirect Branch Predictors */
#define SPEC_CTRL_SSBD_SHIFT 2 /* Speculative Store Bypass Disable bit */
#define SPEC_CTRL_SSBD (1 << SPEC_CTRL_SSBD_SHIFT) /* Speculative Store Bypass Disable */

diff --git a/arch/x86/include/asm/spec-ctrl.h b/arch/x86/include/asm/spec-ctrl.h
index ae7c2c5..6a962b8 100644
--- a/arch/x86/include/asm/spec-ctrl.h
+++ b/arch/x86/include/asm/spec-ctrl.h
@@ -53,12 +53,24 @@ static inline u64 ssbd_tif_to_spec_ctrl(u64 tifn)
return (tifn & _TIF_SSBD) >> (TIF_SSBD - SPEC_CTRL_SSBD_SHIFT);
}

+static inline u64 stibp_tif_to_spec_ctrl(u64 tifn)
+{
+ BUILD_BUG_ON(TIF_STIBP < SPEC_CTRL_STIBP_SHIFT);
+ return (tifn & _TIF_STIBP) >> (TIF_STIBP - SPEC_CTRL_STIBP_SHIFT);
+}
+
static inline unsigned long ssbd_spec_ctrl_to_tif(u64 spec_ctrl)
{
BUILD_BUG_ON(TIF_SSBD < SPEC_CTRL_SSBD_SHIFT);
return (spec_ctrl & SPEC_CTRL_SSBD) << (TIF_SSBD - SPEC_CTRL_SSBD_SHIFT);
}

+static inline unsigned long stibp_spec_ctrl_to_tif(u64 spec_ctrl)
+{
+ BUILD_BUG_ON(TIF_STIBP < SPEC_CTRL_STIBP_SHIFT);
+ return (spec_ctrl & SPEC_CTRL_STIBP) << (TIF_STIBP - SPEC_CTRL_STIBP_SHIFT);
+}
+
static inline u64 ssbd_tif_to_amd_ls_cfg(u64 tifn)
{
return (tifn & _TIF_SSBD) ? x86_amd_ls_cfg_ssbd_mask : 0ULL;
diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
index 2ff2a30..40c58c286 100644
--- a/arch/x86/include/asm/thread_info.h
+++ b/arch/x86/include/asm/thread_info.h
@@ -83,6 +83,7 @@ struct thread_info {
#define TIF_SYSCALL_EMU 6 /* syscall emulation active */
#define TIF_SYSCALL_AUDIT 7 /* syscall auditing active */
#define TIF_SECCOMP 8 /* secure computing */
+#define TIF_STIBP 9 /* Single threaded indirect branch predict */
#define TIF_USER_RETURN_NOTIFY 11 /* notify kernel of userspace return */
#define TIF_UPROBE 12 /* breakpointed or singlestepping */
#define TIF_PATCH_PENDING 13 /* pending live patching update */
@@ -110,6 +111,7 @@ struct thread_info {
#define _TIF_SYSCALL_EMU (1 << TIF_SYSCALL_EMU)
#define _TIF_SYSCALL_AUDIT (1 << TIF_SYSCALL_AUDIT)
#define _TIF_SECCOMP (1 << TIF_SECCOMP)
+#define _TIF_STIBP (1 << TIF_STIBP)
#define _TIF_USER_RETURN_NOTIFY (1 << TIF_USER_RETURN_NOTIFY)
#define _TIF_UPROBE (1 << TIF_UPROBE)
#define _TIF_PATCH_PENDING (1 << TIF_PATCH_PENDING)
@@ -146,7 +148,7 @@ struct thread_info {

/* flags to check in __switch_to() */
#define _TIF_WORK_CTXSW \
- (_TIF_IO_BITMAP|_TIF_NOCPUID|_TIF_NOTSC|_TIF_BLOCKSTEP|_TIF_SSBD)
+ (_TIF_IO_BITMAP|_TIF_NOCPUID|_TIF_NOTSC|_TIF_BLOCKSTEP|_TIF_SSBD|_TIF_STIBP)

#define _TIF_WORK_CTXSW_PREV (_TIF_WORK_CTXSW|_TIF_USER_RETURN_NOTIFY)
#define _TIF_WORK_CTXSW_NEXT (_TIF_WORK_CTXSW)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index c967012..052f1a5 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -163,7 +163,7 @@ EXPORT_SYMBOL(spectre_v2_app_lite);
static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init =
SPECTRE_V2_NONE;

-static enum spectre_v2_mitigation spectre_v2_app2app_enabled __ro_after_init =
+static enum spectre_v2_app2app_mitigation spectre_v2_app2app_enabled __ro_after_init =
SPECTRE_V2_APP2APP_NONE;

void
@@ -187,6 +187,9 @@ x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool setguest)
static_cpu_has(X86_FEATURE_AMD_SSBD))
hostval |= ssbd_tif_to_spec_ctrl(ti->flags);

+ if (static_branch_unlikely(&spectre_v2_app_lite))
+ hostval |= stibp_tif_to_spec_ctrl(ti->flags);
+
if (hostval != guestval) {
msrval = setguest ? guestval : hostval;
wrmsrl(MSR_IA32_SPEC_CTRL, msrval);
@@ -383,6 +386,11 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)

static bool stibp_needed(void)
{
+ /*
+ * Determine if we want to leave STIBP always on.
+ * For lite option, we enable STIBP based on a process's
+ * flag during context switch.
+ */
if (static_branch_unlikely(&spectre_v2_app_lite))
return false;

@@ -958,14 +966,14 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
ret = sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB-lite" : "",
boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
- (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ? ", STIBP" : "",
+ ", STIBP-lite",
boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
spectre_v2_module_string());
else
ret = sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB-strict" : "",
boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
- (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ? ", STIBP" : "",
+ (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ? ", STIBP-strict" : "",
boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
spectre_v2_module_string());
mutex_unlock(&spec_ctrl_mutex);
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index c93fcfd..cb24014 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -395,9 +395,15 @@ static __always_inline void amd_set_ssb_virt_state(unsigned long tifn)
wrmsrl(MSR_AMD64_VIRT_SPEC_CTRL, ssbd_tif_to_spec_ctrl(tifn));
}

-static __always_inline void intel_set_ssb_state(unsigned long tifn)
+static __always_inline void set_spec_ctrl_state(unsigned long tifn)
{
- u64 msr = x86_spec_ctrl_base | ssbd_tif_to_spec_ctrl(tifn);
+ u64 msr = x86_spec_ctrl_base;
+
+ if (static_cpu_has(X86_FEATURE_SSBD))
+ msr |= ssbd_tif_to_spec_ctrl(tifn);
+
+ if (cpu_smt_control == CPU_SMT_ENABLED)
+ msr |= stibp_tif_to_spec_ctrl(tifn);

wrmsrl(MSR_IA32_SPEC_CTRL, msr);
}
@@ -409,7 +415,7 @@ static __always_inline void __speculative_store_bypass_update(unsigned long tifn
else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
amd_set_core_ssb_state(tifn);
else
- intel_set_ssb_state(tifn);
+ set_spec_ctrl_state(tifn);
}

void speculative_store_bypass_update(unsigned long tif)
@@ -451,7 +457,7 @@ void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
if ((tifp ^ tifn) & _TIF_NOCPUID)
set_cpuid_faulting(!!(tifn & _TIF_NOCPUID));

- if ((tifp ^ tifn) & _TIF_SSBD)
+ if ((tifp ^ tifn) & (_TIF_SSBD | _TIF_STIBP))
__speculative_store_bypass_update(tifn);
}

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 14522aa..b3d1daa 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -205,6 +205,25 @@ static bool ibpb_needed(struct task_struct *tsk, u64 last_ctx_id)
return (__ptrace_may_access(tsk, PTRACE_MODE_IBPB));
}

+static void set_stibp(struct task_struct *tsk)
+{
+ /*
+ * For lite protection mode, we set STIBP only
+ * for non-dumpable processes.
+ */
+
+ if (!static_branch_unlikely(&spectre_v2_app_lite))
+ return;
+
+ if (!tsk || !tsk->mm)
+ return;
+
+ if (get_dumpable(tsk->mm) != SUID_DUMP_USER)
+ set_tsk_thread_flag(tsk, TIF_STIBP);
+ else
+ clear_tsk_thread_flag(tsk, TIF_STIBP);
+}
+
void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
struct task_struct *tsk)
{
@@ -296,6 +315,9 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
ibpb_needed(tsk, last_ctx_id))
indirect_branch_prediction_barrier();

+ if (static_cpu_has(X86_FEATURE_STIBP))
+ set_stibp(tsk);
+
if (IS_ENABLED(CONFIG_VMAP_STACK)) {
/*
* If our current stack is in vmalloc space and isn't
--
2.9.4


2018-09-26 17:25:23

by Tim Chen

Subject: Re: [Patch v2 3/4] x86/speculation: Extend per process STIBP to AMD cpus.

On 09/25/2018 05:43 PM, Tim Chen wrote:
> From: Thomas Lendacky <[email protected]>
>
> We extend the app to app spectre v2 mitigation using STIBP
> to the AMD cpus. We need to take care of special
> cases for AMD cpu's update of SPEC_CTRL MSR to avoid double
> writing of MSRs from update to SSBD and STIBP.

Tom, if this patch looks okay to you, can I add your sign off?

Tim

>
> Originally-by: Thomas Lendacky <[email protected]>
> Signed-off-by: Tim Chen <[email protected]>
> ---
> arch/x86/kernel/process.c | 48 +++++++++++++++++++++++++++++++++++++----------
> 1 file changed, 38 insertions(+), 10 deletions(-)
>
> diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
> index cb24014..4a3a672 100644
> --- a/arch/x86/kernel/process.c
> +++ b/arch/x86/kernel/process.c
> @@ -399,6 +399,10 @@ static __always_inline void set_spec_ctrl_state(unsigned long tifn)
> {
> u64 msr = x86_spec_ctrl_base;
>
> + /*
> + * AMD cpu may have used a different method to update SSBD, so
> + * we need to be sure we are using the SPEC_CTRL MSR for SSBD.
> + */
> if (static_cpu_has(X86_FEATURE_SSBD))
> msr |= ssbd_tif_to_spec_ctrl(tifn);
>
> @@ -408,20 +412,45 @@ static __always_inline void set_spec_ctrl_state(unsigned long tifn)
> wrmsrl(MSR_IA32_SPEC_CTRL, msr);
> }
>
> -static __always_inline void __speculative_store_bypass_update(unsigned long tifn)
> +static __always_inline void __speculative_store_bypass_update(unsigned long tifp,
> + unsigned long tifn)
> {
> - if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
> - amd_set_ssb_virt_state(tifn);
> - else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
> - amd_set_core_ssb_state(tifn);
> - else
> - set_spec_ctrl_state(tifn);
> + bool stibp = !!((tifp ^ tifn) & _TIF_STIBP);
> + bool ssbd = !!((tifp ^ tifn) & _TIF_SSBD);
> +
> + if (!ssbd && !stibp)
> + return;
> +
> + if (ssbd) {
> + /*
> + * For AMD, try these methods first. The ssbd variable will
> + * reflect if the SPEC_CTRL MSR method is needed.
> + */
> + ssbd = false;
> +
> + if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
> + amd_set_ssb_virt_state(tifn);
> + else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
> + amd_set_core_ssb_state(tifn);
> + else
> + ssbd = true;
> + }
> +
> + /* Avoid a possible extra MSR write, recheck the flags */
> + if (!ssbd && !stibp)
> + return;
> +
> + set_spec_ctrl_state(tifn);
> }
>
> void speculative_store_bypass_update(unsigned long tif)
> {
> + /*
> + * On this path we're forcing the update, so use ~tif as the
> + * previous flags.
> + */
> preempt_disable();
> - __speculative_store_bypass_update(tif);
> + __speculative_store_bypass_update(~tif, tif);
> preempt_enable();
> }
>
> @@ -457,8 +486,7 @@ void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
> if ((tifp ^ tifn) & _TIF_NOCPUID)
> set_cpuid_faulting(!!(tifn & _TIF_NOCPUID));
>
> - if ((tifp ^ tifn) & (_TIF_SSBD | _TIF_STIBP))
> - __speculative_store_bypass_update(tifn);
> + __speculative_store_bypass_update(tifp, tifn);
> }
>
> /*
>


2018-09-26 19:11:47

by Tom Lendacky

Subject: Re: [Patch v2 3/4] x86/speculation: Extend per process STIBP to AMD cpus.

On 09/26/2018 12:24 PM, Tim Chen wrote:
> On 09/25/2018 05:43 PM, Tim Chen wrote:
>> From: Thomas Lendacky <[email protected]>
>>
>> We extend the app to app spectre v2 mitigation using STIBP
>> to the AMD cpus. We need to take care of special
>> cases for AMD cpu's update of SPEC_CTRL MSR to avoid double
>> writing of MSRs from update to SSBD and STIBP.
>
> Tom, if this patch looks okay to you, can I add your sign off?

Hi Tim,

Yup, that looks correct.

>
> Tim
>
>>
>> Originally-by: Thomas Lendacky <[email protected]>
>> Signed-off-by: Tim Chen <[email protected]>

Signed-off-by: Tom Lendacky <[email protected]>

Thanks,
Tom

>> ---
>> arch/x86/kernel/process.c | 48 +++++++++++++++++++++++++++++++++++++----------
>> 1 file changed, 38 insertions(+), 10 deletions(-)
>>
>> diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
>> index cb24014..4a3a672 100644
>> --- a/arch/x86/kernel/process.c
>> +++ b/arch/x86/kernel/process.c
>> @@ -399,6 +399,10 @@ static __always_inline void set_spec_ctrl_state(unsigned long tifn)
>> {
>> u64 msr = x86_spec_ctrl_base;
>>
>> + /*
>> + * AMD cpu may have used a different method to update SSBD, so
>> + * we need to be sure we are using the SPEC_CTRL MSR for SSBD.
>> + */
>> if (static_cpu_has(X86_FEATURE_SSBD))
>> msr |= ssbd_tif_to_spec_ctrl(tifn);
>>
>> @@ -408,20 +412,45 @@ static __always_inline void set_spec_ctrl_state(unsigned long tifn)
>> wrmsrl(MSR_IA32_SPEC_CTRL, msr);
>> }
>>
>> -static __always_inline void __speculative_store_bypass_update(unsigned long tifn)
>> +static __always_inline void __speculative_store_bypass_update(unsigned long tifp,
>> + unsigned long tifn)
>> {
>> - if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
>> - amd_set_ssb_virt_state(tifn);
>> - else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
>> - amd_set_core_ssb_state(tifn);
>> - else
>> - set_spec_ctrl_state(tifn);
>> + bool stibp = !!((tifp ^ tifn) & _TIF_STIBP);
>> + bool ssbd = !!((tifp ^ tifn) & _TIF_SSBD);
>> +
>> + if (!ssbd && !stibp)
>> + return;
>> +
>> + if (ssbd) {
>> + /*
>> + * For AMD, try these methods first. The ssbd variable will
>> + * reflect if the SPEC_CTRL MSR method is needed.
>> + */
>> + ssbd = false;
>> +
>> + if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
>> + amd_set_ssb_virt_state(tifn);
>> + else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
>> + amd_set_core_ssb_state(tifn);
>> + else
>> + ssbd = true;
>> + }
>> +
>> + /* Avoid a possible extra MSR write, recheck the flags */
>> + if (!ssbd && !stibp)
>> + return;
>> +
>> + set_spec_ctrl_state(tifn);
>> }
>>
>> void speculative_store_bypass_update(unsigned long tif)
>> {
>> + /*
>> + * On this path we're forcing the update, so use ~tif as the
>> + * previous flags.
>> + */
>> preempt_disable();
>> - __speculative_store_bypass_update(tif);
>> + __speculative_store_bypass_update(~tif, tif);
>> preempt_enable();
>> }
>>
>> @@ -457,8 +486,7 @@ void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
>> if ((tifp ^ tifn) & _TIF_NOCPUID)
>> set_cpuid_faulting(!!(tifn & _TIF_NOCPUID));
>>
>> - if ((tifp ^ tifn) & (_TIF_SSBD | _TIF_STIBP))
>> - __speculative_store_bypass_update(tifn);
>> + __speculative_store_bypass_update(tifp, tifn);
>> }
>>
>> /*
>>
>

2018-10-02 09:23:54

by Ingo Molnar

Subject: Re: [Patch v2 1/4] x86/speculation: Option to select app to app mitigation for spectre_v2


* Tim Chen <[email protected]> wrote:

> Subject: x86/speculation: Option to select app to app mitigation for spectre_v2
>

We prefer to start commit titles with verbs, not nouns, so this should be something like:

x86/speculation: Add option to select app to app mitigation for spectre_v2

> Jiri Kosina's patch makes IBPB and STIBP available for
> general spectre v2 app to app mitigation. IBPB will be issued for
> switching to an app that's not ptraceable by the previous
> app and STIBP will be always turned on.
>
> However, app to app exploit is in general difficult
> due to address space layout randomization in apps and
> the need to know an app's address space layout ahead of time.
> Users may not wish to incur app to app performance
> overhead from IBPB and STIBP for general non security sensitive apps.
>
> This patch provides a lite option for spectre_v2 app to app
> mitigation where IBPB is only issued for security sensitive
> non-dumpable app.
>
> The strict option will keep system at high security level
> where IBPB and STIBP are used to defend all apps against
> spectre_v2 app to app attack.

s/system
/the system

s/attack
attacks

> + spectre_v2_app2app=
> + [X86] Control app to app mitigation of Spectre variant 2
> + (indirect branch speculation) vulnerability.
> +
> + lite - only turn on mitigation for non-dumpable processes
> + strict - protect against attacks for all user processes
> + auto - let kernel decide lite or strict mode

Perhaps add:
lite - only turn on mitigation for non-dumpable processes (i.e.
protect daemons and other privileged processes that tend
to be non-dumpable)

?

> +
> + Not specifying this option is equivalent to
> + spectre_v2_app2app=auto.
> +
> spec_store_bypass_disable=
> [HW] Control Speculative Store Bypass (SSB) Disable mitigation
> (Speculative Store Bypass vulnerability)
> diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
> index fd2a8c1..c59a6c4 100644
> --- a/arch/x86/include/asm/nospec-branch.h
> +++ b/arch/x86/include/asm/nospec-branch.h
> @@ -3,6 +3,7 @@
> #ifndef _ASM_X86_NOSPEC_BRANCH_H_
> #define _ASM_X86_NOSPEC_BRANCH_H_
>
> +#include <linux/static_key.h>
> #include <asm/alternative.h>
> #include <asm/alternative-asm.h>
> #include <asm/cpufeatures.h>
> @@ -217,6 +218,12 @@ enum spectre_v2_mitigation {
> SPECTRE_V2_IBRS_ENHANCED,
> };
>
> +enum spectre_v2_app2app_mitigation {
> + SPECTRE_V2_APP2APP_NONE,
> + SPECTRE_V2_APP2APP_LITE,
> + SPECTRE_V2_APP2APP_STRICT,
> +};
> +
> /* The Speculative Store Bypass disable variants */
> enum ssb_mitigation {
> SPEC_STORE_BYPASS_NONE,
> @@ -228,6 +235,8 @@ enum ssb_mitigation {
> extern char __indirect_thunk_start[];
> extern char __indirect_thunk_end[];
>
> +DECLARE_STATIC_KEY_FALSE(spectre_v2_app_lite);
> +
> /*
> * On VMEXIT we must ensure that no RSB predictions learned in the guest
> * can be followed in the host, by overwriting the RSB completely. Both
> diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
> index ee46dcb..c967012 100644
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -133,6 +133,12 @@ enum spectre_v2_mitigation_cmd {
> SPECTRE_V2_CMD_RETPOLINE_AMD,
> };
>
> +enum spectre_v2_app2app_mitigation_cmd {
> + SPECTRE_V2_APP2APP_CMD_AUTO,
> + SPECTRE_V2_APP2APP_CMD_LITE,
> + SPECTRE_V2_APP2APP_CMD_STRICT,
> +};
> +
> static const char *spectre_v2_strings[] = {
> [SPECTRE_V2_NONE] = "Vulnerable",
> [SPECTRE_V2_RETPOLINE_MINIMAL] = "Vulnerable: Minimal generic ASM retpoline",
> @@ -142,12 +148,24 @@ static const char *spectre_v2_strings[] = {
> [SPECTRE_V2_IBRS_ENHANCED] = "Mitigation: Enhanced IBRS",
> };
>
> +static const char *spectre_v2_app2app_strings[] = {
> + [SPECTRE_V2_APP2APP_NONE] = "App-App Vulnerable",
> + [SPECTRE_V2_APP2APP_LITE] = "App-App Mitigation: Protect only non-dumpable process",
> + [SPECTRE_V2_APP2APP_STRICT] = "App-App Mitigation: Full app to app attack protection",
> +};
> +
> +DEFINE_STATIC_KEY_FALSE(spectre_v2_app_lite);
> +EXPORT_SYMBOL(spectre_v2_app_lite);

EXPORT_SYMBOL_GPL() I suspect?

> +
> #undef pr_fmt
> #define pr_fmt(fmt) "Spectre V2 : " fmt
>
> static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init =
> SPECTRE_V2_NONE;
>
> +static enum spectre_v2_mitigation spectre_v2_app2app_enabled __ro_after_init =
> + SPECTRE_V2_APP2APP_NONE;
> +
> void
> x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool setguest)
> {
> @@ -275,6 +293,46 @@ static const struct {
> { "auto", SPECTRE_V2_CMD_AUTO, false },
> };
>
> +static const struct {
> + const char *option;
> + enum spectre_v2_app2app_mitigation_cmd cmd;
> + bool secure;
> +} app2app_mitigation_options[] = {
> + { "lite", SPECTRE_V2_APP2APP_CMD_LITE, false },
> + { "strict", SPECTRE_V2_APP2APP_CMD_STRICT, false },
> + { "auto", SPECTRE_V2_APP2APP_CMD_AUTO, false },
> +};

Am I reading this right that it's not possible to configure this to 'none', i.e. to disable the
protection altogether?


> + * For lite protection mode, we only protect the non-dumpable
> + * processes.
> + *
> + * Otherwise check if the current (previous) task has access to the memory
> + * of the @tsk (next) task for strict app to app protection.
> + * If access is denied, make sure to
> * issue a IBPB to stop user->user Spectre-v2 attacks.

s/a IBPB
/an IBPB

Thanks,

Ingo

2018-10-02 09:28:33

by Ingo Molnar

Subject: Re: [Patch v2 3/4] x86/speculation: Extend per process STIBP to AMD cpus.


* Tim Chen <[email protected]> wrote:

> From: Thomas Lendacky <[email protected]>
>
> We extend the app to app spectre v2 mitigation using STIBP
> to the AMD cpus. We need to take care of special

s/to the AMD cpus
/to AMD CPUs

> cases for AMD cpu's update of SPEC_CTRL MSR to avoid double
> writing of MSRs from update to SSBD and STIBP.

s/AMD cpu
/AMD CPU

>
> Originally-by: Thomas Lendacky <[email protected]>
> Signed-off-by: Tim Chen <[email protected]>
> ---
> arch/x86/kernel/process.c | 48 +++++++++++++++++++++++++++++++++++++----------
> 1 file changed, 38 insertions(+), 10 deletions(-)
>
> diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
> index cb24014..4a3a672 100644
> --- a/arch/x86/kernel/process.c
> +++ b/arch/x86/kernel/process.c
> @@ -399,6 +399,10 @@ static __always_inline void set_spec_ctrl_state(unsigned long tifn)
> {
> u64 msr = x86_spec_ctrl_base;
>
> + /*
> + * AMD cpu may have used a different method to update SSBD, so
> + * we need to be sure we are using the SPEC_CTRL MSR for SSBD.

s/AMD cpu may have used a different method to update SSBD
/AMD CPUs may use a different method to update the SSBD

Thanks,

Ingo

2018-10-02 09:35:55

by Ingo Molnar

Subject: Re: [Patch v2 4/4] x86/speculation: Add prctl to control indirect branch speculation per process


* Tim Chen <[email protected]> wrote:

> To migitgate possible app to app attack from branch target buffer poisoning,
> a new prctl is provided to control branch speculation for applications in
> user app. The following interfaces are provided:

s/migitgate
/mitigate

>
> prctl(PR_SET_SPECULATION_CTRL, PR_INDIR_BRANCH, PR_SPEC_DISABLE, 0, 0);
> - Disable branch target speculation to protect against app to app
> style attack using IBPB and STIBP
>
> prctl(PR_SET_SPECULATION_CTRL, PR_INDIR_BRANCH, PR_SPEC_ENABLE, 0, 0);
> - Allow branch target speculation, no mitigation for Spectre V2
>
> prctl(PR_GET_SPECULATION_CTRL, PR_INDIR_BRANCH, 0, 0, 0)
> - Query the indirect branch speculation restriction on a process

Well 'a process' is always 'the current process' in this case, right?

> - lite - only turn on mitigation for non-dumpable processes
> + lite - turn on mitigation for non-dumpable processes
> + or processes that has indirect branch restricted
> + via prctl's PR_SET_SPECULATION_CTRL option

s/or processes that has indirect
/or processes that have been indirect

?

> + /*
> + * If being set on non-current task, delay setting the CPU
> + * mitigation until it is next scheduled.
> + * Use speculative_store_bypass_update will update SPEC_CTRL MSR
> + */
> + if (task == current && update)
> + speculative_store_bypass_update_current();

Did you mean:

Call to speculative_store_bypass_update_current() will update SPEC_CTRL MSR

?


> - * For lite protection mode, we only protect the non-dumpable
> - * processes.
> + * For lite protection mode, we protect processes
> + * where the user explicitly disable indirect branch
> + * speculation or mark the process as non-dumpable.

s/where the user explicitly disable
/where the user explicitly disables

?

Thanks,

Ingo

2018-10-02 16:50:04

by Tim Chen

Subject: Re: [Patch v2 4/4] x86/speculation: Add prctl to control indirect branch speculation per process

On 10/02/2018 02:35 AM, Ingo Molnar wrote:
>
> * Tim Chen <[email protected]> wrote:
>
>> To migitgate possible app to app attack from branch target buffer poisoning,
>> a new prctl is provided to control branch speculation for applications in
>> user app. The following interfaces are provided:
>
> s/migitgate
> /mitigate
>
>>
>> prctl(PR_SET_SPECULATION_CTRL, PR_INDIR_BRANCH, PR_SPEC_DISABLE, 0, 0);
>> - Disable branch target speculation to protect against app to app
>> style attack using IBPB and STIBP
>>
>> prctl(PR_SET_SPECULATION_CTRL, PR_INDIR_BRANCH, PR_SPEC_ENABLE, 0, 0);
>> - Allow branch target speculation, no mitigation for Spectre V2
>>
>> prctl(PR_GET_SPECULATION_CTRL, PR_INDIR_BRANCH, 0, 0, 0)
>> - Query the indirect branch speculation restriction on a process
>
> Well 'a process' is always 'the current process' in this case, right?

Right.

>
>> - lite - only turn on mitigation for non-dumpable processes
>> + lite - turn on mitigation for non-dumpable processes
>> + or processes that has indirect branch restricted
>> + via prctl's PR_SET_SPECULATION_CTRL option
>
> s/or processes that has indirect
> /or processes that have been indirect
>
> ?
>
>> + /*
>> + * If being set on non-current task, delay setting the CPU
>> + * mitigation until it is next scheduled.
>> + * Use speculative_store_bypass_update will update SPEC_CTRL MSR
>> + */
>> + if (task == current && update)
>> + speculative_store_bypass_update_current();
>
> Did you mean:
>
> Call to speculative_store_bypass_update_current() will update SPEC_CTRL MSR

Yes.

>
> ?
>
>
>> - * For lite protection mode, we only protect the non-dumpable
>> - * processes.
>> + * For lite protection mode, we protect processes
>> + * where the user explicitly disable indirect branch
>> + * speculation or mark the process as non-dumpable.
>
> s/where the user explicitly disable
> /where the user explicitly disables
>
> ?
>
> Thanks,
>
> Ingo
>

Thanks for the corrections. I'll update the patchset.

Tim

2018-10-02 16:50:57

by Tim Chen

Subject: Re: [Patch v2 1/4] x86/speculation: Option to select app to app mitigation for spectre_v2

On 10/02/2018 02:23 AM, Ingo Molnar wrote:
>
> * Tim Chen <[email protected]> wrote:
>
>> Subject: x86/speculation: Option to select app to app mitigation for spectre_v2
>>
>
> We prefer to start commit titles with verbs, not nouns, so this should be something like:
>
> x86/speculation: Add option to select app to app mitigation for spectre_v2
>
>> Jiri Kosina's patch makes IBPB and STIBP available for
>> general spectre v2 app to app mitigation. IBPB will be issued for
>> switching to an app that's not ptraceable by the previous
>> app and STIBP will be always turned on.
>>
>> However, app to app exploit is in general difficult
>> due to address space layout randomization in apps and
>> the need to know an app's address space layout ahead of time.
>> Users may not wish to incur app to app performance
>> overhead from IBPB and STIBP for general non security sensitive apps.
>>
>> This patch provides a lite option for spectre_v2 app to app
>> mitigation where IBPB is only issued for security sensitive
>> non-dumpable app.
>>
>> The strict option will keep system at high security level
>> where IBPB and STIBP are used to defend all apps against
>> spectre_v2 app to app attack.
>
> s/system
> /the system
>
> s/attack
> attacks
>
>> + spectre_v2_app2app=
>> + [X86] Control app to app mitigation of Spectre variant 2
>> + (indirect branch speculation) vulnerability.
>> +
>> + lite - only turn on mitigation for non-dumpable processes
>> + strict - protect against attacks for all user processes
>> + auto - let kernel decide lite or strict mode
>
> Perhaps add:
> lite - only turn on mitigation for non-dumpable processes (i.e.
> protect daemons and other privileged processes that tend
> to be non-dumpable)
>
> ?

Will do.

>
>> +

>>
>> +static const char *spectre_v2_app2app_strings[] = {
>> + [SPECTRE_V2_APP2APP_NONE] = "App-App Vulnerable",
>> + [SPECTRE_V2_APP2APP_LITE] = "App-App Mitigation: Protect only non-dumpable process",
>> + [SPECTRE_V2_APP2APP_STRICT] = "App-App Mitigation: Full app to app attack protection",
>> +};
>> +
>> +DEFINE_STATIC_KEY_FALSE(spectre_v2_app_lite);
>> +EXPORT_SYMBOL(spectre_v2_app_lite);
>
> EXPORT_SYMBOL_GPL() I suspect?

This is only used in core kernel functions related to
context switches, so I don't expect any module functions
to need this value.

>
>> +
>> #undef pr_fmt
>> #define pr_fmt(fmt) "Spectre V2 : " fmt
>>
>> static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init =
>> SPECTRE_V2_NONE;
>>
>> +static enum spectre_v2_mitigation spectre_v2_app2app_enabled __ro_after_init =
>> + SPECTRE_V2_APP2APP_NONE;
>> +
>> void
>> x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool setguest)
>> {
>> @@ -275,6 +293,46 @@ static const struct {
>> { "auto", SPECTRE_V2_CMD_AUTO, false },
>> };
>>
>> +static const struct {
>> + const char *option;
>> + enum spectre_v2_app2app_mitigation_cmd cmd;
>> + bool secure;
>> +} app2app_mitigation_options[] = {
>> + { "lite", SPECTRE_V2_APP2APP_CMD_LITE, false },
>> + { "strict", SPECTRE_V2_APP2APP_CMD_STRICT, false },
>> + { "auto", SPECTRE_V2_APP2APP_CMD_AUTO, false },
>> +};
>
> Am I reading this right that it's not possible to configure this to 'none', i.e. to disable the
> protection altogether?

Sure, I can add a none option. I'll probably put that in patch 4,
where it's easy to disable the mitigation by not turning on the
STIBP flag for the none option.
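
I.e. something like this (sketch; a matching SPECTRE_V2_APP2APP_CMD_NONE
enum value and its handling in the mode switch would come with it):

        { "none",   SPECTRE_V2_APP2APP_CMD_NONE,   false },
        { "lite",   SPECTRE_V2_APP2APP_CMD_LITE,   false },
        { "strict", SPECTRE_V2_APP2APP_CMD_STRICT, false },
        { "auto",   SPECTRE_V2_APP2APP_CMD_AUTO,   false },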

>
>
>> + * For lite protection mode, we only protect the non-dumpable
>> + * processes.
>> + *
>> + * Otherwise check if the current (previous) task has access to the memory
>> + * of the @tsk (next) task for strict app to app protection.
>> + * If access is denied, make sure to
>> * issue a IBPB to stop user->user Spectre-v2 attacks.
>
> s/a IBPB
> /an IBPB
>

Tim

2018-10-02 18:04:07

by Thomas Gleixner

[permalink] [raw]
Subject: Re: [Patch v2 4/4] x86/speculation: Add prctl to control indirect branch speculation per process

On Tue, 25 Sep 2018, Tim Chen wrote:
>
> +void arch_set_dumpable(struct task_struct *tsk, struct mm_struct *mm, int value)
> +{
> + if (!static_branch_unlikely(&spectre_v2_app_lite))
> + return;
> + if (!static_cpu_has(X86_FEATURE_STIBP))
> + return;
> +
> + if ((unsigned) value != SUID_DUMP_USER) {

First of all, we use unsigned int and not unsigned. Aside from that, why is the
argument not unsigned int right away?

> + set_tsk_thread_flag(tsk, TIF_STIBP);
> + return;
> + }
> +
> + if (!task_spec_indir_branch_disable(tsk)) {
> + clear_tsk_thread_flag(tsk, TIF_STIBP);
> + }

No braces for single line statements required.

> +}
> +
> #ifdef CONFIG_SECCOMP
> void arch_seccomp_spec_mitigate(struct task_struct *task)
> {
> @@ -766,11 +824,33 @@ static int ssb_prctl_get(struct task_struct *task)
> }
> }
>
> -static void set_stibp(struct task_struct *tsk)
> -{
> - /*
> - * For lite protection mode, we set STIBP only
> - * for non-dumpable processes.
> - */
> -
> - if (!static_branch_unlikely(&spectre_v2_app_lite))
> - return;
> -
> - if (!tsk || !tsk->mm)
> - return;
> -
> - if (get_dumpable(tsk->mm) != SUID_DUMP_USER)
> - set_tsk_thread_flag(tsk, TIF_STIBP);
> - else
> - clear_tsk_thread_flag(tsk, TIF_STIBP);
> -}

This patch ordering is really strange. You first add set_stibp() just to
replace it in the next patch.
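
IOW, with the unsigned int and bracing comments above folded in,
arch_set_dumpable() boils down to this (untested sketch):

void arch_set_dumpable(struct task_struct *tsk, struct mm_struct *mm,
                       unsigned int value)
{
        if (!static_branch_unlikely(&spectre_v2_app_lite))
                return;
        if (!static_cpu_has(X86_FEATURE_STIBP))
                return;

        if (value != SUID_DUMP_USER) {
                set_tsk_thread_flag(tsk, TIF_STIBP);
                return;
        }

        if (!task_spec_indir_branch_disable(tsk))
                clear_tsk_thread_flag(tsk, TIF_STIBP);
}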


> diff --git a/fs/exec.c b/fs/exec.c
> index 1ebf6e5..89edadd 100644
> --- a/fs/exec.c
> +++ b/fs/exec.c
> @@ -1362,9 +1362,9 @@ void setup_new_exec(struct linux_binprm * bprm)
> if (bprm->interp_flags & BINPRM_FLAGS_ENFORCE_NONDUMP ||
> !(uid_eq(current_euid(), current_uid()) &&
> gid_eq(current_egid(), current_gid())))
> - set_dumpable(current->mm, suid_dumpable);
> + set_dumpable(current, current->mm, suid_dumpable);
> else
> - set_dumpable(current->mm, SUID_DUMP_USER);
> + set_dumpable(current, current->mm, SUID_DUMP_USER);

What's the point of adding an argument instead of just replacing mm with
task?

> +void __weak arch_set_dumpable(struct task_struct *tsk, struct mm_struct *mm, int value)
> +{
> + return;
> +}

So this wants to be structured as follows:

Patch 1: Change the argument from mm to task and update the implementation of
set_dumpable() accordingly and fixup the users

Patch 2: Introduce the weak arch_set_dumpable()

Patch N: Add the x86 implementation along with the patch which adds the
stipb magic instead of having that extra step of set_stipb() and
then replacing it.

....

Patch X: Add the PRCTL

It's well documented that patches should do one thing at a time and not
come as a hodgepodge of changes.

Thanks,

tglx

2018-10-02 19:05:45

by Thomas Gleixner

[permalink] [raw]
Subject: Re: [Patch v2 3/4] x86/speculation: Extend per process STIBP to AMD cpus.

On Tue, 25 Sep 2018, Tim Chen wrote:
> From: Thomas Lendacky <[email protected]>
>
> We extend the app to app spectre v2 mitigation using STIBP
> to the AMD cpus. We need to take care of special
> cases for AMD cpu's update of SPEC_CTRL MSR to avoid double
> writing of MSRs from update to SSBD and STIBP.

According to documentation changelogs want to be written in imperative
mood.

> Originally-by: Thomas Lendacky <[email protected]>
> Signed-off-by: Tim Chen <[email protected]>
> ---
> arch/x86/kernel/process.c | 48 +++++++++++++++++++++++++++++++++++++----------
> 1 file changed, 38 insertions(+), 10 deletions(-)
>
> diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
> index cb24014..4a3a672 100644
> --- a/arch/x86/kernel/process.c
> +++ b/arch/x86/kernel/process.c
> @@ -399,6 +399,10 @@ static __always_inline void set_spec_ctrl_state(unsigned long tifn)
> {
> u64 msr = x86_spec_ctrl_base;
>
> + /*
> + * AMD cpu may have used a different method to update SSBD, so
> + * we need to be sure we are using the SPEC_CTRL MSR for SSBD.

This has nothing to do with AMD. If X86_FEATURE_SSBD is not set, the SSBD
bit is not to be touched.

> + */
> if (static_cpu_has(X86_FEATURE_SSBD))
> msr |= ssbd_tif_to_spec_ctrl(tifn);
>
> @@ -408,20 +412,45 @@ static __always_inline void set_spec_ctrl_state(unsigned long tifn)
> wrmsrl(MSR_IA32_SPEC_CTRL, msr);
> }
>
> -static __always_inline void __speculative_store_bypass_update(unsigned long tifn)
> +static __always_inline void __speculative_store_bypass_update(unsigned long tifp,
> + unsigned long tifn)
> {
> - if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
> - amd_set_ssb_virt_state(tifn);
> - else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
> - amd_set_core_ssb_state(tifn);
> - else
> - set_spec_ctrl_state(tifn);
> + bool stibp = !!((tifp ^ tifn) & _TIF_STIBP);
> + bool ssbd = !!((tifp ^ tifn) & _TIF_SSBD);
> +
> + if (!ssbd && !stibp)
> + return;
> +
> + if (ssbd) {
> + /*
> + * For AMD, try these methods first. The ssbd variable will
> + * reflect if the SPEC_CTRL MSR method is needed.
> + */
> + ssbd = false;
> +
> + if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
> + amd_set_ssb_virt_state(tifn);
> + else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
> + amd_set_core_ssb_state(tifn);
> + else
> + ssbd = true;
> + }
> +
> + /* Avoid a possible extra MSR write, recheck the flags */
> + if (!ssbd && !stibp)
> + return;
> +
> + set_spec_ctrl_state(tifn);

Uuurgh. This is context switch code and it results in a horrible assembly
maze. Also the function name is bogus now. It's not only dealing with SSB
anymore. Please stop gluing stuff into the code as you see fit.

Something like the below creates halfway sensible code.

static __always_inline void spec_ctrl_update_msr(unsigned long tifn)
{
        u64 msr = x86_spec_ctrl_base;

        if (static_cpu_has(X86_FEATURE_SSBD))
                msr |= ssbd_tif_to_spec_ctrl(tifn);

        wrmsrl(MSR_IA32_SPEC_CTRL, msr);
}

static __always_inline void spec_ctrl_update(unsigned long tifp,
                                             unsigned long tifn)
{
        bool updmsr = !!((tifp ^ tifn) & _TIF_STIBP);

        if ((tifp ^ tifn) & _TIF_SSBD) {
                if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
                        amd_set_ssb_virt_state(tifn);
                else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
                        amd_set_core_ssb_state(tifn);
                else if (static_cpu_has(X86_FEATURE_SSBD))
                        updmsr = true;
        }

        if (updmsr)
                spec_ctrl_update_msr(tifn);
}

void speculation_ctrl_update(unsigned long tif)
{
        preempt_disable();
        spec_ctrl_update(~tif, tif);
        preempt_enable();
}

Hmm?

Thanks,

tglx

2018-10-02 19:11:15

by Thomas Gleixner

[permalink] [raw]
Subject: Re: [Patch v2 2/4] x86/speculation: Provide application property based STIBP protection

On Tue, 25 Sep 2018, Tim Chen wrote:

> This patch provides an application property based spectre_v2

# git grep 'This patch' Documentation/process/

> protection with STIBP against attack from another app from

s/app/application/ please. This is not Android.

> a sibling hyper-thread. For security sensitive non-dumpable
> app, STIBP will be turned on before switching to it for Intel
> processors vulnerable to spectre_v2.

What has this to do with Intel processors?

> -static __always_inline void intel_set_ssb_state(unsigned long tifn)
> +static __always_inline void set_spec_ctrl_state(unsigned long tifn)
> {
> - u64 msr = x86_spec_ctrl_base | ssbd_tif_to_spec_ctrl(tifn);
> + u64 msr = x86_spec_ctrl_base;
> +
> + if (static_cpu_has(X86_FEATURE_SSBD))
> + msr |= ssbd_tif_to_spec_ctrl(tifn);
> +
> + if (cpu_smt_control == CPU_SMT_ENABLED)
> + msr |= stibp_tif_to_spec_ctrl(tifn);

Oh no. We are not adding yet another conditional into switch_to(). Either
that's done unconditionally or this wants to have a static key.

> wrmsrl(MSR_IA32_SPEC_CTRL, msr);

Thanks,

tglx

2018-10-02 20:05:07

by Thomas Gleixner

[permalink] [raw]
Subject: Re: [Patch v2 1/4] x86/speculation: Option to select app to app mitigation for spectre_v2

On Tue, 25 Sep 2018, Tim Chen wrote:
> +enum spectre_v2_app2app_mitigation {
> + SPECTRE_V2_APP2APP_NONE,
> + SPECTRE_V2_APP2APP_LITE,
> + SPECTRE_V2_APP2APP_STRICT,
> +};
>
> static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init =
> SPECTRE_V2_NONE;
>
> +static enum spectre_v2_mitigation spectre_v2_app2app_enabled __ro_after_init =

Copy and paste. The enum type is wrong.

> + SPECTRE_V2_APP2APP_NONE;
> +

> +static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_app2app_cmdline(void)

Ditto

> +{
> + char arg[20];
> + int ret, i;
> + enum spectre_v2_mitigation_cmd cmd = SPECTRE_V2_APP2APP_CMD_AUTO;

Please use reverse fir tree ordering of the variables:

enum spectre_v2_mitigation_cmd cmd = SPECTRE_V2_APP2APP_CMD_AUTO;
char arg[20];
int ret, i;

> +
> + ret = cmdline_find_option(boot_command_line, "spectre_v2_app2app", arg, sizeof(arg));

Line break around 78 please
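
Taken together, i.e. roughly (sketch, with the enum types fixed up as
well):

static enum spectre_v2_app2app_mitigation_cmd __init
spectre_v2_parse_app2app_cmdline(void)
{
        enum spectre_v2_app2app_mitigation_cmd cmd = SPECTRE_V2_APP2APP_CMD_AUTO;
        char arg[20];
        int ret, i;

        ret = cmdline_find_option(boot_command_line, "spectre_v2_app2app",
                                  arg, sizeof(arg));
        ...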

> @@ -325,6 +383,9 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
>
> static bool stibp_needed(void)
> {
> + if (static_branch_unlikely(&spectre_v2_app_lite))
> + return false;
> +
> if (spectre_v2_enabled == SPECTRE_V2_NONE)
> return false;
>
> @@ -366,7 +427,9 @@ void arch_smt_update(void)
> static void __init spectre_v2_select_mitigation(void)
> {
> enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
> + enum spectre_v2_app2app_mitigation_cmd app2app_cmd = spectre_v2_parse_app2app_cmdline();

Please avoid these overlong lines. Move the initialization to the code if
it does not fit into the declaration part.

> enum spectre_v2_mitigation mode = SPECTRE_V2_NONE;
> + enum spectre_v2_app2app_mitigation app2app_mode = SPECTRE_V2_APP2APP_NONE;

> /*
> * If the CPU is not affected and the command line mode is NONE or AUTO
> @@ -376,6 +439,17 @@ static void __init spectre_v2_select_mitigation(void)
> (cmd == SPECTRE_V2_CMD_NONE || cmd == SPECTRE_V2_CMD_AUTO))
> return;
>
> + switch (app2app_cmd) {
> + case SPECTRE_V2_APP2APP_CMD_LITE:
> + case SPECTRE_V2_APP2APP_CMD_AUTO:
> + app2app_mode = SPECTRE_V2_APP2APP_LITE;
> + break;
> +
> + case SPECTRE_V2_APP2APP_CMD_STRICT:
> + app2app_mode = SPECTRE_V2_APP2APP_STRICT;
> + break;
> + }
> +
> switch (cmd) {
> case SPECTRE_V2_CMD_NONE:
> return;
> @@ -427,6 +501,11 @@ static void __init spectre_v2_select_mitigation(void)
> }
>
> specv2_set_mode:
> + spectre_v2_app2app_enabled = app2app_mode;
> + pr_info("%s\n", spectre_v2_app2app_strings[app2app_mode]);
> + if (app2app_mode == SPECTRE_V2_APP2APP_LITE)
> + static_branch_enable(&spectre_v2_app_lite);
> +
> spectre_v2_enabled = mode;
> pr_info("%s\n", spectre_v2_strings[mode]);
>
> @@ -441,8 +520,8 @@ static void __init spectre_v2_select_mitigation(void)
> setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
> pr_info("Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch\n");
>
> - /* Initialize Indirect Branch Prediction Barrier if supported */
> - if (boot_cpu_has(X86_FEATURE_IBPB)) {
> + /* Initialize Indirect Branch Prediction Barrier if supported and not disabled */
> + if (boot_cpu_has(X86_FEATURE_IBPB) && app2app_mode != SPECTRE_V2_APP2APP_NONE) {

Line breaks exist for a reason. Applies to comments and code.

> setup_force_cpu_cap(X86_FEATURE_USE_IBPB);
> pr_info("Spectre v2 mitigation: Enabling Indirect Branch Prediction Barrier\n");
> }
> @@ -875,8 +954,16 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
>
> case X86_BUG_SPECTRE_V2:
> mutex_lock(&spec_ctrl_mutex);
> - ret = sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
> - boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB" : "",
> + if (static_branch_unlikely(&spectre_v2_app_lite))
> + ret = sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
> + boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB-lite" : "",
> + boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
> + (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ? ", STIBP" : "",
> + boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
> + spectre_v2_module_string());
> + else
> + ret = sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
> + boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB-strict" : "",
> boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
> (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ? ", STIBP" : "",
> boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",

Why do you need to copy the whole thing? What's wrong with using
spectre_v2_app2app_enabled for getting the proper string for the IBPB
mitigation?
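
E.g. a string table indexed by the app2app mode avoids duplicating the
whole sprintf() (sketch; ibpb_strings is illustrative):

        static const char * const ibpb_strings[] = {
                [SPECTRE_V2_APP2APP_NONE]   = "",
                [SPECTRE_V2_APP2APP_LITE]   = ", IBPB-lite",
                [SPECTRE_V2_APP2APP_STRICT] = ", IBPB-strict",
        };

        ret = sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
                      boot_cpu_has(X86_FEATURE_USE_IBPB) ?
                              ibpb_strings[spectre_v2_app2app_enabled] : "",
                      boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
                      (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ? ", STIBP" : "",
                      boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
                      spectre_v2_module_string());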

> - * Check if the current (previous) task has access to the memory
> - * of the @tsk (next) task. If access is denied, make sure to
> + * For lite protection mode, we only protect the non-dumpable
> + * processes.
> + *
> + * Otherwise check if the current (previous) task has access to the memory
> + * of the @tsk (next) task for strict app to app protection.
> + * If access is denied, make sure to
> * issue a IBPB to stop user->user Spectre-v2 attacks.
> *
> * Note: __ptrace_may_access() returns 0 or -ERRNO.
> */
> - return (tsk && tsk->mm && tsk->mm->context.ctx_id != last_ctx_id &&
> - __ptrace_may_access(tsk, PTRACE_MODE_IBPB));

Needs to be updated to the latest code in tip x86/pti

> + /* skip IBPB if no context changes */
> + if (!tsk || !tsk->mm || tsk->mm->context.ctx_id == last_ctx_id)
> + return false;
> +
> + if (static_branch_unlikely(&spectre_v2_app_lite))
> + return (get_dumpable(tsk->mm) != SUID_DUMP_USER);
> + else
> + return (__ptrace_may_access(tsk, PTRACE_MODE_IBPB));

Neither of the branches needs braces for the return
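
I.e. simply (sketch):

        if (static_branch_unlikely(&spectre_v2_app_lite))
                return get_dumpable(tsk->mm) != SUID_DUMP_USER;

        return __ptrace_may_access(tsk, PTRACE_MODE_IBPB);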

Thanks,

tglx

2018-10-03 07:26:08

by Ingo Molnar

[permalink] [raw]
Subject: Re: [Patch v2 4/4] x86/speculation: Add prctl to control indirect branch speculation per process


* Tim Chen <[email protected]> wrote:

> Thanks for the corrections. I'll update the patchset.

Please also go beyond the direct review feedback Thomas and I gave, and proactively look out
for similar patterns of mistakes in these patches and in all future patches you send.

As Thomas's and my review made abundantly clear, this patch series has a very high
proportion of small, indefensible, trivial quality problems.

If the root of the problem is that you are sending out patches too fast, then please *read* your
own code and changelogs more than once before sending them to lkml, and fix trivial mistakes and
coding style so that others don't have to waste time on trivialities.

If the root of the problem is that you don't have enough time to spend on these patches, then
you should perhaps delay your next series for another day or two and use that time to review
the series before sending it for an upstream merge.

I.e. please make a serious effort to improve patch and changelog quality in the future.

Thanks,

Ingo

2018-10-04 19:20:48

by Tim Chen

[permalink] [raw]
Subject: Re: [Patch v2 2/4] x86/speculation: Provide application property based STIBP protection

On 10/02/2018 12:10 PM, Thomas Gleixner wrote:
> On Tue, 25 Sep 2018, Tim Chen wrote:
>
>> This patch provides an application property based spectre_v2
>
> # git grep 'This patch' Documentation/process/
>
>> protection with STIBP against attack from another app from
>
> s/app/application/ please. This is not Android.
>
>> a sibling hyper-thread. For security sensitive non-dumpable
>> app, STIBP will be turned on before switching to it for Intel
>> processors vulnerable to spectre_v2.
>
> What has this to do with Intel processors?
>
>> -static __always_inline void intel_set_ssb_state(unsigned long tifn)
>> +static __always_inline void set_spec_ctrl_state(unsigned long tifn)
>> {
>> - u64 msr = x86_spec_ctrl_base | ssbd_tif_to_spec_ctrl(tifn);
>> + u64 msr = x86_spec_ctrl_base;
>> +
>> + if (static_cpu_has(X86_FEATURE_SSBD))
>> + msr |= ssbd_tif_to_spec_ctrl(tifn);
>> +
>> + if (cpu_smt_control == CPU_SMT_ENABLED)
>> + msr |= stibp_tif_to_spec_ctrl(tifn);
>
> Oh no. We are not adding yet another conditional into switch_to(). Either
> that's done unconditionally or this wants to have a static key.

Okay, will add a static_key to indicate that SMT is in use.
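
Roughly along these lines (sketch; the key name is illustrative and it
would be flipped from arch_smt_update()):

DEFINE_STATIC_KEY_FALSE(cpu_smt_active);

static __always_inline void set_spec_ctrl_state(unsigned long tifn)
{
        u64 msr = x86_spec_ctrl_base;

        if (static_cpu_has(X86_FEATURE_SSBD))
                msr |= ssbd_tif_to_spec_ctrl(tifn);

        if (static_branch_likely(&cpu_smt_active))
                msr |= stibp_tif_to_spec_ctrl(tifn);

        wrmsrl(MSR_IA32_SPEC_CTRL, msr);
}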

Tim

>
>> wrmsrl(MSR_IA32_SPEC_CTRL, msr);
>
> Thanks,
>
> tglx
>


2018-10-05 18:13:41

by Tim Chen

[permalink] [raw]
Subject: Re: [Patch v2 4/4] x86/speculation: Add prctl to control indirect branch speculation per process

On 10/02/2018 10:58 AM, Thomas Gleixner wrote:
> On Tue, 25 Sep 2018, Tim Chen wrote:
>>
>> +void arch_set_dumpable(struct task_struct *tsk, struct mm_struct *mm, int value)
>> +{
>> + if (!static_branch_unlikely(&spectre_v2_app_lite))
>> + return;
>> + if (!static_cpu_has(X86_FEATURE_STIBP))
>> + return;
>> +
>> + if ((unsigned) value != SUID_DUMP_USER) {
>
> > First of all, we use unsigned int and not unsigned. Aside from that, why is the
> > argument not unsigned int right away?


The original set_dumpable() passes suid_dumpable, which is
exposed via /proc/sys/fs/suid_dumpable and defined as int.
It would make sense to define suid_dumpable as an unsigned int instead.

Would you like me to redefine suid_dumpable as unsigned int
in sysctl.c in the patch revision as a separate cleanup patch?
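
The cleanup itself would essentially be a one-liner (sketch):

-int suid_dumpable = 0;
+unsigned int suid_dumpable = 0;

plus fixing up the extern declaration and any users that care about
the signedness.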

Thanks.

Tim

2018-10-05 18:48:49

by Thomas Gleixner

[permalink] [raw]
Subject: Re: [Patch v2 4/4] x86/speculation: Add prctl to control indirect branch speculation per process

On Fri, 5 Oct 2018, Tim Chen wrote:

> On 10/02/2018 10:58 AM, Thomas Gleixner wrote:
> > On Tue, 25 Sep 2018, Tim Chen wrote:
> >>
> >> +void arch_set_dumpable(struct task_struct *tsk, struct mm_struct *mm, int value)
> >> +{
> >> + if (!static_branch_unlikely(&spectre_v2_app_lite))
> >> + return;
> >> + if (!static_cpu_has(X86_FEATURE_STIBP))
> >> + return;
> >> +
> >> + if ((unsigned) value != SUID_DUMP_USER) {
> >
> > First of all, we use unsigned int and not unsigned. Aside from that, why is the
> > argument not unsigned int right away?
>
>
> The original set_dumpable passes suid_dumpable, which was
> exposed via /proc/sys/fs/suid_dumpable and defined as int.
> It will make sense to define suid_dumpable as an unsigned int instead.
>
> Would you like me to redefine suid_dumpable as unsigned int
> in sysctl.c in the patch revision as a separate clean up patch?

Yes, that makes sense.

Thanks,

tglx