From: Tim Chen
To: Jiri Kosina, Thomas Gleixner
Cc: Tim Chen, Tom Lendacky, Ingo Molnar, Peter Zijlstra, Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen, Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven, Jon Masters, linux-kernel@vger.kernel.org, x86@kernel.org
Subject: [PATCH 1/2] x86/speculation: Option to select app to app mitigation for spectre_v2
Date: Wed, 19 Sep 2018 14:35:29 -0700
Message-Id: <8b4b0f2fbb77432f68d1dbd2726ca85bd6f9a937.1537392876.git.tim.c.chen@linux.intel.com>
Jiri Kosina's patch makes IBPB and STIBP available for general Spectre v2 app-to-app mitigation: IBPB is issued when switching to an app that the previous app cannot ptrace, and STIBP is always enabled.

However, app-to-app exploits are in general difficult to mount, because address space layout randomization in apps forces an attacker to learn the victim app's address space layout ahead of time. Users may not wish to pay the IBPB and STIBP performance overhead for apps that are not security sensitive.

This patch provides a lite option for Spectre v2 app-to-app mitigation, where IBPB is issued only when switching to a security-sensitive, non-dumpable app. The strict option keeps the system at a high security level, using IBPB and STIBP to defend all apps against Spectre v2 app-to-app attacks.

Signed-off-by: Tim Chen
---
 Documentation/admin-guide/kernel-parameters.txt | 11 +++
 arch/x86/include/asm/nospec-branch.h            |  9 +++
 arch/x86/kernel/cpu/bugs.c                      | 95 +++++++++++++++++++++++--
 arch/x86/mm/tlb.c                               | 19 +++--
 4 files changed, 126 insertions(+), 8 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 64a3bf5..6243144 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4186,6 +4186,17 @@
 			Not specifying this option is equivalent to
 			spectre_v2=auto.
 
+	spectre_v2_app2app=
+			[X86] Control app to app mitigation of Spectre variant 2
+			(indirect branch speculation) vulnerability.
+
+			lite   - only turn on mitigation for non-dumpable processes
+			strict - protect against attacks for all user processes
+			auto   - let kernel decide lite or strict mode
+
+			Not specifying this option is equivalent to
+			spectre_v2_app2app=auto.
+
 	spec_store_bypass_disable=
 			[HW] Control Speculative Store Bypass (SSB) Disable mitigation
 			(Speculative Store Bypass vulnerability)

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index fd2a8c1..c59a6c4 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -3,6 +3,7 @@
 #ifndef _ASM_X86_NOSPEC_BRANCH_H_
 #define _ASM_X86_NOSPEC_BRANCH_H_
 
+#include <linux/static_key.h>
 #include <asm/alternative.h>
 #include <asm/alternative-asm.h>
 #include <asm/cpufeatures.h>
@@ -217,6 +218,12 @@ enum spectre_v2_mitigation {
 	SPECTRE_V2_IBRS_ENHANCED,
 };
 
+enum spectre_v2_app2app_mitigation {
+	SPECTRE_V2_APP2APP_NONE,
+	SPECTRE_V2_APP2APP_LITE,
+	SPECTRE_V2_APP2APP_STRICT,
+};
+
 /* The Speculative Store Bypass disable variants */
 enum ssb_mitigation {
 	SPEC_STORE_BYPASS_NONE,
@@ -228,6 +235,8 @@ enum ssb_mitigation {
 extern char __indirect_thunk_start[];
 extern char __indirect_thunk_end[];
 
+DECLARE_STATIC_KEY_FALSE(spectre_v2_app_lite);
+
 /*
  * On VMEXIT we must ensure that no RSB predictions learned in the guest
  * can be followed in the host, by overwriting the RSB completely. Both

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index ee46dcb..c967012 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -133,6 +133,12 @@ enum spectre_v2_mitigation_cmd {
 	SPECTRE_V2_CMD_RETPOLINE_AMD,
 };
 
+enum spectre_v2_app2app_mitigation_cmd {
+	SPECTRE_V2_APP2APP_CMD_AUTO,
+	SPECTRE_V2_APP2APP_CMD_LITE,
+	SPECTRE_V2_APP2APP_CMD_STRICT,
+};
+
 static const char *spectre_v2_strings[] = {
 	[SPECTRE_V2_NONE]			= "Vulnerable",
 	[SPECTRE_V2_RETPOLINE_MINIMAL]		= "Vulnerable: Minimal generic ASM retpoline",
@@ -142,12 +148,24 @@ static const char *spectre_v2_strings[] = {
 	[SPECTRE_V2_IBRS_ENHANCED]		= "Mitigation: Enhanced IBRS",
 };
 
+static const char *spectre_v2_app2app_strings[] = {
+	[SPECTRE_V2_APP2APP_NONE]	= "App-App Vulnerable",
+	[SPECTRE_V2_APP2APP_LITE]	= "App-App Mitigation: Protect only non-dumpable process",
+	[SPECTRE_V2_APP2APP_STRICT]	= "App-App Mitigation: Full app to app attack protection",
+};
+
+DEFINE_STATIC_KEY_FALSE(spectre_v2_app_lite);
+EXPORT_SYMBOL(spectre_v2_app_lite);
+
 #undef pr_fmt
 #define pr_fmt(fmt)     "Spectre V2 : " fmt
 
 static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init =
 	SPECTRE_V2_NONE;
 
+static enum spectre_v2_app2app_mitigation spectre_v2_app2app_enabled __ro_after_init =
+	SPECTRE_V2_APP2APP_NONE;
+
 void
 x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool setguest)
 {
@@ -275,6 +293,46 @@ static const struct {
 	{ "auto",              SPECTRE_V2_CMD_AUTO,              false },
 };
 
+static const struct {
+	const char *option;
+	enum spectre_v2_app2app_mitigation_cmd cmd;
+	bool secure;
+} app2app_mitigation_options[] = {
+	{ "lite",   SPECTRE_V2_APP2APP_CMD_LITE,   false },
+	{ "strict", SPECTRE_V2_APP2APP_CMD_STRICT, false },
+	{ "auto",   SPECTRE_V2_APP2APP_CMD_AUTO,   false },
+};
+
+static enum spectre_v2_app2app_mitigation_cmd __init spectre_v2_parse_app2app_cmdline(void)
+{
+	char arg[20];
+	int ret, i;
+	enum spectre_v2_app2app_mitigation_cmd cmd = SPECTRE_V2_APP2APP_CMD_AUTO;
+
+	ret = cmdline_find_option(boot_command_line, "spectre_v2_app2app", arg, sizeof(arg));
+	if (ret < 0)
+		return SPECTRE_V2_APP2APP_CMD_AUTO;
+
+	for (i = 0; i < ARRAY_SIZE(app2app_mitigation_options); i++) {
+		if (!match_option(arg, ret, app2app_mitigation_options[i].option))
+			continue;
+		cmd = app2app_mitigation_options[i].cmd;
+		break;
+	}
+
+	if (i >= ARRAY_SIZE(app2app_mitigation_options)) {
+		pr_err("unknown app to app protection option (%s). Switching to AUTO select\n", arg);
+		return SPECTRE_V2_APP2APP_CMD_AUTO;
+	}
+
+	if (app2app_mitigation_options[i].secure)
+		spec2_print_if_secure(app2app_mitigation_options[i].option);
+	else
+		spec2_print_if_insecure(app2app_mitigation_options[i].option);
+
+	return cmd;
+}
+
 static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
 {
 	char arg[20];
@@ -325,6 +383,9 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
 
 static bool stibp_needed(void)
 {
+	if (static_branch_unlikely(&spectre_v2_app_lite))
+		return false;
+
 	if (spectre_v2_enabled == SPECTRE_V2_NONE)
 		return false;
 
@@ -366,7 +427,9 @@ void arch_smt_update(void)
 static void __init spectre_v2_select_mitigation(void)
 {
 	enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
+	enum spectre_v2_app2app_mitigation_cmd app2app_cmd = spectre_v2_parse_app2app_cmdline();
 	enum spectre_v2_mitigation mode = SPECTRE_V2_NONE;
+	enum spectre_v2_app2app_mitigation app2app_mode = SPECTRE_V2_APP2APP_NONE;
 
 	/*
 	 * If the CPU is not affected and the command line mode is NONE or AUTO
@@ -376,6 +439,17 @@ static void __init spectre_v2_select_mitigation(void)
 	    (cmd == SPECTRE_V2_CMD_NONE || cmd == SPECTRE_V2_CMD_AUTO))
 		return;
 
+	switch (app2app_cmd) {
+	case SPECTRE_V2_APP2APP_CMD_LITE:
+	case SPECTRE_V2_APP2APP_CMD_AUTO:
+		app2app_mode = SPECTRE_V2_APP2APP_LITE;
+		break;
+
+	case SPECTRE_V2_APP2APP_CMD_STRICT:
+		app2app_mode = SPECTRE_V2_APP2APP_STRICT;
+		break;
+	}
+
 	switch (cmd) {
 	case SPECTRE_V2_CMD_NONE:
 		return;
@@ -427,6 +501,11 @@ static void __init spectre_v2_select_mitigation(void)
 	}
 
 specv2_set_mode:
+	spectre_v2_app2app_enabled = app2app_mode;
+	pr_info("%s\n", spectre_v2_app2app_strings[app2app_mode]);
+	if (app2app_mode == SPECTRE_V2_APP2APP_LITE)
+		static_branch_enable(&spectre_v2_app_lite);
+
 	spectre_v2_enabled = mode;
 	pr_info("%s\n", spectre_v2_strings[mode]);
 
@@ -441,8 +520,8 @@ static void __init spectre_v2_select_mitigation(void)
 		setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
 		pr_info("Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch\n");
 
-	/* Initialize Indirect Branch Prediction Barrier if supported */
-	if (boot_cpu_has(X86_FEATURE_IBPB)) {
+	/* Initialize Indirect Branch Prediction Barrier if supported and not disabled */
+	if (boot_cpu_has(X86_FEATURE_IBPB) && app2app_mode != SPECTRE_V2_APP2APP_NONE) {
 		setup_force_cpu_cap(X86_FEATURE_USE_IBPB);
 		pr_info("Spectre v2 mitigation: Enabling Indirect Branch Prediction Barrier\n");
 	}
@@ -875,8 +954,16 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
 	case X86_BUG_SPECTRE_V2:
 		mutex_lock(&spec_ctrl_mutex);
-		ret = sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
-			       boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB" : "",
+		if (static_branch_unlikely(&spectre_v2_app_lite))
+			ret = sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
+				boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB-lite" : "",
+				boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
+				(x86_spec_ctrl_base & SPEC_CTRL_STIBP) ? ", STIBP" : "",
+				boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
+				spectre_v2_module_string());
+		else
+			ret = sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
+				boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB-strict" : "",
 			       boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
 			       (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ? ", STIBP" : "",
 			       boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
 			       spectre_v2_module_string());

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index ed44444..54780a8 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -184,14 +184,25 @@ static void sync_current_stack_to_mm(struct mm_struct *mm)
 static bool ibpb_needed(struct task_struct *tsk, u64 last_ctx_id)
 {
 	/*
-	 * Check if the current (previous) task has access to the memory
-	 * of the @tsk (next) task. If access is denied, make sure to
+	 * For lite protection mode, we only protect the non-dumpable
+	 * processes.
+	 *
+	 * Otherwise check if the current (previous) task has access to the
+	 * memory of the @tsk (next) task for strict app to app protection.
+	 * If access is denied, make sure to
 	 * issue a IBPB to stop user->user Spectre-v2 attacks.
 	 *
 	 * Note: __ptrace_may_access() returns 0 or -ERRNO.
 	 */
-	return (tsk && tsk->mm && tsk->mm->context.ctx_id != last_ctx_id &&
-		__ptrace_may_access(tsk, PTRACE_MODE_IBPB));
+
+	/* skip IBPB if no context changes */
+	if (!tsk || !tsk->mm || tsk->mm->context.ctx_id == last_ctx_id)
+		return false;
+
+	if (static_branch_unlikely(&spectre_v2_app_lite))
+		return (get_dumpable(tsk->mm) != SUID_DUMP_USER);
+	else
+		return (__ptrace_may_access(tsk, PTRACE_MODE_IBPB));
 }
 
 void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
-- 
2.9.4