2019-04-10 23:16:33

by Jeremy Linton

Subject: [v7 00/10] arm64: add system vulnerability sysfs entries

Arm64 machines should display a human-readable status for their
vulnerability to speculative execution attacks in
/sys/devices/system/cpu/vulnerabilities

This series enables that behavior by providing the expected
functions. Those functions expose the cpu errata and feature
states, as well as whether the firmware is responding
appropriately, to determine the overall machine status. This
means that on a heterogeneous machine we will only claim the
machine is mitigated or safe if we are confident all booted
cores are safe or mitigated.
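The all-cores aggregation policy described above can be sketched as a
small stand-alone model. This is illustrative only (the enum and
function names are invented, not part of the series; the real patches
track this with per-capability booleans in cpu_errata.c): a single
vulnerable core makes the whole machine vulnerable, and the machine is
only "not affected" when every booted core is.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative model, not kernel code. */
enum core_state { CORE_SAFE, CORE_MITIGATED, CORE_VULNERABLE };
enum machine_state { MACHINE_SAFE, MACHINE_MITIGATED, MACHINE_VULNERABLE };

static enum machine_state machine_status(const enum core_state *cores, int n)
{
	bool all_safe = true;

	for (int i = 0; i < n; i++) {
		/* One vulnerable core taints the whole machine. */
		if (cores[i] == CORE_VULNERABLE)
			return MACHINE_VULNERABLE;
		if (cores[i] != CORE_SAFE)
			all_safe = false;
	}
	/* "Safe" only if every booted core reported safe. */
	return all_safe ? MACHINE_SAFE : MACHINE_MITIGATED;
}
```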

v6->v7: Inverted the ssb white/black list logic so that, when the
firmware fails to respond, we only mark cores on the
whitelist as not affected. Removed the reviewed/tested tags
for patch 9 only, because of this change.

v5->v6:
Inverted the meltdown logic so that a core is shown as safe
rather than mitigated when the mitigation has been enabled
on a machine that is not vulnerable. This can happen when
the mitigation is forced on via the command line or by
KASLR. It means that when the machine isn't itself
susceptible to meltdown, other methods (e.g. looking at
dmesg) must be used to detect whether kpti is enabled.
Trivial whitespace tweaks.

v4->v5:
Revert the changes to remove the CONFIG_EXPERT hidden
options, but leave the detection paths building
without #ifdef wrappers. Also remove the
CONFIG_GENERIC_CPU_VULNERABILITIES #ifdefs
as we are 'select'ing the option in the Kconfig.
This allows us to keep all three variations of
the CONFIG/enable/disable paths without a lot of
(CONFIG_X || CONFIG_Y) checks.
Various bits/pieces moved between the patches in an attempt
to keep similar features/changes together.

v3->v4:
Drop the patch which selectively exports sysfs entries
Remove the CONFIG_EXPERT hidden options which allowed
the kernel to be built without the vulnerability
detection code.
Pick Marc Z's patches which invert the white/black
lists for spectrev2 and clean up the firmware
detection logic.
Document the existing kpti controls
Add a nospectre_v2 option to boot time disable the
mitigation

v2->v3:
Remove "Unknown" states, replace with further blacklists
and default vulnerable/not affected states.
Add the ability for an arch port to selectively export
sysfs vulnerabilities.

v1->v2:
Add "Unknown" state to ABI/testing docs.
Minor tweaks.

Jeremy Linton (6):
arm64: Provide a command line to disable spectre_v2 mitigation
arm64: add sysfs vulnerability show for meltdown
arm64: Always enable spectrev2 vulnerability detection
arm64: add sysfs vulnerability show for spectre v2
arm64: Always enable ssb vulnerability detection
arm64: add sysfs vulnerability show for speculative store bypass

Marc Zyngier (2):
arm64: Advertise mitigation of Spectre-v2, or lack thereof
arm64: Use firmware to detect CPUs that are not affected by Spectre-v2

Mian Yousaf Kaukab (2):
arm64: add sysfs vulnerability show for spectre v1
arm64: enable generic CPU vulnerabilities support

.../admin-guide/kernel-parameters.txt | 8 +-
arch/arm64/Kconfig | 1 +
arch/arm64/include/asm/cpufeature.h | 4 -
arch/arm64/kernel/cpu_errata.c | 257 +++++++++++++-----
arch/arm64/kernel/cpufeature.c | 58 +++-
5 files changed, 241 insertions(+), 87 deletions(-)

--
2.20.1


2019-04-10 23:13:53

by Jeremy Linton

Subject: [v7 04/10] arm64: Advertise mitigation of Spectre-v2, or lack thereof

From: Marc Zyngier <[email protected]>

We currently have a list of CPUs affected by Spectre-v2, for which
we check that the firmware implements ARCH_WORKAROUND_1. It turns
out that not all firmware implements the required mitigation,
and that we fail to let the user know about it.

Instead, let's slightly revamp our checks, and rely on a whitelist
of cores that are known to be non-vulnerable, and let the user know
the status of the mitigation in the kernel log.

Signed-off-by: Marc Zyngier <[email protected]>
[This makes more sense in front of the sysfs patch]
[Pick pieces of that patch into this and move it earlier]
Signed-off-by: Jeremy Linton <[email protected]>
Reviewed-by: Andre Przywara <[email protected]>
Reviewed-by: Suzuki K Poulose <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Tested-by: Stefan Wahren <[email protected]>
---
arch/arm64/kernel/cpu_errata.c | 107 +++++++++++++++++----------------
1 file changed, 55 insertions(+), 52 deletions(-)

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index cf623657cf3c..2b6e6d8e105b 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -131,9 +131,9 @@ static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start,
__flush_icache_range((uintptr_t)dst, (uintptr_t)dst + SZ_2K);
}

-static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
- const char *hyp_vecs_start,
- const char *hyp_vecs_end)
+static void install_bp_hardening_cb(bp_hardening_cb_t fn,
+ const char *hyp_vecs_start,
+ const char *hyp_vecs_end)
{
static DEFINE_RAW_SPINLOCK(bp_lock);
int cpu, slot = -1;
@@ -177,23 +177,6 @@ static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
}
#endif /* CONFIG_KVM_INDIRECT_VECTORS */

-static void install_bp_hardening_cb(const struct arm64_cpu_capabilities *entry,
- bp_hardening_cb_t fn,
- const char *hyp_vecs_start,
- const char *hyp_vecs_end)
-{
- u64 pfr0;
-
- if (!entry->matches(entry, SCOPE_LOCAL_CPU))
- return;
-
- pfr0 = read_cpuid(ID_AA64PFR0_EL1);
- if (cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_CSV2_SHIFT))
- return;
-
- __install_bp_hardening_cb(fn, hyp_vecs_start, hyp_vecs_end);
-}
-
#include <uapi/linux/psci.h>
#include <linux/arm-smccc.h>
#include <linux/psci.h>
@@ -228,31 +211,27 @@ static int __init parse_nospectre_v2(char *str)
}
early_param("nospectre_v2", parse_nospectre_v2);

-static void
-enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
+/*
+ * -1: No workaround
+ * 0: No workaround required
+ * 1: Workaround installed
+ */
+static int detect_harden_bp_fw(void)
{
bp_hardening_cb_t cb;
void *smccc_start, *smccc_end;
struct arm_smccc_res res;
u32 midr = read_cpuid_id();

- if (!entry->matches(entry, SCOPE_LOCAL_CPU))
- return;
-
- if (__nospectre_v2) {
- pr_info_once("spectrev2 mitigation disabled by command line option\n");
- return;
- }
-
if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
- return;
+ return -1;

switch (psci_ops.conduit) {
case PSCI_CONDUIT_HVC:
arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
ARM_SMCCC_ARCH_WORKAROUND_1, &res);
if ((int)res.a0 < 0)
- return;
+ return -1;
cb = call_hvc_arch_workaround_1;
/* This is a guest, no need to patch KVM vectors */
smccc_start = NULL;
@@ -263,23 +242,23 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
ARM_SMCCC_ARCH_WORKAROUND_1, &res);
if ((int)res.a0 < 0)
- return;
+ return -1;
cb = call_smc_arch_workaround_1;
smccc_start = __smccc_workaround_1_smc_start;
smccc_end = __smccc_workaround_1_smc_end;
break;

default:
- return;
+ return -1;
}

if (((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR) ||
((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR_V1))
cb = qcom_link_stack_sanitization;

- install_bp_hardening_cb(entry, cb, smccc_start, smccc_end);
+ install_bp_hardening_cb(cb, smccc_start, smccc_end);

- return;
+ return 1;
}
#endif /* CONFIG_HARDEN_BRANCH_PREDICTOR */

@@ -521,24 +500,48 @@ cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
CAP_MIDR_RANGE_LIST(midr_list)

#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
-
/*
- * List of CPUs where we need to issue a psci call to
- * harden the branch predictor.
+ * List of CPUs that do not need any Spectre-v2 mitigation at all.
*/
-static const struct midr_range arm64_bp_harden_smccc_cpus[] = {
- MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
- MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
- MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
- MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
- MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
- MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2),
- MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR_V1),
- MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR),
- MIDR_ALL_VERSIONS(MIDR_NVIDIA_DENVER),
- {},
+static const struct midr_range spectre_v2_safe_list[] = {
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A35),
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A53),
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A55),
+ { /* sentinel */ }
};

+static bool __maybe_unused
+check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)
+{
+ int need_wa;
+
+ WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
+
+ /* If the CPU has CSV2 set, we're safe */
+ if (cpuid_feature_extract_unsigned_field(read_cpuid(ID_AA64PFR0_EL1),
+ ID_AA64PFR0_CSV2_SHIFT))
+ return false;
+
+ /* Alternatively, we have a list of unaffected CPUs */
+ if (is_midr_in_range_list(read_cpuid_id(), spectre_v2_safe_list))
+ return false;
+
+ /* Fallback to firmware detection */
+ need_wa = detect_harden_bp_fw();
+ if (!need_wa)
+ return false;
+
+ /* forced off */
+ if (__nospectre_v2) {
+ pr_info_once("spectrev2 mitigation disabled by command line option\n");
+ return false;
+ }
+
+ if (need_wa < 0)
+ pr_warn_once("ARM_SMCCC_ARCH_WORKAROUND_1 missing from firmware\n");
+
+ return (need_wa > 0);
+}
#endif

#ifdef CONFIG_HARDEN_EL2_VECTORS
@@ -717,8 +720,8 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
{
.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
- .cpu_enable = enable_smccc_arch_workaround_1,
- ERRATA_MIDR_RANGE_LIST(arm64_bp_harden_smccc_cpus),
+ .type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
+ .matches = check_branch_predictor,
},
#endif
#ifdef CONFIG_HARDEN_EL2_VECTORS
--
2.20.1

2019-04-10 23:13:56

by Jeremy Linton

Subject: [v7 07/10] arm64: add sysfs vulnerability show for spectre v2

Add code to track whether all the cores in the machine are
vulnerable, and whether all the vulnerable cores have been
mitigated.

Once we have that information we can add the sysfs stub and
provide an accurate view of what is known about the machine.

Signed-off-by: Jeremy Linton <[email protected]>
Reviewed-by: Andre Przywara <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Tested-by: Stefan Wahren <[email protected]>
---
arch/arm64/kernel/cpu_errata.c | 28 +++++++++++++++++++++++++++-
1 file changed, 27 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 74c4a66500c4..fb8eb6c6088f 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -512,6 +512,10 @@ cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM, \
CAP_MIDR_RANGE_LIST(midr_list)

+/* Track overall mitigation state. We are only mitigated if all cores are ok */
+static bool __hardenbp_enab = true;
+static bool __spectrev2_safe = true;
+
/*
* List of CPUs that do not need any Spectre-v2 mitigation at all.
*/
@@ -522,6 +526,10 @@ static const struct midr_range spectre_v2_safe_list[] = {
{ /* sentinel */ }
};

+/*
+ * Track overall bp hardening for all heterogeneous cores in the machine.
+ * We are only considered "safe" if all booted cores are known safe.
+ */
static bool __maybe_unused
check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)
{
@@ -543,19 +551,25 @@ check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)
if (!need_wa)
return false;

+ __spectrev2_safe = false;
+
if (!IS_ENABLED(CONFIG_HARDEN_BRANCH_PREDICTOR)) {
pr_warn_once("spectrev2 mitigation disabled by configuration\n");
+ __hardenbp_enab = false;
return false;
}

/* forced off */
if (__nospectre_v2) {
pr_info_once("spectrev2 mitigation disabled by command line option\n");
+ __hardenbp_enab = false;
return false;
}

- if (need_wa < 0)
+ if (need_wa < 0) {
pr_warn_once("ARM_SMCCC_ARCH_WORKAROUND_1 missing from firmware\n");
+ __hardenbp_enab = false;
+ }

return (need_wa > 0);
}
@@ -778,3 +792,15 @@ ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
{
return sprintf(buf, "Mitigation: __user pointer sanitization\n");
}
+
+ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ if (__spectrev2_safe)
+ return sprintf(buf, "Not affected\n");
+
+ if (__hardenbp_enab)
+ return sprintf(buf, "Mitigation: Branch predictor hardening\n");
+
+ return sprintf(buf, "Vulnerable\n");
+}
--
2.20.1

2019-04-10 23:14:01

by Jeremy Linton

Subject: [v7 10/10] arm64: enable generic CPU vulnerabilities support

From: Mian Yousaf Kaukab <[email protected]>

Enable the CPU vulnerability show functions for spectre_v1,
spectre_v2, meltdown and store-bypass.

Signed-off-by: Mian Yousaf Kaukab <[email protected]>
Signed-off-by: Jeremy Linton <[email protected]>
Reviewed-by: Andre Przywara <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Tested-by: Stefan Wahren <[email protected]>
---
arch/arm64/Kconfig | 1 +
1 file changed, 1 insertion(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 7e34b9eba5de..6a7b7d4e0e90 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -90,6 +90,7 @@ config ARM64
select GENERIC_CLOCKEVENTS
select GENERIC_CLOCKEVENTS_BROADCAST
select GENERIC_CPU_AUTOPROBE
+ select GENERIC_CPU_VULNERABILITIES
select GENERIC_EARLY_IOREMAP
select GENERIC_IDLE_POLL_SETUP
select GENERIC_IRQ_MULTI_HANDLER
--
2.20.1

2019-04-10 23:14:10

by Jeremy Linton

Subject: [v7 09/10] arm64: add sysfs vulnerability show for speculative store bypass

Return the status based on ssbd_state and the arm64 SSBS feature. If
the mitigation is disabled, or the firmware isn't responding, then
return the expected machine state based on a whitelist of known
good cores.

Given a heterogeneous machine, the overall machine vulnerability
must be a tristate to ensure that any vulnerable cores transition
the state to vulnerable and keep it there. Further, we delay
transitioning to vulnerable until we know the firmware isn't
responding, to avoid a case where we miss the whitelist but the
firmware goes ahead and reports the core is not vulnerable.
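As a rough stand-alone model of that tristate (the enum values mirror
__ssb_safe in the patch below, but the update helper is invented for
illustration): a safe core may only move the state from UNSET to SAFE,
while any unsafe core latches UNSAFE, which no later safe core can undo.

```c
#include <assert.h>

/* Illustrative model of the machine-wide SSB tristate. */
enum ssb_state { SSB_UNSET, SSB_SAFE, SSB_UNSAFE };

static enum ssb_state ssb_update(enum ssb_state cur, int core_safe)
{
	if (!core_safe)
		return SSB_UNSAFE;	/* latch: the machine stays unsafe */

	/* A safe core may only upgrade UNSET to SAFE, never UNSAFE. */
	return cur == SSB_UNSET ? SSB_SAFE : cur;
}
```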

Signed-off-by: Jeremy Linton <[email protected]>
---
arch/arm64/kernel/cpu_errata.c | 62 ++++++++++++++++++++++++++++++++++
1 file changed, 62 insertions(+)

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 6958dcdabf7d..a1f3188c7be0 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -278,6 +278,13 @@ static int detect_harden_bp_fw(void)
DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);

int ssbd_state __read_mostly = ARM64_SSBD_KERNEL;
+static enum {SSB_UNSET, SSB_SAFE, SSB_UNSAFE} __ssb_safe = SSB_UNSET;
+
+static inline void ssb_safe(void)
+{
+ if (__ssb_safe == SSB_UNSET)
+ __ssb_safe = SSB_SAFE;
+}

static const struct ssbd_options {
const char *str;
@@ -383,16 +390,25 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
struct arm_smccc_res res;
bool required = true;
s32 val;
+ bool this_cpu_safe = false;

WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());

if (this_cpu_has_cap(ARM64_SSBS)) {
required = false;
+ ssb_safe();
goto out_printmsg;
}

+ if (is_midr_in_range_list(read_cpuid_id(), entry->midr_range_list)) {
+ ssb_safe();
+ this_cpu_safe = true;
+ }
+
if (psci_ops.smccc_version == SMCCC_VERSION_1_0) {
ssbd_state = ARM64_SSBD_UNKNOWN;
+ if (!this_cpu_safe)
+ __ssb_safe = SSB_UNSAFE;
return false;
}

@@ -409,6 +425,8 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,

default:
ssbd_state = ARM64_SSBD_UNKNOWN;
+ if (!this_cpu_safe)
+ __ssb_safe = SSB_UNSAFE;
return false;
}

@@ -417,23 +435,31 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
switch (val) {
case SMCCC_RET_NOT_SUPPORTED:
ssbd_state = ARM64_SSBD_UNKNOWN;
+ if (!this_cpu_safe)
+ __ssb_safe = SSB_UNSAFE;
return false;

+ /* machines with mixed mitigation requirements must not return this */
case SMCCC_RET_NOT_REQUIRED:
pr_info_once("%s mitigation not required\n", entry->desc);
ssbd_state = ARM64_SSBD_MITIGATED;
+ ssb_safe();
return false;

case SMCCC_RET_SUCCESS:
+ __ssb_safe = SSB_UNSAFE;
required = true;
break;

case 1: /* Mitigation not required on this CPU */
required = false;
+ ssb_safe();
break;

default:
WARN_ON(1);
+ if (!this_cpu_safe)
+ __ssb_safe = SSB_UNSAFE;
return false;
}

@@ -474,6 +500,14 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
return required;
}

+/* known invulnerable cores */
+static const struct midr_range arm64_ssb_cpus[] = {
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A35),
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A53),
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A55),
+ {},
+};
+
static void __maybe_unused
cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
{
@@ -769,6 +803,7 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
.capability = ARM64_SSBD,
.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
.matches = has_ssbd_mitigation,
+ .midr_range_list = arm64_ssb_cpus,
},
#ifdef CONFIG_ARM64_ERRATUM_1188873
{
@@ -807,3 +842,30 @@ ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,

return sprintf(buf, "Vulnerable\n");
}
+
+ssize_t cpu_show_spec_store_bypass(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ /*
+ * Two assumptions: first, ssbd_state reflects the worst case
+ * for heterogeneous machines; second, if SSBS is supported, it
+ * is supported by all cores.
+ */
+ switch (ssbd_state) {
+ case ARM64_SSBD_MITIGATED:
+ return sprintf(buf, "Not affected\n");
+
+ case ARM64_SSBD_KERNEL:
+ case ARM64_SSBD_FORCE_ENABLE:
+ if (cpus_have_cap(ARM64_SSBS))
+ return sprintf(buf, "Not affected\n");
+ if (IS_ENABLED(CONFIG_ARM64_SSBD))
+ return sprintf(buf,
+ "Mitigation: Speculative Store Bypass disabled\n");
+ }
+
+ if (__ssb_safe == SSB_SAFE)
+ return sprintf(buf, "Not affected\n");
+
+ return sprintf(buf, "Vulnerable\n");
+}
--
2.20.1

2019-04-10 23:14:27

by Jeremy Linton

Subject: [v7 08/10] arm64: Always enable ssb vulnerability detection

The ssb detection logic is necessary regardless of whether
the vulnerability mitigation code is built into the kernel.
Break it out so that the CONFIG option only controls the
mitigation logic and not the vulnerability detection.

Signed-off-by: Jeremy Linton <[email protected]>
Reviewed-by: Andre Przywara <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Tested-by: Stefan Wahren <[email protected]>
---
arch/arm64/include/asm/cpufeature.h | 4 ----
arch/arm64/kernel/cpu_errata.c | 11 +++++++----
2 files changed, 7 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index e505e1fbd2b9..6ccdc97e5d6a 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -638,11 +638,7 @@ static inline int arm64_get_ssbd_state(void)
#endif
}

-#ifdef CONFIG_ARM64_SSBD
void arm64_set_ssbd_mitigation(bool state);
-#else
-static inline void arm64_set_ssbd_mitigation(bool state) {}
-#endif

extern int do_emulate_mrs(struct pt_regs *regs, u32 sys_reg, u32 rt);

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index fb8eb6c6088f..6958dcdabf7d 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -275,7 +275,6 @@ static int detect_harden_bp_fw(void)
return 1;
}

-#ifdef CONFIG_ARM64_SSBD
DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);

int ssbd_state __read_mostly = ARM64_SSBD_KERNEL;
@@ -346,6 +345,7 @@ void __init arm64_enable_wa2_handling(struct alt_instr *alt,
*updptr = cpu_to_le32(aarch64_insn_gen_nop());
}

+#ifdef CONFIG_ARM64_SSBD
void arm64_set_ssbd_mitigation(bool state)
{
if (this_cpu_has_cap(ARM64_SSBS)) {
@@ -370,6 +370,12 @@ void arm64_set_ssbd_mitigation(bool state)
break;
}
}
+#else
+void arm64_set_ssbd_mitigation(bool state)
+{
+ pr_info_once("SSBD disabled by kernel configuration\n");
+}
+#endif /* CONFIG_ARM64_SSBD */

static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,
int scope)
@@ -467,7 +473,6 @@ static bool has_ssbd_mitigation(const struct arm64_cpu_capabilities *entry,

return required;
}
-#endif /* CONFIG_ARM64_SSBD */

static void __maybe_unused
cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
@@ -759,14 +764,12 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
ERRATA_MIDR_RANGE_LIST(arm64_harden_el2_vectors),
},
#endif
-#ifdef CONFIG_ARM64_SSBD
{
.desc = "Speculative Store Bypass Disable",
.capability = ARM64_SSBD,
.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
.matches = has_ssbd_mitigation,
},
-#endif
#ifdef CONFIG_ARM64_ERRATUM_1188873
{
/* Cortex-A76 r0p0 to r2p0 */
--
2.20.1

2019-04-10 23:14:33

by Jeremy Linton

Subject: [v7 05/10] arm64: Use firmware to detect CPUs that are not affected by Spectre-v2

From: Marc Zyngier <[email protected]>

The SMCCC ARCH_WORKAROUND_1 service can indicate that although the
firmware knows about the Spectre-v2 mitigation, this particular
CPU is not vulnerable, and it is thus not necessary to call
the firmware on this CPU.

Let's use this information to our benefit.
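A minimal user-space sketch of how the discovery result feeds the
-1/0/1 convention of detect_harden_bp_fw() after this patch (the
helper name here is made up for illustration): a firmware return of 1
means this CPU is not affected, 0 means the workaround is supported
and should be installed, and anything else means no workaround.

```c
#include <assert.h>

/*
 * Map the firmware's ARCH_WORKAROUND_1 discovery result (res.a0) to
 * detect_harden_bp_fw()'s convention:
 *   -1: no workaround, 0: no workaround required, 1: workaround installed.
 */
static int interpret_wa1(int res_a0)
{
	switch (res_a0) {
	case 1:
		return 0;	/* firmware says this CPU is just fine */
	case 0:
		return 1;	/* workaround supported: install it */
	default:
		return -1;	/* negative: not supported */
	}
}
```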

Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Jeremy Linton <[email protected]>
Reviewed-by: Andre Przywara <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Tested-by: Stefan Wahren <[email protected]>
---
arch/arm64/kernel/cpu_errata.c | 32 +++++++++++++++++++++++---------
1 file changed, 23 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 2b6e6d8e105b..e5c4c5d84a4e 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -230,22 +230,36 @@ static int detect_harden_bp_fw(void)
case PSCI_CONDUIT_HVC:
arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
ARM_SMCCC_ARCH_WORKAROUND_1, &res);
- if ((int)res.a0 < 0)
+ switch ((int)res.a0) {
+ case 1:
+ /* Firmware says we're just fine */
+ return 0;
+ case 0:
+ cb = call_hvc_arch_workaround_1;
+ /* This is a guest, no need to patch KVM vectors */
+ smccc_start = NULL;
+ smccc_end = NULL;
+ break;
+ default:
return -1;
- cb = call_hvc_arch_workaround_1;
- /* This is a guest, no need to patch KVM vectors */
- smccc_start = NULL;
- smccc_end = NULL;
+ }
break;

case PSCI_CONDUIT_SMC:
arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
ARM_SMCCC_ARCH_WORKAROUND_1, &res);
- if ((int)res.a0 < 0)
+ switch ((int)res.a0) {
+ case 1:
+ /* Firmware says we're just fine */
+ return 0;
+ case 0:
+ cb = call_smc_arch_workaround_1;
+ smccc_start = __smccc_workaround_1_smc_start;
+ smccc_end = __smccc_workaround_1_smc_end;
+ break;
+ default:
return -1;
- cb = call_smc_arch_workaround_1;
- smccc_start = __smccc_workaround_1_smc_start;
- smccc_end = __smccc_workaround_1_smc_end;
+ }
break;

default:
--
2.20.1

2019-04-10 23:15:07

by Jeremy Linton

Subject: [v7 03/10] arm64: add sysfs vulnerability show for meltdown

Display the system Meltdown vulnerability status. Note that the
mitigation may be enabled on a machine that isn't vulnerable (e.g.
forced on for KASLR); in that case the sysfs entry won't indicate
the mitigation status, because the core ABI doesn't express the
concept of mitigation when the system isn't vulnerable.

Signed-off-by: Jeremy Linton <[email protected]>
Reviewed-by: Suzuki K Poulose <[email protected]>
Reviewed-by: Andre Przywara <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Tested-by: Stefan Wahren <[email protected]>
---
arch/arm64/kernel/cpufeature.c | 58 ++++++++++++++++++++++++++--------
1 file changed, 44 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 4061de10cea6..6b7e1556460a 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -947,7 +947,7 @@ has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope)
return has_cpuid_feature(entry, scope);
}

-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
+static bool __meltdown_safe = true;
static int __kpti_forced; /* 0: not forced, >0: forced on, <0: forced off */

static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
@@ -967,6 +967,16 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
{ /* sentinel */ }
};
char const *str = "command line option";
+ bool meltdown_safe;
+
+ meltdown_safe = is_midr_in_range_list(read_cpuid_id(), kpti_safe_list);
+
+ /* Defer to CPU feature registers */
+ if (has_cpuid_feature(entry, scope))
+ meltdown_safe = true;
+
+ if (!meltdown_safe)
+ __meltdown_safe = false;

/*
* For reasons that aren't entirely clear, enabling KPTI on Cavium
@@ -978,6 +988,19 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
__kpti_forced = -1;
}

+ /* Useful for KASLR robustness */
+ if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && kaslr_offset() > 0) {
+ if (!__kpti_forced) {
+ str = "KASLR";
+ __kpti_forced = 1;
+ }
+ }
+
+ if (!IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0)) {
+ pr_info_once("kernel page table isolation disabled by CONFIG\n");
+ return false;
+ }
+
/* Forced? */
if (__kpti_forced) {
pr_info_once("kernel page table isolation forced %s by %s\n",
@@ -985,18 +1008,10 @@ static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
return __kpti_forced > 0;
}

- /* Useful for KASLR robustness */
- if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
- return kaslr_offset() > 0;
-
- /* Don't force KPTI for CPUs that are not vulnerable */
- if (is_midr_in_range_list(read_cpuid_id(), kpti_safe_list))
- return false;
-
- /* Defer to CPU feature registers */
- return !has_cpuid_feature(entry, scope);
+ return !meltdown_safe;
}

+#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
static void
kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
{
@@ -1026,6 +1041,12 @@ kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)

return;
}
+#else
+static void
+kpti_install_ng_mappings(const struct arm64_cpu_capabilities *__unused)
+{
+}
+#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */

static int __init parse_kpti(char *str)
{
@@ -1039,7 +1060,6 @@ static int __init parse_kpti(char *str)
return 0;
}
early_param("kpti", parse_kpti);
-#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */

#ifdef CONFIG_ARM64_HW_AFDBM
static inline void __cpu_enable_hw_dbm(void)
@@ -1306,7 +1326,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.field_pos = ID_AA64PFR0_EL0_SHIFT,
.min_field_value = ID_AA64PFR0_EL0_32BIT_64BIT,
},
-#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
{
.desc = "Kernel page table isolation (KPTI)",
.capability = ARM64_UNMAP_KERNEL_AT_EL0,
@@ -1322,7 +1341,6 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.matches = unmap_kernel_at_el0,
.cpu_enable = kpti_install_ng_mappings,
},
-#endif
{
/* FP/SIMD is not implemented */
.capability = ARM64_HAS_NO_FPSIMD,
@@ -2101,3 +2119,15 @@ static int __init enable_mrs_emulation(void)
}

core_initcall(enable_mrs_emulation);
+
+ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ if (__meltdown_safe)
+ return sprintf(buf, "Not affected\n");
+
+ if (arm64_kernel_unmapped_at_el0())
+ return sprintf(buf, "Mitigation: KPTI\n");
+
+ return sprintf(buf, "Vulnerable\n");
+}
--
2.20.1

2019-04-10 23:15:07

by Jeremy Linton

Subject: [v7 01/10] arm64: Provide a command line to disable spectre_v2 mitigation

There are various reasons, including benchmarking, to disable the
spectrev2 mitigation on a machine. Provide a command-line option to do so.

Signed-off-by: Jeremy Linton <[email protected]>
Reviewed-by: Suzuki K Poulose <[email protected]>
Reviewed-by: Andre Przywara <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Tested-by: Stefan Wahren <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Cc: [email protected]
---
Documentation/admin-guide/kernel-parameters.txt | 8 ++++----
arch/arm64/kernel/cpu_errata.c | 13 +++++++++++++
2 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 2b8ee90bb644..d153bb15c8c7 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2873,10 +2873,10 @@
check bypass). With this option data leaks are possible
in the system.

- nospectre_v2 [X86,PPC_FSL_BOOK3E] Disable all mitigations for the Spectre variant 2
- (indirect branch prediction) vulnerability. System may
- allow data leaks with this option, which is equivalent
- to spectre_v2=off.
+ nospectre_v2 [X86,PPC_FSL_BOOK3E,ARM64] Disable all mitigations for
+ the Spectre variant 2 (indirect branch prediction)
+ vulnerability. System may allow data leaks with this
+ option.

nospec_store_bypass_disable
[HW] Disable all mitigations for the Speculative Store Bypass vulnerability
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 9950bb0cbd52..d2b2c69d31bb 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -220,6 +220,14 @@ static void qcom_link_stack_sanitization(void)
: "=&r" (tmp));
}

+static bool __nospectre_v2;
+static int __init parse_nospectre_v2(char *str)
+{
+ __nospectre_v2 = true;
+ return 0;
+}
+early_param("nospectre_v2", parse_nospectre_v2);
+
static void
enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
{
@@ -231,6 +239,11 @@ enable_smccc_arch_workaround_1(const struct arm64_cpu_capabilities *entry)
if (!entry->matches(entry, SCOPE_LOCAL_CPU))
return;

+ if (__nospectre_v2) {
+ pr_info_once("spectrev2 mitigation disabled by command line option\n");
+ return;
+ }
+
if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
return;

--
2.20.1

2019-04-10 23:15:09

by Jeremy Linton

Subject: [v7 06/10] arm64: Always enable spectrev2 vulnerability detection

The sysfs patches need to display machine vulnerability
status regardless of kernel config. Prepare for that
by breaking out the vulnerability/mitigation detection
code from the logic which implements the mitigation.
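The pattern used here replaces an #ifdef around the whole detection
path with an IS_ENABLED() check that the compiler folds to a constant,
so detection always runs while the mitigation install is compiled out
when disabled. A hedged sketch of the idea, using a stand-in
IS_ENABLED macro since the real Kconfig machinery isn't available
outside the kernel (function names are invented for illustration):

```c
#include <assert.h>

/*
 * Stand-in for the kernel's IS_ENABLED(): a plain macro so this
 * compiles outside the kernel. In the kernel the condition is a
 * compile-time constant, so the disabled branch is dead-code
 * eliminated while the detection logic still builds and runs.
 */
#define IS_ENABLED(option) (option)
#define CONFIG_HARDEN_BRANCH_PREDICTOR 0 /* assume mitigation compiled out */

static int mitigation_installed;

static void install_mitigation(void)
{
	mitigation_installed = 1;
}

/* Detection always runs; only installing the mitigation is config-gated. */
static int detect_and_maybe_mitigate(int cpu_vulnerable)
{
	if (!cpu_vulnerable)
		return 0;	/* not affected */

	if (IS_ENABLED(CONFIG_HARDEN_BRANCH_PREDICTOR))
		install_mitigation();

	return 1;	/* vulnerable (mitigated only if configured) */
}
```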

Signed-off-by: Jeremy Linton <[email protected]>
Reviewed-by: Andre Przywara <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Tested-by: Stefan Wahren <[email protected]>
---
arch/arm64/kernel/cpu_errata.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index e5c4c5d84a4e..74c4a66500c4 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -109,7 +109,6 @@ cpu_enable_trap_ctr_access(const struct arm64_cpu_capabilities *__unused)

atomic_t arm64_el2_vector_last_slot = ATOMIC_INIT(-1);

-#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
#include <asm/mmu_context.h>
#include <asm/cacheflush.h>

@@ -270,11 +269,11 @@ static int detect_harden_bp_fw(void)
((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR_V1))
cb = qcom_link_stack_sanitization;

- install_bp_hardening_cb(cb, smccc_start, smccc_end);
+ if (IS_ENABLED(CONFIG_HARDEN_BRANCH_PREDICTOR))
+ install_bp_hardening_cb(cb, smccc_start, smccc_end);

return 1;
}
-#endif /* CONFIG_HARDEN_BRANCH_PREDICTOR */

#ifdef CONFIG_ARM64_SSBD
DEFINE_PER_CPU_READ_MOSTLY(u64, arm64_ssbd_callback_required);
@@ -513,7 +512,6 @@ cpu_enable_cache_maint_trap(const struct arm64_cpu_capabilities *__unused)
.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM, \
CAP_MIDR_RANGE_LIST(midr_list)

-#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
/*
* List of CPUs that do not need any Spectre-v2 mitigation at all.
*/
@@ -545,6 +543,11 @@ check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)
if (!need_wa)
return false;

+ if (!IS_ENABLED(CONFIG_HARDEN_BRANCH_PREDICTOR)) {
+ pr_warn_once("spectrev2 mitigation disabled by configuration\n");
+ return false;
+ }
+
/* forced off */
if (__nospectre_v2) {
pr_info_once("spectrev2 mitigation disabled by command line option\n");
@@ -556,7 +559,6 @@ check_branch_predictor(const struct arm64_cpu_capabilities *entry, int scope)

return (need_wa > 0);
}
-#endif

#ifdef CONFIG_HARDEN_EL2_VECTORS

@@ -731,13 +733,11 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
ERRATA_MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
},
#endif
-#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
{
.capability = ARM64_HARDEN_BRANCH_PREDICTOR,
.type = ARM64_CPUCAP_LOCAL_CPU_ERRATUM,
.matches = check_branch_predictor,
},
-#endif
#ifdef CONFIG_HARDEN_EL2_VECTORS
{
.desc = "EL2 vector hardening",
--
2.20.1

2019-04-10 23:15:15

by Jeremy Linton


Subject: [v7 02/10] arm64: add sysfs vulnerability show for spectre v1

From: Mian Yousaf Kaukab <[email protected]>

Spectre v1 has been mitigated, and the mitigation is
always active.

Signed-off-by: Mian Yousaf Kaukab <[email protected]>
Signed-off-by: Jeremy Linton <[email protected]>
Reviewed-by: Andre Przywara <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Tested-by: Stefan Wahren <[email protected]>
Acked-by: Suzuki K Poulose <[email protected]>
---
arch/arm64/kernel/cpu_errata.c | 6 ++++++
1 file changed, 6 insertions(+)

diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index d2b2c69d31bb..cf623657cf3c 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -755,3 +755,9 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
{
}
};
+
+ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ return sprintf(buf, "Mitigation: __user pointer sanitization\n");
+}
--
2.20.1

2019-04-12 15:33:46

by Jeremy Linton

Subject: Re: [v7 00/10] arm64: add system vulnerability sysfs entries

Hi,

This patch has a bug, and I think I'm going to tweak patch 9 to drop the
tristate (and default to a bool that is not vulnerable), since I ended up
adding all those extra return checks to deal with the unresponsive-firmware
case.



On 4/10/19 6:12 PM, Jeremy Linton wrote:
> Arm64 machines should be displaying a human readable
> vulnerability status to speculative execution attacks in
> /sys/devices/system/cpu/vulnerabilities
>
> This series enables that behavior by providing the expected
> functions. Those functions expose the cpu errata and feature
> states, as well as whether firmware is responding appropriately
> to display the overall machine status. This means that in a
> heterogeneous machine we will only claim the machine is mitigated
> or safe if we are confident all booted cores are safe or
> mitigated.
>
> v6->v7: Invert ssb white/black list logic so that we only mark
> cores in the whitelist not affected when the firmware
> fails to respond. Removed reviewed/tested tags for
> just patch 9 because of this.
>
> v5->v6:
> Invert meltdown logic to display that a core is safe rather
> than mitigated if the mitigation has been enabled on
> machines that are safe. This can happen when the
> mitigation was forced on via command line or KASLR.
> This means that in order to detect if kpti is enabled
> other methods must be used (look at dmesg) when the
> machine isn't itself susceptible to meltdown.
> Trivial whitespace tweaks.
>
> v4->v5:
> Revert the changes to remove the CONFIG_EXPERT hidden
> options, but leave the detection paths building
> without #ifdef wrappers. Also remove the
> CONFIG_GENERIC_CPU_VULNERABILITIES #ifdefs
> as we are 'select'ing the option in the Kconfig.
> This allows us to keep all three variations of
> the CONFIG/enable/disable paths without a lot of
> (CONFIG_X || CONFIG_Y) checks.
> Various bits/pieces moved between the patches in an attempt
> to keep similar features/changes together.
>
> v3->v4:
> Drop the patch which selectively exports sysfs entries
> Remove the CONFIG_EXPERT hidden options which allowed
> the kernel to be built without the vulnerability
> detection code.
> Pick Marc Z's patches which invert the white/black
> lists for spectrev2 and clean up the firmware
> detection logic.
> Document the existing kpti controls
> Add a nospectre_v2 option to boot time disable the
> mitigation
>
> v2->v3:
> Remove "Unknown" states, replace with further blacklists
> and default vulnerable/not affected states.
> Add the ability for an arch port to selectively export
> sysfs vulnerabilities.
>
> v1->v2:
> Add "Unknown" state to ABI/testing docs.
> Minor tweaks.
>
> Jeremy Linton (6):
> arm64: Provide a command line to disable spectre_v2 mitigation
> arm64: add sysfs vulnerability show for meltdown
> arm64: Always enable spectrev2 vulnerability detection
> arm64: add sysfs vulnerability show for spectre v2
> arm64: Always enable ssb vulnerability detection
> arm64: add sysfs vulnerability show for speculative store bypass
>
> Marc Zyngier (2):
> arm64: Advertise mitigation of Spectre-v2, or lack thereof
> arm64: Use firmware to detect CPUs that are not affected by Spectre-v2
>
> Mian Yousaf Kaukab (2):
> arm64: add sysfs vulnerability show for spectre v1
> arm64: enable generic CPU vulnerabilites support
>
> .../admin-guide/kernel-parameters.txt | 8 +-
> arch/arm64/Kconfig | 1 +
> arch/arm64/include/asm/cpufeature.h | 4 -
> arch/arm64/kernel/cpu_errata.c | 257 +++++++++++++-----
> arch/arm64/kernel/cpufeature.c | 58 +++-
> 5 files changed, 241 insertions(+), 87 deletions(-)
>