2018-01-25 09:24:33

by Woodhouse, David

Subject: [PATCH v4 0/7] Basic Speculation Control feature support

Add the basic CPUID and MSR definitions for AMD and Intel, followed by
the complete no-brainer: disable KPTI on Intel CPUs which set the
RDCL_NO bit to say that they don't need it, as well as on others which
are known not to speculate, such as old Atoms and even older 32-bit chips.

Alan will continue an archæological dig to round up some more entries
for that table.

Also blacklist the early Intel microcodes for Spectre mitigation features,
and add the basic support for indirect_branch_prediction_barrier(). The
latter is needed to protect userspace and complete the retpoline-based
mitigation. Patches on top of it are being bikeshedded as we speak...

v2: Cleanups, add AMD bits for STIBP/SPEC_CTRL.
v3: Add more CPUs to the exemption for KPTI and clean that up.
Add microcode blacklist (RFC).
v4: Roll in 'no speculation' list for CPUs not vulnerable to Spectre.
Cosmetic cleanups in microcode blacklist table.

David Woodhouse (7):
x86/cpufeatures: Add CPUID_7_EDX CPUID leaf
x86/cpufeatures: Add Intel feature bits for Speculation Control
x86/cpufeatures: Add AMD feature bits for Speculation Control
x86/msr: Add definitions for new speculation control MSRs
x86/pti: Do not enable PTI on processors which are not vulnerable to
Meltdown
x86/cpufeature: Blacklist SPEC_CTRL on early Spectre v2 microcodes
x86/speculation: Add basic IBPB (Indirect Branch Prediction Barrier)
support

arch/x86/include/asm/cpufeature.h | 7 +++-
arch/x86/include/asm/cpufeatures.h | 15 +++++--
arch/x86/include/asm/disabled-features.h | 3 +-
arch/x86/include/asm/msr-index.h | 12 ++++++
arch/x86/include/asm/nospec-branch.h | 13 ++++++
arch/x86/include/asm/required-features.h | 3 +-
arch/x86/kernel/cpu/bugs.c | 7 ++++
arch/x86/kernel/cpu/common.c | 48 ++++++++++++++++++---
arch/x86/kernel/cpu/intel.c | 71 ++++++++++++++++++++++++++++++++
arch/x86/kernel/cpu/scattered.c | 2 -
10 files changed, 167 insertions(+), 14 deletions(-)

--
2.7.4



2018-01-25 09:24:43

by Woodhouse, David

Subject: [PATCH v4 1/7] x86/cpufeatures: Add CPUID_7_EDX CPUID leaf

This is a pure feature bits leaf. We have two AVX512 feature bits in it
already which were handled as scattered bits, and I'm about to add three
more from this leaf for speculation control features.

Signed-off-by: David Woodhouse <[email protected]>
Reviewed-by: Greg Kroah-Hartman <[email protected]>
Reviewed-by: Borislav Petkov <[email protected]>
---
arch/x86/include/asm/cpufeature.h | 7 +++++--
arch/x86/include/asm/cpufeatures.h | 8 +++++---
arch/x86/include/asm/disabled-features.h | 3 ++-
arch/x86/include/asm/required-features.h | 3 ++-
arch/x86/kernel/cpu/common.c | 1 +
arch/x86/kernel/cpu/scattered.c | 2 --
6 files changed, 15 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
index ea9a7dd..70eddb3 100644
--- a/arch/x86/include/asm/cpufeature.h
+++ b/arch/x86/include/asm/cpufeature.h
@@ -29,6 +29,7 @@ enum cpuid_leafs
CPUID_8000_000A_EDX,
CPUID_7_ECX,
CPUID_8000_0007_EBX,
+ CPUID_7_EDX,
};

#ifdef CONFIG_X86_FEATURE_NAMES
@@ -79,8 +80,9 @@ extern const char * const x86_bug_flags[NBUGINTS*32];
CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 15, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 16, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 17, feature_bit) || \
+ CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 18, feature_bit) || \
REQUIRED_MASK_CHECK || \
- BUILD_BUG_ON_ZERO(NCAPINTS != 18))
+ BUILD_BUG_ON_ZERO(NCAPINTS != 19))

#define DISABLED_MASK_BIT_SET(feature_bit) \
( CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 0, feature_bit) || \
@@ -101,8 +103,9 @@ extern const char * const x86_bug_flags[NBUGINTS*32];
CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 15, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 16, feature_bit) || \
CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 17, feature_bit) || \
+ CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 18, feature_bit) || \
DISABLED_MASK_CHECK || \
- BUILD_BUG_ON_ZERO(NCAPINTS != 18))
+ BUILD_BUG_ON_ZERO(NCAPINTS != 19))

#define cpu_has(c, bit) \
(__builtin_constant_p(bit) && REQUIRED_MASK_BIT_SET(bit) ? 1 : \
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 25b9375..7b25cf3 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -13,7 +13,7 @@
/*
* Defines x86 CPU feature bits
*/
-#define NCAPINTS 18 /* N 32-bit words worth of info */
+#define NCAPINTS 19 /* N 32-bit words worth of info */
#define NBUGINTS 1 /* N 32-bit bug flags */

/*
@@ -206,8 +206,6 @@
#define X86_FEATURE_RETPOLINE ( 7*32+12) /* Generic Retpoline mitigation for Spectre variant 2 */
#define X86_FEATURE_RETPOLINE_AMD ( 7*32+13) /* AMD Retpoline mitigation for Spectre variant 2 */
#define X86_FEATURE_INTEL_PPIN ( 7*32+14) /* Intel Processor Inventory Number */
-#define X86_FEATURE_AVX512_4VNNIW ( 7*32+16) /* AVX-512 Neural Network Instructions */
-#define X86_FEATURE_AVX512_4FMAPS ( 7*32+17) /* AVX-512 Multiply Accumulation Single precision */

#define X86_FEATURE_MBA ( 7*32+18) /* Memory Bandwidth Allocation */
#define X86_FEATURE_RSB_CTXSW ( 7*32+19) /* Fill RSB on context switches */
@@ -319,6 +317,10 @@
#define X86_FEATURE_SUCCOR (17*32+ 1) /* Uncorrectable error containment and recovery */
#define X86_FEATURE_SMCA (17*32+ 3) /* Scalable MCA */

+/* Intel-defined CPU features, CPUID level 0x00000007:0 (EDX), word 18 */
+#define X86_FEATURE_AVX512_4VNNIW (18*32+ 2) /* AVX-512 Neural Network Instructions */
+#define X86_FEATURE_AVX512_4FMAPS (18*32+ 3) /* AVX-512 Multiply Accumulation Single precision */
+
/*
* BUG word(s)
*/
diff --git a/arch/x86/include/asm/disabled-features.h b/arch/x86/include/asm/disabled-features.h
index e428e16..c6a3af1 100644
--- a/arch/x86/include/asm/disabled-features.h
+++ b/arch/x86/include/asm/disabled-features.h
@@ -71,6 +71,7 @@
#define DISABLED_MASK15 0
#define DISABLED_MASK16 (DISABLE_PKU|DISABLE_OSPKE|DISABLE_LA57)
#define DISABLED_MASK17 0
-#define DISABLED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 18)
+#define DISABLED_MASK18 0
+#define DISABLED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 19)

#endif /* _ASM_X86_DISABLED_FEATURES_H */
diff --git a/arch/x86/include/asm/required-features.h b/arch/x86/include/asm/required-features.h
index d91ba04..fb3a6de 100644
--- a/arch/x86/include/asm/required-features.h
+++ b/arch/x86/include/asm/required-features.h
@@ -106,6 +106,7 @@
#define REQUIRED_MASK15 0
#define REQUIRED_MASK16 (NEED_LA57)
#define REQUIRED_MASK17 0
-#define REQUIRED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 18)
+#define REQUIRED_MASK18 0
+#define REQUIRED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 19)

#endif /* _ASM_X86_REQUIRED_FEATURES_H */
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 372ba3f..e5d66e9 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -745,6 +745,7 @@ void get_cpu_cap(struct cpuinfo_x86 *c)
cpuid_count(0x00000007, 0, &eax, &ebx, &ecx, &edx);
c->x86_capability[CPUID_7_0_EBX] = ebx;
c->x86_capability[CPUID_7_ECX] = ecx;
+ c->x86_capability[CPUID_7_EDX] = edx;
}

/* Extended state features: level 0x0000000d */
diff --git a/arch/x86/kernel/cpu/scattered.c b/arch/x86/kernel/cpu/scattered.c
index d0e6976..df11f5d 100644
--- a/arch/x86/kernel/cpu/scattered.c
+++ b/arch/x86/kernel/cpu/scattered.c
@@ -21,8 +21,6 @@ struct cpuid_bit {
static const struct cpuid_bit cpuid_bits[] = {
{ X86_FEATURE_APERFMPERF, CPUID_ECX, 0, 0x00000006, 0 },
{ X86_FEATURE_EPB, CPUID_ECX, 3, 0x00000006, 0 },
- { X86_FEATURE_AVX512_4VNNIW, CPUID_EDX, 2, 0x00000007, 0 },
- { X86_FEATURE_AVX512_4FMAPS, CPUID_EDX, 3, 0x00000007, 0 },
{ X86_FEATURE_CAT_L3, CPUID_EBX, 1, 0x00000010, 0 },
{ X86_FEATURE_CAT_L2, CPUID_EBX, 2, 0x00000010, 0 },
{ X86_FEATURE_CDP_L3, CPUID_ECX, 2, 0x00000010, 1 },
--
2.7.4


2018-01-25 09:24:48

by Woodhouse, David

Subject: [PATCH v4 2/7] x86/cpufeatures: Add Intel feature bits for Speculation Control

Add three feature bits exposed by new microcode on Intel CPUs for
speculation control.

Signed-off-by: David Woodhouse <[email protected]>
Reviewed-by: Greg Kroah-Hartman <[email protected]>
Reviewed-by: Borislav Petkov <[email protected]>
---
arch/x86/include/asm/cpufeatures.h | 3 +++
1 file changed, 3 insertions(+)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 7b25cf3..0a51070 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -320,6 +320,9 @@
/* Intel-defined CPU features, CPUID level 0x00000007:0 (EDX), word 18 */
#define X86_FEATURE_AVX512_4VNNIW (18*32+ 2) /* AVX-512 Neural Network Instructions */
#define X86_FEATURE_AVX512_4FMAPS (18*32+ 3) /* AVX-512 Multiply Accumulation Single precision */
+#define X86_FEATURE_SPEC_CTRL (18*32+26) /* Speculation Control (IBRS + IBPB) */
+#define X86_FEATURE_STIBP (18*32+27) /* Single Thread Indirect Branch Predictors */
+#define X86_FEATURE_ARCH_CAPABILITIES (18*32+29) /* IA32_ARCH_CAPABILITIES MSR (Intel) */

/*
* BUG word(s)
--
2.7.4


2018-01-25 09:25:16

by Woodhouse, David

Subject: [PATCH v4 4/7] x86/msr: Add definitions for new speculation control MSRs

Add MSR and bit definitions for SPEC_CTRL, PRED_CMD and ARCH_CAPABILITIES.

See Intel's 336996-Speculative-Execution-Side-Channel-Mitigations.pdf

Signed-off-by: David Woodhouse <[email protected]>
Reviewed-by: Greg Kroah-Hartman <[email protected]>
---
arch/x86/include/asm/msr-index.h | 12 ++++++++++++
1 file changed, 12 insertions(+)

diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index fa11fb1..eb83ff1 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -39,6 +39,13 @@

/* Intel MSRs. Some also available on other CPUs */

+#define MSR_IA32_SPEC_CTRL 0x00000048 /* Speculation Control */
+#define SPEC_CTRL_IBRS (1 << 0) /* Indirect Branch Restricted Speculation */
+#define SPEC_CTRL_STIBP (1 << 1) /* Single Thread Indirect Branch Predictors */
+
+#define MSR_IA32_PRED_CMD 0x00000049 /* Prediction Command */
+#define PRED_CMD_IBPB (1 << 0) /* Indirect Branch Prediction Barrier */
+
#define MSR_PPIN_CTL 0x0000004e
#define MSR_PPIN 0x0000004f

@@ -57,6 +64,11 @@
#define SNB_C3_AUTO_UNDEMOTE (1UL << 28)

#define MSR_MTRRcap 0x000000fe
+
+#define MSR_IA32_ARCH_CAPABILITIES 0x0000010a
+#define ARCH_CAP_RDCL_NO (1 << 0) /* Not susceptible to Meltdown */
+#define ARCH_CAP_IBRS_ALL (1 << 1) /* Enhanced IBRS support */
+
#define MSR_IA32_BBL_CR_CTL 0x00000119
#define MSR_IA32_BBL_CR_CTL3 0x0000011e

--
2.7.4
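
For a quick sense of how these definitions get used later in the series, here
is a minimal illustrative sketch (the real call sites are in patches 5 and 7;
this fragment assumes <asm/msr.h>, <asm/msr-index.h> and <linux/printk.h>):

/* Sketch only: exercising the new MSR definitions. */
static void example_speculation_msrs(void)
{
	u64 ia32_cap;

	/* Issue an Indirect Branch Prediction Barrier (cf. patch 7) */
	wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);

	/* Check the "Rogue Data Cache Load immune" bit (cf. patch 5) */
	rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);
	if (ia32_cap & ARCH_CAP_RDCL_NO)
		pr_info("RDCL_NO set: CPU reports it is not vulnerable to Meltdown\n");
}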


2018-01-25 09:25:27

by Woodhouse, David

Subject: [PATCH v4 5/7] x86/pti: Do not enable PTI on processors which are not vulnerable to Meltdown

Also, for CPUs which don't speculate at all, don't report that they're vulnerable
to the Spectre variants either.

Leave the cpu_no_meltdown[] match table with just X86_VENDOR_AMD in it for now,
even though that could be done with a simple comparison, on the assumption that
we'll have more to add.

Based on suggestions from Dave Hansen and Alan Cox.

Signed-off-by: David Woodhouse <[email protected]>
Reviewed-by: Greg Kroah-Hartman <[email protected]>
---
arch/x86/kernel/cpu/common.c | 47 +++++++++++++++++++++++++++++++++++++++-----
1 file changed, 42 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index e5d66e9..08c3efb 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -47,6 +47,8 @@
#include <asm/pat.h>
#include <asm/microcode.h>
#include <asm/microcode_intel.h>
+#include <asm/intel-family.h>
+#include <asm/cpu_device_id.h>

#ifdef CONFIG_X86_LOCAL_APIC
#include <asm/uv/uv.h>
@@ -853,6 +855,40 @@ static void identify_cpu_without_cpuid(struct cpuinfo_x86 *c)
#endif
}

+static const __initdata struct x86_cpu_id cpu_no_speculation[] = {
+ { X86_VENDOR_INTEL, 6, INTEL_FAM6_ATOM_CEDARVIEW, X86_FEATURE_ANY },
+ { X86_VENDOR_INTEL, 6, INTEL_FAM6_ATOM_CLOVERVIEW, X86_FEATURE_ANY },
+ { X86_VENDOR_INTEL, 6, INTEL_FAM6_ATOM_LINCROFT, X86_FEATURE_ANY },
+ { X86_VENDOR_INTEL, 6, INTEL_FAM6_ATOM_PENWELL, X86_FEATURE_ANY },
+ { X86_VENDOR_INTEL, 6, INTEL_FAM6_ATOM_PINEVIEW, X86_FEATURE_ANY },
+ { X86_VENDOR_INTEL, 5 },
+ { X86_VENDOR_CENTAUR, 5 },
+ { X86_VENDOR_ANY, 4 },
+ {}
+};
+
+static const __initdata struct x86_cpu_id cpu_no_meltdown[] = {
+ { X86_VENDOR_AMD },
+ {}
+};
+
+static bool __init early_cpu_vulnerable_meltdown(struct cpuinfo_x86 *c)
+{
+ u64 ia32_cap = 0;
+
+ if (x86_match_cpu(cpu_no_meltdown))
+ return false;
+
+ if (cpu_has(c, X86_FEATURE_ARCH_CAPABILITIES))
+ rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);
+
+ /* Rogue Data Cache Load? No! */
+ if (ia32_cap & ARCH_CAP_RDCL_NO)
+ return false;
+
+ return true;
+}
+
/*
* Do minimum CPU detection early.
* Fields really needed: vendor, cpuid_level, family, model, mask,
@@ -900,11 +936,12 @@ static void __init early_identify_cpu(struct cpuinfo_x86 *c)

setup_force_cpu_cap(X86_FEATURE_ALWAYS);

- if (c->x86_vendor != X86_VENDOR_AMD)
- setup_force_cpu_bug(X86_BUG_CPU_MELTDOWN);
-
- setup_force_cpu_bug(X86_BUG_SPECTRE_V1);
- setup_force_cpu_bug(X86_BUG_SPECTRE_V2);
+ if (!x86_match_cpu(cpu_no_speculation)) {
+ if (early_cpu_vulnerable_meltdown(c))
+ setup_force_cpu_bug(X86_BUG_CPU_MELTDOWN);
+ setup_force_cpu_bug(X86_BUG_SPECTRE_V1);
+ setup_force_cpu_bug(X86_BUG_SPECTRE_V2);
+ }

fpu__init_system(c);

--
2.7.4


2018-01-25 09:25:35

by Woodhouse, David

Subject: [PATCH v4 6/7] x86/cpufeature: Blacklist SPEC_CTRL on early Spectre v2 microcodes

We don't refuse to load the affected microcodes; we just refuse to use SPEC_CTRL
if they're detected.

AMD has a feature bit for "PRED_CMD only", which Intel didn't provide. When
disabling SPEC_CTRL we can turn on that AMD bit to allow IBPB to still be used.

We handle the other AMD bits here too, because hypervisors *may* have been
exposing those bits even on Intel chips, for fine-grained control of what's
available.

We can't easily use x86_match_cpu() for this table because that doesn't handle
steppings. And the approach taken in commit bd9240a18 almost made me lose my
lunch.

Signed-off-by: David Woodhouse <[email protected]>
Reviewed-by: Greg Kroah-Hartman <[email protected]>
---
arch/x86/kernel/cpu/intel.c | 71 +++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 71 insertions(+)

diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index b720dac..4af572d 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -102,6 +102,59 @@ static void probe_xeon_phi_r3mwait(struct cpuinfo_x86 *c)
ELF_HWCAP2 |= HWCAP2_RING3MWAIT;
}

+/*
+ * Early microcode releases for the Spectre v2 mitigation were broken.
+ * Information taken from;
+ * • https://newsroom.intel.com/wp-content/uploads/sites/11/2018/01/microcode-update-guidance.pdf
+ * • https://kb.vmware.com/s/article/52345
+ * • Microcode revisions observed in the wild
+ * • releasenote from 20180108 microcode release
+ */
+struct sku_microcode {
+ u8 model;
+ u8 stepping;
+ u32 microcode;
+};
+static const struct sku_microcode spectre_bad_microcodes[] = {
+ { INTEL_FAM6_KABYLAKE_DESKTOP, 0x0B, 0x80 },
+ { INTEL_FAM6_KABYLAKE_DESKTOP, 0x0A, 0x80 },
+ { INTEL_FAM6_KABYLAKE_DESKTOP, 0x09, 0x80 },
+ { INTEL_FAM6_KABYLAKE_MOBILE, 0x0A, 0x80 },
+ { INTEL_FAM6_KABYLAKE_MOBILE, 0x09, 0x80 },
+ { INTEL_FAM6_SKYLAKE_X, 0x03, 0x0100013e },
+ { INTEL_FAM6_SKYLAKE_X, 0x04, 0x0200003c },
+ { INTEL_FAM6_SKYLAKE_MOBILE, 0x03, 0xc2 },
+ { INTEL_FAM6_SKYLAKE_DESKTOP, 0x03, 0xc2 },
+ { INTEL_FAM6_BROADWELL_CORE, 0x04, 0x28 },
+ { INTEL_FAM6_BROADWELL_GT3E, 0x01, 0x1b },
+ { INTEL_FAM6_BROADWELL_XEON_D, 0x02, 0x14 },
+ { INTEL_FAM6_BROADWELL_XEON_D, 0x03, 0x07000011 },
+ { INTEL_FAM6_BROADWELL_X, 0x01, 0x0b000025 },
+ { INTEL_FAM6_HASWELL_ULT, 0x01, 0x21 },
+ { INTEL_FAM6_HASWELL_GT3E, 0x01, 0x18 },
+ { INTEL_FAM6_HASWELL_CORE, 0x03, 0x23 },
+ { INTEL_FAM6_HASWELL_X, 0x02, 0x3b },
+ { INTEL_FAM6_HASWELL_X, 0x04, 0x10 },
+ { INTEL_FAM6_IVYBRIDGE_X, 0x04, 0x42a },
+ /* Updated in the 20180108 release; blacklist until we know otherwise */
+ { INTEL_FAM6_ATOM_GEMINI_LAKE, 0x01, 0x22 },
+ /* Observed in the wild */
+ { INTEL_FAM6_SANDYBRIDGE_X, 0x06, 0x61b },
+ { INTEL_FAM6_SANDYBRIDGE_X, 0x07, 0x712 },
+};
+
+static bool bad_spectre_microcode(struct cpuinfo_x86 *c)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(spectre_bad_microcodes); i++) {
+ if (c->x86_model == spectre_bad_microcodes[i].model &&
+ c->x86_mask == spectre_bad_microcodes[i].stepping)
+ return (c->microcode <= spectre_bad_microcodes[i].microcode);
+ }
+ return false;
+}
+
static void early_init_intel(struct cpuinfo_x86 *c)
{
u64 misc_enable;
@@ -122,6 +175,24 @@ static void early_init_intel(struct cpuinfo_x86 *c)
if (c->x86 >= 6 && !cpu_has(c, X86_FEATURE_IA64))
c->microcode = intel_get_microcode_revision();

+ if ((cpu_has(c, X86_FEATURE_SPEC_CTRL) ||
+ cpu_has(c, X86_FEATURE_STIBP) ||
+ cpu_has(c, X86_FEATURE_AMD_SPEC_CTRL) ||
+ cpu_has(c, X86_FEATURE_AMD_STIBP)) && bad_spectre_microcode(c)) {
+ pr_warn("Intel Spectre v2 broken microcode detected; disabling SPEC_CTRL\n");
+ /*
+ * Intel's X86_FEATURE_SPEC_CTRL says both MSRs are available.
+ * We can't leave that set, but we can turn on the AMD bit
+ * which advertises PRED_CMD alone. IBPB is believed to be OK.
+ */
+ if (cpu_has(c, X86_FEATURE_SPEC_CTRL))
+ set_cpu_cap(c, X86_FEATURE_AMD_PRED_CMD);
+ clear_cpu_cap(c, X86_FEATURE_SPEC_CTRL);
+ clear_cpu_cap(c, X86_FEATURE_STIBP);
+ clear_cpu_cap(c, X86_FEATURE_AMD_SPEC_CTRL);
+ clear_cpu_cap(c, X86_FEATURE_AMD_STIBP);
+ }
+
/*
* Atom erratum AAE44/AAF40/AAG38/AAH41:
*
--
2.7.4


2018-01-25 09:27:28

by Woodhouse, David

Subject: [PATCH v4 3/7] x86/cpufeatures: Add AMD feature bits for Speculation Control

AMD exposes the PRED_CMD/SPEC_CTRL MSRs slightly differently to Intel.
See http://lkml.kernel.org/r/[email protected]

Signed-off-by: David Woodhouse <[email protected]>
Reviewed-by: Greg Kroah-Hartman <[email protected]>
---
arch/x86/include/asm/cpufeatures.h | 3 +++
1 file changed, 3 insertions(+)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 0a51070..ae3212f 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -269,6 +269,9 @@
#define X86_FEATURE_CLZERO (13*32+ 0) /* CLZERO instruction */
#define X86_FEATURE_IRPERF (13*32+ 1) /* Instructions Retired Count */
#define X86_FEATURE_XSAVEERPTR (13*32+ 2) /* Always save/restore FP error pointers */
+#define X86_FEATURE_AMD_PRED_CMD (13*32+12) /* Prediction Command MSR (AMD) */
+#define X86_FEATURE_AMD_SPEC_CTRL (13*32+14) /* Speculation Control MSR only (AMD) */
+#define X86_FEATURE_AMD_STIBP (13*32+15) /* Single Thread Indirect Branch Predictors (AMD) */

/* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */
#define X86_FEATURE_DTHERM (14*32+ 0) /* Digital Thermal Sensor */
--
2.7.4


2018-01-25 09:29:46

by Woodhouse, David

Subject: [PATCH v4 7/7] x86/speculation: Add basic IBPB (Indirect Branch Prediction Barrier) support

Expose indirect_branch_prediction_barrier() for use in subsequent patches.

[karahmed: remove the special-casing of skylake for using IBPB (wtf?),
switch to using ALTERNATIVES instead of static_cpu_has]
[dwmw2: set up ax/cx/dx in the asm too so it gets NOP'd out]

Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: KarimAllah Ahmed <[email protected]>
Signed-off-by: David Woodhouse <[email protected]>
---
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/include/asm/nospec-branch.h | 13 +++++++++++++
arch/x86/kernel/cpu/bugs.c | 7 +++++++
3 files changed, 21 insertions(+)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index ae3212f..6b988278 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -207,6 +207,7 @@
#define X86_FEATURE_RETPOLINE_AMD ( 7*32+13) /* AMD Retpoline mitigation for Spectre variant 2 */
#define X86_FEATURE_INTEL_PPIN ( 7*32+14) /* Intel Processor Inventory Number */

+#define X86_FEATURE_IBPB ( 7*32+16) /* Using Indirect Branch Prediction Barrier */
#define X86_FEATURE_MBA ( 7*32+18) /* Memory Bandwidth Allocation */
#define X86_FEATURE_RSB_CTXSW ( 7*32+19) /* Fill RSB on context switches */

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 4ad4108..34e384c 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -218,5 +218,18 @@ static inline void vmexit_fill_RSB(void)
#endif
}

+static inline void indirect_branch_prediction_barrier(void)
+{
+ asm volatile(ALTERNATIVE("",
+ "movl %[msr], %%ecx\n\t"
+ "movl %[val], %%eax\n\t"
+ "movl $0, %%edx\n\t"
+ "wrmsr",
+ X86_FEATURE_IBPB)
+ : : [msr] "i" (MSR_IA32_PRED_CMD),
+ [val] "i" (PRED_CMD_IBPB)
+ : "eax", "ecx", "edx", "memory");
+}
+
#endif /* __ASSEMBLY__ */
#endif /* __NOSPEC_BRANCH_H__ */
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 390b3dc..96548ff 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -249,6 +249,13 @@ static void __init spectre_v2_select_mitigation(void)
setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
pr_info("Filling RSB on context switch\n");
}
+
+ /* Initialize Indirect Branch Prediction Barrier if supported */
+ if (boot_cpu_has(X86_FEATURE_SPEC_CTRL) ||
+ boot_cpu_has(X86_FEATURE_AMD_PRED_CMD)) {
+ setup_force_cpu_cap(X86_FEATURE_IBPB);
+ pr_info("Enabling Indirect Branch Prediction Barrier\n");
+ }
}

#undef pr_fmt
--
2.7.4
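
For context, a rough sketch of the kind of caller the not-yet-merged follow-up
patches are expected to add, e.g. on context switch; the helper name and the
exact condition below are hypothetical:

#include <linux/mm_types.h>
#include <asm/nospec-branch.h>

/*
 * Hypothetical example only: flush indirect branch predictions when
 * switching to a different address space, so the outgoing task cannot
 * steer the incoming task's indirect branches.
 */
static inline void example_switch_mm_ibpb(struct mm_struct *prev,
					  struct mm_struct *next)
{
	if (prev != next)
		indirect_branch_prediction_barrier();
}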


2018-01-25 09:44:01

by Peter Zijlstra

Subject: Re: [PATCH v4 5/7] x86/pti: Do not enable PTI on processors which are not vulnerable to Meltdown

On Thu, Jan 25, 2018 at 09:23:07AM +0000, David Woodhouse wrote:
> +static bool __init early_cpu_vulnerable_meltdown(struct cpuinfo_x86 *c)
> +{
> + u64 ia32_cap = 0;
> +
> + if (x86_match_cpu(cpu_no_meltdown))
> + return false;
> +
> + if (cpu_has(c, X86_FEATURE_ARCH_CAPABILITIES))
> + rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);

I think it was suggested a while back to write this like:

if (cpu_has(c, X86_FEATURE_ARCH_CAPABILITIES) &&
!rdmsrl_safe(MSR_IA32_ARCH_CAPABILITIES, ia32_cap))

to deal with funny virt scenarios where they accidentally advertise the
CPUID bit but don't in fact provide the MSR.

> +
> + /* Rogue Data Cache Load? No! */
> + if (ia32_cap & ARCH_CAP_RDCL_NO)
> + return false;
> +
> + return true;
> +}
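
Folding that in, the helper would look roughly like this (a sketch, not a
posted patch; note that rdmsrl_safe() takes a pointer to the destination):

static bool __init early_cpu_vulnerable_meltdown(struct cpuinfo_x86 *c)
{
	u64 ia32_cap = 0;

	if (x86_match_cpu(cpu_no_meltdown))
		return false;

	/*
	 * The fault-safe accessor survives a hypervisor which advertises
	 * the CPUID bit without implementing the MSR; on failure, keep
	 * treating the capabilities word as empty.
	 */
	if (cpu_has(c, X86_FEATURE_ARCH_CAPABILITIES) &&
	    rdmsrl_safe(MSR_IA32_ARCH_CAPABILITIES, &ia32_cap))
		ia32_cap = 0;

	/* Rogue Data Cache Load? No! */
	if (ia32_cap & ARCH_CAP_RDCL_NO)
		return false;

	return true;
}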

2018-01-25 09:57:44

by David Woodhouse

Subject: Re: [PATCH v4 5/7] x86/pti: Do not enable PTI on processors which are not vulnerable to Meltdown

On Thu, 2018-01-25 at 10:42 +0100, Peter Zijlstra wrote:
> On Thu, Jan 25, 2018 at 09:23:07AM +0000, David Woodhouse wrote:
> > +static bool __init early_cpu_vulnerable_meltdown(struct cpuinfo_x86 *c)
> > +{
> > +     u64 ia32_cap = 0;
> > +
> > +     if (x86_match_cpu(cpu_no_meltdown))
> > +                return false;
> > +
> > +     if (cpu_has(c, X86_FEATURE_ARCH_CAPABILITIES))
> > +             rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);
>
> I think it was suggested a while back to write this like:
>
>         if (cpu_has(c, X86_FEATURE_ARCH_CAPABILITIES) &&
>             !rdmsrl_safe(MSR_IA32_ARCH_CAPABILITIES, ia32_cap))
>
> to deal with funny virt scenarios where they accidentally advertise the
> CPUID bit but don't in fact provide the MSR.

It was indeed suggested, but I was a bit confused by that. Because the
CPUID bit exists *purely* to advertise the existence of that MSR;
nothing more.

If it doesn't exist we'll end up with zero in ia32_cap anyway, which
will mean we *won't* see the RDCL_NO bit, and won't disable the
Meltdown flag.



2018-01-25 10:02:28

by Thomas Gleixner

Subject: Re: [PATCH v4 5/7] x86/pti: Do not enable PTI on processors which are not vulnerable to Meltdown

On Thu, 25 Jan 2018, David Woodhouse wrote:
> On Thu, 2018-01-25 at 10:42 +0100, Peter Zijlstra wrote:
> > On Thu, Jan 25, 2018 at 09:23:07AM +0000, David Woodhouse wrote:
> > > +static bool __init early_cpu_vulnerable_meltdown(struct cpuinfo_x86 *c)
> > > +{
> > > +     u64 ia32_cap = 0;
> > > +
> > > +     if (x86_match_cpu(cpu_no_meltdown))
> > > +                return false;
> > > +
> > > +     if (cpu_has(c, X86_FEATURE_ARCH_CAPABILITIES))
> > > +             rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);
> >
> > I think it was suggested a while back to write this like:
> >
> >         if (cpu_has(c, X86_FEATURE_ARCH_CAPABILITIES) &&
> >             !rdmsrl_safe(MSR_IA32_ARCH_CAPABILITIES, ia32_cap))
> >
> > to deal with funny virt scenarios where they accidentally advertise the
> > CPUID bit but don't in fact provide the MSR.
>
> It was indeed suggested, but I was a bit confused by that. Because the
> CPUID bit exists *purely* to advertise the existence of that MSR;
> nothing more.
>
> If it doesn't exist we'll end up with zero in ia32_cap anyway, which
> will mean we *won't* see the RDCL_NO bit, and won't disable the
> Meltdown flag.

And using rdmsrl() has the benefit of running into the
ex_handler_rdmsr_unsafe() exception handler, which emits a warning. The
value returned in ia32_cap is 0.

Thanks,

tglx

2018-01-25 10:44:47

by David Woodhouse

Subject: Re: [PATCH v4 6/7] x86/cpufeature: Blacklist SPEC_CTRL on early Spectre v2 microcodes

On Thu, 2018-01-25 at 09:23 +0000, David Woodhouse wrote:
>
> +/*
> + * Early microcode releases for the Spectre v2 mitigation were broken.
> + * Information taken from;
> + * • https://newsroom.intel.com/wp-content/uploads/sites/11/2018/01/microcode-update-guidance.pdf

Oh look, Intel have released a *third* version of that document
already, and they've bumped the bad Kabylake versions to 0x84, although
they're *still* missing out the KBL 906EA SKU which was updated to 0x80
in the public 20180108 microcode release. I'll bump them all in my tree
to 0x84.

Thomas, want another copy in email now, or were we waiting for another
round of these before you merge them anyway?



2018-01-25 10:55:18

by Thomas Gleixner

Subject: Re: [PATCH v4 6/7] x86/cpufeature: Blacklist SPEC_CTRL on early Spectre v2 microcodes

On Thu, 25 Jan 2018, David Woodhouse wrote:

> On Thu, 2018-01-25 at 09:23 +0000, David Woodhouse wrote:
> >
> > +/*
> > + * Early microcode releases for the Spectre v2 mitigation were broken.
> > + * Information taken from;
> > + * • https://newsroom.intel.com/wp-content/uploads/sites/11/2018/01/microcode-update-guidance.pdf
>
> Oh look, Intel have released a *third* version of that document
> already, and they've bumped the bad Kabylake versions to 0x84, although
> they're *still* missing out the KBL 906EA SKU which was updated to 0x80
> in the public 20180108 microcode release. I'll bump them all in my tree
> to 0x84.
>
> Thomas, want another copy in email now, or were we waiting for another
> round of these before you merge them anyway?

Looking at this mess makes me even less convinced that a blacklist is a
good idea. We have already at least 3 different variants of blacklists.

So I rather see a whitelist approach which says:

if (ucode_version < known_good(family, model))
return;

I know it would require that Intel releases a set of known good ucodes at
some point in the future, but that's a reasonable request.

I rather take a few patches which add the cutoff for family/model (and NO,
I don't want to add stepping into the mix at all) than having this
blacklist mess which keeps changing every other day.
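
A minimal sketch of what such a cutoff table and helper could look like
(hypothetical names and layout; nothing like this was actually posted):

struct sku_good_microcode {
	u8	model;
	u32	microcode;	/* first revision believed good */
};

static const struct sku_good_microcode spectre_good_microcodes[] = {
	/* { model, first_good_revision } entries from a future Intel list */
};

static u32 known_good(struct cpuinfo_x86 *c)
{
	int i;

	for (i = 0; i < ARRAY_SIZE(spectre_good_microcodes); i++) {
		if (c->x86_model == spectre_good_microcodes[i].model)
			return spectre_good_microcodes[i].microcode;
	}
	/* Unknown model: nothing whitelisted, never enable SPEC_CTRL */
	return UINT_MAX;
}

The check in early_init_intel() would then become
"if (c->microcode < known_good(c))" instead of the blacklist lookup.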

Thanks,

tglx



2018-01-25 11:21:19

by David Woodhouse

Subject: Re: [PATCH v4 6/7] x86/cpufeature: Blacklist SPEC_CTRL on early Spectre v2 microcodes

On Thu, 2018-01-25 at 11:54 +0100, Thomas Gleixner wrote:
> On Thu, 25 Jan 2018, David Woodhouse wrote:
>
> >
> > On Thu, 2018-01-25 at 09:23 +0000, David Woodhouse wrote:
> > >
> > >
> > > +/*
> > > + * Early microcode releases for the Spectre v2 mitigation were broken.
> > > + * Information taken from;
> > > + * • https://newsroom.intel.com/wp-content/uploads/sites/11/2018/01/microcode-update-guidance.pdf
> > Oh look, Intel have released a *third* version of that document
> > already, and they've bumped the bad Kabylake versions to 0x84, although
> > they're *still* missing out the KBL 906EA SKU which was updated to 0x80
> > in the public 20180108 microcode release. I'll bump them all in my tree
> > to 0x84.
> >
> > Thomas, want another copy in email now, or were we waiting for another
> > round of these before you merge them anyway?
> Looking at this mess makes me even less convinced that a blacklist is a
> good idea. We have already at least 3 different variants of blacklists.
>
> So I rather see a whitelist approach which says:
>
>    if (ucode_version < known_good(family, model))
>        return;
>
> I know it would require that Intel releases a set of known good ucodes at
> some point in the future, but that's a reasonable request.
>
> I rather take a few patches which add the cutoff for family/model (and NO,
> I don't want to add stepping into the mix at all) than having this
> blacklist mess which keeps changing every other day.

I don't know why they added KBL 0x84 microcode; I'm not aware that it's
been released. Maybe it escaped somewhere so that's why they added it
to the doc.

If they ship another bad microcode revision to the public, newer than
the ones which are already in that list, then there is something
*SEVERELY* wrong. Wronger than we already know.

And if we take this 'piecemeal whitelist' approach, they're going to
have to add the newly-released microcodes to the kernel each time they
do a release which covers a new SKU — so it's not like it'll even help,
because if they *do* ship another bad one, we'll *already* have added
it to the kernel by the time we realise.

I don't expect them to *keep* doing daily releases of the blacklist
doc; it's just been a bit rushed, that's all. I think we're much better
off doing it this way than saying that we'll keep taking incremental
updates as they do new releases.

Also, I don't think we can avoid looking at the stepping. Not unless
you want to make Intel synchronise the microcode version numbers for
different steppings of the same model? Because they aren't at the
moment.



2018-01-25 11:35:23

by Thomas Gleixner

Subject: Re: [PATCH v4 6/7] x86/cpufeature: Blacklist SPEC_CTRL on early Spectre v2 microcodes

On Thu, 25 Jan 2018, David Woodhouse wrote:
> On Thu, 2018-01-25 at 11:54 +0100, Thomas Gleixner wrote:
> > On Thu, 25 Jan 2018, David Woodhouse wrote:
> > > Thomas, want another copy in email now, or were we waiting for another
> > > round of these before you merge them anyway?
> > Looking at this mess makes me even less convinced that a blacklist is a
> > good idea. We have already at least 3 different variants of blacklists.
> >
> > So I rather see a whitelist approach which says:
> >
> >    if (ucode_version < known_good(family, model))
> >        return;
> >
> > I know it would require that Intel releases a set of known good ucodes at
> > some point in the future, but that's a reasonable request.
> >
> > I rather take a few patches which add the cutoff for family/model (and NO,
> > I don't want to add stepping into the mix at all) than having this
> > blacklist mess which keeps changing every other day.
>
> I don't know why they added KBL 0x84 microcode; I'm not aware that it's
> been released. Maybe it escaped somewhere so that's why they added it
> to the doc.
>
> If they ship another bad microcode revision to the public, newer than
> the ones which are already in that list, then there is something
> *SEVERELY* wrong. Wronger than we already know.

Well, all of this is already wrong.

> And if we take this 'piecemeal whitelist' approach, they're going to
> have to add the newly-released microcodes to the kernel each time they
> do a release which covers a new SKU — so it's not like it'll even help,
> because if they *do* ship another bad one, we'll *already* have added
> it to the kernel by the time we realise.

Fair enough.

> I don't expect them to *keep* doing daily releases of the blacklist
> doc; it's just been a bit rushed, that's all. I think we're much better
> off doing it this way than saying that we'll keep taking incremental
> updates as they do new releases.
>
> Also, I don't think we can avoid looking at the stepping. Not unless
> you want to make Intel synchronise the microcode version numbers for
> different steppings of the same model? Because they aren't at the
> moment.

This stuff is really a masterpiece of trainwreck engineering.

So yeah, whatever we do we end up with a proper mess. Let's go for a
blacklist and hope that we'll have something which holds at some
foreseeable day in the future.

The other concern I have is IBRS vs. IBPB. Are we sufficiently sure that
IBPB is working on those IBRS blacklisted ucode revisions? Or should we
just play safe and not touch any of this at all when we detect a
blacklisted one?

Given the close to 0 trust in Intel's change management and QA, I rather
keep my hands from everything which was ever mentioned in any document as
broken. I hope we have a collection of those PDFs stored at a safe place.

Thanks,

tglx

2018-01-25 11:43:25

by Borislav Petkov

Subject: Re: [PATCH v4 7/7] x86/speculation: Add basic IBPB (Indirect Branch Prediction Barrier) support

On Thu, Jan 25, 2018 at 09:23:09AM +0000, David Woodhouse wrote:
> Expose indirect_branch_prediction_barrier() for use in subsequent patches.
>
> [karahmed: remove the special-casing of skylake for using IBPB (wtf?),
> switch to using ALTERNATIVES instead of static_cpu_has]
> [dwmw2: set up ax/cx/dx in the asm too so it gets NOP'd out]
>
> Signed-off-by: Thomas Gleixner <[email protected]>
> Signed-off-by: KarimAllah Ahmed <[email protected]>
> Signed-off-by: David Woodhouse <[email protected]>
> ---
> arch/x86/include/asm/cpufeatures.h | 1 +
> arch/x86/include/asm/nospec-branch.h | 13 +++++++++++++
> arch/x86/kernel/cpu/bugs.c | 7 +++++++
> 3 files changed, 21 insertions(+)
>
> diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
> index ae3212f..6b988278 100644
> --- a/arch/x86/include/asm/cpufeatures.h
> +++ b/arch/x86/include/asm/cpufeatures.h
> @@ -207,6 +207,7 @@
> #define X86_FEATURE_RETPOLINE_AMD ( 7*32+13) /* AMD Retpoline mitigation for Spectre variant 2 */
> #define X86_FEATURE_INTEL_PPIN ( 7*32+14) /* Intel Processor Inventory Number */
>
> +#define X86_FEATURE_IBPB ( 7*32+16) /* Using Indirect Branch Prediction Barrier */
> #define X86_FEATURE_MBA ( 7*32+18) /* Memory Bandwidth Allocation */
> #define X86_FEATURE_RSB_CTXSW ( 7*32+19) /* Fill RSB on context switches */
>
> diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
> index 4ad4108..34e384c 100644
> --- a/arch/x86/include/asm/nospec-branch.h
> +++ b/arch/x86/include/asm/nospec-branch.h
> @@ -218,5 +218,18 @@ static inline void vmexit_fill_RSB(void)
> #endif
> }
>
> +static inline void indirect_branch_prediction_barrier(void)
> +{
> + asm volatile(ALTERNATIVE("",
> + "movl %[msr], %%ecx\n\t"
> + "movl %[val], %%eax\n\t"
> + "movl $0, %%edx\n\t"
> + "wrmsr",
> + X86_FEATURE_IBPB)
> + : : [msr] "i" (MSR_IA32_PRED_CMD),
> + [val] "i" (PRED_CMD_IBPB)
> + : "eax", "ecx", "edx", "memory");

Lemme paste my simplification suggestion from the other day:

"Btw, we can simplify this a bit by dropping the inputs and marking the 3
GPRs as clobbered:

alternative_input("",
"mov $0x49, %%ecx\n\t"
"mov $1, %%eax\n\t"
"xor %%edx, %%edx\n\t"
"wrmsr\n\t",
X86_FEATURE_IBPB,
ASM_NO_INPUT_CLOBBER("eax", "ecx", "edx", "memory"));

"

--
Regards/Gruss,
Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

2018-01-25 11:59:31

by David Woodhouse

Subject: Re: [PATCH v4 7/7] x86/speculation: Add basic IBPB (Indirect Branch Prediction Barrier) support

On Thu, 2018-01-25 at 12:50 +0100, Borislav Petkov wrote:
> On Thu, Jan 25, 2018 at 11:47:35AM +0000, David Woodhouse wrote:
> > I did already do the explicit clobbers now you reminded me how to do
> > them, but I elected not to use ASM_NO_INPUT_CLOBBER() because on
> > balance, losing the MSR_IA32_PRED_CMD / PRED_CMD_IBPB definitions and
> > having to hard-code the numbers was not so much of a win.
>
> and I'm thinking exactly the opposite: if an alternative can lose a
> bunch of inputs which don't change, then that's a good thing.

They're immediates, not registers. So it's like the first example in...

void foo(void)
{
asm volatile ("alternative me away; my numbers are %0 and %1\n"
      :: "i" (3456), "i" (1234));

asm volatile ("alternative me away; GCC already loaded regs %0 and %1\n"
      :: "c" (3456), "a" (1234));
}

... which gives you...

foo:
.LFB0:
.cfi_startproc
#APP
# 4 "foo.c" 1
alternative me away; my numbers are $3456 and $1234

# 0 "" 2
#NO_APP
movl $1234, %eax
movl $3456, %ecx
#APP
# 7 "foo.c" 1
alternative me away; GCC already loaded regs %ecx and %eax

# 0 "" 2
#NO_APP
ret
.cfi_endproc



2018-01-25 12:12:55

by David Woodhouse

Subject: Re: [PATCH v4 7/7] x86/speculation: Add basic IBPB (Indirect Branch Prediction Barrier) support

On Thu, 2018-01-25 at 13:03 +0100, Borislav Petkov wrote:
> On Thu, Jan 25, 2018 at 11:58:20AM +0000, David Woodhouse wrote:
> > They're immediates, not registers. So it's like the first example
> in...
>
> Oh, I know very well what they are - I simply find the macro more
> readable if there are less or no arguments.

Oh, I agree, but having to write 0x49 there instead of
MSR_IA32_PRED_CMD was a more pressing reason *not* to change it.

There are possibly macro hacks that can make that work (the first naïve
attempt didn't) but it exceeded my boredom threshold fairly rapidly.

If you send me a version which still uses MSR_IA32_PRED_CMD and
PRED_CMD_IBPB instead of hard-coding the numbers, and which doesn't end
up with piles of __stringify() and similar crap to do so, I'll happily
change it.



2018-01-25 12:26:11

by David Woodhouse

Subject: Re: [PATCH v4 7/7] x86/speculation: Add basic IBPB (Indirect Branch Prediction Barrier) support

On Thu, 2018-01-25 at 12:41 +0100, Borislav Petkov wrote:
>
> > +static inline void indirect_branch_prediction_barrier(void)
> > +{
> > +     asm volatile(ALTERNATIVE("",
> > +                              "movl %[msr], %%ecx\n\t"
> > +                              "movl %[val], %%eax\n\t"
> > +                              "movl $0, %%edx\n\t"
> > +                              "wrmsr",
> > +                              X86_FEATURE_IBPB)
> > +                  : : [msr] "i" (MSR_IA32_PRED_CMD),
> > +                      [val] "i" (PRED_CMD_IBPB)
> > +                  : "eax", "ecx", "edx", "memory");
>
> Lemme paste my simplification suggestion from the other day:
>
> "Btw, we can simplify this a bit by dropping the inputs and marking the 3
> GPRs as clobbered:
>
>         alternative_input("",
>                           "mov $0x49, %%ecx\n\t"
>                           "mov $1, %%eax\n\t"
>                           "xor %%edx, %%edx\n\t"
>                           "wrmsr\n\t",
>                           X86_FEATURE_IBPB,
>                           ASM_NO_INPUT_CLOBBER("eax", "ecx", "edx", "memory"));

Yeah, I saw that; sorry for not replying explicitly.

I did already do the explicit clobbers now you reminded me how to do
them, but I elected not to use ASM_NO_INPUT_CLOBBER() because on
balance, losing the MSR_IA32_PRED_CMD / PRED_CMD_IBPB definitions and
having to hard-code the numbers was not so much of a win.



2018-01-25 12:26:47

by Borislav Petkov

Subject: Re: [PATCH v4 7/7] x86/speculation: Add basic IBPB (Indirect Branch Prediction Barrier) support

On Thu, Jan 25, 2018 at 11:47:35AM +0000, David Woodhouse wrote:
> I did already do the explicit clobbers now you reminded me how to do
> them, but I elected not to use ASM_NO_INPUT_CLOBBER() because on
> balance, losing the MSR_IA32_PRED_CMD / PRED_CMD_IBPB definitions and
> having to hard-code the numbers was not so much of a win.

and I'm thinking exactly the opposite: if an alternative can lose a
bunch of inputs which don't change, then that's a good thing.

--
Regards/Gruss,
Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

2018-01-25 12:28:35

by Borislav Petkov

Subject: Re: [PATCH v4 7/7] x86/speculation: Add basic IBPB (Indirect Branch Prediction Barrier) support

On Thu, Jan 25, 2018 at 11:58:20AM +0000, David Woodhouse wrote:
> They're immediates, not registers. So it's like the first example in...

Oh, I know very well what they are - I simply find the macro more
readable if there are less or no arguments.

--
Regards/Gruss,
Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

2018-01-25 14:35:19

by David Woodhouse

Subject: Re: [PATCH v4 6/7] x86/cpufeature: Blacklist SPEC_CTRL on early Spectre v2 microcodes

On Thu, 2018-01-25 at 12:34 +0100, Thomas Gleixner wrote:
>
> This stuff is really a masterpiece of trainwreck engineering.
>
> So yeah, whatever we do we end up with a proper mess. Let's go for a
> blacklist and hope that we'll have something which holds at some
> foreseeable day in the future.
>
> The other concern I have is IBRS vs. IBPB. Are we sufficiently sure that
> IBPB is working on those IBRS blacklisted ucode revisions? Or should we
> just play safe and not touch any of this at all when we detect a
> blacklisted one?

That isn't sufficiently clear to me. I've changed it back to blacklist
*everything* for now, to be safe. If at any point Intel want to get
their act together and give us coherent information to the contrary, we
can change to separate IBPB/IBRS blacklists.

> Given the close to 0 trust in Intel's change management and QA, I rather
> keep my hands from everything which was ever mentioned in any document as
> broken. I hope we have a collection of those PDFs stored at a safe place.

$ ls microcode-update-guidance-2018012*
microcode-update-guidance-20180122.pdf
microcode-update-guidance-20180123.pdf
microcode-update-guidance-20180124.pdf


Here's what I have now. I'm happy enough with this, so the main thing
I'm looking for is an ack from Alan for patch #5 of the series, if I've
got that sufficiently correct now.

From ad16c9a9f459a91c0980a2c7fde41640b6c04524 Mon Sep 17 00:00:00 2001
From: David Woodhouse <[email protected]>
Date: Wed, 24 Jan 2018 15:54:41 +0000
Subject: [PATCH 06/18] x86/cpufeature: Blacklist SPEC_CTRL/PRED_CMD on early
 Spectre v2 microcodes

This doesn't refuse to load the affected microcodes; it just refuses to
use the Spectre v2 mitigation features if they're detected, by clearing
the appropriate feature bits.

The AMD CPUID bits are handled here too, because hypervisors *may* have
been exposing those bits even on Intel chips, for fine-grained control
of what's available.

It is non-trivial to use x86_match_cpu() for this table because that
doesn't handle steppings. And the approach taken in commit bd9240a18
almost made me lose my lunch.

Signed-off-by: David Woodhouse <[email protected]>
Reviewed-by: Greg Kroah-Hartman <[email protected]>
---
 arch/x86/kernel/cpu/intel.c | 66 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 66 insertions(+)

diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index b720dac..125b65f 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -102,6 +102,59 @@ static void probe_xeon_phi_r3mwait(struct cpuinfo_x86 *c)
  ELF_HWCAP2 |= HWCAP2_RING3MWAIT;
 }
 
+/*
+ * Early microcode releases for the Spectre v2 mitigation were broken.
+ * Information taken from;
+ * • https://newsroom.intel.com/wp-content/uploads/sites/11/2018/01/microcode-update-guidance.pdf
+ * • https://kb.vmware.com/s/article/52345
+ * • Microcode revisions observed in the wild
+ * • releasenote from 20180108 microcode release
+ */
+struct sku_microcode {
+ u8 model;
+ u8 stepping;
+ u32 microcode;
+};
+static const struct sku_microcode spectre_bad_microcodes[] = {
+ { INTEL_FAM6_KABYLAKE_DESKTOP, 0x0B, 0x84 },
+ { INTEL_FAM6_KABYLAKE_DESKTOP, 0x0A, 0x84 },
+ { INTEL_FAM6_KABYLAKE_DESKTOP, 0x09, 0x84 },
+ { INTEL_FAM6_KABYLAKE_MOBILE, 0x0A, 0x84 },
+ { INTEL_FAM6_KABYLAKE_MOBILE, 0x09, 0x84 },
+ { INTEL_FAM6_SKYLAKE_X, 0x03, 0x0100013e },
+ { INTEL_FAM6_SKYLAKE_X, 0x04, 0x0200003c },
+ { INTEL_FAM6_SKYLAKE_MOBILE, 0x03, 0xc2 },
+ { INTEL_FAM6_SKYLAKE_DESKTOP, 0x03, 0xc2 },
+ { INTEL_FAM6_BROADWELL_CORE, 0x04, 0x28 },
+ { INTEL_FAM6_BROADWELL_GT3E, 0x01, 0x1b },
+ { INTEL_FAM6_BROADWELL_XEON_D, 0x02, 0x14 },
+ { INTEL_FAM6_BROADWELL_XEON_D, 0x03, 0x07000011 },
+ { INTEL_FAM6_BROADWELL_X, 0x01, 0x0b000025 },
+ { INTEL_FAM6_HASWELL_ULT, 0x01, 0x21 },
+ { INTEL_FAM6_HASWELL_GT3E, 0x01, 0x18 },
+ { INTEL_FAM6_HASWELL_CORE, 0x03, 0x23 },
+ { INTEL_FAM6_HASWELL_X, 0x02, 0x3b },
+ { INTEL_FAM6_HASWELL_X, 0x04, 0x10 },
+ { INTEL_FAM6_IVYBRIDGE_X, 0x04, 0x42a },
+ /* Updated in the 20180108 release; blacklist until we know otherwise */
+ { INTEL_FAM6_ATOM_GEMINI_LAKE, 0x01, 0x22 },
+ /* Observed in the wild */
+ { INTEL_FAM6_SANDYBRIDGE_X, 0x06, 0x61b },
+ { INTEL_FAM6_SANDYBRIDGE_X, 0x07, 0x712 },
+};
+
+static bool bad_spectre_microcode(struct cpuinfo_x86 *c)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(spectre_bad_microcodes); i++) {
+ if (c->x86_model == spectre_bad_microcodes[i].model &&
+     c->x86_mask == spectre_bad_microcodes[i].stepping)
+ return (c->microcode <= spectre_bad_microcodes[i].microcode);
+ }
+ return false;
+}
+
 static void early_init_intel(struct cpuinfo_x86 *c)
 {
  u64 misc_enable;
@@ -122,6 +175,19 @@ static void early_init_intel(struct cpuinfo_x86 *c)
  if (c->x86 >= 6 && !cpu_has(c, X86_FEATURE_IA64))
  c->microcode = intel_get_microcode_revision();
 
+ if ((cpu_has(c, X86_FEATURE_SPEC_CTRL) ||
+      cpu_has(c, X86_FEATURE_STIBP) ||
+      cpu_has(c, X86_FEATURE_AMD_SPEC_CTRL) ||
+      cpu_has(c, X86_FEATURE_AMD_PRED_CMD) ||
+      cpu_has(c, X86_FEATURE_AMD_STIBP)) && bad_spectre_microcode(c)) {
+ pr_warn("Intel Spectre v2 broken microcode detected; disabling SPEC_CTRL\n");
+ clear_cpu_cap(c, X86_FEATURE_SPEC_CTRL);
+ clear_cpu_cap(c, X86_FEATURE_STIBP);
+ clear_cpu_cap(c, X86_FEATURE_AMD_SPEC_CTRL);
+ clear_cpu_cap(c, X86_FEATURE_AMD_PRED_CMD);
+ clear_cpu_cap(c, X86_FEATURE_AMD_STIBP);
+ }
+
  /*
   * Atom erratum AAE44/AAF40/AAG38/AAH41:
   *
-- 
2.7.4



2018-01-25 14:59:25

by Thomas Gleixner

Subject: Re: [PATCH v4 6/7] x86/cpufeature: Blacklist SPEC_CTRL on early Spectre v2 microcodes

On Thu, 25 Jan 2018, David Woodhouse wrote:
> On Thu, 2018-01-25 at 12:34 +0100, Thomas Gleixner wrote:
> >
> > This stuff is really a masterpiece of trainwreck engineering.
> >
> > So yeah, whatever we do we end up with a proper mess. Let's go for a
> > blacklist and hope that we'll have something which holds at some
> > foreseeable day in the future.
> >
> > The other concern I have is IBRS vs. IBPB. Are we sufficiently sure that
> > IBPB is working on those IBRS blacklisted ucode revisions? Or should we
> > just play safe and not touch any of this at all when we detect a
> > blacklisted one?
>
> That isn't sufficiently clear to me. I've changed it back to blacklist
> *everything* for now, to be safe. If at any point Intel want to get
> their act together and give us coherent information to the contrary, we
> can change to separate IBPB/IBRS blacklists.

Thanks for that. That's the only sensible approach as long as we have to
deal with the current Quality Assumptions...

Thanks,

tglx

2018-01-25 15:15:00

by Alan Cox

Subject: Re: [PATCH v4 5/7] x86/pti: Do not enable PTI on processors which are not vulnerable to Meltdown

On Thu, 25 Jan 2018 09:23:07 +0000
David Woodhouse <[email protected]> wrote:

> Also, for CPUs which don't speculate at all, don't report that they're vulnerable
> to the Spectre variants either.
>
> Leave the cpu_no_meltdown[] match table with just X86_VENDOR_AMD in it for now,
> even though that could be done with a simple comparison, on the assumption that
> we'll have more to add.
>
> Based on suggestions from Dave Hansen and Alan Cox.

Looks good to me. I've been doing some more document spelunking and have
some more stuff for 32bit but not for 64bit capable systems.

X86_VENDOR_NSC, 5

is safe, as the MediaGX/Geode doesn't have speculation. However CYRIX,5
isn't, because there is a mix of CPU types there.
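
In terms of the cpu_no_speculation[] table from patch 5, that would be one
more entry along these lines (sketch):

static const __initdata struct x86_cpu_id cpu_no_speculation[] = {
	/* ... existing Intel Atom, family-5 and family-4 entries ... */
	{ X86_VENDOR_NSC, 5 },	/* MediaGX/Geode: no speculation */
	{}
};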

Most interesting is the Cyrix one. I'm going to have to resurrect my
Cyrix kit because some of the Cyrix processors actually have control bits
to turn on/off the BTB and also the return stack predictor (PCR0 bits 1
and 0 respectively), and no lfence, so you need to change system flags
or reload a segment register to force a stall.

Alan

2018-01-25 16:18:35

by Alan Cox

Subject: Re: [PATCH v4 6/7] x86/cpufeature: Blacklist SPEC_CTRL on early Spectre v2 microcodes

> Here's what I have now. I'm happy enough with this, so the main thing
> I'm looking for is an ack from Alan for patch #5 of the series, if I've
> got that sufficiently correct now.

You have my ACK for patch 5: Any further changes I'll submit as updates
once it's merged.

I am happy with patch 5 and I think for 64bit we are probably done for
the mainstream. VIA/Centaur data would be nice.

The microcode otoh I don't think makes sense - do you want to be secure
with a slightly higher risk of crashes or insecure ? Once you can point
at a newer ucode for each case then yes I personally at least think it
then makes sense to tell users they want the newer one.

Alan

2018-01-25 16:26:21

by Thomas Gleixner

Subject: Re: [PATCH v4 6/7] x86/cpufeature: Blacklist SPEC_CTRL on early Spectre v2 microcodes

On Thu, 25 Jan 2018, Alan Cox wrote:

> > Here's what I have now. I'm happy enough with this, so the main thing
> > I'm looking for is an ack from Alan for patch #5 of the series, if I've
> > got that sufficiently correct now.
>
> You have my ACK for patch 5: Any further changes I'll submit as updates
> once it's merged.
>
> I am happy with patch 5 and I think for 64bit we are probably done for
> the mainstream. VIA/Centaur data would be nice.
>
> The microcode otoh I don't think makes sense - do you want to be secure
> with a slightly higher risk of crashes or insecure ? Once you can point
> at a newer ucode for each case then yes I personally at least think it
> then makes sense to tell users they want the newer one.

Yes, ideally we can tell the user to update to ucode rev > X for a
particular SKU. Though with the current state of affairs all we can do is
accumulate ucode revisions which are marked bad by Intel's Quality
Assumptions on a given day.

Once Intel gets its act together and provides authoritative information we
can revisit this and make it user friendly. You might be able to speculate
more precisely when this is going to happen.

Thanks,

tglx

2018-01-25 16:37:31

by David Woodhouse

Subject: Re: [PATCH v4 6/7] x86/cpufeature: Blacklist SPEC_CTRL on early Spectre v2 microcodes

On Thu, 2018-01-25 at 17:24 +0100, Thomas Gleixner wrote:
> On Thu, 25 Jan 2018, Alan Cox wrote:
>
> >
> > >
> > > Here's what I have now. I'm happy enough with this, so the main thing
> > > I'm looking for is an ack from Alan for patch #5 of the series, if I've
> > > got that sufficiently correct now.
> > You have my ACK for patch 5: Any further changes I'll submit as updates
> > once it's merged.
> >
> > I am happy with patch 5 and I think for 64bit we are probably done for
> > the mainstream. VIA/Centaur data would be nice.

Thanks.

> > The microcode otoh I don't think makes sense - do you want to be secure
> > with a slightly higher risk of crashes or insecure ? Once you can point
> > at a newer ucode for each case then yes I personally at least think it
> > then makes sense to tell users they want the newer one.
>
> Yes, ideally we can tell the user to update to ucode rev > X for a
> particular SKU. Though with the current state of affairs all we can do is
> accumulate ucode revisions which are marked bad by Intels Quality
> Assumptions at a given day.

I don't think telling them to upgrade to a specific revision is needed.
If they get a 'bad microcode' message they should update to whatever is
the latest.

As for "slightly higher risk of crashes or insecure"... that's a valid
concern *but* remember we're using retpoline to make the kernel secure,
and we haven't even merged the patches yet which do IBPB on context
switch or IBRS on calling into firmware or any of the other lower-
priority things. I have yet to see a viable proof-of-concept for any of
the holes which remain.

So I think we *can* probably live without the microcode features for
now. In particular I *don't* want to be exposing them to VM guests,
which could then take the system down by bashing on them.

If you want to add a command-line option to ignore the blacklist and
allow the features to be used, I *suppose* I could tolerate that.
Please make it at least taint the kernel if a bad microcode is actually
used though.



2018-01-26 09:41:26

by Ingo Molnar

Subject: Re: [PATCH v4 6/7] x86/cpufeature: Blacklist SPEC_CTRL on early Spectre v2 microcodes


* David Woodhouse <[email protected]> wrote:

> On Thu, 2018-01-25 at 12:34 +0100, Thomas Gleixner wrote:
> >
> > This stuff is really a masterpiece of trainwreck engineering.
> >
> > So yeah, whatever we do we end up with a proper mess. Let's go for a
> > blacklist and hope that we'll have something which holds at some
> > foreseeable day in the future.
> >
> > The other concern I have is IBRS vs. IBPB. Are we sufficiently sure that
> > IBPB is working on those IBRS blacklisted ucode revisions? Or should we
> > just play safe and not touch any of this at all when we detect a
> > blacklisted one?
>
> That isn't sufficiently clear to me. I've changed it back to blacklist
> *everything* for now, to be safe. If at any point Intel want to get
> their act together and give us coherent information to the contrary, we
> can change to separate IBPB/IBRS blacklists.

Yes.

I also agree that blacklists are the fundamentally correct approach here: a
bit-rotting blacklist is far better for users than a bit-rotting whitelist,
assuming that the number of CPU and microcode bugs goes down with time.

Thanks,

Ingo