2018-01-06 02:33:01

by Tim Chen

Subject: [PATCH v2 0/8] IBRS patch series

Thanks to everyone for the feedback on the initial posting.
This is an updated patchset that I hope captures all the
review comments. I've done a lot of code cleanup per
everyone's suggestions. Please let me know if I've missed
something.

The retpoline related changes are moved to the end of the
patch series, so they can be taken out or changed easily
without affecting the other patches.

Many people disliked the multi-bit spec_ctrl_ibrs variable,
so I got rid of it and replaced it with a dynamic_ibrs flag
that indicates whether we need to switch IBRS when entering
and exiting the kernel. This is more intuitive and also makes
the code cleaner.

Peter/Andrea suggested that we use a static key to control the
run time IBRS enabling/disabling with a "STATIC_JUMP_IF_TRUE"
kind of construct. However, I had some concerns that the
JUMP_LABEL config may be disabled, in which case the construct
cannot be used. I also encountered some oopses when changing
the IBRS control state, probably related to changing the jump
label branching. I haven't had time to debug that, so I left
it out for now. Help with a patch to get the static key
approach working right would be welcome.

v2.
1. Added missing feature enumeration in tools/arch/x86/include/asm/cpufeatures.h
2. Cleaned up the kernel entry macro labels and moved the macros to calling.h
3. Removed an unnecessary irqs_disabled check in the mwait path
4. Stopped using a bit-field based sys control variable, to make IBRS enabling
simpler and easier to understand
5. Corrected compile issues in the firmware update code
6. Left the IBPB feature bits out of this patch series; they will be added
in their own set of patches later

Tim

---patch series details---
This patch series enables the basic detection and usage of x86 indirect
branch speculation feature. It enables the indirect branch restricted
speculation (IBRS) on kernel entry and disables it on exit.
It enumerates the indirect branch prediction barrier (IBPB).

The x86 IBRS feature requires corresponding microcode support.
It mitigates the variant 2 vulnerability described in
https://googleprojectzero.blogspot.com/2018/01/reading-privileged-memory-with-side.html

If IBRS is set, near returns and near indirect jumps/calls will not
allow their predicted target address to be controlled by code that
executed in a less privileged prediction mode before the IBRS mode was
last written with a value of 1 or on another logical processor so long
as all RSB entries from the previous less privileged prediction mode
are overwritten.

Both retpoline and IBRS provide mitigation against variant 2 attacks,
with IBRS being the more secure method, though it can incur more
performance overhead than retpoline[1]. If you are very paranoid or you
run on a CPU where IBRS=1 is cheaper, you may also want to run in "IBRS
always" mode.

See: https://docs.google.com/document/d/e/2PACX-1vSMrwkaoSUBAFc6Fjd19F18c1O9pudkfAY-7lGYGOTN8mc9ul-J6pWadcAaBJZcVA7W_3jlLKRtKRbd/pub

A more detailed description of IBRS is given in the first patch.

The series applies on top of the page table isolation changes.

Run time and boot time control of the IBRS feature is provided.

There are 2 ways to control IBRS

1. At boot time
noibrs kernel boot parameter will disable IBRS usage

Otherwise, if the above parameter is not specified, the system
will enable IBRS usage if the CPU supports it.

2. At run time
echo 0 > /sys/kernel/debug/x86/ibrs_enabled will turn off IBRS
echo 1 > /sys/kernel/debug/x86/ibrs_enabled will turn on IBRS in kernel
echo 2 > /sys/kernel/debug/x86/ibrs_enabled will turn on IBRS in both userspace and kernel (IBRS always)

[1] https://lkml.org/lkml/2018/1/4/174


Tim Chen (8):
x86/feature: Detect the x86 IBRS feature to control Speculation
x86/enter: MACROS to set/clear IBRS
x86/enter: Use IBRS on syscall and interrupts
x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature
x86/idle: Disable IBRS entering idle and enable it on wakeup
x86/microcode: Recheck IBRS features on microcode reload
x86: Do not use dynamic IBRS if retpoline is enabled
x86: Use IBRS for firmware update path

arch/x86/entry/calling.h | 104 +++++++++++++++
arch/x86/entry/entry_64.S | 23 ++++
arch/x86/entry/entry_64_compat.S | 8 ++
arch/x86/include/asm/apm.h | 6 +
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/include/asm/efi.h | 17 ++-
arch/x86/include/asm/msr-index.h | 4 +
arch/x86/include/asm/mwait.h | 13 ++
arch/x86/include/asm/spec_ctrl.h | 54 ++++++++
arch/x86/kernel/cpu/Makefile | 1 +
arch/x86/kernel/cpu/microcode/core.c | 4 +
arch/x86/kernel/cpu/scattered.c | 3 +
arch/x86/kernel/cpu/spec_ctrl.c | 209 +++++++++++++++++++++++++++++++
arch/x86/kernel/process.c | 9 +-
tools/arch/x86/include/asm/cpufeatures.h | 1 +
15 files changed, 453 insertions(+), 4 deletions(-)
create mode 100644 arch/x86/include/asm/spec_ctrl.h
create mode 100644 arch/x86/kernel/cpu/spec_ctrl.c

--
2.9.4


2018-01-06 02:33:09

by Tim Chen

Subject: [PATCH v2 1/8] x86/feature: Detect the x86 IBRS feature to control Speculation

CPUID EAX=0x7 (ECX=0): EDX bit 26 indicates presence of this feature,
the IA32_SPEC_CTRL MSR (0x48).
IA32_SPEC_CTRL, bit 0 – Indirect Branch Restricted Speculation (IBRS)

If IBRS is set, near returns and near indirect jumps/calls will not allow
their predicted target address to be controlled by code that executed in
a less privileged prediction mode before the IBRS mode was last written
with a value of 1 or on another logical processor so long as all RSB
entries from the previous less privileged prediction mode are overwritten.

* Thus a near indirect jump/call/return may be affected by code in a
less privileged prediction mode that executed AFTER IBRS mode was last
written with a value of 1

* There is no need to clear IBRS before writing it with a value of
1. Unconditionally writing it with a value of 1 after the prediction
mode change is sufficient

* Note: IBRS is not required in order to isolate branch predictions for
SMM or SGX enclaves

* Code executed by a sibling logical processor cannot control indirect
jump/call/return predicted target when IBRS is set

* SMEP will prevent supervisor mode using RSB entries filled by user code;
this can reduce the need for software to overwrite RSB entries

CPU performance could be reduced when running with IBRS set.

Signed-off-by: Tim Chen <[email protected]>
---
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/include/asm/msr-index.h | 4 ++++
arch/x86/kernel/cpu/scattered.c | 1 +
tools/arch/x86/include/asm/cpufeatures.h | 1 +
4 files changed, 7 insertions(+)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 07cdd17..5ee0737 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -209,6 +209,7 @@
#define X86_FEATURE_AVX512_4FMAPS ( 7*32+17) /* AVX-512 Multiply Accumulation Single precision */

#define X86_FEATURE_MBA ( 7*32+18) /* Memory Bandwidth Allocation */
+#define X86_FEATURE_SPEC_CTRL ( 7*32+19) /* Control Speculation Control */

/* Virtualization flags: Linux defined, word 8 */
#define X86_FEATURE_TPR_SHADOW ( 8*32+ 0) /* Intel TPR Shadow */
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 34c4922..f881add 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -42,6 +42,10 @@
#define MSR_PPIN_CTL 0x0000004e
#define MSR_PPIN 0x0000004f

+#define MSR_IA32_SPEC_CTRL 0x00000048
+#define SPEC_CTRL_FEATURE_DISABLE_IBRS (0 << 0)
+#define SPEC_CTRL_FEATURE_ENABLE_IBRS (1 << 0)
+
#define MSR_IA32_PERFCTR0 0x000000c1
#define MSR_IA32_PERFCTR1 0x000000c2
#define MSR_FSB_FREQ 0x000000cd
diff --git a/arch/x86/kernel/cpu/scattered.c b/arch/x86/kernel/cpu/scattered.c
index 05459ad..bc50c40 100644
--- a/arch/x86/kernel/cpu/scattered.c
+++ b/arch/x86/kernel/cpu/scattered.c
@@ -24,6 +24,7 @@ static const struct cpuid_bit cpuid_bits[] = {
{ X86_FEATURE_INTEL_PT, CPUID_EBX, 25, 0x00000007, 0 },
{ X86_FEATURE_AVX512_4VNNIW, CPUID_EDX, 2, 0x00000007, 0 },
{ X86_FEATURE_AVX512_4FMAPS, CPUID_EDX, 3, 0x00000007, 0 },
+ { X86_FEATURE_SPEC_CTRL, CPUID_EDX, 26, 0x00000007, 0 },
{ X86_FEATURE_CAT_L3, CPUID_EBX, 1, 0x00000010, 0 },
{ X86_FEATURE_CAT_L2, CPUID_EBX, 2, 0x00000010, 0 },
{ X86_FEATURE_CDP_L3, CPUID_ECX, 2, 0x00000010, 1 },
diff --git a/tools/arch/x86/include/asm/cpufeatures.h b/tools/arch/x86/include/asm/cpufeatures.h
index 800104c..5e56275 100644
--- a/tools/arch/x86/include/asm/cpufeatures.h
+++ b/tools/arch/x86/include/asm/cpufeatures.h
@@ -208,6 +208,7 @@
#define X86_FEATURE_AVX512_4FMAPS ( 7*32+17) /* AVX-512 Multiply Accumulation Single precision */

#define X86_FEATURE_MBA ( 7*32+18) /* Memory Bandwidth Allocation */
+#define X86_FEATURE_SPEC_CTRL ( 7*32+19) /* Control Speculation Control */

/* Virtualization flags: Linux defined, word 8 */
#define X86_FEATURE_TPR_SHADOW ( 8*32+ 0) /* Intel TPR Shadow */
--
2.9.4

2018-01-06 02:33:13

by Tim Chen

Subject: [PATCH v2 8/8] x86: Use IBRS for firmware update path

From: David Woodhouse <[email protected]>

With retpoline the kernel is impervious to the indirect branch
prediction attack, but firmware won't be, so we still need to set IBRS
to protect firmware code execution when calling into firmware at runtime.

Signed-off-by: David Woodhouse <[email protected]>
Signed-off-by: Tim Chen <[email protected]>
---
arch/x86/include/asm/apm.h | 6 ++++++
arch/x86/include/asm/efi.h | 17 +++++++++++++--
arch/x86/include/asm/spec_ctrl.h | 3 +++
arch/x86/kernel/cpu/spec_ctrl.c | 45 ++++++++++++++++++++++++++++++++++++++++
4 files changed, 69 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/apm.h b/arch/x86/include/asm/apm.h
index 4d4015d..fbf4212 100644
--- a/arch/x86/include/asm/apm.h
+++ b/arch/x86/include/asm/apm.h
@@ -7,6 +7,8 @@
#ifndef _ASM_X86_MACH_DEFAULT_APM_H
#define _ASM_X86_MACH_DEFAULT_APM_H

+#include <asm/spec_ctrl.h>
+
#ifdef APM_ZERO_SEGS
# define APM_DO_ZERO_SEGS \
"pushl %%ds\n\t" \
@@ -28,6 +30,7 @@ static inline void apm_bios_call_asm(u32 func, u32 ebx_in, u32 ecx_in,
u32 *eax, u32 *ebx, u32 *ecx,
u32 *edx, u32 *esi)
{
+ unprotected_firmware_begin();
/*
* N.B. We do NOT need a cld after the BIOS call
* because we always save and restore the flags.
@@ -44,6 +47,7 @@ static inline void apm_bios_call_asm(u32 func, u32 ebx_in, u32 ecx_in,
"=S" (*esi)
: "a" (func), "b" (ebx_in), "c" (ecx_in)
: "memory", "cc");
+ unprotected_firmware_end();
}

static inline bool apm_bios_call_simple_asm(u32 func, u32 ebx_in,
@@ -52,6 +56,7 @@ static inline bool apm_bios_call_simple_asm(u32 func, u32 ebx_in,
int cx, dx, si;
bool error;

+ unprotected_firmware_begin();
/*
* N.B. We do NOT need a cld after the BIOS call
* because we always save and restore the flags.
@@ -68,6 +73,7 @@ static inline bool apm_bios_call_simple_asm(u32 func, u32 ebx_in,
"=S" (si)
: "a" (func), "b" (ebx_in), "c" (ecx_in)
: "memory", "cc");
+ unprotected_firmware_end();
return error;
}

diff --git a/arch/x86/include/asm/efi.h b/arch/x86/include/asm/efi.h
index 85f6ccb..439bd55 100644
--- a/arch/x86/include/asm/efi.h
+++ b/arch/x86/include/asm/efi.h
@@ -6,6 +6,7 @@
#include <asm/pgtable.h>
#include <asm/processor-flags.h>
#include <asm/tlb.h>
+#include <asm/spec_ctrl.h>

/*
* We map the EFI regions needed for runtime services non-contiguously,
@@ -36,8 +37,18 @@

extern asmlinkage unsigned long efi_call_phys(void *, ...);

-#define arch_efi_call_virt_setup() kernel_fpu_begin()
-#define arch_efi_call_virt_teardown() kernel_fpu_end()
+#define arch_efi_call_virt_setup() \
+({ \
+ kernel_fpu_begin(); \
+ unprotected_firmware_begin(); \
+})
+
+#define arch_efi_call_virt_teardown() \
+({ \
+ unprotected_firmware_end(); \
+ kernel_fpu_end(); \
+})
+

/*
* Wrap all the virtual calls in a way that forces the parameters on the stack.
@@ -73,6 +84,7 @@ struct efi_scratch {
efi_sync_low_kernel_mappings(); \
preempt_disable(); \
__kernel_fpu_begin(); \
+ unprotected_firmware_begin(); \
\
if (efi_scratch.use_pgd) { \
efi_scratch.prev_cr3 = __read_cr3(); \
@@ -91,6 +103,7 @@ struct efi_scratch {
__flush_tlb_all(); \
} \
\
+ unprotected_firmware_end(); \
__kernel_fpu_end(); \
preempt_enable(); \
})
diff --git a/arch/x86/include/asm/spec_ctrl.h b/arch/x86/include/asm/spec_ctrl.h
index be08ae7..ce21d24 100644
--- a/arch/x86/include/asm/spec_ctrl.h
+++ b/arch/x86/include/asm/spec_ctrl.h
@@ -11,6 +11,9 @@ void scan_spec_ctrl_feature(struct cpuinfo_x86 *c);
void rescan_spec_ctrl_feature(struct cpuinfo_x86 *c);
bool ibrs_inuse(void);

+void unprotected_firmware_begin(void);
+void unprotected_firmware_end(void);
+
extern unsigned int dynamic_ibrs;

static inline void __disable_indirect_speculation(void)
diff --git a/arch/x86/kernel/cpu/spec_ctrl.c b/arch/x86/kernel/cpu/spec_ctrl.c
index 076c470..59bf561 100644
--- a/arch/x86/kernel/cpu/spec_ctrl.c
+++ b/arch/x86/kernel/cpu/spec_ctrl.c
@@ -10,6 +10,10 @@
unsigned int dynamic_ibrs __read_mostly;
EXPORT_SYMBOL_GPL(dynamic_ibrs);

+#if defined(RETPOLINE)
+static unsigned int firmware_ibrs __read_mostly;
+#endif
+
enum {
IBRS_DISABLED,
/* in host kernel, disabled in guest and userland */
@@ -31,6 +35,8 @@ static inline void set_ibrs_feature(void)
dynamic_ibrs = 1;
ibrs_enabled = IBRS_ENABLED;
}
+#else
+ firmware_ibrs = 1;
#endif
}

@@ -162,3 +168,42 @@ static int __init debugfs_spec_ctrl(void)
return 0;
}
late_initcall(debugfs_spec_ctrl);
+
+#if defined(RETPOLINE)
+/*
+ * RETPOLINE does not protect against indirect speculation
+ * in firmware code. Enable IBRS to protect firmware execution.
+ */
+void unprotected_firmware_begin(void)
+{
+ if (firmware_ibrs) {
+ __disable_indirect_speculation();
+ } else {
+ /*
+ * rmb prevent unwanted speculation when we
+ * are setting IBRS
+ */
+ rmb();
+ }
+}
+EXPORT_SYMBOL_GPL(unprotected_firmware_begin);
+
+void unprotected_firmware_end(void)
+{
+ if (firmware_ibrs) {
+ __enable_indirect_speculation();
+ }
+}
+EXPORT_SYMBOL_GPL(unprotected_firmware_end);
+
+#else
+void unprotected_firmware_begin(void)
+{
+}
+EXPORT_SYMBOL_GPL(unprotected_firmware_begin);
+
+void unprotected_firmware_end(void)
+{
+}
+EXPORT_SYMBOL_GPL(unprotected_firmware_end);
+#endif
--
2.9.4

2018-01-06 02:33:07

by Tim Chen

Subject: [PATCH v2 2/8] x86/enter: MACROS to set/clear IBRS

Create macros to control IBRS. Use these macros to enable IBRS on kernel entry
paths and disable IBRS on kernel exit paths.

The registers rax, rcx and rdx are clobbered when controlling IBRS,
so they must be saved and restored on paths where they can't be clobbered.

Signed-off-by: Tim Chen <[email protected]>
---
arch/x86/entry/calling.h | 74 ++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 74 insertions(+)

diff --git a/arch/x86/entry/calling.h b/arch/x86/entry/calling.h
index 45a63e0..09c870d 100644
--- a/arch/x86/entry/calling.h
+++ b/arch/x86/entry/calling.h
@@ -6,6 +6,8 @@
#include <asm/percpu.h>
#include <asm/asm-offsets.h>
#include <asm/processor-flags.h>
+#include <asm/msr-index.h>
+#include <asm/cpufeatures.h>

/*

@@ -347,3 +349,75 @@ For 32-bit we have the following conventions - kernel is built with
.Lafter_call_\@:
#endif
.endm
+
+/*
+ * IBRS related macros
+ */
+
+.macro PUSH_MSR_REGS
+ pushq %rax
+ pushq %rcx
+ pushq %rdx
+.endm
+
+.macro POP_MSR_REGS
+ popq %rdx
+ popq %rcx
+ popq %rax
+.endm
+
+.macro WRMSR_ASM msr_nr:req eax_val:req
+ movl \msr_nr, %ecx
+ movl $0, %edx
+ movl \eax_val, %eax
+ wrmsr
+.endm
+
+.macro ENABLE_IBRS
+ ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_SPEC_CTRL
+ PUSH_MSR_REGS
+ WRMSR_ASM $MSR_IA32_SPEC_CTRL, $SPEC_CTRL_FEATURE_ENABLE_IBRS
+ POP_MSR_REGS
+.Lskip_\@:
+.endm
+
+.macro DISABLE_IBRS
+ ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_SPEC_CTRL
+ PUSH_MSR_REGS
+ WRMSR_ASM $MSR_IA32_SPEC_CTRL, $SPEC_CTRL_FEATURE_DISABLE_IBRS
+ POP_MSR_REGS
+.Lskip_\@:
+.endm
+
+.macro ENABLE_IBRS_CLOBBER
+ ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_SPEC_CTRL
+ WRMSR_ASM $MSR_IA32_SPEC_CTRL, $SPEC_CTRL_FEATURE_ENABLE_IBRS
+.Lskip_\@:
+.endm
+
+.macro DISABLE_IBRS_CLOBBER
+ ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_SPEC_CTRL
+ WRMSR_ASM $MSR_IA32_SPEC_CTRL, $SPEC_CTRL_FEATURE_DISABLE_IBRS
+.Lskip_\@:
+.endm
+
+.macro ENABLE_IBRS_SAVE_AND_CLOBBER save_reg:req
+ ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_SPEC_CTRL
+ movl $MSR_IA32_SPEC_CTRL, %ecx
+ rdmsr
+ movl %eax, \save_reg
+
+ movl $0, %edx
+ movl $SPEC_CTRL_FEATURE_ENABLE_IBRS, %eax
+ wrmsr
+.Lskip_\@:
+.endm
+
+.macro RESTORE_IBRS_CLOBBER save_reg:req
+ ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_SPEC_CTRL
+ /* Set IBRS to the value saved in the save_reg */
+ movl $MSR_IA32_SPEC_CTRL, %ecx
+ movl $0, %edx
+ movl \save_reg, %eax
+ wrmsr
+.Lskip_\@:
+.endm
--
2.9.4

2018-01-06 02:33:35

by Tim Chen

Subject: [PATCH v2 7/8] x86: Do not use dynamic IBRS if retpoline is enabled

Don't use dynamic IBRS, which toggles IBRS on kernel entry/exit,
if retpoline is used.

Signed-off-by: Tim Chen <[email protected]>
---
arch/x86/kernel/cpu/spec_ctrl.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/arch/x86/kernel/cpu/spec_ctrl.c b/arch/x86/kernel/cpu/spec_ctrl.c
index 28107a2..076c470 100644
--- a/arch/x86/kernel/cpu/spec_ctrl.c
+++ b/arch/x86/kernel/cpu/spec_ctrl.c
@@ -26,10 +26,12 @@ DEFINE_MUTEX(spec_ctrl_mutex);

static inline void set_ibrs_feature(void)
{
+#if !defined(RETPOLINE)
if (!ibrs_admin_disabled) {
dynamic_ibrs = 1;
ibrs_enabled = IBRS_ENABLED;
}
+#endif
}

void scan_spec_ctrl_feature(struct cpuinfo_x86 *c)
--
2.9.4

2018-01-06 02:33:50

by Tim Chen

Subject: [PATCH v2 6/8] x86/microcode: Recheck IBRS features on microcode reload

On a new microcode write, check whether IBRS
is present by rescanning the IBRS feature.

Signed-off-by: Tim Chen <[email protected]>
---
arch/x86/include/asm/spec_ctrl.h | 1 +
arch/x86/kernel/cpu/microcode/core.c | 4 ++++
arch/x86/kernel/cpu/spec_ctrl.c | 18 ++++++++++--------
3 files changed, 15 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/spec_ctrl.h b/arch/x86/include/asm/spec_ctrl.h
index 62c5dc8..be08ae7 100644
--- a/arch/x86/include/asm/spec_ctrl.h
+++ b/arch/x86/include/asm/spec_ctrl.h
@@ -8,6 +8,7 @@
#include <asm/microcode.h>

void scan_spec_ctrl_feature(struct cpuinfo_x86 *c);
+void rescan_spec_ctrl_feature(struct cpuinfo_x86 *c);
bool ibrs_inuse(void);

extern unsigned int dynamic_ibrs;
diff --git a/arch/x86/kernel/cpu/microcode/core.c b/arch/x86/kernel/cpu/microcode/core.c
index c4fa4a8..848c3e9 100644
--- a/arch/x86/kernel/cpu/microcode/core.c
+++ b/arch/x86/kernel/cpu/microcode/core.c
@@ -40,6 +40,7 @@
#include <asm/processor.h>
#include <asm/cmdline.h>
#include <asm/setup.h>
+#include <asm/spec_ctrl.h>

#define DRIVER_VERSION "2.2"

@@ -444,6 +445,9 @@ static ssize_t microcode_write(struct file *file, const char __user *buf,
if (ret > 0)
perf_check_microcode();

+ /* check spec_ctrl capabilities */
+ rescan_spec_ctrl_feature(&boot_cpu_data);
+
mutex_unlock(&microcode_mutex);
put_online_cpus();

diff --git a/arch/x86/kernel/cpu/spec_ctrl.c b/arch/x86/kernel/cpu/spec_ctrl.c
index 1641bec..28107a2 100644
--- a/arch/x86/kernel/cpu/spec_ctrl.c
+++ b/arch/x86/kernel/cpu/spec_ctrl.c
@@ -24,13 +24,18 @@ static bool ibrs_admin_disabled;
/* mutex to serialize IBRS control changes */
DEFINE_MUTEX(spec_ctrl_mutex);

+static inline void set_ibrs_feature(void)
+{
+ if (!ibrs_admin_disabled) {
+ dynamic_ibrs = 1;
+ ibrs_enabled = IBRS_ENABLED;
+ }
+}
+
void scan_spec_ctrl_feature(struct cpuinfo_x86 *c)
{
if ((!c->cpu_index) && (boot_cpu_has(X86_FEATURE_SPEC_CTRL))) {
- if (!ibrs_admin_disabled) {
- dynamic_ibrs = 1;
- ibrs_enabled = IBRS_ENABLED;
- }
+ set_ibrs_feature();
}
}
EXPORT_SYMBOL_GPL(scan_spec_ctrl_feature);
@@ -43,10 +48,7 @@ void rescan_spec_ctrl_feature(struct cpuinfo_x86 *c)
{
mutex_lock(&spec_ctrl_mutex);
if (boot_cpu_has(X86_FEATURE_SPEC_CTRL)) {
- if (!ibrs_admin_disabled) {
- dynamic_ibrs = 1;
- ibrs_enabled = IBRS_ENABLED;
- }
+ set_ibrs_feature();
}
mutex_unlock(&spec_ctrl_mutex);
}
--
2.9.4

2018-01-06 02:34:06

by Tim Chen

Subject: [PATCH v2 5/8] x86/idle: Disable IBRS entering idle and enable it on wakeup

Clear IBRS when entering idle via mwait and set it again on wakeup
into the kernel. While in mwait we are not running, but if we leave
IBRS on, it affects the performance of the sibling hardware thread.
So we disable IBRS before going idle and re-enable it when we wake up.

Signed-off-by: Tim Chen <[email protected]>
---
arch/x86/include/asm/mwait.h | 13 +++++++++++++
arch/x86/include/asm/spec_ctrl.h | 35 +++++++++++++++++++++++++++++++++++
arch/x86/kernel/process.c | 9 +++++++--
3 files changed, 55 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/mwait.h b/arch/x86/include/asm/mwait.h
index 39a2fb2..f28f2ea 100644
--- a/arch/x86/include/asm/mwait.h
+++ b/arch/x86/include/asm/mwait.h
@@ -6,6 +6,7 @@
#include <linux/sched/idle.h>

#include <asm/cpufeature.h>
+#include <asm/spec_ctrl.h>

#define MWAIT_SUBSTATE_MASK 0xf
#define MWAIT_CSTATE_MASK 0xf
@@ -106,9 +107,21 @@ static inline void mwait_idle_with_hints(unsigned long eax, unsigned long ecx)
mb();
}

+ /*
+ * CPUs run faster with speculation protection
+ * disabled. All CPU threads in a core must
+ * disable speculation protection for it to be
+ * disabled. Disable it while we are idle so the
+ * other hyperthread can run fast.
+ *
+ * Interrupts have been disabled at this point.
+ */
+
+ unprotected_speculation_begin();
__monitor((void *)&current_thread_info()->flags, 0, 0);
if (!need_resched())
__mwait(eax, ecx);
+ unprotected_speculation_end();
}
current_clr_polling();
}
diff --git a/arch/x86/include/asm/spec_ctrl.h b/arch/x86/include/asm/spec_ctrl.h
index 4fda38b..62c5dc8 100644
--- a/arch/x86/include/asm/spec_ctrl.h
+++ b/arch/x86/include/asm/spec_ctrl.h
@@ -12,4 +12,39 @@ bool ibrs_inuse(void);

extern unsigned int dynamic_ibrs;

+static inline void __disable_indirect_speculation(void)
+{
+ native_wrmsrl(MSR_IA32_SPEC_CTRL, SPEC_CTRL_FEATURE_ENABLE_IBRS);
+}
+
+static inline void __enable_indirect_speculation(void)
+{
+ native_wrmsrl(MSR_IA32_SPEC_CTRL, SPEC_CTRL_FEATURE_DISABLE_IBRS);
+}
+
+/*
+ * Interrupts must be disabled to begin unprotected speculation.
+ * Otherwise interrupts could come in and start running in unprotected mode.
+ */
+
+static inline void unprotected_speculation_begin(void)
+{
+ lockdep_assert_irqs_disabled();
+ if (dynamic_ibrs)
+ __enable_indirect_speculation();
+}
+
+static inline void unprotected_speculation_end(void)
+{
+ if (dynamic_ibrs) {
+ __disable_indirect_speculation();
+ } else {
+ /*
+ * rmb prevent unwanted speculation when we
+ * are setting IBRS
+ */
+ rmb();
+ }
+}
+
#endif /* _ASM_X86_SPEC_CTRL_H */
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index aed9d94..4b1ac7c 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -39,6 +39,7 @@
#include <asm/switch_to.h>
#include <asm/desc.h>
#include <asm/prctl.h>
+#include <asm/spec_ctrl.h>

/*
* per-CPU TSS segments. Threads are completely 'soft' on Linux,
@@ -461,11 +462,15 @@ static __cpuidle void mwait_idle(void)
mb(); /* quirk */
}

+ unprotected_speculation_begin();
__monitor((void *)&current_thread_info()->flags, 0, 0);
- if (!need_resched())
+ if (!need_resched()) {
__sti_mwait(0, 0);
- else
+ unprotected_speculation_end();
+ } else {
+ unprotected_speculation_end();
local_irq_enable();
+ }
trace_cpu_idle_rcuidle(PWR_EVENT_EXIT, smp_processor_id());
} else {
local_irq_enable();
--
2.9.4

2018-01-06 02:34:19

by Tim Chen

Subject: [PATCH v2 4/8] x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature

From: Tim Chen <[email protected]>
From: Andrea Arcangeli <[email protected]>

There are 2 ways to control IBRS

1. At boot time
noibrs kernel boot parameter will disable IBRS usage

Otherwise, if the above parameter is not specified, the system
will enable IBRS usage if the CPU supports it.

2. At run time
echo 0 > /sys/kernel/debug/x86/ibrs_enabled will turn off IBRS
echo 1 > /sys/kernel/debug/x86/ibrs_enabled will turn on IBRS in kernel
echo 2 > /sys/kernel/debug/x86/ibrs_enabled will turn on IBRS in both userspace and kernel

The implementation was updated with input and suggestions from Andrea Arcangeli.

Signed-off-by: Tim Chen <[email protected]>
---
arch/x86/entry/calling.h | 42 ++++++++--
arch/x86/include/asm/spec_ctrl.h | 15 ++++
arch/x86/kernel/cpu/Makefile | 1 +
arch/x86/kernel/cpu/scattered.c | 2 +
arch/x86/kernel/cpu/spec_ctrl.c | 160 +++++++++++++++++++++++++++++++++++++++
5 files changed, 214 insertions(+), 6 deletions(-)
create mode 100644 arch/x86/include/asm/spec_ctrl.h
create mode 100644 arch/x86/kernel/cpu/spec_ctrl.c

diff --git a/arch/x86/entry/calling.h b/arch/x86/entry/calling.h
index 09c870d..6b65d47 100644
--- a/arch/x86/entry/calling.h
+++ b/arch/x86/entry/calling.h
@@ -373,35 +373,55 @@ For 32-bit we have the following conventions - kernel is built with
.endm

.macro ENABLE_IBRS
- ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_SPEC_CTRL
+ testl $1, dynamic_ibrs
+ jz .Lskip_\@
+
PUSH_MSR_REGS
WRMSR_ASM $MSR_IA32_SPEC_CTRL, $SPEC_CTRL_FEATURE_ENABLE_IBRS
POP_MSR_REGS
+ jmp .Ldone_\@
+
.Lskip_\@:
+ lfence
+.Ldone_\@:
.endm

.macro DISABLE_IBRS
- ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_SPEC_CTRL
+ testl $1, dynamic_ibrs
+ jz .Lskip_\@
+
PUSH_MSR_REGS
WRMSR_ASM $MSR_IA32_SPEC_CTRL, $SPEC_CTRL_FEATURE_DISABLE_IBRS
POP_MSR_REGS
+
.Lskip_\@:
.endm

.macro ENABLE_IBRS_CLOBBER
- ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_SPEC_CTRL
+ testl $1, dynamic_ibrs
+ jz .Lskip_\@
+
WRMSR_ASM $MSR_IA32_SPEC_CTRL, $SPEC_CTRL_FEATURE_ENABLE_IBRS
+ jmp .Ldone_\@
+
.Lskip_\@:
+ lfence
+.Ldone_\@:
.endm

.macro DISABLE_IBRS_CLOBBER
- ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_SPEC_CTRL
+ testl $1, dynamic_ibrs
+ jz .Lskip_\@
+
WRMSR_ASM $MSR_IA32_SPEC_CTRL, $SPEC_CTRL_FEATURE_DISABLE_IBRS
+
.Lskip_\@:
.endm

.macro ENABLE_IBRS_SAVE_AND_CLOBBER save_reg:req
- ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_SPEC_CTRL
+ testl $1, dynamic_ibrs
+ jz .Lskip_\@
+
movl $MSR_IA32_SPEC_CTRL, %ecx
rdmsr
movl %eax, \save_reg
@@ -409,15 +429,25 @@ For 32-bit we have the following conventions - kernel is built with
movl $0, %edx
movl $SPEC_CTRL_FEATURE_ENABLE_IBRS, %eax
wrmsr
+ jmp .Ldone_\@
+
.Lskip_\@:
+ lfence
+.Ldone_\@:
.endm

.macro RESTORE_IBRS_CLOBBER save_reg:req
- ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_SPEC_CTRL
+ testl $1, dynamic_ibrs
+ jz .Lskip_\@
+
/* Set IBRS to the value saved in the save_reg */
movl $MSR_IA32_SPEC_CTRL, %ecx
movl $0, %edx
movl \save_reg, %eax
wrmsr
+ jmp .Ldone_\@
+
.Lskip_\@:
+ lfence
+.Ldone_\@:
.endm
diff --git a/arch/x86/include/asm/spec_ctrl.h b/arch/x86/include/asm/spec_ctrl.h
new file mode 100644
index 0000000..4fda38b
--- /dev/null
+++ b/arch/x86/include/asm/spec_ctrl.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _ASM_X86_SPEC_CTRL_H
+#define _ASM_X86_SPEC_CTRL_H
+
+#include <asm/msr-index.h>
+#include <asm/cpufeatures.h>
+#include <asm/microcode.h>
+
+void scan_spec_ctrl_feature(struct cpuinfo_x86 *c);
+bool ibrs_inuse(void);
+
+extern unsigned int dynamic_ibrs;
+
+#endif /* _ASM_X86_SPEC_CTRL_H */
diff --git a/arch/x86/kernel/cpu/Makefile b/arch/x86/kernel/cpu/Makefile
index 570e8bb..3ffbd24 100644
--- a/arch/x86/kernel/cpu/Makefile
+++ b/arch/x86/kernel/cpu/Makefile
@@ -24,6 +24,7 @@ obj-y += match.o
obj-y += bugs.o
obj-y += aperfmperf.o
obj-y += cpuid-deps.o
+obj-y += spec_ctrl.o

obj-$(CONFIG_PROC_FS) += proc.o
obj-$(CONFIG_X86_FEATURE_NAMES) += capflags.o powerflags.o
diff --git a/arch/x86/kernel/cpu/scattered.c b/arch/x86/kernel/cpu/scattered.c
index bc50c40..5756d14 100644
--- a/arch/x86/kernel/cpu/scattered.c
+++ b/arch/x86/kernel/cpu/scattered.c
@@ -8,6 +8,7 @@
#include <asm/processor.h>

#include <asm/apic.h>
+#include <asm/spec_ctrl.h>

struct cpuid_bit {
u16 feature;
@@ -57,6 +58,7 @@ void init_scattered_cpuid_features(struct cpuinfo_x86 *c)
if (regs[cb->reg] & (1 << cb->bit))
set_cpu_cap(c, cb->feature);
}
+ scan_spec_ctrl_feature(c);
}

u32 get_scattered_cpuid_leaf(unsigned int level, unsigned int sub_leaf,
diff --git a/arch/x86/kernel/cpu/spec_ctrl.c b/arch/x86/kernel/cpu/spec_ctrl.c
new file mode 100644
index 0000000..1641bec
--- /dev/null
+++ b/arch/x86/kernel/cpu/spec_ctrl.c
@@ -0,0 +1,160 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#include <linux/mutex.h>
+#include <linux/debugfs.h>
+#include <linux/uaccess.h>
+
+#include <asm/spec_ctrl.h>
+#include <asm/cpufeature.h>
+
+unsigned int dynamic_ibrs __read_mostly;
+EXPORT_SYMBOL_GPL(dynamic_ibrs);
+
+enum {
+ IBRS_DISABLED,
+ /* in host kernel, disabled in guest and userland */
+ IBRS_ENABLED,
+ /* in host kernel and host userland, disabled in guest */
+ IBRS_ENABLED_USER,
+ IBRS_MAX = IBRS_ENABLED_USER,
+};
+static unsigned int ibrs_enabled;
+static bool ibrs_admin_disabled;
+
+/* mutex to serialize IBRS control changes */
+DEFINE_MUTEX(spec_ctrl_mutex);
+
+void scan_spec_ctrl_feature(struct cpuinfo_x86 *c)
+{
+ if ((!c->cpu_index) && (boot_cpu_has(X86_FEATURE_SPEC_CTRL))) {
+ if (!ibrs_admin_disabled) {
+ dynamic_ibrs = 1;
+ ibrs_enabled = IBRS_ENABLED;
+ }
+ }
+}
+EXPORT_SYMBOL_GPL(scan_spec_ctrl_feature);
+
+/*
+ * Used after boot phase to rescan spec_ctrl feature,
+ * serialize scan with spec_ctrl_mutex.
+ */
+void rescan_spec_ctrl_feature(struct cpuinfo_x86 *c)
+{
+ mutex_lock(&spec_ctrl_mutex);
+ if (boot_cpu_has(X86_FEATURE_SPEC_CTRL)) {
+ if (!ibrs_admin_disabled) {
+ dynamic_ibrs = 1;
+ ibrs_enabled = IBRS_ENABLED;
+ }
+ }
+ mutex_unlock(&spec_ctrl_mutex);
+}
+EXPORT_SYMBOL_GPL(rescan_spec_ctrl_feature);
+
+bool ibrs_inuse(void)
+{
+ return ibrs_enabled == IBRS_ENABLED;
+}
+EXPORT_SYMBOL_GPL(ibrs_inuse);
+
+static int __init noibrs(char *str)
+{
+ ibrs_admin_disabled = true;
+ ibrs_enabled = IBRS_DISABLED;
+
+ return 0;
+}
+early_param("noibrs", noibrs);
+
+static ssize_t __enabled_read(struct file *file, char __user *user_buf,
+ size_t count, loff_t *ppos, unsigned int *field)
+{
+ char buf[32];
+ unsigned int len;
+
+ len = sprintf(buf, "%d\n", READ_ONCE(*field));
+ return simple_read_from_buffer(user_buf, count, ppos, buf, len);
+}
+
+static ssize_t ibrs_enabled_read(struct file *file, char __user *user_buf,
+ size_t count, loff_t *ppos)
+{
+ return __enabled_read(file, user_buf, count, ppos, &ibrs_enabled);
+}
+
+static void spec_ctrl_flush_all_cpus(u32 msr_nr, u64 val)
+{
+ int cpu;
+
+ get_online_cpus();
+ for_each_online_cpu(cpu)
+ wrmsrl_on_cpu(cpu, msr_nr, val);
+ put_online_cpus();
+}
+
+static ssize_t ibrs_enabled_write(struct file *file,
+ const char __user *user_buf,
+ size_t count, loff_t *ppos)
+{
+ char buf[32];
+ ssize_t len;
+ unsigned int enable;
+
+ len = min(count, sizeof(buf) - 1);
+ if (copy_from_user(buf, user_buf, len))
+ return -EFAULT;
+
+ buf[len] = '\0';
+ if (kstrtouint(buf, 0, &enable))
+ return -EINVAL;
+
+ if (enable > IBRS_MAX)
+ return -EINVAL;
+
+ if (!boot_cpu_has(X86_FEATURE_SPEC_CTRL)) {
+ ibrs_enabled = IBRS_DISABLED;
+ return -EINVAL;
+ }
+
+ mutex_lock(&spec_ctrl_mutex);
+
+ if (enable == IBRS_DISABLED) {
+ /* disable IBRS usage */
+ ibrs_admin_disabled = true;
+ dynamic_ibrs = 0;
+ spec_ctrl_flush_all_cpus(MSR_IA32_SPEC_CTRL,
+ SPEC_CTRL_FEATURE_DISABLE_IBRS);
+
+ } else if (enable == IBRS_ENABLED) {
+ /* enable IBRS usage in kernel */
+ ibrs_admin_disabled = false;
+ dynamic_ibrs = 1;
+
+ } else if (enable == IBRS_ENABLED_USER) {
+ /* enable IBRS all the time in both userspace and kernel */
+ ibrs_admin_disabled = false;
+ dynamic_ibrs = 0;
+ spec_ctrl_flush_all_cpus(MSR_IA32_SPEC_CTRL,
+ SPEC_CTRL_FEATURE_ENABLE_IBRS);
+ }
+
+ ibrs_enabled = enable;
+
+ mutex_unlock(&spec_ctrl_mutex);
+ return count;
+}
+
+static const struct file_operations fops_ibrs_enabled = {
+ .read = ibrs_enabled_read,
+ .write = ibrs_enabled_write,
+ .llseek = default_llseek,
+};
+
+static int __init debugfs_spec_ctrl(void)
+{
+ debugfs_create_file("ibrs_enabled", S_IRUSR | S_IWUSR,
+ arch_debugfs_dir, NULL, &fops_ibrs_enabled);
+ return 0;
+}
+late_initcall(debugfs_spec_ctrl);
--
2.9.4

2018-01-06 02:34:37

by Tim Chen

Subject: [PATCH v2 3/8] x86/enter: Use IBRS on syscall and interrupts

From: Andrea Arcangeli <[email protected]>

Set IBRS upon kernel entrance via syscall and interrupts. Clear it
upon exit. IBRS protects against unsafe indirect branching predictions
in the kernel.

The NMI interrupt save/restore of IBRS state was based on Andrea
Arcangeli's implementation.
Here's an explanation by Dave Hansen on why we save IBRS state for NMI.

The normal interrupt code uses the 'error_entry' path which uses the
Code Segment (CS) of the instruction that was interrupted to tell
whether it interrupted the kernel or userspace and thus has to switch
IBRS, or leave it alone.

The NMI code is different. It uses 'paranoid_entry' because it can
interrupt the kernel while it is running with a userspace IBRS (and %GS
and CR3) value, but has a kernel CS. If we used the same approach as
the normal interrupt code, we might do the following:

SYSENTER_entry
<-------------- NMI HERE
IBRS=1
do_something()
IBRS=0
SYSRET

The NMI code might notice that we are running in the kernel and decide
that it is OK to skip the IBRS=1. This would leave it running
unprotected with IBRS=0, which is bad.

However, if we unconditionally set IBRS=1 in the NMI, we might get the
following case:

SYSENTER_entry
IBRS=1
do_something()
IBRS=0
<-------------- NMI HERE (set IBRS=1)
SYSRET

and we would return to userspace with IBRS=1. Userspace would run
slowly until we entered and exited the kernel again.

Instead of those two approaches, we chose a third one where we simply
save the IBRS value in a scratch register (%r13) and then restore that
value, verbatim.

Signed-off-by: Andrea Arcangeli <[email protected]>
Signed-off-by: Tim Chen <[email protected]>
---
arch/x86/entry/entry_64.S | 23 +++++++++++++++++++++++
arch/x86/entry/entry_64_compat.S | 8 ++++++++
2 files changed, 31 insertions(+)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index f048e38..a4031c9 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -174,6 +174,8 @@ ENTRY(entry_SYSCALL_64_trampoline)

/* Load the top of the task stack into RSP */
movq CPU_ENTRY_AREA_tss + TSS_sp1 + CPU_ENTRY_AREA, %rsp
+ /* Stack is usable, use the non-clobbering IBRS enable: */
+ ENABLE_IBRS

/* Start building the simulated IRET frame. */
pushq $__USER_DS /* pt_regs->ss */
@@ -217,6 +219,8 @@ ENTRY(entry_SYSCALL_64)
*/
movq %rsp, PER_CPU_VAR(rsp_scratch)
movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp
+ /* Stack is usable, use the non-clobbering IBRS enable: */
+ ENABLE_IBRS

/* Construct struct pt_regs on stack */
pushq $__USER_DS /* pt_regs->ss */
@@ -411,6 +415,7 @@ syscall_return_via_sysret:
* We are on the trampoline stack. All regs except RDI are live.
* We can do future final exit work right here.
*/
+ DISABLE_IBRS
SWITCH_TO_USER_CR3_STACK scratch_reg=%rdi

popq %rdi
@@ -749,6 +754,7 @@ GLOBAL(swapgs_restore_regs_and_return_to_usermode)
* We can do future final exit work right here.
*/

+ DISABLE_IBRS
SWITCH_TO_USER_CR3_STACK scratch_reg=%rdi

/* Restore RDI. */
@@ -836,6 +842,14 @@ native_irq_return_ldt:
SWAPGS /* to kernel GS */
SWITCH_TO_KERNEL_CR3 scratch_reg=%rdi /* to kernel CR3 */

+ /*
+ * Normally we enable IBRS when we switch to kernel's CR3.
+ * But we are going to switch back to user CR3 immediately
+ * in this routine after fixing ESPFIX stack. There is
+ * no vulnerable code branching for IBRS to protect.
+ * We don't toggle IBRS to avoid the cost of two MSR writes.
+ */
+
movq PER_CPU_VAR(espfix_waddr), %rdi
movq %rax, (0*8)(%rdi) /* user RAX */
movq (1*8)(%rsp), %rax /* user RIP */
@@ -969,6 +983,8 @@ ENTRY(switch_to_thread_stack)
SWITCH_TO_KERNEL_CR3 scratch_reg=%rdi
movq %rsp, %rdi
movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp
+ /* Stack is usable, use the non-clobbering IBRS enable: */
+ ENABLE_IBRS
UNWIND_HINT sp_offset=16 sp_reg=ORC_REG_DI

pushq 7*8(%rdi) /* regs->ss */
@@ -1271,6 +1287,7 @@ ENTRY(paranoid_entry)

1:
SAVE_AND_SWITCH_TO_KERNEL_CR3 scratch_reg=%rax save_reg=%r14
+ ENABLE_IBRS_SAVE_AND_CLOBBER save_reg=%r13d

ret
END(paranoid_entry)
@@ -1294,6 +1311,7 @@ ENTRY(paranoid_exit)
testl %ebx, %ebx /* swapgs needed? */
jnz .Lparanoid_exit_no_swapgs
TRACE_IRQS_IRETQ
+ RESTORE_IBRS_CLOBBER save_reg=%r13d
RESTORE_CR3 scratch_reg=%rbx save_reg=%r14
SWAPGS_UNSAFE_STACK
jmp .Lparanoid_exit_restore
@@ -1324,6 +1342,7 @@ ENTRY(error_entry)
SWAPGS
/* We have user CR3. Change to kernel CR3. */
SWITCH_TO_KERNEL_CR3 scratch_reg=%rax
+ ENABLE_IBRS_CLOBBER

.Lerror_entry_from_usermode_after_swapgs:
/* Put us onto the real thread stack. */
@@ -1371,6 +1390,7 @@ ENTRY(error_entry)
*/
SWAPGS
SWITCH_TO_KERNEL_CR3 scratch_reg=%rax
+ ENABLE_IBRS_CLOBBER
jmp .Lerror_entry_done

.Lbstep_iret:
@@ -1385,6 +1405,7 @@ ENTRY(error_entry)
*/
SWAPGS
SWITCH_TO_KERNEL_CR3 scratch_reg=%rax
+ ENABLE_IBRS

/*
* Pretend that the exception came from user mode: set up pt_regs
@@ -1486,6 +1507,7 @@ ENTRY(nmi)
SWITCH_TO_KERNEL_CR3 scratch_reg=%rdx
movq %rsp, %rdx
movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp
+ ENABLE_IBRS
UNWIND_HINT_IRET_REGS base=%rdx offset=8
pushq 5*8(%rdx) /* pt_regs->ss */
pushq 4*8(%rdx) /* pt_regs->rsp */
@@ -1736,6 +1758,7 @@ end_repeat_nmi:
movq $-1, %rsi
call do_nmi

+ RESTORE_IBRS_CLOBBER save_reg=%r13d
RESTORE_CR3 scratch_reg=%r15 save_reg=%r14

testl %ebx, %ebx /* swapgs needed? */
diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
index 40f1700..2074555 100644
--- a/arch/x86/entry/entry_64_compat.S
+++ b/arch/x86/entry/entry_64_compat.S
@@ -54,6 +54,7 @@ ENTRY(entry_SYSENTER_compat)
SWITCH_TO_KERNEL_CR3 scratch_reg=%rsp

movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp
+ ENABLE_IBRS

/*
* User tracing code (ptrace or signal handlers) might assume that
@@ -224,6 +225,7 @@ GLOBAL(entry_SYSCALL_compat_after_hwframe)
* preserved during the C calls inside TRACE_IRQS_OFF anyway.
*/
SWITCH_TO_KERNEL_CR3 scratch_reg=%rdi
+ ENABLE_IBRS_CLOBBER /* clobbers %rax, %rcx, %rdx */

/*
* User mode is traced as though IRQs are on, and SYSENTER
@@ -240,6 +242,12 @@ GLOBAL(entry_SYSCALL_compat_after_hwframe)
/* Opportunistic SYSRET */
sysret32_from_system_call:
TRACE_IRQS_ON /* User mode traces as IRQs on. */
+ /*
+ * Clobber of %rax, %rcx, %rdx is OK before register restoring.
+ * This is safe to do here because we have no indirect branches
+ * between here and the return to userspace (sysretl).
+ */
+ DISABLE_IBRS_CLOBBER
movq RBX(%rsp), %rbx /* pt_regs->rbx */
movq RBP(%rsp), %rbp /* pt_regs->rbp */
movq EFLAGS(%rsp), %r11 /* pt_regs->flags (in r11) */
--
2.9.4

2018-01-06 03:12:15

by Dave Hansen

Subject: Re: [PATCH v2 4/8] x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature

On 01/05/2018 06:12 PM, Tim Chen wrote:
> .macro ENABLE_IBRS
> - ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_SPEC_CTRL
> + testl $1, dynamic_ibrs
> + jz .Lskip_\@

There was an earlier suggestion to use STATIC_JUMP_IF_... here. That's
a good suggestion, but we encountered some issues with it either
crashing the kernel at boot or not properly turning on/off.

We would love to do that minor tweak really soon, but we figured everyone
would rather see this version than wait for us to debug it.

2018-01-06 06:43:18

by Tim Chen

Subject: Re: [PATCH v2 0/8] IBRS patch series



On 01/05/2018 06:12 PM, Tim Chen wrote:
> Thanks to everyone for the feedback on the initial posting.
> This is an updated patchset and I hope I've captured all
> the review comments. I've done a lot of code clean up
> per everyone's comments. Please let me know if I've missed
> something.
>

Should also have mentioned that the patch series applies on
top of 4.15-rc6.

Tim

2018-01-06 08:54:07

by Greg Kroah-Hartman

Subject: Re: [PATCH v2 4/8] x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature

On Fri, Jan 05, 2018 at 06:12:19PM -0800, Tim Chen wrote:
> From: Tim Chen <[email protected]>
> From: Andrea Arcangeli <[email protected]>
>
> There are 2 ways to control IBRS
>
> 1. At boot time
> noibrs kernel boot parameter will disable IBRS usage
>
> Otherwise if the above parameters are not specified, the system
> will enable ibrs and ibpb usage if the cpu supports it.
>
> 2. At run time
> echo 0 > /sys/kernel/debug/x86/ibrs_enabled will turn off IBRS
> echo 1 > /sys/kernel/debug/x86/ibrs_enabled will turn on IBRS in kernel
> echo 2 > /sys/kernel/debug/x86/ibrs_enabled will turn on IBRS in both userspace and kernel
>
> The implementation was updated with input and suggestions from Andrea Arcangeli.
>
> Signed-off-by: Tim Chen <[email protected]>
> ---

<snip>

> index 0000000..4fda38b
> --- /dev/null
> +++ b/arch/x86/include/asm/spec_ctrl.h
> @@ -0,0 +1,15 @@
> +/* SPDX-License-Identifier: GPL-2.0 */

Thank you for these markings.

> --- /dev/null
> +++ b/arch/x86/kernel/cpu/spec_ctrl.c
> @@ -0,0 +1,160 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#include <linux/mutex.h>
> +#include <linux/debugfs.h>
> +#include <linux/uaccess.h>
> +
> +#include <asm/spec_ctrl.h>
> +#include <asm/cpufeature.h>
> +
> +unsigned int dynamic_ibrs __read_mostly;
> +EXPORT_SYMBOL_GPL(dynamic_ibrs);

What kernel module needs this symbol? A symbol can be "global" and not
be exported, those are two different things.

And again, horrible name for a global symbol, if this really is being
exported to modules (i.e. the world.)

> +
> +enum {
> + IBRS_DISABLED,

As you are making a user/kernel API here, shouldn't you enforce the
values this enum has "just to be sure"?

> + /* in host kernel, disabled in guest and userland */
> + IBRS_ENABLED,
> + /* in host kernel and host userland, disabled in guest */
> + IBRS_ENABLED_USER,
> + IBRS_MAX = IBRS_ENABLED_USER,
> +};

Also, this should be documented somewhere, Documentation/ABI/ perhaps?

> +static unsigned int ibrs_enabled;
> +static bool ibrs_admin_disabled;
> +
> +/* mutex to serialize IBRS control changes */
> +DEFINE_MUTEX(spec_ctrl_mutex);

static?

> +
> +void scan_spec_ctrl_feature(struct cpuinfo_x86 *c)
> +{
> + if ((!c->cpu_index) && (boot_cpu_has(X86_FEATURE_SPEC_CTRL))) {
> + if (!ibrs_admin_disabled) {
> + dynamic_ibrs = 1;
> + ibrs_enabled = IBRS_ENABLED;
> + }
> + }
> +}
> +EXPORT_SYMBOL_GPL(scan_spec_ctrl_feature);

Again, what module needs this?

Same for all the exports in this file, and again, if they are needed in
modules, you need to get a better naming scheme. I'm not trying to
bikeshed, it really matters, our global symbol namespace is a mess,
don't make it worse :)

thanks,

greg k-h

2018-01-06 08:55:36

by Greg Kroah-Hartman

Subject: Re: [PATCH v2 8/8] x86: Use IBRS for firmware update path

On Fri, Jan 05, 2018 at 06:12:23PM -0800, Tim Chen wrote:
> From: David Woodhouse <[email protected]>
>
> We are impervious to the indirect branch prediction attack with retpoline
> but firmware won't be, so we still need to set IBRS to protect
> firmware code execution when calling into firmware at runtime.
>
> Signed-off-by: David Woodhouse <[email protected]>
> Signed-off-by: Tim Chen <[email protected]>
> ---
> arch/x86/include/asm/apm.h | 6 ++++++
> arch/x86/include/asm/efi.h | 17 +++++++++++++--
> arch/x86/include/asm/spec_ctrl.h | 3 +++
> arch/x86/kernel/cpu/spec_ctrl.c | 45 ++++++++++++++++++++++++++++++++++++++++
> 4 files changed, 69 insertions(+), 2 deletions(-)

The files that Alan pointed out as being missed in the last version of
this patch don't seem to be updated here either :(

thanks,

greg k-h

2018-01-06 08:57:26

by Greg Kroah-Hartman

Subject: Re: [PATCH v2 8/8] x86: Use IBRS for firmware update path

On Fri, Jan 05, 2018 at 06:12:23PM -0800, Tim Chen wrote:
> +void unprotected_firmware_begin(void)
> +{
> + if (firmware_ibrs) {
> + __disable_indirect_speculation();
> + } else {
> + /*
> + * rmb prevent unwanted speculation when we
> + * are setting IBRS
> + */
> + rmb();
> + }
> +}
> +EXPORT_SYMBOL_GPL(unprotected_firmware_begin);

What modules need these functions?

And again, if they are needed, we need a better naming scheme for them.
Prefix them all with "firmware_" perhaps to make it a bit nicer?

thanks,

greg k-h

2018-01-06 12:01:55

by Woodhouse, David

Subject: Re: [PATCH v2 0/8] IBRS patch series

On Fri, 2018-01-05 at 22:43 -0800, Tim Chen wrote:
>
> On 01/05/2018 06:12 PM, Tim Chen wrote:
> >
> > Thanks to everyone for the feedback on the initial posting.
> > This is an updated patchset and I hope I've captured all
> > the review comments.  I've done a lot of code clean up
> > per everyone's comments. Please let me know if I've missed
> > something.
> >
> Should also have mentioned that the patch series applies on
> top of 4.15-rc6.

If you're going to set/clear the X86_FEATURE_RETPOLINE and
X86_FEATURE_RETPOLINE_AMD features (sorry, there are two of them now.
Blame Linus), you probably want to base on the retpoline branch
instead? It was Arjan's choice to put retpoline first, wasn't it?

In the last few days we've gone from a coherent patch series which was
far from perfect but which was at least basically complete and
functional, to disparate patch series all of which conflict. And some
of which haven't even been posted publicly yet at all (IBPB, and I
still haven't actually seen the variant 1 patches in public).

It would be really good to pull everything together, even if it's going
to receive significant bikeshedding.



2018-01-06 12:17:58

by Woodhouse, David

Subject: Re: [PATCH v2 6/8] x86/microcode: Recheck IBRS features on microcode reload

On Fri, 2018-01-05 at 18:12 -0800, Tim Chen wrote:
> On new microcode write, check whether IBRS
> is present by rescanning IBRS feature.
>
> Signed-off-by: Tim Chen <[email protected]>

Didn't we also give you a patch which fixed up the error handling on
the microcode reload, when it fails on only one CPU?



2018-01-06 12:18:17

by Woodhouse, David

Subject: Re: [PATCH v2 0/8] IBRS patch series

On Fri, 2018-01-05 at 18:12 -0800, Tim Chen wrote:
> Tim Chen (8):
>   x86/feature: Detect the x86 IBRS feature to control Speculation
>   x86/enter: MACROS to set/clear IBRS
>   x86/enter: Use IBRS on syscall and interrupts
>   x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature
>   x86/idle: Disable IBRS entering idle and enable it on wakeup
>   x86/microcode: Recheck IBRS features on microcode reload
>   x86: Do not use dynamic IBRS if retpoline is enabled
>   x86: Use IBRS for firmware update path

.. where's the KVM bit? We need to set IBRS on vmexit too.



2018-01-06 12:56:42

by Borislav Petkov

Subject: Re: [PATCH v2 1/8] x86/feature: Detect the x86 IBRS feature to control Speculation

On Fri, Jan 05, 2018 at 06:12:16PM -0800, Tim Chen wrote:

<--- This needs an introductory sentence here.

> cpuid ax=0x7, return rdx bit 26 to indicate presence of this feature

You can write that as CPUID(7).RDX[26].

> IA32_SPEC_CTRL (0x48)
> IA32_SPEC_CTRL, bit0 – Indirect Branch Restricted Speculation (IBRS)

Ah, those are MSRs. Please say so.

> If IBRS is set, near returns and near indirect jumps/calls will not allow
> their predicted target address to be controlled by code that executed in
> a less privileged prediction mode before the IBRS mode was last written
> with a value of 1 or on another logical processor so long as all RSB

"RSB" is?

> entries from the previous less privileged prediction mode are overwritten.
>
> * Thus a near indirect jump/call/return may be affected by code in a
> less privileged prediction mode that executed AFTER IBRS mode was last
> written with a value of 1

End sentences with a full stop.

> * There is no need to clear IBRS before writing it with a value of
> 1. Unconditionally writing it with a value of 1 after the prediction
> mode change is sufficient

This sounds strange. I know of funky MSRs like that but if it is not
the case here, no need to mention it then.

> * Note: IBRS is not required in order to isolate branch predictions for
> SMM or SGX enclaves
>
> * Code executed by a sibling logical processor cannot control indirect
> jump/call/return predicted target when IBRS is set
>
> * SMEP will prevent supervisor mode using RSB entries filled by user code;
> this can reduce the need for software to overwrite RSB entries
>
> CPU performance could be reduced when running with IBRS set.
>
> Signed-off-by: Tim Chen <[email protected]>
> ---
> arch/x86/include/asm/cpufeatures.h | 1 +
> arch/x86/include/asm/msr-index.h | 4 ++++
> arch/x86/kernel/cpu/scattered.c | 1 +
> tools/arch/x86/include/asm/cpufeatures.h | 1 +
> 4 files changed, 7 insertions(+)
>
> diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
> index 07cdd17..5ee0737 100644
> --- a/arch/x86/include/asm/cpufeatures.h
> +++ b/arch/x86/include/asm/cpufeatures.h
> @@ -209,6 +209,7 @@
> #define X86_FEATURE_AVX512_4FMAPS ( 7*32+17) /* AVX-512 Multiply Accumulation Single precision */
>
> #define X86_FEATURE_MBA ( 7*32+18) /* Memory Bandwidth Allocation */
> +#define X86_FEATURE_SPEC_CTRL ( 7*32+19) /* Control Speculation Control */
>
> /* Virtualization flags: Linux defined, word 8 */
> #define X86_FEATURE_TPR_SHADOW ( 8*32+ 0) /* Intel TPR Shadow */
> diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
> index 34c4922..f881add 100644
> --- a/arch/x86/include/asm/msr-index.h
> +++ b/arch/x86/include/asm/msr-index.h
> @@ -42,6 +42,10 @@
> #define MSR_PPIN_CTL 0x0000004e
> #define MSR_PPIN 0x0000004f
>
> +#define MSR_IA32_SPEC_CTRL 0x00000048
> +#define SPEC_CTRL_FEATURE_DISABLE_IBRS (0 << 0)
> +#define SPEC_CTRL_FEATURE_ENABLE_IBRS (1 << 0)

s/_FEATURE//

SPEC_CTRL_{ENABLE,DISABLE}_IBRS is good enough.

> +
> #define MSR_IA32_PERFCTR0 0x000000c1
> #define MSR_IA32_PERFCTR1 0x000000c2
> #define MSR_FSB_FREQ 0x000000cd
...

--
Regards/Gruss,
Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

2018-01-06 14:41:17

by Konrad Rzeszutek Wilk

Subject: Re: [PATCH v2 4/8] x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature

On Fri, Jan 05, 2018 at 06:12:19PM -0800, Tim Chen wrote:
> From: Tim Chen <[email protected]>
> From: Andrea Arcangeli <[email protected]>
>
> There are 2 ways to control IBRS
>
> 1. At boot time
> noibrs kernel boot parameter will disable IBRS usage
>
> Otherwise if the above parameters are not specified, the system
> will enable ibrs and ibpb usage if the cpu supports it.
>
> 2. At run time
> echo 0 > /sys/kernel/debug/x86/ibrs_enabled will turn off IBRS
> echo 1 > /sys/kernel/debug/x86/ibrs_enabled will turn on IBRS in kernel
> echo 2 > /sys/kernel/debug/x86/ibrs_enabled will turn on IBRS in both userspace and kernel
>
> The implementation was updated with input and suggestions from Andrea Arcangeli.
>
> Signed-off-by: Tim Chen <[email protected]>
> ---
> arch/x86/entry/calling.h | 42 ++++++++--
> arch/x86/include/asm/spec_ctrl.h | 15 ++++
> arch/x86/kernel/cpu/Makefile | 1 +
> arch/x86/kernel/cpu/scattered.c | 2 +
> arch/x86/kernel/cpu/spec_ctrl.c | 160 +++++++++++++++++++++++++++++++++++++++
> 5 files changed, 214 insertions(+), 6 deletions(-)
> create mode 100644 arch/x86/include/asm/spec_ctrl.h
> create mode 100644 arch/x86/kernel/cpu/spec_ctrl.c
>
> diff --git a/arch/x86/entry/calling.h b/arch/x86/entry/calling.h
> index 09c870d..6b65d47 100644
> --- a/arch/x86/entry/calling.h
> +++ b/arch/x86/entry/calling.h
> @@ -373,35 +373,55 @@ For 32-bit we have the following conventions - kernel is built with
> .endm
>
> .macro ENABLE_IBRS
> - ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_SPEC_CTRL
> + testl $1, dynamic_ibrs
> + jz .Lskip_\@
> +
> PUSH_MSR_REGS
> WRMSR_ASM $MSR_IA32_SPEC_CTRL, $SPEC_CTRL_FEATURE_ENABLE_IBRS
> POP_MSR_REGS
> + jmp .Ldone_\@
> +
> .Lskip_\@:
> + lfence
> +.Ldone_\@:
> .endm
>
> .macro DISABLE_IBRS
> - ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_SPEC_CTRL
> + testl $1, dynamic_ibrs

On every system call we end up hammering on this 'dynamic_ibrs'
variable. And it looks like it can be flipped via the IPI mechanism.

Would it make sense for this to be per-cpu?

2018-01-06 17:33:20

by Dave Hansen

[permalink] [raw]
Subject: Re: [PATCH v2 4/8] x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature

On 01/06/2018 06:41 AM, Konrad Rzeszutek Wilk wrote:
>> .macro DISABLE_IBRS
>> - ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_SPEC_CTRL
>> + testl $1, dynamic_ibrs
> On every system call we end up hammering on this 'dynamic_ibrs'
> variable. And it looks like it can be flipped via the IPI mechanism.
>
> Would it make sense for this to be per-cpu?

It's probably better to either just make it __read_mostly or get the
static branches that folks were suggesting actually working.

2018-01-06 17:41:48

by Van De Ven, Arjan

Subject: RE: [PATCH v2 4/8] x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature

> >> .macro DISABLE_IBRS
> >> - ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_SPEC_CTRL
> >> + testl $1, dynamic_ibrs
> > On every system call we end up hammering on this 'dynamic_ibrs'
> > variable. And it looks like it can be flipped via the IPI mechanism.
> >
> > Would it make sense for this to be per-cpu?
>
> It's probably better to either just make it __read_mostly or get the
> static branches that folks were suggesting actually working.

I still wonder if this isn't just better as a boot command line

2018-01-06 18:11:02

by Tim Chen

Subject: Re: [PATCH v2 4/8] x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature



On 01/06/2018 12:54 AM, Greg KH wrote:
> On Fri, Jan 05, 2018 at 06:12:19PM -0800, Tim Chen wrote:
>> From: Tim Chen <[email protected]>
>> From: Andrea Arcangeli <[email protected]>
>>
>> There are 2 ways to control IBRS
>>
>> 1. At boot time
>> noibrs kernel boot parameter will disable IBRS usage
>>
>> Otherwise if the above parameters are not specified, the system
>> will enable ibrs and ibpb usage if the cpu supports it.
>>
>> 2. At run time
>> echo 0 > /sys/kernel/debug/x86/ibrs_enabled will turn off IBRS
>> echo 1 > /sys/kernel/debug/x86/ibrs_enabled will turn on IBRS in kernel
>> echo 2 > /sys/kernel/debug/x86/ibrs_enabled will turn on IBRS in both userspace and kernel
>>
>> The implementation was updated with input and suggestions from Andrea Arcangeli.
>>
>> Signed-off-by: Tim Chen <[email protected]>
>> ---
>
> <snip>
>
>> index 0000000..4fda38b
>> --- /dev/null
>> +++ b/arch/x86/include/asm/spec_ctrl.h
>> @@ -0,0 +1,15 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>
> Thank you for these markings.
>
>> --- /dev/null
>> +++ b/arch/x86/kernel/cpu/spec_ctrl.c
>> @@ -0,0 +1,160 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +
>> +#include <linux/mutex.h>
>> +#include <linux/debugfs.h>
>> +#include <linux/uaccess.h>
>> +
>> +#include <asm/spec_ctrl.h>
>> +#include <asm/cpufeature.h>
>> +
>> +unsigned int dynamic_ibrs __read_mostly;
>> +EXPORT_SYMBOL_GPL(dynamic_ibrs);
>
> What kernel module needs this symbol? A symbol can be "global" and not
> be exported, those are two different things.
>
> And again, horrible name for a global symbol, if this really is being
> exported to modules (i.e. the world.)
>
>> +
>> +enum {
>> + IBRS_DISABLED,
>
> As you are making a user/kernel API here, shouldn't you enforce the
> values this enum has "just to be sure"?
>
>> + /* in host kernel, disabled in guest and userland */
>> + IBRS_ENABLED,
>> + /* in host kernel and host userland, disabled in guest */
>> + IBRS_ENABLED_USER,
>> + IBRS_MAX = IBRS_ENABLED_USER,
>> +};
>
> Also, this should be documented somewhere, Documentation/ABI/ perhaps?
>
>> +static unsigned int ibrs_enabled;
>> +static bool ibrs_admin_disabled;
>> +
>> +/* mutex to serialize IBRS control changes */
>> +DEFINE_MUTEX(spec_ctrl_mutex);
>
> static?
>
>> +
>> +void scan_spec_ctrl_feature(struct cpuinfo_x86 *c)
>> +{
>> + if ((!c->cpu_index) && (boot_cpu_has(X86_FEATURE_SPEC_CTRL))) {
>> + if (!ibrs_admin_disabled) {
>> + dynamic_ibrs = 1;
>> + ibrs_enabled = IBRS_ENABLED;
>> + }
>> + }
>> +}
>> +EXPORT_SYMBOL_GPL(scan_spec_ctrl_feature);
>
> Again, what module needs this?

Right now no module will need it. I'll get rid of it.

>
> Same for all the exports in this file, and again, if they are needed in
> modules, you need to get a better naming scheme. I'm not trying to
> bikeshed, it really matters, our global symbol namespace is a mess,
> don't make it worse :)

Noted. I'll add spec_ctrl in front of those.

>
> thanks,
>
> greg k-h
>

2018-01-06 18:20:30

by Tim Chen

Subject: Re: [PATCH v2 4/8] x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature



On 01/06/2018 06:41 AM, Konrad Rzeszutek Wilk wrote:
> On Fri, Jan 05, 2018 at 06:12:19PM -0800, Tim Chen wrote:
>> From: Tim Chen <[email protected]>
>> From: Andrea Arcangeli <[email protected]>
>>
>
>>
>> .macro DISABLE_IBRS
>> - ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_SPEC_CTRL
>> + testl $1, dynamic_ibrs
>
> On every system call we end up hammering on this 'dynamic_ibrs'
> variable. And it looks like it can be flipped via the IPI mechanism.

On system call, we read the dynamic_ibrs value (we don't change it) and
flip the IBRS MSR only if dynamic_ibrs is true.

We only make a global change to the IBRS MSRs on all CPUs when the
admin requests a change to the control value, serialized by
spec_ctrl_mutex. Before we do that, we set dynamic_ibrs to 0,
so each CPU no longer makes any change to IBRS. Then the IPIs
update the IBRS MSR values.

>
> Would it make sense for this to be per-cpu?
>

2018-01-06 18:23:11

by Tim Chen

Subject: Re: [PATCH v2 4/8] x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature



On 01/06/2018 09:33 AM, Dave Hansen wrote:
> On 01/06/2018 06:41 AM, Konrad Rzeszutek Wilk wrote:
>>> .macro DISABLE_IBRS
>>> - ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_SPEC_CTRL
>>> + testl $1, dynamic_ibrs
>> On every system call we end up hammering on this 'dynamic_ibrs'
>> variable. And it looks like it can be flipped via the IPI mechanism.
>>
>> Would it make sense for this to be per-cpu?
>
> It's probably better to either just make it __read_mostly or get the
> static branches that folks were suggesting actually working.
>

dynamic_ibrs is indeed declared __read_mostly

+unsigned int dynamic_ibrs __read_mostly;
+EXPORT_SYMBOL_GPL(dynamic_ibrs);

Tim

2018-01-06 19:22:55

by Dave Hansen

Subject: Re: [PATCH v2 4/8] x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature

On 01/06/2018 09:41 AM, Van De Ven, Arjan wrote:
>>>> .macro DISABLE_IBRS
>>>> - ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_SPEC_CTRL
>>>> + testl $1, dynamic_ibrs
>>> On every system call we end up hammering on this 'dynamic_ibrs'
>>> variable. And it looks like it can be flipped via the IPI mechanism.
>>>
>>> Would it make sense for this to be per-cpu?
>>
>> It's probably better to either just make it __read_mostly or get the
>> static branches that folks were suggesting actually working.
>
> I still wonder if this isn't just better as a boot command line

It's simpler that way. But, ideally, we want to make it runtime
switchable to match the implementation in the distros.

2018-01-06 19:47:30

by Thomas Gleixner

Subject: Re: [PATCH v2 4/8] x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature

On Sat, 6 Jan 2018, Dave Hansen wrote:

> On 01/06/2018 09:41 AM, Van De Ven, Arjan wrote:
> >>>> .macro DISABLE_IBRS
> >>>> - ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_SPEC_CTRL
> >>>> + testl $1, dynamic_ibrs
> >>> On every system call we end up hammering on this 'dynamic_ibrs'
> >>> variable. And it looks like it can be flipped via the IPI mechanism.
> >>>
> >>> Would it make sense for this to be per-cpu?
> >>
> >> It's probably better to either just make it __read_mostly or get the
> >> static branches that folks were suggesting actually working.
> >
> > I still wonder if this isn't just better as a boot command line
>
> It's simpler that way. But, ideally, we want to make it runtime
> switchable to match the implementation in the distros.

Stop this silly argument please. The distros shipped lots of crap which we
dont want to have at all.

I told you folks yesterday what I want to see and the sysctl thing is the
least on that list and it's not needed for getting the important thing -
the protection - to work.

Can we pretty please do the basics and worry about that sysctl or whatever
people have on their wishlist once the dust settled.

Thanks,

tglx

2018-01-06 21:25:54

by Konrad Rzeszutek Wilk

Subject: Re: [PATCH v2 4/8] x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature

On Sat, Jan 06, 2018 at 10:10:59AM -0800, Tim Chen wrote:
>
>
> On 01/06/2018 12:54 AM, Greg KH wrote:
> > On Fri, Jan 05, 2018 at 06:12:19PM -0800, Tim Chen wrote:
> >> From: Tim Chen <[email protected]>
> >> From: Andrea Arcangeli <[email protected]>
> >>
> >> There are 2 ways to control IBRS
> >>
> >> 1. At boot time
> >> noibrs kernel boot parameter will disable IBRS usage
> >>
> >> Otherwise if the above parameters are not specified, the system
> >> will enable ibrs and ibpb usage if the cpu supports it.
> >>
> >> 2. At run time
> >> echo 0 > /sys/kernel/debug/x86/ibrs_enabled will turn off IBRS
> >> echo 1 > /sys/kernel/debug/x86/ibrs_enabled will turn on IBRS in kernel
> >> echo 2 > /sys/kernel/debug/x86/ibrs_enabled will turn on IBRS in both userspace and kernel
> >>


This is going to create headaches in the future.

That is, on future CPUs there will be no need for this MSR nor for
retpoline, as the CPUs will observe correctness when switching
rings/VM-exits/etc., and I would assume that 'ibrs_enabled' will return 0.

And that will make folks scared and run to support/Intel with
complaints.

Furthermore, with the 'retpoline' work you can disable IBRS and instead
use retpolines as the mitigation, and again 'ibrs_enabled' is now zero.
Cue a horde of customers calling support.

Would it be better to have a global /sys/../spectre_resistant instead
of this 'well, check if the retpoline sysfs knob is enabled, and if it is
not, then look at the CPUID flags'?

It would be good to have this future proof.

2018-01-06 21:32:51

by Konrad Rzeszutek Wilk

Subject: Re: [PATCH v2 4/8] x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature

On Sat, Jan 06, 2018 at 08:47:19PM +0100, Thomas Gleixner wrote:
> On Sat, 6 Jan 2018, Dave Hansen wrote:
>
> > On 01/06/2018 09:41 AM, Van De Ven, Arjan wrote:
> > >>>> .macro DISABLE_IBRS
> > >>>> - ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_SPEC_CTRL
> > >>>> + testl $1, dynamic_ibrs
> > >>> On every system call we end up hammering on this 'dynamic_ibrs'
> > >>> variable. And it looks like it can be flipped via the IPI mechanism.
> > >>>
> > >>> Would it make sense for this to be per-cpu?
> > >>
> > >> It's probably better to either just make it __read_mostly or get the
> > >> static branches that folks were suggesting actually working.
> > >
> > > I still wonder if this isn't just better as a boot command line
> >
> > It's simpler that way. But, ideally, we want to make it runtime
> > switchable to match the implementation in the distros.
>
> Stop this silly argument please. The distros shipped lots of crap which we
> dont want to have at all.
>
> I told you folks yesterday what I want to see and the sysctl thing is the
> least on that list and it's not needed for getting the important thing -
> the protection - to work.

I agree. But this is what customers are told to inspect to see if they
are impacted. And if in future versions this goes away or changes, they
will freak out and cause needless escalations.

>
> Can we pretty please do the basics and worry about that sysctl or whatever
> people have on their wishlist once the dust settled.
>
> Thanks,
>
> tglx
>

2018-01-06 21:34:04

by Van De Ven, Arjan

Subject: RE: [PATCH v2 4/8] x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature

> I agree. But this is what customers are told to inspect to see if they
> are impacted. And if in the future versions this goes away or such - they
> will freak out and cause needless escalations.


it's a mistake though... retpoline is what people should be using ....
... and only in super exception cases should IBRS even be considered

2018-01-06 21:39:40

by Thomas Gleixner

Subject: Re: [PATCH v2 4/8] x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature

On Sat, 6 Jan 2018, Konrad Rzeszutek Wilk wrote:
> On Sat, Jan 06, 2018 at 08:47:19PM +0100, Thomas Gleixner wrote:
> > On Sat, 6 Jan 2018, Dave Hansen wrote:
> >
> > > On 01/06/2018 09:41 AM, Van De Ven, Arjan wrote:
> > > >>>> .macro DISABLE_IBRS
> > > >>>> - ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_SPEC_CTRL
> > > >>>> + testl $1, dynamic_ibrs
> > > >>> On every system call we end up hammering on this 'dynamic_ibrs'
> > > >>> variable. And it looks like it can be flipped via the IPI mechanism.
> > > >>>
> > > >>> Would it make sense for this to be per-cpu?
> > > >>
> > > >> It's probably better to either just make it __read_mostly or get the
> > > >> static branches that folks were suggesting actually working.
> > > >
> > > > I still wonder if this isn't just better as a boot command line
> > >
> > > It's simpler that way. But, ideally, we want to make it runtime
> > > switchable to match the implementation in the distros.
> >
> > Stop this silly argument please. The distros shipped lots of crap which we
> > dont want to have at all.
> >
> > I told you folks yesterday what I want to see and the sysctl thing is the
> > least on that list and it's not needed for getting the important thing -
> > the protection - to work.
>
> I agree. But this is what customers are told to inspect to see if they
> are impacted. And if in the future versions this goes away or such - they
> will freak out and cause needless escalations.

That's the result of distros cramming stuff into their kernels without
talking to us. It's their problem to explain that to their customers.

We can talk about the sysctl _AFTER_ fixing the real issues.

Thanks,

tglx

2018-01-06 21:41:58

by Konrad Rzeszutek Wilk

Subject: Re: [PATCH v2 4/8] x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature

On Sat, Jan 06, 2018 at 09:34:01PM +0000, Van De Ven, Arjan wrote:
> > I agree. But this is what customers are told to inspect to see if they
> > are impacted. And if in the future versions this goes away or such - they
> > will freak out and cause needless escalations.
>
>
> it's a mistake though... retpoline is what people should be using ....
> ... and only in super exception cases should IBRS even be considered

Like on certain Skylake and Broadwell where using the retpoline won't suffice?

As you can imagine, having a heuristic that will figure out whether:
- The CPU is doing the correct thing, retpoline or IBRS is not needed at all.
- The CPU can do retpoline, use that instead.
- The CPU is unsafe to do retpoline, use IBRS.
- The CPU can do retpoline, but MSRs are faster (is that even the case?)
- The CPU hasn't been updated, the retpolines are unsafe, the only solution is to buy a new CPU.

And have all of this nicely rolled out to customers in a 'sysfs' file that
will tell the customers - yes, you are safe.

All I'm saying is that 'ibrs_enabled' is a bad name; it should be something else
altogether.

2018-01-06 21:44:47

by Van De Ven, Arjan

Subject: RE: [PATCH v2 4/8] x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature


> > it's a mistake though... retpoline is what people should be using ....
> > ... and only in super exception cases should IBRS even be considered
>
> Like on certain Skylake and Broadwell where using the retpoline won't suffice?

Skylake is a bit more complex, but that was discussed in earlier emails where a few more things are needed. No idea what you're inventing on Broadwell where you would use IBRS but not retpoline.


> - The CPU can do retpoline, but MSRS are faster (is that even the case?)

that is never the case

2018-01-06 21:46:43

by Konrad Rzeszutek Wilk

Subject: Is: Linus, name for 'spectre' variable. Was:Re: [PATCH v2 4/8] x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature

On Sat, Jan 06, 2018 at 10:39:27PM +0100, Thomas Gleixner wrote:
> On Sat, 6 Jan 2018, Konrad Rzeszutek Wilk wrote:
> > On Sat, Jan 06, 2018 at 08:47:19PM +0100, Thomas Gleixner wrote:
> > > On Sat, 6 Jan 2018, Dave Hansen wrote:
> > >
> > > > On 01/06/2018 09:41 AM, Van De Ven, Arjan wrote:
> > > > >>>> .macro DISABLE_IBRS
> > > > >>>> - ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_SPEC_CTRL
> > > > >>>> + testl $1, dynamic_ibrs
> > > > >>> On every system call we end up hammering on this 'dynamic_ibrs'
> > > > >>> variable. And it looks like it can be flipped via the IPI mechanism.
> > > > >>>
> > > > >>> Would it make sense for this to be per-cpu?
> > > > >>
> > > > >> It's probably better to either just make it __read_mostly or get the
> > > > >> static branches that folks were suggesting actually working.
> > > > >
> > > > > I still wonder if this isn't just better as a boot command line
> > > >
> > > > It's simpler that way. But, ideally, we want to make it runtime
> > > > switchable to match the implementation in the distros.
> > >
> > > Stop this silly argument please. The distros shipped lots of crap which we
> > > dont want to have at all.
> > >
> > > I told you folks yesterday what I want to see and the sysctl thing is the
> > > least on that list and it's not needed for getting the important thing -
> > > the protection - to work.
> >
> > I agree. But this is what customers are told to inspect to see if they
> > are impacted. And if in the future versions this goes away or such - they
> > will freak out and cause needless escalations.
>
> That's the result of distros cramming stuff into their kernels without
> talking to us. It's their problem to explain that their customers.

I am trying to resolve this now for those distros that haven't
crammed these patches in yet, and get a resolution on the naming _now_
so they can have the right name from the get-go.

Let's rope in the boss.

>
> We can talks about the sysctl _AFTER_ fixing the real issues.

Perhaps we can delegate this to Linus.
>
> Thanks,
>
> tglx

2018-01-07 08:20:30

by Greg Kroah-Hartman

Subject: Re: [PATCH v2 4/8] x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature

On Sat, Jan 06, 2018 at 04:25:19PM -0500, Konrad Rzeszutek Wilk wrote:
> On Sat, Jan 06, 2018 at 10:10:59AM -0800, Tim Chen wrote:
> >
> >
> > On 01/06/2018 12:54 AM, Greg KH wrote:
> > > On Fri, Jan 05, 2018 at 06:12:19PM -0800, Tim Chen wrote:
> > >> From: Tim Chen <[email protected]>
> > >> From: Andrea Arcangeli <[email protected]>
> > >>
> > >> There are 2 ways to control IBRS
> > >>
> > >> 1. At boot time
> > >> noibrs kernel boot parameter will disable IBRS usage
> > >>
> > >> Otherwise if the above parameters are not specified, the system
> > >> will enable ibrs and ibpb usage if the cpu supports it.
> > >>
> > >> 2. At run time
> > >> echo 0 > /sys/kernel/debug/x86/ibrs_enabled will turn off IBRS
> > >> echo 1 > /sys/kernel/debug/x86/ibrs_enabled will turn on IBRS in kernel
> > >> echo 2 > /sys/kernel/debug/x86/ibrs_enabled will turn on IBRS in both userspace and kernel
> > >>
>
>
> This is going to create headaches in the future.
>
> That is future CPUs there will be no need for this MSR nor retpoline as
> the CPUs will observe correctness when switching .. rings/vm-exits/etc
> and I would assume that 'ibrs_enabled' will return 0.
>
> And that will make folks scared and run to support/Intel with
> complaints.
>
> Furthmore with the 'retpoline' work you can disable IBRS and instead use
> 'retpoline's as mitigation - and again the 'ibrs_enabled' is now zero.
> Cue in horde of customers calling support.
>
> Would it be better to have an global /sys/../spectre_resistent instead
> of these 'well, check if the repoline sysfs is enabled, or if that is
> not, then look at the cpuid flags'.
>
> It would be good to have this future proof.

It's a debugfs api, it can be changed at any time, to be anything we
want, and all is fine :)

Let's get this all working first please, and then a "real" api can be
designed and implemented to please everyone.

thanks,

greg k-h

2018-01-07 12:03:14

by Borislav Petkov

Subject: Re: [PATCH v2 2/8] x86/enter: MACROS to set/clear IBRS

On Fri, Jan 05, 2018 at 06:12:17PM -0800, Tim Chen wrote:

> Subject: Re: [PATCH v2 2/8] x86/enter: MACROS to set/clear IBRS

Your subject needs to have a verb and not scream:

Subject: [PATCH v2 2/8] x86/entry: Add macros to set/clear IBRS

> Create macros to control IBRS. Use these macros to enable IBRS on kernel entry
> paths and disable IBRS on kernel exit paths.
>
> The registers rax, rcx and rdx are touched when controlling IBRS
> so they need to be saved when they can't be clobbered.
>
> Signed-off-by: Tim Chen <[email protected]>
> ---
> arch/x86/entry/calling.h | 74 ++++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 74 insertions(+)
>
> diff --git a/arch/x86/entry/calling.h b/arch/x86/entry/calling.h
> index 45a63e0..09c870d 100644
> --- a/arch/x86/entry/calling.h
> +++ b/arch/x86/entry/calling.h
> @@ -6,6 +6,8 @@
> #include <asm/percpu.h>
> #include <asm/asm-offsets.h>
> #include <asm/processor-flags.h>
> +#include <asm/msr-index.h>
> +#include <asm/cpufeatures.h>
>
> /*
>
> @@ -347,3 +349,75 @@ For 32-bit we have the following conventions - kernel is built with
> .Lafter_call_\@:
> #endif
> .endm
> +
> +/*
> + * IBRS related macros
> + */
> +.macro PUSH_MSR_REGS
> + pushq %rax
> + pushq %rcx
> + pushq %rdx
> +.endm
> +
> +.macro POP_MSR_REGS
> + popq %rdx
> + popq %rcx
> + popq %rax
> +.endm
> +
> +.macro WRMSR_ASM msr_nr:req eax_val:req

WRMSR as a name is good enough.

Also, you need edx_val:req too in case we decide to reuse that macro
for something else later. Which I'm pretty sure we will, once it is out
there.

> + movl \msr_nr, %ecx
> + movl $0, %edx

... and then

movl \edx_val, %edx

> + movl \eax_val, %eax
> +.endm
> +
> +.macro ENABLE_IBRS
> + ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_SPEC_CTRL
> + PUSH_MSR_REGS
> + WRMSR_ASM $MSR_IA32_SPEC_CTRL, $SPEC_CTRL_FEATURE_ENABLE_IBRS

This is overwriting the previous contents of the MSR. You need to read
it and OR-in its bits [63:2] with SPEC_CTRL_FEATURE_ENABLE_IBRS and
clear bit 0.

Unless the rest of this MSR is not going to be used for anything else.
Then you're fine.

--
Regards/Gruss,
Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

2018-01-07 17:12:24

by Tim Chen

Subject: Re: [PATCH v2 2/8] x86/enter: MACROS to set/clear IBRS



On 01/07/2018 04:03 AM, Borislav Petkov wrote:
> On Fri, Jan 05, 2018 at 06:12:17PM -0800, Tim Chen wrote:
>
>> Subject: Re: [PATCH v2 2/8] x86/enter: MACROS to set/clear IBRS
>
> Your subject needs to have a verb and not scream:
>
> Subject: [PATCH v2 2/8] x86/entry: Add macros to set/clear IBRS
>
>> Create macros to control IBRS. Use these macros to enable IBRS on kernel entry
>> paths and disable IBRS on kernel exit paths.
>>
>> The registers rax, rcx and rdx are touched when controlling IBRS
>> so they need to be saved when they can't be clobbered.
>>
>> Signed-off-by: Tim Chen <[email protected]>
>> ---
>> arch/x86/entry/calling.h | 74 ++++++++++++++++++++++++++++++++++++++++++++++++
>> 1 file changed, 74 insertions(+)
>>
>> diff --git a/arch/x86/entry/calling.h b/arch/x86/entry/calling.h
>> index 45a63e0..09c870d 100644
>> --- a/arch/x86/entry/calling.h
>> +++ b/arch/x86/entry/calling.h
>> @@ -6,6 +6,8 @@
>> #include <asm/percpu.h>
>> #include <asm/asm-offsets.h>
>> #include <asm/processor-flags.h>
>> +#include <asm/msr-index.h>
>> +#include <asm/cpufeatures.h>
>>
>> /*
>>
>> @@ -347,3 +349,75 @@ For 32-bit we have the following conventions - kernel is built with
>> .Lafter_call_\@:
>> #endif
>> .endm
>> +
>> +/*
>> + * IBRS related macros
>> + */
>> +.macro PUSH_MSR_REGS
>> + pushq %rax
>> + pushq %rcx
>> + pushq %rdx
>> +.endm
>> +
>> +.macro POP_MSR_REGS
>> + popq %rdx
>> + popq %rcx
>> + popq %rax
>> +.endm
>> +
>> +.macro WRMSR_ASM msr_nr:req eax_val:req
>
> WRMSR as a name is good enough.
>
> Also, you need edx_val:req too in case we decide to reuse that macro
> for something else later. Which I'm pretty sure we will, once it is out
> there.
>
>> + movl \msr_nr, %ecx
>> + movl $0, %edx
>
> ... and then
>
> movl \edx_val, %edx
>
>> + movl \eax_val, %eax
>> +.endm
>> +
>> +.macro ENABLE_IBRS
>> + ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_SPEC_CTRL
>> + PUSH_MSR_REGS
>> + WRMSR_ASM $MSR_IA32_SPEC_CTRL, $SPEC_CTRL_FEATURE_ENABLE_IBRS
>
> This is overwriting the previous contents of the MSR. You need to read
> it and OR-in its bits [63:2] with SPEC_CTRL_FEATURE_ENABLE_IBRS and
> clear bit 0.
>
> Unless the rest of this MSR is not going to be used for anything else.
> Then you're fine.
>

Currently we are not using the other bits.
When the time comes that other bits in this MSR are used, we will
change this.

Tim

2018-01-07 17:14:59

by Tim Chen

Subject: Re: [PATCH v2 1/8] x86/feature: Detect the x86 IBRS feature to control Speculation



On 01/06/2018 04:56 AM, Borislav Petkov wrote:
> On Fri, Jan 05, 2018 at 06:12:16PM -0800, Tim Chen wrote:
>
> <--- This needs an introductory sentence here.
>
>> cpuid ax=0x7, return rdx bit 26 to indicate presence of this feature
>
> You can write that as CPUID(7).RDX[26].
>
>> IA32_SPEC_CTRL (0x48)
>> IA32_SPEC_CTRL, bit0 – Indirect Branch Restricted Speculation (IBRS)
>
> Ah, those are MSRs. Please say so.
>
>> If IBRS is set, near returns and near indirect jumps/calls will not allow
>> their predicted target address to be controlled by code that executed in
>> a less privileged prediction mode before the IBRS mode was last written
>> with a value of 1 or on another logical processor so long as all RSB
>
> "RSB" is?

RSB is the Return Stack Buffer, basically the predicted target addresses for return instructions.

>
>> entries from the previous less privileged prediction mode are overwritten.
>>
>> * Thus a near indirect jump/call/return may be affected by code in a
>> less privileged prediction mode that executed AFTER IBRS mode was last
>> written with a value of 1
>
> End sentences with a full stop.
>
>> * There is no need to clear IBRS before writing it with a value of
>> 1. Unconditionally writing it with a value of 1 after the prediction
>> mode change is sufficient
>
> This sounds strange. I know of funky MSRs like that but if it is not
> the case here, no need to mention it then.
>
>> * Note: IBRS is not required in order to isolate branch predictions for
>> SMM or SGX enclaves
>>
>> * Code executed by a sibling logical processor cannot control indirect
>> jump/call/return predicted target when IBRS is set
>>
>> * SMEP will prevent supervisor mode using RSB entries filled by user code;
>> this can reduce the need for software to overwrite RSB entries
>>
>> CPU performance could be reduced when running with IBRS set.
>>
>> Signed-off-by: Tim Chen <[email protected]>
>> ---
>> arch/x86/include/asm/cpufeatures.h | 1 +
>> arch/x86/include/asm/msr-index.h | 4 ++++
>> arch/x86/kernel/cpu/scattered.c | 1 +
>> tools/arch/x86/include/asm/cpufeatures.h | 1 +
>> 4 files changed, 7 insertions(+)
>>
>> diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
>> index 07cdd17..5ee0737 100644
>> --- a/arch/x86/include/asm/cpufeatures.h
>> +++ b/arch/x86/include/asm/cpufeatures.h
>> @@ -209,6 +209,7 @@
>> #define X86_FEATURE_AVX512_4FMAPS ( 7*32+17) /* AVX-512 Multiply Accumulation Single precision */
>>
>> #define X86_FEATURE_MBA ( 7*32+18) /* Memory Bandwidth Allocation */
>> +#define X86_FEATURE_SPEC_CTRL ( 7*32+19) /* Control Speculation Control */
>>
>> /* Virtualization flags: Linux defined, word 8 */
>> #define X86_FEATURE_TPR_SHADOW ( 8*32+ 0) /* Intel TPR Shadow */
>> diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
>> index 34c4922..f881add 100644
>> --- a/arch/x86/include/asm/msr-index.h
>> +++ b/arch/x86/include/asm/msr-index.h
>> @@ -42,6 +42,10 @@
>> #define MSR_PPIN_CTL 0x0000004e
>> #define MSR_PPIN 0x0000004f
>>
>> +#define MSR_IA32_SPEC_CTRL 0x00000048
>> +#define SPEC_CTRL_FEATURE_DISABLE_IBRS (0 << 0)
>> +#define SPEC_CTRL_FEATURE_ENABLE_IBRS (1 << 0)
>
> s/_FEATURE//
>
> SPEC_CTRL_{ENABLE,DISABLE}_IBRS is good enough.
>
>> +
>> #define MSR_IA32_PERFCTR0 0x000000c1
>> #define MSR_IA32_PERFCTR1 0x000000c2
>> #define MSR_FSB_FREQ 0x000000cd
> ...
>

Thanks, will update the phrasing.

Tim

2018-01-07 18:32:15

by Borislav Petkov

Subject: Re: [PATCH v2 1/8] x86/feature: Detect the x86 IBRS feature to control Speculation

On Sun, Jan 07, 2018 at 09:14:57AM -0800, Tim Chen wrote:
> RSB is return stack buffer, basically speculation addresses for return
> statement.

I had a very good idea what it was but maybe other readers might not. So
please write out abbreviations on their first use.

--
Regards/Gruss,
Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

2018-01-07 18:45:07

by Borislav Petkov

Subject: Re: [PATCH v2 2/8] x86/enter: MACROS to set/clear IBRS

On Sun, Jan 07, 2018 at 09:12:21AM -0800, Tim Chen wrote:
> Currently we are not using other bits.
> When the time comes that we have other bits in this MSR used, we will
> change this.

Then please add a short comment above it noting that it is ok for the
previous MSR contents to get overwritten.

--
Regards/Gruss,
Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

2018-01-07 19:27:32

by Borislav Petkov

Subject: Re: [PATCH v2 3/8] x86/enter: Use IBRS on syscall and interrupts

On Fri, Jan 05, 2018 at 06:12:18PM -0800, Tim Chen wrote:
> From: Andrea Arcangeli <[email protected]>
>
> Set IBRS upon kernel entrance via syscall and interrupts. Clear it
> upon exit. IBRS protects against unsafe indirect branching predictions
> in the kernel.
>
> The NMI interrupt save/restore of IBRS state was based on Andrea
> Arcangeli's implementation.
> Here's an explanation by Dave Hansen on why we save IBRS state for NMI.
>
> The normal interrupt code uses the 'error_entry' path which uses the
> Code Segment (CS) of the instruction that was interrupted to tell
> whether it interrupted the kernel or userspace and thus has to switch
> IBRS, or leave it alone.
>
> The NMI code is different. It uses 'paranoid_entry' because it can
> interrupt the kernel while it is running with a userspace IBRS (and %GS
> and CR3) value, but has a kernel CS. If we used the same approach as
> the normal interrupt code, we might do the following;
>
> SYSENTER_entry
> <-------------- NMI HERE
> IBRS=1
> do_something()
> IBRS=0
> SYSRET
>
> The NMI code might notice that we are running in the kernel and decide
> that it is OK to skip the IBRS=1. This would leave it running
> unprotected with IBRS=0, which is bad.
>
> However, if we unconditionally set IBRS=1, in the NMI, we might get the
> following case:
>
> SYSENTER_entry
> IBRS=1
> do_something()
> IBRS=0
> <-------------- NMI HERE (set IBRS=1)
> SYSRET
>
> and we would return to userspace with IBRS=1. Userspace would run
> slowly until we entered and exited the kernel again.
>
> Instead of those two approaches, we chose a third one where we simply
> save the IBRS value in a scratch register (%r13) and then restore that
> value, verbatim.

That's one helluva commit message. This is how you write commit
messages!

> Signed-off-by: Andrea Arcangeli <[email protected]>
> Signed-off-by: Tim Chen <[email protected]>
> ---
> arch/x86/entry/entry_64.S | 23 +++++++++++++++++++++++
> arch/x86/entry/entry_64_compat.S | 8 ++++++++
> 2 files changed, 31 insertions(+)

Reviewed-by: Borislav Petkov <[email protected]>

--
Regards/Gruss,
Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

2018-01-08 12:47:25

by Peter Zijlstra

Subject: Re: [PATCH v2 4/8] x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature

On Fri, Jan 05, 2018 at 07:12:12PM -0800, Dave Hansen wrote:
> On 01/05/2018 06:12 PM, Tim Chen wrote:
> > .macro ENABLE_IBRS
> > - ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_SPEC_CTRL
> > + testl $1, dynamic_ibrs
> > + jz .Lskip_\@
>
> There was an earlier suggestion to use STATIC_JUMP_IF_... here. That's

Yes, and thanks for Cc'ing me... :/

> a good suggestion, but we encountered some issues with it either
> crashing the kernel at boot or not properly turning on/off.
>
> We would love to do that minor really soon, but we figured everyone
> would rather see this version than wait for us to debug such a minor tweak.

It's not minor, you need that LFENCE without it. Which makes the whole
thing unconditionally expensive, which sucks.

2018-01-08 15:08:14

by Peter Zijlstra

Subject: Re: [PATCH v2 4/8] x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature

On Fri, Jan 05, 2018 at 06:12:19PM -0800, Tim Chen wrote:
> +void scan_spec_ctrl_feature(struct cpuinfo_x86 *c)
> +{
> + if ((!c->cpu_index) && (boot_cpu_has(X86_FEATURE_SPEC_CTRL))) {

We should test X86_BUG_SPECTRE_V1 here too before default enabling this,
no?

> + if (!ibrs_admin_disabled) {
> + dynamic_ibrs = 1;
> + ibrs_enabled = IBRS_ENABLED;
> + }
> + }
> +}

2018-01-08 15:11:35

by Peter Zijlstra

Subject: Re: [PATCH v2 4/8] x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature

On Fri, Jan 05, 2018 at 06:12:19PM -0800, Tim Chen wrote:
> +bool ibrs_inuse(void);

> +bool ibrs_inuse(void)
> +{
> + return ibrs_enabled == IBRS_ENABLED;
> +}
> +EXPORT_SYMBOL_GPL(ibrs_inuse);

Doesn't appear to actually be used anywhere...

2018-01-08 15:15:39

by Peter Zijlstra

Subject: Re: [PATCH v2 4/8] x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature

On Fri, Jan 05, 2018 at 06:12:19PM -0800, Tim Chen wrote:
> +static void spec_ctrl_flush_all_cpus(u32 msr_nr, u64 val)
> +{
> + int cpu;
> +
> + get_online_cpus();
> + for_each_online_cpu(cpu)
> + wrmsrl_on_cpu(cpu, msr_nr, val);
> + put_online_cpus();
> +}
> +
> +static ssize_t ibrs_enabled_write(struct file *file,
> + const char __user *user_buf,
> + size_t count, loff_t *ppos)
> +{
> + char buf[32];
> + ssize_t len;
> + unsigned int enable;
> +
> + len = min(count, sizeof(buf) - 1);
> + if (copy_from_user(buf, user_buf, len))
> + return -EFAULT;
> +
> + buf[len] = '\0';
> + if (kstrtouint(buf, 0, &enable))
> + return -EINVAL;
> +
> + if (enable > IBRS_MAX)
> + return -EINVAL;
> +
> + if (!boot_cpu_has(X86_FEATURE_SPEC_CTRL)) {
> + ibrs_enabled = IBRS_DISABLED;
> + return -EINVAL;
> + }
> +
> + mutex_lock(&spec_ctrl_mutex);
> +
> + if (enable == IBRS_DISABLED) {
> + /* disable IBRS usage */
> + ibrs_admin_disabled = true;
> + dynamic_ibrs = 0;
> + spec_ctrl_flush_all_cpus(MSR_IA32_SPEC_CTRL,
> + SPEC_CTRL_FEATURE_DISABLE_IBRS);
> +
> + } else if (enable == IBRS_ENABLED) {
> + /* enable IBRS usage in kernel */
> + ibrs_admin_disabled = false;
> + dynamic_ibrs = 1;
> +
> + } else if (enable == IBRS_ENABLED_USER) {
> + /* enable IBRS all the time in both userspace and kernel */
> + ibrs_admin_disabled = false;
> + dynamic_ibrs = 0;
> + spec_ctrl_flush_all_cpus(MSR_IA32_SPEC_CTRL,
> + SPEC_CTRL_FEATURE_ENABLE_IBRS);

So what about the CPUs that were offline when you did this?

> + }
> +
> + ibrs_enabled = enable;
> +
> + mutex_unlock(&spec_ctrl_mutex);
> + return count;
> +}

2018-01-08 15:29:46

by Van De Ven, Arjan

Subject: RE: [PATCH v2 4/8] x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature

> > + if ((!c->cpu_index) && (boot_cpu_has(X86_FEATURE_SPEC_CTRL))) {
>
> We should test X86_BUG_SPECTRE_V1 here too before default enabling this,
> no?


we shouldn't default enable this.

2018-01-08 15:53:54

by Peter Zijlstra

Subject: Re: [PATCH v2 4/8] x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature

On Fri, Jan 05, 2018 at 06:12:19PM -0800, Tim Chen wrote:
> +static ssize_t ibrs_enabled_write(struct file *file,
> + const char __user *user_buf,
> + size_t count, loff_t *ppos)
> +{
> + char buf[32];
> + ssize_t len;
> + unsigned int enable;
> +
> + len = min(count, sizeof(buf) - 1);
> + if (copy_from_user(buf, user_buf, len))
> + return -EFAULT;
> +
> + buf[len] = '\0';
> + if (kstrtouint(buf, 0, &enable))
> + return -EINVAL;
> +
> + if (enable > IBRS_MAX)
> + return -EINVAL;
> +
> + if (!boot_cpu_has(X86_FEATURE_SPEC_CTRL)) {
> + ibrs_enabled = IBRS_DISABLED;
> + return -EINVAL;
> + }
> +
> + mutex_lock(&spec_ctrl_mutex);
> +
> + if (enable == IBRS_DISABLED) {
> + /* disable IBRS usage */
> + ibrs_admin_disabled = true;
> + dynamic_ibrs = 0;
> + spec_ctrl_flush_all_cpus(MSR_IA32_SPEC_CTRL,
> + SPEC_CTRL_FEATURE_DISABLE_IBRS);
> +
> + } else if (enable == IBRS_ENABLED) {
> + /* enable IBRS usage in kernel */
> + ibrs_admin_disabled = false;
> + dynamic_ibrs = 1;

I think you need to do:

spec_ctrl_flush_all_cpus(MSR_IA32_SPEC_CTRL,
SPEC_CTRL_FEATURE_ENABLE_IBRS);

here as well, to force all CPUs into a known state.

> +
> + } else if (enable == IBRS_ENABLED_USER) {
> + /* enable IBRS all the time in both userspace and kernel */
> + ibrs_admin_disabled = false;
> + dynamic_ibrs = 0;
> + spec_ctrl_flush_all_cpus(MSR_IA32_SPEC_CTRL,
> + SPEC_CTRL_FEATURE_ENABLE_IBRS);
> + }
> +
> + ibrs_enabled = enable;
> +
> + mutex_unlock(&spec_ctrl_mutex);
> + return count;
> +}

2018-01-08 16:14:16

by Paolo Bonzini

Subject: Re: [PATCH v2 1/8] x86/feature: Detect the x86 IBRS feature to control Speculation

On 06/01/2018 03:12, Tim Chen wrote:
> cpuid ax=0x7, return rdx bit 26 to indicate presence of this feature
> IA32_SPEC_CTRL (0x48)
> IA32_SPEC_CTRL, bit0 – Indirect Branch Restricted Speculation (IBRS)
>
> If IBRS is set, near returns and near indirect jumps/calls will not allow
> their predicted target address to be controlled by code that executed in
> a less privileged prediction mode before the IBRS mode was last written
> with a value of 1 or on another logical processor so long as all RSB
> entries from the previous less privileged prediction mode are overwritten.
>
> * Thus a near indirect jump/call/return may be affected by code in a
> less privileged prediction mode that executed AFTER IBRS mode was last
> written with a value of 1
>
> * There is no need to clear IBRS before writing it with a value of
> 1. Unconditionally writing it with a value of 1 after the prediction
> mode change is sufficient
>
> * Note: IBRS is not required in order to isolate branch predictions for
> SMM or SGX enclaves
>
> * Code executed by a sibling logical processor cannot control indirect
> jump/call/return predicted target when IBRS is set
>
> * SMEP will prevent supervisor mode using RSB entries filled by user code;
> this can reduce the need for software to overwrite RSB entries
>
> CPU performance could be reduced when running with IBRS set.
>
> Signed-off-by: Tim Chen <[email protected]>

KVM also needs STIBP, can you please add it, or perhaps split
cpuid(7,0).edx out to its own feature word?

Thanks,

Paolo

> ---
> arch/x86/include/asm/cpufeatures.h | 1 +
> arch/x86/include/asm/msr-index.h | 4 ++++
> arch/x86/kernel/cpu/scattered.c | 1 +
> tools/arch/x86/include/asm/cpufeatures.h | 1 +
> 4 files changed, 7 insertions(+)
>
> diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
> index 07cdd17..5ee0737 100644
> --- a/arch/x86/include/asm/cpufeatures.h
> +++ b/arch/x86/include/asm/cpufeatures.h
> @@ -209,6 +209,7 @@
> #define X86_FEATURE_AVX512_4FMAPS ( 7*32+17) /* AVX-512 Multiply Accumulation Single precision */
>
> #define X86_FEATURE_MBA ( 7*32+18) /* Memory Bandwidth Allocation */
> +#define X86_FEATURE_SPEC_CTRL ( 7*32+19) /* Control Speculation Control */
>
> /* Virtualization flags: Linux defined, word 8 */
> #define X86_FEATURE_TPR_SHADOW ( 8*32+ 0) /* Intel TPR Shadow */
> diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
> index 34c4922..f881add 100644
> --- a/arch/x86/include/asm/msr-index.h
> +++ b/arch/x86/include/asm/msr-index.h
> @@ -42,6 +42,10 @@
> #define MSR_PPIN_CTL 0x0000004e
> #define MSR_PPIN 0x0000004f
>
> +#define MSR_IA32_SPEC_CTRL 0x00000048
> +#define SPEC_CTRL_FEATURE_DISABLE_IBRS (0 << 0)
> +#define SPEC_CTRL_FEATURE_ENABLE_IBRS (1 << 0)
> +
> #define MSR_IA32_PERFCTR0 0x000000c1
> #define MSR_IA32_PERFCTR1 0x000000c2
> #define MSR_FSB_FREQ 0x000000cd
> diff --git a/arch/x86/kernel/cpu/scattered.c b/arch/x86/kernel/cpu/scattered.c
> index 05459ad..bc50c40 100644
> --- a/arch/x86/kernel/cpu/scattered.c
> +++ b/arch/x86/kernel/cpu/scattered.c
> @@ -24,6 +24,7 @@ static const struct cpuid_bit cpuid_bits[] = {
> { X86_FEATURE_INTEL_PT, CPUID_EBX, 25, 0x00000007, 0 },
> { X86_FEATURE_AVX512_4VNNIW, CPUID_EDX, 2, 0x00000007, 0 },
> { X86_FEATURE_AVX512_4FMAPS, CPUID_EDX, 3, 0x00000007, 0 },
> + { X86_FEATURE_SPEC_CTRL, CPUID_EDX, 26, 0x00000007, 0 },
> { X86_FEATURE_CAT_L3, CPUID_EBX, 1, 0x00000010, 0 },
> { X86_FEATURE_CAT_L2, CPUID_EBX, 2, 0x00000010, 0 },
> { X86_FEATURE_CDP_L3, CPUID_ECX, 2, 0x00000010, 1 },
> diff --git a/tools/arch/x86/include/asm/cpufeatures.h b/tools/arch/x86/include/asm/cpufeatures.h
> index 800104c..5e56275 100644
> --- a/tools/arch/x86/include/asm/cpufeatures.h
> +++ b/tools/arch/x86/include/asm/cpufeatures.h
> @@ -208,6 +208,7 @@
> #define X86_FEATURE_AVX512_4FMAPS ( 7*32+17) /* AVX-512 Multiply Accumulation Single precision */
>
> #define X86_FEATURE_MBA ( 7*32+18) /* Memory Bandwidth Allocation */
> +#define X86_FEATURE_SPEC_CTRL ( 7*32+19) /* Control Speculation Control */
>
> /* Virtualization flags: Linux defined, word 8 */
> #define X86_FEATURE_TPR_SHADOW ( 8*32+ 0) /* Intel TPR Shadow */
>

2018-01-08 16:14:54

by Peter Zijlstra

Subject: Re: [PATCH v2 4/8] x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature

On Mon, Jan 08, 2018 at 01:47:07PM +0100, Peter Zijlstra wrote:
> > a good suggestion, but we encountered some issues with it either
> > crashing the kernel at boot or not properly turning on/off.

The below boots, but I lack stuff to test the enabling.

--- a/arch/x86/include/asm/spec_ctrl.h
+++ b/arch/x86/include/asm/spec_ctrl.h
@@ -9,9 +9,8 @@

void scan_spec_ctrl_feature(struct cpuinfo_x86 *c);
void rescan_spec_ctrl_feature(struct cpuinfo_x86 *c);
-bool ibrs_inuse(void);

-extern unsigned int dynamic_ibrs;
+DECLARE_STATIC_KEY_FALSE(ibrs_key);

static inline void __disable_indirect_speculation(void)
{
@@ -31,21 +30,14 @@ static inline void __enable_indirect_spe
static inline void unprotected_speculation_begin(void)
{
lockdep_assert_irqs_disabled();
- if (dynamic_ibrs)
+ if (static_branch_unlikely(&ibrs_key))
__enable_indirect_speculation();
}

static inline void unprotected_speculation_end(void)
{
- if (dynamic_ibrs) {
+ if (static_branch_unlikely(&ibrs_key))
__disable_indirect_speculation();
- } else {
- /*
- * rmb prevent unwanted speculation when we
- * are setting IBRS
- */
- rmb();
- }
}

#endif /* _ASM_X86_SPEC_CTRL_H */
--- a/arch/x86/kernel/cpu/spec_ctrl.c
+++ b/arch/x86/kernel/cpu/spec_ctrl.c
@@ -7,8 +7,7 @@
#include <asm/spec_ctrl.h>
#include <asm/cpufeature.h>

-unsigned int dynamic_ibrs __read_mostly;
-EXPORT_SYMBOL_GPL(dynamic_ibrs);
+DEFINE_STATIC_KEY_FALSE(ibrs_key);

enum {
IBRS_DISABLED,
@@ -27,7 +26,7 @@ DEFINE_MUTEX(spec_ctrl_mutex);
static inline void set_ibrs_feature(void)
{
if (!ibrs_admin_disabled) {
- dynamic_ibrs = 1;
+ static_branch_enable(&ibrs_key);
ibrs_enabled = IBRS_ENABLED;
}
}
@@ -54,12 +53,6 @@ void rescan_spec_ctrl_feature(struct cpu
}
EXPORT_SYMBOL_GPL(rescan_spec_ctrl_feature);

-bool ibrs_inuse(void)
-{
- return ibrs_enabled == IBRS_ENABLED;
-}
-EXPORT_SYMBOL_GPL(ibrs_inuse);
-
static int __init noibrs(char *str)
{
ibrs_admin_disabled = true;
@@ -124,19 +117,23 @@ static ssize_t ibrs_enabled_write(struct
if (enable == IBRS_DISABLED) {
/* disable IBRS usage */
ibrs_admin_disabled = true;
- dynamic_ibrs = 0;
+ static_branch_disable(&ibrs_key);
spec_ctrl_flush_all_cpus(MSR_IA32_SPEC_CTRL,
SPEC_CTRL_FEATURE_DISABLE_IBRS);

} else if (enable == IBRS_ENABLED) {
/* enable IBRS usage in kernel */
ibrs_admin_disabled = false;
- dynamic_ibrs = 1;
+ static_branch_enable(&ibrs_key);
+ /*
+ * This too needs an IPI to force all active CPUs into a known state.
+ */

} else if (enable == IBRS_ENABLED_USER) {
/* enable IBRS all the time in both userspace and kernel */
ibrs_admin_disabled = false;
- dynamic_ibrs = 0;
+ static_branch_disable(&ibrs_key);
+
spec_ctrl_flush_all_cpus(MSR_IA32_SPEC_CTRL,
SPEC_CTRL_FEATURE_ENABLE_IBRS);
}
--- a/arch/x86/entry/calling.h
+++ b/arch/x86/entry/calling.h
@@ -373,22 +373,17 @@ For 32-bit we have the following convent
.endm

.macro ENABLE_IBRS
- testl $1, dynamic_ibrs
- jz .Lskip_\@
+ STATIC_JUMP_IF_FALSE .Lskip_\@, ibrs_key, def=0

PUSH_MSR_REGS
WRMSR_ASM $MSR_IA32_SPEC_CTRL, $SPEC_CTRL_FEATURE_ENABLE_IBRS
POP_MSR_REGS
- jmp .Ldone_\@

.Lskip_\@:
- lfence
-.Ldone_\@:
.endm

.macro DISABLE_IBRS
- testl $1, dynamic_ibrs
- jz .Lskip_\@
+ STATIC_JUMP_IF_FALSE .Lskip_\@, ibrs_key, def=0

PUSH_MSR_REGS
WRMSR_ASM $MSR_IA32_SPEC_CTRL, $SPEC_CTRL_FEATURE_DISABLE_IBRS
@@ -398,20 +393,15 @@ For 32-bit we have the following convent
.endm

.macro ENABLE_IBRS_CLOBBER
- testl $1, dynamic_ibrs
- jz .Lskip_\@
+ STATIC_JUMP_IF_FALSE .Lskip_\@, ibrs_key, def=0

WRMSR_ASM $MSR_IA32_SPEC_CTRL, $SPEC_CTRL_FEATURE_ENABLE_IBRS
- jmp .Ldone_\@

.Lskip_\@:
- lfence
-.Ldone_\@:
.endm

.macro DISABLE_IBRS_CLOBBER
- testl $1, dynamic_ibrs
- jz .Lskip_\@
+ STATIC_JUMP_IF_FALSE .Lskip_\@, ibrs_key, def=0

WRMSR_ASM $MSR_IA32_SPEC_CTRL, $SPEC_CTRL_FEATURE_DISABLE_IBRS

@@ -419,8 +409,7 @@ For 32-bit we have the following convent
.endm

.macro ENABLE_IBRS_SAVE_AND_CLOBBER save_reg:req
- testl $1, dynamic_ibrs
- jz .Lskip_\@
+ STATIC_JUMP_IF_FALSE .Lskip_\@, ibrs_key, def=0

movl $MSR_IA32_SPEC_CTRL, %ecx
rdmsr
@@ -429,25 +418,18 @@ For 32-bit we have the following convent
movl $0, %edx
movl $SPEC_CTRL_FEATURE_ENABLE_IBRS, %eax
wrmsr
- jmp .Ldone_\@

.Lskip_\@:
- lfence
-.Ldone_\@:
.endm

.macro RESTORE_IBRS_CLOBBER save_reg:req
- testl $1, dynamic_ibrs
- jz .Lskip_\@
+ STATIC_JUMP_IF_FALSE .Lskip_\@, ibrs_key, def=0

/* Set IBRS to the value saved in the save_reg */
movl $MSR_IA32_SPEC_CTRL, %ecx
movl $0, %edx
movl \save_reg, %eax
wrmsr
- jmp .Ldone_\@

.Lskip_\@:
- lfence
-.Ldone_\@:
.endm

2018-01-08 17:02:25

by Tim Chen

[permalink] [raw]
Subject: Re: [PATCH v2 4/8] x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature

On 01/08/2018 07:29 AM, Van De Ven, Arjan wrote:
>>> + if ((!c->cpu_index) && (boot_cpu_has(X86_FEATURE_SPEC_CTRL))) {
>>
>> We should test X86_BUG_SPECTRE_V1 here too before default enabling this,
>> no?
>
>
> we shouldn't default enable this.
>

Patch 7 disables ibrs if retpoline is in use.

Tim

2018-01-08 17:28:16

by Tim Chen

[permalink] [raw]
Subject: Re: [PATCH v2 4/8] x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature

On 01/08/2018 08:14 AM, Peter Zijlstra wrote:
> On Mon, Jan 08, 2018 at 01:47:07PM +0100, Peter Zijlstra wrote:
>>> a good suggestion, but we encountered some issues with it either
>>> crashing the kernel at boot or not properly turning on/off.
>
> The below boots, but I lack stuff to test the enabling.

Peter,

Thanks. Will give it a spin. One other concern is that if
JUMP_LABEL is not configured, this may not work, and
we may still need to fall back to a control variable.

Tim

2018-01-08 17:42:33

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH v2 4/8] x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature

On Mon, Jan 08, 2018 at 09:28:12AM -0800, Tim Chen wrote:
> On 01/08/2018 08:14 AM, Peter Zijlstra wrote:
> > On Mon, Jan 08, 2018 at 01:47:07PM +0100, Peter Zijlstra wrote:
> >>> a good suggestion, but we encountered some issues with it either
> >>> crashing the kernel at boot or not properly turning on/off.
> >
> > The below boots, but I lack stuff to test the enabling.
>
> Peter,
>
> Thanks. Will give it a spin. One other concern is if
> JUMP_LABEL is not configured, this may not work, and
> we may still need fall back to a control variable.

Urgh yes.. I always forget about that case. Will the retpoline crud be
backported to a GCC old enough to not have asm-goto? If not, we could
make all of this simply require asm-goto.

2018-01-08 17:43:46

by Alexey Dobriyan

[permalink] [raw]
Subject: Re: [PATCH v2 4/8] x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature

> + len = sprintf(buf, "%d\n", READ_ONCE(*field));

READ_ONCE isn't necessary as there is only one read being made.

> + len = min(count, sizeof(buf) - 1);
> + if (copy_from_user(buf, user_buf, len))
> + return -EFAULT;
> +
> + buf[len] = '\0';
> + if (kstrtouint(buf, 0, &enable))
> + return -EINVAL;

There is kstrto*_from_user().
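For reference, kstrtouint_from_user() folds the copy_from_user() and the strict parse into a single call. A hedged userspace sketch of the strict-parsing behavior it provides (base-0 parse, whole string must be consumed, one trailing newline tolerated) — the helper name and exact semantics here are an approximation for illustration, not the kernel implementation:

```c
#include <errno.h>
#include <stdlib.h>
#include <string.h>

/* Approximation of kstrtouint(): base-0 parse, the whole string must
 * be consumed, and a single trailing '\n' (from echo) is tolerated. */
static int parse_uint_strict(const char *s, unsigned int *out)
{
	char buf[32];
	char *end;
	unsigned long val;
	size_t len = strlen(s);

	if (len == 0 || len >= sizeof(buf))
		return -EINVAL;
	memcpy(buf, s, len + 1);
	if (buf[len - 1] == '\n')	/* echo appends a newline */
		buf[len - 1] = '\0';

	errno = 0;
	val = strtoul(buf, &end, 0);
	if (errno || end == buf || *end != '\0')
		return -EINVAL;
	*out = (unsigned int)val;
	return 0;
}
```

With a combined helper like this, the write handler collapses to one parse call followed by the range check against IBRS_MAX.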

2018-01-08 18:30:25

by Andrea Arcangeli

[permalink] [raw]
Subject: Re: [PATCH v2 4/8] x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature

On Mon, Jan 08, 2018 at 08:43:40PM +0300, Alexey Dobriyan wrote:
> > + len = sprintf(buf, "%d\n", READ_ONCE(*field));
>
> READ_ONCE isn't necessary as there is only one read being made.

Others might disagree but you shouldn't ever let gcc touch any memory
that can change under gcc without READ_ONCE or volatile.

Ages ago I was told that in a switch() statement gcc could decide to use a
hash and re-read the value and crash.

Not for the simple case above, and we often don't use READ_ONCE and we
might use barrier() instead, but it would be better to use READ_ONCE.

READ_ONCE is more correct and there's no point to try to optimize it
especially if there's only one read being made in that function.

Either version in practice will work, but READ_ONCE is preferable IMHO.

Thanks,
Andrea
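The macro being debated boils down to a volatile access. A simplified userspace rendering of the scalar case (the real kernel macro also handles non-scalar sizes, which this sketch omits):

```c
/* Simplified scalar-only READ_ONCE: the volatile cast forces the
 * compiler to emit exactly one load and forbids it from re-reading
 * or caching the location across the access. */
#define READ_ONCE(x) (*(const volatile __typeof__(x) *)&(x))

static unsigned int ibrs_enabled_demo = 1;

static unsigned int read_enabled(void)
{
	/* One load of a variable that may change underneath us. */
	return READ_ONCE(ibrs_enabled_demo);
}
```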

2018-01-08 19:35:37

by Woodhouse, David

[permalink] [raw]
Subject: Re: [PATCH v2 4/8] x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature

On Mon, 2018-01-08 at 18:42 +0100, Peter Zijlstra wrote:
> On Mon, Jan 08, 2018 at 09:28:12AM -0800, Tim Chen wrote:
> > On 01/08/2018 08:14 AM, Peter Zijlstra wrote:
> > > On Mon, Jan 08, 2018 at 01:47:07PM +0100, Peter Zijlstra wrote:
> > >>> a good suggestion, but we encountered some issues with it either
> > >>> crashing the kernel at boot or not properly turning on/off.
> > > 
> > > The below boots, but I lack stuff to test the enabling.
> > 
> > Peter,
> > 
> > Thanks.  Will give it a spin.  One other concern is if
> > JUMP_LABEL is not configured, this may not work, and
> > we may still need fall back to a control variable.
>
> Urgh yes.. I always forget about that case. Will the retpoline crud be
> backported to a GCC old enough to not have asm-goto? If not, we could
> make all of this simply require asm-goto.

I think HJ said he was planning to backport as far as 4.9. When was
asm-goto added?



2018-01-08 19:52:26

by Lu, Hongjiu

[permalink] [raw]
Subject: RE: [PATCH v2 4/8] x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature

>
> On Mon, 2018-01-08 at 18:42 +0100, Peter Zijlstra wrote:
> > On Mon, Jan 08, 2018 at 09:28:12AM -0800, Tim Chen wrote:
> > > On 01/08/2018 08:14 AM, Peter Zijlstra wrote:
> > > > On Mon, Jan 08, 2018 at 01:47:07PM +0100, Peter Zijlstra wrote:
> > > >>> a good suggestion, but we encountered some issues with it either
> > > >>> crashing the kernel at boot or not properly turning on/off.
> > > >
> > > > The below boots, but I lack stuff to test the enabling.
> > >
> > > Peter,
> > >
> > > Thanks.  Will give it a spin.  One other concern is if JUMP_LABEL is
> > > not configured, this may not work, and we may still need fall back
> > > to a control variable.
> >
> > Urgh yes.. I always forget about that case. Will the retpoline crud be
> > backported to a GCC old enough to not have asm-goto? If not, we could
> > make all of this simply require asm-goto.
>
> I think HJ said he was planning to backport as far as 4.9. When was asm-
> goto added?

It was added to GCC 4.5.

H.J.

2018-01-08 22:24:59

by Tim Chen

[permalink] [raw]
Subject: Re: [PATCH v2 2/8] x86/enter: MACROS to set/clear IBRS

On 01/07/2018 04:03 AM, Borislav Petkov wrote:

>
> WRMSR as a name is good enough.
>

It aliases with the wrmsr instruction and didn't
build correctly, so I'll keep it as WRMSR_ASM.

Tim

2018-01-09 00:29:59

by Borislav Petkov

[permalink] [raw]
Subject: Re: [PATCH v2 4/8] x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature

On Fri, Jan 05, 2018 at 06:12:19PM -0800, Tim Chen wrote:
> From: Tim Chen <[email protected]>
> From: Andrea Arcangeli <[email protected]>

There's Co-Developed-by for this:

- Co-Developed-by: states that the patch was also created by another developer
along with the original author. This is useful at times when multiple
people work on a single patch. Note, this person also needs to have a
Signed-off-by: line in the patch as well.

> There are 2 ways to control IBRS
>
> 1. At boot time
> noibrs kernel boot parameter will disable IBRS usage
>
> Otherwise if the above parameters are not specified, the system
> will enable ibrs and ibpb usage if the cpu supports it.

s/cpu/CPU/

>
> 2. At run time
> echo 0 > /sys/kernel/debug/x86/ibrs_enabled will turn off IBRS
> echo 1 > /sys/kernel/debug/x86/ibrs_enabled will turn on IBRS in kernel
> echo 2 > /sys/kernel/debug/x86/ibrs_enabled will turn on IBRS in both userspace and kernel
>
> The implementation was updated with input and suggestions from Andrea Arcangeli.
>
> Signed-off-by: Tim Chen <[email protected]>
> ---
> arch/x86/entry/calling.h | 42 ++++++++--
> arch/x86/include/asm/spec_ctrl.h | 15 ++++
> arch/x86/kernel/cpu/Makefile | 1 +
> arch/x86/kernel/cpu/scattered.c | 2 +
> arch/x86/kernel/cpu/spec_ctrl.c | 160 +++++++++++++++++++++++++++++++++++++++
> 5 files changed, 214 insertions(+), 6 deletions(-)
> create mode 100644 arch/x86/include/asm/spec_ctrl.h
> create mode 100644 arch/x86/kernel/cpu/spec_ctrl.c
>
> diff --git a/arch/x86/entry/calling.h b/arch/x86/entry/calling.h
> index 09c870d..6b65d47 100644
> --- a/arch/x86/entry/calling.h
> +++ b/arch/x86/entry/calling.h
> @@ -373,35 +373,55 @@ For 32-bit we have the following conventions - kernel is built with
> .endm
>
> .macro ENABLE_IBRS
> - ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_SPEC_CTRL
> + testl $1, dynamic_ibrs
> + jz .Lskip_\@
> +
> PUSH_MSR_REGS
> WRMSR_ASM $MSR_IA32_SPEC_CTRL, $SPEC_CTRL_FEATURE_ENABLE_IBRS
> POP_MSR_REGS
> + jmp .Ldone_\@
> +
> .Lskip_\@:
> + lfence
> +.Ldone_\@:
> .endm
>
> .macro DISABLE_IBRS
> - ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_SPEC_CTRL
> + testl $1, dynamic_ibrs
> + jz .Lskip_\@
> +
> PUSH_MSR_REGS
> WRMSR_ASM $MSR_IA32_SPEC_CTRL, $SPEC_CTRL_FEATURE_DISABLE_IBRS
> POP_MSR_REGS
> +
> .Lskip_\@:
> .endm
>
> .macro ENABLE_IBRS_CLOBBER
> - ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_SPEC_CTRL
> + testl $1, dynamic_ibrs
> + jz .Lskip_\@
> +
> WRMSR_ASM $MSR_IA32_SPEC_CTRL, $SPEC_CTRL_FEATURE_ENABLE_IBRS
> + jmp .Ldone_\@
> +
> .Lskip_\@:
> + lfence
> +.Ldone_\@:
> .endm
>
> .macro DISABLE_IBRS_CLOBBER
> - ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_SPEC_CTRL
> + testl $1, dynamic_ibrs
> + jz .Lskip_\@
> +
> WRMSR_ASM $MSR_IA32_SPEC_CTRL, $SPEC_CTRL_FEATURE_DISABLE_IBRS
> +
> .Lskip_\@:
> .endm
>
> .macro ENABLE_IBRS_SAVE_AND_CLOBBER save_reg:req
> - ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_SPEC_CTRL
> + testl $1, dynamic_ibrs
> + jz .Lskip_\@
> +
> movl $MSR_IA32_SPEC_CTRL, %ecx
> rdmsr
> movl %eax, \save_reg
> @@ -409,15 +429,25 @@ For 32-bit we have the following conventions - kernel is built with
> movl $0, %edx
> movl $SPEC_CTRL_FEATURE_ENABLE_IBRS, %eax
> wrmsr
> + jmp .Ldone_\@
> +
> .Lskip_\@:
> + lfence
> +.Ldone_\@:
> .endm
>
> .macro RESTORE_IBRS_CLOBBER save_reg:req
> - ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_SPEC_CTRL
> + testl $1, dynamic_ibrs
> + jz .Lskip_\@
> +
> /* Set IBRS to the value saved in the save_reg */
> movl $MSR_IA32_SPEC_CTRL, %ecx
> movl $0, %edx
> movl \save_reg, %eax
> wrmsr
> + jmp .Ldone_\@
> +
> .Lskip_\@:
> + lfence
> +.Ldone_\@:
> .endm

Why not put all macros in spec_ctrl.h ?

> diff --git a/arch/x86/include/asm/spec_ctrl.h b/arch/x86/include/asm/spec_ctrl.h
> new file mode 100644
> index 0000000..4fda38b
> --- /dev/null
> +++ b/arch/x86/include/asm/spec_ctrl.h
> @@ -0,0 +1,15 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#ifndef _ASM_X86_SPEC_CTRL_H
> +#define _ASM_X86_SPEC_CTRL_H
> +
> +#include <asm/msr-index.h>
> +#include <asm/cpufeatures.h>
> +#include <asm/microcode.h>

Left over include from a previous version.

Instead, you need

#include <linux/cpu.h>

in spec_ctrl.c for get/put_online_cpus().

> +void scan_spec_ctrl_feature(struct cpuinfo_x86 *c);
> +bool ibrs_inuse(void);
> +
> +extern unsigned int dynamic_ibrs;
> +
> +#endif /* _ASM_X86_SPEC_CTRL_H */
> diff --git a/arch/x86/kernel/cpu/Makefile b/arch/x86/kernel/cpu/Makefile
> index 570e8bb..3ffbd24 100644
> --- a/arch/x86/kernel/cpu/Makefile
> +++ b/arch/x86/kernel/cpu/Makefile
> @@ -24,6 +24,7 @@ obj-y += match.o
> obj-y += bugs.o
> obj-y += aperfmperf.o
> obj-y += cpuid-deps.o
> +obj-y += spec_ctrl.o
>
> obj-$(CONFIG_PROC_FS) += proc.o
> obj-$(CONFIG_X86_FEATURE_NAMES) += capflags.o powerflags.o
> diff --git a/arch/x86/kernel/cpu/scattered.c b/arch/x86/kernel/cpu/scattered.c
> index bc50c40..5756d14 100644
> --- a/arch/x86/kernel/cpu/scattered.c
> +++ b/arch/x86/kernel/cpu/scattered.c
> @@ -8,6 +8,7 @@
> #include <asm/processor.h>
>
> #include <asm/apic.h>
> +#include <asm/spec_ctrl.h>
>
> struct cpuid_bit {
> u16 feature;
> @@ -57,6 +58,7 @@ void init_scattered_cpuid_features(struct cpuinfo_x86 *c)
> if (regs[cb->reg] & (1 << cb->bit))
> set_cpu_cap(c, cb->feature);
> }
> + scan_spec_ctrl_feature(c);

Hell no!

This function is only for setting the feature bits.

> u32 get_scattered_cpuid_leaf(unsigned int level, unsigned int sub_leaf,
> diff --git a/arch/x86/kernel/cpu/spec_ctrl.c b/arch/x86/kernel/cpu/spec_ctrl.c
> new file mode 100644
> index 0000000..1641bec
> --- /dev/null
> +++ b/arch/x86/kernel/cpu/spec_ctrl.c
> @@ -0,0 +1,160 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#include <linux/mutex.h>
> +#include <linux/debugfs.h>
> +#include <linux/uaccess.h>
> +
> +#include <asm/spec_ctrl.h>
> +#include <asm/cpufeature.h>
> +
> +unsigned int dynamic_ibrs __read_mostly;

WTH is dynamic_ibrs?

> +EXPORT_SYMBOL_GPL(dynamic_ibrs);
> +
> +enum {
> + IBRS_DISABLED,
> + /* in host kernel, disabled in guest and userland */
> + IBRS_ENABLED,
> + /* in host kernel and host userland, disabled in guest */
> + IBRS_ENABLED_USER,
> + IBRS_MAX = IBRS_ENABLED_USER,
> +};
> +static unsigned int ibrs_enabled;
> +static bool ibrs_admin_disabled;

Srsly?!

That's *three* variables controlling IBRS. This needs simplification.

> +
> +/* mutex to serialize IBRS control changes */
> +DEFINE_MUTEX(spec_ctrl_mutex);

static

> +void scan_spec_ctrl_feature(struct cpuinfo_x86 *c)
> +{
> + if ((!c->cpu_index) && (boot_cpu_has(X86_FEATURE_SPEC_CTRL))) {

What is !c->cpu_index? Checking whether this is the BSP? What for?

> + if (!ibrs_admin_disabled) {
> + dynamic_ibrs = 1;
> + ibrs_enabled = IBRS_ENABLED;
> + }
> + }
> +}
> +EXPORT_SYMBOL_GPL(scan_spec_ctrl_feature);
> +
> +/*
> + * Used after boot phase to rescan spec_ctrl feature,
> + * serialize scan with spec_ctrl_mutex.
> + */
> +void rescan_spec_ctrl_feature(struct cpuinfo_x86 *c)
> +{
> + mutex_lock(&spec_ctrl_mutex);
> + if (boot_cpu_has(X86_FEATURE_SPEC_CTRL)) {
> + if (!ibrs_admin_disabled) {
> + dynamic_ibrs = 1;
> + ibrs_enabled = IBRS_ENABLED;
> + }
> + }
> + mutex_unlock(&spec_ctrl_mutex);
> +}
> +EXPORT_SYMBOL_GPL(rescan_spec_ctrl_feature);
> +
> +bool ibrs_inuse(void)
> +{
> + return ibrs_enabled == IBRS_ENABLED;
> +}

So from looking at this, you can have a single ibrs_state variable and
set bits in it and have *all* possible settings you wanna have.

> +EXPORT_SYMBOL_GPL(ibrs_inuse);

That naming is not really correct: otherwise IBRS_ENABLED_USER means
ibrs is *not* in use.

> +
> +static int __init noibrs(char *str)
> +{
> + ibrs_admin_disabled = true;
> + ibrs_enabled = IBRS_DISABLED;
> +
> + return 0;
> +}
> +early_param("noibrs", noibrs);
> +
> +static ssize_t __enabled_read(struct file *file, char __user *user_buf,
> + size_t count, loff_t *ppos, unsigned int *field)
> +{
> + char buf[32];
> + unsigned int len;
> +
> + len = sprintf(buf, "%d\n", READ_ONCE(*field));
> + return simple_read_from_buffer(user_buf, count, ppos, buf, len);
> +}
> +
> +static ssize_t ibrs_enabled_read(struct file *file, char __user *user_buf,
> + size_t count, loff_t *ppos)
> +{
> + return __enabled_read(file, user_buf, count, ppos, &ibrs_enabled);
> +}
> +
> +static void spec_ctrl_flush_all_cpus(u32 msr_nr, u64 val)
> +{
> + int cpu;
> +
> + get_online_cpus();
> + for_each_online_cpu(cpu)
> + wrmsrl_on_cpu(cpu, msr_nr, val);
> + put_online_cpus();
> +}
> +
> +static ssize_t ibrs_enabled_write(struct file *file,
> + const char __user *user_buf,
> + size_t count, loff_t *ppos)
> +{
> + char buf[32];
> + ssize_t len;
> + unsigned int enable;
> +
> + len = min(count, sizeof(buf) - 1);
> + if (copy_from_user(buf, user_buf, len))
> + return -EFAULT;
> +
> + buf[len] = '\0';
> + if (kstrtouint(buf, 0, &enable))
> + return -EINVAL;
> +
> + if (enable > IBRS_MAX)
> + return -EINVAL;
> +
> + if (!boot_cpu_has(X86_FEATURE_SPEC_CTRL)) {

That capability check needs to happen first: no need to do anything if
the CPU doesn't have IBRS.

--
Regards/Gruss,
Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.
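Boris's suggestion of collapsing the three control variables into a single state word could look roughly like the sketch below. The names and bit assignments are illustrative only — nothing here is from the patch itself:

```c
#include <stdbool.h>

/* Illustrative single control word replacing dynamic_ibrs,
 * ibrs_enabled and ibrs_admin_disabled (bit choices hypothetical). */
#define IBRS_STATE_KERNEL	(1u << 0)  /* toggle IBRS on kernel entry/exit */
#define IBRS_STATE_USER		(1u << 1)  /* IBRS left on in userspace too */
#define IBRS_STATE_ADMIN_DIS	(1u << 2)  /* "noibrs" on the command line */

static unsigned int ibrs_state;

/* Single predicate the entry code would test. */
static bool ibrs_kernel_dynamic(void)
{
	return (ibrs_state & IBRS_STATE_KERNEL) &&
	       !(ibrs_state & IBRS_STATE_ADMIN_DIS);
}
```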

2018-01-09 00:35:05

by Borislav Petkov

[permalink] [raw]
Subject: Re: [PATCH v2 6/8] x86/microcode: Recheck IBRS features on microcode reload

On Fri, Jan 05, 2018 at 06:12:21PM -0800, Tim Chen wrote:
> On new microcode write, check whether IBRS
> is present by rescanning IBRS feature.
>
> Signed-off-by: Tim Chen <[email protected]>
> ---
> arch/x86/include/asm/spec_ctrl.h | 1 +
> arch/x86/kernel/cpu/microcode/core.c | 4 ++++
> arch/x86/kernel/cpu/spec_ctrl.c | 18 ++++++++++--------
> 3 files changed, 15 insertions(+), 8 deletions(-)
>
> diff --git a/arch/x86/include/asm/spec_ctrl.h b/arch/x86/include/asm/spec_ctrl.h
> index 62c5dc8..be08ae7 100644
> --- a/arch/x86/include/asm/spec_ctrl.h
> +++ b/arch/x86/include/asm/spec_ctrl.h
> @@ -8,6 +8,7 @@
> #include <asm/microcode.h>
>
> void scan_spec_ctrl_feature(struct cpuinfo_x86 *c);
> +void rescan_spec_ctrl_feature(struct cpuinfo_x86 *c);
> bool ibrs_inuse(void);
>
> extern unsigned int dynamic_ibrs;
> diff --git a/arch/x86/kernel/cpu/microcode/core.c b/arch/x86/kernel/cpu/microcode/core.c
> index c4fa4a8..848c3e9 100644
> --- a/arch/x86/kernel/cpu/microcode/core.c
> +++ b/arch/x86/kernel/cpu/microcode/core.c
> @@ -40,6 +40,7 @@
> #include <asm/processor.h>
> #include <asm/cmdline.h>
> #include <asm/setup.h>
> +#include <asm/spec_ctrl.h>
>
> #define DRIVER_VERSION "2.2"
>
> @@ -444,6 +445,9 @@ static ssize_t microcode_write(struct file *file, const char __user *buf,
> if (ret > 0)
> perf_check_microcode();
>
> + /* check spec_ctrl capabilities */
> + rescan_spec_ctrl_feature(&boot_cpu_data);

Needs to be under the if like perf_check_microcode().

> +
> mutex_unlock(&microcode_mutex);
> put_online_cpus();
>
> diff --git a/arch/x86/kernel/cpu/spec_ctrl.c b/arch/x86/kernel/cpu/spec_ctrl.c
> index 1641bec..28107a2 100644
> --- a/arch/x86/kernel/cpu/spec_ctrl.c
> +++ b/arch/x86/kernel/cpu/spec_ctrl.c
> @@ -24,13 +24,18 @@ static bool ibrs_admin_disabled;
> /* mutex to serialize IBRS control changes */
> DEFINE_MUTEX(spec_ctrl_mutex);
>
> +static inline void set_ibrs_feature(void)
> +{
> + if (!ibrs_admin_disabled) {
> + dynamic_ibrs = 1;
> + ibrs_enabled = IBRS_ENABLED;
> + }
> +}
> +
> void scan_spec_ctrl_feature(struct cpuinfo_x86 *c)
> {
> if ((!c->cpu_index) && (boot_cpu_has(X86_FEATURE_SPEC_CTRL))) {
> - if (!ibrs_admin_disabled) {
> - dynamic_ibrs = 1;
> - ibrs_enabled = IBRS_ENABLED;
> - }
> + set_ibrs_feature();
> }
> }
> EXPORT_SYMBOL_GPL(scan_spec_ctrl_feature);
> @@ -43,10 +48,7 @@ void rescan_spec_ctrl_feature(struct cpuinfo_x86 *c)
> {
> mutex_lock(&spec_ctrl_mutex);
> if (boot_cpu_has(X86_FEATURE_SPEC_CTRL)) {
> - if (!ibrs_admin_disabled) {
> - dynamic_ibrs = 1;
> - ibrs_enabled = IBRS_ENABLED;
> - }
> + set_ibrs_feature();

As already stated, this does not work. You want to check the CPUID leaf
after the microcode reload and then set X86_FEATURE_SPEC_CTRL on each
CPU.

--
Regards/Gruss,
Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.
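The recheck Boris asks for amounts to re-reading the CPUID leaf after the microcode load rather than re-testing a cached flag. A hedged userspace probe of the same enumeration — the leaf/bit shown (CPUID.(EAX=7,ECX=0):EDX bit 26 for IBRS/IBPB on Intel) is per Intel's speculation-control documentation and is included for illustration only:

```c
#include <cpuid.h>

/* Return 1 if CPUID.(EAX=7,ECX=0):EDX[26] (SPEC_CTRL/IBRS) is
 * enumerated on this CPU, else 0. */
static int cpu_has_spec_ctrl(void)
{
	unsigned int eax, ebx, ecx, edx;

	if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
		return 0;
	return (edx >> 26) & 1;
}
```

After a microcode reload, running this probe again (and setting the feature bit on each CPU, as Boris suggests) is what makes a newly exposed SPEC_CTRL visible.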

2018-01-09 10:39:52

by Paolo Bonzini

[permalink] [raw]
Subject: Re: [PATCH v2 1/8] x86/feature: Detect the x86 IBRS feature to control Speculation

On 06/01/2018 03:12, Tim Chen wrote:
> diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
> index 34c4922..f881add 100644
> --- a/arch/x86/include/asm/msr-index.h
> +++ b/arch/x86/include/asm/msr-index.h
> @@ -42,6 +42,10 @@
> #define MSR_PPIN_CTL 0x0000004e
> #define MSR_PPIN 0x0000004f
>
> +#define MSR_IA32_SPEC_CTRL 0x00000048
> +#define SPEC_CTRL_FEATURE_DISABLE_IBRS (0 << 0)
> +#define SPEC_CTRL_FEATURE_ENABLE_IBRS (1 << 0)
> +
> #define MSR_IA32_PERFCTR0 0x000000c1
> #define MSR_IA32_PERFCTR1 0x000000c2
> #define MSR_FSB_FREQ 0x000000cd

This is the patch that I have in my KVM series:

diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 03ffde6217d0..828a03425571 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -464,8 +464,15 @@
#define MSR_SMI_COUNT 0x00000034
#define MSR_IA32_FEATURE_CONTROL 0x0000003a
#define MSR_IA32_TSC_ADJUST 0x0000003b
+
+#define MSR_IA32_SPEC_CTRL 0x00000048
+#define SPEC_CTRL_FEATURE_DISABLE_IBRS (0 << 0)
+#define SPEC_CTRL_FEATURE_ENABLE_IBRS (1 << 0)
+
+#define MSR_IA32_PRED_CMD 0x00000049
+#define PRED_CMD_IBPB (1UL << 0)
+
#define MSR_IA32_BNDCFGS 0x00000d90
-
#define MSR_IA32_BNDCFGS_RSVD 0x00000ffc

#define MSR_IA32_XSS 0x00000da0

Tim, can you include it like this to avoid conflicts?

Thanks,

Paolo
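The MSR definitions above are plain bit layouts; a small sketch of how the IBRS enable value composes, using the constants exactly as they appear in the quoted patch (the helper function is hypothetical):

```c
#define MSR_IA32_SPEC_CTRL		0x00000048
#define SPEC_CTRL_FEATURE_DISABLE_IBRS	(0 << 0)
#define SPEC_CTRL_FEATURE_ENABLE_IBRS	(1 << 0)

#define MSR_IA32_PRED_CMD		0x00000049
#define PRED_CMD_IBPB			(1UL << 0)

/* Value to write to MSR_IA32_SPEC_CTRL: bit 0 is the only defined
 * control bit at this point, so "disable" is simply 0. */
static unsigned long spec_ctrl_ibrs_value(int enable)
{
	return enable ? SPEC_CTRL_FEATURE_ENABLE_IBRS
		      : SPEC_CTRL_FEATURE_DISABLE_IBRS;
}
```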

2018-01-09 10:41:02

by Thomas Gleixner

[permalink] [raw]
Subject: Re: [PATCH v2 4/8] x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature

On Mon, 8 Jan 2018, Peter Zijlstra wrote:
> On Mon, Jan 08, 2018 at 09:28:12AM -0800, Tim Chen wrote:
> > On 01/08/2018 08:14 AM, Peter Zijlstra wrote:
> > > On Mon, Jan 08, 2018 at 01:47:07PM +0100, Peter Zijlstra wrote:
> > >>> a good suggestion, but we encountered some issues with it either
> > >>> crashing the kernel at boot or not properly turning on/off.
> > >
> > > The below boots, but I lack stuff to test the enabling.
> >
> > Peter,
> >
> > Thanks. Will give it a spin. One other concern is if
> > JUMP_LABEL is not configured, this may not work, and
> > we may still need fall back to a control variable.
>
> Urgh yes.. I always forget about that case. Will the retpoline crud be
> backported to a GCC old enough to not have asm-goto? If not, we could
> make all of this simply require asm-goto.

No ifs and buts, really.

This wants to be a jump label and we set the requirements here. I'm tired
of this completely bogus crap just to support some archaic version of GCC.

This is messy enough and no, this whole we need a control variable nonsense
is just not going to happen.

Thanks,

tglx



2018-01-09 17:53:18

by Tim Chen

[permalink] [raw]
Subject: Re: [PATCH v2 1/8] x86/feature: Detect the x86 IBRS feature to control Speculation

On 01/09/2018 02:39 AM, Paolo Bonzini wrote:

> +
> +#define MSR_IA32_SPEC_CTRL 0x00000048
> +#define SPEC_CTRL_FEATURE_DISABLE_IBRS (0 << 0)
> +#define SPEC_CTRL_FEATURE_ENABLE_IBRS (1 << 0)


Boris requested that the name for ENABLE/DISABLE to be shortened to
SPEC_CTRL_{ENABLE,DISABLE}_IBRS. I'm planning to do that
for the next patchset update.

> +
> +#define MSR_IA32_PRED_CMD 0x00000049
> +#define PRED_CMD_IBPB (1UL << 0)
> +

The IBPB stuff should be enumerated separately.
Feel free to create a separate patch for this chunk.


Thanks.

Tim

2018-01-09 17:55:34

by Tim Chen

[permalink] [raw]
Subject: Re: [PATCH v2 4/8] x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature

On 01/09/2018 02:40 AM, Thomas Gleixner wrote:
> On Mon, 8 Jan 2018, Peter Zijlstra wrote:
>> On Mon, Jan 08, 2018 at 09:28:12AM -0800, Tim Chen wrote:
>>> On 01/08/2018 08:14 AM, Peter Zijlstra wrote:
>>>> On Mon, Jan 08, 2018 at 01:47:07PM +0100, Peter Zijlstra wrote:
>>>>>> a good suggestion, but we encountered some issues with it either
>>>>>> crashing the kernel at boot or not properly turning on/off.
>>>>
>>>> The below boots, but I lack stuff to test the enabling.
>>>
>>> Peter,
>>>
>>> Thanks. Will give it a spin. One other concern is if
>>> JUMP_LABEL is not configured, this may not work, and
>>> we may still need fall back to a control variable.
>>
>> Urgh yes.. I always forget about that case. Will the retpoline crud be
>> backported to a GCC old enough to not have asm-goto? If not, we could
>> make all of this simply require asm-goto.
>
> No, ifs and buts, really.
>
> This wants to be a jump label and we set the requirements here. I'm tired
> of this completely bogus crap just to support some archaic version of GCC.
>
> This is messy enough and no, this whole we need a control variable nonsense
> is just not going to happen.
>
>

Thomas,

I'll be sending an updated patchset with a boot option opt-in for ibrs
and leave the control variable out. I agree that we can worry about the
control variable later.

Thanks.

Tim

2018-01-09 17:58:12

by Paolo Bonzini

[permalink] [raw]
Subject: Re: [PATCH v2 1/8] x86/feature: Detect the x86 IBRS feature to control Speculation

On 09/01/2018 18:53, Tim Chen wrote:
> On 01/09/2018 02:39 AM, Paolo Bonzini wrote:
>
>> +
>> +#define MSR_IA32_SPEC_CTRL 0x00000048
>> +#define SPEC_CTRL_FEATURE_DISABLE_IBRS (0 << 0)
>> +#define SPEC_CTRL_FEATURE_ENABLE_IBRS (1 << 0)
>
>
> Boris requested that the name for ENABLE/DISABLE to be shortened to
> SPEC_CTRL_{ENABLE,DISABLE}_IBRS. I'm planning to do that
> for the next patchset update.

Were you also planning to include KVM patches in your patchset sooner or
later? Or would that be a separate submission?

(Either is fine; in the latter case, I assume I'll get a topic branch
from the tip tree).

>> +
>> +#define MSR_IA32_PRED_CMD 0x00000049
>> +#define PRED_CMD_IBPB (1UL << 0)
>> +
>
> The IBPB stuff should be enumerated separately.
> Feel free to create a separate patch for this chunk.

I think it makes sense for the patch that adds the CPU feature to also
change msr-index.h.

Paolo

2018-01-09 18:06:00

by Tim Chen

[permalink] [raw]
Subject: Re: [PATCH v2 4/8] x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature

On 01/08/2018 04:29 PM, Borislav Petkov wrote:
> On Fri, Jan 05, 2018 at 06:12:19PM -0800, Tim Chen wrote:
>> From: Tim Chen <[email protected]>
>> From: Andrea Arcangeli <[email protected]>
>
> There's Co-Developed-by for this:
>
> - Co-Developed-by: states that the patch was also created by another developer
> along with the original author. This is useful at times when multiple
> people work on a single patch. Note, this person also needs to have a
> Signed-off-by: line in the patch as well.
>

Thanks. Will do.


>> .Lskip_\@:
>> + lfence
>> +.Ldone_\@:
>> .endm
>
> Why not put all macros in spec_ctrl.h ?

There was a previous discussion thread on the v1 patch with Peter Z and Dave.
They preferred that all these entry macros be consolidated
in calling.h.

>
>> diff --git a/arch/x86/include/asm/spec_ctrl.h b/arch/x86/include/asm/spec_ctrl.h
>> new file mode 100644
>> index 0000000..4fda38b
>> --- /dev/null
>> +++ b/arch/x86/include/asm/spec_ctrl.h
>> @@ -0,0 +1,15 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +
>> +#ifndef _ASM_X86_SPEC_CTRL_H
>> +#define _ASM_X86_SPEC_CTRL_H
>> +
>> +#include <asm/msr-index.h>
>> +#include <asm/cpufeatures.h>
>> +#include <asm/microcode.h>
>
> Left over include from a previous version.
>
> Instead, you need
>
> #include <linux/cpu.h>
>
> in spec_ctrl.c for get/put_online_cpus().
>
>> +void scan_spec_ctrl_feature(struct cpuinfo_x86 *c);
>> +bool ibrs_inuse(void);
>> +
>> +extern unsigned int dynamic_ibrs;
>> +
>> +#endif /* _ASM_X86_SPEC_CTRL_H */
>> diff --git a/arch/x86/kernel/cpu/Makefile b/arch/x86/kernel/cpu/Makefile
>> index 570e8bb..3ffbd24 100644
>> --- a/arch/x86/kernel/cpu/Makefile
>> +++ b/arch/x86/kernel/cpu/Makefile
>> @@ -24,6 +24,7 @@ obj-y += match.o
>> obj-y += bugs.o
>> obj-y += aperfmperf.o
>> obj-y += cpuid-deps.o
>> +obj-y += spec_ctrl.o
>>
>> obj-$(CONFIG_PROC_FS) += proc.o
>> obj-$(CONFIG_X86_FEATURE_NAMES) += capflags.o powerflags.o
>> diff --git a/arch/x86/kernel/cpu/scattered.c b/arch/x86/kernel/cpu/scattered.c
>> index bc50c40..5756d14 100644
>> --- a/arch/x86/kernel/cpu/scattered.c
>> +++ b/arch/x86/kernel/cpu/scattered.c
>> @@ -8,6 +8,7 @@
>> #include <asm/processor.h>
>>
>> #include <asm/apic.h>
>> +#include <asm/spec_ctrl.h>
>>
>> struct cpuid_bit {
>> u16 feature;
>> @@ -57,6 +58,7 @@ void init_scattered_cpuid_features(struct cpuinfo_x86 *c)
>> if (regs[cb->reg] & (1 << cb->bit))
>> set_cpu_cap(c, cb->feature);
>> }
>> + scan_spec_ctrl_feature(c);
>
> Hell no!
>
> This function is only for setting the feature bits.
>
>> u32 get_scattered_cpuid_leaf(unsigned int level, unsigned int sub_leaf,
>> diff --git a/arch/x86/kernel/cpu/spec_ctrl.c b/arch/x86/kernel/cpu/spec_ctrl.c
>> new file mode 100644
>> index 0000000..1641bec
>> --- /dev/null
>> +++ b/arch/x86/kernel/cpu/spec_ctrl.c
>> @@ -0,0 +1,160 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +
>> +#include <linux/mutex.h>
>> +#include <linux/debugfs.h>
>> +#include <linux/uaccess.h>
>> +
>> +#include <asm/spec_ctrl.h>
>> +#include <asm/cpufeature.h>
>> +
>> +unsigned int dynamic_ibrs __read_mostly;
>
> WTH is dynamic_ibrs?

It's called dynamic ibrs because we enable IBRS in MSR 0x48
when entering the kernel and disable it when we exit the
kernel.

>
>> +EXPORT_SYMBOL_GPL(dynamic_ibrs);
>> +
>> +enum {
>> + IBRS_DISABLED,
>> + /* in host kernel, disabled in guest and userland */
>> + IBRS_ENABLED,
>> + /* in host kernel and host userland, disabled in guest */
>> + IBRS_ENABLED_USER,
>> + IBRS_MAX = IBRS_ENABLED_USER,
>> +};
>> +static unsigned int ibrs_enabled;
>> +static bool ibrs_admin_disabled;
>
> Srsly?!
>
> That's *three* variables controlling IBRS. This needs simplification.
>
>> +
>> +/* mutex to serialize IBRS control changes */
>> +DEFINE_MUTEX(spec_ctrl_mutex);
>
> static
>
>> +void scan_spec_ctrl_feature(struct cpuinfo_x86 *c)
>> +{
>> + if ((!c->cpu_index) && (boot_cpu_has(X86_FEATURE_SPEC_CTRL))) {
>
> What is !c->cpu_index? Checking whether this is the BSP? What for?

This is to ensure that we only do the operation once during boot.

Thanks.

Tim

2018-01-09 18:13:27

by Dave Hansen

[permalink] [raw]
Subject: Re: [PATCH v2 1/8] x86/feature: Detect the x86 IBRS feature to control Speculation

On 01/06/2018 04:56 AM, Borislav Petkov wrote:
>> +#define MSR_IA32_SPEC_CTRL 0x00000048
>> +#define SPEC_CTRL_FEATURE_DISABLE_IBRS (0 << 0)
>> +#define SPEC_CTRL_FEATURE_ENABLE_IBRS (1 << 0)
> s/_FEATURE//
>
> SPEC_CTRL_{ENABLE,DISABLE}_IBRS is good enough.

FWIW, I like the idea of including "SPEC_CTRL_" because it matches the
constant to the MSR name. But I don't care that much.

2018-01-09 18:13:51

by David Woodhouse

[permalink] [raw]
Subject: Re: [PATCH v2 4/8] x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature

On Tue, 2018-01-09 at 09:55 -0800, Tim Chen wrote:
>
> Thomas,
>
> I'll be sending an updated patchset with boot option opt in for ibrs
> and leave the control variable out. I agree that we can worry about the
> control variable later.

Please base this on the spectre_v2= option that's already in
tip/x86/pti.


Attachments:
smime.p7s (5.09 kB)

2018-01-09 18:55:37

by Borislav Petkov

[permalink] [raw]
Subject: Re: [PATCH v2 1/8] x86/feature: Detect the x86 IBRS feature to control Speculation

On Tue, Jan 09, 2018 at 10:13:23AM -0800, Dave Hansen wrote:
> FWIW, I like the idea of including "SPEC_CTRL_" because it matches the
> constant to the MSR name. But I don't care that much.

Sure - just the "FEATURE_" part of the string is redundant.

--
Regards/Gruss,
Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

2018-01-09 20:31:05

by Tim Chen

[permalink] [raw]
Subject: Re: [PATCH v2 4/8] x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature

On 01/09/2018 10:13 AM, David Woodhouse wrote:
> On Tue, 2018-01-09 at 09:55 -0800, Tim Chen wrote:
>>
>> Thomas,
>>
>> I'll be sending an updated patchset with boot option opt in for ibrs
>> and leave the control variable out. I agree that we can worry about the
>> control variable later.
>
> Please base this on the spectre_v2= option that's already in
> tip/x86/pti.
>

Will do that for boot option.

Tim

2018-01-09 22:59:58

by Tim Chen

[permalink] [raw]
Subject: Re: [PATCH v2 1/8] x86/feature: Detect the x86 IBRS feature to control Speculation

On 01/09/2018 09:58 AM, Paolo Bonzini wrote:
> On 09/01/2018 18:53, Tim Chen wrote:
>> On 01/09/2018 02:39 AM, Paolo Bonzini wrote:
>>
>>> +
>>> +#define MSR_IA32_SPEC_CTRL 0x00000048
>>> +#define SPEC_CTRL_FEATURE_DISABLE_IBRS (0 << 0)
>>> +#define SPEC_CTRL_FEATURE_ENABLE_IBRS (1 << 0)
>>
>>
>> Boris requested that the name for ENABLE/DISABLE to be shortened to
>> SPEC_CTRL_{ENABLE,DISABLE}_IBRS. I'm planning to do that
>> for the next patchset update.
>
> Were you also planning to include KVM patches in your patchset sooner or
> later? Or would that be a separate submission?

Ashok Raj will be sending out the KVM-related patches shortly after
I update my patchset.

Tim

2018-01-17 21:41:56

by Tim Chen

[permalink] [raw]
Subject: Re: [PATCH v2 2/8] x86/enter: MACROS to set/clear IBRS

On 01/06/2018 03:05 PM, Mark Marshall wrote:
> Hi.
>
> (I've only just subscribed and can't work out how to reply to a message from before I subscribed (on my phone), sorry)
>
> In the macro WRMSR_ASM you seem to have lost the wrmsr?
>
>

Yes. That's a bug Thomas also noticed, and I'll fix it in the next update
of the IBRS patches.

Tim

2018-01-18 23:30:48

by Andy Lutomirski

[permalink] [raw]
Subject: Re: [PATCH v2 1/8] x86/feature: Detect the x86 IBRS feature to control Speculation

On Fri, Jan 5, 2018 at 6:12 PM, Tim Chen <[email protected]> wrote:
> cpuid ax=0x7, return rdx bit 26 to indicate presence of this feature
> IA32_SPEC_CTRL (0x48)
> IA32_SPEC_CTRL, bit0 – Indirect Branch Restricted Speculation (IBRS)
>
> If IBRS is set, near returns and near indirect jumps/calls will not allow
> their predicted target address to be controlled by code that executed in
> a less privileged prediction mode before the IBRS mode was last written
> with a value of 1 or on another logical processor so long as all RSB
> entries from the previous less privileged prediction mode are overwritten.
>
> * Thus a near indirect jump/call/return may be affected by code in a
> less privileged prediction mode that executed AFTER IBRS mode was last
> written with a value of 1
>
> * There is no need to clear IBRS before writing it with a value of
> 1. Unconditionally writing it with a value of 1 after the prediction
> mode change is sufficient
>
> * Note: IBRS is not required in order to isolate branch predictions for
> SMM or SGX enclaves
>
> * Code executed by a sibling logical processor cannot control indirect
> jump/call/return predicted target when IBRS is set
>
> * SMEP will prevent supervisor mode using RSB entries filled by user code;
> this can reduce the need for software to overwrite RSB entries
>
> CPU performance could be reduced when running with IBRS set.
>
> Signed-off-by: Tim Chen <[email protected]>
> ---
> arch/x86/include/asm/cpufeatures.h | 1 +
> arch/x86/include/asm/msr-index.h | 4 ++++
> arch/x86/kernel/cpu/scattered.c | 1 +
> tools/arch/x86/include/asm/cpufeatures.h | 1 +
> 4 files changed, 7 insertions(+)
>
> diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
> index 07cdd17..5ee0737 100644
> --- a/arch/x86/include/asm/cpufeatures.h
> +++ b/arch/x86/include/asm/cpufeatures.h
> @@ -209,6 +209,7 @@
> #define X86_FEATURE_AVX512_4FMAPS ( 7*32+17) /* AVX-512 Multiply Accumulation Single precision */
>
> #define X86_FEATURE_MBA ( 7*32+18) /* Memory Bandwidth Allocation */
> +#define X86_FEATURE_SPEC_CTRL ( 7*32+19) /* Control Speculation Control */
>
> /* Virtualization flags: Linux defined, word 8 */
> #define X86_FEATURE_TPR_SHADOW ( 8*32+ 0) /* Intel TPR Shadow */
> diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
> index 34c4922..f881add 100644
> --- a/arch/x86/include/asm/msr-index.h
> +++ b/arch/x86/include/asm/msr-index.h
> @@ -42,6 +42,10 @@
> #define MSR_PPIN_CTL 0x0000004e
> #define MSR_PPIN 0x0000004f
>
> +#define MSR_IA32_SPEC_CTRL 0x00000048
> +#define SPEC_CTRL_FEATURE_DISABLE_IBRS (0 << 0)
> +#define SPEC_CTRL_FEATURE_ENABLE_IBRS (1 << 0)
> +

I still can't find the PDF that documents this right now, but, from
memory, shouldn't the value that we write to the MSR on entry be 3,
not 1? That is, isn't bit 1 STIBP and don't we want STIBP to be on?
And IIRC bit 1 was ignored on non-STIBP-capable machines.
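
The enumeration discussed in this thread reduces to plain bit tests,
sketched below. The CPUID bit (leaf 0x7, EDX bit 26) is taken from the
commit message above; the STIBP bit position follows Andy's
from-memory recollection and should be verified against the SDM:

```c
#include <assert.h>
#include <stdbool.h>

#define CPUID7_EDX_SPEC_CTRL_BIT 26     /* from the commit message above */

#define SPEC_CTRL_IBRS  (1u << 0)       /* bit 0 per this thread */
#define SPEC_CTRL_STIBP (1u << 1)       /* per Andy's recollection; verify */

/* Given EDX from CPUID leaf 7, subleaf 0, report SPEC_CTRL support. */
static bool cpu_has_spec_ctrl(unsigned int cpuid7_edx)
{
	return (cpuid7_edx >> CPUID7_EDX_SPEC_CTRL_BIT) & 1;
}
```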

2018-01-27 13:59:46

by Konrad Rzeszutek Wilk

[permalink] [raw]
Subject: Re: [PATCH v2 4/8] x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature

On Mon, Jan 08, 2018 at 05:14:37PM +0100, Peter Zijlstra wrote:
> On Mon, Jan 08, 2018 at 01:47:07PM +0100, Peter Zijlstra wrote:
> > > a good suggestion, but we encountered some issues with it either
> > > crashing the kernel at boot or not properly turning on/off.
>
> The below boots, but I lack stuff to test the enabling.

..snip..
> --- a/arch/x86/entry/calling.h
> +++ b/arch/x86/entry/calling.h
> @@ -373,22 +373,17 @@ For 32-bit we have the following convent
> .endm
>
> .macro ENABLE_IBRS
> - testl $1, dynamic_ibrs
> - jz .Lskip_\@
> + STATIC_JUMP_IF_FALSE .Lskip_\@, ibrs_key, def=0
>
> PUSH_MSR_REGS
> WRMSR_ASM $MSR_IA32_SPEC_CTRL, $SPEC_CTRL_FEATURE_ENABLE_IBRS
> POP_MSR_REGS
> - jmp .Ldone_\@
>
> .Lskip_\@:
> - lfence
> -.Ldone_\@:
> .endm

I know that this particular patchset is now obsolete as the retpoline
along with stuffing the RSB half or full is the preferred way.

But I am wondering - why was the 'lfence' added in the first place
if dynamic_ibrs was zero?

It certainly is not putting the speculative execution on a wild ride
like: "[tip:x86/pti] x86/retpoline: Use LFENCE instead of PAUSE in the
retpoline/RSB filling RSB macros" https://git.kernel.org/tip/2eb9137c8744f9adf1670e9aa52850948a30112b

So what was the intent behind this? Was it: "oh if we do not have
IBRS let us at least add lfence on every system call, interrupt, nmi,
exception, etc to do a poor man version of IBRS?"

Thank you.
P.S.
My apologies if this was discussed in the prior versions of this thread.
I must have missed it.

2018-01-27 14:27:44

by David Woodhouse

[permalink] [raw]
Subject: Re: [PATCH v2 4/8] x86/spec_ctrl: Add sysctl knobs to enable/disable SPEC_CTRL feature

On Sat, 2018-01-27 at 08:59 -0500, Konrad Rzeszutek Wilk wrote:
> On Mon, Jan 08, 2018 at 05:14:37PM +0100, Peter Zijlstra wrote:
> >
> > On Mon, Jan 08, 2018 at 01:47:07PM +0100, Peter Zijlstra wrote:
> > >
> > > >
> > > > a good suggestion, but we encountered some issues with it either
> > > > crashing the kernel at boot or not properly turning on/off.
> > The below boots, but I lack stuff to test the enabling.
> ..snip..
> >
> > --- a/arch/x86/entry/calling.h
> > +++ b/arch/x86/entry/calling.h
> > @@ -373,22 +373,17 @@ For 32-bit we have the following convent
> >  .endm
> >  
> >  .macro ENABLE_IBRS
> > - testl $1, dynamic_ibrs
> > - jz .Lskip_\@
> > + STATIC_JUMP_IF_FALSE .Lskip_\@, ibrs_key, def=0
> >  
> >   PUSH_MSR_REGS
> >   WRMSR_ASM $MSR_IA32_SPEC_CTRL, $SPEC_CTRL_FEATURE_ENABLE_IBRS
> >   POP_MSR_REGS
> > - jmp .Ldone_\@
> >  
> >  .Lskip_\@:
> > - lfence
> > -.Ldone_\@:
> >  .endm
> I know that this particular patchset is now obsolete as the retpoline
> along with stuffing the RSB half or full is the preferred way.
>
> But I am wondering - why was the 'lfence' added in the first place
> if dynamic_ibrs was zero?

This was in the system call path, just before the jmp *sys_call_table()
indirect jump. If an attacker could cause that 'jz .Lskip_\@' over the
wrmsr to be predicted *taken*, speculative execution would happily
proceed all the way to the indirect jump. Oops :)

Hence the 'else lfence' so that the branch-taken code path still stops
to wait. Hence also the subsequent insistence on using ALTERNATIVE for
it, and my paranoia about relying on GCC not missing optimisations if
we're using static_cpu_has(). 
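
David's point can be seen in an annotated version of the original macro
from this thread (a sketch for illustration, not the code that went into
tip):

```
.macro ENABLE_IBRS
	testl	$1, dynamic_ibrs
	jz	.Lskip_\@		/* predictor may speculate this as
					   taken even when dynamic_ibrs=1 */
	PUSH_MSR_REGS
	WRMSR_ASM $MSR_IA32_SPEC_CTRL, $SPEC_CTRL_FEATURE_ENABLE_IBRS
	POP_MSR_REGS
	jmp	.Ldone_\@

.Lskip_\@:
	lfence				/* so the skip path also serializes;
					   otherwise speculation reaches the
					   jmp *sys_call_table indirect jump
					   with IBRS still off */
.Ldone_\@:
.endm
```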

