Benefits:
Currently a user process that wishes to read or write the FS/GS base must
make a system call. But recent X86 processors have added new instructions
for use in 64-bit mode that allow direct access to the FS and GS segment
base addresses. The operating system controls whether applications can
use these instructions with a %cr4 control bit.
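As a rough illustration of what the new instructions buy user space, the
same GS base read can be done either through the existing arch_prctl()
system call or, once the kernel has enabled the feature, with a single
unprivileged instruction via the compiler intrinsics. This is only a
sketch; it assumes glibc's syscall() wrapper and a compiler invoked with
-mfsgsbase:

  #include <asm/prctl.h>          /* ARCH_GET_GS */
  #include <sys/syscall.h>
  #include <unistd.h>
  #include <immintrin.h>          /* _readgsbase_u64(), needs -mfsgsbase */

  /* Today's route: a full system call round trip. */
  static unsigned long gsbase_via_syscall(void)
  {
          unsigned long gsbase;

          syscall(SYS_arch_prctl, ARCH_GET_GS, &gsbase);
          return gsbase;
  }

  /* With CR4.FSGSBASE set by the kernel: one instruction, no kernel entry. */
  static unsigned long gsbase_via_insn(void)
  {
          return _readgsbase_u64();
  }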
In addition to benefits to applications, performance improvements to the
OS context switch code are possible by making use of these instructions. A
third party reported promising performance numbers from their initial
benchmarking of the previous version of this patch series [9].
Enablement check:
The kernel provides information about the enabled state of FSGSBASE to
applications using the ELF AUX vector. If the HWCAP2_FSGSBASE bit is set in
the AUX vector, the kernel has FSGSBASE instructions enabled and
applications can use them.
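A minimal detection snippet along these lines is shown below; the
HWCAP2_FSGSBASE define is spelled out on the assumption that the toolchain
headers do not provide it yet:

  #include <stdio.h>
  #include <sys/auxv.h>
  #include <elf.h>

  /* Will eventually come from asm/hwcap.h */
  #ifndef HWCAP2_FSGSBASE
  #define HWCAP2_FSGSBASE (1 << 1)
  #endif

  int main(void)
  {
          /* Nonzero only if the kernel has enabled CR4.FSGSBASE */
          if (getauxval(AT_HWCAP2) & HWCAP2_FSGSBASE)
                  printf("FSGSBASE enabled\n");
          else
                  printf("FSGSBASE not enabled\n");
          return 0;
  }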
Kernel changes:
Major changes made in the kernel are in context switch, the paranoid path,
and ptrace. In a context switch, a task's FS/GS base is now saved and
restored regardless of its selector. In the paranoid path, GS base is
unconditionally overwritten with the kernel GS base on entry and the
original GS base is restored on exit. Ptrace now allows the FS/GS index
and base values to diverge.
Security:
To mitigate the Spectre v1 SWAPGS issue, LFENCE instructions were added on
most kernel entries. Those mitigations relied on the previous guarantee
that user space could not load a kernel address into GS base. This patch
series changes that assumption, since user space can now load any address
into GS base. The changes to the kernel entry path in this series take the
SWAPGS issue into account.
Changes from v9:
- Rebase on top of v5.7-rc1 and re-test.
- Work around changes in 2fff071d28b5 ("x86/process: Unify
copy_thread_tls()").
- Work around changes in c7ca0b614513 ("Revert "x86/ptrace: Prevent
ptrace from clearing the FS/GS selector" and fix the test").
Andi Kleen (2):
x86/fsgsbase/64: Add intrinsics for FSGSBASE instructions
x86/elf: Enumerate kernel FSGSBASE capability in AT_HWCAP2
Andy Lutomirski (4):
x86/cpu: Add 'unsafe_fsgsbase' to enable CR4.FSGSBASE
x86/entry/64: Clean up paranoid exit
x86/fsgsbase/64: Use FSGSBASE in switch_to() if available
x86/fsgsbase/64: Enable FSGSBASE on 64bit by default and add a chicken
bit
Chang S. Bae (9):
x86/ptrace: Prevent ptrace from clearing the FS/GS selector
selftests/x86/fsgsbase: Test GS selector on ptracer-induced GS base
write
x86/entry/64: Switch CR3 before SWAPGS in paranoid entry
x86/entry/64: Introduce the FIND_PERCPU_BASE macro
x86/entry/64: Handle FSGSBASE enabled paranoid entry/exit
x86/entry/64: Document GSBASE handling in the paranoid path
x86/fsgsbase/64: Enable FSGSBASE instructions in helper functions
x86/fsgsbase/64: Use FSGSBASE instructions on thread copy and ptrace
selftests/x86/fsgsbase: Test ptracer-induced GS base write with
FSGSBASE
Sasha Levin (1):
x86/fsgsbase/64: move save_fsgs to header file
Thomas Gleixner (1):
Documentation/x86/64: Add documentation for GS/FS addressing mode
Tony Luck (1):
x86/speculation/swapgs: Check FSGSBASE in enabling SWAPGS mitigation
.../admin-guide/kernel-parameters.txt | 2 +
Documentation/x86/entry_64.rst | 9 +
Documentation/x86/x86_64/fsgs.rst | 199 ++++++++++++++++++
Documentation/x86/x86_64/index.rst | 1 +
arch/x86/entry/calling.h | 40 ++++
arch/x86/entry/entry_64.S | 131 +++++++++---
arch/x86/include/asm/fsgsbase.h | 45 +++-
arch/x86/include/asm/inst.h | 15 ++
arch/x86/include/uapi/asm/hwcap2.h | 3 +
arch/x86/kernel/cpu/bugs.c | 6 +-
arch/x86/kernel/cpu/common.c | 22 ++
arch/x86/kernel/process.c | 10 +-
arch/x86/kernel/process.h | 69 ++++++
arch/x86/kernel/process_64.c | 142 +++++++------
arch/x86/kernel/ptrace.c | 17 +-
tools/testing/selftests/x86/fsgsbase.c | 24 ++-
16 files changed, 606 insertions(+), 129 deletions(-)
create mode 100644 Documentation/x86/x86_64/fsgs.rst
--
2.20.1
From: "Chang S. Bae" <[email protected]>
When a ptracer writes a ptracee's FS/GS base with a different value, the
selector is also cleared. This behavior is not correct as the selector
should be preserved.
Update only the base value and leave the selector intact. To simplify the
code further remove the conditional checking for the same value as this
code is not performance-critical.
The only recognizable downside of this change is when the selector is
already nonzero on write. The base will be reloaded according to the
selector. But the case is highly unexpected in real usages.
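From the tracer's side, the operation in question is a PTRACE_POKEUSER of
the gs_base (or fs_base) slot in struct user_regs_struct; after this change
only the base is updated and the selector is left alone. A minimal sketch
of such a write, with error handling omitted and the tracee assumed to be
stopped:

  #include <stddef.h>
  #include <sys/ptrace.h>
  #include <sys/types.h>
  #include <sys/user.h>

  /* Update the tracee's GS base; its GS selector should stay intact. */
  static long write_gs_base(pid_t tracee, unsigned long base)
  {
          return ptrace(PTRACE_POKEUSER, tracee,
                        (void *)offsetof(struct user_regs_struct, gs_base),
                        (void *)base);
  }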
Suggested-by: Andy Lutomirski <[email protected]>
Signed-off-by: Chang S. Bae <[email protected]>
Reviewed-by: Tony Luck <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Andi Kleen <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
arch/x86/kernel/ptrace.c | 17 ++---------------
1 file changed, 2 insertions(+), 15 deletions(-)
diff --git a/arch/x86/kernel/ptrace.c b/arch/x86/kernel/ptrace.c
index f0e1ddbc2fd78..cc56efb75d275 100644
--- a/arch/x86/kernel/ptrace.c
+++ b/arch/x86/kernel/ptrace.c
@@ -380,25 +380,12 @@ static int putreg(struct task_struct *child,
case offsetof(struct user_regs_struct,fs_base):
if (value >= TASK_SIZE_MAX)
return -EIO;
- /*
- * When changing the FS base, use do_arch_prctl_64()
- * to set the index to zero and to set the base
- * as requested.
- *
- * NB: This behavior is nonsensical and likely needs to
- * change when FSGSBASE support is added.
- */
- if (child->thread.fsbase != value)
- return do_arch_prctl_64(child, ARCH_SET_FS, value);
+ x86_fsbase_write_task(child, value);
return 0;
case offsetof(struct user_regs_struct,gs_base):
- /*
- * Exactly the same here as the %fs handling above.
- */
if (value >= TASK_SIZE_MAX)
return -EIO;
- if (child->thread.gsbase != value)
- return do_arch_prctl_64(child, ARCH_SET_GS, value);
+ x86_gsbase_write_task(child, value);
return 0;
#endif
}
--
2.20.1
From: "Chang S. Bae" <[email protected]>
The test validates that the selector is not changed when a ptracer writes
the ptracee's GS base.
Originally-by: Andy Lutomirski <[email protected]>
Signed-off-by: Chang S. Bae <[email protected]>
Reviewed-by: Tony Luck <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Andi Kleen <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
tools/testing/selftests/x86/fsgsbase.c | 21 +++++++++++++++------
1 file changed, 15 insertions(+), 6 deletions(-)
diff --git a/tools/testing/selftests/x86/fsgsbase.c b/tools/testing/selftests/x86/fsgsbase.c
index 15a329da59fa3..950a48b2e3662 100644
--- a/tools/testing/selftests/x86/fsgsbase.c
+++ b/tools/testing/selftests/x86/fsgsbase.c
@@ -465,7 +465,7 @@ static void test_ptrace_write_gsbase(void)
wait(&status);
if (WSTOPSIG(status) == SIGTRAP) {
- unsigned long gs, base;
+ unsigned long gs;
unsigned long gs_offset = USER_REGS_OFFSET(gs);
unsigned long base_offset = USER_REGS_OFFSET(gs_base);
@@ -481,7 +481,6 @@ static void test_ptrace_write_gsbase(void)
err(1, "PTRACE_POKEUSER");
gs = ptrace(PTRACE_PEEKUSER, child, gs_offset, NULL);
- base = ptrace(PTRACE_PEEKUSER, child, base_offset, NULL);
/*
* In a non-FSGSBASE system, the nonzero selector will load
@@ -489,11 +488,21 @@ static void test_ptrace_write_gsbase(void)
* selector value is changed or not by the GSBASE write in
* a ptracer.
*/
- if (gs == 0 && base == 0xFF) {
- printf("[OK]\tGS was reset as expected\n");
- } else {
+ if (gs != *shared_scratch) {
nerrs++;
- printf("[FAIL]\tGS=0x%lx, GSBASE=0x%lx (should be 0, 0xFF)\n", gs, base);
+ printf("[FAIL]\tGS changed to %lx\n", gs);
+
+ /*
+ * On older kernels, poking a nonzero value into the
+ * base would zero the selector. On newer kernels,
+ * this behavior has changed -- poking the base
+ * changes only the base and, if FSGSBASE is not
+ * available, this may have no effect.
+ */
+ if (gs == 0)
+ printf("\tNote: this is expected behavior on older kernels.\n");
+ } else {
+ printf("[OK]\tGS remained 0x%hx\n", *shared_scratch);
}
}
--
2.20.1
From: Tony Luck <[email protected]>
Before enabling FSGSBASE the kernel could safely assume that the content
of GS base was a user address. Thus any speculative access as the result
of a mispredicted branch controlling the execution of SWAPGS would be to
a user address. So systems with speculation-proof SMAP did not need
additional LFENCE instructions to mitigate.
With FSGSBASE enabled a hostile user can set GS base to a kernel address.
So they can make the kernel speculatively access data they wish to leak
via a side channel. This means that SMAP provides no protection.
Add FSGSBASE as an additional condition to enable the fence-based SWAPGS
mitigation.
Signed-off-by: Tony Luck <[email protected]>
Signed-off-by: Chang S. Bae <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Andi Kleen <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
arch/x86/kernel/cpu/bugs.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index ed54b3b21c396..487603ea51cd1 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -450,14 +450,12 @@ static void __init spectre_v1_select_mitigation(void)
* If FSGSBASE is enabled, the user can put a kernel address in
* GS, in which case SMAP provides no protection.
*
- * [ NOTE: Don't check for X86_FEATURE_FSGSBASE until the
- * FSGSBASE enablement patches have been merged. ]
- *
* If FSGSBASE is disabled, the user can only put a user space
* address in GS. That makes an attack harder, but still
* possible if there's no SMAP protection.
*/
- if (!smap_works_speculatively()) {
+ if (boot_cpu_has(X86_FEATURE_FSGSBASE) ||
+ !smap_works_speculatively()) {
/*
* Mitigation can be provided from SWAPGS itself or
* PTI as the CR3 write in the Meltdown mitigation
--
2.20.1
From: "Chang S. Bae" <[email protected]>
On FSGSBASE systems, the way to handle GS base in the paranoid path is
different from the existing SWAPGS-based entry/exit path handling. Document
the reason and what has to be done for FSGSBASE enabled systems.
Signed-off-by: Chang S. Bae <[email protected]>
Reviewed-by: Tony Luck <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Andi Kleen <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
Documentation/x86/entry_64.rst | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/Documentation/x86/entry_64.rst b/Documentation/x86/entry_64.rst
index a48b3f6ebbe87..0499a40723af3 100644
--- a/Documentation/x86/entry_64.rst
+++ b/Documentation/x86/entry_64.rst
@@ -108,3 +108,12 @@ We try to only use IST entries and the paranoid entry code for vectors
that absolutely need the more expensive check for the GS base - and we
generate all 'normal' entry points with the regular (faster) paranoid=0
variant.
+
+On FSGSBASE systems, however, user space can set GS without kernel
+interaction. This means the value of GS base itself does not imply anything,
+whether it is a kernel or a user space value. So there is no longer a safe
+way to check whether the exception entered from user mode or kernel mode in
+the paranoid entry code path. The GS base value therefore needs to be read
+out and saved, and the kernel GS base value written. On exit, the saved GS
+base value needs to be restored unconditionally. The non-paranoid entry/exit
+code still uses SWAPGS unconditionally as the state is known.
--
2.20.1
From: "Chang S. Bae" <[email protected]>
Without FSGSBASE, user space cannot change GS base other than through a
PRCTL. The kernel enforces that the user space GS base value is positive
as negative values are used for detecting the kernel space GS base value
in the paranoid entry code.
If FSGSBASE is enabled, user space can set arbitrary GS base values without
kernel intervention, including negative ones, which breaks the paranoid
entry assumptions.
To avoid this, paranoid entry needs to unconditionally save the current
GS base value independent of the interrupted context, retrieve and write
the kernel GS base and unconditionally restore the saved value on exit.
The restore happens either in paranoid exit or in the special exit path of
the NMI low level code.
All other entry code paths which use unconditional SWAPGS are not affected
as they do not depend on the actual content.
The new logic for paranoid entry, when FSGSBASE is enabled, removes SWAPGS
and replaces it with an unconditional WRGSBASE. Hence no fences are needed.
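Expressed as a C sketch rather than the actual entry assembly below, the
FSGSBASE flavour of the GS base handling amounts to the following, with
kernel_gsbase standing in for this CPU's kernel GS base value:

  /*
   * Sketch of the FSGSBASE paranoid entry/exit GS base handling,
   * not the real (assembly) entry code.
   */
  static inline unsigned long rdgsbase(void)
  {
          unsigned long base;

          asm volatile("rdgsbase %0" : "=r" (base));
          return base;
  }

  static inline void wrgsbase(unsigned long base)
  {
          asm volatile("wrgsbase %0" :: "r" (base) : "memory");
  }

  /* Entry: save whatever GS base was live, switch to the kernel's. */
  static inline unsigned long paranoid_entry_gsbase(unsigned long kernel_gsbase)
  {
          unsigned long saved = rdgsbase();

          wrgsbase(kernel_gsbase);
          return saved;
  }

  /* Exit: restore the saved value unconditionally. */
  static inline void paranoid_exit_gsbase(unsigned long saved)
  {
          wrgsbase(saved);
  }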
Suggested-by: H. Peter Anvin <[email protected]>
Suggested-by: Andy Lutomirski <[email protected]>
Suggested-by: Thomas Gleixner <[email protected]>
Signed-off-by: Chang S. Bae <[email protected]>
Reviewed-by: Tony Luck <[email protected]>
Acked-by: Tom Lendacky <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Tom Lendacky <[email protected]>
Cc: Vegard Nossum <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
arch/x86/entry/calling.h | 6 +++
arch/x86/entry/entry_64.S | 78 ++++++++++++++++++++++++++++++++++-----
2 files changed, 75 insertions(+), 9 deletions(-)
diff --git a/arch/x86/entry/calling.h b/arch/x86/entry/calling.h
index 0eb134e18b7a9..5f3a8ecaddc2d 100644
--- a/arch/x86/entry/calling.h
+++ b/arch/x86/entry/calling.h
@@ -340,6 +340,12 @@ For 32-bit we have the following conventions - kernel is built with
#endif
.endm
+.macro SAVE_AND_SET_GSBASE scratch_reg:req save_reg:req
+ rdgsbase \save_reg
+ GET_PERCPU_BASE \scratch_reg
+ wrgsbase \scratch_reg
+.endm
+
#endif /* CONFIG_X86_64 */
.macro STACKLEAK_ERASE
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 7f27626f8426f..a4fd01c8f2970 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -38,6 +38,7 @@
#include <asm/export.h>
#include <asm/frame.h>
#include <asm/nospec-branch.h>
+#include <asm/fsgsbase.h>
#include <linux/err.h>
#include "calling.h"
@@ -1211,9 +1212,14 @@ idtentry machine_check do_mce has_error_code=0 paranoid=1
#endif
/*
- * Save all registers in pt_regs, and switch gs if needed.
- * Use slow, but surefire "are we in kernel?" check.
- * Return: ebx=0: need swapgs on exit, ebx=1: otherwise
+ * Save all registers in pt_regs. Return GS base related information
+ * in EBX depending on the availability of the FSGSBASE instructions:
+ *
+ * FSGSBASE R/EBX
+ * N 0 -> SWAPGS on exit
+ * 1 -> no SWAPGS on exit
+ *
+ * Y GS base value at entry, must be restored in paranoid_exit
*/
SYM_CODE_START_LOCAL(paranoid_entry)
UNWIND_HINT_FUNC
@@ -1238,7 +1244,29 @@ SYM_CODE_START_LOCAL(paranoid_entry)
*/
SAVE_AND_SWITCH_TO_KERNEL_CR3 scratch_reg=%rax save_reg=%r14
- /* EBX = 1 -> kernel GSBASE active, no restore required */
+ /*
+ * Handling GS base depends on the availability of FSGSBASE.
+ *
+ * Without FSGSBASE the kernel enforces that negative GS base
+ * values indicate kernel GS base. With FSGSBASE no assumptions
+ * can be made about the GS base value when entering from user
+ * space.
+ */
+ ALTERNATIVE "jmp .Lparanoid_entry_checkgs", "", X86_FEATURE_FSGSBASE
+
+ /*
+ * Read the current GS base and store it in %rbx unconditionally,
+ * retrieve and set the current CPU's kernel GS base. The stored value
+ * has to be restored in paranoid_exit unconditionally.
+ *
+ * This unconditional write of GS base ensures no subsequent load
+ * based on a mispredicted GS base.
+ */
+ SAVE_AND_SET_GSBASE scratch_reg=%rax save_reg=%rbx
+ ret
+
+.Lparanoid_entry_checkgs:
+ /* EBX = 1 -> kernel GS base active, no restore required */
movl $1, %ebx
/*
* The kernel-enforced convention is a negative GS base indicates
@@ -1265,10 +1293,17 @@ SYM_CODE_END(paranoid_entry)
*
* We may be returning to very strange contexts (e.g. very early
* in syscall entry), so checking for preemption here would
- * be complicated. Fortunately, we there's no good reason
- * to try to handle preemption here.
+ * be complicated. Fortunately, there's no good reason to try
+ * to handle preemption here.
+ *
+ * R/EBX contains the GS base related information depending on the
+ * availability of the FSGSBASE instructions:
+ *
+ * FSGSBASE R/EBX
+ * N 0 -> SWAPGS on exit
+ * 1 -> no SWAPGS on exit
*
- * On entry, ebx is "no swapgs" flag (1: don't need swapgs, 0: need it)
+ * Y User space GS base, must be restored unconditionally
*/
SYM_CODE_START_LOCAL(paranoid_exit)
UNWIND_HINT_REGS
@@ -1285,7 +1320,15 @@ SYM_CODE_START_LOCAL(paranoid_exit)
TRACE_IRQS_OFF_DEBUG
RESTORE_CR3 scratch_reg=%rax save_reg=%r14
- /* If EBX is 0, SWAPGS is required */
+ /* Handle the three GS base cases */
+ ALTERNATIVE "jmp .Lparanoid_exit_checkgs", "", X86_FEATURE_FSGSBASE
+
+ /* With FSGSBASE enabled, unconditionally restore GS base */
+ wrgsbase %rbx
+ jmp restore_regs_and_return_to_kernel
+
+.Lparanoid_exit_checkgs:
+ /* On non-FSGSBASE systems, conditionally do SWAPGS */
testl %ebx, %ebx
jnz restore_regs_and_return_to_kernel
@@ -1699,10 +1742,27 @@ end_repeat_nmi:
/* Always restore stashed CR3 value (see paranoid_entry) */
RESTORE_CR3 scratch_reg=%r15 save_reg=%r14
- testl %ebx, %ebx /* swapgs needed? */
+ /*
+ * The above invocation of paranoid_entry stored the GS base
+ * related information in R/EBX depending on the availability
+ * of FSGSBASE.
+ *
+ * If FSGSBASE is enabled, restore the saved GS base value
+ * unconditionally, otherwise take the conditional SWAPGS path.
+ */
+ ALTERNATIVE "jmp nmi_no_fsgsbase", "", X86_FEATURE_FSGSBASE
+
+ wrgsbase %rbx
+ jmp nmi_restore
+
+nmi_no_fsgsbase:
+ /* EBX == 0 -> invoke SWAPGS */
+ testl %ebx, %ebx
jnz nmi_restore
+
nmi_swapgs:
SWAPGS_UNSAFE_STACK
+
nmi_restore:
POP_REGS
--
2.20.1
From: Thomas Gleixner <[email protected]>
Explain how the GS/FS based addressing can be utilized in user space
applications along with the differences between the generic prctl() based
GS/FS base control and the FSGSBASE version available on newer CPUs.
Originally-by: Andi Kleen <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Signed-off-by: Chang S. Bae <[email protected]>
Reviewed-by: Tony Luck <[email protected]>
Reviewed-by: Randy Dunlap <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Randy Dunlap <[email protected]>
Cc: Jonathan Corbet <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
Documentation/x86/x86_64/fsgs.rst | 199 +++++++++++++++++++++++++++++
Documentation/x86/x86_64/index.rst | 1 +
2 files changed, 200 insertions(+)
create mode 100644 Documentation/x86/x86_64/fsgs.rst
diff --git a/Documentation/x86/x86_64/fsgs.rst b/Documentation/x86/x86_64/fsgs.rst
new file mode 100644
index 0000000000000..50960e09e1f66
--- /dev/null
+++ b/Documentation/x86/x86_64/fsgs.rst
@@ -0,0 +1,199 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+Using FS and GS segments in user space applications
+===================================================
+
+The x86 architecture supports segmentation. Instructions which access
+memory can use segment register based addressing mode. The following
+notation is used to address a byte within a segment:
+
+ Segment-register:Byte-address
+
+The segment base address is added to the Byte-address to compute the
+resulting virtual address which is accessed. This allows multiple instances
+of data to be accessed with the identical Byte-address, i.e. with the same
+code. The selection of a particular instance is purely based on the
+base-address in the segment register.
+
+In 32-bit mode the CPU provides 6 segments, which also support segment
+limits. The limits can be used to enforce address space protections.
+
+In 64-bit mode the CS/SS/DS/ES segments are ignored and the base address is
+always 0 to provide a full 64-bit address space. The FS and GS segments are
+still functional in 64-bit mode.
+
+Common FS and GS usage
+------------------------------
+
+The FS segment is commonly used to address Thread Local Storage (TLS). FS
+is usually managed by runtime code or a threading library. Variables
+declared with the '__thread' storage class specifier are instantiated per
+thread and the compiler emits the FS: address prefix for accesses to these
+variables. Each thread has its own FS base address so common code can be
+used without complex address offset calculations to access the per thread
+instances. Applications should not use FS for other purposes when they use
+runtimes or threading libraries which manage the per thread FS.
+
+The GS segment has no common use and can be used freely by
+applications. GCC and Clang support GS based addressing via address space
+identifiers.
+
+Reading and writing the FS/GS base address
+------------------------------------------
+
+There exist two mechanisms to read and write the FS/GS base address:
+
+ - the arch_prctl() system call
+
+ - the FSGSBASE instruction family
+
+Accessing FS/GS base with arch_prctl()
+--------------------------------------
+
+ The arch_prctl(2) based mechanism is available on all 64-bit CPUs and all
+ kernel versions.
+
+ Reading the base:
+
+ arch_prctl(ARCH_GET_FS, &fsbase);
+ arch_prctl(ARCH_GET_GS, &gsbase);
+
+ Writing the base:
+
+ arch_prctl(ARCH_SET_FS, fsbase);
+ arch_prctl(ARCH_SET_GS, gsbase);
+
+ The ARCH_SET_GS prctl may be disabled depending on kernel configuration
+ and security settings.
+
+Accessing FS/GS base with the FSGSBASE instructions
+---------------------------------------------------
+
+ With the Ivy Bridge CPU generation Intel introduced a new set of
+ instructions to access the FS and GS base registers directly from user
+ space. These instructions are also supported on AMD Family 17H CPUs. The
+ following instructions are available:
+
+ =============== ===========================
+ RDFSBASE %reg Read the FS base register
+ RDGSBASE %reg Read the GS base register
+ WRFSBASE %reg Write the FS base register
+ WRGSBASE %reg Write the GS base register
+ =============== ===========================
+
+ The instructions avoid the overhead of the arch_prctl() syscall and allow
+ more flexible usage of the FS/GS addressing modes in user space
+ applications. This does not prevent conflicts between threading libraries
+ and runtimes which utilize FS, and applications which want to use it for
+ their own purposes.
+
+FSGSBASE instructions enablement
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ The instructions are enumerated in CPUID leaf 7, bit 0 of EBX. If
+ available, /proc/cpuinfo shows 'fsgsbase' in the flags entry of the CPUs.
+
+ The availability of the instructions does not enable them
+ automatically. The kernel has to enable them explicitly in CR4. The
+ reason for this is that older kernels make assumptions about the values in
+ the GS register and enforce them when GS base is set via
+ arch_prctl(). Allowing user space to write arbitrary values to GS base
+ would violate these assumptions and cause malfunction.
+
+ On kernels which do not enable FSGSBASE the execution of the FSGSBASE
+ instructions will fault with a #UD exception.
+
+ The kernel provides reliable information about the enabled state in the
+ ELF AUX vector. If the HWCAP2_FSGSBASE bit is set in the AUX vector, the
+ kernel has FSGSBASE instructions enabled and applications can use them.
+ The following code example shows how this detection works::
+
+ #include <sys/auxv.h>
+ #include <elf.h>
+
+ /* Will be eventually in asm/hwcap.h */
+ #ifndef HWCAP2_FSGSBASE
+ #define HWCAP2_FSGSBASE (1 << 1)
+ #endif
+
+ ....
+
+ unsigned val = getauxval(AT_HWCAP2);
+
+ if (val & HWCAP2_FSGSBASE)
+ printf("FSGSBASE enabled\n");
+
+FSGSBASE instructions compiler support
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+GCC version 4.6.4 and newer provide intrinsics for the FSGSBASE
+instructions. Clang 5 supports them as well.
+
+ =================== ===========================
+ _readfsbase_u64() Read the FS base register
+ _readgsbase_u64() Read the GS base register
+ _writefsbase_u64() Write the FS base register
+ _writegsbase_u64() Write the GS base register
+ =================== ===========================
+
+To utilize these intrinsics <immintrin.h> must be included in the source
+code and the compiler option -mfsgsbase has to be added.
+
+Compiler support for FS/GS based addressing
+-------------------------------------------
+
+GCC version 6 and newer provide support for FS/GS based addressing via
+Named Address Spaces. GCC implements the following address space
+identifiers for x86:
+
+ ========= ====================================
+ __seg_fs Variable is addressed relative to FS
+ __seg_gs Variable is addressed relative to GS
+ ========= ====================================
+
+The preprocessor symbols __SEG_FS and __SEG_GS are defined when these
+address spaces are supported. Code which implements fallback modes should
+check whether these symbols are defined. Usage example::
+
+ #ifdef __SEG_GS
+
+ long data0 = 0;
+ long data1 = 1;
+
+ long __seg_gs *ptr;
+
+ /* Check whether FSGSBASE is enabled by the kernel (HWCAP2_FSGSBASE) */
+ ....
+
+ /* Set GS base to point to data0 */
+ _writegsbase_u64(&data0);
+
+ /* Access offset 0 of GS */
+ ptr = 0;
+ printf("data0 = %ld\n", *ptr);
+
+ /* Set GS base to point to data1 */
+ _writegsbase_u64(&data1);
+ /* ptr still addresses offset 0! */
+ printf("data1 = %ld\n", *ptr);
+
+
+Clang does not provide the GCC address space identifiers, but it provides
+address spaces via an attribute based mechanism in Clang 2.6 and newer
+versions:
+
+ ==================================== =====================================
+ __attribute__((address_space(256)))  Variable is addressed relative to GS
+ __attribute__((address_space(257)))  Variable is addressed relative to FS
+ ==================================== =====================================
+
+FS/GS based addressing with inline assembly
+-------------------------------------------
+
+In case the compiler does not support address spaces, inline assembly can
+be used for FS/GS based addressing mode::
+
+ mov %fs:offset, %reg
+ mov %gs:offset, %reg
+
+ mov %reg, %fs:offset
+ mov %reg, %gs:offset
diff --git a/Documentation/x86/x86_64/index.rst b/Documentation/x86/x86_64/index.rst
index d6eaaa5a35fcd..a56070fc8e77a 100644
--- a/Documentation/x86/x86_64/index.rst
+++ b/Documentation/x86/x86_64/index.rst
@@ -14,3 +14,4 @@ x86_64 Support
fake-numa-for-cpusets
cpu-hotplug-spec
machinecheck
+ fsgs
--
2.20.1
From: Andy Lutomirski <[email protected]>
With the new FSGSBASE instructions, FS/GS base can be efficiently read
and written in __switch_to(). Use that capability to preserve the full
state.
This will enable user code to do whatever it wants with the new
instructions without any kernel-induced gotchas. (There can still be
architectural gotchas: movl %gs,%eax; movl %eax,%gs may change GS base
if WRGSBASE was used, but users are expected to read the CPU manual
before doing things like that.)
This is a considerable speedup. It seems to save about 100 cycles per
context switch compared to the baseline 4.6-rc1 behavior on a Skylake
laptop.
[ chang: 5~10% performance improvements were seen in a context switch
benchmark that ran threads with different FS/GS base values (relative to
the 4.16 baseline). ]
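The architectural gotcha mentioned above can be seen from user space. The
following sketch, which assumes a kernel with FSGSBASE enabled and a
compiler invoked with -mfsgsbase, sets GS base with WRGSBASE and then
reloads the GS selector, which may clobber that base again:

  #include <stdio.h>
  #include <immintrin.h>          /* needs -mfsgsbase */

  int main(void)
  {
          unsigned short gs;

          /* Install a GS base directly (never dereferenced here). */
          _writegsbase_u64(0x1000);

          /* Read back and rewrite the GS selector ... */
          asm volatile("movw %%gs, %0" : "=r" (gs));
          asm volatile("movw %0, %%gs" :: "r" (gs));

          /* ... which may have reset the base just written. */
          printf("GS base is now 0x%llx\n",
                 (unsigned long long)_readgsbase_u64());
          return 0;
  }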
Signed-off-by: Andy Lutomirski <[email protected]>
Signed-off-by: Chang S. Bae <[email protected]>
Reviewed-by: Tony Luck <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Andi Kleen <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
arch/x86/kernel/process_64.c | 34 ++++++++++++++++++++++++++++------
1 file changed, 28 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index aaa65f284b9b9..e066750be89a0 100644
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -199,8 +199,18 @@ static __always_inline void save_fsgs(struct task_struct *task)
{
savesegment(fs, task->thread.fsindex);
savesegment(gs, task->thread.gsindex);
- save_base_legacy(task, task->thread.fsindex, FS);
- save_base_legacy(task, task->thread.gsindex, GS);
+ if (static_cpu_has(X86_FEATURE_FSGSBASE)) {
+ /*
+ * If FSGSBASE is enabled, we can't make any useful guesses
+ * about the base, and user code expects us to save the current
+ * value. Fortunately, reading the base directly is efficient.
+ */
+ task->thread.fsbase = rdfsbase();
+ task->thread.gsbase = x86_gsbase_read_cpu_inactive();
+ } else {
+ save_base_legacy(task, task->thread.fsindex, FS);
+ save_base_legacy(task, task->thread.gsindex, GS);
+ }
}
#if IS_ENABLED(CONFIG_KVM)
@@ -279,10 +289,22 @@ static __always_inline void load_seg_legacy(unsigned short prev_index,
static __always_inline void x86_fsgsbase_load(struct thread_struct *prev,
struct thread_struct *next)
{
- load_seg_legacy(prev->fsindex, prev->fsbase,
- next->fsindex, next->fsbase, FS);
- load_seg_legacy(prev->gsindex, prev->gsbase,
- next->gsindex, next->gsbase, GS);
+ if (static_cpu_has(X86_FEATURE_FSGSBASE)) {
+ /* Update the FS and GS selectors if they could have changed. */
+ if (unlikely(prev->fsindex || next->fsindex))
+ loadseg(FS, next->fsindex);
+ if (unlikely(prev->gsindex || next->gsindex))
+ loadseg(GS, next->gsindex);
+
+ /* Update the bases. */
+ wrfsbase(next->fsbase);
+ x86_gsbase_write_cpu_inactive(next->gsbase);
+ } else {
+ load_seg_legacy(prev->fsindex, prev->fsbase,
+ next->fsindex, next->fsbase, FS);
+ load_seg_legacy(prev->gsindex, prev->gsbase,
+ next->gsindex, next->gsbase, GS);
+ }
}
static unsigned long x86_fsgsbase_read_task(struct task_struct *task,
--
2.20.1
From: "Chang S. Bae" <[email protected]>
Add CPU feature conditional FS/GS base access to the relevant helper
functions. That allows accelerating certain FS/GS base operations in
subsequent changes.
Note that, while possible, the user space entry/exit GS base operations are
not going to use the new FSGSBASE instructions. The reason is that it would
require additional storage for the user space value which adds more
complexity to the low level code and experiments have shown marginal
benefit. This may be revisited later but for now the SWAPGS based handling
in the entry code is preserved except for the paranoid entry/exit code.
Suggested-by: Tony Luck <[email protected]>
Signed-off-by: Chang S. Bae <[email protected]>
Reviewed-by: Tony Luck <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: H. Peter Anvin <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Andrew Cooper <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
---
arch/x86/include/asm/fsgsbase.h | 27 +++++++--------
arch/x86/kernel/process_64.c | 58 +++++++++++++++++++++++++++++++++
2 files changed, 70 insertions(+), 15 deletions(-)
diff --git a/arch/x86/include/asm/fsgsbase.h b/arch/x86/include/asm/fsgsbase.h
index fdd1177499b40..aefd53767a5d4 100644
--- a/arch/x86/include/asm/fsgsbase.h
+++ b/arch/x86/include/asm/fsgsbase.h
@@ -49,35 +49,32 @@ static __always_inline void wrgsbase(unsigned long gsbase)
asm volatile("wrgsbase %0" :: "r" (gsbase) : "memory");
}
+#include <asm/cpufeature.h>
+
/* Helper functions for reading/writing FS/GS base */
static inline unsigned long x86_fsbase_read_cpu(void)
{
unsigned long fsbase;
- rdmsrl(MSR_FS_BASE, fsbase);
+ if (static_cpu_has(X86_FEATURE_FSGSBASE))
+ fsbase = rdfsbase();
+ else
+ rdmsrl(MSR_FS_BASE, fsbase);
return fsbase;
}
-static inline unsigned long x86_gsbase_read_cpu_inactive(void)
-{
- unsigned long gsbase;
-
- rdmsrl(MSR_KERNEL_GS_BASE, gsbase);
-
- return gsbase;
-}
-
static inline void x86_fsbase_write_cpu(unsigned long fsbase)
{
- wrmsrl(MSR_FS_BASE, fsbase);
+ if (static_cpu_has(X86_FEATURE_FSGSBASE))
+ wrfsbase(fsbase);
+ else
+ wrmsrl(MSR_FS_BASE, fsbase);
}
-static inline void x86_gsbase_write_cpu_inactive(unsigned long gsbase)
-{
- wrmsrl(MSR_KERNEL_GS_BASE, gsbase);
-}
+extern unsigned long x86_gsbase_read_cpu_inactive(void);
+extern void x86_gsbase_write_cpu_inactive(unsigned long gsbase);
#endif /* CONFIG_X86_64 */
diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index 5ef9d8f25b0e8..aaa65f284b9b9 100644
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -328,6 +328,64 @@ static unsigned long x86_fsgsbase_read_task(struct task_struct *task,
return base;
}
+unsigned long x86_gsbase_read_cpu_inactive(void)
+{
+ unsigned long gsbase;
+
+ if (static_cpu_has(X86_FEATURE_FSGSBASE)) {
+ bool need_restore = false;
+ unsigned long flags;
+
+ /*
+ * We read the inactive GS base value by swapping
+ * to make it the active one. But we cannot allow
+ * an interrupt while we switch to and from.
+ */
+ if (!irqs_disabled()) {
+ local_irq_save(flags);
+ need_restore = true;
+ }
+
+ native_swapgs();
+ gsbase = rdgsbase();
+ native_swapgs();
+
+ if (need_restore)
+ local_irq_restore(flags);
+ } else {
+ rdmsrl(MSR_KERNEL_GS_BASE, gsbase);
+ }
+
+ return gsbase;
+}
+
+void x86_gsbase_write_cpu_inactive(unsigned long gsbase)
+{
+ if (static_cpu_has(X86_FEATURE_FSGSBASE)) {
+ bool need_restore = false;
+ unsigned long flags;
+
+ /*
+ * We write the inactive GS base value by swapping
+ * to make it the active one. But we cannot allow
+ * an interrupt while we switch to and from.
+ */
+ if (!irqs_disabled()) {
+ local_irq_save(flags);
+ need_restore = true;
+ }
+
+ native_swapgs();
+ wrgsbase(gsbase);
+ native_swapgs();
+
+ if (need_restore)
+ local_irq_restore(flags);
+ } else {
+ wrmsrl(MSR_KERNEL_GS_BASE, gsbase);
+ }
+}
+
unsigned long x86_fsbase_read_task(struct task_struct *task)
{
unsigned long fsbase;
--
2.20.1
On Thu, Apr 23, 2020 at 4:22 PM Sasha Levin <[email protected]> wrote:
>
> From: "Chang S. Bae" <[email protected]>
>
> When a ptracer writes a ptracee's FS/GS base with a different value, the
> selector is also cleared. This behavior is not correct as the selector
> should be preserved.
>
> Update only the base value and leave the selector intact. To simplify the
> code further remove the conditional checking for the same value as this
> code is not performance-critical.
>
> The only recognizable downside of this change is when the selector is
> already nonzero on write. The base will be reloaded according to the
> selector. But the case is highly unexpected in real usages.
After spending a while reading this patch, I think it's probably okay,
but this ptrace stuff is utter garbage. The changelog should explain
why common cases work with the current code, what you think the point
(if any) of the condition you're removing is, and why it's okay to
make this change.
Certainly the current changelog is wrong. You say "The base will be
reloaded according to the selector". The code you're changing calls
x86_fs/gsbase_write_task(), which is, effectively:
task->thread.fsbase = fsbase;
This doesn't reload anything.
Maybe what you're trying to say is "with this patch applied, as is or
with FSGSBASE disabled, if the tracee has FS != 0 and a tracer
modifies only fs_base, then the change won't stick."
--Andy
On 4/24/20 1:21 AM, Sasha Levin wrote:
> Benefits:
> Currently a user process that wishes to read or write the FS/GS base must
> make a system call. But recent X86 processors have added new instructions
> for use in 64-bit mode that allow direct access to the FS and GS segment
> base addresses. The operating system controls whether applications can
> use these instructions with a %cr4 control bit.
[...]
So FWIW I've done some overnight fuzz testing of this patch set and
haven't seen any problems. Will try a couple of other kernel configs too.
Vegard
On 5/10/20 10:09 AM, Vegard Nossum wrote:
>
> On 4/24/20 1:21 AM, Sasha Levin wrote:
>> Benefits:
>> Currently a user process that wishes to read or write the FS/GS base must
>> make a system call. But recent X86 processors have added new instructions
>> for use in 64-bit mode that allow direct access to the FS and GS segment
>> base addresses. The operating system controls whether applications can
>> use these instructions with a %cr4 control bit.
[...]
> So FWIW I've done some overnight fuzz testing of this patch set and
> haven't seen any problems. Will try a couple of other kernel configs too.
I spoke a few minutes too soon. Just hit this, if anybody wants to have
a look:
[ 6402.786418] ------------[ cut here ]------------
[ 6402.787769] WARNING: CPU: 0 PID: 13802 at arch/x86/kernel/traps.c:811 do_debug+0x16c/0x210
[ 6402.790042] CPU: 0 PID: 13802 Comm: init Not tainted 5.7.0-rc4+ #194
[ 6402.791779] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
[ 6402.793365] RIP: 0010:do_debug+0x16c/0x210
[ 6402.794496] Code: ef e8 f8 fb 00 00 f6 85 91 00 00 00 02 74 b9 fa 66 66 90 66 66 90 e8 c3 f5 11 00 eb ab f6 85 88 00 00 00 03 0f 85 6e ff ff ff <0f> 0b 80 e4 bf 49 89 84 24 58 0a 00 00 f0 41 80 0c 24 10 48 81 a5
[ 6402.799557] RSP: 0000:fffffe0000011f20 EFLAGS: 00010046
[ 6402.800995] RAX: 0000000000004002 RBX: 0000000000000000 RCX: 00000000ffffffff
[ 6402.802959] RDX: 0000000000000000 RSI: 0000000000000003 RDI: ffffffff82471e60
[ 6402.804891] RBP: fffffe0000011f58 R08: 0000000000000000 R09: 0000000000000005
[ 6402.806836] R10: 0000000000000000 R11: 0000000000000000 R12: ffff88803e739a00
[ 6402.808775] R13: 0000000000000000 R14: 000000003ce24000 R15: 0000000000000000
[ 6402.810723] FS: 000000000097a8c0(0000) GS:ffff88803ec00000(0000) knlGS:0000000000000000
[ 6402.812933] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 6402.814509] CR2: 0000000040000010 CR3: 000000003ce24000 CR4: 00000000000006f0
[ 6402.816468] DR0: 0000000000000001 DR1: 0000000040006070 DR2: 00007ffff7ffd000
[ 6402.818406] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000003b3062a
[ 6402.820353] Call Trace:
[ 6402.821043] <#DB>
[ 6402.821622] debug+0x37/0x70
[ 6402.822449] RIP: 0010:arch_stack_walk_user+0x79/0x110
[ 6402.823851] Code: b8 f0 ff ff bf be f0 df ff ff 48 0f 44 c6 48 39 d0 0f 82 94 00 00 00 41 83 87 b8 09 00 00 01 66 66 90 0f ae e8 31 c0 48 8b 1a <66> 66 90 85 c0 75 72 66 66 90 0f ae e8 48 8b 72 08 66 66 90 85 c0
[ 6402.828923] RSP: 0000:ffffc90003807d80 EFLAGS: 00000046
[ 6402.830346] RAX: 0000000000000000 RBX: 0040001000bf4800 RCX: 0000000000000001
[ 6402.832288] RDX: 0000000040006073 RSI: 00000000400060dd RDI: ffffc90003807db8
[ 6402.834250] RBP: ffffc90003807f58 R08: 0000000000000001 R09: ffff88803e444400
[ 6402.836203] R10: 000000000000054c R11: ffff88803d2d955c R12: ffff88803e739a00
[ 6402.838139] R13: ffffffff810f16a0 R14: ffffc90003807db8 R15: ffff88803e739a00
[ 6402.840083] ? profile_setup.cold+0xa1/0xa1
[ 6402.841235] </#DB>
[ 6402.841836] stack_trace_save_user+0x8c/0xd4
[ 6402.843045] trace_buffer_unlock_commit_regs+0x122/0x1a0
[ 6402.844501] trace_event_buffer_commit+0x6d/0x240
[ 6402.845799] trace_event_raw_event_preemptirq_template+0x75/0xc0
[ 6402.847441] ? debug+0x53/0x70
[ 6402.848299] ? trace_hardirqs_off_thunk+0x1a/0x33
[ 6402.849593] trace_hardirqs_off_caller+0xa6/0xd0
[ 6402.850862] ? debug+0x4e/0x70
[ 6402.851727] trace_hardirqs_off_thunk+0x1a/0x33
[ 6402.852983] debug+0x53/0x70
[ 6402.853785] RIP: 0033:0x400060dd
[ 6402.854681] Code: 7a 1e 9e 91 de 4c 65 49 be 00 d0 ff f7 ff 7f 00 00 49 bf de a7 b3 e8 d7 21 3c 15 9c 48 81 0c 24 00 01 00 00 9d b8 62 00 00 00 <8e> c0 0f 05 66 8c c8 9c 48 81 24 24 ff fe ff ff 9d 48 89 04 25 40
[ 6402.859689] RSP: 002b:000000004000aea0 EFLAGS: 00000317
[ 6402.861116] RAX: 0000000000000062 RBX: 0000000040001000 RCX: ffffffffffffffff
[ 6402.863097] RDX: 0000000040003000 RSI: 0000000040004000 RDI: 0000000040001000
[ 6402.866199] RBP: 0000000040006073 R08: 0000000000000001 R09: 0000000000000001
[ 6402.868142] R10: ffffffffef080df2 R11: 1000000000000000 R12: fdffffffffffffff
[ 6402.870083] R13: 654cde919e1e7ab5 R14: 00007ffff7ffd000 R15: 153c21d7e8b3a7de
[ 6402.872049] ---[ end trace 91a3039d0fd63799 ]---
It might not be related to the patch set, mind.
Vegard
Vegard Nossum <[email protected]> writes:
> On 5/10/20 10:09 AM, Vegard Nossum wrote:
>
> I spoke a few minutes too soon. Just hit this, if anybody wants to have
> a look:
>
> [ 6402.786418] ------------[ cut here ]------------
> [ 6402.787769] WARNING: CPU: 0 PID: 13802 at arch/x86/kernel/traps.c:811 do_debug+0x16c/0x210
> [ 6402.820353] Call Trace:
> [ 6402.821043] <#DB>
> [ 6402.821622] debug+0x37/0x70
> [ 6402.822449] RIP: 0010:arch_stack_walk_user+0x79/0x110
That's a cute way to trigger that WARN_ON in the #DB handler.
> [ 6402.816468] DR0: 0000000000000001 DR1: 0000000040006070 DR2: 00007ffff7ffd000
> [ 6402.818406] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000003b3062a
#DB recursion
[ 6402.832288] RDX: 0000000040006073
27: 48 8b 1a mov (%rdx),%rbx
Breakpoint on user space stack, #DB triggers and the low level ASM
irqflags tracepoint has stacktrace enabled which unwinds into the user
stack and triggers #DB again.
Bah. I know why I want to ban all that tracing muck from low level entry code.
> It might not be related to the patch set, mind.
It's unrelated.
Thanks,
tglx
On Sun, May 10, 2020 at 12:15:34PM +0200, Thomas Gleixner wrote:
>Vegard Nossum <[email protected]> writes:
>> On 5/10/20 10:09 AM, Vegard Nossum wrote:
>>
>> I spoke a few minutes too soon. Just hit this, if anybody wants to have
>> a look:
>>
>> [ 6402.786418] ------------[ cut here ]------------
>> [ 6402.787769] WARNING: CPU: 0 PID: 13802 at arch/x86/kernel/traps.c:811 do_debug+0x16c/0x210
>
>> [ 6402.820353] Call Trace:
>> [ 6402.821043] <#DB>
>> [ 6402.821622] debug+0x37/0x70
>> [ 6402.822449] RIP: 0010:arch_stack_walk_user+0x79/0x110
>
>That's a cute way to trigger that WARN_ON in the #DB handler.
>
>> [ 6402.816468] DR0: 0000000000000001 DR1: 0000000040006070 DR2: 00007ffff7ffd000
>> [ 6402.818406] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000003b3062a
>
>#DB recursion
>
> [ 6402.832288] RDX: 0000000040006073
>
>27: 48 8b 1a mov (%rdx),%rbx
>
>Breakpoint on user space stack, #DB triggers and the low level ASM
>irqflags tracepoint has stacktrace enabled which unwinds into the user
>stack and triggers #DB again.
>
>Bah. I know why I want to ban all that tracing muck from low level entry code.
>
>> It might not be related to the patch set, mind.
>
>It's unrelated.
Thanks for testing Vegard!
--
Thanks,
Sasha
> So this is a check that checks if you're running in user mode if
> you have a debug trap with single step, but somehow it triggered
> for a user segment.
>
> Probably the regs got corrupted.
>
> Sasha, I suspect you're missing a mov %rsp,%rdi somewhere in the
> debug entry path that sets up the regs argument for the C code.
... Ah never mind. Thomas has a better explanation.
-Andi
> [ 6402.786418] ------------[ cut here ]------------
> [ 6402.787769] WARNING: CPU: 0 PID: 13802 at arch/x86/kernel/traps.c:811 do_debug+0x16c/0x210
...
> [ 6402.848299] ? trace_hardirqs_off_thunk+0x1a/0x33
> [ 6402.849593] trace_hardirqs_off_caller+0xa6/0xd0
> [ 6402.850862] ? debug+0x4e/0x70
> [ 6402.851727] trace_hardirqs_off_thunk+0x1a/0x33
> [ 6402.852983] debug+0x53/0x70
> [ 6402.853785] RIP: 0033:0x400060dd
So this is a check that checks if you're running in user mode if
you have a debug trap with single step, but somehow it triggered
for a user segment.
Probably the regs got corrupted.
Sasha, I suspect you're missing a mov %rsp,%rdi somewhere in the
debug entry path that sets up the regs argument for the C code.
-Andi
On Sun, May 10, 2020 at 05:50:28PM -0700, Andi Kleen wrote:
>> So this is a check that checks if you're running in user mode if
>> you have a debug trap with single step, but somehow it triggered
>> for a user segment.
>>
>> Probably the regs got corrupted.
>>
>> Sasha, I suspect you're missing a mov %rsp,%rdi somewhere in the
>> debug entry path that sets up the regs argument for the C code.
>
>... Ah never mind. Thomas has a better explanation.
FWIW, this series was heavily tested for the past few months to the
point that we're comfortable in enabling it for 3rd party users on
Azure:
https://bugs.launchpad.net/ubuntu/+source/linux-azure/+bug/1877425.
--
Thanks,
Sasha