The LKGS instruction is introduced with the Intel FRED (flexible return and
event delivery) specification https://cdrdv2.intel.com/v1/dl/getContent/678938.
LKGS is independent of FRED, so we enable it as a standalone CPU feature.

LKGS behaves like the MOV to GS instruction except that it loads the base
address into the IA32_KERNEL_GS_BASE MSR instead of the GS segment's
descriptor cache, which is exactly what the Linux kernel does to load a
user-level GS base. Thus, with LKGS, there is no need to SWAPGS away from
the kernel GS base.
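
As a rough illustration of the semantics (a sketch only, not part of the
patches; read_gs_descriptor() and load_gs_selector_and_attrs() below are
made-up helper names used purely for description, while wrmsrl(),
get_desc_base() and MSR_IA32_KERNEL_GS_BASE are existing kernel
interfaces):

	static void lkgs_model(u16 selector)
	{
		/* hypothetical helper: fetch the descriptor from the GDT/LDT */
		struct desc_struct desc = read_gs_descriptor(selector);

		/* same selector/attribute load and checks as MOV to GS ... */
		load_gs_selector_and_attrs(selector, &desc);	/* hypothetical */

		/* ... but the base goes to the MSR instead of GS.base */
		wrmsrl(MSR_IA32_KERNEL_GS_BASE, get_desc_base(&desc));
	}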
H. Peter Anvin (Intel) (6):
x86/cpufeature: add cpu feature bit for LKGS
x86/opcode: add LKGS instruction to x86-opcode-map
x86/gsseg: make asm_load_gs_index() take an u16
x86/gsseg: move local_irq_save/restore() into asm_load_gs_index()
x86/gsseg: move load_gs_index() to its own header file
x86/gsseg: use the LKGS instruction if available for load_gs_index()
arch/x86/entry/entry_64.S | 28 +++++++++---
arch/x86/ia32/ia32_signal.c | 1 +
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/include/asm/gsseg.h | 58 ++++++++++++++++++++++++
arch/x86/include/asm/mmu_context.h | 1 +
arch/x86/include/asm/special_insns.h | 21 ---------
arch/x86/kernel/paravirt.c | 1 +
arch/x86/kernel/tls.c | 1 +
arch/x86/lib/x86-opcode-map.txt | 1 +
tools/arch/x86/include/asm/cpufeatures.h | 1 +
tools/arch/x86/lib/x86-opcode-map.txt | 1 +
11 files changed, 88 insertions(+), 27 deletions(-)
create mode 100644 arch/x86/include/asm/gsseg.h
--
2.34.1
From: "H. Peter Anvin (Intel)" <[email protected]>
Add the instruction opcode used by LKGS.
The opcode number is per the public FRED draft spec v3.0
https://cdrdv2.intel.com/v1/dl/getContent/678938.
Signed-off-by: H. Peter Anvin (Intel) <[email protected]>
Signed-off-by: Xin Li <[email protected]>
---
arch/x86/lib/x86-opcode-map.txt | 1 +
tools/arch/x86/lib/x86-opcode-map.txt | 1 +
2 files changed, 2 insertions(+)
diff --git a/arch/x86/lib/x86-opcode-map.txt b/arch/x86/lib/x86-opcode-map.txt
index d12d1358f96d..5168ee0360b2 100644
--- a/arch/x86/lib/x86-opcode-map.txt
+++ b/arch/x86/lib/x86-opcode-map.txt
@@ -1047,6 +1047,7 @@ GrpTable: Grp6
3: LTR Ew
4: VERR Ew
5: VERW Ew
+6: LKGS Ew (F2)
EndTable
GrpTable: Grp7
diff --git a/tools/arch/x86/lib/x86-opcode-map.txt b/tools/arch/x86/lib/x86-opcode-map.txt
index d12d1358f96d..5168ee0360b2 100644
--- a/tools/arch/x86/lib/x86-opcode-map.txt
+++ b/tools/arch/x86/lib/x86-opcode-map.txt
@@ -1047,6 +1047,7 @@ GrpTable: Grp6
3: LTR Ew
4: VERR Ew
5: VERW Ew
+6: LKGS Ew (F2)
EndTable
GrpTable: Grp7
--
2.34.1
From: "H. Peter Anvin (Intel)" <[email protected]>
Add the CPU feature bit for LKGS (Load "Kernel" GS).
The LKGS instruction is introduced with the Intel FRED (flexible return and
event delivery) specification
https://cdrdv2.intel.com/v1/dl/getContent/678938.

LKGS behaves like the MOV to GS instruction except that it loads
the base address into the IA32_KERNEL_GS_BASE MSR instead of the
GS segment's descriptor cache, which is exactly what the Linux kernel
does to load a user-level GS base. Thus, with LKGS, there is no
need to SWAPGS away from the kernel GS base.
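
For illustration only (not part of this patch): assuming word 12 is filled
in from CPUID level 0x00000007:1 (EAX) by the common CPUID code, as for the
neighbouring bits, the new bit can then be tested with the usual helper,
e.g.:

	if (cpu_feature_enabled(X86_FEATURE_LKGS))
		pr_info("LKGS available\n");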
Signed-off-by: H. Peter Anvin (Intel) <[email protected]>
Signed-off-by: Xin Li <[email protected]>
---
arch/x86/include/asm/cpufeatures.h | 1 +
tools/arch/x86/include/asm/cpufeatures.h | 1 +
2 files changed, 2 insertions(+)
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index ef4775c6db01..459fb0c21dd4 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -308,6 +308,7 @@
/* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */
#define X86_FEATURE_AVX_VNNI (12*32+ 4) /* AVX VNNI instructions */
#define X86_FEATURE_AVX512_BF16 (12*32+ 5) /* AVX512 BFLOAT16 instructions */
+#define X86_FEATURE_LKGS (12*32+ 18) /* Load "kernel" (userspace) gs */
/* AMD-defined CPU features, CPUID level 0x80000008 (EBX), word 13 */
#define X86_FEATURE_CLZERO (13*32+ 0) /* CLZERO instruction */
diff --git a/tools/arch/x86/include/asm/cpufeatures.h b/tools/arch/x86/include/asm/cpufeatures.h
index ef4775c6db01..459fb0c21dd4 100644
--- a/tools/arch/x86/include/asm/cpufeatures.h
+++ b/tools/arch/x86/include/asm/cpufeatures.h
@@ -308,6 +308,7 @@
/* Intel-defined CPU features, CPUID level 0x00000007:1 (EAX), word 12 */
#define X86_FEATURE_AVX_VNNI (12*32+ 4) /* AVX VNNI instructions */
#define X86_FEATURE_AVX512_BF16 (12*32+ 5) /* AVX512 BFLOAT16 instructions */
+#define X86_FEATURE_LKGS (12*32+ 18) /* Load "kernel" (userspace) gs */
/* AMD-defined CPU features, CPUID level 0x80000008 (EBX), word 13 */
#define X86_FEATURE_CLZERO (13*32+ 0) /* CLZERO instruction */
--
2.34.1
From: "H. Peter Anvin (Intel)" <[email protected]>
Let gcc know that only the low 16 bits of the load_gs_index() argument
actually matter. This might allow it to generate slightly better
code. However, do not propagate this into the prototypes of functions
that end up being paravirtualized, to avoid unnecessary changes.
Signed-off-by: H. Peter Anvin (Intel) <[email protected]>
Signed-off-by: Xin Li <[email protected]>
---
arch/x86/entry/entry_64.S | 2 +-
arch/x86/include/asm/special_insns.h | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 9953d966d124..e0c48998d2fb 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -779,7 +779,7 @@ _ASM_NOKPROBE(common_interrupt_return)
/*
* Reload gs selector with exception handling
- * edi: new selector
+ * di: new selector
*
* Is in entry.text as it shouldn't be instrumented.
*/
diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index 35f709f619fb..a71d0e8d4684 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -120,7 +120,7 @@ static inline void native_wbinvd(void)
asm volatile("wbinvd": : :"memory");
}
-extern asmlinkage void asm_load_gs_index(unsigned int selector);
+extern asmlinkage void asm_load_gs_index(u16 selector);
static inline void native_load_gs_index(unsigned int selector)
{
--
2.34.1
From: "H. Peter Anvin (Intel)" <[email protected]>
The need to disable/enable interrupts around asm_load_gs_index() is a
consequence of the implementation. Prepare for using the LKGS
instruction, which is locally atomic and therefore doesn't need
interrupts disabled.
Signed-off-by: H. Peter Anvin (Intel) <[email protected]>
Signed-off-by: Xin Li <[email protected]>
---
arch/x86/entry/entry_64.S | 26 +++++++++++++++++++++-----
arch/x86/include/asm/special_insns.h | 4 ----
2 files changed, 21 insertions(+), 9 deletions(-)
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index e0c48998d2fb..cc6ba6672418 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -778,19 +778,35 @@ SYM_CODE_END(common_interrupt_return)
_ASM_NOKPROBE(common_interrupt_return)
/*
- * Reload gs selector with exception handling
+ * Reload gs selector with exception handling. This is used only on
+ * native, so using swapgs, pushf, popf, cli, sti, ... directly is fine.
+ *
* di: new selector
+ * rax: scratch register
*
* Is in entry.text as it shouldn't be instrumented.
+ *
+ * Note: popf is slow, so use pushf to read IF and then execute cli/sti
+ * if necessary.
*/
SYM_FUNC_START(asm_load_gs_index)
FRAME_BEGIN
+ pushf
+ pop %rax
+ andl $X86_EFLAGS_IF, %eax /* Interrupts enabled? */
+ jz 1f
+ cli
+1:
swapgs
.Lgs_change:
ANNOTATE_NOENDBR // error_entry
movl %edi, %gs
2: ALTERNATIVE "", "mfence", X86_BUG_SWAPGS_FENCE
swapgs
+ testl %eax, %eax
+ jz 3f
+ sti
+3:
FRAME_END
RET
@@ -799,12 +815,12 @@ SYM_FUNC_START(asm_load_gs_index)
swapgs /* switch back to user gs */
.macro ZAP_GS
/* This can't be a string because the preprocessor needs to see it. */
- movl $__USER_DS, %eax
- movl %eax, %gs
+ movl $__USER_DS, %edi
+ movl %edi, %gs
.endm
ALTERNATIVE "", "ZAP_GS", X86_BUG_NULL_SEG
- xorl %eax, %eax
- movl %eax, %gs
+ xorl %edi, %edi
+ movl %edi, %gs
jmp 2b
_ASM_EXTABLE(.Lgs_change, .Lbad_gs)
diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index a71d0e8d4684..6de00dec6564 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -124,11 +124,7 @@ extern asmlinkage void asm_load_gs_index(u16 selector);
static inline void native_load_gs_index(unsigned int selector)
{
- unsigned long flags;
-
- local_irq_save(flags);
asm_load_gs_index(selector);
- local_irq_restore(flags);
}
static inline unsigned long __read_cr4(void)
--
2.34.1
From: "H. Peter Anvin (Intel)" <[email protected]>
The LKGS instruction atomically loads a segment descriptor into the
%gs descriptor registers, *except* that %gs.base is unchanged, and the
base is instead loaded into MSR_IA32_KERNEL_GS_BASE, which is exactly
what we want this function to do.
Signed-off-by: H. Peter Anvin (Intel) <[email protected]>
Signed-off-by: Xin Li <[email protected]>
---
arch/x86/include/asm/gsseg.h | 28 +++++++++++++++++++++++++++-
1 file changed, 27 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/gsseg.h b/arch/x86/include/asm/gsseg.h
index 5e3b56a17098..b8a6a98d88b8 100644
--- a/arch/x86/include/asm/gsseg.h
+++ b/arch/x86/include/asm/gsseg.h
@@ -3,15 +3,41 @@
#define _ASM_X86_GSSEG_H
#include <linux/types.h>
+
+#include <asm/asm.h>
+#include <asm/cpufeature.h>
+#include <asm/alternative.h>
#include <asm/processor.h>
+#include <asm/nops.h>
#ifdef CONFIG_X86_64
extern asmlinkage void asm_load_gs_index(u16 selector);
+#define LKGS_DI _ASM_BYTES(0xf2,0x0f,0x00,0xf7)
+
static inline void native_load_gs_index(unsigned int selector)
{
- asm_load_gs_index(selector);
+ u16 sel = selector;
+
+ /*
+ * Note: the fixup is used for the LKGS instruction, but
+ * it needs to be attached to the primary instruction sequence
+ * as it isn't something that gets patched.
+ *
+ * %rax is provided to the assembly routine as a scratch
+ * register.
+ */
+ alternative_io("1: call asm_load_gs_index\n"
+ ".pushsection \".fixup\",\"ax\"\n"
+ "2: xorl %k[sel], %k[sel]\n"
+ " jmp 1b\n"
+ ".popsection\n"
+ _ASM_EXTABLE(1b, 2b),
+ _ASM_BYTES(0x3e) LKGS_DI,
+ X86_FEATURE_LKGS,
+ ASM_OUTPUT2([sel] "+D" (sel), ASM_CALL_CONSTRAINT),
+ ASM_NO_INPUT_CLOBBER(_ASM_AX));
}
#endif /* CONFIG_X86_64 */
--
2.34.1
From: "H. Peter Anvin (Intel)" <[email protected]>
<asm/cpufeature.h> depends on <asm/special_insns.h>, so in order to be
able to use alternatives in native_load_gs_index(), factor it out into
a separate header file.
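
As a usage sketch (illustrative only; the caller below is hypothetical),
users of the interface then only need the new header rather than
<asm/special_insns.h>:

	#include <asm/gsseg.h>

	static void example_clear_user_gs(void)	/* hypothetical caller */
	{
		load_gs_index(0);	/* clear the user GS selector */
	}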
Signed-off-by: H. Peter Anvin (Intel) <[email protected]>
Signed-off-by: Xin Li <[email protected]>
---
arch/x86/ia32/ia32_signal.c | 1 +
arch/x86/include/asm/gsseg.h | 32 ++++++++++++++++++++++++++++
arch/x86/include/asm/mmu_context.h | 1 +
arch/x86/include/asm/special_insns.h | 17 ---------------
arch/x86/kernel/paravirt.c | 1 +
arch/x86/kernel/tls.c | 1 +
6 files changed, 36 insertions(+), 17 deletions(-)
create mode 100644 arch/x86/include/asm/gsseg.h
diff --git a/arch/x86/ia32/ia32_signal.c b/arch/x86/ia32/ia32_signal.c
index c9c3859322fa..14c739303099 100644
--- a/arch/x86/ia32/ia32_signal.c
+++ b/arch/x86/ia32/ia32_signal.c
@@ -34,6 +34,7 @@
#include <asm/sigframe.h>
#include <asm/sighandling.h>
#include <asm/smap.h>
+#include <asm/gsseg.h>
static inline void reload_segments(struct sigcontext_32 *sc)
{
diff --git a/arch/x86/include/asm/gsseg.h b/arch/x86/include/asm/gsseg.h
new file mode 100644
index 000000000000..5e3b56a17098
--- /dev/null
+++ b/arch/x86/include/asm/gsseg.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef _ASM_X86_GSSEG_H
+#define _ASM_X86_GSSEG_H
+
+#include <linux/types.h>
+#include <asm/processor.h>
+
+#ifdef CONFIG_X86_64
+
+extern asmlinkage void asm_load_gs_index(u16 selector);
+
+static inline void native_load_gs_index(unsigned int selector)
+{
+ asm_load_gs_index(selector);
+}
+
+#endif /* CONFIG_X86_64 */
+
+#ifndef CONFIG_PARAVIRT_XXL
+
+static inline void load_gs_index(unsigned int selector)
+{
+#ifdef CONFIG_X86_64
+ native_load_gs_index(selector);
+#else
+ loadsegment(gs, selector);
+#endif
+}
+
+#endif /* CONFIG_PARAVIRT_XXL */
+
+#endif /* _ASM_X86_GSSEG_H */
diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index b8d40ddeab00..e01aa74a6de7 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -12,6 +12,7 @@
#include <asm/tlbflush.h>
#include <asm/paravirt.h>
#include <asm/debugreg.h>
+#include <asm/gsseg.h>
extern atomic64_t last_mm_ctx_id;
diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index 6de00dec6564..cfd9499b617c 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -120,13 +120,6 @@ static inline void native_wbinvd(void)
asm volatile("wbinvd": : :"memory");
}
-extern asmlinkage void asm_load_gs_index(u16 selector);
-
-static inline void native_load_gs_index(unsigned int selector)
-{
- asm_load_gs_index(selector);
-}
-
static inline unsigned long __read_cr4(void)
{
return native_read_cr4();
@@ -180,16 +173,6 @@ static inline void wbinvd(void)
native_wbinvd();
}
-
-static inline void load_gs_index(unsigned int selector)
-{
-#ifdef CONFIG_X86_64
- native_load_gs_index(selector);
-#else
- loadsegment(gs, selector);
-#endif
-}
-
#endif /* CONFIG_PARAVIRT_XXL */
static inline void clflush(volatile void *__p)
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index 7ca2d46c08cc..00f6a92551d2 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -32,6 +32,7 @@
#include <asm/special_insns.h>
#include <asm/tlb.h>
#include <asm/io_bitmap.h>
+#include <asm/gsseg.h>
/*
* nop stub, which must not clobber anything *including the stack* to
diff --git a/arch/x86/kernel/tls.c b/arch/x86/kernel/tls.c
index 3c883e064242..3ffbab0081f4 100644
--- a/arch/x86/kernel/tls.c
+++ b/arch/x86/kernel/tls.c
@@ -12,6 +12,7 @@
#include <asm/ldt.h>
#include <asm/processor.h>
#include <asm/proto.h>
+#include <asm/gsseg.h>
#include "tls.h"
--
2.34.1
On Thu, Oct 06, 2022 at 08:40:39AM -0700, Xin Li wrote:
> SYM_FUNC_START(asm_load_gs_index)
> FRAME_BEGIN
> + pushf
> + pop %rax
> + andl $X86_EFLAGS_IF, %eax /* Interrupts enabled? */
> + jz 1f
> + cli
> +1:
Why the pop, andl, jz? AFAICT our arch_local_irq_save() doesn't even
bother with that, so why does this function?
On Thu, Oct 06, 2022 at 08:40:41AM -0700, Xin Li wrote:
> +#define LKGS_DI _ASM_BYTES(0xf2,0x0f,0x00,0xf7)
This should come with a comment that states the binutils version that
understands "lkgs %di" (or however that ends up being).
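
Something along these lines, perhaps (a sketch only; the actual binutils
version still needs to be filled in once confirmed):

	/*
	 * LKGS is F2 0F 00 /6; with %di the ModRM byte is 0xf7
	 * (mod=11, reg=6, rm=7), i.e. "lkgs %di".  Emitted as raw bytes
	 * because only binutils >= <version TBD> know the mnemonic.
	 */
	#define LKGS_DI	_ASM_BYTES(0xf2,0x0f,0x00,0xf7)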
On Thu, Oct 06, 2022 at 08:40:41AM -0700, Xin Li wrote:
> static inline void native_load_gs_index(unsigned int selector)
> {
> + u16 sel = selector;
> +
> + /*
> + * Note: the fixup is used for the LKGS instruction, but
> + * it needs to be attached to the primary instruction sequence
> + * as it isn't something that gets patched.
> + *
> + * %rax is provided to the assembly routine as a scratch
> + * register.
> + */
> + alternative_io("1: call asm_load_gs_index\n"
> + ".pushsection \".fixup\",\"ax\"\n"
> + "2: xorl %k[sel], %k[sel]\n"
> + " jmp 1b\n"
> + ".popsection\n"
> + _ASM_EXTABLE(1b, 2b),
> + _ASM_BYTES(0x3e) LKGS_DI,
> + X86_FEATURE_LKGS,
> + ASM_OUTPUT2([sel] "+D" (sel), ASM_CALL_CONSTRAINT),
> + ASM_NO_INPUT_CLOBBER(_ASM_AX));
> }
I'm very sure none of this was tested... the .fixup section hasn't
existed for almost a year now.
e5eefda5aa51 ("x86: Remove .fixup section")
On Thu, Oct 06, 2022 at 08:40:41AM -0700, Xin Li wrote:
> static inline void native_load_gs_index(unsigned int selector)
> {
> + u16 sel = selector;
> +
> + /*
> + * Note: the fixup is used for the LKGS instruction, but
> + * it needs to be attached to the primary instruction sequence
> + * as it isn't something that gets patched.
> + *
> + * %rax is provided to the assembly routine as a scratch
> + * register.
> + */
> + alternative_io("1: call asm_load_gs_index\n"
> + ".pushsection \".fixup\",\"ax\"\n"
> + "2: xorl %k[sel], %k[sel]\n"
> + " jmp 1b\n"
> + ".popsection\n"
> + _ASM_EXTABLE(1b, 2b),
> + _ASM_BYTES(0x3e) LKGS_DI,
> + X86_FEATURE_LKGS,
> + ASM_OUTPUT2([sel] "+D" (sel), ASM_CALL_CONSTRAINT),
> + ASM_NO_INPUT_CLOBBER(_ASM_AX));
> }
Something like so perhaps?
asm_inline volatile("1:\n"
ALTERNATIVE("call asm_load_gs_index\n",
LKGS_DI,
X86_FEATURE_LKGS)
_ASM_EXTABLE_TYPE_REG(1b, 1b, EX_TYPE_ZERO_REG, %k[sel])
: ASM_CALL_CONSTRAINT
: [sel] "D" (sel)
: "memory", _ASM_AX);
(completely untested, not even seen a compiler upclose)
On Thu, Oct 6, 2022 at 12:13 PM Xin Li <[email protected]> wrote:
>
> From: "H. Peter Anvin (Intel)" <[email protected]>
>
> <asm/cpufeature.h> depends on <asm/special_insns.h>, so in order to be
> able to use alternatives in native_load_gs_index(), factor it out into
> a separate header file.
>
> Signed-off-by: H. Peter Anvin (Intel) <[email protected]>
> Signed-off-by: Xin Li <[email protected]>
This could be moved into <asm/segment.h> instead of creating a new header file.
--
Brian Gerst
On Thu, Oct 6, 2022 at 12:19 PM Xin Li <[email protected]> wrote:
>
> From: "H. Peter Anvin (Intel)" <[email protected]>
>
> The LKGS instruction atomically loads a segment descriptor into the
> %gs descriptor registers, *except* that %gs.base is unchanged, and the
> base is instead loaded into MSR_IA32_KERNEL_GS_BASE, which is exactly
> what we want this function to do.
>
> Signed-off-by: H. Peter Anvin (Intel) <[email protected]>
> Signed-off-by: Xin Li <[email protected]>
> ---
> arch/x86/include/asm/gsseg.h | 28 +++++++++++++++++++++++++++-
> 1 file changed, 27 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/include/asm/gsseg.h b/arch/x86/include/asm/gsseg.h
> index 5e3b56a17098..b8a6a98d88b8 100644
> --- a/arch/x86/include/asm/gsseg.h
> +++ b/arch/x86/include/asm/gsseg.h
> @@ -3,15 +3,41 @@
> #define _ASM_X86_GSSEG_H
>
> #include <linux/types.h>
> +
> +#include <asm/asm.h>
> +#include <asm/cpufeature.h>
> +#include <asm/alternative.h>
> #include <asm/processor.h>
> +#include <asm/nops.h>
>
> #ifdef CONFIG_X86_64
>
> extern asmlinkage void asm_load_gs_index(u16 selector);
>
> +#define LKGS_DI _ASM_BYTES(0xf2,0x0f,0x00,0xf7)
> +
> static inline void native_load_gs_index(unsigned int selector)
> {
> - asm_load_gs_index(selector);
> + u16 sel = selector;
> +
> + /*
> + * Note: the fixup is used for the LKGS instruction, but
> + * it needs to be attached to the primary instruction sequence
> + * as it isn't something that gets patched.
> + *
> + * %rax is provided to the assembly routine as a scratch
> + * register.
> + */
> + alternative_io("1: call asm_load_gs_index\n"
> + ".pushsection \".fixup\",\"ax\"\n"
> + "2: xorl %k[sel], %k[sel]\n"
> + " jmp 1b\n"
> + ".popsection\n"
> + _ASM_EXTABLE(1b, 2b),
> + _ASM_BYTES(0x3e) LKGS_DI,
> + X86_FEATURE_LKGS,
> + ASM_OUTPUT2([sel] "+D" (sel), ASM_CALL_CONSTRAINT),
> + ASM_NO_INPUT_CLOBBER(_ASM_AX));
> }
>
> #endif /* CONFIG_X86_64 */
> --
> 2.34.1
There are not that many call sites, so using something like this
(incorporating Peter Z's suggestion for the exception handler) would
be better from a code readability perspective vs. a tiny increase in
code size.
if (static_cpu_has(X86_FEATURE_LKGS))
asm volatile("1: " LKGS_DI
_ASM_EXTABLE_TYPE_REG(1b, 1b,
EX_TYPE_ZERO_REG, %k[sel])
: [sel] "+D" (sel) : : "memory");
else
asm_load_gs_index(sel);
--
Brian Gerst
On October 7, 2022 7:50:01 AM PDT, Peter Zijlstra <[email protected]> wrote:
>On Thu, Oct 06, 2022 at 08:40:39AM -0700, Xin Li wrote:
>> SYM_FUNC_START(asm_load_gs_index)
>> FRAME_BEGIN
>> + pushf
>> + pop %rax
>> + andl $X86_EFLAGS_IF, %eax /* Interrupts enabled? */
>> + jz 1f
>> + cli
>> +1:
>
>Why the pop,andl,jz ? AFAICT our arch_local_irq_save() doesn't even
>bother with that, why does this function.
The pop and the and are needed for the sti anyway, and so might as well be leveraged here, too.
On October 7, 2022 7:47:09 AM PDT, Peter Zijlstra <[email protected]> wrote:
>On Thu, Oct 06, 2022 at 08:40:41AM -0700, Xin Li wrote:
>
>> static inline void native_load_gs_index(unsigned int selector)
>> {
>> + u16 sel = selector;
>> +
>> + /*
>> + * Note: the fixup is used for the LKGS instruction, but
>> + * it needs to be attached to the primary instruction sequence
>> + * as it isn't something that gets patched.
>> + *
>> + * %rax is provided to the assembly routine as a scratch
>> + * register.
>> + */
>> + alternative_io("1: call asm_load_gs_index\n"
>> + ".pushsection \".fixup\",\"ax\"\n"
>> + "2: xorl %k[sel], %k[sel]\n"
>> + " jmp 1b\n"
>> + ".popsection\n"
>> + _ASM_EXTABLE(1b, 2b),
>> + _ASM_BYTES(0x3e) LKGS_DI,
>> + X86_FEATURE_LKGS,
>> + ASM_OUTPUT2([sel] "+D" (sel), ASM_CALL_CONSTRAINT),
>> + ASM_NO_INPUT_CLOBBER(_ASM_AX));
>> }
>
>I'm very sure none of this was tested... the .fixup section hasn't
>existed for almost a year now.
>
> e5eefda5aa51 ("x86: Remove .fixup section")
Xin, what did you use as the forward-porting baseline?
On October 7, 2022 8:40:41 AM PDT, Brian Gerst <[email protected]> wrote:
>On Thu, Oct 6, 2022 at 12:13 PM Xin Li <[email protected]> wrote:
>>
>> From: "H. Peter Anvin (Intel)" <[email protected]>
>>
>> <asm/cpufeature.h> depends on <asm/special_insns.h>, so in order to be
>> able to use alternatives in native_load_gs_index(), factor it out into
>> a separate header file.
>>
>> Signed-off-by: H. Peter Anvin (Intel) <[email protected]>
>> Signed-off-by: Xin Li <[email protected]>
>
>This could be moved into <asm/segment.h> instead of creating a new header file.
>
>--
>Brian Gerst
At least at the time I wrote the code, it could not, without creating yet another circular header file dependency.
> > + alternative_io("1: call asm_load_gs_index\n"
> > + ".pushsection \".fixup\",\"ax\"\n"
> > + "2: xorl %k[sel], %k[sel]\n"
> > + " jmp 1b\n"
> > + ".popsection\n"
> > + _ASM_EXTABLE(1b, 2b),
> > + _ASM_BYTES(0x3e) LKGS_DI,
> > + X86_FEATURE_LKGS,
> > + ASM_OUTPUT2([sel] "+D" (sel), ASM_CALL_CONSTRAINT),
> > + ASM_NO_INPUT_CLOBBER(_ASM_AX));
> > }
>
> I'm very sure none of this was tested... the .fixup section hasn't existed for
> almost a year now.
Weird, did you ever check a kernel dump?
>
> e5eefda5aa51 ("x86: Remove .fixup section")
> >> + alternative_io("1: call asm_load_gs_index\n"
> >> + ".pushsection \".fixup\",\"ax\"\n"
> >> + "2: xorl %k[sel], %k[sel]\n"
> >> + " jmp 1b\n"
> >> + ".popsection\n"
> >> + _ASM_EXTABLE(1b, 2b),
> >> + _ASM_BYTES(0x3e) LKGS_DI,
> >> + X86_FEATURE_LKGS,
> >> + ASM_OUTPUT2([sel] "+D" (sel), ASM_CALL_CONSTRAINT),
> >> + ASM_NO_INPUT_CLOBBER(_ASM_AX));
> >> }
> >
> >I'm very sure none of this was tested... the .fixup section hasn't
> >existed for almost a year now.
> >
> > e5eefda5aa51 ("x86: Remove .fixup section")
>
> Xin, what did you use as the forward-porting baseline?
The 6.0 release, and my kernel dump shows me that a fixup section is there; a fixup section is created anyway if we do .pushsection ".fixup".
On October 7, 2022 11:07:34 AM PDT, "Li, Xin3" <[email protected]> wrote:
>> >> + alternative_io("1: call asm_load_gs_index\n"
>> >> + ".pushsection \".fixup\",\"ax\"\n"
>> >> + "2: xorl %k[sel], %k[sel]\n"
>> >> + " jmp 1b\n"
>> >> + ".popsection\n"
>> >> + _ASM_EXTABLE(1b, 2b),
>> >> + _ASM_BYTES(0x3e) LKGS_DI,
>> >> + X86_FEATURE_LKGS,
>> >> + ASM_OUTPUT2([sel] "+D" (sel), ASM_CALL_CONSTRAINT),
>> >> + ASM_NO_INPUT_CLOBBER(_ASM_AX));
>> >> }
>> >
>> >I'm very sure none of this was tested... the .fixup section hasn't
>> >existed for almost a year now.
>> >
>> > e5eefda5aa51 ("x86: Remove .fixup section")
>>
>> Xin, what did you use as the forward-porting baseline?
>
>6.0 release, and my kernel dump shows me a fixup section is there, and a fixup section is created anyway if we do "pushsection "\.fixup\"".
>
>
>
Yeah. .fixup is really Just Another Text Section ™, so it is probably not surprising if it has crept back in.
On Fri, Oct 07, 2022 at 06:01:06PM +0000, Li, Xin3 wrote:
> > > + alternative_io("1: call asm_load_gs_index\n"
> > > + ".pushsection \".fixup\",\"ax\"\n"
> > > + "2: xorl %k[sel], %k[sel]\n"
> > > + " jmp 1b\n"
> > > + ".popsection\n"
> > > + _ASM_EXTABLE(1b, 2b),
> > > + _ASM_BYTES(0x3e) LKGS_DI,
> > > + X86_FEATURE_LKGS,
> > > + ASM_OUTPUT2([sel] "+D" (sel), ASM_CALL_CONSTRAINT),
> > > + ASM_NO_INPUT_CLOBBER(_ASM_AX));
> > > }
> >
> > I'm very sure none of this was tested... the .fixup section hasn't existed for
> > almost a year now.
>
> Weird, did you ever check a kernel dump?
$ readelf -WS defconfig-build/vmlinux | grep fixup
[ 5] .pci_fixup PROGBITS ffffffff826a5350 18a5350 003570 00 A 0 0 16
[ 6] .rela.pci_fixup RELA 0000000000000000 360c388 005028 18 I 60 5 8
In fact, when I add one I get:
ld: warning: orphan section `.fixup' from `arch/x86/kernel/traps.o' being placed in section `.fixup'
On October 7, 2022 12:24:13 PM PDT, Peter Zijlstra <[email protected]> wrote:
>On Fri, Oct 07, 2022 at 06:01:06PM +0000, Li, Xin3 wrote:
>> > > + alternative_io("1: call asm_load_gs_index\n"
>> > > + ".pushsection \".fixup\",\"ax\"\n"
>> > > + "2: xorl %k[sel], %k[sel]\n"
>> > > + " jmp 1b\n"
>> > > + ".popsection\n"
>> > > + _ASM_EXTABLE(1b, 2b),
>> > > + _ASM_BYTES(0x3e) LKGS_DI,
>> > > + X86_FEATURE_LKGS,
>> > > + ASM_OUTPUT2([sel] "+D" (sel), ASM_CALL_CONSTRAINT),
>> > > + ASM_NO_INPUT_CLOBBER(_ASM_AX));
>> > > }
>> >
>> > I'm very sure none of this was tested... the .fixup section hasn't existed for
>> > almost a year now.
>>
>> Weird, did you ever check a kernel dump?
>
>$ readelf -WS defconfig-build/vmlinux | grep fixup
>[ 5] .pci_fixup PROGBITS ffffffff826a5350 18a5350 003570 00 A 0 0 16
>[ 6] .rela.pci_fixup RELA 0000000000000000 360c388 005028 18 I 60 5 8
>
>In fact, when I add one I get:
>
>ld: warning: orphan section `.fixup' from `arch/x86/kernel/traps.o' being placed in section `.fixup'
Perhaps the two of you need to compare configurations?
On Fri, Oct 07, 2022 at 01:03:12PM -0700, H. Peter Anvin wrote:
> On October 7, 2022 12:24:13 PM PDT, Peter Zijlstra <[email protected]> wrote:
> >On Fri, Oct 07, 2022 at 06:01:06PM +0000, Li, Xin3 wrote:
> >> > > + alternative_io("1: call asm_load_gs_index\n"
> >> > > + ".pushsection \".fixup\",\"ax\"\n"
> >> > > + "2: xorl %k[sel], %k[sel]\n"
> >> > > + " jmp 1b\n"
> >> > > + ".popsection\n"
> >> > > + _ASM_EXTABLE(1b, 2b),
> >> > > + _ASM_BYTES(0x3e) LKGS_DI,
> >> > > + X86_FEATURE_LKGS,
> >> > > + ASM_OUTPUT2([sel] "+D" (sel), ASM_CALL_CONSTRAINT),
> >> > > + ASM_NO_INPUT_CLOBBER(_ASM_AX));
> >> > > }
> >> >
> >> > I'm very sure none of this was tested... the .fixup section hasn't existed for
> >> > almost a year now.
> >>
> >> Weird, did you ever check a kernel dump?
> >
> >$ readelf -WS defconfig-build/vmlinux | grep fixup
> >[ 5] .pci_fixup PROGBITS ffffffff826a5350 18a5350 003570 00 A 0 0 16
> >[ 6] .rela.pci_fixup RELA 0000000000000000 360c388 005028 18 I 60 5 8
> >
> >In fact, when I add one I get:
> >
> >ld: warning: orphan section `.fixup' from `arch/x86/kernel/traps.o' being placed in section `.fixup'
>
> Perhaps the two of you need to compare confugurations?
Whatever for? I know the robots report this warning because there was
one from the KVM cross-merge when the .fixup removal went in. It got
reported and fixed and that was the last of it.
Anyway; try:
$ git grep "\.fixup" arch/x86/
there isn't a single usage.
Andrew Cooper suggested upgrading the orphan section warning to a hard
link error, orphan sections are bad regardless.
On 10/7/22 13:23, Peter Zijlstra wrote:
>>
>> Perhaps the two of you need to compare confugurations?
>
> Whatever for? I know the robots report this warning because there was
> one from the KVM cross-merge when the .fixup removal went in. It got
> reported and fixed and that was the last of it.
>
To help track down what appears to be a problem?
> Anyway; try:
>
> $ git grep "\.fixup" arch/x86/
>
> there isn't a single usage.
>
> Andrew Cooper suggested upgrading the orphan section warning to a hard
> link error, orphan sections are bad regardless.
>
Agreed 1000%. This is a no-brainer. From IRC:
<andyhhp> -LDFLAGS_vmlinux += --orphan-handling=warn
<andyhhp> +LDFLAGS_vmlinux += --orphan-handling=error
-hpa
> On October 7, 2022 8:40:41 AM PDT, Brian Gerst <[email protected]> wrote:
> >On Thu, Oct 6, 2022 at 12:13 PM Xin Li <[email protected]> wrote:
> >>
> >> From: "H. Peter Anvin (Intel)" <[email protected]>
> >>
> >> <asm/cpufeature.h> depends on <asm/special_insns.h>, so in order to
> >> be able to use alternatives in native_load_gs_index(), factor it out
> >> into a separate header file.
> >>
> >> Signed-off-by: H. Peter Anvin (Intel) <[email protected]>
> >> Signed-off-by: Xin Li <[email protected]>
> >
> >This could be moved into <asm/segment.h> instead of creating a new header
> file.
Good suggestion. However, I still prefer to keep the GS segment in its own header file:
1) It's a special segment for the x86_64 kernel, and it's more readable to keep it separate.
2) segment.h is included in too many files, but gsseg.h in only 4, so we avoid header pollution.
Maybe we should even factor the GS segment out of segment.h for x86_64.
Xin
> >
> >--
> >Brian Gerst
>
> At least at the time I wrote the code, it could not, without creating yet another
> circular header file dependency.
> Something like so perhaps?
>
> asm_inline volatile("1:\n"
> ALTERNATIVE("call asm_load_gs_index\n",
> LKGS_DI,
> X86_FEATURE_LKGS)
> _ASM_EXTABLE_TYPE_REG(1b, 1b,
> EX_TYPE_ZERO_REG, %k[sel])
> : ASM_CALL_CONSTRAINT
> : [sel] "D" (sel)
> : "memory", _ASM_AX);
>
>
> (completely untested, not even seen a compiler upclose)
It compiles (after adding "0x3e" to make it 5 bytes to match the size of "call asm_load_gs_index") and passes my fixup test, thanks!
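
For reference, the combined form under discussion would look roughly like
this (a sketch only, merging Peter's suggestion with the 0x3e padding byte
mentioned above, and with the selector as an in/out operand as in Brian's
variant since the fixup zeroes it; not necessarily the final patch):

	static inline void native_load_gs_index(unsigned int selector)
	{
		u16 sel = selector;

		asm_inline volatile("1:\n"
				    ALTERNATIVE("call asm_load_gs_index\n",
						_ASM_BYTES(0x3e) LKGS_DI,
						X86_FEATURE_LKGS)
				    _ASM_EXTABLE_TYPE_REG(1b, 1b, EX_TYPE_ZERO_REG, %k[sel])
				    : [sel] "+D" (sel), ASM_CALL_CONSTRAINT
				    :
				    : "memory", _ASM_AX);
	}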
> > + alternative_io("1: call asm_load_gs_index\n"
> > + ".pushsection \".fixup\",\"ax\"\n"
> > + "2: xorl %k[sel], %k[sel]\n"
> > + " jmp 1b\n"
> > + ".popsection\n"
> > + _ASM_EXTABLE(1b, 2b),
> > + _ASM_BYTES(0x3e) LKGS_DI,
> > + X86_FEATURE_LKGS,
> > + ASM_OUTPUT2([sel] "+D" (sel), ASM_CALL_CONSTRAINT),
> > + ASM_NO_INPUT_CLOBBER(_ASM_AX));
> > }
> >
> > #endif /* CONFIG_X86_64 */
> > --
> > 2.34.1
>
> There are not that many call sites, so using something like this (incorporating
> Peter Z's suggestion for the exception handler) would be better from a code
> readability perspective vs. a tiny increase in code size.
The existing approach patches the binary code, thus we don't need to check the feature at runtime.
>
> if (static_cpu_has(X86_FEATURE_LKGS))
> asm volatile("1: " LKGS_DI
> _ASM_EXTABLE_TYPE_REG(1b, 1b, EX_TYPE_ZERO_REG,
> %k[sel])
> : [sel] "+D" (sel) : : "memory");
> else
> asm_load_gs_index(sel);
> > Andrew Cooper suggested upgrading the orphan section warning to a hard
> > link error, orphan sections are bad regardless.
> >
>
> Agreed 1000%. This is a no-brainer. From IRC:
>
>
> <andyhhp> -LDFLAGS_vmlinux += --orphan-handling=warn
> <andyhhp> +LDFLAGS_vmlinux += --orphan-handling=error
Who is going to make the change?
On October 7, 2022 10:32:31 PM PDT, "Li, Xin3" <[email protected]> wrote:
>> > Andrew Cooper suggested upgrading the orphan section warning to a hard
>> > link error, orphan sections are bad regardless.
>> >
>>
>> Agreed 1000%. This is a no-brainer. From IRC:
>>
>>
>> <andyhhp> -LDFLAGS_vmlinux += --orphan-handling=warn
>> <andyhhp> +LDFLAGS_vmlinux += --orphan-handling=error
>
>Who is going to make the change?
>
Why don't you?
On Sat, Oct 8, 2022 at 1:40 AM Li, Xin3 <[email protected]> wrote:
>
> > > + alternative_io("1: call asm_load_gs_index\n"
> > > + ".pushsection \".fixup\",\"ax\"\n"
> > > + "2: xorl %k[sel], %k[sel]\n"
> > > + " jmp 1b\n"
> > > + ".popsection\n"
> > > + _ASM_EXTABLE(1b, 2b),
> > > + _ASM_BYTES(0x3e) LKGS_DI,
> > > + X86_FEATURE_LKGS,
> > > + ASM_OUTPUT2([sel] "+D" (sel), ASM_CALL_CONSTRAINT),
> > > + ASM_NO_INPUT_CLOBBER(_ASM_AX));
> > > }
> > >
> > > #endif /* CONFIG_X86_64 */
> > > --
> > > 2.34.1
> >
> > There are not that many call sites, so using something like this (incorporating
> > Peter Z's suggestion for the exception handler) would be better from a code
> > readability perspective vs. a tiny increase in code size.
>
> The existing approach patches the binary code thus we don't need to check it at runtime.
static_cpu_has() uses alternatives to patch the branch, so there is no
runtime check after early boot.
--
Brian Gerst
> On October 7, 2022 10:32:31 PM PDT, "Li, Xin3" <[email protected]> wrote:
> >> > Andrew Cooper suggested upgrading the orphan section warning to a
> >> > hard link error, orphan sections are bad regardless.
> >> >
> >>
> >> Agreed 1000%. This is a no-brainer. From IRC:
> >>
> >>
> >> <andyhhp> -LDFLAGS_vmlinux += --orphan-handling=warn
> >> <andyhhp> +LDFLAGS_vmlinux += --orphan-handling=error
> >
> >Who is going to make the change?
> >
>
> Why don't you?
Will do.
Xin
> > > There are not that many call sites, so using something like this
> > > (incorporating Peter Z's suggestion for the exception handler) would
> > > be better from a code readability perspective vs. a tiny increase in code size.
> >
> > The existing approach patches the binary code thus we don't need to check it
> at runtime.
>
> static_cpu_has() uses alternatives to patch the branch, so there is no runtime
> check after early boot.
>
Sorry, didn't know that, thanks for pointing it out.
If we prefer static_cpu_has, are you asking to replace all alternative macros with it?
Xin
On October 9, 2022 9:32:34 PM PDT, "Li, Xin3" <[email protected]> wrote:
>> > > There are not that many call sites, so using something like this
>> > > (incorporating Peter Z's suggestion for the exception handler) would
>> > > be better from a code readability perspective vs. a tiny increase in code size.
>> >
>> > The existing approach patches the binary code thus we don't need to check it
>> at runtime.
>>
>> static_cpu_has() uses alternatives to patch the branch, so there is no runtime
>> check after early boot.
>>
>
>Sorry, didn't know it, thanks for point it out.
>
>If we prefer static_cpu_has, are you asking to replace all alternative macros with it?
>
>Xin
>
>
Honestly, it seems to me to be more than a bit excessive. The code might be nontrivial, but *with proper commenting* it should be perfectly understandable...
On Mon, Oct 10, 2022 at 04:32:34AM +0000, Li, Xin3 wrote:
> > > > There are not that many call sites, so using something like this
> > > > (incorporating Peter Z's suggestion for the exception handler) would
> > > > be better from a code readability perspective vs. a tiny increase in code size.
> > >
> > > The existing approach patches the binary code thus we don't need to check it
> > at runtime.
> >
> > static_cpu_has() uses alternatives to patch the branch, so there is no runtime
> > check after early boot.
> >
>
> Sorry, didn't know it, thanks for point it out.
>
> If we prefer static_cpu_has, are you asking to replace all alternative macros with it?
No; the only reason to do it here would be to unconfuse that exception
thing. But even there I'm not convinced.
Yes, Brian's code is much easier to read, but code-gen is quite terrible
(also, my binutils can't seem to decode this -- GNU objdump (GNU
Binutils for Debian) 2.38.90.20220713)
> > Andrew Cooper suggested upgrading the orphan section warning to a hard
> > link error, orphan sections are bad regardless.
> >
>
> Agreed 1000%. This is a no-brainer. From IRC:
>
>
> <andyhhp> -LDFLAGS_vmlinux += --orphan-handling=warn
> <andyhhp> +LDFLAGS_vmlinux += --orphan-handling=error
There is an arch-independent config option CONFIG_LD_ORPHAN_WARN, which makes
the linker warn on orphan (implicitly placed) sections; without it there is no
warning at all.
CONFIG_LD_ORPHAN_WARN depends on ARCH_WANT_LD_ORPHAN_WARN, and the archs that
define it (arm/arm64/mips/x86/...) then get ld warnings on orphan sections.
Should we promote the warning to an error only on x86?
>
> -hpa