2023-08-09 08:17:32

by Peter Zijlstra

Subject: [RFC][PATCH 05/17] x86/cpu: Cleanup the untrain mess

Since there can only be one active return_thunk, there only needs to be
one (matching) untrain_ret. It fundamentally doesn't make sense to
allow multiple untrain_ret at the same time.

Fold the three untrain methods into a single (temporary)
helper stub.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
---
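Rough sketch of what the dispatch boils down to after this patch (illustrative
only, not the literal macro expansion; at boot the alternatives machinery
patches in at most one arm per group, the rest stay NOPs):

	UNTRAIN_RET:
		call entry_untrain_ret		# X86_FEATURE_UNRET
		call entry_ibpb			# X86_FEATURE_ENTRY_IBPB
		RESET_CALL_DEPTH		# X86_FEATURE_CALL_DEPTH

	entry_untrain_ret:
		jmp zen_untrain_ret		# default
		jmp srso_untrain_ret		# X86_FEATURE_SRSO
		jmp srso_untrain_ret_alias	# X86_FEATURE_SRSO_ALIAS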
arch/x86/include/asm/nospec-branch.h | 19 +++++--------------
arch/x86/lib/retpoline.S | 7 +++++++
2 files changed, 12 insertions(+), 14 deletions(-)

--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -272,9 +272,9 @@
.endm

#ifdef CONFIG_CPU_UNRET_ENTRY
-#define CALL_ZEN_UNTRAIN_RET "call zen_untrain_ret"
+#define CALL_UNTRAIN_RET "call entry_untrain_ret"
#else
-#define CALL_ZEN_UNTRAIN_RET ""
+#define CALL_UNTRAIN_RET ""
#endif

/*
@@ -293,15 +293,10 @@
defined(CONFIG_CALL_DEPTH_TRACKING) || defined(CONFIG_CPU_SRSO)
VALIDATE_UNRET_END
ALTERNATIVE_3 "", \
- CALL_ZEN_UNTRAIN_RET, X86_FEATURE_UNRET, \
+ CALL_UNTRAIN_RET, X86_FEATURE_UNRET, \
"call entry_ibpb", X86_FEATURE_ENTRY_IBPB, \
__stringify(RESET_CALL_DEPTH), X86_FEATURE_CALL_DEPTH
#endif
-
-#ifdef CONFIG_CPU_SRSO
- ALTERNATIVE_2 "", "call srso_untrain_ret", X86_FEATURE_SRSO, \
- "call srso_untrain_ret_alias", X86_FEATURE_SRSO_ALIAS
-#endif
.endm

.macro UNTRAIN_RET_FROM_CALL
@@ -309,15 +304,10 @@
defined(CONFIG_CALL_DEPTH_TRACKING)
VALIDATE_UNRET_END
ALTERNATIVE_3 "", \
- CALL_ZEN_UNTRAIN_RET, X86_FEATURE_UNRET, \
+ CALL_UNTRAIN_RET, X86_FEATURE_UNRET, \
"call entry_ibpb", X86_FEATURE_ENTRY_IBPB, \
__stringify(RESET_CALL_DEPTH_FROM_CALL), X86_FEATURE_CALL_DEPTH
#endif
-
-#ifdef CONFIG_CPU_SRSO
- ALTERNATIVE_2 "", "call srso_untrain_ret", X86_FEATURE_SRSO, \
- "call srso_untrain_ret_alias", X86_FEATURE_SRSO_ALIAS
-#endif
.endm


@@ -349,6 +339,7 @@ extern void zen_untrain_ret(void);
extern void srso_untrain_ret(void);
extern void srso_untrain_ret_alias(void);

+extern void entry_untrain_ret(void);
extern void entry_ibpb(void);

extern void (*x86_return_thunk)(void);
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -268,6 +268,13 @@ SYM_CODE_END(srso_safe_ret)
SYM_FUNC_END(srso_untrain_ret)
__EXPORT_THUNK(srso_untrain_ret)

+SYM_FUNC_START(entry_untrain_ret)
+ ALTERNATIVE_2 "jmp zen_untrain_ret", \
+ "jmp srso_untrain_ret", X86_FEATURE_SRSO, \
+ "jmp srso_untrain_ret_alias", X86_FEATURE_SRSO_ALIAS
+SYM_FUNC_END(entry_untrain_ret)
+__EXPORT_THUNK(entry_untrain_ret)
+
/*
* Both these do an unbalanced CALL to mess up the RSB, terminate with UD2
* to indicate noreturn.

2023-08-09 13:31:34

by Josh Poimboeuf

Subject: Re: [RFC][PATCH 05/17] x86/cpu: Cleanup the untrain mess

On Wed, Aug 09, 2023 at 09:12:23AM +0200, Peter Zijlstra wrote:
> Since there can only be one active return_thunk, there only needs to be
> one (matching) untrain_ret. It fundamentally doesn't make sense to
> allow multiple untrain_ret at the same time.
>
> Fold the three untrain methods into a single (temporary)
> helper stub.
>
> Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
> ---
> arch/x86/include/asm/nospec-branch.h | 19 +++++--------------
> arch/x86/lib/retpoline.S | 7 +++++++
> 2 files changed, 12 insertions(+), 14 deletions(-)
>
> --- a/arch/x86/include/asm/nospec-branch.h
> +++ b/arch/x86/include/asm/nospec-branch.h
> @@ -272,9 +272,9 @@
> .endm
>
> #ifdef CONFIG_CPU_UNRET_ENTRY
> -#define CALL_ZEN_UNTRAIN_RET "call zen_untrain_ret"
> +#define CALL_UNTRAIN_RET "call entry_untrain_ret"
> #else
> -#define CALL_ZEN_UNTRAIN_RET ""
> +#define CALL_UNTRAIN_RET ""
> #endif
>
> /*
> @@ -293,15 +293,10 @@
> defined(CONFIG_CALL_DEPTH_TRACKING) || defined(CONFIG_CPU_SRSO)
> VALIDATE_UNRET_END
> ALTERNATIVE_3 "", \
> - CALL_ZEN_UNTRAIN_RET, X86_FEATURE_UNRET, \
> + CALL_UNTRAIN_RET, X86_FEATURE_UNRET, \

SRSO doesn't have X86_FEATURE_UNRET set.
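IOW, an SRSO-only setup forces SRSO/SRSO_ALIAS and RETHUNK but not UNRET, so
the ALTERNATIVE_3 above keeps the empty default and entry_untrain_ret is never
reached. Sketch of the gap (illustrative, not from the patch):

	UNTRAIN_RET:			# on an SRSO-only part
		# X86_FEATURE_UNRET clear -> no "call entry_untrain_ret"
		# -> srso_untrain_ret{,_alias} never run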

--
Josh

2023-08-09 13:31:42

by Peter Zijlstra

Subject: Re: [RFC][PATCH 05/17] x86/cpu: Cleanup the untrain mess

On Wed, Aug 09, 2023 at 08:51:01AM -0400, Josh Poimboeuf wrote:
> On Wed, Aug 09, 2023 at 09:12:23AM +0200, Peter Zijlstra wrote:
> > Since there can only be one active return_thunk, there only needs to be
> > one (matching) untrain_ret. It fundamentally doesn't make sense to
> > allow multiple untrain_ret at the same time.
> >
> > Fold the three untrain methods into a single (temporary)
> > helper stub.
> >
> > Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
> > ---
> > arch/x86/include/asm/nospec-branch.h | 19 +++++--------------
> > arch/x86/lib/retpoline.S | 7 +++++++
> > 2 files changed, 12 insertions(+), 14 deletions(-)
> >
> > --- a/arch/x86/include/asm/nospec-branch.h
> > +++ b/arch/x86/include/asm/nospec-branch.h
> > @@ -272,9 +272,9 @@
> > .endm
> >
> > #ifdef CONFIG_CPU_UNRET_ENTRY
> > -#define CALL_ZEN_UNTRAIN_RET "call zen_untrain_ret"
> > +#define CALL_UNTRAIN_RET "call entry_untrain_ret"
> > #else
> > -#define CALL_ZEN_UNTRAIN_RET ""
> > +#define CALL_UNTRAIN_RET ""
> > #endif
> >
> > /*
> > @@ -293,15 +293,10 @@
> > defined(CONFIG_CALL_DEPTH_TRACKING) || defined(CONFIG_CPU_SRSO)
> > VALIDATE_UNRET_END
> > ALTERNATIVE_3 "", \
> > - CALL_ZEN_UNTRAIN_RET, X86_FEATURE_UNRET, \
> > + CALL_UNTRAIN_RET, X86_FEATURE_UNRET, \
>
> SRSO doesn't have X86_FEATURE_UNRET set.

Argh.. this stuff doesn't exist at the end anymore, but yeah, that's
unfortunate.

I'll see if I can find another intermediate step.

2023-08-09 14:06:09

by Peter Zijlstra

Subject: Re: [RFC][PATCH 05/17] x86/cpu: Cleanup the untrain mess

On Wed, Aug 09, 2023 at 03:12:43PM +0200, Peter Zijlstra wrote:
> On Wed, Aug 09, 2023 at 08:51:01AM -0400, Josh Poimboeuf wrote:
> > On Wed, Aug 09, 2023 at 09:12:23AM +0200, Peter Zijlstra wrote:
> > > Since there can only be one active return_thunk, there only needs to be
> > > one (matching) untrain_ret. It fundamentally doesn't make sense to
> > > allow multiple untrain_ret at the same time.
> > >
> > > Fold the three untrain methods into a single (temporary)
> > > helper stub.
> > >
> > > Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
> > > ---
> > > arch/x86/include/asm/nospec-branch.h | 19 +++++--------------
> > > arch/x86/lib/retpoline.S | 7 +++++++
> > > 2 files changed, 12 insertions(+), 14 deletions(-)
> > >
> > > --- a/arch/x86/include/asm/nospec-branch.h
> > > +++ b/arch/x86/include/asm/nospec-branch.h
> > > @@ -272,9 +272,9 @@
> > > .endm
> > >
> > > #ifdef CONFIG_CPU_UNRET_ENTRY
> > > -#define CALL_ZEN_UNTRAIN_RET "call zen_untrain_ret"
> > > +#define CALL_UNTRAIN_RET "call entry_untrain_ret"
> > > #else
> > > -#define CALL_ZEN_UNTRAIN_RET ""
> > > +#define CALL_UNTRAIN_RET ""
> > > #endif
> > >
> > > /*
> > > @@ -293,15 +293,10 @@
> > > defined(CONFIG_CALL_DEPTH_TRACKING) || defined(CONFIG_CPU_SRSO)
> > > VALIDATE_UNRET_END
> > > ALTERNATIVE_3 "", \
> > > - CALL_ZEN_UNTRAIN_RET, X86_FEATURE_UNRET, \
> > > + CALL_UNTRAIN_RET, X86_FEATURE_UNRET, \
> >
> > SRSO doesn't have X86_FEATURE_UNRET set.
>
> Argh.. this stuff doesn't exist at the end anymore, but yeah, that's
> unfortunate.
>
> I'll see if I can find another intermediate step.

I think simply setting UNRET for SRSO at this point will be sufficient.
That ensures the entry_untrain_ret thing gets called, and the
alternative there DTRT.

The feature isn't used anywhere else afaict.

Then later, after the fancy alternatives happen, this can be cleaned up
again.
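
That is, with UNRET forced from srso_select_mitigation() the whole chain on an
SRSO part would be, roughly sketched (only one arm gets patched in per
machine):

	UNTRAIN_RET:
		call entry_untrain_ret		# patched in because UNRET is now set

	entry_untrain_ret:
		jmp srso_untrain_ret		# X86_FEATURE_SRSO
		jmp srso_untrain_ret_alias	# X86_FEATURE_SRSO_ALIAS (family 0x19)

while retbleed-only parts keep taking the zen_untrain_ret default as before.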

2023-08-12 18:32:43

by Borislav Petkov

Subject: Re: [RFC][PATCH 05/17] x86/cpu: Cleanup the untrain mess

On Wed, Aug 09, 2023 at 03:26:35PM +0200, Peter Zijlstra wrote:
> I think simply setting UNRET for SRSO at this point will be sufficient.
> That ensures the entry_untrain_ret thing gets called, and the
> alternative there DTRT.
>
> The feature isn't used anywhere else afaict.
>
> Then later, after the fancy alternatives happen, this can be cleaned up
> again.

Yes, this fixes it for >= Zen3:

---
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 4b0a770fbacb..611d048f6415 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2448,6 +2448,7 @@ static void __init srso_select_mitigation(void)
* like ftrace, static_call, etc.
*/
setup_force_cpu_cap(X86_FEATURE_RETHUNK);
+ setup_force_cpu_cap(X86_FEATURE_UNRET);

if (boot_cpu_data.x86 == 0x19) {
setup_force_cpu_cap(X86_FEATURE_SRSO_ALIAS);

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette