UNTRAIN_RET_VM differs from UNTRAIN_RET in that it uses
X86_FEATURE_IBPB_ON_VMEXIT instead of X86_FEATURE_ENTRY_IBPB.

This cures the VMEXIT path potentially doing unret+IBPB or a double IBPB.
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
---
arch/x86/include/asm/nospec-branch.h | 11 +++++++++++
arch/x86/kernel/cpu/bugs.c | 17 ++++++++++++++++-
arch/x86/kvm/svm/vmenter.S | 7 ++-----
3 files changed, 29 insertions(+), 6 deletions(-)
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -299,6 +299,17 @@
#endif
.endm
+.macro UNTRAIN_RET_VM
+#if defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_IBPB_ENTRY) || \
+ defined(CONFIG_CALL_DEPTH_TRACKING) || defined(CONFIG_CPU_SRSO)
+ VALIDATE_UNRET_END
+ ALTERNATIVE_3 "", \
+ CALL_UNTRAIN_RET, X86_FEATURE_UNRET, \
+ "call entry_ibpb", X86_FEATURE_IBPB_ON_VMEXIT, \
+ __stringify(RESET_CALL_DEPTH), X86_FEATURE_CALL_DEPTH
+#endif
+.endm
+
.macro UNTRAIN_RET_FROM_CALL
#if defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_IBPB_ENTRY) || \
defined(CONFIG_CALL_DEPTH_TRACKING)
--- a/arch/x86/kvm/svm/vmenter.S
+++ b/arch/x86/kvm/svm/vmenter.S
@@ -222,10 +222,7 @@ SYM_FUNC_START(__svm_vcpu_run)
* because interrupt handlers won't sanitize 'ret' if the return is
* from the kernel.
*/
- UNTRAIN_RET
-
- /* SRSO */
- ALTERNATIVE "", "call entry_ibpb", X86_FEATURE_IBPB_ON_VMEXIT
+ UNTRAIN_RET_VM
/*
* Clear all general purpose registers except RSP and RAX to prevent
@@ -362,7 +359,7 @@ SYM_FUNC_START(__svm_sev_es_vcpu_run)
* because interrupt handlers won't sanitize RET if the return is
* from the kernel.
*/
- UNTRAIN_RET
+ UNTRAIN_RET_VM
/* "Pop" @spec_ctrl_intercepted. */
pop %_ASM_BX
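
As an aside, not part of the patch: a C-ish pseudocode sketch of what the
VMEXIT path ends up doing once the alternative is patched. The helper names
below are made-up stand-ins for CALL_UNTRAIN_RET, "call entry_ibpb" and
RESET_CALL_DEPTH; the point is that exactly one action is selected, so
VMEXIT no longer does unret+IBPB or IBPB twice.

	/*
	 * Pseudocode only -- the real thing is the ALTERNATIVE_3 in
	 * UNTRAIN_RET_VM above.  In practice at most one of these features
	 * is set; if more than one were, the alternatives patching order
	 * decides, which this sketch does not model.
	 */
	static __always_inline void untrain_ret_vm_sketch(void)
	{
		if (cpu_feature_enabled(X86_FEATURE_UNRET))
			do_untrain_ret();	/* stand-in for CALL_UNTRAIN_RET */
		else if (cpu_feature_enabled(X86_FEATURE_IBPB_ON_VMEXIT))
			do_entry_ibpb();	/* stand-in for "call entry_ibpb" */
		else if (cpu_feature_enabled(X86_FEATURE_CALL_DEPTH))
			do_reset_call_depth();	/* stand-in for RESET_CALL_DEPTH */
		/* no feature set: the alternative stays a NOP */
	}
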
On Wed, Aug 09, 2023 at 09:12:25AM +0200, Peter Zijlstra wrote:
> +.macro UNTRAIN_RET_VM
> +#if defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_IBPB_ENTRY) || \
> + defined(CONFIG_CALL_DEPTH_TRACKING) || defined(CONFIG_CPU_SRSO)
Maybe can be simplified?
#if defined(CONFIG_RETHUNK) || defined(CONFIG_CPU_IBPB_ENTRY)
--
Josh
On Wed, Aug 09, 2023 at 09:50:04AM -0400, Josh Poimboeuf wrote:
> > +.macro UNTRAIN_RET_VM
> > +#if defined(CONFIG_CPU_UNRET_ENTRY) || defined(CONFIG_CPU_IBPB_ENTRY) || \
> > + defined(CONFIG_CALL_DEPTH_TRACKING) || defined(CONFIG_CPU_SRSO)
>
> Maybe can be simplified?
>
See patches 9 and 10 :-)
On Wed, Aug 09, 2023 at 04:06:44PM +0200, Peter Zijlstra wrote:
> > Maybe can be simplified?
> >
>
> See patches 9 and 10 :-)
Can still be further simplified to CONFIG_RETHUNK?
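
I.e., something like (untested, just shorthand for the suggestion):

#ifdef CONFIG_RETHUNK
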
--
Josh
On Wed, Aug 09, 2023 at 10:30:00AM -0400, Josh Poimboeuf wrote:
> Can still be further simplified to CONFIG_RETHUNK?
Hmm, probably, but my brain seems to be giving out. Let me do the whole
dinner and kids to bed thing and I'll look again later.
On Wed, Aug 09, 2023 at 09:12:25AM +0200, Peter Zijlstra wrote:
> UNTRAIN_RET_VM differs from UNTRAIN_RET in that it uses
> X86_FEATURE_IBPB_ON_VMEXIT instead of X86_FEATURE_ENTRY_IBPB.
>
> This cures the VMEXIT path potentially doing unret+IBPB or a double IBPB.
Can't - I have a separate flag for that and I set it only when !IBPB:
case SRSO_CMD_IBPB_ON_VMEXIT:
if (IS_ENABLED(CONFIG_CPU_SRSO)) {
if (!boot_cpu_has(X86_FEATURE_ENTRY_IBPB) && has_microcode) {
setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
But I like the separate macro.
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette
On Sun, Aug 13, 2023 at 12:36:57PM +0200, Borislav Petkov wrote:
> Can't - I have a separate flag for that and I set it only when !IBPB:
>
> case SRSO_CMD_IBPB_ON_VMEXIT:
> if (IS_ENABLED(CONFIG_CPU_SRSO)) {
> if (!boot_cpu_has(X86_FEATURE_ENTRY_IBPB) && has_microcode) {
> setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
>
Of course you can, just also set it with regular IBPB :-)
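
IOW, something like this (untested sketch against the fragment quoted above;
it just drops the !X86_FEATURE_ENTRY_IBPB check so the VMEXIT path always
keys off the one flag):

	case SRSO_CMD_IBPB_ON_VMEXIT:
		if (IS_ENABLED(CONFIG_CPU_SRSO)) {
			if (has_microcode) {
				/*
				 * Also set when ENTRY_IBPB is enabled;
				 * UNTRAIN_RET_VM then does the single IBPB on
				 * VMEXIT and nothing runs twice.
				 */
				setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
			}
		}
		break;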