2020-04-28 19:21:40

by Peter Zijlstra

Subject: [PATCH v2 03/14] x86,smap: Fix smap_{save,restore}() alternatives

As reported by objtool:

lib/ubsan.o: warning: objtool: .altinstr_replacement+0x0: alternative modifies stack
lib/ubsan.o: warning: objtool: .altinstr_replacement+0x7: alternative modifies stack

the smap_{save,restore}() alternatives violate the (newly enforced)
rule on stack invariance. That is, because there is only a single
ORC table, it must be valid for every alternative. These alternatives
violate this, with the direct result that unwinds will not be correct
in between these calls.

[ Specifically, since we force SMAP on for objtool, running on !SMAP
hardware will observe a different stack layout and the ORC unwinder
will stumble. ]

So rewrite the functions to unconditionally save/restore the flags,
which gives an invariant stack layout irrespective of the SMAP state.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
---
arch/x86/include/asm/smap.h | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)

--- a/arch/x86/include/asm/smap.h
+++ b/arch/x86/include/asm/smap.h
@@ -57,16 +57,19 @@ static __always_inline unsigned long sma
{
unsigned long flags;

- asm volatile (ALTERNATIVE("", "pushf; pop %0; " __ASM_CLAC,
- X86_FEATURE_SMAP)
- : "=rm" (flags) : : "memory", "cc");
+ asm volatile ("# smap_save\n\t"
+ "pushf; pop %0"
+ : "=rm" (flags) : : "memory");
+
+ clac();

return flags;
}

static __always_inline void smap_restore(unsigned long flags)
{
- asm volatile (ALTERNATIVE("", "push %0; popf", X86_FEATURE_SMAP)
+ asm volatile ("# smap_restore\n\t"
+ "push %0; popf"
: : "g" (flags) : "memory", "cc");
}




2020-04-29 00:56:04

by Brian Gerst

Subject: Re: [PATCH v2 03/14] x86,smap: Fix smap_{save,restore}() alternatives

On Tue, Apr 28, 2020 at 3:21 PM Peter Zijlstra <[email protected]> wrote:
>
> As reported by objtool:
>
> lib/ubsan.o: warning: objtool: .altinstr_replacement+0x0: alternative modifies stack
> lib/ubsan.o: warning: objtool: .altinstr_replacement+0x7: alternative modifies stack
>
> the smap_{save,restore}() alternatives violate (the newly enforced)
> rule on stack invariance. That is, due to there only being a single
> ORC table it must be valid to any alternative. These alternatives
> violate this with the direct result that unwinds will not be correct
> in between these calls.
>
> [ In specific, since we force SMAP on for objtool, running on !SMAP
> hardware will observe a different stack-layout and the ORC unwinder
> will stumble. ]
>
> So rewrite the functions to unconditionally save/restore the flags,
> which gives an invariant stack layout irrespective of the SMAP state.
>
> Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
> ---
> arch/x86/include/asm/smap.h | 11 +++++++----
> 1 file changed, 7 insertions(+), 4 deletions(-)
>
> --- a/arch/x86/include/asm/smap.h
> +++ b/arch/x86/include/asm/smap.h
> @@ -57,16 +57,19 @@ static __always_inline unsigned long sma
> {
> unsigned long flags;
>
> - asm volatile (ALTERNATIVE("", "pushf; pop %0; " __ASM_CLAC,
> - X86_FEATURE_SMAP)
> - : "=rm" (flags) : : "memory", "cc");
> + asm volatile ("# smap_save\n\t"
> + "pushf; pop %0"
> + : "=rm" (flags) : : "memory");
> +
> + clac();
>
> return flags;
> }
>
> static __always_inline void smap_restore(unsigned long flags)
> {
> - asm volatile (ALTERNATIVE("", "push %0; popf", X86_FEATURE_SMAP)
> + asm volatile ("# smap_restore\n\t"
> + "push %0; popf"
> : : "g" (flags) : "memory", "cc");
> }

POPF is an expensive instruction that should be avoided if possible.
A better solution would be to have the alternative jump over the
push/pop when SMAP is disabled.

--
Brian Gerst

2020-04-29 08:35:15

by Peter Zijlstra

Subject: Re: [PATCH v2 03/14] x86,smap: Fix smap_{save,restore}() alternatives

On Tue, Apr 28, 2020 at 08:54:05PM -0400, Brian Gerst wrote:
> On Tue, Apr 28, 2020 at 3:21 PM Peter Zijlstra <[email protected]> wrote:
> >
> > As reported by objtool:
> >
> > lib/ubsan.o: warning: objtool: .altinstr_replacement+0x0: alternative modifies stack
> > lib/ubsan.o: warning: objtool: .altinstr_replacement+0x7: alternative modifies stack
> >
> > the smap_{save,restore}() alternatives violate (the newly enforced)
> > rule on stack invariance. That is, due to there only being a single
> > ORC table it must be valid to any alternative. These alternatives
> > violate this with the direct result that unwinds will not be correct
> > in between these calls.
> >
> > [ In specific, since we force SMAP on for objtool, running on !SMAP
> > hardware will observe a different stack-layout and the ORC unwinder
> > will stumble. ]
> >
> > So rewrite the functions to unconditionally save/restore the flags,
> > which gives an invariant stack layout irrespective of the SMAP state.
> >
> > Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
> > ---
> > arch/x86/include/asm/smap.h | 11 +++++++----
> > 1 file changed, 7 insertions(+), 4 deletions(-)
> >
> > --- a/arch/x86/include/asm/smap.h
> > +++ b/arch/x86/include/asm/smap.h
> > @@ -57,16 +57,19 @@ static __always_inline unsigned long sma
> > {
> > unsigned long flags;
> >
> > - asm volatile (ALTERNATIVE("", "pushf; pop %0; " __ASM_CLAC,
> > - X86_FEATURE_SMAP)
> > - : "=rm" (flags) : : "memory", "cc");
> > + asm volatile ("# smap_save\n\t"
> > + "pushf; pop %0"
> > + : "=rm" (flags) : : "memory");
> > +
> > + clac();
> >
> > return flags;
> > }
> >
> > static __always_inline void smap_restore(unsigned long flags)
> > {
> > - asm volatile (ALTERNATIVE("", "push %0; popf", X86_FEATURE_SMAP)
> > + asm volatile ("# smap_restore\n\t"
> > + "push %0; popf"
> > : : "g" (flags) : "memory", "cc");
> > }
>
> POPF is an expensive instruction that should be avoided if possible.
> A better solution would be to have the alternative jump over the
> push/pop when SMAP is disabled.

Yeah. I think I had that, but then confused myself again. I don't think
it matters much if you look at where it's used though.

Still, let me try the jmp thing again..

2020-04-29 10:21:03

by Peter Zijlstra

Subject: Re: [PATCH v2 03/14] x86,smap: Fix smap_{save,restore}() alternatives

On Wed, Apr 29, 2020 at 10:30:53AM +0200, Peter Zijlstra wrote:
> > POPF is an expensive instruction that should be avoided if possible.
> > A better solution would be to have the alternative jump over the
> > push/pop when SMAP is disabled.
>
> Yeah. I think I had that, but then confused myself again. I don't think
> it matters much if you look at where it's used though.
>
> Still, let me try the jmp thing again..

Here goes..

---
Subject: x86,smap: Fix smap_{save,restore}() alternatives
From: Peter Zijlstra <[email protected]>
Date: Tue Apr 28 19:57:59 CEST 2020

As reported by objtool:

lib/ubsan.o: warning: objtool: .altinstr_replacement+0x0: alternative modifies stack
lib/ubsan.o: warning: objtool: .altinstr_replacement+0x7: alternative modifies stack

the smap_{save,restore}() alternatives violate the (newly enforced)
rule on stack invariance. That is, because there is only a single
ORC table, it must be valid for every alternative. These alternatives
violate this, with the direct result that unwinds will not be correct
when the unwinder hits between the PUSH and POP instructions.

Rewrite the functions to only have a conditional jump.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
---
arch/x86/include/asm/smap.h | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)

--- a/arch/x86/include/asm/smap.h
+++ b/arch/x86/include/asm/smap.h
@@ -57,8 +57,10 @@ static __always_inline unsigned long sma
{
unsigned long flags;

- asm volatile (ALTERNATIVE("", "pushf; pop %0; " __ASM_CLAC,
- X86_FEATURE_SMAP)
+ asm volatile ("# smap_save\n\t"
+ ALTERNATIVE("jmp 1f", "", X86_FEATURE_SMAP)
+ "pushf; pop %0; " __ASM_CLAC "\n\t"
+ "1:"
: "=rm" (flags) : : "memory", "cc");

return flags;
@@ -66,7 +68,10 @@ static __always_inline unsigned long sma

static __always_inline void smap_restore(unsigned long flags)
{
- asm volatile (ALTERNATIVE("", "push %0; popf", X86_FEATURE_SMAP)
+ asm volatile ("# smap_restore\n\t"
+ ALTERNATIVE("jmp 1f", "", X86_FEATURE_SMAP)
+ "push %0; popf\n\t"
+ "1:"
: : "g" (flags) : "memory", "cc");
}

2020-04-29 12:16:22

by Brian Gerst

Subject: Re: [PATCH v2 03/14] x86,smap: Fix smap_{save,restore}() alternatives

On Wed, Apr 29, 2020 at 6:18 AM Peter Zijlstra <[email protected]> wrote:
>
> On Wed, Apr 29, 2020 at 10:30:53AM +0200, Peter Zijlstra wrote:
> > > POPF is an expensive instruction that should be avoided if possible.
> > > A better solution would be to have the alternative jump over the
> > > push/pop when SMAP is disabled.
> >
> > Yeah. I think I had that, but then confused myself again. I don't think
> > it matters much if you look at where it's used though.
> >
> > Still, let me try the jmp thing again..
>
> Here goes..
>
> ---
> Subject: x86,smap: Fix smap_{save,restore}() alternatives
> From: Peter Zijlstra <[email protected]>
> Date: Tue Apr 28 19:57:59 CEST 2020
>
> As reported by objtool:
>
> lib/ubsan.o: warning: objtool: .altinstr_replacement+0x0: alternative modifies stack
> lib/ubsan.o: warning: objtool: .altinstr_replacement+0x7: alternative modifies stack
>
> the smap_{save,restore}() alternatives violate (the newly enforced)
> rule on stack invariance. That is, due to there only being a single
> ORC table it must be valid to any alternative. These alternatives
> violate this with the direct result that unwinds will not be correct
> when it hits between the PUSH and POP instructions.
>
> Rewrite the functions to only have a conditional jump.
>
> Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
> ---
> arch/x86/include/asm/smap.h | 11 ++++++++---
> 1 file changed, 8 insertions(+), 3 deletions(-)
>
> --- a/arch/x86/include/asm/smap.h
> +++ b/arch/x86/include/asm/smap.h
> @@ -57,8 +57,10 @@ static __always_inline unsigned long sma
> {
> unsigned long flags;
>
> - asm volatile (ALTERNATIVE("", "pushf; pop %0; " __ASM_CLAC,
> - X86_FEATURE_SMAP)
> + asm volatile ("# smap_save\n\t"
> + ALTERNATIVE("jmp 1f", "", X86_FEATURE_SMAP)
> + "pushf; pop %0; " __ASM_CLAC "\n\t"
> + "1:"
> : "=rm" (flags) : : "memory", "cc");
>
> return flags;
> @@ -66,7 +68,10 @@ static __always_inline unsigned long sma
>
> static __always_inline void smap_restore(unsigned long flags)
> {
> - asm volatile (ALTERNATIVE("", "push %0; popf", X86_FEATURE_SMAP)
> + asm volatile ("# smap_restore\n\t"
> + ALTERNATIVE("jmp 1f", "", X86_FEATURE_SMAP)
> + "push %0; popf\n\t"
> + "1:"
> : : "g" (flags) : "memory", "cc");
> }
>

Looks good. Alternatively, you could use static_cpu_has(X86_FEATURE_SMAP).

--
Brian Gerst

2020-05-01 18:24:45

by tip-bot2 for Tony Luck

Subject: [tip: objtool/core] x86,smap: Fix smap_{save,restore}() alternatives

The following commit has been merged into the objtool/core branch of tip:

Commit-ID: 1ff865e343c2b59469d7e41d370a980a3f972c71
Gitweb: https://git.kernel.org/tip/1ff865e343c2b59469d7e41d370a980a3f972c71
Author: Peter Zijlstra <[email protected]>
AuthorDate: Tue, 28 Apr 2020 19:57:59 +02:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Thu, 30 Apr 2020 20:14:31 +02:00

x86,smap: Fix smap_{save,restore}() alternatives

As reported by objtool:

lib/ubsan.o: warning: objtool: .altinstr_replacement+0x0: alternative modifies stack
lib/ubsan.o: warning: objtool: .altinstr_replacement+0x7: alternative modifies stack

the smap_{save,restore}() alternatives violate the (newly enforced)
rule on stack invariance. That is, because there is only a single
ORC table, it must be valid for every alternative. These alternatives
violate this, with the direct result that unwinds will not be correct
when the unwinder hits between the PUSH and POP instructions.

Rewrite the functions to only have a conditional jump.

Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Reviewed-by: Miroslav Benes <[email protected]>
Acked-by: Josh Poimboeuf <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
---
arch/x86/include/asm/smap.h | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/arch/x86/include/asm/smap.h b/arch/x86/include/asm/smap.h
index 27c47d1..8b58d69 100644
--- a/arch/x86/include/asm/smap.h
+++ b/arch/x86/include/asm/smap.h
@@ -57,8 +57,10 @@ static __always_inline unsigned long smap_save(void)
{
unsigned long flags;

- asm volatile (ALTERNATIVE("", "pushf; pop %0; " __ASM_CLAC,
- X86_FEATURE_SMAP)
+ asm volatile ("# smap_save\n\t"
+ ALTERNATIVE("jmp 1f", "", X86_FEATURE_SMAP)
+ "pushf; pop %0; " __ASM_CLAC "\n\t"
+ "1:"
: "=rm" (flags) : : "memory", "cc");

return flags;
@@ -66,7 +68,10 @@ static __always_inline unsigned long smap_save(void)

static __always_inline void smap_restore(unsigned long flags)
{
- asm volatile (ALTERNATIVE("", "push %0; popf", X86_FEATURE_SMAP)
+ asm volatile ("# smap_restore\n\t"
+ ALTERNATIVE("jmp 1f", "", X86_FEATURE_SMAP)
+ "push %0; popf\n\t"
+ "1:"
: : "g" (flags) : "memory", "cc");
}