Clang can inline emit_indirect_jump() and then fold constants, which
results in:
| vmlinux.o: warning: objtool: emit_bpf_dispatcher()+0x6a4: relocation to !ENDBR: .text.__x86.indirect_thunk+0x40
| vmlinux.o: warning: objtool: emit_bpf_dispatcher()+0x67d: relocation to !ENDBR: .text.__x86.indirect_thunk+0x40
| vmlinux.o: warning: objtool: emit_bpf_tail_call_indirect()+0x386: relocation to !ENDBR: .text.__x86.indirect_thunk+0x20
| vmlinux.o: warning: objtool: emit_bpf_tail_call_indirect()+0x35d: relocation to !ENDBR: .text.__x86.indirect_thunk+0x20
Suppress the optimization so that the compiler must emit a code
reference to the __x86_indirect_thunk_array[] base, rather than to a
folded per-thunk address.
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
---
arch/x86/net/bpf_jit_comp.c | 1 +
1 file changed, 1 insertion(+)
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -412,6 +412,7 @@ static void emit_indirect_jump(u8 **ppro
EMIT_LFENCE();
EMIT2(0xFF, 0xE0 + reg);
} else if (cpu_feature_enabled(X86_FEATURE_RETPOLINE)) {
+ OPTIMIZER_HIDE_VAR(reg);
emit_jump(&prog, &__x86_indirect_thunk_array[reg], ip);
} else
#endif
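For reference, OPTIMIZER_HIDE_VAR() is the kernel's per-variable
optimization barrier. In kernels of this era it is defined in
include/linux/compiler.h roughly as below: an empty asm statement that
takes the variable as both input and output in a register, so the
compiler must assume the value may have changed and can no longer
constant-fold it:

/* include/linux/compiler.h -- approximate definition, for context */
#define OPTIMIZER_HIDE_VAR(var)					\
	__asm__ ("" : "=r" (var) : "0" (var))

With reg hidden, &__x86_indirect_thunk_array[reg] can no longer
collapse into a single relocation such as
.text.__x86.indirect_thunk+0x40; the compiled kernel code has to load
the array base and add the index at runtime. (The offsets in the
warnings, +0x20 and +0x40, are consistent with thunk entries spaced 32
bytes apart for register numbers 1 and 2.)
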
On Tue, Apr 5, 2022 at 12:55 AM Peter Zijlstra <[email protected]> wrote:
>
> [...]
>
Looks good. Please cc bpf@vger and all bpf maintainers in the future.
We can take it through the bpf tree if you prefer.
On Tue, Apr 05, 2022 at 09:58:28AM -0700, Alexei Starovoitov wrote:
> On Tue, Apr 5, 2022 at 12:55 AM Peter Zijlstra <[email protected]> wrote:
> > [...]
>
> Looks good. Please cc bpf@vger and all bpf maintainers in the future.
Oh right, I'll go add an alias for that.
> We can take it through the bpf tree if you prefer.
I'll take it through the x86/urgent tree if you don't mind.
On Wed, Apr 6, 2022 at 3:46 AM Peter Zijlstra <[email protected]> wrote:
>
> On Tue, Apr 05, 2022 at 09:58:28AM -0700, Alexei Starovoitov wrote:
> > [...]
> >
> > Looks good. Please cc bpf@vger and all bpf maintainers in the future.
>
> Oh right, I'll go add an alias for that.
>
> > We can take it through the bpf tree if you prefer.
>
> I'll take it through the x86/urgent tree if you don't mind.
Sure. Then please add:
Acked-by: Alexei Starovoitov <[email protected]>
The following commit has been merged into the x86/urgent branch of tip:
Commit-ID: be8a096521ca1a252bf078b347f96ce94582612e
Gitweb: https://git.kernel.org/tip/be8a096521ca1a252bf078b347f96ce94582612e
Author: Peter Zijlstra <[email protected]>
AuthorDate: Mon, 28 Mar 2022 13:13:41 +02:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Thu, 07 Apr 2022 11:27:02 +02:00
x86,bpf: Avoid IBT objtool warning
Clang can inline emit_indirect_jump() and then fold constants, which
results in:
| vmlinux.o: warning: objtool: emit_bpf_dispatcher()+0x6a4: relocation to !ENDBR: .text.__x86.indirect_thunk+0x40
| vmlinux.o: warning: objtool: emit_bpf_dispatcher()+0x67d: relocation to !ENDBR: .text.__x86.indirect_thunk+0x40
| vmlinux.o: warning: objtool: emit_bpf_tail_call_indirect()+0x386: relocation to !ENDBR: .text.__x86.indirect_thunk+0x20
| vmlinux.o: warning: objtool: emit_bpf_tail_call_indirect()+0x35d: relocation to !ENDBR: .text.__x86.indirect_thunk+0x20
Suppress the optimization so that the compiler must emit a code
reference to the __x86_indirect_thunk_array[] base, rather than to a
folded per-thunk address.
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
---
arch/x86/net/bpf_jit_comp.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 8fe35ed..16b6efa 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -412,6 +412,7 @@ static void emit_indirect_jump(u8 **pprog, int reg, u8 *ip)
EMIT_LFENCE();
EMIT2(0xFF, 0xE0 + reg);
} else if (cpu_feature_enabled(X86_FEATURE_RETPOLINE)) {
+ OPTIMIZER_HIDE_VAR(reg);
emit_jump(&prog, &__x86_indirect_thunk_array[reg], ip);
} else
#endif
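
To illustrate the folding objtool was flagging, here is a minimal
user-space sketch; the names (thunk_array, emit_target, dispatch) are
hypothetical stand-ins, not kernel code:

/* Hypothetical stand-in for the thunk array. */
extern void (*thunk_array[16])(void);

static void emit_target(void **slot, int reg)
{
	*slot = &thunk_array[reg];	/* base + reg * sizeof(void *) */
}

void dispatch(void **slot)
{
	/*
	 * Once emit_target() is inlined with reg == 2, an optimizing
	 * compiler may emit the store as one relocation,
	 * "thunk_array+0x10" -- the analogue of the
	 * ".text.__x86.indirect_thunk+0x40" references objtool warned
	 * about.  An OPTIMIZER_HIDE_VAR(reg) before the use keeps reg
	 * opaque, so the base-plus-index form survives and the
	 * relocation stays pointed at the array base.
	 */
	emit_target(slot, 2);
}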