From: Joan Bruguera Micó <[email protected]>
Fixes two issues that cause kernel panics when using the BPF JIT with
the call depth tracking / stuffing mitigation for Skylake processors
(`retbleed=stuff`). Both issues can be triggered by running simple
BPF programs (e.g. running the test suite should trigger both).

The first (a resubmit) fixes a trivial issue in the calculation of the
destination IP for CALL instructions when call depth tracking is enabled.
The second fixes the IP used for relocations, which broke with the
recently introduced %rip-relative addressing for PER_CPU_VAR().
Cc: Alexei Starovoitov <[email protected]>
Cc: Daniel Borkmann <[email protected]>
---
v2:
Simplify calculation of "ip".
Add more details to the commit message.
Joan Bruguera Micó (1):
x86/bpf: Fix IP for relocating call depth accounting
Uros Bizjak (1):
x86/bpf: Fix IP after emitting call depth accounting
arch/x86/include/asm/alternative.h | 4 ++--
arch/x86/kernel/callthunks.c | 4 ++--
arch/x86/net/bpf_jit_comp.c | 19 ++++++++-----------
3 files changed, 12 insertions(+), 15 deletions(-)
--
2.42.0
Adjust the IP passed to `emit_patch` so that it calculates the correct
offset for the CALL instruction when `x86_call_depth_emit_accounting`
emits code. Otherwise the emitted CALL skips the first instructions of
the target and most likely crashes.
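For reference, a minimal self-contained sketch of the displacement math
involved (simplified, not the verbatim emit_patch() code; the helper name
and 'thunk_size' are only for illustration):

#include <stdint.h>

/*
 * The rel32 of a "call rel32" (opcode 0xE8) located at 'ip' is relative
 * to the end of the 5-byte instruction.
 */
static int32_t call_rel32(intptr_t ip, intptr_t func)
{
	return (int32_t)(func - (ip + 5));
}

/*
 * If the SKL accounting thunk (thunk_size bytes) is emitted in front of
 * the CALL but 'ip' is not advanced, the CALL actually sits at
 * ip + thunk_size while rel32 was computed for ip, so execution lands
 * at func + thunk_size and skips the first bytes of the target.
 */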
Fixes: b2e9dfe54be4 ("x86/bpf: Emit call depth accounting if required")
Link: https://lore.kernel.org/lkml/[email protected]/
Co-developed-by: Joan Bruguera Micó <[email protected]>
Signed-off-by: Joan Bruguera Micó <[email protected]>
Signed-off-by: Uros Bizjak <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Daniel Borkmann <[email protected]>
---
v2: Simplify calculation of "ip".
---
arch/x86/net/bpf_jit_comp.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index a7ba8e178645..e55745f512e1 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -480,7 +480,7 @@ static int emit_call(u8 **pprog, void *func, void *ip)
static int emit_rsb_call(u8 **pprog, void *func, void *ip)
{
OPTIMIZER_HIDE_VAR(func);
- x86_call_depth_emit_accounting(pprog, func);
+ ip += x86_call_depth_emit_accounting(pprog, func);
return emit_patch(pprog, func, ip, 0xE8);
}
--
2.42.0
From: Joan Bruguera Micó <[email protected]>
The commit:

  59bec00ace28 ("x86/percpu: Introduce %rip-relative addressing to PER_CPU_VAR()")

made PER_CPU_VAR() use %rip-relative addressing, so the
INCREMENT_CALL_DEPTH macro and skl_call_thunk_template now contain
%rip-relative asm code. A follow-up commit:

  17bce3b2ae2d ("x86/callthunks: Handle %rip-relative relocations in call thunk template")

changed x86_call_depth_emit_accounting() to use apply_relocation(), but
mistakenly assumed that the code is patched in place (i.e. that the
destination of the relocation matches the address of the code), using
*pprog as the destination IP. This is not true for the call depth
accounting emitted by the BPF JIT, so the calculated address was wrong:
JIT-ed BPF programs on kernels with call depth tracking were broken and
usually caused a page fault.

Pass the destination IP when the BPF JIT emits call depth accounting.
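For reference, a simplified sketch of the %rip-relative fixup that a
relocation pass has to perform when code assembled for address 'src' is
copied to run at address 'dest' (not the exact apply_relocation()
implementation; the helper name is only for illustration):

#include <stdint.h>

/*
 * A %rip-relative operand encodes "target - address of the next insn".
 * When the instruction is copied from 'src' to 'dest', the displacement
 * must be rebased so that the copy still references the same absolute
 * target:  dest + insn_end + new_disp == src + insn_end + old_disp
 */
static int32_t rebase_disp32(int32_t old_disp, intptr_t src, intptr_t dest)
{
	return old_disp + (int32_t)(src - dest);
}

/*
 * Passing *pprog (the temporary, writable JIT buffer) as 'dest' instead
 * of the final executable address 'ip' rebases the displacement for the
 * wrong location, so the emitted thunk references a bogus address when
 * it actually runs, hence the page fault.
 */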
Fixes: 17bce3b2ae2d ("x86/callthunks: Handle %rip-relative relocations in call thunk template")
Signed-off-by: Joan Bruguera Micó <[email protected]>
Reviewed-by: Uros Bizjak <[email protected]>
Acked-by: Ingo Molnar <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Daniel Borkmann <[email protected]>
---
v2: Add more details to the commit message.
---
arch/x86/include/asm/alternative.h | 4 ++--
arch/x86/kernel/callthunks.c | 4 ++--
arch/x86/net/bpf_jit_comp.c | 19 ++++++++-----------
3 files changed, 12 insertions(+), 15 deletions(-)
diff --git a/arch/x86/include/asm/alternative.h b/arch/x86/include/asm/alternative.h
index fcd20c6dc7f9..67b68d0d17d1 100644
--- a/arch/x86/include/asm/alternative.h
+++ b/arch/x86/include/asm/alternative.h
@@ -117,7 +117,7 @@ extern void callthunks_patch_builtin_calls(void);
extern void callthunks_patch_module_calls(struct callthunk_sites *sites,
struct module *mod);
extern void *callthunks_translate_call_dest(void *dest);
-extern int x86_call_depth_emit_accounting(u8 **pprog, void *func);
+extern int x86_call_depth_emit_accounting(u8 **pprog, void *func, void *ip);
#else
static __always_inline void callthunks_patch_builtin_calls(void) {}
static __always_inline void
@@ -128,7 +128,7 @@ static __always_inline void *callthunks_translate_call_dest(void *dest)
return dest;
}
static __always_inline int x86_call_depth_emit_accounting(u8 **pprog,
- void *func)
+ void *func, void *ip)
{
return 0;
}
diff --git a/arch/x86/kernel/callthunks.c b/arch/x86/kernel/callthunks.c
index 30335182b6b0..e92ff0c11db8 100644
--- a/arch/x86/kernel/callthunks.c
+++ b/arch/x86/kernel/callthunks.c
@@ -314,7 +314,7 @@ static bool is_callthunk(void *addr)
return !bcmp(pad, insn_buff, tmpl_size);
}
-int x86_call_depth_emit_accounting(u8 **pprog, void *func)
+int x86_call_depth_emit_accounting(u8 **pprog, void *func, void *ip)
{
unsigned int tmpl_size = SKL_TMPL_SIZE;
u8 insn_buff[MAX_PATCH_LEN];
@@ -327,7 +327,7 @@ int x86_call_depth_emit_accounting(u8 **pprog, void *func)
return 0;
memcpy(insn_buff, skl_call_thunk_template, tmpl_size);
- apply_relocation(insn_buff, tmpl_size, *pprog,
+ apply_relocation(insn_buff, tmpl_size, ip,
skl_call_thunk_template, tmpl_size);
memcpy(*pprog, insn_buff, tmpl_size);
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index e55745f512e1..df5fac428408 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -480,7 +480,7 @@ static int emit_call(u8 **pprog, void *func, void *ip)
static int emit_rsb_call(u8 **pprog, void *func, void *ip)
{
OPTIMIZER_HIDE_VAR(func);
- ip += x86_call_depth_emit_accounting(pprog, func);
+ ip += x86_call_depth_emit_accounting(pprog, func, ip);
return emit_patch(pprog, func, ip, 0xE8);
}
@@ -1972,20 +1972,17 @@ st: if (is_imm8(insn->off))
/* call */
case BPF_JMP | BPF_CALL: {
- int offs;
+ u8 *ip = image + addrs[i - 1];
func = (u8 *) __bpf_call_base + imm32;
if (tail_call_reachable) {
RESTORE_TAIL_CALL_CNT(bpf_prog->aux->stack_depth);
- if (!imm32)
- return -EINVAL;
- offs = 7 + x86_call_depth_emit_accounting(&prog, func);
- } else {
- if (!imm32)
- return -EINVAL;
- offs = x86_call_depth_emit_accounting(&prog, func);
+ ip += 7;
}
- if (emit_call(&prog, func, image + addrs[i - 1] + offs))
+ if (!imm32)
+ return -EINVAL;
+ ip += x86_call_depth_emit_accounting(&prog, func, ip);
+ if (emit_call(&prog, func, ip))
return -EINVAL;
break;
}
@@ -2835,7 +2832,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
* Direct-call fentry stub, as such it needs accounting for the
* __fentry__ call.
*/
- x86_call_depth_emit_accounting(&prog, NULL);
+ x86_call_depth_emit_accounting(&prog, NULL, image);
}
EMIT1(0x55); /* push rbp */
EMIT3(0x48, 0x89, 0xE5); /* mov rbp, rsp */
--
2.42.0
Hello:
This series was applied to bpf/bpf.git (master)
by Alexei Starovoitov <[email protected]>:
On Mon, 1 Apr 2024 20:55:28 +0200 you wrote:
> From: Joan Bruguera Micó <[email protected]>
>
> Fixes two issues that cause kernel panics when using the BPF JIT with
> the call depth tracking / stuffing mitigation for Skylake processors
> (`retbleed=stuff`). Both issues can be triggered by running simple
> BPF programs (e.g. running the test suite should trigger both).
>
> [...]
Here is the summary with links:
- [RESEND,bpf,v2,1/2] x86/bpf: Fix IP after emitting call depth accounting
https://git.kernel.org/bpf/bpf/c/9d98aa088386
- [RESEND,bpf,v2,2/2] x86/bpf: Fix IP for relocating call depth accounting
https://git.kernel.org/bpf/bpf/c/6a537453000a
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html
* [email protected] <[email protected]> wrote:
> Hello:
>
> This series was applied to bpf/bpf.git (master)
> by Alexei Starovoitov <[email protected]>:
>
> On Mon, 1 Apr 2024 20:55:28 +0200 you wrote:
> > > From: Joan Bruguera Micó <[email protected]>
> >
> > Fixes two issues that cause kernel panics when using the BPF JIT with
> > the call depth tracking / stuffing mitigation for Skylake processors
> > (`retbleed=stuff`). Both issues can be triggered by running simple
> > BPF programs (e.g. running the test suite should trigger both).
> >
> > [...]
>
> Here is the summary with links:
> - [RESEND,bpf,v2,1/2] x86/bpf: Fix IP after emitting call depth accounting
> https://git.kernel.org/bpf/bpf/c/9d98aa088386
> - [RESEND,bpf,v2,2/2] x86/bpf: Fix IP for relocating call depth accounting
> https://git.kernel.org/bpf/bpf/c/6a537453000a
Just wondering, which kernel version is this targeted for?
The bug is upstream as well, so a fix needs to be sent to Linus.
I can pick all of this up into tip:x86/urgent, if that accelerates
things.
Thanks,
Ingo
On Wed, Apr 3, 2024 at 9:26 AM Ingo Molnar <[email protected]> wrote:
>
>
> * [email protected] <[email protected]> wrote:
>
> > Hello:
> >
> > This series was applied to bpf/bpf.git (master)
> > by Alexei Starovoitov <[email protected]>:
> >
> > On Mon, 1 Apr 2024 20:55:28 +0200 you wrote:
> > > From: Joan Bruguera Micó <[email protected]>
> > >
> > > Fixes two issues that cause kernel panics when using the BPF JIT with
> > > the call depth tracking / stuffing mitigation for Skylake processors
> > > (`retbleed=stuff`). Both issues can be triggered by running simple
> > > BPF programs (e.g. running the test suite should trigger both).
> > >
> > > [...]
> >
> > Here is the summary with links:
> > - [RESEND,bpf,v2,1/2] x86/bpf: Fix IP after emitting call depth accounting
> > https://git.kernel.org/bpf/bpf/c/9d98aa088386
> > - [RESEND,bpf,v2,2/2] x86/bpf: Fix IP for relocating call depth accounting
> > https://git.kernel.org/bpf/bpf/c/6a537453000a
>
> Just wondering, which kernel version is this targeted for?
The whole series is intended for the current mainline (v6.9), which is
why it is developed against the bpf (*not* bpf-next) branch. Please
note that the kernel panics with retbleed=stuff even without the
%rip-relative changes (see patch 1/2 above) [1], so patch 1/2 should be
backported to stable branches.
[1] https://lore.kernel.org/lkml/[email protected]/
Uros.
> The bug is upstream as well, so a fix needs to be sent to Linus.
>
> I can pick all of this up into tip:x86/urgent, if that accelerates
> things.
>
> Thanks,
>
> Ingo
* Uros Bizjak <[email protected]> wrote:
> > > Here is the summary with links:
> > > - [RESEND,bpf,v2,1/2] x86/bpf: Fix IP after emitting call depth accounting
> > > https://git.kernel.org/bpf/bpf/c/9d98aa088386
> > > - [RESEND,bpf,v2,2/2] x86/bpf: Fix IP for relocating call depth accounting
> > > https://git.kernel.org/bpf/bpf/c/6a537453000a
> >
> > Just wondering, which kernel version is this targeted for?
>
> The whole series is intended for the current mainline (v6.9), this is
> why it is developed against the bpf (*not* bpf-next) branch.
I see - so bpf.git:master is for current-mainline BPF fixes - that's perfect.
BTW., bpf-next is not a branch, but a separate Git tree. (Which is what
confused me.)
Thanks,
Ingo
On 4/3/24 10:01 AM, Ingo Molnar wrote:
> * Uros Bizjak <[email protected]> wrote:
>
>>>> Here is the summary with links:
>>>> - [RESEND,bpf,v2,1/2] x86/bpf: Fix IP after emitting call depth accounting
>>>> https://git.kernel.org/bpf/bpf/c/9d98aa088386
>>>> - [RESEND,bpf,v2,2/2] x86/bpf: Fix IP for relocating call depth accounting
>>>> https://git.kernel.org/bpf/bpf/c/6a537453000a
>>>
>>> Just wondering, which kernel version is this targeted for?
>>
>> The whole series is intended for the current mainline (v6.9), this is
>> why it is developed against the bpf (*not* bpf-next) branch.
>
> I see - so bpf.git:master is for current-mainline BPF fixes - that's perfect.
Yes, correct, that is for mainline (v6.9).
Thanks,
Daniel