2023-05-15 03:33:41

by Ze Gao

Subject: [PATCH 1/4] rethook: use preempt_{disable, enable}_notrace in rethook_trampoline_handler

This patch replaces preempt_{disable, enable} with the corresponding
notrace versions in rethook_trampoline_handler, so there is no worry
about stack recursion or overflow introduced by preempt_count_{add, sub}
in fprobe + rethook context.

Signed-off-by: Ze Gao <[email protected]>
---
kernel/trace/rethook.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/trace/rethook.c b/kernel/trace/rethook.c
index 32c3dfdb4d6a..60f6cb2b486b 100644
--- a/kernel/trace/rethook.c
+++ b/kernel/trace/rethook.c
@@ -288,7 +288,7 @@ unsigned long rethook_trampoline_handler(struct pt_regs *regs,
* These loops must be protected from rethook_free_rcu() because those
* are accessing 'rhn->rethook'.
*/
- preempt_disable();
+ preempt_disable_notrace();

/*
* Run the handler on the shadow stack. Do not unlink the list here because
@@ -321,7 +321,7 @@ unsigned long rethook_trampoline_handler(struct pt_regs *regs,
first = first->next;
rethook_recycle(rhn);
}
- preempt_enable();
+ preempt_enable_notrace();

return correct_ret_addr;
}
--
2.40.1



2023-05-16 04:44:05

by Masami Hiramatsu

Subject: Re: [PATCH 1/4] rethook: use preempt_{disable, enable}_notrace in rethook_trampoline_handler

Hi Ze Gao,

Thanks for the patch.

On Mon, 15 May 2023 11:26:38 +0800
Ze Gao <[email protected]> wrote:

> This patch replaces preempt_{disable, enable} with the corresponding
> notrace versions in rethook_trampoline_handler, so there is no worry
> about stack recursion or overflow introduced by preempt_count_{add, sub}
> in fprobe + rethook context.

So, have you ever seen that recursion or a preempt_count overflow case?

I intended to use the normal preempt_disable() here because it does NOT
prohibit any function-trace calls (note that both kprobes and
fprobe check for recursive calls by themselves), but it is instrumented
for the preempt_onoff tracer.

Thanks,

>
> Signed-off-by: Ze Gao <[email protected]>
> ---
> kernel/trace/rethook.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/trace/rethook.c b/kernel/trace/rethook.c
> index 32c3dfdb4d6a..60f6cb2b486b 100644
> --- a/kernel/trace/rethook.c
> +++ b/kernel/trace/rethook.c
> @@ -288,7 +288,7 @@ unsigned long rethook_trampoline_handler(struct pt_regs *regs,
> * These loops must be protected from rethook_free_rcu() because those
> * are accessing 'rhn->rethook'.
> */
> - preempt_disable();
> + preempt_disable_notrace();
>
> /*
> * Run the handler on the shadow stack. Do not unlink the list here because
> @@ -321,7 +321,7 @@ unsigned long rethook_trampoline_handler(struct pt_regs *regs,
> first = first->next;
> rethook_recycle(rhn);
> }
> - preempt_enable();
> + preempt_enable_notrace();
>
> return correct_ret_addr;
> }
> --
> 2.40.1
>


--
Masami Hiramatsu (Google) <[email protected]>

2023-05-16 05:58:30

by Masami Hiramatsu

Subject: Re: [PATCH 1/4] rethook: use preempt_{disable, enable}_notrace in rethook_trampoline_handler

On Tue, 16 May 2023 13:25:02 +0900
Masami Hiramatsu (Google) <[email protected]> wrote:

> Hi Ze Gao,
>
> Thanks for the patch.
>
> On Mon, 15 May 2023 11:26:38 +0800
> Ze Gao <[email protected]> wrote:
>
> > This patch replaces preempt_{disable, enable} with the corresponding
> > notrace versions in rethook_trampoline_handler, so there is no worry
> > about stack recursion or overflow introduced by preempt_count_{add, sub}
> > in fprobe + rethook context.
>
> So, have you ever seen that recursion or a preempt_count overflow case?
>
> I intended to use the normal preempt_disable() here because it does NOT
> prohibit any function-trace calls (note that both kprobes and
> fprobe check for recursive calls by themselves), but it is instrumented
> for the preempt_onoff tracer.

OK, I got the point.

rethook_trampoline_handler() {
    preempt_disable() {
        preempt_count_add() {       => fprobe and set rethook
        }                           => rethook_trampoline_handler() {
                                           preempt_disable() {
                                               ...

So the problem is that the preempt_disable() macro calls preempt_count_add(),
which can itself be traced.

So, let's make it notrace.

Acked-by: Masami Hiramatsu (Google) <[email protected]>

and

Fixes: 54ecbe6f1ed5 ("rethook: Add a generic return hook")
Cc: [email protected]

Thank you,

>
> Thanks,
>
> >
> > Signed-off-by: Ze Gao <[email protected]>
> > ---
> > kernel/trace/rethook.c | 4 ++--
> > 1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/kernel/trace/rethook.c b/kernel/trace/rethook.c
> > index 32c3dfdb4d6a..60f6cb2b486b 100644
> > --- a/kernel/trace/rethook.c
> > +++ b/kernel/trace/rethook.c
> > @@ -288,7 +288,7 @@ unsigned long rethook_trampoline_handler(struct pt_regs *regs,
> > * These loops must be protected from rethook_free_rcu() because those
> > * are accessing 'rhn->rethook'.
> > */
> > - preempt_disable();
> > + preempt_disable_notrace();
> >
> > /*
> > * Run the handler on the shadow stack. Do not unlink the list here because
> > @@ -321,7 +321,7 @@ unsigned long rethook_trampoline_handler(struct pt_regs *regs,
> > first = first->next;
> > rethook_recycle(rhn);
> > }
> > - preempt_enable();
> > + preempt_enable_notrace();
> >
> > return correct_ret_addr;
> > }
> > --
> > 2.40.1
> >
>
>
> --
> Masami Hiramatsu (Google) <[email protected]>


--
Masami Hiramatsu (Google) <[email protected]>