2021-10-18 03:51:30

by 王贇

Subject: [PATCH v4 0/2] fix & prevent the missing preemption disabling

Testing shows that perf_ftrace_function_call() is using smp_processor_id()
with preemption enabled; any CPU-based checking could be wrong after preemption.

As Peter pointed out, the section between the ftrace_test_recursion_trylock/unlock()
pair requires preemption to be disabled, as 'Documentation/trace/ftrace-uses.rst'
explains, but currently the work is done outside of the helpers.

Since the internal use of trace_test_and_set_recursion()
and trace_clear_recursion() also requires preemption to be disabled, we
can just merge the logic together.

Patch 1/2 makes sure preemption is disabled when the recursion lock succeeds;
patch 2/2 does the smp_processor_id() check after trylock() to address the
issue.

v1: https://lore.kernel.org/all/[email protected]/
v2: https://lore.kernel.org/all/[email protected]/
v3: https://lore.kernel.org/all/[email protected]/

Michael Wang (2):
ftrace: disable preemption when recursion locked
ftrace: do CPU checking after preemption disabled

arch/csky/kernel/probes/ftrace.c | 2 --
arch/parisc/kernel/ftrace.c | 2 --
arch/powerpc/kernel/kprobes-ftrace.c | 2 --
arch/riscv/kernel/probes/ftrace.c | 2 --
arch/x86/kernel/kprobes/ftrace.c | 2 --
include/linux/trace_recursion.h | 20 +++++++++++++++++++-
kernel/livepatch/patch.c | 13 +++++++------
kernel/trace/ftrace.c | 15 +++++----------
kernel/trace/trace_event_perf.c | 6 +++---
kernel/trace/trace_functions.c | 5 -----
10 files changed, 34 insertions(+), 35 deletions(-)

--
1.8.3.1


2021-10-26 06:46:23

by Steven Rostedt

Subject: Re: [PATCH v4 0/2] fix & prevent the missing preemption disabling

On Tue, 26 Oct 2021 10:09:12 +0800
王贇 <[email protected]> wrote:

> Just a ping, to see if there are any more comments :-P

I guess you missed what went into mainline (and your name found a bug
in my perl script for importing patches ;-)

https://lore.kernel.org/all/[email protected]/

Which means patch 1 needs to change:

> + /*
> + * Disable preemption to fulfill the promise.
> + *
> + * Don't worry about the bit 0 cases, they indicate
> + * the disabling behaviour has already been done by
> + * internal call previously.
> + */
> + preempt_disable_notrace();
> +
> return bit + 1;
> }
>
> +/*
> + * Preemption will be enabled (if it was previously enabled).
> + */
> static __always_inline void trace_clear_recursion(int bit)
> {
> if (!bit)
> return;
>
> + if (bit > 0)
> + preempt_enable_notrace();
> +

Where this won't work anymore.

The preempt disable and enable now need to be done unconditionally.

-- Steve


> barrier();
> bit--;
> trace_recursion_clear(bit);
> @@ -209,7 +227,7 @@ static __always_inline void trace_clear_recursion(int bit)
> * tracing recursed in the same context (normal vs interrupt),
> *

2021-10-26 06:48:34

by 王贇

Subject: Re: [PATCH v4 0/2] fix & prevent the missing preemption disabling



On 2021/10/26 10:42 AM, Steven Rostedt wrote:
> On Tue, 26 Oct 2021 10:09:12 +0800
> 王贇 <[email protected]> wrote:
>
>> Just a ping, to see if there are any more comments :-P
>
> I guess you missed what went into mainline (and your name found a bug
> in my perl script for importing patches ;-)
>
> https://lore.kernel.org/all/[email protected]/

Cool~ Missing some Chinese font maybe, that's fine :-)

>
> Which means patch 1 needs to change:
>> + /*
>> + * Disable preemption to fulfill the promise.
>> + *
>> + * Don't worry about the bit 0 cases, they indicate
>> + * the disabling behaviour has already been done by
>> + * internal call previously.
>> + */
>> + preempt_disable_notrace();
>> +
>> return bit + 1;
>> }
>>
>> +/*
>> + * Preemption will be enabled (if it was previously enabled).
>> + */
>> static __always_inline void trace_clear_recursion(int bit)
>> {
>> if (!bit)
>> return;
>>
>> + if (bit > 0)
>> + preempt_enable_notrace();
>> +
>
> Where this won't work anymore.
>
> Need to preempt disable and enable always.

Yup, will rebase on the latest changes~

Regards,
Michael Wang

>
> -- Steve
>
>
>> barrier();
>> bit--;
>> trace_recursion_clear(bit);
>> @@ -209,7 +227,7 @@ static __always_inline void trace_clear_recursion(int bit)
>> * tracing recursed in the same context (normal vs interrupt),
>> *

2021-10-26 08:04:09

by 王贇

Subject: Re: [PATCH v4 0/2] fix & prevent the missing preemption disabling

Just a ping, to see if there are any more comments :-P

Regards,
Michael Wang

On 2021/10/18 11:38 AM, 王贇 wrote:
> Testing shows that perf_ftrace_function_call() is using smp_processor_id()
> with preemption enabled; any CPU-based checking could be wrong after preemption.
>
> As Peter pointed out, the section between the ftrace_test_recursion_trylock/unlock()
> pair requires preemption to be disabled, as 'Documentation/trace/ftrace-uses.rst'
> explains, but currently the work is done outside of the helpers.
>
> Since the internal use of trace_test_and_set_recursion()
> and trace_clear_recursion() also requires preemption to be disabled, we
> can just merge the logic together.
>
> Patch 1/2 makes sure preemption is disabled when the recursion lock succeeds;
> patch 2/2 does the smp_processor_id() check after trylock() to address the
> issue.
>
> v1: https://lore.kernel.org/all/[email protected]/
> v2: https://lore.kernel.org/all/[email protected]/
> v3: https://lore.kernel.org/all/[email protected]/
>
> Michael Wang (2):
> ftrace: disable preemption when recursion locked
> ftrace: do CPU checking after preemption disabled
>
> arch/csky/kernel/probes/ftrace.c | 2 --
> arch/parisc/kernel/ftrace.c | 2 --
> arch/powerpc/kernel/kprobes-ftrace.c | 2 --
> arch/riscv/kernel/probes/ftrace.c | 2 --
> arch/x86/kernel/kprobes/ftrace.c | 2 --
> include/linux/trace_recursion.h | 20 +++++++++++++++++++-
> kernel/livepatch/patch.c | 13 +++++++------
> kernel/trace/ftrace.c | 15 +++++----------
> kernel/trace/trace_event_perf.c | 6 +++---
> kernel/trace/trace_functions.c | 5 -----
> 10 files changed, 34 insertions(+), 35 deletions(-)
>