hi,
the attached patch disables irqs during the optimized callback,
so we don't count any in-irq kprobes as missed.
Also, I think there's a small window where the current_kprobe variable
could be touched in a non-safe way, but I was not able to hit
any issue.
I'm not sure whether this is a bug or if it was intentional to have
irqs enabled during the pre_handler callback.
wbr,
jirka
---
Disable irqs during the optimized callback, so we don't count
any in-irq kprobes as missed.
Interrupts are also disabled during non-optimized kprobes callbacks.
Signed-off-by: Jiri Olsa <[email protected]>
---
arch/x86/kernel/kprobes.c | 3 +++
1 files changed, 3 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
index c969fd9..917cb31 100644
--- a/arch/x86/kernel/kprobes.c
+++ b/arch/x86/kernel/kprobes.c
@@ -1183,11 +1183,13 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
struct pt_regs *regs)
{
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+ unsigned long flags;
/* This is possible if op is under delayed unoptimizing */
if (kprobe_disabled(&op->kp))
return;
+ local_irq_save(flags);
preempt_disable();
if (kprobe_running()) {
kprobes_inc_nmissed_count(&op->kp);
@@ -1208,6 +1210,7 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
__this_cpu_write(current_kprobe, NULL);
}
preempt_enable_no_resched();
+ local_irq_restore(flags);
}
static int __kprobes copy_optimized_instructions(u8 *dest, u8 *src)
--
1.7.1
On Tue, Apr 26, 2011 at 03:01:31PM +0200, Jiri Olsa wrote:
> hi,
>
> the attached patch disables irqs during the optimized callback,
> so we don't count any in-irq kprobes as missed.
>
> Also, I think there's a small window where the current_kprobe variable
> could be touched in a non-safe way, but I was not able to hit
> any issue.
>
> I'm not sure whether this is a bug or if it was intentional to have
> irqs enabled during the pre_handler callback.
That's not very convincing. Did you see if we actually did miss events?
If that's the case then it is a bug. The conversion to optimizing should
not cause events to be missed.
>
> wbr,
> jirka
>
> ---
> Disable irqs during the optimized callback, so we don't count
> any in-irq kprobes as missed.
>
> Interrupts are also disabled during non-optimized kprobes callbacks.
>
> Signed-off-by: Jiri Olsa <[email protected]>
> ---
> arch/x86/kernel/kprobes.c | 3 +++
> 1 files changed, 3 insertions(+), 0 deletions(-)
>
> diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
> index c969fd9..917cb31 100644
> --- a/arch/x86/kernel/kprobes.c
> +++ b/arch/x86/kernel/kprobes.c
> @@ -1183,11 +1183,13 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
> struct pt_regs *regs)
> {
> struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
> + unsigned long flags;
>
> /* This is possible if op is under delayed unoptimizing */
> if (kprobe_disabled(&op->kp))
> return;
>
> + local_irq_save(flags);
> preempt_disable();
No reason to disable preemption if you disabled interrupts.
> if (kprobe_running()) {
> kprobes_inc_nmissed_count(&op->kp);
> @@ -1208,6 +1210,7 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
> __this_cpu_write(current_kprobe, NULL);
> }
> preempt_enable_no_resched();
Remove the preempt_enable_no_resched() as well.
BTW, what's up with all these preempt_enable_no_resched()'s lying
around in the kprobe code? Looks to me like this can cause lots of
missed wakeups (preemption leaks), which would make this horrible for
real-time.
-- Steve
> + local_irq_restore(flags);
> }
>
> static int __kprobes copy_optimized_instructions(u8 *dest, u8 *src)
On Tue, Apr 26, 2011 at 09:46:25AM -0400, Steven Rostedt wrote:
> On Tue, Apr 26, 2011 at 03:01:31PM +0200, Jiri Olsa wrote:
> > hi,
> >
> > the attached patch disables irqs during the optimized callback,
> > so we don't count any in-irq kprobes as missed.
> >
> > Also, I think there's a small window where the current_kprobe variable
> > could be touched in a non-safe way, but I was not able to hit
> > any issue.
> >
> > I'm not sure whether this is a bug or if it was intentional to have
> > irqs enabled during the pre_handler callback.
>
> That's not very convincing. Did you see if we actually did miss events?
> If that's the case then it is a bug. The conversion to optimizing should
> not cause events to be missed.
yep, running the following:
# cd /debug/tracing/
# echo "p mutex_unlock" >> kprobe_events
# echo "p _raw_spin_lock" >> kprobe_events
# echo "p smp_apic_timer_interrupt" >> ./kprobe_events
# echo 1 > events/enable
causes the optimized kprobes to be missed. They are not missed in
the same testcase for non-optimized kprobes. I should have mentioned
that, sorry ;)
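For reference, a quick way to watch the miss counts themselves (paths assume
the same /debug mount as above, and a kernel with CONFIG_OPTPROBES for the
sysctl) is to compare kprobe_profile with optimization on and off:

# cat kprobe_profile
(one line per event: name, number of hits, number of misses)
# echo 0 > /proc/sys/debug/kprobes-optimization
# cat kprobe_profile
(with optimization turned off, the miss counts should stop growing)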
>
>
> >
> > wbr,
> > jirka
> >
> > ---
> > Disable irqs during the optimized callback, so we don't count
> > any in-irq kprobes as missed.
> >
> > Interrupts are also disabled during non-optimized kprobes callbacks.
> >
> > Signed-off-by: Jiri Olsa <[email protected]>
> > ---
> > arch/x86/kernel/kprobes.c | 3 +++
> > 1 files changed, 3 insertions(+), 0 deletions(-)
> >
> > diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
> > index c969fd9..917cb31 100644
> > --- a/arch/x86/kernel/kprobes.c
> > +++ b/arch/x86/kernel/kprobes.c
> > @@ -1183,11 +1183,13 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
> > struct pt_regs *regs)
> > {
> > struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
> > + unsigned long flags;
> >
> > /* This is possible if op is under delayed unoptimizing */
> > if (kprobe_disabled(&op->kp))
> > return;
> >
> > + local_irq_save(flags);
> > preempt_disable();
>
> No reason to disable preemption if you disabled interrupts.
oops, missed that... attaching a new patch
thanks,
jirka
---
Disable irqs during the optimized callback, so we don't count
any in-irq kprobes as missed.
Running the following:
# cd /debug/tracing/
# echo "p mutex_unlock" >> kprobe_events
# echo "p _raw_spin_lock" >> kprobe_events
# echo "p smp_apic_timer_interrupt" >> ./kprobe_events
# echo 1 > events/enable
causes the optimized kprobes to be missed. None is missed
if kprobe optimization is disabled.
Signed-off-by: Jiri Olsa <[email protected]>
---
arch/x86/kernel/kprobes.c | 5 +++--
1 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
index c969fd9..f1a6244 100644
--- a/arch/x86/kernel/kprobes.c
+++ b/arch/x86/kernel/kprobes.c
@@ -1183,12 +1183,13 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
struct pt_regs *regs)
{
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+ unsigned long flags;
/* This is possible if op is under delayed unoptimizing */
if (kprobe_disabled(&op->kp))
return;
- preempt_disable();
+ local_irq_save(flags);
if (kprobe_running()) {
kprobes_inc_nmissed_count(&op->kp);
} else {
@@ -1207,7 +1208,7 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
opt_pre_handler(&op->kp, regs);
__this_cpu_write(current_kprobe, NULL);
}
- preempt_enable_no_resched();
+ local_irq_restore(flags);
}
static int __kprobes copy_optimized_instructions(u8 *dest, u8 *src)
--
1.7.1
(2011/04/26 23:19), Jiri Olsa wrote:
> On Tue, Apr 26, 2011 at 09:46:25AM -0400, Steven Rostedt wrote:
>> On Tue, Apr 26, 2011 at 03:01:31PM +0200, Jiri Olsa wrote:
>>> hi,
>>>
>>> the attached patch disables irqs during the optimized callback,
>>> so we don't count any in-irq kprobes as missed.
>>>
>>> Also, I think there's a small window where the current_kprobe variable
>>> could be touched in a non-safe way, but I was not able to hit
>>> any issue.
>>>
>>> I'm not sure whether this is a bug or if it was intentional to have
>>> irqs enabled during the pre_handler callback.
>>
>> That's not very convincing. Did you see if we actually did miss events?
>> If that's the case then it is a bug. The conversion to optimizing should
>> not cause events to be missed.
>
> yep, running the following:
>
> # cd /debug/tracing/
> # echo "p mutex_unlock" >> kprobe_events
> # echo "p _raw_spin_lock" >> kprobe_events
> # echo "p smp_apic_timer_interrupt" >> ./kprobe_events
> # echo 1 > events/enable
>
> causes the optimized kprobes to be missed. They are not missed in
> the same testcase for non-optimized kprobes. I should have mentioned
> that, sorry ;)
Good catch, that's right! The kprobes int3 path automatically disables
irqs, but the optimized path doesn't, and that causes unexpected
event loss.
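As a quick sanity check, which of the registered probes actually took the
jump-optimized path (and so ran with irqs enabled before this patch) can be
seen in the kprobes debugfs list -- assuming debugfs is mounted at
/sys/kernel/debug:

# cat /sys/kernel/debug/kprobes/list
(jump-optimized probes are marked [OPTIMIZED]; the rest still go through
int3 and already run with irqs disabled)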
>
>>
>>
>>>
>>> wbr,
>>> jirka
>>>
>>> ---
>>> Disable irqs during the optimized callback, so we don't count
>>> any in-irq kprobes as missed.
>>>
>>> Interrupts are also disabled during non-optimized kprobes callbacks.
>>>
>>> Signed-off-by: Jiri Olsa <[email protected]>
>>> ---
>>> arch/x86/kernel/kprobes.c | 3 +++
>>> 1 files changed, 3 insertions(+), 0 deletions(-)
>>>
>>> diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
>>> index c969fd9..917cb31 100644
>>> --- a/arch/x86/kernel/kprobes.c
>>> +++ b/arch/x86/kernel/kprobes.c
>>> @@ -1183,11 +1183,13 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
>>> struct pt_regs *regs)
>>> {
>>> struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
>>> + unsigned long flags;
>>>
>>> /* This is possible if op is under delayed unoptimizing */
>>> if (kprobe_disabled(&op->kp))
>>> return;
>>>
>>> + local_irq_save(flags);
>>> preempt_disable();
>>
>> No reason to disable preemption if you disabled interrupts.
Right,
> oops, missed that... attaching a new patch
> ---
> Disable irqs during the optimized callback, so we don't count
> any in-irq kprobes as missed.
>
> Running the following:
>
> # cd /debug/tracing/
> # echo "p mutex_unlock" >> kprobe_events
> # echo "p _raw_spin_lock" >> kprobe_events
> # echo "p smp_apic_timer_interrupt" >> ./kprobe_events
> # echo 1 > events/enable
>
> causes the optimized kprobes to be missed. None is missed
> if kprobe optimization is disabled.
>
> Signed-off-by: Jiri Olsa <[email protected]>
Acked-by: Masami Hiramatsu <[email protected]>
Ingo, could you pull this as a bugfix?
Thank you!
> ---
> arch/x86/kernel/kprobes.c | 5 +++--
> 1 files changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
> index c969fd9..f1a6244 100644
> --- a/arch/x86/kernel/kprobes.c
> +++ b/arch/x86/kernel/kprobes.c
> @@ -1183,12 +1183,13 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
> struct pt_regs *regs)
> {
> struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
> + unsigned long flags;
>
> /* This is possible if op is under delayed unoptimizing */
> if (kprobe_disabled(&op->kp))
> return;
>
> - preempt_disable();
> + local_irq_save(flags);
> if (kprobe_running()) {
> kprobes_inc_nmissed_count(&op->kp);
> } else {
> @@ -1207,7 +1208,7 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
> opt_pre_handler(&op->kp, regs);
> __this_cpu_write(current_kprobe, NULL);
> }
> - preempt_enable_no_resched();
> + local_irq_restore(flags);
> }
>
> static int __kprobes copy_optimized_instructions(u8 *dest, u8 *src)
--
Masami HIRAMATSU
Software Platform Research Dept. Linux Technology Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: [email protected]
On Wed, Apr 27, 2011 at 09:51:23AM +0900, Masami Hiramatsu wrote:
> (2011/04/26 23:19), Jiri Olsa wrote:
> > On Tue, Apr 26, 2011 at 09:46:25AM -0400, Steven Rostedt wrote:
> >> On Tue, Apr 26, 2011 at 03:01:31PM +0200, Jiri Olsa wrote:
SNIP
> >>> hi,
> >
> > causes the optimized kprobes to be missed. None is missed
> > if kprobe optimization is disabled.
> >
> > Signed-off-by: Jiri Olsa <[email protected]>
>
> Acked-by: Masami Hiramatsu <[email protected]>
>
>
> Ingo, could you pull this as a bugfix?
hi,
could this be pulled in?
thanks,
jirka
>
> Thank you!
>
>
> > ---
> > arch/x86/kernel/kprobes.c | 5 +++--
> > 1 files changed, 3 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
> > index c969fd9..f1a6244 100644
> > --- a/arch/x86/kernel/kprobes.c
> > +++ b/arch/x86/kernel/kprobes.c
> > @@ -1183,12 +1183,13 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
> > struct pt_regs *regs)
> > {
> > struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
> > + unsigned long flags;
> >
> > /* This is possible if op is under delayed unoptimizing */
> > if (kprobe_disabled(&op->kp))
> > return;
> >
> > - preempt_disable();
> > + local_irq_save(flags);
> > if (kprobe_running()) {
> > kprobes_inc_nmissed_count(&op->kp);
> > } else {
> > @@ -1207,7 +1208,7 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
> > opt_pre_handler(&op->kp, regs);
> > __this_cpu_write(current_kprobe, NULL);
> > }
> > - preempt_enable_no_resched();
> > + local_irq_restore(flags);
> > }
> >
> > static int __kprobes copy_optimized_instructions(u8 *dest, u8 *src)
>
>
> --
> Masami HIRAMATSU
> Software Platform Research Dept. Linux Technology Center
> Hitachi, Ltd., Yokohama Research Laboratory
> E-mail: [email protected]
* Jiri Olsa <[email protected]> wrote:
> + local_irq_save(flags);
> preempt_disable();
> if (kprobe_running()) {
> kprobes_inc_nmissed_count(&op->kp);
> @@ -1208,6 +1210,7 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
> __this_cpu_write(current_kprobe, NULL);
> }
> preempt_enable_no_resched();
> + local_irq_restore(flags);
irq-disable is synonymous with preempt-disable, so the preempt_disable()/enable()
pair looks like superfluous overhead.
Thanks,
Ingo
On Tue, May 10, 2011 at 12:40:19PM +0200, Ingo Molnar wrote:
>
> * Jiri Olsa <[email protected]> wrote:
>
> > + local_irq_save(flags);
> > preempt_disable();
> > if (kprobe_running()) {
> > kprobes_inc_nmissed_count(&op->kp);
> > @@ -1208,6 +1210,7 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
> > __this_cpu_write(current_kprobe, NULL);
> > }
> > preempt_enable_no_resched();
> > + local_irq_restore(flags);
>
> irq-disable is synonymous with preempt-disable, so the preempt_disable()/enable()
> pair looks like superfluous overhead.
yes, there's a correct patch already on the list here:
http://marc.info/?l=linux-kernel&m=130382756829695&w=2
thanks,
jirka
>
> Thanks,
>
> Ingo
* Jiri Olsa <[email protected]> wrote:
> On Tue, May 10, 2011 at 12:40:19PM +0200, Ingo Molnar wrote:
> >
> > * Jiri Olsa <[email protected]> wrote:
> >
> > > + local_irq_save(flags);
> > > preempt_disable();
> > > if (kprobe_running()) {
> > > kprobes_inc_nmissed_count(&op->kp);
> > > @@ -1208,6 +1210,7 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
> > > __this_cpu_write(current_kprobe, NULL);
> > > }
> > > preempt_enable_no_resched();
> > > + local_irq_restore(flags);
> >
> > irq-disable is synonymous with preempt-disable, so the preempt_disable()/enable()
> > pair looks like superfluous overhead.
>
> yes, there's a correct patch already on the list here:
> http://marc.info/?l=linux-kernel&m=130382756829695&w=2
It helps to change the subject line when you think another patch should be
considered, to something like:
[PATCH, v2] kprobes, x86: Disable irq during optimized callback
(also note the other changes I made to the title, 3 altogether.)
Thanks,
Ingo
On Tue, May 10, 2011 at 01:44:18PM +0200, Ingo Molnar wrote:
>
> * Jiri Olsa <[email protected]> wrote:
>
> > On Tue, May 10, 2011 at 12:40:19PM +0200, Ingo Molnar wrote:
> > >
> > > * Jiri Olsa <[email protected]> wrote:
> > >
> > > > + local_irq_save(flags);
> > > > preempt_disable();
> > > > if (kprobe_running()) {
> > > > kprobes_inc_nmissed_count(&op->kp);
> > > > @@ -1208,6 +1210,7 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
> > > > __this_cpu_write(current_kprobe, NULL);
> > > > }
> > > > preempt_enable_no_resched();
> > > > + local_irq_restore(flags);
> > >
> > > irq-disable is synonymous with preempt-disable, so the preempt_disable()/enable()
> > > pair looks like superfluous overhead.
> >
> > yes, there's a correct patch already on the list here:
> > http://marc.info/?l=linux-kernel&m=130382756829695&w=2
>
> It helps to change the subject line when you think another patch should be
> considered, to something like:
>
> [PATCH, v2] kprobes, x86: Disable irq during optimized callback
>
> (also note the other changes I made to the title, 3 altogether.)
sorry, here it is ;) thanks
jirka
---
Disable irqs during the optimized callback, so we don't count
any in-irq kprobes as missed.
Running the following:
# cd /debug/tracing/
# echo "p mutex_unlock" >> kprobe_events
# echo "p _raw_spin_lock" >> kprobe_events
# echo "p smp_apic_timer_interrupt" >> ./kprobe_events
# echo 1 > events/enable
causes the optimized kprobes to be missed. None is missed
if kprobe optimization is disabled.
Signed-off-by: Jiri Olsa <[email protected]>
---
arch/x86/kernel/kprobes.c | 5 +++--
1 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
index c969fd9..f1a6244 100644
--- a/arch/x86/kernel/kprobes.c
+++ b/arch/x86/kernel/kprobes.c
@@ -1183,12 +1183,13 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
struct pt_regs *regs)
{
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+ unsigned long flags;
/* This is possible if op is under delayed unoptimizing */
if (kprobe_disabled(&op->kp))
return;
- preempt_disable();
+ local_irq_save(flags);
if (kprobe_running()) {
kprobes_inc_nmissed_count(&op->kp);
} else {
@@ -1207,7 +1208,7 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
opt_pre_handler(&op->kp, regs);
__this_cpu_write(current_kprobe, NULL);
}
- preempt_enable_no_resched();
+ local_irq_restore(flags);
}
static int __kprobes copy_optimized_instructions(u8 *dest, u8 *src)
--
1.7.1
Commit-ID: 9bbeacf52f66d165739a4bbe9c018d17493a74b5
Gitweb: http://git.kernel.org/tip/9bbeacf52f66d165739a4bbe9c018d17493a74b5
Author: Jiri Olsa <[email protected]>
AuthorDate: Wed, 11 May 2011 13:06:13 +0200
Committer: Ingo Molnar <[email protected]>
CommitDate: Wed, 11 May 2011 13:21:23 +0200
kprobes, x86: Disable irqs during optimized callback
Disable irqs during the optimized callback, so we don't miss any in-irq kprobes.
The following commands:
# cd /debug/tracing/
# echo "p mutex_unlock" >> kprobe_events
# echo "p _raw_spin_lock" >> kprobe_events
# echo "p smp_apic_timer_interrupt" >> ./kprobe_events
# echo 1 > events/enable
cause the optimized kprobes to be missed. None is missed
with the fix applied.
Signed-off-by: Jiri Olsa <[email protected]>
Acked-by: Masami Hiramatsu <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
arch/x86/kernel/kprobes.c | 5 +++--
1 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c
index c969fd9..f1a6244 100644
--- a/arch/x86/kernel/kprobes.c
+++ b/arch/x86/kernel/kprobes.c
@@ -1183,12 +1183,13 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
struct pt_regs *regs)
{
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+ unsigned long flags;
/* This is possible if op is under delayed unoptimizing */
if (kprobe_disabled(&op->kp))
return;
- preempt_disable();
+ local_irq_save(flags);
if (kprobe_running()) {
kprobes_inc_nmissed_count(&op->kp);
} else {
@@ -1207,7 +1208,7 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op,
opt_pre_handler(&op->kp, regs);
__this_cpu_write(current_kprobe, NULL);
}
- preempt_enable_no_resched();
+ local_irq_restore(flags);
}
static int __kprobes copy_optimized_instructions(u8 *dest, u8 *src)