From: "Steven Rostedt (Red Hat)" <[email protected]>
When one of the event files is opened, we need to prevent them from
being removed. Modules do this with the module owner set (automated
from the VFS layer). The ftrace buffer instances have a ref count added
to the trace_array when the enabled file is opened (the others are not
that big of a deal, as they only reference the event calls which
still exist when an instance disappears). But kprobes and uprobes
do not have any protection.
# cd /sys/kernel/debug/tracing
# echo 'p:sigprocmask sigprocmask' > kprobe_events || exit -1
# enable_probe() {
sleep 10
echo 1
}
# file=events/kprobes/sigprocmask/enable
# enable_probe > $file &
> kprobe_events
The above will corrupt the kprobe system, as the write to the enable
file will happen after the kprobe was deleted.
Trying to create the probe again fails:
# echo 'p:sigprocmask sigprocmask' > kprobe_events
# cat kprobe_events
p:kprobes/sigprocmask sigprocmask
# ls events/kprobes/
enable filter
Have the probe unregistration fail when the event files are open, in use,
or used by perf.
Signed-off-by: Steven Rostedt <[email protected]>
---
kernel/trace/trace_kprobe.c | 21 +++++++++++++++------
1 file changed, 15 insertions(+), 6 deletions(-)
diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
index 7ed6976..ffcaf42 100644
--- a/kernel/trace/trace_kprobe.c
+++ b/kernel/trace/trace_kprobe.c
@@ -95,7 +95,7 @@ static __kprobes bool trace_probe_is_on_module(struct trace_probe *tp)
}
static int register_probe_event(struct trace_probe *tp);
-static void unregister_probe_event(struct trace_probe *tp);
+static int unregister_probe_event(struct trace_probe *tp);
static DEFINE_MUTEX(probe_lock);
static LIST_HEAD(probe_list);
@@ -340,9 +340,12 @@ static int unregister_trace_probe(struct trace_probe *tp)
if (trace_probe_is_enabled(tp))
return -EBUSY;
+ /* Will fail if probe is being used by ftrace or perf */
+ if (unregister_probe_event(tp))
+ return -EBUSY;
+
__unregister_trace_probe(tp);
list_del(&tp->list);
- unregister_probe_event(tp);
return 0;
}
@@ -621,7 +624,9 @@ static int release_all_trace_probes(void)
/* TODO: Use batch unregistration */
while (!list_empty(&probe_list)) {
tp = list_entry(probe_list.next, struct trace_probe, list);
- unregister_trace_probe(tp);
+ ret = unregister_trace_probe(tp);
+ if (ret)
+ goto end;
free_trace_probe(tp);
}
@@ -1242,11 +1247,15 @@ static int register_probe_event(struct trace_probe *tp)
return ret;
}
-static void unregister_probe_event(struct trace_probe *tp)
+static int unregister_probe_event(struct trace_probe *tp)
{
+ int ret;
+
/* tp->event is unregistered in trace_remove_event_call() */
- trace_remove_event_call(&tp->call);
- kfree(tp->call.print_fmt);
+ ret = trace_remove_event_call(&tp->call);
+ if (!ret)
+ kfree(tp->call.print_fmt);
+ return ret;
}
/* Make a debugfs interface for controlling probe points */
--
1.7.10.4
(2013/07/04 12:33), Steven Rostedt wrote:
> diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
> index 7ed6976..ffcaf42 100644
> --- a/kernel/trace/trace_kprobe.c
> +++ b/kernel/trace/trace_kprobe.c
> @@ -95,7 +95,7 @@ static __kprobes bool trace_probe_is_on_module(struct trace_probe *tp)
> }
>
> static int register_probe_event(struct trace_probe *tp);
> -static void unregister_probe_event(struct trace_probe *tp);
> +static int unregister_probe_event(struct trace_probe *tp);
>
> static DEFINE_MUTEX(probe_lock);
> static LIST_HEAD(probe_list);
> @@ -340,9 +340,12 @@ static int unregister_trace_probe(struct trace_probe *tp)
> if (trace_probe_is_enabled(tp))
> return -EBUSY;
>
> + /* Will fail if probe is being used by ftrace or perf */
> + if (unregister_probe_event(tp))
> + return -EBUSY;
> +
> __unregister_trace_probe(tp);
> list_del(&tp->list);
> - unregister_probe_event(tp);
>
> return 0;
> }
This may cause an unexpected access violation in the kprobe handler,
because unregister_probe_event frees the event_call/event_files, which
will be accessed until the kprobe is *completely* disabled.
Actually disable_kprobe() doesn't wait for the currently running
kprobe handlers to finish. Thus, even if trace_probe_is_enabled() returns
false, we must wait with synchronize_sched() before unregister_probe_event().
OTOH, unregister_kprobe() waits for that. That's why I do
__unregister_trace_probe(tp) first here.
I think there are two solutions: one is adding a wait after disable_k*probe
in disable_trace_probe(), the other is adding a wait before
unregister_probe_event() in unregister_trace_probe().
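Of the two, the first would look roughly like this (an illustrative
kernel-style sketch against the then-current disable_trace_probe(),
not a tested patch):

```
static int
disable_trace_probe(struct trace_probe *tp, struct ftrace_event_file *file)
{
	...
	if (!trace_probe_is_enabled(tp)) {
		if (trace_probe_is_return(tp))
			disable_kretprobe(&tp->rp);
		else
			disable_kprobe(&tp->rp.kp);
		/* wait for all running handlers before returning */
		synchronize_sched();
	}
	...
}
```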
Thank you,
--
Masami HIRAMATSU
IT Management Research Dept. Linux Technology Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: [email protected]
On 07/04, Masami Hiramatsu wrote:
>
> (2013/07/04 12:33), Steven Rostedt wrote:
> > + /* Will fail if probe is being used by ftrace or perf */
> > + if (unregister_probe_event(tp))
> > + return -EBUSY;
> > +
> > __unregister_trace_probe(tp);
> > list_del(&tp->list);
> > - unregister_probe_event(tp);
> >
> > return 0;
> > }
>
> This may cause an unexpected access violation at kprobe handler because
> unregister_probe_event frees event_call/event_files and it will be
> accessed until kprobe is *completely* disabled.
I don't think so... Please correct me.
(but yes I think the patch needs a small update, see below).
> Actually disable_kprobe() doesn't ensure to finish the current running
> kprobe handlers.
Yes. In fact disable_trace_probe(file != NULL) does, but perf doesn't.
> Thus, even if trace_probe_is_enabled() returns false,
> we must do synchronize_sched() for waiting, before unregister_probe_event().
No, I think we should simply kill trace_probe_is_enabled() here.
And synchronize_sched() _before_ unregister_probe_event() can't
help, exactly because trace_probe_is_enabled() is racy.
> OTOH, unregister_kprobe() waits for that.
Yes.
So I think we only need to move kfree(tp->call.print_fmt). In fact I
already wrote the patch assuming that trace_remove_event_call() will
be changed as we discussed.
So the sequence should be:
if (trace_remove_event_call(...))
return;
/* does synchronize_sched */
unregister_kprobe();
kfree(everything);
Agreed?
Oleg.
(2013/07/05 3:48), Oleg Nesterov wrote:
> On 07/04, Masami Hiramatsu wrote:
>>
>> (2013/07/04 12:33), Steven Rostedt wrote:
>>> + /* Will fail if probe is being used by ftrace or perf */
>>> + if (unregister_probe_event(tp))
>>> + return -EBUSY;
>>> +
>>> __unregister_trace_probe(tp);
>>> list_del(&tp->list);
>>> - unregister_probe_event(tp);
>>>
>>> return 0;
>>> }
>>
>> This may cause an unexpected access violation at kprobe handler because
>> unregister_probe_event frees event_call/event_files and it will be
>> accessed until kprobe is *completely* disabled.
>
> I don't think so... Please correct me.
>
> (but yes I think the patch needs a small update, see below).
>
>> Actually disable_kprobe() doesn't ensure to finish the current running
>> kprobe handlers.
>
> Yes. in fact disable_trace_probe(file != NULL) does, but perf doesn't.
Ah, right. We did that.
>
>> Thus, even if trace_probe_is_enabled() returns false,
>> we must do synchronize_sched() for waiting, before unregister_probe_event().
>
> No, I think we should simply kill trace_probe_is_enabled() here.
> And synchronize_sched() _before_ unregister_probe_event() can't
> help, exactly because trace_probe_is_enabled() is racy.
Right, it should be useless.
>> OTOH, unregister_kprobe() waits for that.
>
> Yes.
>
> So I think we only need to move kfree(tp->call.print_fmt). In fact I
> already wrote the patch assuming that trace_remove_event_call() will
> be changed as we discussed.
>
> So the sequence should be:
>
> if (trace_remove_event_call(...))
> return;
>
> /* does synchronize_sched */
> unregister_kprobe();
>
> kfree(everything);
>
> Agreed?
If we can free everything after all, I'd like to do so.
Hmm, but AFAICS, trace_remove_event_call() assumes that
the event is disabled completely.
A safe way is to always wait for RCU right after disable_*probe
in disable_trace_probe(). If we have an unused link, we can
free it after that.
Or, more aggressively, introducing a dying-bit for each
trace probe could be another way. If the bit is set, all
enable operations fail. It works like a per-event lock.
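The dying-bit variant is only sketched in words here; as illustrative
pseudocode (TP_FLAG_DYING and the check sites are hypothetical, never
part of the posted patches) it might look like:

```
#define TP_FLAG_DYING	(1 << 3)	/* hypothetical flag */

static int unregister_trace_probe(struct trace_probe *tp)
{
	tp->flags |= TP_FLAG_DYING;	/* all later enables will fail */
	...
}

static int
enable_trace_probe(struct trace_probe *tp, struct ftrace_event_file *file)
{
	if (tp->flags & TP_FLAG_DYING)
		return -ENOENT;
	...
}
```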
Thank you,
--
Masami HIRAMATSU
IT Management Research Dept. Linux Technology Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: [email protected]
On 07/05, Masami Hiramatsu wrote:
>
> (2013/07/05 3:48), Oleg Nesterov wrote:
> > On 07/04, Masami Hiramatsu wrote:
> >>
> >> Actually disable_kprobe() doesn't ensure to finish the current running
> >> kprobe handlers.
> >
> > Yes. in fact disable_trace_probe(file != NULL) does, but perf doesn't.
>
> Ah, right. we did that.
And thus we only need to synchronize the kprobe_dispatcher()->kprobe_perf_func()
path. And afaics kprobe_perf_func() doesn't use anything which can be freed
by trace_remove_event_call?
> >> OTOH, unregister_kprobe() waits for that.
> >
> > Yes.
> >
> > So I think we only need to move kfree(tp->call.print_fmt).
OOPS. I was wrong. It seems that ->print_fmt is only for event/format?
Then it is fine to kfree it right after trace_remove_event_call().
> > So the sequence should be:
> >
> > if (trace_remove_event_call(...))
> > return;
> >
> > /* does synchronize_sched */
> > unregister_kprobe();
> >
> > kfree(everything);
> >
> > Agreed?
>
> If we can free everything after all, I'd like to do so.
> Hmm, but AFAICS, trace_remove_event_call() assumes that
> the event is disabled completely.
Yes, but is kprobe_trace_func() really disabled?
> A safe way is to wait rcu always right after disable_*probe
> in disable_trace_probe. If we have an unused link, we can
> free it after that.
Aaaah... I am starting to understand... Even if kprobe_perf_func()
is fine, synchronize_sched() is called _before_ disable_kprobe()
and thus it can't synchronize with the handlers which hit this probe
after we start synchronize_sched().
Did you mean this, or did I miss something else?
Thanks!
Oleg.
(2013/07/06 2:26), Oleg Nesterov wrote:
> On 07/05, Masami Hiramatsu wrote:
>>
>> (2013/07/05 3:48), Oleg Nesterov wrote:
>>> On 07/04, Masami Hiramatsu wrote:
>>>>
>>>> Actually disable_kprobe() doesn't ensure to finish the current running
>>>> kprobe handlers.
>>>
>>> Yes. in fact disable_trace_probe(file != NULL) does, but perf doesn't.
>>
>> Ah, right. we did that.
>
> And thus we only need to synchronize kprobe_dispatcher()->kprobe_perf_func()
> path. And afaics kprobe_perf_func() doesn't use anything which can be freed
> by trace_remove_event_call?
>
>>>> OTOH, unregister_kprobe() waits for that.
>>>
>>> Yes.
>>>
>>> So I think we only need to move kfree(tp->call.print_fmt).
>
> OOPS. I was wrong. It seems that ->print_fmt is only for event/format ?
> Then it is fine to kfree it right after trace_remove_event_call().
>
>>> So the sequence should be:
>>>
>>> if (trace_remove_event_call(...))
>>> return;
>>>
>>> /* does synchronize_sched */
>>> unregister_kprobe();
>>>
>>> kfree(everything);
>>>
>>> Agreed?
>>
>> If we can free everything after all, I'd like to do so.
>> Hmm, but AFAICS, trace_remove_event_call() assumes that
>> the event is disabled completely.
>
> Yes, but kprobe_trace_func() is really disabled?
No, currently it doesn't. We need to fix that.
(Perhaps, uprobes too.)
>> A safe way is to wait rcu always right after disable_*probe
>> in disable_trace_probe. If we have an unused link, we can
>> free it after that.
>
> Aaaah... I am starting to understand... Even if kprobe_perf_func()
> is fine, synchronize_sched() is called _before_ disable_kprobe()
> and thus it can't synchronize with the handlers which hit this probe
> after we start synchronize_sched().
>
> Did you mean this, or did I miss something else?
Right, thus the perf path also needs to be synchronized.
Thank you,
--
Masami HIRAMATSU
IT Management Research Dept. Linux Technology Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: [email protected]
On 07/08, Masami Hiramatsu wrote:
>
> (2013/07/06 2:26), Oleg Nesterov wrote:
> (Perhaps, uprobes too.)
Sure, uprobes needs all the same fixes; let's ignore it for now.
> >> A safe way is to wait rcu always right after disable_*probe
> >> in disable_trace_probe. If we have an unused link, we can
> >> free it after that.
> >
> > Aaaah... I am starting to understand... Even if kprobe_perf_func()
> > is fine, synchronize_sched() is called _before_ disable_kprobe()
> > and thus it can't synchronize with the handlers which hit this probe
> > after we start synchronize_sched().
> >
> > Did you mean this, or did I miss something else?
>
> Right, thus the perf path also needs to be synchronized.
kprobe_perf_func() looks fine. This path can't use anything which
can be freed by trace_remove_event_call(). We are going to free
tp / tp->call but only after unregister_kprobe().
Afaics. But even if I am right, I agree it would be safer to not
rely on this and simply do synchronize_sched() after disable as
you suggested.
Thanks,
Oleg.
Wait for all running kprobe handlers to finish when a kprobe
event is disabled, since the caller, trace_remove_event_call(),
assumes that a removed event is completely disabled.
With this change, ftrace can ensure that there are no running
event handlers after disabling the event.
Signed-off-by: Masami Hiramatsu <[email protected]>
---
kernel/trace/trace_kprobe.c | 19 +++++++++++++------
1 file changed, 13 insertions(+), 6 deletions(-)
diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
index 7ed6976..a9a4fe5 100644
--- a/kernel/trace/trace_kprobe.c
+++ b/kernel/trace/trace_kprobe.c
@@ -243,11 +243,11 @@ find_event_file_link(struct trace_probe *tp, struct ftrace_event_file *file)
static int
disable_trace_probe(struct trace_probe *tp, struct ftrace_event_file *file)
{
+ struct event_file_link *link = NULL;
+ int wait = 0;
int ret = 0;
if (file) {
- struct event_file_link *link;
-
link = find_event_file_link(tp, file);
if (!link) {
ret = -EINVAL;
@@ -255,10 +255,7 @@ disable_trace_probe(struct trace_probe *tp, struct ftrace_event_file *file)
}
list_del_rcu(&link->list);
- /* synchronize with kprobe_trace_func/kretprobe_trace_func */
- synchronize_sched();
- kfree(link);
-
+ wait = 1;
if (!list_empty(&tp->files))
goto out;
@@ -271,8 +268,18 @@ disable_trace_probe(struct trace_probe *tp, struct ftrace_event_file *file)
disable_kretprobe(&tp->rp);
else
disable_kprobe(&tp->rp.kp);
+ wait = 1;
}
out:
+ if (wait) {
+ /*
+ * synchronize with kprobe_trace_func/kretprobe_trace_func
+ * to ensure disabled (all running handlers are finished)
+ */
+ synchronize_sched();
+ kfree(link); /* Ignored if link == NULL */
+ }
+
return ret;
}
On Tue, Jul 09, 2013 at 05:01:45PM +0900, Masami Hiramatsu wrote:
> + if (wait) {
> + /*
> + * synchronize with kprobe_trace_func/kretprobe_trace_func
> + * to ensure disabled (all running handlers are finished)
> + */
> + synchronize_sched();
> + kfree(link); /* Ignored if link == NULL */
> + }
What's not clear to me from this comment is if we're only waiting for kfree()?
In that case shouldn't we use call_rcu() to free the thing? Or do we need the
sync_sched() for other purposes as well?
(2013/07/09 17:07), Peter Zijlstra wrote:
> On Tue, Jul 09, 2013 at 05:01:45PM +0900, Masami Hiramatsu wrote:
>> + if (wait) {
>> + /*
>> + * synchronize with kprobe_trace_func/kretprobe_trace_func
>> + * to ensure disabled (all running handlers are finished)
>> + */
>> + synchronize_sched();
>> + kfree(link); /* Ignored if link == NULL */
>> + }
>
> What's not clear to me from this comment is if we're only waiting for kfree()?
> In that case shouldn't we use call_rcu() to free the thing? Or do we need the
> sync_sched() for other purposes as well?
No, this is not only for the kfree, but also to ensure that the
disabling process completes, because trace_remove_event_call()
relies on that when releasing event_call related objects (those
objects are accessed in the handlers).
Thank you,
--
Masami HIRAMATSU
IT Management Research Dept. Linux Technology Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: [email protected]
On Tue, Jul 09, 2013 at 05:20:17PM +0900, Masami Hiramatsu wrote:
> (2013/07/09 17:07), Peter Zijlstra wrote:
> > On Tue, Jul 09, 2013 at 05:01:45PM +0900, Masami Hiramatsu wrote:
> >> + if (wait) {
> >> + /*
> >> + * synchronize with kprobe_trace_func/kretprobe_trace_func
> >> + * to ensure disabled (all running handlers are finished)
> >> + */
> >> + synchronize_sched();
> >> + kfree(link); /* Ignored if link == NULL */
> >> + }
> >
> > What's not clear to me from this comment is if we're only waiting for kfree()?
> > In that case shouldn't we use call_rcu() to free the thing? Or do we need the
> > sync_sched() for other purposes as well?
>
> No, this is not only for the kfree, but also to ensure that the
> disabling process completes, because trace_remove_event_call()
> relies on that when releasing event_call related objects (those
> objects are accessed in the handlers).
Then may I kindly suggest you clarify this in the comment? :-)
(2013/07/09 17:21), Peter Zijlstra wrote:
> On Tue, Jul 09, 2013 at 05:20:17PM +0900, Masami Hiramatsu wrote:
>> (2013/07/09 17:07), Peter Zijlstra wrote:
>>> On Tue, Jul 09, 2013 at 05:01:45PM +0900, Masami Hiramatsu wrote:
>>>> + if (wait) {
>>>> + /*
>>>> + * synchronize with kprobe_trace_func/kretprobe_trace_func
>>>> + * to ensure disabled (all running handlers are finished)
>>>> + */
>>>> + synchronize_sched();
>>>> + kfree(link); /* Ignored if link == NULL */
>>>> + }
>>>
>>> What's not clear to me from this comment is if we're only waiting for kfree()?
>>> In that case shouldn't we use call_rcu() to free the thing? Or do we need the
>>> sync_sched() for other purposes as well?
>>
>> No, this is not only for the kfree, but also to ensure that the
>> disabling process completes, because trace_remove_event_call()
>> relies on that when releasing event_call related objects (those
>> objects are accessed in the handlers).
>
> Then may I kindly suggest you clarify this in the comment? :-)
>
Ah, right! I'll update it :)
Thank you!
--
Masami HIRAMATSU
IT Management Research Dept. Linux Technology Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: [email protected]
Wait for all running kprobe handlers to finish when a kprobe
event is disabled, since the caller, trace_remove_event_call(),
assumes that a removed event is completely disabled.
With this change, ftrace can ensure that there are no running
event handlers after disabling the event.
Changes in V2:
- Comment (in code) to clarify why we need to wait there.
Signed-off-by: Masami Hiramatsu <[email protected]>
---
kernel/trace/trace_kprobe.c | 23 +++++++++++++++++------
1 file changed, 17 insertions(+), 6 deletions(-)
diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
index 7ed6976..8374f78 100644
--- a/kernel/trace/trace_kprobe.c
+++ b/kernel/trace/trace_kprobe.c
@@ -243,11 +243,11 @@ find_event_file_link(struct trace_probe *tp, struct ftrace_event_file *file)
static int
disable_trace_probe(struct trace_probe *tp, struct ftrace_event_file *file)
{
+ struct event_file_link *link = NULL;
+ int wait = 0;
int ret = 0;
if (file) {
- struct event_file_link *link;
-
link = find_event_file_link(tp, file);
if (!link) {
ret = -EINVAL;
@@ -255,10 +255,7 @@ disable_trace_probe(struct trace_probe *tp, struct ftrace_event_file *file)
}
list_del_rcu(&link->list);
- /* synchronize with kprobe_trace_func/kretprobe_trace_func */
- synchronize_sched();
- kfree(link);
-
+ wait = 1;
if (!list_empty(&tp->files))
goto out;
@@ -271,8 +268,22 @@ disable_trace_probe(struct trace_probe *tp, struct ftrace_event_file *file)
disable_kretprobe(&tp->rp);
else
disable_kprobe(&tp->rp.kp);
+ wait = 1;
}
out:
+ if (wait) {
+ /*
+ * Synchronize with kprobe_trace_func/kretprobe_trace_func
+ * to ensure disabled (all running handlers are finished).
+ * This is not only for kfree(), but also the caller,
+ * trace_remove_event_call() supposes it for releasing
+ * event_call related objects, which will be accessed in
+ * the kprobe_trace_func/kretprobe_trace_func.
+ */
+ synchronize_sched();
+ kfree(link); /* Ignored if link == NULL */
+ }
+
return ret;
}
On 07/09, Masami Hiramatsu wrote:
>
> Wait for all running kprobe handlers to finish when a kprobe
> event is disabled, since the caller, trace_remove_event_call(),
> assumes that a removed event is completely disabled.
> With this change, ftrace can ensure that there are no running
> event handlers after disabling the event.
>
> Changes in V2:
> - Comment (in code) to clarify why we need to wait there.
>
> Signed-off-by: Masami Hiramatsu <[email protected]>
Agreed.
We need this change in any case, whatever we do in trace/trace_events.c
and in unregister_trace_probe().
Oleg.
(2013/07/16 3:20), Oleg Nesterov wrote:
> On 07/09, Masami Hiramatsu wrote:
>>
>> Wait for all running kprobe handlers to finish when a kprobe
>> event is disabled, since the caller, trace_remove_event_call(),
>> assumes that a removed event is completely disabled.
>> With this change, ftrace can ensure that there are no running
>> event handlers after disabling the event.
>>
>> Changes in V2:
>> - Comment (in code) to clarify why we need to wait there.
>>
>> Signed-off-by: Masami Hiramatsu <[email protected]>
>
> Agreed.
>
> We need this change in any case, whatever we do in trace/trace_events.c
> and in unregister_trace_probe().
Steven, I think this patch is still needed anyway even if we are
discussing file handling bugs, because this fixes another timing bug.
Thank you,
--
Masami HIRAMATSU
IT Management Research Dept. Linux Technology Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: [email protected]
On Thu, 2013-07-18 at 21:07 +0900, Masami Hiramatsu wrote:
> (2013/07/16 3:20), Oleg Nesterov wrote:
> > On 07/09, Masami Hiramatsu wrote:
> >>
> >> Wait for all running kprobe handlers to finish when a kprobe
> >> event is disabled, since the caller, trace_remove_event_call(),
> >> assumes that a removed event is completely disabled.
> >> With this change, ftrace can ensure that there are no running
> >> event handlers after disabling the event.
> >>
> >> Changes in V2:
> >> - Comment (in code) to clarify why we need to wait there.
> >>
> >> Signed-off-by: Masami Hiramatsu <[email protected]>
> >
> > Agreed.
> >
> > We need this change in any case, whatever we do in trace/trace_events.c
> > and in unregister_trace_probe().
>
> Steven, I think this patch is still needed anyway even if we are
> discussing file handling bugs, because this fixes another timing bug.
Sure, I'll pull that into my queue now.
-- Steve
(2013/07/04 12:33), Steven Rostedt wrote:
> From: "Steven Rostedt (Red Hat)" <[email protected]>
>
> When one of the event files is opened, we need to prevent them from
> being removed. Modules do this with the module owner set (automated
> from the VFS layer). The ftrace buffer instances have a ref count added
> to the trace_array when the enabled file is opened (the others are not
> that big of a deal, as they only reference the event calls which
> still exist when an instance disappears). But kprobes and uprobes
> do not have any protection.
>
> # cd /sys/kernel/debug/tracing
> # echo 'p:sigprocmask sigprocmask' > kprobe_events || exit -1
> # enable_probe() {
> sleep 10
> echo 1
> }
> # file=events/kprobes/sigprocmask/enable
> # enable_probe > $file &
> > kprobe_events
>
> The above will corrupt the kprobe system, as the write to the enable
> file will happen after the kprobe was deleted.
>
> Trying to create the probe again fails:
>
> # echo 'p:sigprocmask sigprocmask' > kprobe_events
> # cat kprobe_events
> p:kprobes/sigprocmask sigprocmask
> # ls events/kprobes/
> enable filter
>
> Have the probe unregistration fail when the event files are open, in use,
> or used by perf.
>
Now that commit a232e270d makes sure that disable_trace_probe()
waits until all running handlers are done, this patch has no problem.
Acked-by: Masami Hiramatsu <[email protected]>
Thank you!
> Signed-off-by: Steven Rostedt <[email protected]>
> ---
> kernel/trace/trace_kprobe.c | 21 +++++++++++++++------
> 1 file changed, 15 insertions(+), 6 deletions(-)
>
> diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
> index 7ed6976..ffcaf42 100644
> --- a/kernel/trace/trace_kprobe.c
> +++ b/kernel/trace/trace_kprobe.c
> @@ -95,7 +95,7 @@ static __kprobes bool trace_probe_is_on_module(struct trace_probe *tp)
> }
>
> static int register_probe_event(struct trace_probe *tp);
> -static void unregister_probe_event(struct trace_probe *tp);
> +static int unregister_probe_event(struct trace_probe *tp);
>
> static DEFINE_MUTEX(probe_lock);
> static LIST_HEAD(probe_list);
> @@ -340,9 +340,12 @@ static int unregister_trace_probe(struct trace_probe *tp)
> if (trace_probe_is_enabled(tp))
> return -EBUSY;
>
> + /* Will fail if probe is being used by ftrace or perf */
> + if (unregister_probe_event(tp))
> + return -EBUSY;
> +
> __unregister_trace_probe(tp);
> list_del(&tp->list);
> - unregister_probe_event(tp);
>
> return 0;
> }
> @@ -621,7 +624,9 @@ static int release_all_trace_probes(void)
> /* TODO: Use batch unregistration */
> while (!list_empty(&probe_list)) {
> tp = list_entry(probe_list.next, struct trace_probe, list);
> - unregister_trace_probe(tp);
> + ret = unregister_trace_probe(tp);
> + if (ret)
> + goto end;
> free_trace_probe(tp);
> }
>
> @@ -1242,11 +1247,15 @@ static int register_probe_event(struct trace_probe *tp)
> return ret;
> }
>
> -static void unregister_probe_event(struct trace_probe *tp)
> +static int unregister_probe_event(struct trace_probe *tp)
> {
> + int ret;
> +
> /* tp->event is unregistered in trace_remove_event_call() */
> - trace_remove_event_call(&tp->call);
> - kfree(tp->call.print_fmt);
> + ret = trace_remove_event_call(&tp->call);
> + if (!ret)
> + kfree(tp->call.print_fmt);
> + return ret;
> }
>
> /* Make a debugfs interface for controlling probe points */
>
--
Masami HIRAMATSU
IT Management Research Dept. Linux Technology Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: [email protected]
On Wed, 2013-07-03 at 23:33 -0400, Steven Rostedt wrote:
> plain text document attachment
> (0003-tracing-kprobes-Fail-to-unregister-if-probe-event-fi.patch)
> From: "Steven Rostedt (Red Hat)" <[email protected]>
>
> When one of the event files is opened, we need to prevent them from
> being removed. Modules do this with the module owner set (automated
> from the VFS layer). The ftrace buffer instances have a ref count added
> to the trace_array when the enabled file is opened (the others are not
> that big of a deal, as they only reference the event calls which
> still exist when an instance disappears). But kprobes and uprobes
> do not have any protection.
>
> # cd /sys/kernel/debug/tracing
> # echo 'p:sigprocmask sigprocmask' > kprobe_events || exit -1
> # enable_probe() {
> sleep 10
> echo 1
> }
> # file=events/kprobes/sigprocmask/enable
> # enable_probe > $file &
> > kprobe_events
>
> The above will corrupt the kprobe system, as the write to the enable
> file will happen after the kprobe was deleted.
>
> Trying to create the probe again fails:
>
> # echo 'p:sigprocmask sigprocmask' > kprobe_events
> # cat kprobe_events
> p:kprobes/sigprocmask sigprocmask
> # ls events/kprobes/
> enable filter
Oleg,
The above no longer triggers the bug due to your changes. The race is
much tighter now and requires a process with the enable file opened,
racing with a write to enable it: the removal of the trace file checks
that the trace is disabled, sees that it is, and continues, but then the
write enables it just as it gets deleted.
Is this correct?
Do you know of a way to trigger this bug? It probably requires writing
something in C that hammers '0's and '1's into the enable file while
removing it; check if the removal succeeds and try again.
Hmm, what happens without this patch now? If it is active, and we delete
it? It will call back into the kprobes and access a tracepoint that does
not exist? Would this cause a crash?
-- Steve
>
> Have the probe unregistration fail when the event files are open, in use,
> or used by perf.
>
> Signed-off-by: Steven Rostedt <[email protected]>
> ---
> kernel/trace/trace_kprobe.c | 21 +++++++++++++++------
> 1 file changed, 15 insertions(+), 6 deletions(-)
>
> diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
> index 7ed6976..ffcaf42 100644
> --- a/kernel/trace/trace_kprobe.c
> +++ b/kernel/trace/trace_kprobe.c
> @@ -95,7 +95,7 @@ static __kprobes bool trace_probe_is_on_module(struct trace_probe *tp)
> }
>
> static int register_probe_event(struct trace_probe *tp);
> -static void unregister_probe_event(struct trace_probe *tp);
> +static int unregister_probe_event(struct trace_probe *tp);
>
> static DEFINE_MUTEX(probe_lock);
> static LIST_HEAD(probe_list);
> @@ -340,9 +340,12 @@ static int unregister_trace_probe(struct trace_probe *tp)
> if (trace_probe_is_enabled(tp))
> return -EBUSY;
>
> + /* Will fail if probe is being used by ftrace or perf */
> + if (unregister_probe_event(tp))
> + return -EBUSY;
> +
> __unregister_trace_probe(tp);
> list_del(&tp->list);
> - unregister_probe_event(tp);
>
> return 0;
> }
> @@ -621,7 +624,9 @@ static int release_all_trace_probes(void)
> /* TODO: Use batch unregistration */
> while (!list_empty(&probe_list)) {
> tp = list_entry(probe_list.next, struct trace_probe, list);
> - unregister_trace_probe(tp);
> + ret = unregister_trace_probe(tp);
> + if (ret)
> + goto end;
> free_trace_probe(tp);
> }
>
> @@ -1242,11 +1247,15 @@ static int register_probe_event(struct trace_probe *tp)
> return ret;
> }
>
> -static void unregister_probe_event(struct trace_probe *tp)
> +static int unregister_probe_event(struct trace_probe *tp)
> {
> + int ret;
> +
> /* tp->event is unregistered in trace_remove_event_call() */
> - trace_remove_event_call(&tp->call);
> - kfree(tp->call.print_fmt);
> + ret = trace_remove_event_call(&tp->call);
> + if (!ret)
> + kfree(tp->call.print_fmt);
> + return ret;
> }
>
> /* Make a debugfs interface for controlling probe points */
On 07/31, Steven Rostedt wrote:
>
> On Wed, 2013-07-03 at 23:33 -0400, Steven Rostedt wrote:
> > The above will corrupt the kprobe system, as the write to the enable
> > file will happen after the kprobe was deleted.
>
> Oleg,
>
> The above no longer triggers the bug due to your changes. The race is
> much tighter now
Yes, the changelog should be updated...
> and requires a process with the enable file opened and
> races with a write to enable it where the removal of the trace file
> checks the trace disabled, sees that it is, continues, but then the
> write enables it just as it gets deleted.
This should be fine. Either the event_remove() path takes event_mutex
first and then ->write() fails, or ftrace_event_enable_disable()
actually disables this event successfully.
> Do you know of a way to trigger this bug?
I'll try to think more tomorrow, but most probably no. The race is
unlikely.
We need perf_trace_event_init() or ":enable_event:this-event" right
before trace_remove_event_call() takes the mutex.
And right after the caller (kprobes) checks "disabled".
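Without the patch, the window being described is roughly this
interleaving (illustrative sketch only):

```
removal path                           enable path (perf / enable_event)
------------                           ----------------------------------
unregister_trace_probe()
  trace_probe_is_enabled() -> false
                                       perf_trace_event_init() or
                                       ":enable_event:this-event"
                                       enables the event
  unregister_probe_event()
    trace_remove_event_call()
      frees call/files
                                       handler fires on freed memory
```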
> Hmm, what happens without this patch now? If it is active, and we delete
> it? It will call back into the kprobes and access a tracepoint that does
> not exist? Would this cause a crash?
I think yes, the crash is possible.
With perf or FL_SOFT_MODE, this call/file has external references, and
we are going to free it.
Oleg.
On Wed, 2013-07-31 at 22:40 +0200, Oleg Nesterov wrote:
> On 07/31, Steven Rostedt wrote:
> >
> > On Wed, 2013-07-03 at 23:33 -0400, Steven Rostedt wrote:
> > > The above will corrupt the kprobe system, as the write to the enable
> > > file will happen after the kprobe was deleted.
> >
> > Oleg,
> >
> > The above no longer triggers the bug due to your changes. The race is
> > much tighter now
>
> Yes, the changelog should be updated...
>
> > and requires a process with the enable file opened and
> > races with a write to enable it where the removal of the trace file
> > checks the trace disabled, sees that it is, continues, but then the
> > write enables it just as it gets deleted.
>
> This should be fine. Either event_remove() path takes event_mutex
> first and then ->write() fails, or ftrace_event_enable_disable()
> actually disables this event successfully.
Actually I meant while in unregister_trace_probe(), it gets by the
trace_probe_is_enabled() part first, then the write succeeds (as the
event_mutex isn't taken till unregister_probe_event()). Then the
unregister_probe_event() fails, but the tp was freed. The event files
still reference the tp and this is where a crash can happen without this
patch set.
-- Steve
On Wed, 2013-07-31 at 18:42 -0400, Steven Rostedt wrote:
>
> > This should be fine. Either event_remove() path takes event_mutex
> > first and then ->write() fails, or ftrace_event_enable_disable()
> > actually disables this event successfully.
>
> Actually I meant while in unregister_trace_probe(), it gets by the
> trace_probe_is_enabled() part first, then the write succeeds (as the
> event_mutex isn't taken till unregister_probe_event()). Then the
> unregister_probe_event() fails, but the tp was freed. The event files
> still reference the tp and this is where a crash can happen without this
> patch set.
And it's not just theoretical. I worked on a program to try to trigger
this bug, and succeeded :-) (I've never been so happy being able to
crash my kernel)
[ 128.999772] BUG: unable to handle kernel paging request at 00000005000000f9
[ 129.000015] IP: [<ffffffff810dee70>] probes_open+0x3b/0xa7
[ 129.000015] PGD 7808a067 PUD 0
[ 129.000015] Oops: 0000 [#1] PREEMPT SMP
[ 129.000015] Dumping ftrace buffer:
<skip>
[ 129.000015] ---------------------------------
[ 129.000015] Modules linked in: ipt_MASQUERADE sunrpc ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 ip6table_filter ip6_tables uinput snd_hda_codec_idt kvm_intel snd_hda_intel snd_hda_codec snd_hwdep snd_seq snd_seq_device snd_pcm kvm snd_page_alloc snd_timer shpchp snd microcode i2c_i801 soundcore pata_acpi firewire_ohci firewire_core crc_itu_t ata_generic i915 drm_kms_helper drm i2c_algo_bit i2c_core video
[ 129.000015] CPU: 1 PID: 2070 Comm: test-kprobe-rem Not tainted 3.11.0-rc3-test+ #47
[ 129.000015] Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./To be filled by O.E.M., BIOS SDBLI944.86P 05/08/2007
[ 129.000015] task: ffff880077756440 ti: ffff880076e52000 task.ti: ffff880076e52000
[ 129.000015] RIP: 0010:[<ffffffff810dee70>] [<ffffffff810dee70>] probes_open+0x3b/0xa7
[ 129.000015] RSP: 0018:ffff880076e53c38 EFLAGS: 00010203
[ 129.000015] RAX: 0000000500000001 RBX: ffff88007844f440 RCX: 0000000000000003
[ 129.000015] RDX: 0000000000000003 RSI: 0000000000000003 RDI: ffff880076e52000
[ 129.000015] RBP: ffff880076e53c58 R08: ffff880076e53bd8 R09: 0000000000000000
[ 129.000015] R10: ffff880077756440 R11: 0000000000000006 R12: ffffffff810dee35
[ 129.000015] R13: ffff880079250418 R14: 0000000000000000 R15: ffff88007844f450
[ 129.000015] FS: 00007f87a276f700(0000) GS:ffff88007d480000(0000) knlGS:0000000000000000
[ 129.000015] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[ 129.000015] CR2: 00000005000000f9 CR3: 0000000077262000 CR4: 00000000000007e0
[ 129.000015] Stack:
[ 129.000015] ffff880076e53c58 ffffffff81219ea0 ffff88007844f440 ffffffff810dee35
[ 129.000015] ffff880076e53ca8 ffffffff81130f78 ffff8800772986c0 ffff8800796f93a0
[ 129.000015] ffffffff81d1b5d8 ffff880076e53e04 0000000000000000 ffff88007844f440
[ 129.000015] Call Trace:
[ 129.000015] [<ffffffff81219ea0>] ? security_file_open+0x2c/0x30
[ 129.000015] [<ffffffff810dee35>] ? unregister_trace_probe+0x4b/0x4b
[ 129.000015] [<ffffffff81130f78>] do_dentry_open+0x162/0x226
[ 129.000015] [<ffffffff81131186>] finish_open+0x46/0x54
[ 129.000015] [<ffffffff8113f30b>] do_last+0x7f6/0x996
[ 129.000015] [<ffffffff8113cc6f>] ? inode_permission+0x42/0x44
[ 129.000015] [<ffffffff8113f6dd>] path_openat+0x232/0x496
[ 129.000015] [<ffffffff8113fc30>] do_filp_open+0x3a/0x8a
[ 129.000015] [<ffffffff8114ab32>] ? __alloc_fd+0x168/0x17a
[ 129.000015] [<ffffffff81131f4e>] do_sys_open+0x70/0x102
[ 129.000015] [<ffffffff8108f06e>] ? trace_hardirqs_on_caller+0x160/0x197
[ 129.000015] [<ffffffff81131ffe>] SyS_open+0x1e/0x20
[ 129.000015] [<ffffffff81522742>] system_call_fastpath+0x16/0x1b
[ 129.000015] Code: e5 41 54 53 48 89 f3 48 83 ec 10 48 23 56 78 48 39 c2 75 6c 31 f6 48 c7 c7 d0 e6 a4 81 e8 3a 91 43 00 48 8b 05 f2 f8 96 00 eb 0c <f6> 80 f8 00 00 00 03 75 31 48 8b 00 48 3d 60 e7 a4 81 75 ec eb
[ 129.000015] RIP [<ffffffff810dee70>] probes_open+0x3b/0xa7
[ 129.000015] RSP <ffff880076e53c38>
[ 129.000015] CR2: 00000005000000f9
[ 130.555521] ---[ end trace 35f17d68fc569897 ]---
Attached is the program I used. It took lots of tweaks and doesn't
always trigger the bug on the first run. It can take several runs to
trigger. Each bug I've seen has been rather random. Twice it crashed due
to memory errors, once it just screwed up the kprobes, but the system
still ran fine. I had another kprobes error bug where, when I went to do
more tracing, it crashed.
What my program does is creates two threads (well, one thread and the
main thread). They place themselves onto CPU 0 and 1. Then the following
occurs:
CPU 0 CPU 1
----- -----
create sigprocmask probe
loop: loop:
fd = open(events/kprobes/sigprocmask/enable)
pthread_barrier_wait()
nanosleep(interval)
pthread_barrier_wait()
write(fd, "1", 1);
write(fd, "0", 1);
truncate kprobe_events
// deletes all probes
pthread_barrier_wait();
pthread_barrier_wait();
write(fd, "0", 1); // just in case
create sigprocmask probe
pthread_barrier_wait()
pthread_barrier_wait()
update interval
goto loop goto loop
I had to tweak the interval times to see where it would most likely
cause a race. When I found where it started to conflict, I placed the
range around that time and incremented it in steps related to 7/5
(~e/2). That eventually triggered the bug.
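For reference, the structure of the racer can be sketched roughly as below.
This is a reconstruction, not the attached program: the file paths are
harmless stand-ins (the real test needs root and writes the tracefs files
shown in the table above), and the pinning to CPUs 0 and 1 with
pthread_setaffinity_np() is omitted for brevity.

```c
#define _GNU_SOURCE
#include <assert.h>
#include <fcntl.h>
#include <pthread.h>
#include <time.h>
#include <unistd.h>

/* Stand-in paths: the real test wrote to
 * /sys/kernel/debug/tracing/events/kprobes/sigprocmask/enable and
 * /sys/kernel/debug/tracing/kprobe_events (root only). */
static const char *enable_path = "/tmp/demo_enable";
static const char *events_path = "/tmp/demo_kprobe_events";

static pthread_barrier_t barrier;
static long interval_ns;
static int iterations;

static void create_probe(void)
{
	/* echo 'p:sigprocmask sigprocmask' >> kprobe_events */
	int fd = open(events_path, O_WRONLY | O_CREAT | O_APPEND, 0644);
	if (fd >= 0) {
		(void)!write(fd, "p:sigprocmask sigprocmask\n", 26);
		close(fd);
	}
}

static void *cpu0(void *arg)		/* opens and writes "enable" */
{
	(void)arg;
	for (int i = 0; i < iterations; i++) {
		int fd = open(enable_path, O_WRONLY | O_CREAT, 0644);
		pthread_barrier_wait(&barrier);
		struct timespec ts = { 0, interval_ns };
		nanosleep(&ts, NULL);		/* the tunable race window */
		if (fd >= 0) {
			(void)!write(fd, "1", 1);
			(void)!write(fd, "0", 1);
		}
		pthread_barrier_wait(&barrier);
		if (fd >= 0) {
			(void)!write(fd, "0", 1);	/* just in case */
			close(fd);
		}
		pthread_barrier_wait(&barrier);
		interval_ns = interval_ns * 7 / 5;	/* step by ~e/2 */
	}
	return NULL;
}

static void *cpu1(void *arg)		/* deletes and recreates the probe */
{
	(void)arg;
	for (int i = 0; i < iterations; i++) {
		pthread_barrier_wait(&barrier);
		/* truncate kprobe_events: deletes all probes */
		int fd = open(events_path, O_WRONLY | O_TRUNC | O_CREAT, 0644);
		if (fd >= 0)
			close(fd);
		pthread_barrier_wait(&barrier);
		create_probe();
		pthread_barrier_wait(&barrier);
	}
	return NULL;
}

int run_race(int iters, long ns)
{
	pthread_t t0, t1;

	iterations = iters;
	interval_ns = ns;
	pthread_barrier_init(&barrier, NULL, 2);
	create_probe();
	pthread_create(&t0, NULL, cpu0, NULL);
	pthread_create(&t1, NULL, cpu1, NULL);
	pthread_join(t0, NULL);
	pthread_join(t1, NULL);
	pthread_barrier_destroy(&barrier);
	return 0;
}
```

The three barriers per iteration keep the delete/recreate on one thread
lined up against the open/write/close on the other, with only the
nanosleep() interval left floating to sweep across the race window.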
Now with this patch, I had to modify the test to give it a negative
number to place the pause before the write. That's because the switch to
removing the events slowed down the deletion and allowed the write to
always succeed (without the delay).
Anyway, I ran this in a while loop with the patch for an hour (and still
running) and it seems to keep it from crashing.
I'll update the change log to reflect this.
Thanks!
-- Steve
Here's the new change log, but the same patch. Does this sound ok to you
guys?
-- Steve
From 40c32592668b727cbfcf7b1c0567f581bd62a5e4 Mon Sep 17 00:00:00 2001
From: "Steven Rostedt (Red Hat)" <[email protected]>
Date: Wed, 3 Jul 2013 23:33:50 -0400
Subject: [PATCH] tracing/kprobes: Fail to unregister if probe event files are
in use
When a probe is being removed, it cleans up the event files that correspond
to the probe. But there is a race between writing to one of these files
and deleting the probe. This is especially true for the "enable" file.
CPU 0 CPU 1
----- -----
fd = open("enable",O_WRONLY);
probes_open()
release_all_trace_probes()
unregister_trace_probe()
if (trace_probe_is_enabled(tp))
return -EBUSY
write(fd, "1", 1)
__ftrace_set_clr_event()
call->class->reg()
(kprobe_register)
enable_trace_probe(tp)
__unregister_trace_probe(tp);
list_del(&tp->list)
unregister_probe_event(tp) <-- fails!
free_trace_probe(tp)
write(fd, "0", 1)
__ftrace_set_clr_event()
call->class->unreg
(kprobe_register)
disable_trace_probe(tp) <-- BOOM!
A test program was written that used two threads to simulate the
above scenario adding a nanosleep() interval to change the timings
and after several thousand runs, it was able to trigger this bug
and crash:
BUG: unable to handle kernel paging request at 00000005000000f9
IP: [<ffffffff810dee70>] probes_open+0x3b/0xa7
PGD 7808a067 PUD 0
Oops: 0000 [#1] PREEMPT SMP
Dumping ftrace buffer:
---------------------------------
Modules linked in: ipt_MASQUERADE sunrpc ip6t_REJECT nf_conntrack_ipv6
CPU: 1 PID: 2070 Comm: test-kprobe-rem Not tainted 3.11.0-rc3-test+ #47
Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./To be filled by O.E.M., BIOS SDBLI944.86P 05/08/2007
task: ffff880077756440 ti: ffff880076e52000 task.ti: ffff880076e52000
RIP: 0010:[<ffffffff810dee70>] [<ffffffff810dee70>] probes_open+0x3b/0xa7
RSP: 0018:ffff880076e53c38 EFLAGS: 00010203
RAX: 0000000500000001 RBX: ffff88007844f440 RCX: 0000000000000003
RDX: 0000000000000003 RSI: 0000000000000003 RDI: ffff880076e52000
RBP: ffff880076e53c58 R08: ffff880076e53bd8 R09: 0000000000000000
R10: ffff880077756440 R11: 0000000000000006 R12: ffffffff810dee35
R13: ffff880079250418 R14: 0000000000000000 R15: ffff88007844f450
FS: 00007f87a276f700(0000) GS:ffff88007d480000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 00000005000000f9 CR3: 0000000077262000 CR4: 00000000000007e0
Stack:
ffff880076e53c58 ffffffff81219ea0 ffff88007844f440 ffffffff810dee35
ffff880076e53ca8 ffffffff81130f78 ffff8800772986c0 ffff8800796f93a0
ffffffff81d1b5d8 ffff880076e53e04 0000000000000000 ffff88007844f440
Call Trace:
[<ffffffff81219ea0>] ? security_file_open+0x2c/0x30
[<ffffffff810dee35>] ? unregister_trace_probe+0x4b/0x4b
[<ffffffff81130f78>] do_dentry_open+0x162/0x226
[<ffffffff81131186>] finish_open+0x46/0x54
[<ffffffff8113f30b>] do_last+0x7f6/0x996
[<ffffffff8113cc6f>] ? inode_permission+0x42/0x44
[<ffffffff8113f6dd>] path_openat+0x232/0x496
[<ffffffff8113fc30>] do_filp_open+0x3a/0x8a
[<ffffffff8114ab32>] ? __alloc_fd+0x168/0x17a
[<ffffffff81131f4e>] do_sys_open+0x70/0x102
[<ffffffff8108f06e>] ? trace_hardirqs_on_caller+0x160/0x197
[<ffffffff81131ffe>] SyS_open+0x1e/0x20
[<ffffffff81522742>] system_call_fastpath+0x16/0x1b
Code: e5 41 54 53 48 89 f3 48 83 ec 10 48 23 56 78 48 39 c2 75 6c 31 f6 48 c7
RIP [<ffffffff810dee70>] probes_open+0x3b/0xa7
RSP <ffff880076e53c38>
CR2: 00000005000000f9
---[ end trace 35f17d68fc569897 ]---
The unregister_trace_probe() must be done first, and if it fails it must
fail the removal of the kprobe.
Several changes have already been made by Oleg Nesterov and Masami Hiramatsu
to allow moving the unregister_probe_event() before the removal of
the probe and exit the function if it fails. This prevents the tp
structure from being used after it is freed.
Link: http://lkml.kernel.org/r/[email protected]
Acked-by: Masami Hiramatsu <[email protected]>
Signed-off-by: Steven Rostedt <[email protected]>
---
kernel/trace/trace_kprobe.c | 21 +++++++++++++++------
1 file changed, 15 insertions(+), 6 deletions(-)
diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
index 3811487..243f683 100644
--- a/kernel/trace/trace_kprobe.c
+++ b/kernel/trace/trace_kprobe.c
@@ -95,7 +95,7 @@ static __kprobes bool trace_probe_is_on_module(struct trace_probe *tp)
}
static int register_probe_event(struct trace_probe *tp);
-static void unregister_probe_event(struct trace_probe *tp);
+static int unregister_probe_event(struct trace_probe *tp);
static DEFINE_MUTEX(probe_lock);
static LIST_HEAD(probe_list);
@@ -351,9 +351,12 @@ static int unregister_trace_probe(struct trace_probe *tp)
if (trace_probe_is_enabled(tp))
return -EBUSY;
+ /* Will fail if probe is being used by ftrace or perf */
+ if (unregister_probe_event(tp))
+ return -EBUSY;
+
__unregister_trace_probe(tp);
list_del(&tp->list);
- unregister_probe_event(tp);
return 0;
}
@@ -632,7 +635,9 @@ static int release_all_trace_probes(void)
/* TODO: Use batch unregistration */
while (!list_empty(&probe_list)) {
tp = list_entry(probe_list.next, struct trace_probe, list);
- unregister_trace_probe(tp);
+ ret = unregister_trace_probe(tp);
+ if (ret)
+ goto end;
free_trace_probe(tp);
}
@@ -1247,11 +1252,15 @@ static int register_probe_event(struct trace_probe *tp)
return ret;
}
-static void unregister_probe_event(struct trace_probe *tp)
+static int unregister_probe_event(struct trace_probe *tp)
{
+ int ret;
+
/* tp->event is unregistered in trace_remove_event_call() */
- trace_remove_event_call(&tp->call);
- kfree(tp->call.print_fmt);
+ ret = trace_remove_event_call(&tp->call);
+ if (!ret)
+ kfree(tp->call.print_fmt);
+ return ret;
}
/* Make a debugfs interface for controlling probe points */
--
1.8.1.4
(2013/08/01 11:50), Steven Rostedt wrote:
> Here's the new change log, but the same patch. Does this sound ok to you
> guys?
Great! This looks good to me. ;)
Thank you very much!
>
> -- Steve
>
> From 40c32592668b727cbfcf7b1c0567f581bd62a5e4 Mon Sep 17 00:00:00 2001
> From: "Steven Rostedt (Red Hat)" <[email protected]>
> Date: Wed, 3 Jul 2013 23:33:50 -0400
> Subject: [PATCH] tracing/kprobes: Fail to unregister if probe event files are
> in use
>
> When a probe is being removed, it cleans up the event files that correspond
> to the probe. But there is a race between writing to one of these files
> and deleting the probe. This is especially true for the "enable" file.
>
> CPU 0 CPU 1
> ----- -----
>
> fd = open("enable",O_WRONLY);
>
> probes_open()
> release_all_trace_probes()
> unregister_trace_probe()
> if (trace_probe_is_enabled(tp))
> return -EBUSY
>
> write(fd, "1", 1)
> __ftrace_set_clr_event()
> call->class->reg()
> (kprobe_register)
> enable_trace_probe(tp)
>
> __unregister_trace_probe(tp);
> list_del(&tp->list)
> unregister_probe_event(tp) <-- fails!
> free_trace_probe(tp)
>
> write(fd, "0", 1)
> __ftrace_set_clr_event()
> call->class->unreg
> (kprobe_register)
> disable_trace_probe(tp) <-- BOOM!
>
> A test program was written that used two threads to simulate the
> above scenario adding a nanosleep() interval to change the timings
> and after several thousand runs, it was able to trigger this bug
> and crash:
>
> BUG: unable to handle kernel paging request at 00000005000000f9
> IP: [<ffffffff810dee70>] probes_open+0x3b/0xa7
> PGD 7808a067 PUD 0
> Oops: 0000 [#1] PREEMPT SMP
> Dumping ftrace buffer:
> ---------------------------------
> Modules linked in: ipt_MASQUERADE sunrpc ip6t_REJECT nf_conntrack_ipv6
> CPU: 1 PID: 2070 Comm: test-kprobe-rem Not tainted 3.11.0-rc3-test+ #47
> Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./To be filled by O.E.M., BIOS SDBLI944.86P 05/08/2007
> task: ffff880077756440 ti: ffff880076e52000 task.ti: ffff880076e52000
> RIP: 0010:[<ffffffff810dee70>] [<ffffffff810dee70>] probes_open+0x3b/0xa7
> RSP: 0018:ffff880076e53c38 EFLAGS: 00010203
> RAX: 0000000500000001 RBX: ffff88007844f440 RCX: 0000000000000003
> RDX: 0000000000000003 RSI: 0000000000000003 RDI: ffff880076e52000
> RBP: ffff880076e53c58 R08: ffff880076e53bd8 R09: 0000000000000000
> R10: ffff880077756440 R11: 0000000000000006 R12: ffffffff810dee35
> R13: ffff880079250418 R14: 0000000000000000 R15: ffff88007844f450
> FS: 00007f87a276f700(0000) GS:ffff88007d480000(0000) knlGS:0000000000000000
> CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
> CR2: 00000005000000f9 CR3: 0000000077262000 CR4: 00000000000007e0
> Stack:
> ffff880076e53c58 ffffffff81219ea0 ffff88007844f440 ffffffff810dee35
> ffff880076e53ca8 ffffffff81130f78 ffff8800772986c0 ffff8800796f93a0
> ffffffff81d1b5d8 ffff880076e53e04 0000000000000000 ffff88007844f440
> Call Trace:
> [<ffffffff81219ea0>] ? security_file_open+0x2c/0x30
> [<ffffffff810dee35>] ? unregister_trace_probe+0x4b/0x4b
> [<ffffffff81130f78>] do_dentry_open+0x162/0x226
> [<ffffffff81131186>] finish_open+0x46/0x54
> [<ffffffff8113f30b>] do_last+0x7f6/0x996
> [<ffffffff8113cc6f>] ? inode_permission+0x42/0x44
> [<ffffffff8113f6dd>] path_openat+0x232/0x496
> [<ffffffff8113fc30>] do_filp_open+0x3a/0x8a
> [<ffffffff8114ab32>] ? __alloc_fd+0x168/0x17a
> [<ffffffff81131f4e>] do_sys_open+0x70/0x102
> [<ffffffff8108f06e>] ? trace_hardirqs_on_caller+0x160/0x197
> [<ffffffff81131ffe>] SyS_open+0x1e/0x20
> [<ffffffff81522742>] system_call_fastpath+0x16/0x1b
> Code: e5 41 54 53 48 89 f3 48 83 ec 10 48 23 56 78 48 39 c2 75 6c 31 f6 48 c7
> RIP [<ffffffff810dee70>] probes_open+0x3b/0xa7
> RSP <ffff880076e53c38>
> CR2: 00000005000000f9
> ---[ end trace 35f17d68fc569897 ]---
>
> The unregister_trace_probe() must be done first, and if it fails it must
> fail the removal of the kprobe.
>
> Several changes have already been made by Oleg Nesterov and Masami Hiramatsu
> to allow moving the unregister_probe_event() before the removal of
> the probe and exit the function if it fails. This prevents the tp
> structure from being used after it is freed.
>
> Link: http://lkml.kernel.org/r/[email protected]
>
> Acked-by: Masami Hiramatsu <[email protected]>
> Signed-off-by: Steven Rostedt <[email protected]>
> ---
> kernel/trace/trace_kprobe.c | 21 +++++++++++++++------
> 1 file changed, 15 insertions(+), 6 deletions(-)
>
> diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
> index 3811487..243f683 100644
> --- a/kernel/trace/trace_kprobe.c
> +++ b/kernel/trace/trace_kprobe.c
> @@ -95,7 +95,7 @@ static __kprobes bool trace_probe_is_on_module(struct trace_probe *tp)
> }
>
> static int register_probe_event(struct trace_probe *tp);
> -static void unregister_probe_event(struct trace_probe *tp);
> +static int unregister_probe_event(struct trace_probe *tp);
>
> static DEFINE_MUTEX(probe_lock);
> static LIST_HEAD(probe_list);
> @@ -351,9 +351,12 @@ static int unregister_trace_probe(struct trace_probe *tp)
> if (trace_probe_is_enabled(tp))
> return -EBUSY;
>
> + /* Will fail if probe is being used by ftrace or perf */
> + if (unregister_probe_event(tp))
> + return -EBUSY;
> +
> __unregister_trace_probe(tp);
> list_del(&tp->list);
> - unregister_probe_event(tp);
>
> return 0;
> }
> @@ -632,7 +635,9 @@ static int release_all_trace_probes(void)
> /* TODO: Use batch unregistration */
> while (!list_empty(&probe_list)) {
> tp = list_entry(probe_list.next, struct trace_probe, list);
> - unregister_trace_probe(tp);
> + ret = unregister_trace_probe(tp);
> + if (ret)
> + goto end;
> free_trace_probe(tp);
> }
>
> @@ -1247,11 +1252,15 @@ static int register_probe_event(struct trace_probe *tp)
> return ret;
> }
>
> -static void unregister_probe_event(struct trace_probe *tp)
> +static int unregister_probe_event(struct trace_probe *tp)
> {
> + int ret;
> +
> /* tp->event is unregistered in trace_remove_event_call() */
> - trace_remove_event_call(&tp->call);
> - kfree(tp->call.print_fmt);
> + ret = trace_remove_event_call(&tp->call);
> + if (!ret)
> + kfree(tp->call.print_fmt);
> + return ret;
> }
>
> /* Make a debugfs interface for controlling probe points */
>
--
Masami HIRAMATSU
IT Management Research Dept. Linux Technology Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: [email protected]
On 07/31, Steven Rostedt wrote:
>
> On Wed, 2013-07-31 at 22:40 +0200, Oleg Nesterov wrote:
> >
> > This should be fine. Either event_remove() path takes event_mutex
> > first and then ->write() fails, or ftrace_event_enable_disable()
> > actually disables this event successfully.
>
> Actually I meant while in unregister_trace_probe(), it gets by the
> trace_probe_is_enabled() part first, then the write succeeds (as the
> event_mutex isn't taken till unregister_probe_event()). Then the
> unregister_probe_event() fails,
Yes, sure. In this case everything is clear. But this looks as if
unregister_probe_event() should not fail. IOW, this looks as if
the bug was introduced by the previous "trace_remove_event_call()
should fail if call/file is in use" change.
Oleg.
On 07/31, Steven Rostedt wrote:
>
> Subject: [PATCH] tracing/kprobes: Fail to unregister if probe event files are
> in use
>
> When a probe is being removed, it cleans up the event files that correspond
> to the probe. But there is a race between writing to one of these files
> and deleting the probe. This is especially true for the "enable" file.
>
> CPU 0 CPU 1
> ----- -----
>
> fd = open("enable",O_WRONLY);
>
> probes_open()
> release_all_trace_probes()
> unregister_trace_probe()
> if (trace_probe_is_enabled(tp))
> return -EBUSY
>
> write(fd, "1", 1)
> __ftrace_set_clr_event()
> call->class->reg()
> (kprobe_register)
> enable_trace_probe(tp)
>
> __unregister_trace_probe(tp);
> list_del(&tp->list)
> unregister_probe_event(tp) <-- fails!
> free_trace_probe(tp)
Yes. But again, this doesn't explain why unregister_probe_event()->
__trace_remove_event_call() can't simply proceed and
do ftrace_event_enable_disable() + remove_event_from_tracers().
IOW, if we do not apply the previous "trace_remove_event_call() should
fail if call/file is in use" patch, then everything is fine:
> write(fd, "0", 1)
this will fail with ENODEV.
Let's consider another race:
CPU 0 CPU 1
----- -----
probes_open()
trace_probe_is_enabled() == F;
sys_perf_event_open(attr.config == id)
...
trace_remove_event_call()
Whatever we do, we should abort. We could even do TRACE_REG_PERF_UNREGISTER
(although the current code doesn't do this), but we have no way to clean up
the perf events which have ->tp_event = call.
trace_remove_event_call() was already changed to return the error.
And since it can fail, this obviously means that it should be checked;
we can't blindly do free_trace_probe().
IOW, the changelog could be very simple, I think. Either
trace_remove_event_call() should always succeed or we should check the
possible failure.
But I won't argue with this changelog. The only important thing is that
we all seem to agree that we do have the races here which can be fixed
by this and the previous change.
And just in case. I believe that the patch is fine.
Just one off-topic note,
> @@ -632,7 +635,9 @@ static int release_all_trace_probes(void)
> /* TODO: Use batch unregistration */
> while (!list_empty(&probe_list)) {
> tp = list_entry(probe_list.next, struct trace_probe, list);
> - unregister_trace_probe(tp);
> + ret = unregister_trace_probe(tp);
> + if (ret)
> + goto end;
> free_trace_probe(tp);
> }
This obviously breaks all-or-nothing semantics (I mean, this breaks
the intent, the current code is buggy).
I think we can't avoid this, and I hope this is fine. But then perhaps
we should simply remove the "list_for_each_entry" check above?
Oleg.
On 08/01, Oleg Nesterov wrote:
>
> And just in case. I believe that the patch is fine.
>
> Just one off-topic note,
Forgot to mention,
> > @@ -632,7 +635,9 @@ static int release_all_trace_probes(void)
> > /* TODO: Use batch unregistration */
> > while (!list_empty(&probe_list)) {
> > tp = list_entry(probe_list.next, struct trace_probe, list);
> > - unregister_trace_probe(tp);
> > + ret = unregister_trace_probe(tp);
> > + if (ret)
> > + goto end;
> > free_trace_probe(tp);
> > }
>
> This obviously breaks all-or-nothing semantics (I mean, this breaks
> the intent, the current code is buggy).
>
> I think we can't avoid this, and I hope this is fine. But then perhaps
> we should simply remove the "list_for_each_entry" check above?
And, of course, turn this "while (!list_empty())" into list_for_each_safe().
But again, this is almost off-topic and we can do this later.
Oleg.
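The suggested conversion can be illustrated in userspace with a minimal
reimplementation of the kernel list helpers. Everything below is a
stand-in sketch (struct trace_probe here is a two-field dummy, -EBUSY is
spelled as -16, and the helpers mirror the include/linux/list.h API only
in shape), showing why list_for_each_entry_safe() tolerates deleting the
current entry mid-walk: the next pointer is saved before the body runs.

```c
#include <assert.h>
#include <stddef.h>

struct list_head { struct list_head *next, *prev; };

#define LIST_HEAD_INIT(name) { &(name), &(name) }

static void list_add_tail(struct list_head *new, struct list_head *head)
{
	new->prev = head->prev;
	new->next = head;
	head->prev->next = new;
	head->prev = new;
}

static void list_del(struct list_head *entry)
{
	entry->prev->next = entry->next;
	entry->next->prev = entry->prev;
}

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))
#define list_entry(ptr, type, member) container_of(ptr, type, member)

/* 'n' caches the next entry so the body may list_del(&pos->member). */
#define list_for_each_entry_safe(pos, n, head, member)			\
	for (pos = list_entry((head)->next, __typeof__(*pos), member),	\
	     n = list_entry(pos->member.next, __typeof__(*pos), member);\
	     &pos->member != (head);					\
	     pos = n, n = list_entry(n->member.next, __typeof__(*n), member))

/* stand-in for struct trace_probe: "busy" models an event file in use */
struct trace_probe { struct list_head list; int busy; };

/* Remove every probe we can; return -EBUSY (-16) if any had to stay. */
static int release_all_trace_probes(struct list_head *probe_list)
{
	struct trace_probe *tp, *n;
	int ret = 0;

	list_for_each_entry_safe(tp, n, probe_list, list) {
		if (tp->busy) {		/* unregister_trace_probe() failed */
			ret = -16;
			continue;	/* keep it, but try the rest */
		}
		list_del(&tp->list);	/* safe: 'n' was saved already */
	}
	return ret;
}
```

Whether to stop at the first busy probe or keep going is a design choice;
the sketch keeps going, matching the "remove all removable probes, as much
as possible" preference voiced later in the thread.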
On Thu, 2013-08-01 at 15:34 +0200, Oleg Nesterov wrote:
> On 07/31, Steven Rostedt wrote:
> >
> > Subject: [PATCH] tracing/kprobes: Fail to unregister if probe event files are
> > in use
> >
> > When a probe is being removed, it cleans up the event files that correspond
> > to the probe. But there is a race between writing to one of these files
> > and deleting the probe. This is especially true for the "enable" file.
> >
> > CPU 0 CPU 1
> > ----- -----
> >
> > fd = open("enable",O_WRONLY);
> >
> > probes_open()
> > release_all_trace_probes()
> > unregister_trace_probe()
> > if (trace_probe_is_enabled(tp))
> > return -EBUSY
> >
> > write(fd, "1", 1)
> > __ftrace_set_clr_event()
> > call->class->reg()
> > (kprobe_register)
> > enable_trace_probe(tp)
> >
> > __unregister_trace_probe(tp);
> > list_del(&tp->list)
> > unregister_probe_event(tp) <-- fails!
> > free_trace_probe(tp)
>
> Yes. But again, this doesn't explain why unregister_probe_event()->
> __trace_remove_event_call() can't simply proceed and
> do ftrace_event_enable_disable() + remove_event_from_tracers().
The problem is with the soft disable. We would have to look at what
functions have been attached to soft enable this event and remove them.
This can become quite complex. When the function is hit, it will try to
access the event call file. There's no inode associated with that, so the
i_private won't work. The answer would be to actually remove the
functions that are referencing it. See event_enable_probe() and
event_enable_count_probe(). The problem is that these are called from
the function tracer, which means it cannot take *any* locks.
This will become exponentially more complex when Tom Zanussi's patches
make it in that have events enabling other events and also using the
soft enable too.
This means that if we fail to disable, we must not destroy the event.
We may be able to start adding ref counts, such that it doesn't get
freed till it goes to zero, but every user will see it "not existing".
But that will take a bit of work to do and not for 3.11.
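The ref count idea above is the usual hide-then-release pattern. A
hypothetical userspace sketch follows; none of these names exist in the
kernel source, and in the kernel the dead-check plus increment would be
serialized by event_mutex rather than left as two separate atomic
operations as in this toy.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>

/* Hypothetical object standing in for a trace event call. */
struct demo_event {
	atomic_int refcnt;	/* one reference held by registration */
	atomic_bool dead;	/* set at unregister: hides the event */
	int *freed_flag;	/* test hook: set to 1 on actual free */
};

static struct demo_event *demo_event_alloc(int *freed_flag)
{
	struct demo_event *ev = malloc(sizeof(*ev));
	atomic_init(&ev->refcnt, 1);
	atomic_init(&ev->dead, false);
	ev->freed_flag = freed_flag;
	return ev;
}

/* Lookup side: fails once the event is marked dead, so every new user
 * sees it "not existing".  (In the kernel, check + increment would be
 * done under event_mutex to close the obvious race.) */
static bool demo_event_get(struct demo_event *ev)
{
	if (atomic_load(&ev->dead))
		return false;
	atomic_fetch_add(&ev->refcnt, 1);
	return true;
}

static void demo_event_put(struct demo_event *ev)
{
	if (atomic_fetch_sub(&ev->refcnt, 1) == 1) {
		*ev->freed_flag = 1;	/* last user: free for real */
		free(ev);
	}
}

/* Unregister: hide the event and drop the registration reference.
 * The free is deferred until the last holder calls _put(). */
static void demo_event_unregister(struct demo_event *ev)
{
	atomic_store(&ev->dead, true);
	demo_event_put(ev);
}
```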
>
> IOW, if we do not apply the previous "trace_remove_event_call() should
> fail if call/file is in use" patch, then everything is fine:
>
>
> > write(fd, "0", 1)
>
> this will fail with ENODEV.
Currently it does not, because the failure in probe_remove_event_call()
due to the event being busy won't remove the event (event_remove() is
never called). Thus the event is still alive and the write will still
have access to it.
>
> Let's consider another race:
>
> CPU 0 CPU 1
> ----- -----
>
> probes_open()
> trace_probe_is_enabled() == F;
>
> sys_perf_event_open(attr.config == id)
>
>
> ...
> trace_remove_event_call()
>
> Whatever we do, we should abort. We could even do TRACE_REG_PERF_UNREGISTER
> (although the current code doesn't do this), but we have no way to cleanup
> the perf events which have ->tp_event = call.
>
> trace_remove_event_call() was already changed to return the error.
Right, and that's what this patch is about. The bug is that
trace_remove_event_call() returns an error which makes
unregister_probe_event() fail silently, which means that there's still a
reference to the tp and the write (which does happen) references it.
>
> And. Since it can fail, this obviously means that it should be checked,
> we can't blindly do free_trace_probe().
Right! That's what this patch does :-)
What you showed is the same race with perf as I have tested with ftrace.
>
> IOW, the changelog could be very simple, I think. Either
> trace_remove_event_call() should always succeed or we should check the
> possible failure.
I can update the change log to remove some of the functions that are
being called to be less confusing.
>
> But I won't argue with this changelog. The only important thing is that
> we all seem to agree that we do have the races here which can be fixed
> by this and the previous change.
>
> And just in case. I believe that the patch is fine.
OK, I'm currently running this through my tests. I'm hoping to get an
Acked-by from Srikar for uprobes. Time is running out for 3.11.
>
> Just one off-topic note,
>
> > @@ -632,7 +635,9 @@ static int release_all_trace_probes(void)
> > /* TODO: Use batch unregistration */
> > while (!list_empty(&probe_list)) {
> > tp = list_entry(probe_list.next, struct trace_probe, list);
> > - unregister_trace_probe(tp);
> > + ret = unregister_trace_probe(tp);
> > + if (ret)
> > + goto end;
> > free_trace_probe(tp);
> > }
>
> This obviously breaks all-or-nothing semantics (I mean, this breaks
> the intent, the current code is buggy).
>
I agree, this isn't really nice, but for now we have to deal with it.
> I think we can't avoid this, and I hope this is fine. But then perhaps
> we should simply remove the "list_for_each_entry" check above?
I think this can be pushed for 3.12 to fix.
Thanks,
-- Steve
On 08/01, Steven Rostedt wrote:
>
> On Thu, 2013-08-01 at 15:34 +0200, Oleg Nesterov wrote:
> > >
> > > __unregister_trace_probe(tp);
> > > list_del(&tp->list)
> > > unregister_probe_event(tp) <-- fails!
> > > free_trace_probe(tp)
> >
> > Yes. But again, this doesn't explain why unregister_probe_event()->
> > __trace_remove_event_call() can't simply proceed and
> > do ftrace_event_enable_disable() + remove_event_from_tracers().
>
> The problem is with the soft disable.
Exactly! This is another (also unlikely) race we need to prevent.
> so the
> i_private wont work.
Yes, and this is another reason why trace_remove_event_call() can't
always succeed, and the comment/changelog in probe_remove_event_call()
(added by the previous change) even tries to document the problems
with FL_SOFT_MODE.
> > IOW, if we do not apply the previous "trace_remove_event_call() should
> > fail if call/file is in use" patch, then everything is fine:
> >
> > > write(fd, "0", 1)
> >
> > this will fail with ENODEV.
>
> Currently it does not, because the failure in probe_remove_event_call()
> due to the event being busy wont remove the event (event_remove() is
> never called). Thus the event is still alive and the write will still
> have access to it.
Yes, yes. That is why the changelog says "Both trace_kprobe.c/trace_uprobe.c
need the additional changes".
IOW, the previous change itself adds the new races fixed by this patch
(and the similar change in trace_uprobe.c). Hopefully this is fine because
the code is buggy anyway.
> I can update the change log to remove some of the functions that are
> being called to be less confusing.
I am fine either way. Just I wanted to be sure that we understand each
other and I didn't miss something.
> I agree, this isn't really nice, but for now we have to deal with it.
Yes, yes, this is not for 3.11.
Oleg.
On Thu, 2013-08-01 at 16:33 +0200, Oleg Nesterov wrote:
> On 08/01, Steven Rostedt wrote:
> >
> > On Thu, 2013-08-01 at 15:34 +0200, Oleg Nesterov wrote:
> > > >
> > > > __unregister_trace_probe(tp);
> > > > list_del(&tp->list)
> > > > unregister_probe_event(tp) <-- fails!
> > > > free_trace_probe(tp)
> > >
> > > Yes. But again, this doesn't explain why unregister_probe_event()->
> > > __trace_remove_event_call() can't simply proceed and
> > > do ftrace_event_enable_disable() + remove_event_from_tracers().
> >
> > The problem is with the soft disable.
>
> Exactly! This is another (also unlikely) race we need to prevent.
>
Is there a race even with these patches? I don't see one. To link a
function to an event (set the soft disable mode), the event_mutex is
taken. If the event is gone, it won't be able to link. If the soft
disable is attached (after found and set under event_mutex), the event
can't be deleted.
Or are you just saying that these patches fix that case too?
-- Steve
On 08/01, Steven Rostedt wrote:
>
> On Thu, 2013-08-01 at 16:33 +0200, Oleg Nesterov wrote:
> > On 08/01, Steven Rostedt wrote:
> > >
> > > On Thu, 2013-08-01 at 15:34 +0200, Oleg Nesterov wrote:
> > > > >
> > > > > __unregister_trace_probe(tp);
> > > > > list_del(&tp->list)
> > > > > unregister_probe_event(tp) <-- fails!
> > > > > free_trace_probe(tp)
> > > >
> > > > Yes. But again, this doesn't explain why unregister_probe_event()->
> > > > __trace_remove_event_call() can't simply proceed and
> > > > do ftrace_event_enable_disable() + remove_event_from_tracers().
> > >
> > > The problem is with the soft disable.
> >
> > Exactly! This is another (also unlikely) race we need to prevent.
> >
>
> Is there a race even with these patches?
Sorry for confusion,
> I don't see one.
Neither do I, but only with these changes.
I meant that this is another reason why trace_remove_event_call() should
fail and the caller should obviously abort in this case.
> Or are you just saying that these patches fix that case too?
Yes, sorry.
Oleg.
(2013/08/01 22:34), Oleg Nesterov wrote:
> Just one off-topic note,
>
>> > @@ -632,7 +635,9 @@ static int release_all_trace_probes(void)
>> > /* TODO: Use batch unregistration */
>> > while (!list_empty(&probe_list)) {
>> > tp = list_entry(probe_list.next, struct trace_probe, list);
>> > - unregister_trace_probe(tp);
>> > + ret = unregister_trace_probe(tp);
>> > + if (ret)
>> > + goto end;
>> > free_trace_probe(tp);
>> > }
> This obviously breaks all-or-nothing semantics (I mean, this breaks
> the intent, the current code is buggy).
I see, since we can't lock all operations, we have to change the
semantics. I think it's OK.
> I think we can't avoid this, and I hope this is fine. But then perhaps
> we should simply remove the "list_for_each_entry" check above?
Yes, it should be done. And in that case, I'd like to remove
all removable probes, as much as possible.
BTW, I'd like to replace this "remove all" behavior with
writing "-:*", instead of writing without the append flag.
Thank you,
--
Masami HIRAMATSU
IT Management Research Dept. Linux Technology Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: [email protected]