2022-04-22 20:15:44

by Peter Zijlstra

Subject: Re: [PATCH] sched/tracing: append prev_state to tp args instead

On Thu, Apr 21, 2022 at 10:12:25PM +0000, Delyan Kratunov wrote:
> Hi folks,
>
> While working on bpf tooling, we noticed that the sched_switch tracepoint
> signature recently changed in an incompatible manner. This affects the
> runqslower tools in the kernel tree, as well as multiple libbpf tools in iovisor/bcc.
>
> It would be a fair amount of churn to fix all these tools, not to mention any
> non-public tools people may be using.

So on the one hand, too bad, deal with it. None of this is ABI.

> If you are open to it, here's a
> description and a patch that moves the new argument to the end,
> so existing tools can continue working without change (the new argument
> just won't be extracted in existing programs):

And on the other hand; those users need to be fixed anyway, right?
Accessing prev->__state is equally broken.

Yes, the proposed hack is cute, but I have very little sympathy for any
of this. These are in-kernel interfaces, they change, there must not be
any impediment to change.

If bpf wants to ride on them, it needs to suffer the pain of doing so.


2022-04-22 20:54:52

by Andrii Nakryiko

Subject: Re: [PATCH] sched/tracing: append prev_state to tp args instead

On Fri, Apr 22, 2022 at 4:09 AM Peter Zijlstra <[email protected]> wrote:
>
> On Thu, Apr 21, 2022 at 10:12:25PM +0000, Delyan Kratunov wrote:
> > Hi folks,
> >
> > While working on bpf tooling, we noticed that the sched_switch tracepoint
> > signature recently changed in an incompatible manner. This affects the
> > runqslower tools in the kernel tree, as well as multiple libbpf tools in iovisor/bcc.
> >
> > It would be a fair amount of churn to fix all these tools, not to mention any
> > non-public tools people may be using.
>
> So on the one hand, too bad, deal with it. None of this is ABI.
>
> > If you are open to it, here's a
> > description and a patch that moves the new argument to the end,
> > so existing tools can continue working without change (the new argument
> > just won't be extracted in existing programs):
>
> And on the other hand; those users need to be fixed anyway, right?
> Accessing prev->__state is equally broken.
>
> Yes, the proposed hack is cute, but I have very little sympathy for any
> of this. These are in-kernel interfaces, they change, there must not be
> any impediment to change.
>
> If bpf wants to ride on them, it needs to suffer the pain of doing so.

Right, none of this is ABI and the BPF user community is prepared to deal
with this; no one is claiming otherwise. And we did deal with the __state
field rename already, see [0] for one example.

This function argument reordering is much harder to deal with in a
"contained" way like we did for the __state rename, unfortunately. So
given that this doesn't have to cost multiple people a lot of workaround
effort, and the fix is just a simple reordering that puts the new
argument at the end of the function prototype, is there a big downside
to doing this?

[0] https://github.com/iovisor/bcc/commit/8b350fd51b004d4eddd7caa410e274e2807904f9
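
For reference, the shape of that workaround is roughly the following
(a minimal sketch, not the literal diff in [0]; it assumes vmlinux.h,
<bpf/bpf_helpers.h> and <bpf/bpf_core_read.h>, and the helper name is
made up):

/* local stand-in for the pre-5.14 layout */
struct task_struct___old {
        long state;
} __attribute__((preserve_access_index));

static __always_inline __s64 get_task_state(struct task_struct *t)
{
        struct task_struct___old *t_old = (void *)t;

        /* field-existence check is resolved by libbpf at load time */
        if (bpf_core_field_exists(t->__state))
                return BPF_CORE_READ(t, __state);   /* 5.14+ kernels */
        return BPF_CORE_READ(t_old, state);         /* older kernels */
}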

2022-04-22 22:04:27

by Steven Rostedt

Subject: Re: [PATCH] sched/tracing: append prev_state to tp args instead

On Fri, 22 Apr 2022 13:09:03 +0200
Peter Zijlstra <[email protected]> wrote:

> If bpf wants to ride on them, it needs to suffer the pain of doing so.

And I constantly hear that BPF is not an ABI, and is not guaranteed to work
from one kernel version to the next.

-- Steve

2022-04-22 22:28:41

by Delyan Kratunov

Subject: Re: [PATCH] sched/tracing: append prev_state to tp args instead

On Fri, 2022-04-22 at 13:09 +0200, Peter Zijlstra wrote:
> And on the other hand; those users need to be fixed anyway, right?
> Accessing prev->__state is equally broken.

The users that access prev->__state would most likely have to be fixed, for sure.

However, not all users access prev->__state. `offcputime` for example just takes a
stack trace and associates it with the switched out task. This kind of user
would continue working with the proposed patch.

> If bpf wants to ride on them, it needs to suffer the pain of doing so.

Sure, I'm just advocating for a fairly trivial patch to avoid some of the suffering,
hopefully without being a burden to development. If that's not the case, then it's a
clear no-go.

2022-04-22 22:30:33

by Alexei Starovoitov

Subject: Re: [PATCH] sched/tracing: append prev_state to tp args instead

On Fri, Apr 22, 2022 at 10:22 AM Delyan Kratunov <[email protected]> wrote:
>
> On Fri, 2022-04-22 at 13:09 +0200, Peter Zijlstra wrote:
> > And on the other hand; those users need to be fixed anyway, right?
> > Accessing prev->__state is equally broken.
>
> The users that access prev->__state would most likely have to be fixed, for sure.
>
> However, not all users access prev->__state. `offcputime` for example just takes a
> stack trace and associates it with the switched out task. This kind of user
> would continue working with the proposed patch.
>
> > If bpf wants to ride on them, it needs to suffer the pain of doing so.
>
> Sure, I'm just advocating for a fairly trivial patch to avoid some of the suffering,
> hopefully without being a burden to development. If that's not the case, then it's a
> clear no-go.


Namhyung just sent this patch set:
https://patchwork.kernel.org/project/netdevbpf/patch/[email protected]/

to add off-cpu profiling to perf.
It also hooks into sched_switch tracepoint.
Notice it deals with state->__state rename just fine.
But it will have a hard time without this patch
until we add all the extra CO-RE features to detect
and automatically adjust bpf progs when tracepoint
arguments order changed.
We will do it eventually, of course.
There will be additional work in llvm, libbpf, kernel, etc.
But for now I think it would be good to land Delyan's patch
to avoid unnecessary pain to all the users.

Peter, do you mind?

2022-04-26 20:33:53

by Peter Zijlstra

Subject: Re: [PATCH] sched/tracing: append prev_state to tp args instead

On Fri, Apr 22, 2022 at 11:30:12AM -0700, Alexei Starovoitov wrote:
> On Fri, Apr 22, 2022 at 10:22 AM Delyan Kratunov <[email protected]> wrote:
> >
> > On Fri, 2022-04-22 at 13:09 +0200, Peter Zijlstra wrote:
> > > And on the other hand; those users need to be fixed anyway, right?
> > > Accessing prev->__state is equally broken.
> >
> > The users that access prev->__state would most likely have to be fixed, for sure.
> >
> > However, not all users access prev->__state. `offcputime` for example just takes a
> > stack trace and associates it with the switched out task. This kind of user
> > would continue working with the proposed patch.
> >
> > > If bpf wants to ride on them, it needs to suffer the pain of doing so.
> >
> > Sure, I'm just advocating for a fairly trivial patch to avoid some of the suffering,
> > hopefully without being a burden to development. If that's not the case, then it's a
> > clear no-go.
>
>
> Namhyung just sent this patch set:
> https://patchwork.kernel.org/project/netdevbpf/patch/[email protected]/

That has:

+ * recently task_struct->state renamed to __state so it made an incompatible
+ * change.

git tells me:

2f064a59a11f ("sched: Change task_struct::state")

is almost a year old by now. That don't qualify as recently in my book.
That says that 'old kernels used to call this...'.

> to add off-cpu profiling to perf.
> It also hooks into sched_switch tracepoint.
> Notice it deals with state->__state rename just fine.

So I don't speak BPF much; it always takes me more time to make bpf work
than to just hack up the kernel, which makes it hard to get motivated.

However, it was not just a rename, state changed type too, which is why I
did the rename, to make sure all users would get a compile fail and
could adjust.

If you're silently making it work by frobbing the name, you lose that.

Specifically, task_struct::state used to be 'volatile long', while
task_struct::__state is 'unsigned int'. As such, any user must now be
very careful to use READ_ONCE(). I don't see that happening with just
frobbing the name.
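
(That is, any in-kernel reader is now expected to do something along
the lines of:

        unsigned int state = READ_ONCE(p->__state);

rather than a plain load of what used to be a volatile long.)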

Additionally, by shrinking the field, I suppose BE systems get to keep
the pieces?

> But it will have a hard time without this patch
> until we add all the extra CO-RE features to detect
> and automatically adjust bpf progs when tracepoint
> arguments order changed.

Could be me, but silently making it work sounds like fail :/ There's a
reason code changes, users need to adapt, not silently pretend stuff is
as before.

How will you know you need to fix your tool?

> We will do it eventually, of course.
> There will be additional work in llvm, libbpf, kernel, etc.
> But for now I think it would be good to land Delyan's patch
> to avoid unnecessary pain to all the users.
>
> Peter, do you mind?

I suppose I can help out this time, but I really don't want to set a
precedent for these things. Broken is broken.

The down-side for me is that the argument order no longer makes any
sense.

2022-04-26 22:16:25

by Andrii Nakryiko

Subject: Re: [PATCH] sched/tracing: append prev_state to tp args instead

On Tue, Apr 26, 2022 at 5:28 AM Peter Zijlstra <[email protected]> wrote:
>
> On Fri, Apr 22, 2022 at 11:30:12AM -0700, Alexei Starovoitov wrote:
> > On Fri, Apr 22, 2022 at 10:22 AM Delyan Kratunov <[email protected]> wrote:
> > >
> > > On Fri, 2022-04-22 at 13:09 +0200, Peter Zijlstra wrote:
> > > > And on the other hand; those users need to be fixed anyway, right?
> > > > Accessing prev->__state is equally broken.
> > >
> > > The users that access prev->__state would most likely have to be fixed, for sure.
> > >
> > > However, not all users access prev->__state. `offcputime` for example just takes a
> > > stack trace and associates it with the switched out task. This kind of user
> > > would continue working with the proposed patch.
> > >
> > > > If bpf wants to ride on them, it needs to suffer the pain of doing so.
> > >
> > > Sure, I'm just advocating for a fairly trivial patch to avoid some of the suffering,
> > > hopefully without being a burden to development. If that's not the case, then it's a
> > > clear no-go.
> >
> >
> > Namhyung just sent this patch set:
> > https://patchwork.kernel.org/project/netdevbpf/patch/[email protected]/
>
> That has:
>
> + * recently task_struct->state renamed to __state so it made an incompatible
> + * change.
>
> git tells me:
>
> 2f064a59a11f ("sched: Change task_struct::state")
>
> is almost a year old by now. That don't qualify as recently in my book.
> That says that 'old kernels used to call this...'.
>
> > to add off-cpu profiling to perf.
> > It also hooks into sched_switch tracepoint.
> > Notice it deals with state->__state rename just fine.
>
> So I don't speak BPF much; it always takes me more time to make bpf work
> than to just hack up the kernel, which makes it hard to get motivated.
>
> However, it was not just a rename, state changed type too, which is why I
> did the rename, to make sure all users would get a compile fail and
> could adjust.
>
> If you're silently making it work by frobbing the name, you lose that.

In general, libbpf is trying to accommodate it as best as it can. It
will adjust load size automatically based on BTF information for
direct memory reads. For cases when users have to use
bpf_probe_read_kernel(), it's impossible to do this automatically, but
libbpf and BPF CO-RE provide primitives to accommodate changes to
field types. We have bpf_core_field_size() and
BPF_CORE_READ_BITFIELD()/BPF_CORE_READ_BITFIELD_PROBED() macros that
can read any bitfield or integer type properly and store them into
u64.
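
For example (a hedged sketch, given a struct task_struct *t and
<bpf/bpf_core_read.h>):

        /* size of the field on the running kernel, not at compile time */
        int sz = bpf_core_field_size(t->__state);
        /* reads the field whatever its width and widens it into a u64 */
        __u64 state = BPF_CORE_READ_BITFIELD_PROBED(t, __state);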

So yes, users will still have to test and validate their BPF programs
on new kernels, but they do have the instruments to handle this in a
contained way in BPF code, without awkward extra user-space feature
detection.

The __state rename has already been dealt with, and it was easy to
accommodate. That ease of dealing with the change is one of the reasons
we never asked for anything to be done about it; it was a rather mild
inconvenience for a bunch of tools and applications.

This argument reordering, as I mentioned before, is a much bigger deal
and much harder to handle cleanly. As Alexei mentioned, we are
discussing extending BPF CO-RE to be able to accommodate even such
changes in pure BPF-side code, but we don't have that mechanism yet,
so if it's not too hard, we still kindly ask you to consider appending
the new argument at the end.


>
> Specifically, task_struct::state used to be 'volatile long', while
> task_struct::__state is 'unsigned int'. As such, any user must now be
> very careful to use READ_ONCE(). I don't see that happening with just
> frobbing the name.
>
> Additionally, by shrinking the field, I suppose BE systems get to keep
> the pieces?
>
> > But it will have a hard time without this patch
> > until we add all the extra CO-RE features to detect
> > and automatically adjust bpf progs when tracepoint
> > arguments order changed.
>
> Could be me, but silently making it work sounds like fail :/ There's a
> reason code changes, users need to adapt, not silently pretend stuff is
> as before.
>
> How will you know you need to fix your tool?

There is no denying that tracing BPF code successfully compiling
doesn't mean it's going to work correctly (like with any other code,
really). So testing is still mandatory. This is more about being able
to deal with such changes without having to maintain two separately
compiled BPF programs or doing extensive and expensive feature
detection in user-space code.

>
> > We will do it eventually, of course.
> > There will be additional work in llvm, libbpf, kernel, etc.
> > But for now I think it would be good to land Delyan's patch
> > to avoid unnecessary pain to all the users.
> >
> > Peter, do you mind?
>
> I suppose I can help out this time, but I really don't want to set a
> precedent for these things. Broken is broken.

We'd really appreciate it if you do help out this time. The intent is
not to set any kind of precedent. This ask is because
sched_switch is a very popular target *and* this change can't be
easily detected (once detected it can be easily accommodated, but
detection is painful), so the overall impact is disproportionate.

>
> The down-side for me is that the argument order no longer makes any
> sense.

The order changes only for tracing-related macros/functions, so
hopefully that doesn't make the rest of scheduler code illogical or
inconvenient.

2022-04-27 09:32:30

by Qais Yousef

Subject: Re: [PATCH] sched/tracing: append prev_state to tp args instead

On 04/26/22 14:28, Peter Zijlstra wrote:
> On Fri, Apr 22, 2022 at 11:30:12AM -0700, Alexei Starovoitov wrote:
> > On Fri, Apr 22, 2022 at 10:22 AM Delyan Kratunov <[email protected]> wrote:
> > >
> > > On Fri, 2022-04-22 at 13:09 +0200, Peter Zijlstra wrote:
> > > > And on the other hand; those users need to be fixed anyway, right?
> > > > Accessing prev->__state is equally broken.
> > >
> > > The users that access prev->__state would most likely have to be fixed, for sure.
> > >
> > > However, not all users access prev->__state. `offcputime` for example just takes a
> > > stack trace and associates it with the switched out task. This kind of user
> > > would continue working with the proposed patch.
> > >
> > > > If bpf wants to ride on them, it needs to suffer the pain of doing so.
> > >
> > > Sure, I'm just advocating for a fairly trivial patch to avoid some of the suffering,
> > > hopefully without being a burden to development. If that's not the case, then it's a
> > > clear no-go.
> >
> >
> > Namhyung just sent this patch set:
> > https://patchwork.kernel.org/project/netdevbpf/patch/[email protected]/
>
> That has:
>
> + * recently task_struct->state renamed to __state so it made an incompatible
> + * change.
>
> git tells me:
>
> 2f064a59a11f ("sched: Change task_struct::state")
>
> is almost a year old by now. That don't qualify as recently in my book.
> That says that 'old kernels used to call this...'.
>
> > to add off-cpu profiling to perf.
> > It also hooks into sched_switch tracepoint.
> > Notice it deals with state->__state rename just fine.
>
> So I don't speak BPF much; it always takes me more time to make bpf work
> than to just hack up the kernel, which makes it hard to get motivated.
>
> However, it was not just a rename, state changed type too, which is why I
> did the rename, to make sure all users would get a compile fail and
> could adjust.
>
> If you're silently making it work by frobbing the name, you lose that.
>
> Specifically, task_struct::state used to be 'volatile long', while
> task_struct::__state is 'unsigned int'. As such, any user must now be
> very careful to use READ_ONCE(). I don't see that happening with just
> frobbing the name.
>
> Additionally, by shrinking the field, I suppose BE systems get to keep
> the pieces?
>
> > But it will have a hard time without this patch
> > until we add all the extra CO-RE features to detect
> > and automatically adjust bpf progs when tracepoint
> > arguments order changed.
>
> Could be me, but silently making it work sounds like fail :/ There's a
> reason code changes, users need to adapt, not silently pretend stuff is
> as before.
>
> How will you know you need to fix your tool?

If libbpf doesn't fail, then yeah it's a big problem. I wonder how users of
kprobe who I suppose are more prone to this kind of problems have been coping.

>
> > We will do it eventually, of course.
> > There will be additional work in llvm, libbpf, kernel, etc.
> > But for now I think it would be good to land Delyan's patch
> > to avoid unnecessary pain to all the users.
> >
> > Peter, do you mind?
>
> I suppose I can help out this time, but I really don't want to set a
> precedent for these things. Broken is broken.
>
> The down-side for me is that the argument order no longer makes any
> sense.

I'm intending to backport fa2c3254d7cf to 5.10 and 5.15 but waiting for
a Tested-by. If you take this one, then it'll need to be backported too.

Cheers

--
Qais Yousef

2022-04-27 10:45:30

by Andrii Nakryiko

Subject: Re: [PATCH] sched/tracing: append prev_state to tp args instead

On Tue, Apr 26, 2022 at 7:10 AM Qais Yousef <[email protected]> wrote:
>
> On 04/26/22 14:28, Peter Zijlstra wrote:
> > On Fri, Apr 22, 2022 at 11:30:12AM -0700, Alexei Starovoitov wrote:
> > > On Fri, Apr 22, 2022 at 10:22 AM Delyan Kratunov <[email protected]> wrote:
> > > >
> > > > On Fri, 2022-04-22 at 13:09 +0200, Peter Zijlstra wrote:
> > > > > And on the other hand; those users need to be fixed anyway, right?
> > > > > Accessing prev->__state is equally broken.
> > > >
> > > > The users that access prev->__state would most likely have to be fixed, for sure.
> > > >
> > > > However, not all users access prev->__state. `offcputime` for example just takes a
> > > > stack trace and associates it with the switched out task. This kind of user
> > > > would continue working with the proposed patch.
> > > >
> > > > > If bpf wants to ride on them, it needs to suffer the pain of doing so.
> > > >
> > > > Sure, I'm just advocating for a fairly trivial patch to avoid some of the suffering,
> > > > hopefully without being a burden to development. If that's not the case, then it's a
> > > > clear no-go.
> > >
> > >
> > > Namhyung just sent this patch set:
> > > https://patchwork.kernel.org/project/netdevbpf/patch/[email protected]/
> >
> > That has:
> >
> > + * recently task_struct->state renamed to __state so it made an incompatible
> > + * change.
> >
> > git tells me:
> >
> > 2f064a59a11f ("sched: Change task_struct::state")
> >
> > is almost a year old by now. That don't qualify as recently in my book.
> > That says that 'old kernels used to call this...'.
> >
> > > to add off-cpu profiling to perf.
> > > It also hooks into sched_switch tracepoint.
> > > Notice it deals with state->__state rename just fine.
> >
> > So I don't speak BPF much; it always takes me more time to make bpf work
> > than to just hack up the kernel, which makes it hard to get motivated.
> >
> > However, it was not just a rename, state changed type too, which is why I
> > did the rename, to make sure all users would get a compile fail and
> > could adjust.
> >
> > If you're silently making it work by frobbing the name, you lose that.
> >
> > Specifically, task_struct::state used to be 'volatile long', while
> > task_struct::__state is 'unsigned int'. As such, any user must now be
> > very careful to use READ_ONCE(). I don't see that happening with just
> > frobbing the name.
> >
> > Additionally, by shrinking the field, I suppose BE systems get to keep
> > the pieces?
> >
> > > But it will have a hard time without this patch
> > > until we add all the extra CO-RE features to detect
> > > and automatically adjust bpf progs when tracepoint
> > > arguments order changed.
> >
> > Could be me, but silently making it work sounds like fail :/ There's a
> > reason code changes, users need to adapt, not silently pretend stuff is
> > as before.
> >
> > How will you know you need to fix your tool?
>
> If libbpf doesn't fail, then yeah it's a big problem. I wonder how users of
> kprobe who I suppose are more prone to this kind of problems have been coping.

See my reply to Peter. libbpf can't know user's intent to fail this
automatically, in general. In some cases when it can it does
accommodate this automatically. In other cases it provides instruments
for user to handle this (bpf_core_field_size(),
BPF_CORE_READ_BITFIELD(), etc).

But in the end no one eliminated the need for testing your application
for correctness. Tracing programs do break on kernel changes and BPF
users do adapt to them. Sometimes adapting is easy (like state ->
__state transition), sometimes it's much more involved (like this
argument order change).

>
> >
> > > We will do it eventually, of course.
> > > There will be additional work in llvm, libbpf, kernel, etc.
> > > But for now I think it would be good to land Delyan's patch
> > > to avoid unnecessary pain to all the users.
> > >
> > > Peter, do you mind?
> >
> > I suppose I can help out this time, but I really don't want to set a
> > precedent for these things. Broken is broken.
> >
> > The down-side for me is that the argument order no longer makes any
> > sense.
>
> I'm intending to backport fa2c3254d7cf to 5.10 and 5.15 but waiting for
> a Tested-by. If you take this one, then it'll need to be backported too.
>
> Cheers
>
> --
> Qais Yousef

2022-04-27 11:43:05

by Qais Yousef

Subject: Re: [PATCH] sched/tracing: append prev_state to tp args instead

On 04/26/22 08:54, Andrii Nakryiko wrote:
> On Tue, Apr 26, 2022 at 7:10 AM Qais Yousef <[email protected]> wrote:
> >
> > On 04/26/22 14:28, Peter Zijlstra wrote:
> > > On Fri, Apr 22, 2022 at 11:30:12AM -0700, Alexei Starovoitov wrote:
> > > > On Fri, Apr 22, 2022 at 10:22 AM Delyan Kratunov <[email protected]> wrote:
> > > > >
> > > > > On Fri, 2022-04-22 at 13:09 +0200, Peter Zijlstra wrote:
> > > > > > And on the other hand; those users need to be fixed anyway, right?
> > > > > > Accessing prev->__state is equally broken.
> > > > >
> > > > > The users that access prev->__state would most likely have to be fixed, for sure.
> > > > >
> > > > > However, not all users access prev->__state. `offcputime` for example just takes a
> > > > > stack trace and associates it with the switched out task. This kind of user
> > > > > would continue working with the proposed patch.
> > > > >
> > > > > > If bpf wants to ride on them, it needs to suffer the pain of doing so.
> > > > >
> > > > > Sure, I'm just advocating for a fairly trivial patch to avoid some of the suffering,
> > > > > hopefully without being a burden to development. If that's not the case, then it's a
> > > > > clear no-go.
> > > >
> > > >
> > > > Namhyung just sent this patch set:
> > > > https://patchwork.kernel.org/project/netdevbpf/patch/[email protected]/
> > >
> > > That has:
> > >
> > > + * recently task_struct->state renamed to __state so it made an incompatible
> > > + * change.
> > >
> > > git tells me:
> > >
> > > 2f064a59a11f ("sched: Change task_struct::state")
> > >
> > > is almost a year old by now. That don't qualify as recently in my book.
> > > That says that 'old kernels used to call this...'.
> > >
> > > > to add off-cpu profiling to perf.
> > > > It also hooks into sched_switch tracepoint.
> > > > Notice it deals with state->__state rename just fine.
> > >
> > > So I don't speak BPF much; it always takes me more time to make bpf work
> > > than to just hack up the kernel, which makes it hard to get motivated.
> > >
> > > However, it was not just a rename, state changed type too, which is why I
> > > did the rename, to make sure all users would get a compile fail and
> > > could adjust.
> > >
> > > If you're silently making it work by frobbing the name, you lose that.
> > >
> > > Specifically, task_struct::state used to be 'volatile long', while
> > > task_struct::__state is 'unsigned int'. As such, any user must now be
> > > very careful to use READ_ONCE(). I don't see that happening with just
> > > frobbing the name.
> > >
> > > Additionally, by shrinking the field, I suppose BE systems get to keep
> > > the pieces?
> > >
> > > > But it will have a hard time without this patch
> > > > until we add all the extra CO-RE features to detect
> > > > and automatically adjust bpf progs when tracepoint
> > > > arguments order changed.
> > >
> > > Could be me, but silently making it work sounds like fail :/ There's a
> > > reason code changes, users need to adapt, not silently pretend stuff is
> > > as before.
> > >
> > > How will you know you need to fix your tool?
> >
> > If libbpf doesn't fail, then yeah it's a big problem. I wonder how users of
> > kprobe who I suppose are more prone to this kind of problems have been coping.
>
> See my reply to Peter. libbpf can't know user's intent to fail this
> automatically, in general. In some cases when it can it does
> accommodate this automatically. In other cases it provides instruments
> for user to handle this (bpf_core_field_size(),
> BPF_CORE_READ_BITFIELD(), etc).

My naive thinking is that the function signature has changed (there's 1 extra
arg not just a subtle swap of args of the same type) - so I thought that can be
detected. But maybe it is easier said than done.

I am trying to remember as I've used this before; I think you get the arg list
as part of ctx when you attach to a function?

I wonder if it'd be hard to provide a macro for the user to provide the
signature of the function they expect; this macro can try then to verify/assert
the number, type and order is the same. Not bullet proof and requires opt-in,
but could be useful?


// dummy pseudo-code

BPF_CORE_ASSERT_SIG(sched_switch, NR_ARGS, ARG0, ARG1, ...)
if (ctx->nr_args != NR_ARGS)
assert()
if (type_of(ctx->args[0]) != type_of(ARG0))
assert()
...

I'm not sure if you have any info about the type though..

> But in the end no one eliminated the need for testing your application
> for correctness. Tracing programs do break on kernel changes and BPF
> users do adapt to them. Sometimes adapting is easy (like state ->
> __state transition), sometimes it's much more involved (like this
> argument order change).

It's not just an arg re-order, it's a new argument inserted in the middle. But
fair enough :-)

Cheers

--
Qais Yousef

2022-04-27 18:48:02

by Andrii Nakryiko

Subject: Re: [PATCH] sched/tracing: append prev_state to tp args instead

On Wed, Apr 27, 2022 at 3:35 AM Qais Yousef <[email protected]> wrote:
>
> On 04/26/22 08:54, Andrii Nakryiko wrote:
> > On Tue, Apr 26, 2022 at 7:10 AM Qais Yousef <[email protected]> wrote:
> > >
> > > On 04/26/22 14:28, Peter Zijlstra wrote:
> > > > On Fri, Apr 22, 2022 at 11:30:12AM -0700, Alexei Starovoitov wrote:
> > > > > On Fri, Apr 22, 2022 at 10:22 AM Delyan Kratunov <[email protected]> wrote:
> > > > > >
> > > > > > On Fri, 2022-04-22 at 13:09 +0200, Peter Zijlstra wrote:
> > > > > > > And on the other hand; those users need to be fixed anyway, right?
> > > > > > > Accessing prev->__state is equally broken.
> > > > > >
> > > > > > The users that access prev->__state would most likely have to be fixed, for sure.
> > > > > >
> > > > > > However, not all users access prev->__state. `offcputime` for example just takes a
> > > > > > stack trace and associates it with the switched out task. This kind of user
> > > > > > would continue working with the proposed patch.
> > > > > >
> > > > > > > If bpf wants to ride on them, it needs to suffer the pain of doing so.
> > > > > >
> > > > > > Sure, I'm just advocating for a fairly trivial patch to avoid some of the suffering,
> > > > > > hopefully without being a burden to development. If that's not the case, then it's a
> > > > > > clear no-go.
> > > > >
> > > > >
> > > > > Namhyung just sent this patch set:
> > > > > https://patchwork.kernel.org/project/netdevbpf/patch/[email protected]/
> > > >
> > > > That has:
> > > >
> > > > + * recently task_struct->state renamed to __state so it made an incompatible
> > > > + * change.
> > > >
> > > > git tells me:
> > > >
> > > > 2f064a59a11f ("sched: Change task_struct::state")
> > > >
> > > > is almost a year old by now. That don't qualify as recently in my book.
> > > > That says that 'old kernels used to call this...'.
> > > >
> > > > > to add off-cpu profiling to perf.
> > > > > It also hooks into sched_switch tracepoint.
> > > > > Notice it deals with state->__state rename just fine.
> > > >
> > > > So I don't speak BPF much; it always takes me more time to make bpf work
> > > > than to just hack up the kernel, which makes it hard to get motivated.
> > > >
> > > > However, it was not just a rename, state changed type too, which is why I
> > > > did the rename, to make sure all users would get a compile fail and
> > > > could adjust.
> > > >
> > > > If you're silently making it work by frobbing the name, you lose that.
> > > >
> > > > Specifically, task_struct::state used to be 'volatile long', while
> > > > task_struct::__state is 'unsigned int'. As such, any user must now be
> > > > very careful to use READ_ONCE(). I don't see that happening with just
> > > > frobbing the name.
> > > >
> > > > Additionally, by shrinking the field, I suppose BE systems get to keep
> > > > the pieces?
> > > >
> > > > > But it will have a hard time without this patch
> > > > > until we add all the extra CO-RE features to detect
> > > > > and automatically adjust bpf progs when tracepoint
> > > > > arguments order changed.
> > > >
> > > > Could be me, but silently making it work sounds like fail :/ There's a
> > > > reason code changes, users need to adapt, not silently pretend stuff is
> > > > as before.
> > > >
> > > > How will you know you need to fix your tool?
> > >
> > > If libbpf doesn't fail, then yeah it's a big problem. I wonder how users of
> > > kprobe who I suppose are more prone to this kind of problems have been coping.
> >
> > See my reply to Peter. libbpf can't know user's intent to fail this
> > automatically, in general. In some cases when it can it does
> > accommodate this automatically. In other cases it provides instruments
> > for user to handle this (bpf_core_field_size(),
> > BPF_CORE_READ_BITFIELD(), etc).
>
> My naive thinking is that the function signature has changed (there's 1 extra
> arg not just a subtle swap of args of the same type) - so I thought that can be
> detected. But maybe it is easier said than done.

It is. We don't have number of arguments either:

struct bpf_raw_tracepoint_args {
__u64 args[0];
};

What BPF program is getting is just an array of u64s.
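
For example, a pre-existing raw_tp program typically looks something
like this minimal sketch (assuming vmlinux.h and <bpf/bpf_helpers.h>;
program and variable names are made up):

SEC("raw_tp/sched_switch")
int handle__sched_switch(struct bpf_raw_tracepoint_args *ctx)
{
        /* written against the old layout, where args[1] was prev */
        struct task_struct *prev = (struct task_struct *)ctx->args[1];
        pid_t pid;

        /* after the reorder args[1] is prev_state, so this treats a
         * scalar as a pointer and reads the wrong thing; neither the
         * compiler nor the verifier can flag the mismatch */
        bpf_probe_read_kernel(&pid, sizeof(pid), &prev->pid);
        return 0;
}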

>
> I am trying to remember as I've used this before; I think you get the arg list
> as part of ctx when you attach to a function?
>
> I wonder if it'd be hard to provide a macro for the user to provide the
> signature of the function they expect; this macro can try then to verify/assert
> the number, type and order is the same. Not bullet proof and requires opt-in,
> but could be useful?
>
>
> // dummy pseudo-code
>
> BPF_CORE_ASSERT_SIG(sched_switch, NR_ARGS, ARG0, ARG1, ...)
> if (ctx->nr_args != NR_ARGS)
> assert()
> if (type_of(ctx->args[0]) != type_of(ARG0))
> assert()
> ...
>
> I'm not sure if you have any info about the type though..

What we have now under discussion is more generic way for user to
check signature of function prototype, struct/union, etc. But all that
will take some time to implement and finalize. So this patch is a way
to stop/prevent the bleeding until we have that available to users.
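
Purely as an illustration, and not the mechanism under discussion: the
prototype information is already reachable from user space via vmlinux
BTF and libbpf's btf.h helpers, so a tool could at least sanity-check
the argument count before loading. A rough sketch, where the function
name and the '- 1' for the internal 'void *__data' argument are
assumptions of this example:

#include <bpf/btf.h>
#include <bpf/libbpf.h>

static int sched_switch_nr_args(void)
{
        struct btf *btf = btf__load_vmlinux_btf();
        const struct btf_type *t;
        __s32 id;

        if (libbpf_get_error(btf))
                return -1;
        /* typed tracepoints are described by a btf_trace_<name> typedef
         * pointing at a function prototype */
        id = btf__find_by_name_kind(btf, "btf_trace_sched_switch",
                                    BTF_KIND_TYPEDEF);
        if (id < 0)
                goto out;
        t = btf__type_by_id(btf, id);           /* typedef    */
        t = btf__type_by_id(btf, t->type);      /* pointer    */
        t = btf__type_by_id(btf, t->type);      /* func proto */
        id = btf_vlen(t) - 1;                   /* drop void *__data */
out:
        btf__free(btf);
        return id;
}

But something more generic, usable from the BPF side, is what we're
aiming for.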

>
> > But in the end no one eliminated the need for testing your application
> > for correctness. Tracing programs do break on kernel changes and BPF
> > users do adapt to them. Sometimes adapting is easy (like state ->
> > __state transition), sometimes it's much more involved (like this
> > argument order change).
>
> It's not just an arg re-order, it's a new argument inserted in the middle. But
> fair enough :-)
>
> Cheers
>
> --
> Qais Yousef

2022-04-27 22:08:37

by Alexei Starovoitov

Subject: Re: [PATCH] sched/tracing: append prev_state to tp args instead

On Wed, Apr 27, 2022 at 11:17 AM Andrii Nakryiko
<[email protected]> wrote:
>
> > >
> > > See my reply to Peter. libbpf can't know user's intent to fail this
> > > automatically, in general. In some cases when it can it does
> > > accommodate this automatically. In other cases it provides instruments
> > > for user to handle this (bpf_core_field_size(),
> > > BPF_CORE_READ_BITFIELD(), etc).
> >
> > My naive thinking is that the function signature has changed (there's 1 extra
> > arg not just a subtle swap of args of the same type) - so I thought that can be
> > detected. But maybe it is easier said than done.
>
> It is. We don't have number of arguments either:
>
> struct bpf_raw_tracepoint_args {
> __u64 args[0];
> };
>
> What BPF program is getting is just an array of u64s.

Well, that's a true and false statement at the same time
that might confuse folks reading this thread.
To clarify:
raw_tp and tp_btf programs receive an array of u64s.
raw_tp checks the number of arguments only.
A prog that accesses a non-existing arg will be rejected.
A tp_btf prog, in addition to the number of args, knows
the exact types of the arguments.
So in this case, accessing an arg in a bpf prog
as 'struct task_struct *' while it's 'unsigned int prev_state'
in the kernel will cause the verifier to reject it.
If a prog does bpf_probe_read_kernel() then all bets are off, of course.
There is no type checking mechanism for bpf_probe_read_kernel().
Only BTF-powered pointer dereferences are type checked.
The 'tp_btf' prog type is the recommended way to access tracepoints.
The 'raw_tp' type was implemented before BTF was introduced.
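
For example, a minimal tp_btf sketch (assuming vmlinux.h,
<bpf/bpf_helpers.h> and <bpf/bpf_tracing.h>, and the appended argument
order proposed in this patch):

SEC("tp_btf/sched_switch")
int BPF_PROG(handle_switch, bool preempt, struct task_struct *prev,
             struct task_struct *next, unsigned int prev_state)
{
        /* BTF-typed args: the verifier knows prev/next are task_struct
         * pointers, so direct dereferences are allowed and a mismatch
         * with the kernel-side prototype is caught at load time */
        bpf_printk("prev=%d next=%d state=%u",
                   prev->pid, next->pid, prev_state);
        return 0;
}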

2022-04-29 20:53:41

by Qais Yousef

Subject: Re: [PATCH] sched/tracing: append prev_state to tp args instead

On 04/27/22 11:17, Andrii Nakryiko wrote:
> On Wed, Apr 27, 2022 at 3:35 AM Qais Yousef <[email protected]> wrote:
> >
> > On 04/26/22 08:54, Andrii Nakryiko wrote:
> > > On Tue, Apr 26, 2022 at 7:10 AM Qais Yousef <[email protected]> wrote:
> > > >
> > > > On 04/26/22 14:28, Peter Zijlstra wrote:
> > > > > On Fri, Apr 22, 2022 at 11:30:12AM -0700, Alexei Starovoitov wrote:
> > > > > > On Fri, Apr 22, 2022 at 10:22 AM Delyan Kratunov <[email protected]> wrote:
> > > > > > >
> > > > > > > On Fri, 2022-04-22 at 13:09 +0200, Peter Zijlstra wrote:
> > > > > > > > And on the other hand; those users need to be fixed anyway, right?
> > > > > > > > Accessing prev->__state is equally broken.
> > > > > > >
> > > > > > > The users that access prev->__state would most likely have to be fixed, for sure.
> > > > > > >
> > > > > > > However, not all users access prev->__state. `offcputime` for example just takes a
> > > > > > > stack trace and associates it with the switched out task. This kind of user
> > > > > > > would continue working with the proposed patch.
> > > > > > >
> > > > > > > > If bpf wants to ride on them, it needs to suffer the pain of doing so.
> > > > > > >
> > > > > > > Sure, I'm just advocating for a fairly trivial patch to avoid some of the suffering,
> > > > > > > hopefully without being a burden to development. If that's not the case, then it's a
> > > > > > > clear no-go.
> > > > > >
> > > > > >
> > > > > > Namhyung just sent this patch set:
> > > > > > https://patchwork.kernel.org/project/netdevbpf/patch/[email protected]/
> > > > >
> > > > > That has:
> > > > >
> > > > > + * recently task_struct->state renamed to __state so it made an incompatible
> > > > > + * change.
> > > > >
> > > > > git tells me:
> > > > >
> > > > > 2f064a59a11f ("sched: Change task_struct::state")
> > > > >
> > > > > is almost a year old by now. That don't qualify as recently in my book.
> > > > > That says that 'old kernels used to call this...'.
> > > > >
> > > > > > to add off-cpu profiling to perf.
> > > > > > It also hooks into sched_switch tracepoint.
> > > > > > Notice it deals with state->__state rename just fine.
> > > > >
> > > > > So I don't speak BPF much; it always takes me more time to make bpf work
> > > > > than to just hack up the kernel, which makes it hard to get motivated.
> > > > >
> > > > > However, it was not just a rename, state changed type too, which is why I
> > > > > did the rename, to make sure all users would get a compile fail and
> > > > > could adjust.
> > > > >
> > > > > If you're silently making it work by frobbing the name, you lose that.
> > > > >
> > > > > Specifically, task_struct::state used to be 'volatile long', while
> > > > > task_struct::__state is 'unsigned int'. As such, any user must now be
> > > > > very careful to use READ_ONCE(). I don't see that happening with just
> > > > > frobbing the name.
> > > > >
> > > > > Additionally, by shrinking the field, I suppose BE systems get to keep
> > > > > the pieces?
> > > > >
> > > > > > But it will have a hard time without this patch
> > > > > > until we add all the extra CO-RE features to detect
> > > > > > and automatically adjust bpf progs when tracepoint
> > > > > > arguments order changed.
> > > > >
> > > > > Could be me, but silently making it work sounds like fail :/ There's a
> > > > > reason code changes, users need to adapt, not silently pretend stuff is
> > > > > as before.
> > > > >
> > > > > How will you know you need to fix your tool?
> > > >
> > > > If libbpf doesn't fail, then yeah it's a big problem. I wonder how users of
> > > > kprobe who I suppose are more prone to this kind of problems have been coping.
> > >
> > > See my reply to Peter. libbpf can't know user's intent to fail this
> > > automatically, in general. In some cases when it can it does
> > > accommodate this automatically. In other cases it provides instruments
> > > for user to handle this (bpf_core_field_size(),
> > > BPF_CORE_READ_BITFIELD(), etc).
> >
> > My naive thinking is that the function signature has changed (there's 1 extra
> > arg not just a subtle swap of args of the same type) - so I thought that can be
> > detected. But maybe it is easier said than done.
>
> It is. We don't have number of arguments either:
>
> struct bpf_raw_tracepoint_args {
> __u64 args[0];
> };
>
> What BPF program is getting is just an array of u64s.
>
> >
> > I am trying to remember as I've used this before; I think you get the arg list
> > as part of ctx when you attach to a function?
> >
> > I wonder if it'd be hard to provide a macro for the user to provide the
> > signature of the function they expect; this macro can try then to verify/assert
> > the number, type and order is the same. Not bullet proof and requires opt-in,
> > but could be useful?
> >
> >
> > // dummy pseudo-code
> >
> > BPF_CORE_ASSERT_SIG(sched_switch, NR_ARGS, ARG0, ARG1, ...)
> > if (ctx->nr_args != NR_ARGS)
> > assert()
> > if (type_of(ctx->args[0]) != type_of(ARG0))
> > assert()
> > ...
> >
> > I'm not sure if you have any info about the type though..
>
> What we have now under discussion is more generic way for user to
> check signature of function prototype, struct/union, etc. But all that
> will take some time to implement and finalize. So this patch is a way
> to stop/prevent the bleeding until we have that available to users.

Okay good to know. Alexei mentioned a plan, but I didn't get that it included
signature verification.

Cheers

--
Qais Yousef

2022-05-09 19:38:38

by Andrii Nakryiko

Subject: Re: [PATCH] sched/tracing: append prev_state to tp args instead

On Thu, Apr 28, 2022 at 3:02 AM Qais Yousef <[email protected]> wrote:
>
> On 04/27/22 11:17, Andrii Nakryiko wrote:
> > On Wed, Apr 27, 2022 at 3:35 AM Qais Yousef <[email protected]> wrote:
> > >
> > > On 04/26/22 08:54, Andrii Nakryiko wrote:
> > > > On Tue, Apr 26, 2022 at 7:10 AM Qais Yousef <[email protected]> wrote:
> > > > >
> > > > > On 04/26/22 14:28, Peter Zijlstra wrote:
> > > > > > On Fri, Apr 22, 2022 at 11:30:12AM -0700, Alexei Starovoitov wrote:
> > > > > > > On Fri, Apr 22, 2022 at 10:22 AM Delyan Kratunov <[email protected]> wrote:
> > > > > > > >
> > > > > > > > On Fri, 2022-04-22 at 13:09 +0200, Peter Zijlstra wrote:
> > > > > > > > > And on the other hand; those users need to be fixed anyway, right?
> > > > > > > > > Accessing prev->__state is equally broken.
> > > > > > > >
> > > > > > > > The users that access prev->__state would most likely have to be fixed, for sure.
> > > > > > > >
> > > > > > > > However, not all users access prev->__state. `offcputime` for example just takes a
> > > > > > > > stack trace and associates it with the switched out task. This kind of user
> > > > > > > > would continue working with the proposed patch.
> > > > > > > >
> > > > > > > > > If bpf wants to ride on them, it needs to suffer the pain of doing so.
> > > > > > > >
> > > > > > > > Sure, I'm just advocating for a fairly trivial patch to avoid some of the suffering,
> > > > > > > > hopefully without being a burden to development. If that's not the case, then it's a
> > > > > > > > clear no-go.
> > > > > > >
> > > > > > >
> > > > > > > Namhyung just sent this patch set:
> > > > > > > https://patchwork.kernel.org/project/netdevbpf/patch/[email protected]/
> > > > > >
> > > > > > That has:
> > > > > >
> > > > > > + * recently task_struct->state renamed to __state so it made an incompatible
> > > > > > + * change.
> > > > > >
> > > > > > git tells me:
> > > > > >
> > > > > > 2f064a59a11f ("sched: Change task_struct::state")
> > > > > >
> > > > > > is almost a year old by now. That don't qualify as recently in my book.
> > > > > > That says that 'old kernels used to call this...'.
> > > > > >
> > > > > > > to add off-cpu profiling to perf.
> > > > > > > It also hooks into sched_switch tracepoint.
> > > > > > > Notice it deals with state->__state rename just fine.
> > > > > >
> > > > > > So I don't speak BPF much; it always takes me more time to make bpf work
> > > > > > than to just hack up the kernel, which makes it hard to get motivated.
> > > > > >
> > > > > > However, it was not just a rename, state changed type too, which is why I
> > > > > > did the rename, to make sure all users would get a compile fail and
> > > > > > could adjust.
> > > > > >
> > > > > > If you're silently making it work by frobbing the name, you lose that.
> > > > > >
> > > > > > Specifically, task_struct::state used to be 'volatile long', while
> > > > > > task_struct::__state is 'unsigned int'. As such, any user must now be
> > > > > > very careful to use READ_ONCE(). I don't see that happening with just
> > > > > > frobbing the name.
> > > > > >
> > > > > > Additionally, by shrinking the field, I suppose BE systems get to keep
> > > > > > the pieces?
> > > > > >
> > > > > > > But it will have a hard time without this patch
> > > > > > > until we add all the extra CO-RE features to detect
> > > > > > > and automatically adjust bpf progs when tracepoint
> > > > > > > arguments order changed.
> > > > > >
> > > > > > Could be me, but silently making it work sounds like fail :/ There's a
> > > > > > reason code changes, users need to adapt, not silently pretend stuff is
> > > > > > as before.
> > > > > >
> > > > > > How will you know you need to fix your tool?
> > > > >
> > > > > If libbpf doesn't fail, then yeah it's a big problem. I wonder how users of
> > > > > kprobe who I suppose are more prone to this kind of problems have been coping.
> > > >
> > > > See my reply to Peter. libbpf can't know user's intent to fail this
> > > > automatically, in general. In some cases when it can it does
> > > > accommodate this automatically. In other cases it provides instruments
> > > > for user to handle this (bpf_core_field_size(),
> > > > BPF_CORE_READ_BITFIELD(), etc).
> > >
> > > My naive thinking is that the function signature has changed (there's 1 extra
> > > arg not just a subtle swap of args of the same type) - so I thought that can be
> > > detected. But maybe it is easier said than done.
> >
> > It is. We don't have number of arguments either:
> >
> > struct bpf_raw_tracepoint_args {
> > __u64 args[0];
> > };
> >
> > What BPF program is getting is just an array of u64s.
> >
> > >
> > > I am trying to remember as I've used this before; I think you get the arg list
> > > as part of ctx when you attach to a function?
> > >
> > > I wonder if it'd be hard to provide a macro for the user to provide the
> > > signature of the function they expect; this macro can try then to verify/assert
> > > the number, type and order is the same. Not bullet proof and requires opt-in,
> > > but could be useful?
> > >
> > >
> > > // dummy pseudo-code
> > >
> > > BPF_CORE_ASSERT_SIG(sched_switch, NR_ARGS, ARG0, ARG1, ...)
> > > if (ctx->nr_args != NR_ARGS)
> > > assert()
> > > if (type_of(ctx->args[0]) != type_of(ARG0))
> > > assert()
> > > ...
> > >
> > > I'm not sure if you have any info about the type though..
> >
> > What we have now under discussion is more generic way for user to
> > check signature of function prototype, struct/union, etc. But all that
> > will take some time to implement and finalize. So this patch is a way
> > to stop/prevent the bleeding until we have that available to users.
>
> Okay good to know. Alexei mentioned a plan, but I didn't get that it included
> signature verification.
>

Given the discussion subsided, I'm still a bit unclear about the
outcome. Hopefully the discussion made it a bit clearer why this
one-time change is so helpful to the broader BPF community and is
rather an exception and not an expectation (and we have work in
progress to be able to handle even such changes in the future).

So can this patch be applied, please, or it's a hard no?

> Cheers
>
> --
> Qais Yousef

2022-05-10 09:18:51

by Peter Zijlstra

Subject: Re: [PATCH] sched/tracing: append prev_state to tp args instead

On Mon, May 09, 2022 at 12:32:31PM -0700, Andrii Nakryiko wrote:
> So can this patch be applied, please, or it's a hard no?

Sorry; I got distracted with other work. I'll grudgingly apply the patch.

Thanks!

2022-05-10 13:15:34

by Peter Zijlstra

Subject: Re: [PATCH] sched/tracing: append prev_state to tp args instead

On Tue, May 10, 2022 at 09:01:22AM +0200, Peter Zijlstra wrote:
> On Mon, May 09, 2022 at 12:32:31PM -0700, Andrii Nakryiko wrote:
> > So can this patch be applied, please, or it's a hard no?
>
> Sorry; I got distracted with other work. I'll grudgingly apply the patch.

gcc-11-i386-allmodconfig [15:35] FAILED
| In file included from ../include/trace/define_custom_trace.h:55,
| from ../samples/trace_events/trace_custom_sched.h:96,
| from ../samples/trace_events/trace_custom_sched.c:24:
| ../samples/trace_events/./trace_custom_sched.h: In function ‘ftrace_test_custom_probe_sched_switch’:
| ../include/trace/trace_custom_events.h:178:42: error: passing argument 1 of ‘check_trace_callback_type_sched_switch’ from incompatible pointer type [-Werror=incompatible-pointer-types]
| 178 | check_trace_callback_type_##call(trace_custom_event_raw_event_##template); \
| | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| | |
| | void (*)(void *, bool, unsigned int, struct task_struct *, struct task_struct *) {aka void (*)(void *, _Bool, unsigned int, struct task_struct *, struct task_struct *)}
| ../include/trace/trace_custom_events.h:34:9: note: in expansion of macro ‘DEFINE_CUSTOM_EVENT’
| 34 | DEFINE_CUSTOM_EVENT(name, name, PARAMS(proto), PARAMS(args));
| | ^~~~~~~~~~~~~~~~~~~
| ../samples/trace_events/./trace_custom_sched.h:21:1: note: in expansion of macro ‘TRACE_CUSTOM_EVENT’
| 21 | TRACE_CUSTOM_EVENT(sched_switch,
| | ^~~~~~~~~~~~~~~~~~
| In file included from ../include/linux/trace_events.h:11,
| from ../samples/trace_events/trace_custom_sched.c:10:
| ../include/linux/tracepoint.h:279:49: note: expected ‘void (*)(void *, bool, struct task_struct *, struct task_struct *, unsigned int)’ {aka ‘void (*)(void *, _Bool, struct task_struct *, struct task_struct *, unsigned int)’} but argument is of type ‘void (*)(void *, bool, unsigned int, struct task_struct *, struct task_struct *)’ {aka ‘void (*)(void *, _Bool, unsigned int, struct task_struct *, struct task_struct *)’}
| 279 | check_trace_callback_type_##name(void (*cb)(data_proto)) \
| | ~~~~~~~^~~~~~~~~~~~~~~
| ../include/linux/tracepoint.h:419:9: note: in expansion of macro ‘__DECLARE_TRACE’
| 419 | __DECLARE_TRACE(name, PARAMS(proto), PARAMS(args), \
| | ^~~~~~~~~~~~~~~
| ../include/linux/tracepoint.h:553:9: note: in expansion of macro ‘DECLARE_TRACE’
| 553 | DECLARE_TRACE(name, PARAMS(proto), PARAMS(args))
| | ^~~~~~~~~~~~~
| ../include/trace/events/sched.h:222:1: note: in expansion of macro ‘TRACE_EVENT’
| 222 | TRACE_EVENT(sched_switch,
| | ^~~~~~~~~~~
| cc1: some warnings being treated as errors
| make[3]: *** [../scripts/Makefile.build:292: samples/trace_events/trace_custom_sched.o] Error 1
| make[3]: *** Waiting for unfinished jobs....
| make[2]: *** [../scripts/Makefile.build:555: samples/trace_events] Error 2
| make[1]: *** [/opt/buildbot/linux-2.6/Makefile:1834: samples] Error 2
| make[1]: *** Waiting for unfinished jobs....
| make: *** [Makefile:219: __sub-make] Error 2
`----


2022-05-10 21:36:48

by Steven Rostedt

Subject: Re: [PATCH] sched/tracing: append prev_state to tp args instead

On Tue, 10 May 2022 10:29:40 +0200
Peter Zijlstra <[email protected]> wrote:

> On Tue, May 10, 2022 at 09:01:22AM +0200, Peter Zijlstra wrote:
> > On Mon, May 09, 2022 at 12:32:31PM -0700, Andrii Nakryiko wrote:
> > > So can this patch be applied, please, or it's a hard no?
> >
> > Sorry; I got distracted with other work. I'll grudgingly apply the patch.
>
> gcc-11-i386-allmodconfig [15:35] FAILED
> | In file included from ../include/trace/define_custom_trace.h:55,
> | from ../samples/trace_events/trace_custom_sched.h:96,
> | from ../samples/trace_events/trace_custom_sched.c:24:
> | ../samples/trace_events/./trace_custom_sched.h: In function ‘ftrace_test_custom_probe_sched_switch’:
> | ../include/trace/trace_custom_events.h:178:42: error: passing argument 1 of ‘check_trace_callback_type_sched_switch’ from incompatible pointer type [-Werror=incompatible-pointer-types]
> | 178 | check_trace_callback_type_##call(trace_custom_event_raw_event_##template); \
> | | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> | | |
> | | void (*)(void *, bool, unsigned int, struct task_struct *, struct task_struct *) {aka void (*)(void *, _Bool, unsigned int, struct task_struct *, struct task_struct *)}
> | ../include/trace/trace_custom_events.h:34:9: note: in expansion of macro ‘DEFINE_CUSTOM_EVENT’
> | 34 | DEFINE_CUSTOM_EVENT(name, name, PARAMS(proto), PARAMS(args));


Yes, the custom tracepoint sample code uses the sched_switch tracepoint as
an example. That sample code needs to be updated:

In samples/trace_events/trace_custom_sched.h:

TRACE_CUSTOM_EVENT(sched_switch,

	/*
	 * The TP_PROTO() and TP_ARGS must match the trace event
	 * that the custom event is using.
	 */
	TP_PROTO(bool preempt,
		 unsigned int prev_state,
		 struct task_struct *prev,
		 struct task_struct *next),

	TP_ARGS(preempt, prev_state, prev, next),



The arguments need to be updated.

-- Steve

2022-05-12 05:28:14

by Delyan Kratunov

Subject: [PATCH v2] sched/tracing: append prev_state to tp args instead

Commit fa2c3254d7cf (sched/tracing: Don't re-read p->state when emitting
sched_switch event, 2022-01-20) added a new prev_state argument to the
sched_switch tracepoint, before the prev task_struct pointer.

This reordering of arguments broke BPF programs that use the raw
tracepoint (e.g. tp_btf programs). The type of the second argument has
changed and existing programs that assume a task_struct* argument
(e.g. for bpf_task_storage access) will now fail to verify.

If we instead append the new argument to the end, all existing programs
would continue to work and can conditionally extract the prev_state
argument on supported kernel versions.

Fixes: fa2c3254d7cf (sched/tracing: Don't re-read p->state when emitting sched_switch event, 2022-01-20)
Signed-off-by: Delyan Kratunov <[email protected]>
---
v1 -> v2:
Convert trace_events sample as well.

Sorry about that Peter and thanks for taking it in, it's much appreciated!

include/trace/events/sched.h | 6 +++---
kernel/sched/core.c | 2 +-
kernel/trace/fgraph.c | 4 ++--
kernel/trace/ftrace.c | 4 ++--
kernel/trace/trace_events.c | 8 ++++----
kernel/trace/trace_osnoise.c | 4 ++--
kernel/trace/trace_sched_switch.c | 4 ++--
kernel/trace/trace_sched_wakeup.c | 4 ++--
samples/trace_events/trace_custom_sched.h | 6 +++---
9 files changed, 21 insertions(+), 21 deletions(-)

diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
index 65e786756321..fbb99a61f714 100644
--- a/include/trace/events/sched.h
+++ b/include/trace/events/sched.h
@@ -222,11 +222,11 @@ static inline long __trace_sched_switch_state(bool preempt,
TRACE_EVENT(sched_switch,

TP_PROTO(bool preempt,
- unsigned int prev_state,
struct task_struct *prev,
- struct task_struct *next),
+ struct task_struct *next,
+ unsigned int prev_state),

- TP_ARGS(preempt, prev_state, prev, next),
+ TP_ARGS(preempt, prev, next, prev_state),

TP_STRUCT__entry(
__array( char, prev_comm, TASK_COMM_LEN )
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 51efaabac3e4..d58c0389eb23 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6382,7 +6382,7 @@ static void __sched notrace __schedule(unsigned int sched_mode)
migrate_disable_switch(rq, prev);
psi_sched_switch(prev, next, !task_on_rq_queued(prev));

- trace_sched_switch(sched_mode & SM_MASK_PREEMPT, prev_state, prev, next);
+ trace_sched_switch(sched_mode & SM_MASK_PREEMPT, prev, next, prev_state);

/* Also unlocks the rq: */
rq = context_switch(rq, prev, next, &rf);
diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
index 8f4fb328133a..a7e84c8543cb 100644
--- a/kernel/trace/fgraph.c
+++ b/kernel/trace/fgraph.c
@@ -404,9 +404,9 @@ static int alloc_retstack_tasklist(struct ftrace_ret_stack **ret_stack_list)

static void
ftrace_graph_probe_sched_switch(void *ignore, bool preempt,
- unsigned int prev_state,
struct task_struct *prev,
- struct task_struct *next)
+ struct task_struct *next,
+ unsigned int prev_state)
{
unsigned long long timestamp;
int index;
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 4f1d2f5e7263..af899b058c8d 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -7420,9 +7420,9 @@ ftrace_func_t ftrace_ops_get_func(struct ftrace_ops *ops)

static void
ftrace_filter_pid_sched_switch_probe(void *data, bool preempt,
- unsigned int prev_state,
struct task_struct *prev,
- struct task_struct *next)
+ struct task_struct *next,
+ unsigned int prev_state)
{
struct trace_array *tr = data;
struct trace_pid_list *pid_list;
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index e11e167b7809..f97de82d1342 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -773,9 +773,9 @@ void trace_event_follow_fork(struct trace_array *tr, bool enable)

static void
event_filter_pid_sched_switch_probe_pre(void *data, bool preempt,
- unsigned int prev_state,
struct task_struct *prev,
- struct task_struct *next)
+ struct task_struct *next,
+ unsigned int prev_state)
{
struct trace_array *tr = data;
struct trace_pid_list *no_pid_list;
@@ -799,9 +799,9 @@ event_filter_pid_sched_switch_probe_pre(void *data, bool preempt,

static void
event_filter_pid_sched_switch_probe_post(void *data, bool preempt,
- unsigned int prev_state,
struct task_struct *prev,
- struct task_struct *next)
+ struct task_struct *next,
+ unsigned int prev_state)
{
struct trace_array *tr = data;
struct trace_pid_list *no_pid_list;
diff --git a/kernel/trace/trace_osnoise.c b/kernel/trace/trace_osnoise.c
index e9ae1f33a7f0..afb92e2f0aea 100644
--- a/kernel/trace/trace_osnoise.c
+++ b/kernel/trace/trace_osnoise.c
@@ -1168,9 +1168,9 @@ thread_exit(struct osnoise_variables *osn_var, struct task_struct *t)
*/
static void
trace_sched_switch_callback(void *data, bool preempt,
- unsigned int prev_state,
struct task_struct *p,
- struct task_struct *n)
+ struct task_struct *n,
+ unsigned int prev_state)
{
struct osnoise_variables *osn_var = this_cpu_osn_var();

diff --git a/kernel/trace/trace_sched_switch.c b/kernel/trace/trace_sched_switch.c
index 45796d8bd4b2..c9ffdcfe622e 100644
--- a/kernel/trace/trace_sched_switch.c
+++ b/kernel/trace/trace_sched_switch.c
@@ -22,8 +22,8 @@ static DEFINE_MUTEX(sched_register_mutex);

static void
probe_sched_switch(void *ignore, bool preempt,
- unsigned int prev_state,
- struct task_struct *prev, struct task_struct *next)
+ struct task_struct *prev, struct task_struct *next,
+ unsigned int prev_state)
{
int flags;

diff --git a/kernel/trace/trace_sched_wakeup.c b/kernel/trace/trace_sched_wakeup.c
index 46429f9a96fa..330aee1c1a49 100644
--- a/kernel/trace/trace_sched_wakeup.c
+++ b/kernel/trace/trace_sched_wakeup.c
@@ -426,8 +426,8 @@ tracing_sched_wakeup_trace(struct trace_array *tr,

static void notrace
probe_wakeup_sched_switch(void *ignore, bool preempt,
- unsigned int prev_state,
- struct task_struct *prev, struct task_struct *next)
+ struct task_struct *prev, struct task_struct *next,
+ unsigned int prev_state)
{
struct trace_array_cpu *data;
u64 T0, T1, delta;
diff --git a/samples/trace_events/trace_custom_sched.h b/samples/trace_events/trace_custom_sched.h
index 9fdd8e7c2a45..951388334a3f 100644
--- a/samples/trace_events/trace_custom_sched.h
+++ b/samples/trace_events/trace_custom_sched.h
@@ -25,11 +25,11 @@ TRACE_CUSTOM_EVENT(sched_switch,
* that the custom event is using.
*/
TP_PROTO(bool preempt,
- unsigned int prev_state,
struct task_struct *prev,
- struct task_struct *next),
+ struct task_struct *next,
+ unsigned int prev_state),

- TP_ARGS(preempt, prev_state, prev, next),
+ TP_ARGS(preempt, prev, next, prev_state),

/*
* The next fields are where the customization happens.
--
2.35.3

Subject: [tip: sched/urgent] sched/tracing: Append prev_state to tp args instead

The following commit has been merged into the sched/urgent branch of tip:

Commit-ID: 9c2136be0878c88c53dea26943ce40bb03ad8d8d
Gitweb: https://git.kernel.org/tip/9c2136be0878c88c53dea26943ce40bb03ad8d8d
Author: Delyan Kratunov <[email protected]>
AuthorDate: Wed, 11 May 2022 18:28:36
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Thu, 12 May 2022 00:37:11 +02:00

sched/tracing: Append prev_state to tp args instead

Commit fa2c3254d7cf (sched/tracing: Don't re-read p->state when emitting
sched_switch event, 2022-01-20) added a new prev_state argument to the
sched_switch tracepoint, before the prev task_struct pointer.

This reordering of arguments broke BPF programs that use the raw
tracepoint (e.g. tp_btf programs). The type of the second argument has
changed and existing programs that assume a task_struct* argument
(e.g. for bpf_task_storage access) will now fail to verify.
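As a minimal sketch of that breakage (illustrative only; the map and handler
names are invented and the program is not taken from any particular tool),
consider a tp_btf program written against the original (preempt, prev, next)
prototype. With prev_state inserted as the second argument, the parameter this
program calls prev is actually an unsigned int, so the bpf_task_storage_get()
call below no longer passes the verifier:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

/* Per-task scratch storage, looked up via the prev task pointer. */
struct {
	__uint(type, BPF_MAP_TYPE_TASK_STORAGE);
	__uint(map_flags, BPF_F_NO_PREALLOC);
	__type(key, int);
	__type(value, __u64);
} switch_ts SEC(".maps");

SEC("tp_btf/sched_switch")
int BPF_PROG(handle_switch, bool preempt,
	     struct task_struct *prev, struct task_struct *next)
{
	__u64 *ts;

	/* Verifies only while the second tracepoint argument really is a
	 * task_struct pointer; with prev_state in that slot, the verifier
	 * rejects this helper call. */
	ts = bpf_task_storage_get(&switch_ts, prev, 0,
				  BPF_LOCAL_STORAGE_GET_F_CREATE);
	if (ts)
		*ts = bpf_ktime_get_ns();
	return 0;
}

char LICENSE[] SEC("license") = "GPL";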

If we instead append the new argument to the end, all existing programs
would continue to work and can conditionally extract the prev_state
argument on supported kernel versions.
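A sketch of how callers can pick up the appended argument, building on the
previous sketch (same includes and license declaration, names again invented):
a program that never declares the trailing parameter is unaffected, since an
argument it does not name is never read, while a program that wants prev_state
simply declares it at the end. The variant below verifies only on kernels whose
sched_switch prototype actually carries the extra argument; one way to support
both old and new kernels from a single object is to ship both handler variants
and have the loader disable the one that does not apply, for example with
libbpf's bpf_program__set_autoload(), after probing the tracepoint prototype in
the running kernel's BTF:

SEC("tp_btf/sched_switch")
int BPF_PROG(handle_switch_state, bool preempt,
	     struct task_struct *prev, struct task_struct *next,
	     unsigned int prev_state)
{
	/* prev_state is the snapshot taken by __schedule(), so there is no
	 * need to re-read prev->__state here. */
	bpf_printk("pid %d switched out in state %u", prev->pid, prev_state);
	return 0;
}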

Fixes: fa2c3254d7cf (sched/tracing: Don't re-read p->state when emitting sched_switch event, 2022-01-20)
Signed-off-by: Delyan Kratunov <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Acked-by: Steven Rostedt (Google) <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
---
 include/trace/events/sched.h              | 6 +++---
 kernel/sched/core.c                       | 2 +-
 kernel/trace/fgraph.c                     | 4 ++--
 kernel/trace/ftrace.c                     | 4 ++--
 kernel/trace/trace_events.c               | 8 ++++----
 kernel/trace/trace_osnoise.c              | 4 ++--
 kernel/trace/trace_sched_switch.c         | 4 ++--
 kernel/trace/trace_sched_wakeup.c         | 4 ++--
 samples/trace_events/trace_custom_sched.h | 6 +++---
 9 files changed, 21 insertions(+), 21 deletions(-)

diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
index 65e7867..fbb99a6 100644
--- a/include/trace/events/sched.h
+++ b/include/trace/events/sched.h
@@ -222,11 +222,11 @@ static inline long __trace_sched_switch_state(bool preempt,
TRACE_EVENT(sched_switch,

TP_PROTO(bool preempt,
- unsigned int prev_state,
struct task_struct *prev,
- struct task_struct *next),
+ struct task_struct *next,
+ unsigned int prev_state),

- TP_ARGS(preempt, prev_state, prev, next),
+ TP_ARGS(preempt, prev, next, prev_state),

TP_STRUCT__entry(
__array( char, prev_comm, TASK_COMM_LEN )
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 51efaab..d58c038 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6382,7 +6382,7 @@ static void __sched notrace __schedule(unsigned int sched_mode)
migrate_disable_switch(rq, prev);
psi_sched_switch(prev, next, !task_on_rq_queued(prev));

- trace_sched_switch(sched_mode & SM_MASK_PREEMPT, prev_state, prev, next);
+ trace_sched_switch(sched_mode & SM_MASK_PREEMPT, prev, next, prev_state);

/* Also unlocks the rq: */
rq = context_switch(rq, prev, next, &rf);
diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
index 8f4fb32..a7e84c8 100644
--- a/kernel/trace/fgraph.c
+++ b/kernel/trace/fgraph.c
@@ -404,9 +404,9 @@ free:

static void
ftrace_graph_probe_sched_switch(void *ignore, bool preempt,
- unsigned int prev_state,
struct task_struct *prev,
- struct task_struct *next)
+ struct task_struct *next,
+ unsigned int prev_state)
{
unsigned long long timestamp;
int index;
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 4f1d2f5..af899b0 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -7420,9 +7420,9 @@ ftrace_func_t ftrace_ops_get_func(struct ftrace_ops *ops)

static void
ftrace_filter_pid_sched_switch_probe(void *data, bool preempt,
- unsigned int prev_state,
struct task_struct *prev,
- struct task_struct *next)
+ struct task_struct *next,
+ unsigned int prev_state)
{
struct trace_array *tr = data;
struct trace_pid_list *pid_list;
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index e11e167..f97de82 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -773,9 +773,9 @@ void trace_event_follow_fork(struct trace_array *tr, bool enable)

static void
event_filter_pid_sched_switch_probe_pre(void *data, bool preempt,
- unsigned int prev_state,
struct task_struct *prev,
- struct task_struct *next)
+ struct task_struct *next,
+ unsigned int prev_state)
{
struct trace_array *tr = data;
struct trace_pid_list *no_pid_list;
@@ -799,9 +799,9 @@ event_filter_pid_sched_switch_probe_pre(void *data, bool preempt,

static void
event_filter_pid_sched_switch_probe_post(void *data, bool preempt,
- unsigned int prev_state,
struct task_struct *prev,
- struct task_struct *next)
+ struct task_struct *next,
+ unsigned int prev_state)
{
struct trace_array *tr = data;
struct trace_pid_list *no_pid_list;
diff --git a/kernel/trace/trace_osnoise.c b/kernel/trace/trace_osnoise.c
index e9ae1f3..afb92e2 100644
--- a/kernel/trace/trace_osnoise.c
+++ b/kernel/trace/trace_osnoise.c
@@ -1168,9 +1168,9 @@ thread_exit(struct osnoise_variables *osn_var, struct task_struct *t)
*/
static void
trace_sched_switch_callback(void *data, bool preempt,
- unsigned int prev_state,
struct task_struct *p,
- struct task_struct *n)
+ struct task_struct *n,
+ unsigned int prev_state)
{
struct osnoise_variables *osn_var = this_cpu_osn_var();

diff --git a/kernel/trace/trace_sched_switch.c b/kernel/trace/trace_sched_switch.c
index 45796d8..c9ffdcf 100644
--- a/kernel/trace/trace_sched_switch.c
+++ b/kernel/trace/trace_sched_switch.c
@@ -22,8 +22,8 @@ static DEFINE_MUTEX(sched_register_mutex);

static void
probe_sched_switch(void *ignore, bool preempt,
- unsigned int prev_state,
- struct task_struct *prev, struct task_struct *next)
+ struct task_struct *prev, struct task_struct *next,
+ unsigned int prev_state)
{
int flags;

diff --git a/kernel/trace/trace_sched_wakeup.c b/kernel/trace/trace_sched_wakeup.c
index 46429f9..330aee1 100644
--- a/kernel/trace/trace_sched_wakeup.c
+++ b/kernel/trace/trace_sched_wakeup.c
@@ -426,8 +426,8 @@ tracing_sched_wakeup_trace(struct trace_array *tr,

static void notrace
probe_wakeup_sched_switch(void *ignore, bool preempt,
- unsigned int prev_state,
- struct task_struct *prev, struct task_struct *next)
+ struct task_struct *prev, struct task_struct *next,
+ unsigned int prev_state)
{
struct trace_array_cpu *data;
u64 T0, T1, delta;
diff --git a/samples/trace_events/trace_custom_sched.h b/samples/trace_events/trace_custom_sched.h
index 9fdd8e7..9513883 100644
--- a/samples/trace_events/trace_custom_sched.h
+++ b/samples/trace_events/trace_custom_sched.h
@@ -25,11 +25,11 @@ TRACE_CUSTOM_EVENT(sched_switch,
* that the custom event is using.
*/
TP_PROTO(bool preempt,
- unsigned int prev_state,
struct task_struct *prev,
- struct task_struct *next),
+ struct task_struct *next,
+ unsigned int prev_state),

- TP_ARGS(preempt, prev_state, prev, next),
+ TP_ARGS(preempt, prev, next, prev_state),

/*
* The next fields are where the customization happens.

2022-05-14 04:19:00

by Steven Rostedt

[permalink] [raw]
Subject: Re: [PATCH v2] sched/tracing: append prev_state to tp args instead

On Wed, 11 May 2022 18:28:36 +0000
Delyan Kratunov <[email protected]> wrote:

> Sorry about that, Peter, and thanks for taking it in; it's much appreciated!
>
> include/trace/events/sched.h              | 6 +++---
> kernel/sched/core.c                       | 2 +-
> kernel/trace/fgraph.c                     | 4 ++--
> kernel/trace/ftrace.c                     | 4 ++--
> kernel/trace/trace_events.c               | 8 ++++----
> kernel/trace/trace_osnoise.c              | 4 ++--
> kernel/trace/trace_sched_switch.c         | 4 ++--
> kernel/trace/trace_sched_wakeup.c         | 4 ++--
> samples/trace_events/trace_custom_sched.h | 6 +++---
> 9 files changed, 21 insertions(+), 21 deletions(-)

Acked-by: Steven Rostedt (Google) <[email protected]>

--
Steve