2021-08-10 17:12:01

by Dmitry Vyukov

Subject: finding regressions with syzkaller

Hi,

I want to give an overview of an idea and an early prototype we
developed as part of an intern project. This is not yet at the stage
of producing real results, but I just wanted to share the idea with
you and maybe get some feedback.

The idea is to generate random test programs (as syzkaller does) and
then execute them on 2 different kernels and compare results (so
called "differential fuzzing"). This has the potential of finding not
just various "crashes" but also logical bugs and regressions.

Initially we thought of comparing Linux with gVisor or FreeBSD on a
common subset of syscalls. But it turns out we can also compare
different versions of Linux (LTS vs upstream, or different LTS
versions, or LTS .1 with .y) to find any changes in
behavior/regressions. Ultimately such an approach could automatically
detect and report a wide spectrum of small and large changes across
subsystems, and potentially even bisect the commit that introduced the
difference.
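To illustrate the bisection idea: assuming the behavioral difference appears at a single point in an ordered list of kernel builds, the search reduces to a plain binary search. A minimal sketch in Python (the build list and predicate are made up for illustration, this is not syzkaller code):

```python
def bisect_first_changed(builds, shows_new_behavior):
    """Binary search for the first build whose behavior differs.

    builds: kernel builds ordered from oldest to newest.
    shows_new_behavior(build): True if the generated test program
    behaves on this build like it does on the newest one.
    Assumes the behavior flips exactly once across the range.
    """
    lo, hi = 0, len(builds) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if shows_new_behavior(builds[mid]):
            hi = mid        # change happened at mid or earlier
        else:
            lo = mid + 1    # change happened strictly after mid
    return builds[lo]
```

Each probe here costs one boot-and-run of the generated program, so the log2(N) probe count matters in practice.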

In the initial version we only considered returned errnos (including
0/success) as the "results" of executing a program. But theoretically
this should be enough to sense lots of differences: e.g. if a file's
state differs, that can be sensed by a subsequent read returning
different results.
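For illustration, comparing two runs by their per-call errnos could be as simple as the following sketch (the function and the result representation are hypothetical, not syz-verifier's actual code):

```python
import errno

def diff_errnos(run_a, run_b):
    """Compare per-syscall errnos (0 = success) from executing the same
    generated program on two kernels; report the calls that differ."""
    def name(e):
        return "SUCCESS" if e == 0 else errno.errorcode.get(e, str(e))
    return [(i, name(a), name(b))
            for i, (a, b) in enumerate(zip(run_a, run_b))
            if a != b]
```

A reported tuple like (1, 'EBADF', 'ENXIO') would surface exactly the kind of silent errno change discussed below.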

The major issue is various false positive differences caused by
timings, non-determinism, accumulated state, intentional and
semi-intentional changes (e.g. subtle API extensions), etc. We learnt
how to deal with some of these to some degree, but feasibility is
still an open question.

So far we were able to find a few real-ish differences; the most
interesting, I think, is this commit:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=d25e3a3de0d6fb2f660dbc7d643b2c632beb1743
which silently does s/EBADF/ENXIO/:

-	f = fdget(p->wq_fd);
-	if (!f.file)
-		return -EBADF;
+	f = fdget(p->wq_fd);
+	if (!f.file)
+		return -ENXIO;

I don't know how important this difference is, but I think it's
exciting and promising that the tool was able to sense this change.

The other difference we discovered is caused by this commit:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=97ba62b278674293762c3d91f724f1bb922f04e0

which adds attr->sigtrap:
+	if (attr->sigtrap && !attr->remove_on_exec)
+		return -EINVAL;

So the new kernel returns EINVAL for some inputs, while the old kernel
did not recognize this flag and returned E2BIG. This is an example of
a subtle API extension, which represents a problem for the tool (bolder
API changes, like a new syscall or a new /dev node, are easier to
handle automatically).
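One conceivable way to keep such extensions from flooding the reports is an allowlist of errno transitions that typically mean "the newer kernel now validates a field the older one rejected wholesale". This is a made-up sketch, not how syz-verifier actually classifies differences:

```python
import errno

# Hypothetical allowlist of (old_errno, new_errno) transitions that tend
# to indicate a deliberate API extension rather than a regression, e.g.
# perf_event_attr growing a new flag: old kernels say E2BIG (unknown
# attr size/bits), new kernels validate the flag and say EINVAL.
LIKELY_API_EXTENSION = {
    (errno.E2BIG, errno.EINVAL),
}

def looks_like_extension(old, new):
    """True if an errno difference matches a known extension pattern."""
    return (old, new) in LIKELY_API_EXTENSION
```

The EBADF -> ENXIO change above would deliberately not match, so it would still be reported.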

If you are interested in more info, here are some links:
https://github.com/google/syzkaller/blob/master/docs/syz_verifier.md
https://github.com/google/syzkaller/issues/692
https://github.com/google/syzkaller/issues/200

Since this work is at a very early stage, I only have very high-level questions:
- what do you think about feasibility/usefulness of this idea in general?
- any suggestions on how to make the tool find more differences/bugs
or how to make it more reliable?
- is there a list of, or pointers to, some known past regressions that
would be useful to find with such a tool? (I've looked at the things
reported on the regressions@ list, but it's mostly crashes/not
booting, and that's what syzkaller can already find well)
- anybody else we should CC?

Thanks


2021-08-11 11:48:46

by Thorsten Leemhuis

Subject: Re: finding regressions with syzkaller

[CCing Lukas]

Hi Dmitry!

On 10.08.21 19:08, Dmitry Vyukov wrote:
> [...]
> The idea is to generate random test programs (as syzkaller does) and
> then execute them on 2 different kernels and compare results (so
> called "differential fuzzing"). This has the potential of finding not
> just various "crashes" but also logical bugs and regressions.

Hmmm, interesting concept!

> The major issue is various false positive differences caused by
> timings, non-determinism, accumulated state, intentional and
> semi-intentional changes (e.g. subtle API extensions), etc. We learnt
> how to deal with some of these to some degree, but feasibility is
> still an open question.

Sounds complicated and like a lot of manual work.

Do you have in mind that Linus, and hence afaics many other kernel
developers, only care about regressions someone actually observed in
practice? Like a software or script breaking due to a kernel-side change?

To quote Linus from
https://lore.kernel.org/lkml/CA+55aFx3RswnjmCErk8QhCo0KrCvxZnuES3WALBR1NkPbUZ8qw@mail.gmail.com/

```The Linux "no regressions" rule is not about some theoretical
"the ABI changed". It's about actual observed regressions.

So if we can improve the ABI without any user program or workflow
breaking, that's fine.```

His stance on that afaik has not changed since then.

Thus after ruling out all the false positives syzkaller might find, there
will always be the follow-up question "well, does anything/anyone
actually care?". That might be hard to answer and requires yet more
manual work by some human. Maybe these working hours, at least for now,
are better spent in other areas.

> Since this work is at a very early stage, I only have very high-level questions:
> - what do you think about feasibility/usefulness of this idea in general?

TBH I'm a bit sceptical due to the above factors. Don't get me wrong,
making syzkaller look out for regressions sounds great, but I wonder if
there are more pressing issues that are worth getting at first.

Another aspect: CI testing already finds quite a few regressions, but
those that are harder to catch are afaics often in driver code. And you
often can't test that without the hardware, which makes me assume that
syzkaller wouldn't help here (or am I wrong?)

> - any suggestions on how to make the tool find more differences/bugs
> or how to make it more reliable?
> - is there a list of, or pointers to, some known past regressions that
> would be useful to find with such a tool? (I've looked at the things
> reported on the regressions@ list, but it's mostly crashes/not
> booting, and that's what syzkaller can already find well)

I first wanted to tell you "look up the reports I compiled in 2017 in
the LKML archives", but I guess the way better solution is: just grep
for "regression" in the commit log.

> - anybody else we should CC?

I guess the people from the Elisa project might be interested in this,
that's why I CCed Lukas.

Ciao, Thorsten

2021-08-12 10:04:03

by Dmitry Vyukov

Subject: Re: finding regressions with syzkaller

On Wed, 11 Aug 2021 at 13:25, Thorsten Leemhuis <[email protected]> wrote:
>
> [CCing Lukas]
>
> Hi Dmitry!
>
> On 10.08.21 19:08, Dmitry Vyukov wrote:
> > [...]
> > The idea is to generate random test programs (as syzkaller does) and
> > then execute them on 2 different kernels and compare results (so
> > called "differential fuzzing"). This has the potential of finding not
> > just various "crashes" but also logical bugs and regressions.
>
> Hmmm, interesting concept!
>
> > The major issue is various false positive differences caused by
> > timings, non-determinism, accumulated state, intentional and
> > semi-intentional changes (e.g. subtle API extensions), etc. We learnt
> > how to deal with some of these to some degree, but feasibility is
> > still an open question.
>
> Sounds complicated and like a lot of manual work.
>
> Do you have in mind that Linus, and hence afaics many other kernel
> developers, only care about regressions someone actually observed in
> practice? Like a software or script breaking due to a kernel-side change?
>
> To quote Linus from
> https://lore.kernel.org/lkml/CA+55aFx3RswnjmCErk8QhCo0KrCvxZnuES3WALBR1NkPbUZ8qw@mail.gmail.com/
>
> ```The Linux "no regressions" rule is not about some theoretical
> "the ABI changed". It's about actual observed regressions.
>
> So if we can improve the ABI without any user program or workflow
> breaking, that's fine.```
>
> His stance on that afaik has not changed since then.
>
> Thus after ruling out all the false positives syzkaller might find, there
> will always be the follow-up question "well, does anything/anyone
> actually care?". That might be hard to answer and requires yet more
> manual work by some human. Maybe these working hours, at least for now,
> are better spent in other areas.

Hi Thorsten,

Good point. At this point the nature and volume of regressions such a
system can find is unknown, so it's hard to draw any conclusions.
But some additional theoretical arguments in favor of such a system:
1. Regressions also need to be found quickly (ideally before the
release). As far as I understand, currently lots of regressions are
found only after 1-3 years, when the new kernel reaches some distro and
users update to the new version. Year-long latency has its own
problems: in particular, there may by then be users of the new
(implicit) API as well, and then it's simply not possible to resolve
the breakage at all.

2. As far as I understand, most regressions happen due to patches that
were not even known to change anything (the change wasn't
known/described). So such a system could at least surface this
information. For example, was it intentional/known/realized that this
commit changes the API?
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=d25e3a3de0d6fb2f660dbc7d643b2c632beb1743
Amusingly, the commit description says:
"Retain compatibility with how attaching works, so that any attempt to
attach to an fd that doesn't exist, or isn't an io_uring fd, will fail
like it did before".
which turns out to be false: it does not fail like it did before, it
fails differently.

3. It may be possible to prioritize some API changes as more likely to
be problematic (e.g. a change from errno 0 to some particular value,
or changed file contents after the same sequence of syscalls). The
importance can also differ between kernels: for example, for LTS .1 ->
.y I assume any change may be worth being aware of.
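As a toy example of such prioritization (the categories and rules are invented here for illustration, not part of the tool):

```python
def change_severity(old_errno, new_errno):
    """Rough triage of an observed errno difference (0 = success)."""
    if old_errno == new_errno:
        return "none"
    if (old_errno == 0) != (new_errno == 0):
        # success flipped to failure (or the reverse): most likely
        # to break an existing user program or workflow
        return "high"
    # both runs fail, just with different error codes
    return "low"
```

For the stricter LTS .1 -> .y comparison, "low" could simply be promoted to "high" as well.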



> > Since this work is at a very early stage, I only have very high-level questions:
> > - what do you think about feasibility/usefulness of this idea in general?
>
> TBH I'm a bit sceptical due to the above factors. Don't get me wrong,
> making syzkaller look out for regressions sounds great, but I wonder if
> there are more pressing issues that are worth getting at first.
>
> Another aspect: CI testing already finds quite a few regressions,

Quite a few in absolute numbers or relative to the total number of
regressions? :)

> but
> those that are harder to catch are afaics often in driver code. And you
> often can't test that without the hardware, which makes me assume that
> syzkaller wouldn't help here (or am I wrong?)

It depends.
syzbot runs on VMs at the moment, but anybody is free to run syzkaller
on any h/w, and it's quite popular at least for Android phones as far
as I understand.
And at some point we may have more testable drivers than the few we
have now.
But then, could anybody have predicted how many bugs syzkaller would
find before it came into existence? So I would not give up on generic
kernel code right away :)


> > - any suggestions on how to make the tool find more differences/bugs
> > or how to make it more reliable?
> > - is there a list of, or pointers to, some known past regressions that
> > would be useful to find with such a tool? (I've looked at the things
> > reported on the regressions@ list, but it's mostly crashes/not
> > booting, and that's what syzkaller can already find well)
>
> I first wanted to tell you "look up the reports I compiled in 2017 in
> the LKML archives", but I guess the way better solution is: just grep
> for "regression" in the commit log.

Good idea.
It seems that something like this can give enough subsystem-targeted
info for initial analysis:

git log --no-merges --oneline --grep "fix.*regression" fs/ | \
  grep -v "performance regression"

> > - anybody else we should CC?
>
> I guess the people from the Elisa project might be interested in this,
> that's why I CCed Lukas.
>
> Ciao, Thorsten

2021-09-22 11:23:41

by Lukas Bulwahn

Subject: Re: finding regressions with syzkaller

On Wed, Aug 11, 2021 at 1:25 PM Thorsten Leemhuis <[email protected]> wrote:
>
> [CCing Lukas]
>
> Hi Dmitry!
>
> On 10.08.21 19:08, Dmitry Vyukov wrote:
> > [...]
> > The idea is to generate random test programs (as syzkaller does) and
> > then execute them on 2 different kernels and compare results (so
> > called "differential fuzzing"). This has the potential of finding not
> > just various "crashes" but also logical bugs and regressions.
>
> Hmmm, interesting concept!
>
> > The major issue is various false positive differences caused by
> > timings, non-determinism, accumulated state, intentional and
> > semi-intentional changes (e.g. subtle API extensions), etc. We learnt
> > how to deal with some of these to some degree, but feasibility is
> > still an open question.
>
> Sounds complicated and like a lot of manual work.
>
> Do you have in mind that Linus, and hence afaics many other kernel
> developers, only care about regressions someone actually observed in
> practice? Like a software or script breaking due to a kernel-side change?
>
> To quote Linus from
> https://lore.kernel.org/lkml/CA+55aFx3RswnjmCErk8QhCo0KrCvxZnuES3WALBR1NkPbUZ8qw@mail.gmail.com/
>
> ```The Linux "no regressions" rule is not about some theoretical
> "the ABI changed". It's about actual observed regressions.
>
> So if we can improve the ABI without any user program or workflow
> breaking, that's fine.```
>
> His stance on that afaik has not changed since then.
>
> Thus after ruling out all the false positives syzkaller might find, there
> will always be the follow-up question "well, does anything/anyone
> actually care?". That might be hard to answer and requires yet more
> manual work by some human. Maybe these working hours, at least for now,
> are better spent in other areas.
>
> > Since this work is at a very early stage, I only have very high-level questions:
> > - what do you think about feasibility/usefulness of this idea in general?
>
> TBH I'm a bit sceptical due to the above factors. Don't get me wrong,
> making syzkaller look out for regressions sounds great, but I wonder if
> there are more pressing issues that are worth getting at first.
>
> Another aspect: CI testing already finds quite a few regressions, but
> those that are harder to catch are afaics often in driver code. And you
> often can't test that without the hardware, which makes me assume that
> syzkaller wouldn't help here (or am I wrong?)
>
> > - any suggestions on how to make the tool find more differences/bugs
> > or how to make it more reliable?
> > - is there a list of, or pointers to, some known past regressions that
> > would be useful to find with such a tool? (I've looked at the things
> > reported on the regressions@ list, but it's mostly crashes/not
> > booting, and that's what syzkaller can already find well)
>
> I first wanted to tell you "look up the reports I compiled in 2017 in
> the LKML archives", but I guess the way better solution is: just grep
> for "regression" in the commit log.
>
> > - anybody else we should CC?
>
> I guess the people from the Elisa project might be interested in this,
> that's why I CCed Lukas.
>

Thanks, Thorsten. I do follow the syzkaller mailing list, so I have
seen that email before, but I do appreciate your implicit
acknowledgement here :)

... and Dmitry is back from vacation and I guess we will hear more
today at the Testing and Fuzzing MC on this topic.

Further people/lists to CC are: Paul Albertella
<[email protected]> (already CCed here)

I am personally certainly interested, and I think this work gives
companies in the area of building trustable software and systems (see
Paul's area of expertise) a good understanding of how reliable the
statement "all Linux kernels are backwards compatible" really is, and
to what extent it holds.

I unfortunately lost the Fuzzing Team (Jouni Högander, Jukka Kaartinen
et al.) previously working with me, so I first need to get back some
budget and build up a new team; I hope that we can then also follow this
idea and contribute here as well. (Fingers crossed that I can convince
some others to give me money and work with me on this...)

Looking forward to the presentation at the MC.

Best regards,

Lukas