Hello,
syzbot found the following issue on:
HEAD commit: 81af97bdef5e printk: Export console trace point for kcsan/..
git tree: https://github.com/google/kmsan.git master
console output: https://syzkaller.appspot.com/x/log.txt?x=178eaf77c80000
kernel config: https://syzkaller.appspot.com/x/.config?x=60f9227cd7ef4e36
dashboard link: https://syzkaller.appspot.com/bug?extid=a6d6b8fffa294705dbd8
compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2
userspace arch: i386
Unfortunately, I don't have any reproducer for this issue yet.
Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/959cdbc34509/disk-81af97bd.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/21bf12585d87/vmlinux-81af97bd.xz
kernel image: https://storage.googleapis.com/syzbot-assets/d46348c365bd/bzImage-81af97bd.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: [email protected]
XFS: attr2 mount option is deprecated.
XFS (loop3): Mounting V5 Filesystem a2f82aab-77f8-4286-afd4-a8f747a74bab
XFS (loop3): Ending clean mount
XFS (loop3): Quotacheck needed: Please wait.
=====================================================
BUG: KMSAN: uninit-value in crc32_body lib/crc32.c:112 [inline]
BUG: KMSAN: uninit-value in crc32_le_generic lib/crc32.c:179 [inline]
BUG: KMSAN: uninit-value in __crc32c_le_base+0x467/0xd80 lib/crc32.c:201
crc32_body lib/crc32.c:112 [inline]
crc32_le_generic lib/crc32.c:179 [inline]
__crc32c_le_base+0x467/0xd80 lib/crc32.c:201
chksum_update+0x5b/0xd0 crypto/crc32c_generic.c:88
crypto_shash_update+0x28f/0x400 crypto/shash.c:121
crc32c+0xd9/0x1a0 lib/libcrc32c.c:47
xlog_cksum fs/xfs/xfs_log.c:1867 [inline]
xlog_sync+0xf66/0x11e0 fs/xfs/xfs_log.c:2111
xlog_state_release_iclog+0x3d0/0x7e0 fs/xfs/xfs_log.c:619
xlog_force_iclog fs/xfs/xfs_log.c:888 [inline]
xlog_force_and_check_iclog fs/xfs/xfs_log.c:3172 [inline]
xfs_log_force+0x9d3/0xf20 fs/xfs/xfs_log.c:3249
xfs_qm_dqflush+0x11d9/0x17f0 fs/xfs/xfs_dquot.c:1293
xfs_qm_flush_one+0x148/0x5b0 fs/xfs/xfs_qm.c:1260
xfs_qm_dquot_walk+0x2be/0x7b0 fs/xfs/xfs_qm.c:87
xfs_qm_quotacheck+0x4e1/0xb20 fs/xfs/xfs_qm.c:1338
xfs_qm_mount_quotas+0x3ef/0x890 fs/xfs/xfs_qm.c:1457
xfs_mountfs+0x2184/0x2410 fs/xfs/xfs_mount.c:959
xfs_fs_fill_super+0x1fb3/0x22b0 fs/xfs/xfs_super.c:1677
get_tree_bdev+0x890/0xd10 fs/super.c:1303
xfs_fs_get_tree+0x34/0x40 fs/xfs/xfs_super.c:1724
vfs_get_tree+0xa5/0x500 fs/super.c:1510
do_new_mount+0x69a/0x1580 fs/namespace.c:3042
path_mount+0x725/0x1ee0 fs/namespace.c:3372
do_mount fs/namespace.c:3385 [inline]
__do_sys_mount fs/namespace.c:3594 [inline]
__se_sys_mount+0x734/0x840 fs/namespace.c:3571
__ia32_sys_mount+0xe3/0x150 fs/namespace.c:3571
do_syscall_32_irqs_on arch/x86/entry/common.c:112 [inline]
__do_fast_syscall_32+0xa2/0x100 arch/x86/entry/common.c:178
do_fast_syscall_32+0x37/0x80 arch/x86/entry/common.c:203
do_SYSENTER_32+0x1f/0x30 arch/x86/entry/common.c:246
entry_SYSENTER_compat_after_hwframe+0x70/0x82
Uninit was stored to memory at:
xlog_write_iovec fs/xfs/xfs_log.c:2259 [inline]
xlog_write_full fs/xfs/xfs_log.c:2294 [inline]
xlog_write+0x1c3c/0x21a0 fs/xfs/xfs_log.c:2563
xlog_cil_write_chain fs/xfs/xfs_log_cil.c:973 [inline]
xlog_cil_push_work+0x30c7/0x4430 fs/xfs/xfs_log_cil.c:1318
process_one_work+0xb0d/0x1410 kernel/workqueue.c:2390
worker_thread+0x107e/0x1d60 kernel/workqueue.c:2537
kthread+0x31f/0x430 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308
Uninit was created at:
slab_post_alloc_hook+0x12d/0xb60 mm/slab.h:774
slab_alloc_node mm/slub.c:3452 [inline]
__kmem_cache_alloc_node+0x518/0x920 mm/slub.c:3491
__do_kmalloc_node mm/slab_common.c:966 [inline]
__kmalloc+0x121/0x3c0 mm/slab_common.c:980
kmalloc include/linux/slab.h:584 [inline]
xlog_kvmalloc fs/xfs/xfs_log_priv.h:700 [inline]
xlog_cil_alloc_shadow_bufs fs/xfs/xfs_log_cil.c:338 [inline]
xlog_cil_commit+0x493/0x4c80 fs/xfs/xfs_log_cil.c:1629
__xfs_trans_commit+0x78e/0x13d0 fs/xfs/xfs_trans.c:1005
xfs_trans_roll+0x10a/0x6e0 fs/xfs/xfs_trans.c:1153
xfs_defer_trans_roll+0x10f/0x590 fs/xfs/libxfs/xfs_defer.c:366
xfs_defer_finish_noroll+0x49d/0x2d40 fs/xfs/libxfs/xfs_defer.c:551
__xfs_trans_commit+0x979/0x13d0 fs/xfs/xfs_trans.c:970
xfs_trans_commit+0x2f/0x40 fs/xfs/xfs_trans.c:1049
xfs_dquot_disk_alloc+0xd4f/0x11a0 fs/xfs/xfs_dquot.c:384
xfs_qm_dqread+0x149/0x950 fs/xfs/xfs_dquot.c:665
xfs_qm_dqget+0x32c/0xb40 fs/xfs/xfs_dquot.c:870
xfs_qm_quotacheck_dqadjust+0xc4/0x860 fs/xfs/xfs_qm.c:1085
xfs_qm_dqusage_adjust+0x51b/0x910 fs/xfs/xfs_qm.c:1190
xfs_iwalk_ag_recs+0x5ca/0xa50 fs/xfs/xfs_iwalk.c:220
xfs_iwalk_run_callbacks+0x27f/0x590 fs/xfs/xfs_iwalk.c:376
xfs_iwalk_ag+0xee3/0x1080 fs/xfs/xfs_iwalk.c:482
xfs_iwalk_ag_work+0x207/0x300 fs/xfs/xfs_iwalk.c:624
xfs_pwork_work+0x97/0x2b0 fs/xfs/xfs_pwork.c:47
process_one_work+0xb0d/0x1410 kernel/workqueue.c:2390
worker_thread+0x107e/0x1d60 kernel/workqueue.c:2537
kthread+0x31f/0x430 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308
CPU: 1 PID: 5349 Comm: syz-executor.3 Not tainted 6.3.0-syzkaller-g81af97bdef5e #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/14/2023
=====================================================
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at [email protected].
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the bug is already fixed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want to change bug's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the bug is a duplicate of another bug, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup
Hi Christoph, Dave,
The repro provided by Xingwei indeed works.
I tried adding kmsan_check_memory(data, write_len) to xlog_write_iovec(), and
it reported an uninitialized hole inside the `data` buffer:
kmalloc-ed xlog buffer of size 512 : ffff88802fc26200
kmalloc-ed xlog buffer of size 368 : ffff88802fc24a00
kmalloc-ed xlog buffer of size 648 : ffff88802b631000
kmalloc-ed xlog buffer of size 648 : ffff88802b632800
kmalloc-ed xlog buffer of size 648 : ffff88802b631c00
xlog_write_iovec: copying 12 bytes from ffff888017ddbbd8 to ffff88802c300400
xlog_write_iovec: copying 28 bytes from ffff888017ddbbe4 to ffff88802c30040c
xlog_write_iovec: copying 68 bytes from ffff88802fc26274 to ffff88802c300428
xlog_write_iovec: copying 188 bytes from ffff88802fc262bc to ffff88802c30046c
=====================================================
BUG: KMSAN: uninit-value in xlog_write_iovec fs/xfs/xfs_log.c:2227
BUG: KMSAN: uninit-value in xlog_write_full fs/xfs/xfs_log.c:2263
BUG: KMSAN: uninit-value in xlog_write+0x1fac/0x2600 fs/xfs/xfs_log.c:2532
xlog_write_iovec fs/xfs/xfs_log.c:2227
xlog_write_full fs/xfs/xfs_log.c:2263
xlog_write+0x1fac/0x2600 fs/xfs/xfs_log.c:2532
xlog_cil_write_chain fs/xfs/xfs_log_cil.c:918
xlog_cil_push_work+0x30f2/0x44e0 fs/xfs/xfs_log_cil.c:1263
process_one_work kernel/workqueue.c:2630
process_scheduled_works+0x1188/0x1e30 kernel/workqueue.c:2703
worker_thread+0xee5/0x14f0 kernel/workqueue.c:2784
kthread+0x391/0x500 kernel/kthread.c:388
ret_from_fork+0x66/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:242
Uninit was created at:
slab_post_alloc_hook+0x101/0xac0 mm/slab.h:768
slab_alloc_node mm/slub.c:3482
__kmem_cache_alloc_node+0x612/0xae0 mm/slub.c:3521
__do_kmalloc_node mm/slab_common.c:1006
__kmalloc+0x11a/0x410 mm/slab_common.c:1020
kmalloc ./include/linux/slab.h:604
xlog_kvmalloc fs/xfs/xfs_log_priv.h:704
xlog_cil_alloc_shadow_bufs fs/xfs/xfs_log_cil.c:343
xlog_cil_commit+0x487/0x4dc0 fs/xfs/xfs_log_cil.c:1574
__xfs_trans_commit+0x8df/0x1930 fs/xfs/xfs_trans.c:1017
xfs_trans_commit+0x30/0x40 fs/xfs/xfs_trans.c:1061
xfs_create+0x15af/0x2150 fs/xfs/xfs_inode.c:1076
xfs_generic_create+0x4cd/0x1550 fs/xfs/xfs_iops.c:199
xfs_vn_create+0x4a/0x60 fs/xfs/xfs_iops.c:275
lookup_open fs/namei.c:3477
open_last_lookups fs/namei.c:3546
path_openat+0x29ac/0x6180 fs/namei.c:3776
do_filp_open+0x24d/0x680 fs/namei.c:3809
do_sys_openat2+0x1bc/0x330 fs/open.c:1440
do_sys_open fs/open.c:1455
__do_sys_openat fs/open.c:1471
__se_sys_openat fs/open.c:1466
__x64_sys_openat+0x253/0x330 fs/open.c:1466
do_syscall_x64 arch/x86/entry/common.c:51
do_syscall_64+0x4f/0x140 arch/x86/entry/common.c:82
entry_SYSCALL_64_after_hwframe+0x63/0x6b arch/x86/entry/entry_64.S:120
Bytes 112-115 of 188 are uninitialized
Memory access of size 188 starts at ffff88802fc262bc
CPU: 1 PID: 1061 Comm: kworker/u9:6 Tainted: G B 6.7.0-rc3-gcaca734ed6a9-dirty #112
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.3-0-ga6ed6b701f0a-prebuilt.qemu.org 04/01/2014
Workqueue: xfs-cil/loop0 xlog_cil_push_work
=====================================================
Do you know if this is expected?
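For reference, the debugging change described above amounts to something
like this (a sketch only: the placement before the memcpy() and the
surrounding xlog_write_iovec() internals are assumed from a recent
kernel rather than quoted, and kmsan_check_memory() is declared in
<linux/kmsan-checks.h>):

	/*
	 * Hypothetical debugging hunk inside xlog_write_iovec() in
	 * fs/xfs/xfs_log.c: have KMSAN verify the region that is about to
	 * be copied into the iclog data buffer, so the report points at
	 * the copy site rather than at the later CRC calculation in
	 * xlog_cksum()/xlog_sync().
	 */
	kmsan_check_memory(data, write_len);
	memcpy(iclog->ic_datap + *log_offset, data, write_len);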
On Wed, Dec 13, 2023 at 11:49 AM Alexander Potapenko <[email protected]> wrote:
>
> Hi Christoph, Dave,
>
> The repro provided by Xingwei indeed works.
(the mentioned repro is at
https://lore.kernel.org/all/CABOYnLzjtGnP35qE3VVyiu_mawizPtiugFATSaWwmdL1w+pqWA@mail.gmail.com/)
[cc [email protected] because that's where all questions
about XFS stuff should be directed, not to random individual
developers. ]
On Wed, Dec 13, 2023 at 11:49:50AM +0100, Alexander Potapenko wrote:
> Hi Christoph, Dave,
>
> The repro provided by Xingwei indeed works.
>
> I tried adding kmsan_check_memory(data, write_len) to xlog_write_iovec(), and
> it reported an uninitialized hole inside the `data` buffer:
>
> kmalloc-ed xlog buffer of size 512 : ffff88802fc26200
> kmalloc-ed xlog buffer of size 368 : ffff88802fc24a00
> kmalloc-ed xlog buffer of size 648 : ffff88802b631000
> kmalloc-ed xlog buffer of size 648 : ffff88802b632800
> kmalloc-ed xlog buffer of size 648 : ffff88802b631c00
Off the top of my head:
> xlog_write_iovec: copying 12 bytes from ffff888017ddbbd8 to ffff88802c300400
Log start record in an ophdr.
> xlog_write_iovec: copying 28 bytes from ffff888017ddbbe4 to ffff88802c30040c
ophdr + checkpoint start header
> xlog_write_iovec: copying 68 bytes from ffff88802fc26274 to ffff88802c300428
ophdr + inode log format header
> xlog_write_iovec: copying 188 bytes from ffff88802fc262bc to ffff88802c30046c
ophdr + inode core in struct xfs_log_dinode format.
> =====================================================
> BUG: KMSAN: uninit-value in xlog_write_iovec fs/xfs/xfs_log.c:2227
> BUG: KMSAN: uninit-value in xlog_write_full fs/xfs/xfs_log.c:2263
> BUG: KMSAN: uninit-value in xlog_write+0x1fac/0x2600 fs/xfs/xfs_log.c:2532
> xlog_write_iovec fs/xfs/xfs_log.c:2227
> xlog_write_full fs/xfs/xfs_log.c:2263
> xlog_write+0x1fac/0x2600 fs/xfs/xfs_log.c:2532
> xlog_cil_write_chain fs/xfs/xfs_log_cil.c:918
> xlog_cil_push_work+0x30f2/0x44e0 fs/xfs/xfs_log_cil.c:1263
> process_one_work kernel/workqueue.c:2630
> process_scheduled_works+0x1188/0x1e30 kernel/workqueue.c:2703
> worker_thread+0xee5/0x14f0 kernel/workqueue.c:2784
> kthread+0x391/0x500 kernel/kthread.c:388
> ret_from_fork+0x66/0x80 arch/x86/kernel/process.c:147
> ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:242
>
> Uninit was created at:
> slab_post_alloc_hook+0x101/0xac0 mm/slab.h:768
> slab_alloc_node mm/slub.c:3482
> __kmem_cache_alloc_node+0x612/0xae0 mm/slub.c:3521
> __do_kmalloc_node mm/slab_common.c:1006
> __kmalloc+0x11a/0x410 mm/slab_common.c:1020
> kmalloc ./include/linux/slab.h:604
> xlog_kvmalloc fs/xfs/xfs_log_priv.h:704
> xlog_cil_alloc_shadow_bufs fs/xfs/xfs_log_cil.c:343
> xlog_cil_commit+0x487/0x4dc0 fs/xfs/xfs_log_cil.c:1574
> __xfs_trans_commit+0x8df/0x1930 fs/xfs/xfs_trans.c:1017
> xfs_trans_commit+0x30/0x40 fs/xfs/xfs_trans.c:1061
> xfs_create+0x15af/0x2150 fs/xfs/xfs_inode.c:1076
> xfs_generic_create+0x4cd/0x1550 fs/xfs/xfs_iops.c:199
> xfs_vn_create+0x4a/0x60 fs/xfs/xfs_iops.c:275
> lookup_open fs/namei.c:3477
> open_last_lookups fs/namei.c:3546
> path_openat+0x29ac/0x6180 fs/namei.c:3776
> do_filp_open+0x24d/0x680 fs/namei.c:3809
> do_sys_openat2+0x1bc/0x330 fs/open.c:1440
> do_sys_open fs/open.c:1455
> __do_sys_openat fs/open.c:1471
> __se_sys_openat fs/open.c:1466
> __x64_sys_openat+0x253/0x330 fs/open.c:1466
> do_syscall_x64 arch/x86/entry/common.c:51
> do_syscall_64+0x4f/0x140 arch/x86/entry/common.c:82
> entry_SYSCALL_64_after_hwframe+0x63/0x6b arch/x86/entry/entry_64.S:120
>
> Bytes 112-115 of 188 are uninitialized
> Memory access of size 188 starts at ffff88802fc262bc
so, allowing for the 12 byte ophdr at the start of that region, that's
bytes 100-103 of the xfs_log_dinode, which is the di_crc field of the
structure.
<looks at code>
Indeed, we *never* initialise that field, and we've never noticed
because it doesn't get used in replay (it is recalculated after
replay) so its value is never checked and nothing has ever issued
warnings about it in our testing.
We actually did all uninitialised structure leak testing back in
2017 on the xfs_log_dinode and that, amongst other things, flagged
the 4 bytes *before* the di_crc field as being uninitialised
(di_next_unlinked). We fixed those issues and the uninit/leak
warnings went away via this commit:
commit 20413e37d71befd02b5846acdaf5e2564dd1c38e
Author: Dave Chinner <[email protected]>
Date: Mon Oct 9 11:37:22 2017 -0700
xfs: Don't log uninitialised fields in inode structures
Prevent kmemcheck from throwing warnings about reading uninitialised
memory when formatting inodes into the incore log buffer. There are
several issues here - we don't always log all the fields in the
inode log format item, and we never log the inode the
di_next_unlinked field.
In the case of the inode log format item, this is exacerbated
by the old xfs_inode_log_format structure padding issue. Hence make
the padded, 64 bit aligned version of the structure the one we always
use for formatting the log and get rid of the 64 bit variant. This
means we'll always log the 64-bit version and so recovery only needs
to convert from the unpadded 32 bit version from older 32 bit
kernels.
Signed-Off-By: Dave Chinner <[email protected]>
Tested-by: Tetsuo Handa <[email protected]>
Reviewed-by: Brian Foster <[email protected]>
Reviewed-by: Darrick J. Wong <[email protected]>
Signed-off-by: Darrick J. Wong <[email protected]>
The same tool that found those problems didn't report the 4 byte
di_crc region as being uninitialised, and it's taken another 6 years
and some random, weird corner case for KMSAN to realise that every
inode we log fails to initialise the di_crc field?
It's trivial to fix now that the kmsan tool has identified the
issue, but I'm perplexed at how this has gone undetected for several
years despite the fact that "mount <fs>; touch foo; unmount <fs>"
should trigger an uninitialised memory read warning, without fail,
every time it is run.
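One way to do that trivial fix would be something like this (a sketch
only, not the actual patch posted later in the thread; it assumes the
incore inode is formatted into the log dinode by
xfs_inode_to_log_dinode() with a destination parameter named 'to'):

	/*
	 * Hypothetical sketch: zero di_crc when the incore inode is
	 * formatted into the struct xfs_log_dinode, so the region that
	 * gets copied into the iclog is fully initialised. Recovery never
	 * reads this field; the on-disk CRC is recalculated when the
	 * inode buffer is written back.
	 */
	to->di_crc = 0;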
-Dave.
--
Dave Chinner
[email protected]
On Wed, Dec 13, 2023 at 10:58 PM 'Dave Chinner' via syzkaller-bugs
<[email protected]> wrote:
>
> On Thu, Dec 14, 2023 at 08:16:07AM +1100, Dave Chinner wrote:
> > [cc [email protected] because that's where all questions
> > about XFS stuff should be directed, not to random individual
> > developers. ]
> >
> > On Wed, Dec 13, 2023 at 11:49:50AM +0100, Alexander Potapenko wrote:
> > > Hi Christoph, Dave,
> > >
> > > The repro provided by Xingwei indeed works.
>
> Can you please test the patch below?
It fixed the problem for me, feel free to add:
Tested-by: Alexander Potapenko <[email protected]>
As for the time needed to detect the bug, note that kmemcheck was
never used together with syzkaller, so it couldn't have the chance to
find it.
KMSAN found this bug in April
(https://syzkaller.appspot.com/bug?extid=a6d6b8fffa294705dbd8), only
half a year after we started mounting XFS images on syzbot.
Right now it is among the top crashers, so fixing it might uncover
more interesting bugs in xfs.
On Thu, Dec 14, 2023 at 03:55:00PM +0100, Alexander Potapenko wrote:
> On Wed, Dec 13, 2023 at 10:58 PM 'Dave Chinner' via syzkaller-bugs
> <[email protected]> wrote:
> >
> > On Thu, Dec 14, 2023 at 08:16:07AM +1100, Dave Chinner wrote:
> > > [cc [email protected] because that's where all questions
> > > about XFS stuff should be directed, not to random individual
> > > developers. ]
> > >
> > > On Wed, Dec 13, 2023 at 11:49:50AM +0100, Alexander Potapenko wrote:
> > > > Hi Christoph, Dave,
> > > >
> > > > The repro provided by Xingwei indeed works.
> >
> > Can you please test the patch below?
>
> It fixed the problem for me, feel free to add:
>
> Tested-by: Alexander Potapenko <[email protected]>
Thanks.
> As for the time needed to detect the bug, note that kmemcheck was
> never used together with syzkaller, so it couldn't have the chance to
> find it.
>
> KMSAN found this bug in April
> (https://syzkaller.appspot.com/bug?extid=a6d6b8fffa294705dbd8),
KMSAN has been used for quite a long time with syzbot, however,
and it's supposed to find these problems, too. Yet it's only been
finding this for 6 months?
> only
> half a year after we started mounting XFS images on syzbot.
Really? Where did you get that from? syzbot has been exercising XFS
filesystems since 2017 - the bug reports to the XFS list go back at
least that far.
-Dave.
--
Dave Chinner
[email protected]
Hi Dave,
> KMSAN has been used for quite a long time with syzbot, however,
> and it's supposed to find these problems, too. Yet it's only been
> finding this for 6 months?
As Alex already mentioned, there were big fs fuzzing improvements in
2022, and that's exactly when we started seeing "KMSAN: uninit-value
in __crc32c_le_base" (I've just checked crash history). Before that
moment the code was likely just not exercised on syzbot.
On Fri, Dec 15, 2023 at 10:59 PM 'Dave Chinner' via syzkaller-bugs
<[email protected]> wrote:
>
> On Fri, Dec 15, 2023 at 03:41:49PM +0100, Alexander Potapenko wrote:
> >
> > You are right, syzbot used to mount XFS way before 2022.
> > On the other hand, last fall there were some major changes to the way
> > syz_mount_image() works, so I am attributing the newly detected bugs
> > to those changes.
>
> Oh, so that's when syzbot first turned on XFS V5 format testing?
>
> Or was that done in April, when this issue was first reported?
>
> > Unfortunately we don't have much insight into reasons behind syzkaller
> > being able to trigger one bug or another: once a bug is found for the
> > first time, the likelihood to trigger it again increases, but finding
> > it initially might be tricky.
> >
> > I don't understand much how trivial is the repro at
> > https://gist.github.com/xrivendell7/c7bb6ddde87a892818ed1ce206a429c4,
>
> I just looked at it - all it does is create a new file. It's
> effectively "mount; touch", which is exactly what I said earlier
> in the thread should reproduce this issue every single time.
>
> > but overall we are not drilling deep enough into XFS.
> > https://storage.googleapis.com/syzbot-assets/8547e3dd1cca/ci-upstream-kmsan-gce-c7402612.html
> > (ouch, 230Mb!) shows very limited coverage.
>
> *sigh*
>
> Did you think to look at the coverage results to check why the
> numbers for XFS, ext4 and btrfs are all at 1%?
Hmmm, thanks for pointing it out!
Our ci-upstream-kmsan-gce instance is configured in such a way that
the fuzzer program is quite restricted in what it can do. Apparently,
it also lacks capabilities to do mounts, so we get almost no coverage
in fs/*/**. I'll check whether the lack of permissions to mount() was
intended.
On the other hand, the ci-upstream-kmsan-gce-386 instance does not
have such restrictions at all and we do see fs/ coverage there:
https://storage.googleapis.com/syzbot-assets/609dc759f08b/ci-upstream-kmsan-gce-386-0e389834.html
It's still quite low for fs/xfs, which is explainable -- we almost
immediately hit "KMSAN: uninit-value in __crc32c_le_base". For the
same reason, it's also somewhat lower than it could be elsewhere as well
-- we spend too much time restarting VMs after crashes. Once the fix
patch reaches the fuzzed kernel tree, ci-upstream-kmsan-gce-386 should
be back to normal.
If we want to see how deep syzbot can go into the fs/ code in general,
it's better to look at the KASAN instance coverage:
https://storage.googleapis.com/syzbot-assets/12b7d6ca74e6/ci-upstream-kasan-gce-root-0e389834.html
(*)
Here e.g. fs/ext4 is already 63% and fs/xfs is 16%.
(*) Be careful, the file is very big.
--
Aleksandr
> Why didn't the low
> number make you dig a bit deeper to see if the number was real or
> whether there was a test execution problem during measurement?
>
> I just spent a minute doing exactly that, and the answer is
> pretty obvious. Both ext4 and XFS had mount attempts
> rejected at mount option parsing, and btrfs rejected a device scan
> ioctl. That's it. Nothing else was exercised in those three
> filesystems.
>
> Put simply: the filesystems *weren't tested during coverage
> measurement*.
>
> If you are going to do coverage testing, please measure coverage
> over *thousands* of different tests performed on a single filesystem
> type. It needs to be thousands, because syzbot tests are so shallow
> and narrow that actually covering any significant amount of
> filesystem code is quite difficult....
>
> -Dave.
> --
> Dave Chinner
> [email protected]
>
> --
On Mon, Dec 18, 2023 at 11:22:40AM +0100, Aleksandr Nogikh wrote:
> Hi Dave,
>
> > KMSAN has been used for quite a long time with syzbot, however,
> > and it's supposed to find these problems, too. Yet it's only been
> > finding this for 6 months?
>
> As Alex already mentioned, there were big fs fuzzing improvements in
> 2022, and that's exactly when we started seeing "KMSAN: uninit-value
> in __crc32c_le_base" (I've just checked crash history). Before that
> moment the code was likely just not exercised on syzbot.
Can you tell us what these "big fuzzing improvements" were? I mean,
you're trying to fuzz our code and we've been working on rejecting
fuzzing for the last 15 years, so if you're doing something novel it
would help us work out how to defeat it quickly and efficiently.
> On Fri, Dec 15, 2023 at 10:59 PM 'Dave Chinner' via syzkaller-bugs
> <[email protected]> wrote:
> >
> > On Fri, Dec 15, 2023 at 03:41:49PM +0100, Alexander Potapenko wrote:
> > >
> > > You are right, syzbot used to mount XFS way before 2022.
> > > On the other hand, last fall there were some major changes to the way
> > > syz_mount_image() works, so I am attributing the newly detected bugs
> > > to those changes.
> >
> > Oh, so that's when syzbot first turned on XFS V5 format testing?
> >
> > Or was that done in April, when this issue was first reported?
> >
> > > Unfortunately we don't have much insight into reasons behind syzkaller
> > > being able to trigger one bug or another: once a bug is found for the
> > > first time, the likelihood to trigger it again increases, but finding
> > > it initially might be tricky.
> > >
> > > I don't understand much how trivial is the repro at
> > > https://gist.github.com/xrivendell7/c7bb6ddde87a892818ed1ce206a429c4,
> >
> > I just looked at it - all it does is create a new file. It's
> > effectively "mount; touch", which is exactly what I said earlier
> > in the thread should reproduce this issue every single time.
> >
> > > but overall we are not drilling deep enough into XFS.
> > > https://storage.googleapis.com/syzbot-assets/8547e3dd1cca/ci-upstream-kmsan-gce-c7402612.html
> > > (ouch, 230Mb!) shows very limited coverage.
> >
> > *sigh*
> >
> > Did you think to look at the coverage results to check why the
> > numbers for XFS, ext4 and btrfs are all at 1%?
>
> Hmmm, thanks for pointing it out!
>
> Our ci-upstream-kmsan-gce instance is configured in such a way that
> the fuzzer program is quite restricted in what it can do. Apparently,
> it also lacks capabilities to do mounts, so we get almost no coverage
> in fs/*/**. I'll check whether the lack of permissions to mount() was
> intended.
>
> On the other hand, the ci-upstream-kmsan-gce-386 instance does not
> have such restrictions at all and we do see fs/ coverage there:
> https://storage.googleapis.com/syzbot-assets/609dc759f08b/ci-upstream-kmsan-gce-386-0e389834.html
>
> It's still quite low for fs/xfs, which is explainable -- we almost
> immediately hit "KMSAN: uninit-value in __crc32c_le_base". For the
> same reason, it's also somewhat lower than it could be elsewhere as well
> -- we spend too much time restarting VMs after crashes. Once the fix
> patch reaches the fuzzed kernel tree, ci-upstream-kmsan-gce-386 should
> be back to normal.
>
> If we want to see how deep syzbot can go into the fs/ code in general,
> it's better to look at the KASAN instance coverage:
> https://storage.googleapis.com/syzbot-assets/12b7d6ca74e6/ci-upstream-kasan-gce-root-0e389834.html
> (*)
>
> Here e.g. fs/ext4 is already 63% and fs/xfs is 16%.
Actually, that XFS number is an excellent result. I don't think we
can do much better than that.
I know, that's not the response you expected.
Everyone knows that higher coverage numbers are better because it
means we've tested more code, right?
Wrong.
When it comes to fuzzing based attacks, the earlier the bad data is
detected and rejected the better the result. We should see lower
coverage of the code the better the detection and rejection
algorithms get. i.e. The detection code should be extensively
covered, but the rest of the code should have very little coverage
because of how quickly the filesystem reacts to fatal object
corruption.
And the evidence for this in the XFS coverage results?
Take a look at fs/xfs/libxfs/xfs_inode_buf.c. Every single line of
the disk inode format verifiers has been covered (i.e. every
possible corruption case we can detect has been exercised).
That's good.
However, there is zero coverage of core formatting functions like
xfs_inode_to_disk() that indicate no inodes have been successfully
modified and written back to disk.
That's *even better*.
Think about that for a minute.
The coverage data is telling us that we've read lots of corrupt
inodes and rejected them, but the testing has made almost no
successful inode modifications that have been written back to stable
storage. That's because of widespread corruption in the images
resulting in a fatal corruption being detected before modifications
are being made or are being aborted before they are pushed back to
the corrupt image.
The same pattern appears for most other major on-disk subsystems.
They either have not been exercised at all (e.g. extent btree code) or
the only code in the subsystem that has significant coverage is the
object lookup code and the format verifiers the lookup code runs.
This is an excellent result because it proves that XFS is detecting
the majority of corrupt structures in its initial object
search iteration paths. Corruption is not getting past the
first read from disk and so no code other than the search/lookup
code and the verifiers is getting run.
Put simply: we are not letting corrupt structures get into code
paths where they can be mis-interpreted and do damage.
From my perspective as an experienced filesystem developer, this is
exactly the sort of coverage pattern I would like to see from -all
filesystems- when they are fed nothing but extensively corrupted
filesystems the way syzbot does.
The basic truth is that if filesystems are good at corruption
detection and rejection, they should have very low code coverage
numbers from syzbot testing.
-Dave.
--
Dave Chinner
[email protected]
On Tue, Dec 19, 2023 at 1:41 PM Dave Chinner <[email protected]> wrote:
>
> On Mon, Dec 18, 2023 at 11:22:40AM +0100, Aleksandr Nogikh wrote:
> > Hi Dave,
> >
> > > KMSAN has been used for quite a long time with syzbot, however,
> > > and it's supposed to find these problems, too. Yet it's only been
> > > finding this for 6 months?
> >
> > As Alex already mentioned, there were big fs fuzzing improvements in
> > 2022, and that's exactly when we started seeing "KMSAN: uninit-value
> > in __crc32c_le_base" (I've just checked crash history). Before that
> > moment the code was likely just not exercised on syzbot.
>
> Can you tell us what these "big fuzzing improvements" were? I mean,
> you're trying to fuzz our code and we've been working on rejecting
> fuzzing for the last 15 years, so if you're doing something novel it
> would help us work out how to defeat it quickly and efficiently.
>
> > On Fri, Dec 15, 2023 at 10:59 PM 'Dave Chinner' via syzkaller-bugs
> > <[email protected]> wrote:
> > >
> > > On Fri, Dec 15, 2023 at 03:41:49PM +0100, Alexander Potapenko wrote:
> > > >
> > > > You are right, syzbot used to mount XFS way before 2022.
> > > > On the other hand, last fall there were some major changes to the way
> > > > syz_mount_image() works, so I am attributing the newly detected bugs
> > > > to those changes.
> > >
> > > Oh, so that's when syzbot first turned on XFS V5 format testing?
> > >
> > > Or was that done in April, when this issue was first reported?
> > >
> > > > Unfortunately we don't have much insight into reasons behind syzkaller
> > > > being able to trigger one bug or another: once a bug is found for the
> > > > first time, the likelihood to trigger it again increases, but finding
> > > > it initially might be tricky.
> > > >
> > > > I don't understand much how trivial is the repro at
> > > > https://gist.github.com/xrivendell7/c7bb6ddde87a892818ed1ce206a429c4,
> > >
> > > I just looked at it - all it does is create a new file. It's
> > > effectively "mount; touch", which is exactly what I said earlier
> > > in the thread should reproduce this issue every single time.
> > >
> > > > but overall we are not drilling deep enough into XFS.
> > > > https://storage.googleapis.com/syzbot-assets/8547e3dd1cca/ci-upstream-kmsan-gce-c7402612.html
> > > > (ouch, 230Mb!) shows very limited coverage.
> > >
> > > *sigh*
> > >
> > > Did you think to look at the coverage results to check why the
> > > numbers for XFS, ext4 and btrfs are all at 1%?
> >
> > Hmmm, thanks for pointing it out!
> >
> > Our ci-upstream-kmsan-gce instance is configured in such a way that
> > the fuzzer program is quite restricted in what it can do. Apparently,
> > it also lacks capabilities to do mounts, so we get almost no coverage
> > in fs/*/**. I'll check whether the lack of permissions to mount() was
> > intended.
> >
> > On the other hand, the ci-upstream-kmsan-gce-386 instance does not
> > have such restrictions at all and we do see fs/ coverage there:
> > https://storage.googleapis.com/syzbot-assets/609dc759f08b/ci-upstream-kmsan-gce-386-0e389834.html
> >
> > It's still quite low for fs/xfs, which is explainable -- we almost
> > immediately hit "KMSAN: uninit-value in __crc32c_le_base". For the
> > same reason, it's also somewhat lower than it could be elsewhere as well
> > -- we spend too much time restarting VMs after crashes. Once the fix
> > patch reaches the fuzzed kernel tree, ci-upstream-kmsan-gce-386 should
> > be back to normal.
> >
> > If we want to see how deep syzbot can go into the fs/ code in general,
> > it's better to look at the KASAN instance coverage:
> > https://storage.googleapis.com/syzbot-assets/12b7d6ca74e6/ci-upstream-kasan-gce-root-0e389834.html
> > (*)
> >
> > Here e.g. fs/ext4 is already 63% and fs/xfs is 16%.
>
> Actually, that XFS number is an excellent result. I don't think we
> can do much better than that.
>
> I know, that's not the response you expected.
>
> Everyone knows that higher coverage numbers are better because it
> means we've tested more code, right?
>
> Wrong.
>
> When it comes to fuzzing based attacks, the earlier the bad data is
> detected and rejected the better the result. We should see lower
> coverage of the code the better the detection and rejection
> algorithms get. i.e. The detection code should be extensively
> covered, but the rest of the code should have very little coverage
> because of how quickly the filesystem reacts to fatal object
> corruption.
>
> And the evidence for this in the XFS coverage results?
>
> Take a look at fs/xfs/libxfs/xfs_inode_buf.c. Every single line of
> the disk inode format verifiers has been covered (i.e. every
> possible corruption case we can detect has been exercised).
>
> That's good.
>
> However, there is zero coverage of core formatting functions like
> xfs_inode_to_disk() that indicate no inodes have been successfully
> modified and written back to disk.
>
> That's *even better*.
>
> Think about that for a minute.
>
> The coverage data is telling us that we've read lots of corrupt
> inodes and rejected them, but the testing has made almost no
> successful inode modifications that have been written back to stable
> storage. That's because of widespread corruption in the images
> resulting in a fatal corruption being detected before modifications
> are being made or are being aborted before they are pushed back to
> the corrupt image.
>
> The same pattern appears for most other major on-disk subsystems.
> They either have not been exercised at all (e.g. extent btree code) or
> the only code in the subsystem that has significant coverage is the
> object lookup code and the format verifiers the lookup code runs.
>
> This is an excellent result because it proves that XFS is detecting
> the majority of corrupt structures in its initial object
> search iteration paths. Corruption is not getting past the
> first read from disk and so no code other than the search/lookup
> code and the verifiers is getting run.
>
> Put simply: we are not letting corrupt structures get into code
> paths where they can be mis-interpreted and do damage.
>
> From my perspective as an experienced filesystem developer, this is
> exactly the sort of coverage pattern I would like to see from -all
> filesystems- when they are fed nothing but extensively corrupted
> filesystems the way syzbot does.
>
> The basic truth is that if filesystems are good at corruption
> detection and rejection, they should have very low code coverage
> numbers from syzbot testing.
>
> -Dave.
It is quite insightful that if we throw random garbage at a
well-written API then low coverage indicates that the checks are doing
their job.
But this is not how syzkaller works.
Our goal is to produce as many well-formed inputs as possible to
exercise most of the code under test.
Then, a small step sideways from every well-formed input might trigger
a bug here or there.
It might as well be rejected early by the elaborate input checks (in
which case we won't be finding any new bugs), but anyway we should be
covering bigger parts of the code by just running valid inputs.
For certain subsystems with very limited APIs it is fairly easy to
generate all the possible valid inputs by simply combining syscalls.
In most cases we are still limited by the combinatorial explosion of
the search space though.
But if there are implicit dependencies that the fuzzer cannot deduce
from the descriptions, it will blindly flip bits in known inputs in
the hope of producing a new valid input - mostly in vain.
So seeing little coverage for a subsystem usually means that for some
reason we are just barely scratching the API surface.
Compare this with fuzzing a C compiler.
Doing something like `head /dev/random > 1.c && gcc -c 1.c` in a loop
may eventually trigger interesting bugs in the compiler backend, but
most of the time the programs will be rejected by the lexer which is
smart enough to not accept garbage.
Is the lexer written correctly? Yes, as long as it does not crash on
these random inputs.
Does low coverage indicate that the whole compiler is written
correctly? Not necessarily. Tools like csmith will easily get past the
lexer checks by generating structurally valid programs that might be
broken from the semantic point of view.
For some parts of the kernel syzkaller acts like csmith, because its
knowledge of the data layout is just enough to generate valid inputs.
For file systems, however, we might be lagging behind.
Last year's changes in the fuzzer that we mentioned earlier improved
the initial seeding of file system images and added some basic
mutations, but there is still a lot to do.
Hope this helps.
On Tue, Dec 19, 2023 at 02:56:04PM +0100, Alexander Potapenko wrote:
> On Tue, Dec 19, 2023 at 1:41 PM Dave Chinner <[email protected]> wrote:
> >
> > On Mon, Dec 18, 2023 at 11:22:40AM +0100, Aleksandr Nogikh wrote:
> > > Hi Dave,
> > >
> > > > KMSAN has been used for quite a long time with syzbot, however,
> > > > and it's supposed to find these problems, too. Yet it's only been
> > > > finding this for 6 months?
> > >
> > > As Alex already mentioned, there were big fs fuzzing improvements in
> > > 2022, and that's exactly when we started seeing "KMSAN: uninit-value
> > > in __crc32c_le_base" (I've just checked crash history). Before that
> > > moment the code was likely just not exercised on syzbot.
> >
> > Can you tell us what these "big fuzzing improvements" were? I mean,
> > you're trying to fuzz our code and we've been working on rejecting
> > fuzzing for the last 15 years, so if you're doing something novel it
> > would help us work out how to defeat it quickly and efficiently.
> >
> > > On Fri, Dec 15, 2023 at 10:59 PM 'Dave Chinner' via syzkaller-bugs
> > > <[email protected]> wrote:
> > > >
> > > > On Fri, Dec 15, 2023 at 03:41:49PM +0100, Alexander Potapenko wrote:
> > > > >
> > > > > You are right, syzbot used to mount XFS way before 2022.
> > > > > On the other hand, last fall there were some major changes to the way
> > > > > syz_mount_image() works, so I am attributing the newly detected bugs
> > > > > to those changes.
> > > >
> > > > Oh, so that's when syzbot first turned on XFS V5 format testing?
> > > >
> > > > Or was that done in April, when this issue was first reported?
> > > >
> > > > > Unfortunately we don't have much insight into reasons behind syzkaller
> > > > > being able to trigger one bug or another: once a bug is found for the
> > > > > first time, the likelihood to trigger it again increases, but finding
> > > > > it initially might be tricky.
> > > > >
> > > > > I don't understand much how trivial is the repro at
> > > > > https://gist.github.com/xrivendell7/c7bb6ddde87a892818ed1ce206a429c4,
> > > >
> > > > I just looked at it - all it does is create a new file. It's
> > > > effectively "mount; touch", which is exactly what I said earlier
> > > > in the thread should reproduce this issue every single time.
> > > >
> > > > > but overall we are not drilling deep enough into XFS.
> > > > > https://storage.googleapis.com/syzbot-assets/8547e3dd1cca/ci-upstream-kmsan-gce-c7402612.html
> > > > > (ouch, 230Mb!) shows very limited coverage.
> > > >
> > > > *sigh*
> > > >
> > > > Did you think to look at the coverage results to check why the
> > > > numbers for XFS, ext4 and btrfs are all at 1%?
> > >
> > > Hmmm, thanks for pointing it out!
> > >
> > > Our ci-upstream-kmsan-gce instance is configured in such a way that
> > > the fuzzer program is quite restricted in what it can do. Apparently,
> > > it also lacks capabilities to do mounts, so we get almost no coverage
> > > in fs/*/**. I'll check whether the lack of permissions to mount() was
> > > intended.
> > >
> > > On the other hand, the ci-upstream-kmsan-gce-386 instance does not
> > > have such restrictions at all and we do see fs/ coverage there:
> > > https://storage.googleapis.com/syzbot-assets/609dc759f08b/ci-upstream-kmsan-gce-386-0e389834.html
> > >
> > > It's still quite low for fs/xfs, which is explainable -- we almost
> > > immediately hit "KMSAN: uninit-value in __crc32c_le_base". For the
> > > same reason, it's also somewhat lower than it could be elsewhere as well
> > > -- we spend too much time restarting VMs after crashes. Once the fix
> > > patch reaches the fuzzed kernel tree, ci-upstream-kmsan-gce-386 should
> > > be back to normal.
> > >
> > > If we want to see how deep syzbot can go into the fs/ code in general,
> > > it's better to look at the KASAN instance coverage:
> > > https://storage.googleapis.com/syzbot-assets/12b7d6ca74e6/ci-upstream-kasan-gce-root-0e389834.html
> > > (*)
> > >
> > > Here e.g. fs/ext4 is already 63% and fs/xfs is 16%.
> >
> > Actually, that XFS number is an excellent result. I don't think we
> > can do much better than that.
> >
> > I know, that's not the response you expected.
> >
> > Everyone knows that higher coverage numbers are better because it
> > means we've tested more code, right?
> >
> > Wrong.
> >
> > When it comes to fuzzing based attacks, the earlier the bad data is
> > detected and rejected the better the result. We should see lower
> > coverage of the code the better the detection and rejection
> > algorithms get. i.e. The detection code should be extensively
> > covered, but the rest of the code should have very little coverage
> > because of how quickly the filesystem reacts to fatal object
> > corruption.
> >
> > And the evidence for this in the XFS coverage results?
> >
> > Take a look at fs/xfs/libxfs/xfs_inode_buf.c. Every single line of
> > the disk inode format verifiers has been covered (i.e. every
> > possible corruption case we can detect has been exercised).
> >
> > That's good.
> >
> > However, there is zero coverage of core formatting functions like
> > xfs_inode_to_disk() that indicate no inodes have been successfully
> > modified and written back to disk.
> >
> > That's *even better*.
> >
> > Think about that for a minute.
> >
> > The coverage data is telling us that we've read lots of corrupt
> > inodes and rejected them, but the testing has made almost no
> > successful inode modifications that have been written back to stable
> > storage. That's because of widespread corruption in the images
> > resulting in a fatal corruption being detected before modifications
> > are being made or are being aborted before they are pushed back to
> > the corrupt image.
> >
> > The same pattern appears for most other major on-disk subsystems.
> > They either have not been exercised at all (e.g. extent btree code) or
> > the only code in the subsystem that has significant coverage is the
> > object lookup code and the format verifiers the lookup code runs.
> >
> > This is an excellent result because it proves that XFS is detecting
> > the majority of corrupt structures in its initial object
> > search iteration paths. Corruption is not getting past the
> > first read from disk and so no code other than the search/lookup
> > code and the verifiers is getting run.
> >
> > Put simply: we are not letting corrupt structures get into code
> > paths where they can be mis-interpreted and do damage.
> >
> > From my perspective as an experienced filesystem developer, this is
> > exactly the sort of coverage pattern I would like to see from -all
> > filesystems- when they are fed nothing but extensively corrupted
> > filesystems the way syzbot does.
> >
> > The basic truth is that if filesystems are good at corruption
> > detection and rejection, they should have very low code coverage
> > numbers from syzbot testing.
> >
> > -Dave.
>
> It is quite insightful that if we throw random garbage at a
> well-written API then low coverage indicates that the checks are doing
> their job.
>
> But this is not how syzkaller works.
In general, that statement is true. But we're talking about the
filesystem fuzzing that syzkaller does, and that's a completely
different beast.
> Our goal is to produce as many well-formed inputs as possible to
> exercise most of the code under test.
> Then, a small step sideways from every well-formed input might trigger
> a bug here or there.
> It might as well be rejected early by the elaborate input checks (in
> which case we won't be finding any new bugs), but anyway we should be
> covering bigger parts of the code by just running valid inputs.
Yes, you're talking about the syscall exercising interface. That
works just fine for finding issues when the input comes from only
one direction - the syscall interface.
The issue is that filesystem testing has a second set of inputs that
are being perturbed: the filesystem image being operated on by the
syscalls.
It's a simple fact that filesystem operations cannot be fully
exercised if the filesystem image is corrupt. Corruption will be
detected and the syscall operation will be aborted without having
performed the desired operation.
So while the goal might be to gain extensive subsystem code coverage
from the syscall interface, this is being defeated by the additional
perturbation of low level data objects (the corrupt filesystem)
causing premature termination of the syscall explorations.
> For certain subsystems with very limited APIs it is fairly easy to
> generate all the possible valid inputs by simply combining syscalls.
> In most cases we are still limited by the combinatorial explosion of
> the search space though.
> But if there are implicit dependencies that the fuzzer cannot deduce
> from the descriptions, it will blindly flip bits in known inputs in
> the hope to produce a new valid input - mostly in vain.
> So seeing little coverage for a subsystem usually means that for some
> reason we are just barely scratching the API surface.
Yes, random bit perturbation fuzzing is a really basic "million
monkeys bashing on typewriters" discovery technique. But when I read
the coverage results, there are very few syscall entry points into
the filesystem, and they all end up in the metadata read verifiers
that point to extensive corruption detection occurring.
> Compare this with fuzzing a C compiler.
> Doing something like `head /dev/random > 1.c && gcc -c 1.c` in a loop
> may eventually trigger interesting bugs in the compiler backend, but
> most of the time the programs will be rejected by the lexer which is
> smart enough to not accept garbage.
> Is the lexer written correctly? Yes, as long as it does not crash on
> these random inputs.
> Does low coverage indicate that the whole compiler is written
> correctly? Not necessarily. Tools like csmith will easily get past the
> lexer checks by generating structurally valid programs that might be
> broken from the semantic point of view.
This is exactly equivalent to syzkaller feeding corrupt filesystem
images to the filesystems.
IOWs, syzkaller needs to stop using corrupt filesystem images if it
wants to actually extensively cover filesystem internal
functionality for filesystems with high rates of corruption
rejection.
> For some parts of the kernel syzkaller acts like csmith, because its
> knowledge of the data layout is just enough to generate valid inputs.
> For file systems, however, we might be lagging behind.
I suspect the right word is "obsolete", not "lagging behind". That's
what I'm trying to find out....
> Last year's changes in the fuzzer that we mentioned earlier improved
> the initial seeding of file system images and added some basic
> mutations, but there is still a lot to do.
This really doesn't tell us anything we didn't already know.
What syzkaller -appears- to be doing with filesystems doesn't look
much different to state of the art from 2005 where syscall
operations were exercised against simulated disk failures and media
corruptions instead of corrupted images. That quickly moved on to
using image-based corruptions rather than simulated runtime failure
because it was a lot simpler to implement and had much faster test
cycles.
e.g. Go look for "IRON File Systems" related papers from the
University of Wisconsin where these techniques were first
demonstrated. These were some of the foundational papers
on filesystem error detection and handling that influenced the
current XFS v5 format design and verification architecture.
IOWs, I'm trying to understand whether syzbot is actually doing
something novel I've never seen before with its filesystem images
that might cause such coverage patterns. AFAICT syzkaller is not
doing anything novel, but I can't be certain if the responses I get
to requests for specific details are generalities devoid of any
detail....
Cheers,
Dave.
--
Dave Chinner
[email protected]