2023-09-19 03:56:39

by Shreeya Patel

Subject: [syzbot] INFO: task hung in ext4_fallocate

Hi Everyone,

syzbot has reported a task hung issue in ext4_fallocate; the crash report
can be seen at the bottom of this email.

When I tried to reproduce this issue on the mainline Linux kernel using the
reproducer provided by syzbot, I see an endless loop of the following errors :-

[   89.689751][ T3922] loop1: detected capacity change from 0 to 512
[   89.690514][ T3916] EXT4-fs error (device loop4):
ext4_map_blocks:577: inode #3: block 9: comm poc: lblock 0 mapped to
illegal pblock 9 (length 1)
[   89.694709][ T3890] EXT4-fs error (device loop0):
ext4_map_blocks:577: inode #3: block 9: comm poc: lblock 0 mapped to
illegal pblock 9 (length 1)
[   89.697306][ T3896] EXT4-fs (loop5): 1 orphan inode deleted
[   89.700797][ T3890] EXT4-fs error (device loop0) in
ext4_reserve_inode_write:5752: Out of memory
[   89.703072][ T3916] EXT4-fs error (device loop4) in
ext4_reserve_inode_write:5752: Out of memory
[   89.704703][ T3913] loop3: detected capacity change from 0 to 512
[   89.706535][ T3896] ext4 filesystem being mounted at
/root/syzkaller.eGkqPu/22/file1 supports timestamps until 2038-01-19
(0x7fffffff)
[   89.712664][ T3890] EXT4-fs error (device loop0):
ext4_dirty_inode:5956: inode #16: comm poc: mark_inode_dirty error
[   89.717990][ T3916] EXT4-fs error (device loop4):
ext4_dirty_inode:5956: inode #16: comm poc: mark_inode_dirty error
[   89.720967][ T3890] EXT4-fs error (device loop0) in
ext4_reserve_inode_write:5752: Out of memory
[   89.725410][ T3916] EXT4-fs error (device loop4):
ext4_map_blocks:577: inode #3: block 9: comm poc: lblock 0 mapped to
illegal pblock 9 (length 1)
[   89.732045][ T3896] EXT4-fs error (device loop5):
ext4_map_blocks:577: inode #3: block 9: comm poc: lblock 0 mapped to
illegal pblock 9 (length 1)
[   89.752554][ T3890] EXT4-fs error (device loop0):
ext4_alloc_file_blocks:4468: inode #16: comm poc: mark_inode_dirty error

The same is seen with the LTS and ChromeOS 5.10 and 5.15 kernels. If I add
debugging statements to the ext4_fallocate function, they are all
printed endlessly too.
I am not sure whether an infinite loop also counts as a task hang, but it
would be great if someone with filesystem expertise could take a look
at this.
I am attaching the reproducer and config file in case anyone wants to
test it locally.

Please reach out if you need any additional details to reproduce this.

Thanks,
Shreeya Patel

#regzbot introduced: v6.6-rc1..

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
syzbot report:

(A) The crash report from June 15, 2023:

INFO: task syz-executor319:5195 blocked for more than 143 seconds.
Not tainted 5.10.184-syzkaller-22756-g4d841d608c1b #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor319 state:D stack: 0 pid: 5195 ppid: 2015 flags:0x00000004
Call Trace:
context_switch kernel/sched/core.c:4197 [inline]
__schedule+0x210d/0x21ae kernel/sched/core.c:5408
schedule+0x168/0x1cc kernel/sched/core.c:5486
rwsem_down_write_slowpath+0x4e3/0x867 kernel/locking/rwsem.c:1235
__down_write+0x99/0xd0 kernel/locking/rwsem.c:1401
inode_lock include/linux/fs.h:776 [inline]
ext4_fallocate+0x181/0x255e fs/ext4/extents.c:4697
vfs_fallocate+0x5a6/0x687 fs/open.c:312
ksys_fallocate+0x5a/0x8c fs/open.c:335
__do_sys_fallocate fs/open.c:343 [inline]
__se_sys_fallocate fs/open.c:341 [inline]
__x64_sys_fallocate+0xa7/0xb0 fs/open.c:341
do_syscall_64+0x33/0x40 arch/x86/entry/common.c:51
entry_SYSCALL_64_after_hwframe+0x61/0xc6
RIP: 0033:0x7fb2b203f0c9
RSP: 002b:00007fb2a9c092f8 EFLAGS: 00000246 ORIG_RAX: 000000000000011d
RAX: ffffffffffffffda RBX: 00007fb2b20c67b0 RCX: 00007fb2b203f0c9
RDX: 0000000000008947 RSI: 0000000000000000 RDI: 0000000000000004
RBP: 00007fb2b20927bc R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000007 R11: 0000000000000246 R12: 00007fb2b20917b8
R13: 0031656c69662f2e R14: 6f6f6c2f7665642f R15: 00007fb2b20c67b8

Showing all locks held in the system:
1 lock held by khungtaskd/839:
#0: ffffffff86cc1720 (rcu_read_lock){....}-{1:2}, at: rcu_lock_release
include/linux/rcupdate.h:271 [inline]
#0: ffffffff86cc1720 (rcu_read_lock){....}-{1:2}, at:
rcu_lock_acquire.constprop.0+0x0/0x22 include/linux/rcupdate.h:720
2 locks held by getty/1961:
#0: ffff88810ec85098 (&tty->ldisc_sem){++++}-{0:0}, at:
tty_ldisc_ref_wait+0x22/0x6a drivers/tty/tty_ldisc.c:267
#1: ffffc900000532e8 (&ldata->atomic_read_lock){+.+.}-{3:3}, at:
n_tty_read+0x599/0xed7 drivers/tty/n_tty.c:2209
1 lock held by syz-executor319/2018:
1 lock held by syz-executor319/2019:
1 lock held by udevd/2036:
2 locks held by kworker/u4:4/3020:
#0: ffff888100071138 ((wq_completion)events_unbound){+.+.}-{0:0}, at:
process_one_work+0x4fe/0xad5 kernel/workqueue.c:2250
#1: ffff8881f6b27908 (&per_cpu_ptr(group->pcpu, cpu)->seq){-.-.}-{0:0},
at: write_seqcount_t_begin include/linux/seqlock.h:541 [inline]
#1: ffff8881f6b27908 (&per_cpu_ptr(group->pcpu, cpu)->seq){-.-.}-{0:0},
at: psi_group_change+0xa6/0x676 kernel/sched/psi.c:711
3 locks held by kworker/1:5/4866:
#0: ffff888100052938 ((wq_completion)events){+.+.}-{0:0}, at:
process_one_work+0x4fe/0xad5 kernel/workqueue.c:2250
#1: ffffc90006ec7df0
((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at:
process_one_work+0x544/0xad5 kernel/workqueue.c:2254
#2: ffffffff86cc9ae8 (rcu_state.exp_mutex){+.+.}-{3:3}, at:
exp_funnel_lock kernel/rcu/tree_exp.h:290 [inline]
#2: ffffffff86cc9ae8 (rcu_state.exp_mutex){+.+.}-{3:3}, at:
synchronize_rcu_expedited+0x238/0x4fd kernel/rcu/tree_exp.h:841
7 locks held by syz-executor319/5177:
2 locks held by syz-executor319/5195:
#0: ffff888119b6c450 (sb_writers#5){.+.+}-{0:0}, at: __sb_start_write
include/linux/fs.h:1651 [inline]
#0: ffff888119b6c450 (sb_writers#5){.+.+}-{0:0}, at: sb_start_write
include/linux/fs.h:1721 [inline]
#0: ffff888119b6c450 (sb_writers#5){.+.+}-{0:0}, at: file_start_write
include/linux/fs.h:2855 [inline]
#0: ffff888119b6c450 (sb_writers#5){.+.+}-{0:0}, at:
vfs_fallocate+0x547/0x687 fs/open.c:311
#1:
ffff88811f7cc058
(
&sb->s_type->i_mutex_key#8){++++}-{3:3}, at: inode_lock
include/linux/fs.h:776 [inline]
&sb->s_type->i_mutex_key#8){++++}-{3:3}, at: ext4_fallocate+0x181/0x255e
fs/ext4/extents.c:4697
4 locks held by syz-executor319/8626:
#0:
ffff88810e02e450
(
sb_writers
#5
){.+.+}-{0:0}
, at: __sb_start_write include/linux/fs.h:1651 [inline]
, at: sb_start_write include/linux/fs.h:1721 [inline]
, at: mnt_want_write+0x3e/0x97 fs/namespace.c:355
#1: ffff88811f8cf230 (&sb->s_type->i_mutex_key#8
){++++}-{3:3}, at: inode_lock include/linux/fs.h:776 [inline]
){++++}-{3:3}, at: do_truncate+0x13d/0x1c9 fs/open.c:65
#2: ffff88811f8cf0b8 (
&ei->i_mmap_sem){++++}-{3:3}, at: ext4_setattr+0xcc3/0x1752
fs/ext4/inode.c:5561
#3:
ffff88811f8cf020 (&ei->i_data_sem){++++}-{3:3}, at:
ext4_truncate+0x761/0xb6a fs/ext4/inode.c:4353
2 locks held by syz-executor319/8633:
#0: ffff88810e02e450 (sb_writers#5){.+.+}-{0:0}, at: __sb_start_write
include/linux/fs.h:1651 [inline]
#0: ffff88810e02e450 (sb_writers#5){.+.+}-{0:0}, at: sb_start_write
include/linux/fs.h:1721 [inline]
#0: ffff88810e02e450 (sb_writers#5){.+.+}-{0:0}, at: file_start_write
include/linux/fs.h:2855 [inline]
#0: ffff88810e02e450 (sb_writers#5){.+.+}-{0:0}, at:
vfs_fallocate+0x547/0x687 fs/open.c:311
#1: ffff88811f8cf230 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at:
inode_lock include/linux/fs.h:776 [inline]
#1: ffff88811f8cf230 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at:
ext4_fallocate+0x181/0x255e fs/ext4/extents.c:4697
4 locks held by syz-executor319/8677:
#0: ffff88810ec98450 (sb_writers#5){.+.+}-{0:0}, at: __sb_start_write
include/linux/fs.h:1651 [inline]
#0: ffff88810ec98450 (sb_writers#5){.+.+}-{0:0}, at: sb_start_write
include/linux/fs.h:1721 [inline]
#0: ffff88810ec98450 (sb_writers#5){.+.+}-{0:0}, at:
mnt_want_write+0x3e/0x97 fs/namespace.c:355
#1: ffff88811f9f8e80 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at:
inode_lock include/linux/fs.h:776 [inline]
#1: ffff88811f9f8e80 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at:
do_truncate+0x13d/0x1c9 fs/open.c:65
#2: ffff88811f9f8d08 (&ei->i_mmap_sem){++++}-{3:3}, at:
ext4_setattr+0xcc3/0x1752 fs/ext4/inode.c:5561
#3: ffff88811f9f8c70 (&ei->i_data_sem){++++}-{3:3}, at:
ext4_truncate+0x761/0xb6a fs/ext4/inode.c:4353
2 locks held by syz-executor319/8687:
#0: ffff88810ec98450 (sb_writers#5){.+.+}-{0:0}, at: __sb_start_write
include/linux/fs.h:1651 [inline]
#0: ffff88810ec98450 (sb_writers#5){.+.+}-{0:0}, at: sb_start_write
include/linux/fs.h:1721 [inline]
#0: ffff88810ec98450 (sb_writers#5){.+.+}-{0:0}, at: file_start_write
include/linux/fs.h:2855 [inline]
#0: ffff88810ec98450 (sb_writers#5){.+.+}-{0:0}, at:
vfs_fallocate+0x547/0x687 fs/open.c:311
#1: ffff88811f9f8e80 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at:
inode_lock include/linux/fs.h:776 [inline]
#1: ffff88811f9f8e80 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at:
ext4_fallocate+0x181/0x255e fs/ext4/extents.c:4697
2 locks held by udevd/8902:
#0: ffff888106438080 (&bdev->bd_mutex){+.+.}-{3:3}, at:
__blkdev_put+0x101/0x500 fs/block_dev.c:1793
#1: ffffffff8740b8a8 (loop_ctl_mutex){+.+.}-{3:3}, at:
__loop_clr_fd+0x8d/0x913 drivers/block/loop.c:1207
4 locks held by syz-executor319/11374:
#0: ffff888105f16450 (sb_writers#5){.+.+}-{0:0}, at: __sb_start_write
include/linux/fs.h:1651 [inline]
#0: ffff888105f16450 (sb_writers#5){.+.+}-{0:0}, at: sb_start_write
include/linux/fs.h:1721 [inline]
#0: ffff888105f16450 (sb_writers#5){.+.+}-{0:0}, at:
mnt_want_write+0x3e/0x97 fs/namespace.c:355
#1: ffff88811facde40 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at:
inode_lock include/linux/fs.h:776 [inline]
#1: ffff88811facde40 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at:
do_truncate+0x13d/0x1c9 fs/open.c:65
#2: ffff88811facdcc8 (&ei->i_mmap_sem){++++}-{3:3}, at:
ext4_setattr+0xcc3/0x1752 fs/ext4/inode.c:5561
#3: ffff88811facdc30 (&ei->i_data_sem){++++}-{3:3}, at:
ext4_truncate+0x761/0xb6a fs/ext4/inode.c:4353
2 locks held by syz-executor319/11384:
#0: ffff888105f16450 (sb_writers#5){.+.+}-{0:0}, at: __sb_start_write
include/linux/fs.h:1651 [inline]
#0: ffff888105f16450 (sb_writers#5){.+.+}-{0:0}, at: sb_start_write
include/linux/fs.h:1721 [inline]
#0: ffff888105f16450 (sb_writers#5){.+.+}-{0:0}, at: file_start_write
include/linux/fs.h:2855 [inline]
#0: ffff888105f16450 (sb_writers#5){.+.+}-{0:0}, at:
vfs_fallocate+0x547/0x687 fs/open.c:311
#1: ffff88811facde40 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at:
inode_lock include/linux/fs.h:776 [inline]
#1: ffff88811facde40 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at:
ext4_fallocate+0x181/0x255e fs/ext4/extents.c:4697
2 locks held by syz-executor319/12502:
#0: ffff88810643ef00 (&bdev->bd_mutex){+.+.}-{3:3}, at:
__blkdev_get+0x21f/0xd64 fs/block_dev.c:1503
#1: ffffffff8740b8a8 (loop_ctl_mutex){+.+.}-{3:3}, at: lo_open+0x19/0xc6
drivers/block/loop.c:1890
1 lock held by syz-executor319/12503:
#0: ffff888106438080 (&bdev->bd_mutex){+.+.}-{3:3}, at:
__blkdev_get+0x21f/0xd64 fs/block_dev.c:1503

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 839 Comm: khungtaskd Not tainted
5.10.184-syzkaller-22756-g4d841d608c1b #0
809c46235bb373ae9b9e9418415172972c181155
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
Google 05/27/2023
Call Trace:
__dump_stack lib/dump_stack.c:86 [inline]
dump_stack+0x107/0x163 lib/dump_stack.c:127
nmi_cpu_backtrace+0x11f/0x13d lib/nmi_backtrace.c:105
nmi_trigger_cpumask_backtrace+0xd1/0x1b4 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:159 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:209 [inline]
watchdog+0xa64/0xaef kernel/hung_task.c:296
kthread+0x346/0x35a kernel/kthread.c:313
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:299
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 11374 Comm: syz-executor319 Not tainted
5.10.184-syzkaller-22756-g4d841d608c1b #0
809c46235bb373ae9b9e9418415172972c181155
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
Google 05/27/2023
RIP: 0010:test_bit+0x28/0x2e
include/asm-generic/bitops/instrumented-non-atomic.h:136
Code: 5d c3 48 89 f8 b9 40 00 00 00 55 48 89 f5 48 99 53 48 89 fb 48 f7
f9 48 8d 3c c6 be 08 00 00 00 e8 b1 c8 3a 00 48 0f a3 5d 00 <5b> 0f 92
c0 5d c3 41 57 41 56 41 89 d6 41 55 49 89 f5 41 54 55 8d
RSP: 0018:ffffc900057a6990 EFLAGS: 00000047
RAX: fffffbfff0fa9701 RBX: 0000000000000001 RCX: fffffbfff0fa9800
RDX: fffffbfff0fa9800 RSI: 0000000000000008 RDI: ffffffff87d4bff8
RBP: ffffffff87d4bff8 R08: fffffbfff0fa9800 R09: ffffffff87d4bfff
R10: 0000000000000000 R11: ffffffff813918e5 R12: 0000000000000000
R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
FS: 00007fb2b1fea700(0000) GS:ffff8881f6b00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fb2a9c09000 CR3: 00000001198fd000 CR4: 00000000003506e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<NMI>
</NMI>
cpumask_test_cpu.constprop.0+0x15/0x19 include/linux/cpumask.h:367
trace_lock_acquire include/trace/events/lock.h:13 [inline]
lock_acquire+0x11e/0x3b7 kernel/locking/lockdep.c:5537
write_seqcount_t_begin_nested.constprop.0+0x42/0x45
include/linux/seqlock.h:515
write_seqcount_t_begin include/linux/seqlock.h:541 [inline]
psi_group_change+0xa6/0x676 kernel/sched/psi.c:711
psi_task_change+0x143/0x17a kernel/sched/psi.c:812
psi_enqueue kernel/sched/stats.h:82 [inline]
enqueue_task+0x1c4/0x242 kernel/sched/core.c:1942
activate_task+0x50/0x81 kernel/sched/core.c:1974
ttwu_do_activate+0x161/0x17c kernel/sched/core.c:2892
ttwu_queue kernel/sched/core.c:3089 [inline]
try_to_wake_up+0x5a7/0x9e6 kernel/sched/core.c:3367
wake_up_worker+0x70/0x77 kernel/workqueue.c:838
insert_work+0xc7/0xd4 kernel/workqueue.c:1353
__queue_work+0x99e/0x9e1 kernel/workqueue.c:1504
__queue_delayed_work+0xf7/0x1ef kernel/workqueue.c:1651
mod_delayed_work_on+0xb9/0x138 kernel/workqueue.c:1725
kblockd_mod_delayed_work_on+0x2a/0x33 block/blk-core.c:1660
blk_mq_add_to_requeue_list+0x14c/0x15f block/blk-mq.c:844
blk_flush_queue_rq block/blk-flush.c:137 [inline]
blk_flush_complete_seq+0x184/0x851 block/blk-flush.c:192
blk_insert_flush+0x3c7/0x3dc block/blk-flush.c:443
blk_mq_submit_bio+0xa78/0xfb7 block/blk-mq.c:2207
__submit_bio_noacct_mq block/blk-core.c:1017 [inline]
submit_bio_noacct+0x2a3/0x6fe block/blk-core.c:1050
submit_bio+0x457/0x4a8 block/blk-core.c:1120
submit_bh_wbc+0x604/0x615 fs/buffer.c:3054
__sync_dirty_buffer+0x110/0x189 fs/buffer.c:3155
ext4_commit_super+0x8f1/0x98d fs/ext4/super.c:5569
save_error_info+0x93/0x9e fs/ext4/super.c:628
__ext4_std_error+0x1f9/0x242 fs/ext4/super.c:841
ext4_reserve_inode_write+0x188/0x19b fs/ext4/inode.c:5889
__ext4_mark_inode_dirty+0x251/0x5b6 fs/ext4/inode.c:6055
__ext4_ext_dirty+0x160/0x177 fs/ext4/extents.c:182
ext4_ext_remove_space+0x2984/0x2a32 fs/ext4/extents.c:3056
ext4_ext_truncate+0x1ab/0x1f7 fs/ext4/extents.c:4413
ext4_truncate+0x7a2/0xb6a fs/ext4/inode.c:4358
ext4_setattr+0x15c1/0x1752 fs/ext4/inode.c:5638
notify_change+0xb57/0xdf2 fs/attr.c:394
do_truncate+0x14c/0x1c9 fs/open.c:67
handle_truncate fs/namei.c:3066 [inline]
do_open fs/namei.c:3407 [inline]
path_openat+0x1933/0x197a fs/namei.c:3521
do_filp_open+0xc2/0x156 fs/namei.c:3548
do_sys_openat2+0x113/0x36a fs/open.c:1189
do_sys_open+0x8d/0xc1 fs/open.c:1206
do_syscall_64+0x33/0x40 arch/x86/entry/common.c:51
entry_SYSCALL_64_after_hwframe+0x61/0xc6
RIP: 0033:0x7fb2b203f0c9
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 b1 15 00 00 90 48 89 f8 48 89
f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01
f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fb2b1fea2f8 EFLAGS: 00000246 ORIG_RAX: 0000000000000002
RAX: ffffffffffffffda RBX: 00007fb2b20c67a0 RCX: 00007fb2b203f0c9
RDX: 0000000000000000 RSI: 000000000014d27e RDI: 0000000020000180
RBP: 00007fb2b20927bc R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00007fb2b20917b8
R13: 0031656c69662f2e R14: 6f6f6c2f7665642f R15: 00007fb2b20c67a8
INFO: NMI handler (nmi_cpu_backtrace_handler) took too long to run:
3.489 msecs
task:udevd state:D stack: 0 pid: 2036 ppid: 1881 flags:0x00000002
Call Trace:
context_switch kernel/sched/core.c:4197 [inline]
__schedule+0x210d/0x21ae kernel/sched/core.c:5408
schedule+0x168/0x1cc kernel/sched/core.c:5486
schedule_preempt_disabled+0xf/0x1a kernel/sched/core.c:5545
__mutex_lock_common kernel/locking/mutex.c:1047 [inline]
__mutex_lock+0x37c/0x690 kernel/locking/mutex.c:1109
__blkdev_get+0x21f/0xd64 fs/block_dev.c:1503
blkdev_get+0xcc/0x1eb fs/block_dev.c:1658
blkdev_open+0x22e/0x247 fs/block_dev.c:1775
do_dentry_open+0x6e5/0xbe3 fs/open.c:820
do_open fs/namei.c:3403 [inline]
path_openat+0x1689/0x197a fs/namei.c:3521
do_filp_open+0xc2/0x156 fs/namei.c:3548
do_sys_openat2+0x113/0x36a fs/open.c:1189
do_sys_open+0x8d/0xc1 fs/open.c:1206
do_syscall_64+0x33/0x40 arch/x86/entry/common.c:51
entry_SYSCALL_64_after_hwframe+0x61/0xc6
RIP: 0033:0x795ff25609a4
RSP: 002b:00007ffe1ab3aa50 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00005a7b436ff460 RCX: 0000795ff25609a4
RDX: 00000000000a0800 RSI: 00005a7b436f7c90 RDI: 00000000ffffff9c
RBP: 00005a7b436f7c90 R08: 00000000ffffffff R09: 00007ffe1ab55080
R10: 0000000000000000 R11: 0000000000000246 R12: 00000000000a0800
R13: 00005a7b436f3a30 R14: 0000000000000001 R15: 00005a7b436ca2c0
task:syz-executor319 state:D stack: 0 pid: 5177 ppid: 2015 flags:0x00004004
Call Trace:
context_switch kernel/sched/core.c:4197 [inline]
__schedule+0x210d/0x21ae kernel/sched/core.c:5408
schedule+0x168/0x1cc kernel/sched/core.c:5486
schedule_timeout+0x16a/0x1fc kernel/time/timer.c:2148
io_schedule_timeout+0x25/0x70 kernel/sched/core.c:7147
congestion_wait+0x108/0x2e3 mm/backing-dev.c:978
ext4_ext_truncate+0x1e4/0x1f7 fs/ext4/extents.c:4416
ext4_truncate+0x7a2/0xb6a fs/ext4/inode.c:4358
ext4_setattr+0x15c1/0x1752 fs/ext4/inode.c:5638
notify_change+0xb57/0xdf2 fs/attr.c:394
do_truncate+0x14c/0x1c9 fs/open.c:67
handle_truncate fs/namei.c:3066 [inline]
do_open fs/namei.c:3407 [inline]
path_openat+0x1933/0x197a fs/namei.c:3521
do_filp_open+0xc2/0x156 fs/namei.c:3548
do_sys_openat2+0x113/0x36a fs/open.c:1189
do_sys_open+0x8d/0xc1 fs/open.c:1206
do_syscall_64+0x33/0x40 arch/x86/entry/common.c:51
entry_SYSCALL_64_after_hwframe+0x61/0xc6
RIP: 0033:0x7fb2b203f0c9
RSP: 002b:00007fb2b1fea2f8 EFLAGS: 00000246 ORIG_RAX: 0000000000000002
RAX: ffffffffffffffda RBX: 00007fb2b20c67a0 RCX: 00007fb2b203f0c9
RDX: 0000000000000000 RSI: 000000000014d27e RDI: 0000000020000180
RBP: 00007fb2b20927bc R08: 0000000000000000 R09: 0000000000000000
R10: 00000000000004ae R11: 0000000000000246 R12: 00007fb2b20917b8
R13: 0031656c69662f2e R14: 6f6f6c2f7665642f R15: 00007fb2b20c67a8
task:syz-executor319 state:D stack: 0 pid: 5195 ppid: 2015 flags:0x00000004
Call Trace:
context_switch kernel/sched/core.c:4197 [inline]
__schedule+0x210d/0x21ae kernel/sched/core.c:5408
schedule+0x168/0x1cc kernel/sched/core.c:5486
rwsem_down_write_slowpath+0x4e3/0x867 kernel/locking/rwsem.c:1235
__down_write+0x99/0xd0 kernel/locking/rwsem.c:1401
inode_lock include/linux/fs.h:776 [inline]
ext4_fallocate+0x181/0x255e fs/ext4/extents.c:4697
vfs_fallocate+0x5a6/0x687 fs/open.c:312
ksys_fallocate+0x5a/0x8c fs/open.c:335
__do_sys_fallocate fs/open.c:343 [inline]
__se_sys_fallocate fs/open.c:341 [inline]
__x64_sys_fallocate+0xa7/0xb0 fs/open.c:341
do_syscall_64+0x33/0x40 arch/x86/entry/common.c:51
entry_SYSCALL_64_after_hwframe+0x61/0xc6
RIP: 0033:0x7fb2b203f0c9
RSP: 002b:00007fb2a9c092f8 EFLAGS: 00000246 ORIG_RAX: 000000000000011d
RAX: ffffffffffffffda RBX: 00007fb2b20c67b0 RCX: 00007fb2b203f0c9
RDX: 0000000000008947 RSI: 0000000000000000 RDI: 0000000000000004
RBP: 00007fb2b20927bc R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000007 R11: 0000000000000246 R12: 00007fb2b20917b8
R13: 0031656c69662f2e R14: 6f6f6c2f7665642f R15: 00007fb2b20c67b8
task:syz-executor319 state:D stack: 0 pid: 8626 ppid: 2021 flags:0x00004004
Call Trace:
context_switch kernel/sched/core.c:4197 [inline]
__schedule+0x210d/0x21ae kernel/sched/core.c:5408
__raw_spin_unlock_irq include/linux/spinlock_api_smp.h:169 [inline]
_raw_spin_unlock_irq+0x3a/0x3c kernel/locking/spinlock.c:199
task:syz-executor319 state:D stack: 0 pid: 8633 ppid: 2021 flags:0x00000004
Call Trace:
context_switch kernel/sched/core.c:4197 [inline]
__schedule+0x210d/0x21ae kernel/sched/core.c:5408
schedule+0x168/0x1cc kernel/sched/core.c:5486
rwsem_down_write_slowpath+0x4e3/0x867 kernel/locking/rwsem.c:1235
__down_write+0x99/0xd0 kernel/locking/rwsem.c:1401
inode_lock include/linux/fs.h:776 [inline]
ext4_fallocate+0x181/0x255e fs/ext4/extents.c:4697
vfs_fallocate+0x5a6/0x687 fs/open.c:312
ksys_fallocate+0x5a/0x8c fs/open.c:335
__do_sys_fallocate fs/open.c:343 [inline]
__se_sys_fallocate fs/open.c:341 [inline]
__x64_sys_fallocate+0xa7/0xb0 fs/open.c:341
do_syscall_64+0x33/0x40 arch/x86/entry/common.c:51
entry_SYSCALL_64_after_hwframe+0x61/0xc6
RIP: 0033:0x7fb2b203f0c9
RSP: 002b:00007fb2a9c092f8 EFLAGS: 00000246 ORIG_RAX: 000000000000011d
RAX: ffffffffffffffda RBX: 00007fb2b20c67b0 RCX: 00007fb2b203f0c9
RDX: 0000000000008947 RSI: 0000000000000000 RDI: 0000000000000004
RBP: 00007fb2b20927bc R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000007 R11: 0000000000000246 R12: 00007fb2b20917b8
R13: 0031656c69662f2e R14: 6f6f6c2f7665642f R15: 00007fb2b20c67b8
task:syz-executor319 state:D stack: 0 pid: 8677 ppid: 2020 flags:0x00004004
Call Trace:
context_switch kernel/sched/core.c:4197 [inline]
__schedule+0x210d/0x21ae kernel/sched/core.c:5408
schedule+0x168/0x1cc kernel/sched/core.c:5486
schedule_timeout+0x16a/0x1fc kernel/time/timer.c:2148
io_schedule_timeout+0x25/0x70 kernel/sched/core.c:7147
congestion_wait+0x108/0x2e3 mm/backing-dev.c:978
ext4_ext_truncate+0x1e4/0x1f7 fs/ext4/extents.c:4416
ext4_truncate+0x7a2/0xb6a fs/ext4/inode.c:4358
ext4_setattr+0x15c1/0x1752 fs/ext4/inode.c:5638
notify_change+0xb57/0xdf2 fs/attr.c:394
do_truncate+0x14c/0x1c9 fs/open.c:67
handle_truncate fs/namei.c:3066 [inline]
do_open fs/namei.c:3407 [inline]
path_openat+0x1933/0x197a fs/namei.c:3521
do_filp_open+0xc2/0x156 fs/namei.c:3548
do_sys_openat2+0x113/0x36a fs/open.c:1189
do_sys_open+0x8d/0xc1 fs/open.c:1206
do_syscall_64+0x33/0x40 arch/x86/entry/common.c:51
entry_SYSCALL_64_after_hwframe+0x61/0xc6
RIP: 0033:0x7fb2b203f0c9
RSP: 002b:00007fb2b1fea2f8 EFLAGS: 00000246 ORIG_RAX: 0000000000000002
RAX: ffffffffffffffda RBX: 00007fb2b20c67a0 RCX: 00007fb2b203f0c9
RDX: 0000000000000000 RSI: 000000000014d27e RDI: 0000000020000180
RBP: 00007fb2b20927bc R08: 0000000000000000 R09: 0000000000000000
R10: 00000000000004ae R11: 0000000000000246 R12: 00007fb2b20917b8
R13: 0031656c69662f2e R14: 6f6f6c2f7665642f R15: 00007fb2b20c67a8
task:syz-executor319 state:D stack: 0 pid: 8687 ppid: 2020 flags:0x00000004
Call Trace:
context_switch kernel/sched/core.c:4197 [inline]
__schedule+0x210d/0x21ae kernel/sched/core.c:5408
schedule+0x168/0x1cc kernel/sched/core.c:5486
rwsem_down_write_slowpath+0x4e3/0x867 kernel/locking/rwsem.c:1235
__down_write+0x99/0xd0 kernel/locking/rwsem.c:1401
inode_lock include/linux/fs.h:776 [inline]
ext4_fallocate+0x181/0x255e fs/ext4/extents.c:4697
vfs_fallocate+0x5a6/0x687 fs/open.c:312
ksys_fallocate+0x5a/0x8c fs/open.c:335
__do_sys_fallocate fs/open.c:343 [inline]
__se_sys_fallocate fs/open.c:341 [inline]
__x64_sys_fallocate+0xa7/0xb0 fs/open.c:341
do_syscall_64+0x33/0x40 arch/x86/entry/common.c:51
entry_SYSCALL_64_after_hwframe+0x61/0xc6
RIP: 0033:0x7fb2b203f0c9
RSP: 002b:00007fb2a9c092f8 EFLAGS: 00000246 ORIG_RAX: 000000000000011d
RAX: ffffffffffffffda RBX: 00007fb2b20c67b0 RCX: 00007fb2b203f0c9
RDX: 0000000000008947 RSI: 0000000000000000 RDI: 0000000000000004
RBP: 00007fb2b20927bc R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000007 R11: 0000000000000246 R12: 00007fb2b20917b8
R13: 0031656c69662f2e R14: 6f6f6c2f7665642f R15: 00007fb2b20c67b8
task:udevd state:D stack: 0 pid: 8902 ppid: 1881 flags:0x00000002
Call Trace:
context_switch kernel/sched/core.c:4197 [inline]
__schedule+0x210d/0x21ae kernel/sched/core.c:5408
schedule+0x168/0x1cc kernel/sched/core.c:5486
schedule_preempt_disabled+0xf/0x1a kernel/sched/core.c:5545
__mutex_lock_common kernel/locking/mutex.c:1047 [inline]
__mutex_lock+0x37c/0x690 kernel/locking/mutex.c:1109
__blkdev_get+0x21f/0xd64 fs/block_dev.c:1503
blkdev_get+0xcc/0x1eb fs/block_dev.c:1658
blkdev_open+0x22e/0x247 fs/block_dev.c:1775
do_dentry_open+0x6e5/0xbe3 fs/open.c:820
do_open fs/namei.c:3403 [inline]
path_openat+0x1689/0x197a fs/namei.c:3521
do_filp_open+0xc2/0x156 fs/namei.c:3548
do_sys_openat2+0x113/0x36a fs/open.c:1189
do_sys_open+0x8d/0xc1 fs/open.c:1206
do_syscall_64+0x33/0x40 arch/x86/entry/common.c:51
entry_SYSCALL_64_after_hwframe+0x61/0xc6
RIP: 0033:0x795ff25609a4
RSP: 002b:00007ffe1ab35650 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 0000795ff25609a4
RDX: 0000000000080000 RSI: 00005a7b436f4250 RDI: 00000000ffffff9c
RBP: 00005a7b436f4250 R08: 00005a7b43703070 R09: 0000795ff263bc10
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000080000
R13: 00005a7b43703070 R14: 0000000000000000 R15: 0000000000000001
task:syz-executor319 state:D stack: 0 pid:11374 ppid: 2017 flags:0x00004004
Call Trace:
context_switch kernel/sched/core.c:4197 [inline]
__schedule+0x210d/0x21ae kernel/sched/core.c:5408
schedule+0x168/0x1cc kernel/sched/core.c:5486
schedule_timeout+0x16a/0x1fc kernel/time/timer.c:2148
io_schedule_timeout+0x25/0x70 kernel/sched/core.c:7147
congestion_wait+0x108/0x2e3 mm/backing-dev.c:978
ext4_ext_truncate+0x1e4/0x1f7 fs/ext4/extents.c:4416
ext4_truncate+0x7a2/0xb6a fs/ext4/inode.c:4358
ext4_setattr+0x15c1/0x1752 fs/ext4/inode.c:5638
notify_change+0xb57/0xdf2 fs/attr.c:394
do_truncate+0x14c/0x1c9 fs/open.c:67
handle_truncate fs/namei.c:3066 [inline]
do_open fs/namei.c:3407 [inline]
path_openat+0x1933/0x197a fs/namei.c:3521
do_filp_open+0xc2/0x156 fs/namei.c:3548
do_sys_openat2+0x113/0x36a fs/open.c:1189
do_sys_open+0x8d/0xc1 fs/open.c:1206
do_syscall_64+0x33/0x40 arch/x86/entry/common.c:51
entry_SYSCALL_64_after_hwframe+0x61/0xc6
RIP: 0033:0x7fb2b203f0c9
RSP: 002b:00007fb2b1fea2f8 EFLAGS: 00000246 ORIG_RAX: 0000000000000002
RAX: ffffffffffffffda RBX: 00007fb2b20c67a0 RCX: 00007fb2b203f0c9
RDX: 0000000000000000 RSI: 000000000014d27e RDI: 0000000020000180
RBP: 00007fb2b20927bc R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00007fb2b20917b8
R13: 0031656c69662f2e R14: 6f6f6c2f7665642f R15: 00007fb2b20c67a8
task:syz-executor319 state:D
stack: 0 pid:11384 ppid: 2017 flags:0x00000004
Call Trace:
context_switch kernel/sched/core.c:4197 [inline]
__schedule+0x210d/0x21ae kernel/sched/core.c:5408
schedule+0x168/0x1cc kernel/sched/core.c:5486
rwsem_down_write_slowpath+0x4e3/0x867 kernel/locking/rwsem.c:1235
__down_write+0x99/0xd0 kernel/locking/rwsem.c:1401
inode_lock include/linux/fs.h:776 [inline]
ext4_fallocate+0x181/0x255e fs/ext4/extents.c:4697
vfs_fallocate+0x5a6/0x687 fs/open.c:312
ksys_fallocate+0x5a/0x8c fs/open.c:335
__do_sys_fallocate fs/open.c:343 [inline]
__se_sys_fallocate fs/open.c:341 [inline]
__x64_sys_fallocate+0xa7/0xb0 fs/open.c:341
do_syscall_64+0x33/0x40 arch/x86/entry/common.c:51
entry_SYSCALL_64_after_hwframe+0x61/0xc6
RIP: 0033:0x7fb2b203f0c9
RSP: 002b:00007fb2a9c092f8 EFLAGS: 00000246 ORIG_RAX: 000000000000011d
RAX: ffffffffffffffda RBX: 00007fb2b20c67b0 RCX: 00007fb2b203f0c9
RDX: 0000000000008947 RSI: 0000000000000000 RDI: 0000000000000004
RBP: 00007fb2b20927bc R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000007 R11: 0000000000000246 R12: 00007fb2b20917b8
R13: 0031656c69662f2e R14: 6f6f6c2f7665642f R15: 00007fb2b20c67b8
task:syz-executor319 state:D stack: 0 pid:12502 ppid: 2018 flags:0x00000004
Call Trace:
context_switch kernel/sched/core.c:4197 [inline]
__schedule+0x210d/0x21ae kernel/sched/core.c:5408
schedule+0x168/0x1cc kernel/sched/core.c:5486
blk_mq_freeze_queue_wait+0xf7/0x14e block/blk-mq.c:150
__loop_clr_fd+0x224/0x913 drivers/block/loop.c:1223
lo_release+0x11b/0x1af drivers/block/loop.c:1923
__blkdev_put+0x38a/0x500 fs/block_dev.c:1806
blkdev_close+0x9a/0x9f fs/block_dev.c:1875
__fput+0x418/0x729 fs/file_table.c:281
task_work_run+0x12b/0x15b kernel/task_work.c:161
tracehook_notify_resume include/linux/tracehook.h:188 [inline]
exit_to_user_mode_loop kernel/entry/common.c:173 [inline]
exit_to_user_mode_prepare+0xbf/0x160 kernel/entry/common.c:200
syscall_exit_to_user_mode+0x128/0x168 kernel/entry/common.c:276
entry_SYSCALL_64_after_hwframe+0x61/0xc6
RIP: 0033:0x7fb2b1ffb4eb
RSP: 002b:00007fb2b1fea150 EFLAGS: 00000293 ORIG_RAX: 0000000000000003
RAX: 0000000000000000 RBX: 0000000000000003 RCX: 00007fb2b1ffb4eb
RDX: 0000000000000010 RSI: 0000000000004c00 RDI: 0000000000000004
RBP: 0000000000000004 R08: 0000000000000000 R09: 00000000000004ae
R10: 0000000000000000 R11: 0000000000000293 R12: 00007fb2b1fea6b8
R13: 00007fb2b1fea180 R14: 00007fb2b1fea1c0 R15: 0000000000004500
task:syz-executor319 state:D stack: 0 pid:12504 ppid: 2018 flags:0x00000004
Call Trace:
context_switch kernel/sched/core.c:4197 [inline]
__schedule+0x210d/0x21ae kernel/sched/core.c:5408
schedule+0x168/0x1cc kernel/sched/core.c:5486
schedule_preempt_disabled+0xf/0x1a kernel/sched/core.c:5545
__mutex_lock_common kernel/locking/mutex.c:1047 [inline]
__mutex_lock+0x37c/0x690 kernel/locking/mutex.c:1109
__blkdev_get+0x21f/0xd64 fs/block_dev.c:1503
blkdev_get+0xcc/0x1eb fs/block_dev.c:1658
blkdev_open+0x22e/0x247 fs/block_dev.c:1775
do_dentry_open+0x6e5/0xbe3 fs/open.c:820
do_open fs/namei.c:3403 [inline]
path_openat+0x1689/0x197a fs/namei.c:3521
do_filp_open+0xc2/0x156 fs/namei.c:3548
do_sys_openat2+0x113/0x36a fs/open.c:1189
do_sys_open+0x8d/0xc1 fs/open.c:1206
do_syscall_64+0x33/0x40 arch/x86/entry/common.c:51
entry_SYSCALL_64_after_hwframe+0x61/0xc6
RIP: 0033:0x7fb2b203f0c9
RSP: 002b:00007fb2a9c092f8 EFLAGS: 00000246 ORIG_RAX: 0000000000000002
RAX: ffffffffffffffda RBX: 00007fb2b20c67b0 RCX: 00007fb2b203f0c9
RDX: 0000000000000000 RSI: 000000000014113e RDI: 00000000200000c0
RBP: 00007fb2b20927bc R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00007fb2b20917b8
R13: 0031656c69662f2e R14: 6f6f6c2f7665642f R15: 00007fb2b20c67b8
task:syz-executor319 state:D stack: 0 pid:12508 ppid: 2019 flags:0x00000004
Call Trace:
context_switch kernel/sched/core.c:4197 [inline]
__schedule+0x210d/0x21ae kernel/sched/core.c:5408
schedule+0x168/0x1cc kernel/sched/core.c:5486
schedule_preempt_disabled+0xf/0x1a kernel/sched/core.c:5545
__mutex_lock_common kernel/locking/mutex.c:1047 [inline]
__mutex_lock+0x37c/0x690 kernel/locking/mutex.c:1109
__blkdev_get+0x21f/0xd64 fs/block_dev.c:1503
blkdev_get+0xcc/0x1eb fs/block_dev.c:1658
blkdev_open+0x22e/0x247 fs/block_dev.c:1775
do_dentry_open+0x6e5/0xbe3 fs/open.c:820
do_open fs/namei.c:3403 [inline]
path_openat+0x1689/0x197a fs/namei.c:3521
do_filp_open+0xc2/0x156 fs/namei.c:3548
do_sys_openat2+0x113/0x36a fs/open.c:1189
do_sys_open+0x8d/0xc1 fs/open.c:1206
do_syscall_64+0x33/0x40 arch/x86/entry/common.c:51
entry_SYSCALL_64_after_hwframe+0x61/0xc6
RIP: 0033:0x7fb2b203f0c9
RSP: 002b:00007fb2a9be82f8 EFLAGS: 00000246
ORIG_RAX: 0000000000000002
RAX: ffffffffffffffda RBX: 00007fb2b20c67c0 RCX: 00007fb2b203f0c9
RDX: 0000000000000000 RSI: 000000000014113e RDI: 00000000200000c0
RBP: 00007fb2b20927bc R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00007fb2b20917b8
R13: 0031656c69662f2e R14: 6f6f6c2f7665642f R15: 00007fb2b20c67c8





Attachments:
reproducer.c (54.95 kB)
config (198.48 kB)

2023-09-19 04:53:44

by Theodore Ts'o

Subject: Re: [syzbot] INFO: task hung in ext4_fallocate

On Mon, Sep 18, 2023 at 08:13:30PM +0530, Shreeya Patel wrote:
> Hi Everyone,
>
> syzbot has reported a task hung issue in ext4_fallocate, crash report can be
> seen at the bottom of the email.

What's the link to the syzbot dashboard URL? This is typically the
first line of the reproducer, but it's been snipped out in your
reproducer.

>
> When I tried to reproduce this issue on mainline linux kernel using the
> reproducer provided by syzbot, I see an endless loop of following errors :-
>
> [   89.689751][ T3922] loop1: detected capacity change from 0 to 512
> [   89.690514][ T3916] EXT4-fs error (device loop4): ext4_map_blocks:577:
> inode #3: block 9: comm poc: lblock 0 mapped to illegal pblock 9 (length 1)
> [   89.694709][ T3890] EXT4-fs error (device loop0): ext4_map_blocks:577:

Please note that a maliciously corrupted file system is considered low
priority by the ext4 developers. If this is something which is important
to Google, then it needs to fund more headcount so that we have time
to take a look at these things. There has been plenty of discussions
about how syzbot is effectively a denial of service attack on upstream
resources, and the only way we can respond is to down-prioritize such
bug reports.

This is *especially* true when we receive reports from private syzbot
instances where we don't have any of the features provided by the syzbot
console --- including convenient access to the file system image that
was mounted as part of the test, the ability to use #syz test to try
out patches --- and, more importantly, the ability to introduce debugging
messages and use the syzbot testing facility to run the tests.

If you really are concerned about the threat model of users picking up
random USB thumb drives that they found in the parking lot, consider
running fsck.ext4 -f on the file system first, and if the file system
checker is not able to fix the file system, refuse to mount it.
Alternatively, consider using crosvm and mounting the file system in
the guest kernel so that the VMM provides an additional sandbox layer.
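A minimal sketch of that fsck-then-mount gate (the device path, mountpoint, and helper names here are illustrative, not from this thread; the decision is driven by fsck.ext4's documented exit codes):

```shell
#!/bin/sh
# Sketch of the suggested gate: run fsck.ext4 on an untrusted ext4
# image before mounting, and refuse to mount if it cannot be repaired.
# All names below are illustrative.

# fsck.ext4 exit codes: 0 = no errors, 1 = errors corrected,
# 2 = corrected but reboot advised, 4+ = errors left uncorrected
# or operational failure.
safe_to_mount() {
    [ "$1" -le 2 ]
}

check_and_mount() {
    dev=$1
    mnt=$2
    fsck.ext4 -f -p "$dev"
    if safe_to_mount $?; then
        mount -t ext4 "$dev" "$mnt"
    else
        echo "fsck could not repair $dev; refusing to mount" >&2
        return 1
    fi
}

# Usage (needs root and a real block device):
# check_and_mount /dev/sdb1 /mnt/untrusted
```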

Anyway, please provide a link to a public Syzkaller dashboard report,
and we'll take a look at it when we have time...

Cheers,

- Ted

2023-10-03 14:11:53

by Andres Freund

Subject: Re: [syzbot] INFO: task hung in ext4_fallocate

Hi,

On 2023-09-18 18:07:54 -0400, Theodore Ts'o wrote:
> On Mon, Sep 18, 2023 at 08:13:30PM +0530, Shreeya Patel wrote:
> > When I tried to reproduce this issue on mainline linux kernel using the
> > reproducer provided by syzbot, I see an endless loop of following errors :-
> >
> > [   89.689751][ T3922] loop1: detected capacity change from 0 to 512
> > [   89.690514][ T3916] EXT4-fs error (device loop4): ext4_map_blocks:577:
> > inode #3: block 9: comm poc: lblock 0 mapped to illegal pblock 9 (length 1)
> > [   89.694709][ T3890] EXT4-fs error (device loop0): ext4_map_blocks:577:
>
> Please note that maliciously corrupted file system is considered low
> priority by ext4 developers.

FWIW, I am seeing occasional hangs in ext4_fallocate with 6.6-rc4 as well,
just doing database development on my laptop.

Oct 03 06:48:30 alap5 kernel: INFO: task postgres:132485 blocked for more than 122 seconds.
Oct 03 06:48:30 alap5 kernel: Not tainted 6.6.0-rc4-andres-00009-ge38f6d0c8360 #74
Oct 03 06:48:30 alap5 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Oct 03 06:48:30 alap5 kernel: task:postgres state:D stack:0 pid:132485 ppid:132017 flags:0x00020006
Oct 03 06:48:30 alap5 kernel: Call Trace:
Oct 03 06:48:30 alap5 kernel: <TASK>
Oct 03 06:48:30 alap5 kernel: __schedule+0x388/0x13e0
Oct 03 06:48:30 alap5 kernel: ? nvme_queue_rqs+0x1e6/0x290
Oct 03 06:48:30 alap5 kernel: schedule+0x5f/0xe0
Oct 03 06:48:30 alap5 kernel: inode_dio_wait+0xd5/0x100
Oct 03 06:48:30 alap5 kernel: ? membarrier_register_private_expedited+0xa0/0xa0
Oct 03 06:48:30 alap5 kernel: ext4_fallocate+0x12f/0x1040
Oct 03 06:48:30 alap5 kernel: ? io_submit_sqes+0x392/0x660
Oct 03 06:48:30 alap5 kernel: vfs_fallocate+0x135/0x360
Oct 03 06:48:30 alap5 kernel: __x64_sys_fallocate+0x42/0x70
Oct 03 06:48:30 alap5 kernel: do_syscall_64+0x38/0x80
Oct 03 06:48:30 alap5 kernel: entry_SYSCALL_64_after_hwframe+0x46/0xb0
Oct 03 06:48:30 alap5 kernel: RIP: 0033:0x7fda32415f82
Oct 03 06:48:30 alap5 kernel: RSP: 002b:00007ffedbea0828 EFLAGS: 00000246 ORIG_RAX: 000000000000011d
Oct 03 06:48:30 alap5 kernel: RAX: ffffffffffffffda RBX: 00000000002a6000 RCX: 00007fda32415f82
Oct 03 06:48:30 alap5 kernel: RDX: 0000000014b22000 RSI: 0000000000000000 RDI: 0000000000000016
Oct 03 06:48:30 alap5 kernel: RBP: 0000000014b22000 R08: 0000000014b22000 R09: 0000558f70eabed0
Oct 03 06:48:30 alap5 kernel: R10: 00000000002a6000 R11: 0000000000000246 R12: 00000000000000e0
Oct 03 06:48:30 alap5 kernel: R13: 000000000a000013 R14: 0000000000000004 R15: 0000558f709d4e70
Oct 03 06:48:30 alap5 kernel: </TASK>

cat /proc/132485/stack
[<0>] inode_dio_wait+0xd5/0x100
[<0>] ext4_fallocate+0x12f/0x1040
[<0>] vfs_fallocate+0x135/0x360
[<0>] __x64_sys_fallocate+0x42/0x70
[<0>] do_syscall_64+0x38/0x80
[<0>] entry_SYSCALL_64_after_hwframe+0x46/0xb0

cat /sys/kernel/debug/block/nvme1n1/hctx*/busy
doesn't show any active IO. Nor does the issue resolve when resetting the
controller - so I don't think this is a disk/block level issue.
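The same triage generalizes: rather than checking one pid, one can dump the kernel stacks of every task stuck in uninterruptible sleep. A small sketch (hypothetical helper script, not part of the original report):

```shell
#!/bin/sh
# Print kernel stacks of all tasks in state D (uninterruptible sleep),
# the state that hung-task reports flag; reading /proc/<pid>/stack
# requires root.
for pid in $(ps -eo pid=,stat= | awk '$2 ~ /^D/ {print $1}'); do
    echo "=== pid $pid ==="
    cat "/proc/$pid/stack" 2>/dev/null
done
```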


This is on 8f1b4600373f, with one local, unrelated, commit on top.

Unfortunately, so far, I've only reproduced this every couple of hours of
interactive work, so bisecting isn't really feasible. I've not hit it on 6.5
over a considerably longer time, so I am reasonably confident this isn't an
older issue.

Greetings,

Andres Freund

2023-10-03 23:26:06

by Theodore Ts'o

Subject: Re: [syzbot] INFO: task hung in ext4_fallocate

On Tue, Oct 03, 2023 at 07:11:38AM -0700, Andres Freund wrote:
> Hi,
>
> On 2023-09-18 18:07:54 -0400, Theodore Ts'o wrote:
> > On Mon, Sep 18, 2023 at 08:13:30PM +0530, Shreeya Patel wrote:
> > > When I tried to reproduce this issue on mainline linux kernel using the
> > > reproducer provided by syzbot, I see an endless loop of following errors :-
> > >
> > > [   89.689751][ T3922] loop1: detected capacity change from 0 to 512
> > > [   89.690514][ T3916] EXT4-fs error (device loop4): ext4_map_blocks:577:
> > > inode #3: block 9: comm poc: lblock 0 mapped to illegal pblock 9 (length 1)
> > > [   89.694709][ T3890] EXT4-fs error (device loop0): ext4_map_blocks:577:
> >
> > Please note that maliciously corrupted file system is considered low
> > priority by ext4 developers.
>
> FWIW, I am seeing occasional hangs in ext4_fallocate with 6.6-rc4 as well,
> just doing database development on my laptop.

Unless you are using a corrupted file system (e.g., there are EXT4-fs
error messages in dmesg), it's likely a different issue.

There is a recently reported issue which doesn't involve a corrupted
file system:

https://bugzilla.kernel.org/show_bug.cgi?id=217965

However, *please* let's not conflate bug reports unless we know they
are the same thing. It's better to keep reports separate, as opposed
to having users see multiple bug reports with vaguely the same
symptoms (say, like a headache) and pestering a doctor at a cocktail
party claiming that *of* *course* their issue is the same as the one
they read about in some vlog because the symptoms are the same.

(This is why I hate, hate, HATE bug reports via Launchpad, which tend
very much to be a flaming hott mess....)

Can you give us details about (a) what kind of block device; are you
using dm-crypt, or just a HDD or a SSD? If an SSD, what kind of SSD?
What CPU architecture is it? And can you send me the output of
dumpe2fs -h <block device>? And while the file system is mounted,
please send the contents of /proc/fs/ext4/<block-device>/options, e.g.:

% cat /proc/fs/ext4/dm-0/options
rw
bsddf
nogrpid
block_validity
dioread_nolock
nodiscard
delalloc
nowarn_on_error
journal_checksum
barrier
auto_da_alloc
user_xattr
acl
noquota
resuid=0
resgid=0
errors=remount-ro
commit=5
min_batch_time=0
max_batch_time=15000
stripe=0
data=ordered
inode_readahead_blks=32
init_itable=10
max_dir_size_kb=0

And finally, how full was the file system? What is the output of "df
<mountpoint>" and "df -i <mountpoint>".
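Taken together, the requested diagnostics can be gathered in one pass; a sketch (the `ext4_diag` helper and its arguments are illustrative, not from this thread):

```shell
#!/bin/sh
# Collect the ext4 diagnostics requested above: superblock header,
# effective mount options, and space/inode usage. Run as root on the
# affected system; device and mountpoint are illustrative.
ext4_diag() {
    dev=$1
    mnt=$2
    dumpe2fs -h "$dev"
    cat "/proc/fs/ext4/$(basename "$dev")/options"
    df "$mnt"
    df -i "$mnt"
}

# Example: ext4_diag /dev/nvme1n1p1 /srv
```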

Thanks,

- Ted

2023-10-04 00:43:10

by Andres Freund

Subject: Re: [syzbot] INFO: task hung in ext4_fallocate

Hi,

On 2023-10-03 19:25:05 -0400, Theodore Ts'o wrote:
> On Tue, Oct 03, 2023 at 07:11:38AM -0700, Andres Freund wrote:
> > On 2023-09-18 18:07:54 -0400, Theodore Ts'o wrote:
> > > On Mon, Sep 18, 2023 at 08:13:30PM +0530, Shreeya Patel wrote:
> > > > When I tried to reproduce this issue on mainline linux kernel using the
> > > > reproducer provided by syzbot, I see an endless loop of following errors :-
> > > >
> > > > [   89.689751][ T3922] loop1: detected capacity change from 0 to 512
> > > > [   89.690514][ T3916] EXT4-fs error (device loop4): ext4_map_blocks:577:
> > > > inode #3: block 9: comm poc: lblock 0 mapped to illegal pblock 9 (length 1)
> > > > [   89.694709][ T3890] EXT4-fs error (device loop0): ext4_map_blocks:577:
> > >
> > > Please note that maliciously corrupted file system is considered low
> > > priority by ext4 developers.
> >
> > FWIW, I am seeing occasional hangs in ext4_fallocate with 6.6-rc4 as well,
> > just doing database development on my laptop.
>
> Unless you are using a corrupted file system (e.g., there are EXT4-fs
> error messages in dmesg), it's likely a different issue.

Indeed quite possibly unrelated - the only reason that I mentioned it here was
the fact that so far I've only hit it in 6.6-rc*, not earlier.

I've since then "downgraded" to 6.5.5, without hitting the issue so far,
despite continuing to run a similar workload for most of the day.

I did actually find ext4 messages in dmesg, but only during the latest boot
into 6.5.5, despite encountering the hang four times (resolved via reboot each
time).

Oct 03 07:35:16 alap5 kernel: Linux version 6.5.5-andres-00001-g97e7c1c17aa1 (andres@alap5) (gcc (Debian 13.2.0-4) 13.2.0, GNU ld (GNU Binutils for Debian) 2.41) #76 SMP PREEMPT_DYNAMIC Tue Oct 3 07:29:10 PDT 2023
Oct 03 07:35:16 alap5 kernel: Command line: BOOT_IMAGE=/vmlinuz-6.5.5-andres-00001-g97e7c1c17aa1 root=/dev/mapper/alap5-root ro apparmor=0 rd.luks.options=discard systemd.show_status=1 i915.fastboot=1 i915.modeset=1 psmouse.synaptics_intertouch=1 systemd.unified_cgroup_hierarchy nvme.poll_queues=2 nvme.write_queues=2 intel_iommu=sm_on,igfx_off iommu=pt
...
Oct 03 07:35:17 alap5 systemd[1]: Starting systemd-fsck@dev-disk-by\x2duuid-b77a2f23\x2dbb0d\x2d48bf\x2d8c36\x2d327c73460cb8.service - File System Check on /dev/disk/by-uuid/b77a2f23-bb0d-48bf-8c36-327c73460cb8...
...
Oct 03 07:35:17 alap5 systemd-fsck[705]: /dev/nvme1n1p1: recovering journal
...
Oct 03 07:35:17 alap5 systemd-fsck[705]: /dev/nvme1n1p1: clean, 90863/122101760 files, 110935625/488378368 blocks
Oct 03 07:35:17 alap5 systemd[1]: Finished systemd-fsck@dev-disk-by\x2duuid-b77a2f23\x2dbb0d\x2d48bf\x2d8c36\x2d327c73460cb8.service - File System Check on /dev/disk/by-uuid/b77a2f23-bb0d-48bf-8c36-327c73460cb8.
...
Oct 03 07:35:18 alap5 kernel: EXT4-fs (nvme1n1p1): mounted filesystem b77a2f23-bb0d-48bf-8c36-327c73460cb8 r/w with ordered data mode. Quota mode: none.
...
Oct 03 07:38:09 alap5 kernel: EXT4-fs warning (device nvme1n1p1): ext4_convert_unwritten_extents_endio:3743: Inode (82984794) finished: extent logical block 2240, len 32000; IO logical block 2240, len 2048
Oct 03 07:38:09 alap5 kernel: EXT4-fs warning (device nvme1n1p1): ext4_convert_unwritten_extents_endio:3743: Inode (82984794) finished: extent logical block 4288, len 29952; IO logical block 4288, len 2048
Oct 03 07:38:09 alap5 kernel: EXT4-fs warning (device nvme1n1p1): ext4_convert_unwritten_extents_endio:3743: Inode (82984794) finished: extent logical block 6336, len 27904; IO logical block 6336, len 2048
Oct 03 07:38:09 alap5 kernel: EXT4-fs warning (device nvme1n1p1): ext4_convert_unwritten_extents_endio:3743: Inode (82984794) finished: extent logical block 8384, len 25856; IO logical block 8384, len 2048
Oct 03 07:38:09 alap5 kernel: EXT4-fs warning (device nvme1n1p1): ext4_convert_unwritten_extents_endio:3743: Inode (82984794) finished: extent logical block 10432, len 23808; IO logical block 10432, len 2048
Oct 03 07:38:09 alap5 kernel: EXT4-fs warning (device nvme1n1p1): ext4_convert_unwritten_extents_endio:3743: Inode (82984794) finished: extent logical block 12480, len 21760; IO logical block 12480, len 2048
Oct 03 07:38:09 alap5 kernel: EXT4-fs warning (device nvme1n1p1): ext4_convert_unwritten_extents_endio:3743: Inode (82984794) finished: extent logical block 14528, len 19712; IO logical block 14528, len 2048
Oct 03 07:38:09 alap5 kernel: EXT4-fs warning (device nvme1n1p1): ext4_convert_unwritten_extents_endio:3743: Inode (82984794) finished: extent logical block 16576, len 17664; IO logical block 16576, len 2048
Oct 03 07:38:09 alap5 kernel: EXT4-fs warning (device nvme1n1p1): ext4_convert_unwritten_extents_endio:3743: Inode (82984794) finished: extent logical block 18624, len 15616; IO logical block 18624, len 320
Oct 03 07:38:11 alap5 kernel: wl0: disconnect from AP 04:62:73:d2:2f:ff for new auth to fc:5b:39:c6:6e:9f
...
Oct 03 07:38:11 alap5 kernel: wl0: Limiting TX power to 2 dBm as advertised by fc:5b:39:c6:6e:9f
Oct 03 07:38:22 alap5 kernel: EXT4-fs warning (device nvme1n1p1): ext4_convert_unwritten_extents_endio:3743: Inode (82984800) finished: extent logical block 152080, len 9856; IO logical block 152080, len 2048
Oct 03 07:38:22 alap5 kernel: EXT4-fs warning (device nvme1n1p1): ext4_convert_unwritten_extents_endio:3743: Inode (82984800) finished: extent logical block 154128, len 7808; IO logical block 154128, len 768

I'm somewhat surprised to see the "


> Can you give us details about (a) what kind of block device; are you
> using dm-crypt, or just a HDD or a SSD? If an SSD, what kind of SSD?

NVMe, Samsung SSD 970 EVO Plus 2TB. No dm_crypt, raid or whatever.

smartctl / nvme smart-log indicates no data correctness errors or such. Not
that that's a guarantee of anything.

A forced fsck doesn't find any corruption, but did report a number of
Inode 82984754 extent tree (at level 2) could be narrower. Optimize<y>?
style "issues".


> What CPU architecture is it?

x86-64 (i9-9880H in a ThinkPad X1 Extreme 2nd). The laptop does have a battery
that's not, uhm, perfect, anymore (turning off while supposedly still having
around 15% capacity), so there have been a few out-of-power hard "shutdowns".


> And can you send me the output of dumpe2fs -h <block device>?

dumpe2fs 1.47.0 (5-Feb-2023)
Filesystem volume name: <none>
Last mounted on: /srv
Filesystem UUID: b77a2f23-bb0d-48bf-8c36-327c73460cb8
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent 64bit flex_bg sparse_super large_file huge_file dir_nlink extra_isize metadata_csum
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 122101760
Block count: 488378368
Reserved block count: 0
Overhead clusters: 7947216
Free blocks: 377442743
Free inodes: 122010897
First block: 0
Block size: 4096
Fragment size: 4096
Group descriptor size: 64
Reserved GDT blocks: 1024
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8192
Inode blocks per group: 512
Flex block group size: 16
Filesystem created: Thu Jan 2 20:13:09 2020
Last mount time: Tue Oct 3 07:35:18 2023
Last write time: Tue Oct 3 07:35:18 2023
Mount count: 3
Maximum mount count: -1
Last checked: Mon Oct 2 12:55:12 2023
Check interval: 0 (<none>)
Lifetime writes: 7662 GB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 32
Desired extra isize: 32
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: 03b91d34-4d2d-44b4-ac94-0c055f696e8a
Journal backup: inode blocks
Checksum type: crc32c
Checksum: 0xbee0f98e
Journal features: journal_incompat_revoke journal_64bit journal_checksum_v3
Total journal size: 1024M
Total journal blocks: 262144
Max transaction length: 262144
Fast commit length: 0
Journal sequence: 0x0004f764
Journal start: 23369
Journal checksum type: crc32c
Journal checksum: 0x66761981


> And while the file system is mounted, please send the contents of
> /proc/fs/<block-device>/options, e.g.:
>
> % cat /proc/fs/ext4/dm-0/options

cat /proc/fs/ext4/nvme1n1p1/options
rw
bsddf
nogrpid
block_validity
dioread_nolock
nodiscard
delalloc
nowarn_on_error
journal_checksum
barrier
auto_da_alloc
user_xattr
acl
noquota
resuid=0
resgid=0
errors=continue
commit=5
min_batch_time=0
max_batch_time=15000
stripe=0
data=ordered
inode_readahead_blks=32
init_itable=10
max_dir_size_kb=0

>
> And finally, how full was the file system? What is the output of "df
> <mountpoint>" and "df -i <mountpoint>".

root@alap5:/# df /srv/
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/nvme1n1p1 1921724608 542905720 1378802504 29% /srv
root@alap5:/# df -i /srv/
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/nvme1n1p1 122101760 91292 122010468 1% /srv


However, fullness of this filesystem varies substantially over time, as I use
it for developing / testing postgres, albeit primarily when travelling. I was
e.g. benchmarking bulk-loading of data (knowingly on a mediocre SSD) at the
time of the last crash, space usage during that goes up by a few hundred GB,
with the data being deleted afterwards.

Greetings,

Andres Freund

2023-10-05 14:12:11

by Shreeya Patel

Subject: Re: [syzbot] INFO: task hung in ext4_fallocate

Hi Andres,

On 04/10/23 06:12, Andres Freund wrote:
> Hi,
>
> On 2023-10-03 19:25:05 -0400, Theodore Ts'o wrote:
>> On Tue, Oct 03, 2023 at 07:11:38AM -0700, Andres Freund wrote:
>>> On 2023-09-18 18:07:54 -0400, Theodore Ts'o wrote:
>>>> On Mon, Sep 18, 2023 at 08:13:30PM +0530, Shreeya Patel wrote:
>>>>> When I tried to reproduce this issue on mainline linux kernel using the
>>>>> reproducer provided by syzbot, I see an endless loop of following errors :-
>>>>>
>>>>> [   89.689751][ T3922] loop1: detected capacity change from 0 to 512
>>>>> [   89.690514][ T3916] EXT4-fs error (device loop4): ext4_map_blocks:577:
>>>>> inode #3: block 9: comm poc: lblock 0 mapped to illegal pblock 9 (length 1)
>>>>> [   89.694709][ T3890] EXT4-fs error (device loop0): ext4_map_blocks:577:
>>>> Please note that maliciously corrupted file system is considered low
>>>> priority by ext4 developers.
>>> FWIW, I am seeing occasional hangs in ext4_fallocate with 6.6-rc4 as well,
>>> just doing database development on my laptop.
>> Unless you are using a corrupted file system (e.g., there are EXT4-fs
>> error messages in dmesg), it's likely a different issue.
> Indeed quite possibly unrelated - the only reason that I mentioned it here was
> the fact that so far I've only hit it in 6.6-rc*, not earlier.

Sorry about the previous HTML-formatted email.

Since the issue that I reported is not reproducible locally, to confirm
whether it happens on the mainline kernel,
we did a #syz test: https://github.com/torvalds/linux master

But it wasn't reproduced on the mainline kernel, so it is very unlikely
that the issue you are seeing is the same as this one.


Thanks,
Shreeya Patel



2023-10-10 09:23:32

by Thorsten Leemhuis

Subject: Re: [syzbot] INFO: task hung in ext4_fallocate

On 05.10.23 11:04, Shreeya Patel wrote:
> On 04/10/23 06:12, Andres Freund wrote:
>> On 2023-10-03 19:25:05 -0400, Theodore Ts'o wrote:
>>> On Tue, Oct 03, 2023 at 07:11:38AM -0700, Andres Freund wrote:
>>>> On 2023-09-18 18:07:54 -0400, Theodore Ts'o wrote:
>>>>> On Mon, Sep 18, 2023 at 08:13:30PM +0530, Shreeya Patel wrote:
>>>>>> When I tried to reproduce this issue on mainline linux kernel
>>>>>> using the
>>>>>> reproducer provided by syzbot, I see an endless loop of following
>>>>>> errors :-
>>>>>>
>>>>>> [   89.689751][ T3922] loop1: detected capacity change from 0 to 512
>>>>>> [   89.690514][ T3916] EXT4-fs error (device loop4):
>>>>>> ext4_map_blocks:577:
>>>>>> inode #3: block 9: comm poc: lblock 0 mapped to illegal pblock 9
>>>>>> (length 1)
>>>>>> [   89.694709][ T3890] EXT4-fs error (device loop0):
>>>>>> ext4_map_blocks:577:
>>>>> Please note that maliciously corrupted file system is considered low
>>>>> priority by ext4 developers.
>>>> FWIW, I am seeing occasional hangs in ext4_fallocate with 6.6-rc4 as
>>>> well,
>>>> just doing database development on my laptop.
>>> Unless you are using a corrupted file system (e.g., there are EXT4-fs
>>> error messages in dmesg), it's likely a different issue.
>> Indeed quite possibly unrelated - the only reason that I mentioned it
>> here was
>> the fact that so far I've only hit it in 6.6-rc*, not earlier.
>
> Since the issue that I reported is not reproducable locally, to confirm
> if it happens on the mainline kernel or not,
> we did a #syz test: https://github.com/torvalds/linux master
>
> But it wasn't reproduced on the mainline kernel so it is very unlikely
> that the issue you are seeing is same as this one.

Shreeya Patel, Andres Freund, please help me out here, as this
regression is in the list of tracked issues, but I'm a bit lost about
the state of things:

Shreeya Patel: if I understand what you wrote above correctly, the issue
(which maybe only happened with a maliciously corrupted fs, which is
kinda unsupported) doesn't happen anymore? Then I guess I can remove
this thread from the tracking. That being said:

Andres Freund: as it seems you have a different issue, have you or
somebody else done anything to get us closer to a solution (like a
bisection)? Doesn't look like it, but it's quite possible that I missed
something. I just want to ensure that this is not lost if you care about
the problems you encountered; it makes me wonder if reporting it separately
(e.g. in a new thread) might be wise.

Ciao, Thorsten


>> extent logical block 18624, len 15616; IO logical block 18624, len 320
>> Oct 03 07:38:11 alap5 kernel: wl0: disconnect from AP
>> 04:62:73:d2:2f:ff for new auth to fc:5b:39:c6:6e:9f
>> ...
>> Oct 03 07:38:11 alap5 kernel: wl0: Limiting TX power to 2 dBm as
>> advertised by fc:5b:39:c6:6e:9f
>> Oct 03 07:38:22 alap5 kernel: EXT4-fs warning (device nvme1n1p1):
>> ext4_convert_unwritten_extents_endio:3743: Inode (82984800) finished:
>> extent logical block 152080, len 9856; IO logical block 152080, len 2048
>> Oct 03 07:38:22 alap5 kernel: EXT4-fs warning (device nvme1n1p1):
>> ext4_convert_unwritten_extents_endio:3743: Inode (82984800) finished:
>> extent logical block 154128, len 7808; IO logical block 154128, len 768
>>
>> I'm somewhat surprised to see the "
>>
>>
>>> Can you give us details about (a) what kind of block device; are you
>>> using dm-crypt, or just a HDD or a SSD?  If an SSD, what kind of SSD?
>> NVMe, Samsung SSD 970 EVO Plus 2TB. No dm_crypt, raid or whatever.
>>
>> smartctl / nvme smart-log indicates no data correctness errors or
>> such. Not
>> that that's a guarantee of anything.
>>
>> A forced fsck doesn't find any corruption, but did report a number of
>> Inode 82984754 extent tree (at level 2) could be narrower.  Optimize<y>?
>> style "issues".
>>
>>
>>> What CPU architecture is it?
>> x86-64 (i9-9880H in a ThinkPad X1 Extreme 2nd). The laptop does have a
>> battery
>> that's not, uhm, perfect, anymore (turning off while supposedly still
>> having
>> around 15% capacity), so there have been a few out-of-power hard
>> "shutdowns".
>>
>>
>>> And can you send me the output of dumpe2fs -h <block device>?
>> dumpe2fs 1.47.0 (5-Feb-2023)
>> Filesystem volume name:   <none>
>> Last mounted on:          /srv
>> Filesystem UUID:          b77a2f23-bb0d-48bf-8c36-327c73460cb8
>> Filesystem magic number:  0xEF53
>> Filesystem revision #:    1 (dynamic)
>> Filesystem features:      has_journal ext_attr resize_inode dir_index
>> filetype needs_recovery extent 64bit flex_bg sparse_super large_file
>> huge_file dir_nlink extra_isize metadata_csum
>> Filesystem flags:         signed_directory_hash
>> Default mount options:    user_xattr acl
>> Filesystem state:         clean
>> Errors behavior:          Continue
>> Filesystem OS type:       Linux
>> Inode count:              122101760
>> Block count:              488378368
>> Reserved block count:     0
>> Overhead clusters:        7947216
>> Free blocks:              377442743
>> Free inodes:              122010897
>> First block:              0
>> Block size:               4096
>> Fragment size:            4096
>> Group descriptor size:    64
>> Reserved GDT blocks:      1024
>> Blocks per group:         32768
>> Fragments per group:      32768
>> Inodes per group:         8192
>> Inode blocks per group:   512
>> Flex block group size:    16
>> Filesystem created:       Thu Jan  2 20:13:09 2020
>> Last mount time:          Tue Oct  3 07:35:18 2023
>> Last write time:          Tue Oct  3 07:35:18 2023
>> Mount count:              3
>> Maximum mount count:      -1
>> Last checked:             Mon Oct  2 12:55:12 2023
>> Check interval:           0 (<none>)
>> Lifetime writes:          7662 GB
>> Reserved blocks uid:      0 (user root)
>> Reserved blocks gid:      0 (group root)
>> First inode:              11
>> Inode size:              256
>> Required extra isize:     32
>> Desired extra isize:      32
>> Journal inode:            8
>> Default directory hash:   half_md4
>> Directory Hash Seed:      03b91d34-4d2d-44b4-ac94-0c055f696e8a
>> Journal backup:           inode blocks
>> Checksum type:            crc32c
>> Checksum:                 0xbee0f98e
>> Journal features:         journal_incompat_revoke journal_64bit
>> journal_checksum_v3
>> Total journal size:       1024M
>> Total journal blocks:     262144
>> Max transaction length:   262144
>> Fast commit length:       0
>> Journal sequence:         0x0004f764
>> Journal start:            23369
>> Journal checksum type:    crc32c
>> Journal checksum:         0x66761981
>>
>>
>>> And while the file system is mounted, please send the contents of
>>> /proc/fs/<block-device>/options, e.g.:
>>>
>>> % cat /proc/fs/ext4/dm-0/options
>> cat /proc/fs/ext4/nvme1n1p1/options
>> rw
>> bsddf
>> nogrpid
>> block_validity
>> dioread_nolock
>> nodiscard
>> delalloc
>> nowarn_on_error
>> journal_checksum
>> barrier
>> auto_da_alloc
>> user_xattr
>> acl
>> noquota
>> resuid=0
>> resgid=0
>> errors=continue
>> commit=5
>> min_batch_time=0
>> max_batch_time=15000
>> stripe=0
>> data=ordered
>> inode_readahead_blks=32
>> init_itable=10
>> max_dir_size_kb=0
>>
>>> And finally, how full was the file system?  What is the output of "df
>>> <mountpoint>" and "df -i <mountpoint>".
>> root@alap5:/# df /srv/
>> Filesystem      1K-blocks      Used  Available Use% Mounted on
>> /dev/nvme1n1p1 1921724608 542905720 1378802504  29% /srv
>> root@alap5:/# df -i /srv/
>> Filesystem        Inodes IUsed     IFree IUse% Mounted on
>> /dev/nvme1n1p1 122101760 91292 122010468    1% /srv
>>
>>
>> However, fullness of this filesystem varies substantially over time,
>> as I use
>> it for developing / testing postgres, albeit primarily when
>> travelling.  I was
>> e.g. benchmarking bulk-loading of data (knowingly on a mediocre SSD)
>> at the
>> time of the last crash, space usage during that goes up by a few
>> hundred GB,
>> with the data being deleted afterwards.
>>
>> Greetings,
>>
>> Andres Freund
>
>

2023-10-10 16:25:54

by Andres Freund

[permalink] [raw]
Subject: Re: [syzbot] INFO: task hung in ext4_fallocate

Hi,

On 2023-10-10 11:21:47 +0200, Thorsten Leemhuis wrote:
> Andres Freund: as it seems you have a different issue, have you or
> somebody else done anything to get us closer to a solution (like a
> bisection)? Doesn't look like it, but it's quite possible that I missed
> something.

I've indeed not performed a bisection - unfortunately it doesn't quite seem
viable at this point; I've only hit the issue after using the kernel
interactively for several hours. Being confident that some kernel version
doesn't hit the issue would require running that kernel for quite a long
time...


> I just want to ensure that this is not lost if you care about the problems
> you encountered; makes me wonder if reporting it separately (e.g. in a new
> thread) might be wise.

Yes, probably a good idea. I'll try to do so soon.

Greetings,

Andres Freund

2023-10-12 20:59:20

by Shreeya Patel

[permalink] [raw]
Subject: Re: [syzbot] INFO: task hung in ext4_fallocate


On 10/10/23 14:51, Thorsten Leemhuis wrote:
> On 05.10.23 11:04, Shreeya Patel wrote:
>> On 04/10/23 06:12, Andres Freund wrote:
>>> On 2023-10-03 19:25:05 -0400, Theodore Ts'o wrote:
>>>> On Tue, Oct 03, 2023 at 07:11:38AM -0700, Andres Freund wrote:
>>>>> On 2023-09-18 18:07:54 -0400, Theodore Ts'o wrote:
>>>>>> On Mon, Sep 18, 2023 at 08:13:30PM +0530, Shreeya Patel wrote:
>>>>>>> When I tried to reproduce this issue on mainline linux kernel
>>>>>>> using the
>>>>>>> reproducer provided by syzbot, I see an endless loop of following
>>>>>>> errors :-
>>>>>>>
>>>>>>> [   89.689751][ T3922] loop1: detected capacity change from 0 to 512
>>>>>>> [   89.690514][ T3916] EXT4-fs error (device loop4):
>>>>>>> ext4_map_blocks:577:
>>>>>>> inode #3: block 9: comm poc: lblock 0 mapped to illegal pblock 9
>>>>>>> (length 1)
>>>>>>> [   89.694709][ T3890] EXT4-fs error (device loop0):
>>>>>>> ext4_map_blocks:577:
>>>>>> Please note that maliciously corrupted file system is considered low
>>>>>> priority by ext4 developers.
>>>>> FWIW, I am seeing occasional hangs in ext4_fallocate with 6.6-rc4 as
>>>>> well,
>>>>> just doing database development on my laptop.
>>>> Unless you are using a corrupted file system (e.g., there are EXT4-fs
>>>> error messages in dmesg), it's likely a different issue.
>>> Indeed quite possibly unrelated - the only reason that I mentioned it
>>> here was
>>> the fact that so far I've only hit it in 6.6-rc*, not earlier.
>> Since the issue that I reported is not reproducible locally, to confirm
>> if it happens on the mainline kernel or not,
>> we did a #syz test: https://github.com/torvalds/linux master
>>
>> But it wasn't reproduced on the mainline kernel so it is very unlikely
>> that the issue you are seeing is same as this one.
> Shreeya Patel, Andres Freund, please help me out here, as this
> regression is in the list of tracked issues, but I'm a bit lost about
> the state of things:
>
> Shreeya Patel: if I understand what you wrote above right the issue
> (which maybe only happened with a maliciously corrupted fs, which is
> kinda unsupported) doesn't happen anymore? Then I guess I can remove
> this thread from the tracking. That being said:
Hi Thorsten,

Just to clarify, this issue is still seen on upstream 5.10 and earlier
kernels.
I can confirm that we did not see this issue being reproduced on the
mainline kernel using the #syz test feature.

So it is possible that it has been fixed between 5.10 and the mainline
kernel, but we don't know the fix commit.

Also, as Ted said, this is a low-priority issue, so I will mark it as
resolved for now so regzbot can stop tracking it.


#regzbot resolve: fix commit is unknown but it has been resolved
somewhere between 5.10 and mainline kernel.


Thanks,
Shreeya Patel
> Andres Freund: as it seems you have a different issue, have you or
> somebody else done anything to get us closer to a solution (like a
> bisection)? Doesn't look like it, but it's quite possible that I missed
> something. I just want to ensure that this is not lost if you care about
> the problems you encountered; makes me wonder if reporting it separately
> (e.g. in a new thread) might be wise.
>
> Ciao, Thorsten
>
>
>>> [remainder of Andres's message snipped; the full dmesg excerpt,
>>> dumpe2fs output, mount options, and df figures are quoted in full
>>> earlier in the thread]

2023-10-14 01:56:49

by Theodore Ts'o

[permalink] [raw]
Subject: Re: [syzbot] INFO: task hung in ext4_fallocate

On Fri, Oct 13, 2023 at 02:28:52AM +0530, Shreeya Patel wrote:
>
> Just to clarify, this issue is still seen on upstream 5.10 and earlier
> kernels.
> I can confirm that we did not see this issue being reproduced on mainline
> kernel using the #syz test feature.

So I'm confused. In the original post on this thread[1], it was stated:

> When I tried to reproduce this issue on mainline linux kernel using the
> reproducer provided by syzbot, I see an endless loop of following errors :-

and

> #regzbot introduced: v6.6-rc1..

[1] https://lore.kernel.org/all/[email protected]/

... and now you're saying this *wasn't* reproduced upstream? And of
course, because this was a private syzbot instance, there was no syzbot
dashboard given, and the reproducer had the serial number filed off
(there is a URL in the comments at the beginning of the reproducer for
a *reason*), so *I* couldn't run "#syz test".

Huh?

Reading between the lines, it looks like the only similarity between
what was happening in the 5.10 kernel and the mainline kernel was that
it did not *hang* the kernel, but it was generating a stream of
EXT4-fs error messages. Well, if you take a deliberately corrupted
file system, and mount it with errors=continue, and then keep pounding it
with lots of file system operations, of *course* it will continually
spew ext4 error messages. That is "Working As Intended" by
*definition*.

And this is why I maintain that irresponsible use of syzbot is
effectively a denial of service attack on upstream maintainers.

At this point, just as upstream policy is that debugging ancient LTS
kernels is not something upstream maintainers do (and attempting to
trick them to debug it by claiming it is found in mainline is *really*
uncool), if there are bugs that are reported using private syzbot
instances, or where there isn't a URL to a public syzbot dashboard,
they should be treated as P4 priority or worse....

- Ted

2023-10-17 15:04:32

by Shreeya Patel

[permalink] [raw]
Subject: Re: [syzbot] INFO: task hung in ext4_fallocate


On 14/10/23 07:24, Theodore Ts'o wrote:

Hi Ted,


> On Fri, Oct 13, 2023 at 02:28:52AM +0530, Shreeya Patel wrote:
>> Just to clarify, this issue is still seen on upstream 5.10 and earlier
>> kernels.
>> I can confirm that we did not see this issue being reproduced on mainline
>> kernel using the #syz test feature.
> So I'm confused. In the original post on this thread[1], it was stated:
>
>> When I tried to reproduce this issue on mainline linux kernel using the
>> reproducer provided by syzbot, I see an endless loop of following errors :-

I stated this because at that time we didn't know we could use the #syz
test feature on buganizer tickets as well.
I locally tested it using the reproducer and was seeing an endless loop
of errors, which I thought was not expected behaviour.

This was happening on mainline as well as 5.10 kernel.

> and
>
>> #regzbot introduced: v6.6-rc1..
> [1] https://lore.kernel.org/all/[email protected]/
>
> ... and now you're saying this *wasn't* reproduced upstream? And of
> course, because this was a private syzbot instance, there was no syzbot
> dashboard given, and the reproducer had the serial number filed off
> (there is a URL in the comments at the beginning of the reproducer for
> a *reason*), so *I* couldn't run "#syz test".
>
> Huh?
>
> Reading between the lines, it looks like the only similarity between
> what was happening in the 5.10 kernel and the mainline kernel was that
> it did not *hang* the kernel, but it was generating a stream of
> EXT4-fs error messages. Well, if you take a deliberately corrupted
> file system, and mount it with errors=continue, and then keep pounding it
> with lots of file system operations, of *course* it will continually
> spew ext4 error messages. That is "Working As Intended" by
> *definition*.


This is what I actually wanted to understand: whether the loops of errors
were something of concern or not.
I'm not an expert in the area of filesystems, so I assumed the loops of
errors that I was seeing weren't intended behaviour.


> And this is why I maintain that irresponsible use of syzbot is
> effectively a denial of service attack on upstream maintainers.
>
> At this point, just as upstream policy is that debugging ancient LTS
> kernels is not something upstream maintainers do (and attempting to
> trick them to debug it by claiming it is found in mainline is *really*
> uncool), if there are bugs that are reported using private syzbot
> instances, or where there isn't a URL to a public syzbot dashboard,
> they should be treated as P4 priority or worse....


We never had the intention to trick you into debugging this issue. When
I reported it, I did so on the basis of what I saw after running the
reproducer locally.
Some days later, when I came to know we could use #syz test in buganizer,
I tested the mainline and 5.10 kernels again through it,
but I didn't see it getting reproduced on the mainline kernel, and hence
I said it didn't reproduce upstream in my second email.

I should have given you more context when I said it doesn't happen in
upstream kernel so I'm sorry for the misunderstanding.


Thanks,
Shreeya Patel


> - Ted
>