2023-05-17 11:33:08

by yang lan

Subject: INFO: task hung in blkdev_open bug

Hi,

We used our modified Syzkaller to fuzz the Linux kernel and found the
following issue:

Head Commit: f1b32fda06d2cfb8eea9680b0ba7a8b0d5b81eeb
Git Tree: stable

Console output: https://pastebin.com/raw/U6yhCfpy
Kernel config: https://pastebin.com/raw/BiggLxRg
C reproducer: https://pastebin.com/raw/6mg7uF8W
Syz reproducer: https://pastebin.com/raw/bSjNi7Vc


root@syzkaller:~# uname -a
Linux syzkaller 5.10.179 #1 SMP PREEMPT Thu Apr 27 16:22:48 CST 2023
x86_64 GNU/Linux
root@syzkaller:~# gcc poc_blkdev.c -o poc_blkdev
root@syzkaller:~# ./poc_blkdev
[ 126.866571][ T1949] block nbd0: Possible stuck request
000000002439ca71: control (read@0,1024B). Runtime 30 seconds
[ 126.867464][ T1949] block nbd0: Possible stuck request
000000003e3fb642: control (read@1024,1024B). Runtime 30 seconds
[ 156.948517][ T1949] block nbd0: Possible stuck request
000000002439ca71: control (read@0,1024B). Runtime 60 seconds
[ 156.949284][ T1949] block nbd0: Possible stuck request
000000003e3fb642: control (read@1024,1024B). Runtime 60 seconds
[ 187.029585][ T1949] block nbd0: Possible stuck request
000000002439ca71: control (read@0,1024B). Runtime 90 seconds
[ 187.030378][ T1949] block nbd0: Possible stuck request
000000003e3fb642: control (read@1024,1024B). Runtime 90 seconds
[ 217.110282][ T1949] block nbd0: Possible stuck request
000000002439ca71: control (read@0,1024B). Runtime 120 seconds
[ 217.110314][ T1949] block nbd0: Possible stuck request
000000003e3fb642: control (read@1024,1024B). Runtime 120 seconds
[ 247.190786][ T1949] block nbd0: Possible stuck request
000000002439ca71: control (read@0,1024B). Runtime 150 seconds
[ 247.191613][ T1949] block nbd0: Possible stuck request
000000003e3fb642: control (read@1024,1024B). Runtime 150 seconds
[ 277.271159][ T1949] block nbd0: Possible stuck request
000000002439ca71: control (read@0,1024B). Runtime 180 seconds
[ 277.271982][ T1949] block nbd0: Possible stuck request
000000003e3fb642: control (read@1024,1024B). Runtime 180 seconds
[ 284.951335][ T1552] INFO: task systemd-udevd:7629 blocked for more
than 143 seconds.
[ 284.952044][ T1552] Not tainted 5.10.179 #1
[ 284.952368][ T1552] "echo 0 >
/proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 284.952928][ T1552] task:systemd-udevd state:D stack:26160 pid:
7629 ppid: 4451 flags:0x00004304
[ 284.953533][ T1552] Call Trace:
[ 284.953766][ T1552] __schedule+0xae4/0x1e10
[ 284.954061][ T1552] ? __sched_text_start+0x8/0x8
[ 284.954400][ T1552] ? preempt_schedule_thunk+0x16/0x18
[ 284.954754][ T1552] ? preempt_schedule_common+0x37/0xc0
[ 284.955112][ T1552] schedule+0xc3/0x270
[ 284.955389][ T1552] io_schedule+0x17/0x70
[ 284.955671][ T1552] wait_on_page_bit_common+0x542/0xcb0
[ 284.956032][ T1552] ? find_get_pages_range_tag+0xe40/0xe40
[ 284.956407][ T1552] ? bdev_disk_changed+0x3f0/0x3f0
[ 284.956746][ T1552] ? end_buffer_async_write+0x5c0/0x5c0
[ 284.957112][ T1552] ? find_get_pages_contig+0xc20/0xc20
[ 284.957473][ T1552] do_read_cache_page+0x66b/0x1000
[ 284.957810][ T1552] ? enable_ptr_key_workfn+0x30/0x30
[ 284.958167][ T1552] read_part_sector+0xf6/0x610
[ 284.958486][ T1552] ? adfspart_check_ADFS+0x800/0x800
[ 284.958834][ T1552] adfspart_check_ICS+0x9a/0xd00
[ 284.959161][ T1552] ? pointer+0x790/0x790
[ 284.959442][ T1552] ? adfspart_check_ADFS+0x800/0x800
[ 284.959792][ T1552] ? snprintf+0xae/0xe0
[ 284.960067][ T1552] ? vsprintf+0x30/0x30
[ 284.960353][ T1552] ? adfspart_check_ADFS+0x800/0x800
[ 284.960700][ T1552] blk_add_partitions+0x47a/0xe70
[ 284.961035][ T1552] bdev_disk_changed+0x249/0x3f0
[ 284.961787][ T1552] __blkdev_get+0xdb8/0x15b0
[ 284.962139][ T1552] ? rcu_read_lock_sched_held+0xd0/0xd0
[ 284.962512][ T1552] ? __blkdev_put+0x720/0x720
[ 284.962826][ T1552] ? devcgroup_check_permission+0x1ac/0x470
[ 284.963209][ T1552] blkdev_get+0xd1/0x250
[ 284.963490][ T1552] blkdev_open+0x20a/0x290
[ 284.963783][ T1552] do_dentry_open+0x69a/0x1240
[ 284.964097][ T1552] ? bd_acquire+0x2c0/0x2c0
[ 284.964400][ T1552] path_openat+0xd7d/0x2720
[ 284.964701][ T1552] ? path_lookupat.isra.41+0x560/0x560
[ 284.965059][ T1552] ? lock_downgrade+0x6a0/0x6a0
[ 284.965379][ T1552] ? alloc_set_pte+0x448/0x1b00
[ 284.965697][ T1552] ? xas_find+0x325/0x900
[ 284.965986][ T1552] ? find_held_lock+0x33/0x1c0
[ 284.966316][ T1552] do_filp_open+0x1a4/0x270
[ 284.966617][ T1552] ? may_open_dev+0xf0/0xf0
[ 284.966921][ T1552] ? rwlock_bug.part.1+0x90/0x90
[ 284.967252][ T1552] ? do_raw_spin_unlock+0x172/0x260
[ 284.967595][ T1552] ? __alloc_fd+0x2a9/0x620
[ 284.967907][ T1552] do_sys_openat2+0x5db/0x8c0
[ 284.968218][ T1552] ? file_open_root+0x400/0x400
[ 284.968541][ T1552] do_sys_open+0xca/0x140
[ 284.968830][ T1552] ? filp_open+0x70/0x70
[ 284.969114][ T1552] ? __secure_computing+0x100/0x360
[ 284.969458][ T1552] do_syscall_64+0x2d/0x70
[ 284.969754][ T1552] entry_SYSCALL_64_after_hwframe+0x61/0xc6
[ 284.970146][ T1552] RIP: 0033:0x7fd2bc544840
[ 284.970448][ T1552] RSP: 002b:00007ffe6f0c4778 EFLAGS: 00000246
ORIG_RAX: 0000000000000002
[ 284.971550][ T1552] RAX: ffffffffffffffda RBX: 000055f0dc215e90
RCX: 00007fd2bc544840
[ 284.972099][ T1552] RDX: 000055f0db45cfe3 RSI: 00000000000a0800
RDI: 000055f0dc229760
[ 284.972622][ T1552] RBP: 00007ffe6f0c48f0 R08: 000055f0db45c670
R09: 0000000000000010
[ 284.973143][ T1552] R10: 000055f0db45cd0c R11: 0000000000000246
R12: 00007ffe6f0c4840
[ 284.973666][ T1552] R13: 000055f0dc22aa70 R14: 0000000000000003
R15: 000000000000000e
[ 284.974207][ T1552]
[ 284.974207][ T1552] Showing all locks held in the system:
[ 284.974729][ T1552] 1 lock held by khungtaskd/1552:
[ 284.975057][ T1552] #0: ffffffff8ae9cea0
(rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x52/0x29f
[ 284.975721][ T1552] 2 locks held by kworker/u8:0/1940:
[ 284.976066][ T1552] #0: ffff888018636938
((wq_completion)knbd0-recv){+.+.}-{0:0}, at:
process_one_work+0x8e2/0x15d0
[ 284.976782][ T1552] #1: ffff8880110f7e00
((work_completion)(&args->work)){+.+.}-{0:0}, at:
process_one_work+0x917/0x15d0
[ 284.977539][ T1552] 1 lock held by in:imklog/7416:
[ 284.977860][ T1552] #0: ffff888041b50ff0
(&f->f_pos_lock){+.+.}-{3:3}, at: __fdget_pos+0xeb/0x110
[ 284.978476][ T1552] 1 lock held by systemd-udevd/7629:
[ 284.978819][ T1552] #0: ffff88800f813480
(&bdev->bd_mutex){+.+.}-{3:3}, at: __blkdev_get+0x45e/0x15b0
[ 284.979450][ T1552]
[ 284.979606][ T1552] =============================================
[ 284.979606][ T1552]
[ 284.980153][ T1552] NMI backtrace for cpu 0
[ 284.980443][ T1552] CPU: 0 PID: 1552 Comm: khungtaskd Not tainted 5.10.179 #1
[ 284.980915][ T1552] Hardware name: QEMU Standard PC (i440FX + PIIX,
1996), BIOS 1.12.0-1 04/01/2014
[ 284.981513][ T1552] Call Trace:
[ 284.981739][ T1552] dump_stack+0x106/0x162
[ 284.982026][ T1552] nmi_cpu_backtrace.cold.8+0x44/0xd5
[ 284.982382][ T1552] ? lapic_can_unplug_cpu+0x70/0x70
[ 284.982725][ T1552] nmi_trigger_cpumask_backtrace+0x1aa/0x1e0
[ 284.983117][ T1552] watchdog+0xd5a/0xf80
[ 284.983398][ T1552] ? hungtask_pm_notify+0xa0/0xa0
[ 284.983726][ T1552] kthread+0x3aa/0x490
[ 284.983994][ T1552] ? __kthread_cancel_work+0x190/0x190
[ 284.984358][ T1552] ret_from_fork+0x1f/0x30
[ 284.984703][ T1552] Sending NMI from CPU 0 to CPUs 1:
[ 284.985306][ C1] NMI backtrace for cpu 1
[ 284.985309][ C1] CPU: 1 PID: 7417 Comm: rs:main Q:Reg Not
tainted 5.10.179 #1
[ 284.985312][ C1] Hardware name: QEMU Standard PC (i440FX + PIIX,
1996), BIOS 1.12.0-1 04/01/2014
[ 284.985314][ C1] RIP: 0010:check_memory_region+0x11c/0x1e0
[ 284.985318][ C1] Code: 00 fc ff df 49 01 d9 49 01 c0 eb 03 49 89
c0 4d 39 c8 0f 84 8a 00 00 00 41 80 38 00 49 8d 40 01 74 ea eb b0 41
bc 08 00 00 00 <45> 29 c4 4d 89 c8 4b 8d 1c 0c eb 0c 49 83 c0 01 48 89
d8 49 39 d8
[ 284.985320][ C1] RSP: 0018:ffff888046fef988 EFLAGS: 00000202
[ ... RAX/RBX/RCX register line lost here: the console output was
interleaved with a "Message from syslogd@syzkaller ... Kernel panic"
broadcast ... ]
[ 284.985326][ C1] RDX: 0000000000000001 RSI:
00000000000005c2 RDI: ffff88804e548a3e
[ 284.985328][ C1] RBP: ffffed1009ca9200 R08: 0000000000000007
R09: ffffed1009ca9147
[ 284.985331][ C1] R10: ffff88804e548fff R11: ffffed1009ca91ff
R12: 0000000000000008
[ 284.985333][ C1] R13: 00007f74bc025152 R14: 0000000000000000
R15: 00000000000005c2
[ 284.985335][ C1] FS: 00007f74c5e32700(0000)
GS:ffff88807ec00000(0000) knlGS:0000000000000000
[ 284.985337][ C1] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 284.985339][ C1] CR2: 00007f098c751008 CR3: 0000000049b08000
CR4: 0000000000350ee0
[ 284.985341][ C1] DR0: 0000000000000000 DR1: 0000000000000000
DR2: 0000000000000000
[ 284.985343][ C1] DR3: 0000000000000000 DR6: 00000000fffe0ff0
DR7: 0000000000000400
[ 284.985345][ C1] Call Trace:
[ 284.985346][ C1] copyin+0xde/0x110
[ 284.985348][ C1] iov_iter_copy_from_user_atomic+0x404/0xcf0
[ 284.985349][ C1] ? rcu_is_watching+0x11/0xb0
[ 284.985351][ C1] ? __mark_inode_dirty+0x13b/0xd90
[ 284.985352][ C1] ? current_time+0xb6/0x120
[ 284.985354][ C1] generic_perform_write+0x337/0x4d0
[ 284.985356][ C1] ? file_update_time+0xd0/0x470
[ 284.985357][ C1] ? filemap_check_errors+0x150/0x150
[ 284.985359][ C1] ? inode_update_time+0xb0/0xb0
[ 284.985360][ C1] ? down_write+0xdb/0x150
[ 284.985362][ C1] ext4_buffered_write_iter+0x20d/0x470
[ 284.985363][ C1] ext4_file_write_iter+0x426/0x1400
[ 284.985365][ C1] ? __lock_acquire+0x1839/0x5e90
[ 284.985366][ C1] ? lock_release+0x631/0x660
[ 284.985368][ C1] ? ext4_buffered_write_iter+0x470/0x470
[ 284.985370][ C1] ? lockdep_hardirqs_on_prepare+0x3f0/0x3f0
[ 284.985371][ C1] new_sync_write+0x491/0x660
[ 284.985373][ C1] ? new_sync_read+0x6e0/0x6e0
[ 284.985374][ C1] ? __fdget_pos+0xeb/0x110
[ 284.985376][ C1] ? rcu_read_lock_held+0xb0/0xb0
[ 284.985377][ C1] vfs_write+0x671/0xa90
[ 284.985378][ C1] ksys_write+0x11f/0x240
[ 284.985380][ C1] ? __ia32_sys_read+0xb0/0xb0
[ 284.985381][ C1] ? syscall_enter_from_user_mode+0x26/0x70
[ 284.985383][ C1] do_syscall_64+0x2d/0x70
[ 284.985385][ C1] entry_SYSCALL_64_after_hwframe+0x61/0xc6
[ 284.985386][ C1] RIP: 0033:0x7f74c88761cd
[ 284.985390][ C1] Code: c2 20 00 00 75 10 b8 01 00 00 00 0f 05 48
3d 01 f0 ff ff 73 31 c3 48 83 ec 08 e8 ae fc ff ff 48 89 04 24 b8 01
00 00 00 0f 05 <48> 8b 3c 24 48 89 c2 e8 f7 fc ff ff 48 89 d0 48 83 c4
08 48 3d 01
[ 284.985392][ C1] RSP: 002b:00007f74c5e31590 EFLAGS: 00000293
ORIG_RAX: 0000000000000001
[ 284.985396][ C1] RAX: ffffffffffffffda RBX: 00007f74bc024b90
RCX: 00007f74c88761cd
[ 284.985398][ C1] RDX: 0000000000000d21 RSI: 00007f74bc024b90
RDI: 0000000000000006
[ 284.985400][ C1] RBP: 0000000000000000 R08: 0000000000000000
R09: 0000000000000000
[ 284.985403][ C1] R10: 0000000000000000 R11: 0000000000000293
R12: 00007f74bc024910
[ 284.985405][ C1] R13: 00007f74c5e315b0 R14: 0000558be67cb360
R15: 0000000000000d21
[ 284.986064][ T1552] Kernel panic - not syncing: hung_task: blocked tasks
[ 285.008567][ T1552] CPU: 0 PID: 1552 Comm: khungtaskd Not tainted 5.10.179 #1
[ 285.009039][ T1552] Hardware name: QEMU Standard PC (i440FX + PIIX,
1996), BIOS 1.12.0-1 04/01/2014
[ 285.009625][ T1552] Call Trace:
[ 285.009848][ T1552] dump_stack+0x106/0x162
[ 285.010138][ T1552] panic+0x2d8/0x6fb
[ 285.010395][ T1552] ? print_oops_end_marker.cold.9+0x15/0x15
[ 285.010786][ T1552] ? cpumask_next+0x3c/0x40
[ 285.011079][ T1552] ? printk_safe_flush+0xd7/0x120
[ 285.011408][ T1552] ? watchdog.cold.5+0x5/0x15f
[ 285.011719][ T1552] ? watchdog+0xb36/0xf80
[ 285.012003][ T1552] watchdog.cold.5+0x16/0x15f
[ 285.012312][ T1552] ? hungtask_pm_notify+0xa0/0xa0
[ 285.012639][ T1552] kthread+0x3aa/0x490
[ 285.012912][ T1552] ? __kthread_cancel_work+0x190/0x190
[ 285.013269][ T1552] ret_from_fork+0x1f/0x30
[ 285.013915][ T1552] Kernel Offset: disabled
[ 285.014241][ T1552] Rebooting in 86400 seconds..

Please let me know if I can provide any more information, and I hope I
didn't mess up this bug report.

Regards,

Yang


2023-05-17 12:34:14

by Matthew Wilcox

Subject: Re: INFO: task hung in blkdev_open bug

On Wed, May 17, 2023 at 07:12:23PM +0800, yang lan wrote:
> root@syzkaller:~# uname -a
> Linux syzkaller 5.10.179 #1 SMP PREEMPT Thu Apr 27 16:22:48 CST 2023

Does this reproduce on current kernels, eg 6.4-rc2?

> root@syzkaller:~# gcc poc_blkdev.c -o poc_blkdev

You need to include poc_blkdev.c as part of your report.

> Please let me know if I can provide any more information, and I hope I
> didn't mess up this bug report.

I suspect you've done something that is known to not work (as root,
so we won't necessarily care). But I can't really say without seeing
what you've done. Running syzkaller is an art, and most people aren't
good at it. It takes a lot of work to submit good quality bug reports,
see this article:

https://blog.regehr.org/archives/2037

2023-05-17 16:40:32

by yang lan

Subject: Re: INFO: task hung in blkdev_open bug

Hi,

Thank you for your response.

> Does this reproduce on current kernels, eg 6.4-rc2?

Yeah, it can be reproduced on kernel 6.4-rc2.

root@syzkaller:~# uname -a
Linux syzkaller 6.4.0-rc2 #1 SMP PREEMPT_DYNAMIC Wed May 17 22:58:52
CST 2023 x86_64 GNU/Linux
root@syzkaller:~# gcc poc_blkdev.c -o poc_blkdev
root@syzkaller:~# ./poc_blkdev
[ 128.718051][ T7121] nbd0: detected capacity change from 0 to 4
[ 158.917678][ T998] block nbd0: Possible stuck request
ffff888016f08000: control (read@0,2048B). Runtime 30 seconds
[ 188.997677][ T998] block nbd0: Possible stuck request
ffff888016f08000: control (read@0,2048B). Runtime 60 seconds
[ 219.077191][ T998] block nbd0: Possible stuck request
ffff888016f08000: control (read@0,2048B). Runtime 90 seconds
[ 249.157312][ T998] block nbd0: Possible stuck request
ffff888016f08000: control (read@0,2048B). Runtime 120 seconds
[ 279.237409][ T998] block nbd0: Possible stuck request
ffff888016f08000: control (read@0,2048B). Runtime 150 seconds
[ 309.317843][ T998] block nbd0: Possible stuck request
ffff888016f08000: control (read@0,2048B). Runtime 180 seconds
[ 339.397950][ T998] block nbd0: Possible stuck request
ffff888016f08000: control (read@0,2048B). Runtime 210 seconds
[ 369.478031][ T998] block nbd0: Possible stuck request
ffff888016f08000: control (read@0,2048B). Runtime 240 seconds
[ 399.558253][ T998] block nbd0: Possible stuck request
ffff888016f08000: control (read@0,2048B). Runtime 270 seconds
[ 429.638372][ T998] block nbd0: Possible stuck request
ffff888016f08000: control (read@0,2048B). Runtime 300 seconds
[ 459.718454][ T998] block nbd0: Possible stuck request
ffff888016f08000: control (read@0,2048B). Runtime 330 seconds
[ 489.798571][ T998] block nbd0: Possible stuck request
ffff888016f08000: control (read@0,2048B). Runtime 360 seconds
[ 519.878643][ T998] block nbd0: Possible stuck request
ffff888016f08000: control (read@0,2048B). Runtime 390 seconds
[ 549.958966][ T998] block nbd0: Possible stuck request
ffff888016f08000: control (read@0,2048B). Runtime 420 seconds
[ 571.719145][ T30] INFO: task systemd-udevd:7123 blocked for more
than 143 seconds.
[ 571.719652][ T30] Not tainted 6.4.0-rc2 #1
[ 571.719900][ T30] "echo 0 >
/proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 571.720307][ T30] task:systemd-udevd state:D stack:26224
pid:7123 ppid:3998 flags:0x00004004
[ 571.720756][ T30] Call Trace:
[ 571.720923][ T30] <TASK>
[ 571.721073][ T30] __schedule+0x9ca/0x2630
[ 571.721348][ T30] ? firmware_map_remove+0x1e0/0x1e0
[ 571.721618][ T30] ? find_held_lock+0x33/0x1c0
[ 571.721866][ T30] ? lock_release+0x3b9/0x690
[ 571.722108][ T30] ? do_read_cache_folio+0x4ff/0xb20
[ 571.722447][ T30] ? lock_downgrade+0x6b0/0x6b0
[ 571.722785][ T30] ? mark_held_locks+0xb0/0x110
[ 571.723044][ T30] schedule+0xd3/0x1b0
[ 571.723264][ T30] io_schedule+0x1b/0x70
[ 571.723489][ T30] ? do_read_cache_folio+0x58c/0xb20
[ 571.723760][ T30] do_read_cache_folio+0x58c/0xb20
[ 571.724036][ T30] ? blkdev_readahead+0x20/0x20
[ 571.724319][ T30] ? __filemap_get_folio+0x8e0/0x8e0
[ 571.724588][ T30] ? __sanitizer_cov_trace_switch+0x53/0x90
[ 571.724885][ T30] ? __sanitizer_cov_trace_pc+0x1e/0x50
[ 571.725246][ T30] ? format_decode+0x1cf/0xb50
[ 571.725547][ T30] ? __sanitizer_cov_trace_pc+0x1e/0x50
[ 571.725837][ T30] ? fill_ptr_key+0x30/0x30
[ 571.726072][ T30] ? default_pointer+0x4a0/0x4a0
[ 571.726335][ T30] ? __isolate_free_page+0x220/0x220
[ 571.726608][ T30] ? filemap_fdatawrite_wbc+0x1c0/0x1c0
[ 571.726888][ T30] ? __sanitizer_cov_trace_pc+0x1e/0x50
[ 571.727172][ T30] ? read_part_sector+0x229/0x420
[ 571.727434][ T30] ? adfspart_check_ADFS+0x560/0x560
[ 571.727707][ T30] read_part_sector+0xfa/0x420
[ 571.727963][ T30] adfspart_check_POWERTEC+0x90/0x690
[ 571.728244][ T30] ? adfspart_check_ADFS+0x560/0x560
[ 571.728520][ T30] ? __kasan_slab_alloc+0x33/0x70
[ 571.728780][ T30] ? adfspart_check_ICS+0x8f0/0x8f0
[ 571.729889][ T30] ? snprintf+0xb2/0xe0
[ 571.730145][ T30] ? vsprintf+0x30/0x30
[ 571.730374][ T30] ? __sanitizer_cov_trace_pc+0x1e/0x50
[ 571.730659][ T30] ? adfspart_check_ICS+0x8f0/0x8f0
[ 571.730928][ T30] bdev_disk_changed+0x674/0x1260
[ 571.731189][ T30] ? write_comp_data+0x1f/0x70
[ 571.731439][ T30] ? iput+0xd0/0x780
[ 571.731646][ T30] blkdev_get_whole+0x186/0x260
[ 571.731886][ T30] blkdev_get_by_dev+0x4ce/0xae0
[ 571.732139][ T30] blkdev_open+0x140/0x2c0
[ 571.732366][ T30] do_dentry_open+0x6de/0x1450
[ 571.732612][ T30] ? blkdev_close+0x80/0x80
[ 571.732848][ T30] path_openat+0xd6d/0x26d0
[ 571.733084][ T30] ? lock_downgrade+0x6b0/0x6b0
[ 571.733336][ T30] ? vfs_path_lookup+0x110/0x110
[ 571.733591][ T30] do_filp_open+0x1bb/0x290
[ 571.733824][ T30] ? may_open_dev+0xf0/0xf0
[ 571.734061][ T30] ? __phys_addr_symbol+0x30/0x70
[ 571.734324][ T30] ? do_raw_spin_unlock+0x176/0x260
[ 571.734595][ T30] do_sys_openat2+0x5fd/0x980
[ 571.734837][ T30] ? file_open_root+0x3f0/0x3f0
[ 571.735087][ T30] ? seccomp_notify_ioctl+0xff0/0xff0
[ 571.735368][ T30] do_sys_open+0xce/0x140
[ 571.735596][ T30] ? filp_open+0x80/0x80
[ 571.735820][ T30] ? __secure_computing+0x1e3/0x340
[ 571.736090][ T30] do_syscall_64+0x38/0x80
[ 571.736325][ T30] entry_SYSCALL_64_after_hwframe+0x63/0xcd
[ 571.736626][ T30] RIP: 0033:0x7fb212210840
[ 571.736857][ T30] RSP: 002b:00007fffb37bbbe8 EFLAGS: 00000246
ORIG_RAX: 0000000000000002
[ 571.737269][ T30] RAX: ffffffffffffffda RBX: 0000560e09072e10
RCX: 00007fb212210840
[ 571.737651][ T30] RDX: 0000560e08e39fe3 RSI: 00000000000a0800
RDI: 0000560e090813b0
[ 571.738037][ T30] RBP: 00007fffb37bbd60 R08: 0000560e08e39670
R09: 0000000000000010
[ 571.738432][ T30] R10: 0000560e08e39d0c R11: 0000000000000246
R12: 00007fffb37bbcb0
[ 571.739563][ T30] R13: 0000560e09087a70 R14: 0000000000000003
R15: 000000000000000e
[ 571.739973][ T30] </TASK>
[ 571.740133][ T30]
[ 571.740133][ T30] Showing all locks held in the system:
[ 571.740495][ T30] 1 lock held by rcu_tasks_kthre/13:
[ 571.740758][ T30] #0: ffffffff8b6badd0
(rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at:
rcu_tasks_one_gp+0x2b/0xdb0
[ 571.741301][ T30] 1 lock held by rcu_tasks_trace/14:
[ 571.741571][ T30] #0: ffffffff8b6baad0
(rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at:
rcu_tasks_one_gp+0x2b/0xdb0
[ 571.742134][ T30] 1 lock held by khungtaskd/30:
[ 571.742385][ T30] #0: ffffffff8b6bb960
(rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x5b/0x300
[ 571.742947][ T30] 2 locks held by kworker/u8:0/50:
[ 571.743198][ T30] #0: ffff888016e7b138
((wq_completion)nbd0-recv){+.+.}-{0:0}, at:
process_one_work+0x94b/0x17b0
[ 571.743809][ T30] #1: ffff888011e4fdd0
((work_completion)(&args->work)){+.+.}-{0:0}, at:
process_one_work+0x984/0x17b0
[ 571.744393][ T30] 1 lock held by in:imklog/6784:
[ 571.744643][ T30] #0: ffff88801106e368
(&f->f_pos_lock){+.+.}-{3:3}, at: __fdget_pos+0xe9/0x100
[ 571.745122][ T30] 1 lock held by systemd-udevd/7123:
[ 571.745381][ T30] #0: ffff8880431854c8
(&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x24b/0xae0
[ 571.745885][ T30]
[ 571.746008][ T30] =============================================
[ 571.746008][ T30]
[ 571.746424][ T30] NMI backtrace for cpu 1
[ 571.746642][ T30] CPU: 1 PID: 30 Comm: khungtaskd Not tainted 6.4.0-rc2 #1
[ 571.746989][ T30] Hardware name: QEMU Standard PC (i440FX + PIIX,
1996), BIOS 1.12.0-1 04/01/2014
[ 571.747440][ T30] Call Trace:
[ 571.747606][ T30] <TASK>
[ 571.747764][ T30] dump_stack_lvl+0x91/0xf0
[ 571.747997][ T30] nmi_cpu_backtrace+0x21a/0x2b0
[ 571.748257][ T30] ? lapic_can_unplug_cpu+0xa0/0xa0
[ 571.748525][ T30] nmi_trigger_cpumask_backtrace+0x28c/0x2f0
[ 571.748830][ T30] watchdog+0xe4b/0x10c0
[ 571.749057][ T30] ? proc_dohung_task_timeout_secs+0x90/0x90
[ 571.749366][ T30] kthread+0x33b/0x430
[ 571.749596][ T30] ? kthread_complete_and_exit+0x40/0x40
[ 571.749891][ T30] ret_from_fork+0x1f/0x30
[ 571.750126][ T30] </TASK>
[ 571.750347][ T30] Sending NMI from CPU 1 to CPUs 0:
[ 571.750620][ C0] NMI backtrace for cpu 0
[ 571.750626][ C0] CPU: 0 PID: 3987 Comm: systemd-journal Not
tainted 6.4.0-rc2 #1
[ 571.750637][ C0] Hardware name: QEMU Standard PC (i440FX + PIIX,
1996), BIOS 1.12.0-1 04/01/2014
[ 571.750643][ C0] RIP: 0033:0x7fb1d8c34bd1
[ 571.750652][ C0] Code: ed 4d 89 cf 75 a3 0f 1f 00 48 85 ed 75 4b
48 8b 54 24 28 48 8b 44 24 18 48 8b 7c 24 20 48 29 da 48 8b 70 20 48
0f af 54 24 08 <48> 83 c4 38 5b 5d 41 5c 41 5d 41 5e 41 5f e9 ac f2 04
00 0f 1f 40
[ 571.750662][ C0] RSP: 002b:00007ffff9686c30 EFLAGS: 00000202
[ 571.750670][ C0] RAX: 00007ffff9686e50 RBX: 0000000000000002
RCX: 0000000000000010
[ 571.750677][ C0] RDX: 0000000000000010 RSI: 00007ffff9686d80
RDI: 00007ffff9686f20
[ 571.750683][ C0] RBP: 0000000000000000 R08: 0000000000000010
R09: 00007ffff9686d90
[ 571.750689][ C0] R10: 00007ffff9686fb0 R11: 00007fb1d8d6a060
R12: 00007ffff9686f30
[ 571.750696][ C0] R13: 00007fb1d9d20ee0 R14: 00007ffff9686f30
R15: 00007ffff9686d90
[ 571.750703][ C0] FS: 00007fb1da33d8c0 GS: 0000000000000000
[ 571.752358][ T30] Kernel panic - not syncing: hung_task: blocked tasks
[ 571.757337][ T30] CPU: 1 PID: 30 Comm: khungtaskd Not tainted 6.4.0-rc2 #1
[ 571.757686][ T30] Hardware name: QEMU Standard PC (i440FX + PIIX,
1996), BIOS 1.12.0-1 04/01/2014
[ 571.758131][ T30] Call Trace:
[ 571.758302][ T30] <TASK>
[ 571.758462][ T30] dump_stack_lvl+0x91/0xf0
[ 571.758714][ T30] panic+0x62d/0x6a0
[ 571.758926][ T30] ? panic_smp_self_stop+0x90/0x90
[ 571.759188][ T30] ? preempt_schedule_common+0x1a/0xc0
[ 571.759486][ T30] ? preempt_schedule_thunk+0x1a/0x20
[ 571.759785][ T30] ? watchdog+0xc21/0x10c0
[ 571.760020][ T30] watchdog+0xc32/0x10c0
[ 571.760240][ T30] ? proc_dohung_task_timeout_secs+0x90/0x90
[ 571.760541][ T30] kthread+0x33b/0x430
[ 571.760753][ T30] ? kthread_complete_and_exit+0x40/0x40
[ 571.761052][ T30] ret_from_fork+0x1f/0x30
[ 571.761286][ T30] </TASK>
[ 571.761814][ T30] Kernel Offset: disabled
[ 571.762047][ T30] Rebooting in 86400 seconds..

> You need to include poc_blkdev.c as part of your report.

Sorry for the confusion: poc_blkdev.c is exactly the C reproducer
(https://pastebin.com/raw/6mg7uF8W).

> I suspect you've done something that is known to not work (as root,
> so we won't necessarily care). But I can't really say without seeing
> what you've done. Running syzkaller is an art, and most people aren't
> good at it. It takes a lot of work to submit good quality bug reports,
> see this article:
>
> https://blog.regehr.org/archives/2037

I have read this article; thank you for the recommendation.
I'm not familiar with this module and haven't figured out the root
cause of this bug yet.

Regards,

Yang

Matthew Wilcox <[email protected]> wrote on Wed, May 17, 2023 at 20:20:
>
> On Wed, May 17, 2023 at 07:12:23PM +0800, yang lan wrote:
> > root@syzkaller:~# uname -a
> > Linux syzkaller 5.10.179 #1 SMP PREEMPT Thu Apr 27 16:22:48 CST 2023
>
> Does this reproduce on current kernels, eg 6.4-rc2?
>
> > root@syzkaller:~# gcc poc_blkdev.c -o poc_blkdev
>
> You need to include poc_blkdev.c as part of your report.
>
> > Please let me know if I can provide any more information, and I hope I
> > didn't mess up this bug report.
>
> I suspect you've done something that is known to not work (as root,
> so we won't necessarily care). But I can't really say without seeing
> what you've done. Running syzkaller is an art, and most people aren't
> good at it. It takes a lot of work to submit good quality bug reports,
> see this article:
>
> https://blog.regehr.org/archives/2037


2023-05-18 03:57:26

by Yu Kuai

Subject: Re: INFO: task hung in blkdev_open bug

Hi,

On 2023/05/18 00:27, yang lan wrote:
> Hi,
>
> Thank you for your response.
>
>> Does this reproduce on current kernels, eg 6.4-rc2?
>
> Yeah, it can be reproduced on kernel 6.4-rc2.
>

The log below shows that the I/O is hung. Can you collect the following
debugfs output, so that we can see where the I/O is now?

cd /sys/kernel/debug/block/[test_device] && find . -type f -exec grep
-aH . {} \;
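
For what it's worth, the collection step can be wrapped up so the output
lands in one file that is easy to attach to the report (a sketch: it
assumes the device is nbd0 and that debugfs is mounted at
/sys/kernel/debug, which requires CONFIG_BLK_DEBUG_FS):

```shell
#!/bin/sh
# Dump every blk-mq debugfs attribute for the device (assumed nbd0 here)
# into one attachable text file.
dev=${1:-nbd0}
dir=/sys/kernel/debug/block/$dev
out=/tmp/$dev-debugfs.txt

if [ -d "$dir" ]; then
    # grep -aH prints "file:value" for every attribute, -a in case of
    # non-text content
    (cd "$dir" && find . -type f -exec grep -aH . {} \;) > "$out"
    echo "wrote $out"
else
    echo "no debugfs directory for $dev (is debugfs mounted and the device present?)"
fi
```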

Thanks,
Kuai
> root@syzkaller:~# uname -a
> Linux syzkaller 6.4.0-rc2 #1 SMP PREEMPT_DYNAMIC Wed May 17 22:58:52
> CST 2023 x86_64 GNU/Linux
> root@syzkaller:~# gcc poc_blkdev.c -o poc_blkdev
> root@syzkaller:~# ./poc_blkdev
> [ 128.718051][ T7121] nbd0: detected capacity change from 0 to 4
> [ 158.917678][ T998] block nbd0: Possible stuck request
> ffff888016f08000: control (read@0,2048B). Runtime 30 seconds
> [ 188.997677][ T998] block nbd0: Possible stuck request
> ffff888016f08000: control (read@0,2048B). Runtime 60 seconds
> [ 219.077191][ T998] block nbd0: Possible stuck request
> ffff888016f08000: control (read@0,2048B). Runtime 90 seconds
> [ 249.157312][ T998] block nbd0: Possible stuck request
> ffff888016f08000: control (read@0,2048B). Runtime 120 seconds
> [ 279.237409][ T998] block nbd0: Possible stuck request
> ffff888016f08000: control (read@0,2048B). Runtime 150 seconds
> [ 309.317843][ T998] block nbd0: Possible stuck request
> ffff888016f08000: control (read@0,2048B). Runtime 180 seconds
> [ 339.397950][ T998] block nbd0: Possible stuck request
> ffff888016f08000: control (read@0,2048B). Runtime 210 seconds
> [ 369.478031][ T998] block nbd0: Possible stuck request
> ffff888016f08000: control (read@0,2048B). Runtime 240 seconds
> [ 399.558253][ T998] block nbd0: Possible stuck request
> ffff888016f08000: control (read@0,2048B). Runtime 270 seconds
> [ 429.638372][ T998] block nbd0: Possible stuck request
> ffff888016f08000: control (read@0,2048B). Runtime 300 seconds
> [ 459.718454][ T998] block nbd0: Possible stuck request
> ffff888016f08000: control (read@0,2048B). Runtime 330 seconds
> [ 489.798571][ T998] block nbd0: Possible stuck request
> ffff888016f08000: control (read@0,2048B). Runtime 360 seconds
> [ 519.878643][ T998] block nbd0: Possible stuck request
> ffff888016f08000: control (read@0,2048B). Runtime 390 seconds
> [ 549.958966][ T998] block nbd0: Possible stuck request
> ffff888016f08000: control (read@0,2048B). Runtime 420 seconds
> [ 571.719145][ T30] INFO: task systemd-udevd:7123 blocked for more
> than 143 seconds.
> [ 571.719652][ T30] Not tainted 6.4.0-rc2 #1
> [ 571.719900][ T30] "echo 0 >
> /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [ 571.720307][ T30] task:systemd-udevd state:D stack:26224
> pid:7123 ppid:3998 flags:0x00004004
> [ 571.720756][ T30] Call Trace:
> [ 571.720923][ T30] <TASK>
> [ 571.721073][ T30] __schedule+0x9ca/0x2630
> [ 571.721348][ T30] ? firmware_map_remove+0x1e0/0x1e0
> [ 571.721618][ T30] ? find_held_lock+0x33/0x1c0
> [ 571.721866][ T30] ? lock_release+0x3b9/0x690
> [ 571.722108][ T30] ? do_read_cache_folio+0x4ff/0xb20
> [ 571.722447][ T30] ? lock_downgrade+0x6b0/0x6b0
> [ 571.722785][ T30] ? mark_held_locks+0xb0/0x110
> [ 571.723044][ T30] schedule+0xd3/0x1b0
> [ 571.723264][ T30] io_schedule+0x1b/0x70
> [ 571.723489][ T30] ? do_read_cache_folio+0x58c/0xb20
> [ 571.723760][ T30] do_read_cache_folio+0x58c/0xb20
> [ 571.724036][ T30] ? blkdev_readahead+0x20/0x20
> [ 571.724319][ T30] ? __filemap_get_folio+0x8e0/0x8e0
> [ 571.724588][ T30] ? __sanitizer_cov_trace_switch+0x53/0x90
> [ 571.724885][ T30] ? __sanitizer_cov_trace_pc+0x1e/0x50
> [ 571.725246][ T30] ? format_decode+0x1cf/0xb50
> [ 571.725547][ T30] ? __sanitizer_cov_trace_pc+0x1e/0x50
> [ 571.725837][ T30] ? fill_ptr_key+0x30/0x30
> [ 571.726072][ T30] ? default_pointer+0x4a0/0x4a0
> [ 571.726335][ T30] ? __isolate_free_page+0x220/0x220
> [ 571.726608][ T30] ? filemap_fdatawrite_wbc+0x1c0/0x1c0
> [ 571.726888][ T30] ? __sanitizer_cov_trace_pc+0x1e/0x50
> [ 571.727172][ T30] ? read_part_sector+0x229/0x420
> [ 571.727434][ T30] ? adfspart_check_ADFS+0x560/0x560
> [ 571.727707][ T30] read_part_sector+0xfa/0x420
> [ 571.727963][ T30] adfspart_check_POWERTEC+0x90/0x690
> [ 571.728244][ T30] ? adfspart_check_ADFS+0x560/0x560
> [ 571.728520][ T30] ? __kasan_slab_alloc+0x33/0x70
> [ 571.728780][ T30] ? adfspart_check_ICS+0x8f0/0x8f0
> [ 571.729889][ T30] ? snprintf+0xb2/0xe0
> [ 571.730145][ T30] ? vsprintf+0x30/0x30
> [ 571.730374][ T30] ? __sanitizer_cov_trace_pc+0x1e/0x50
> [ 571.730659][ T30] ? adfspart_check_ICS+0x8f0/0x8f0
> [ 571.730928][ T30] bdev_disk_changed+0x674/0x1260
> [ 571.731189][ T30] ? write_comp_data+0x1f/0x70
> [ 571.731439][ T30] ? iput+0xd0/0x780
> [ 571.731646][ T30] blkdev_get_whole+0x186/0x260
> [ 571.731886][ T30] blkdev_get_by_dev+0x4ce/0xae0
> [ 571.732139][ T30] blkdev_open+0x140/0x2c0
> [ 571.732366][ T30] do_dentry_open+0x6de/0x1450
> [ 571.732612][ T30] ? blkdev_close+0x80/0x80
> [ 571.732848][ T30] path_openat+0xd6d/0x26d0
> [ 571.733084][ T30] ? lock_downgrade+0x6b0/0x6b0
> [ 571.733336][ T30] ? vfs_path_lookup+0x110/0x110
> [ 571.733591][ T30] do_filp_open+0x1bb/0x290
> [ 571.733824][ T30] ? may_open_dev+0xf0/0xf0
> [ 571.734061][ T30] ? __phys_addr_symbol+0x30/0x70
> [ 571.734324][ T30] ? do_raw_spin_unlock+0x176/0x260
> [ 571.734595][ T30] do_sys_openat2+0x5fd/0x980
> [ 571.734837][ T30] ? file_open_root+0x3f0/0x3f0
> [ 571.735087][ T30] ? seccomp_notify_ioctl+0xff0/0xff0
> [ 571.735368][ T30] do_sys_open+0xce/0x140
> [ 571.735596][ T30] ? filp_open+0x80/0x80
> [ 571.735820][ T30] ? __secure_computing+0x1e3/0x340
> [ 571.736090][ T30] do_syscall_64+0x38/0x80
> [ 571.736325][ T30] entry_SYSCALL_64_after_hwframe+0x63/0xcd
> [ 571.736626][ T30] RIP: 0033:0x7fb212210840
> [ 571.736857][ T30] RSP: 002b:00007fffb37bbbe8 EFLAGS: 00000246
> ORIG_RAX: 0000000000000002
> [ 571.737269][ T30] RAX: ffffffffffffffda RBX: 0000560e09072e10
> RCX: 00007fb212210840
> [ 571.737651][ T30] RDX: 0000560e08e39fe3 RSI: 00000000000a0800
> RDI: 0000560e090813b0
> [ 571.738037][ T30] RBP: 00007fffb37bbd60 R08: 0000560e08e39670
> R09: 0000000000000010
> [ 571.738432][ T30] R10: 0000560e08e39d0c R11: 0000000000000246
> R12: 00007fffb37bbcb0
> [ 571.739563][ T30] R13: 0000560e09087a70 R14: 0000000000000003
> R15: 000000000000000e
> [ 571.739973][ T30] </TASK>
> [ 571.740133][ T30]
> [ 571.740133][ T30] Showing all locks held in the system:
> [ 571.740495][ T30] 1 lock held by rcu_tasks_kthre/13:
> [ 571.740758][ T30] #0: ffffffff8b6badd0
> (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at:
> rcu_tasks_one_gp+0x2b/0xdb0
> [ 571.741301][ T30] 1 lock held by rcu_tasks_trace/14:
> [ 571.741571][ T30] #0: ffffffff8b6baad0
> (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at:
> rcu_tasks_one_gp+0x2b/0xdb0
> [ 571.742134][ T30] 1 lock held by khungtaskd/30:
> [ 571.742385][ T30] #0: ffffffff8b6bb960
> (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x5b/0x300
> [ 571.742947][ T30] 2 locks held by kworker/u8:0/50:
> [ 571.743198][ T30] #0: ffff888016e7b138
> ((wq_completion)nbd0-recv){+.+.}-{0:0}, at:
> process_one_work+0x94b/0x17b0
> [ 571.743809][ T30] #1: ffff888011e4fdd0
> ((work_completion)(&args->work)){+.+.}-{0:0}, at:
> process_one_work+0x984/0x17b0
> [ 571.744393][ T30] 1 lock held by in:imklog/6784:
> [ 571.744643][ T30] #0: ffff88801106e368
> (&f->f_pos_lock){+.+.}-{3:3}, at: __fdget_pos+0xe9/0x100
> [ 571.745122][ T30] 1 lock held by systemd-udevd/7123:
> [ 571.745381][ T30] #0: ffff8880431854c8
> (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x24b/0xae0
> [ 571.745885][ T30]
> [ 571.746008][ T30] =============================================
> [ 571.746008][ T30]
> [ 571.746424][ T30] NMI backtrace for cpu 1
> [ 571.746642][ T30] CPU: 1 PID: 30 Comm: khungtaskd Not tainted 6.4.0-rc2 #1
> [ 571.746989][ T30] Hardware name: QEMU Standard PC (i440FX + PIIX,
> 1996), BIOS 1.12.0-1 04/01/2014
> [ 571.747440][ T30] Call Trace:
> [ 571.747606][ T30] <TASK>
> [ 571.747764][ T30] dump_stack_lvl+0x91/0xf0
> [ 571.747997][ T30] nmi_cpu_backtrace+0x21a/0x2b0
> [ 571.748257][ T30] ? lapic_can_unplug_cpu+0xa0/0xa0
> [ 571.748525][ T30] nmi_trigger_cpumask_backtrace+0x28c/0x2f0
> [ 571.748830][ T30] watchdog+0xe4b/0x10c0
> [ 571.749057][ T30] ? proc_dohung_task_timeout_secs+0x90/0x90
> [ 571.749366][ T30] kthread+0x33b/0x430
> [ 571.749596][ T30] ? kthread_complete_and_exit+0x40/0x40
> [ 571.749891][ T30] ret_from_fork+0x1f/0x30
> [ 571.750126][ T30] </TASK>
> [ 571.750347][ T30] Sending NMI from CPU 1 to CPUs 0:
> [ 571.750620][ C0] NMI backtrace for cpu 0
> [ 571.750626][ C0] CPU: 0 PID: 3987 Comm: systemd-journal Not
> tainted 6.4.0-rc2 #1
> [ 571.750637][ C0] Hardware name: QEMU Standard PC (i440FX + PIIX,
> 1996), BIOS 1.12.0-1 04/01/2014
> [ 571.750643][ C0] RIP: 0033:0x7fb1d8c34bd1
> [ 571.750652][ C0] Code: ed 4d 89 cf 75 a3 0f 1f 00 48 85 ed 75 4b
> 48 8b 54 24 28 48 8b 44 24 18 48 8b 7c 24 20 48 29 da 48 8b 70 20 48
> 0f af 54 24 08 <48> 83 c4 38 5b 5d 41 5c 41 5d 41 5e 41 5f e9 ac f2 04
> 00 0f 1f 40
> [ 571.750662][ C0] RSP: 002b:00007ffff9686c30 EFLAGS: 00000202
> [ 571.750670][ C0] RAX: 00007ffff9686e50 RBX: 0000000000000002
> RCX: 0000000000000010
> [ 571.750677][ C0] RDX: 0000000000000010 RSI: 00007ffff9686d80
> RDI: 00007ffff9686f20
> [ 571.750683][ C0] RBP: 0000000000000000 R08: 0000000000000010
> R09: 00007ffff9686d90
> [ 571.750689][ C0] R10: 00007ffff9686fb0 R11: 00007fb1d8d6a060
> R12: 00007ffff9686f30
> [ 571.750696][ C0] R13: 00007fb1d9d20ee0 R14: 00007ffff9686f30
> R15: 00007ffff9686d90
> [ 571.750703][ C0] FS: 00007fb1da33d8c0 GS: 0000000000000000
> [ 571.752358][ T30] Kernel panic - not syncing: hung_task: blocked tasks
> [ 571.757337][ T30] CPU: 1 PID: 30 Comm: khungtaskd Not tainted 6.4.0-rc2 #1
> [ 571.757686][ T30] Hardware name: QEMU Standard PC (i440FX + PIIX,
> 1996), BIOS 1.12.0-1 04/01/2014
> [ 571.758131][ T30] Call Trace:
> [ 571.758302][ T30] <TASK>
> [ 571.758462][ T30] dump_stack_lvl+0x91/0xf0
> [ 571.758714][ T30] panic+0x62d/0x6a0
> [ 571.758926][ T30] ? panic_smp_self_stop+0x90/0x90
> [ 571.759188][ T30] ? preempt_schedule_common+0x1a/0xc0
> [ 571.759486][ T30] ? preempt_schedule_thunk+0x1a/0x20
> [ 571.759785][ T30] ? watchdog+0xc21/0x10c0
> [ 571.760020][ T30] watchdog+0xc32/0x10c0
> [ 571.760240][ T30] ? proc_dohung_task_timeout_secs+0x90/0x90
> [ 571.760541][ T30] kthread+0x33b/0x430
> [ 571.760753][ T30] ? kthread_complete_and_exit+0x40/0x40
> [ 571.761052][ T30] ret_from_fork+0x1f/0x30
> [ 571.761286][ T30] </TASK>
> [ 571.761814][ T30] Kernel Offset: disabled
> [ 571.762047][ T30] Rebooting in 86400 seconds..
>
>> You need to include poc_blkdev.c as part of your report.
>
> Sorry for the confusion: poc_blkdev.c is exactly the C reproducer
> (https://pastebin.com/raw/6mg7uF8W).
>
>> I suspect you've done something that is known to not work (as root,
>> so we won't necessarily care). But I can't really say without seeing
>> what you've done. Running syzkaller is an art, and most people aren't
>> good at it. It takes a lot of work to submit good quality bug reports,
>> see this article:
>>
>> https://blog.regehr.org/archives/2037
>
> I have read the article; thanks for the recommendations.
> I'm not familiar with this module, and I haven't figured out the root
> cause of this bug yet.
>
> Regards,
>
> Yang
>
> Matthew Wilcox <[email protected]> wrote on Wed, May 17, 2023 at 20:20:
>>
>> On Wed, May 17, 2023 at 07:12:23PM +0800, yang lan wrote:
>>> root@syzkaller:~# uname -a
>>> Linux syzkaller 5.10.179 #1 SMP PREEMPT Thu Apr 27 16:22:48 CST 2023
>>
>> Does this reproduce on current kernels, eg 6.4-rc2?
>>
>>> root@syzkaller:~# gcc poc_blkdev.c -o poc_blkdev
>>
>> You need to include poc_blkdev.c as part of your report.
>>
>>> Please let me know if I can provide any more information, and I hope I
>>> didn't mess up this bug report.
>>
>> I suspect you've done something that is known to not work (as root,
>> so we won't necessarily care). But I can't really say without seeing
>> what you've done. Running syzkaller is an art, and most people aren't
>> good at it. It takes a lot of work to submit good quality bug reports,
>> see this article:
>>
>> https://blog.regehr.org/archives/2037
>


2023-05-19 07:14:25

by yang lan

[permalink] [raw]
Subject: Re: INFO: task hung in blkdev_open bug

Hi,

./rqos/wbt/wb_background:4
./rqos/wbt/wb_normal:8
./rqos/wbt/unknown_cnt:0
./rqos/wbt/min_lat_nsec:2000000
./rqos/wbt/inflight:0: inflight 0
./rqos/wbt/inflight:1: inflight 0
./rqos/wbt/inflight:2: inflight 0
./rqos/wbt/id:0
./rqos/wbt/enabled:1
./rqos/wbt/curr_win_nsec:0
./hctx0/type:default
./hctx0/dispatch_busy:0
./hctx0/active:0
./hctx0/run:1
./hctx0/sched_tags_bitmap:00000000: 0100 0000 0000 0000 0000 0000 0000 0000
./hctx0/sched_tags_bitmap:00000010: 0000 0000 0000 0000 0000 0000 0000 0000
./hctx0/sched_tags:nr_tags=256
./hctx0/sched_tags:nr_reserved_tags=0
./hctx0/sched_tags:active_queues=0
./hctx0/sched_tags:bitmap_tags:
./hctx0/sched_tags:depth=256
./hctx0/sched_tags:busy=1
./hctx0/sched_tags:cleared=0
./hctx0/sched_tags:bits_per_word=64
./hctx0/sched_tags:map_nr=4
./hctx0/sched_tags:alloc_hint={245, 45}
./hctx0/sched_tags:wake_batch=8
./hctx0/sched_tags:wake_index=0
./hctx0/sched_tags:ws_active=0
./hctx0/sched_tags:ws={
./hctx0/sched_tags: {.wait=inactive},
./hctx0/sched_tags: {.wait=inactive},
./hctx0/sched_tags: {.wait=inactive},
./hctx0/sched_tags: {.wait=inactive},
./hctx0/sched_tags: {.wait=inactive},
./hctx0/sched_tags: {.wait=inactive},
./hctx0/sched_tags: {.wait=inactive},
./hctx0/sched_tags: {.wait=inactive},
./hctx0/sched_tags:}
./hctx0/sched_tags:round_robin=0
./hctx0/sched_tags:min_shallow_depth=192
./hctx0/tags_bitmap:00000000: 0000 0000 0100 0000 0000 0000 0000 0000
./hctx0/tags:nr_tags=128
./hctx0/tags:nr_reserved_tags=0
./hctx0/tags:active_queues=0
./hctx0/tags:bitmap_tags:
./hctx0/tags:depth=128
./hctx0/tags:busy=1
./hctx0/tags:cleared=0
./hctx0/tags:bits_per_word=32
./hctx0/tags:map_nr=4
./hctx0/tags:alloc_hint={123, 51}
./hctx0/tags:wake_batch=8
./hctx0/tags:wake_index=0
./hctx0/tags:ws_active=0
./hctx0/tags:ws={
./hctx0/tags: {.wait=inactive},
./hctx0/tags: {.wait=inactive},
./hctx0/tags: {.wait=inactive},
./hctx0/tags: {.wait=inactive},
./hctx0/tags: {.wait=inactive},
./hctx0/tags: {.wait=inactive},
./hctx0/tags: {.wait=inactive},
./hctx0/tags: {.wait=inactive},
./hctx0/tags:}
./hctx0/tags:round_robin=0
./hctx0/tags:min_shallow_depth=4294967295
./hctx0/ctx_map:00000000: 00
./hctx0/busy:ffff888016860000 {.op=READ, .cmd_flags=,
.rq_flags=STARTED|ELVPRIV|IO_STAT|STATS|ELV, .state=in_flight,
.tag=32, .internal_tag=0}
./hctx0/flags:alloc_policy=FIFO SHOULD_MERGE|BLOCKING
./sched/queued:0 1 0
./sched/owned_by_driver:0 1 0
./sched/async_depth:192
./sched/starved:0
./sched/batching:1
./state:SAME_COMP|NONROT|IO_STAT|INIT_DONE|STATS|REGISTERED|NOWAIT|30
./pm_only:0

So how can we find out where the I/O is?
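(A side note on the collection step: the one-liner Kuai suggested simply dumps every file under the queue's debugfs directory, prefixing each line with its path. The sketch below runs the same command against a throwaway temp directory, since the real path /sys/kernel/debug/block/nbd0 requires root and a mounted debugfs; the file names and contents are stand-ins modeled on the dump above.)

```shell
# Sketch of the debugfs collection one-liner from this thread. On a real
# system you would cd into /sys/kernel/debug/block/nbd0 instead; here a
# temp tree stands in so the command can be seen working end to end.
dir=$(mktemp -d)
mkdir -p "$dir/hctx0"
printf 'nr_tags=128\nbusy=1\n' > "$dir/hctx0/tags"
printf '0\n' > "$dir/pm_only"

# Dump every non-empty line of every file, prefixed with its path,
# e.g. ./hctx0/tags:busy=1 and ./pm_only:0 (file order may vary).
cd "$dir" && find . -type f -exec grep -aH . {} \;
```

The `-a` forces grep to treat debugfs files as text, and `-H` keeps the `path:content` prefix even for a single file, which is what makes the dump self-describing.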

Regards,

Yang

Yu Kuai <[email protected]> wrote on Thu, May 18, 2023 at 11:30:
>
> Hi,
>
> On 2023/05/18 00:27, yang lan wrote:
> > Hi,
> >
> > Thank you for your response.
> >
> >> Does this reproduce on current kernels, eg 6.4-rc2?
> >
> > Yeah, it can be reproduced on kernel 6.4-rc2.
> >
>
> Below log shows that io hang, can you collect following debugfs so
> that we can know where is the io now.
>
> cd /sys/kernel/debug/block/[test_device] && find . -type f -exec grep
> -aH . {} \;
>
> Thanks,
> Kuai
> > root@syzkaller:~# uname -a
> > Linux syzkaller 6.4.0-rc2 #1 SMP PREEMPT_DYNAMIC Wed May 17 22:58:52
> > CST 2023 x86_64 GNU/Linux
> > root@syzkaller:~# gcc poc_blkdev.c -o poc_blkdev
> > root@syzkaller:~# ./poc_blkdev
> > [ 128.718051][ T7121] nbd0: detected capacity change from 0 to 4
> > [ 158.917678][ T998] block nbd0: Possible stuck request
> > ffff888016f08000: control (read@0,2048B). Runtime 30 seconds
> > [ 188.997677][ T998] block nbd0: Possible stuck request
> > ffff888016f08000: control (read@0,2048B). Runtime 60 seconds
> > [ 219.077191][ T998] block nbd0: Possible stuck request
> > ffff888016f08000: control (read@0,2048B). Runtime 90 seconds
> > [ 249.157312][ T998] block nbd0: Possible stuck request
> > ffff888016f08000: control (read@0,2048B). Runtime 120 seconds
> > [ 279.237409][ T998] block nbd0: Possible stuck request
> > ffff888016f08000: control (read@0,2048B). Runtime 150 seconds
> > [ 309.317843][ T998] block nbd0: Possible stuck request
> > ffff888016f08000: control (read@0,2048B). Runtime 180 seconds
> > [ 339.397950][ T998] block nbd0: Possible stuck request
> > ffff888016f08000: control (read@0,2048B). Runtime 210 seconds
> > [ 369.478031][ T998] block nbd0: Possible stuck request
> > ffff888016f08000: control (read@0,2048B). Runtime 240 seconds
> > [ 399.558253][ T998] block nbd0: Possible stuck request
> > ffff888016f08000: control (read@0,2048B). Runtime 270 seconds
> > [ 429.638372][ T998] block nbd0: Possible stuck request
> > ffff888016f08000: control (read@0,2048B). Runtime 300 seconds
> > [ 459.718454][ T998] block nbd0: Possible stuck request
> > ffff888016f08000: control (read@0,2048B). Runtime 330 seconds
> > [ 489.798571][ T998] block nbd0: Possible stuck request
> > ffff888016f08000: control (read@0,2048B). Runtime 360 seconds
> > [ 519.878643][ T998] block nbd0: Possible stuck request
> > ffff888016f08000: control (read@0,2048B). Runtime 390 seconds
> > [ 549.958966][ T998] block nbd0: Possible stuck request
> > ffff888016f08000: control (read@0,2048B). Runtime 420 seconds
> > [ 571.719145][ T30] INFO: task systemd-udevd:7123 blocked for more
> > than 143 seconds.
> > [ 571.719652][ T30] Not tainted 6.4.0-rc2 #1
> > [ 571.719900][ T30] "echo 0 >
> > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> > [ 571.720307][ T30] task:systemd-udevd state:D stack:26224
> > pid:7123 ppid:3998 flags:0x00004004
> > [ 571.720756][ T30] Call Trace:
> > [ 571.720923][ T30] <TASK>
> > [ 571.721073][ T30] __schedule+0x9ca/0x2630
> > [ 571.721348][ T30] ? firmware_map_remove+0x1e0/0x1e0
> > [ 571.721618][ T30] ? find_held_lock+0x33/0x1c0
> > [ 571.721866][ T30] ? lock_release+0x3b9/0x690
> > [ 571.722108][ T30] ? do_read_cache_folio+0x4ff/0xb20
> > [ 571.722447][ T30] ? lock_downgrade+0x6b0/0x6b0
> > [ 571.722785][ T30] ? mark_held_locks+0xb0/0x110
> > [ 571.723044][ T30] schedule+0xd3/0x1b0
> > [ 571.723264][ T30] io_schedule+0x1b/0x70
> > [ 571.723489][ T30] ? do_read_cache_folio+0x58c/0xb20
> > [ 571.723760][ T30] do_read_cache_folio+0x58c/0xb20
> > [ 571.724036][ T30] ? blkdev_readahead+0x20/0x20
> > [ 571.724319][ T30] ? __filemap_get_folio+0x8e0/0x8e0
> > [ 571.724588][ T30] ? __sanitizer_cov_trace_switch+0x53/0x90
> > [ 571.724885][ T30] ? __sanitizer_cov_trace_pc+0x1e/0x50
> > [ 571.725246][ T30] ? format_decode+0x1cf/0xb50
> > [ 571.725547][ T30] ? __sanitizer_cov_trace_pc+0x1e/0x50
> > [ 571.725837][ T30] ? fill_ptr_key+0x30/0x30
> > [ 571.726072][ T30] ? default_pointer+0x4a0/0x4a0
> > [ 571.726335][ T30] ? __isolate_free_page+0x220/0x220
> > [ 571.726608][ T30] ? filemap_fdatawrite_wbc+0x1c0/0x1c0
> > [ 571.726888][ T30] ? __sanitizer_cov_trace_pc+0x1e/0x50
> > [ 571.727172][ T30] ? read_part_sector+0x229/0x420
> > [ 571.727434][ T30] ? adfspart_check_ADFS+0x560/0x560
> > [ 571.727707][ T30] read_part_sector+0xfa/0x420
> > [ 571.727963][ T30] adfspart_check_POWERTEC+0x90/0x690
> > [ 571.728244][ T30] ? adfspart_check_ADFS+0x560/0x560
> > [ 571.728520][ T30] ? __kasan_slab_alloc+0x33/0x70
> > [ 571.728780][ T30] ? adfspart_check_ICS+0x8f0/0x8f0
> > [ 571.729889][ T30] ? snprintf+0xb2/0xe0
> > [ 571.730145][ T30] ? vsprintf+0x30/0x30
> > [ 571.730374][ T30] ? __sanitizer_cov_trace_pc+0x1e/0x50
> > [ 571.730659][ T30] ? adfspart_check_ICS+0x8f0/0x8f0
> > [ 571.730928][ T30] bdev_disk_changed+0x674/0x1260
> > [ 571.731189][ T30] ? write_comp_data+0x1f/0x70
> > [ 571.731439][ T30] ? iput+0xd0/0x780
> > [ 571.731646][ T30] blkdev_get_whole+0x186/0x260
> > [ 571.731886][ T30] blkdev_get_by_dev+0x4ce/0xae0
> > [ 571.732139][ T30] blkdev_open+0x140/0x2c0
> > [ 571.732366][ T30] do_dentry_open+0x6de/0x1450
> > [ 571.732612][ T30] ? blkdev_close+0x80/0x80
> > [ 571.732848][ T30] path_openat+0xd6d/0x26d0
> > [ 571.733084][ T30] ? lock_downgrade+0x6b0/0x6b0
> > [ 571.733336][ T30] ? vfs_path_lookup+0x110/0x110
> > [ 571.733591][ T30] do_filp_open+0x1bb/0x290
> > [ 571.733824][ T30] ? may_open_dev+0xf0/0xf0
> > [ 571.734061][ T30] ? __phys_addr_symbol+0x30/0x70
> > [ 571.734324][ T30] ? do_raw_spin_unlock+0x176/0x260
> > [ 571.734595][ T30] do_sys_openat2+0x5fd/0x980
> > [ 571.734837][ T30] ? file_open_root+0x3f0/0x3f0
> > [ 571.735087][ T30] ? seccomp_notify_ioctl+0xff0/0xff0
> > [ 571.735368][ T30] do_sys_open+0xce/0x140
> > [ 571.735596][ T30] ? filp_open+0x80/0x80
> > [ 571.735820][ T30] ? __secure_computing+0x1e3/0x340
> > [ 571.736090][ T30] do_syscall_64+0x38/0x80
> > [ 571.736325][ T30] entry_SYSCALL_64_after_hwframe+0x63/0xcd
> > [ 571.736626][ T30] RIP: 0033:0x7fb212210840
> > [ 571.736857][ T30] RSP: 002b:00007fffb37bbbe8 EFLAGS: 00000246
> > ORIG_RAX: 0000000000000002
> > [ 571.737269][ T30] RAX: ffffffffffffffda RBX: 0000560e09072e10
> > RCX: 00007fb212210840
> > [ 571.737651][ T30] RDX: 0000560e08e39fe3 RSI: 00000000000a0800
> > RDI: 0000560e090813b0
> > [ 571.738037][ T30] RBP: 00007fffb37bbd60 R08: 0000560e08e39670
> > R09: 0000000000000010
> > [ 571.738432][ T30] R10: 0000560e08e39d0c R11: 0000000000000246
> > R12: 00007fffb37bbcb0
> > [ 571.739563][ T30] R13: 0000560e09087a70 R14: 0000000000000003
> > R15: 000000000000000e
> > [ 571.739973][ T30] </TASK>
> > [ 571.740133][ T30]
> > [ 571.740133][ T30] Showing all locks held in the system:
> > [ 571.740495][ T30] 1 lock held by rcu_tasks_kthre/13:
> > [ 571.740758][ T30] #0: ffffffff8b6badd0
> > (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at:
> > rcu_tasks_one_gp+0x2b/0xdb0
> > [ 571.741301][ T30] 1 lock held by rcu_tasks_trace/14:
> > [ 571.741571][ T30] #0: ffffffff8b6baad0
> > (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at:
> > rcu_tasks_one_gp+0x2b/0xdb0
> > [ 571.742134][ T30] 1 lock held by khungtaskd/30:
> > [ 571.742385][ T30] #0: ffffffff8b6bb960
> > (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x5b/0x300
> > [ 571.742947][ T30] 2 locks held by kworker/u8:0/50:
> > [ 571.743198][ T30] #0: ffff888016e7b138
> > ((wq_completion)nbd0-recv){+.+.}-{0:0}, at:
> > process_one_work+0x94b/0x17b0
> > [ 571.743809][ T30] #1: ffff888011e4fdd0
> > ((work_completion)(&args->work)){+.+.}-{0:0}, at:
> > process_one_work+0x984/0x17b0
> > [ 571.744393][ T30] 1 lock held by in:imklog/6784:
> > [ 571.744643][ T30] #0: ffff88801106e368
> > (&f->f_pos_lock){+.+.}-{3:3}, at: __fdget_pos+0xe9/0x100
> > [ 571.745122][ T30] 1 lock held by systemd-udevd/7123:
> > [ 571.745381][ T30] #0: ffff8880431854c8
> > (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x24b/0xae0
> > [ 571.745885][ T30]
> > [ 571.746008][ T30] =============================================
> > [ 571.746008][ T30]
> > [ 571.746424][ T30] NMI backtrace for cpu 1
> > [ 571.746642][ T30] CPU: 1 PID: 30 Comm: khungtaskd Not tainted 6.4.0-rc2 #1
> > [ 571.746989][ T30] Hardware name: QEMU Standard PC (i440FX + PIIX,
> > 1996), BIOS 1.12.0-1 04/01/2014
> > [ 571.747440][ T30] Call Trace:
> > [ 571.747606][ T30] <TASK>
> > [ 571.747764][ T30] dump_stack_lvl+0x91/0xf0
> > [ 571.747997][ T30] nmi_cpu_backtrace+0x21a/0x2b0
> > [ 571.748257][ T30] ? lapic_can_unplug_cpu+0xa0/0xa0
> > [ 571.748525][ T30] nmi_trigger_cpumask_backtrace+0x28c/0x2f0
> > [ 571.748830][ T30] watchdog+0xe4b/0x10c0
> > [ 571.749057][ T30] ? proc_dohung_task_timeout_secs+0x90/0x90
> > [ 571.749366][ T30] kthread+0x33b/0x430
> > [ 571.749596][ T30] ? kthread_complete_and_exit+0x40/0x40
> > [ 571.749891][ T30] ret_from_fork+0x1f/0x30
> > [ 571.750126][ T30] </TASK>
> > [ 571.750347][ T30] Sending NMI from CPU 1 to CPUs 0:
> > [ 571.750620][ C0] NMI backtrace for cpu 0
> > [ 571.750626][ C0] CPU: 0 PID: 3987 Comm: systemd-journal Not
> > tainted 6.4.0-rc2 #1
> > [ 571.750637][ C0] Hardware name: QEMU Standard PC (i440FX + PIIX,
> > 1996), BIOS 1.12.0-1 04/01/2014
> > [ 571.750643][ C0] RIP: 0033:0x7fb1d8c34bd1
> > [ 571.750652][ C0] Code: ed 4d 89 cf 75 a3 0f 1f 00 48 85 ed 75 4b
> > 48 8b 54 24 28 48 8b 44 24 18 48 8b 7c 24 20 48 29 da 48 8b 70 20 48
> > 0f af 54 24 08 <48> 83 c4 38 5b 5d 41 5c 41 5d 41 5e 41 5f e9 ac f2 04
> > 00 0f 1f 40
> > [ 571.750662][ C0] RSP: 002b:00007ffff9686c30 EFLAGS: 00000202
> > [ 571.750670][ C0] RAX: 00007ffff9686e50 RBX: 0000000000000002
> > RCX: 0000000000000010
> > [ 571.750677][ C0] RDX: 0000000000000010 RSI: 00007ffff9686d80
> > RDI: 00007ffff9686f20
> > [ 571.750683][ C0] RBP: 0000000000000000 R08: 0000000000000010
> > R09: 00007ffff9686d90
> > [ 571.750689][ C0] R10: 00007ffff9686fb0 R11: 00007fb1d8d6a060
> > R12: 00007ffff9686f30
> > [ 571.750696][ C0] R13: 00007fb1d9d20ee0 R14: 00007ffff9686f30
> > R15: 00007ffff9686d90
> > [ 571.750703][ C0] FS: 00007fb1da33d8c0 GS: 0000000000000000
> > [ 571.752358][ T30] Kernel panic - not syncing: hung_task: blocked tasks
> > [ 571.757337][ T30] CPU: 1 PID: 30 Comm: khungtaskd Not tainted 6.4.0-rc2 #1
> > [ 571.757686][ T30] Hardware name: QEMU Standard PC (i440FX + PIIX,
> > 1996), BIOS 1.12.0-1 04/01/2014
> > [ 571.758131][ T30] Call Trace:
> > [ 571.758302][ T30] <TASK>
> > [ 571.758462][ T30] dump_stack_lvl+0x91/0xf0
> > [ 571.758714][ T30] panic+0x62d/0x6a0
> > [ 571.758926][ T30] ? panic_smp_self_stop+0x90/0x90
> > [ 571.759188][ T30] ? preempt_schedule_common+0x1a/0xc0
> > [ 571.759486][ T30] ? preempt_schedule_thunk+0x1a/0x20
> > [ 571.759785][ T30] ? watchdog+0xc21/0x10c0
> > [ 571.760020][ T30] watchdog+0xc32/0x10c0
> > [ 571.760240][ T30] ? proc_dohung_task_timeout_secs+0x90/0x90
> > [ 571.760541][ T30] kthread+0x33b/0x430
> > [ 571.760753][ T30] ? kthread_complete_and_exit+0x40/0x40
> > [ 571.761052][ T30] ret_from_fork+0x1f/0x30
> > [ 571.761286][ T30] </TASK>
> > [ 571.761814][ T30] Kernel Offset: disabled
> > [ 571.762047][ T30] Rebooting in 86400 seconds..
> >
> >> You need to include poc_blkdev.c as part of your report.
> >
> > It's a little confusing and I'm sorry for that.
> > The poc_blkdev.c is exactly the C reproducer
> > (https://pastebin.com/raw/6mg7uF8W).
> >
> >> I suspect you've done something that is known to not work (as root,
> >> so we won't necessarily care). But I can't really say without seeing
> >> what you've done. Running syzkaller is an art, and most people aren't
> >> good at it. It takes a lot of work to submit good quality bug reports,
> >> see this article:
> >>
> >> https://blog.regehr.org/archives/2037
> >
> > I have read this article and thanks for your recommendations.
> > I'm not familiar with this module and I haven't figured out the root
> > cause of this bug yet.
> >
> > Regards,
> >
> > Yang
> >
> > Matthew Wilcox <[email protected]> wrote on Wed, May 17, 2023 at 20:20:
> >>
> >> On Wed, May 17, 2023 at 07:12:23PM +0800, yang lan wrote:
> >>> root@syzkaller:~# uname -a
> >>> Linux syzkaller 5.10.179 #1 SMP PREEMPT Thu Apr 27 16:22:48 CST 2023
> >>
> >> Does this reproduce on current kernels, eg 6.4-rc2?
> >>
> >>> root@syzkaller:~# gcc poc_blkdev.c -o poc_blkdev
> >>
> >> You need to include poc_blkdev.c as part of your report.
> >>
> >>> Please let me know if I can provide any more information, and I hope I
> >>> didn't mess up this bug report.
> >>
> >> I suspect you've done something that is known to not work (as root,
> >> so we won't necessarily care). But I can't really say without seeing
> >> what you've done. Running syzkaller is an art, and most people aren't
> >> good at it. It takes a lot of work to submit good quality bug reports,
> >> see this article:
> >>
> >> https://blog.regehr.org/archives/2037
> >
>

2023-05-19 07:43:04

by Yu Kuai

[permalink] [raw]
Subject: Re: INFO: task hung in blkdev_open bug

Hi,

On 2023/05/19 15:12, yang lan wrote:
> Hi,
>
> ./rqos/wbt/wb_background:4
> ./rqos/wbt/wb_normal:8
> ./rqos/wbt/unknown_cnt:0
> ./rqos/wbt/min_lat_nsec:2000000
> ./rqos/wbt/inflight:0: inflight 0
> ./rqos/wbt/inflight:1: inflight 0
> ./rqos/wbt/inflight:2: inflight 0
> ./rqos/wbt/id:0
> ./rqos/wbt/enabled:1
> ./rqos/wbt/curr_win_nsec:0
> ./hctx0/type:default
> ./hctx0/dispatch_busy:0
> ./hctx0/active:0
> ./hctx0/run:1
> ./hctx0/sched_tags_bitmap:00000000: 0100 0000 0000 0000 0000 0000 0000 0000
> ./hctx0/sched_tags_bitmap:00000010: 0000 0000 0000 0000 0000 0000 0000 0000
> ./hctx0/sched_tags:nr_tags=256
> ./hctx0/sched_tags:nr_reserved_tags=0
> ./hctx0/sched_tags:active_queues=0
> ./hctx0/sched_tags:bitmap_tags:
> ./hctx0/sched_tags:depth=256
> ./hctx0/sched_tags:busy=1
> ./hctx0/sched_tags:cleared=0
> ./hctx0/sched_tags:bits_per_word=64
> ./hctx0/sched_tags:map_nr=4
> ./hctx0/sched_tags:alloc_hint={245, 45}
> ./hctx0/sched_tags:wake_batch=8
> ./hctx0/sched_tags:wake_index=0
> ./hctx0/sched_tags:ws_active=0
> ./hctx0/sched_tags:ws={
> ./hctx0/sched_tags: {.wait=inactive},
> ./hctx0/sched_tags: {.wait=inactive},
> ./hctx0/sched_tags: {.wait=inactive},
> ./hctx0/sched_tags: {.wait=inactive},
> ./hctx0/sched_tags: {.wait=inactive},
> ./hctx0/sched_tags: {.wait=inactive},
> ./hctx0/sched_tags: {.wait=inactive},
> ./hctx0/sched_tags: {.wait=inactive},
> ./hctx0/sched_tags:}
> ./hctx0/sched_tags:round_robin=0
> ./hctx0/sched_tags:min_shallow_depth=192
> ./hctx0/tags_bitmap:00000000: 0000 0000 0100 0000 0000 0000 0000 0000
> ./hctx0/tags:nr_tags=128
> ./hctx0/tags:nr_reserved_tags=0
> ./hctx0/tags:active_queues=0
> ./hctx0/tags:bitmap_tags:
> ./hctx0/tags:depth=128
> ./hctx0/tags:busy=1

tags:busy and sched_tags:busy show that one I/O is in flight.

> ./hctx0/tags:cleared=0
> ./hctx0/tags:bits_per_word=32
> ./hctx0/tags:map_nr=4
> ./hctx0/tags:alloc_hint={123, 51}
> ./hctx0/tags:wake_batch=8
> ./hctx0/tags:wake_index=0
> ./hctx0/tags:ws_active=0
> ./hctx0/tags:ws={
> ./hctx0/tags: {.wait=inactive},
> ./hctx0/tags: {.wait=inactive},
> ./hctx0/tags: {.wait=inactive},
> ./hctx0/tags: {.wait=inactive},
> ./hctx0/tags: {.wait=inactive},
> ./hctx0/tags: {.wait=inactive},
> ./hctx0/tags: {.wait=inactive},
> ./hctx0/tags: {.wait=inactive},
> ./hctx0/tags:}
> ./hctx0/tags:round_robin=0
> ./hctx0/tags:min_shallow_depth=4294967295
> ./hctx0/ctx_map:00000000: 00
> ./hctx0/busy:ffff888016860000 {.op=READ, .cmd_flags=,
> .rq_flags=STARTED|ELVPRIV|IO_STAT|STATS|ELV, .state=in_flight,
> .tag=32, .internal_tag=0}

Here, "busy" means this I/O has been issued to the driver but has not yet
finished. So the next step is to dig out where the I/O is in the driver
and why it hasn't completed.
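(For readers skimming the dump, the stuck request's key fields can be pulled out of the hctx0/busy line mechanically. The sketch below uses the line copied verbatim from the dump earlier in this thread; the sed patterns are just one possible way to slice it, not part of any kernel tooling.)

```shell
# Extract the tag and state of the in-flight request from an hctx0/busy
# line as printed by blk-mq debugfs. The sample line is taken from the
# dump in this thread.
busy='./hctx0/busy:ffff888016860000 {.op=READ, .cmd_flags=, .rq_flags=STARTED|ELVPRIV|IO_STAT|STATS|ELV, .state=in_flight, .tag=32, .internal_tag=0}'

# ".tag=" only matches the driver tag field (".internal_tag" has an
# underscore, not a dot, before "tag"), so the patterns are unambiguous.
tag=$(printf '%s\n' "$busy" | sed -n 's/.*\.tag=\([0-9]*\).*/\1/p')
state=$(printf '%s\n' "$busy" | sed -n 's/.*\.state=\([a-z_]*\).*/\1/p')

echo "tag=$tag state=$state"
```

For this sample line the script reports tag 32 in state in_flight, matching Kuai's reading that the request was started and handed to the driver but never completed.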

Thanks,
Kuai
> ./hctx0/flags:alloc_policy=FIFO SHOULD_MERGE|BLOCKING
> ./sched/queued:0 1 0
> ./sched/owned_by_driver:0 1 0
> ./sched/async_depth:192
> ./sched/starved:0
> ./sched/batching:1
> ./state:SAME_COMP|NONROT|IO_STAT|INIT_DONE|STATS|REGISTERED|NOWAIT|30
> ./pm_only:0
>
> So how can we know where the io is?
>
> Regards,
>
> Yang
>
> Yu Kuai <[email protected]> wrote on Thu, May 18, 2023 at 11:30:
>>
>> Hi,
>>
>> On 2023/05/18 00:27, yang lan wrote:
>>> Hi,
>>>
>>> Thank you for your response.
>>>
>>>> Does this reproduce on current kernels, eg 6.4-rc2?
>>>
>>> Yeah, it can be reproduced on kernel 6.4-rc2.
>>>
>>
>> Below log shows that io hang, can you collect following debugfs so
>> that we can know where is the io now.
>>
>> cd /sys/kernel/debug/block/[test_device] && find . -type f -exec grep
>> -aH . {} \;
>>
>> Thanks,
>> Kuai
>>> root@syzkaller:~# uname -a
>>> Linux syzkaller 6.4.0-rc2 #1 SMP PREEMPT_DYNAMIC Wed May 17 22:58:52
>>> CST 2023 x86_64 GNU/Linux
>>> root@syzkaller:~# gcc poc_blkdev.c -o poc_blkdev
>>> root@syzkaller:~# ./poc_blkdev
>>> [ 128.718051][ T7121] nbd0: detected capacity change from 0 to 4
>>> [ 158.917678][ T998] block nbd0: Possible stuck request
>>> ffff888016f08000: control (read@0,2048B). Runtime 30 seconds
>>> [ 188.997677][ T998] block nbd0: Possible stuck request
>>> ffff888016f08000: control (read@0,2048B). Runtime 60 seconds
>>> [ 219.077191][ T998] block nbd0: Possible stuck request
>>> ffff888016f08000: control (read@0,2048B). Runtime 90 seconds
>>> [ 249.157312][ T998] block nbd0: Possible stuck request
>>> ffff888016f08000: control (read@0,2048B). Runtime 120 seconds
>>> [ 279.237409][ T998] block nbd0: Possible stuck request
>>> ffff888016f08000: control (read@0,2048B). Runtime 150 seconds
>>> [ 309.317843][ T998] block nbd0: Possible stuck request
>>> ffff888016f08000: control (read@0,2048B). Runtime 180 seconds
>>> [ 339.397950][ T998] block nbd0: Possible stuck request
>>> ffff888016f08000: control (read@0,2048B). Runtime 210 seconds
>>> [ 369.478031][ T998] block nbd0: Possible stuck request
>>> ffff888016f08000: control (read@0,2048B). Runtime 240 seconds
>>> [ 399.558253][ T998] block nbd0: Possible stuck request
>>> ffff888016f08000: control (read@0,2048B). Runtime 270 seconds
>>> [ 429.638372][ T998] block nbd0: Possible stuck request
>>> ffff888016f08000: control (read@0,2048B). Runtime 300 seconds
>>> [ 459.718454][ T998] block nbd0: Possible stuck request
>>> ffff888016f08000: control (read@0,2048B). Runtime 330 seconds
>>> [ 489.798571][ T998] block nbd0: Possible stuck request
>>> ffff888016f08000: control (read@0,2048B). Runtime 360 seconds
>>> [ 519.878643][ T998] block nbd0: Possible stuck request
>>> ffff888016f08000: control (read@0,2048B). Runtime 390 seconds
>>> [ 549.958966][ T998] block nbd0: Possible stuck request
>>> ffff888016f08000: control (read@0,2048B). Runtime 420 seconds
>>> [ 571.719145][ T30] INFO: task systemd-udevd:7123 blocked for more
>>> than 143 seconds.
>>> [ 571.719652][ T30] Not tainted 6.4.0-rc2 #1
>>> [ 571.719900][ T30] "echo 0 >
>>> /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> [ 571.720307][ T30] task:systemd-udevd state:D stack:26224
>>> pid:7123 ppid:3998 flags:0x00004004
>>> [ 571.720756][ T30] Call Trace:
>>> [ 571.720923][ T30] <TASK>
>>> [ 571.721073][ T30] __schedule+0x9ca/0x2630
>>> [ 571.721348][ T30] ? firmware_map_remove+0x1e0/0x1e0
>>> [ 571.721618][ T30] ? find_held_lock+0x33/0x1c0
>>> [ 571.721866][ T30] ? lock_release+0x3b9/0x690
>>> [ 571.722108][ T30] ? do_read_cache_folio+0x4ff/0xb20
>>> [ 571.722447][ T30] ? lock_downgrade+0x6b0/0x6b0
>>> [ 571.722785][ T30] ? mark_held_locks+0xb0/0x110
>>> [ 571.723044][ T30] schedule+0xd3/0x1b0
>>> [ 571.723264][ T30] io_schedule+0x1b/0x70
>>> [ 571.723489][ T30] ? do_read_cache_folio+0x58c/0xb20
>>> [ 571.723760][ T30] do_read_cache_folio+0x58c/0xb20
>>> [ 571.724036][ T30] ? blkdev_readahead+0x20/0x20
>>> [ 571.724319][ T30] ? __filemap_get_folio+0x8e0/0x8e0
>>> [ 571.724588][ T30] ? __sanitizer_cov_trace_switch+0x53/0x90
>>> [ 571.724885][ T30] ? __sanitizer_cov_trace_pc+0x1e/0x50
>>> [ 571.725246][ T30] ? format_decode+0x1cf/0xb50
>>> [ 571.725547][ T30] ? __sanitizer_cov_trace_pc+0x1e/0x50
>>> [ 571.725837][ T30] ? fill_ptr_key+0x30/0x30
>>> [ 571.726072][ T30] ? default_pointer+0x4a0/0x4a0
>>> [ 571.726335][ T30] ? __isolate_free_page+0x220/0x220
>>> [ 571.726608][ T30] ? filemap_fdatawrite_wbc+0x1c0/0x1c0
>>> [ 571.726888][ T30] ? __sanitizer_cov_trace_pc+0x1e/0x50
>>> [ 571.727172][ T30] ? read_part_sector+0x229/0x420
>>> [ 571.727434][ T30] ? adfspart_check_ADFS+0x560/0x560
>>> [ 571.727707][ T30] read_part_sector+0xfa/0x420
>>> [ 571.727963][ T30] adfspart_check_POWERTEC+0x90/0x690
>>> [ 571.728244][ T30] ? adfspart_check_ADFS+0x560/0x560
>>> [ 571.728520][ T30] ? __kasan_slab_alloc+0x33/0x70
>>> [ 571.728780][ T30] ? adfspart_check_ICS+0x8f0/0x8f0
>>> [ 571.729889][ T30] ? snprintf+0xb2/0xe0
>>> [ 571.730145][ T30] ? vsprintf+0x30/0x30
>>> [ 571.730374][ T30] ? __sanitizer_cov_trace_pc+0x1e/0x50
>>> [ 571.730659][ T30] ? adfspart_check_ICS+0x8f0/0x8f0
>>> [ 571.730928][ T30] bdev_disk_changed+0x674/0x1260
>>> [ 571.731189][ T30] ? write_comp_data+0x1f/0x70
>>> [ 571.731439][ T30] ? iput+0xd0/0x780
>>> [ 571.731646][ T30] blkdev_get_whole+0x186/0x260
>>> [ 571.731886][ T30] blkdev_get_by_dev+0x4ce/0xae0
>>> [ 571.732139][ T30] blkdev_open+0x140/0x2c0
>>> [ 571.732366][ T30] do_dentry_open+0x6de/0x1450
>>> [ 571.732612][ T30] ? blkdev_close+0x80/0x80
>>> [ 571.732848][ T30] path_openat+0xd6d/0x26d0
>>> [ 571.733084][ T30] ? lock_downgrade+0x6b0/0x6b0
>>> [ 571.733336][ T30] ? vfs_path_lookup+0x110/0x110
>>> [ 571.733591][ T30] do_filp_open+0x1bb/0x290
>>> [ 571.733824][ T30] ? may_open_dev+0xf0/0xf0
>>> [ 571.734061][ T30] ? __phys_addr_symbol+0x30/0x70
>>> [ 571.734324][ T30] ? do_raw_spin_unlock+0x176/0x260
>>> [ 571.734595][ T30] do_sys_openat2+0x5fd/0x980
>>> [ 571.734837][ T30] ? file_open_root+0x3f0/0x3f0
>>> [ 571.735087][ T30] ? seccomp_notify_ioctl+0xff0/0xff0
>>> [ 571.735368][ T30] do_sys_open+0xce/0x140
>>> [ 571.735596][ T30] ? filp_open+0x80/0x80
>>> [ 571.735820][ T30] ? __secure_computing+0x1e3/0x340
>>> [ 571.736090][ T30] do_syscall_64+0x38/0x80
>>> [ 571.736325][ T30] entry_SYSCALL_64_after_hwframe+0x63/0xcd
>>> [ 571.736626][ T30] RIP: 0033:0x7fb212210840
>>> [ 571.736857][ T30] RSP: 002b:00007fffb37bbbe8 EFLAGS: 00000246 ORIG_RAX: 0000000000000002
>>> [ 571.737269][ T30] RAX: ffffffffffffffda RBX: 0000560e09072e10 RCX: 00007fb212210840
>>> [ 571.737651][ T30] RDX: 0000560e08e39fe3 RSI: 00000000000a0800 RDI: 0000560e090813b0
>>> [ 571.738037][ T30] RBP: 00007fffb37bbd60 R08: 0000560e08e39670 R09: 0000000000000010
>>> [ 571.738432][ T30] R10: 0000560e08e39d0c R11: 0000000000000246 R12: 00007fffb37bbcb0
>>> [ 571.739563][ T30] R13: 0000560e09087a70 R14: 0000000000000003 R15: 000000000000000e
>>> [ 571.739973][ T30] </TASK>
>>> [ 571.740133][ T30]
>>> [ 571.740133][ T30] Showing all locks held in the system:
>>> [ 571.740495][ T30] 1 lock held by rcu_tasks_kthre/13:
>>> [ 571.740758][ T30] #0: ffffffff8b6badd0 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x2b/0xdb0
>>> [ 571.741301][ T30] 1 lock held by rcu_tasks_trace/14:
>>> [ 571.741571][ T30] #0: ffffffff8b6baad0 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x2b/0xdb0
>>> [ 571.742134][ T30] 1 lock held by khungtaskd/30:
>>> [ 571.742385][ T30] #0: ffffffff8b6bb960 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x5b/0x300
>>> [ 571.742947][ T30] 2 locks held by kworker/u8:0/50:
>>> [ 571.743198][ T30] #0: ffff888016e7b138 ((wq_completion)nbd0-recv){+.+.}-{0:0}, at: process_one_work+0x94b/0x17b0
>>> [ 571.743809][ T30] #1: ffff888011e4fdd0 ((work_completion)(&args->work)){+.+.}-{0:0}, at: process_one_work+0x984/0x17b0
>>> [ 571.744393][ T30] 1 lock held by in:imklog/6784:
>>> [ 571.744643][ T30] #0: ffff88801106e368 (&f->f_pos_lock){+.+.}-{3:3}, at: __fdget_pos+0xe9/0x100
>>> [ 571.745122][ T30] 1 lock held by systemd-udevd/7123:
>>> [ 571.745381][ T30] #0: ffff8880431854c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x24b/0xae0
>>> [ 571.745885][ T30]
>>> [ 571.746008][ T30] =============================================
>>> [ 571.746008][ T30]
>>> [ 571.746424][ T30] NMI backtrace for cpu 1
>>> [ 571.746642][ T30] CPU: 1 PID: 30 Comm: khungtaskd Not tainted 6.4.0-rc2 #1
>>> [ 571.746989][ T30] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
>>> [ 571.747440][ T30] Call Trace:
>>> [ 571.747606][ T30] <TASK>
>>> [ 571.747764][ T30] dump_stack_lvl+0x91/0xf0
>>> [ 571.747997][ T30] nmi_cpu_backtrace+0x21a/0x2b0
>>> [ 571.748257][ T30] ? lapic_can_unplug_cpu+0xa0/0xa0
>>> [ 571.748525][ T30] nmi_trigger_cpumask_backtrace+0x28c/0x2f0
>>> [ 571.748830][ T30] watchdog+0xe4b/0x10c0
>>> [ 571.749057][ T30] ? proc_dohung_task_timeout_secs+0x90/0x90
>>> [ 571.749366][ T30] kthread+0x33b/0x430
>>> [ 571.749596][ T30] ? kthread_complete_and_exit+0x40/0x40
>>> [ 571.749891][ T30] ret_from_fork+0x1f/0x30
>>> [ 571.750126][ T30] </TASK>
>>> [ 571.750347][ T30] Sending NMI from CPU 1 to CPUs 0:
>>> [ 571.750620][ C0] NMI backtrace for cpu 0
>>> [ 571.750626][ C0] CPU: 0 PID: 3987 Comm: systemd-journal Not tainted 6.4.0-rc2 #1
>>> [ 571.750637][ C0] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
>>> [ 571.750643][ C0] RIP: 0033:0x7fb1d8c34bd1
>>> [ 571.750652][ C0] Code: ed 4d 89 cf 75 a3 0f 1f 00 48 85 ed 75 4b 48 8b 54 24 28 48 8b 44 24 18 48 8b 7c 24 20 48 29 da 48 8b 70 20 48 0f af 54 24 08 <48> 83 c4 38 5b 5d 41 5c 41 5d 41 5e 41 5f e9 ac f2 04 00 0f 1f 40
>>> [ 571.750662][ C0] RSP: 002b:00007ffff9686c30 EFLAGS: 00000202
>>> [ 571.750670][ C0] RAX: 00007ffff9686e50 RBX: 0000000000000002 RCX: 0000000000000010
>>> [ 571.750677][ C0] RDX: 0000000000000010 RSI: 00007ffff9686d80 RDI: 00007ffff9686f20
>>> [ 571.750683][ C0] RBP: 0000000000000000 R08: 0000000000000010 R09: 00007ffff9686d90
>>> [ 571.750689][ C0] R10: 00007ffff9686fb0 R11: 00007fb1d8d6a060 R12: 00007ffff9686f30
>>> [ 571.750696][ C0] R13: 00007fb1d9d20ee0 R14: 00007ffff9686f30 R15: 00007ffff9686d90
>>> [ 571.750703][ C0] FS: 00007fb1da33d8c0 GS: 0000000000000000
>>> [ 571.752358][ T30] Kernel panic - not syncing: hung_task: blocked tasks
>>> [ 571.757337][ T30] CPU: 1 PID: 30 Comm: khungtaskd Not tainted 6.4.0-rc2 #1
>>> [ 571.757686][ T30] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
>>> [ 571.758131][ T30] Call Trace:
>>> [ 571.758302][ T30] <TASK>
>>> [ 571.758462][ T30] dump_stack_lvl+0x91/0xf0
>>> [ 571.758714][ T30] panic+0x62d/0x6a0
>>> [ 571.758926][ T30] ? panic_smp_self_stop+0x90/0x90
>>> [ 571.759188][ T30] ? preempt_schedule_common+0x1a/0xc0
>>> [ 571.759486][ T30] ? preempt_schedule_thunk+0x1a/0x20
>>> [ 571.759785][ T30] ? watchdog+0xc21/0x10c0
>>> [ 571.760020][ T30] watchdog+0xc32/0x10c0
>>> [ 571.760240][ T30] ? proc_dohung_task_timeout_secs+0x90/0x90
>>> [ 571.760541][ T30] kthread+0x33b/0x430
>>> [ 571.760753][ T30] ? kthread_complete_and_exit+0x40/0x40
>>> [ 571.761052][ T30] ret_from_fork+0x1f/0x30
>>> [ 571.761286][ T30] </TASK>
>>> [ 571.761814][ T30] Kernel Offset: disabled
>>> [ 571.762047][ T30] Rebooting in 86400 seconds..
>>>
>>>> You need to include poc_blkdev.c as part of your report.
>>>
>>> Sorry for the confusion. poc_blkdev.c is exactly the C reproducer
>>> linked above (https://pastebin.com/raw/6mg7uF8W).
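
[Editor's note: for readers who do not want to fetch the pastebin, the trace above (blkdev_open -> blkdev_get_by_dev -> bdev_disk_changed -> read_part_sector, with the nbd0-recv workqueue worker holding its work locks) is consistent with opening an nbd device whose attached socket has no server behind it, so the partition-scan reads never complete. The following is only an illustrative sketch of that pattern under that assumption, not the actual poc_blkdev.c; the device path, sizes, and overall flow are placeholders.]

```c
/* Illustrative sketch only -- NOT the actual poc_blkdev.c reproducer.
 * Attaches one end of a socketpair to /dev/nbd0 with no NBD server on
 * the other end, then opens the device; the partition rescan issues
 * reads over the dead socket that can never complete. Needs root and
 * an nbd-enabled kernel, and is expected to hang rather than exit. */
#include <fcntl.h>
#include <linux/nbd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	int sv[2];

	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
		return 1;

	int nbd = open("/dev/nbd0", O_RDWR);	/* placeholder device */
	if (nbd < 0)
		return 1;

	ioctl(nbd, NBD_SET_BLKSIZE, 1024UL);	/* arbitrary geometry */
	ioctl(nbd, NBD_SET_SIZE_BLOCKS, 8UL);
	ioctl(nbd, NBD_SET_SOCK, sv[0]);	/* sv[1] is never served */

	if (fork() == 0) {
		ioctl(nbd, NBD_DO_IT);		/* recv loop; blocks */
		_exit(0);
	}

	/* This open triggers bdev_disk_changed() -> read_part_sector();
	 * the reads go to the unanswered socket and the task hangs,
	 * eventually tripping the hung-task watchdog as in the log. */
	open("/dev/nbd0", O_RDONLY);
	return 0;
}
```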
>>>
>>>> I suspect you've done something that is known to not work (as root,
>>>> so we won't necessarily care). But I can't really say without seeing
>>>> what you've done. Running syzkaller is an art, and most people aren't
>>>> good at it. It takes a lot of work to submit good quality bug reports,
>>>> see this article:
>>>>
>>>> https://blog.regehr.org/archives/2037
>>>
>>> Thanks for the recommendation; I have read that article. I'm not
>>> familiar with this module, and I haven't figured out the root cause
>>> of this bug yet.
>>>
>>> Regards,
>>>
>>> Yang
>>>
>>> Matthew Wilcox <[email protected]> wrote on Wed, May 17, 2023 at 20:20:
>>>>
>>>> On Wed, May 17, 2023 at 07:12:23PM +0800, yang lan wrote:
>>>>> root@syzkaller:~# uname -a
>>>>> Linux syzkaller 5.10.179 #1 SMP PREEMPT Thu Apr 27 16:22:48 CST 2023
>>>>
>>>> Does this reproduce on current kernels, eg 6.4-rc2?
>>>>
>>>>> root@syzkaller:~# gcc poc_blkdev.c -o poc_blkdev
>>>>
>>>> You need to include poc_blkdev.c as part of your report.
>>>>
>>>>> Please let me know if I can provide any more information, and I hope I
>>>>> didn't mess up this bug report.
>>>>
>>>> I suspect you've done something that is known to not work (as root,
>>>> so we won't necessarily care). But I can't really say without seeing
>>>> what you've done. Running syzkaller is an art, and most people aren't
>>>> good at it. It takes a lot of work to submit good quality bug reports,
>>>> see this article:
>>>>
>>>> https://blog.regehr.org/archives/2037
>>>