2020-04-12 10:12:21

by syzbot

Subject: possible deadlock in shmem_uncharge

Hello,

syzbot found the following crash on:

HEAD commit: ae46d2aa mm/gup: Let __get_user_pages_locked() return -EIN..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=14a30a77e00000
kernel config: https://syzkaller.appspot.com/x/.config?x=ca75979eeebf06c2
dashboard link: https://syzkaller.appspot.com/bug?extid=c8a8197c8852f566b9d9
compiler: gcc (GCC) 9.0.0 20181231 (experimental)
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=15f5632be00000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=132ade57e00000

The bug was bisected to:

commit 71725ed10c40696dc6bdccf8e225815dcef24dba
Author: Hugh Dickins <[email protected]>
Date: Tue Apr 7 03:07:57 2020 +0000

mm: huge tmpfs: try to split_huge_page() when punching hole

bisection log: https://syzkaller.appspot.com/x/bisect.txt?x=120a752be00000
final crash: https://syzkaller.appspot.com/x/report.txt?x=110a752be00000
console output: https://syzkaller.appspot.com/x/log.txt?x=160a752be00000

IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: [email protected]
Fixes: 71725ed10c40 ("mm: huge tmpfs: try to split_huge_page() when punching hole")

=====================================================
WARNING: SOFTIRQ-safe -> SOFTIRQ-unsafe lock order detected
5.6.0-syzkaller #0 Not tainted
-----------------------------------------------------
syz-executor428/8337 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:
ffff8880a851c778 (&info->lock){....}-{2:2}, at: shmem_uncharge+0x24/0x270 mm/shmem.c:341

and this task is already holding:
ffff8880a851cac8 (&xa->xa_lock#4){..-.}-{2:2}, at: spin_lock include/linux/spinlock.h:353 [inline]
ffff8880a851cac8 (&xa->xa_lock#4){..-.}-{2:2}, at: split_huge_page_to_list+0xad0/0x33b0 mm/huge_memory.c:2864
which would create a new lock dependency:
(&xa->xa_lock#4){..-.}-{2:2} -> (&info->lock){....}-{2:2}

but this new dependency connects a SOFTIRQ-irq-safe lock:
(&xa->xa_lock#4){..-.}-{2:2}

... which became SOFTIRQ-irq-safe at:
lock_acquire+0x1f2/0x8f0 kernel/locking/lockdep.c:4923
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
_raw_spin_lock_irqsave+0x8c/0xbf kernel/locking/spinlock.c:159
test_clear_page_writeback+0x1d7/0x11e0 mm/page-writeback.c:2728
end_page_writeback+0x239/0x520 mm/filemap.c:1317
end_buffer_async_write+0x442/0x5c0 fs/buffer.c:384
end_bio_bh_io_sync+0xe2/0x140 fs/buffer.c:3012
bio_endio+0x473/0x820 block/bio.c:1422
req_bio_endio block/blk-core.c:245 [inline]
blk_update_request+0x3e1/0xdc0 block/blk-core.c:1472
scsi_end_request+0x80/0x7b0 drivers/scsi/scsi_lib.c:575
scsi_io_completion+0x1e7/0x1300 drivers/scsi/scsi_lib.c:959
scsi_softirq_done+0x31e/0x3b0 drivers/scsi/scsi_lib.c:1454
blk_done_softirq+0x2db/0x440 block/blk-softirq.c:37
__do_softirq+0x26c/0x9f7 kernel/softirq.c:292
invoke_softirq kernel/softirq.c:373 [inline]
irq_exit+0x192/0x1d0 kernel/softirq.c:413
exiting_irq arch/x86/include/asm/apic.h:546 [inline]
do_IRQ+0xda/0x270 arch/x86/kernel/irq.c:263
ret_from_intr+0x0/0x2b
arch_local_irq_restore arch/x86/include/asm/paravirt.h:759 [inline]
lock_acquire+0x267/0x8f0 kernel/locking/lockdep.c:4926
down_write+0x8d/0x150 kernel/locking/rwsem.c:1531
inode_lock include/linux/fs.h:797 [inline]
process_measurement+0x68a/0x1750 security/integrity/ima/ima_main.c:228
ima_file_check+0xb9/0x100 security/integrity/ima/ima_main.c:440
do_open fs/namei.c:3231 [inline]
path_openat+0x1997/0x27d0 fs/namei.c:3346
do_filp_open+0x192/0x260 fs/namei.c:3373
do_sys_openat2+0x585/0x7d0 fs/open.c:1148
do_sys_open+0xc3/0x140 fs/open.c:1164
do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:295
entry_SYSCALL_64_after_hwframe+0x49/0xb3

to a SOFTIRQ-irq-unsafe lock:
(shmlock_user_lock){+.+.}-{2:2}

... which became SOFTIRQ-irq-unsafe at:
...
lock_acquire+0x1f2/0x8f0 kernel/locking/lockdep.c:4923
__raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
_raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:151
spin_lock include/linux/spinlock.h:353 [inline]
user_shm_lock+0xab/0x230 mm/mlock.c:855
hugetlb_file_setup+0x4e1/0x677 fs/hugetlbfs/inode.c:1416
newseg+0x460/0xe60 ipc/shm.c:652
ipcget_new ipc/util.c:344 [inline]
ipcget+0xf0/0xcb0 ipc/util.c:643
ksys_shmget ipc/shm.c:742 [inline]
__do_sys_shmget ipc/shm.c:747 [inline]
__se_sys_shmget ipc/shm.c:745 [inline]
__x64_sys_shmget+0x139/0x1a0 ipc/shm.c:745
do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:295
entry_SYSCALL_64_after_hwframe+0x49/0xb3

other info that might help us debug this:

Chain exists of:
&xa->xa_lock#4 --> &info->lock --> shmlock_user_lock

Possible interrupt unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(shmlock_user_lock);
                               local_irq_disable();
                               lock(&xa->xa_lock#4);
                               lock(&info->lock);
  <Interrupt>
    lock(&xa->xa_lock#4);

*** DEADLOCK ***

5 locks held by syz-executor428/8337:
#0: ffff8880a7948450 (sb_writers#7){.+.+}-{0:0}, at: sb_start_write include/linux/fs.h:1655 [inline]
#0: ffff8880a7948450 (sb_writers#7){.+.+}-{0:0}, at: do_sys_ftruncate+0x29f/0x570 fs/open.c:190
#1: ffff8880a851c9d0 (&sb->s_type->i_mutex_key#16){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:797 [inline]
#1: ffff8880a851c9d0 (&sb->s_type->i_mutex_key#16){+.+.}-{3:3}, at: do_truncate+0x125/0x1f0 fs/open.c:62
#2: ffff8880a851cb90 (&mapping->i_mmap_rwsem){++++}-{3:3}, at: i_mmap_lock_read include/linux/fs.h:541 [inline]
#2: ffff8880a851cb90 (&mapping->i_mmap_rwsem){++++}-{3:3}, at: split_huge_page_to_list+0x4c3/0x33b0 mm/huge_memory.c:2825
#3: ffff88812ffffcd8 (&pgdat->lru_lock){....}-{2:2}, at: split_huge_page_to_list+0x8da/0x33b0 mm/huge_memory.c:2855
#4: ffff8880a851cac8 (&xa->xa_lock#4){..-.}-{2:2}, at: spin_lock include/linux/spinlock.h:353 [inline]
#4: ffff8880a851cac8 (&xa->xa_lock#4){..-.}-{2:2}, at: split_huge_page_to_list+0xad0/0x33b0 mm/huge_memory.c:2864

the dependencies between SOFTIRQ-irq-safe lock and the holding lock:
-> (&xa->xa_lock#4){..-.}-{2:2} {
IN-SOFTIRQ-W at:
lock_acquire+0x1f2/0x8f0 kernel/locking/lockdep.c:4923
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
_raw_spin_lock_irqsave+0x8c/0xbf kernel/locking/spinlock.c:159
test_clear_page_writeback+0x1d7/0x11e0 mm/page-writeback.c:2728
end_page_writeback+0x239/0x520 mm/filemap.c:1317
end_buffer_async_write+0x442/0x5c0 fs/buffer.c:384
end_bio_bh_io_sync+0xe2/0x140 fs/buffer.c:3012
bio_endio+0x473/0x820 block/bio.c:1422
req_bio_endio block/blk-core.c:245 [inline]
blk_update_request+0x3e1/0xdc0 block/blk-core.c:1472
scsi_end_request+0x80/0x7b0 drivers/scsi/scsi_lib.c:575
scsi_io_completion+0x1e7/0x1300 drivers/scsi/scsi_lib.c:959
scsi_softirq_done+0x31e/0x3b0 drivers/scsi/scsi_lib.c:1454
blk_done_softirq+0x2db/0x440 block/blk-softirq.c:37
__do_softirq+0x26c/0x9f7 kernel/softirq.c:292
invoke_softirq kernel/softirq.c:373 [inline]
irq_exit+0x192/0x1d0 kernel/softirq.c:413
exiting_irq arch/x86/include/asm/apic.h:546 [inline]
do_IRQ+0xda/0x270 arch/x86/kernel/irq.c:263
ret_from_intr+0x0/0x2b
arch_local_irq_restore arch/x86/include/asm/paravirt.h:759 [inline]
lock_acquire+0x267/0x8f0 kernel/locking/lockdep.c:4926
down_write+0x8d/0x150 kernel/locking/rwsem.c:1531
inode_lock include/linux/fs.h:797 [inline]
process_measurement+0x68a/0x1750 security/integrity/ima/ima_main.c:228
ima_file_check+0xb9/0x100 security/integrity/ima/ima_main.c:440
do_open fs/namei.c:3231 [inline]
path_openat+0x1997/0x27d0 fs/namei.c:3346
do_filp_open+0x192/0x260 fs/namei.c:3373
do_sys_openat2+0x585/0x7d0 fs/open.c:1148
do_sys_open+0xc3/0x140 fs/open.c:1164
do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:295
entry_SYSCALL_64_after_hwframe+0x49/0xb3
INITIAL USE at:
lock_acquire+0x1f2/0x8f0 kernel/locking/lockdep.c:4923
__raw_spin_lock_irq include/linux/spinlock_api_smp.h:128 [inline]
_raw_spin_lock_irq+0x5b/0x80 kernel/locking/spinlock.c:167
spin_lock_irq include/linux/spinlock.h:378 [inline]
__add_to_page_cache_locked+0x607/0xe00 mm/filemap.c:855
add_to_page_cache_lru+0x1aa/0x700 mm/filemap.c:921
do_read_cache_page+0x9ab/0x1810 mm/filemap.c:2755
read_mapping_page include/linux/pagemap.h:397 [inline]
read_part_sector+0xf6/0x600 block/partitions/core.c:643
adfspart_check_ICS+0x9d/0xc80 block/partitions/acorn.c:360
check_partition block/partitions/core.c:140 [inline]
blk_add_partitions+0x474/0xe50 block/partitions/core.c:571
bdev_disk_changed+0x1fb/0x380 fs/block_dev.c:1544
__blkdev_get+0xb15/0x1530 fs/block_dev.c:1647
blkdev_get+0x41/0x2b0 fs/block_dev.c:1749
register_disk block/genhd.c:763 [inline]
__device_add_disk+0xa4f/0x1170 block/genhd.c:853
add_disk include/linux/genhd.h:294 [inline]
brd_init+0x297/0x463 drivers/block/brd.c:533
do_one_initcall+0x10a/0x7d0 init/main.c:1158
do_initcall_level init/main.c:1231 [inline]
do_initcalls init/main.c:1247 [inline]
do_basic_setup init/main.c:1267 [inline]
kernel_init_freeable+0x501/0x5ae init/main.c:1451
kernel_init+0xd/0x1bb init/main.c:1358
ret_from_fork+0x24/0x30 arch/x86/entry/entry_64.S:352
}
... key at: [<ffffffff8c67b1e0>] __key.18007+0x0/0x40
... acquired at:
lock_acquire+0x1f2/0x8f0 kernel/locking/lockdep.c:4923
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
_raw_spin_lock_irqsave+0x8c/0xbf kernel/locking/spinlock.c:159
shmem_uncharge+0x24/0x270 mm/shmem.c:341
__split_huge_page mm/huge_memory.c:2613 [inline]
split_huge_page_to_list+0x274b/0x33b0 mm/huge_memory.c:2886
split_huge_page include/linux/huge_mm.h:204 [inline]
shmem_punch_compound+0x13e/0x1e0 mm/shmem.c:814
shmem_undo_range+0x5f1/0x1b80 mm/shmem.c:870
shmem_truncate_range+0x27/0xa0 mm/shmem.c:980
shmem_setattr+0x8b6/0xc80 mm/shmem.c:1039
notify_change+0xb6d/0x1020 fs/attr.c:336
do_truncate+0x134/0x1f0 fs/open.c:64
do_sys_ftruncate+0x4a5/0x570 fs/open.c:195
do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:295
entry_SYSCALL_64_after_hwframe+0x49/0xb3


the dependencies between the lock to be acquired
and SOFTIRQ-irq-unsafe lock:
-> (shmlock_user_lock){+.+.}-{2:2} {
HARDIRQ-ON-W at:
lock_acquire+0x1f2/0x8f0 kernel/locking/lockdep.c:4923
__raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
_raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:151
spin_lock include/linux/spinlock.h:353 [inline]
user_shm_lock+0xab/0x230 mm/mlock.c:855
hugetlb_file_setup+0x4e1/0x677 fs/hugetlbfs/inode.c:1416
newseg+0x460/0xe60 ipc/shm.c:652
ipcget_new ipc/util.c:344 [inline]
ipcget+0xf0/0xcb0 ipc/util.c:643
ksys_shmget ipc/shm.c:742 [inline]
__do_sys_shmget ipc/shm.c:747 [inline]
__se_sys_shmget ipc/shm.c:745 [inline]
__x64_sys_shmget+0x139/0x1a0 ipc/shm.c:745
do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:295
entry_SYSCALL_64_after_hwframe+0x49/0xb3
SOFTIRQ-ON-W at:
lock_acquire+0x1f2/0x8f0 kernel/locking/lockdep.c:4923
__raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
_raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:151
spin_lock include/linux/spinlock.h:353 [inline]
user_shm_lock+0xab/0x230 mm/mlock.c:855
hugetlb_file_setup+0x4e1/0x677 fs/hugetlbfs/inode.c:1416
newseg+0x460/0xe60 ipc/shm.c:652
ipcget_new ipc/util.c:344 [inline]
ipcget+0xf0/0xcb0 ipc/util.c:643
ksys_shmget ipc/shm.c:742 [inline]
__do_sys_shmget ipc/shm.c:747 [inline]
__se_sys_shmget ipc/shm.c:745 [inline]
__x64_sys_shmget+0x139/0x1a0 ipc/shm.c:745
do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:295
entry_SYSCALL_64_after_hwframe+0x49/0xb3
INITIAL USE at:
lock_acquire+0x1f2/0x8f0 kernel/locking/lockdep.c:4923
__raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
_raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:151
spin_lock include/linux/spinlock.h:353 [inline]
user_shm_lock+0xab/0x230 mm/mlock.c:855
shmem_lock+0x1dd/0x2d0 mm/shmem.c:2184
shmctl_do_lock+0x73f/0x8f0 ipc/shm.c:1111
ksys_shmctl.constprop.0+0x203/0x350 ipc/shm.c:1188
do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:295
entry_SYSCALL_64_after_hwframe+0x49/0xb3
}
... key at: [<ffffffff89a5e858>] shmlock_user_lock+0x18/0x5c0
... acquired at:
__raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
_raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:151
spin_lock include/linux/spinlock.h:353 [inline]
user_shm_lock+0xab/0x230 mm/mlock.c:855
shmem_lock+0x1dd/0x2d0 mm/shmem.c:2184
shmctl_do_lock+0x73f/0x8f0 ipc/shm.c:1111
ksys_shmctl.constprop.0+0x203/0x350 ipc/shm.c:1188
do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:295
entry_SYSCALL_64_after_hwframe+0x49/0xb3

-> (&info->lock){....}-{2:2} {
INITIAL USE at:
lock_acquire+0x1f2/0x8f0 kernel/locking/lockdep.c:4923
__raw_spin_lock_irq include/linux/spinlock_api_smp.h:128 [inline]
_raw_spin_lock_irq+0x5b/0x80 kernel/locking/spinlock.c:167
spin_lock_irq include/linux/spinlock.h:378 [inline]
shmem_getpage_gfp+0x937/0x2a10 mm/shmem.c:1882
shmem_getpage mm/shmem.c:154 [inline]
shmem_write_begin+0x102/0x1e0 mm/shmem.c:2483
generic_perform_write+0x20a/0x4e0 mm/filemap.c:3302
__generic_file_write_iter+0x24c/0x610 mm/filemap.c:3431
generic_file_write_iter+0x3f3/0x630 mm/filemap.c:3463
call_write_iter include/linux/fs.h:1907 [inline]
new_sync_write+0x4a2/0x700 fs/read_write.c:483
__vfs_write+0xc9/0x100 fs/read_write.c:496
vfs_write+0x268/0x5d0 fs/read_write.c:558
ksys_write+0x12d/0x250 fs/read_write.c:611
do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:295
entry_SYSCALL_64_after_hwframe+0x49/0xb3
}
... key at: [<ffffffff8c667e80>] __key.56422+0x0/0x40
... acquired at:
lock_acquire+0x1f2/0x8f0 kernel/locking/lockdep.c:4923
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
_raw_spin_lock_irqsave+0x8c/0xbf kernel/locking/spinlock.c:159
shmem_uncharge+0x24/0x270 mm/shmem.c:341
__split_huge_page mm/huge_memory.c:2613 [inline]
split_huge_page_to_list+0x274b/0x33b0 mm/huge_memory.c:2886
split_huge_page include/linux/huge_mm.h:204 [inline]
shmem_punch_compound+0x13e/0x1e0 mm/shmem.c:814
shmem_undo_range+0x5f1/0x1b80 mm/shmem.c:870
shmem_truncate_range+0x27/0xa0 mm/shmem.c:980
shmem_setattr+0x8b6/0xc80 mm/shmem.c:1039
notify_change+0xb6d/0x1020 fs/attr.c:336
do_truncate+0x134/0x1f0 fs/open.c:64
do_sys_ftruncate+0x4a5/0x570 fs/open.c:195
do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:295
entry_SYSCALL_64_after_hwframe+0x49/0xb3


stack backtrace:
CPU: 0 PID: 8337 Comm: syz-executor428 Not tainted 5.6.0-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0x188/0x20d lib/dump_stack.c:118
print_bad_irq_dependency kernel/locking/lockdep.c:2132 [inline]
check_irq_usage.cold+0x566/0x6de kernel/locking/lockdep.c:2330
check_prev_add kernel/locking/lockdep.c:2519 [inline]
check_prevs_add kernel/locking/lockdep.c:2620 [inline]
validate_chain kernel/locking/lockdep.c:3237 [inline]
__lock_acquire+0x2c39/0x4e00 kernel/locking/lockdep.c:4344
lock_acquire+0x1f2/0x8f0 kernel/locking/lockdep.c:4923
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
_raw_spin_lock_irqsave+0x8c/0xbf kernel/locking/spinlock.c:159
shmem_uncharge+0x24/0x270 mm/shmem.c:341
__split_huge_page mm/huge_memory.c:2613 [inline]
split_huge_page_to_list+0x274b/0x33b0 mm/huge_memory.c:2886
split_huge_page include/linux/huge_mm.h:204 [inline]
shmem_punch_compound+0x13e/0x1e0 mm/shmem.c:814
shmem_undo_range+0x5f1/0x1b80 mm/shmem.c:870
shmem_truncate_range+0x27/0xa0 mm/shmem.c:980
shmem_setattr+0x8b6/0xc80 mm/shmem.c:1039
notify_change+0xb6d/0x1020 fs/attr.c:336
do_truncate+0x134/0x1f0 fs/open.c:64
do_sys_ftruncate+0x4a5/0x570 fs/open.c:195
do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:295
entry_SYSCALL_64_after_hwframe+0x49/0xb3
RIP: 0033:0x44e769
Code: 4d c9 fb ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 0f 83 1b c9 fb ff c3 66 2e 0f 1f 84 00 00 00 00
RSP: 002b:00007fe511b3fce8 EFLAGS: 00000246 ORIG_RAX: 000000000000004d
RAX: ffffffffffffffda RBX: 00000000006e1c68 RCX: 000000000044e769
RDX: 000000000044e769 RSI: 00000000000001ff RDI: 0000000000000006
RBP: 00000000006e1c60 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00000000006e1c6c
R13: 00007ffce699f92f R14: 00007fe511b409c0 R15: 0000000000000000


---
This bug is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at [email protected].

syzbot will keep track of this bug report. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
For information about bisection process see: https://goo.gl/tpsmEJ#bisection
syzbot can test patches for this bug, for details see:
https://goo.gl/tpsmEJ#testing-patches


2020-04-14 13:40:53

by Yang Shi

Subject: Re: possible deadlock in shmem_uncharge

On Sun, Apr 12, 2020 at 3:11 AM syzbot
<[email protected]> wrote:
>
> Hello,
>
> syzbot found the following crash on:
>
> HEAD commit: ae46d2aa mm/gup: Let __get_user_pages_locked() return -EIN..
> git tree: upstream
> console output: https://syzkaller.appspot.com/x/log.txt?x=14a30a77e00000
> kernel config: https://syzkaller.appspot.com/x/.config?x=ca75979eeebf06c2
> dashboard link: https://syzkaller.appspot.com/bug?extid=c8a8197c8852f566b9d9
> compiler: gcc (GCC) 9.0.0 20181231 (experimental)
> syz repro: https://syzkaller.appspot.com/x/repro.syz?x=15f5632be00000
> C reproducer: https://syzkaller.appspot.com/x/repro.c?x=132ade57e00000
>
> The bug was bisected to:
>
> commit 71725ed10c40696dc6bdccf8e225815dcef24dba
> Author: Hugh Dickins <[email protected]>
> Date: Tue Apr 7 03:07:57 2020 +0000
>
> mm: huge tmpfs: try to split_huge_page() when punching hole
>
> bisection log: https://syzkaller.appspot.com/x/bisect.txt?x=120a752be00000
> final crash: https://syzkaller.appspot.com/x/report.txt?x=110a752be00000
> console output: https://syzkaller.appspot.com/x/log.txt?x=160a752be00000
>
> IMPORTANT: if you fix the bug, please add the following tag to the commit:
> Reported-by: [email protected]
> Fixes: 71725ed10c40 ("mm: huge tmpfs: try to split_huge_page() when punching hole")
>
> =====================================================
> WARNING: SOFTIRQ-safe -> SOFTIRQ-unsafe lock order detected
> 5.6.0-syzkaller #0 Not tainted
> -----------------------------------------------------
> syz-executor428/8337 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:
> ffff8880a851c778 (&info->lock){....}-{2:2}, at: shmem_uncharge+0x24/0x270 mm/shmem.c:341
>
> and this task is already holding:
> ffff8880a851cac8 (&xa->xa_lock#4){..-.}-{2:2}, at: spin_lock include/linux/spinlock.h:353 [inline]
> ffff8880a851cac8 (&xa->xa_lock#4){..-.}-{2:2}, at: split_huge_page_to_list+0xad0/0x33b0 mm/huge_memory.c:2864
> which would create a new lock dependency:
> (&xa->xa_lock#4){..-.}-{2:2} -> (&info->lock){....}-{2:2}
>
> but this new dependency connects a SOFTIRQ-irq-safe lock:
> (&xa->xa_lock#4){..-.}-{2:2}

It looks like shmem_uncharge() is only called by __split_huge_page() and
collapse_file(). collapse_file() acquires the xa_lock with IRQs disabled
before acquiring info->lock, so it is safe. __split_huge_page() takes the
xa_lock with a plain spin_lock(), but by that point the lru_lock has
already been acquired with IRQs disabled, so IRQs are off there as well.

So it is unnecessary to disable IRQs when acquiring info->lock in
shmem_uncharge(). Can syzbot try the patch below?

diff --git a/mm/shmem.c b/mm/shmem.c
index d722eb8..100117b 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -334,15 +334,14 @@ bool shmem_charge(struct inode *inode, long pages)
void shmem_uncharge(struct inode *inode, long pages)
{
struct shmem_inode_info *info = SHMEM_I(inode);
- unsigned long flags;

/* nrpages adjustment done by __delete_from_page_cache() or caller */

- spin_lock_irqsave(&info->lock, flags);
+ spin_lock(&info->lock);
info->alloced -= pages;
inode->i_blocks -= pages * BLOCKS_PER_PAGE;
shmem_recalc_inode(inode);
- spin_unlock_irqrestore(&info->lock, flags);
+ spin_unlock(&info->lock);

shmem_inode_unacct_blocks(inode, pages);
}
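
To make the calling contexts concrete, here is a rough sketch of the two
call sites (simplified from the 5.6-era mm/khugepaged.c and
mm/huge_memory.c paths shown in the traces; names and arguments are
abbreviated, so treat it as illustrative only):

	/* collapse_file(): the xarray lock is taken with IRQs disabled */
	xas_lock_irq(&xas);
	/* ... rollback path ... */
	shmem_uncharge(mapping->host, nr_none);	/* IRQs already off */
	xas_unlock_irq(&xas);

	/* split_huge_page_to_list(): IRQs go off with the lru_lock, then
	 * the xarray lock is taken with a plain spin_lock() */
	spin_lock_irqsave(&pgdata->lru_lock, flags);
	xa_lock(&mapping->i_pages);
	/* ... */
	__split_huge_page(page, list, end, flags);	/* calls shmem_uncharge() */

In both cases IRQs are already disabled by the time shmem_uncharge()
runs, which is why the plain spin_lock() above should be enough.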



> [ rest of the lockdep report trimmed ]

2020-04-14 14:24:51

by Dmitry Vyukov

Subject: Re: possible deadlock in shmem_uncharge

On Tue, Apr 14, 2020 at 12:10 AM Yang Shi <[email protected]> wrote:
>
> On Sun, Apr 12, 2020 at 3:11 AM syzbot
> <[email protected]> wrote:
> >
> > Hello,
> >
> > syzbot found the following crash on:
> >
> > HEAD commit: ae46d2aa mm/gup: Let __get_user_pages_locked() return -EIN..
> > git tree: upstream
> > console output: https://syzkaller.appspot.com/x/log.txt?x=14a30a77e00000
> > kernel config: https://syzkaller.appspot.com/x/.config?x=ca75979eeebf06c2
> > dashboard link: https://syzkaller.appspot.com/bug?extid=c8a8197c8852f566b9d9
> > compiler: gcc (GCC) 9.0.0 20181231 (experimental)
> > syz repro: https://syzkaller.appspot.com/x/repro.syz?x=15f5632be00000
> > C reproducer: https://syzkaller.appspot.com/x/repro.c?x=132ade57e00000
> >
> > The bug was bisected to:
> >
> > commit 71725ed10c40696dc6bdccf8e225815dcef24dba
> > Author: Hugh Dickins <[email protected]>
> > Date: Tue Apr 7 03:07:57 2020 +0000
> >
> > mm: huge tmpfs: try to split_huge_page() when punching hole
> >
> > bisection log: https://syzkaller.appspot.com/x/bisect.txt?x=120a752be00000
> > final crash: https://syzkaller.appspot.com/x/report.txt?x=110a752be00000
> > console output: https://syzkaller.appspot.com/x/log.txt?x=160a752be00000
> >
> > IMPORTANT: if you fix the bug, please add the following tag to the commit:
> > Reported-by: [email protected]
> > Fixes: 71725ed10c40 ("mm: huge tmpfs: try to split_huge_page() when punching hole")
> >
> > =====================================================
> > WARNING: SOFTIRQ-safe -> SOFTIRQ-unsafe lock order detected
> > 5.6.0-syzkaller #0 Not tainted
> > -----------------------------------------------------
> > syz-executor428/8337 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:
> > ffff8880a851c778 (&info->lock){....}-{2:2}, at: shmem_uncharge+0x24/0x270 mm/shmem.c:341
> >
> > and this task is already holding:
> > ffff8880a851cac8 (&xa->xa_lock#4){..-.}-{2:2}, at: spin_lock include/linux/spinlock.h:353 [inline]
> > ffff8880a851cac8 (&xa->xa_lock#4){..-.}-{2:2}, at: split_huge_page_to_list+0xad0/0x33b0 mm/huge_memory.c:2864
> > which would create a new lock dependency:
> > (&xa->xa_lock#4){..-.}-{2:2} -> (&info->lock){....}-{2:2}
> >
> > but this new dependency connects a SOFTIRQ-irq-safe lock:
> > (&xa->xa_lock#4){..-.}-{2:2}
>
> It looks like shmem_uncharge() is only called by __split_huge_page() and
> collapse_file(). collapse_file() acquires the xa_lock with IRQs disabled
> before acquiring info->lock, so it is safe. __split_huge_page() takes the
> xa_lock with a plain spin_lock(), but by that point the lru_lock has
> already been acquired with IRQs disabled, so IRQs are off there as well.
>
> So it is unnecessary to disable IRQs when acquiring info->lock in
> shmem_uncharge(). Can syzbot try the patch below?

Hi Yang,

Yes, sure, please see:
https://github.com/google/syzkaller/blob/master/docs/syzbot.md#testing-patches
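
For example, per those docs the test request is simply a reply to the
report that starts with a line like

#syz test: https://github.com/torvalds/linux.git master

with the patch inline or attached (the tree and branch here are only an
example; any publicly fetchable tree and commit work).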

> diff --git a/mm/shmem.c b/mm/shmem.c
> index d722eb8..100117b 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -334,15 +334,14 @@ bool shmem_charge(struct inode *inode, long pages)
> void shmem_uncharge(struct inode *inode, long pages)
> {
> struct shmem_inode_info *info = SHMEM_I(inode);
> - unsigned long flags;
>
> /* nrpages adjustment done by __delete_from_page_cache() or caller */
>
> - spin_lock_irqsave(&info->lock, flags);
> + spin_lock(&info->lock);
> info->alloced -= pages;
> inode->i_blocks -= pages * BLOCKS_PER_PAGE;
> shmem_recalc_inode(inode);
> - spin_unlock_irqrestore(&info->lock, flags);
> + spin_unlock(&info->lock);
>
> shmem_inode_unacct_blocks(inode, pages);
> }
>
>
>
> > [ rest of the lockdep report and mailing-list footer trimmed ]

2020-04-16 02:07:21

by Hugh Dickins

Subject: Re: possible deadlock in shmem_uncharge

On Mon, 13 Apr 2020, Yang Shi wrote:
> On Sun, Apr 12, 2020 at 3:11 AM syzbot
> <[email protected]> wrote:
> >
> > Hello,
> >
> > syzbot found the following crash on:
> >
> > HEAD commit: ae46d2aa mm/gup: Let __get_user_pages_locked() return -EIN..
> > git tree: upstream
> > console output: https://syzkaller.appspot.com/x/log.txt?x=14a30a77e00000
> > kernel config: https://syzkaller.appspot.com/x/.config?x=ca75979eeebf06c2
> > dashboard link: https://syzkaller.appspot.com/bug?extid=c8a8197c8852f566b9d9
> > compiler: gcc (GCC) 9.0.0 20181231 (experimental)
> > syz repro: https://syzkaller.appspot.com/x/repro.syz?x=15f5632be00000
> > C reproducer: https://syzkaller.appspot.com/x/repro.c?x=132ade57e00000
> >
> > The bug was bisected to:
> >
> > commit 71725ed10c40696dc6bdccf8e225815dcef24dba
> > Author: Hugh Dickins <[email protected]>
> > Date: Tue Apr 7 03:07:57 2020 +0000
> >
> > mm: huge tmpfs: try to split_huge_page() when punching hole
> >
> > bisection log: https://syzkaller.appspot.com/x/bisect.txt?x=120a752be00000
> > final crash: https://syzkaller.appspot.com/x/report.txt?x=110a752be00000
> > console output: https://syzkaller.appspot.com/x/log.txt?x=160a752be00000
> >
> > IMPORTANT: if you fix the bug, please add the following tag to the commit:
> > Reported-by: [email protected]
> > Fixes: 71725ed10c40 ("mm: huge tmpfs: try to split_huge_page() when punching hole")

No, that commit just gave syzkaller an easier way to reach old code.

> >
> > =====================================================
> > WARNING: SOFTIRQ-safe -> SOFTIRQ-unsafe lock order detected
> > 5.6.0-syzkaller #0 Not tainted
> > -----------------------------------------------------
> > syz-executor428/8337 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:
> > ffff8880a851c778 (&info->lock){....}-{2:2}, at: shmem_uncharge+0x24/0x270 mm/shmem.c:341
> >
> > and this task is already holding:
> > ffff8880a851cac8 (&xa->xa_lock#4){..-.}-{2:2}, at: spin_lock include/linux/spinlock.h:353 [inline]
> > ffff8880a851cac8 (&xa->xa_lock#4){..-.}-{2:2}, at: split_huge_page_to_list+0xad0/0x33b0 mm/huge_memory.c:2864
> > which would create a new lock dependency:
> > (&xa->xa_lock#4){..-.}-{2:2} -> (&info->lock){....}-{2:2}
> >
> > but this new dependency connects a SOFTIRQ-irq-safe lock:
> > (&xa->xa_lock#4){..-.}-{2:2}
>
> It looks like shmem_uncharge() is only called by __split_huge_page()
> and collapse_file(). collapse_file() acquires the xa_lock with IRQs
> disabled before acquiring info->lock, so that path is safe.
> __split_huge_page() takes the xa_lock without disabling IRQs itself,
> but there the lru_lock is acquired with IRQs disabled before the
> xa_lock, so IRQs are already off too.
>
> So it is unnecessary to acquire info->lock with IRQs disabled in
> shmem_uncharge(). Can syzbot try the patch below?

But I disagree with the patch below. You're right that IRQ-disabling
here is unnecessary, given its two callers; but I'm not sure that we
want it to look different from shmem_charge() and all other info->lock
takers; and, more importantly, I don't see how removing the redundant
IRQ-saving below could make it any less liable to deadlock.

The crucial observation comes lower down
> > to a SOFTIRQ-irq-unsafe lock:
> > (shmlock_user_lock){+.+.}-{2:2}
and there's another syzbot report that's come out on shmlock_user_lock,
"possible deadlock in user_shm_lock".

I believe all that's needed to fix both reports is not to use info->lock
in shmem_lock() - I see now that we saw lockdep reports of this kind
internally, a long time ago, and fixed them in that way.
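
Schematically, that would go in the direction sketched below (a sketch
only, not the patch: it assumes shmem_lock()'s callers can serialize
info->flags themselves, which the real patch will have to justify):

--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ ... @@ int shmem_lock(struct file *file, int lock, struct user_struct *user)
 	struct shmem_inode_info *info = SHMEM_I(inode);
 	int retval = -ENOMEM;
 
-	spin_lock_irq(&info->lock);
+	/*
+	 * info->flags would then have to be serialized by the callers
+	 * (for SysV SHM, by the ipc object lock they already hold), so
+	 * that user_shm_lock() is never called under a spinlock.
+	 */
 	if (lock && !(info->flags & VM_LOCKED)) {
 		if (!user_shm_lock(inode->i_size, user))
 			goto out_nomem;
@@ ... @@
 	retval = 0;
 
 out_nomem:
-	spin_unlock_irq(&info->lock);
 	return retval;
 }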

(I haven't composed the patch and references yet, and not decided if
I'll add it here or there or separately. I'll put it together now.)

Hugh

>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index d722eb8..100117b 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -334,15 +334,14 @@ bool shmem_charge(struct inode *inode, long pages)
>  void shmem_uncharge(struct inode *inode, long pages)
>  {
>  	struct shmem_inode_info *info = SHMEM_I(inode);
> -	unsigned long flags;
>  
>  	/* nrpages adjustment done by __delete_from_page_cache() or caller */
>  
> -	spin_lock_irqsave(&info->lock, flags);
> +	spin_lock(&info->lock);
>  	info->alloced -= pages;
>  	inode->i_blocks -= pages * BLOCKS_PER_PAGE;
>  	shmem_recalc_inode(inode);
> -	spin_unlock_irqrestore(&info->lock, flags);
> +	spin_unlock(&info->lock);
>  
>  	shmem_inode_unacct_blocks(inode, pages);
>  }

2020-04-16 02:22:38

by Yang Shi

[permalink] [raw]
Subject: Re: possible deadlock in shmem_uncharge

On Wed, Apr 15, 2020 at 7:04 PM Hugh Dickins <[email protected]> wrote:
>
> On Mon, 13 Apr 2020, Yang Shi wrote:
> > On Sun, Apr 12, 2020 at 3:11 AM syzbot
> > <[email protected]> wrote:
> > >
> > > [...]
>
> No, that commit just gave syzkaller an easier way to reach old code.
>
> > >
> > > [...]
> >
> > It looks like shmem_uncharge() is only called by __split_huge_page()
> > and collapse_file(). collapse_file() acquires the xa_lock with IRQs
> > disabled before acquiring info->lock, so that path is safe.
> > __split_huge_page() takes the xa_lock without disabling IRQs itself,
> > but there the lru_lock is acquired with IRQs disabled before the
> > xa_lock, so IRQs are already off too.
> >
> > So it is unnecessary to acquire info->lock with IRQs disabled in
> > shmem_uncharge(). Can syzbot try the patch below?
>
> But I disagree with the patch below. You're right that IRQ-disabling
> here is unnecessary, given its two callers; but I'm not sure that we
> want it to look different from shmem_charge() and all other info->lock
> takers; and, more importantly, I don't see how removing the redundant
> IRQ-saving below could make it any less liable to deadlock.

Yes, I realized the patch can't suppress the lockdep splat. But
actually I still don't understand how this deadlock could happen:
info->lock is acquired with IRQs disabled before user_shm_lock is
taken, so an interrupt can't come in there at all, unless I missed
something.


2020-04-16 03:08:02

by Hugh Dickins

[permalink] [raw]
Subject: Re: possible deadlock in shmem_uncharge

On Wed, 15 Apr 2020, Yang Shi wrote:
> [...]
>
> Yes, I realized the patch can't suppress the lockdep splat. But
> actually I still don't understand how this deadlock could happen:
> info->lock is acquired with IRQs disabled before user_shm_lock is
> taken, so an interrupt can't come in there at all, unless I missed
> something.

I think the story it's trying to tell is this (but, like most of us,
I do find Mr Lockdep embarrassingly difficult to understand; and I'm
not much good at drawing race diagrams either):

CPU0 was in user_shm_unlock(), it's got shmlock_user_lock, then an
interrupt comes in. It's an endio kind of interrupt, which goes off
to test_clear_page_writeback(), which wants the xa_lock on i_pages.

Meanwhile, CPU1 was doing some SysV SHM locking, it's got as far as
shmem_lock(), it has acquired info->lock, and goes off to user_shm_lock()
which wants shmlock_user_lock.

But sadly, CPU2 is splitting a shmem THP, calling shmem_uncharge()
that wants info->lock while outer level holds xa_lock on i_pages:
with interrupts properly disabled, but that doesn't help.
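
Attempting a diagram anyway, the waits-for cycle would be (call sites
abbreviated, lock names as in the report):

  CPU0: user_shm_unlock() holds shmlock_user_lock;
        <endio interrupt> test_clear_page_writeback()
        wants xa_lock(&i_pages) - held by CPU2

  CPU1: shmem_lock() holds info->lock (IRQs off);
        user_shm_lock() wants shmlock_user_lock - held by CPU0

  CPU2: __split_huge_page() holds xa_lock(&i_pages) (IRQs off);
        shmem_uncharge() wants info->lock - held by CPU1

Each CPU holds one lock of the cycle and waits on the next, so none of
them can make progress.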

Now, that story doesn't quite hold up as a deadlock, because shmem
doesn't use writeback tags; and (unless you set shmem_enabled "force")
I don't think there's a way to get shmem THPs in SysV SHM (and are
they hole-punchable? maybe through MADV_REMOVE); so it looks like
we're talking about different inodes.
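
For the record, the hole-punch route I have in mind would go something
like this - just a sketch, error handling omitted, and whether the
fault actually gets a THP depends on shmem_enabled:

	#include <sys/ipc.h>
	#include <sys/shm.h>
	#include <sys/mman.h>

	int main(void)
	{
		/* create and attach a 4MB SysV SHM segment */
		int shmid = shmget(IPC_PRIVATE, 4 << 20, IPC_CREAT | 0600);
		char *p = shmat(shmid, NULL, 0);

		p[0] = 1;	/* fault in pages, perhaps as a THP */
		madvise(p, 2 << 20, MADV_REMOVE);	/* punch out the first 2MB */

		shmdt(p);
		shmctl(shmid, IPC_RMID, NULL);
		return 0;
	}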

But lockdep is right to report it, and more thought might arrive at
a more convincing scenario. Anyway, easily fixed and best fixed.

(But now I think my patch must wait until tomorrow.)

Hugh

2020-04-17 00:21:36

by Hugh Dickins

[permalink] [raw]
Subject: Re: possible deadlock in shmem_uncharge

On Wed, 15 Apr 2020, Hugh Dickins wrote:
> [...]
>
> (But now I think my patch must wait until tomorrow.)

https://lore.kernel.org/lkml/[email protected]/