Date: 2022-11-26 08:27:28
From: syzbot
Subject: [syzbot] possible deadlock in hfsplus_file_extend

Hello,

syzbot found the following issue on:

HEAD commit: 0b1dcc2cf55a Merge tag 'mm-hotfixes-stable-2022-11-24' of ..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=138ad173880000
kernel config: https://syzkaller.appspot.com/x/.config?x=436ee340148d5197
dashboard link: https://syzkaller.appspot.com/bug?extid=325b61d3c9a17729454b
compiler: gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/3af32b89453e/disk-0b1dcc2c.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/063b631f0d64/vmlinux-0b1dcc2c.xz
kernel image: https://storage.googleapis.com/syzbot-assets/959ae1bdec1b/bzImage-0b1dcc2c.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: [email protected]

======================================================
WARNING: possible circular locking dependency detected
6.1.0-rc6-syzkaller-00251-g0b1dcc2cf55a #0 Not tainted
------------------------------------------------------
syz-executor.2/23177 is trying to acquire lock:
ffff88805c843dc8 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}, at: hfsplus_file_extend+0x1bf/0xf60 fs/hfsplus/extents.c:457

but task is already holding lock:
ffff888089d400b0 (&tree->tree_lock){+.+.}-{3:3}, at: hfsplus_find_init+0x1bb/0x230 fs/hfsplus/bfind.c:30

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (&tree->tree_lock){+.+.}-{3:3}:
__mutex_lock_common kernel/locking/mutex.c:603 [inline]
__mutex_lock+0x12f/0x1360 kernel/locking/mutex.c:747
hfsplus_file_truncate+0xe87/0x10d0 fs/hfsplus/extents.c:595
hfsplus_setattr+0x1f2/0x320 fs/hfsplus/inode.c:269
notify_change+0xcd4/0x1440 fs/attr.c:420
do_truncate+0x140/0x200 fs/open.c:65
handle_truncate fs/namei.c:3216 [inline]
do_open fs/namei.c:3561 [inline]
path_openat+0x2143/0x2860 fs/namei.c:3714
do_filp_open+0x1ba/0x410 fs/namei.c:3741
do_sys_openat2+0x16d/0x4c0 fs/open.c:1310
do_sys_open fs/open.c:1326 [inline]
__do_sys_creat fs/open.c:1402 [inline]
__se_sys_creat fs/open.c:1396 [inline]
__x64_sys_creat+0xcd/0x120 fs/open.c:1396
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd

-> #0 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3097 [inline]
check_prevs_add kernel/locking/lockdep.c:3216 [inline]
validate_chain kernel/locking/lockdep.c:3831 [inline]
__lock_acquire+0x2a43/0x56d0 kernel/locking/lockdep.c:5055
lock_acquire kernel/locking/lockdep.c:5668 [inline]
lock_acquire+0x1e3/0x630 kernel/locking/lockdep.c:5633
__mutex_lock_common kernel/locking/mutex.c:603 [inline]
__mutex_lock+0x12f/0x1360 kernel/locking/mutex.c:747
hfsplus_file_extend+0x1bf/0xf60 fs/hfsplus/extents.c:457
hfsplus_bmap_reserve+0x31c/0x410 fs/hfsplus/btree.c:358
hfsplus_create_cat+0x1ea/0x10d0 fs/hfsplus/catalog.c:272
hfsplus_fill_super+0x1544/0x1a30 fs/hfsplus/super.c:560
mount_bdev+0x351/0x410 fs/super.c:1401
legacy_get_tree+0x109/0x220 fs/fs_context.c:610
vfs_get_tree+0x8d/0x2f0 fs/super.c:1531
do_new_mount fs/namespace.c:3040 [inline]
path_mount+0x132a/0x1e20 fs/namespace.c:3370
do_mount fs/namespace.c:3383 [inline]
__do_sys_mount fs/namespace.c:3591 [inline]
__se_sys_mount fs/namespace.c:3568 [inline]
__x64_sys_mount+0x283/0x300 fs/namespace.c:3568
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd

other info that might help us debug this:

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&tree->tree_lock);
                               lock(&HFSPLUS_I(inode)->extents_lock);
                               lock(&tree->tree_lock);
  lock(&HFSPLUS_I(inode)->extents_lock);

*** DEADLOCK ***

3 locks held by syz-executor.2/23177:
#0: ffff888077b000e0 (&type->s_umount_key#58/1){+.+.}-{3:3}, at: alloc_super+0x22e/0xb60 fs/super.c:228
#1: ffff888026292998 (&sbi->vh_mutex){+.+.}-{3:3}, at: hfsplus_fill_super+0x14cd/0x1a30 fs/hfsplus/super.c:553
#2: ffff888089d400b0 (&tree->tree_lock){+.+.}-{3:3}, at: hfsplus_find_init+0x1bb/0x230 fs/hfsplus/bfind.c:30

stack backtrace:
CPU: 0 PID: 23177 Comm: syz-executor.2 Not tainted 6.1.0-rc6-syzkaller-00251-g0b1dcc2cf55a #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/26/2022
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0xd1/0x138 lib/dump_stack.c:106
check_noncircular+0x25f/0x2e0 kernel/locking/lockdep.c:2177
check_prev_add kernel/locking/lockdep.c:3097 [inline]
check_prevs_add kernel/locking/lockdep.c:3216 [inline]
validate_chain kernel/locking/lockdep.c:3831 [inline]
__lock_acquire+0x2a43/0x56d0 kernel/locking/lockdep.c:5055
lock_acquire kernel/locking/lockdep.c:5668 [inline]
lock_acquire+0x1e3/0x630 kernel/locking/lockdep.c:5633
__mutex_lock_common kernel/locking/mutex.c:603 [inline]
__mutex_lock+0x12f/0x1360 kernel/locking/mutex.c:747
hfsplus_file_extend+0x1bf/0xf60 fs/hfsplus/extents.c:457
hfsplus_bmap_reserve+0x31c/0x410 fs/hfsplus/btree.c:358
hfsplus_create_cat+0x1ea/0x10d0 fs/hfsplus/catalog.c:272
hfsplus_fill_super+0x1544/0x1a30 fs/hfsplus/super.c:560
mount_bdev+0x351/0x410 fs/super.c:1401
legacy_get_tree+0x109/0x220 fs/fs_context.c:610
vfs_get_tree+0x8d/0x2f0 fs/super.c:1531
do_new_mount fs/namespace.c:3040 [inline]
path_mount+0x132a/0x1e20 fs/namespace.c:3370
do_mount fs/namespace.c:3383 [inline]
__do_sys_mount fs/namespace.c:3591 [inline]
__se_sys_mount fs/namespace.c:3568 [inline]
__x64_sys_mount+0x283/0x300 fs/namespace.c:3568
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f752048d60a
Code: 48 c7 c2 b8 ff ff ff f7 d8 64 89 02 b8 ff ff ff ff eb d2 e8 b8 04 00 00 0f 1f 84 00 00 00 00 00 49 89 ca b8 a5 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f7521100f88 EFLAGS: 00000202 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 00000000000005f8 RCX: 00007f752048d60a
RDX: 0000000020000600 RSI: 0000000020000040 RDI: 00007f7521100fe0
RBP: 00007f7521101020 R08: 00007f7521101020 R09: 0000000001a00050
R10: 0000000001a00050 R11: 0000000000000202 R12: 0000000020000600
R13: 0000000020000040 R14: 00007f7521100fe0 R15: 0000000020000280
</TASK>
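
The chain above comes down to two paths taking the same two mutexes in
opposite order: hfsplus_file_truncate() holds HFSPLUS_I(inode)->extents_lock
and then takes tree->tree_lock when it looks up the extents btree, while the
mount-time create path holds tree->tree_lock (taken in hfsplus_find_init())
and then wants extents_lock in hfsplus_file_extend(), reached through
hfsplus_bmap_reserve(). A minimal userspace model of that ABBA ordering,
with pthread mutexes standing in for the kernel mutexes (illustrative only,
not kernel code):

/* abba.c - build with: gcc -pthread abba.c */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t tree_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t extents_lock = PTHREAD_MUTEX_INITIALIZER;

/* dependency #1: truncate holds extents_lock, then takes tree_lock */
static void *truncate_path(void *arg)
{
	pthread_mutex_lock(&extents_lock);
	pthread_mutex_lock(&tree_lock);	/* blocks if the other path holds it */
	pthread_mutex_unlock(&tree_lock);
	pthread_mutex_unlock(&extents_lock);
	return NULL;
}

/* dependency #0: btree work holds tree_lock, then wants extents_lock */
static void *extend_path(void *arg)
{
	pthread_mutex_lock(&tree_lock);
	pthread_mutex_lock(&extents_lock);	/* blocks if the other path holds it */
	pthread_mutex_unlock(&extents_lock);
	pthread_mutex_unlock(&tree_lock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, truncate_path, NULL);
	pthread_create(&b, NULL, extend_path, NULL);
	pthread_join(a, NULL);	/* may never return if the two interleave badly */
	pthread_join(b, NULL);
	puts("no deadlock on this run");
	return 0;
}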


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at [email protected].

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.


Date: 2022-12-02 08:49:56
From: syzbot
Subject: Re: [syzbot] possible deadlock in hfsplus_file_extend

syzbot has found a reproducer for the following issue on:

HEAD commit: ef4d3ea40565 afs: Fix server->active leak in afs_put_server
git tree: upstream
console+strace: https://syzkaller.appspot.com/x/log.txt?x=17b09247880000
kernel config: https://syzkaller.appspot.com/x/.config?x=2325e409a9a893e1
dashboard link: https://syzkaller.appspot.com/bug?extid=325b61d3c9a17729454b
compiler: Debian clang version 13.0.1-++20220126092033+75e33f71c2da-1~exp1~20220126212112.63, GNU ld (GNU Binutils for Debian) 2.35.2
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=161ff423880000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=1130b38d880000

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/e840f63d5bd2/disk-ef4d3ea4.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/004e32e50436/vmlinux-ef4d3ea4.xz
kernel image: https://storage.googleapis.com/syzbot-assets/e371ed85328c/bzImage-ef4d3ea4.xz
mounted in repro: https://storage.googleapis.com/syzbot-assets/208dde4bde06/mount_0.gz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: [email protected]

loop0: detected capacity change from 0 to 1024
======================================================
WARNING: possible circular locking dependency detected
6.1.0-rc7-syzkaller-00103-gef4d3ea40565 #0 Not tainted
------------------------------------------------------
syz-executor112/3638 is trying to acquire lock:
ffff88807e8e07c8 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}, at: hfsplus_file_extend+0x1af/0x19d0 fs/hfsplus/extents.c:457

but task is already holding lock:
ffff8880183fe0b0 (&tree->tree_lock){+.+.}-{3:3}, at: hfsplus_find_init+0x143/0x1b0

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (&tree->tree_lock){+.+.}-{3:3}:
lock_acquire+0x182/0x3c0 kernel/locking/lockdep.c:5668
__mutex_lock_common+0x1bd/0x26e0 kernel/locking/mutex.c:603
__mutex_lock kernel/locking/mutex.c:747 [inline]
mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:799
hfsplus_file_truncate+0x871/0xbb0 fs/hfsplus/extents.c:595
hfsplus_setattr+0x1b8/0x280 fs/hfsplus/inode.c:269
notify_change+0xe38/0x10f0 fs/attr.c:420
do_truncate+0x1fb/0x2e0 fs/open.c:65
handle_truncate fs/namei.c:3216 [inline]
do_open fs/namei.c:3561 [inline]
path_openat+0x2770/0x2df0 fs/namei.c:3714
do_filp_open+0x264/0x4f0 fs/namei.c:3741
do_sys_openat2+0x124/0x4e0 fs/open.c:1310
do_sys_open fs/open.c:1326 [inline]
__do_sys_creat fs/open.c:1402 [inline]
__se_sys_creat fs/open.c:1396 [inline]
__x64_sys_creat+0x11f/0x160 fs/open.c:1396
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd

-> #0 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3097 [inline]
check_prevs_add kernel/locking/lockdep.c:3216 [inline]
validate_chain+0x1898/0x6ae0 kernel/locking/lockdep.c:3831
__lock_acquire+0x1292/0x1f60 kernel/locking/lockdep.c:5055
lock_acquire+0x182/0x3c0 kernel/locking/lockdep.c:5668
__mutex_lock_common+0x1bd/0x26e0 kernel/locking/mutex.c:603
__mutex_lock kernel/locking/mutex.c:747 [inline]
mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:799
hfsplus_file_extend+0x1af/0x19d0 fs/hfsplus/extents.c:457
hfsplus_bmap_reserve+0x123/0x500 fs/hfsplus/btree.c:358
hfsplus_rename_cat+0x1ab/0x1070 fs/hfsplus/catalog.c:456
hfsplus_rename+0x129/0x1b0 fs/hfsplus/dir.c:552
vfs_rename+0xd53/0x1130 fs/namei.c:4779
do_renameat2+0xb53/0x1370 fs/namei.c:4930
__do_sys_renameat2 fs/namei.c:4963 [inline]
__se_sys_renameat2 fs/namei.c:4960 [inline]
__x64_sys_renameat2+0xce/0xe0 fs/namei.c:4960
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd

other info that might help us debug this:

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&tree->tree_lock);
                               lock(&HFSPLUS_I(inode)->extents_lock);
                               lock(&tree->tree_lock);
  lock(&HFSPLUS_I(inode)->extents_lock);

*** DEADLOCK ***

7 locks held by syz-executor112/3638:
#0: ffff8880183fa460 (sb_writers#9){.+.+}-{0:0}, at: mnt_want_write+0x3b/0x80 fs/namespace.c:393
#1: ffff8880183fa748 (&type->s_vfs_rename_key){+.+.}-{3:3}, at: lock_rename+0x54/0x1a0 fs/namei.c:2994
#2: ffff88807e8e1e00 (&type->i_mutex_dir_key#6/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:791 [inline]
#2: ffff88807e8e1e00 (&type->i_mutex_dir_key#6/1){+.+.}-{3:3}, at: lock_rename+0xa0/0x1a0 fs/namei.c:2998
#3: ffff88807e8e2b80 (&sb->s_type->i_mutex_key#15/2){+.+.}-{3:3}, at: lock_rename+0x16e/0x1a0
#4: ffff88807e8e3900 (&sb->s_type->i_mutex_key#15){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:756 [inline]
#4: ffff88807e8e3900 (&sb->s_type->i_mutex_key#15){+.+.}-{3:3}, at: lock_two_nondirectories+0xdd/0x130 fs/inode.c:1121
#5: ffff88807e8e3fc0 (&sb->s_type->i_mutex_key#15/4){+.+.}-{3:3}, at: vfs_rename+0x80a/0x1130 fs/namei.c:4749
#6: ffff8880183fe0b0 (&tree->tree_lock){+.+.}-{3:3}, at: hfsplus_find_init+0x143/0x1b0

stack backtrace:
CPU: 1 PID: 3638 Comm: syz-executor112 Not tainted 6.1.0-rc7-syzkaller-00103-gef4d3ea40565 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/26/2022
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1b1/0x28e lib/dump_stack.c:106
check_noncircular+0x2cc/0x390 kernel/locking/lockdep.c:2177
check_prev_add kernel/locking/lockdep.c:3097 [inline]
check_prevs_add kernel/locking/lockdep.c:3216 [inline]
validate_chain+0x1898/0x6ae0 kernel/locking/lockdep.c:3831
__lock_acquire+0x1292/0x1f60 kernel/locking/lockdep.c:5055
lock_acquire+0x182/0x3c0 kernel/locking/lockdep.c:5668
__mutex_lock_common+0x1bd/0x26e0 kernel/locking/mutex.c:603
__mutex_lock kernel/locking/mutex.c:747 [inline]
mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:799
hfsplus_file_extend+0x1af/0x19d0 fs/hfsplus/extents.c:457
hfsplus_bmap_reserve+0x123/0x500 fs/hfsplus/btree.c:358
hfsplus_rename_cat+0x1ab/0x1070 fs/hfsplus/catalog.c:456
hfsplus_rename+0x129/0x1b0 fs/hfsplus/dir.c:552
vfs_rename+0xd53/0x1130 fs/namei.c:4779
do_renameat2+0xb53/0x1370 fs/namei.c:4930
__do_sys_renameat2 fs/namei.c:4963 [inline]
__se_sys_renameat2 fs/namei.c:4960 [inline]
__x64_sys_renameat2+0xce/0xe0 fs/namei.c:4960
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f1c184509f9
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 51 14 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 c0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fff069f5818 EFLAGS: 00000246 ORIG_RAX: 000000000000013c
RAX: ffffffffffffffda RBX: 2f30656c69662f2e RCX: 00007f1c184509f9
RDX: 0000000000000007 RSI: 00000000200001c0 RDI: 0000000000000007
RBP: 00007f1c18410290 R08: 0000000000000000 R09: 0000000000000000
R10: 00000000200002c0 R11: 000000
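
Lockdep only needs to observe both acquisition orders once to connect them:
the creat() path truncates an existing file (extents_lock, then tree_lock in
hfsplus_file_truncate()), and the renameat2() path reaches
hfsplus_rename_cat() -> hfsplus_bmap_reserve() -> hfsplus_file_extend() with
tree_lock already held. The syscall sequence therefore has roughly this
shape (illustrative only: the mount source and paths are placeholders, the
crafted hfsplus image that makes the btree grow is omitted, and the real C
reproducer is linked above):

/* repro_shape.c - sketch of the syscall sequence, not the actual reproducer */
#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/mount.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	/* assumes a prepared hfsplus image is already attached to /dev/loop0 */
	mount("/dev/loop0", "./mnt", "hfsplus", 0, NULL);

	/* creat() on an existing file truncates it:
	 * hfsplus_file_truncate() takes extents_lock, then tree_lock */
	close(creat("./mnt/file0", 0600));

	/* rename goes through hfsplus_rename_cat() -> hfsplus_bmap_reserve():
	 * tree_lock is held, then extents_lock of the btree inode is wanted */
	syscall(SYS_renameat2, AT_FDCWD, "./mnt/file0",
		AT_FDCWD, "./mnt/file1", 0);
	return 0;
}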

Date: 2024-04-24 16:39:35
From: Jeongjun Park
Subject: Re: [syzbot] possible deadlock in hfsplus_file_extend

Please test the following patch for the deadlock in hfsplus_file_extend.

#syz test git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/ master

---
fs/hfsplus/attributes.c | 33 +++++++++++++++--
fs/hfsplus/btree.c | 14 ++++++-
fs/hfsplus/catalog.c | 45 ++++++++++++++++++++--
fs/hfsplus/extents.c | 82 ++++++++++++++++++++++++++++++-----------
fs/hfsplus/xattr.c | 17 +++++++--
5 files changed, 159 insertions(+), 32 deletions(-)

diff --git a/fs/hfsplus/attributes.c b/fs/hfsplus/attributes.c
index eeebe80c6be4..af142e458ac2 100644
--- a/fs/hfsplus/attributes.c
+++ b/fs/hfsplus/attributes.c
@@ -198,8 +198,11 @@ int hfsplus_create_attr(struct inode *inode,
struct super_block *sb = inode->i_sb;
struct hfs_find_data fd;
hfsplus_attr_entry *entry_ptr;
+ atomic_long_t owner;
+ unsigned long curr = (unsigned long)current;
+ struct inode *fd_inode;
int entry_size;
- int err;
+ int err, locked = 0;

hfs_dbg(ATTR_MOD, "create_attr: %s,%ld\n",
name ? name : NULL, inode->i_ino);
@@ -216,9 +219,19 @@ int hfsplus_create_attr(struct inode *inode,
err = hfs_find_init(HFSPLUS_SB(sb)->attr_tree, &fd);
if (err)
goto failed_init_create_attr;
-
+
+ fd_inode = fd.tree->inode;
+ locked = mutex_trylock(&HFSPLUS_I(fd_inode)->extents_lock);
+
+ if(!locked){
+ owner = HFSPLUS_I(fd_inode)->extents_lock.owner;
+ if((unsigned long)atomic_long_cmpxchg(&owner, 0, 0) != curr)
+ return -EAGAIN;
+ }
/* Fail early and avoid ENOSPC during the btree operation */
err = hfs_bmap_reserve(fd.tree, fd.tree->depth + 1);
+ if(locked)
+ mutex_unlock(&HFSPLUS_I(fd_inode)->extents_lock);
if (err)
goto failed_create_attr;

@@ -306,7 +319,10 @@ static int __hfsplus_delete_attr(struct inode *inode, u32 cnid,

int hfsplus_delete_attr(struct inode *inode, const char *name)
{
- int err = 0;
+ int err = 0, locked = 0;
+ atomic_long_t owner;
+ unsigned long curr = (unsigned long)current;
+ struct inode *fd_inode;
struct super_block *sb = inode->i_sb;
struct hfs_find_data fd;

@@ -321,9 +337,18 @@ int hfsplus_delete_attr(struct inode *inode, const char *name)
err = hfs_find_init(HFSPLUS_SB(sb)->attr_tree, &fd);
if (err)
return err;
-
+
+ fd_inode = fd.tree->inode;
+ locked = mutex_trylock(&HFSPLUS_I(fd_inode)->extents_lock);
+ if(!locked){
+ owner = HFSPLUS_I(fd_inode)->extents_lock.owner;
+ if((unsigned long)atomic_long_cmpxchg(&owner, 0, 0) != curr)
+ return -EAGAIN;
+ }
/* Fail early and avoid ENOSPC during the btree operation */
err = hfs_bmap_reserve(fd.tree, fd.tree->depth);
+ if(locked)
+ mutex_unlock(&HFSPLUS_I(fd_inode)->extents_lock);
if (err)
goto out;

diff --git a/fs/hfsplus/btree.c b/fs/hfsplus/btree.c
index 9e1732a2b92a..aea695c4cfb8 100644
--- a/fs/hfsplus/btree.c
+++ b/fs/hfsplus/btree.c
@@ -380,9 +380,21 @@ struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree)
u16 off16;
u16 len;
u8 *data, byte, m;
- int i, res;
+ int i, res, locked = 0;
+ struct inode *inode = tree->inode;
+ atomic_long_t owner;
+ unsigned long curr = (unsigned long)current;
+
+ locked = mutex_trylock(&HFSPLUS_I(inode)->extents_lock);

+ if(!locked){
+ owner = HFSPLUS_I(inode)->extents_lock.owner;
+ if((unsigned long)atomic_long_cmpxchg(&owner, 0, 0) != curr)
+ return ERR_PTR(-MAX_ERRNO);
+ }
res = hfs_bmap_reserve(tree, 1);
+ if (locked)
+ mutex_unlock(&HFSPLUS_I(inode)->extents_lock);
if (res)
return ERR_PTR(res);

diff --git a/fs/hfsplus/catalog.c b/fs/hfsplus/catalog.c
index 1995bafee839..b5cd01bce6ea 100644
--- a/fs/hfsplus/catalog.c
+++ b/fs/hfsplus/catalog.c
@@ -257,7 +257,10 @@ int hfsplus_create_cat(u32 cnid, struct inode *dir,
struct hfs_find_data fd;
hfsplus_cat_entry entry;
int entry_size;
- int err;
+ int err, locked = 0;
+ atomic_long_t owner;
+ struct inode *fd_inode;
+ unsigned long curr = (unsigned long)current;

hfs_dbg(CAT_MOD, "create_cat: %s,%u(%d)\n",
str->name, cnid, inode->i_nlink);
@@ -269,7 +272,17 @@ int hfsplus_create_cat(u32 cnid, struct inode *dir,
* Fail early and avoid ENOSPC during the btree operations. We may
* have to split the root node at most once.
*/
+ fd_inode = fd.tree->inode;
+ locked = mutex_trylock(&HFSPLUS_I(fd_inode)->extents_lock);
+
+ if(!locked){
+ owner = HFSPLUS_I(fd_inode)->extents_lock.owner;
+ if((unsigned long)atomic_long_cmpxchg(&owner, 0, 0) != curr)
+ return -EAGAIN;
+ }
err = hfs_bmap_reserve(fd.tree, 2 * fd.tree->depth);
+ if (locked)
+ mutex_unlock(&HFSPLUS_I(fd_inode)->extents_lock);
if (err)
goto err2;

@@ -333,7 +346,10 @@ int hfsplus_delete_cat(u32 cnid, struct inode *dir, const struct qstr *str)
struct hfs_find_data fd;
struct hfsplus_fork_raw fork;
struct list_head *pos;
- int err, off;
+ struct inode *fd_inode;
+ int err, off, locked = 0;
+ atomic_long_t owner;
+ unsigned long curr = (unsigned long)current;
u16 type;

hfs_dbg(CAT_MOD, "delete_cat: %s,%u\n", str ? str->name : NULL, cnid);
@@ -345,7 +361,17 @@ int hfsplus_delete_cat(u32 cnid, struct inode *dir, const struct qstr *str)
* Fail early and avoid ENOSPC during the btree operations. We may
* have to split the root node at most once.
*/
+ fd_inode = fd.tree->inode;
+ locked = mutex_trylock(&HFSPLUS_I(fd_inode)->extents_lock);
+
+ if(!locked){
+ owner = HFSPLUS_I(fd_inode)->extents_lock.owner;
+ if((unsigned long)atomic_long_cmpxchg(&owner, 0, 0) != curr)
+ return -EAGAIN;
+ }
err = hfs_bmap_reserve(fd.tree, 2 * (int)fd.tree->depth - 2);
+ if (locked)
+ mutex_unlock(&HFSPLUS_I(fd_inode)->extents_lock);
if (err)
goto out;

@@ -439,7 +465,10 @@ int hfsplus_rename_cat(u32 cnid,
struct hfs_find_data src_fd, dst_fd;
hfsplus_cat_entry entry;
int entry_size, type;
- int err;
+ int err, locked = 0;
+ struct inode *fd_inode;
+ atomic_long_t owner;
+ unsigned long curr = (unsigned long)current;

hfs_dbg(CAT_MOD, "rename_cat: %u - %lu,%s - %lu,%s\n",
cnid, src_dir->i_ino, src_name->name,
@@ -453,7 +482,17 @@ int hfsplus_rename_cat(u32 cnid,
* Fail early and avoid ENOSPC during the btree operations. We may
* have to split the root node at most twice.
*/
+ fd_inode = src_fd.tree->inode;
+ locked = mutex_trylock(&HFSPLUS_I(fd_inode)->extents_lock);
+
+ if(!locked){
+ owner = HFSPLUS_I(fd_inode)->extents_lock.owner;
+ if((unsigned long)atomic_long_cmpxchg(&owner, 0, 0) != curr)
+ return -EAGAIN;
+ }
err = hfs_bmap_reserve(src_fd.tree, 4 * (int)src_fd.tree->depth - 1);
+ if (locked)
+ mutex_unlock(&HFSPLUS_I(fd_inode)->extents_lock);
if (err)
goto out;

diff --git a/fs/hfsplus/extents.c b/fs/hfsplus/extents.c
index 3c572e44f2ad..933c4409618a 100644
--- a/fs/hfsplus/extents.c
+++ b/fs/hfsplus/extents.c
@@ -88,7 +88,10 @@ static int __hfsplus_ext_write_extent(struct inode *inode,
struct hfs_find_data *fd)
{
struct hfsplus_inode_info *hip = HFSPLUS_I(inode);
- int res;
+ int res, locked = 0;
+ atomic_long_t owner;
+ unsigned long curr = (unsigned long)current;
+ struct inode *fd_inode;

WARN_ON(!mutex_is_locked(&hip->extents_lock));

@@ -97,6 +100,15 @@ static int __hfsplus_ext_write_extent(struct inode *inode,
HFSPLUS_TYPE_RSRC : HFSPLUS_TYPE_DATA);

res = hfs_brec_find(fd, hfs_find_rec_by_key);
+
+ fd_inode = fd->tree->inode;
+ locked = mutex_trylock(&HFSPLUS_I(fd_inode)->extents_lock);
+
+ if(!locked){
+ owner = HFSPLUS_I(fd_inode)->extents_lock.owner;
+ if((unsigned long)atomic_long_cmpxchg(&owner, 0, 0) != curr)
+ return -EAGAIN;
+ }
if (hip->extent_state & HFSPLUS_EXT_NEW) {
if (res != -ENOENT)
return res;
@@ -115,6 +127,8 @@ static int __hfsplus_ext_write_extent(struct inode *inode,
hip->extent_state &= ~HFSPLUS_EXT_DIRTY;
}

+ if (locked)
+ mutex_unlock(&HFSPLUS_I(fd_inode)->extents_lock);
/*
* We can't just use hfsplus_mark_inode_dirty here, because we
* also get called from hfsplus_write_inode, which should not
@@ -228,36 +242,51 @@ int hfsplus_get_block(struct inode *inode, sector_t iblock,
struct super_block *sb = inode->i_sb;
struct hfsplus_sb_info *sbi = HFSPLUS_SB(sb);
struct hfsplus_inode_info *hip = HFSPLUS_I(inode);
+ unsigned long curr = (unsigned long)current;
int res = -EIO;
u32 ablock, dblock, mask;
+ atomic_long_t owner;
sector_t sector;
- int was_dirty = 0;
+ int was_dirty = 0, locked = 0;

/* Convert inode block to disk allocation block */
ablock = iblock >> sbi->fs_shift;

+ locked = mutex_trylock(&hip->extents_lock);
+ if(!locked){
+ owner = hip->extents_lock.owner;
+ if((unsigned long)atomic_long_cmpxchg(&owner, 0, 0) != curr)
+ return -EAGAIN;
+ }
+
if (iblock >= hip->fs_blocks) {
- if (!create)
- return 0;
- if (iblock > hip->fs_blocks)
- return -EIO;
+ if (!create){
+ res = 0;
+ goto out;
+ }
+ if (iblock > hip->fs_blocks){
+ res = -EIO;
+ goto out;
+ }
if (ablock >= hip->alloc_blocks) {
res = hfsplus_file_extend(inode, false);
if (res)
- return res;
+ goto out;
}
} else
create = 0;

if (ablock < hip->first_blocks) {
dblock = hfsplus_ext_find_block(hip->first_extents, ablock);
+ if (locked)
+ mutex_unlock(&hip->extents_lock);
goto done;
}

- if (inode->i_ino == HFSPLUS_EXT_CNID)
- return -EIO;
-
- mutex_lock(&hip->extents_lock);
+ if (inode->i_ino == HFSPLUS_EXT_CNID){
+ res = -EIO;
+ goto out;
+ }

/*
* hfsplus_ext_read_extent will write out a cached extent into
@@ -267,12 +296,13 @@ int hfsplus_get_block(struct inode *inode, sector_t iblock,
was_dirty = (hip->extent_state & HFSPLUS_EXT_DIRTY);
res = hfsplus_ext_read_extent(inode, ablock);
if (res) {
- mutex_unlock(&hip->extents_lock);
- return -EIO;
+ res = -EIO;
+ goto out;
}
dblock = hfsplus_ext_find_block(hip->cached_extents,
ablock - hip->cached_start);
- mutex_unlock(&hip->extents_lock);
+ if (locked)
+ mutex_unlock(&hip->extents_lock);

done:
hfs_dbg(EXTENT, "get_block(%lu): %llu - %u\n",
@@ -292,6 +322,10 @@ int hfsplus_get_block(struct inode *inode, sector_t iblock,
if (create || was_dirty)
mark_inode_dirty(inode);
return 0;
+out:
+ if (locked)
+ mutex_unlock(&hip->extents_lock);
+ return res;
}

static void hfsplus_dump_extent(struct hfsplus_extent *extent)
@@ -454,7 +488,6 @@ int hfsplus_file_extend(struct inode *inode, bool zeroout)
return -ENOSPC;
}

- mutex_lock(&hip->extents_lock);
if (hip->alloc_blocks == hip->first_blocks)
goal = hfsplus_ext_lastblock(hip->first_extents);
else {
@@ -515,11 +548,9 @@ int hfsplus_file_extend(struct inode *inode, bool zeroout)
out:
if (!res) {
hip->alloc_blocks += len;
- mutex_unlock(&hip->extents_lock);
hfsplus_mark_inode_dirty(inode, HFSPLUS_I_ALLOC_DIRTY);
return 0;
}
- mutex_unlock(&hip->extents_lock);
return res;

insert_extent:
@@ -546,7 +577,9 @@ void hfsplus_file_truncate(struct inode *inode)
struct hfsplus_inode_info *hip = HFSPLUS_I(inode);
struct hfs_find_data fd;
u32 alloc_cnt, blk_cnt, start;
- int res;
+ int res, locked = 0;
+ unsigned long curr = (unsigned long)current;
+ atomic_long_t owner;

hfs_dbg(INODE, "truncate: %lu, %llu -> %llu\n",
inode->i_ino, (long long)hip->phys_size, inode->i_size);
@@ -573,7 +606,12 @@ void hfsplus_file_truncate(struct inode *inode)
blk_cnt = (inode->i_size + HFSPLUS_SB(sb)->alloc_blksz - 1) >>
HFSPLUS_SB(sb)->alloc_blksz_shift;

- mutex_lock(&hip->extents_lock);
+ locked = mutex_trylock(&hip->extents_lock);
+ if(!locked){
+ owner = hip->extents_lock.owner;
+ if((unsigned long)atomic_long_cmpxchg(&owner, 0, 0) != curr)
+ return;
+ }

alloc_cnt = hip->alloc_blocks;
if (blk_cnt == alloc_cnt)
@@ -581,7 +619,8 @@ void hfsplus_file_truncate(struct inode *inode)

res = hfs_find_init(HFSPLUS_SB(sb)->ext_tree, &fd);
if (res) {
- mutex_unlock(&hip->extents_lock);
+ if (locked)
+ mutex_unlock(&hip->extents_lock);
/* XXX: We lack error handling of hfsplus_file_truncate() */
return;
}
@@ -619,7 +658,8 @@ void hfsplus_file_truncate(struct inode *inode)

hip->alloc_blocks = blk_cnt;
out_unlock:
- mutex_unlock(&hip->extents_lock);
+ if (locked)
+ mutex_unlock(&hip->extents_lock);
hip->phys_size = inode->i_size;
hip->fs_blocks = (inode->i_size + sb->s_blocksize - 1) >>
sb->s_blocksize_bits;
diff --git a/fs/hfsplus/xattr.c b/fs/hfsplus/xattr.c
index 9c9ff6b8c6f7..d3f8c0352a24 100644
--- a/fs/hfsplus/xattr.c
+++ b/fs/hfsplus/xattr.c
@@ -130,7 +130,9 @@ static int hfsplus_create_attributes_file(struct super_block *sb)
int index, written;
struct address_space *mapping;
struct page *page;
- int old_state = HFSPLUS_EMPTY_ATTR_TREE;
+ atomic_long_t owner;
+ unsigned long curr = (unsigned long)current;
+ int old_state = HFSPLUS_EMPTY_ATTR_TREE, locked = 0;

hfs_dbg(ATTR_MOD, "create_attr_file: ino %d\n", HFSPLUS_ATTR_CNID);

@@ -181,9 +183,14 @@ static int hfsplus_create_attributes_file(struct super_block *sb)
sbi->sect_count,
HFSPLUS_ATTR_CNID);

- mutex_lock(&hip->extents_lock);
+ locked = mutex_trylock(&hip->extents_lock);
+ if(!locked){
+ owner = hip->extents_lock.owner;
+ if((unsigned long)atomic_long_cmpxchg(&owner, 0, 0) != curr)
+ return -EAGAIN;
+ }
+
hip->clump_blocks = clump_size >> sbi->alloc_blksz_shift;
- mutex_unlock(&hip->extents_lock);

if (sbi->free_blocks <= (hip->clump_blocks << 1)) {
err = -ENOSPC;
@@ -194,6 +201,8 @@ static int hfsplus_create_attributes_file(struct super_block *sb)
err = hfsplus_file_extend(attr_file, false);
if (unlikely(err)) {
pr_err("failed to extend attributes file\n");
+ if(locked)
+ mutex_unlock(&hip->extents_lock);
goto end_attr_file_creation;
}
hip->phys_size = attr_file->i_size =
@@ -201,6 +210,8 @@ static int hfsplus_create_attributes_file(struct super_block *sb)
hip->fs_blocks = hip->alloc_blocks << sbi->fs_shift;
inode_set_bytes(attr_file, attr_file->i_size);
}
+ if (locked)
+ mutex_unlock(&hip->extents_lock);

buf = kzalloc(node_size, GFP_NOFS);
if (!buf) {
--
2.34.1

Date: 2024-04-24 23:39:34
From: syzbot
Subject: Re: [syzbot] [hfs?] possible deadlock in hfsplus_file_extend

Hello,

syzbot has tested the proposed patch and the reproducer did not trigger any issue:

Reported-and-tested-by: [email protected]

Tested on:

commit: e88c4cfc Merge tag 'for-6.9-rc5-tag' of git://git.kern..
git tree: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/ master
console output: https://syzkaller.appspot.com/x/log.txt?x=1533285f180000
kernel config: https://syzkaller.appspot.com/x/.config?x=98d5a8e00ed1044a
dashboard link: https://syzkaller.appspot.com/bug?extid=325b61d3c9a17729454b
compiler: Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40
patch: https://syzkaller.appspot.com/x/patch.diff?x=131f0927180000

Note: testing is done by a robot and is best-effort only.

Date: 2024-04-25 15:39:48
From: Jeongjun Park
Subject: [PATCH] hfsplus: Fix deadlock in hfsplus filesystem

[syz report]
======================================================
WARNING: possible circular locking dependency detected
6.9.0-rc5-syzkaller-00036-g9d1ddab261f3 #0 Not tainted
------------------------------------------------------
syz-executor343/5074 is trying to acquire lock:
ffff8880482187c8 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}, at: hfsplus_file_extend+0x21b/0x1b70 fs/hfsplus/extents.c:457

but task is already holding lock:
ffff88807dadc0b0 (&tree->tree_lock){+.+.}-{3:3}, at: hfsplus_find_init+0x14a/0x1c0

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (&tree->tree_lock){+.+.}-{3:3}:
lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
__mutex_lock_common kernel/locking/mutex.c:608 [inline]
__mutex_lock+0x136/0xd70 kernel/locking/mutex.c:752
hfsplus_file_truncate+0x811/0xb50 fs/hfsplus/extents.c:595
hfsplus_delete_inode+0x174/0x220
hfsplus_unlink+0x512/0x790 fs/hfsplus/dir.c:405
vfs_unlink+0x365/0x600 fs/namei.c:4335
do_unlinkat+0x4ae/0x830 fs/namei.c:4399
__do_sys_unlinkat fs/namei.c:4442 [inline]
__se_sys_unlinkat fs/namei.c:4435 [inline]
__x64_sys_unlinkat+0xce/0xf0 fs/namei.c:4435
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xf5/0x240 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3134 [inline]
check_prevs_add kernel/locking/lockdep.c:3253 [inline]
validate_chain+0x18cb/0x58e0 kernel/locking/lockdep.c:3869
__lock_acquire+0x1346/0x1fd0 kernel/locking/lockdep.c:5137
lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
__mutex_lock_common kernel/locking/mutex.c:608 [inline]
__mutex_lock+0x136/0xd70 kernel/locking/mutex.c:752
hfsplus_file_extend+0x21b/0x1b70 fs/hfsplus/extents.c:457
hfsplus_bmap_reserve+0x105/0x4e0 fs/hfsplus/btree.c:358
hfsplus_rename_cat+0x1d0/0x1050 fs/hfsplus/catalog.c:456
hfsplus_rename+0x12e/0x1c0 fs/hfsplus/dir.c:552
vfs_rename+0xbdb/0xf00 fs/namei.c:4880
do_renameat2+0xd94/0x13f0 fs/namei.c:5037
__do_sys_rename fs/namei.c:5084 [inline]
__se_sys_rename fs/namei.c:5082 [inline]
__x64_sys_rename+0x86/0xa0 fs/namei.c:5082
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0xf5/0x240 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x77/0x7f

other info that might help us debug this:

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&tree->tree_lock);
                               lock(&HFSPLUS_I(inode)->extents_lock);
                               lock(&tree->tree_lock);
  lock(&HFSPLUS_I(inode)->extents_lock);

*** DEADLOCK ***
==================================================

I wrote a patch to eliminate the deadlock that has been reported against
hfsplus for a long time. The patch prevents both the recursive acquisition
of the extents_lock and the ABBA deadlock between the extents_lock and the
tree_lock.
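
Every hfs_bmap_reserve() call site in this patch follows the same pattern:
try to take the btree inode's extents_lock first; if the trylock fails,
proceed only when the current task is already the owner (a recursive
entry), otherwise back off with -EAGAIN. A userspace sketch of that idea,
with a pthread mutex plus an explicit owner field standing in for the
kernel mutex and its ->owner word (the names owned_mutex, try_or_recurse()
and reserve_with_guard() are made up for illustration):

#include <errno.h>
#include <pthread.h>
#include <stdbool.h>

struct owned_mutex {
	pthread_mutex_t m;
	pthread_t owner;	/* only meaningful while held */
	bool held;
};

/* 1: lock taken here, 0: caller already owns it, -EAGAIN: someone else does */
static int try_or_recurse(struct owned_mutex *om)
{
	if (pthread_mutex_trylock(&om->m) == 0) {
		om->owner = pthread_self();
		om->held = true;
		return 1;
	}
	/* like the owner check in the patch, this read is inherently racy */
	if (om->held && pthread_equal(om->owner, pthread_self()))
		return 0;
	return -EAGAIN;
}

static void release(struct owned_mutex *om)
{
	om->held = false;
	pthread_mutex_unlock(&om->m);
}

/* shaped like the guarded reservation sites in the patch */
static int reserve_with_guard(struct owned_mutex *extents_lock)
{
	int locked = try_or_recurse(extents_lock);

	if (locked < 0)
		return locked;			/* -EAGAIN */
	/* ... the hfs_bmap_reserve() equivalent would run here ... */
	if (locked == 1)
		release(extents_lock);
	return 0;
}

int main(void)
{
	struct owned_mutex extents_lock = { .m = PTHREAD_MUTEX_INITIALIZER };

	return reserve_with_guard(&extents_lock);	/* takes and releases it */
}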

Reported-by: [email protected]
Signed-off-by: Jeongjun Park <[email protected]>
---
fs/hfsplus/attributes.c | 33 +++++++++++++++--
fs/hfsplus/btree.c | 14 ++++++-
fs/hfsplus/catalog.c | 45 ++++++++++++++++++++--
fs/hfsplus/extents.c | 82 ++++++++++++++++++++++++++++++-----------
fs/hfsplus/xattr.c | 17 +++++++--
5 files changed, 159 insertions(+), 32 deletions(-)

diff --git a/fs/hfsplus/attributes.c b/fs/hfsplus/attributes.c
index eeebe80c6be4..af142e458ac2 100644
--- a/fs/hfsplus/attributes.c
+++ b/fs/hfsplus/attributes.c
@@ -198,8 +198,11 @@ int hfsplus_create_attr(struct inode *inode,
struct super_block *sb = inode->i_sb;
struct hfs_find_data fd;
hfsplus_attr_entry *entry_ptr;
+ atomic_long_t owner;
+ unsigned long curr = (unsigned long)current;
+ struct inode *fd_inode;
int entry_size;
- int err;
+ int err, locked = 0;

hfs_dbg(ATTR_MOD, "create_attr: %s,%ld\n",
name ? name : NULL, inode->i_ino);
@@ -216,9 +219,19 @@ int hfsplus_create_attr(struct inode *inode,
err = hfs_find_init(HFSPLUS_SB(sb)->attr_tree, &fd);
if (err)
goto failed_init_create_attr;
-
+
+ fd_inode = fd.tree->inode;
+ locked = mutex_trylock(&HFSPLUS_I(fd_inode)->extents_lock);
+
+ if(!locked){
+ owner = HFSPLUS_I(fd_inode)->extents_lock.owner;
+ if((unsigned long)atomic_long_cmpxchg(&owner, 0, 0) != curr)
+ return -EAGAIN;
+ }
/* Fail early and avoid ENOSPC during the btree operation */
err = hfs_bmap_reserve(fd.tree, fd.tree->depth + 1);
+ if(locked)
+ mutex_unlock(&HFSPLUS_I(fd_inode)->extents_lock);
if (err)
goto failed_create_attr;

@@ -306,7 +319,10 @@ static int __hfsplus_delete_attr(struct inode *inode, u32 cnid,

int hfsplus_delete_attr(struct inode *inode, const char *name)
{
- int err = 0;
+ int err = 0, locked = 0;
+ atomic_long_t owner;
+ unsigned long curr = (unsigned long)current;
+ struct inode *fd_inode;
struct super_block *sb = inode->i_sb;
struct hfs_find_data fd;

@@ -321,9 +337,18 @@ int hfsplus_delete_attr(struct inode *inode, const char *name)
err = hfs_find_init(HFSPLUS_SB(sb)->attr_tree, &fd);
if (err)
return err;
-
+
+ fd_inode = fd.tree->inode;
+ locked = mutex_trylock(&HFSPLUS_I(fd_inode)->extents_lock);
+ if(!locked){
+ owner = HFSPLUS_I(fd_inode)->extents_lock.owner;
+ if((unsigned long)atomic_long_cmpxchg(&owner, 0, 0) != curr)
+ return -EAGAIN;
+ }
/* Fail early and avoid ENOSPC during the btree operation */
err = hfs_bmap_reserve(fd.tree, fd.tree->depth);
+ if(locked)
+ mutex_unlock(&HFSPLUS_I(fd_inode)->extents_lock);
if (err)
goto out;

diff --git a/fs/hfsplus/btree.c b/fs/hfsplus/btree.c
index 9e1732a2b92a..aea695c4cfb8 100644
--- a/fs/hfsplus/btree.c
+++ b/fs/hfsplus/btree.c
@@ -380,9 +380,21 @@ struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree)
u16 off16;
u16 len;
u8 *data, byte, m;
- int i, res;
+ int i, res, locked = 0;
+ struct inode *inode = tree->inode;
+ atomic_long_t owner;
+ unsigned long curr = (unsigned long)current;
+
+ locked = mutex_trylock(&HFSPLUS_I(inode)->extents_lock);

+ if(!locked){
+ owner = HFSPLUS_I(inode)->extents_lock.owner;
+ if((unsigned long)atomic_long_cmpxchg(&owner, 0, 0) != curr)
+ return ERR_PTR(-MAX_ERRNO);
+ }
res = hfs_bmap_reserve(tree, 1);
+ if (locked)
+ mutex_unlock(&HFSPLUS_I(inode)->extents_lock);
if (res)
return ERR_PTR(res);

diff --git a/fs/hfsplus/catalog.c b/fs/hfsplus/catalog.c
index 1995bafee839..b5cd01bce6ea 100644
--- a/fs/hfsplus/catalog.c
+++ b/fs/hfsplus/catalog.c
@@ -257,7 +257,10 @@ int hfsplus_create_cat(u32 cnid, struct inode *dir,
struct hfs_find_data fd;
hfsplus_cat_entry entry;
int entry_size;
- int err;
+ int err, locked = 0;
+ atomic_long_t owner;
+ struct inode *fd_inode;
+ unsigned long curr = (unsigned long)current;

hfs_dbg(CAT_MOD, "create_cat: %s,%u(%d)\n",
str->name, cnid, inode->i_nlink);
@@ -269,7 +272,17 @@ int hfsplus_create_cat(u32 cnid, struct inode *dir,
* Fail early and avoid ENOSPC during the btree operations. We may
* have to split the root node at most once.
*/
+ fd_inode = fd.tree->inode;
+ locked = mutex_trylock(&HFSPLUS_I(fd_inode)->extents_lock);
+
+ if(!locked){
+ owner = HFSPLUS_I(fd_inode)->extents_lock.owner;
+ if((unsigned long)atomic_long_cmpxchg(&owner, 0, 0) != curr)
+ return -EAGAIN;
+ }
err = hfs_bmap_reserve(fd.tree, 2 * fd.tree->depth);
+ if (locked)
+ mutex_unlock(&HFSPLUS_I(fd_inode)->extents_lock);
if (err)
goto err2;

@@ -333,7 +346,10 @@ int hfsplus_delete_cat(u32 cnid, struct inode *dir, const struct qstr *str)
struct hfs_find_data fd;
struct hfsplus_fork_raw fork;
struct list_head *pos;
- int err, off;
+ struct inode *fd_inode;
+ int err, off, locked = 0;
+ atomic_long_t owner;
+ unsigned long curr = (unsigned long)current;
u16 type;

hfs_dbg(CAT_MOD, "delete_cat: %s,%u\n", str ? str->name : NULL, cnid);
@@ -345,7 +361,17 @@ int hfsplus_delete_cat(u32 cnid, struct inode *dir, const struct qstr *str)
* Fail early and avoid ENOSPC during the btree operations. We may
* have to split the root node at most once.
*/
+ fd_inode = fd.tree->inode;
+ locked = mutex_trylock(&HFSPLUS_I(fd_inode)->extents_lock);
+
+ if(!locked){
+ owner = HFSPLUS_I(fd_inode)->extents_lock.owner;
+ if((unsigned long)atomic_long_cmpxchg(&owner, 0, 0) != curr)
+ return -EAGAIN;
+ }
err = hfs_bmap_reserve(fd.tree, 2 * (int)fd.tree->depth - 2);
+ if (locked)
+ mutex_unlock(&HFSPLUS_I(fd_inode)->extents_lock);
if (err)
goto out;

@@ -439,7 +465,10 @@ int hfsplus_rename_cat(u32 cnid,
struct hfs_find_data src_fd, dst_fd;
hfsplus_cat_entry entry;
int entry_size, type;
- int err;
+ int err, locked = 0;
+ struct inode *fd_inode;
+ atomic_long_t owner;
+ unsigned long curr = (unsigned long)current;

hfs_dbg(CAT_MOD, "rename_cat: %u - %lu,%s - %lu,%s\n",
cnid, src_dir->i_ino, src_name->name,
@@ -453,7 +482,17 @@ int hfsplus_rename_cat(u32 cnid,
* Fail early and avoid ENOSPC during the btree operations. We may
* have to split the root node at most twice.
*/
+ fd_inode = src_fd.tree->inode;
+ locked = mutex_trylock(&HFSPLUS_I(fd_inode)->extents_lock);
+
+ if(!locked){
+ owner = HFSPLUS_I(fd_inode)->extents_lock.owner;
+ if((unsigned long)atomic_long_cmpxchg(&owner, 0, 0) != curr)
+ return -EAGAIN;
+ }
err = hfs_bmap_reserve(src_fd.tree, 4 * (int)src_fd.tree->depth - 1);
+ if (locked)
+ mutex_unlock(&HFSPLUS_I(fd_inode)->extents_lock);
if (err)
goto out;

diff --git a/fs/hfsplus/extents.c b/fs/hfsplus/extents.c
index 3c572e44f2ad..933c4409618a 100644
--- a/fs/hfsplus/extents.c
+++ b/fs/hfsplus/extents.c
@@ -88,7 +88,10 @@ static int __hfsplus_ext_write_extent(struct inode *inode,
struct hfs_find_data *fd)
{
struct hfsplus_inode_info *hip = HFSPLUS_I(inode);
- int res;
+ int res, locked = 0;
+ atomic_long_t owner;
+ unsigned long curr = (unsigned long)current;
+ struct inode *fd_inode;

WARN_ON(!mutex_is_locked(&hip->extents_lock));

@@ -97,6 +100,15 @@ static int __hfsplus_ext_write_extent(struct inode *inode,
HFSPLUS_TYPE_RSRC : HFSPLUS_TYPE_DATA);

res = hfs_brec_find(fd, hfs_find_rec_by_key);
+
+ fd_inode = fd->tree->inode;
+ locked = mutex_trylock(&HFSPLUS_I(fd_inode)->extents_lock);
+
+ if(!locked){
+ owner = HFSPLUS_I(fd_inode)->extents_lock.owner;
+ if((unsigned long)atomic_long_cmpxchg(&owner, 0, 0) != curr)
+ return -EAGAIN;
+ }
if (hip->extent_state & HFSPLUS_EXT_NEW) {
if (res != -ENOENT)
return res;
@@ -115,6 +127,8 @@ static int __hfsplus_ext_write_extent(struct inode *inode,
hip->extent_state &= ~HFSPLUS_EXT_DIRTY;
}

+ if (locked)
+ mutex_unlock(&HFSPLUS_I(fd_inode)->extents_lock);
/*
* We can't just use hfsplus_mark_inode_dirty here, because we
* also get called from hfsplus_write_inode, which should not
@@ -228,36 +242,51 @@ int hfsplus_get_block(struct inode *inode, sector_t iblock,
struct super_block *sb = inode->i_sb;
struct hfsplus_sb_info *sbi = HFSPLUS_SB(sb);
struct hfsplus_inode_info *hip = HFSPLUS_I(inode);
+ unsigned long curr = (unsigned long)current;
int res = -EIO;
u32 ablock, dblock, mask;
+ atomic_long_t owner;
sector_t sector;
- int was_dirty = 0;
+ int was_dirty = 0, locked = 0;

/* Convert inode block to disk allocation block */
ablock = iblock >> sbi->fs_shift;

+ locked = mutex_trylock(&hip->extents_lock);
+ if(!locked){
+ owner = hip->extents_lock.owner;
+ if((unsigned long)atomic_long_cmpxchg(&owner, 0, 0) != curr)
+ return -EAGAIN;
+ }
+
if (iblock >= hip->fs_blocks) {
- if (!create)
- return 0;
- if (iblock > hip->fs_blocks)
- return -EIO;
+ if (!create){
+ res = 0;
+ goto out;
+ }
+ if (iblock > hip->fs_blocks){
+ res = -EIO;
+ goto out;
+ }
if (ablock >= hip->alloc_blocks) {
res = hfsplus_file_extend(inode, false);
if (res)
- return res;
+ goto out;
}
} else
create = 0;

if (ablock < hip->first_blocks) {
dblock = hfsplus_ext_find_block(hip->first_extents, ablock);
+ if (locked)
+ mutex_unlock(&hip->extents_lock);
goto done;
}

- if (inode->i_ino == HFSPLUS_EXT_CNID)
- return -EIO;
-
- mutex_lock(&hip->extents_lock);
+ if (inode->i_ino == HFSPLUS_EXT_CNID){
+ res = -EIO;
+ goto out;
+ }

/*
* hfsplus_ext_read_extent will write out a cached extent into
@@ -267,12 +296,13 @@ int hfsplus_get_block(struct inode *inode, sector_t iblock,
was_dirty = (hip->extent_state & HFSPLUS_EXT_DIRTY);
res = hfsplus_ext_read_extent(inode, ablock);
if (res) {
- mutex_unlock(&hip->extents_lock);
- return -EIO;
+ res = -EIO;
+ goto out;
}
dblock = hfsplus_ext_find_block(hip->cached_extents,
ablock - hip->cached_start);
- mutex_unlock(&hip->extents_lock);
+ if (locked)
+ mutex_unlock(&hip->extents_lock);

done:
hfs_dbg(EXTENT, "get_block(%lu): %llu - %u\n",
@@ -292,6 +322,10 @@ int hfsplus_get_block(struct inode *inode, sector_t iblock,
if (create || was_dirty)
mark_inode_dirty(inode);
return 0;
+out:
+ if (locked)
+ mutex_unlock(&hip->extents_lock);
+ return res;
}

static void hfsplus_dump_extent(struct hfsplus_extent *extent)
@@ -454,7 +488,6 @@ int hfsplus_file_extend(struct inode *inode, bool zeroout)
return -ENOSPC;
}

- mutex_lock(&hip->extents_lock);
if (hip->alloc_blocks == hip->first_blocks)
goal = hfsplus_ext_lastblock(hip->first_extents);
else {
@@ -515,11 +548,9 @@ int hfsplus_file_extend(struct inode *inode, bool zeroout)
out:
if (!res) {
hip->alloc_blocks += len;
- mutex_unlock(&hip->extents_lock);
hfsplus_mark_inode_dirty(inode, HFSPLUS_I_ALLOC_DIRTY);
return 0;
}
- mutex_unlock(&hip->extents_lock);
return res;

insert_extent:
@@ -546,7 +577,9 @@ void hfsplus_file_truncate(struct inode *inode)
struct hfsplus_inode_info *hip = HFSPLUS_I(inode);
struct hfs_find_data fd;
u32 alloc_cnt, blk_cnt, start;
- int res;
+ int res, locked = 0;
+ unsigned long curr = (unsigned long)current;
+ atomic_long_t owner;

hfs_dbg(INODE, "truncate: %lu, %llu -> %llu\n",
inode->i_ino, (long long)hip->phys_size, inode->i_size);
@@ -573,7 +606,12 @@ void hfsplus_file_truncate(struct inode *inode)
blk_cnt = (inode->i_size + HFSPLUS_SB(sb)->alloc_blksz - 1) >>
HFSPLUS_SB(sb)->alloc_blksz_shift;

- mutex_lock(&hip->extents_lock);
+ locked = mutex_trylock(&hip->extents_lock);
+ if(!locked){
+ owner = hip->extents_lock.owner;
+ if((unsigned long)atomic_long_cmpxchg(&owner, 0, 0) != curr)
+ return;
+ }

alloc_cnt = hip->alloc_blocks;
if (blk_cnt == alloc_cnt)
@@ -581,7 +619,8 @@ void hfsplus_file_truncate(struct inode *inode)

res = hfs_find_init(HFSPLUS_SB(sb)->ext_tree, &fd);
if (res) {
- mutex_unlock(&hip->extents_lock);
+ if (locked)
+ mutex_unlock(&hip->extents_lock);
/* XXX: We lack error handling of hfsplus_file_truncate() */
return;
}
@@ -619,7 +658,8 @@ void hfsplus_file_truncate(struct inode *inode)

hip->alloc_blocks = blk_cnt;
out_unlock:
- mutex_unlock(&hip->extents_lock);
+ if (locked)
+ mutex_unlock(&hip->extents_lock);
hip->phys_size = inode->i_size;
hip->fs_blocks = (inode->i_size + sb->s_blocksize - 1) >>
sb->s_blocksize_bits;
diff --git a/fs/hfsplus/xattr.c b/fs/hfsplus/xattr.c
index 9c9ff6b8c6f7..d3f8c0352a24 100644
--- a/fs/hfsplus/xattr.c
+++ b/fs/hfsplus/xattr.c
@@ -130,7 +130,9 @@ static int hfsplus_create_attributes_file(struct super_block *sb)
int index, written;
struct address_space *mapping;
struct page *page;
- int old_state = HFSPLUS_EMPTY_ATTR_TREE;
+ atomic_long_t owner;
+ unsigned long curr = (unsigned long)current;
+ int old_state = HFSPLUS_EMPTY_ATTR_TREE, locked = 0;

hfs_dbg(ATTR_MOD, "create_attr_file: ino %d\n", HFSPLUS_ATTR_CNID);

@@ -181,9 +183,14 @@ static int hfsplus_create_attributes_file(struct super_block *sb)
sbi->sect_count,
HFSPLUS_ATTR_CNID);

- mutex_lock(&hip->extents_lock);
+ locked = mutex_trylock(&hip->extents_lock);
+ if(!locked){
+ owner = hip->extents_lock.owner;
+ if((unsigned long)atomic_long_cmpxchg(&owner, 0, 0) != curr)
+ return -EAGAIN;
+ }
+
hip->clump_blocks = clump_size >> sbi->alloc_blksz_shift;
- mutex_unlock(&hip->extents_lock);

if (sbi->free_blocks <= (hip->clump_blocks << 1)) {
err = -ENOSPC;
@@ -194,6 +201,8 @@ static int hfsplus_create_attributes_file(struct super_block *sb)
err = hfsplus_file_extend(attr_file, false);
if (unlikely(err)) {
pr_err("failed to extend attributes file\n");
+ if(locked)
+ mutex_unlock(&hip->extents_lock);
goto end_attr_file_creation;
}
hip->phys_size = attr_file->i_size =
@@ -201,6 +210,8 @@ static int hfsplus_create_attributes_file(struct super_block *sb)
hip->fs_blocks = hip->alloc_blocks << sbi->fs_shift;
inode_set_bytes(attr_file, attr_file->i_size);
}
+ if (locked)
+ mutex_unlock(&hip->extents_lock);

buf = kzalloc(node_size, GFP_NOFS);
if (!buf) {
--
2.34.1