Date: Tue, 15 Dec 2015 14:01:28 -0800
From: Jaegeuk Kim
To: Chao Yu
Cc: linux-f2fs-devel@lists.sourceforge.net, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 8/8] f2fs: fix to avoid deadlock between checkpoint and writepages
Message-ID: <20151215220128.GB66113@jaegeuk.local>
References: <00fb01d136fa$97e29d70$c7a7d850$@samsung.com>
In-Reply-To: <00fb01d136fa$97e29d70$c7a7d850$@samsung.com>

Hi Chao,

On Tue, Dec 15, 2015 at 01:36:08PM +0800, Chao Yu wrote:
> This patch fixes to move f2fs_balance_fs out of sbi->writepages'
> coverage to avoid potential ABBA deadlock which was found by lockdep:
>
>  Possible unsafe locking scenario:
>
>        CPU0                    CPU1
>        ----                    ----
>   lock(&sbi->writepages);
>                                lock(&sbi->cp_mutex);
>                                lock(&sbi->writepages);
>   lock(&sbi->cp_mutex);
>
>  *** DEADLOCK ***

I expect it will be fine if syncing is done by f2fs_balance_fs_bg().
Thanks,

>  stack of CPU0:
> [] __lock_acquire+0x1321/0x1770
> [] lock_acquire+0xb7/0x130
> [] mutex_lock_nested+0x52/0x380
> [] f2fs_balance_fs+0x8b/0xa0 [f2fs]
> [] f2fs_write_data_page+0x33b/0x460 [f2fs]
> [] __f2fs_writepage+0x1a/0x50 [f2fs]
> [] T.1541+0x293/0x560 [f2fs]
> [] f2fs_write_data_pages+0x12c/0x230 [f2fs]
> [] do_writepages+0x23/0x40
> [] __filemap_fdatawrite_range+0xb5/0xf0
> [] filemap_write_and_wait_range+0xa3/0xd0
> [] f2fs_symlink+0x180/0x300 [f2fs]
> [] vfs_symlink+0xb7/0xe0
> [] SyS_symlinkat+0xc5/0x100
> [] SyS_symlink+0x16/0x20
> [] entry_SYSCALL_64_fastpath+0x12/0x6f
>
>  stack of CPU1:
> [] lock_acquire+0xb7/0x130
> [] mutex_lock_nested+0x52/0x380
> [] f2fs_write_data_pages+0x11e/0x230 [f2fs]
> [] do_writepages+0x23/0x40
> [] __filemap_fdatawrite_range+0xb5/0xf0
> [] filemap_fdatawrite+0x1f/0x30
> [] sync_dirty_inodes+0x4d/0xd0 [f2fs]
> [] block_operations+0x71/0x160 [f2fs]
> [] write_checkpoint+0xe8/0xbb0 [f2fs]
> [] f2fs_sync_fs+0x8f/0xf0 [f2fs]
> [] f2fs_balance_fs_bg+0x6f/0xd0 [f2fs]
> [] f2fs_write_node_pages+0x57/0x150 [f2fs]
> [] do_writepages+0x23/0x40
> [] __writeback_single_inode+0x6d/0x3d0
> [] writeback_sb_inodes+0x2c7/0x520
> [] wb_writeback+0x133/0x330
> [] wb_do_writeback+0xe8/0x270
> [] wb_workfn+0x80/0x1f0
> [] process_one_work+0x20c/0x5c0
> [] worker_thread+0x132/0x5f0
> [] kthread+0xde/0x100
> [] ret_from_fork+0x3f/0x70
>
> Signed-off-by: Chao Yu
> ---
>  fs/f2fs/data.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
> index 2e97057..985671d 100644
> --- a/fs/f2fs/data.c
> +++ b/fs/f2fs/data.c
> @@ -506,7 +506,6 @@ static void __allocate_data_blocks(struct inode *inode, loff_t offset,
>  	u64 end_offset;
>
>  	while (len) {
> -		f2fs_balance_fs(sbi);
>  		f2fs_lock_op(sbi);
>
>  		/* When reading holes, we need its node page */
> @@ -1186,7 +1185,7 @@ out:
>  	if (err)
>  		ClearPageUptodate(page);
>  	unlock_page(page);
> -	if (need_balance_fs)
> +	if (need_balance_fs && !test_opt(sbi, DATA_FLUSH))
>  		f2fs_balance_fs(sbi);
>  	if (wbc->for_reclaim) {
>  		f2fs_submit_merged_bio(sbi, DATA, WRITE);
> @@ -1617,6 +1616,8 @@ static ssize_t f2fs_direct_IO(struct kiocb *iocb, struct iov_iter *iter,
>  	trace_f2fs_direct_IO_enter(inode, offset, count, rw);
>
>  	if (rw == WRITE) {
> +		f2fs_balance_fs(sbi);
> +
>  		if (serialized)
> 			mutex_lock(&sbi->writepages);
>  		__allocate_data_blocks(inode, offset, count);
> --
> 2.6.3