Date: Mon, 11 May 2020 20:24:39 -0700
From: Jaegeuk Kim
To: Chao Yu
Cc: Sayali Lokhande, linux-f2fs-devel@lists.sourceforge.net,
    stummala@codeaurora.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH V4] f2fs: Avoid double lock for cp_rwsem during checkpoint
Message-ID: <20200512032439.GA216359@google.com>
References: <1588244309-1468-1-git-send-email-sayalil@codeaurora.org>
 <20200508161052.GA49579@google.com>
 <0902037e-998d-812e-53e7-90ea7b9957eb@huawei.com>
 <20200509190342.GA11239@google.com>
 <20200511221100.GA171700@google.com>
 <34a9cdcd-0e3d-8d2a-6b19-6fced3a3aa68@huawei.com>
In-Reply-To: <34a9cdcd-0e3d-8d2a-6b19-6fced3a3aa68@huawei.com>

On 05/12, Chao Yu wrote:
> On 2020/5/12 6:11, Jaegeuk Kim wrote:
> > On 05/11, Chao Yu wrote:
> >> On 2020/5/10 3:03, Jaegeuk Kim wrote:
> >>> On 05/09, Chao Yu wrote:
> >>>> On 2020/5/9 0:10, Jaegeuk Kim wrote:
> >>>>> Hi Sayali,
> >>>>>
> >>>>> In order to address the perf regression, how about this?
> >>>>>
> >>>>> From 48418af635884803ffb35972df7958a2e6649322 Mon Sep 17 00:00:00 2001
> >>>>> From: Jaegeuk Kim
> >>>>> Date: Fri, 8 May 2020 09:08:37 -0700
> >>>>> Subject: [PATCH] f2fs: avoid double lock for cp_rwsem during checkpoint
> >>>>>
> >>>>> There could be a scenario where f2fs_sync_node_pages gets
> >>>>> called during checkpoint, which in turn tries to flush
> >>>>> inline data and calls iput(). This results in deadlock as
> >>>>> iput() tries to hold cp_rwsem, which is already held at the
> >>>>> beginning by checkpoint->block_operations().
> >>>>>
> >>>>> Call stack:
> >>>>>
> >>>>> Thread A				Thread B
> >>>>> f2fs_write_checkpoint()
> >>>>> - block_operations(sbi)
> >>>>>  - f2fs_lock_all(sbi);
> >>>>>   - down_write(&sbi->cp_rwsem);
> >>>>>
> >>>>> 					- open()
> >>>>> 					 - igrab()
> >>>>> 					- write() write inline data
> >>>>> 					- unlink()
> >>>>> - f2fs_sync_node_pages()
> >>>>>  - if (is_inline_node(page))
> >>>>>   - flush_inline_data()
> >>>>>    - ilookup()
> >>>>>      page = f2fs_pagecache_get_page()
> >>>>>      if (!page)
> >>>>>       goto iput_out;
> >>>>>      iput_out:
> >>>>> 					- close()
> >>>>> 					- iput()
> >>>>>      iput(inode);
> >>>>>       - f2fs_evict_inode()
> >>>>>        - f2fs_truncate_blocks()
> >>>>>         - f2fs_lock_op()
> >>>>>          - down_read(&sbi->cp_rwsem);
> >>>>>
> >>>>> Fixes: 2049d4fcb057 ("f2fs: avoid multiple node page writes due to inline_data")
> >>>>> Signed-off-by: Sayali Lokhande
> >>>>> Signed-off-by: Jaegeuk Kim
> >>>>> ---
> >>>>>  fs/f2fs/node.c | 4 ++--
> >>>>>  1 file changed, 2 insertions(+), 2 deletions(-)
> >>>>>
> >>>>> diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
> >>>>> index 1db8cabf727ef..626d7daca09de 100644
> >>>>> --- a/fs/f2fs/node.c
> >>>>> +++ b/fs/f2fs/node.c
> >>>>> @@ -1870,8 +1870,8 @@ int f2fs_sync_node_pages(struct f2fs_sb_info *sbi,
> >>>>>  			goto continue_unlock;
> >>>>>  		}
> >>>>>
> >>>>> -		/* flush inline_data */
> >>>>> -		if (is_inline_node(page)) {
> >>>>> +		/* flush inline_data, if it's not sync path. */
> >>>>> +		if (do_balance && is_inline_node(page)) {
> >>>>
> >>>> IIRC, this flow was designed to avoid running out of free space
> >>>> during checkpoint:
> >>>>
> >>>> 2049d4fcb057 ("f2fs: avoid multiple node page writes due to inline_data")
> >>>>
> >>>> The scenario is:
> >>>> 1. create fully node blocks
> >>>> 2. flush node blocks
> >>>> 3. write inline_data for all the node blocks again
> >>>> 4. flush node blocks redundantly
> >>>>
> >>>> I guess this may cause failing one case of fstest.
> >>>
> >>> Yeah, actually I was hitting a 204 failure, and thus revised it like this.
> >>> Now, I don't see any regression in fstest.
> >>>
> >>> From 8f1882acfb0a5fc43e5a2bbd576a8f3c681a7d2c Mon Sep 17 00:00:00 2001
> >>> From: Sayali Lokhande
> >>> Date: Thu, 30 Apr 2020 16:28:29 +0530
> >>> Subject: [PATCH] f2fs: Avoid double lock for cp_rwsem during checkpoint
> >>>
> >>> There could be a scenario where f2fs_sync_node_pages gets
> >>> called during checkpoint, which in turn tries to flush
> >>> inline data and calls iput(). This results in deadlock as
> >>> iput() tries to hold cp_rwsem, which is already held at the
> >>> beginning by checkpoint->block_operations().
> >>>
> >>> Call stack:
> >>>
> >>> Thread A				Thread B
> >>> f2fs_write_checkpoint()
> >>> - block_operations(sbi)
> >>>  - f2fs_lock_all(sbi);
> >>>   - down_write(&sbi->cp_rwsem);
> >>>
> >>> 					- open()
> >>> 					 - igrab()
> >>> 					- write() write inline data
> >>> 					- unlink()
> >>> - f2fs_sync_node_pages()
> >>>  - if (is_inline_node(page))
> >>>   - flush_inline_data()
> >>>    - ilookup()
> >>>      page = f2fs_pagecache_get_page()
> >>>      if (!page)
> >>>       goto iput_out;
> >>>      iput_out:
> >>> 					- close()
> >>> 					- iput()
> >>>      iput(inode);
> >>>       - f2fs_evict_inode()
> >>>        - f2fs_truncate_blocks()
> >>>         - f2fs_lock_op()
> >>>          - down_read(&sbi->cp_rwsem);
> >>>
> >>> Fixes: 2049d4fcb057 ("f2fs: avoid multiple node page writes due to inline_data")
> >>> Signed-off-by: Sayali Lokhande
> >>> Signed-off-by: Jaegeuk Kim
> >>> ---
> >>>  fs/f2fs/checkpoint.c |  9 ++++++++-
> >>>  fs/f2fs/f2fs.h       |  4 ++--
> >>>  fs/f2fs/node.c       | 10 +++++-----
> >>>  3 files changed, 15 insertions(+), 8 deletions(-)
> >>>
> >>> diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
> >>> index d49f7a01d8a26..928aea4ff663d 100644
> >>> --- a/fs/f2fs/checkpoint.c
> >>> +++ b/fs/f2fs/checkpoint.c
> >>> @@ -1168,6 +1168,12 @@ static int block_operations(struct f2fs_sb_info *sbi)
> >>>  	};
> >>>  	int err = 0, cnt = 0;
> >>>
> >>> +	/*
> >>> +	 * Let's flush node pages first to flush inline_data.
> >>> +	 * We'll actually guarantee everything below under f2fs_lock_all.
> >>> +	 */
> >>> +	f2fs_sync_node_pages(sbi, &wbc, false, false, FS_CP_NODE_IO);
> >>
> >> It is possible that the user writes a large amount of inline data in between
> >> f2fs_sync_node_pages() and f2fs_lock_all(), which will cause the no-space
> >> issue in a race condition.
> >>
> >> Also, if there is a huge number of F2FS_DIRTY_IMETA, after this change we
> >> will flush the inode page twice, which is unneeded:
> >>
> >> f2fs_sync_node_pages()	--- flush dirty inode page
> >> f2fs_lock_all()
> >> ...
> >> f2fs_sync_inode_meta()	--- update dirty inode page
> >> f2fs_sync_node_pages()	--- flush dirty inode page again
> >
> > Another version:
> >
> > From 6b430b72af57c65c20dea7b87f7ba7e9df36be98 Mon Sep 17 00:00:00 2001
> > From: Sayali Lokhande
> > Date: Thu, 30 Apr 2020 16:28:29 +0530
> > Subject: [PATCH] f2fs: Avoid double lock for cp_rwsem during checkpoint
> >
> > There could be a scenario where f2fs_sync_node_pages gets
> > called during checkpoint, which in turn tries to flush
> > inline data and calls iput(). This results in deadlock as
> > iput() tries to hold cp_rwsem, which is already held at the
> > beginning by checkpoint->block_operations().
> >
> > Call stack:
> >
> > Thread A				Thread B
> > f2fs_write_checkpoint()
> > - block_operations(sbi)
> >  - f2fs_lock_all(sbi);
> >   - down_write(&sbi->cp_rwsem);
> >
> > 					- open()
> > 					 - igrab()
> > 					- write() write inline data
> > 					- unlink()
> > - f2fs_sync_node_pages()
> >  - if (is_inline_node(page))
> >   - flush_inline_data()
> >    - ilookup()
> >      page = f2fs_pagecache_get_page()
> >      if (!page)
> >       goto iput_out;
> >      iput_out:
> > 					- close()
> > 					- iput()
> >      iput(inode);
> >       - f2fs_evict_inode()
> >        - f2fs_truncate_blocks()
> >         - f2fs_lock_op()
> >          - down_read(&sbi->cp_rwsem);
> >
> > Fixes: 2049d4fcb057 ("f2fs: avoid multiple node page writes due to inline_data")
> > Signed-off-by: Sayali Lokhande
> > Signed-off-by: Jaegeuk Kim
> > ---
> >  fs/f2fs/checkpoint.c |  5 +++++
> >  fs/f2fs/f2fs.h       |  1 +
> >  fs/f2fs/node.c       | 51 ++++++++++++++++++++++++++++++++++++++++++--
> >  3 files changed, 55 insertions(+), 2 deletions(-)
> >
> > diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
> > index d49f7a01d8a26..79e605f38f4fa 100644
> > --- a/fs/f2fs/checkpoint.c
> > +++ b/fs/f2fs/checkpoint.c
> > @@ -1168,6 +1168,11 @@ static int block_operations(struct f2fs_sb_info *sbi)
> >  	};
> >  	int err = 0, cnt = 0;
> >
> > +	/*
> > +	 * Let's flush inline_data in dirty node pages.
> > +	 */
> > +	f2fs_flush_inline_data(sbi);
>
> Still there is a gap: the user can write a large number of inline data here...

I think generic/204 is the case, and I don't hit a panic with this patch.

> Would that be enough? I doubt we can suffer this issue in the below paths
> as well:

I don't think so, since iput is called after f2fs_unlock_all().
>
> - block_operations
>  - f2fs_sync_dirty_inodes
>   - iput
>  - f2fs_sync_inode_meta
>   - iput
>
> Thanks,

> > +
> >  retry_flush_quotas:
> >  	f2fs_lock_all(sbi);
> >  	if (__need_flush_quota(sbi)) {
> >
> > diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
> > index 2a8ea81c52a15..7f3d259e7e376 100644
> > --- a/fs/f2fs/f2fs.h
> > +++ b/fs/f2fs/f2fs.h
> > @@ -3282,6 +3282,7 @@ void f2fs_ra_node_page(struct f2fs_sb_info *sbi, nid_t nid);
> >  struct page *f2fs_get_node_page(struct f2fs_sb_info *sbi, pgoff_t nid);
> >  struct page *f2fs_get_node_page_ra(struct page *parent, int start);
> >  int f2fs_move_node_page(struct page *node_page, int gc_type);
> > +int f2fs_flush_inline_data(struct f2fs_sb_info *sbi);
> >  int f2fs_fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode,
> >  			struct writeback_control *wbc, bool atomic,
> >  			unsigned int *seq_id);
> >
> > diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
> > index 1db8cabf727ef..e632de10aedab 100644
> > --- a/fs/f2fs/node.c
> > +++ b/fs/f2fs/node.c
> > @@ -1807,6 +1807,53 @@ static bool flush_dirty_inode(struct page *page)
> >  	return true;
> >  }
> >
> > +int f2fs_flush_inline_data(struct f2fs_sb_info *sbi)
> > +{
> > +	pgoff_t index = 0;
> > +	struct pagevec pvec;
> > +	int nr_pages;
> > +	int ret = 0;
> > +
> > +	pagevec_init(&pvec);
> > +
> > +	while ((nr_pages = pagevec_lookup_tag(&pvec,
> > +			NODE_MAPPING(sbi), &index, PAGECACHE_TAG_DIRTY))) {
> > +		int i;
> > +
> > +		for (i = 0; i < nr_pages; i++) {
> > +			struct page *page = pvec.pages[i];
> > +
> > +			if (!IS_DNODE(page))
> > +				continue;
> > +
> > +			lock_page(page);
> > +
> > +			if (unlikely(page->mapping != NODE_MAPPING(sbi))) {
> > +continue_unlock:
> > +				unlock_page(page);
> > +				continue;
> > +			}
> > +
> > +			if (!PageDirty(page)) {
> > +				/* someone wrote it for us */
> > +				goto continue_unlock;
> > +			}
> > +
> > +			/* flush inline_data, if it's async context. */
> > +			if (is_inline_node(page)) {
> > +				clear_inline_node(page);
> > +				unlock_page(page);
> > +				flush_inline_data(sbi, ino_of_node(page));
> > +				continue;
> > +			}
> > +			unlock_page(page);
> > +		}
> > +		pagevec_release(&pvec);
> > +		cond_resched();
> > +	}
> > +	return ret;
> > +}
> > +
> >  int f2fs_sync_node_pages(struct f2fs_sb_info *sbi,
> >  			struct writeback_control *wbc,
> >  			bool do_balance, enum iostat_type io_type)
> > @@ -1870,8 +1917,8 @@ int f2fs_sync_node_pages(struct f2fs_sb_info *sbi,
> >  			goto continue_unlock;
> >  		}
> >
> > -		/* flush inline_data */
> > -		if (is_inline_node(page)) {
> > +		/* flush inline_data, if it's async context. */
> > +		if (do_balance && is_inline_node(page)) {
> >  			clear_inline_node(page);
> >  			unlock_page(page);
> >  			flush_inline_data(sbi, ino_of_node(page));