Date: Mon, 20 Aug 2018 03:11:59 -0600
From: "Gang He"
To: "Lei Chen"
Subject: Re: [Ocfs2-devel] [PATCH] fix crash on ocfs2_duplicate_clusters_by_page
References: <20180816112426.12218-1-lchen@suse.com>
In-Reply-To: <20180816112426.12218-1-lchen@suse.com>
Message-Id: <5B7A85DF020000F90002F815@prv1-mh.provo.novell.com>

Hello Larry,

>>> On 2018/8/16 at 19:24, in message <20180816112426.12218-1-lchen@suse.com>, Larry
Chen wrote:
> ocfs2_duplicate_clusters_by_page may crash if an extent's page is dirty.
> When a page has not yet been written back, it is still in the dirty
> state. If ocfs2_duplicate_clusters_by_page is called against the page at
> that moment, the crash happens.
>
> To fix this bug, we can just unlock the page and wait until the page is
> no longer dirty.
>
> I don't know whether the patch is appropriate, so I need comments,
> thanks.
>
> The following is the crash call trace:
>
> kernel BUG at /root/code/ocfs2/refcounttree.c:2961!
> __ocfs2_move_extent+0x80/0x450 [ocfs2]
> ? __ocfs2_claim_clusters+0x130/0x250 [ocfs2]
> ocfs2_defrag_extent+0x5b8/0x5e0 [ocfs2]
> __ocfs2_move_extents_range+0x2a4/0x470 [ocfs2]
> ocfs2_move_extents+0x180/0x3b0 [ocfs2]
> ? ocfs2_wait_for_recovery+0x13/0x70 [ocfs2]
> ocfs2_ioctl_move_extents+0x133/0x2d0 [ocfs2]
> ocfs2_ioctl+0x253/0x640 [ocfs2]
> do_vfs_ioctl+0x90/0x5f0
> SyS_ioctl+0x74/0x80
> do_syscall_64+0x74/0x140
> entry_SYSCALL_64_after_hwframe+0x3d/0xa2
>
> To: mfasheh@versity.com,
> jlbec@evilplan.org
> Cc: linux-kernel@vger.kernel.org,
> ocfs2-devel@oss.oracle.com,
> akpm@linux-foundation.org
>
> Signed-off-by: Larry Chen
> ---
>  fs/ocfs2/refcounttree.c | 10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/fs/ocfs2/refcounttree.c b/fs/ocfs2/refcounttree.c
> index 7869622af22a..ee3b9dbbc310 100644
> --- a/fs/ocfs2/refcounttree.c
> +++ b/fs/ocfs2/refcounttree.c
> @@ -2946,6 +2946,7 @@ int ocfs2_duplicate_clusters_by_page(handle_t *handle,
>          if (map_end & (PAGE_SIZE - 1))
>                  to = map_end & (PAGE_SIZE - 1);
>
> +retry:
>          page = find_or_create_page(mapping, page_index, GFP_NOFS);
>          if (!page) {
>                  ret = -ENOMEM;
> @@ -2957,8 +2958,13 @@ int ocfs2_duplicate_clusters_by_page(handle_t *handle,
>           * In case PAGE_SIZE <= CLUSTER_SIZE, This page
>           * can't be dirtied before we CoW it out.
>           */
> -        if (PAGE_SIZE <= OCFS2_SB(sb)->s_clustersize)
> -                BUG_ON(PageDirty(page));
> +        if (PAGE_SIZE <= OCFS2_SB(sb)->s_clustersize) {
> +                if (PageDirty(page)) {
> +                        unlock_page(page);

Here, if we find the page is dirty, could we write it out to disk
proactively, rather than waiting for it to become clean via the normal VM
writeback mechanism? (A rough sketch of this idea is appended after the
quoted patch below.)

Thanks
Gang

> +                        cond_resched();
> +                        goto retry;
> +                }
> +        }
>
>          if (!PageUptodate(page)) {
>                  ret = block_read_full_page(page, ocfs2_get_block);
> --
> 2.13.7
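On the "write the page out directly" idea above: here is a rough, untested
sketch of what that could look like inside the same retry loop. It assumes
write_one_page() is acceptable in this context (it issues WB_SYNC_ALL
writeback on the locked page, waits for it, and unlocks the page on return);
the put_page() is only there because find_or_create_page() will take a new
reference when we retry.

        if (PAGE_SIZE <= OCFS2_SB(sb)->s_clustersize) {
                if (PageDirty(page)) {
                        /*
                         * Sketch only: write the dirty page out ourselves
                         * instead of waiting for background writeback to
                         * clean it.  write_one_page() unlocks the page on
                         * return, so drop our reference before looking the
                         * page up again.
                         */
                        ret = write_one_page(page);
                        put_page(page);
                        if (ret) {
                                mlog_errno(ret);
                                break;
                        }
                        goto retry;
                }
        }

Whether this is preferable to the unlock-and-retry loop in the patch is a
trade-off: the explicit writeback bounds how long we can spin waiting for the
page to become clean, but it adds synchronous I/O to the CoW path and needs
to be safe under the locks and journal handle held here.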