Subject: Re: [PATCH] f2fs: Fix deadlock under storage almost full/dirty condition
From: Chao Yu
To: Sahitya Tummala, Jaegeuk Kim
Date: Mon, 11 Nov 2019 10:51:10 +0800
Message-ID: <5c491884-91d3-5b85-6d49-569a8d06f3a3@huawei.com>
In-Reply-To: <1573211027-30785-1-git-send-email-stummala@codeaurora.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2019/11/8 19:03, Sahitya Tummala wrote:
> There could be a potential deadlock when the storage capacity
> is almost full and there aren't enough free segments available, due
> to which FG_GC is needed in the atomic commit ioctl as shown in
> the callstack below:
>
> schedule_timeout
> io_schedule_timeout
> congestion_wait
> f2fs_drop_inmem_pages_all
> f2fs_gc
> f2fs_balance_fs
> __write_node_page
> f2fs_fsync_node_pages
> f2fs_do_sync_file
> f2fs_ioctl
>
> If this inode doesn't have i_gc_failures[GC_FAILURE_ATOMIC] set,
> then it waits forever in f2fs_drop_inmem_pages_all() for this
> atomic inode to be dropped. And the rest of the system is stuck
> waiting for the sbi->gc_mutex lock, which is acquired by
> f2fs_balance_fs() in the stack above.

I think the root cause of this issue is a potential infinite loop in
f2fs_drop_inmem_pages_all() when gc_failure is true: if the first inode
in the inode_list[ATOMIC_FILE] list has not suffered a GC failure, we
skip dropping its in-memory cache and calling iput(), then traverse the
list again, most likely finding the same inode still at the head of
that list.

Could you please check the fix below:

diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index 7bf7b0194944..8a3a35b42a37 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -1395,6 +1395,7 @@ struct f2fs_sb_info {
 	unsigned int gc_mode;			/* current GC state */
 	unsigned int next_victim_seg[2];	/* next segment in victim section */
 	/* for skip statistic */
+	unsigned int atomic_files;		/* # of opened atomic file */
 	unsigned long long skipped_atomic_files[2];	/* FG_GC and BG_GC */
 	unsigned long long skipped_gc_rwsem;		/* FG_GC only */
 
diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
index ecd063239642..79f4b348951a 100644
--- a/fs/f2fs/file.c
+++ b/fs/f2fs/file.c
@@ -2047,6 +2047,7 @@ static int f2fs_ioc_start_atomic_write(struct file *filp)
 	spin_lock(&sbi->inode_lock[ATOMIC_FILE]);
 	if (list_empty(&fi->inmem_ilist))
 		list_add_tail(&fi->inmem_ilist, &sbi->inode_list[ATOMIC_FILE]);
+	sbi->atomic_files++;
 	spin_unlock(&sbi->inode_lock[ATOMIC_FILE]);
 
 	/* add inode in inmem_list first and set atomic_file */
diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
index 8b977bbd6822..6aa0bb693697 100644
--- a/fs/f2fs/segment.c
+++ b/fs/f2fs/segment.c
@@ -288,6 +288,8 @@ void f2fs_drop_inmem_pages_all(struct f2fs_sb_info *sbi, bool gc_failure)
 	struct list_head *head = &sbi->inode_list[ATOMIC_FILE];
 	struct inode *inode;
 	struct f2fs_inode_info *fi;
+	unsigned int count = sbi->atomic_files;
+	unsigned int looped = 0;
 next:
 	spin_lock(&sbi->inode_lock[ATOMIC_FILE]);
 	if (list_empty(head)) {
@@ -296,22 +298,29 @@ void f2fs_drop_inmem_pages_all(struct f2fs_sb_info *sbi, bool gc_failure)
 	}
 	fi = list_first_entry(head, struct f2fs_inode_info, inmem_ilist);
 	inode = igrab(&fi->vfs_inode);
+	if (inode)
+		list_move_tail(&fi->inmem_ilist, head);
 	spin_unlock(&sbi->inode_lock[ATOMIC_FILE]);
 
 	if (inode) {
 		if (gc_failure) {
-			if (fi->i_gc_failures[GC_FAILURE_ATOMIC])
-				goto drop;
-			goto skip;
+			if (!fi->i_gc_failures[GC_FAILURE_ATOMIC])
+				goto skip;
 		}
-drop:
 		set_inode_flag(inode, FI_ATOMIC_REVOKE_REQUEST);
 		f2fs_drop_inmem_pages(inode);
+skip:
 		iput(inode);
 	}
-skip:
+
 	congestion_wait(BLK_RW_ASYNC, HZ/50);
 	cond_resched();
+
+	if (gc_failure) {
+		if (++looped >= count)
+			return;
+	}
+
 	goto next;
 }
 
@@ -334,6 +343,7 @@ void f2fs_drop_inmem_pages(struct inode *inode)
 	spin_lock(&sbi->inode_lock[ATOMIC_FILE]);
 	if (!list_empty(&fi->inmem_ilist))
 		list_del_init(&fi->inmem_ilist);
+	sbi->atomic_files--;
 	spin_unlock(&sbi->inode_lock[ATOMIC_FILE]);
 }
 

Thanks,
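
For reference, below is a minimal user-space sketch of the termination
argument behind the segment.c change above. The list helpers, struct
file_entry, and drop_all_fixed() are simplified stand-ins for the
kernel's list_head API and the f2fs structures, not actual f2fs code:
rotating each visited entry to the tail guarantees forward progress
through the list, and the looped/count bound caps the traversal at one
full pass, so a head inode without GC failures can no longer be
re-examined forever.

/* Illustrative user-space stand-ins for the kernel's list_head helpers. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct list_head { struct list_head *prev, *next; };

static void list_init(struct list_head *h) { h->prev = h->next = h; }
static bool list_empty(struct list_head *h) { return h->next == h; }

static void list_del(struct list_head *e)
{
	e->prev->next = e->next;
	e->next->prev = e->prev;
}

static void list_add_tail(struct list_head *e, struct list_head *h)
{
	e->prev = h->prev;
	e->next = h;
	h->prev->next = e;
	h->prev = e;
}

static void list_move_tail(struct list_head *e, struct list_head *h)
{
	list_del(e);
	list_add_tail(e, h);
}

/* Stand-in for f2fs_inode_info; gc_failed mirrors i_gc_failures[]. */
struct file_entry {
	struct list_head link;
	bool gc_failed;
};

/*
 * Mirrors the fixed f2fs_drop_inmem_pages_all() loop: every visited
 * entry is first rotated to the tail (forward progress), and when
 * gc_failure is set the traversal stops after one full pass over
 * 'count' entries instead of spinning on a head entry it never drops.
 */
static void drop_all_fixed(struct list_head *head, unsigned int count,
			   bool gc_failure)
{
	unsigned int looped = 0;

	while (!list_empty(head)) {
		struct file_entry *fe =
			(struct file_entry *)((char *)head->next -
					offsetof(struct file_entry, link));

		list_move_tail(&fe->link, head);	/* like the patch */

		if (!gc_failure || fe->gc_failed) {
			printf("dropping entry %p\n", (void *)fe);
			list_del(&fe->link);	/* like f2fs_drop_inmem_pages() */
		}

		if (gc_failure && ++looped >= count)	/* at most one pass */
			return;
	}
}

int main(void)
{
	struct list_head head;
	struct file_entry a = { .gc_failed = false };
	struct file_entry b = { .gc_failed = true };

	list_init(&head);
	list_add_tail(&a.link, &head);
	list_add_tail(&b.link, &head);

	/*
	 * Pre-patch, entry 'a' would stay at the head, be skipped, and be
	 * re-examined forever. With rotation plus the looped/count bound,
	 * 'b' is dropped and the loop returns after two iterations, with
	 * 'a' (no GC failure) intentionally left on the list.
	 */
	drop_all_fixed(&head, 2, true);
	return 0;
}

Note that in the patch itself the list_move_tail() happens while
inode_lock[ATOMIC_FILE] is still held, and count is sampled once before
the loop, which is what makes the one-pass bound meaningful under
concurrent list updates.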