Subject: Re: [PATCH] f2fs: Fix deadlock under storage almost full/dirty condition
From: Chao Yu
To: Sahitya Tummala
CC: Jaegeuk Kim
Date: Mon, 11 Nov 2019 14:28:47 +0800
Message-ID: <9ece86fd-ff53-3a70-627e-c6acb03b9264@huawei.com>
In-Reply-To: <20191111034026.GA15669@codeaurora.org>
References: <1573211027-30785-1-git-send-email-stummala@codeaurora.org> <5c491884-91d3-5b85-6d49-569a8d06f3a3@huawei.com> <20191111034026.GA15669@codeaurora.org>
X-Mailing-List: linux-kernel@vger.kernel.org

Hi Sahitya,

On 2019/11/11 11:40, Sahitya Tummala wrote:
> Hi Chao,
>
> On Mon, Nov 11, 2019 at 10:51:10AM +0800, Chao Yu wrote:
>> On 2019/11/8 19:03, Sahitya Tummala wrote:
>>> There could be a potential deadlock when the storage capacity
>>> is almost full and there aren't enough free segments available, due
>>> to which FG_GC is needed in the atomic commit ioctl, as shown in
>>> the callstack below -
>>>
>>> schedule_timeout
>>> io_schedule_timeout
>>> congestion_wait
>>> f2fs_drop_inmem_pages_all
>>> f2fs_gc
>>> f2fs_balance_fs
>>> __write_node_page
>>> f2fs_fsync_node_pages
>>> f2fs_do_sync_file
>>> f2fs_ioctl
>>>
>>> If this inode doesn't have i_gc_failures[GC_FAILURE_ATOMIC] set,
>>> then f2fs_drop_inmem_pages_all() waits forever for this
>>> atomic inode to be dropped. And the rest of the system is stuck
>>> waiting for the sbi->gc_mutex lock, which is acquired by f2fs_balance_fs()
>>> in the stack above.
>>
>> I think the root cause of this issue is a potential infinite loop in
>> f2fs_drop_inmem_pages_all() for the case of gc_failure being true: once the
>> first inode in the inode_list[ATOMIC_FILE] list hasn't suffered a gc failure,
>> we skip dropping its in-memory cache and calling iput(), and traverse the list
>> again, and most likely the same inode is still at the head of that list.
>>
>
> I thought we are expecting those atomic updates (without any gc failures) to be
> committed by doing congestion_wait() and thus retrying again. Hence, I just

Nope, we only need to drop inodes which encountered gc failures, and keep the
rest of the inodes.

> fixed the case where we end up waiting for the commit to happen in the atomic
> commit path itself, which would be a deadlock.
Looking at the call stack you provided, I don't think it's correct to drop such
an inode, as its dirty pages should be committed before f2fs_fsync_node_pages();
so calling f2fs_drop_inmem_pages() won't release any inmem pages, and won't help
the looped GC caused by skipping due to inmem pages.

So then I figured out the fix below...

>
>> Could you please check the fix below:
>>
>> diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
>> index 7bf7b0194944..8a3a35b42a37 100644
>> --- a/fs/f2fs/f2fs.h
>> +++ b/fs/f2fs/f2fs.h
>> @@ -1395,6 +1395,7 @@ struct f2fs_sb_info {
>>  	unsigned int gc_mode;			/* current GC state */
>>  	unsigned int next_victim_seg[2];	/* next segment in victim section */
>>  	/* for skip statistic */
>> +	unsigned int atomic_files;		/* # of opened atomic file */
>>  	unsigned long long skipped_atomic_files[2];	/* FG_GC and BG_GC */
>>  	unsigned long long skipped_gc_rwsem;	/* FG_GC only */
>>
>> diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
>> index ecd063239642..79f4b348951a 100644
>> --- a/fs/f2fs/file.c
>> +++ b/fs/f2fs/file.c
>> @@ -2047,6 +2047,7 @@ static int f2fs_ioc_start_atomic_write(struct file *filp)
>>  	spin_lock(&sbi->inode_lock[ATOMIC_FILE]);
>>  	if (list_empty(&fi->inmem_ilist))
>>  		list_add_tail(&fi->inmem_ilist, &sbi->inode_list[ATOMIC_FILE]);
>> +	sbi->atomic_files++;
>>  	spin_unlock(&sbi->inode_lock[ATOMIC_FILE]);
>>
>>  	/* add inode in inmem_list first and set atomic_file */
>> diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
>> index 8b977bbd6822..6aa0bb693697 100644
>> --- a/fs/f2fs/segment.c
>> +++ b/fs/f2fs/segment.c
>> @@ -288,6 +288,8 @@ void f2fs_drop_inmem_pages_all(struct f2fs_sb_info *sbi,
>> bool gc_failure)
>>  	struct list_head *head = &sbi->inode_list[ATOMIC_FILE];
>>  	struct inode *inode;
>>  	struct f2fs_inode_info *fi;
>> +	unsigned int count = sbi->atomic_files;
>
> If sbi->atomic_files decrements just after this, then the below exit condition
> may not work. In that case, looped will never be >= count.
>
>> +	unsigned int looped = 0;
>>  next:
>>  	spin_lock(&sbi->inode_lock[ATOMIC_FILE]);
>>  	if (list_empty(head)) {
>> @@ -296,22 +298,29 @@ void f2fs_drop_inmem_pages_all(struct f2fs_sb_info *sbi,
>> bool gc_failure)
>>  	}
>>  	fi = list_first_entry(head, struct f2fs_inode_info, inmem_ilist);
>>  	inode = igrab(&fi->vfs_inode);
>> +	if (inode)
>> +		list_move_tail(&fi->inmem_ilist, head);
>>  	spin_unlock(&sbi->inode_lock[ATOMIC_FILE]);
>>
>>  	if (inode) {
>>  		if (gc_failure) {
>> -			if (fi->i_gc_failures[GC_FAILURE_ATOMIC])
>> -				goto drop;
>> -			goto skip;
>> +			if (!fi->i_gc_failures[GC_FAILURE_ATOMIC])
>> +				goto skip;
>>  		}
>> -drop:
>>  		set_inode_flag(inode, FI_ATOMIC_REVOKE_REQUEST);
>>  		f2fs_drop_inmem_pages(inode);
>> +skip:
>>  		iput(inode);
>
> Does this result in f2fs_evict_inode() being called in this context for this inode?

Yup, we need to call igrab()/iput() in pairs in f2fs_drop_inmem_pages_all()
anyway. Previously, we may have had an .i_count leak...

Thanks,

>
> thanks,
>
>>  	}
>> -skip:
>> +
>>  	congestion_wait(BLK_RW_ASYNC, HZ/50);
>>  	cond_resched();
>> +
>> +	if (gc_failure) {
>> +		if (++looped >= count)
>> +			return;
>> +	}
>> +
>>  	goto next;
>>  }
>>
>> @@ -334,6 +343,7 @@ void f2fs_drop_inmem_pages(struct inode *inode)
>>  	spin_lock(&sbi->inode_lock[ATOMIC_FILE]);
>>  	if (!list_empty(&fi->inmem_ilist))
>>  		list_del_init(&fi->inmem_ilist);
>> +	sbi->atomic_files--;
>>  	spin_unlock(&sbi->inode_lock[ATOMIC_FILE]);
>>  }
>>
>> Thanks,
>