From: Yunlong Song <yunlong.song@huawei.com>
Subject: [PATCH v2] f2fs: fix out-of-free problem caused by atomic write
Date: Mon, 30 Oct 2017 21:04:18 +0800
Message-ID: <1509368658-60355-1-git-send-email-yunlong.song@huawei.com>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1509027715-80477-1-git-send-email-yunlong.song@huawei.com>
References: <1509027715-80477-1-git-send-email-yunlong.song@huawei.com>

f2fs_balance_fs is only called once in commit_inmem_pages, but there can be
more than one page to commit, so all the other pages miss the check. This can
lead to an out-of-free problem when committing a very large file. However, we
cannot call f2fs_balance_fs for each inmem page, since that would break
atomicity. Instead, collect prefree segments if needed and stop the atomic
commit when there are not enough available blocks to write the atomic pages.

Signed-off-by: Yunlong Song <yunlong.song@huawei.com>
---
 fs/f2fs/f2fs.h    |  1 +
 fs/f2fs/segment.c | 29 ++++++++++++++++++++++++++++-
 2 files changed, 29 insertions(+), 1 deletion(-)
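Editor's note (not part of the patch): as a quick illustration of the two
guards this change adds to commit_inmem_pages(), here is a minimal standalone
sketch. The numbers and the have_prefree/short_of_secs flags are made-up
stand-ins for prefree_segments(), has_not_enough_free_secs(),
valid_user_blocks() and BLKS_PER_SEC().

/*
 * Illustrative sketch only, NOT part of the patch: models the two guards
 * added to commit_inmem_pages() with made-up numbers standing in for the
 * real f2fs helpers.
 */
#include <stdio.h>
#include <stdbool.h>

#define BLKS_PER_SEC	512UL	/* assumed blocks per section */

int main(void)
{
	unsigned long inmem_blocks = 40960;	/* pending atomic-write blocks */
	unsigned long user_block_count = 1000000;
	unsigned long valid_user_blocks = 995000;
	bool have_prefree = true;	/* stands in for prefree_segments(sbi) != 0 */
	bool short_of_secs = true;	/* stands in for has_not_enough_free_secs() */

	/* Guard 1: write a checkpoint first to reclaim prefree segments
	 * when the commit needs more sections than are currently free. */
	if (have_prefree && short_of_secs)
		printf("write_checkpoint() before commit (%lu sections needed)\n",
		       inmem_blocks / BLKS_PER_SEC);

	/* Guard 2: abort the atomic commit with -ENOSPC when even the raw
	 * free-block count cannot hold all pending in-memory pages. */
	if (user_block_count - valid_user_blocks < inmem_blocks)
		printf("abort commit: -ENOSPC\n");

	return 0;
}

With these example numbers both guards fire; in the patch itself the first
guard triggers write_checkpoint() and the second jumps to the new drop: label,
which drops all uncommitted pages.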
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index 13a96b8..04ce48f 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -610,6 +610,7 @@ struct f2fs_inode_info {
 	struct list_head inmem_pages;	/* inmemory pages managed by f2fs */
 	struct task_struct *inmem_task;	/* store inmemory task */
 	struct mutex inmem_lock;	/* lock for inmemory pages */
+	unsigned long inmem_blocks;	/* inmemory blocks */
 	struct extent_tree *extent_tree;	/* cached extent_tree entry */
 	struct rw_semaphore dio_rwsem[2];/* avoid racing between dio and gc */
 	struct rw_semaphore i_mmap_sem;
diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
index 46dfbca..813c110 100644
--- a/fs/f2fs/segment.c
+++ b/fs/f2fs/segment.c
@@ -210,6 +210,7 @@ void register_inmem_page(struct inode *inode, struct page *page)
 	list_add_tail(&fi->inmem_ilist, &sbi->inode_list[ATOMIC_FILE]);
 	spin_unlock(&sbi->inode_lock[ATOMIC_FILE]);
 	inc_page_count(F2FS_I_SB(inode), F2FS_INMEM_PAGES);
+	fi->inmem_blocks++;
 	mutex_unlock(&fi->inmem_lock);
 
 	trace_f2fs_register_inmem_page(page, INMEM);
@@ -221,6 +222,7 @@ static int __revoke_inmem_pages(struct inode *inode,
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
 	struct inmem_pages *cur, *tmp;
 	int err = 0;
+	struct f2fs_inode_info *fi = F2FS_I(inode);
 
 	list_for_each_entry_safe(cur, tmp, head, list) {
 		struct page *page = cur->page;
@@ -263,6 +265,7 @@ static int __revoke_inmem_pages(struct inode *inode,
 		list_del(&cur->list);
 		kmem_cache_free(inmem_entry_slab, cur);
 		dec_page_count(F2FS_I_SB(inode), F2FS_INMEM_PAGES);
+		fi->inmem_blocks--;
 	}
 	return err;
 }
@@ -302,6 +305,10 @@ void drop_inmem_pages(struct inode *inode)
 	if (!list_empty(&fi->inmem_ilist))
 		list_del_init(&fi->inmem_ilist);
 	spin_unlock(&sbi->inode_lock[ATOMIC_FILE]);
+	if (fi->inmem_blocks) {
+		f2fs_bug_on(sbi, 1);
+		fi->inmem_blocks = 0;
+	}
 	mutex_unlock(&fi->inmem_lock);
 
 	clear_inode_flag(inode, FI_ATOMIC_FILE);
@@ -326,6 +333,7 @@ void drop_inmem_page(struct inode *inode, struct page *page)
 
 	f2fs_bug_on(sbi, !cur || cur->page != page);
 	list_del(&cur->list);
+	fi->inmem_blocks--;
 	mutex_unlock(&fi->inmem_lock);
 
 	dec_page_count(sbi, F2FS_INMEM_PAGES);
@@ -410,11 +418,26 @@ int commit_inmem_pages(struct inode *inode)
 	INIT_LIST_HEAD(&revoke_list);
 
 	f2fs_balance_fs(sbi, true);
+	if (prefree_segments(sbi)
+			&& has_not_enough_free_secs(sbi, 0,
+			fi->inmem_blocks / BLKS_PER_SEC(sbi))) {
+		struct cp_control cpc;
+
+		cpc.reason = __get_cp_reason(sbi);
+		err = write_checkpoint(sbi, &cpc);
+		if (err)
+			goto drop;
+	}
 	f2fs_lock_op(sbi);
 
 	set_inode_flag(inode, FI_ATOMIC_COMMIT);
 
 	mutex_lock(&fi->inmem_lock);
+	if ((sbi->user_block_count - valid_user_blocks(sbi)) <
+				fi->inmem_blocks) {
+		err = -ENOSPC;
+		goto drop;
+	}
 	err = __commit_inmem_pages(inode, &revoke_list);
 	if (err) {
 		int ret;
@@ -429,7 +452,7 @@ int commit_inmem_pages(struct inode *inode)
 		ret = __revoke_inmem_pages(inode, &revoke_list, false, true);
 		if (ret)
 			err = ret;
-
+drop:
 		/* drop all uncommitted pages */
 		__revoke_inmem_pages(inode, &fi->inmem_pages, true, false);
 	}
@@ -437,6 +460,10 @@ int commit_inmem_pages(struct inode *inode)
 	if (!list_empty(&fi->inmem_ilist))
 		list_del_init(&fi->inmem_ilist);
 	spin_unlock(&sbi->inode_lock[ATOMIC_FILE]);
+	if (fi->inmem_blocks) {
+		f2fs_bug_on(sbi, 1);
+		fi->inmem_blocks = 0;
+	}
 	mutex_unlock(&fi->inmem_lock);
 
 	clear_inode_flag(inode, FI_ATOMIC_COMMIT);
-- 
1.8.5.2