Message-ID: <9d450a33-41eb-0caf-aba1-427c5ae547ed@kernel.org>
Date: Thu, 3 Feb 2022 22:53:07 +0800
From: Chao Yu
To: jaegeuk@kernel.org
Cc: linux-f2fs-devel@lists.sourceforge.net, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] f2fs: adjust readahead block number during recovery
In-Reply-To: <20220129082112.1814398-1-chao@kernel.org>
References: <20220129082112.1814398-1-chao@kernel.org>

Jaegeuk, any comments on this patch?

Thanks,

On 2022/1/29 16:21, Chao Yu wrote:
> In a fragmented image, entries in the dnode block list may sit at
> non-contiguous physical block addresses; however, in the recovery flow
> we always read ahead BIO_MAX_VECS blocks, so for such images the
> current readahead policy is inefficient. Let's adjust the readahead
> window size dynamically based on the consecutiveness of dnode blocks.
> 
> Signed-off-by: Chao Yu
> ---
>  fs/f2fs/checkpoint.c |  8 ++++++--
>  fs/f2fs/f2fs.h       |  6 +++++-
>  fs/f2fs/recovery.c   | 27 ++++++++++++++++++++++++---
>  3 files changed, 35 insertions(+), 6 deletions(-)
> 
> diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
> index 57a2d9164bee..203a1577942d 100644
> --- a/fs/f2fs/checkpoint.c
> +++ b/fs/f2fs/checkpoint.c
> @@ -282,18 +282,22 @@ int f2fs_ra_meta_pages(struct f2fs_sb_info *sbi, block_t start, int nrpages,
>  	return blkno - start;
>  }
>  
> -void f2fs_ra_meta_pages_cond(struct f2fs_sb_info *sbi, pgoff_t index)
> +void f2fs_ra_meta_pages_cond(struct f2fs_sb_info *sbi, pgoff_t index,
> +						unsigned int ra_blocks)
>  {
>  	struct page *page;
>  	bool readahead = false;
>  
> +	if (ra_blocks == RECOVERY_MIN_RA_BLOCKS)
> +		return;
> +
>  	page = find_get_page(META_MAPPING(sbi), index);
>  	if (!page || !PageUptodate(page))
>  		readahead = true;
>  	f2fs_put_page(page, 0);
>  
>  	if (readahead)
> -		f2fs_ra_meta_pages(sbi, index, BIO_MAX_VECS, META_POR, true);
> +		f2fs_ra_meta_pages(sbi, index, ra_blocks, META_POR, true);
>  }
>  
>  static int __f2fs_write_meta_page(struct page *page,
> diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
> index 5af415208488..1fa6b3f98a71 100644
> --- a/fs/f2fs/f2fs.h
> +++ b/fs/f2fs/f2fs.h
> @@ -590,6 +590,9 @@ enum {
>  /* number of extent info in extent cache we try to shrink */
>  #define EXTENT_CACHE_SHRINK_NUMBER	128
>  
> +#define RECOVERY_MAX_RA_BLOCKS		BIO_MAX_VECS
> +#define RECOVERY_MIN_RA_BLOCKS		1
> +
>  struct rb_entry {
>  	struct rb_node rb_node;		/* rb node located in rb-tree */
>  	union {
> @@ -3655,7 +3658,8 @@ bool f2fs_is_valid_blkaddr(struct f2fs_sb_info *sbi,
>  				block_t blkaddr, int type);
>  int f2fs_ra_meta_pages(struct f2fs_sb_info *sbi, block_t start, int nrpages,
>  			int type, bool sync);
> -void f2fs_ra_meta_pages_cond(struct f2fs_sb_info *sbi, pgoff_t index);
> +void f2fs_ra_meta_pages_cond(struct f2fs_sb_info *sbi, pgoff_t index,
> +						unsigned int ra_blocks);
>  long f2fs_sync_meta_pages(struct f2fs_sb_info *sbi, enum page_type type,
>  				long nr_to_write, enum iostat_type io_type);
>  void f2fs_add_ino_entry(struct f2fs_sb_info *sbi, nid_t ino, int type);
> diff --git a/fs/f2fs/recovery.c b/fs/f2fs/recovery.c
> index f69b685fb2b2..0b88d0ce284a 100644
> --- a/fs/f2fs/recovery.c
> +++ b/fs/f2fs/recovery.c
> @@ -346,6 +346,19 @@ static int recover_inode(struct inode *inode, struct page *page)
>  	return 0;
>  }
>  
> +static unsigned int adjust_por_ra_blocks(struct f2fs_sb_info *sbi,
> +				unsigned int ra_blocks, unsigned int blkaddr,
> +				unsigned int next_blkaddr)
> +{
> +	if (blkaddr + 1 == next_blkaddr)
> +		ra_blocks = min_t(unsigned int, RECOVERY_MAX_RA_BLOCKS,
> +						ra_blocks * 2);
> +	else if (next_blkaddr % sbi->blocks_per_seg)
> +		ra_blocks = max_t(unsigned int, RECOVERY_MIN_RA_BLOCKS,
> +						ra_blocks / 2);
> +	return ra_blocks;
> +}
> +
>  static int find_fsync_dnodes(struct f2fs_sb_info *sbi, struct list_head *head,
>  				bool check_only)
>  {
> @@ -353,6 +366,7 @@ static int find_fsync_dnodes(struct f2fs_sb_info *sbi, struct list_head *head,
>  	struct page *page = NULL;
>  	block_t blkaddr;
>  	unsigned int loop_cnt = 0;
> +	unsigned int ra_blocks = RECOVERY_MAX_RA_BLOCKS;
>  	unsigned int free_blocks = MAIN_SEGS(sbi) * sbi->blocks_per_seg -
>  						valid_user_blocks(sbi);
>  	int err = 0;
> @@ -427,11 +441,14 @@ static int find_fsync_dnodes(struct f2fs_sb_info *sbi, struct list_head *head,
>  			break;
>  		}
>  
> +		ra_blocks = adjust_por_ra_blocks(sbi, ra_blocks, blkaddr,
> +					next_blkaddr_of_node(page));
> +
>  		/* check next segment */
>  		blkaddr = next_blkaddr_of_node(page);
>  		f2fs_put_page(page, 1);
>  
> -		f2fs_ra_meta_pages_cond(sbi, blkaddr);
> +		f2fs_ra_meta_pages_cond(sbi, blkaddr, ra_blocks);
>  	}
>  	return err;
>  }
> @@ -707,6 +724,7 @@ static int recover_data(struct f2fs_sb_info *sbi, struct list_head *inode_list,
>  	struct page *page = NULL;
>  	int err = 0;
>  	block_t blkaddr;
> +	unsigned int ra_blocks = RECOVERY_MAX_RA_BLOCKS;
>  
>  	/* get node pages in the current segment */
>  	curseg = CURSEG_I(sbi, CURSEG_WARM_NODE);
> @@ -718,8 +736,6 @@ static int recover_data(struct f2fs_sb_info *sbi, struct list_head *inode_list,
>  		if (!f2fs_is_valid_blkaddr(sbi, blkaddr, META_POR))
>  			break;
>  
> -		f2fs_ra_meta_pages_cond(sbi, blkaddr);
> -
>  		page = f2fs_get_tmp_page(sbi, blkaddr);
>  		if (IS_ERR(page)) {
>  			err = PTR_ERR(page);
> @@ -762,9 +778,14 @@ static int recover_data(struct f2fs_sb_info *sbi, struct list_head *inode_list,
>  		if (entry->blkaddr == blkaddr)
>  			list_move_tail(&entry->list, tmp_inode_list);
> next:
> +		ra_blocks = adjust_por_ra_blocks(sbi, ra_blocks, blkaddr,
> +					next_blkaddr_of_node(page));
> +
>  		/* check next segment */
>  		blkaddr = next_blkaddr_of_node(page);
>  		f2fs_put_page(page, 1);
> +
> +		f2fs_ra_meta_pages_cond(sbi, blkaddr, ra_blocks);
>  	}
>  	if (!err)
>  		f2fs_allocate_new_segments(sbi);
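[Editor's note] The window-adjustment heuristic in the quoted patch can be sketched as plain user-space C. This is a minimal illustration, not f2fs code: MAX_RA_BLOCKS, MIN_RA_BLOCKS, and BLOCKS_PER_SEG are stand-in constants for the kernel's BIO_MAX_VECS and sbi->blocks_per_seg (values here are illustrative), and the kernel's min_t()/max_t() macros are replaced with open-coded clamps.

```c
/* Stand-in constants: BIO_MAX_VECS is 256 in current kernels, and an
 * f2fs segment typically holds 512 4KB blocks; both are assumptions
 * for this sketch. */
#define MAX_RA_BLOCKS   256
#define MIN_RA_BLOCKS   1
#define BLOCKS_PER_SEG  512

/* Double the readahead window while the dnode chain stays physically
 * contiguous (capped at MAX_RA_BLOCKS); halve it when the chain jumps
 * mid-segment (floored at MIN_RA_BLOCKS). A jump landing exactly on a
 * segment boundary leaves the window unchanged. */
unsigned int adjust_ra_blocks(unsigned int ra_blocks,
                              unsigned int blkaddr,
                              unsigned int next_blkaddr)
{
    if (blkaddr + 1 == next_blkaddr) {
        ra_blocks *= 2;
        if (ra_blocks > MAX_RA_BLOCKS)
            ra_blocks = MAX_RA_BLOCKS;
    } else if (next_blkaddr % BLOCKS_PER_SEG) {
        ra_blocks /= 2;
        if (ra_blocks < MIN_RA_BLOCKS)
            ra_blocks = MIN_RA_BLOCKS;
    }
    return ra_blocks;
}
```

With this shape, a fully contiguous chain quickly ramps the window back up to the full BIO_MAX_VECS-sized readahead the old code always issued, while a fragmented chain decays toward a window of 1, at which point the patched f2fs_ra_meta_pages_cond() skips readahead entirely.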