From: Gao Xiang
To: linux-erofs@lists.ozlabs.org
Cc: LKML, Gao Xiang, Yue Hu
Subject: [PATCH v2] erofs: clean up cached I/O strategies
Date: Tue, 6 Dec 2022 14:03:52 +0800
Message-Id: <20221206060352.152830-1-xiang@kernel.org>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Mailing-List: linux-kernel@vger.kernel.org

From: Gao Xiang

After commit 4c7e42552b3a ("erofs: remove useless cache strategy of
DELAYEDALLOC"), only one cached I/O allocation strategy is supported:

  When cached I/O is preferred, page allocation is applied without
  direct reclaim.  If allocation fails, fall back to inplace I/O.

Let's get rid of z_erofs_cache_alloctype.  No logical changes.

Reviewed-by: Yue Hu
Signed-off-by: Yue Hu
Signed-off-by: Gao Xiang
---
changes since v1:
 - fold in Yue Hu's fix:
   https://lore.kernel.org/r/20221206053633.4251-1-zbestahu@gmail.com

 fs/erofs/zdata.c | 77 +++++++++++++++++++-----------------------
 1 file changed, 31 insertions(+), 46 deletions(-)

diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
index b792d424d774..b66c16473273 100644
--- a/fs/erofs/zdata.c
+++ b/fs/erofs/zdata.c
@@ -175,16 +175,6 @@ static void z_erofs_free_pcluster(struct z_erofs_pcluster *pcl)
 	DBG_BUGON(1);
 }
 
-/* how to allocate cached pages for a pcluster */
-enum z_erofs_cache_alloctype {
-	DONTALLOC,	/* don't allocate any cached pages */
-	/*
-	 * try to use cached I/O if page allocation succeeds or fallback
-	 * to in-place I/O instead to avoid any direct reclaim.
-	 */
-	TRYALLOC,
-};
-
 /*
  * tagged pointer with 1-bit tag for all compressed pages
  * tag 0 - the page is just found with an extra page reference
@@ -292,12 +282,29 @@ struct z_erofs_decompress_frontend {
 	.inode = __i, .owned_head = Z_EROFS_PCLUSTER_TAIL, \
 	.mode = Z_EROFS_PCLUSTER_FOLLOWED, .backmost = true }
 
+static bool z_erofs_should_alloc_cache(struct z_erofs_decompress_frontend *fe)
+{
+	unsigned int cachestrategy = EROFS_I_SB(fe->inode)->opt.cache_strategy;
+
+	if (cachestrategy <= EROFS_ZIP_CACHE_DISABLED)
+		return false;
+
+	if (fe->backmost)
+		return true;
+
+	if (cachestrategy >= EROFS_ZIP_CACHE_READAROUND &&
+	    fe->map.m_la < fe->headoffset)
+		return true;
+
+	return false;
+}
+
 static void z_erofs_bind_cache(struct z_erofs_decompress_frontend *fe,
-			       enum z_erofs_cache_alloctype type,
 			       struct page **pagepool)
 {
 	struct address_space *mc = MNGD_MAPPING(EROFS_I_SB(fe->inode));
 	struct z_erofs_pcluster *pcl = fe->pcl;
+	bool shouldalloc = z_erofs_should_alloc_cache(fe);
 	bool standalone = true;
 	/*
 	 * optimistic allocation without direct reclaim since inplace I/O
@@ -326,18 +333,19 @@ static void z_erofs_bind_cache(struct z_erofs_decompress_frontend *fe,
 		} else {
 			/* I/O is needed, no possible to decompress directly */
 			standalone = false;
-			switch (type) {
-			case TRYALLOC:
-				newpage = erofs_allocpage(pagepool, gfp);
-				if (!newpage)
-					continue;
-				set_page_private(newpage,
-						 Z_EROFS_PREALLOCATED_PAGE);
-				t = tag_compressed_page_justfound(newpage);
-				break;
-			default:	/* DONTALLOC */
+			if (!shouldalloc)
 				continue;
-			}
+
+			/*
+			 * try to use cached I/O if page allocation
+			 * succeeds or fallback to in-place I/O instead
+			 * to avoid any direct reclaim.
+			 */
+			newpage = erofs_allocpage(pagepool, gfp);
+			if (!newpage)
+				continue;
+			set_page_private(newpage, Z_EROFS_PREALLOCATED_PAGE);
+			t = tag_compressed_page_justfound(newpage);
 		}
 
 		if (!cmpxchg_relaxed(&pcl->compressed_bvecs[i].page, NULL,
@@ -637,20 +645,6 @@ static bool z_erofs_collector_end(struct z_erofs_decompress_frontend *fe)
 	return true;
 }
 
-static bool should_alloc_managed_pages(struct z_erofs_decompress_frontend *fe,
-				       unsigned int cachestrategy,
-				       erofs_off_t la)
-{
-	if (cachestrategy <= EROFS_ZIP_CACHE_DISABLED)
-		return false;
-
-	if (fe->backmost)
-		return true;
-
-	return cachestrategy >= EROFS_ZIP_CACHE_READAROUND &&
-		la < fe->headoffset;
-}
-
 static int z_erofs_read_fragment(struct inode *inode, erofs_off_t pos,
				 struct page *page, unsigned int pageofs,
				 unsigned int len)
@@ -687,12 +681,9 @@ static int z_erofs_do_read_page(struct z_erofs_decompress_frontend *fe,
				struct page *page, struct page **pagepool)
 {
 	struct inode *const inode = fe->inode;
-	struct erofs_sb_info *const sbi = EROFS_I_SB(inode);
 	struct erofs_map_blocks *const map = &fe->map;
 	const loff_t offset = page_offset(page);
 	bool tight = true, exclusive;
-
-	enum z_erofs_cache_alloctype cache_strategy;
 	unsigned int cur, end, spiltted;
 	int err = 0;
 
@@ -746,13 +737,7 @@ static int z_erofs_do_read_page(struct z_erofs_decompress_frontend *fe,
 		fe->mode = Z_EROFS_PCLUSTER_FOLLOWED_NOINPLACE;
 	} else {
 		/* bind cache first when cached decompression is preferred */
-		if (should_alloc_managed_pages(fe, sbi->opt.cache_strategy,
-					       map->m_la))
-			cache_strategy = TRYALLOC;
-		else
-			cache_strategy = DONTALLOC;
-
-		z_erofs_bind_cache(fe, cache_strategy, pagepool);
+		z_erofs_bind_cache(fe, pagepool);
 	}
 hitted:
 	/*
-- 
2.30.2