From: Gao Xiang <hsiangkao@linux.alibaba.com>
To: linux-erofs@lists.ozlabs.org
Cc: LKML <linux-kernel@vger.kernel.org>,
    Gao Xiang <hsiangkao@linux.alibaba.com>
Subject: [PATCH 4/6] erofs: adapt managed inode operations into folios
Date: Sat, 27 May 2023 04:14:57 +0800
Message-Id: <20230526201459.128169-5-hsiangkao@linux.alibaba.com>
X-Mailer: git-send-email 2.24.4
In-Reply-To: <20230526201459.128169-1-hsiangkao@linux.alibaba.com>
References: <20230526201459.128169-1-hsiangkao@linux.alibaba.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This patch gets rid of erofs_try_to_free_cached_page() and folds it
into .release_folio(). It also moves the managed inode operations into
zdata.c, which simplifies the code a bit. No logic changes.

Signed-off-by: Gao Xiang <hsiangkao@linux.alibaba.com>
---
 fs/erofs/internal.h |  3 ++-
 fs/erofs/super.c    | 62 ---------------------------------------------
 fs/erofs/zdata.c    | 59 ++++++++++++++++++++++++++++++++++++------
 3 files changed, 53 insertions(+), 71 deletions(-)

diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
index af0431a40647..0b8506c39145 100644
--- a/fs/erofs/internal.h
+++ b/fs/erofs/internal.h
@@ -506,12 +506,12 @@ int __init z_erofs_init_zip_subsystem(void);
 void z_erofs_exit_zip_subsystem(void);
 int erofs_try_to_free_all_cached_pages(struct erofs_sb_info *sbi,
 				       struct erofs_workgroup *egrp);
-int erofs_try_to_free_cached_page(struct page *page);
 int z_erofs_load_lz4_config(struct super_block *sb,
 			    struct erofs_super_block *dsb,
 			    struct z_erofs_lz4_cfgs *lz4, int len);
 int z_erofs_map_blocks_iter(struct inode *inode, struct erofs_map_blocks *map,
 			    int flags);
+int erofs_init_managed_cache(struct super_block *sb);
 #else
 static inline void erofs_shrinker_register(struct super_block *sb) {}
 static inline void erofs_shrinker_unregister(struct super_block *sb) {}
@@ -529,6 +529,7 @@ static inline int z_erofs_load_lz4_config(struct super_block *sb,
 	}
 	return 0;
 }
+static inline int erofs_init_managed_cache(struct super_block *sb) { return 0; }
 #endif	/* !CONFIG_EROFS_FS_ZIP */
 
 #ifdef CONFIG_EROFS_FS_ZIP_LZMA
diff --git a/fs/erofs/super.c b/fs/erofs/super.c
index 811ab66d805e..c2829c91812b 100644
--- a/fs/erofs/super.c
+++ b/fs/erofs/super.c
@@ -599,68 +599,6 @@ static int erofs_fc_parse_param(struct fs_context *fc,
 	return 0;
 }
 
-#ifdef CONFIG_EROFS_FS_ZIP
-static const struct address_space_operations managed_cache_aops;
-
-static bool erofs_managed_cache_release_folio(struct folio *folio, gfp_t gfp)
-{
-	bool ret = true;
-	struct address_space *const mapping = folio->mapping;
-
-	DBG_BUGON(!folio_test_locked(folio));
-	DBG_BUGON(mapping->a_ops != &managed_cache_aops);
-
-	if (folio_test_private(folio))
-		ret = erofs_try_to_free_cached_page(&folio->page);
-
-	return ret;
-}
-
-/*
- * It will be called only on inode eviction. In case that there are still some
- * decompression requests in progress, wait with rescheduling for a bit here.
- * We could introduce an extra locking instead but it seems unnecessary.
- */
-static void erofs_managed_cache_invalidate_folio(struct folio *folio,
-						 size_t offset, size_t length)
-{
-	const size_t stop = length + offset;
-
-	DBG_BUGON(!folio_test_locked(folio));
-
-	/* Check for potential overflow in debug mode */
-	DBG_BUGON(stop > folio_size(folio) || stop < length);
-
-	if (offset == 0 && stop == folio_size(folio))
-		while (!erofs_managed_cache_release_folio(folio, GFP_NOFS))
-			cond_resched();
-}
-
-static const struct address_space_operations managed_cache_aops = {
-	.release_folio = erofs_managed_cache_release_folio,
-	.invalidate_folio = erofs_managed_cache_invalidate_folio,
-};
-
-static int erofs_init_managed_cache(struct super_block *sb)
-{
-	struct erofs_sb_info *const sbi = EROFS_SB(sb);
-	struct inode *const inode = new_inode(sb);
-
-	if (!inode)
-		return -ENOMEM;
-
-	set_nlink(inode, 1);
-	inode->i_size = OFFSET_MAX;
-
-	inode->i_mapping->a_ops = &managed_cache_aops;
-	mapping_set_gfp_mask(inode->i_mapping, GFP_NOFS);
-	sbi->managed_cache = inode;
-	return 0;
-}
-#else
-static int erofs_init_managed_cache(struct super_block *sb) { return 0; }
-#endif
-
 static struct inode *erofs_nfs_get_inode(struct super_block *sb,
 					 u64 ino, u32 generation)
 {
diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
index 76488824f146..15a383899540 100644
--- a/fs/erofs/zdata.c
+++ b/fs/erofs/zdata.c
@@ -667,29 +667,72 @@ int erofs_try_to_free_all_cached_pages(struct erofs_sb_info *sbi,
 	return 0;
 }
 
-int erofs_try_to_free_cached_page(struct page *page)
+static bool z_erofs_cache_release_folio(struct folio *folio, gfp_t gfp)
 {
-	struct z_erofs_pcluster *const pcl = (void *)page_private(page);
-	int ret, i;
+	struct z_erofs_pcluster *pcl = folio_get_private(folio);
+	bool ret;
+	int i;
+
+	if (!folio_test_private(folio))
+		return true;
 
 	if (!erofs_workgroup_try_to_freeze(&pcl->obj, 1))
-		return 0;
+		return false;
 
-	ret = 0;
+	ret = false;
 	DBG_BUGON(z_erofs_is_inline_pcluster(pcl));
 	for (i = 0; i < pcl->pclusterpages; ++i) {
-		if (pcl->compressed_bvecs[i].page == page) {
+		if (pcl->compressed_bvecs[i].page == &folio->page) {
 			WRITE_ONCE(pcl->compressed_bvecs[i].page, NULL);
-			ret = 1;
+			ret = true;
 			break;
 		}
 	}
 	erofs_workgroup_unfreeze(&pcl->obj, 1);
+
 	if (ret)
-		detach_page_private(page);
+		folio_detach_private(folio);
 	return ret;
 }
 
+/*
+ * It will be called only on inode eviction. In case that there are still some
+ * decompression requests in progress, wait with rescheduling for a bit here.
+ * An extra lock could be introduced instead but it seems unnecessary.
+ */
+static void z_erofs_cache_invalidate_folio(struct folio *folio,
+					   size_t offset, size_t length)
+{
+	const size_t stop = length + offset;
+
+	/* Check for potential overflow in debug mode */
+	DBG_BUGON(stop > folio_size(folio) || stop < length);
+
+	if (offset == 0 && stop == folio_size(folio))
+		while (!z_erofs_cache_release_folio(folio, GFP_NOFS))
+			cond_resched();
+}
+
+static const struct address_space_operations z_erofs_cache_aops = {
+	.release_folio = z_erofs_cache_release_folio,
+	.invalidate_folio = z_erofs_cache_invalidate_folio,
+};
+
+int erofs_init_managed_cache(struct super_block *sb)
+{
+	struct inode *const inode = new_inode(sb);
+
+	if (!inode)
+		return -ENOMEM;
+
+	set_nlink(inode, 1);
+	inode->i_size = OFFSET_MAX;
+	inode->i_mapping->a_ops = &z_erofs_cache_aops;
+	mapping_set_gfp_mask(inode->i_mapping, GFP_NOFS);
+	EROFS_SB(sb)->managed_cache = inode;
+	return 0;
+}
+
 static bool z_erofs_try_inplace_io(struct z_erofs_decompress_frontend *fe,
 				   struct z_erofs_bvec *bvec)
 {
-- 
2.24.4
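
P.S. For anyone less familiar with the ongoing folio conversion: the
rework above is just the generic folio private-data pattern applied to
the managed cache. Below is a minimal sketch of that pattern using
hypothetical demo_* names (it is not the EROFS code itself, which keys
its private data to a pcluster and a workgroup freeze instead):

#include <linux/fs.h>
#include <linux/pagemap.h>
#include <linux/slab.h>

/* Illustrative only: some per-folio private state pinned in page cache. */
struct demo_private {
	atomic_t busy;		/* in-flight users of the cached data */
};

static bool demo_release_folio(struct folio *folio, gfp_t gfp)
{
	struct demo_private *p;

	if (!folio_test_private(folio))
		return true;		/* nothing attached, safe to drop */

	p = folio_get_private(folio);
	if (atomic_read(&p->busy))
		return false;		/* still in use, refuse for now */

	folio_detach_private(folio);	/* clears PG_private and its ref */
	kfree(p);
	return true;
}

Returning false from ->release_folio() makes the caller back off, which
is exactly what lets the ->invalidate_folio() handler above keep
retrying with cond_resched() until in-flight decompression requests
drain on inode eviction.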