From: Eric Biggers
To: linux-fscrypt@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, linux-ext4@vger.kernel.org,
    linux-f2fs-devel@lists.sourceforge.net, Matthew Wilcox, Chao Yu
Subject: [PATCH v4] fsverity: stop using PG_error to track error status
Date: Fri, 25 Nov 2022 11:06:42 -0800
Message-Id: <20221125190642.12787-1-ebiggers@kernel.org>

From: Eric Biggers

As a step towards freeing the PG_error flag for other uses, change ext4
and f2fs to stop using PG_error to track verity errors. Instead, if a
verity error occurs, just mark the whole bio as failed. The coarser
granularity isn't really a problem since it isn't any worse than what
the block layer provides, and errors from a multi-page readahead aren't
reported to applications unless a single-page read fails too.

f2fs supports compression, which makes the f2fs changes a bit more
complicated than desired, but the basic premise still works.

Note: there are still a few uses of PageError in f2fs, but they are on
the write path, so they are unrelated and this patch doesn't touch them.

Reviewed-by: Chao Yu
Signed-off-by: Eric Biggers
---

v4: Added a comment for decompression_attempted, added a paragraph to
    the commit message, and added Chao's Reviewed-by.

v3: made a small simplification to the f2fs changes. Also dropped the
    fscrypt patch since it is upstream now.
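For illustration only, here is a minimal userspace sketch (not part of the
patch; all names such as toy_bio, toy_page, and toy_verify_page are invented
stand-ins rather than the real kernel structures or APIs) of the error model
this series moves to: a verification failure marks the whole read request as
failed, and the completion handler then leaves every page not up-to-date, so
the failing page is simply re-read and re-verified on a later single-page read.

/*
 * Standalone model of the new error-reporting scheme.  All names here
 * (toy_bio, toy_page, toy_verify_page, ...) are invented for this sketch;
 * they are not the kernel's structures or functions.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_PAGES 4

struct toy_page {
        bool uptodate;
};

struct toy_bio {
        int status;             /* 0 = success, nonzero = failed (like bio->bi_status) */
        struct toy_page pages[NR_PAGES];
};

/* Stand-in for per-page verification; pretend page 2 is corrupted. */
static bool toy_verify_page(int index)
{
        return index != 2;
}

/* New model: one verification failure fails the whole bio. */
static void toy_verify_bio(struct toy_bio *bio)
{
        int i;

        for (i = 0; i < NR_PAGES; i++) {
                if (!toy_verify_page(i)) {
                        bio->status = -1;       /* like BLK_STS_IOERR */
                        break;
                }
        }
}

/* Completion: pages become up-to-date only if the whole bio succeeded. */
static void toy_read_end_io(struct toy_bio *bio)
{
        int i;

        for (i = 0; i < NR_PAGES; i++)
                bio->pages[i].uptodate = (bio->status == 0);
}

int main(void)
{
        struct toy_bio bio = { 0 };
        int i;

        toy_verify_bio(&bio);
        toy_read_end_io(&bio);

        for (i = 0; i < NR_PAGES; i++)
                printf("page %d uptodate: %d\n", i, bio.pages[i].uptodate);
        return 0;
}

Running this model leaves every page not up-to-date, mirroring the coarser,
per-bio granularity described in the commit message above.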
 fs/ext4/readpage.c |  8 ++----
 fs/f2fs/compress.c | 64 ++++++++++++++++++++++------------------------
 fs/f2fs/data.c     | 53 +++++++++++++++++++++++---------------
 fs/verity/verify.c | 12 ++++-----
 4 files changed, 72 insertions(+), 65 deletions(-)

diff --git a/fs/ext4/readpage.c b/fs/ext4/readpage.c
index 3d21eae267fca..e604ea4e102b7 100644
--- a/fs/ext4/readpage.c
+++ b/fs/ext4/readpage.c
@@ -75,14 +75,10 @@ static void __read_end_io(struct bio *bio)
         bio_for_each_segment_all(bv, bio, iter_all) {
                 page = bv->bv_page;
 
-                /* PG_error was set if verity failed. */
-                if (bio->bi_status || PageError(page)) {
+                if (bio->bi_status)
                         ClearPageUptodate(page);
-                        /* will re-read again later */
-                        ClearPageError(page);
-                } else {
+                else
                         SetPageUptodate(page);
-                }
                 unlock_page(page);
         }
         if (bio->bi_private)
diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
index d315c2de136f2..2b7a5cc4ed662 100644
--- a/fs/f2fs/compress.c
+++ b/fs/f2fs/compress.c
@@ -1711,50 +1711,27 @@ static void f2fs_put_dic(struct decompress_io_ctx *dic, bool in_task)
         }
 }
 
-/*
- * Update and unlock the cluster's pagecache pages, and release the reference to
- * the decompress_io_ctx that was being held for I/O completion.
- */
-static void __f2fs_decompress_end_io(struct decompress_io_ctx *dic, bool failed,
-                                bool in_task)
+static void f2fs_verify_cluster(struct work_struct *work)
 {
+        struct decompress_io_ctx *dic =
+                container_of(work, struct decompress_io_ctx, verity_work);
         int i;
 
+        /* Verify, update, and unlock the decompressed pages. */
         for (i = 0; i < dic->cluster_size; i++) {
                 struct page *rpage = dic->rpages[i];
 
                 if (!rpage)
                         continue;
 
-                /* PG_error was set if verity failed. */
-                if (failed || PageError(rpage)) {
-                        ClearPageUptodate(rpage);
-                        /* will re-read again later */
-                        ClearPageError(rpage);
-                } else {
+                if (fsverity_verify_page(rpage))
                         SetPageUptodate(rpage);
-                }
+                else
+                        ClearPageUptodate(rpage);
                 unlock_page(rpage);
         }
 
-        f2fs_put_dic(dic, in_task);
-}
-
-static void f2fs_verify_cluster(struct work_struct *work)
-{
-        struct decompress_io_ctx *dic =
-                container_of(work, struct decompress_io_ctx, verity_work);
-        int i;
-
-        /* Verify the cluster's decompressed pages with fs-verity. */
-        for (i = 0; i < dic->cluster_size; i++) {
-                struct page *rpage = dic->rpages[i];
-
-                if (rpage && !fsverity_verify_page(rpage))
-                        SetPageError(rpage);
-        }
-
-        __f2fs_decompress_end_io(dic, false, true);
+        f2fs_put_dic(dic, true);
 }
 
 /*
@@ -1764,6 +1741,8 @@ static void f2fs_verify_cluster(struct work_struct *work)
 void f2fs_decompress_end_io(struct decompress_io_ctx *dic, bool failed,
                                 bool in_task)
 {
+        int i;
+
         if (!failed && dic->need_verity) {
                 /*
                  * Note that to avoid deadlocks, the verity work can't be done
@@ -1773,9 +1752,28 @@ void f2fs_decompress_end_io(struct decompress_io_ctx *dic, bool failed,
                  */
                 INIT_WORK(&dic->verity_work, f2fs_verify_cluster);
                 fsverity_enqueue_verify_work(&dic->verity_work);
-        } else {
-                __f2fs_decompress_end_io(dic, failed, in_task);
+                return;
+        }
+
+        /* Update and unlock the cluster's pagecache pages. */
+        for (i = 0; i < dic->cluster_size; i++) {
+                struct page *rpage = dic->rpages[i];
+
+                if (!rpage)
+                        continue;
+
+                if (failed)
+                        ClearPageUptodate(rpage);
+                else
+                        SetPageUptodate(rpage);
+                unlock_page(rpage);
         }
+
+        /*
+         * Release the reference to the decompress_io_ctx that was being held
+         * for I/O completion.
+         */
+        f2fs_put_dic(dic, in_task);
 }
 
 /*
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index a71e818cd67b4..1ae8da259d6c5 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -116,43 +116,56 @@ struct bio_post_read_ctx {
         struct f2fs_sb_info *sbi;
         struct work_struct work;
         unsigned int enabled_steps;
+        /*
+         * decompression_attempted keeps track of whether
+         * f2fs_end_read_compressed_page() has been called on the pages in the
+         * bio that belong to a compressed cluster yet.
+         */
+        bool decompression_attempted;
         block_t fs_blkaddr;
 };
 
+/*
+ * Update and unlock a bio's pages, and free the bio.
+ *
+ * This marks pages up-to-date only if there was no error in the bio (I/O error,
+ * decryption error, or verity error), as indicated by bio->bi_status.
+ *
+ * "Compressed pages" (pagecache pages backed by a compressed cluster on-disk)
+ * aren't marked up-to-date here, as decompression is done on a per-compression-
+ * cluster basis rather than a per-bio basis. Instead, we only must do two
+ * things for each compressed page here: call f2fs_end_read_compressed_page()
+ * with failed=true if an error occurred before it would have normally gotten
+ * called (i.e., I/O error or decryption error, but *not* verity error), and
+ * release the bio's reference to the decompress_io_ctx of the page's cluster.
+ */
 static void f2fs_finish_read_bio(struct bio *bio, bool in_task)
 {
         struct bio_vec *bv;
         struct bvec_iter_all iter_all;
+        struct bio_post_read_ctx *ctx = bio->bi_private;
 
-        /*
-         * Update and unlock the bio's pagecache pages, and put the
-         * decompression context for any compressed pages.
-         */
         bio_for_each_segment_all(bv, bio, iter_all) {
                 struct page *page = bv->bv_page;
 
                 if (f2fs_is_compressed_page(page)) {
-                        if (bio->bi_status)
+                        if (!ctx->decompression_attempted)
                                 f2fs_end_read_compressed_page(page, true, 0,
                                                         in_task);
                         f2fs_put_page_dic(page, in_task);
                         continue;
                 }
 
-                /* PG_error was set if verity failed. */
-                if (bio->bi_status || PageError(page)) {
+                if (bio->bi_status)
                         ClearPageUptodate(page);
-                        /* will re-read again later */
-                        ClearPageError(page);
-                } else {
+                else
                         SetPageUptodate(page);
-                }
                 dec_page_count(F2FS_P_SB(page), __read_io_type(page));
                 unlock_page(page);
         }
 
-        if (bio->bi_private)
-                mempool_free(bio->bi_private, bio_post_read_ctx_pool);
+        if (ctx)
+                mempool_free(ctx, bio_post_read_ctx_pool);
         bio_put(bio);
 }
 
@@ -185,8 +198,10 @@ static void f2fs_verify_bio(struct work_struct *work)
                         struct page *page = bv->bv_page;
 
                         if (!f2fs_is_compressed_page(page) &&
-                            !fsverity_verify_page(page))
-                                SetPageError(page);
+                            !fsverity_verify_page(page)) {
+                                bio->bi_status = BLK_STS_IOERR;
+                                break;
+                        }
                 }
         } else {
                 fsverity_verify_bio(bio);
@@ -245,6 +260,8 @@ static void f2fs_handle_step_decompress(struct bio_post_read_ctx *ctx,
                 blkaddr++;
         }
 
+        ctx->decompression_attempted = true;
+
         /*
          * Optimization: if all the bio's pages are compressed, then scheduling
          * the per-bio verity work is unnecessary, as verity will be fully
@@ -1062,6 +1079,7 @@ static struct bio *f2fs_grab_read_bio(struct inode *inode, block_t blkaddr,
                 ctx->sbi = sbi;
                 ctx->enabled_steps = post_read_steps;
                 ctx->fs_blkaddr = blkaddr;
+                ctx->decompression_attempted = false;
                 bio->bi_private = ctx;
         }
         iostat_alloc_and_bind_ctx(sbi, bio, ctx);
@@ -1089,7 +1107,6 @@ static int f2fs_submit_page_read(struct inode *inode, struct page *page,
                 bio_put(bio);
                 return -EFAULT;
         }
-        ClearPageError(page);
         inc_page_count(sbi, F2FS_RD_DATA);
         f2fs_update_iostat(sbi, NULL, FS_DATA_READ_IO, F2FS_BLKSIZE);
         __submit_bio(sbi, bio, DATA);
@@ -2141,7 +2158,6 @@ static int f2fs_read_single_page(struct inode *inode, struct page *page,
         inc_page_count(F2FS_I_SB(inode), F2FS_RD_DATA);
         f2fs_update_iostat(F2FS_I_SB(inode), NULL, FS_DATA_READ_IO,
                                                         F2FS_BLKSIZE);
-        ClearPageError(page);
         *last_block_in_bio = block_nr;
         goto out;
 out:
@@ -2289,7 +2305,6 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
 
                 inc_page_count(sbi, F2FS_RD_DATA);
                 f2fs_update_iostat(sbi, inode, FS_DATA_READ_IO, F2FS_BLKSIZE);
-                ClearPageError(page);
                 *last_block_in_bio = blkaddr;
         }
 
@@ -2306,7 +2321,6 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
         for (i = 0; i < cc->cluster_size; i++) {
                 if (cc->rpages[i]) {
                         ClearPageUptodate(cc->rpages[i]);
-                        ClearPageError(cc->rpages[i]);
                         unlock_page(cc->rpages[i]);
                 }
         }
@@ -2403,7 +2417,6 @@ static int f2fs_mpage_readpages(struct inode *inode,
 #ifdef CONFIG_F2FS_FS_COMPRESSION
 set_error_page:
 #endif
-                        SetPageError(page);
                         zero_user_segment(page, 0, PAGE_SIZE);
                         unlock_page(page);
                 }
diff --git a/fs/verity/verify.c b/fs/verity/verify.c
index bde8c9b7d25f6..961ba248021f9 100644
--- a/fs/verity/verify.c
+++ b/fs/verity/verify.c
@@ -200,9 +200,8 @@ EXPORT_SYMBOL_GPL(fsverity_verify_page);
  * @bio: the bio to verify
  *
  * Verify a set of pages that have just been read from a verity file. The pages
- * must be pagecache pages that are still locked and not yet uptodate. Pages
- * that fail verification are set to the Error state. Verification is skipped
- * for pages already in the Error state, e.g. due to fscrypt decryption failure.
+ * must be pagecache pages that are still locked and not yet uptodate. If a
+ * page fails verification, then bio->bi_status is set to an error status.
  *
  * This is a helper function for use by the ->readahead() method of filesystems
  * that issue bios to read data directly into the page cache. Filesystems that
@@ -244,9 +243,10 @@ void fsverity_verify_bio(struct bio *bio)
                 unsigned long level0_ra_pages =
                         min(max_ra_pages, params->level0_blocks - level0_index);
 
-                if (!PageError(page) &&
-                    !verify_page(inode, vi, req, page, level0_ra_pages))
-                        SetPageError(page);
+                if (!verify_page(inode, vi, req, page, level0_ra_pages)) {
+                        bio->bi_status = BLK_STS_IOERR;
+                        break;
+                }
         }
 
         fsverity_free_hash_request(params->hash_alg, req);

base-commit: f0c4d9fc9cc9462659728d168387191387e903cc
-- 
2.38.1