Date: Fri, 23 Dec 2022 06:56:11 -0800
From: Christoph Hellwig
To: Andreas Gruenbacher
Cc: Christoph Hellwig, "Darrick J. Wong", Alexander Viro, Matthew Wilcox,
    linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-ext4@vger.kernel.org, cluster-devel@redhat.com
Subject: Re: [RFC v3 1/7] fs: Add folio_may_straddle_isize helper
References: <20221216150626.670312-1-agruenba@redhat.com> <20221216150626.670312-2-agruenba@redhat.com>
In-Reply-To: <20221216150626.670312-2-agruenba@redhat.com>

On Fri, Dec 16, 2022 at 04:06:20PM +0100, Andreas Gruenbacher wrote:
> Add a folio_may_straddle_isize() helper as a replacement for
> pagecache_isize_extended() when we have a locked folio.

I find the naming very confusing.  Is there a good reason not to follow
the naming of pagecache_isize_extended and call it folio_isize_extended?

> Use the new helper in generic_write_end(), iomap_write_end(),
> ext4_write_end(), and ext4_journalled_write_end().

Please split this into a patch per caller in addition to the one adding
the helper, and write commit logs explaining the rationale for the
helper.  The obvious reasons I can guess at are that the new helper
avoids a page cache radix tree lookup and a lock page/folio cycle, but
I'd rather hear that from the horse's mouth in the commit log.
> --- a/fs/buffer.c
> +++ b/fs/buffer.c
> @@ -2164,16 +2164,15 @@ int generic_write_end(struct file *file, struct address_space *mapping,
>  	 * But it's important to update i_size while still holding page lock:
>  	 * page writeout could otherwise come in and zero beyond i_size.
>  	 */
> -	if (pos + copied > inode->i_size) {
> +	if (pos + copied > old_size) {

This is an unrelated and undocumented (but useful) change.  Please
split it out as well.

> + * This function must be called while we still hold i_rwsem - this not only
> + * makes sure i_size is stable but also that userspace cannot observe the new
> + * i_size value before we are prepared to handle mmap writes there.

Please add a lockdep_assert_held_write to enforce that.

> +void folio_may_straddle_isize(struct inode *inode, struct folio *folio,
> +			      loff_t old_size, loff_t start)
> +{
> +	unsigned int blocksize = i_blocksize(inode);
> +
> +	if (round_up(old_size, blocksize) >= round_down(start, blocksize))
> +		return;
> +
> +	/*
> +	 * See clear_page_dirty_for_io() for details why folio_set_dirty()
> +	 * is needed.
> +	 */
> +	if (folio_mkclean(folio))
> +		folio_set_dirty(folio);

Should pagecache_isize_extended be rewritten to use this helper, i.e.
turn this into a factoring out of a helper?

> +EXPORT_SYMBOL(folio_may_straddle_isize);

Please make this an EXPORT_SYMBOL_GPL just like folio_mkclean.