From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
	linux-fsdevel@vger.kernel.org,
	gfs2@lists.linux.dev,
	linux-nilfs@vger.kernel.org,
	linux-ntfs-dev@lists.sourceforge.net,
	ntfs3@lists.linux.dev,
	ocfs2-devel@lists.linux.dev,
	reiserfs-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	Pankaj Raghav
Subject: [PATCH 18/26] ntfs: Convert ntfs_prepare_pages_for_non_resident_write() to folios
Date: Tue, 19 Sep 2023 05:51:27 +0100
Message-Id: <20230919045135.3635437-19-willy@infradead.org>
In-Reply-To: <20230919045135.3635437-1-willy@infradead.org>
References: <20230919045135.3635437-1-willy@infradead.org>

Convert each element of the pages array to a folio before using it.
This in no way renders the function large-folio safe, but it does
remove a lot of hidden calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
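Not part of the patch itself: a minimal sketch of the pattern the
conversion applies, for readers newer to the folio work. Both helper
functions below are made-up names, purely illustrative.

#include <linux/highmem.h>
#include <linux/page-flags.h>

/*
 * Page-flag helpers such as PageUptodate() resolve the head page
 * internally, so every call made on a potential tail page pays for a
 * compound_head() lookup.
 */
static void zero_if_stale_page(struct page *page, size_t from, size_t len)
{
	if (!PageUptodate(page))	/* hidden compound_head() */
		zero_user(page, from, len);
}

/*
 * Converting once with page_folio() pays that cost a single time; the
 * folio_* equivalents take the folio directly and need no further lookup.
 */
static void zero_if_stale_folio(struct page *page, size_t from, size_t len)
{
	struct folio *folio = page_folio(page);	/* one compound_head() */

	if (!folio_test_uptodate(folio))
		folio_zero_range(folio, from, len);
}

This is the same transformation the diff below makes at each
PageUptodate(), zero_user() and kmap call site.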
 fs/ntfs/file.c | 89 +++++++++++++++++++++++---------------------------
 1 file changed, 41 insertions(+), 48 deletions(-)

diff --git a/fs/ntfs/file.c b/fs/ntfs/file.c
index cbc545999cfe..099141d20db6 100644
--- a/fs/ntfs/file.c
+++ b/fs/ntfs/file.c
@@ -567,7 +567,7 @@ static int ntfs_prepare_pages_for_non_resident_write(struct page **pages,
 	LCN lcn;
 	s64 bh_pos, vcn_len, end, initialized_size;
 	sector_t lcn_block;
-	struct page *page;
+	struct folio *folio;
 	struct inode *vi;
 	ntfs_inode *ni, *base_ni = NULL;
 	ntfs_volume *vol;
@@ -601,20 +601,6 @@ static int ntfs_prepare_pages_for_non_resident_write(struct page **pages,
 			(long long)pos, bytes);
 	blocksize = vol->sb->s_blocksize;
 	blocksize_bits = vol->sb->s_blocksize_bits;
-	u = 0;
-	do {
-		page = pages[u];
-		BUG_ON(!page);
-		/*
-		 * create_empty_buffers() will create uptodate/dirty buffers if
-		 * the page is uptodate/dirty.
-		 */
-		if (!page_has_buffers(page)) {
-			create_empty_buffers(page, blocksize, 0);
-			if (unlikely(!page_has_buffers(page)))
-				return -ENOMEM;
-		}
-	} while (++u < nr_pages);
 	rl_write_locked = false;
 	rl = NULL;
 	err = 0;
@@ -626,14 +612,21 @@ static int ntfs_prepare_pages_for_non_resident_write(struct page **pages,
 	end = pos + bytes;
 	cend = (end + vol->cluster_size - 1) >> vol->cluster_size_bits;
 	/*
-	 * Loop over each page and for each page over each buffer.  Use goto to
+	 * Loop over each buffer in each folio.  Use goto to
 	 * reduce indentation.
 	 */
 	u = 0;
-do_next_page:
-	page = pages[u];
-	bh_pos = (s64)page->index << PAGE_SHIFT;
-	bh = head = page_buffers(page);
+do_next_folio:
+	folio = page_folio(pages[u]);
+	bh_pos = folio_pos(folio);
+	head = folio_buffers(folio);
+	if (!head)
+		/*
+		 * create_empty_buffers() will create uptodate/dirty
+		 * buffers if the folio is uptodate/dirty.
+		 */
+		head = folio_create_empty_buffers(folio, blocksize, 0);
+	bh = head;
 	do {
 		VCN cdelta;
 		s64 bh_end;
@@ -653,15 +646,15 @@ static int ntfs_prepare_pages_for_non_resident_write(struct page **pages,
 			if (buffer_uptodate(bh))
 				continue;
 			/*
-			 * The buffer is not uptodate.  If the page is uptodate
+			 * The buffer is not uptodate.  If the folio is uptodate
 			 * set the buffer uptodate and otherwise ignore it.
 			 */
-			if (PageUptodate(page)) {
+			if (folio_test_uptodate(folio)) {
 				set_buffer_uptodate(bh);
 				continue;
 			}
 			/*
-			 * Neither the page nor the buffer are uptodate.  If
+			 * Neither the folio nor the buffer are uptodate.  If
 			 * the buffer is only partially being written to, we
 			 * need to read it in before the write, i.e. now.
 			 */
@@ -679,7 +672,7 @@ static int ntfs_prepare_pages_for_non_resident_write(struct page **pages,
 					ntfs_submit_bh_for_read(bh);
 					*wait_bh++ = bh;
 				} else {
-					zero_user(page, bh_offset(bh),
+					folio_zero_range(folio, bh_offset(bh),
 							blocksize);
 					set_buffer_uptodate(bh);
 				}
@@ -706,7 +699,7 @@ static int ntfs_prepare_pages_for_non_resident_write(struct page **pages,
 					(bh_cofs >> blocksize_bits);
 			set_buffer_mapped(bh);
 			/*
-			 * If the page is uptodate so is the buffer.  If the
+			 * If the folio is uptodate so is the buffer.  If the
 			 * buffer is fully outside the write, we ignore it if
 			 * it was already allocated and we mark it dirty so it
 			 * gets written out if we allocated it.  On the other
@@ -714,7 +707,7 @@ static int ntfs_prepare_pages_for_non_resident_write(struct page **pages,
 			 * marking it dirty we set buffer_new so we can do
 			 * error recovery.
 			 */
-			if (PageUptodate(page)) {
+			if (folio_test_uptodate(folio)) {
 				if (!buffer_uptodate(bh))
 					set_buffer_uptodate(bh);
 				if (unlikely(was_hole)) {
@@ -754,7 +747,8 @@ static int ntfs_prepare_pages_for_non_resident_write(struct page **pages,
 					ntfs_submit_bh_for_read(bh);
 					*wait_bh++ = bh;
 				} else {
-					zero_user(page, bh_offset(bh),
+					folio_zero_range(folio,
+							bh_offset(bh),
 							blocksize);
 					set_buffer_uptodate(bh);
 				}
@@ -773,7 +767,7 @@ static int ntfs_prepare_pages_for_non_resident_write(struct page **pages,
 			 */
 			if (bh_end <= pos || bh_pos >= end) {
 				if (!buffer_uptodate(bh)) {
-					zero_user(page, bh_offset(bh),
+					folio_zero_range(folio, bh_offset(bh),
 							blocksize);
 					set_buffer_uptodate(bh);
 				}
@@ -786,7 +780,7 @@ static int ntfs_prepare_pages_for_non_resident_write(struct page **pages,
 				u8 *kaddr;
 				unsigned pofs;
 
-				kaddr = kmap_atomic(page);
+				kaddr = kmap_local_folio(folio, 0);
 				if (bh_pos < pos) {
 					pofs = bh_pos & ~PAGE_MASK;
 					memset(kaddr + pofs, 0, pos - bh_pos);
@@ -795,8 +789,8 @@ static int ntfs_prepare_pages_for_non_resident_write(struct page **pages,
 					pofs = end & ~PAGE_MASK;
 					memset(kaddr + pofs, 0, bh_end - end);
 				}
-				kunmap_atomic(kaddr);
-				flush_dcache_page(page);
+				kunmap_local(kaddr);
+				flush_dcache_folio(folio);
 			}
 			continue;
 		}
@@ -809,11 +803,12 @@ static int ntfs_prepare_pages_for_non_resident_write(struct page **pages,
 			initialized_size = ni->allocated_size;
 			read_unlock_irqrestore(&ni->size_lock, flags);
 			if (bh_pos > initialized_size) {
-				if (PageUptodate(page)) {
+				if (folio_test_uptodate(folio)) {
 					if (!buffer_uptodate(bh))
 						set_buffer_uptodate(bh);
 				} else if (!buffer_uptodate(bh)) {
-					zero_user(page, bh_offset(bh), blocksize);
+					folio_zero_range(folio, bh_offset(bh),
+							blocksize);
 					set_buffer_uptodate(bh);
 				}
 				continue;
@@ -927,17 +922,17 @@ static int ntfs_prepare_pages_for_non_resident_write(struct page **pages,
 				bh->b_blocknr = -1;
 				/*
 				 * If the buffer is uptodate we skip it.  If it
-				 * is not but the page is uptodate, we can set
-				 * the buffer uptodate.  If the page is not
+				 * is not but the folio is uptodate, we can set
+				 * the buffer uptodate.  If the folio is not
 				 * uptodate, we can clear the buffer and set it
 				 * uptodate.  Whether this is worthwhile is
 				 * debatable and this could be removed.
 				 */
-				if (PageUptodate(page)) {
+				if (folio_test_uptodate(folio)) {
 					if (!buffer_uptodate(bh))
 						set_buffer_uptodate(bh);
 				} else if (!buffer_uptodate(bh)) {
-					zero_user(page, bh_offset(bh),
+					folio_zero_range(folio, bh_offset(bh),
 							blocksize);
 					set_buffer_uptodate(bh);
 				}
@@ -1167,7 +1162,7 @@ static int ntfs_prepare_pages_for_non_resident_write(struct page **pages,
 	} while (bh_pos += blocksize, (bh = bh->b_this_page) != head);
 	/* If there are no errors, do the next page. */
 	if (likely(!err && ++u < nr_pages))
-		goto do_next_page;
+		goto do_next_folio;
 	/* If there are no errors, release the runlist lock if we took it. */
 	if (likely(!err)) {
 		if (unlikely(rl_write_locked)) {
@@ -1185,9 +1180,8 @@ static int ntfs_prepare_pages_for_non_resident_write(struct page **pages,
 		bh = *--wait_bh;
 		wait_on_buffer(bh);
 		if (likely(buffer_uptodate(bh))) {
-			page = bh->b_page;
-			bh_pos = ((s64)page->index << PAGE_SHIFT) +
-					bh_offset(bh);
+			folio = bh->b_folio;
+			bh_pos = folio_pos(folio) + bh_offset(bh);
 			/*
 			 * If the buffer overflows the initialized size, need
 			 * to zero the overflowing region.
@@ -1197,7 +1191,7 @@ static int ntfs_prepare_pages_for_non_resident_write(struct page **pages,
 
 				if (likely(bh_pos < initialized_size))
 					ofs = initialized_size - bh_pos;
-				zero_user_segment(page, bh_offset(bh) + ofs,
+				folio_zero_segment(folio, bh_offset(bh) + ofs,
 						blocksize);
 			}
 		} else /* if (unlikely(!buffer_uptodate(bh))) */
@@ -1324,21 +1318,20 @@ static int ntfs_prepare_pages_for_non_resident_write(struct page **pages,
 		u = 0;
 		end = bh_cpos << vol->cluster_size_bits;
 		do {
-			page = pages[u];
-			bh = head = page_buffers(page);
+			folio = page_folio(pages[u]);
+			bh = head = folio_buffers(folio);
 			do {
 				if (u == nr_pages &&
-						((s64)page->index << PAGE_SHIFT) +
-						bh_offset(bh) >= end)
+						folio_pos(folio) + bh_offset(bh) >= end)
 					break;
 				if (!buffer_new(bh))
 					continue;
 				clear_buffer_new(bh);
 				if (!buffer_uptodate(bh)) {
-					if (PageUptodate(page))
+					if (folio_test_uptodate(folio))
 						set_buffer_uptodate(bh);
 					else {
-						zero_user(page, bh_offset(bh),
+						folio_zero_range(folio, bh_offset(bh),
 							blocksize);
 						set_buffer_uptodate(bh);
 					}
-- 
2.40.1