Date: Mon, 1 May 2023 16:46:11 +0100
From: Matthew Wilcox
To: Luis Chamberlain
Cc: Christoph Hellwig, Pankaj Raghav, Daniel Gomez, Jens Axboe,
	Miklos Szeredi, "Darrick J. Wong", Andrew Morton, David Howells,
	linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net, cluster-devel@redhat.com,
	linux-xfs@vger.kernel.org, linux-nfs@vger.kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 17/17] fs: add CONFIG_BUFFER_HEAD
References: <20230424054926.26927-1-hch@lst.de> <20230424054926.26927-18-hch@lst.de>

On Sun, Apr 30, 2023 at 08:14:03PM -0700, Luis Chamberlain wrote:
> On Sat, Apr 29, 2023 at 02:20:17AM +0100, Matthew Wilcox wrote:
> > > [ 11.322212] Call Trace:
> > > [ 11.323224]
> > > [ 11.324146]  iomap_readpage_iter+0x96/0x300
> > > [ 11.325694]  iomap_readahead+0x174/0x2d0
> > > [ 11.327129]  read_pages+0x69/0x1f0
> > > [ 11.329751]  page_cache_ra_unbounded+0x187/0x1d0
> >
> > ... that shouldn't be possible.
> > read_pages() allocates pages, puts them in the page cache and tells
> > the filesystem to fill them in.
> >
> > In your patches, did you call mapping_set_large_folios() anywhere?
>
> No, but the only place to add that would be in the block cache. Adding
> that alone to the block cache doesn't fix the issue. The patch below,
> however, does get us by.

That's "working around the error", not fixing it ... probably the same
root cause as your other errors; at least, I'm not diving into them
until the obvious one is fixed.

> From my reading it doesn't seem like readahead_folio() should always
> return non-NULL, and also I couldn't easily verify the math is right.

readahead_folio() always returns non-NULL. That's guaranteed by how
page_cache_ra_unbounded() and page_cache_ra_order() work. They allocate
folios until they can't (already-present folio, ENOMEM, EOF, max batch
size) and then call the filesystem to make those folios uptodate,
telling it how many folios were put in the page cache and where they
start.

Hm. The fact that it's coming from page_cache_ra_unbounded() makes me
wonder if you updated this line:

	folio = filemap_alloc_folio(gfp_mask, 0);

without updating this line:

	ractl->_nr_pages++;

That is the number of pages, not the number of folios, so it needs to be

	ractl->_nr_pages += 1 << order;

Various other parts of page_cache_ra_unbounded() need to be examined
carefully for order-0 assumptions; it has never been used with large
folios before. All the large folio work has concentrated on
page_cache_ra_order().
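
To make that concrete, here is roughly the shape that part of the loop
would take with larger folios. This is an untested sketch only, and the
'order' variable is a stand-in for however the patches choose the folio
order; it does not exist in the current code:

	folio = filemap_alloc_folio(gfp_mask, order);	/* currently hard-coded 0 */
	if (!folio)
		break;
	if (filemap_add_folio(mapping, folio, index + i, gfp_mask) < 0) {
		folio_put(folio);
		/* existing error handling elided */
	}
	/*
	 * _nr_pages counts pages, not folios: an order-N folio occupies
	 * 1 << N slots in the page cache, so the accounting must advance
	 * by the folio's size, not by one.
	 */
	ractl->_nr_pages += 1 << order;		/* was: ractl->_nr_pages++; */

The loop index is another order-0 assumption: it currently advances one
page per iteration, so with larger folios it would presumably need to
step by that same 1 << order.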