Date: Wed, 8 Jun 2022 02:42:21 +0100
From: Matthew Wilcox
To: Jan Kara
Cc: Jan Kara, tytso@mit.edu, Andreas Dilger, linux-ext4@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: Re: [PATCH 3/3] ext4: Use generic_quota_read()
References: <20220605143815.2330891-1-willy@infradead.org>
	<20220605143815.2330891-4-willy@infradead.org>
	<20220606083814.skjv34b2tjn7l7pi@quack3.lan>
In-Reply-To: <20220606083814.skjv34b2tjn7l7pi@quack3.lan>

On Mon, Jun 06, 2022 at 10:38:14AM +0200, Jan Kara wrote:
> On Sun 05-06-22 15:38:15, Matthew Wilcox (Oracle) wrote:
> > The comment about the page cache is rather stale; the buffer cache will
> > read into the page cache if the buffer isn't present, and the page cache
> > will not take any locks if the page is present.
> >
> > Signed-off-by: Matthew Wilcox (Oracle)
>
> This will not work for couple of reasons, see below.
> BTW, I don't think the comment about page cache was stale (but lacking
> details I admit ;). As far as I remember (and it was really many years
> ago - definitely pre-git era) the problem was (mainly on the write side)
> that before current state of the code we were using calls like
> vfs_read() / vfs_write() to get quota information and that was indeed
> prone to deadlocks.

Ah yes, vfs_write() might indeed be prone to deadlocks.  Particularly if
we're doing it under the dq_mutex and any memory allocation might have
recursed into reclaim ;-)

I actually found the commit in linux-fullhistory.  Changelog for context:

commit b72debd66a6ed
Author: Jan Kara
Date:   Mon Jan 3 04:12:24 2005 -0800

    [PATCH] Fix of quota deadlock on pagelock: quota core

    The four patches in this series fix deadlocks with quotas of pagelock
    (the problem was lock inversion on PageLock and transaction start -
    quota code needed to first start a transaction and then write the data
    which subsequently needed acquisition of PageLock while the standard
    ordering - PageLock first and transaction start later - was used e.g.
    by pdflush).  They implement a new way of quota access to disk: Every
    filesystem that would like to implement quotas now has to provide
    quota_read() and quota_write() functions.  These functions must obey
    quota lock ordering (in particular they should not take PageLock
    inside a transaction).

    The first patch implements the changes in the quota core, the other
    three patches implement needed functions in ext2, ext3 and reiserfs.
    The patch for reiserfs also fixes several other lock inversion
    problems (similar as ext3 had) and implements the journaled quota
    functionality (which comes almost for free after the locking
    fixes...).  The quota core patch makes quota support in other
    filesystems (except XFS which implements everything on its own ;))
    unfunctional (quotaon() will refuse to turn on quotas on them).  When
    the patches get reasonable wide testing and it will seem that no major
    changes will be needed I can make fixes also for the other filesystems
    (JFS, UDF, UFS).

    This patch:

    The patch implements the new way of quota io in the quota core.  Every
    filesystem wanting to support quotas has to provide functions
    quota_read() and quota_write() obeying quota locking rules.  As the
    writes and reads bypass the pagecache there is some ugly stuff
    ensuring that userspace can see all the data after quotaoff() (or
    Q_SYNC quotactl).  In future I plan to make quota files inaccessible
    from userspace (with the exception of quotacheck(8) which will take
    care about the cache flushing and such stuff itself) so that this
    synchronization stuff can be removed...

    The rewrite of the quota core.  Quota uses the filesystem read() and
    write() functions no more to avoid possible deadlocks on PageLock.
    From now on every filesystem supporting quotas must provide functions
    quota_read() and quota_write() which obey the quota locking rules
    (e.g. they cannot acquire the PageLock).

    Signed-off-by: Jan Kara
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
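To make the lock ordering concrete, the pagecache-bypassing read helper
that commit asks each filesystem for has roughly this shape (hand-waved
sketch, not the actual ext2/ext3 code; get_disk_block() is a made-up
stand-in for the filesystem's real block-mapping call):

static ssize_t sketch_quota_read(struct super_block *sb, int type,
				 char *data, size_t len, loff_t off)
{
	struct inode *inode = sb_dqopt(sb)->files[type];
	unsigned int blocksize = sb->s_blocksize;
	loff_t i_size = i_size_read(inode);
	size_t toread;

	if (off >= i_size)
		return 0;
	if (off + len > i_size)
		len = i_size - off;
	toread = len;

	while (toread > 0) {
		sector_t blk = off >> sb->s_blocksize_bits;
		unsigned int offset = off & (blocksize - 1);
		size_t tocopy = min_t(size_t, blocksize - offset, toread);
		sector_t phys = get_disk_block(inode, blk);	/* 0 => hole */
		struct buffer_head *bh;

		if (!phys) {
			/* Holes in the quota file read back as zeroes. */
			memset(data, 0, tocopy);
		} else {
			bh = sb_bread(sb, phys);
			if (!bh)
				return -EIO;
			memcpy(data, bh->b_data + offset, tocopy);
			brelse(bh);
		}
		off += tocopy;
		toread -= tocopy;
		data += tocopy;
	}
	return len;
}

The point being that nothing in that loop ever takes the page lock, so it
is safe to call with a transaction already open.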
> > @@ -6924,20 +6882,21 @@ static ssize_t ext4_quota_write(struct super_block *sb, int type,
> >  		return -EIO;
> >  	}
> >
> > -	do {
> > -		bh = ext4_bread(handle, inode, blk,
> > -				EXT4_GET_BLOCKS_CREATE |
> > -				EXT4_GET_BLOCKS_METADATA_NOFAIL);
> > -	} while (PTR_ERR(bh) == -ENOSPC &&
> > -		 ext4_should_retry_alloc(inode->i_sb, &retries));
> > -	if (IS_ERR(bh))
> > -		return PTR_ERR(bh);
> > -	if (!bh)
> > +	folio = read_mapping_folio(inode->i_mapping, off / PAGE_SIZE, NULL);
> > +	if (IS_ERR(folio))
> > +		return PTR_ERR(folio);
> > +	head = folio_buffers(folio);
> > +	if (!head)
> > +		head = alloc_page_buffers(&folio->page, sb->s_blocksize, false);
> > +	if (!head)
> >  		goto out;
> > +	bh = head;
> > +	while ((bh_offset(bh) + sb->s_blocksize) <= (off % PAGE_SIZE))
> > +		bh = bh->b_this_page;
>
> We miss proper handling of blocks that are currently beyond i_size
> (we are extending the quota file), plus we also miss any mapping of
> buffers to appropriate disk blocks here...
>
> It could be all fixed by replicating what we do in ext4_write_begin() but
> I'm not quite convinced using inode's page cache is really worth it...

Ah, yes, write_begin.  Of course that's what I should have used.

I'm looking at this from the point of view of removing buffer_heads
where possible.  Of course, it's not possible for ext4 while the journal
relies on buffer_heads, but if we can steer filesystems away from using
sb_bread() (or equivalents), I think that's a good thing.
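For the read side, something like the sketch below is the shape I'd like
filesystems to end up with (untested illustration only; not claiming it
is exactly the generic_quota_read() from this series).  It's the same
copy loop as the classic ext2-style helper, but fed from the page cache
via read_mapping_folio() instead of a filesystem-specific bread:

static ssize_t pagecache_quota_read(struct super_block *sb, int type,
				    char *data, size_t len, loff_t off)
{
	struct inode *inode = sb_dqopt(sb)->files[type];
	loff_t i_size = i_size_read(inode);
	size_t toread;

	if (off >= i_size)
		return 0;
	if (off + len > i_size)
		len = i_size - off;
	toread = len;

	while (toread > 0) {
		size_t offset = offset_in_page(off);
		size_t tocopy = min_t(size_t, PAGE_SIZE - offset, toread);
		struct folio *folio;
		void *kaddr;

		/*
		 * read_mapping_folio() returns an uptodate, unlocked folio,
		 * reading it from disk if it isn't already cached.  For
		 * simplicity this assumes order-0 folios.
		 */
		folio = read_mapping_folio(inode->i_mapping, off >> PAGE_SHIFT,
					   NULL);
		if (IS_ERR(folio))
			return PTR_ERR(folio);

		kaddr = kmap_local_folio(folio, offset);
		memcpy(data, kaddr, tocopy);
		kunmap_local(kaddr);
		folio_put(folio);

		off += tocopy;
		toread -= tocopy;
		data += tocopy;
	}
	return len;
}

The write side is the messy part, as you point out: it has to extend the
file and map new blocks, which is exactly write_begin territory.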