Received: by 2002:a05:6a10:206:0:0:0:0 with SMTP id 6csp1728237pxj; Wed, 19 May 2021 12:27:59 -0700 (PDT) X-Google-Smtp-Source: ABdhPJzZP4n5Ji9nEKLK9qzhDZSNvRu3Q86rLTCyHB1XXeBKJQUUZhog01mop0dcse3FYfAeKrTu X-Received: by 2002:a50:fe8e:: with SMTP id d14mr644748edt.97.1621452479391; Wed, 19 May 2021 12:27:59 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1621452479; cv=none; d=google.com; s=arc-20160816; b=G10xB9ttWDd9RIJjWcCTZQT+4fFrceq247wvfgxWsNDKkPgkQAMQB2kq4rG4gUA/mb /+QDcfe3bVWd8JTYTWgFLKTNUXnnT9AUeoM/65X/LdjU9TWsdUARZKAtJz98fJG/1hxQ 3w/LGKcpNA14w5rNV42JWOQNDyIcqWEnb04U5cuoBF+QBpJQlWqrI/m0R6PIeqEZ9ARq GSCV9WCn5LGJ3P0TFxPURRB9AKUioNw9e+8Dkh4mjaBFAwk6NrY0uOTMQxBKgquNN++k pYGJj3HsqPjfho7ABZTv47kVdcJSuO3L0EiVbfv9S9HNOaXbBCTqhJPhRy9e/AE5BBmR 9Clg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:user-agent:in-reply-to:content-disposition :mime-version:references:message-id:subject:cc:to:from:date; bh=5RwXlNTFTROAhTUuGmyqEqzhTd2dRi4xl3I2fjWbu/U=; b=bA6xDGTTwwFVc1UcpcjHxt0xu9/fksady5Vxhm7RwDRDgR3lJcPpst8QfUamfYZ+Fk 1smze3lXsU2HuaDmkWSTbpA2IxmCiNOu2iK5/j/32gRMnfivGR9XiWfThUD5UzX2TiXM 99Sv3CUDzd/oZk8P251SUpZ4VxuVB50F+3KnOu8H2prD8vk85QeIP73fkbxOajkTYcx4 ghLfCoygQw7T0t965nvUPP9VmzzcI5MYUHmto2buhCY19CbF/2l4wv6ux8WWTmZsgP6q dGUTO7UqmSUDWbD7Daq13o7HJubr8iSbgt0XiucYwM057cuWVyXr15t61F9oDq63m0c2 EgtQ== ARC-Authentication-Results: i=1; mx.google.com; spf=pass (google.com: domain of linux-ext4-owner@vger.kernel.org designates 23.128.96.18 as permitted sender) smtp.mailfrom=linux-ext4-owner@vger.kernel.org Return-Path: Received: from vger.kernel.org (vger.kernel.org. [23.128.96.18]) by mx.google.com with ESMTP id nc10si283764ejc.734.2021.05.19.12.27.35; Wed, 19 May 2021 12:27:59 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-ext4-owner@vger.kernel.org designates 23.128.96.18 as permitted sender) client-ip=23.128.96.18; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-ext4-owner@vger.kernel.org designates 23.128.96.18 as permitted sender) smtp.mailfrom=linux-ext4-owner@vger.kernel.org Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240369AbhESK6h (ORCPT + 99 others); Wed, 19 May 2021 06:58:37 -0400 Received: from mx2.suse.de ([195.135.220.15]:60810 "EHLO mx2.suse.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229720AbhESK6g (ORCPT ); Wed, 19 May 2021 06:58:36 -0400 X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.221.27]) by mx2.suse.de (Postfix) with ESMTP id B6396AC4B; Wed, 19 May 2021 10:57:15 +0000 (UTC) Received: by quack2.suse.cz (Postfix, from userid 1000) id 086AA1F2C9C; Wed, 19 May 2021 12:57:14 +0200 (CEST) Date: Wed, 19 May 2021 12:57:14 +0200 From: Jan Kara To: Dave Chinner Cc: "Darrick J. Wong" , Jan Kara , linux-fsdevel@vger.kernel.org, Christoph Hellwig , ceph-devel@vger.kernel.org, Chao Yu , Damien Le Moal , "Darrick J. Wong" , Jaegeuk Kim , Jeff Layton , Johannes Thumshirn , linux-cifs@vger.kernel.org, linux-ext4@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net, linux-mm@kvack.org, linux-xfs@vger.kernel.org, Miklos Szeredi , Steve French , Ted Tso , Matthew Wilcox Subject: Re: [PATCH 03/11] mm: Protect operations adding pages to page cache with invalidate_lock Message-ID: <20210519105713.GA26250@quack2.suse.cz> References: <20210512101639.22278-1-jack@suse.cz> <20210512134631.4053-3-jack@suse.cz> <20210512152345.GE8606@magnolia> <20210513174459.GH2734@quack2.suse.cz> <20210513185252.GB9675@magnolia> <20210513231945.GD2893@dread.disaster.area> <20210514161730.GL9675@magnolia> <20210518223637.GJ2893@dread.disaster.area> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20210518223637.GJ2893@dread.disaster.area> User-Agent: Mutt/1.10.1 (2018-07-13) Precedence: bulk List-ID: X-Mailing-List: linux-ext4@vger.kernel.org On Wed 19-05-21 08:36:37, Dave Chinner wrote: > On Fri, May 14, 2021 at 09:17:30AM -0700, Darrick J. Wong wrote: > > On Fri, May 14, 2021 at 09:19:45AM +1000, Dave Chinner wrote: > > > On Thu, May 13, 2021 at 11:52:52AM -0700, Darrick J. Wong wrote: > > > > On Thu, May 13, 2021 at 07:44:59PM +0200, Jan Kara wrote: > > > > > On Wed 12-05-21 08:23:45, Darrick J. Wong wrote: > > > > > > On Wed, May 12, 2021 at 03:46:11PM +0200, Jan Kara wrote: > > > > > > > +->fallocate implementation must be really careful to maintain page cache > > > > > > > +consistency when punching holes or performing other operations that invalidate > > > > > > > +page cache contents. Usually the filesystem needs to call > > > > > > > +truncate_inode_pages_range() to invalidate relevant range of the page cache. > > > > > > > +However the filesystem usually also needs to update its internal (and on disk) > > > > > > > +view of file offset -> disk block mapping. Until this update is finished, the > > > > > > > +filesystem needs to block page faults and reads from reloading now-stale page > > > > > > > +cache contents from the disk. VFS provides mapping->invalidate_lock for this > > > > > > > +and acquires it in shared mode in paths loading pages from disk > > > > > > > +(filemap_fault(), filemap_read(), readahead paths). The filesystem is > > > > > > > +responsible for taking this lock in its fallocate implementation and generally > > > > > > > +whenever the page cache contents needs to be invalidated because a block is > > > > > > > +moving from under a page. > > > > > > > + > > > > > > > +->copy_file_range and ->remap_file_range implementations need to serialize > > > > > > > +against modifications of file data while the operation is running. For blocking > > > > > > > +changes through write(2) and similar operations inode->i_rwsem can be used. For > > > > > > > +blocking changes through memory mapping, the filesystem can use > > > > > > > +mapping->invalidate_lock provided it also acquires it in its ->page_mkwrite > > > > > > > +implementation. > > > > > > > > > > > > Question: What is the locking order when acquiring the invalidate_lock > > > > > > of two different files? Is it the same as i_rwsem (increasing order of > > > > > > the struct inode pointer) or is it the same as the XFS MMAPLOCK that is > > > > > > being hoisted here (increasing order of i_ino)? > > > > > > > > > > > > The reason I ask is that remap_file_range has to do that, but I don't > > > > > > see any conversions for the xfs_lock_two_inodes(..., MMAPLOCK_EXCL) > > > > > > calls in xfs_ilock2_io_mmap in this series. > > > > > > > > > > Good question. Technically, I don't think there's real need to establish a > > > > > single ordering because locks among different filesystems are never going > > > > > to be acquired together (effectively each lock type is local per sb and we > > > > > are free to define an ordering for each lock type differently). But to > > > > > maintain some sanity I guess having the same locking order for doublelock > > > > > of i_rwsem and invalidate_lock makes sense. Is there a reason why XFS uses > > > > > by-ino ordering? So that we don't have to consider two different orders in > > > > > xfs_lock_two_inodes()... > > > > > > > > I imagine Dave will chime in on this, but I suspect the reason is > > > > hysterical raisins^Wreasons. > > > > > > It's the locking rules that XFS has used pretty much forever. > > > Locking by inode number always guarantees the same locking order of > > > two inodes in the same filesystem, regardless of the specific > > > in-memory instances of the two inodes. > > > > > > e.g. if we lock based on the inode structure address, in one > > > instancex, we could get A -> B, then B gets recycled and > > > reallocated, then we get B -> A as the locking order for the same > > > two inodes. > > > > > > That, IMNSHO, is utterly crazy because with non-deterministic inode > > > lock ordered like this you can't make consistent locking rules for > > > locking the physical inode cluster buffers underlying the inodes in > > > the situation where they also need to be locked. > > > > That's protected by the ILOCK, correct? > > > > > We've been down this path before more than a decade ago when the > > > powers that be decreed that inode locking order is to be "by > > > structure address" rather than inode number, because "inode number > > > is not unique across multiple superblocks". > > > > > > I'm not sure that there is anywhere that locks multiple inodes > > > across different superblocks, but here we are again.... > > > > Hm. Are there situations where one would want to lock multiple > > /mappings/ across different superblocks? The remapping code doesn't > > allow cross-super operations, so ... pipes and splice, maybe? I don't > > remember that code well enough to say for sure. > > Hmmmm. Doing read IO into a buffer that is mmap()d from another > file, and we take a page fault on it inside the read IO path? We're > copying from a page in one mapping and taking a fault in another > mapping and hence taking the invalidate_lock to populate the page > cache for the second mapping... > > I haven't looked closely enough at where the invalidate_lock is held > in the read path to determine if this is an issue, but if it is then > it is also a potential deadlock scenario... I was careful enough to avoid this problem - we first bring pages into pages cache (under invalidate_lock), then drop invalidate lock, just keep page refs, and copy page cache content into the buffer (which may grab invalidate_lock from another mapping as you say). Honza -- Jan Kara SUSE Labs, CR