Date: Wed, 12 May 2021 15:40:21 +0100
From: Matthew Wilcox
To: Jan Kara
Cc:
linux-fsdevel@vger.kernel.org, Christoph Hellwig, Dave Chinner,
	ceph-devel@vger.kernel.org, Chao Yu, Damien Le Moal,
	"Darrick J. Wong", Jaegeuk Kim, Jeff Layton, Johannes Thumshirn,
	linux-cifs@vger.kernel.org, linux-ext4@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net, linux-mm@kvack.org,
	linux-xfs@vger.kernel.org, Miklos Szeredi, Steve French, Ted Tso
Subject: Re: [PATCH 03/11] mm: Protect operations adding pages to page cache with invalidate_lock
References: <20210512101639.22278-1-jack@suse.cz> <20210512134631.4053-3-jack@suse.cz>
In-Reply-To: <20210512134631.4053-3-jack@suse.cz>
X-Mailing-List: linux-ext4@vger.kernel.org

On Wed, May 12, 2021 at 03:46:11PM +0200, Jan Kara wrote:
> Currently, serializing operations such as page fault, read, or readahead
> against hole punching is rather difficult. The basic race scheme is
> like:
>
> fallocate(FALLOC_FL_PUNCH_HOLE)                  read / fault / ..
>   truncate_inode_pages_range()
>                                                  <create pages in page
>                                                   cache here>
>   <punch hole>
>                                                  <read data from freed
>                                                   blocks>
>
> Now the problem is that in this way read / page fault / readahead can
> instantiate pages in the page cache with potentially stale data (if the
> blocks get quickly reused). Avoiding this race is not simple - page
> locks do not work because we want to make sure there are *no* pages in
> the given range. inode->i_rwsem does not work because page faults happen
> under mmap_sem, which ranks below inode->i_rwsem. Also, using it for
> reads makes performance for mixed read-write workloads suffer.
>
> So create a new rw_semaphore in the address_space - invalidate_lock -
> that protects adding of pages to the page cache for page faults / reads /
> readahead.

Remind me (or, rather, add to the documentation) why we have to hold the
invalidate_lock during the call to readpage / readahead, and we don't
just hold it around the call to add_to_page_cache /
add_to_page_cache_locked / add_to_page_cache_lru?
I appreciate that ->readpages is still going to suck, but we're down to
just three implementations of ->readpages now (9p, cifs & nfs).

Also, could I trouble you to run the comments through 'fmt' (or
equivalent)? It's easier to read if you're not kissing right up on 80
columns.

> +++ b/fs/inode.c
> @@ -190,6 +190,9 @@ int inode_init_always(struct super_block *sb, struct inode *inode)
>  	mapping_set_gfp_mask(mapping, GFP_HIGHUSER_MOVABLE);
>  	mapping->private_data = NULL;
>  	mapping->writeback_index = 0;
> +	init_rwsem(&mapping->invalidate_lock);
> +	lockdep_set_class(&mapping->invalidate_lock,
> +			  &sb->s_type->invalidate_lock_key);

Why not:

	__init_rwsem(&mapping->invalidate_lock, "mapping.invalidate_lock",
		     &sb->s_type->invalidate_lock_key);