Date: Mon, 1 Mar 2021 09:38:46 +1100
From: Dave Chinner
To: Dan Williams
Cc: "Darrick J. Wong", ruansy.fnst@fujitsu.com, linux-kernel@vger.kernel.org,
	linux-xfs@vger.kernel.org, linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org, darrick.wong@oracle.com,
	willy@infradead.org, jack@suse.cz, viro@zeniv.linux.org.uk,
	linux-btrfs@vger.kernel.org, ocfs2-devel@oss.oracle.com, hch@lst.de,
	rgoldwyn@suse.de, y-goto@fujitsu.com, qi.fuli@fujitsu.com,
	fnstml-iaas@cn.fujitsu.com
Wong" , "ruansy.fnst@fujitsu.com" , "linux-kernel@vger.kernel.org" , "linux-xfs@vger.kernel.org" , "linux-nvdimm@lists.01.org" , "linux-fsdevel@vger.kernel.org" , "darrick.wong@oracle.com" , "willy@infradead.org" , "jack@suse.cz" , "viro@zeniv.linux.org.uk" , "linux-btrfs@vger.kernel.org" , "ocfs2-devel@oss.oracle.com" , "hch@lst.de" , "rgoldwyn@suse.de" , "y-goto@fujitsu.com" , "qi.fuli@fujitsu.com" , "fnstml-iaas@cn.fujitsu.com" Subject: Re: Question about the "EXPERIMENTAL" tag for dax in XFS Message-ID: <20210228223846.GA4662@dread.disaster.area> References: <20210226002030.653855-1-ruansy.fnst@fujitsu.com> <20210226190454.GD7272@magnolia> <20210226205126.GX4662@dread.disaster.area> <20210226212748.GY4662@dread.disaster.area> <20210227223611.GZ4662@dread.disaster.area> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: X-Optus-CM-Score: 0 X-Optus-CM-Analysis: v=2.3 cv=YKPhNiOx c=1 sm=1 tr=0 cx=a_idp_d a=JD06eNgDs9tuHP7JIKoLzw==:117 a=JD06eNgDs9tuHP7JIKoLzw==:17 a=kj9zAlcOel0A:10 a=dESyimp9J3IA:10 a=7-415B0cAAAA:8 a=TP_jekbwqI1TK37FQS4A:9 a=CjuIK1q_8ugA:10 a=biEYGPWJfzWAr4FL6Ov7:22 Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Sat, Feb 27, 2021 at 03:40:24PM -0800, Dan Williams wrote: > On Sat, Feb 27, 2021 at 2:36 PM Dave Chinner wrote: > > On Fri, Feb 26, 2021 at 02:41:34PM -0800, Dan Williams wrote: > > > On Fri, Feb 26, 2021 at 1:28 PM Dave Chinner wrote: > > > > On Fri, Feb 26, 2021 at 12:59:53PM -0800, Dan Williams wrote: > > it points to, check if it points to the PMEM that is being removed, > > grab the page it points to, map that to the relevant struct page, > > run collect_procs() on that page, then kill the user processes that > > map that page. > > > > So why can't we walk the ptescheck the physical pages that they > > map to and if they map to a pmem page we go poison that > > page and that kills any user process that maps it. > > > > i.e. I can't see how unexpected pmem device unplug is any different > > to an MCE delivering a hwpoison event to a DAX mapped page. > > I guess the tradeoff is walking a long list of inodes vs walking a > large array of pages. Not really. You're assuming all a filesystem has to do is invalidate everything if a device goes away, and that's not true. Finding if an inode has a mapping that spans a specific device in a multi-device filesystem can be a lot more complex than that. Just walking inodes is easy - determining whihc inodes need invalidation is the hard part. That's where ->corrupt_range() comes in - the filesystem is already set up to do reverse mapping from physical range to inode(s) offsets... > There's likely always more pages than inodes, but perhaps it's more > efficient to walk the 'struct page' array than sb->s_inodes? I really don't see you seem to be telling us that invalidation is an either/or choice. There's more ways to convert physical block address -> inode file offset and mapping index than brute force inode cache walks.... ..... > > IOWs, what needs to happen at this point is very filesystem > > specific. Assuming that "device unplug == filesystem dead" is not > > correct, nor is specifying a generic action that assumes the > > filesystem is dead because a device it is using went away. > > Ok, I think I set this discussion in the wrong direction implying any > mapping of this action to a "filesystem dead" event. It's just a "zap > all ptes" event and upper layers recover from there. Yes, that's exactly what ->corrupt_range() is intended for. 
FWIW, is this notification going to occur before or after the device
has been physically unplugged? i.e. what do we do about the
time-of-unplug-to-time-of-invalidation window where userspace can
still attempt to access the missing pmem through the
not-yet-invalidated ptes? It may not be likely that people just yank
pmem nvdimms out of machines, but with NVMe persistent memory
spaces, there's every chance that someone pulls the wrong device...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com