From: Dan Williams
Date: Mon, 1 Mar 2021 12:55:53 -0800
Subject: Re: Question about the "EXPERIMENTAL" tag for dax in XFS
In-Reply-To: <20210228223846.GA4662@dread.disaster.area>
To: Dave Chinner
Wong" , "ruansy.fnst@fujitsu.com" , "linux-kernel@vger.kernel.org" , "linux-xfs@vger.kernel.org" , "linux-nvdimm@lists.01.org" , "linux-fsdevel@vger.kernel.org" , "darrick.wong@oracle.com" , "willy@infradead.org" , "jack@suse.cz" , "viro@zeniv.linux.org.uk" , "linux-btrfs@vger.kernel.org" , "ocfs2-devel@oss.oracle.com" , "hch@lst.de" , "rgoldwyn@suse.de" , "y-goto@fujitsu.com" , "qi.fuli@fujitsu.com" , "fnstml-iaas@cn.fujitsu.com" Content-Type: text/plain; charset="UTF-8" Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Sun, Feb 28, 2021 at 2:39 PM Dave Chinner wrote: > > On Sat, Feb 27, 2021 at 03:40:24PM -0800, Dan Williams wrote: > > On Sat, Feb 27, 2021 at 2:36 PM Dave Chinner wrote: > > > On Fri, Feb 26, 2021 at 02:41:34PM -0800, Dan Williams wrote: > > > > On Fri, Feb 26, 2021 at 1:28 PM Dave Chinner wrote: > > > > > On Fri, Feb 26, 2021 at 12:59:53PM -0800, Dan Williams wrote: > > > it points to, check if it points to the PMEM that is being removed, > > > grab the page it points to, map that to the relevant struct page, > > > run collect_procs() on that page, then kill the user processes that > > > map that page. > > > > > > So why can't we walk the ptescheck the physical pages that they > > > map to and if they map to a pmem page we go poison that > > > page and that kills any user process that maps it. > > > > > > i.e. I can't see how unexpected pmem device unplug is any different > > > to an MCE delivering a hwpoison event to a DAX mapped page. > > > > I guess the tradeoff is walking a long list of inodes vs walking a > > large array of pages. > > Not really. You're assuming all a filesystem has to do is invalidate > everything if a device goes away, and that's not true. Finding if an > inode has a mapping that spans a specific device in a multi-device > filesystem can be a lot more complex than that. Just walking inodes > is easy - determining whihc inodes need invalidation is the hard > part. That inode-to-device level of specificity is not needed for the same reason that drop_caches does not need to be specific. If the wrong page is unmapped a re-fault will bring it back, and re-fault will fail for the pages that are successfully removed. > That's where ->corrupt_range() comes in - the filesystem is already > set up to do reverse mapping from physical range to inode(s) > offsets... Sure, but what is the need to get to that level of specificity with the filesystem for something that should rarely happen in the course of normal operation outside of a mistake? > > > There's likely always more pages than inodes, but perhaps it's more > > efficient to walk the 'struct page' array than sb->s_inodes? > > I really don't see you seem to be telling us that invalidation is an > either/or choice. There's more ways to convert physical block > address -> inode file offset and mapping index than brute force > inode cache walks.... Yes, but I was trying to map it to an existing mechanism and the internals of drop_pagecache_sb() are, in coarse terms, close to what needs to happen here. > > ..... > > > > IOWs, what needs to happen at this point is very filesystem > > > specific. Assuming that "device unplug == filesystem dead" is not > > > correct, nor is specifying a generic action that assumes the > > > filesystem is dead because a device it is using went away. > > > > Ok, I think I set this discussion in the wrong direction implying any > > mapping of this action to a "filesystem dead" event. It's just a "zap > > all ptes" event and upper layers recover from there. 
> .....
>
> > > IOWs, what needs to happen at this point is very filesystem
> > > specific. Assuming that "device unplug == filesystem dead" is not
> > > correct, nor is specifying a generic action that assumes the
> > > filesystem is dead because a device it is using went away.
> >
> > Ok, I think I set this discussion in the wrong direction by implying
> > any mapping of this action to a "filesystem dead" event. It's just a
> > "zap all ptes" event, and upper layers recover from there.
>
> Yes, that's exactly what ->corrupt_range() is intended for. It
> allows the filesystem to lock out access to the bad range
> and then recover the data. Or metadata, if that's where the bad
> range lands. If that recovery fails, it can then report a data
> loss/filesystem shutdown event to userspace and kill user procs that
> span the bad range...
>
> FWIW, is this notification going to occur before or after the device
> has been physically unplugged?

Before. These will be operations that happen in the pmem driver
->remove() callback.

> i.e. what do we do about the
> time-of-unplug-to-time-of-invalidation window where userspace can
> still attempt to access the missing pmem through the
> not-yet-invalidated ptes? It may not be likely that people just yank
> pmem nvdimms out of machines, but with NVMe persistent memory
> spaces, there's every chance that someone pulls the wrong device...

The physical removal aspect is only theoretical today; the pmem
driver's ->remove() path is purely a software unbind operation. That
said, there is a vulnerability window today: if a process acquires a
dax mapping, the pmem device hosting that filesystem goes through an
unbind/bind cycle, and then a new filesystem is created / mounted,
the old pte may be able to access data that is outside its intended
protection domain.

Going forward, for buses like CXL, there will be a managed physical
remove operation via PCIe native hotplug. The flow there is that the
PCIe hotplug driver notifies the OS of a pending removal, triggers
->remove() on the pmem driver, and then signals the technician (via
the slot status LED) that the card is safe to pull.
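To sketch the ordering being described: the ->remove() flow could be
shaped roughly as below. Both pmem_notify_remove_range() and
pmem_zap_all_ptes() are hypothetical placeholders for whatever form
the ->corrupt_range() plumbing and the pte invalidation finally take;
they are not existing kernel interfaces:

#include <linux/device.h>

/*
 * Illustrative only: a possible shape for the pmem ->remove() flow
 * described above. struct pmem_device is used loosely here, and the
 * two helpers are hypothetical stand-ins.
 */
static int pmem_remove(struct device *dev)
{
	struct pmem_device *pmem = dev_get_drvdata(dev);

	/*
	 * Notify filesystems while the device is still present, i.e.
	 * before physical unplug: this is where a ->corrupt_range()-style
	 * callback spanning the whole device would be invoked so the
	 * filesystem can lock out and recover the affected range.
	 */
	pmem_notify_remove_range(pmem, 0, pmem->size);

	/*
	 * Zap any remaining user ptes so a stale DAX mapping cannot
	 * outlive the unbind and reach data belonging to a filesystem
	 * created after a later bind of the same device.
	 */
	pmem_zap_all_ptes(pmem);

	return 0;
}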