Subject: Re: Reply: Re: [RFC PATCH 0/8] dax: Add a dax-rmap tree to support reflink
To: "Darrick J. Wong"
Cc: Dave Chinner, Matthew Wilcox, linux-kernel@vger.kernel.org,
 linux-xfs@vger.kernel.org, linux-nvdimm@lists.01.org, linux-mm@kvack.org,
 linux-fsdevel@vger.kernel.org, dan.j.williams@intel.com, hch@lst.de,
 rgoldwyn@suse.de, "Qi, Fuli", "Gotou, Yasunori"
References: <20200427084750.136031-1-ruansy.fnst@cn.fujitsu.com>
 <20200427122836.GD29705@bombadil.infradead.org>
 <20200428064318.GG2040@dread.disaster.area>
 <153e13e6-8685-fb0d-6bd3-bb553c06bf51@cn.fujitsu.com>
 <20200604145107.GA1334206@magnolia>
From: Ruan Shiyang
Date: Fri, 5 Jun 2020 10:11:51 +0800
In-Reply-To: <20200604145107.GA1334206@magnolia>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2020/6/4 10:51 PM, Darrick J. Wong wrote:
> On Thu, Jun 04, 2020 at 03:37:42PM +0800, Ruan Shiyang wrote:
>>
>>
>> On 2020/4/28 2:43 PM, Dave Chinner wrote:
>>> On Tue, Apr 28, 2020 at 06:09:47AM +0000, Ruan, Shiyang wrote:
>>>>
>>>> On 2020/4/27 20:28:36, "Matthew Wilcox" wrote:
>>>>
>>>>> On Mon, Apr 27, 2020 at 04:47:42PM +0800, Shiyang Ruan wrote:
>>>>>> This patchset is an attempt to resolve the shared 'page cache'
>>>>>> problem for fsdax.
>>>>>>
>>>>>> In order to track multiple mappings and indexes on one page, I
>>>>>> introduced a dax-rmap rb-tree to manage the relationship.  A dax
>>>>>> entry will be associated more than once if it is shared.
>>>>>> The second time we associate this entry, we create this rb-tree
>>>>>> and store its root in page->private (not used in fsdax).  Insert
>>>>>> (->mapping, ->index) in dax_associate_entry() and delete it in
>>>>>> dax_disassociate_entry().
>>>>>
>>>>> Do we really want to track all of this on a per-page basis?  I would
>>>>> have thought a per-extent basis was more useful.  Essentially, create
>>>>> a new address_space for each shared extent.  Per page just seems like
>>>>> a huge overhead.
>>>>>
>>>> Per-extent tracking sounds like a nice idea to me.  I haven't thought
>>>> of it yet...
>>>>
>>>> But the extent info is maintained by the filesystem.  I think we need
>>>> a way to obtain this info from the FS when associating a page.  It
>>>> may be a bit complicated.  Let me think about it...
>>>
>>> That's why I want the -user of this association- to do a filesystem
>>> callout instead of keeping its own naive tracking infrastructure.
>>> The filesystem can do an efficient, on-demand reverse mapping lookup
>>> from its own extent tracking infrastructure, and there's zero
>>> runtime overhead when there are no errors present.
>>
>> Hi Dave,
>>
>> I ran into some difficulties when trying to implement the per-extent
>> rmap tracking.  So, I re-read your comments and found that I had
>> misunderstood what you described here.
>>
>> I think what you mean is: we don't need the in-memory dax-rmap
>> tracking now.  Just ask the FS for the owner information associated
>> with a page when a memory failure occurs.  So the per-page (or even
>> per-extent) dax-rmap is unnecessary in this case.  Is this right?
>
> Right.  XFS already has its own rmap tree.
>
>> Based on this, we only need to store the extent information of a fsdax
>> page in its ->mapping (by searching from the FS).  Then obtain the
>> owners of this page (also by searching from the FS) when a memory
>> failure or another rmap case occurs.
>
> I don't even think you need that much.
> All you need is the "physical" offset of that page within the pmem
> device (e.g. 'this is the 307th 4k page == offset 1257472 since the
> start of /dev/pmem0') and xfs can look up the owner of that range of
> physical storage and deal with it as needed.

Yes, I think so.

>
>> So, a fsdax page is no longer associated with a specific file, but
>> with a FS (or the pmem device).  I think it's easier to understand
>> and implement.
>
> Yes.  I also suspect this will be necessary to support reflink...
>
> --D

OK, thank you very much.

--
Thanks,
Ruan Shiyang.

>
>>
>> --
>> Thanks,
>> Ruan Shiyang.
>>>
>>> At the moment, this "dax association" is used to "report" a storage
>>> media error directly to userspace.  I say "report" because what it
>>> does is kill userspace processes dead.  The storage media error
>>> actually needs to be reported to the owner of the storage media,
>>> which in the case of FS-DAX is the filesystem.
>>>
>>> That way the filesystem can then look up all the owners of that bad
>>> media range (i.e. the filesystem block it corresponds to) and take
>>> appropriate action, e.g.:
>>>
>>> - if it falls in filesystem metadata, shut down the filesystem
>>> - if it falls in user data, call the "kill userspace dead" routines
>>>   for each mapping/index tuple the filesystem finds for the LBA
>>>   address at which the media error occurred.
>>>
>>> Right now if the media error is in filesystem metadata, the
>>> filesystem isn't even told about it.  The filesystem can't even shut
>>> down - the error is just dropped on the floor and it won't be until
>>> the filesystem next tries to reference that metadata that we notice
>>> there is an issue.
>>>
>>> Cheers,
>>>
>>> Dave.