Date: Fri, 21 Oct 2016 22:12:53 +1100
From: Dave Chinner
To: Christoph Hellwig
Cc: Stephen Bates, Dan Williams, linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org, linux-rdma@vger.kernel.org, linux-block@vger.kernel.org, Linux MM, Ross Zwisler, Matthew Wilcox, jgunthorpe@obsidianresearch.com, haggaie@mellanox.com, Jens Axboe, Jonathan Corbet, jim.macdonald@everspin.com, sbates@raithin.com, Logan Gunthorpe, David Woodhouse, "Raj, Ashok"
Subject: Re: [PATCH 0/3] iopmem : A block device for PCIe memory
Message-ID: <20161021111253.GQ14023@dastard>
References: <1476826937-20665-1-git-send-email-sbates@raithlin.com> <20161019184814.GC16550@cgy1-donard.priv.deltatee.com> <20161020232239.GQ23194@dastard> <20161021095714.GA12209@infradead.org>
In-Reply-To: <20161021095714.GA12209@infradead.org>

On Fri, Oct 21, 2016 at 02:57:14AM -0700, Christoph Hellwig wrote:
> On Fri, Oct 21, 2016 at 10:22:39AM +1100, Dave Chinner wrote:
> > You do realise that local filesystems can silently change the
> > location of file data at any point in time, so there is no such
> > thing as a "stable mapping" of file
> > data to block device addresses
> > in userspace?
> >
> > If you want remote access to the blocks owned and controlled by a
> > filesystem, then you need to use a filesystem with a remote locking
> > mechanism to allow co-ordinated, coherent access to the data in
> > those blocks. Anything else is just asking for ongoing, unfixable
> > filesystem corruption or data leakage problems (i.e. security
> > issues).
>
> And at least for XFS we have such a mechanism :)  E.g. I have a
> prototype of a pNFS layout that uses XFS+DAX to allow clients to do
> RDMA directly to XFS files, with the same locking mechanism we use
> for the current block and scsi layout in xfs_pnfs.c.

Oh, that's good to know - pNFS over XFS was exactly what I was
thinking of when I wrote my earlier reply. A few months ago someone
else was trying to use file mappings in userspace for direct remote
client access on fabric connected devices. I told them "pNFS on XFS
and write an efficient transport for your hardware"....

Now that I know we've got RDMA support for pNFS on XFS in the
pipeline, I can just tell them "just write an rdma driver for your
hardware" instead. :P

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com
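[Editor's aside: the "no stable mapping" point above can be seen from userspace with the FIEMAP ioctl, which returns only a point-in-time snapshot of a file's extent layout; defrag, COW or reflink can change it afterwards. A minimal Python sketch, assuming a filesystem that implements FIEMAP (e.g. XFS or ext4); the helper name and temp-file usage are illustrative, not from the thread:]

```python
import fcntl
import os
import struct
import tempfile

# _IOWR('f', 11, struct fiemap) -- the FIEMAP ioctl number on Linux.
FS_IOC_FIEMAP = 0xC020660B
FIEMAP_FLAG_SYNC = 1  # sync the file before mapping


def mapped_extent_count(path):
    """Return the file's current number of mapped extents,
    or None if the filesystem does not support FIEMAP."""
    # struct fiemap header (32 bytes):
    #   u64 fm_start, u64 fm_length, u32 fm_flags,
    #   u32 fm_mapped_extents, u32 fm_extent_count, u32 fm_reserved
    # fm_extent_count == 0 asks the kernel only for the extent count.
    buf = bytearray(struct.pack('=QQLLLL',
                                0, 0xFFFFFFFFFFFFFFFF,
                                FIEMAP_FLAG_SYNC, 0, 0, 0))
    fd = os.open(path, os.O_RDONLY)
    try:
        fcntl.ioctl(fd, FS_IOC_FIEMAP, buf)  # mutates buf in place
    except OSError:
        return None  # e.g. EOPNOTSUPP on filesystems without FIEMAP
    finally:
        os.close(fd)
    return struct.unpack('=QQLLLL', bytes(buf))[3]


if __name__ == '__main__':
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(b'x' * 65536)
        f.flush()
        os.fsync(f.fileno())
        path = f.name
    # Whatever this prints, it is only valid at the instant of the call:
    # the filesystem is free to relocate these blocks at any time.
    print(mapped_extent_count(path))
    os.remove(path)
```

[Nothing in this snapshot is coherent with later filesystem activity, which is exactly why coordinated access needs a filesystem-level locking protocol such as the pNFS layouts discussed above.]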