Date: Wed, 13 Mar 2019 09:11:13 +1100
From: Dave Chinner <david@fromorbit.com>
To: Ira Weiny
Cc: Christopher Lameter, john.hubbard@gmail.com, Andrew Morton,
    linux-mm@kvack.org, Al Viro, Christian Benvenuti, Christoph Hellwig,
    Dan Williams, Dennis Dalessandro, Doug Ledford, Jan Kara,
    Jason Gunthorpe, Jerome Glisse, Matthew Wilcox, Michal Hocko,
    Mike Rapoport, Mike Marciniszyn, Ralph Campbell, Tom Talpey, LKML,
    linux-fsdevel@vger.kernel.org, John Hubbard
Subject: Re: [PATCH v3 0/1] mm: introduce put_user_page*(), placeholder versions
Message-ID: <20190312221113.GF23020@dastard>
References: <20190306235455.26348-1-jhubbard@nvidia.com>
 <010001695b4631cd-f4b8fcbf-a760-4267-afce-fb7969e3ff87-000000@email.amazonses.com>
 <20190310224742.GK26298@dastard>
 <01000169705aecf0-76f2b83d-ac18-4872-9421-b4b6efe19fc7-000000@email.amazonses.com>
 <20190312103932.GD1119@iweiny-DESK2.sc.intel.com>
In-Reply-To: <20190312103932.GD1119@iweiny-DESK2.sc.intel.com>

On Tue, Mar 12, 2019 at 03:39:33AM -0700, Ira Weiny wrote:
> IMHO I don't think that copy_file_range() is going to carry us through
> the next wave of user performance requirements.  RDMA, while the first,
> is not the only technology which is looking to have direct access to
> files.  XDP is another. [1]

Sure, all I was doing here was demonstrating that people have been
trying to get local direct access to file mappings to DMA directly
into them for a long time. Direct IO games like these are now largely
unnecessary because we now have much better APIs to do zero-copy data
transfer between files (which can do hardware offload if it is
available!).

It's the long term pins that RDMA does that are the problem here.

I'm assuming that for XDP, you're talking about userspace zero copy
from files to the network hardware and vice versa? Transmit is simple
(read-only mapping), but receive probably requires bpf programs to
ensure that data (minus headers) in the incoming packet stream is
correctly placed into the UMEM region?

XDP receive seems pretty much like the same problem as RDMA writes
into the file. i.e. the incoming write DMAs are going to have to
trigger page faults if the UMEM is a long term pin so the filesystem
behaves correctly with this remote data placement.

I'd suggest that RDMA, XDP and any other hardware that is going to
pin file-backed mappings for the long term need to use the same
"inform the fs of a write operation into its mapping" mechanisms...

And if we start talking about wanting to do peer-to-peer DMA from
network/GPU device to storage device without going through a
file-backed CPU mapping, we still need to have the filesystem involved
to translate file offsets to storage locations the filesystem has
allocated for the data, and to lock them down for as long as the
peer-to-peer DMA offload is in place. In effect, this is the same
problem as RDMA+FS-DAX - the filesystem owns the file offset to
storage location mapping and manages storage access arbitration, not
the mm/vma mapping presented to userspace....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com
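
To make the "much better APIs" point above concrete, below is a minimal
sketch of zero-copy file-to-file transfer with copy_file_range(). The
file names are hypothetical and error handling is abbreviated, so treat
it as an illustration of the interface rather than a reference
implementation:

	/*
	 * Minimal sketch: copy src.dat to dst.dat without bouncing the
	 * data through a userspace buffer.  File names are hypothetical.
	 */
	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/stat.h>
	#include <unistd.h>

	int main(void)
	{
		int fd_in = open("src.dat", O_RDONLY);
		int fd_out = open("dst.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
		struct stat st;

		if (fd_in < 0 || fd_out < 0 || fstat(fd_in, &st) < 0) {
			perror("setup");
			return 1;
		}

		/*
		 * The kernel moves the data internally; filesystems that
		 * support offloads (e.g. reflink) can avoid copying the
		 * data at all.  NULL offsets mean "use and advance the
		 * file offsets of both descriptors".
		 */
		off_t remaining = st.st_size;
		while (remaining > 0) {
			ssize_t copied = copy_file_range(fd_in, NULL,
							 fd_out, NULL,
							 remaining, 0);
			if (copied <= 0) {
				perror("copy_file_range");
				return 1;
			}
			remaining -= copied;
		}

		close(fd_in);
		close(fd_out);
		return 0;
	}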
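
And for the XDP receive case above, here is a similarly minimal sketch
of what registering a file-backed mapping as an AF_XDP UMEM might look
like - i.e. the long-term pin of file-backed pages being discussed. The
file name, sizes and fallback constants are assumptions, and the
fill/completion ring setup and queue binding a real AF_XDP program
needs are omitted:

	/*
	 * Minimal sketch: mmap() a regular file MAP_SHARED and register
	 * that mapping as an AF_XDP UMEM.  Once registered, incoming
	 * packets are DMAed into these file-backed pages for as long as
	 * the socket lives - the long-term pin case discussed above.
	 * File name and sizes are hypothetical (the file is assumed to
	 * be at least UMEM_SIZE bytes long).
	 */
	#include <linux/if_xdp.h>
	#include <fcntl.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <sys/mman.h>
	#include <sys/socket.h>
	#include <unistd.h>

	#ifndef AF_XDP			/* fallbacks for older libc headers */
	#define AF_XDP 44
	#endif
	#ifndef SOL_XDP
	#define SOL_XDP 283
	#endif

	#define UMEM_SIZE	(1UL << 22)	/* 4 MiB of packet buffers */
	#define CHUNK_SIZE	2048		/* one frame per 2 KiB chunk */

	int main(void)
	{
		int fd = open("/mnt/data/umem.bin", O_RDWR);	/* hypothetical file */
		void *umem = mmap(NULL, UMEM_SIZE, PROT_READ | PROT_WRITE,
				  MAP_SHARED, fd, 0);
		int xsk = socket(AF_XDP, SOCK_RAW, 0);

		if (fd < 0 || umem == MAP_FAILED || xsk < 0) {
			perror("setup");
			return 1;
		}

		struct xdp_umem_reg reg = {
			.addr		= (uintptr_t)umem,
			.len		= UMEM_SIZE,
			.chunk_size	= CHUNK_SIZE,
			.headroom	= 0,
		};

		/* This is the point where the kernel pins the UMEM pages. */
		if (setsockopt(xsk, SOL_XDP, XDP_UMEM_REG, &reg, sizeof(reg)) < 0) {
			perror("XDP_UMEM_REG");
			return 1;
		}

		/* ... fill/completion rings, bind to an interface queue, etc. ... */

		close(xsk);
		munmap(umem, UMEM_SIZE);
		close(fd);
		return 0;
	}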