Subject: [LSF/MM TOPIC][LSF/MM ATTEND] Enabling Peer-to-Peer DMAs between PCIe devices
From: "Stephen Bates"
To: lsf-pc@lists.linux-foundation.org
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-rdma@vger.kernel.org, linux-nvme@lists.infradead.org
Date: Mon, 12 Dec 2016 11:51:54 -0600

Hi,

I'd like to discuss how best to enable DMAs between PCIe devices in the
Linux kernel.

There have been many attempts to add to the kernel the ability to DMA
between two PCIe devices, but to date none of these have been accepted.
Meanwhile, as PCIe devices like NICs, NVMe SSDs and GPGPUs continue to get
faster, the desire to move data directly between these devices (as opposed
to having to use a temporary buffer in system memory) keeps growing.
Out-of-tree solutions like GPU-Direct are one illustration of the
popularity of this functionality. A recent discussion on this topic
provides a good summary of where things stand [1].

I would like to propose a session at LSF/MM to discuss some of the
different use cases for these P2P DMAs, and the pros and cons of the
approaches proposed so far. The goal would be to try to form a consensus
on how best to move forward to an upstreamable solution to this problem.

In addition, I would also be interested in participating in the following
topics:

* Anything related to PMEM and DAX.
* Integrating the block-layer polling capability into file-systems.
* New feature integration into the NVMe driver (e.g. fabrics, CMBs, IO
  tags etc.).

Cheers

Stephen

[1] http://marc.info/?l=linux-pci&m=147976059431355&w=2 (and subsequent
    thread).
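
For concreteness, below is a minimal sketch of what a provider/consumer
flow for P2P memory could look like. It is written against the pci_p2pdma
helpers (drivers/pci/p2pdma.c) that were merged upstream well after this
mail was sent, so the calls are illustrative of the idea rather than of any
API that existed at the time; the example_* functions are hypothetical
driver hooks.

	/*
	 * Hedged sketch only: uses the later-merged pci_p2pdma helpers to
	 * illustrate a P2P allocation flow; not code from this proposal.
	 */
	#include <linux/pci.h>
	#include <linux/pci-p2pdma.h>

	/*
	 * Provider side (e.g. an NVMe driver exposing its CMB): register a
	 * BAR region as P2P-capable memory and publish it so that other
	 * drivers can discover and allocate from it.
	 */
	static int example_publish_p2pmem(struct pci_dev *pdev, int bar,
					  size_t size)
	{
		int rc;

		rc = pci_p2pdma_add_resource(pdev, bar, size, 0 /* offset */);
		if (rc)
			return rc;

		pci_p2pmem_publish(pdev, true);
		return 0;
	}

	/*
	 * Consumer side (e.g. an RDMA NIC setting up a queue): find a
	 * published P2P memory provider suitable for this device, allocate
	 * a buffer from it, and use its PCI bus address as the DMA target
	 * instead of a bounce buffer in system RAM.
	 */
	static void *example_alloc_p2p_buffer(struct device *client, size_t len,
					      struct pci_dev **provider,
					      pci_bus_addr_t *bus_addr)
	{
		struct pci_dev *p2p_dev;
		void *buf;

		p2p_dev = pci_p2pmem_find(client);
		if (!p2p_dev)
			return NULL;	/* fall back to system memory */

		buf = pci_alloc_p2pmem(p2p_dev, len);
		if (!buf) {
			pci_dev_put(p2p_dev);
			return NULL;
		}

		*provider = p2p_dev;
		*bus_addr = pci_p2pmem_virt_to_bus(p2p_dev, buf);
		return buf;
	}

The key point the sketch tries to capture is that the consumer ends up
programming a bus address inside another device's BAR, so the transfer
never lands in a temporary buffer in system memory.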