From: Logan Gunthorpe <logang@deltatee.com>
To: Jason Gunthorpe
Cc: Benjamin Herrenschmidt, Dan Williams, Bjorn Helgaas, Christoph Hellwig, Sagi Grimberg, "James E.J. Bottomley", "Martin K. Petersen", Jens Axboe, Steve Wise, Stephen Bates, Max Gurtovoy, Keith Busch, Jerome Glisse, linux-pci@vger.kernel.org, linux-scsi, linux-nvme@lists.infradead.org, linux-rdma@vger.kernel.org, linux-nvdimm, linux-kernel@vger.kernel.org
Date: Wed, 19 Apr 2017 13:02:49 -0600
Message-ID: <21e8099a-d19d-7df0-682d-627d8b81dfde@deltatee.com>
In-Reply-To: <20170419183247.GA13716@obsidianresearch.com>
Subject: Re: [RFC 0/8] Copy Offload with Peer-to-Peer PCI Memory

On 19/04/17 12:32 PM, Jason Gunthorpe wrote:
> On Wed, Apr 19, 2017 at 12:01:39PM -0600, Logan Gunthorpe wrote:
> Not entirely, it would have to call through the whole process
> including the arch_p2p_cross_segment()..

Hmm, yes. Though it's still not clear what, if anything, arch_p2p_cross_segment() would be doing.

In my experience, if you are going between host bridges, the CPU address (or PCI address -- I'm not sure which, seeing as they are the same on my system) would still work fine -- it just _may_ be a bad idea because of performance.

Similarly, if you are crossing via a QPI bus or similar, I'd expect the CPU address to work fine, but there the performance is even less likely to be any good.

> // Try the arch specific helper
> const struct dma_map_ops *comp_ops = get_dma_ops(completer);
> const struct dma_map_ops *init_ops = get_dma_ops(initiator);

So, in this case, what device does the completer point to? The PCI device or a more specific GPU device? If it's the former, who's responsible for setting the new dma_ops? Typically the dma_ops are arch specific, but now you'd be adding ones that are tied to HMM or the GPU.

>> I'm not sure I like the name pci_p2p_same_segment. It reads as though
>> it's only checking whether the devices are on the same segment.
>
> Well, that is exactly what it is doing. If it succeeds then the caller
> knows the DMA will not flow outside the segment and no iommu setup/etc
> is required.

It appears to me that it's calculating the DMA address, with the same-segment check being just a side requirement -- yet the name reads as though it's only doing the check.

Logan
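To make the naming point concrete, here is a minimal user-space sketch of the shape I mean. All names (`fake_pci_dev`, `p2p_same_segment_map`) and the offset arithmetic are hypothetical stand-ins, not the actual patch's code: the helper's real product is the bus (DMA) address, and the same-segment test is only a precondition on computing it.

```c
#include <stdint.h>

/* Hypothetical stand-in for the bits of struct pci_dev this sketch needs. */
struct fake_pci_dev {
	int segment;         /* PCI domain (segment) number */
	uint64_t bus_offset; /* CPU-to-bus address offset for this root complex */
};

/*
 * If both devices sit on the same PCI segment, translate the CPU
 * physical address of the peer's BAR memory into the bus address the
 * initiator should DMA to, store it in *dma_addr, and return 0.
 * Return -1 if the devices are on different segments, in which case
 * the caller must fall back to the arch/iommu path.
 */
static int p2p_same_segment_map(const struct fake_pci_dev *initiator,
				const struct fake_pci_dev *completer,
				uint64_t cpu_addr, uint64_t *dma_addr)
{
	if (initiator->segment != completer->segment)
		return -1;

	/* The address calculation is the helper's main output ... */
	*dma_addr = cpu_addr - completer->bus_offset;
	return 0;
}
```

A name along the lines of "map" or "translate" would advertise the address calculation; "same_segment" only advertises the precondition.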