From: Dan Williams
Date: Tue, 18 Apr 2017 15:51:27 -0700
Subject: Re: [RFC 0/8] Copy Offload with Peer-to-Peer PCI Memory
To: Jason Gunthorpe
Cc: Logan Gunthorpe, Benjamin Herrenschmidt, Bjorn Helgaas,
 Christoph Hellwig, Sagi Grimberg, "James E.J. Bottomley",
 "Martin K. Petersen", Jens Axboe, Steve Wise, Stephen Bates,
 Max Gurtovoy, Keith Busch, linux-pci@vger.kernel.org, linux-scsi,
 linux-nvme@lists.infradead.org, linux-rdma@vger.kernel.org,
 linux-nvdimm, linux-kernel@vger.kernel.org, Jerome Glisse

On Tue, Apr 18, 2017 at 3:42 PM, Jason Gunthorpe wrote:
> On Tue, Apr 18, 2017 at 03:28:17PM -0700, Dan Williams wrote:
>
>> Unlike the pci bus address offset case which I think is fundamental
>> to support since shipping archs do this today.
>
> But we can support this by modifying those archs' unique dma_ops
> directly.
>
> E.g. as I explained, my p2p_same_segment_map_page() helper concept
> would do the offset adjustment for same-segment DMA.
>
> If PPC calls that in their IOMMU drivers then they will have proper
> support for this basic p2p, and the right framework to move on to
> more advanced cases of p2p.
>
> This really seems like much less trouble than trying to wrap all
> the archs' dma_ops, and doesn't have the wonky restrictions.

I don't think the root bus IOMMU drivers have any business knowing or
caring about DMA happening between devices lower in the hierarchy.

>> I think it is ok to say p2p is restricted to a single sgl that gets
>> to talk to host memory or a single device.
>
> RDMA and GPU would be sad with this restriction...
>
>> That said, what's wrong with a p2p-aware map_sg implementation
>> calling up to the host memory map_sg implementation on a per-sgl
>> basis?
>
> Setting up the iommu is fairly expensive, so getting rid of the
> batching would kill performance.

When we're crossing device and host memory boundaries, how much
batching is possible? As far as I can see, you'll always be splitting
the sgl on these DMA mapping boundaries.
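
To illustrate what I mean by splitting: a p2p-aware map_sg() might
look roughly like the sketch below. Note sg_is_p2p() and p2p_map_sg()
are placeholders I made up for this example, not existing kernel
interfaces; the point is just that the batching unit can never be
larger than a run of same-type entries between boundaries.

#include <linux/mm.h>
#include <linux/scatterlist.h>
#include <linux/dma-mapping.h>

/* Placeholder: does this entry point at peer BAR memory?  A real
 * implementation might key off a ZONE_DEVICE marking on the page. */
static bool sg_is_p2p(struct scatterlist *sg)
{
	return is_zone_device_page(sg_page(sg));
}

/* Placeholder for a bus-offset-based p2p mapper; a real version would
 * translate each entry through the host bridge's CPU-to-bus offset. */
static int p2p_map_sg(struct device *dev, struct scatterlist *sg,
		      int nents, enum dma_data_direction dir)
{
	return 0;	/* stubbed out in this sketch */
}

static int p2p_aware_map_sg(struct device *dev, struct scatterlist *sgl,
			    int nents, enum dma_data_direction dir)
{
	struct scatterlist *sg, *run = sgl;
	bool p2p = sg_is_p2p(sgl);
	int i, run_len = 0, mapped = 0;

	if (nents <= 0)
		return 0;

	for_each_sg(sgl, sg, nents, i) {
		if (sg_is_p2p(sg) == p2p) {
			run_len++;
			continue;
		}
		/* Crossed a device/host boundary: flush the batch. */
		mapped += p2p ? p2p_map_sg(dev, run, run_len, dir) :
				dma_map_sg(dev, run, run_len, dir);
		run = sg;
		run_len = 1;
		p2p = !p2p;
	}
	/* Flush whatever run is left at the end of the list. */
	mapped += p2p ? p2p_map_sg(dev, run, run_len, dir) :
			dma_map_sg(dev, run, run_len, dir);
	return mapped;
}

Whatever the helpers end up being called, every boundary crossing
forces a separate call into the underlying mapper, which is the limit
on batching I'm pointing at.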
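
For comparison, my reading of the p2p_same_segment_map_page() concept
is that it boils down to a fixed offset translation, something like
the sketch below. struct p2p_bus_region and its fields are invented
here for illustration, not a real API; the point is that for a peer
under the same host bridge, "mapping" is just applying the bridge
window's fixed CPU-to-bus offset, with no IOMMU programming at all.

#include <linux/mm.h>
#include <linux/io.h>
#include <linux/dma-mapping.h>

struct p2p_bus_region {			/* invented for this sketch */
	phys_addr_t cpu_start;		/* CPU physical base of the BAR */
	u64	    bus_start;		/* PCI bus address of the same BAR */
	size_t	    size;
};

static dma_addr_t p2p_same_segment_map_page(struct p2p_bus_region *region,
					    struct page *page,
					    unsigned long offset, size_t len)
{
	phys_addr_t phys = page_to_phys(page) + offset;

	/* The mapping must fall entirely inside the peer's BAR window. */
	if (phys < region->cpu_start ||
	    phys + len > region->cpu_start + region->size)
		return 0;	/* treated as a mapping error in this sketch */

	/* Same segment: no IOMMU, just the fixed CPU-to-bus offset. */
	return region->bus_start + (phys - region->cpu_start);
}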