Date: Tue, 18 Apr 2017 17:21:59 -0600
From: Jason Gunthorpe
To: Dan Williams
Cc: Logan Gunthorpe, Benjamin Herrenschmidt, Bjorn Helgaas, Christoph Hellwig, Sagi Grimberg, "James E.J. Bottomley", "Martin K. Petersen", Jens Axboe, Steve Wise, Stephen Bates, Max Gurtovoy, Keith Busch, linux-pci@vger.kernel.org, linux-scsi, linux-nvme@lists.infradead.org, linux-rdma@vger.kernel.org, linux-nvdimm, linux-kernel@vger.kernel.org, Jerome Glisse
Subject: Re: [RFC 0/8] Copy Offload with Peer-to-Peer PCI Memory
Message-ID: <20170418232159.GA28477@obsidianresearch.com>

On Tue, Apr 18, 2017 at 03:51:27PM -0700, Dan Williams wrote:
> > This really seems like much less trouble than trying to wrapper all
> > the arch's dma ops, and doesn't have the wonky restrictions.
>
> I don't think the root bus iommu drivers have any business knowing or
> caring about dma happening between devices lower in the hierarchy.

Maybe not, but performance requires some odd choices in this code.. :(

> > Setting up the iommu is fairly expensive, so getting rid of the
> > batching would kill performance..
> When we're crossing device and host memory boundaries how much
> batching is possible? As far as I can see you'll always be splitting
> the sgl on these dma mapping boundaries.

Splitting the sgl is different from iommu batching.

As an example, consider an O_DIRECT write of 1 MB with a single 4K P2P
page in the middle. The optimum behavior is to allocate a 1MB-4K iommu
range and fill it with the CPU memory, then return a SGL with three
entries: two pointing into the range and one to the p2p page.

It is creating each iommu range that tends to be expensive, so creating
two ranges (or worse, if every SGL entry created its own range, there
would be 255 of them) is very undesirable.

Jason