Date: Thu, 24 Nov 2016 09:42:49 -0700
From: Jason Gunthorpe
To: Logan Gunthorpe
Cc: Serguei Sagalovitch, Dan Williams, "Deucher, Alexander",
    linux-nvdimm@lists.01.org, linux-rdma@vger.kernel.org,
    linux-pci@vger.kernel.org, "Kuehling, Felix", "Bridgman, John",
    linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
    "Koenig, Christian", "Sander, Ben", "Suthikulpanit, Suravee",
    "Blinzer, Paul", Linux-media@vger.kernel.org, Haggai Eran
Subject: Re: Enabling peer to peer device transactions for PCIe devices
Message-ID: <20161124164249.GD20818@obsidianresearch.com>
In-Reply-To: <91d28749-bc64-622f-56a1-26c00e6b462a@deltatee.com>

On Wed, Nov 23, 2016 at 06:25:21PM -0700, Logan Gunthorpe wrote:
>
>
> On 23/11/16 02:55 PM, Jason Gunthorpe wrote:
> >>> Only ODP hardware allows changing the DMA address on the fly, and it
> >>> works at the page table level. We do not need special handling for
> >>> RDMA.
> >>
> >> I am aware of ODP but, as noted by others, it doesn't provide a general
> >> solution to the points above.
> >
> > How do you mean?
>
> I was only saying it wasn't general in that it wouldn't work for IB
> hardware that doesn't support ODP, or for other hardware that doesn't
> do similar things (like an NVMe drive).

There are three cases to worry about:
 - Coherent long lived page table mirroring (RDMA ODP MR)
 - Non-coherent long lived page table mirroring (RDMA MR)
 - Short lived DMA mapping (everything else)

Like you say below, we have to handle the short lived case in the usual
way, and that covers basically every device except IB MRs, including
the command queue on an NVMe drive.

> any complex allocators (GPU or otherwise) should respect that. And that
> seems like it should be the default way most of this works -- and I
> think it wouldn't actually take too much effort to make it all work now
> as is. (Our iopmem work is actually quite small and simple.)

Yes, absolutely, some kind of page-pinning-like locking is a hard
requirement (rough sketch of the second and third cases below).

> Yeah, we've had RDMA and O_DIRECT transfers to PCIe backed ZONE_DEVICE
> memory working for some time. I'd say it's a good fit. The main question
> we've had is how to expose PCIe BARs to userspace to be used as MRs and
> such.

Is there any progress on that?

I still don't quite get what iopmem was about... I thought the objection
to uncacheable ZONE_DEVICE & DAX made sense, so running DAX over iopmem
and still ending up with uncacheable mmaps seems like a non-starter to
me...
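Not kernel-ready code, just to spell out the second and third cases -- a
rough sketch of the two mapping lifetimes. Error handling is trimmed and
the get_user_pages_fast() signature shown is the ~4.9-era one; it differs
on newer kernels.

/*
 * Rough sketch of the two mapping lifetimes -- illustration only, not
 * real driver code.
 */
#include <linux/errno.h>
#include <linux/mm.h>
#include <linux/dma-mapping.h>

/*
 * Case 3: short lived DMA mapping (e.g. one block I/O or NVMe command).
 * The address only has to stay stable for the duration of the transfer,
 * so the usual dma_map/dma_unmap bracket is enough.
 */
static int short_lived_dma(struct device *dev, struct page *page, size_t len)
{
	dma_addr_t dma = dma_map_page(dev, page, 0, len, DMA_TO_DEVICE);

	if (dma_mapping_error(dev, dma))
		return -ENOMEM;

	/* ... start the transfer, wait for completion ... */

	dma_unmap_page(dev, dma, len, DMA_TO_DEVICE);
	return 0;
}

/*
 * Case 2: non-coherent long lived mirroring (a plain RDMA MR).  The HW
 * holds the DMA addresses indefinitely and cannot fault, so the pages
 * must stay pinned and mapped until the MR is destroyed -- the
 * page-pinning-like locking requirement.  Case 1 (ODP) skips the pin
 * because the HW can be invalidated and re-faulted.
 */
static int long_lived_mr_pin(struct device *dev, unsigned long uaddr,
			     struct page **pages, dma_addr_t *dma_list,
			     int npages)
{
	int i, got;

	got = get_user_pages_fast(uaddr, npages, 1 /* write */, pages);
	if (got != npages) {
		while (got-- > 0)
			put_page(pages[got]);
		return -EFAULT;
	}

	for (i = 0; i < npages; i++)
		dma_list[i] = dma_map_page(dev, pages[i], 0, PAGE_SIZE,
					   DMA_BIDIRECTIONAL);

	/*
	 * dma_list[] is handed to the device and stays fixed until
	 * unregister, where everything is dma_unmap_page()ed and
	 * put_page()ed.
	 */
	return 0;
}

Case 1 (ODP) is the only one that tolerates the DMA address changing
underneath the hardware, which is why it doesn't generalize to non-ODP
IB or NVMe.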
Serguei, what is your plan in GPU land for migration? I.e., if I have a
CPU mapped page and the GPU moves it to VRAM, it becomes non-cacheable -
do you still allow the CPU to access it? Or do you swap it back to
cacheable memory if the CPU touches it?

One approach might be to mmap the uncacheable ZONE_DEVICE memory and
mark it inaccessible to the CPU - DMA could still translate. If the CPU
needs it then the kernel migrates it to system memory so it becomes
cacheable. ?? (Very rough sketch of what I'm picturing at the end of
this mail.)

Jason
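For what it's worth, the flow I'm picturing looks something like the
sketch below. Purely illustrative: struct gpu_buffer and
gpu_migrate_to_sysram() are invented names, not an existing kernel API,
and the .fault prototype and vm_fault fields differ between kernel
versions (this is the ~4.9 two-argument form).

#include <linux/mm.h>

struct gpu_buffer {
	struct page **pages;	/* current backing: VRAM or system RAM */
	bool in_vram;		/* while true, no CPU PTEs are installed */
};

/* Hypothetical: copy page @idx to cacheable system RAM, update
 * buf->pages[idx], and re-point any device DMA mappings at it. */
static int gpu_migrate_to_sysram(struct gpu_buffer *buf, unsigned long idx);

static int gpu_buf_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
{
	struct gpu_buffer *buf = vma->vm_private_data;
	unsigned long addr = (unsigned long)vmf->virtual_address;
	unsigned long idx = vmf->pgoff;	/* assumes mapping starts at offset 0 */

	/*
	 * The VRAM copy is never CPU mapped, so any CPU access faults here.
	 * Migrate the page back to ordinary cacheable memory first; the
	 * device keeps DMA access either way, only the backing moves.
	 */
	if (buf->in_vram && gpu_migrate_to_sysram(buf, idx))
		return VM_FAULT_SIGBUS;

	if (vm_insert_page(vma, addr, buf->pages[idx]))
		return VM_FAULT_SIGBUS;

	return VM_FAULT_NOPAGE;
}

static const struct vm_operations_struct gpu_buf_vm_ops = {
	.fault	= gpu_buf_fault,
};

Of course that only helps devices that can tolerate the DMA address
changing or that re-map per transfer; a pinned non-ODP MR still can't
follow the move.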