Date: Mon, 5 Dec 2016 12:06:32 -0800
From: Christoph Hellwig
To: Jason Gunthorpe
Cc: Logan Gunthorpe, Dan Williams, Stephen Bates, Haggai Eran,
	linux-kernel@vger.kernel.org, linux-rdma@vger.kernel.org,
	linux-nvdimm@ml01.01.org, christian.koenig@amd.com,
	Suravee.Suthikulpanit@amd.com, John.Bridgman@amd.com,
	Alexander.Deucher@amd.com, Linux-media@vger.kernel.org,
	dri-devel@lists.freedesktop.org, Max Gurtovoy,
	linux-pci@vger.kernel.org, serguei.sagalovitch@amd.com,
	Paul.Blinzer@amd.com, Felix.Kuehling@amd.com, ben.sander@amd.com
Subject: Re: Enabling peer to peer device transactions for PCIe devices
Message-ID: <20161205200632.GA24497@infradead.org>
In-Reply-To: <20161205194614.GA21132@obsidianresearch.com>

On Mon, Dec 05, 2016 at 12:46:14PM -0700, Jason Gunthorpe wrote:
> In any event the allocator still needs to track which regions are in
> use and be able to hook 'free' from userspace. That does suggest it
> should be integrated into the nvme driver and not a bolt on driver..

Two totally different use cases:

 - a card that directly exposes byte-addressable storage as a PCIe
   BAR. Think of it as an nvdimm on a PCIe card. That's the iopmem
   case.

 - the NVMe CMB, which exposes a byte-addressable indirection buffer
   for I/O but does not actually provide byte-addressable persistent
   storage. This is something that needs to be added to the NVMe
   driver (and probably to the block layer for the abstraction).
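
[Editor's note: to make the first (iopmem) case a little more concrete, here is
a minimal sketch of how a driver might locate and map a PCI BAR that carries
byte-addressable storage. The BAR index, the iopmem_dev structure, and the
function name are illustrative assumptions, not code from the iopmem patches or
the NVMe driver.]

	/*
	 * Sketch: map a PCI BAR that exposes byte-addressable storage,
	 * as in the "nvdimm on a PCIe card" (iopmem) case described above.
	 * BAR 2 is an assumed choice for the storage aperture.
	 */
	#include <linux/pci.h>
	#include <linux/io.h>

	struct iopmem_dev {
		void __iomem	*base;	/* CPU mapping of the storage BAR */
		resource_size_t	size;	/* size of the aperture */
	};

	static int iopmem_map_bar(struct pci_dev *pdev, struct iopmem_dev *iop)
	{
		int bar = 2;	/* assumed BAR carrying the storage */

		/* the BAR must be a memory resource, not I/O ports */
		if (!(pci_resource_flags(pdev, bar) & IORESOURCE_MEM))
			return -ENODEV;

		iop->size = pci_resource_len(pdev, bar);
		iop->base = pci_iomap(pdev, bar, 0);	/* map the whole BAR */
		if (!iop->base)
			return -ENOMEM;

		return 0;
	}

Anything on top of this (an allocator tracking which regions are in use,
hooking 'free' from userspace, exposing the memory for peer-to-peer DMA) is
exactly the part being debated in this thread and is not shown here.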