Date: Mon, 5 Dec 2016 12:46:14 -0700
From: Jason Gunthorpe
To: Logan Gunthorpe
Cc: Dan Williams, Stephen Bates, Haggai Eran, linux-kernel@vger.kernel.org,
	linux-rdma@vger.kernel.org, linux-nvdimm@ml01.01.org,
	christian.koenig@amd.com, Suravee.Suthikulpanit@amd.com,
	John.Bridgman@amd.com, Alexander.Deucher@amd.com,
	Linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org,
	Max Gurtovoy, linux-pci@vger.kernel.org, serguei.sagalovitch@amd.com,
	Paul.Blinzer@amd.com, Felix.Kuehling@amd.com, ben.sander@amd.com
Subject: Re: Enabling peer to peer device transactions for PCIe devices
Message-ID: <20161205194614.GA21132@obsidianresearch.com>
In-Reply-To: <10356964-c454-47fb-7fb3-8bf2a418b11b@deltatee.com>

On Mon, Dec 05, 2016 at 12:27:20PM -0700, Logan Gunthorpe wrote:
> On 05/12/16 12:14 PM, Jason Gunthorpe wrote:
> > But CMB sounds much more like the GPU case where there is a
> > specialized allocator handing out the BAR to consumers, so I'm not
> > sure a general purpose chardev makes a lot of sense?
>
> I don't think it will ever need to be as complicated as the GPU case.
> There will probably only ever be a relatively small amount of memory
> behind the CMB and really the only users are those doing P2P work.
> Thus the specialized allocator could be pretty simple and I expect it
> would be fine to just return -ENOMEM if there is not enough memory.

NVMe might have to deal with PCIe hot-unplug, which is a similar
problem class to the GPU case.

In any event the allocator still needs to track which regions are in
use and be able to hook 'free' from userspace. That does suggest it
should be integrated into the nvme driver and not be a bolt-on driver.
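To make that concrete, such a simple integrated allocator could be
little more than a genalloc pool over the CMB BAR plus a per-fd region
list. Rough sketch only, the cmb_* names are made up for illustration
and this is not what the nvme driver actually does; the point is that
tracking live regions per open fd gives us the 'free' hook when
userspace closes or crashes:

#include <linux/err.h>
#include <linux/genalloc.h>
#include <linux/list.h>
#include <linux/mm.h>
#include <linux/slab.h>

/* Hypothetical cmb_* names for illustration only. */
struct cmb_region {
	struct list_head node;	/* chained off the owning context */
	unsigned long addr;	/* CMB address handed out by the pool */
	size_t size;
};

/* One pool per controller, covering the CMB BAR region. */
static struct gen_pool *cmb_pool;

static int cmb_pool_init(unsigned long bar_addr, size_t bar_size)
{
	/* Page-granular allocations are probably enough for P2P users. */
	cmb_pool = gen_pool_create(PAGE_SHIFT, -1);
	if (!cmb_pool)
		return -ENOMEM;
	return gen_pool_add(cmb_pool, bar_addr, bar_size, -1);
}

/* Each open of the chardev gets its own region list ('ctx'). */
static struct cmb_region *cmb_alloc(struct list_head *ctx, size_t size)
{
	struct cmb_region *r = kzalloc(sizeof(*r), GFP_KERNEL);

	if (!r)
		return ERR_PTR(-ENOMEM);
	r->addr = gen_pool_alloc(cmb_pool, size);
	if (!r->addr) {
		/* The simple policy: no space left, just fail. */
		kfree(r);
		return ERR_PTR(-ENOMEM);
	}
	r->size = size;
	list_add(&r->node, ctx);
	return r;
}

/*
 * The 'free' hook: run from the fd's release op so everything a
 * process still holds goes back to the pool, even if it crashed.
 */
static void cmb_release_all(struct list_head *ctx)
{
	struct cmb_region *r, *tmp;

	list_for_each_entry_safe(r, tmp, ctx, node) {
		gen_pool_free(cmb_pool, r->addr, r->size);
		list_del(&r->node);
		kfree(r);
	}
}

Hot-unplug would additionally need the pool torn down and any
outstanding mappings revoked, which is where it starts to look like
the GPU case again.

Jason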