Subject: Re: [Intel-gfx] [PATCH v9 5/7] vfio: Define vfio based dma-buf operations
From: Zhi Wang
Date: Fri, 23 Jun 2017 15:49:10 +0800
To: Gerd Hoffmann, Alex Williamson
Cc: "Wang, Zhenyu Z", intel-gfx@lists.freedesktop.org,
 linux-kernel@vger.kernel.org, "Chen, Xiaoguang", "Zhang, Tina",
 Kirti Wankhede, "Lv, Zhiyuan", intel-gvt-dev@lists.freedesktop.org
Message-ID: <594CC7F6.7050507@intel.com>
In-Reply-To: <1498202819.24807.3.camel@redhat.com>

Hi:

Thanks for the discussions! If the userspace application already
maintains an LRU list, it looks like we don't need the generation
anymore: the application can look up the guest framebuffer in the LRU
list by "offset", so either way it knows whether this is a new guest
framebuffer or not. If it is a new guest framebuffer, a new dma-buf fd
should be generated; if it is an old one, the application can just
show that framebuffer. (A rough sketch of such a lookup is inlined
further below.)

Thanks,
Zhi.

On 06/23/17 15:26, Gerd Hoffmann wrote:
>    Hi,
>
>> Is this only going to support accelerated driver output, not basic
>> VGA modes for BIOS interaction?
>
> Right now there is no vgabios or uefi support for the vgpu.
>
> But even with that in place there still is the problem that the
> display device initialization happens before the guest runs and
> therefore there isn't a plane yet ...
>
>>> Right now the experimental intel patches throw errors in case no
>>> plane exists (yet). Maybe we should have a "bool is_enabled" field
>>> in the plane_info struct, so drivers can use that to signal whether
>>> the guest has programmed a valid video mode or not (likewise for
>>> the cursor, which doesn't exist with fbcon, only when running
>>> xorg). With that in place, using the QUERY_PLANE ioctl also for
>>> probing looks reasonable.
>>
>> Sure, or -ENOTTY for ioctl not implemented vs -EINVAL for no
>> available plane, but then that might not help the user know how a
>> plane would become available if it were available.
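To make the offset-keyed lookup described at the top of this mail
concrete, here is a minimal userspace sketch. All names in it
(struct fb_entry, fb_cache_lookup, and so on) are made up for
illustration and are not part of any proposed interface:

    /* Userspace LRU cache of dma-buf fds, keyed by the plane
     * "offset" reported by the query ioctl. */
    #include <stdint.h>
    #include <sys/queue.h>

    struct fb_entry {
            uint64_t offset;            /* plane offset from the query */
            int dmabuf_fd;              /* cached dma-buf handle */
            TAILQ_ENTRY(fb_entry) lru;
    };

    TAILQ_HEAD(fb_cache, fb_entry);

    /* Return a cached fd for this offset, or -1 on a miss. A hit
     * moves the entry to the head so cold entries age toward the
     * tail. */
    static int fb_cache_lookup(struct fb_cache *cache, uint64_t offset)
    {
            struct fb_entry *e;

            TAILQ_FOREACH(e, cache, lru) {
                    if (e->offset == offset) {
                            TAILQ_REMOVE(cache, e, lru);
                            TAILQ_INSERT_HEAD(cache, e, lru);
                            return e->dmabuf_fd;
                    }
            }
            return -1;  /* miss: ask the kernel for a new dma-buf fd */
    }

On a miss the application would request a fresh dma-buf fd from the
kernel and insert it at the head; entries evicted from the tail just
get their fd closed, which also bounds the number of open handles.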
> So maybe an "enum plane_state" (instead of "bool is_enabled")? So we
> can clearly distinguish the ENABLED, DISABLED, NOT_SUPPORTED cases?
>
>>> Yes, I'd leave that to userspace. So, when the generation changes
>>> userspace knows the guest changed the plane. It could be a
>>> configuration the guest has used before (and which userspace could
>>> have a cached dma-buf handle for), or it could be something new.
>>
>> But userspace also doesn't know that a dmabuf generation will ever
>> be visited again,
>
> kernel wouldn't know either, only the guest knows ...
>
>> so they're bound to have some stale descriptors. Are we thinking
>> userspace would have some LRU list of dmabufs so that they don't
>> collect too many? Each uses some resources, do we just rely on open
>> file handles to set an upper limit?
>
> Yep, this is exactly what my qemu patches are doing, keep an LRU
> list.
>
>>>> What happens to existing dmabuf fds when the generation updates,
>>>> do they stop refreshing?
>>>
>>> Depends on what the guest is doing ;)
>>>
>>> The dma-buf is just a host-side handle for the piece of video
>>> memory where the guest stored the framebuffer.
>>
>> So the resources the user is holding if they don't release their
>> dmabuf are potentially non-trivial.
>
> Not really. Host IGD has a certain amount of memory, some of it is
> assigned to the guest, the guest stores the framebuffer there, and
> the dma-buf is a host handle (drm object, usable for rendering ops)
> for the guest framebuffer. So it doesn't use many resources. Some
> memory is needed for management structs, but not for the actual
> data, as that is in the video memory dedicated to the guest.
>
>>> Ok, if we want to support multiple regions. Do we? Using the
>>> offset we can place multiple planes in a single region. And I'm
>>> not sure nvidia plans to use multiple planes in the first place ...
>>
>> I don't want to take a driver ioctl interface as a throw-away,
>> one-time-use exercise. If we can think of such questions now, let's
>> define how they work. A device could have multiple graphics regions
>> with multiple planes within each region.
>
> I'd suggest to settle for one of these two: either one region with
> multiple planes inside (using offset), or one region per plane. I'd
> prefer the former. When going for the latter then yes, we have to
> specify the region. I'd name the field region_id then to make clear
> what it is.
>
> What would be the use case for multiple planes?
>
> Cursor support? We already have plane_type for that.
>
> Multihead support? We'll need (at minimum) a head_id field for that
> (for both dma-buf and region).
>
> Pageflipping support? Nothing needed, query_plane will simply return
> the currently visible plane. The region only needs to be big enough
> to fit the framebuffer twice. Then the driver can flip between two
> buffers, pointing to the one qemu should display using "offset".
>
>> Do we also want to exclude that a device needs to be strictly
>> region or dmabuf? Maybe it does both.
>
> Very unlikely IMHO.
>
>> Or maybe it supports dmabuf-ng (i.e. whatever comes next).
>
> Possibly happens some day, but who knows what interfaces we'll need
> to support that ...
>
>>> vfio_device_query {
>>>      u32 argsz;
>>>      u32 flags;
>>>      enum query_type; /* or use flags for that */
>>
>> We don't have an infinite number of ioctls
>
> The limited ioctl number space is a good reason indeed.
> Ok, let's take this route then.
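Putting the pieces of this subthread together, one possible shape for
the multiplexed query is sketched below. This is only an illustration
of the direction agreed on above, not settled uapi; every field and
constant name is made up for the example:

    #include <linux/types.h>

    /* Tri-state plane status, per the "enum plane_state" suggestion,
     * so probing can distinguish "no plane yet" from "never". */
    enum vfio_plane_state {
            VFIO_PLANE_STATE_NOT_SUPPORTED = 0, /* device has no such plane */
            VFIO_PLANE_STATE_DISABLED      = 1, /* no valid video mode yet */
            VFIO_PLANE_STATE_ENABLED       = 2, /* plane live, info valid */
    };

    struct vfio_device_query {
            __u32 argsz;
            __u32 flags;
            __u32 query_type;   /* e.g. a hypothetical VFIO_QUERY_PLANE */
            __u32 plane_state;  /* enum vfio_plane_state, out */
            __u64 offset;       /* plane offset within the region, out */
            __u32 width;        /* guest framebuffer geometry, out */
            __u32 height;
            __u32 stride;
            __u32 drm_format;   /* DRM fourcc of the guest framebuffer */
    };

Using plain __u32 fields for query_type and plane_state (rather than
raw enum types) keeps the struct layout fixed across compilers, which
matters for a uapi struct.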
>
> cheers,
>    Gerd