Date: Thu, 29 Jun 2017 10:39:14 +0200
From: Daniel Vetter
To: Gerd Hoffmann
Cc: "Zhang, Tina", Alex Williamson, "Wang, Zhenyu Z",
    intel-gfx@lists.freedesktop.org, linux-kernel@vger.kernel.org,
    "Chen, Xiaoguang", Kirti Wankhede, "Lv, Zhiyuan",
    intel-gvt-dev@lists.freedesktop.org
Subject: Re: [Intel-gfx] [PATCH v9 5/7] vfio: Define vfio based dma-buf operations
Message-ID: <20170629083914.qr2gpzy3tyomrfym@phenom.ffwll.local>

On Thu, Jun 29, 2017 at 08:41:53AM +0200, Gerd Hoffmann wrote:
>   Hi,
>
> > > Does gvt track the life cycle of all dma-bufs it has handed out?
> >
> > The V9 implementation does track the dma-bufs' life cycle. The
> > original idea was to leave the dma-bufs' life cycle management to
> > user mode.
>
> That is still the case: user space decides which dma-bufs it'll keep
> cached. But kernel space can see what user space is doing, so there
> is no need to explicitly tell the kernel whether a cached dma-buf
> still exists or not.

We do the same trick in drm_prime.c, keeping a cache of exported
dma-bufs around for re-exporting. Since for prime sharing the use-case
is almost always re-importing as a drm gem buffer again, we can then on
re-import also tell userspace whether it already has that buffer in its
userspace buffer manager; but that's an additional optimization.

With plain dma-buf we could achieve the same by wiring up a real stat()
implementation with unique inode numbers (atm they all share the
anon_inode singleton). But thus far no one has asked for that.

btw I'm a bit lost in the discussion (was on vacation), but I think all
the concerns I noticed with the initial RFC have already been raised,
so things look good. I'll check the next RFC once it shows up.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
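
For reference, the drm_prime.c trick Daniel describes boils down to a
per-open-file lookup table mapping a buffer object to the dma-buf
already exported for it, so a re-export returns the cached dma-buf with
an extra reference instead of creating a new one. Below is a minimal
sketch of that idea; the struct and function names are illustrative,
not the actual drm_prime.c symbols (the real code keeps rb-trees in
struct drm_prime_file_private).

#include <linux/dma-buf.h>
#include <linux/mutex.h>
#include <linux/rbtree.h>

/* Illustrative sketch, not the drm_prime.c code: one cache per open
 * file, mapping an exported buffer object to its dma-buf. */
struct export_cache {
	struct rb_root entries;
	struct mutex lock;
};

struct export_entry {
	struct rb_node node;
	void *buf;		/* the exported buffer object (key) */
	struct dma_buf *dmabuf;	/* cached dma-buf; entry holds a ref */
};

/* On export: if the buffer was already exported, hand back the cached
 * dma-buf with an extra reference instead of creating a fresh one. */
static struct dma_buf *export_cache_lookup(struct export_cache *cache,
					   void *buf)
{
	struct rb_node *n;
	struct dma_buf *ret = NULL;

	mutex_lock(&cache->lock);
	for (n = cache->entries.rb_node; n; ) {
		struct export_entry *e =
			rb_entry(n, struct export_entry, node);

		if (buf < e->buf) {
			n = n->rb_left;
		} else if (buf > e->buf) {
			n = n->rb_right;
		} else {
			get_dma_buf(e->dmabuf);	/* ref for the caller */
			ret = e->dmabuf;
			break;
		}
	}
	mutex_unlock(&cache->lock);
	return ret;
}

The cache entry is dropped from the dma-buf's release callback, which
is how "kernel space can see what user space is doing": when userspace
closes its last reference, dma_buf_ops.release runs and the exporter
can evict the entry without any explicit notification ioctl.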
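
To make the stat() idea concrete: if dma-bufs were given unique inode
numbers, an importer could detect "I already have this buffer" purely
from userspace by comparing inode identity across fds. A hypothetical
userspace sketch follows; note this does not work today, since every
dma-buf fd reports the shared anon_inode singleton and would therefore
compare equal.

#include <stdbool.h>
#include <sys/stat.h>

/* Hypothetical: only meaningful once dma-bufs carry unique inodes.
 * With the current anon_inode singleton, every dma-buf fd reports the
 * same st_ino, so this would call any two dma-bufs "the same". */
static bool same_dmabuf(int fd_a, int fd_b)
{
	struct stat a, b;

	if (fstat(fd_a, &a) < 0 || fstat(fd_b, &b) < 0)
		return false;
	return a.st_dev == b.st_dev && a.st_ino == b.st_ino;
}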