Date: Tue, 17 Apr 2018 09:59:28 +0200
From: Daniel Vetter
To: Dongwon Kim
Cc: Oleksandr Andrushchenko, jgross@suse.com, Artem Mygaiev,
    konrad.wilk@oracle.com, airlied@linux.ie, Oleksandr Andrushchenko,
    linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
    "Potrola, MateuszX", daniel.vetter@intel.com,
    xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com
Subject: Re: [PATCH 0/1] drm/xen-zcopy: Add Xen zero-copy helper DRM driver
Message-ID: <20180417075928.GT31310@phenom.ffwll.local>
References: <20180329131931.29957-1-andr2000@gmail.com>
 <5d8fec7f-956c-378f-be90-f45029385740@gmail.com>
 <20180416192905.GA18096@downor-Z87X-UD5H>
In-Reply-To: <20180416192905.GA18096@downor-Z87X-UD5H>

On Mon, Apr 16, 2018 at 12:29:05PM -0700, Dongwon Kim wrote:
> Yeah, I definitely agree on the idea of expanding the use case to the
> general domain where dmabuf sharing is used. However, what you are
> targeting with the proposed changes is identical to the core design of
> hyper_dmabuf.
>
> On top of these basic functionalities, hyper_dmabuf has driver-level
> inter-domain communication, which is needed for dma-buf remote tracking
> (no fence forwarding though), event triggering and event handling, extra
> metadata exchange, and a hyper_dmabuf_id that represents grefs
> (grefs are shared implicitly at the driver level).

This really isn't a positive design aspect of hyper_dmabuf imo. The core
code in xen-zcopy (ignoring the ioctl side, which will be cleaned up) is
very simple & clean.

If there's a clear need later on we can extend that. But for now xen-zcopy
seems to cover the basic use-case needs, so it gets the job done.

> Also, it is designed with a frontend (common core framework) + backend
> (hypervisor-specific comms and memory sharing) structure for portability.
> We just can't limit this feature to Xen because we want to use the same
> uapis not only for Xen but also for other applicable hypervisors, like
> ACRN.

See the discussion around udmabuf and the needs for kvm. I think trying to
make an ioctl/uapi that works for multiple hypervisors is misguided - it
likely won't work.

On top of that, the 2nd hypervisor you're aiming to support is ACRN. That
one isn't upstream yet, nor have I seen any patches proposing to land
Linux support for ACRN. Since it's not upstream, it doesn't really matter
for upstream consideration. I doubt that ACRN will use the same grant
references as Xen, so the same uapi won't work on ACRN as on Xen anyway.

> So I am wondering whether we can start with this hyper_dmabuf, then
> modify it for your use-case if needed, and polish and fix any glitches
> if we want to use this for all general dma-buf use-cases.

Imo xen-zcopy is a much more reasonable starting point for upstream, which
can then be extended (if really proven to be necessary).

> Also, I still have one unresolved question regarding the export/import
> flow in both hyper_dmabuf and xen-zcopy.
>
> @danvet: Would this flow (guest1->import existing dmabuf->share
> underlying pages->guest2->map shared pages->create/export dmabuf) be
> acceptable now?

I think if you just look at the pages, and make sure you handle the
sg_page == NULL case, it's ok-ish. It's not great, but mostly it should
work. The real trouble with hyper_dmabuf was the forwarding of all these
calls, instead of just passing around a list of grant references.
-Daniel
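[Editor's sketch: the two points Daniel raises above, in one possible
kernel-side form - collecting the struct pages behind an imported dma-buf's
sg_table while rejecting exporters without struct page backing (the
sg_page == NULL case), then turning those pages into a plain list of grant
references. All function names and the error-handling policy are
hypothetical; this is not code from xen-zcopy or hyper_dmabuf.]

#include <linux/dma-buf.h>
#include <linux/mm.h>
#include <linux/scatterlist.h>
#include <xen/grant_table.h>
#include <xen/page.h>

/*
 * Walk the sg_table obtained from dma_buf_map_attachment() and collect
 * the backing struct pages. Exporters that have no struct page backing
 * (sg_page() == NULL, e.g. some VRAM exporters) cannot be granted to
 * another domain, so fail cleanly in that case.
 */
static int zcopy_collect_pages(struct sg_table *sgt, struct page **pages,
			       unsigned int max_pages)
{
	struct scatterlist *sg;
	unsigned int count = 0;
	int i;

	for_each_sg(sgt->sgl, sg, sgt->nents, i) {
		struct page *page = sg_page(sg);
		unsigned int j, n = sg->length >> PAGE_SHIFT;

		/* No struct page backing: nothing we could grant. */
		if (!page)
			return -EINVAL;

		for (j = 0; j < n; j++) {
			if (count >= max_pages)
				return -ENOSPC;
			pages[count++] = nth_page(page, j);
		}
	}

	return count;
}

/*
 * Grant each collected page to @otherend, so that only a flat list of
 * grant references needs to be passed to the other domain - the model
 * Daniel suggests instead of forwarding dma-buf calls.
 */
static int zcopy_grant_pages(struct page **pages, unsigned int count,
			     domid_t otherend, grant_ref_t *refs)
{
	unsigned int i;

	for (i = 0; i < count; i++) {
		int ref = gnttab_grant_foreign_access(otherend,
				xen_page_to_gfn(pages[i]), 0 /* read-write */);

		if (ref < 0)
			return ref;	/* no rollback in this sketch */
		refs[i] = ref;
	}

	return 0;
}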
>
> Regards,
> DW
>
> On Mon, Apr 16, 2018 at 05:33:46PM +0300, Oleksandr Andrushchenko wrote:
> > Hello, all!
> >
> > After discussing the xen-zcopy and hyper-dmabuf [1] approaches, it
> > seems that xen-zcopy can be made to not depend on DRM core any more
> > and be dma-buf centric (which it in fact is).
> >
> > The DRM code was mostly there for dma-buf's FD import/export with the
> > DRM PRIME UAPI and with DRM use-cases in mind, but it turns out that
> > if the proposed 2 IOCTLs (DRM_XEN_ZCOPY_DUMB_FROM_REFS and
> > DRM_XEN_ZCOPY_DUMB_TO_REFS) are extended to also provide a file
> > descriptor of the corresponding dma-buf, then the PRIME stuff in the
> > driver is not needed anymore.
> >
> > That being said, xen-zcopy can safely be detached from DRM and moved
> > from drivers/gpu/drm/xen into drivers/xen/dma-buf-backend(?).
> >
> > This driver then becomes a universal way to turn any shared buffer
> > between Dom0/DomD and DomU(s) into a dma-buf, e.g. one can create a
> > dma-buf from any grant references, or represent a dma-buf as grant
> > references for export.
> >
> > This way the driver can be used not only for DRM use-cases, but also
> > for other use-cases which may require zero copying between domains.
> > For example, the use-cases we are going to work on in the near future
> > will use V4L: we plan to support cameras, codecs etc., and all of
> > these will benefit greatly from zero copying. Potentially, even
> > block/net devices may benefit, but this needs some evaluation.
> >
> > I would love to hear comments from the authors of hyper-dmabuf and
> > from the Xen community, as well as from dri-devel and other
> > interested parties.
> >
> > Thank you,
> > Oleksandr
> >
> > On 03/29/2018 04:19 PM, Oleksandr Andrushchenko wrote:
> > >From: Oleksandr Andrushchenko
> > >
> > >Hello!
> > >
> > >When using the Xen PV DRM frontend driver, the backend side needs to
> > >copy the display buffers' contents (filled by the frontend's
> > >user-space) into buffers allocated on the backend side. Given the
> > >size of display buffers and the frame rate, this may result in
> > >unneeded huge data bus occupation and performance loss.
> > >
> > >This helper driver allows implementing zero-copy use-cases when
> > >using the Xen para-virtualized frontend display driver, by
> > >implementing a DRM/KMS helper driver running on the backend's side.
> > >It utilizes the PRIME buffers API to share the frontend's buffers
> > >with physical device drivers on the backend's side:
> > >
> > > - a dumb buffer created on the backend's side can be shared
> > >   with the Xen PV frontend driver, so it directly writes
> > >   into the backend's domain memory (into the buffer exported from
> > >   the DRM/KMS driver of a physical display device)
> > > - a dumb buffer allocated by the frontend can be imported
> > >   into a physical device's DRM/KMS driver, thus achieving
> > >   zero-copy in that direction as well
> > >
> > >For that reason a number of IOCTLs are introduced [see the editorial
> > >sketch after this list]:
> > > - DRM_XEN_ZCOPY_DUMB_FROM_REFS
> > >   This will create a DRM dumb buffer from grant references provided
> > >   by the frontend
> > > - DRM_XEN_ZCOPY_DUMB_TO_REFS
> > >   This will grant references to a dumb/display buffer's memory
> > >   provided by the backend
> > > - DRM_XEN_ZCOPY_DUMB_WAIT_FREE
> > >   This will block until the dumb buffer with the provided wait
> > >   handle is freed
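[Editor's sketch: how a backend might drive the first of the quoted IOCTLs
from user space. The struct layout, field names, and ioctl number below
are illustrative guesses reconstructed from the cover letter's description
only; the real definitions live in include/uapi/drm/xen_zcopy_drm.h in the
patch and may differ.]

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>

typedef uint32_t grant_ref_t;		/* as seen by user space */

struct drm_xen_zcopy_dumb_from_refs {	/* hypothetical layout */
	uint32_t num_grefs;	/* number of grant references */
	uint64_t grefs;		/* user pointer to grant_ref_t array */
	uint64_t otherend_id;	/* domid of the exporting domain */
	uint32_t handle;	/* out: dumb buffer (GEM) handle */
	uint32_t wait_handle;	/* out: for DRM_XEN_ZCOPY_DUMB_WAIT_FREE */
};

/* hypothetical ioctl number */
#define DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS \
	_IOWR('d', 0x40, struct drm_xen_zcopy_dumb_from_refs)

static int dumb_from_refs(int drm_fd, grant_ref_t *refs, uint32_t num,
			  uint64_t otherend_domid, uint32_t *handle)
{
	struct drm_xen_zcopy_dumb_from_refs req;

	memset(&req, 0, sizeof(req));
	req.num_grefs = num;
	req.grefs = (uintptr_t)refs;	/* grefs shared by the frontend */
	req.otherend_id = otherend_domid;

	if (ioctl(drm_fd, DRM_IOCTL_XEN_ZCOPY_DUMB_FROM_REFS, &req) < 0)
		return -1;

	/* The dumb buffer is now backed by the frontend's pages. */
	*handle = req.handle;
	return 0;
}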
> > >
> > >With this helper driver I was able to drop CPU usage from 17% to 3%
> > >on a Renesas R-Car M3 board.
> > >
> > >This was tested with Renesas' Wayland-KMS and the backend running as
> > >DRM master.
> > >
> > >Thank you,
> > >Oleksandr
> > >
> > >Oleksandr Andrushchenko (1):
> > >  drm/xen-zcopy: Add Xen zero-copy helper DRM driver
> > >
> > > Documentation/gpu/drivers.rst               |   1 +
> > > Documentation/gpu/xen-zcopy.rst             |  32 +
> > > drivers/gpu/drm/xen/Kconfig                 |  25 +
> > > drivers/gpu/drm/xen/Makefile                |   5 +
> > > drivers/gpu/drm/xen/xen_drm_zcopy.c         | 880 ++++++++++++++++++++++++++++
> > > drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c | 154 +++++
> > > drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h |  38 ++
> > > include/uapi/drm/xen_zcopy_drm.h            | 129 ++++
> > > 8 files changed, 1264 insertions(+)
> > > create mode 100644 Documentation/gpu/xen-zcopy.rst
> > > create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy.c
> > > create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.c
> > > create mode 100644 drivers/gpu/drm/xen/xen_drm_zcopy_balloon.h
> > > create mode 100644 include/uapi/drm/xen_zcopy_drm.h
> >
> > [1] https://lists.xenproject.org/archives/html/xen-devel/2018-02/msg01202.html

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch