Subject: Re: [RfC PATCH] Add udmabuf misc device
From: Oleksandr Andrushchenko
Date: Wed, 11 Apr 2018 08:59:32 +0300
To: Dongwon Kim
Cc: Gerd Hoffmann, Oleksandr Andrushchenko, Tomeu Vizoso, David Airlie,
    open list, dri-devel, qemu-devel@nongnu.org,
    "moderated list:DMA BUFFER SHARING FRAMEWORK",
    "open list:DMA BUFFER SHARING FRAMEWORK", Matt Roper
X-Mailing-List: linux-kernel@vger.kernel.org

On 04/10/2018 08:26 PM, Dongwon Kim wrote:
> On Tue, Apr 10, 2018 at 09:37:53AM +0300, Oleksandr Andrushchenko wrote:
>> On 04/06/2018 09:57 PM, Dongwon Kim wrote:
>>> On Fri, Apr 06, 2018 at 03:36:03PM +0300, Oleksandr Andrushchenko wrote:
>>>> On 04/06/2018 02:57 PM, Gerd Hoffmann wrote:
>>>>> Hi,
>>>>>
>>>>>>> I fail to see any common ground for xen-zcopy and udmabuf ...
>>>>>> Does the above mean you can assume that xen-zcopy and udmabuf
>>>>>> can co-exist as two different solutions?
>>>>> Well, udmabuf route isn't fully clear yet, but yes.
>>>>>
>>>>> See also gvt (intel vgpu), where the hypervisor interface is abstracted
>>>>> away into a separate kernel module even though most of the actual vgpu
>>>>> emulation code is common.
>>>> Thank you for your input, I'm just trying to figure out
>>>> which of the three z-copy solutions intersect and how much.
>>>>>> And what about hyper-dmabuf?
>>> The xen z-copy solution is pretty similar fundamentally to hyper_dmabuf
>>> in terms of these core sharing features:
>>>
>>> 1. the sharing process - import prime/dmabuf from the producer -> extract
>>> underlying pages and get those shared -> return references for shared pages
> Another thing is that danvet was kind of against the idea of importing an
> existing dmabuf/prime buffer and forwarding it to the other domain due to
> synchronization issues. He proposed to make hyper_dmabuf only work as an
> exporter so that it can have full control over the buffer. I think we need
> to talk about this further as well.
Yes, I saw this. But this limits the use-cases so much.
For instance, running Android as a Guest (which uses ION to allocate
buffers) means that the HW composer will eventually import a dma-buf into
the DRM driver. Then, in the case of xen-front for example, it needs to be
shared with the backend (Host side). Of course, we can change user-space
to make xen-front allocate the buffers (make it the exporter), but what we
try to avoid is changing user-space which would otherwise have remained
unchanged.
So, I do think we have to support this use-case and just have to understand
the complexity.
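
For reference, and to keep the discussion concrete, step 1 above ("import ->
extract pages -> share -> return references") boils down to roughly the
following on the exporting side. This is only a sketch built on the generic
dma-buf and Xen grant-table APIs, not actual xen-zcopy or hyper_dmabuf code,
and error handling/cleanup is omitted:

/*
 * Sketch only: import a dma-buf by fd, walk its backing pages and grant
 * each of them to the other domain, collecting the grant references.
 */
#include <linux/dma-buf.h>
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>
#include <xen/grant_table.h>
#include <xen/page.h>

static int share_imported_dmabuf(int fd, struct device *dev,
                                 domid_t otherend, grant_ref_t *refs)
{
        struct dma_buf *dmabuf = dma_buf_get(fd);
        struct dma_buf_attachment *attach = dma_buf_attach(dmabuf, dev);
        struct sg_table *sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
        struct sg_page_iter piter;
        int i = 0;

        /* Grant every page backing the buffer to the other domain. */
        for_each_sg_page(sgt->sgl, &piter, sgt->nents, 0) {
                struct page *page = sg_page_iter_page(&piter);

                refs[i++] = gnttab_grant_foreign_access(otherend,
                                                xen_page_to_gfn(page), 0);
        }

        return i; /* number of grant references produced */
}

The array of refs is then what gets handed to the other domain, either one
ref per page collected behind a page directory (the xen-zcopy/displif route)
or condensed into a single id (the hyper_dmabuf route), as discussed below.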
>
> danvet, can you comment on this topic?
>
>>> 2. the page sharing mechanism - it uses Xen-grant-table.
>>>
>>> And to give you a quick summary of differences as far as I understand
>>> between two implementations (please correct me if I am wrong, Oleksandr.)
>>>
>>> 1. xen-zcopy is DRM specific - can import only DRM prime buffer
>>> while hyper_dmabuf can export any dmabuf regardless of originator
>> Well, this is true. And at the same time this is just a matter
>> of extending the API: xen-zcopy is a helper driver designed for
>> the xen-front/back use-case, so this is why it only has the DRM PRIME API.
>>> 2. xen-zcopy doesn't seem to have dma-buf synchronization between two VMs
>>> while (as danvet called it, remote dmabuf api sharing) hyper_dmabuf sends
>>> out a synchronization message to the exporting VM for synchronization.
>> This is true. Again, this is because of the use-cases it covers.
>> But having synchronization for a generic solution seems to be a good idea.
> Yeah, understood xen-zcopy works ok with your use case. But I am just curious
> if it is ok not to have any inter-domain synchronization in this sharing model.
The synchronization is done with the displif protocol [1].
> The buffer being shared is technically dma-buf and originator needs to be able
> to keep track of it.
As I am working in DRM terms, the tracking is done by the DRM core
for me for free. (This might be one of the reasons Daniel sees a DRM
based implementation as a very good fit from a code-reuse POV.)
>
>>> 3. 1-level references - when using grant-table for sharing pages, there will
>>> be same # of refs (each 8 byte)
>> To be precise, a grant ref is 4 bytes.
> You are right. Thanks for the correction. ;)
>
>>> as # of shared pages, which is passed to
>>> the userspace to be shared with importing VM in case of xen-zcopy.
>> The reason for that is that xen-zcopy is a helper driver, e.g.
>> the grant references come from the display backend [1], which implements
>> the Xen display protocol [2]. So, effectively the backend extracts references
>> from the frontend's requests and passes those to xen-zcopy as an array
>> of refs.
>>> Compared
>>> to this, hyper_dmabuf does multiple level addressing to generate only one
>>> reference id that represents all shared pages.
>> In the protocol [2] only one reference to the gref directory is passed
>> between VMs (and the gref directory is a singly-linked list of shared
>> pages containing all of the grefs of the buffer).
> ok, good to know. I will look into its implementation in more detail, but is
> this gref directory (chained grefs) something that can be used for any general
> memory sharing use case or is it just for xen-display (in the current code base)?
Not to mislead you: one grant ref is passed via the displif protocol,
but the page it's referencing contains the rest of the grant refs.

As to whether this can be used for any memory: yes. It is the same for
the sndif and displif Xen protocols, but defined twice, as strictly speaking
sndif and displif are two separate protocols.

While reviewing your RFC v2, one of the comments I had [2] was whether we
could start by defining such a generic protocol for hyper-dmabuf.
It can be a header file which not only has the description part
(which then becomes a part of a Documentation/...rst file), but also defines
all the required constants for requests and responses, message formats,
state diagrams etc., all in one place. Of course this protocol must not be
Xen specific, but OS/hypervisor agnostic.
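
To make that a bit more concrete, such a header could start out along the
lines below. Every name and message field here is purely illustrative and
would obviously be subject to the discussion, not something already defined
anywhere:

/*
 * hyper_dmabuf_proto.h - illustration only, all identifiers are made up.
 *
 * The idea: one OS/hypervisor-agnostic header carrying the operation
 * codes, response codes and a fixed message layout, with the actual
 * transport (grant refs on Xen, something else elsewhere) hidden behind
 * an opaque handle.
 */
#include <stdint.h>

/* Operations sent between the exporting and importing domains. */
#define HYPER_DMABUF_OP_EXPORT    0x01  /* announce a new shared buffer    */
#define HYPER_DMABUF_OP_UNEXPORT  0x02  /* revoke a previously shared one  */
#define HYPER_DMABUF_OP_SYNC      0x03  /* begin/end of access on importer */

/* Response codes. */
#define HYPER_DMABUF_RESP_OK      0x00
#define HYPER_DMABUF_RESP_ERROR   0x01

/* Fixed-size message format. */
struct hyper_dmabuf_msg {
        uint32_t op;        /* HYPER_DMABUF_OP_*                           */
        uint32_t buf_id;    /* buffer id, unique within exporting domain   */
        uint32_t nr_pages;  /* buffer size in pages                        */
        uint32_t handle;    /* hypervisor-specific reference, e.g. the     */
                            /* grant ref of the page directory on Xen      */
        uint8_t  priv[32];  /* private/meta data Matt Roper mentioned      */
};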
Having that will trigger a new round of discussion, so that we have it
all designed and discussed before we start implementing.
Besides the protocol, we have to design the UAPI part as well and
make sure hyper-dmabuf is not only accessible from user-space, but
that there will be a number of kernel-space users as well.
>
>>> 4. inter VM messaging (hyper_dmabuf only) - hyper_dmabuf has inter-vm msg
>>> communication defined for dmabuf synchronization and private data (meta
>>> info that Matt Roper mentioned) exchange.
>> This is true, xen-zcopy has no means for inter VM sync and meta-data,
>> simply because it doesn't have any code for inter VM exchange in it,
>> e.g. the inter VM protocol is handled by the backend [1].
>>> 5. driver-to-driver notification (hyper_dmabuf only) - importing VM gets
>>> notified when a new dmabuf is exported from the other VM - a uevent can be
>>> optionally generated when this happens.
>>>
>>> 6. structure - hyper_dmabuf is targeting to provide a generic solution for
>>> inter-domain dmabuf sharing for most hypervisors, which is why it has two
>>> layers as mattrope mentioned: a front-end that contains the standard API
>>> and a backend that is specific to the hypervisor.
>> Again, xen-zcopy is decoupled from inter VM communication.
>>>>> No idea, didn't look at it in detail.
>>>>>
>>>>> Looks pretty complex from a distant view. Maybe because it tries to
>>>>> build a communication framework using dma-bufs instead of a simple
>>>>> dma-buf passing mechanism.
>>> we started with simple dma-buf sharing but realized there are many
>>> things we need to consider in real use-cases, so we added communication,
>>> notification and dma-buf synchronization, then re-structured it into
>>> front-end and back-end (this made things more complicated..) since Xen
>>> was not our only target. Also, we thought passing the reference for the
>>> buffer (hyper_dmabuf_id) is not secure, so we added the uevent mechanism
>>> later.
>>>
>>>> Yes, I am looking at it now, trying to figure out the full story
>>>> and its implementation. BTW, the Intel guys were about to share some
>>>> test application for hyper-dmabuf, maybe I have missed it.
>>>> It could probably better explain the use-cases and the complexity
>>>> they have in hyper-dmabuf.
>>> One example is actually on github. If you want to take a look at it, please
>>> visit:
>>>
>>> https://github.com/downor/linux_hyper_dmabuf_test/tree/xen/simple_export
>> Thank you, I'll have a look.
>>>>> Like xen-zcopy it seems to depend on the idea that, since the hypervisor
>>>>> manages all memory, it is easy for guests to share pages with the help of
>>>>> the hypervisor.
>>>> So, for xen-zcopy we were not trying to make it generic,
>>>> it just solves display (dumb) zero-copying use-cases for Xen.
>>>> We implemented it as a DRM helper driver because we can't see any
>>>> other use-cases as of now.
>>>> For example, we also have a Xen para-virtualized sound driver, but
>>>> its buffer memory usage is not comparable to what display wants
>>>> and it works somewhat differently (e.g. there is no "frame done"
>>>> event, so one can't tell when the sound buffer can be "flipped").
>>>> At the same time, we do not use virtio-gpu, so this could probably
>>>> be one more candidate for shared dma-bufs some day.
>>>>> Which simply isn't the case on kvm.
>>>>>
>>>>> hyper-dmabuf and xen-zcopy could maybe share code, or hyper-dmabuf build
>>>>> on top of xen-zcopy.
>>>> Hm, I can imagine that: xen-zcopy could be library code for hyper-dmabuf
>>>> in terms of implementing all that page sharing fun in multiple directions,
>>>> e.g. Host->Guest, Guest->Host, Guest<->Guest.
>>>> But I'll let Matt and Dongwon comment on that.
>>> I think we can definitely collaborate. Especially, maybe we are using some
>>> outdated sharing mechanism/grant-table mechanism in our Xen backend (thanks
>>> for bringing that up, Oleksandr). However, the question is, once we
>>> collaborate somehow, can xen-zcopy's use-case use the standard API that
>>> hyper_dmabuf provides? I don't think we need different IOCTLs that do the
>>> same thing in the final solution.
>>>
>> If you think of xen-zcopy as a library (which implements Xen
>> grant references mangling) and a DRM PRIME wrapper on top of that
>> library, we can probably define a proper API for that library,
>> so both xen-zcopy and hyper-dmabuf can use it. What is more, I am
>> about to start upstreaming the Xen para-virtualized sound device driver
>> soon, which also uses similar code and the gref passing mechanism [3].
>> (Actually, I was about to upstream drm/xen-front, drm/xen-zcopy and
>> snd/xen-front and then propose a Xen helper library for sharing big
>> buffers, so the common code of the above drivers can be shared w/o
>> duplication.)
> I think it is possible to use your functions for the memory sharing part in
> hyper_dmabuf's backend (this 'backend' means the layer that does page sharing
> and inter-vm communication in a xen-specific way), so why don't we work on the
> "Xen helper library for sharing big buffers" first while we continue our
> discussion on the common API layer that can cover any dmabuf sharing cases.
>
Well, I would love for us to reuse the code that I have, but I also understand
that it was limited by my use-cases. So, I do not insist we have to ;)
If we start designing and discussing the hyper-dmabuf protocol we can of
course work on this helper library in parallel.
>> Thank you,
>> Oleksandr
>>
>> P.S. All, is it a good idea to move this out of the udmabuf thread into a
>> dedicated one?
> Either way is fine with me.
So, if you can start designing the protocol we may have a dedicated
mail thread for that. I will try to help with the protocol as much as I can.
>>>>> cheers,
>>>>>   Gerd
>>>>>
>>>> Thank you,
>>>> Oleksandr
>>>>
>>>> P.S. Sorry for making your original mail thread discuss things much
>>>> broader than your RFC...
>>>>
>> [1] https://github.com/xen-troops/displ_be
>> [2] https://elixir.bootlin.com/linux/v4.16-rc7/source/include/xen/interface/io/displif.h#L484
>> [3] https://elixir.bootlin.com/linux/v4.16-rc7/source/include/xen/interface/io/sndif.h

[1] https://elixir.bootlin.com/linux/v4.16-rc7/source/include/xen/interface/io/displif.h
[2] https://lists.xenproject.org/archives/html/xen-devel/2018-04/msg00685.html
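
P.S. For illustration only, the "Xen helper library for sharing big buffers"
mentioned above could expose an interface roughly like the one below. This is
just a sketch to anchor the discussion; every identifier is hypothetical and
nothing like this exists yet:

/*
 * Sketch of a possible "big buffer sharing" helper API, factoring out the
 * grant-reference mangling common to drm/xen-front, drm/xen-zcopy and
 * snd/xen-front. All identifiers are hypothetical.
 */
#include <linux/mm_types.h>
#include <xen/grant_table.h>

struct xen_bigbuf {
        int nr_pages;
        struct page **pages;    /* local pages backing the buffer          */
        grant_ref_t *refs;      /* one grant reference per page            */
        grant_ref_t dir_ref;    /* ref of the first page of the directory  */
};

/* Exporting side: grant all pages to otherend and build the gref directory,
 * returning the single directory ref to be sent over the PV protocol. */
int xen_bigbuf_grant(struct xen_bigbuf *buf, domid_t otherend);

/* Importing side: starting from a directory ref received from otherend,
 * collect the grefs and map the pages locally. */
int xen_bigbuf_map(struct xen_bigbuf *buf, domid_t otherend,
                   grant_ref_t dir_ref, int nr_pages);

/* Common teardown: unmap/ungrant and free the references. */
void xen_bigbuf_end(struct xen_bigbuf *buf);

Both a DRM PRIME wrapper (the xen-zcopy case) and a hyper_dmabuf backend
could then sit on top of such a library without duplicating the grant
handling.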