Date: Fri, 6 Apr 2018 11:57:46 -0700
From: Dongwon Kim
To: Oleksandr Andrushchenko
Cc: Gerd Hoffmann, Oleksandr Andrushchenko, Tomeu Vizoso, David Airlie,
    open list, dri-devel, qemu-devel@nongnu.org,
    "moderated list:DMA BUFFER SHARING FRAMEWORK",
    "open list:DMA BUFFER SHARING FRAMEWORK", Matt Roper
Subject: Re: [RfC PATCH] Add udmabuf misc device
Message-ID: <20180406185746.GA4983@downor-Z87X-UD5H>
On Fri, Apr 06, 2018 at 03:36:03PM +0300, Oleksandr Andrushchenko wrote:
> On 04/06/2018 02:57 PM, Gerd Hoffmann wrote:
> >   Hi,
> >
> >>> I fail to see any common ground for xen-zcopy and udmabuf ...
> >> Does the above mean you can assume that xen-zcopy and udmabuf
> >> can co-exist as two different solutions?
> > Well, udmabuf route isn't fully clear yet, but yes.
> >
> > See also gvt (intel vgpu), where the hypervisor interface is abstracted
> > away into a separate kernel module even though most of the actual vgpu
> > emulation code is common.
> Thank you for your input, I'm just trying to figure out
> which of the three z-copy solutions intersect and how much.
> >> And what about hyper-dmabuf?

The xen-zcopy solution is fundamentally pretty similar to hyper_dmabuf
in terms of its core sharing features:

1. the sharing process - import a PRIME/dma-buf from the producer ->
   extract the underlying pages and get them shared -> return references
   for the shared pages

2. the page sharing mechanism - it uses the Xen grant table.

And to give you a quick summary of the differences between the two
implementations as far as I understand them (please correct me if I am
wrong, Oleksandr):

1. xen-zcopy is DRM-specific - it can import only DRM PRIME buffers,
   while hyper_dmabuf can export any dma-buf regardless of its originator.

2. xen-zcopy doesn't seem to have dma-buf synchronization between the two
   VMs, while hyper_dmabuf (what danvet called remote dma-buf API sharing)
   sends synchronization messages to the exporting VM.

3. 1-level references - when using the grant table for sharing pages,
   xen-zcopy ends up with as many refs (8 bytes each) as there are shared
   pages, and that whole list is passed to userspace to be shared with the
   importing VM. Compared to this, hyper_dmabuf uses multi-level addressing
   to generate a single reference id that represents all shared pages
   (see the rough sketch right after this list).

4. inter-VM messaging (hyper_dmabuf only) - hyper_dmabuf defines inter-VM
   message communication for dma-buf synchronization and for exchanging
   private data (the meta info that Matt Roper mentioned).

5. driver-to-driver notification (hyper_dmabuf only) - the importing VM
   gets notified when a new dmabuf is exported from the other VM; a uevent
   can optionally be generated when this happens.

6. structure - hyper_dmabuf aims to provide a generic solution for
   inter-domain dma-buf sharing for most hypervisors, which is why it has
   the two layers mattrope mentioned: a front-end that contains the
   standard API and a back-end that is specific to the hypervisor.
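To make the multi-level addressing idea in point 3 a bit more concrete,
here is a rough, purely illustrative sketch. It is NOT the actual
hyper_dmabuf (or xen-zcopy) code: gnttab_grant_foreign_access() is the
real Xen grant-table API, but every other name below is made up and
error handling is omitted.

/*
 * Instead of handing the importer one grant ref per data page
 * (xen-zcopy style), pack the per-page refs into an extra granted page
 * and pass only the single ref of that table page across.
 */
#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/mm.h>
#include <xen/grant_table.h>

#define REFS_PER_PAGE  (PAGE_SIZE / sizeof(grant_ref_t))

static int share_pages_one_ref(domid_t peer, struct page **pages, int n,
                               grant_ref_t *top_ref)
{
        grant_ref_t *table;
        struct page *table_page;
        int i;

        if (n > REFS_PER_PAGE)  /* one indirection level only, for brevity */
                return -EINVAL;

        table_page = alloc_page(GFP_KERNEL);
        if (!table_page)
                return -ENOMEM;
        table = page_address(table_page);

        /* grant each data page to the peer, keep the refs in the table page */
        for (i = 0; i < n; i++)
                table[i] = gnttab_grant_foreign_access(peer,
                                        page_to_pfn(pages[i]), 0 /* rw */);

        /* grant the table page itself; only this single ref crosses over */
        *top_ref = gnttab_grant_foreign_access(peer,
                                        page_to_pfn(table_page), 1 /* ro */);
        return 0;
}

With one indirection level like this, a 4 MB buffer (1024 pages) still
needs 1024 grant refs internally, but only one ref has to travel to the
importing VM instead of an 8 KB list of refs, which is roughly the
difference point 3 describes.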
> > No idea, didn't look at it in detail.
> >
> > Looks pretty complex from a distant view. Maybe because it tries to
> > build a communication framework using dma-bufs instead of a simple
> > dma-buf passing mechanism.

We started with simple dma-buf sharing but realized there are many things
we need to consider in real use-cases, so we added communication,
notification and dma-buf synchronization, then re-structured it into a
front-end and a back-end (this made things more complicated..) since Xen
was not our only target. Also, we thought passing the reference for the
buffer (hyper_dmabuf_id) around is not secure, so we added the uevent
mechanism later.

> Yes, I am looking at it now, trying to figure out the full story
> and its implementation. BTW, Intel guys were about to share some
> test application for hyper-dmabuf, maybe I have missed one.
> It could probably better explain the use-cases and the complexity
> they have in hyper-dmabuf.

One example is actually on GitHub. If you want to take a look at it,
please visit:

https://github.com/downor/linux_hyper_dmabuf_test/tree/xen/simple_export

> >
> > Like xen-zcopy it seems to depend on the idea that the hypervisor
> > manages all memory, so it is easy for guests to share pages with the
> > help of the hypervisor.
> So, for xen-zcopy we were not trying to make it generic,
> it just solves display (dumb) zero-copying use-cases for Xen.
> We implemented it as a DRM helper driver because we can't see any
> other use-cases as of now.
> For example, we also have a Xen para-virtualized sound driver, but
> its buffer memory usage is not comparable to what display wants
> and it works somewhat differently (e.g. there is no "frame done"
> event, so one can't tell when the sound buffer can be "flipped").
> At the same time, we do not use virtio-gpu, so this could probably
> be one more candidate for shared dma-bufs some day.
> > Which simply isn't the case on kvm.
> >
> > hyper-dmabuf and xen-zcopy could maybe share code, or hyper-dmabuf
> > build on top of xen-zcopy.
> Hm, I can imagine that: xen-zcopy could be library code for hyper-dmabuf
> in terms of implementing all that page sharing fun in multiple directions,
> e.g. Host->Guest, Guest->Host, Guest<->Guest.
> But I'll let Matt and Dongwon comment on that.

I think we can definitely collaborate. In particular, maybe we are using
some outdated sharing/grant-table mechanism in our Xen backend (thanks for
bringing that up, Oleksandr). However, the question is, once we collaborate
somehow, can xen-zcopy's use case use the standard API that hyper_dmabuf
provides? I don't think we need different IOCTLs that do the same thing in
the final solution (a rough sketch of what a common interface could look
like is at the end of this mail).

> >
> > cheers,
> >   Gerd
> >
> Thank you,
> Oleksandr
>
> P.S. Sorry for making your original mail thread discuss things much
> broader than your RFC...
>
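As an appendix to the point about not needing different IOCTLs: below is a
purely hypothetical sketch of what a single, shared export ioctl could look
like. None of these names or fields exist in hyper_dmabuf or xen-zcopy
today; it is only meant to illustrate the kind of interface both use cases
might converge on.

/* Hypothetical common uapi -- not the actual hyper_dmabuf or xen-zcopy
 * interface; all names and fields here are illustrative only.
 */
#include <linux/ioctl.h>
#include <linux/types.h>

struct xdmabuf_export {
        __s32 dmabuf_fd;     /* in:  dma-buf to share with the peer domain */
        __u32 remote_domid;  /* in:  importing domain id */
        __u64 priv;          /* in:  user pointer to a private meta-data blob */
        __u32 priv_size;     /* in:  size of that blob in bytes */
        __u32 flags;         /* in:  e.g. whether dma-buf sync messages are wanted */
        __u64 handle;        /* out: single reference id handed to the importer */
};

#define XDMABUF_IOCTL_EXPORT  _IOWR('X', 0x00, struct xdmabuf_export)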