Date: Wed, 20 Dec 2017 10:59:57 +0100
From: Daniel Vetter
To: Dongwon Kim
Cc: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	"Potrola, MateuszX", dri-devel@lists.freedesktop.org,
	Intel Graphics Development, intel-gvt-dev@lists.freedesktop.org
Subject: Re: [RFC PATCH 01/60] hyper_dmabuf: initial working version of
	hyper_dmabuf drv
Message-ID: <20171220095957.GL26573@phenom.ffwll.local>
References: <1513711816-2618-1-git-send-email-dongwon.kim@intel.com>
	<20171219232731.GA6497@downor-Z87X-UD5H>
In-Reply-To: <20171219232731.GA6497@downor-Z87X-UD5H>

On Tue, Dec 19, 2017 at 03:27:31PM -0800, Dongwon Kim wrote:
> I forgot to include this brief information about this patch series.
>
> This patch series contains the implementation of a new device driver,
> hyper_dmabuf, which provides a method for DMA-BUF sharing across
> different OSes running as VMs on the same hypervisor-powered platform.
>
> Detailed information about this driver is described in a high-level doc
> added by the second patch of the series:
>
> [RFC PATCH 02/60] hyper_dmabuf: added a doc for hyper_dmabuf sharing
>
> I am attaching the 'Overview' section here as a summary.
>
> ------------------------------------------------------------------------------
> Section 1. Overview
> ------------------------------------------------------------------------------
>
> The Hyper_DMABUF driver is a Linux device driver running on multiple
> Virtual Machines (VMs). It extends DMA-BUF sharing to VM environments
> where multiple OS instances need to share the same physical data
> without copying it across VMs.
>
> To share a DMA_BUF across VMs, an instance of the Hyper_DMABUF driver on
> the exporting VM (the “exporter”) imports a local DMA_BUF from the
> original producer of the buffer, then re-exports it to the importing VM
> (the “importer”) with a unique ID, hyper_dmabuf_id, for the buffer.
>
> When the export happens, another instance of the Hyper_DMABUF driver on
> the importer registers the hyper_dmabuf_id in its database, together
> with reference information for the shared physical pages associated
> with the DMA_BUF.
>
> The actual mapping of the DMA_BUF on the importer’s side is done by the
> Hyper_DMABUF driver when user space issues the IOCTL command to access
> the shared DMA_BUF. The Hyper_DMABUF driver works as both an importing
> and exporting driver as is; that is, no special configuration is
> required. Consequently, only a single module per VM is needed to enable
> cross-VM DMA_BUF exchange.

So I know that most dma-buf implementations (especially lots of importers
in drivers/gpu) break this, but fundamentally only the original exporter
is allowed to know about the underlying pages. There are various
scenarios where a dma-buf isn't backed by anything like a struct page.

So your first step of noodling the underlying struct page out of the
dma-buf is kinda breaking the abstraction, and I think it's not a good
idea to have that. Especially not for sharing across VMs.

I think a better design would be if hyper-dmabuf were the dma-buf
exporter in both of the VMs, and you'd import it everywhere you want to
in some gpu/video/whatever driver in the VMs. That way hyper-dmabuf is
always in control of the pages, and a lot of the troublesome forwarding
you currently need to do disappears.
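To sketch what I mean (completely untested, and all the hyper_dmabuf_*
structs and function names below are made up for illustration; only
DEFINE_DMA_BUF_EXPORT_INFO()/dma_buf_export() and the sg/dma helpers are
the real kernel API): hyper-dmabuf owns the dma_buf_ops, so importers on
either side only ever see an ordinary dma-buf and never touch the
grant-backed pages directly:

  #include <linux/dma-buf.h>
  #include <linux/dma-mapping.h>
  #include <linux/scatterlist.h>
  #include <linux/slab.h>
  #include <linux/fcntl.h>

  /* Hypothetical per-buffer state: pages the hypervisor granted this VM. */
  struct hyper_dmabuf_priv {
  	struct page **pages;
  	unsigned int nr_pages;
  };

  static struct sg_table *
  hyper_dmabuf_map_dma_buf(struct dma_buf_attachment *attach,
  			 enum dma_data_direction dir)
  {
  	struct hyper_dmabuf_priv *priv = attach->dmabuf->priv;
  	struct sg_table *sgt;

  	sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);
  	if (!sgt)
  		return ERR_PTR(-ENOMEM);

  	/* Build an sg_table from the granted pages and map it for the
  	 * importing device. Only hyper-dmabuf ever sees the pages. */
  	if (sg_alloc_table_from_pages(sgt, priv->pages, priv->nr_pages, 0,
  				      (size_t)priv->nr_pages << PAGE_SHIFT,
  				      GFP_KERNEL)) {
  		kfree(sgt);
  		return ERR_PTR(-ENOMEM);
  	}

  	if (!dma_map_sg(attach->dev, sgt->sgl, sgt->nents, dir)) {
  		sg_free_table(sgt);
  		kfree(sgt);
  		return ERR_PTR(-ENOMEM);
  	}

  	return sgt;
  }

  static void hyper_dmabuf_unmap_dma_buf(struct dma_buf_attachment *attach,
  				       struct sg_table *sgt,
  				       enum dma_data_direction dir)
  {
  	dma_unmap_sg(attach->dev, sgt->sgl, sgt->nents, dir);
  	sg_free_table(sgt);
  	kfree(sgt);
  }

  static const struct dma_buf_ops hyper_dmabuf_ops = {
  	.map_dma_buf	= hyper_dmabuf_map_dma_buf,
  	.unmap_dma_buf	= hyper_dmabuf_unmap_dma_buf,
  	/* .release, .mmap, .map etc. elided for brevity */
  };

  /* Each VM-side instance exports the buffer itself instead of
   * re-exporting someone else's dma-buf. */
  static struct dma_buf *hyper_dmabuf_export_buf(struct hyper_dmabuf_priv *priv,
  					       size_t size)
  {
  	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);

  	exp_info.ops = &hyper_dmabuf_ops;
  	exp_info.size = size;
  	exp_info.flags = O_CLOEXEC;
  	exp_info.priv = priv;

  	return dma_buf_export(&exp_info);
  }

I.e. on the exporting side the producer would allocate through (or at
least hand ownership to) hyper-dmabuf, and on the importing side the
pages come from the grant references, but in both cases the struct pages
stay entirely inside hyper-dmabuf.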
2nd thing: This seems very much related to what's happening around gvt
and allowing at least the host (in a kvm-based VM environment) to access
some of the dma-bufs (or well, framebuffers in general) that the client
is using. Adding some mailing lists for that.
-Daniel

> ------------------------------------------------------------------------------
>
> There is a git repository at github.com where this series of patches is
> integrated into a Linux kernel tree based on the commit:
>
> commit ae64f9bd1d3621b5e60d7363bc20afb46aede215
> Author: Linus Torvalds
> Date:   Sun Dec 3 11:01:47 2017 -0500
>
>     Linux 4.15-rc2
>
> https://github.com/downor/linux_hyper_dmabuf.git hyper_dmabuf_integration_v3

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch