Subject: Re: [RFC PATCH] drm/virtio: Export resource handles via DMA-buf API
From: Keiichi Watanabe
Date: Thu, 19 Sep 2019 20:41:19 +0900
To: Gerd Hoffmann
Cc: Tomasz Figa, David Airlie, Daniel Vetter, dri-devel,
    virtualization@lists.linux-foundation.org, Linux Kernel Mailing List,
    David Stevens, Stéphane Marchesin, Zach Reizner, Pawel Osciak
In-Reply-To: <20190913110544.htmslqt4yzejugs4@sirius.home.kraxel.org>
References: <20190912094121.228435-1-tfiga@chromium.org>
    <20190912123821.rraib5entkcxt5p5@sirius.home.kraxel.org>
    <20190913080707.unhyoezesvfhx5np@sirius.home.kraxel.org>
    <20190913110544.htmslqt4yzejugs4@sirius.home.kraxel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Sep 13, 2019 at 8:05 PM Gerd Hoffmann wrote:
>
>   Hi,
>
> > > No. DMA master address space in virtual machines is pretty much the
> > > same as it is on physical machines. So, on x86 without iommu, identical
> > > to (guest) physical address space. You can't re-define it like that.
> >
> > That's not true. Even on x86 without iommu the DMA address space can
> > be different from the physical address space.
>
> On a standard pc (like the ones emulated by qemu) that is the case.
> It's different on !x86, and it also changes with an iommu being present.
>
> But that is not the main point here. The point is that the dma master
> address already has a definition and you can't simply change that.
>
> > That could still be just a simple addition/subtraction by a constant,
> > but still, the two are explicitly defined without any guarantee that a
> > simple mapping between one and the other exists.
>
> Sure.
>
> > "A CPU cannot reference a dma_addr_t directly because there may be
> > translation between its physical address space and the DMA address
> > space."
>
> Also note that the dma address space is device-specific. In case an iommu
> is present in the system you can have *multiple* dma address spaces,
> separating (groups of) devices from each other. So passing a dma
> address from one device to another isn't going to work.
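(For reference, the device-specific nature of dma addresses is already
visible in the standard DMA mapping API. A minimal sketch follows; the
function name is made up, and the handle returned is only meaningful to
the device that mapped the buffer:)

    #include <linux/dma-mapping.h>

    /* Sketch: map a CPU buffer for DMA by one specific device. */
    static int example_map_for_device(struct device *dev, void *cpu_buf,
                                      size_t len)
    {
            dma_addr_t handle;

            handle = dma_map_single(dev, cpu_buf, len, DMA_TO_DEVICE);
            if (dma_mapping_error(dev, handle))
                    return -ENOMEM;

            /*
             * 'handle' lives in dev's DMA address space, which may differ
             * from the CPU physical address (fixed offset, IOMMU, ...).
             * Handing it to a different device, possibly behind another
             * IOMMU domain, is exactly what cannot work.
             */
            dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);
            return 0;
    }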
> > > > However, we could as well introduce a separate DMA address
> > > > space if resource handles are not the right way to refer to the
> > > > memory from other virtio devices.
> > >
> > > s/other virtio devices/other devices/
> > >
> > > dma-bufs are for buffer sharing between devices, not limited to
> > > virtio. You can't re-define that in some virtio-specific way.
> >
> > We don't need to limit this to virtio devices only. In fact, I
> > foresee this having a use case with the emulated USB host controller,
> > which isn't a virtio device.
>
> What exactly?
>
> > That said, I deliberately referred to virtio to keep the scope of the
> > problem under control. If there is a solution that could work without
> > such an assumption, I'm more than open to discussing it, of course.
>
> But it might lead to taking virtio-specific (or virtualization-specific)
> shortcuts which will hurt in the long run ...
>
> > As per my understanding of the DMA address, anything that lets the DMA
> > master access the target memory would qualify, and there would be no
> > need for an IOMMU in between.
>
> Yes. But that DMA address is already defined by the platform; you can't
> freely re-define it. Well, you sort of can when you design your own
> virtual iommu for qemu. But even then you can't just pass around magic
> cookies masqueraded as dma addresses. You have to make sure they actually
> form an address space, without buffers overlapping, ...
>
> > Exactly. The very specific first scenario that we want to start with
> > is allocating host memory through virtio-gpu and using that memory
> > both as output of a video decoder and as input (texture) to Virgil3D.
> > The memory needs to be allocated specifically by the host, as only the
> > host can know the requirements for memory allocation of the video
> > decode accelerator hardware.
>
> So you probably have some virtio-video-decoder. You allocate a gpu
> buffer, export it as dma-buf, import it into the decoder, then let the
> video decoder render to it. Right?

Right. I sent an RFC about virtio-vdec (video decoder) to the virtio-dev
ML today:
https://lists.oasis-open.org/archives/virtio-dev/201909/msg00111.html

In the current design, the driver and the device use an integer to
identify a DMA-buf. We call this integer a "resource handle" in the RFC.
Our prototype driver uses virtio-gpu's exposed resource handles for this
purpose.
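To sketch the idea (the struct and field names below are hypothetical,
not definitions from the RFC): a request on the virtqueue would carry
this integer handle instead of a dma address, e.g.:

    /* Hypothetical request layout, for illustration only. */
    #include <linux/types.h>

    struct virtio_vdec_buffer_ref {
            __le32 resource_handle; /* integer identifying the DMA-buf to
                                     * the host; in our prototype this is
                                     * a virtio-gpu resource handle */
            __le32 offset;          /* byte offset within that buffer */
            __le32 length;          /* number of bytes to use */
    };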
Regards,
Keiichi

> Using dmabufs makes sense for sure. But we need a separate field in
> struct dma_buf for an (optional) host dmabuf identifier; we can't just
> hijack the dma address.
>
> cheers,
>   Gerd
>
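(For reference, a minimal sketch of what such a separate field could look
like. Every name below is hypothetical; nothing here is from a posted
patch:)

    /* Sketch only: an optional host-side identifier on struct dma_buf,
     * kept separate from, and unrelated to, any dma address. */
    struct dma_buf {
            /* ... existing fields (size, ops, priv, resv, ...) ... */

            u64 host_handle;        /* opaque id understood by the host */
            bool has_host_handle;   /* whether host_handle is valid */
    };

    /* An importing driver would then ask for the identifier explicitly
     * instead of misusing a dma address: */
    static int example_get_host_handle(struct dma_buf *buf, u64 *handle)
    {
            if (!buf->has_host_handle)
                    return -ENOENT; /* exporter did not attach one */
            *handle = buf->host_handle;
            return 0;
    }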