Subject: Re: [PATCH v3 1/2] drm/virtio: Add window server support
From: Tomeu Vizoso
To: Gerd Hoffmann
Cc: linux-kernel@vger.kernel.org, Zach Reizner, kernel@collabora.com,
    dri-devel@lists.freedesktop.org,
    virtualization@lists.linux-foundation.org, "Michael S. Tsirkin",
    David Airlie, Jason Wang, Stefan Hajnoczi
Date: Fri, 9 Feb 2018 12:14:37 +0100
Message-ID: <38880e66-b676-1170-c2ca-5a5603c5b521@collabora.com>
In-Reply-To: <04687943-847b-25a7-42ef-a21b4c7ef0cf@collabora.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Hi Gerd and Stefan,

can we reach agreement on whether vsock should be involved in this?

Thanks,

Tomeu

On 02/07/2018 10:49 AM, Tomeu Vizoso wrote:
> On 02/06/2018 03:23 PM, Gerd Hoffmann wrote:
>>   Hi,
>>
>>>> Hmm?  I'm assuming the wayland client (in the guest) talks to the
>>>> wayland proxy, using the wayland protocol, like it would talk to a
>>>> wayland display server.  Buffers must be passed from client to
>>>> server/proxy somehow, probably using fd passing, so where is the
>>>> problem?
>>>>
>>>> Or did I misunderstand the role of the proxy?
>>>
>>> Hi Gerd,
>>>
>>> it's starting to look to me that we're talking a bit past each other,
>>> so I have pasted below a few words describing my current plan
>>> regarding the 3 key scenarios that I'm addressing.
>>
>> You are describing the details, but I'm missing the big picture ...
>>
>> So, virtualization aside, how do buffers work in wayland?
>> As far as I know, it goes like this:
>>
>>    (a) software rendering: the client allocates a shared memory buffer,
>>        renders into it, then passes a file handle for that shmem block
>>        together with some metadata (size, format, ...) to the wayland
>>        server.
>>
>>    (b) gpu rendering: the client opens a render node, allocates a
>>        buffer, asks the gpu to render into it, exports the buffer as a
>>        dma-buf (DRM_IOCTL_PRIME_HANDLE_TO_FD), and passes this to the
>>        wayland server (again including the metadata, of course).
>>
>> Is that correct?
>
> Both are correct descriptions of typical behaviors. But it isn't
> spec'ed anywhere who has to do the buffer allocation.
>
> In practical terms, the buffer allocation happens either in the 2D GUI
> toolkit (gtk+, for example) or in the EGL implementation. Someone using
> this in a real product would most probably be interested in avoiding
> any extra copies and making sure that both allocate buffers via
> virtio-gpu, for example.
>
> Depending on the use case, they could also be interested in supporting
> unmodified clients with an extra copy per buffer presentation.
>
> That's to say that if we cannot come up with a zero-copy solution for
> unmodified clients, we should at least support zero-copy for
> cooperative clients.
>
>> Now, with virtualization added to the mix it becomes a bit more
>> complicated.  Client and server are unmodified.  The client talks to
>> the guest proxy (wayland protocol).  The guest proxy talks to the host
>> proxy (protocol to be defined).  The host proxy talks to the server
>> (wayland protocol).
>>
>> Buffers must be managed along the way, and we want to avoid copying
>> the buffers around.  The host proxy could be implemented directly in
>> qemu, or as a separate process which cooperates with qemu for buffer
>> management.
>>
>> Fine so far?
>
> Yep.
>
>>> I really think that whatever we come up with needs to support 3D
>>> clients as well.
>> Let's start with 3d clients; I think these are easier.  They simply
>> use virtio-gpu for 3d rendering as usual.  When they are done, the
>> rendered buffer already lives in a host drm buffer (because virgl runs
>> the actual rendering on the host gpu).  So the client passes the
>> dma-buf to the guest proxy, the guest proxy imports it to look up the
>> resource-id, passes the resource-id to the host proxy, the host proxy
>> looks up the drm buffer and exports it as a dma-buf, then passes it to
>> the server.  Done, without any extra data copies.
>
> Yep.
>
>>> Creation of shareable buffer by guest
>>> -------------------------------------
>>>
>>> 1. Client requests the virtio driver to create a buffer suitable for
>>> sharing with the host (DRM_VIRTGPU_RESOURCE_CREATE)
>>
>> client or guest proxy?
>
> As per the above, the GUI toolkit could have been modified so the
> client directly creates a shareable buffer, and renders directly to it
> without any extra copies.
>
> If clients cannot be modified, then it's the guest proxy that has to
> create the shareable buffer and keep it in sync with the client's
> non-shareable buffer at the right times, by intercepting
> wl_surface.commit messages and copying buffer contents.
>
>>> 4. QEMU maps that buffer into the guest's address space
>>> (KVM_SET_USER_MEMORY_REGION), passes the guest PFN to the virtio
>>> driver
>>
>> That part is problematic.  The host can't simply allocate something in
>> the physical address space, because most physical address space
>> management is done by the guest.  All pci bars are mapped by the guest
>> firmware, for example (or by the guest OS in the case of hotplug).
>
> How can KVM_SET_USER_MEMORY_REGION ever be safely used then? I would
> have expected that callers of that ioctl have enough knowledge to be
> able to choose a physical address that won't conflict with the guest's
> kernel.
>
> I see that the ivshmem device in QEMU registers the memory region in
> BAR 2 of a PCI device instead.
> Would that be better in your opinion?
>
>>> 4. QEMU pops data+buffers from the virtqueue, looks up the shmem FD
>>> for each resource, sends the data + FDs to the compositor with
>>> SCM_RIGHTS
>>
>> BTW: Is there a 1:1 relationship between buffers and shmem blocks?  Or
>> does the wayland protocol allow for offsets in the buffer metadata, so
>> you can place multiple buffers in a single shmem block?
>
> The latter:
> https://wayland.freedesktop.org/docs/html/apa.html#protocol-spec-wl_shm_pool
>
> Regards,
>
> Tomeu