Subject: Re: [PATCH v3 1/2] drm/virtio: Add window server support
From: Tomeu Vizoso
To: Gerd Hoffmann
Cc: linux-kernel@vger.kernel.org, Zach Reizner, kernel@collabora.com,
 dri-devel@lists.freedesktop.org, virtualization@lists.linux-foundation.org,
 "Michael S. Tsirkin", David Airlie, Jason Wang, Stefan Hajnoczi
Date: Mon, 5 Feb 2018 15:46:17 +0100
References: <20180126135803.29781-1-tomeu.vizoso@collabora.com>
 <20180126135803.29781-2-tomeu.vizoso@collabora.com>
 <20180201163623.5cs2ysykg5wgulf4@sirius.home.kraxel.org>
 <49785e0d-936a-c3b4-62dd-aafc7083a942@collabora.com>
 <20180205122017.4vb5nlpodkq2uhxa@sirius.home.kraxel.org>
In-Reply-To: <20180205122017.4vb5nlpodkq2uhxa@sirius.home.kraxel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On 02/05/2018 01:20 PM, Gerd Hoffmann wrote:
>   Hi,
>
>>> Why not use virtio-vsock to run the wayland protocol?  I don't like
>>> the idea to duplicate something with very similar functionality in
>>> virtio-gpu.
>>
>> The reason for abandoning that approach was that the types of objects
>> that could be shared via virtio-vsock would be extremely limited.
>> Besides that being potentially confusing to users, it would mean from
>> the implementation side that either virtio-vsock would gain a
>> dependency on the drm subsystem, or an appropriate abstraction for
>> shareable buffers would need to be added for little gain.
>
> Well, no.  The idea is that virtio-vsock and virtio-gpu are used largely
> as-is, without knowing about each other.  The guest wayland proxy which
> does the buffer management talks to both devices.

Note that the proxy won't know anything about buffers if clients opt in
to zero-copy support (they allocate the buffers in a way that allows for
sharing with the host).

>>> If you have a guest proxy anyway, using virtio-vsock for the protocol
>>> stream and virtio-gpu for buffer sharing (and some day 3D rendering
>>> too) should work fine, I think.
>>
>> If I understand your proposal correctly, virtio-gpu would be used for
>> creating buffers that could be shared across domains, but something
>> equivalent to SCM_RIGHTS would still be needed in virtio-vsock?
>
> Yes, the proxy would send a reference to the buffer over virtio-vsock.
> I was more thinking about a struct specifying something like
> "resource-id 42 on the virtio-gpu-pci device in slot 1:23.0" instead of
> using SCM_RIGHTS.

Can you expand on this? I'm having trouble figuring out how this could
work in a way that keeps protocol data together with the resources it
refers to.

>> If the mechanics of passing presentation data were very complex, I
>> think this approach would have more merit. But as you can see from the
>> code, it isn't that bad.
>
> Well, the devil is in the details.  If you have multiple connections,
> you don't want one being able to stall the others, for example.  There
> are reasons it took quite a while to bring virtio-vsock to the state
> where it is today.

Yes, but at the same time there are use cases that virtio-vsock has to
support that aren't important in this scenario.

>>> What is the plan for the host side? I see basically two options.
>>> Either implement the host wayland proxy directly in qemu. Or
>>> implement it as a separate process, which then needs some help from
>>> qemu to get access to the buffers. The latter would allow qemu to run
>>> independently of the desktop session.
>>
>> Regarding synchronizing buffers, this will stop being needed in
>> subsequent commits, as all shared memory is allocated in the host and
>> mapped into the guest via KVM_SET_USER_MEMORY_REGION.
>
> --verbose please.  The qemu patches linked from the cover letter are
> not exactly helpful in understanding how all this is supposed to work.

A client will allocate a buffer with DRM_VIRTGPU_RESOURCE_CREATE, export
it, and pass the FD to the compositor (via the proxy).
During resource creation, QEMU would allocate a shmem buffer and map it
into the guest with KVM_SET_USER_MEMORY_REGION. The client would mmap
that resource and render to it. Because it's backed by host memory, the
compositor would be able to read it without any further copies.

>> This is already the case for buffers passed from the compositor to the
>> clients (see patch 2/2), and I'm working on the equivalent for buffers
>> from the guest to the host (clients still have to create buffers with
>> DRM_VIRTGPU_RESOURCE_CREATE, but they will be backed only by host
>> memory, so no calls to DRM_VIRTGPU_TRANSFER_TO_HOST are needed).
>
> Same here.  --verbose please.

When a FD comes from the compositor, QEMU mmaps it and maps that virtual
address into the guest via KVM_SET_USER_MEMORY_REGION.

When the guest proxy reads from the winsrv socket, it will get a FD that
wraps the buffer referenced above.

When the client reads from the guest proxy, it will get a FD that
references that same buffer and will mmap it. At that point, the client
is reading from the same physical pages the compositor wrote to.

To be clear, I'm not against solving this via some form of restricted FD
passing in virtio-vsock, but Stefan (added to Cc) thought it would be
cleaner to do it all within virtio-gpu. This is the thread where it was
discussed:

https://lkml.kernel.org/r/<2d73a3e1-af70-83a1-0e84-98b5932ea20c@collabora.com>

Thanks,

Tomeu