Date: Fri, 12 Apr 2019 07:49:24 +0200
From: Gerd Hoffmann
To: Gurchetan Singh
Cc: ML dri-devel, virtio@lists.oasis-open.org, David Airlie,
    "Michael S. Tsirkin", Marc-André Lureau, Tomeu Vizoso, Jason Wang,
    "open list:VIRTIO CORE, NET AND BLOCK DRIVERS", open list
Subject: Re: [PATCH 3/3] virtio-gpu api: VIRTIO_GPU_F_RESSOURCE_V2
Message-ID: <20190412054924.dvh6bfxfrbgvezxr@sirius.home.kraxel.org>
References: <20190410114227.25846-1-kraxel@redhat.com>
 <20190410114227.25846-4-kraxel@redhat.com>
 <20190411050322.mfxo5mrwwzajlz3h@sirius.home.kraxel.org>

On Thu, Apr 11, 2019 at 06:36:15PM -0700, Gurchetan Singh wrote:
> On Wed, Apr 10, 2019 at 10:03 PM Gerd Hoffmann wrote:
> >
> > > > +/* VIRTIO_GPU_CMD_RESOURCE_CREATE_V2 */
> > > > +struct virtio_gpu_cmd_resource_create_v2 {
> > > > +        struct virtio_gpu_ctrl_hdr hdr;
> > > > +        __le32 resource_id;
> > > > +        __le32 format;
> > > > +        __le32 width;
> > > > +        __le32 height;
> > > > +        /* 3d only */
> > > > +        __le32 target;
> > > > +        __le32 bind;
> > > > +        __le32 depth;
> > > > +        __le32 array_size;
> > > > +        __le32 last_level;
> > > > +        __le32 nr_samples;
> > > > +        __le32 flags;
> > > > +};
> > >
> > > I assume this is always backed by some host side allocation, without
> > > any guest side pages associated with it?
> >
> > No.  It is not backed at all yet.  Workflow would be like this:
> >
> >   (1) VIRTIO_GPU_CMD_RESOURCE_CREATE_V2
> >   (2) VIRTIO_GPU_CMD_MEMORY_CREATE (see patch 2)
> >   (3) VIRTIO_GPU_CMD_RESOURCE_MEMORY_ATTACH (see patch 2)
>
> Thanks for the clarification.
>
> > You could also create a larger pool with VIRTIO_GPU_CMD_MEMORY_CREATE,
> > then go attach multiple resources to it.
> >
> > > If so, do we want the option for the guest to allocate?
> >
> > Allocation options are handled by VIRTIO_GPU_CMD_MEMORY_CREATE
> > (initially guest allocated only, i.e. what virtio-gpu supports today;
> > the plan is to add other allocation types later on).
>
> You want to cover Vulkan, host-allocated dma-bufs, and guest-allocated
> dma-bufs with this, correct?  Let me know if it's a non-goal :-)

Yes, even though it is not yet clear how we are going to handle
host-allocated buffers in the vhost-user case ...

> If so, we might want to distinguish between memory types (kind of like
> memoryTypeIndex in Vulkan).  [Assuming memory_id is like resource_id]

For the host-allocated buffers we surely want that, yes.
For guest-allocated memory regions it isn't useful, I think ...

> 1) Vulkan seems the most straightforward
>
> virtio_gpu_cmd_memory_create --> create kernel data structure,
> vkAllocateMemory on the host or import guest memory into Vulkan,
> depending on the memory type
> virtio_gpu_cmd_resource_create_v2 --> vkCreateImage +
> vkGetImageMemoryRequirements on host
> virtio_gpu_cmd_resource_attach_memory --> vkBindImageMemory on host

Yes.

Note 1: The virtio_gpu_cmd_memory_create + virtio_gpu_cmd_resource_create_v2
ordering doesn't matter, so you can send virtio_gpu_cmd_resource_create_v2
first to figure out stride and size, then adjust the memory size
accordingly.

Note 2: The old virtio_gpu_cmd_resource_create variants can be used too
if you don't need the _v2 features.

Note 3: If I understand things correctly it would be valid to create a
memory pool (allocate one big chunk of memory) with vkAllocateMemory,
then bind multiple images at different offsets to it.
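To make the Vulkan mapping a bit more concrete, here is a rough
guest-side sketch of that three-command sequence.  Only the
resource_create_v2 layout is the one quoted from this patch; the memory
create/attach layouts and the queue_ctrl_cmd() helper are simplified
placeholders, NOT the actual patch 2 definitions, and uint32_t/uint64_t
stand in for __le32/__le64 assuming a little-endian guest:

#include <stddef.h>
#include <stdint.h>

struct virtio_gpu_ctrl_hdr {
        uint32_t type;
        uint32_t flags;
        uint64_t fence_id;
        uint32_t ctx_id;
        uint32_t padding;
};

struct virtio_gpu_cmd_resource_create_v2 {
        struct virtio_gpu_ctrl_hdr hdr;
        uint32_t resource_id;
        uint32_t format;
        uint32_t width, height;
        /* 3d only */
        uint32_t target, bind, depth, array_size;
        uint32_t last_level, nr_samples, flags;
};

/* placeholder layout, see patch 2 for the real one */
struct virtio_gpu_cmd_memory_create {
        struct virtio_gpu_ctrl_hdr hdr;
        uint32_t memory_id;
        uint32_t nr_entries;        /* scatter list of guest pages follows */
        uint64_t size;
};

/* placeholder layout, see patch 2 for the real one */
struct virtio_gpu_cmd_resource_memory_attach {
        struct virtio_gpu_ctrl_hdr hdr;
        uint32_t resource_id;
        uint32_t memory_id;
        uint64_t offset;            /* where in the memory pool it lives */
};

/* hypothetical transport helper, stands in for the ctrl virtqueue */
void queue_ctrl_cmd(const void *cmd, size_t len);

void create_vulkan_image(uint32_t res_id, uint32_t mem_id)
{
        struct virtio_gpu_cmd_resource_create_v2 create = {0};
        struct virtio_gpu_cmd_memory_create mem = {0};
        struct virtio_gpu_cmd_resource_memory_attach attach = {0};

        /* (1) host side: vkCreateImage + vkGetImageMemoryRequirements;
         *     the resource info response carries stride[] and size[]. */
        create.resource_id = res_id;
        create.format = 1;          /* VIRTIO_GPU_FORMAT_B8G8R8A8_UNORM */
        create.width = 1024;
        create.height = 768;
        queue_ctrl_cmd(&create, sizeof(create));

        /* (2) host side: vkAllocateMemory (or import of guest pages),
         *     sized according to the resource info response above.    */
        mem.memory_id = mem_id;
        mem.size = 4 * 1024 * 1024;
        queue_ctrl_cmd(&mem, sizeof(mem));

        /* (3) host side: vkBindImageMemory at the given offset; per
         *     note 3, several images may bind into one memory pool.   */
        attach.resource_id = res_id;
        attach.memory_id = mem_id;
        attach.offset = 0;
        queue_ctrl_cmd(&attach, sizeof(attach));
}

Steps (1) and (2) can also be swapped, as per note 1.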
> 2) With a guest allocated dma-buf using some new allocation library,
>
> virtio_gpu_cmd_resource_create_v2 --> host returns metadata describing
> optimal allocation
> virtio_gpu_cmd_memory_create --> allocate guest memory pages since
> it's a guest memory type
> virtio_gpu_cmd_resource_attach_memory --> associate guest pages with
> resource in kernel, send iovecs to host for bookkeeping

virtio_gpu_cmd_memory_create sends the iovecs.  Otherwise correct.

> 3) With gbm it's a little trickier,
>
> virtio_gpu_cmd_resource_create_v2 --> gbm_bo_create_with_modifiers,
> get metadata in return

Only get metadata in return.

> virtio_gpu_cmd_memory_create --> create kernel data structure, but
> don't allocate pages, nothing on the host

Memory allocation happens here.  It probably makes sense to have a
virtio_gpu_cmd_memory_create_host command here, because the parameters
we need are quite different from the guest-allocated case.

Maybe we even need a virtio_gpu_cmd_memory_create_host_for_resource
variant, given that gbm doesn't have raw memory buffers without any
format attached to them.

> > > > +/* VIRTIO_GPU_RESP_OK_RESOURCE_INFO */
> > > > +struct virtio_gpu_resp_resource_info {
> > > > +        struct virtio_gpu_ctrl_hdr hdr;
> > > > +        __le32 stride[4];
> > > > +        __le32 size[4];
> > > > +};
> > >
> > > offsets[4] needed too
> >
> > That is in VIRTIO_GPU_CMD_RESOURCE_MEMORY_ATTACH ...
>
> I assume the offsets aren't returned by
> VIRTIO_GPU_CMD_RESOURCE_MEMORY_ATTACH.

Yes, they are sent by the guest.

> How does the guest know which offsets in memory will be compatible
> to share with display, camera, etc?

Is it good enough to align offsets to page boundaries?

> Also, do you want to cover the case where the resource is backed by
> three separate memory regions (VK_IMAGE_CREATE_DISJOINT_BIT)?

Good point.  I guess we should make memory_id in
virtio_gpu_cmd_resource_attach_memory an array then, so you can specify
a different memory region for each plane.
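Something along these lines, perhaps (a sketch of the idea only, not a
layout from the patch series; types as in the sketch further up, and
the 4 mirrors the stride[4]/size[4] arrays in
virtio_gpu_resp_resource_info):

/* sketch: per-plane memory regions, e.g. for disjoint planar images */
struct virtio_gpu_cmd_resource_attach_memory {
        struct virtio_gpu_ctrl_hdr hdr;
        uint32_t resource_id;
        uint32_t padding;
        uint32_t memory_id[4];      /* one memory region per plane */
        uint64_t offset[4];         /* page-aligned offset into each */
};

A resource backed by a single region would then simply repeat the same
memory_id for each plane it uses.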
cheers,
  Gerd