From: Gerd Hoffmann <kraxel@redhat.com>
To: dri-devel@lists.freedesktop.org
Cc: Gerd Hoffmann <kraxel@redhat.com>, Maarten Lankhorst, Maxime Ripard,
    Sean Paul, David Airlie, Daniel Vetter,
    linux-kernel@vger.kernel.org (open list),
    virtualization@lists.linux-foundation.org (open list:VIRTIO GPU DRIVER)
Subject: [PATCH v3 08/12] drm/virtio: rework virtio_gpu_execbuffer_ioctl fencing
Date: Wed, 19 Jun 2019 11:04:16 +0200
Message-Id: <20190619090420.6667-9-kraxel@redhat.com>
In-Reply-To: <20190619090420.6667-1-kraxel@redhat.com>
References: <20190619090420.6667-1-kraxel@redhat.com>

Use gem reservation helpers and direct reservation_object_* calls
instead of ttm.

v3: Also attach the array of gem objects to the virtio command buffer,
so we can drop the object references in the completion callback.  This
is needed because the ttm fence helpers grab a reference for us, but
the gem helpers don't.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
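A note for reviewers, not part of the commit message: the v3 change
hinges on the gem object array keeping its object references alive
until the host has completed the command.  The array type itself is
not part of this diff; the hunks below assume a layout roughly like
the following, with the field names inferred from the buflist->objs
and buflist->nents accesses, so treat this as a sketch of the helper's
header rather than a copy of it:

        struct drm_gem_object_array {
                u32 nents;                      /* number of valid entries */
                struct drm_gem_object *objs[];  /* one reference held per entry */
        };

drm_gem_array_from_handles() takes one reference per object at lookup
time, and drm_gem_array_put_free() drops them all and frees the array;
that pairing is what replaces the reference the ttm fence helpers used
to take for us.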
 drivers/gpu/drm/virtio/virtgpu_drv.h   |  6 ++-
 drivers/gpu/drm/drm_gem_array_helper.c |  2 +
 drivers/gpu/drm/virtio/virtgpu_ioctl.c | 62 +++++++++++---------------
 drivers/gpu/drm/virtio/virtgpu_vq.c    | 16 ++++---
 4 files changed, 43 insertions(+), 43 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 77ac69a8e6cc..573173c35c48 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -33,6 +33,7 @@
 #include
 #include
+#include
 #include
 #include
 #include
@@ -115,9 +116,9 @@ struct virtio_gpu_vbuffer {
 	char *resp_buf;
 	int resp_size;
 
 	virtio_gpu_resp_cb resp_cb;
 
+	struct drm_gem_object_array *objs;
 	struct list_head list;
 };
@@ -301,7 +302,8 @@ void virtio_gpu_cmd_context_detach_resource(struct virtio_gpu_device *vgdev,
 					    uint32_t resource_id);
 void virtio_gpu_cmd_submit(struct virtio_gpu_device *vgdev,
 			   void *data, uint32_t data_size,
-			   uint32_t ctx_id, struct virtio_gpu_fence *fence);
+			   uint32_t ctx_id, struct virtio_gpu_fence *fence,
+			   struct drm_gem_object_array *objs);
 void virtio_gpu_cmd_transfer_from_host_3d(struct virtio_gpu_device *vgdev,
 					  uint32_t resource_id, uint32_t ctx_id,
 					  uint64_t offset, uint32_t level,
diff --git a/drivers/gpu/drm/drm_gem_array_helper.c b/drivers/gpu/drm/drm_gem_array_helper.c
index d35c77c4a02d..fde6c2e63253 100644
--- a/drivers/gpu/drm/drm_gem_array_helper.c
+++ b/drivers/gpu/drm/drm_gem_array_helper.c
@@ -57,6 +57,7 @@ drm_gem_array_from_handles(struct drm_file *drm_file, u32 *handles, u32 nents)
 	}
 	return objs;
 }
+EXPORT_SYMBOL(drm_gem_array_from_handles);
 
 /**
  * drm_gem_array_put_free -- put gem objects and free array.
@@ -74,3 +75,4 @@ void drm_gem_array_put_free(struct drm_gem_object_array *objs)
 	}
 	drm_gem_array_free(objs);
 }
+EXPORT_SYMBOL(drm_gem_array_put_free);
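Stripped of error handling and the sync_file plumbing, the
virtgpu_ioctl.c hunks that follow amount to the flow below.  This is a
condensed sketch of the post-patch code, not the literal diff:

        /* look up all handles at once; the array holds one ref per object */
        buflist = drm_gem_array_from_handles(drm_file, bo_handles,
                                             exbuf->num_bo_handles);

        /* ww-mutex lock every reservation object, deadlock-free */
        ret = drm_gem_lock_reservations(buflist->objs, buflist->nents,
                                        &ticket);

        /* attach the out-fence directly, then unlock; no ttm_eu_* calls */
        for (i = 0; i < exbuf->num_bo_handles; i++)
                reservation_object_add_excl_fence(buflist->objs[i]->resv,
                                                  &out_fence->f);
        drm_gem_unlock_reservations(buflist->objs, buflist->nents, &ticket);

        /* hand the array to the vbuffer; refs are dropped on completion */
        virtio_gpu_cmd_submit(vgdev, buf, exbuf->size,
                              vfpriv->ctx_id, out_fence, buflist);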
diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
index 5cffd2e54c04..21ebf5cdb8bc 100644
--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
@@ -105,14 +105,11 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
 	struct drm_virtgpu_execbuffer *exbuf = data;
 	struct virtio_gpu_device *vgdev = dev->dev_private;
 	struct virtio_gpu_fpriv *vfpriv = drm_file->driver_priv;
-	struct drm_gem_object *gobj;
 	struct virtio_gpu_fence *out_fence;
-	struct virtio_gpu_object *qobj;
 	int ret;
 	uint32_t *bo_handles = NULL;
 	void __user *user_bo_handles = NULL;
-	struct list_head validate_list;
-	struct ttm_validate_buffer *buflist = NULL;
+	struct drm_gem_object_array *buflist = NULL;
 	int i;
 	struct ww_acquire_ctx ticket;
 	struct sync_file *sync_file;
@@ -155,15 +152,10 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
 		return out_fence_fd;
 	}
 
-	INIT_LIST_HEAD(&validate_list);
 	if (exbuf->num_bo_handles) {
-
 		bo_handles = kvmalloc_array(exbuf->num_bo_handles,
-					   sizeof(uint32_t), GFP_KERNEL);
-		buflist = kvmalloc_array(exbuf->num_bo_handles,
-					 sizeof(struct ttm_validate_buffer),
-					 GFP_KERNEL | __GFP_ZERO);
-		if (!bo_handles || !buflist) {
+					    sizeof(uint32_t), GFP_KERNEL);
+		if (!bo_handles) {
 			ret = -ENOMEM;
 			goto out_unused_fd;
 		}
@@ -175,25 +167,22 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
 			goto out_unused_fd;
 		}
 
-		for (i = 0; i < exbuf->num_bo_handles; i++) {
-			gobj = drm_gem_object_lookup(drm_file, bo_handles[i]);
-			if (!gobj) {
-				ret = -ENOENT;
-				goto out_unused_fd;
-			}
-
-			qobj = gem_to_virtio_gpu_obj(gobj);
-			buflist[i].bo = &qobj->tbo;
-
-			list_add(&buflist[i].head, &validate_list);
+		buflist = drm_gem_array_from_handles(drm_file, bo_handles,
+						     exbuf->num_bo_handles);
+		if (!buflist) {
+			ret = -ENOENT;
+			goto out_unused_fd;
 		}
 		kvfree(bo_handles);
 		bo_handles = NULL;
 	}
 
-	ret = virtio_gpu_object_list_validate(&ticket, &validate_list);
-	if (ret)
-		goto out_free;
+	if (buflist) {
+		ret = drm_gem_lock_reservations(buflist->objs, buflist->nents,
+						&ticket);
+		if (ret)
+			goto out_unused_fd;
+	}
 
 	buf = memdup_user(u64_to_user_ptr(exbuf->command), exbuf->size);
 	if (IS_ERR(buf)) {
@@ -219,25 +208,26 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
 		fd_install(out_fence_fd, sync_file->file);
 	}
 
+	if (buflist) {
+		for (i = 0; i < exbuf->num_bo_handles; i++)
+			reservation_object_add_excl_fence(buflist->objs[i]->resv,
+							  &out_fence->f);
+		drm_gem_unlock_reservations(buflist->objs, buflist->nents,
+					    &ticket);
+	}
+
 	virtio_gpu_cmd_submit(vgdev, buf, exbuf->size,
-			      vfpriv->ctx_id, out_fence);
-
-	ttm_eu_fence_buffer_objects(&ticket, &validate_list, &out_fence->f);
-
-	/* fence the command bo */
-	virtio_gpu_unref_list(&validate_list);
-	kvfree(buflist);
+			      vfpriv->ctx_id, out_fence, buflist);
 	return 0;
 
 out_memdup:
 	kfree(buf);
 out_unresv:
-	ttm_eu_backoff_reservation(&ticket, &validate_list);
-out_free:
-	virtio_gpu_unref_list(&validate_list);
+	drm_gem_unlock_reservations(buflist->objs, buflist->nents, &ticket);
 out_unused_fd:
 	kvfree(bo_handles);
-	kvfree(buflist);
+	if (buflist)
+		drm_gem_array_put_free(buflist);
 	if (out_fence_fd >= 0)
 		put_unused_fd(out_fence_fd);
 
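The virtgpu_vq.c hunks below split the dequeue work into two passes:
the first walk processes responses, response callbacks and fence ids
while the vbuffers are still alive, and only after waking waiters and
signalling fences does a second walk drop the gem references and free
the vbuffers.  For context, drm_gem_array_put_free() is assumed to
boil down to roughly this; only its tail is visible in the helper diff
above, so take the loop body as a sketch:

        void drm_gem_array_put_free(struct drm_gem_object_array *objs)
        {
                u32 i;

                /* drop the reference taken by drm_gem_array_from_handles() */
                for (i = 0; i < objs->nents; i++)
                        drm_gem_object_put_unlocked(objs->objs[i]);
                drm_gem_array_free(objs);
        }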
diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
index 6c1a90717535..6efea4fca012 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -191,7 +191,7 @@ void virtio_gpu_dequeue_ctrl_func(struct work_struct *work)
 	} while (!virtqueue_enable_cb(vgdev->ctrlq.vq));
 	spin_unlock(&vgdev->ctrlq.qlock);
 
-	list_for_each_entry_safe(entry, tmp, &reclaim_list, list) {
+	list_for_each_entry(entry, &reclaim_list, list) {
 		resp = (struct virtio_gpu_ctrl_hdr *)entry->resp_buf;
 
 		trace_virtio_gpu_cmd_response(vgdev->ctrlq.vq, resp);
@@ -218,14 +218,18 @@ void virtio_gpu_dequeue_ctrl_func(struct work_struct *work)
 		}
 		if (entry->resp_cb)
 			entry->resp_cb(vgdev, entry);
-
-		list_del(&entry->list);
-		free_vbuf(vgdev, entry);
 	}
 	wake_up(&vgdev->ctrlq.ack_queue);
 
 	if (fence_id)
 		virtio_gpu_fence_event_process(vgdev, fence_id);
+
+	list_for_each_entry_safe(entry, tmp, &reclaim_list, list) {
+		if (entry->objs)
+			drm_gem_array_put_free(entry->objs);
+		list_del(&entry->list);
+		free_vbuf(vgdev, entry);
+	}
 }
 
 void virtio_gpu_dequeue_cursor_func(struct work_struct *work)
@@ -939,7 +943,8 @@ void virtio_gpu_cmd_transfer_from_host_3d(struct virtio_gpu_device *vgdev,
 
 void virtio_gpu_cmd_submit(struct virtio_gpu_device *vgdev,
 			   void *data, uint32_t data_size,
-			   uint32_t ctx_id, struct virtio_gpu_fence *fence)
+			   uint32_t ctx_id, struct virtio_gpu_fence *fence,
+			   struct drm_gem_object_array *objs)
 {
 	struct virtio_gpu_cmd_submit *cmd_p;
 	struct virtio_gpu_vbuffer *vbuf;
@@ -949,6 +954,7 @@ void virtio_gpu_cmd_submit(struct virtio_gpu_device *vgdev,
 
 	vbuf->data_buf = data;
 	vbuf->data_size = data_size;
+	vbuf->objs = objs;
 
 	cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_SUBMIT_3D);
 	cmd_p->hdr.ctx_id = cpu_to_le32(ctx_id);
-- 
2.18.1