2018-11-12 16:55:26

by Robert Foss

Subject: [PATCH v5 0/4] virgl: fence fd support


This series implements fence support for drm/virtio and
has been tested using qemu, kmscube and the branches below.

Rob Herring solved a reference counting issue and
suggested a context check for the execbuf ioctl; his
changes have been folded into the commits below to
keep the tree working at every commit.


The Linux series can be found here:
https://gitlab.collabora.com/robertfoss/linux/commits/virtio_fences_v5

As for mesa, the branch can be found here:
https://gitlab.collabora.com/robertfoss/mesa/commits/virtio_fences_v3


Changes since v4:
- drm/virtio: add uapi for in and out explicit fences
  - Emil/Gerd: Improved commit message and fence_fd comment

Changes since v3:
- Rebased on drm-misc-next
- drm/virtio: add virtio_gpu_alloc_fence()
  - Gerd: Clarified and extended commit message
  - Emil: Fixed whitespace issue
  - Emil: Changed label name from fail_fence to fail_backoff
  - Emil: Remove special case for !fence->drv in virtio_gpu_fence_cleanup()
- drm/virtio: add uapi for in and out explicit fences
  - Emil: Added r-b
  - Emil: Move fence_fd assignment to after sanity checks
- drm/virtio: add in-fences support for explicit synchronization
  - Move all in_fence handling to the same VIRTGPU_EXECBUF_FENCE_FD_IN block
  - Emil: Make sure to always call dma_fence_put()
  - Emil: Added r-b
- drm/virtio: add out-fences support for explicit synchronization
  - Emil: Combine with in-fences patch
- drm/virtio: bump driver version after explicit synchronization addition
  - Emil: Added r-b

Changes since v2:
- drm/virtio: add virtio_gpu_alloc_fence()
  - Forward port and fix compilation issues
- drm/virtio: add uapi for in and out explicit fences
  - Check exbuf->flags for unsupported flags
- drm/virtio: add in-fences support for explicit synchronization


Gustavo Padovan (1):
  drm/virtio: bump driver version after explicit synchronization addition

Robert Foss (3):
  drm/virtio: add virtio_gpu_alloc_fence()
  drm/virtio: add uapi for in and out explicit fences
  drm/virtio: add in/out fence support for explicit synchronization

drivers/gpu/drm/virtio/virtgpu_drv.h | 8 +-
drivers/gpu/drm/virtio/virtgpu_fence.c | 29 +++++--
drivers/gpu/drm/virtio/virtgpu_ioctl.c | 108 +++++++++++++++++++++----
drivers/gpu/drm/virtio/virtgpu_plane.c | 46 +++++++++--
drivers/gpu/drm/virtio/virtgpu_vq.c | 2 +-
include/uapi/drm/virtgpu_drm.h | 13 ++-
6 files changed, 173 insertions(+), 33 deletions(-)

--
2.17.1



2018-11-12 16:53:13

by Robert Foss

Subject: [PATCH v5 1/4] drm/virtio: add virtio_gpu_alloc_fence()

Refactor fence creation, add fences to the relevant GPU
operations and add cursor helper functions.

This removes the potential for allocation failures in the
cmd_submit and atomic_commit paths: a fence is now allocated
up front, and only then does the rest of the execution proceed.
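
To illustrate, the call sites now follow an allocate-first pattern
along these lines (a simplified sketch of the execbuffer hunk below):

    fence = virtio_gpu_fence_alloc(vgdev);
    if (!fence) {
            kfree(buf);
            ret = -ENOMEM;
            goto out_unresv;    /* nothing submitted yet, easy to unwind */
    }

    virtio_gpu_cmd_submit(vgdev, buf, exbuf->size,
                          vfpriv->ctx_id, &fence);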

Signed-off-by: Gustavo Padovan <[email protected]>
Signed-off-by: Robert Foss <[email protected]>
Suggested-by: Rob Herring <[email protected]>
---

Changes since v3:
- Gerd: Clarified and extended commit message
- Emil: Fixed whitespace issue
- Emil: Changed label name from fail_fence to fail_backoff
- Emil: Remove special case for !fence->drv in virtio_gpu_fence_cleanup()

drivers/gpu/drm/virtio/virtgpu_drv.h | 4 +++
drivers/gpu/drm/virtio/virtgpu_fence.c | 29 ++++++++++++----
drivers/gpu/drm/virtio/virtgpu_ioctl.c | 30 +++++++++++++++--
drivers/gpu/drm/virtio/virtgpu_plane.c | 46 +++++++++++++++++++++++---
drivers/gpu/drm/virtio/virtgpu_vq.c | 2 +-
5 files changed, 96 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 6474e83cbf3d..acd130c58e33 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -131,6 +131,7 @@ struct virtio_gpu_framebuffer {
int x1, y1, x2, y2; /* dirty rect */
spinlock_t dirty_lock;
uint32_t hw_res_handle;
+ struct virtio_gpu_fence *fence;
};
#define to_virtio_gpu_framebuffer(x) \
container_of(x, struct virtio_gpu_framebuffer, base)
@@ -349,6 +350,9 @@ void virtio_gpu_ttm_fini(struct virtio_gpu_device *vgdev);
int virtio_gpu_mmap(struct file *filp, struct vm_area_struct *vma);

/* virtio_gpu_fence.c */
+struct virtio_gpu_fence *virtio_gpu_fence_alloc(
+ struct virtio_gpu_device *vgdev);
+void virtio_gpu_fence_cleanup(struct virtio_gpu_fence *fence);
int virtio_gpu_fence_emit(struct virtio_gpu_device *vgdev,
struct virtio_gpu_ctrl_hdr *cmd_hdr,
struct virtio_gpu_fence **fence);
diff --git a/drivers/gpu/drm/virtio/virtgpu_fence.c b/drivers/gpu/drm/virtio/virtgpu_fence.c
index 00c742a441bf..6b5d92215cfb 100644
--- a/drivers/gpu/drm/virtio/virtgpu_fence.c
+++ b/drivers/gpu/drm/virtio/virtgpu_fence.c
@@ -67,6 +67,28 @@ static const struct dma_fence_ops virtio_fence_ops = {
.timeline_value_str = virtio_timeline_value_str,
};

+struct virtio_gpu_fence *virtio_gpu_fence_alloc(struct virtio_gpu_device *vgdev)
+{
+ struct virtio_gpu_fence_driver *drv = &vgdev->fence_drv;
+ struct virtio_gpu_fence *fence = kzalloc(sizeof(struct virtio_gpu_fence),
+ GFP_ATOMIC);
+ if (!fence)
+ return fence;
+
+ fence->drv = drv;
+ dma_fence_init(&fence->f, &virtio_fence_ops, &drv->lock, drv->context, 0);
+
+ return fence;
+}
+
+void virtio_gpu_fence_cleanup(struct virtio_gpu_fence *fence)
+{
+ if (!fence)
+ return;
+
+ dma_fence_put(&fence->f);
+}
+
int virtio_gpu_fence_emit(struct virtio_gpu_device *vgdev,
struct virtio_gpu_ctrl_hdr *cmd_hdr,
struct virtio_gpu_fence **fence)
@@ -74,15 +96,8 @@ int virtio_gpu_fence_emit(struct virtio_gpu_device *vgdev,
struct virtio_gpu_fence_driver *drv = &vgdev->fence_drv;
unsigned long irq_flags;

- *fence = kmalloc(sizeof(struct virtio_gpu_fence), GFP_ATOMIC);
- if ((*fence) == NULL)
- return -ENOMEM;
-
spin_lock_irqsave(&drv->lock, irq_flags);
- (*fence)->drv = drv;
(*fence)->seq = ++drv->sync_seq;
- dma_fence_init(&(*fence)->f, &virtio_fence_ops, &drv->lock,
- drv->context, (*fence)->seq);
dma_fence_get(&(*fence)->f);
list_add_tail(&(*fence)->node, &drv->fences);
spin_unlock_irqrestore(&drv->lock, irq_flags);
diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
index bc5afa4f906e..d69fc356df0a 100644
--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
@@ -168,6 +168,13 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
ret = PTR_ERR(buf);
goto out_unresv;
}
+
+ fence = virtio_gpu_fence_alloc(vgdev);
+ if (!fence) {
+ kfree(buf);
+ ret = -ENOMEM;
+ goto out_unresv;
+ }
virtio_gpu_cmd_submit(vgdev, buf, exbuf->size,
vfpriv->ctx_id, &fence);

@@ -283,11 +290,17 @@ static int virtio_gpu_resource_create_ioctl(struct drm_device *dev, void *data,
rc_3d.nr_samples = cpu_to_le32(rc->nr_samples);
rc_3d.flags = cpu_to_le32(rc->flags);

+ fence = virtio_gpu_fence_alloc(vgdev);
+ if (!fence) {
+ ret = -ENOMEM;
+ goto fail_backoff;
+ }
+
virtio_gpu_cmd_resource_create_3d(vgdev, qobj, &rc_3d, NULL);
ret = virtio_gpu_object_attach(vgdev, qobj, &fence);
if (ret) {
- ttm_eu_backoff_reservation(&ticket, &validate_list);
- goto fail_unref;
+ virtio_gpu_fence_cleanup(fence);
+ goto fail_backoff;
}
ttm_eu_fence_buffer_objects(&ticket, &validate_list, &fence->f);
}
@@ -312,6 +325,8 @@ static int virtio_gpu_resource_create_ioctl(struct drm_device *dev, void *data,
dma_fence_put(&fence->f);
}
return 0;
+fail_backoff:
+ ttm_eu_backoff_reservation(&ticket, &validate_list);
fail_unref:
if (vgdev->has_virgl_3d) {
virtio_gpu_unref_list(&validate_list);
@@ -374,6 +389,12 @@ static int virtio_gpu_transfer_from_host_ioctl(struct drm_device *dev,
goto out_unres;

convert_to_hw_box(&box, &args->box);
+
+ fence = virtio_gpu_fence_alloc(vgdev);
+ if (!fence) {
+ ret = -ENOMEM;
+ goto out_unres;
+ }
virtio_gpu_cmd_transfer_from_host_3d
(vgdev, qobj->hw_res_handle,
vfpriv->ctx_id, offset, args->level,
@@ -423,6 +444,11 @@ static int virtio_gpu_transfer_to_host_ioctl(struct drm_device *dev, void *data,
(vgdev, qobj, offset,
box.w, box.h, box.x, box.y, NULL);
} else {
+ fence = virtio_gpu_fence_alloc(vgdev);
+ if (!fence) {
+ ret = -ENOMEM;
+ goto out_unres;
+ }
virtio_gpu_cmd_transfer_to_host_3d
(vgdev, qobj,
vfpriv ? vfpriv->ctx_id : 0, offset,
diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
index a9f4ae7d4483..b84ac8c25856 100644
--- a/drivers/gpu/drm/virtio/virtgpu_plane.c
+++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
@@ -137,6 +137,41 @@ static void virtio_gpu_primary_plane_update(struct drm_plane *plane,
plane->state->src_h >> 16);
}

+static int virtio_gpu_cursor_prepare_fb(struct drm_plane *plane,
+ struct drm_plane_state *new_state)
+{
+ struct drm_device *dev = plane->dev;
+ struct virtio_gpu_device *vgdev = dev->dev_private;
+ struct virtio_gpu_framebuffer *vgfb;
+ struct virtio_gpu_object *bo;
+
+ if (!new_state->fb)
+ return 0;
+
+ vgfb = to_virtio_gpu_framebuffer(new_state->fb);
+ bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
+ if (bo && bo->dumb && (plane->state->fb != new_state->fb)) {
+ vgfb->fence = virtio_gpu_fence_alloc(vgdev);
+ if (!vgfb->fence)
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static void virtio_gpu_cursor_cleanup_fb(struct drm_plane *plane,
+ struct drm_plane_state *old_state)
+{
+ struct virtio_gpu_framebuffer *vgfb;
+
+ if (!plane->state->fb)
+ return;
+
+ vgfb = to_virtio_gpu_framebuffer(plane->state->fb);
+ if (vgfb->fence)
+ virtio_gpu_fence_cleanup(vgfb->fence);
+}
+
static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
struct drm_plane_state *old_state)
{
@@ -144,7 +179,6 @@ static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
struct virtio_gpu_device *vgdev = dev->dev_private;
struct virtio_gpu_output *output = NULL;
struct virtio_gpu_framebuffer *vgfb;
- struct virtio_gpu_fence *fence = NULL;
struct virtio_gpu_object *bo = NULL;
uint32_t handle;
int ret = 0;
@@ -170,13 +204,13 @@ static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
(vgdev, bo, 0,
cpu_to_le32(plane->state->crtc_w),
cpu_to_le32(plane->state->crtc_h),
- 0, 0, &fence);
+ 0, 0, &vgfb->fence);
ret = virtio_gpu_object_reserve(bo, false);
if (!ret) {
reservation_object_add_excl_fence(bo->tbo.resv,
- &fence->f);
- dma_fence_put(&fence->f);
- fence = NULL;
+ &vgfb->fence->f);
+ dma_fence_put(&vgfb->fence->f);
+ vgfb->fence = NULL;
virtio_gpu_object_unreserve(bo);
virtio_gpu_object_wait(bo, false);
}
@@ -218,6 +252,8 @@ static const struct drm_plane_helper_funcs virtio_gpu_primary_helper_funcs = {
};

static const struct drm_plane_helper_funcs virtio_gpu_cursor_helper_funcs = {
+ .prepare_fb = virtio_gpu_cursor_prepare_fb,
+ .cleanup_fb = virtio_gpu_cursor_cleanup_fb,
.atomic_check = virtio_gpu_plane_atomic_check,
.atomic_update = virtio_gpu_cursor_plane_update,
};
diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
index 51bef1775e47..93f2c3a51ee8 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -896,9 +896,9 @@ void virtio_gpu_object_detach(struct virtio_gpu_device *vgdev,
struct virtio_gpu_object *obj)
{
bool use_dma_api = !virtio_has_iommu_quirk(vgdev->vdev);
- struct virtio_gpu_fence *fence;

if (use_dma_api && obj->mapped) {
+ struct virtio_gpu_fence *fence = virtio_gpu_fence_alloc(vgdev);
/* detach backing and wait for the host process it ... */
virtio_gpu_cmd_resource_inval_backing(vgdev, obj->hw_res_handle, &fence);
dma_fence_wait(&fence->f, true);
--
2.17.1


2018-11-12 16:53:22

by Robert Foss

Subject: [PATCH v5 2/4] drm/virtio: add uapi for in and out explicit fences

Add a new field called fence_fd that will be used by userspace to send
in-fences to the kernel and receive out-fences created by the kernel.

This uapi enables virtio to take advantage of explicit synchronization of
dma-bufs.

There are two new flags:

* VIRTGPU_EXECBUF_FENCE_FD_IN to be used when passing an in-fence fd.
* VIRTGPU_EXECBUF_FENCE_FD_OUT to be used when requesting an out-fence fd.

The execbuffer IOCTL is now read-write to allow userspace to read back
the out-fence fd.

On error, -1 is returned in the fence_fd field.
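
For illustration only, a minimal userspace sketch of the new uapi
(assuming libdrm's drmIoctl(); cmds, cmds_size and in_fence_fd are
hypothetical variables):

    struct drm_virtgpu_execbuffer exbuf = {
            .flags = VIRTGPU_EXECBUF_FENCE_FD_IN |
                     VIRTGPU_EXECBUF_FENCE_FD_OUT,
            .size = cmds_size,
            .command = (uintptr_t)cmds,
            .fence_fd = in_fence_fd,        /* in: fd the kernel waits on */
    };

    if (drmIoctl(fd, DRM_IOCTL_VIRTGPU_EXECBUFFER, &exbuf) == 0)
            out_fence_fd = exbuf.fence_fd;  /* out: fd created by the kernel */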

Signed-off-by: Gustavo Padovan <[email protected]>
Signed-off-by: Robert Foss <[email protected]>
Reviewed-by: Emil Velikov <[email protected]>
---

Changes since v4:
- Emil/Gerd: Improved commit message and fence_fd comment

Changes since v3:
- Emil: Move fence_fd assignment to after sanity checks
- Emil: Added r-b


drivers/gpu/drm/virtio/virtgpu_ioctl.c | 5 +++++
include/uapi/drm/virtgpu_drm.h | 13 ++++++++++---
2 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
index d69fc356df0a..3d497835b0f5 100644
--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
@@ -119,6 +119,11 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
if (vgdev->has_virgl_3d == false)
return -ENOSYS;

+ if ((exbuf->flags & ~VIRTGPU_EXECBUF_FLAGS))
+ return -EINVAL;
+
+ exbuf->fence_fd = -1;
+
INIT_LIST_HEAD(&validate_list);
if (exbuf->num_bo_handles) {

diff --git a/include/uapi/drm/virtgpu_drm.h b/include/uapi/drm/virtgpu_drm.h
index 9a781f0611df..91062f4ac7c5 100644
--- a/include/uapi/drm/virtgpu_drm.h
+++ b/include/uapi/drm/virtgpu_drm.h
@@ -47,6 +47,13 @@ extern "C" {
#define DRM_VIRTGPU_WAIT 0x08
#define DRM_VIRTGPU_GET_CAPS 0x09

+#define VIRTGPU_EXECBUF_FENCE_FD_IN 0x01
+#define VIRTGPU_EXECBUF_FENCE_FD_OUT 0x02
+#define VIRTGPU_EXECBUF_FLAGS (\
+ VIRTGPU_EXECBUF_FENCE_FD_IN |\
+ VIRTGPU_EXECBUF_FENCE_FD_OUT |\
+ 0)
+
struct drm_virtgpu_map {
__u64 offset; /* use for mmap system call */
__u32 handle;
@@ -54,12 +61,12 @@ struct drm_virtgpu_map {
};

struct drm_virtgpu_execbuffer {
- __u32 flags; /* for future use */
+ __u32 flags;
__u32 size;
__u64 command; /* void* */
__u64 bo_handles;
__u32 num_bo_handles;
- __u32 pad;
+ __s32 fence_fd;
};

#define VIRTGPU_PARAM_3D_FEATURES 1 /* do we have 3D features in the hw */
@@ -137,7 +144,7 @@ struct drm_virtgpu_get_caps {
DRM_IOWR(DRM_COMMAND_BASE + DRM_VIRTGPU_MAP, struct drm_virtgpu_map)

#define DRM_IOCTL_VIRTGPU_EXECBUFFER \
- DRM_IOW(DRM_COMMAND_BASE + DRM_VIRTGPU_EXECBUFFER,\
+ DRM_IOWR(DRM_COMMAND_BASE + DRM_VIRTGPU_EXECBUFFER,\
struct drm_virtgpu_execbuffer)

#define DRM_IOCTL_VIRTGPU_GETPARAM \
--
2.17.1


2018-11-12 16:53:37

by Robert Foss

Subject: [PATCH v5 3/4] drm/virtio: add in/out fence support for explicit synchronization

When the execbuf call receives an in-fence, it looks up the dma_fence
behind that fence fd and waits on it before submitting the draw call.

On the out-fence side, the fence returned by the submitted draw call is
attached to a sync_file, and the sync_file fd is sent to userspace. On
error, -1 is returned to userspace.

VIRTGPU_EXECBUF_FENCE_FD_IN and VIRTGPU_EXECBUF_FENCE_FD_OUT are
independent and can be flagged simultaneously.
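
Since the out-fence is exposed as a sync_file, userspace can wait on it
with regular file APIs; a rough sketch (the fd signals POLLIN once the
fence has signaled):

    struct pollfd pfd = { .fd = out_fence_fd, .events = POLLIN };

    while (poll(&pfd, 1, -1) < 0 && errno == EINTR)
            ;                   /* restart if interrupted by a signal */
    close(out_fence_fd);        /* fence has signaled */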

Signed-off-by: Gustavo Padovan <[email protected]>
Signed-off-by: Robert Foss <[email protected]>
Suggested-by: Rob Herring <[email protected]>
Reviewed-by: Emil Velikov <[email protected]>
---

Changes since v3:
- Move all in_fence handling to the same VIRTGPU_EXECBUF_FENCE_FD_IN block
- Emil: Make sure to always call dma_fence_put()
- Emil: Added r-b


drivers/gpu/drm/virtio/virtgpu_ioctl.c | 81 ++++++++++++++++++++------
include/uapi/drm/virtgpu_drm.h | 2 +-
2 files changed, 65 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
index 3d497835b0f5..340f2513d829 100644
--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
@@ -28,6 +28,7 @@
#include <drm/drmP.h>
#include <drm/virtgpu_drm.h>
#include <drm/ttm/ttm_execbuf_util.h>
+#include <linux/sync_file.h>

#include "virtgpu_drv.h"

@@ -105,7 +106,7 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
struct virtio_gpu_device *vgdev = dev->dev_private;
struct virtio_gpu_fpriv *vfpriv = drm_file->driver_priv;
struct drm_gem_object *gobj;
- struct virtio_gpu_fence *fence;
+ struct virtio_gpu_fence *out_fence;
struct virtio_gpu_object *qobj;
int ret;
uint32_t *bo_handles = NULL;
@@ -114,6 +115,9 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
struct ttm_validate_buffer *buflist = NULL;
int i;
struct ww_acquire_ctx ticket;
+ struct sync_file *sync_file;
+ int in_fence_fd = exbuf->fence_fd;
+ int out_fence_fd = -1;
void *buf;

if (vgdev->has_virgl_3d == false)
@@ -124,6 +128,33 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,

exbuf->fence_fd = -1;

+ if (exbuf->flags & VIRTGPU_EXECBUF_FENCE_FD_IN) {
+ struct dma_fence *in_fence;
+
+ in_fence = sync_file_get_fence(in_fence_fd);
+
+ if (!in_fence)
+ return -EINVAL;
+
+ /*
+ * Wait if the fence is from a foreign context, or if the fence
+ * array contains any fence from a foreign context.
+ */
+ ret = 0;
+ if (!dma_fence_match_context(in_fence, vgdev->fence_drv.context))
+ ret = dma_fence_wait(in_fence, true);
+
+ dma_fence_put(in_fence);
+ if (ret)
+ return ret;
+ }
+
+ if (exbuf->flags & VIRTGPU_EXECBUF_FENCE_FD_OUT) {
+ out_fence_fd = get_unused_fd_flags(O_CLOEXEC);
+ if (out_fence_fd < 0)
+ return out_fence_fd;
+ }
+
INIT_LIST_HEAD(&validate_list);
if (exbuf->num_bo_handles) {

@@ -133,26 +164,22 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
sizeof(struct ttm_validate_buffer),
GFP_KERNEL | __GFP_ZERO);
if (!bo_handles || !buflist) {
- kvfree(bo_handles);
- kvfree(buflist);
- return -ENOMEM;
+ ret = -ENOMEM;
+ goto out_unused_fd;
}

user_bo_handles = (void __user *)(uintptr_t)exbuf->bo_handles;
if (copy_from_user(bo_handles, user_bo_handles,
exbuf->num_bo_handles * sizeof(uint32_t))) {
ret = -EFAULT;
- kvfree(bo_handles);
- kvfree(buflist);
- return ret;
+ goto out_unused_fd;
}

for (i = 0; i < exbuf->num_bo_handles; i++) {
gobj = drm_gem_object_lookup(drm_file, bo_handles[i]);
if (!gobj) {
- kvfree(bo_handles);
- kvfree(buflist);
- return -ENOENT;
+ ret = -ENOENT;
+ goto out_unused_fd;
}

qobj = gem_to_virtio_gpu_obj(gobj);
@@ -161,6 +188,7 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
list_add(&buflist[i].head, &validate_list);
}
kvfree(bo_handles);
+ bo_handles = NULL;
}

ret = virtio_gpu_object_list_validate(&ticket, &validate_list);
@@ -174,28 +202,47 @@ static int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
goto out_unresv;
}

- fence = virtio_gpu_fence_alloc(vgdev);
- if (!fence) {
- kfree(buf);
+ out_fence = virtio_gpu_fence_alloc(vgdev);
+ if(!out_fence) {
ret = -ENOMEM;
- goto out_unresv;
+ goto out_memdup;
+ }
+
+ if (out_fence_fd >= 0) {
+ sync_file = sync_file_create(&out_fence->f);
+ if (!sync_file) {
+ dma_fence_put(&out_fence->f);
+ ret = -ENOMEM;
+ goto out_memdup;
+ }
+
+ exbuf->fence_fd = out_fence_fd;
+ fd_install(out_fence_fd, sync_file->file);
}
+
virtio_gpu_cmd_submit(vgdev, buf, exbuf->size,
- vfpriv->ctx_id, &fence);
+ vfpriv->ctx_id, &out_fence);

- ttm_eu_fence_buffer_objects(&ticket, &validate_list, &fence->f);
+ ttm_eu_fence_buffer_objects(&ticket, &validate_list, &out_fence->f);

/* fence the command bo */
virtio_gpu_unref_list(&validate_list);
kvfree(buflist);
- dma_fence_put(&fence->f);
return 0;

+out_memdup:
+ kfree(buf);
out_unresv:
ttm_eu_backoff_reservation(&ticket, &validate_list);
out_free:
virtio_gpu_unref_list(&validate_list);
+out_unused_fd:
+ kvfree(bo_handles);
kvfree(buflist);
+
+ if (out_fence_fd >= 0)
+ put_unused_fd(out_fence_fd);
+
return ret;
}

diff --git a/include/uapi/drm/virtgpu_drm.h b/include/uapi/drm/virtgpu_drm.h
index 91062f4ac7c5..f06a789f34cd 100644
--- a/include/uapi/drm/virtgpu_drm.h
+++ b/include/uapi/drm/virtgpu_drm.h
@@ -66,7 +66,7 @@ struct drm_virtgpu_execbuffer {
__u64 command; /* void* */
__u64 bo_handles;
__u32 num_bo_handles;
- __s32 fence_fd;
+ __s32 fence_fd; /* in/out fence fd (see VIRTGPU_EXECBUF_FENCE_FD_IN/OUT) */
};

#define VIRTGPU_PARAM_3D_FEATURES 1 /* do we have 3D features in the hw */
--
2.17.1


2018-11-12 16:54:25

by Robert Foss

Subject: [PATCH v5 4/4] drm/virtio: bump driver version after explicit synchronization addition

From: Gustavo Padovan <[email protected]>

To reflect the (backward compatible) changes in the uapi, we are bumping
the driver's version.

Signed-off-by: Gustavo Padovan <[email protected]>
Signed-off-by: Robert Foss <[email protected]>
Reviewed-by: Emil Velikov <[email protected]>
---

Changes since v3:
- Emil: Added r-b

drivers/gpu/drm/virtio/virtgpu_drv.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index acd130c58e33..4632bd7e1972 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -47,8 +47,8 @@
#define DRIVER_DATE "0"

#define DRIVER_MAJOR 0
-#define DRIVER_MINOR 0
-#define DRIVER_PATCHLEVEL 1
+#define DRIVER_MINOR 1
+#define DRIVER_PATCHLEVEL 0

/* virtgpu_drm_bus.c */
int drm_virtio_init(struct drm_driver *driver, struct virtio_device *vdev);
--
2.17.1


2018-11-14 12:14:58

by Gerd Hoffmann

Subject: Re: [PATCH v5 0/4] virgl: fence fd support

On Mon, Nov 12, 2018 at 05:51:53PM +0100, Robert Foss wrote:
>
> This series implements fence support for drm/virtio and
> has been tested using qemu, kmscube and the branches below.
>
> Rob Herring solved a reference counting issue and
> suggested a context check for the execbuf ioctl; his
> changes have been folded into the commits below to
> keep the tree working at every commit.

Patches added to qemu queue, should land in drm-misc-next soon.

thanks,
Gerd