From: Oleksandr Andrushchenko <[email protected]>
Hello!
This patch series adds support for the Xen [1] para-virtualized
frontend display driver. It implements the protocol from
include/xen/interface/io/displif.h [2].
The accompanying backend [3] is implemented as a user-space application
with its helper library [4], capable of running as a Weston client
or as a DRM master.
Configuration of both the backend and the frontend is done via
Xen guest domain configuration options [5].
*******************************************************************************
* Driver limitations
*******************************************************************************
1. Configuration options 1.1 (contiguous display buffers) and 2 (backend
allocated buffers) below cannot be used at the same time.
2. Only the primary plane without additional properties is supported.
3. Only one video mode is supported, whose resolution is configured via
XenStore.
4. All CRTCs operate at a fixed frequency of 60Hz.
*******************************************************************************
* Driver modes of operation in terms of display buffers used
*******************************************************************************
Depending on the requirements for the para-virtualized environment, namely
requirements dictated by the accompanying DRM/(v)GPU drivers running in both
host and guest environments, a number of operating modes of the
para-virtualized display driver are supported:
- display buffers can be allocated by either the frontend driver or the backend
- display buffers can be allocated to be contiguous in memory or not
Note! The frontend driver itself has no dependency on contiguous memory for
its operation.
*******************************************************************************
* 1. Buffers allocated by the frontend driver.
*******************************************************************************
The modes of operation below are configured at compile-time via the
frontend driver's kernel configuration.
1.1. Front driver configured to use GEM CMA helpers
This use-case is useful with an accompanying DRM/vGPU driver in the
guest domain which was designed to work only with contiguous buffers,
e.g. a DRM driver based on GEM CMA helpers: such drivers can only import
contiguous PRIME buffers, thus requiring the frontend driver to provide
such. To implement this mode of operation the para-virtualized
frontend driver can be configured to use GEM CMA helpers.
1.2. Front driver doesn't use GEM CMA
If the accompanying drivers can cope with non-contiguous memory then, to
lower pressure on the kernel's CMA subsystem, the driver can allocate
buffers from system memory.
Note! If used with accompanying DRM/(v)GPU drivers this mode of operation
may require IOMMU support on the platform, so the accompanying DRM/vGPU
hardware can still reach the display buffer memory while importing PRIME
buffers from the frontend driver.
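The compile-time choice between 1.1 and 1.2 could, for example, be expressed
as a Kconfig option of the frontend driver. The sketch below is illustrative
only; the option and help text in drivers/gpu/drm/xen/Kconfig may differ:

```kconfig
config DRM_XEN_FRONTEND_CMA
	bool "Use DRM CMA helpers to allocate display buffers"
	depends on DRM_XEN_FRONTEND
	select DRM_KMS_CMA_HELPER
	select DRM_GEM_CMA_HELPER
	help
	  Use DRM CMA helpers to allocate contiguous display buffers.
	  This is useful when the guest driver needs to export PRIME
	  buffers to other drivers which only accept contiguous memory.
	  If unset, buffers are allocated from system memory (mode 1.2).
```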
*******************************************************************************
* 2. Buffers allocated by the backend
*******************************************************************************
This mode of operation is configured at run-time via the guest domain
configuration through XenStore entries.
For systems which do not provide IOMMU support but have specific
requirements for display buffers, it is possible to allocate such buffers
on the backend side and share those with the frontend.
For example, if the host domain is 1:1 mapped and has DRM/GPU hardware
expecting physically contiguous memory, this allows implementing
zero-copy use-cases.
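As an illustration, backend-allocated buffers would be requested through the
guest domain configuration. The fragment below is a sketch only; see [5] for
the authoritative vdispl syntax, and note that the exact key names
(e.g. be-alloc) are assumptions here:

```
# xl.cfg guest domain configuration, illustrative:
# be-alloc=1 asks the backend to allocate the display buffers,
# connectors describes one virtual connector with its resolution
vdispl = [ 'backend=DomD,be-alloc=1,connectors=id0:1920x1080' ]
```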
I would like to thank, last but not least, the following
people/communities who helped this driver happen ;)
1. My team at EPAM for continuous support
2. Xen community for answering tons of questions on different
modes of operation of the driver with respect to virtualized
environment.
3. Rob Clark for "GEM allocation for para-virtualized DRM driver" [6]
4. Maarten Lankhorst for "Atomic driver and old remove FB behavior" [7]
5. Ville Syrjälä for "Questions on page flips and atomic modeset" [8]
Thank you,
Oleksandr Andrushchenko
P.S. There are two dependencies for this driver, limiting some of the
use-cases, which are currently under review:
1. "drm/simple_kms_helper: Add {enable|disable}_vblank callback support" [9]
2. "drm/simple_kms_helper: Fix NULL pointer dereference with no active CRTC" [10]
[1] https://wiki.xen.org/wiki/Paravirtualization_(PV)#PV_IO_Drivers
[2] https://elixir.bootlin.com/linux/v4.16-rc2/source/include/xen/interface/io/displif.h
[3] https://github.com/xen-troops/displ_be
[4] https://github.com/xen-troops/libxenbe
[5] https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=docs/man/xl.cfg.pod.5.in;h=a699367779e2ae1212ff8f638eff0206ec1a1cc9;hb=refs/heads/master#l1257
[6] https://lists.freedesktop.org/archives/dri-devel/2017-March/136038.html
[7] https://www.spinics.net/lists/dri-devel/msg164102.html
[8] https://www.spinics.net/lists/dri-devel/msg164463.html
[9] https://patchwork.freedesktop.org/series/38073/
[10] https://patchwork.freedesktop.org/series/38139/
Oleksandr Andrushchenko (9):
drm/xen-front: Introduce Xen para-virtualized frontend driver
drm/xen-front: Implement Xen bus state handling
drm/xen-front: Read driver configuration from Xen store
drm/xen-front: Implement Xen event channel handling
drm/xen-front: Implement handling of shared display buffers
drm/xen-front: Introduce DRM/KMS virtual display driver
drm/xen-front: Implement KMS/connector handling
drm/xen-front: Implement GEM operations
drm/xen-front: Implement communication with backend
drivers/gpu/drm/Kconfig | 2 +
drivers/gpu/drm/Makefile | 1 +
drivers/gpu/drm/xen/Kconfig | 30 ++
drivers/gpu/drm/xen/Makefile | 17 +
drivers/gpu/drm/xen/xen_drm_front.c | 712 ++++++++++++++++++++++++++++
drivers/gpu/drm/xen/xen_drm_front.h | 154 ++++++
drivers/gpu/drm/xen/xen_drm_front_cfg.c | 84 ++++
drivers/gpu/drm/xen/xen_drm_front_cfg.h | 45 ++
drivers/gpu/drm/xen/xen_drm_front_conn.c | 125 +++++
drivers/gpu/drm/xen/xen_drm_front_conn.h | 35 ++
drivers/gpu/drm/xen/xen_drm_front_drv.c | 294 ++++++++++++
drivers/gpu/drm/xen/xen_drm_front_drv.h | 73 +++
drivers/gpu/drm/xen/xen_drm_front_evtchnl.c | 399 ++++++++++++++++
drivers/gpu/drm/xen/xen_drm_front_evtchnl.h | 89 ++++
drivers/gpu/drm/xen/xen_drm_front_gem.c | 360 ++++++++++++++
drivers/gpu/drm/xen/xen_drm_front_gem.h | 46 ++
drivers/gpu/drm/xen/xen_drm_front_gem_cma.c | 93 ++++
drivers/gpu/drm/xen/xen_drm_front_kms.c | 299 ++++++++++++
drivers/gpu/drm/xen/xen_drm_front_kms.h | 30 ++
drivers/gpu/drm/xen/xen_drm_front_shbuf.c | 430 +++++++++++++++++
drivers/gpu/drm/xen/xen_drm_front_shbuf.h | 80 ++++
21 files changed, 3398 insertions(+)
create mode 100644 drivers/gpu/drm/xen/Kconfig
create mode 100644 drivers/gpu/drm/xen/Makefile
create mode 100644 drivers/gpu/drm/xen/xen_drm_front.c
create mode 100644 drivers/gpu/drm/xen/xen_drm_front.h
create mode 100644 drivers/gpu/drm/xen/xen_drm_front_cfg.c
create mode 100644 drivers/gpu/drm/xen/xen_drm_front_cfg.h
create mode 100644 drivers/gpu/drm/xen/xen_drm_front_conn.c
create mode 100644 drivers/gpu/drm/xen/xen_drm_front_conn.h
create mode 100644 drivers/gpu/drm/xen/xen_drm_front_drv.c
create mode 100644 drivers/gpu/drm/xen/xen_drm_front_drv.h
create mode 100644 drivers/gpu/drm/xen/xen_drm_front_evtchnl.c
create mode 100644 drivers/gpu/drm/xen/xen_drm_front_evtchnl.h
create mode 100644 drivers/gpu/drm/xen/xen_drm_front_gem.c
create mode 100644 drivers/gpu/drm/xen/xen_drm_front_gem.h
create mode 100644 drivers/gpu/drm/xen/xen_drm_front_gem_cma.c
create mode 100644 drivers/gpu/drm/xen/xen_drm_front_kms.c
create mode 100644 drivers/gpu/drm/xen/xen_drm_front_kms.h
create mode 100644 drivers/gpu/drm/xen/xen_drm_front_shbuf.c
create mode 100644 drivers/gpu/drm/xen/xen_drm_front_shbuf.h
--
2.7.4
From: Oleksandr Andrushchenko <[email protected]>
Implement essential initialization of the display driver:
- introduce required data structures
- handle DRM/KMS driver registration
- perform basic DRM driver initialization
- register driver on backend connection
- remove driver on backend disconnect
- introduce essential callbacks required by DRM/KMS core
- introduce essential callbacks required for frontend operations
Signed-off-by: Oleksandr Andrushchenko <[email protected]>
---
drivers/gpu/drm/xen/Makefile | 1 +
drivers/gpu/drm/xen/xen_drm_front.c | 169 ++++++++++++++++++++++++-
drivers/gpu/drm/xen/xen_drm_front.h | 24 ++++
drivers/gpu/drm/xen/xen_drm_front_drv.c | 211 ++++++++++++++++++++++++++++++++
drivers/gpu/drm/xen/xen_drm_front_drv.h | 60 +++++++++
5 files changed, 462 insertions(+), 3 deletions(-)
create mode 100644 drivers/gpu/drm/xen/xen_drm_front_drv.c
create mode 100644 drivers/gpu/drm/xen/xen_drm_front_drv.h
diff --git a/drivers/gpu/drm/xen/Makefile b/drivers/gpu/drm/xen/Makefile
index f1823cb596c5..d3068202590f 100644
--- a/drivers/gpu/drm/xen/Makefile
+++ b/drivers/gpu/drm/xen/Makefile
@@ -1,6 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
drm_xen_front-objs := xen_drm_front.o \
+ xen_drm_front_drv.o \
xen_drm_front_evtchnl.o \
xen_drm_front_shbuf.o \
xen_drm_front_cfg.o
diff --git a/drivers/gpu/drm/xen/xen_drm_front.c b/drivers/gpu/drm/xen/xen_drm_front.c
index 0d94ff272da3..8de88e359d5e 100644
--- a/drivers/gpu/drm/xen/xen_drm_front.c
+++ b/drivers/gpu/drm/xen/xen_drm_front.c
@@ -18,6 +18,8 @@
#include <drm/drmP.h>
+#include <linux/of_device.h>
+
#include <xen/platform_pci.h>
#include <xen/xen.h>
#include <xen/xenbus.h>
@@ -25,15 +27,161 @@
#include <xen/interface/io/displif.h>
#include "xen_drm_front.h"
+#include "xen_drm_front_drv.h"
#include "xen_drm_front_evtchnl.h"
#include "xen_drm_front_shbuf.h"
+static int be_mode_set(struct xen_drm_front_drm_pipeline *pipeline, uint32_t x,
+ uint32_t y, uint32_t width, uint32_t height, uint32_t bpp,
+ uint64_t fb_cookie)
+
+{
+ return 0;
+}
+
+static int be_dbuf_create_int(struct xen_drm_front_info *front_info,
+ uint64_t dbuf_cookie, uint32_t width, uint32_t height,
+ uint32_t bpp, uint64_t size, struct page **pages,
+ struct sg_table *sgt)
+{
+ return 0;
+}
+
+static int be_dbuf_create_from_sgt(struct xen_drm_front_info *front_info,
+ uint64_t dbuf_cookie, uint32_t width, uint32_t height,
+ uint32_t bpp, uint64_t size, struct sg_table *sgt)
+{
+ return be_dbuf_create_int(front_info, dbuf_cookie, width, height,
+ bpp, size, NULL, sgt);
+}
+
+static int be_dbuf_create_from_pages(struct xen_drm_front_info *front_info,
+ uint64_t dbuf_cookie, uint32_t width, uint32_t height,
+ uint32_t bpp, uint64_t size, struct page **pages)
+{
+ return be_dbuf_create_int(front_info, dbuf_cookie, width, height,
+ bpp, size, pages, NULL);
+}
+
+static int be_dbuf_destroy(struct xen_drm_front_info *front_info,
+ uint64_t dbuf_cookie)
+{
+ return 0;
+}
+
+static int be_fb_attach(struct xen_drm_front_info *front_info,
+ uint64_t dbuf_cookie, uint64_t fb_cookie, uint32_t width,
+ uint32_t height, uint32_t pixel_format)
+{
+ return 0;
+}
+
+static int be_fb_detach(struct xen_drm_front_info *front_info,
+ uint64_t fb_cookie)
+{
+ return 0;
+}
+
+static int be_page_flip(struct xen_drm_front_info *front_info, int conn_idx,
+ uint64_t fb_cookie)
+{
+ return 0;
+}
+
+static void xen_drm_drv_unload(struct xen_drm_front_info *front_info)
+{
+ if (front_info->xb_dev->state != XenbusStateReconfiguring)
+ return;
+
+ DRM_DEBUG("Can try removing driver now\n");
+ xenbus_switch_state(front_info->xb_dev, XenbusStateInitialising);
+}
+
static struct xen_drm_front_ops front_ops = {
- /* placeholder for now */
+ .mode_set = be_mode_set,
+ .dbuf_create_from_pages = be_dbuf_create_from_pages,
+ .dbuf_create_from_sgt = be_dbuf_create_from_sgt,
+ .dbuf_destroy = be_dbuf_destroy,
+ .fb_attach = be_fb_attach,
+ .fb_detach = be_fb_detach,
+ .page_flip = be_page_flip,
+ .drm_last_close = xen_drm_drv_unload,
+};
+
+static int xen_drm_drv_probe(struct platform_device *pdev)
+{
+	/*
+	 * The device is not spawned from a device tree, so
+	 * arch_setup_dma_ops is not called, leaving the device with dummy
+	 * DMA ops. This makes the device return an error on PRIME buffer
+	 * import, which is not correct: to fix this, call
+	 * of_dma_configure() with a NULL node to set the default DMA ops.
+	 */
+ of_dma_configure(&pdev->dev, NULL);
+ return xen_drm_front_drv_probe(pdev, &front_ops);
+}
+
+static int xen_drm_drv_remove(struct platform_device *pdev)
+{
+ return xen_drm_front_drv_remove(pdev);
+}
+
+struct platform_device_info xen_drm_front_platform_info = {
+ .name = XENDISPL_DRIVER_NAME,
+ .id = 0,
+ .num_res = 0,
+ .dma_mask = DMA_BIT_MASK(32),
};
+static struct platform_driver xen_drm_front_front_info = {
+ .probe = xen_drm_drv_probe,
+ .remove = xen_drm_drv_remove,
+ .driver = {
+ .name = XENDISPL_DRIVER_NAME,
+ },
+};
+
+static void xen_drm_drv_deinit(struct xen_drm_front_info *front_info)
+{
+ if (!front_info->drm_pdrv_registered)
+ return;
+
+ if (front_info->drm_pdev)
+ platform_device_unregister(front_info->drm_pdev);
+
+ platform_driver_unregister(&xen_drm_front_front_info);
+ front_info->drm_pdrv_registered = false;
+ front_info->drm_pdev = NULL;
+}
+
+static int xen_drm_drv_init(struct xen_drm_front_info *front_info)
+{
+ int ret;
+
+ ret = platform_driver_register(&xen_drm_front_front_info);
+ if (ret < 0)
+ return ret;
+
+ front_info->drm_pdrv_registered = true;
+ /* pass card configuration via platform data */
+ xen_drm_front_platform_info.data = &front_info->cfg;
+ xen_drm_front_platform_info.size_data = sizeof(front_info->cfg);
+
+ front_info->drm_pdev = platform_device_register_full(
+ &xen_drm_front_platform_info);
+ if (IS_ERR_OR_NULL(front_info->drm_pdev)) {
+ DRM_ERROR("Failed to register " XENDISPL_DRIVER_NAME " PV DRM driver\n");
+ front_info->drm_pdev = NULL;
+ xen_drm_drv_deinit(front_info);
+ return -ENODEV;
+ }
+
+ return 0;
+}
+
static void xen_drv_remove_internal(struct xen_drm_front_info *front_info)
{
+ xen_drm_drv_deinit(front_info);
xen_drm_front_evtchnl_free_all(front_info);
}
@@ -59,13 +207,27 @@ static int backend_on_initwait(struct xen_drm_front_info *front_info)
static int backend_on_connected(struct xen_drm_front_info *front_info)
{
xen_drm_front_evtchnl_set_state(front_info, EVTCHNL_STATE_CONNECTED);
- return 0;
+ return xen_drm_drv_init(front_info);
}
static void backend_on_disconnected(struct xen_drm_front_info *front_info)
{
+ bool removed = true;
+
+ if (front_info->drm_pdev) {
+ if (xen_drm_front_drv_is_used(front_info->drm_pdev)) {
+ DRM_WARN("DRM driver still in use, deferring removal\n");
+ removed = false;
+ } else
+ xen_drv_remove_internal(front_info);
+ }
+
xen_drm_front_evtchnl_set_state(front_info, EVTCHNL_STATE_DISCONNECTED);
- xenbus_switch_state(front_info->xb_dev, XenbusStateInitialising);
+
+ if (removed)
+ xenbus_switch_state(front_info->xb_dev, XenbusStateInitialising);
+ else
+ xenbus_switch_state(front_info->xb_dev, XenbusStateReconfiguring);
}
static void backend_on_changed(struct xenbus_device *xb_dev,
@@ -148,6 +310,7 @@ static int xen_drv_probe(struct xenbus_device *xb_dev,
front_info->xb_dev = xb_dev;
spin_lock_init(&front_info->io_lock);
+ front_info->drm_pdrv_registered = false;
dev_set_drvdata(&xb_dev->dev, front_info);
return xenbus_switch_state(xb_dev, XenbusStateInitialising);
}
diff --git a/drivers/gpu/drm/xen/xen_drm_front.h b/drivers/gpu/drm/xen/xen_drm_front.h
index 13f22736ae02..9ed5bfb248d0 100644
--- a/drivers/gpu/drm/xen/xen_drm_front.h
+++ b/drivers/gpu/drm/xen/xen_drm_front.h
@@ -19,6 +19,8 @@
#ifndef __XEN_DRM_FRONT_H_
#define __XEN_DRM_FRONT_H_
+#include <linux/scatterlist.h>
+
#include "xen_drm_front_cfg.h"
#ifndef GRANT_INVALID_REF
@@ -30,16 +32,38 @@
#define GRANT_INVALID_REF 0
#endif
+struct xen_drm_front_drm_pipeline;
+
struct xen_drm_front_ops {
+ int (*mode_set)(struct xen_drm_front_drm_pipeline *pipeline,
+ uint32_t x, uint32_t y, uint32_t width, uint32_t height,
+ uint32_t bpp, uint64_t fb_cookie);
+ int (*dbuf_create_from_pages)(struct xen_drm_front_info *front_info,
+ uint64_t dbuf_cookie, uint32_t width, uint32_t height,
+ uint32_t bpp, uint64_t size, struct page **pages);
+ int (*dbuf_create_from_sgt)(struct xen_drm_front_info *front_info,
+ uint64_t dbuf_cookie, uint32_t width, uint32_t height,
+ uint32_t bpp, uint64_t size, struct sg_table *sgt);
+ int (*dbuf_destroy)(struct xen_drm_front_info *front_info,
+ uint64_t dbuf_cookie);
+ int (*fb_attach)(struct xen_drm_front_info *front_info,
+ uint64_t dbuf_cookie, uint64_t fb_cookie,
+ uint32_t width, uint32_t height, uint32_t pixel_format);
+ int (*fb_detach)(struct xen_drm_front_info *front_info,
+ uint64_t fb_cookie);
+ int (*page_flip)(struct xen_drm_front_info *front_info,
+ int conn_idx, uint64_t fb_cookie);
/* CAUTION! this is called with a spin_lock held! */
void (*on_frame_done)(struct platform_device *pdev,
int conn_idx, uint64_t fb_cookie);
+ void (*drm_last_close)(struct xen_drm_front_info *front_info);
};
struct xen_drm_front_info {
struct xenbus_device *xb_dev;
/* to protect data between backend IO code and interrupt handler */
spinlock_t io_lock;
+ bool drm_pdrv_registered;
/* virtual DRM platform device */
struct platform_device *drm_pdev;
diff --git a/drivers/gpu/drm/xen/xen_drm_front_drv.c b/drivers/gpu/drm/xen/xen_drm_front_drv.c
new file mode 100644
index 000000000000..b3764d5ed0f6
--- /dev/null
+++ b/drivers/gpu/drm/xen/xen_drm_front_drv.c
@@ -0,0 +1,211 @@
+/*
+ * Xen para-virtual DRM device
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Copyright (C) 2016-2018 EPAM Systems Inc.
+ *
+ * Author: Oleksandr Andrushchenko <[email protected]>
+ */
+
+#include <drm/drmP.h>
+#include <drm/drm_gem.h>
+#include <drm/drm_atomic_helper.h>
+
+#include "xen_drm_front.h"
+#include "xen_drm_front_cfg.h"
+#include "xen_drm_front_drv.h"
+
+static int dumb_create(struct drm_file *filp,
+ struct drm_device *dev, struct drm_mode_create_dumb *args)
+{
+ return -EINVAL;
+}
+
+static void free_object(struct drm_gem_object *obj)
+{
+ struct xen_drm_front_drm_info *drm_info = obj->dev->dev_private;
+
+ drm_info->front_ops->dbuf_destroy(drm_info->front_info,
+ xen_drm_front_dbuf_to_cookie(obj));
+}
+
+static void on_frame_done(struct platform_device *pdev,
+ int conn_idx, uint64_t fb_cookie)
+{
+}
+
+static void lastclose(struct drm_device *dev)
+{
+ struct xen_drm_front_drm_info *drm_info = dev->dev_private;
+
+ drm_info->front_ops->drm_last_close(drm_info->front_info);
+}
+
+static int gem_mmap(struct file *filp, struct vm_area_struct *vma)
+{
+ return -EINVAL;
+}
+
+static struct sg_table *prime_get_sg_table(struct drm_gem_object *obj)
+{
+ return NULL;
+}
+
+static struct drm_gem_object *prime_import_sg_table(struct drm_device *dev,
+ struct dma_buf_attachment *attach, struct sg_table *sgt)
+{
+ return NULL;
+}
+
+static void *prime_vmap(struct drm_gem_object *obj)
+{
+ return NULL;
+}
+
+static void prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+{
+}
+
+static int prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
+{
+ return -EINVAL;
+}
+
+static const struct file_operations xendrm_fops = {
+ .owner = THIS_MODULE,
+ .open = drm_open,
+ .release = drm_release,
+ .unlocked_ioctl = drm_ioctl,
+#ifdef CONFIG_COMPAT
+ .compat_ioctl = drm_compat_ioctl,
+#endif
+ .poll = drm_poll,
+ .read = drm_read,
+ .llseek = no_llseek,
+ .mmap = gem_mmap,
+};
+
+static const struct vm_operations_struct xen_drm_vm_ops = {
+ .open = drm_gem_vm_open,
+ .close = drm_gem_vm_close,
+};
+
+struct drm_driver xen_drm_driver = {
+ .driver_features = DRIVER_GEM | DRIVER_MODESET |
+ DRIVER_PRIME | DRIVER_ATOMIC,
+ .lastclose = lastclose,
+ .gem_free_object_unlocked = free_object,
+ .gem_vm_ops = &xen_drm_vm_ops,
+ .prime_handle_to_fd = drm_gem_prime_handle_to_fd,
+ .prime_fd_to_handle = drm_gem_prime_fd_to_handle,
+ .gem_prime_import = drm_gem_prime_import,
+ .gem_prime_export = drm_gem_prime_export,
+ .gem_prime_get_sg_table = prime_get_sg_table,
+ .gem_prime_import_sg_table = prime_import_sg_table,
+ .gem_prime_vmap = prime_vmap,
+ .gem_prime_vunmap = prime_vunmap,
+ .gem_prime_mmap = prime_mmap,
+ .dumb_create = dumb_create,
+ .fops = &xendrm_fops,
+ .name = "xendrm-du",
+ .desc = "Xen PV DRM Display Unit",
+ .date = "20161109",
+ .major = 1,
+ .minor = 0,
+};
+
+int xen_drm_front_drv_probe(struct platform_device *pdev,
+ struct xen_drm_front_ops *front_ops)
+{
+ struct xen_drm_front_cfg *cfg = dev_get_platdata(&pdev->dev);
+ struct xen_drm_front_drm_info *drm_info;
+ struct drm_device *dev;
+ int ret;
+
+ DRM_INFO("Creating %s\n", xen_drm_driver.desc);
+
+ drm_info = devm_kzalloc(&pdev->dev, sizeof(*drm_info), GFP_KERNEL);
+ if (!drm_info)
+ return -ENOMEM;
+
+ drm_info->front_ops = front_ops;
+ drm_info->front_ops->on_frame_done = on_frame_done;
+ drm_info->front_info = cfg->front_info;
+
+ dev = drm_dev_alloc(&xen_drm_driver, &pdev->dev);
+	if (IS_ERR(dev))
+		return PTR_ERR(dev);
+
+ drm_info->drm_dev = dev;
+
+ drm_info->cfg = cfg;
+ dev->dev_private = drm_info;
+ platform_set_drvdata(pdev, drm_info);
+
+ ret = drm_vblank_init(dev, cfg->num_connectors);
+ if (ret) {
+ DRM_ERROR("Failed to initialize vblank, ret %d\n", ret);
+ return ret;
+ }
+
+ dev->irq_enabled = 1;
+
+ ret = drm_dev_register(dev, 0);
+ if (ret)
+ goto fail_register;
+
+ DRM_INFO("Initialized %s %d.%d.%d %s on minor %d\n",
+ xen_drm_driver.name, xen_drm_driver.major,
+ xen_drm_driver.minor, xen_drm_driver.patchlevel,
+ xen_drm_driver.date, dev->primary->index);
+
+ return 0;
+
+fail_register:
+ drm_dev_unregister(dev);
+ drm_mode_config_cleanup(dev);
+ return ret;
+}
+
+int xen_drm_front_drv_remove(struct platform_device *pdev)
+{
+ struct xen_drm_front_drm_info *drm_info = platform_get_drvdata(pdev);
+ struct drm_device *dev = drm_info->drm_dev;
+
+ if (dev) {
+ drm_dev_unregister(dev);
+ drm_atomic_helper_shutdown(dev);
+ drm_mode_config_cleanup(dev);
+ drm_dev_unref(dev);
+ }
+ return 0;
+}
+
+bool xen_drm_front_drv_is_used(struct platform_device *pdev)
+{
+ struct xen_drm_front_drm_info *drm_info = platform_get_drvdata(pdev);
+ struct drm_device *dev;
+
+ if (!drm_info)
+ return false;
+
+ dev = drm_info->drm_dev;
+ if (!dev)
+ return false;
+
+ /*
+ * FIXME: the code below must be protected by drm_global_mutex,
+ * but it is not accessible to us. Anyways there is a race condition,
+ * but we will re-try.
+ */
+ return dev->open_count != 0;
+}
diff --git a/drivers/gpu/drm/xen/xen_drm_front_drv.h b/drivers/gpu/drm/xen/xen_drm_front_drv.h
new file mode 100644
index 000000000000..aaa476535c13
--- /dev/null
+++ b/drivers/gpu/drm/xen/xen_drm_front_drv.h
@@ -0,0 +1,60 @@
+/*
+ * Xen para-virtual DRM device
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Copyright (C) 2016-2018 EPAM Systems Inc.
+ *
+ * Author: Oleksandr Andrushchenko <[email protected]>
+ */
+
+#ifndef __XEN_DRM_FRONT_DRV_H_
+#define __XEN_DRM_FRONT_DRV_H_
+
+#include <drm/drmP.h>
+
+#include "xen_drm_front.h"
+#include "xen_drm_front_cfg.h"
+
+struct xen_drm_front_drm_pipeline {
+ struct xen_drm_front_drm_info *drm_info;
+
+ int index;
+};
+
+struct xen_drm_front_drm_info {
+ struct xen_drm_front_info *front_info;
+ struct xen_drm_front_ops *front_ops;
+ struct drm_device *drm_dev;
+ struct xen_drm_front_cfg *cfg;
+};
+
+static inline uint64_t xen_drm_front_fb_to_cookie(
+ struct drm_framebuffer *fb)
+{
+	return (uint64_t)(uintptr_t)fb;
+}
+
+static inline uint64_t xen_drm_front_dbuf_to_cookie(
+ struct drm_gem_object *gem_obj)
+{
+	return (uint64_t)(uintptr_t)gem_obj;
+}
+
+int xen_drm_front_drv_probe(struct platform_device *pdev,
+ struct xen_drm_front_ops *front_ops);
+
+int xen_drm_front_drv_remove(struct platform_device *pdev);
+
+bool xen_drm_front_drv_is_used(struct platform_device *pdev);
+
+#endif /* __XEN_DRM_FRONT_DRV_H_ */
+
--
2.7.4
From: Oleksandr Andrushchenko <[email protected]>
Implement shared buffer handling according to the
para-virtualized display device protocol at xen/interface/io/displif.h:
- handle page directories according to displif protocol:
- allocate and share page directories
- grant references to the required set of pages for the
page directory
- allocate Xen ballooned pages via the Xen balloon driver
with alloc_xenballooned_pages/free_xenballooned_pages
- grant references to the required set of pages for the
shared buffer itself
- implement pages map/unmap for the buffers allocated by the
backend (gnttab_map_refs/gnttab_unmap_refs)
Signed-off-by: Oleksandr Andrushchenko <[email protected]>
---
drivers/gpu/drm/xen/Makefile | 1 +
drivers/gpu/drm/xen/xen_drm_front.c | 4 +
drivers/gpu/drm/xen/xen_drm_front_shbuf.c | 430 ++++++++++++++++++++++++++++++
drivers/gpu/drm/xen/xen_drm_front_shbuf.h | 80 ++++++
4 files changed, 515 insertions(+)
create mode 100644 drivers/gpu/drm/xen/xen_drm_front_shbuf.c
create mode 100644 drivers/gpu/drm/xen/xen_drm_front_shbuf.h
diff --git a/drivers/gpu/drm/xen/Makefile b/drivers/gpu/drm/xen/Makefile
index 4ce7756b8437..f1823cb596c5 100644
--- a/drivers/gpu/drm/xen/Makefile
+++ b/drivers/gpu/drm/xen/Makefile
@@ -2,6 +2,7 @@
drm_xen_front-objs := xen_drm_front.o \
xen_drm_front_evtchnl.o \
+ xen_drm_front_shbuf.o \
xen_drm_front_cfg.o
obj-$(CONFIG_DRM_XEN_FRONTEND) += drm_xen_front.o
diff --git a/drivers/gpu/drm/xen/xen_drm_front.c b/drivers/gpu/drm/xen/xen_drm_front.c
index b558e0ae3b33..0d94ff272da3 100644
--- a/drivers/gpu/drm/xen/xen_drm_front.c
+++ b/drivers/gpu/drm/xen/xen_drm_front.c
@@ -26,6 +26,7 @@
#include "xen_drm_front.h"
#include "xen_drm_front_evtchnl.h"
+#include "xen_drm_front_shbuf.h"
static struct xen_drm_front_ops front_ops = {
/* placeholder for now */
@@ -199,6 +200,9 @@ static struct xenbus_driver xen_driver = {
static int __init xen_drv_init(void)
{
+	/* At the moment we only support the case when XEN_PAGE_SIZE == PAGE_SIZE */
+ BUILD_BUG_ON(XEN_PAGE_SIZE != PAGE_SIZE);
+
if (!xen_domain())
return -ENODEV;
diff --git a/drivers/gpu/drm/xen/xen_drm_front_shbuf.c b/drivers/gpu/drm/xen/xen_drm_front_shbuf.c
new file mode 100644
index 000000000000..fb8dd40dd5f5
--- /dev/null
+++ b/drivers/gpu/drm/xen/xen_drm_front_shbuf.c
@@ -0,0 +1,430 @@
+/*
+ * Xen para-virtual DRM device
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Copyright (C) 2016-2018 EPAM Systems Inc.
+ *
+ * Author: Oleksandr Andrushchenko <[email protected]>
+ */
+
+#include <drm/drmP.h>
+
+#if defined(CONFIG_X86)
+#include <drm/drm_cache.h>
+#endif
+#include <linux/errno.h>
+#include <linux/mm.h>
+
+#include <asm/xen/hypervisor.h>
+#include <xen/balloon.h>
+#include <xen/xen.h>
+#include <xen/xenbus.h>
+#include <xen/interface/io/ring.h>
+#include <xen/interface/io/displif.h>
+
+#include "xen_drm_front.h"
+#include "xen_drm_front_shbuf.h"
+
+struct xen_drm_front_shbuf_ops {
+ /*
+ * Calculate number of grefs required to handle this buffer,
+ * e.g. if grefs are required for page directory only or the buffer
+ * pages as well.
+ */
+ void (*calc_num_grefs)(struct xen_drm_front_shbuf *buf);
+ /* Fill page directory according to para-virtual display protocol. */
+ void (*fill_page_dir)(struct xen_drm_front_shbuf *buf);
+ /* Claim grant references for the pages of the buffer. */
+ int (*grant_refs_for_buffer)(struct xen_drm_front_shbuf *buf,
+ grant_ref_t *priv_gref_head, int gref_idx);
+ /* Map grant references of the buffer. */
+ int (*map)(struct xen_drm_front_shbuf *buf);
+ /* Unmap grant references of the buffer. */
+ int (*unmap)(struct xen_drm_front_shbuf *buf);
+};
+
+grant_ref_t xen_drm_front_shbuf_get_dir_start(struct xen_drm_front_shbuf *buf)
+{
+ if (!buf->grefs)
+ return GRANT_INVALID_REF;
+
+ return buf->grefs[0];
+}
+
+int xen_drm_front_shbuf_map(struct xen_drm_front_shbuf *buf)
+{
+ if (buf->ops->map)
+ return buf->ops->map(buf);
+
+ /* no need to map own grant references */
+ return 0;
+}
+
+int xen_drm_front_shbuf_unmap(struct xen_drm_front_shbuf *buf)
+{
+ if (buf->ops->unmap)
+ return buf->ops->unmap(buf);
+
+ /* no need to unmap own grant references */
+ return 0;
+}
+
+void xen_drm_front_shbuf_flush(struct xen_drm_front_shbuf *buf)
+{
+#if defined(CONFIG_X86)
+ drm_clflush_pages(buf->pages, buf->num_pages);
+#endif
+}
+
+void xen_drm_front_shbuf_free(struct xen_drm_front_shbuf *buf)
+{
+ if (buf->grefs) {
+ int i;
+
+ for (i = 0; i < buf->num_grefs; i++)
+ if (buf->grefs[i] != GRANT_INVALID_REF)
+ gnttab_end_foreign_access(buf->grefs[i],
+ 0, 0UL);
+ }
+ kfree(buf->grefs);
+ kfree(buf->directory);
+ if (buf->sgt) {
+ sg_free_table(buf->sgt);
+ kvfree(buf->pages);
+ }
+ kfree(buf);
+}
+
+/*
+ * number of grefs a page can hold with respect to the
+ * struct xendispl_page_directory header
+ */
+#define XEN_DRM_NUM_GREFS_PER_PAGE ((PAGE_SIZE - \
+ offsetof(struct xendispl_page_directory, gref)) / \
+ sizeof(grant_ref_t))
+
+static int get_num_pages_dir(struct xen_drm_front_shbuf *buf)
+{
+ /* number of pages the page directory consumes itself */
+ return DIV_ROUND_UP(buf->num_pages, XEN_DRM_NUM_GREFS_PER_PAGE);
+}
+
+static void backend_calc_num_grefs(struct xen_drm_front_shbuf *buf)
+{
+ /* only for pages the page directory consumes itself */
+ buf->num_grefs = get_num_pages_dir(buf);
+}
+
+static void guest_calc_num_grefs(struct xen_drm_front_shbuf *buf)
+{
+ /*
+ * number of pages the page directory consumes itself
+ * plus grefs for the buffer pages
+ */
+ buf->num_grefs = get_num_pages_dir(buf) + buf->num_pages;
+}
+
+#define xen_page_to_vaddr(page) \
+ ((phys_addr_t)pfn_to_kaddr(page_to_xen_pfn(page)))
+
+static int backend_map(struct xen_drm_front_shbuf *buf)
+{
+ struct gnttab_map_grant_ref *map_ops = NULL;
+ unsigned char *ptr;
+ int ret, cur_gref, cur_dir_page, cur_page, grefs_left;
+
+ map_ops = kcalloc(buf->num_pages, sizeof(*map_ops), GFP_KERNEL);
+ if (!map_ops)
+ return -ENOMEM;
+
+ buf->backend_map_handles = kcalloc(buf->num_pages,
+ sizeof(*buf->backend_map_handles), GFP_KERNEL);
+ if (!buf->backend_map_handles) {
+ kfree(map_ops);
+ return -ENOMEM;
+ }
+
+ /*
+ * read page directory to get grefs from the backend: for external
+ * buffer we only allocate buf->grefs for the page directory,
+ * so buf->num_grefs has number of pages in the page directory itself
+ */
+ ptr = buf->directory;
+ grefs_left = buf->num_pages;
+ cur_page = 0;
+ for (cur_dir_page = 0; cur_dir_page < buf->num_grefs; cur_dir_page++) {
+ struct xendispl_page_directory *page_dir =
+ (struct xendispl_page_directory *)ptr;
+ int to_copy = XEN_DRM_NUM_GREFS_PER_PAGE;
+
+ if (to_copy > grefs_left)
+ to_copy = grefs_left;
+
+ for (cur_gref = 0; cur_gref < to_copy; cur_gref++) {
+ phys_addr_t addr;
+
+ addr = xen_page_to_vaddr(buf->pages[cur_page]);
+ gnttab_set_map_op(&map_ops[cur_page], addr,
+ GNTMAP_host_map,
+ page_dir->gref[cur_gref],
+ buf->xb_dev->otherend_id);
+ cur_page++;
+ }
+
+ grefs_left -= to_copy;
+ ptr += PAGE_SIZE;
+ }
+ ret = gnttab_map_refs(map_ops, NULL, buf->pages, buf->num_pages);
+ BUG_ON(ret);
+
+ /* save handles so we can unmap on free */
+ for (cur_page = 0; cur_page < buf->num_pages; cur_page++) {
+ buf->backend_map_handles[cur_page] = map_ops[cur_page].handle;
+ if (unlikely(map_ops[cur_page].status != GNTST_okay))
+ DRM_ERROR("Failed to map page %d: %d\n",
+ cur_page, map_ops[cur_page].status);
+ }
+
+ kfree(map_ops);
+ return 0;
+}
+
+static int backend_unmap(struct xen_drm_front_shbuf *buf)
+{
+ struct gnttab_unmap_grant_ref *unmap_ops;
+ int i;
+
+ if (!buf->pages || !buf->backend_map_handles || !buf->grefs)
+ return 0;
+
+ unmap_ops = kcalloc(buf->num_pages, sizeof(*unmap_ops),
+ GFP_KERNEL);
+ if (!unmap_ops) {
+ DRM_ERROR("Failed to get memory while unmapping\n");
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < buf->num_pages; i++) {
+ phys_addr_t addr;
+
+ addr = xen_page_to_vaddr(buf->pages[i]);
+ gnttab_set_unmap_op(&unmap_ops[i], addr, GNTMAP_host_map,
+ buf->backend_map_handles[i]);
+ }
+
+ BUG_ON(gnttab_unmap_refs(unmap_ops, NULL, buf->pages,
+ buf->num_pages));
+
+ for (i = 0; i < buf->num_pages; i++) {
+ if (unlikely(unmap_ops[i].status != GNTST_okay))
+ DRM_ERROR("Failed to unmap page %d: %d\n",
+ i, unmap_ops[i].status);
+ }
+
+ kfree(unmap_ops);
+ kfree(buf->backend_map_handles);
+ buf->backend_map_handles = NULL;
+ return 0;
+}
+
+static void backend_fill_page_dir(struct xen_drm_front_shbuf *buf)
+{
+ struct xendispl_page_directory *page_dir;
+ unsigned char *ptr;
+ int i, num_pages_dir;
+
+ ptr = buf->directory;
+ num_pages_dir = get_num_pages_dir(buf);
+
+ /* fill only grefs for the page directory itself */
+ for (i = 0; i < num_pages_dir - 1; i++) {
+ page_dir = (struct xendispl_page_directory *)ptr;
+
+ page_dir->gref_dir_next_page = buf->grefs[i + 1];
+ ptr += PAGE_SIZE;
+ }
+ /* the last page must say there are no more pages */
+ page_dir = (struct xendispl_page_directory *)ptr;
+ page_dir->gref_dir_next_page = GRANT_INVALID_REF;
+}
+
+static void guest_fill_page_dir(struct xen_drm_front_shbuf *buf)
+{
+ unsigned char *ptr;
+ int cur_gref, grefs_left, to_copy, i, num_pages_dir;
+
+ ptr = buf->directory;
+ num_pages_dir = get_num_pages_dir(buf);
+
+ /*
+ * while copying, skip the grefs at the start: those are for the
+ * pages granted for the page directory itself
+ */
+ cur_gref = num_pages_dir;
+ grefs_left = buf->num_pages;
+ for (i = 0; i < num_pages_dir; i++) {
+ struct xendispl_page_directory *page_dir =
+ (struct xendispl_page_directory *)ptr;
+
+ if (grefs_left <= XEN_DRM_NUM_GREFS_PER_PAGE) {
+ to_copy = grefs_left;
+ page_dir->gref_dir_next_page = GRANT_INVALID_REF;
+ } else {
+ to_copy = XEN_DRM_NUM_GREFS_PER_PAGE;
+ page_dir->gref_dir_next_page = buf->grefs[i + 1];
+ }
+ memcpy(&page_dir->gref, &buf->grefs[cur_gref],
+ to_copy * sizeof(grant_ref_t));
+ ptr += PAGE_SIZE;
+ grefs_left -= to_copy;
+ cur_gref += to_copy;
+ }
+}
+
+static int guest_grant_refs_for_buffer(struct xen_drm_front_shbuf *buf,
+ grant_ref_t *priv_gref_head, int gref_idx)
+{
+ int i, cur_ref, otherend_id;
+
+ otherend_id = buf->xb_dev->otherend_id;
+ for (i = 0; i < buf->num_pages; i++) {
+ cur_ref = gnttab_claim_grant_reference(priv_gref_head);
+ if (cur_ref < 0)
+ return cur_ref;
+ gnttab_grant_foreign_access_ref(cur_ref, otherend_id,
+ xen_page_to_gfn(buf->pages[i]), 0);
+ buf->grefs[gref_idx++] = cur_ref;
+ }
+ return 0;
+}
+
+static int grant_references(struct xen_drm_front_shbuf *buf)
+{
+ grant_ref_t priv_gref_head;
+ int ret, i, j, cur_ref;
+ int otherend_id, num_pages_dir;
+
+ ret = gnttab_alloc_grant_references(buf->num_grefs, &priv_gref_head);
+ if (ret < 0) {
+ DRM_ERROR("Cannot allocate grant references\n");
+ return ret;
+ }
+ otherend_id = buf->xb_dev->otherend_id;
+ j = 0;
+ num_pages_dir = get_num_pages_dir(buf);
+ for (i = 0; i < num_pages_dir; i++) {
+ unsigned long frame;
+
+ cur_ref = gnttab_claim_grant_reference(&priv_gref_head);
+ if (cur_ref < 0)
+ return cur_ref;
+
+ frame = xen_page_to_gfn(virt_to_page(buf->directory +
+ PAGE_SIZE * i));
+ gnttab_grant_foreign_access_ref(cur_ref, otherend_id,
+ frame, 0);
+ buf->grefs[j++] = cur_ref;
+ }
+
+ if (buf->ops->grant_refs_for_buffer) {
+ ret = buf->ops->grant_refs_for_buffer(buf, &priv_gref_head, j);
+ if (ret)
+ return ret;
+ }
+
+ gnttab_free_grant_references(priv_gref_head);
+ return 0;
+}
+
+static int alloc_storage(struct xen_drm_front_shbuf *buf)
+{
+ if (buf->sgt) {
+ buf->pages = kvmalloc_array(buf->num_pages,
+ sizeof(struct page *), GFP_KERNEL);
+ if (!buf->pages)
+ return -ENOMEM;
+
+ if (drm_prime_sg_to_page_addr_arrays(buf->sgt, buf->pages,
+ NULL, buf->num_pages) < 0)
+ return -EINVAL;
+ }
+
+ buf->grefs = kcalloc(buf->num_grefs, sizeof(*buf->grefs), GFP_KERNEL);
+ if (!buf->grefs)
+ return -ENOMEM;
+
+ buf->directory = kcalloc(get_num_pages_dir(buf), PAGE_SIZE, GFP_KERNEL);
+ if (!buf->directory)
+ return -ENOMEM;
+
+ return 0;
+}
+
+/*
+ * For backend allocated buffers we do not need grant_refs_for_buffer
+ * as those grant references are allocated on the backend side
+ */
+static const struct xen_drm_front_shbuf_ops backend_ops = {
+ .calc_num_grefs = backend_calc_num_grefs,
+ .fill_page_dir = backend_fill_page_dir,
+ .map = backend_map,
+ .unmap = backend_unmap
+};
+
+/* For locally granted references we do not need to map/unmap the references */
+static const struct xen_drm_front_shbuf_ops local_ops = {
+ .calc_num_grefs = guest_calc_num_grefs,
+ .fill_page_dir = guest_fill_page_dir,
+ .grant_refs_for_buffer = guest_grant_refs_for_buffer,
+};
+
+struct xen_drm_front_shbuf *xen_drm_front_shbuf_alloc(
+ struct xen_drm_front_shbuf_cfg *cfg)
+{
+ struct xen_drm_front_shbuf *buf;
+ int ret;
+
+ /* either pages or sgt, not both */
+ BUG_ON(cfg->pages && cfg->sgt);
+
+ buf = kzalloc(sizeof(*buf), GFP_KERNEL);
+ if (!buf)
+ return NULL;
+
+ if (cfg->be_alloc)
+ buf->ops = &backend_ops;
+ else
+ buf->ops = &local_ops;
+
+ buf->xb_dev = cfg->xb_dev;
+ buf->num_pages = DIV_ROUND_UP(cfg->size, PAGE_SIZE);
+ buf->sgt = cfg->sgt;
+ buf->pages = cfg->pages;
+
+ buf->ops->calc_num_grefs(buf);
+
+ ret = alloc_storage(buf);
+ if (ret)
+ goto fail;
+
+ ret = grant_references(buf);
+ if (ret)
+ goto fail;
+
+ buf->ops->fill_page_dir(buf);
+
+ return buf;
+
+fail:
+ xen_drm_front_shbuf_free(buf);
+ return ERR_PTR(ret);
+}
diff --git a/drivers/gpu/drm/xen/xen_drm_front_shbuf.h b/drivers/gpu/drm/xen/xen_drm_front_shbuf.h
new file mode 100644
index 000000000000..48151fc18f0a
--- /dev/null
+++ b/drivers/gpu/drm/xen/xen_drm_front_shbuf.h
@@ -0,0 +1,80 @@
+/*
+ * Xen para-virtual DRM device
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Copyright (C) 2016-2018 EPAM Systems Inc.
+ *
+ * Author: Oleksandr Andrushchenko <[email protected]>
+ */
+
+#ifndef __XEN_DRM_FRONT_SHBUF_H_
+#define __XEN_DRM_FRONT_SHBUF_H_
+
+#include <linux/kernel.h>
+#include <linux/scatterlist.h>
+
+#include <xen/grant_table.h>
+
+struct xen_drm_front_shbuf {
+ /*
+ * number of references granted for the backend use:
+ * - for allocated/imported dma-buf's this holds number of grant
+ * references for the page directory and pages of the buffer
+ * - for the buffer provided by the backend this holds number of
+ * grant references for the page directory as grant references for
+ * the buffer will be provided by the backend
+ */
+ int num_grefs;
+ grant_ref_t *grefs;
+ unsigned char *directory;
+
+ /*
+ * there are 2 ways to provide backing storage for this shared buffer:
+ * either pages or sgt. if buffer created from sgt then we own
+ * the pages and must free those ourselves on closure
+ */
+ int num_pages;
+ struct page **pages;
+
+ struct sg_table *sgt;
+
+ struct xenbus_device *xb_dev;
+
+ /* these are the ops used internally depending on be_alloc mode */
+ const struct xen_drm_front_shbuf_ops *ops;
+
+ /* Xen map handles for the buffer allocated by the backend */
+ grant_handle_t *backend_map_handles;
+};
+
+struct xen_drm_front_shbuf_cfg {
+ struct xenbus_device *xb_dev;
+ size_t size;
+ struct page **pages;
+ struct sg_table *sgt;
+ bool be_alloc;
+};
+
+struct xen_drm_front_shbuf *xen_drm_front_shbuf_alloc(
+ struct xen_drm_front_shbuf_cfg *cfg);
+
+grant_ref_t xen_drm_front_shbuf_get_dir_start(struct xen_drm_front_shbuf *buf);
+
+int xen_drm_front_shbuf_map(struct xen_drm_front_shbuf *buf);
+
+int xen_drm_front_shbuf_unmap(struct xen_drm_front_shbuf *buf);
+
+void xen_drm_front_shbuf_flush(struct xen_drm_front_shbuf *buf);
+
+void xen_drm_front_shbuf_free(struct xen_drm_front_shbuf *buf);
+
+#endif /* __XEN_DRM_FRONT_SHBUF_H_ */
--
2.7.4
On 21/02/18 09:03, Oleksandr Andrushchenko wrote:
> From: Oleksandr Andrushchenko <[email protected]>
>
> Initial handling for Xen bus states: implement
> Xen bus state machine for the frontend driver according to
> the state diagram and recovery flow from display para-virtualized
> protocol: xen/interface/io/displif.h.
>
> Signed-off-by: Oleksandr Andrushchenko <[email protected]>
> ---
> drivers/gpu/drm/xen/xen_drm_front.c | 124 +++++++++++++++++++++++++++++++++++-
> drivers/gpu/drm/xen/xen_drm_front.h | 26 ++++++++
> 2 files changed, 149 insertions(+), 1 deletion(-)
> create mode 100644 drivers/gpu/drm/xen/xen_drm_front.h
>
> diff --git a/drivers/gpu/drm/xen/xen_drm_front.c b/drivers/gpu/drm/xen/xen_drm_front.c
> index fd372fb464a1..d0306f9d660d 100644
> --- a/drivers/gpu/drm/xen/xen_drm_front.c
> +++ b/drivers/gpu/drm/xen/xen_drm_front.c
> [...]
> static int xen_drv_probe(struct xenbus_device *xb_dev,
> const struct xenbus_device_id *id)
> {
> - return 0;
> + struct xen_drm_front_info *front_info;
> +
> + front_info = devm_kzalloc(&xb_dev->dev,
> + sizeof(*front_info), GFP_KERNEL);
> + if (!front_info) {
> + xenbus_dev_fatal(xb_dev, -ENOMEM, "allocating device memory");
No need for a message in case of allocation failure: this is already
handled by the memory allocation itself.
Juergen
From: Oleksandr Andrushchenko <[email protected]>
Initial handling for Xen bus states: implement
Xen bus state machine for the frontend driver according to
the state diagram and recovery flow from display para-virtualized
protocol: xen/interface/io/displif.h.
Signed-off-by: Oleksandr Andrushchenko <[email protected]>
---
drivers/gpu/drm/xen/xen_drm_front.c | 124 +++++++++++++++++++++++++++++++++++-
drivers/gpu/drm/xen/xen_drm_front.h | 26 ++++++++
2 files changed, 149 insertions(+), 1 deletion(-)
create mode 100644 drivers/gpu/drm/xen/xen_drm_front.h
diff --git a/drivers/gpu/drm/xen/xen_drm_front.c b/drivers/gpu/drm/xen/xen_drm_front.c
index fd372fb464a1..d0306f9d660d 100644
--- a/drivers/gpu/drm/xen/xen_drm_front.c
+++ b/drivers/gpu/drm/xen/xen_drm_front.c
@@ -24,19 +24,141 @@
#include <xen/interface/io/displif.h>
+#include "xen_drm_front.h"
+
+static void xen_drv_remove_internal(struct xen_drm_front_info *front_info)
+{
+}
+
+static int backend_on_initwait(struct xen_drm_front_info *front_info)
+{
+ return 0;
+}
+
+static int backend_on_connected(struct xen_drm_front_info *front_info)
+{
+ return 0;
+}
+
+static void backend_on_disconnected(struct xen_drm_front_info *front_info)
+{
+ xenbus_switch_state(front_info->xb_dev, XenbusStateInitialising);
+}
+
static void backend_on_changed(struct xenbus_device *xb_dev,
enum xenbus_state backend_state)
{
+ struct xen_drm_front_info *front_info = dev_get_drvdata(&xb_dev->dev);
+ int ret;
+
+ DRM_DEBUG("Backend state is %s, front is %s\n",
+ xenbus_strstate(backend_state),
+ xenbus_strstate(xb_dev->state));
+
+ switch (backend_state) {
+ case XenbusStateReconfiguring:
+ /* fall through */
+ case XenbusStateReconfigured:
+ /* fall through */
+ case XenbusStateInitialised:
+ break;
+
+ case XenbusStateInitialising:
+ /* recovering after backend unexpected closure */
+ backend_on_disconnected(front_info);
+ break;
+
+ case XenbusStateInitWait:
+ /* recovering after backend unexpected closure */
+ backend_on_disconnected(front_info);
+ if (xb_dev->state != XenbusStateInitialising)
+ break;
+
+ ret = backend_on_initwait(front_info);
+ if (ret < 0)
+ xenbus_dev_fatal(xb_dev, ret, "initializing frontend");
+ else
+ xenbus_switch_state(xb_dev, XenbusStateInitialised);
+ break;
+
+ case XenbusStateConnected:
+ if (xb_dev->state != XenbusStateInitialised)
+ break;
+
+ ret = backend_on_connected(front_info);
+ if (ret < 0)
+ xenbus_dev_fatal(xb_dev, ret, "initializing DRM driver");
+ else
+ xenbus_switch_state(xb_dev, XenbusStateConnected);
+ break;
+
+ case XenbusStateClosing:
+ /*
+ * in this state the backend starts freeing resources, so let
+ * it go into the closed state so that we can also remove ours
+ */
+ break;
+
+ case XenbusStateUnknown:
+ /* fall through */
+ case XenbusStateClosed:
+ if (xb_dev->state == XenbusStateClosed)
+ break;
+
+ backend_on_disconnected(front_info);
+ break;
+ }
}
static int xen_drv_probe(struct xenbus_device *xb_dev,
const struct xenbus_device_id *id)
{
- return 0;
+ struct xen_drm_front_info *front_info;
+
+ front_info = devm_kzalloc(&xb_dev->dev,
+ sizeof(*front_info), GFP_KERNEL);
+ if (!front_info) {
+ xenbus_dev_fatal(xb_dev, -ENOMEM, "allocating device memory");
+ return -ENOMEM;
+ }
+
+ front_info->xb_dev = xb_dev;
+ dev_set_drvdata(&xb_dev->dev, front_info);
+ return xenbus_switch_state(xb_dev, XenbusStateInitialising);
}
static int xen_drv_remove(struct xenbus_device *dev)
{
+ struct xen_drm_front_info *front_info = dev_get_drvdata(&dev->dev);
+ int to = 100;
+
+ xenbus_switch_state(dev, XenbusStateClosing);
+
+ /*
+ * On driver removal it is disconnected from XenBus,
+ * so no backend state change events come via .otherend_changed
+ * callback. This prevents us from exiting gracefully, e.g.
+ * signaling the backend to free event channels, waiting for its
+ * state to change to XenbusStateClosed and cleaning at our end.
+ * Normally, when the frontend driver is removed, the backend
+ * will finally go into the XenbusStateInitWait state.
+ *
+ * Workaround: read backend's state manually and wait with time-out.
+ */
+ while ((xenbus_read_unsigned(front_info->xb_dev->otherend,
+ "state", XenbusStateUnknown) != XenbusStateInitWait) &&
+ to--)
+ msleep(10);
+
+ if (!to)
+ DRM_ERROR("Backend state is %s while removing driver\n",
+ xenbus_strstate(xenbus_read_unsigned(
+ front_info->xb_dev->otherend,
+ "state", XenbusStateUnknown)));
+
+ xen_drv_remove_internal(front_info);
+ xenbus_frontend_closed(dev);
return 0;
}
diff --git a/drivers/gpu/drm/xen/xen_drm_front.h b/drivers/gpu/drm/xen/xen_drm_front.h
new file mode 100644
index 000000000000..8af46850f126
--- /dev/null
+++ b/drivers/gpu/drm/xen/xen_drm_front.h
@@ -0,0 +1,26 @@
+/*
+ * Xen para-virtual DRM device
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Copyright (C) 2016-2018 EPAM Systems Inc.
+ *
+ * Author: Oleksandr Andrushchenko <[email protected]>
+ */
+
+#ifndef __XEN_DRM_FRONT_H_
+#define __XEN_DRM_FRONT_H_
+
+struct xen_drm_front_info {
+ struct xenbus_device *xb_dev;
+};
+
+#endif /* __XEN_DRM_FRONT_H_ */
--
2.7.4
On Wed, Feb 21, 2018 at 10:03:34AM +0200, Oleksandr Andrushchenko wrote:
> From: Oleksandr Andrushchenko <[email protected]>
>
> Introduce skeleton of the para-virtualized Xen display
> frontend driver. This patch only adds required
> essential stubs.
>
> Signed-off-by: Oleksandr Andrushchenko <[email protected]>
> ---
> drivers/gpu/drm/Kconfig | 2 +
> drivers/gpu/drm/Makefile | 1 +
> drivers/gpu/drm/xen/Kconfig | 17 ++++++++
> drivers/gpu/drm/xen/Makefile | 5 +++
> drivers/gpu/drm/xen/xen_drm_front.c | 83 +++++++++++++++++++++++++++++++++++++
> 5 files changed, 108 insertions(+)
> create mode 100644 drivers/gpu/drm/xen/Kconfig
> create mode 100644 drivers/gpu/drm/xen/Makefile
> create mode 100644 drivers/gpu/drm/xen/xen_drm_front.c
>
> diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
> index deeefa7a1773..757825ac60df 100644
> --- a/drivers/gpu/drm/Kconfig
> +++ b/drivers/gpu/drm/Kconfig
> @@ -289,6 +289,8 @@ source "drivers/gpu/drm/pl111/Kconfig"
>
> source "drivers/gpu/drm/tve200/Kconfig"
>
> +source "drivers/gpu/drm/xen/Kconfig"
> +
> # Keep legacy drivers last
>
> menuconfig DRM_LEGACY
> diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
> index 50093ff4479b..9d66657ea117 100644
> --- a/drivers/gpu/drm/Makefile
> +++ b/drivers/gpu/drm/Makefile
> @@ -103,3 +103,4 @@ obj-$(CONFIG_DRM_MXSFB) += mxsfb/
> obj-$(CONFIG_DRM_TINYDRM) += tinydrm/
> obj-$(CONFIG_DRM_PL111) += pl111/
> obj-$(CONFIG_DRM_TVE200) += tve200/
> +obj-$(CONFIG_DRM_XEN) += xen/
> diff --git a/drivers/gpu/drm/xen/Kconfig b/drivers/gpu/drm/xen/Kconfig
> new file mode 100644
> index 000000000000..4cca160782ab
> --- /dev/null
> +++ b/drivers/gpu/drm/xen/Kconfig
> @@ -0,0 +1,17 @@
> +config DRM_XEN
> + bool "DRM Support for Xen guest OS"
> + depends on XEN
> + help
> + Choose this option if you want to enable DRM support
> + for Xen.
> +
> +config DRM_XEN_FRONTEND
> + tristate "Para-virtualized frontend driver for Xen guest OS"
> + depends on DRM_XEN
> + depends on DRM
> + select DRM_KMS_HELPER
> + select VIDEOMODE_HELPERS
> + select XEN_XENBUS_FRONTEND
> + help
> + Choose this option if you want to enable a para-virtualized
> + frontend DRM/KMS driver for Xen guest OSes.
> diff --git a/drivers/gpu/drm/xen/Makefile b/drivers/gpu/drm/xen/Makefile
> new file mode 100644
> index 000000000000..967074d348f6
> --- /dev/null
> +++ b/drivers/gpu/drm/xen/Makefile
> @@ -0,0 +1,5 @@
> +# SPDX-License-Identifier: GPL-2.0
> +
> +drm_xen_front-objs := xen_drm_front.o
> +
> +obj-$(CONFIG_DRM_XEN_FRONTEND) += drm_xen_front.o
> diff --git a/drivers/gpu/drm/xen/xen_drm_front.c b/drivers/gpu/drm/xen/xen_drm_front.c
> new file mode 100644
> index 000000000000..fd372fb464a1
> --- /dev/null
> +++ b/drivers/gpu/drm/xen/xen_drm_front.c
> @@ -0,0 +1,83 @@
> +/*
> + * Xen para-virtual DRM device
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
Most Xen drivers in Linux use a dual GPL/BSD license, so that they can
be imported into other non GPL OSes:
This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License version 2
as published by the Free Software Foundation; or, when distributed
separately from the Linux kernel or incorporated into other
software packages, subject to the following license:
Permission is hereby granted, free of charge, to any person obtaining a copy
of this source file (the "Software"), to deal in the Software without
restriction, including without limitation the rights to use, copy, modify,
merge, publish, distribute, sublicense, and/or sell copies of the Software,
and to permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
IN THE SOFTWARE.
IMO it would be good to release this driver under the same license, so
it can be incorporated into other OSes.
Thanks, Roger.
On Wed, Feb 21, 2018 at 11:42:23AM +0200, Oleksandr Andrushchenko wrote:
> On 02/21/2018 11:17 AM, Roger Pau Monné wrote:
> > On Wed, Feb 21, 2018 at 10:03:34AM +0200, Oleksandr Andrushchenko wrote:
> > > --- /dev/null
> > > +++ b/drivers/gpu/drm/xen/xen_drm_front.c
> > > @@ -0,0 +1,83 @@
> > > +/*
> > > + * Xen para-virtual DRM device
> > > + *
> > > + * This program is free software; you can redistribute it and/or modify
> > > + * it under the terms of the GNU General Public License as published by
> > > + * the Free Software Foundation; either version 2 of the License, or
> > > + * (at your option) any later version.
> > > + *
> > > + * This program is distributed in the hope that it will be useful,
> > > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> > > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> > > + * GNU General Public License for more details.
> > Most Xen drivers in Linux use a dual GPL/BSD license, so that they can
> > be imported into other non GPL OSes:
> >
> > [full dual GPL/MIT license text snipped]
> >
> > IMO it would be good to release this driver under the same license, so
> > it can be incorporated into other OSes.
> I am in no way an expert in licensing, but the above seems to be
> /* SPDX-License-Identifier: (GPL-2.0 OR MIT) */
> At least this is what I see at [1] for MIT.
> Could you please tell which license(s) as listed at [1]
> would be appropriate for Xen drivers in terms of how it is
> expected to appear in the kernel code, e.g. expected
> SPDX-License-Identifier?
I would be fine with anything MIT/BSD-*/Apache-* like. In the Xen
community we have generally done dual GPL-2.0 MIT, so your proposed
tag looks fine IMO (I would also personally be fine with
MIT/BSD-*/Apache-* only).
The point is that it would be good to have the code under a more
permissive license so it can be integrated into non GPL OSes in the
future if needed, and that your code could be used as a reference for
that.
Thanks, Roger.
On 02/21/2018 12:19 PM, Roger Pau Monné wrote:
> On Wed, Feb 21, 2018 at 11:42:23AM +0200, Oleksandr Andrushchenko wrote:
>> On 02/21/2018 11:17 AM, Roger Pau Monné wrote:
>>> On Wed, Feb 21, 2018 at 10:03:34AM +0200, Oleksandr Andrushchenko wrote:
>>>> --- /dev/null
>>>> +++ b/drivers/gpu/drm/xen/xen_drm_front.c
>>>> @@ -0,0 +1,83 @@
>>>> +/*
>>>> + * Xen para-virtual DRM device
>>>> + *
>>>> + * This program is free software; you can redistribute it and/or modify
>>>> + * it under the terms of the GNU General Public License as published by
>>>> + * the Free Software Foundation; either version 2 of the License, or
>>>> + * (at your option) any later version.
>>>> + *
>>>> + * This program is distributed in the hope that it will be useful,
>>>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>>>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
>>>> + * GNU General Public License for more details.
>>> Most Xen drivers in Linux use a dual GPL/BSD license, so that they can
>>> be imported into other non GPL OSes:
>>>
>>> [full dual GPL/MIT license text snipped]
>>>
>>> IMO it would be good to release this driver under the same license, so
>>> it can be incorporated into other OSes.
>> I am in no way an expert in licensing, but the above seems to be
>> /* SPDX-License-Identifier: (GPL-2.0 OR MIT) */
>> At least this is what I see at [1] for MIT.
>> Could you please tell which license(s) as listed at [1]
>> would be appropriate for Xen drivers in terms of how it is
>> expected to appear in the kernel code, e.g. expected
>> SPDX-License-Identifier?
> I would be fine with anything MIT/BSD-*/Apache-* like. In the Xen
> community we have generally done dual GPL-2.0 MIT, so your proposed
> tag looks fine IMO (I would also personally be fine with
> MIT/BSD-*/Apache-* only).
Ok, then I am about to use /* SPDX-License-Identifier: (GPL-2.0 OR MIT) */
>
> The point is that it would be good to have the code under a more
> permissive license so it can be integrated into non GPL OSes in the
> future if needed, and that your code could be used as a reference for
> that.
That is clear, no objections
> Thanks, Roger.
Thank you,
Oleksandr
From: Oleksandr Andrushchenko <[email protected]>
Handle Xen event channels:
- create event channels for all configured connectors and publish
corresponding ring references and event channels in XenStore,
so the backend can connect
- implement event channels interrupt handlers
- create and destroy event channels with respect to Xen bus state
Signed-off-by: Oleksandr Andrushchenko <[email protected]>
---
drivers/gpu/drm/xen/Makefile | 1 +
drivers/gpu/drm/xen/xen_drm_front.c | 16 +-
drivers/gpu/drm/xen/xen_drm_front.h | 22 ++
drivers/gpu/drm/xen/xen_drm_front_evtchnl.c | 399 ++++++++++++++++++++++++++++
drivers/gpu/drm/xen/xen_drm_front_evtchnl.h | 89 +++++++
5 files changed, 526 insertions(+), 1 deletion(-)
create mode 100644 drivers/gpu/drm/xen/xen_drm_front_evtchnl.c
create mode 100644 drivers/gpu/drm/xen/xen_drm_front_evtchnl.h
diff --git a/drivers/gpu/drm/xen/Makefile b/drivers/gpu/drm/xen/Makefile
index 0a2eae757f0c..4ce7756b8437 100644
--- a/drivers/gpu/drm/xen/Makefile
+++ b/drivers/gpu/drm/xen/Makefile
@@ -1,6 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
drm_xen_front-objs := xen_drm_front.o \
+ xen_drm_front_evtchnl.o \
xen_drm_front_cfg.o
obj-$(CONFIG_DRM_XEN_FRONTEND) += drm_xen_front.o
diff --git a/drivers/gpu/drm/xen/xen_drm_front.c b/drivers/gpu/drm/xen/xen_drm_front.c
index 0a90c474c7ce..b558e0ae3b33 100644
--- a/drivers/gpu/drm/xen/xen_drm_front.c
+++ b/drivers/gpu/drm/xen/xen_drm_front.c
@@ -25,9 +25,15 @@
#include <xen/interface/io/displif.h>
#include "xen_drm_front.h"
+#include "xen_drm_front_evtchnl.h"
+
+static struct xen_drm_front_ops front_ops = {
+ /* placeholder for now */
+};
static void xen_drv_remove_internal(struct xen_drm_front_info *front_info)
{
+ xen_drm_front_evtchnl_free_all(front_info);
}
static int backend_on_initwait(struct xen_drm_front_info *front_info)
@@ -41,16 +47,23 @@ static int backend_on_initwait(struct xen_drm_front_info *front_info)
return ret;
 DRM_INFO("Have %d connector(s)\n", cfg->num_connectors);
- return 0;
+ /* Create event channels for all connectors and publish */
+ ret = xen_drm_front_evtchnl_create_all(front_info, &front_ops);
+ if (ret < 0)
+ return ret;
+
+ return xen_drm_front_evtchnl_publish_all(front_info);
}
static int backend_on_connected(struct xen_drm_front_info *front_info)
{
+ xen_drm_front_evtchnl_set_state(front_info, EVTCHNL_STATE_CONNECTED);
return 0;
}
static void backend_on_disconnected(struct xen_drm_front_info *front_info)
{
+ xen_drm_front_evtchnl_set_state(front_info, EVTCHNL_STATE_DISCONNECTED);
xenbus_switch_state(front_info->xb_dev, XenbusStateInitialising);
}
@@ -133,6 +146,7 @@ static int xen_drv_probe(struct xenbus_device *xb_dev,
}
front_info->xb_dev = xb_dev;
+ spin_lock_init(&front_info->io_lock);
dev_set_drvdata(&xb_dev->dev, front_info);
return xenbus_switch_state(xb_dev, XenbusStateInitialising);
}
diff --git a/drivers/gpu/drm/xen/xen_drm_front.h b/drivers/gpu/drm/xen/xen_drm_front.h
index 62b0d4e3e12b..13f22736ae02 100644
--- a/drivers/gpu/drm/xen/xen_drm_front.h
+++ b/drivers/gpu/drm/xen/xen_drm_front.h
@@ -21,8 +21,30 @@
#include "xen_drm_front_cfg.h"
+#ifndef GRANT_INVALID_REF
+/*
+ * Note on usage of grant reference 0 as invalid grant reference:
+ * grant reference 0 is valid, but never exposed to a PV driver,
+ * because it is already in use/reserved by the PV console.
+ */
+#define GRANT_INVALID_REF 0
+#endif
+
+struct xen_drm_front_ops {
+ /* CAUTION! this is called with a spin_lock held! */
+ void (*on_frame_done)(struct platform_device *pdev,
+ int conn_idx, uint64_t fb_cookie);
+};
+
struct xen_drm_front_info {
struct xenbus_device *xb_dev;
+ /* to protect data between backend IO code and interrupt handler */
+ spinlock_t io_lock;
+ /* virtual DRM platform device */
+ struct platform_device *drm_pdev;
+
+ int num_evt_pairs;
+ struct xen_drm_front_evtchnl_pair *evt_pairs;
struct xen_drm_front_cfg cfg;
};
diff --git a/drivers/gpu/drm/xen/xen_drm_front_evtchnl.c b/drivers/gpu/drm/xen/xen_drm_front_evtchnl.c
new file mode 100644
index 000000000000..697a0e4dcaed
--- /dev/null
+++ b/drivers/gpu/drm/xen/xen_drm_front_evtchnl.c
@@ -0,0 +1,399 @@
+/*
+ * Xen para-virtual DRM device
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Copyright (C) 2016-2018 EPAM Systems Inc.
+ *
+ * Author: Oleksandr Andrushchenko <[email protected]>
+ */
+
+#include <drm/drmP.h>
+
+#include <linux/errno.h>
+#include <linux/irq.h>
+
+#include <xen/xenbus.h>
+#include <xen/events.h>
+#include <xen/grant_table.h>
+
+#include "xen_drm_front.h"
+#include "xen_drm_front_evtchnl.h"
+
+static irqreturn_t evtchnl_interrupt_ctrl(int irq, void *dev_id)
+{
+ struct xen_drm_front_evtchnl *evtchnl = dev_id;
+ struct xen_drm_front_info *front_info = evtchnl->front_info;
+ struct xendispl_resp *resp;
+ RING_IDX i, rp;
+ unsigned long flags;
+
+ spin_lock_irqsave(&front_info->io_lock, flags);
+
+ if (unlikely(evtchnl->state != EVTCHNL_STATE_CONNECTED))
+ goto out;
+
+again:
+ rp = evtchnl->u.req.ring.sring->rsp_prod;
+ /* ensure we see queued responses up to rp */
+ virt_rmb();
+
+ for (i = evtchnl->u.req.ring.rsp_cons; i != rp; i++) {
+ resp = RING_GET_RESPONSE(&evtchnl->u.req.ring, i);
+ if (unlikely(resp->id != evtchnl->evt_id))
+ continue;
+
+ switch (resp->operation) {
+ case XENDISPL_OP_PG_FLIP:
+ case XENDISPL_OP_FB_ATTACH:
+ case XENDISPL_OP_FB_DETACH:
+ case XENDISPL_OP_DBUF_CREATE:
+ case XENDISPL_OP_DBUF_DESTROY:
+ case XENDISPL_OP_SET_CONFIG:
+ evtchnl->u.req.resp_status = resp->status;
+ complete(&evtchnl->u.req.completion);
+ break;
+
+ default:
+ DRM_ERROR("Operation %d is not supported\n",
+ resp->operation);
+ break;
+ }
+ }
+
+ evtchnl->u.req.ring.rsp_cons = i;
+
+ if (i != evtchnl->u.req.ring.req_prod_pvt) {
+ int more_to_do;
+
+ RING_FINAL_CHECK_FOR_RESPONSES(&evtchnl->u.req.ring,
+ more_to_do);
+ if (more_to_do)
+ goto again;
+ } else {
+ evtchnl->u.req.ring.sring->rsp_event = i + 1;
+ }
+
+out:
+ spin_unlock_irqrestore(&front_info->io_lock, flags);
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t evtchnl_interrupt_evt(int irq, void *dev_id)
+{
+ struct xen_drm_front_evtchnl *evtchnl = dev_id;
+ struct xen_drm_front_info *front_info = evtchnl->front_info;
+ struct xendispl_event_page *page = evtchnl->u.evt.page;
+ uint32_t cons, prod;
+ unsigned long flags;
+
+ spin_lock_irqsave(&front_info->io_lock, flags);
+ if (unlikely(evtchnl->state != EVTCHNL_STATE_CONNECTED))
+ goto out;
+
+ prod = page->in_prod;
+ /* ensure we see ring contents up to prod */
+ virt_rmb();
+ if (prod == page->in_cons)
+ goto out;
+
+ for (cons = page->in_cons; cons != prod; cons++) {
+ struct xendispl_evt *event;
+
+ event = &XENDISPL_IN_RING_REF(page, cons);
+ if (unlikely(event->id != evtchnl->evt_id++))
+ continue;
+
+ switch (event->type) {
+ case XENDISPL_EVT_PG_FLIP:
+ evtchnl->u.evt.front_ops->on_frame_done(
+ front_info->drm_pdev, evtchnl->index,
+ event->op.pg_flip.fb_cookie);
+ break;
+ }
+ }
+ page->in_cons = cons;
+ /* ensure the in_cons update is visible to the backend */
+ virt_wmb();
+
+out:
+ spin_unlock_irqrestore(&front_info->io_lock, flags);
+ return IRQ_HANDLED;
+}
+
+static void evtchnl_free(struct xen_drm_front_info *front_info,
+ struct xen_drm_front_evtchnl *evtchnl)
+{
+ unsigned long page = 0;
+
+ if (evtchnl->type == EVTCHNL_TYPE_REQ)
+ page = (unsigned long)evtchnl->u.req.ring.sring;
+ else if (evtchnl->type == EVTCHNL_TYPE_EVT)
+ page = (unsigned long)evtchnl->u.evt.page;
+ if (!page)
+ return;
+
+ evtchnl->state = EVTCHNL_STATE_DISCONNECTED;
+
+ if (evtchnl->type == EVTCHNL_TYPE_REQ) {
+ /* release all who still wait for a response, if any */
+ evtchnl->u.req.resp_status = -EIO;
+ complete_all(&evtchnl->u.req.completion);
+ }
+
+ if (evtchnl->irq)
+ unbind_from_irqhandler(evtchnl->irq, evtchnl);
+
+ if (evtchnl->port)
+ xenbus_free_evtchn(front_info->xb_dev, evtchnl->port);
+
+ /* end access and free the page */
+ if (evtchnl->gref != GRANT_INVALID_REF)
+ gnttab_end_foreign_access(evtchnl->gref, 0, page);
+
+ if (evtchnl->type == EVTCHNL_TYPE_REQ)
+ evtchnl->u.req.ring.sring = NULL;
+ else
+ evtchnl->u.evt.page = NULL;
+
+ memset(evtchnl, 0, sizeof(*evtchnl));
+}
+
+static int evtchnl_alloc(struct xen_drm_front_info *front_info, int index,
+ struct xen_drm_front_evtchnl *evtchnl,
+ enum xen_drm_front_evtchnl_type type)
+{
+ struct xenbus_device *xb_dev = front_info->xb_dev;
+ unsigned long page;
+ grant_ref_t gref;
+ irq_handler_t handler;
+ int ret;
+
+ memset(evtchnl, 0, sizeof(*evtchnl));
+ evtchnl->type = type;
+ evtchnl->index = index;
+ evtchnl->front_info = front_info;
+ evtchnl->state = EVTCHNL_STATE_DISCONNECTED;
+ evtchnl->gref = GRANT_INVALID_REF;
+
+ page = get_zeroed_page(GFP_NOIO | __GFP_HIGH);
+ if (!page) {
+ ret = -ENOMEM;
+ goto fail;
+ }
+
+ if (type == EVTCHNL_TYPE_REQ) {
+ struct xen_displif_sring *sring;
+
+ init_completion(&evtchnl->u.req.completion);
+ sring = (struct xen_displif_sring *)page;
+ SHARED_RING_INIT(sring);
+ FRONT_RING_INIT(&evtchnl->u.req.ring,
+ sring, XEN_PAGE_SIZE);
+
+ ret = xenbus_grant_ring(xb_dev, sring, 1, &gref);
+ if (ret < 0)
+ goto fail;
+
+ handler = evtchnl_interrupt_ctrl;
+ } else {
+ evtchnl->u.evt.page = (struct xendispl_event_page *)page;
+
+ ret = gnttab_grant_foreign_access(xb_dev->otherend_id,
+ virt_to_gfn((void *)page), 0);
+ if (ret < 0)
+ goto fail;
+
+ gref = ret;
+ handler = evtchnl_interrupt_evt;
+ }
+ evtchnl->gref = gref;
+
+ ret = xenbus_alloc_evtchn(xb_dev, &evtchnl->port);
+ if (ret < 0)
+ goto fail;
+
+ ret = bind_evtchn_to_irqhandler(evtchnl->port,
+ handler, 0, xb_dev->devicetype, evtchnl);
+ if (ret < 0)
+ goto fail;
+
+ evtchnl->irq = ret;
+ return 0;
+
+fail:
+ DRM_ERROR("Failed to allocate ring: %d\n", ret);
+ return ret;
+}
+
+int xen_drm_front_evtchnl_create_all(struct xen_drm_front_info *front_info,
+ struct xen_drm_front_ops *front_ops)
+{
+ struct xen_drm_front_cfg *cfg;
+ int ret, conn;
+
+ cfg = &front_info->cfg;
+
+ front_info->evt_pairs = devm_kcalloc(&front_info->xb_dev->dev,
+ cfg->num_connectors,
+ sizeof(struct xen_drm_front_evtchnl_pair), GFP_KERNEL);
+ if (!front_info->evt_pairs) {
+ ret = -ENOMEM;
+ goto fail;
+ }
+
+ for (conn = 0; conn < cfg->num_connectors; conn++) {
+ ret = evtchnl_alloc(front_info, conn,
+ &front_info->evt_pairs[conn].req,
+ EVTCHNL_TYPE_REQ);
+ if (ret < 0) {
+ DRM_ERROR("Error allocating control channel\n");
+ goto fail;
+ }
+
+ ret = evtchnl_alloc(front_info, conn,
+ &front_info->evt_pairs[conn].evt,
+ EVTCHNL_TYPE_EVT);
+ if (ret < 0) {
+ DRM_ERROR("Error allocating in-event channel\n");
+ goto fail;
+ }
+
+ front_info->evt_pairs[conn].evt.u.evt.front_ops = front_ops;
+ }
+ front_info->num_evt_pairs = cfg->num_connectors;
+ return 0;
+
+fail:
+ xen_drm_front_evtchnl_free_all(front_info);
+ return ret;
+}
+
+static int evtchnl_publish(struct xenbus_transaction xbt,
+ struct xen_drm_front_evtchnl *evtchnl, const char *path,
+ const char *node_ring, const char *node_chnl)
+{
+ struct xenbus_device *xb_dev = evtchnl->front_info->xb_dev;
+ int ret;
+
+ /* write control channel ring reference */
+ ret = xenbus_printf(xbt, path, node_ring, "%u", evtchnl->gref);
+ if (ret < 0) {
+ xenbus_dev_error(xb_dev, ret, "writing ring-ref");
+ return ret;
+ }
+
+ /* write event channel port */
+ ret = xenbus_printf(xbt, path, node_chnl, "%u", evtchnl->port);
+ if (ret < 0) {
+ xenbus_dev_error(xb_dev, ret, "writing event channel");
+ return ret;
+ }
+
+ return 0;
+}
+
+int xen_drm_front_evtchnl_publish_all(struct xen_drm_front_info *front_info)
+{
+ struct xenbus_transaction xbt;
+ struct xen_drm_front_cfg *plat_data;
+ int ret, conn;
+
+ plat_data = &front_info->cfg;
+
+again:
+ ret = xenbus_transaction_start(&xbt);
+ if (ret < 0) {
+ xenbus_dev_fatal(front_info->xb_dev, ret,
+ "starting transaction");
+ return ret;
+ }
+
+ for (conn = 0; conn < plat_data->num_connectors; conn++) {
+ ret = evtchnl_publish(xbt,
+ &front_info->evt_pairs[conn].req,
+ plat_data->connectors[conn].xenstore_path,
+ XENDISPL_FIELD_REQ_RING_REF,
+ XENDISPL_FIELD_REQ_CHANNEL);
+ if (ret < 0)
+ goto fail;
+
+ ret = evtchnl_publish(xbt,
+ &front_info->evt_pairs[conn].evt,
+ plat_data->connectors[conn].xenstore_path,
+ XENDISPL_FIELD_EVT_RING_REF,
+ XENDISPL_FIELD_EVT_CHANNEL);
+ if (ret < 0)
+ goto fail;
+ }
+
+ ret = xenbus_transaction_end(xbt, 0);
+ if (ret < 0) {
+ if (ret == -EAGAIN)
+ goto again;
+
+ xenbus_dev_fatal(front_info->xb_dev, ret,
+ "completing transaction");
+ goto fail_to_end;
+ }
+
+ return 0;
+
+fail:
+ xenbus_transaction_end(xbt, 1);
+
+fail_to_end:
+ xenbus_dev_fatal(front_info->xb_dev, ret, "writing Xen store");
+ return ret;
+}
+
+void xen_drm_front_evtchnl_flush(struct xen_drm_front_evtchnl *evtchnl)
+{
+ int notify;
+
+ evtchnl->u.req.ring.req_prod_pvt++;
+ RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&evtchnl->u.req.ring, notify);
+ if (notify)
+ notify_remote_via_irq(evtchnl->irq);
+}
+
+void xen_drm_front_evtchnl_set_state(struct xen_drm_front_info *front_info,
+ enum xen_drm_front_evtchnl_state state)
+{
+ unsigned long flags;
+ int i;
+
+ if (!front_info->evt_pairs)
+ return;
+
+ spin_lock_irqsave(&front_info->io_lock, flags);
+ for (i = 0; i < front_info->num_evt_pairs; i++) {
+ front_info->evt_pairs[i].req.state = state;
+ front_info->evt_pairs[i].evt.state = state;
+ }
+ spin_unlock_irqrestore(&front_info->io_lock, flags);
+}
+
+void xen_drm_front_evtchnl_free_all(struct xen_drm_front_info *front_info)
+{
+ int i;
+
+ if (!front_info->evt_pairs)
+ return;
+
+ for (i = 0; i < front_info->num_evt_pairs; i++) {
+ evtchnl_free(front_info, &front_info->evt_pairs[i].req);
+ evtchnl_free(front_info, &front_info->evt_pairs[i].evt);
+ }
+
+ devm_kfree(&front_info->xb_dev->dev, front_info->evt_pairs);
+ front_info->evt_pairs = NULL;
+}
diff --git a/drivers/gpu/drm/xen/xen_drm_front_evtchnl.h b/drivers/gpu/drm/xen/xen_drm_front_evtchnl.h
new file mode 100644
index 000000000000..e72d3aa68b4e
--- /dev/null
+++ b/drivers/gpu/drm/xen/xen_drm_front_evtchnl.h
@@ -0,0 +1,89 @@
+/*
+ * Xen para-virtual DRM device
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Copyright (C) 2016-2018 EPAM Systems Inc.
+ *
+ * Author: Oleksandr Andrushchenko <[email protected]>
+ */
+
+#ifndef __XEN_DRM_FRONT_EVTCHNL_H_
+#define __XEN_DRM_FRONT_EVTCHNL_H_
+
+#include <linux/completion.h>
+#include <linux/types.h>
+
+#include <xen/interface/io/ring.h>
+#include <xen/interface/io/displif.h>
+
+/*
+ * All operations which are not connector-oriented use this ctrl event channel,
+ * e.g. fb_attach/destroy which belong to a DRM device, not to a CRTC.
+ */
+#define GENERIC_OP_EVT_CHNL 0
+
+enum xen_drm_front_evtchnl_state {
+ EVTCHNL_STATE_DISCONNECTED,
+ EVTCHNL_STATE_CONNECTED,
+};
+
+enum xen_drm_front_evtchnl_type {
+ EVTCHNL_TYPE_REQ,
+ EVTCHNL_TYPE_EVT,
+};
+
+struct xen_drm_front_drm_info;
+
+struct xen_drm_front_evtchnl {
+ struct xen_drm_front_info *front_info;
+ int gref;
+ int port;
+ int irq;
+ int index;
+ enum xen_drm_front_evtchnl_state state;
+ enum xen_drm_front_evtchnl_type type;
+ /* either response id or incoming event id */
+ uint16_t evt_id;
+ /* next request id or next expected event id */
+ uint16_t evt_next_id;
+ union {
+ struct {
+ struct xen_displif_front_ring ring;
+ struct completion completion;
+ /* latest response status */
+ int resp_status;
+ } req;
+ struct {
+ struct xendispl_event_page *page;
+ struct xen_drm_front_ops *front_ops;
+ } evt;
+ } u;
+};
+
+struct xen_drm_front_evtchnl_pair {
+ struct xen_drm_front_evtchnl req;
+ struct xen_drm_front_evtchnl evt;
+};
+
+int xen_drm_front_evtchnl_create_all(struct xen_drm_front_info *front_info,
+ struct xen_drm_front_ops *front_ops);
+
+int xen_drm_front_evtchnl_publish_all(struct xen_drm_front_info *front_info);
+
+void xen_drm_front_evtchnl_flush(struct xen_drm_front_evtchnl *evtchnl);
+
+void xen_drm_front_evtchnl_set_state(struct xen_drm_front_info *front_info,
+ enum xen_drm_front_evtchnl_state state);
+
+void xen_drm_front_evtchnl_free_all(struct xen_drm_front_info *front_info);
+
+#endif /* __XEN_DRM_FRONT_EVTCHNL_H_ */
--
2.7.4
From: Oleksandr Andrushchenko <[email protected]>
Handle communication with the backend:
- send requests and wait for responses according
to the displif protocol
- serialize access to the communication channel
- use a 3000 ms timeout when waiting for backend responses
- manage display buffers shared with the backend
Signed-off-by: Oleksandr Andrushchenko <[email protected]>
---
drivers/gpu/drm/xen/xen_drm_front.c | 327 +++++++++++++++++++++++++++++++++++-
drivers/gpu/drm/xen/xen_drm_front.h | 5 +
2 files changed, 327 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/xen/xen_drm_front.c b/drivers/gpu/drm/xen/xen_drm_front.c
index 8de88e359d5e..5ad546231d30 100644
--- a/drivers/gpu/drm/xen/xen_drm_front.c
+++ b/drivers/gpu/drm/xen/xen_drm_front.c
@@ -31,12 +31,146 @@
#include "xen_drm_front_evtchnl.h"
#include "xen_drm_front_shbuf.h"
+/* timeout in ms to wait for backend to respond */
+#define VDRM_WAIT_BACK_MS 3000
+
+struct xen_drm_front_dbuf {
+ struct list_head list;
+ uint64_t dbuf_cookie;
+ uint64_t fb_cookie;
+ struct xen_drm_front_shbuf *shbuf;
+};
+
+static int dbuf_add_to_list(struct xen_drm_front_info *front_info,
+ struct xen_drm_front_shbuf *shbuf, uint64_t dbuf_cookie)
+{
+ struct xen_drm_front_dbuf *dbuf;
+
+ dbuf = kzalloc(sizeof(*dbuf), GFP_KERNEL);
+ if (!dbuf)
+ return -ENOMEM;
+
+ dbuf->dbuf_cookie = dbuf_cookie;
+ dbuf->shbuf = shbuf;
+ list_add(&dbuf->list, &front_info->dbuf_list);
+ return 0;
+}
+
+static struct xen_drm_front_dbuf *dbuf_get(struct list_head *dbuf_list,
+ uint64_t dbuf_cookie)
+{
+ struct xen_drm_front_dbuf *buf, *q;
+
+ list_for_each_entry_safe(buf, q, dbuf_list, list)
+ if (buf->dbuf_cookie == dbuf_cookie)
+ return buf;
+
+ return NULL;
+}
+
+static void dbuf_flush_fb(struct list_head *dbuf_list, uint64_t fb_cookie)
+{
+ struct xen_drm_front_dbuf *buf, *q;
+
+ list_for_each_entry_safe(buf, q, dbuf_list, list)
+ if (buf->fb_cookie == fb_cookie)
+ xen_drm_front_shbuf_flush(buf->shbuf);
+}
+
+static void dbuf_free(struct list_head *dbuf_list, uint64_t dbuf_cookie)
+{
+ struct xen_drm_front_dbuf *buf, *q;
+
+ list_for_each_entry_safe(buf, q, dbuf_list, list)
+ if (buf->dbuf_cookie == dbuf_cookie) {
+ list_del(&buf->list);
+ xen_drm_front_shbuf_unmap(buf->shbuf);
+ xen_drm_front_shbuf_free(buf->shbuf);
+ kfree(buf);
+ break;
+ }
+}
+
+static void dbuf_free_all(struct list_head *dbuf_list)
+{
+ struct xen_drm_front_dbuf *buf, *q;
+
+ list_for_each_entry_safe(buf, q, dbuf_list, list) {
+ list_del(&buf->list);
+ xen_drm_front_shbuf_unmap(buf->shbuf);
+ xen_drm_front_shbuf_free(buf->shbuf);
+ kfree(buf);
+ }
+}
+
+static struct xendispl_req *be_prepare_req(
+ struct xen_drm_front_evtchnl *evtchnl, uint8_t operation)
+{
+ struct xendispl_req *req;
+
+ req = RING_GET_REQUEST(&evtchnl->u.req.ring,
+ evtchnl->u.req.ring.req_prod_pvt);
+ req->operation = operation;
+ req->id = evtchnl->evt_next_id++;
+ evtchnl->evt_id = req->id;
+ return req;
+}
+
+static int be_stream_do_io(struct xen_drm_front_evtchnl *evtchnl,
+ struct xendispl_req *req)
+{
+ reinit_completion(&evtchnl->u.req.completion);
+ if (unlikely(evtchnl->state != EVTCHNL_STATE_CONNECTED))
+ return -EIO;
+
+ xen_drm_front_evtchnl_flush(evtchnl);
+ return 0;
+}
+
+static int be_stream_wait_io(struct xen_drm_front_evtchnl *evtchnl)
+{
+ if (wait_for_completion_timeout(&evtchnl->u.req.completion,
+ msecs_to_jiffies(VDRM_WAIT_BACK_MS)) <= 0)
+ return -ETIMEDOUT;
+
+ return evtchnl->u.req.resp_status;
+}
+
static int be_mode_set(struct xen_drm_front_drm_pipeline *pipeline, uint32_t x,
uint32_t y, uint32_t width, uint32_t height, uint32_t bpp,
uint64_t fb_cookie)
{
- return 0;
+ struct xen_drm_front_evtchnl *evtchnl;
+ struct xen_drm_front_info *front_info;
+ struct xendispl_req *req;
+ unsigned long flags;
+ int ret;
+
+ front_info = pipeline->drm_info->front_info;
+ evtchnl = &front_info->evt_pairs[pipeline->index].req;
+ if (unlikely(!evtchnl))
+ return -EIO;
+
+ mutex_lock(&front_info->req_io_lock);
+
+ spin_lock_irqsave(&front_info->io_lock, flags);
+ req = be_prepare_req(evtchnl, XENDISPL_OP_SET_CONFIG);
+ req->op.set_config.x = x;
+ req->op.set_config.y = y;
+ req->op.set_config.width = width;
+ req->op.set_config.height = height;
+ req->op.set_config.bpp = bpp;
+ req->op.set_config.fb_cookie = fb_cookie;
+
+ ret = be_stream_do_io(evtchnl, req);
+ spin_unlock_irqrestore(&front_info->io_lock, flags);
+
+ if (ret == 0)
+ ret = be_stream_wait_io(evtchnl);
+
+ mutex_unlock(&front_info->req_io_lock);
+ return ret;
}
static int be_dbuf_create_int(struct xen_drm_front_info *front_info,
@@ -44,7 +178,69 @@ static int be_dbuf_create_int(struct xen_drm_front_info *front_info,
uint32_t bpp, uint64_t size, struct page **pages,
struct sg_table *sgt)
{
+ struct xen_drm_front_evtchnl *evtchnl;
+ struct xen_drm_front_shbuf *shbuf;
+ struct xendispl_req *req;
+ struct xen_drm_front_shbuf_cfg buf_cfg;
+ unsigned long flags;
+ int ret;
+
+ evtchnl = &front_info->evt_pairs[GENERIC_OP_EVT_CHNL].req;
+ if (unlikely(!evtchnl))
+ return -EIO;
+
+ memset(&buf_cfg, 0, sizeof(buf_cfg));
+ buf_cfg.xb_dev = front_info->xb_dev;
+ buf_cfg.pages = pages;
+ buf_cfg.size = size;
+ buf_cfg.sgt = sgt;
+ buf_cfg.be_alloc = front_info->cfg.be_alloc;
+
+ shbuf = xen_drm_front_shbuf_alloc(&buf_cfg);
+ if (!shbuf)
+ return -ENOMEM;
+
+ ret = dbuf_add_to_list(front_info, shbuf, dbuf_cookie);
+ if (ret < 0) {
+ xen_drm_front_shbuf_free(shbuf);
+ return ret;
+ }
+
+ mutex_lock(&front_info->req_io_lock);
+
+ spin_lock_irqsave(&front_info->io_lock, flags);
+ req = be_prepare_req(evtchnl, XENDISPL_OP_DBUF_CREATE);
+ req->op.dbuf_create.gref_directory =
+ xen_drm_front_shbuf_get_dir_start(shbuf);
+ req->op.dbuf_create.buffer_sz = size;
+ req->op.dbuf_create.dbuf_cookie = dbuf_cookie;
+ req->op.dbuf_create.width = width;
+ req->op.dbuf_create.height = height;
+ req->op.dbuf_create.bpp = bpp;
+ if (buf_cfg.be_alloc)
+ req->op.dbuf_create.flags |= XENDISPL_DBUF_FLG_REQ_ALLOC;
+
+ ret = be_stream_do_io(evtchnl, req);
+ spin_unlock_irqrestore(&front_info->io_lock, flags);
+
+ if (ret < 0)
+ goto fail;
+
+ ret = be_stream_wait_io(evtchnl);
+ if (ret < 0)
+ goto fail;
+
+ ret = xen_drm_front_shbuf_map(shbuf);
+ if (ret < 0)
+ goto fail;
+
+ mutex_unlock(&front_info->req_io_lock);
return 0;
+
+fail:
+ mutex_unlock(&front_info->req_io_lock);
+ dbuf_free(&front_info->dbuf_list, dbuf_cookie);
+ return ret;
}
static int be_dbuf_create_from_sgt(struct xen_drm_front_info *front_info,
@@ -66,26 +262,144 @@ static int be_dbuf_create_from_pages(struct xen_drm_front_info *front_info,
static int be_dbuf_destroy(struct xen_drm_front_info *front_info,
uint64_t dbuf_cookie)
{
- return 0;
+ struct xen_drm_front_evtchnl *evtchnl;
+ struct xendispl_req *req;
+ unsigned long flags;
+ bool be_alloc;
+ int ret;
+
+ evtchnl = &front_info->evt_pairs[GENERIC_OP_EVT_CHNL].req;
+ if (unlikely(!evtchnl))
+ return -EIO;
+
+ be_alloc = front_info->cfg.be_alloc;
+
+ /*
+ * for a backend-allocated buffer release the references now,
+ * so the backend can free the buffer
+ */
+ if (be_alloc)
+ dbuf_free(&front_info->dbuf_list, dbuf_cookie);
+
+ mutex_lock(&front_info->req_io_lock);
+
+ spin_lock_irqsave(&front_info->io_lock, flags);
+ req = be_prepare_req(evtchnl, XENDISPL_OP_DBUF_DESTROY);
+ req->op.dbuf_destroy.dbuf_cookie = dbuf_cookie;
+
+ ret = be_stream_do_io(evtchnl, req);
+ spin_unlock_irqrestore(&front_info->io_lock, flags);
+
+ if (ret == 0)
+ ret = be_stream_wait_io(evtchnl);
+
+ /*
+ * do this regardless of the communication status with the backend:
+ * even if we cannot remove the remote resources, remove what we can
+ * locally
+ */
+ if (!be_alloc)
+ dbuf_free(&front_info->dbuf_list, dbuf_cookie);
+
+ mutex_unlock(&front_info->req_io_lock);
+ return ret;
}
static int be_fb_attach(struct xen_drm_front_info *front_info,
uint64_t dbuf_cookie, uint64_t fb_cookie, uint32_t width,
uint32_t height, uint32_t pixel_format)
{
- return 0;
+ struct xen_drm_front_evtchnl *evtchnl;
+ struct xen_drm_front_dbuf *buf;
+ struct xendispl_req *req;
+ unsigned long flags;
+ int ret;
+
+ evtchnl = &front_info->evt_pairs[GENERIC_OP_EVT_CHNL].req;
+ if (unlikely(!evtchnl))
+ return -EIO;
+
+ buf = dbuf_get(&front_info->dbuf_list, dbuf_cookie);
+ if (!buf)
+ return -EINVAL;
+
+ buf->fb_cookie = fb_cookie;
+
+ mutex_lock(&front_info->req_io_lock);
+
+ spin_lock_irqsave(&front_info->io_lock, flags);
+ req = be_prepare_req(evtchnl, XENDISPL_OP_FB_ATTACH);
+ req->op.fb_attach.dbuf_cookie = dbuf_cookie;
+ req->op.fb_attach.fb_cookie = fb_cookie;
+ req->op.fb_attach.width = width;
+ req->op.fb_attach.height = height;
+ req->op.fb_attach.pixel_format = pixel_format;
+
+ ret = be_stream_do_io(evtchnl, req);
+ spin_unlock_irqrestore(&front_info->io_lock, flags);
+
+ if (ret == 0)
+ ret = be_stream_wait_io(evtchnl);
+
+ mutex_unlock(&front_info->req_io_lock);
+ return ret;
}
static int be_fb_detach(struct xen_drm_front_info *front_info,
uint64_t fb_cookie)
{
- return 0;
+ struct xen_drm_front_evtchnl *evtchnl;
+ struct xendispl_req *req;
+ unsigned long flags;
+ int ret;
+
+ evtchnl = &front_info->evt_pairs[GENERIC_OP_EVT_CHNL].req;
+ if (unlikely(!evtchnl))
+ return -EIO;
+
+ mutex_lock(&front_info->req_io_lock);
+
+ spin_lock_irqsave(&front_info->io_lock, flags);
+ req = be_prepare_req(evtchnl, XENDISPL_OP_FB_DETACH);
+ req->op.fb_detach.fb_cookie = fb_cookie;
+
+ ret = be_stream_do_io(evtchnl, req);
+ spin_unlock_irqrestore(&front_info->io_lock, flags);
+
+ if (ret == 0)
+ ret = be_stream_wait_io(evtchnl);
+
+ mutex_unlock(&front_info->req_io_lock);
+ return ret;
}
static int be_page_flip(struct xen_drm_front_info *front_info, int conn_idx,
uint64_t fb_cookie)
{
- return 0;
+ struct xen_drm_front_evtchnl *evtchnl;
+ struct xendispl_req *req;
+ unsigned long flags;
+ int ret;
+
+ if (unlikely(conn_idx >= front_info->num_evt_pairs))
+ return -EINVAL;
+
+ dbuf_flush_fb(&front_info->dbuf_list, fb_cookie);
+ evtchnl = &front_info->evt_pairs[conn_idx].req;
+
+ mutex_lock(&front_info->req_io_lock);
+
+ spin_lock_irqsave(&front_info->io_lock, flags);
+ req = be_prepare_req(evtchnl, XENDISPL_OP_PG_FLIP);
+ req->op.pg_flip.fb_cookie = fb_cookie;
+
+ ret = be_stream_do_io(evtchnl, req);
+ spin_unlock_irqrestore(&front_info->io_lock, flags);
+
+ if (ret == 0)
+ ret = be_stream_wait_io(evtchnl);
+
+ mutex_unlock(&front_info->req_io_lock);
+ return ret;
}
static void xen_drm_drv_unload(struct xen_drm_front_info *front_info)
@@ -183,6 +497,7 @@ static void xen_drv_remove_internal(struct xen_drm_front_info *front_info)
{
xen_drm_drv_deinit(front_info);
xen_drm_front_evtchnl_free_all(front_info);
+ dbuf_free_all(&front_info->dbuf_list);
}
static int backend_on_initwait(struct xen_drm_front_info *front_info)
@@ -310,6 +625,8 @@ static int xen_drv_probe(struct xenbus_device *xb_dev,
front_info->xb_dev = xb_dev;
spin_lock_init(&front_info->io_lock);
+ mutex_init(&front_info->req_io_lock);
+ INIT_LIST_HEAD(&front_info->dbuf_list);
front_info->drm_pdrv_registered = false;
dev_set_drvdata(&xb_dev->dev, front_info);
return xenbus_switch_state(xb_dev, XenbusStateInitialising);
diff --git a/drivers/gpu/drm/xen/xen_drm_front.h b/drivers/gpu/drm/xen/xen_drm_front.h
index c6f52c892434..db32d00145d1 100644
--- a/drivers/gpu/drm/xen/xen_drm_front.h
+++ b/drivers/gpu/drm/xen/xen_drm_front.h
@@ -137,6 +137,8 @@ struct xen_drm_front_info {
struct xenbus_device *xb_dev;
/* to protect data between backend IO code and interrupt handler */
spinlock_t io_lock;
+ /* serializer for backend IO: request/response */
+ struct mutex req_io_lock;
bool drm_pdrv_registered;
/* virtual DRM platform device */
struct platform_device *drm_pdev;
@@ -144,6 +146,9 @@ struct xen_drm_front_info {
int num_evt_pairs;
struct xen_drm_front_evtchnl_pair *evt_pairs;
struct xen_drm_front_cfg cfg;
+
+ /* display buffers */
+ struct list_head dbuf_list;
};
#endif /* __XEN_DRM_FRONT_H_ */
--
2.7.4
From: Oleksandr Andrushchenko <[email protected]>
Implement kernel modesetting/connector handling using the
DRM simple KMS helper pipeline:
- implement the KMS part of the driver with the help of the DRM
simple pipeline helper, which is possible because
the para-virtualized driver only supports a single
(primary) plane:
- initialize connectors according to XenStore configuration
- handle frame done events from the backend
- generate vblank events
- create and destroy frame buffers and propagate those
to the backend
- propagate set/reset mode configuration to the backend on display
enable/disable callbacks
- send page flip request to the backend and implement logic for
reporting backend IO errors on prepare fb callback
- implement virtual connector handling:
- support only pixel formats suitable for single plane modes
- make sure the connector is always connected
- support a single video mode as per para-virtualized driver
configuration
Signed-off-by: Oleksandr Andrushchenko <[email protected]>
---
drivers/gpu/drm/xen/Makefile | 2 +
drivers/gpu/drm/xen/xen_drm_front_conn.c | 125 +++++++++++++
drivers/gpu/drm/xen/xen_drm_front_conn.h | 35 ++++
drivers/gpu/drm/xen/xen_drm_front_drv.c | 15 ++
drivers/gpu/drm/xen/xen_drm_front_drv.h | 12 ++
drivers/gpu/drm/xen/xen_drm_front_kms.c | 299 +++++++++++++++++++++++++++++++
drivers/gpu/drm/xen/xen_drm_front_kms.h | 30 ++++
7 files changed, 518 insertions(+)
create mode 100644 drivers/gpu/drm/xen/xen_drm_front_conn.c
create mode 100644 drivers/gpu/drm/xen/xen_drm_front_conn.h
create mode 100644 drivers/gpu/drm/xen/xen_drm_front_kms.c
create mode 100644 drivers/gpu/drm/xen/xen_drm_front_kms.h
diff --git a/drivers/gpu/drm/xen/Makefile b/drivers/gpu/drm/xen/Makefile
index d3068202590f..4fcb0da1a9c5 100644
--- a/drivers/gpu/drm/xen/Makefile
+++ b/drivers/gpu/drm/xen/Makefile
@@ -2,6 +2,8 @@
drm_xen_front-objs := xen_drm_front.o \
xen_drm_front_drv.o \
+ xen_drm_front_kms.o \
+ xen_drm_front_conn.o \
xen_drm_front_evtchnl.o \
xen_drm_front_shbuf.o \
xen_drm_front_cfg.o
diff --git a/drivers/gpu/drm/xen/xen_drm_front_conn.c b/drivers/gpu/drm/xen/xen_drm_front_conn.c
new file mode 100644
index 000000000000..d9986a2e1a3b
--- /dev/null
+++ b/drivers/gpu/drm/xen/xen_drm_front_conn.c
@@ -0,0 +1,125 @@
+/*
+ * Xen para-virtual DRM device
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Copyright (C) 2016-2018 EPAM Systems Inc.
+ *
+ * Author: Oleksandr Andrushchenko <[email protected]>
+ */
+
+#include <drm/drm_atomic_helper.h>
+#include <drm/drm_crtc_helper.h>
+
+#include <video/videomode.h>
+
+#include "xen_drm_front_conn.h"
+#include "xen_drm_front_drv.h"
+
+static struct xen_drm_front_drm_pipeline *
+to_xen_drm_pipeline(struct drm_connector *connector)
+{
+ return container_of(connector, struct xen_drm_front_drm_pipeline, conn);
+}
+
+static const uint32_t plane_formats[] = {
+ DRM_FORMAT_RGB565,
+ DRM_FORMAT_RGB888,
+ DRM_FORMAT_XRGB8888,
+ DRM_FORMAT_ARGB8888,
+ DRM_FORMAT_XRGB4444,
+ DRM_FORMAT_ARGB4444,
+ DRM_FORMAT_XRGB1555,
+ DRM_FORMAT_ARGB1555,
+};
+
+const uint32_t *xen_drm_front_conn_get_formats(int *format_count)
+{
+ *format_count = ARRAY_SIZE(plane_formats);
+ return plane_formats;
+}
+
+static enum drm_connector_status connector_detect(
+ struct drm_connector *connector, bool force)
+{
+ if (drm_dev_is_unplugged(connector->dev))
+ return connector_status_disconnected;
+
+ return connector_status_connected;
+}
+
+#define XEN_DRM_NUM_VIDEO_MODES 1
+#define XEN_DRM_CRTC_VREFRESH_HZ 60
+
+static int connector_get_modes(struct drm_connector *connector)
+{
+ struct xen_drm_front_drm_pipeline *pipeline =
+ to_xen_drm_pipeline(connector);
+ struct drm_display_mode *mode;
+ struct videomode videomode;
+ int width, height;
+
+ mode = drm_mode_create(connector->dev);
+ if (!mode)
+ return 0;
+
+ memset(&videomode, 0, sizeof(videomode));
+ videomode.hactive = pipeline->width;
+ videomode.vactive = pipeline->height;
+ width = videomode.hactive + videomode.hfront_porch +
+ videomode.hback_porch + videomode.hsync_len;
+ height = videomode.vactive + videomode.vfront_porch +
+ videomode.vback_porch + videomode.vsync_len;
+ videomode.pixelclock = width * height * XEN_DRM_CRTC_VREFRESH_HZ;
+ mode->type = DRM_MODE_TYPE_PREFERRED | DRM_MODE_TYPE_DRIVER;
+
+ drm_display_mode_from_videomode(&videomode, mode);
+ drm_mode_probed_add(connector, mode);
+ return XEN_DRM_NUM_VIDEO_MODES;
+}
+
+static int connector_mode_valid(struct drm_connector *connector,
+ struct drm_display_mode *mode)
+{
+ struct xen_drm_front_drm_pipeline *pipeline =
+ to_xen_drm_pipeline(connector);
+
+ if (mode->hdisplay != pipeline->width)
+ return MODE_ERROR;
+
+ if (mode->vdisplay != pipeline->height)
+ return MODE_ERROR;
+
+ return MODE_OK;
+}
+
+static const struct drm_connector_helper_funcs connector_helper_funcs = {
+ .get_modes = connector_get_modes,
+ .mode_valid = connector_mode_valid,
+};
+
+static const struct drm_connector_funcs connector_funcs = {
+ .detect = connector_detect,
+ .fill_modes = drm_helper_probe_single_connector_modes,
+ .destroy = drm_connector_cleanup,
+ .reset = drm_atomic_helper_connector_reset,
+ .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
+ .atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
+};
+
+int xen_drm_front_conn_init(struct xen_drm_front_drm_info *drm_info,
+ struct drm_connector *connector)
+{
+ drm_connector_helper_add(connector, &connector_helper_funcs);
+
+ return drm_connector_init(drm_info->drm_dev, connector,
+ &connector_funcs, DRM_MODE_CONNECTOR_VIRTUAL);
+}
diff --git a/drivers/gpu/drm/xen/xen_drm_front_conn.h b/drivers/gpu/drm/xen/xen_drm_front_conn.h
new file mode 100644
index 000000000000..708e80d45985
--- /dev/null
+++ b/drivers/gpu/drm/xen/xen_drm_front_conn.h
@@ -0,0 +1,35 @@
+/*
+ * Xen para-virtual DRM device
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Copyright (C) 2016-2018 EPAM Systems Inc.
+ *
+ * Author: Oleksandr Andrushchenko <[email protected]>
+ */
+
+#ifndef __XEN_DRM_FRONT_CONN_H_
+#define __XEN_DRM_FRONT_CONN_H_
+
+#include <drm/drmP.h>
+#include <drm/drm_crtc.h>
+#include <drm/drm_encoder.h>
+
+#include <linux/wait.h>
+
+struct xen_drm_front_drm_info;
+
+const uint32_t *xen_drm_front_conn_get_formats(int *format_count);
+
+int xen_drm_front_conn_init(struct xen_drm_front_drm_info *drm_info,
+ struct drm_connector *connector);
+
+#endif /* __XEN_DRM_FRONT_CONN_H_ */
diff --git a/drivers/gpu/drm/xen/xen_drm_front_drv.c b/drivers/gpu/drm/xen/xen_drm_front_drv.c
index b3764d5ed0f6..e8862d26ba27 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_drv.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_drv.c
@@ -23,6 +23,7 @@
#include "xen_drm_front.h"
#include "xen_drm_front_cfg.h"
#include "xen_drm_front_drv.h"
+#include "xen_drm_front_kms.h"
static int dumb_create(struct drm_file *filp,
struct drm_device *dev, struct drm_mode_create_dumb *args)
@@ -41,6 +42,13 @@ static void free_object(struct drm_gem_object *obj)
static void on_frame_done(struct platform_device *pdev,
int conn_idx, uint64_t fb_cookie)
{
+ struct xen_drm_front_drm_info *drm_info = platform_get_drvdata(pdev);
+
+ if (unlikely(conn_idx >= drm_info->cfg->num_connectors))
+ return;
+
+ xen_drm_front_kms_on_frame_done(&drm_info->pipeline[conn_idx],
+ fb_cookie);
}
static void lastclose(struct drm_device *dev)
@@ -157,6 +165,12 @@ int xen_drm_front_drv_probe(struct platform_device *pdev,
return ret;
}
+ ret = xen_drm_front_kms_init(drm_info);
+ if (ret) {
+ DRM_ERROR("Failed to initialize DRM/KMS, ret %d\n", ret);
+ goto fail_modeset;
+ }
+
dev->irq_enabled = 1;
ret = drm_dev_register(dev, 0);
@@ -172,6 +186,7 @@ int xen_drm_front_drv_probe(struct platform_device *pdev,
fail_register:
drm_dev_unregister(dev);
+fail_modeset:
drm_mode_config_cleanup(dev);
return ret;
}
diff --git a/drivers/gpu/drm/xen/xen_drm_front_drv.h b/drivers/gpu/drm/xen/xen_drm_front_drv.h
index aaa476535c13..563318b19f34 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_drv.h
+++ b/drivers/gpu/drm/xen/xen_drm_front_drv.h
@@ -20,14 +20,24 @@
#define __XEN_DRM_FRONT_DRV_H_
#include <drm/drmP.h>
+#include <drm/drm_simple_kms_helper.h>
#include "xen_drm_front.h"
#include "xen_drm_front_cfg.h"
+#include "xen_drm_front_conn.h"
struct xen_drm_front_drm_pipeline {
struct xen_drm_front_drm_info *drm_info;
int index;
+
+ struct drm_simple_display_pipe pipe;
+
+ struct drm_connector conn;
+ /* these are only for connector mode checking */
+ int width, height;
+ /* last backend error seen on page flip */
+ int pgflip_last_error;
};
struct xen_drm_front_drm_info {
@@ -35,6 +45,8 @@ struct xen_drm_front_drm_info {
struct xen_drm_front_ops *front_ops;
struct drm_device *drm_dev;
struct xen_drm_front_cfg *cfg;
+
+ struct xen_drm_front_drm_pipeline pipeline[XEN_DRM_FRONT_MAX_CRTCS];
};
static inline uint64_t xen_drm_front_fb_to_cookie(
diff --git a/drivers/gpu/drm/xen/xen_drm_front_kms.c b/drivers/gpu/drm/xen/xen_drm_front_kms.c
new file mode 100644
index 000000000000..ad94c28835cd
--- /dev/null
+++ b/drivers/gpu/drm/xen/xen_drm_front_kms.c
@@ -0,0 +1,299 @@
+/*
+ * Xen para-virtual DRM device
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Copyright (C) 2016-2018 EPAM Systems Inc.
+ *
+ * Author: Oleksandr Andrushchenko <[email protected]>
+ */
+
+#include "xen_drm_front_kms.h"
+
+#include <drm/drmP.h>
+#include <drm/drm_atomic.h>
+#include <drm/drm_atomic_helper.h>
+#include <drm/drm_gem.h>
+#include <drm/drm_gem_framebuffer_helper.h>
+
+#include "xen_drm_front.h"
+#include "xen_drm_front_conn.h"
+#include "xen_drm_front_drv.h"
+
+static struct xen_drm_front_drm_pipeline *
+to_xen_drm_pipeline(struct drm_simple_display_pipe *pipe)
+{
+ return container_of(pipe, struct xen_drm_front_drm_pipeline, pipe);
+}
+
+static void fb_destroy(struct drm_framebuffer *fb)
+{
+ struct xen_drm_front_drm_info *drm_info = fb->dev->dev_private;
+
+ drm_info->front_ops->fb_detach(drm_info->front_info,
+ xen_drm_front_fb_to_cookie(fb));
+ drm_gem_fb_destroy(fb);
+}
+
+static struct drm_framebuffer_funcs fb_funcs = {
+ .destroy = fb_destroy,
+};
+
+static struct drm_framebuffer *fb_create(struct drm_device *dev,
+ struct drm_file *filp, const struct drm_mode_fb_cmd2 *mode_cmd)
+{
+ struct xen_drm_front_drm_info *drm_info = dev->dev_private;
+ struct drm_framebuffer *fb;
+ struct drm_gem_object *gem_obj;
+ int ret;
+
+ fb = drm_gem_fb_create_with_funcs(dev, filp, mode_cmd, &fb_funcs);
+ if (IS_ERR_OR_NULL(fb))
+ return fb;
+
+ gem_obj = drm_gem_object_lookup(filp, mode_cmd->handles[0]);
+ if (!gem_obj) {
+ DRM_ERROR("Failed to lookup GEM object\n");
+ ret = -ENOENT;
+ goto fail;
+ }
+
+ drm_gem_object_unreference_unlocked(gem_obj);
+
+ ret = drm_info->front_ops->fb_attach(
+ drm_info->front_info,
+ xen_drm_front_dbuf_to_cookie(gem_obj),
+ xen_drm_front_fb_to_cookie(fb),
+ fb->width, fb->height, fb->format->format);
+ if (ret < 0) {
+ DRM_ERROR("Backend failed to attach FB %p: %d\n", fb, ret);
+ goto fail;
+ }
+
+ return fb;
+
+fail:
+ drm_gem_fb_destroy(fb);
+ return ERR_PTR(ret);
+}
+
+static const struct drm_mode_config_funcs mode_config_funcs = {
+ .fb_create = fb_create,
+ .atomic_check = drm_atomic_helper_check,
+ .atomic_commit = drm_atomic_helper_commit,
+};
+
+static int display_set_config(struct drm_simple_display_pipe *pipe,
+ struct drm_framebuffer *fb)
+{
+ struct xen_drm_front_drm_pipeline *pipeline =
+ to_xen_drm_pipeline(pipe);
+ struct drm_crtc *crtc = &pipe->crtc;
+ struct xen_drm_front_drm_info *drm_info = pipeline->drm_info;
+ int ret;
+
+ if (fb)
+ ret = drm_info->front_ops->mode_set(pipeline,
+ crtc->x, crtc->y,
+ fb->width, fb->height, fb->format->cpp[0] * 8,
+ xen_drm_front_fb_to_cookie(fb));
+ else
+ ret = drm_info->front_ops->mode_set(pipeline,
+ 0, 0, 0, 0, 0,
+ xen_drm_front_fb_to_cookie(NULL));
+
+ if (ret)
+ DRM_ERROR("Failed to set mode to back: %d\n", ret);
+
+ return ret;
+}
+
+static void display_enable(struct drm_simple_display_pipe *pipe,
+ struct drm_crtc_state *crtc_state)
+{
+ struct drm_crtc *crtc = &pipe->crtc;
+ struct drm_framebuffer *fb = pipe->plane.state->fb;
+
+ if (display_set_config(pipe, fb) == 0)
+ drm_crtc_vblank_on(crtc);
+ else
+ DRM_ERROR("Failed to enable display\n");
+}
+
+static void display_disable(struct drm_simple_display_pipe *pipe)
+{
+ struct drm_crtc *crtc = &pipe->crtc;
+
+ display_set_config(pipe, NULL);
+ drm_crtc_vblank_off(crtc);
+ /* final check for stalled events */
+ if (crtc->state->event && !crtc->state->active) {
+ unsigned long flags;
+
+ spin_lock_irqsave(&crtc->dev->event_lock, flags);
+ drm_crtc_send_vblank_event(crtc, crtc->state->event);
+ spin_unlock_irqrestore(&crtc->dev->event_lock, flags);
+ crtc->state->event = NULL;
+ }
+}
+
+void xen_drm_front_kms_on_frame_done(
+ struct xen_drm_front_drm_pipeline *pipeline,
+ uint64_t fb_cookie)
+{
+ drm_crtc_handle_vblank(&pipeline->pipe.crtc);
+}
+
+static void display_send_page_flip(struct drm_simple_display_pipe *pipe,
+ struct drm_plane_state *old_plane_state)
+{
+ struct drm_plane_state *plane_state = drm_atomic_get_new_plane_state(
+ old_plane_state->state, &pipe->plane);
+
+ /*
+ * If old_plane_state->fb is NULL and plane_state->fb is not NULL,
+ * then this is an atomic commit which will enable the display.
+ * If old_plane_state->fb is not NULL and plane_state->fb is NULL,
+ * then this is an atomic commit which will disable the display.
+ * Ignore these commits and do not send a page flip, as the
+ * framebuffer will be sent to the backend as part of the
+ * display_set_config call.
+ */
+ if (old_plane_state->fb && plane_state->fb) {
+ struct xen_drm_front_drm_pipeline *pipeline =
+ to_xen_drm_pipeline(pipe);
+ struct xen_drm_front_drm_info *drm_info = pipeline->drm_info;
+ int ret;
+
+ ret = drm_info->front_ops->page_flip(drm_info->front_info,
+ pipeline->index,
+ xen_drm_front_fb_to_cookie(plane_state->fb));
+ pipeline->pgflip_last_error = ret;
+ if (ret) {
+ DRM_ERROR("Failed to send page flip request to backend: %d\n", ret);
+ /*
+ * As we are at the commit stage, the DRM core will anyway
+ * wait for the vblank and knows nothing about our failure.
+ * The best we can do is to handle the vblank now, so there
+ * are no vblank/flip_done timeouts.
+ */
+ drm_crtc_handle_vblank(&pipeline->pipe.crtc);
+ }
+ }
+}
+
+static int display_prepare_fb(struct drm_simple_display_pipe *pipe,
+ struct drm_plane_state *plane_state)
+{
+ struct xen_drm_front_drm_pipeline *pipeline =
+ to_xen_drm_pipeline(pipe);
+
+ if (pipeline->pgflip_last_error) {
+ int ret;
+
+ /* if previous page flip didn't succeed then report the error */
+ ret = pipeline->pgflip_last_error;
+ /* and let us try to page flip next time */
+ pipeline->pgflip_last_error = 0;
+ return ret;
+ }
+ return drm_gem_fb_prepare_fb(&pipe->plane, plane_state);
+}
+
+static void display_update(struct drm_simple_display_pipe *pipe,
+ struct drm_plane_state *old_plane_state)
+{
+ struct drm_crtc *crtc = &pipe->crtc;
+ struct drm_pending_vblank_event *event;
+
+ event = crtc->state->event;
+ if (event) {
+ struct drm_device *dev = crtc->dev;
+ unsigned long flags;
+
+ crtc->state->event = NULL;
+
+ spin_lock_irqsave(&dev->event_lock, flags);
+ if (drm_crtc_vblank_get(crtc) == 0)
+ drm_crtc_arm_vblank_event(crtc, event);
+ else
+ drm_crtc_send_vblank_event(crtc, event);
+ spin_unlock_irqrestore(&dev->event_lock, flags);
+ }
+ /*
+ * Send page flip request to the backend *after* we have event armed/
+ * sent above, so on page flip done event from the backend we can
+ * deliver it while handling vblank.
+ */
+ display_send_page_flip(pipe, old_plane_state);
+}
+
+static const struct drm_simple_display_pipe_funcs display_funcs = {
+ .enable = display_enable,
+ .disable = display_disable,
+ .prepare_fb = display_prepare_fb,
+ .update = display_update,
+};
+
+static int display_pipe_init(struct xen_drm_front_drm_info *drm_info,
+ int index, struct xen_drm_front_cfg_connector *cfg,
+ struct xen_drm_front_drm_pipeline *pipeline)
+{
+ struct drm_device *dev = drm_info->drm_dev;
+ const uint32_t *formats;
+ int format_count;
+ int ret;
+
+ pipeline->drm_info = drm_info;
+ pipeline->index = index;
+ pipeline->height = cfg->height;
+ pipeline->width = cfg->width;
+
+ ret = xen_drm_front_conn_init(drm_info, &pipeline->conn);
+ if (ret)
+ return ret;
+
+ formats = xen_drm_front_conn_get_formats(&format_count);
+
+ return drm_simple_display_pipe_init(dev, &pipeline->pipe,
+ &display_funcs, formats, format_count,
+ NULL, &pipeline->conn);
+}
+
+int xen_drm_front_kms_init(struct xen_drm_front_drm_info *drm_info)
+{
+ struct drm_device *dev = drm_info->drm_dev;
+ int i, ret;
+
+ drm_mode_config_init(dev);
+
+ dev->mode_config.min_width = 0;
+ dev->mode_config.min_height = 0;
+ dev->mode_config.max_width = 4095;
+ dev->mode_config.max_height = 2047;
+ dev->mode_config.funcs = &mode_config_funcs;
+
+ for (i = 0; i < drm_info->cfg->num_connectors; i++) {
+ struct xen_drm_front_cfg_connector *cfg =
+ &drm_info->cfg->connectors[i];
+ struct xen_drm_front_drm_pipeline *pipeline =
+ &drm_info->pipeline[i];
+
+ ret = display_pipe_init(drm_info, i, cfg, pipeline);
+ if (ret) {
+ drm_mode_config_cleanup(dev);
+ return ret;
+ }
+ }
+
+ drm_mode_config_reset(dev);
+ return 0;
+}
diff --git a/drivers/gpu/drm/xen/xen_drm_front_kms.h b/drivers/gpu/drm/xen/xen_drm_front_kms.h
new file mode 100644
index 000000000000..65a50033bb9b
--- /dev/null
+++ b/drivers/gpu/drm/xen/xen_drm_front_kms.h
@@ -0,0 +1,30 @@
+/*
+ * Xen para-virtual DRM device
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Copyright (C) 2016-2018 EPAM Systems Inc.
+ *
+ * Author: Oleksandr Andrushchenko <[email protected]>
+ */
+
+#ifndef __XEN_DRM_FRONT_KMS_H_
+#define __XEN_DRM_FRONT_KMS_H_
+
+#include "xen_drm_front_drv.h"
+
+int xen_drm_front_kms_init(struct xen_drm_front_drm_info *drm_info);
+
+void xen_drm_front_kms_on_frame_done(
+ struct xen_drm_front_drm_pipeline *pipeline,
+ uint64_t fb_cookie);
+
+#endif /* __XEN_DRM_FRONT_KMS_H_ */
--
2.7.4
From: Oleksandr Andrushchenko <[email protected]>
Implement GEM handling depending on the driver's mode of operation:
depending on the requirements for the para-virtualized environment, namely
the requirements dictated by the accompanying DRM/(v)GPU drivers running in
both host and guest environments, a number of operating modes of the
para-virtualized display driver are supported:
- display buffers can be allocated by either the frontend driver or the backend
- display buffers can be allocated to be contiguous in memory or not
Note! The frontend driver itself has no dependency on contiguous memory for
its operation.
1. Buffers allocated by the frontend driver.
The below modes of operation are configured at compile-time via
frontend driver's kernel configuration.
1.1. Front driver configured to use GEM CMA helpers
This use-case is useful when the accompanying DRM/vGPU driver in the
guest domain was designed to only work with contiguous buffers, e.g. a
DRM driver based on GEM CMA helpers: such drivers can only import
contiguous PRIME buffers, thus requiring the frontend driver to provide
such. To implement this mode of operation, the para-virtualized
frontend driver can be configured to use GEM CMA helpers.
1.2. Front driver doesn't use GEM CMA
If the accompanying drivers can cope with non-contiguous memory then, to
lower pressure on the kernel's CMA subsystem, the driver can allocate
buffers from system memory.
Note! If used with accompanying DRM/(v)GPU drivers, this mode of operation
may require IOMMU support on the platform, so the accompanying DRM/vGPU
hardware can still reach display buffer memory while importing PRIME
buffers from the frontend driver.
2. Buffers allocated by the backend
This mode of operation is run-time configured via guest domain configuration
through XenStore entries.
For systems which do not provide IOMMU support but have specific
requirements for display buffers, it is possible to allocate such buffers
on the backend side and share them with the frontend.
For example, if the host domain is 1:1 mapped and has DRM/GPU hardware
expecting physically contiguous memory, this allows implementing zero-copy
use-cases.
Note! Configuration options 1.1 (contiguous display buffers) and 2 (backend
allocated buffers) are not supported at the same time.
Signed-off-by: Oleksandr Andrushchenko <[email protected]>
---
drivers/gpu/drm/xen/Kconfig | 13 +
drivers/gpu/drm/xen/Makefile | 6 +
drivers/gpu/drm/xen/xen_drm_front.h | 74 ++++++
drivers/gpu/drm/xen/xen_drm_front_drv.c | 80 ++++++-
drivers/gpu/drm/xen/xen_drm_front_drv.h | 1 +
drivers/gpu/drm/xen/xen_drm_front_gem.c | 360 ++++++++++++++++++++++++++++
drivers/gpu/drm/xen/xen_drm_front_gem.h | 46 ++++
drivers/gpu/drm/xen/xen_drm_front_gem_cma.c | 93 +++++++
8 files changed, 667 insertions(+), 6 deletions(-)
create mode 100644 drivers/gpu/drm/xen/xen_drm_front_gem.c
create mode 100644 drivers/gpu/drm/xen/xen_drm_front_gem.h
create mode 100644 drivers/gpu/drm/xen/xen_drm_front_gem_cma.c
diff --git a/drivers/gpu/drm/xen/Kconfig b/drivers/gpu/drm/xen/Kconfig
index 4cca160782ab..4f4abc91f3b6 100644
--- a/drivers/gpu/drm/xen/Kconfig
+++ b/drivers/gpu/drm/xen/Kconfig
@@ -15,3 +15,16 @@ config DRM_XEN_FRONTEND
help
Choose this option if you want to enable a para-virtualized
frontend DRM/KMS driver for Xen guest OSes.
+
+config DRM_XEN_FRONTEND_CMA
+ bool "Use DRM CMA to allocate dumb buffers"
+ depends on DRM_XEN_FRONTEND
+ select DRM_KMS_CMA_HELPER
+ select DRM_GEM_CMA_HELPER
+ help
+ Use DRM CMA helpers to allocate display buffers.
+ This is useful for use-cases when the guest driver needs to
+ share or export buffers to other drivers which only expect
+ contiguous buffers.
+ Note: in this mode the driver cannot use buffers allocated
+ by the backend.
diff --git a/drivers/gpu/drm/xen/Makefile b/drivers/gpu/drm/xen/Makefile
index 4fcb0da1a9c5..12376ec78fbc 100644
--- a/drivers/gpu/drm/xen/Makefile
+++ b/drivers/gpu/drm/xen/Makefile
@@ -8,4 +8,10 @@ drm_xen_front-objs := xen_drm_front.o \
xen_drm_front_shbuf.o \
xen_drm_front_cfg.o
+ifeq ($(CONFIG_DRM_XEN_FRONTEND_CMA),y)
+ drm_xen_front-objs += xen_drm_front_gem_cma.o
+else
+ drm_xen_front-objs += xen_drm_front_gem.o
+endif
+
obj-$(CONFIG_DRM_XEN_FRONTEND) += drm_xen_front.o
diff --git a/drivers/gpu/drm/xen/xen_drm_front.h b/drivers/gpu/drm/xen/xen_drm_front.h
index 9ed5bfb248d0..c6f52c892434 100644
--- a/drivers/gpu/drm/xen/xen_drm_front.h
+++ b/drivers/gpu/drm/xen/xen_drm_front.h
@@ -34,6 +34,80 @@
struct xen_drm_front_drm_pipeline;
+/*
+ *******************************************************************************
+ * Para-virtualized DRM/KMS frontend driver
+ *******************************************************************************
+ * This frontend driver implements Xen para-virtualized display
+ * according to the display protocol described at
+ * include/xen/interface/io/displif.h
+ *
+ *******************************************************************************
+ * Driver modes of operation in terms of display buffers used
+ *******************************************************************************
+ * Depending on the requirements for the para-virtualized environment, namely
+ * the requirements dictated by the accompanying DRM/(v)GPU drivers running in
+ * both host and guest environments, a number of operating modes of the
+ * para-virtualized display driver are supported:
+ * - display buffers can be allocated by either the frontend driver or
+ *   the backend
+ * - display buffers can be allocated to be contiguous in memory or not
+ *
+ * Note! The frontend driver itself has no dependency on contiguous memory
+ * for its operation.
+ *
+ *******************************************************************************
+ * 1. Buffers allocated by the frontend driver.
+ *******************************************************************************
+ *
+ * The below modes of operation are configured at compile-time via
+ * frontend driver's kernel configuration.
+ *
+ * 1.1. Front driver configured to use GEM CMA helpers
+ * This use-case is useful when the accompanying DRM/vGPU driver in the
+ * guest domain was designed to only work with contiguous buffers, e.g. a
+ * DRM driver based on GEM CMA helpers: such drivers can only import
+ * contiguous PRIME buffers, thus requiring the frontend driver to provide
+ * such. To implement this mode of operation, the para-virtualized
+ * frontend driver can be configured to use GEM CMA helpers.
+ *
+ * 1.2. Front driver doesn't use GEM CMA
+ * If the accompanying drivers can cope with non-contiguous memory then, to
+ * lower pressure on the kernel's CMA subsystem, the driver can allocate
+ * buffers from system memory.
+ *
+ * Note! If used with accompanying DRM/(v)GPU drivers, this mode of operation
+ * may require IOMMU support on the platform, so the accompanying DRM/vGPU
+ * hardware can still reach display buffer memory while importing PRIME
+ * buffers from the frontend driver.
+ *
+ *******************************************************************************
+ * 2. Buffers allocated by the backend
+ *******************************************************************************
+ *
+ * This mode of operation is run-time configured via guest domain configuration
+ * through XenStore entries.
+ *
+ * For systems which do not provide IOMMU support but have specific
+ * requirements for display buffers, it is possible to allocate such
+ * buffers on the backend side and share them with the frontend.
+ * For example, if the host domain is 1:1 mapped and has DRM/GPU hardware
+ * expecting physically contiguous memory, this allows implementing
+ * zero-copy use-cases.
+ *
+ *******************************************************************************
+ * Driver limitations
+ *******************************************************************************
+ * 1. Configuration options 1.1 (contiguous display buffers) and 2 (backend
+ * allocated buffers) are not supported at the same time.
+ *
+ * 2. Only primary plane without additional properties is supported.
+ *
+ * 3. Only one video mode is supported, with resolution configured via XenStore.
+ *
+ * 4. All CRTCs operate at fixed frequency of 60Hz.
+ *
+ ******************************************************************************/
+
struct xen_drm_front_ops {
int (*mode_set)(struct xen_drm_front_drm_pipeline *pipeline,
uint32_t x, uint32_t y, uint32_t width, uint32_t height,
diff --git a/drivers/gpu/drm/xen/xen_drm_front_drv.c b/drivers/gpu/drm/xen/xen_drm_front_drv.c
index e8862d26ba27..35e7e9cda9d1 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_drv.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_drv.c
@@ -23,12 +23,58 @@
#include "xen_drm_front.h"
#include "xen_drm_front_cfg.h"
#include "xen_drm_front_drv.h"
+#include "xen_drm_front_gem.h"
#include "xen_drm_front_kms.h"
static int dumb_create(struct drm_file *filp,
struct drm_device *dev, struct drm_mode_create_dumb *args)
{
- return -EINVAL;
+ struct xen_drm_front_drm_info *drm_info = dev->dev_private;
+ struct drm_gem_object *obj;
+ int ret;
+
+ ret = drm_info->gem_ops->dumb_create(filp, dev, args);
+ if (ret)
+ goto fail;
+
+ obj = drm_gem_object_lookup(filp, args->handle);
+ if (!obj) {
+ ret = -ENOENT;
+ goto fail_destroy;
+ }
+
+ drm_gem_object_unreference_unlocked(obj);
+
+ /*
+ * In case of CONFIG_DRM_XEN_FRONTEND_CMA the GEM object is
+ * constructed via the DRM CMA helpers and doesn't have ->pages
+ * allocated (gem_ops->get_pages will return NULL), but it can
+ * provide an sg table instead.
+ */
+ if (drm_info->gem_ops->get_pages(obj))
+ ret = drm_info->front_ops->dbuf_create_from_pages(
+ drm_info->front_info,
+ xen_drm_front_dbuf_to_cookie(obj),
+ args->width, args->height, args->bpp,
+ args->size,
+ drm_info->gem_ops->get_pages(obj));
+ else
+ ret = drm_info->front_ops->dbuf_create_from_sgt(
+ drm_info->front_info,
+ xen_drm_front_dbuf_to_cookie(obj),
+ args->width, args->height, args->bpp,
+ args->size,
+ drm_info->gem_ops->prime_get_sg_table(obj));
+ if (ret)
+ goto fail_destroy;
+
+ return 0;
+
+fail_destroy:
+ drm_gem_dumb_destroy(filp, dev, args->handle);
+fail:
+ DRM_ERROR("Failed to create dumb buffer: %d\n", ret);
+ return ret;
}
static void free_object(struct drm_gem_object *obj)
@@ -37,6 +83,7 @@ static void free_object(struct drm_gem_object *obj)
drm_info->front_ops->dbuf_destroy(drm_info->front_info,
xen_drm_front_dbuf_to_cookie(obj));
+ drm_info->gem_ops->free_object_unlocked(obj);
}
static void on_frame_done(struct platform_device *pdev,
@@ -60,32 +107,52 @@ static void lastclose(struct drm_device *dev)
static int gem_mmap(struct file *filp, struct vm_area_struct *vma)
{
- return -EINVAL;
+ struct drm_file *file_priv = filp->private_data;
+ struct drm_device *dev = file_priv->minor->dev;
+ struct xen_drm_front_drm_info *drm_info = dev->dev_private;
+
+ return drm_info->gem_ops->mmap(filp, vma);
}
static struct sg_table *prime_get_sg_table(struct drm_gem_object *obj)
{
- return NULL;
+ struct xen_drm_front_drm_info *drm_info;
+
+ drm_info = obj->dev->dev_private;
+ return drm_info->gem_ops->prime_get_sg_table(obj);
}
static struct drm_gem_object *prime_import_sg_table(struct drm_device *dev,
struct dma_buf_attachment *attach, struct sg_table *sgt)
{
- return NULL;
+ struct xen_drm_front_drm_info *drm_info;
+
+ drm_info = dev->dev_private;
+ return drm_info->gem_ops->prime_import_sg_table(dev, attach, sgt);
}
static void *prime_vmap(struct drm_gem_object *obj)
{
- return NULL;
+ struct xen_drm_front_drm_info *drm_info;
+
+ drm_info = obj->dev->dev_private;
+ return drm_info->gem_ops->prime_vmap(obj);
}
static void prime_vunmap(struct drm_gem_object *obj, void *vaddr)
{
+ struct xen_drm_front_drm_info *drm_info;
+
+ drm_info = obj->dev->dev_private;
+ drm_info->gem_ops->prime_vunmap(obj, vaddr);
}
static int prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
{
- return -EINVAL;
+ struct xen_drm_front_drm_info *drm_info;
+
+ drm_info = obj->dev->dev_private;
+ return drm_info->gem_ops->prime_mmap(obj, vma);
}
static const struct file_operations xendrm_fops = {
@@ -147,6 +214,7 @@ int xen_drm_front_drv_probe(struct platform_device *pdev,
drm_info->front_ops = front_ops;
drm_info->front_ops->on_frame_done = on_frame_done;
+ drm_info->gem_ops = xen_drm_front_gem_get_ops();
drm_info->front_info = cfg->front_info;
dev = drm_dev_alloc(&xen_drm_driver, &pdev->dev);
diff --git a/drivers/gpu/drm/xen/xen_drm_front_drv.h b/drivers/gpu/drm/xen/xen_drm_front_drv.h
index 563318b19f34..34228eb86255 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_drv.h
+++ b/drivers/gpu/drm/xen/xen_drm_front_drv.h
@@ -43,6 +43,7 @@ struct xen_drm_front_drm_pipeline {
struct xen_drm_front_drm_info {
struct xen_drm_front_info *front_info;
struct xen_drm_front_ops *front_ops;
+ const struct xen_drm_front_gem_ops *gem_ops;
struct drm_device *drm_dev;
struct xen_drm_front_cfg *cfg;
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
new file mode 100644
index 000000000000..367e08f6a9ef
--- /dev/null
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
@@ -0,0 +1,360 @@
+/*
+ * Xen para-virtual DRM device
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Copyright (C) 2016-2018 EPAM Systems Inc.
+ *
+ * Author: Oleksandr Andrushchenko <[email protected]>
+ */
+
+#include "xen_drm_front_gem.h"
+
+#include <drm/drmP.h>
+#include <drm/drm_crtc_helper.h>
+#include <drm/drm_fb_helper.h>
+#include <drm/drm_gem.h>
+
+#include <linux/dma-buf.h>
+#include <linux/scatterlist.h>
+#include <linux/shmem_fs.h>
+
+#include <xen/balloon.h>
+
+#include "xen_drm_front.h"
+#include "xen_drm_front_drv.h"
+#include "xen_drm_front_shbuf.h"
+
+struct xen_gem_object {
+ struct drm_gem_object base;
+
+ size_t num_pages;
+ struct page **pages;
+
+ /* set for buffers allocated by the backend */
+ bool be_alloc;
+
+ /* this is for imported PRIME buffer */
+ struct sg_table *sgt_imported;
+};
+
+static inline struct xen_gem_object *to_xen_gem_obj(
+ struct drm_gem_object *gem_obj)
+{
+ return container_of(gem_obj, struct xen_gem_object, base);
+}
+
+static int gem_alloc_pages_array(struct xen_gem_object *xen_obj,
+ size_t buf_size)
+{
+ xen_obj->num_pages = DIV_ROUND_UP(buf_size, PAGE_SIZE);
+ xen_obj->pages = kvmalloc_array(xen_obj->num_pages,
+ sizeof(struct page *), GFP_KERNEL);
+ return xen_obj->pages == NULL ? -ENOMEM : 0;
+}
+
+static void gem_free_pages_array(struct xen_gem_object *xen_obj)
+{
+ kvfree(xen_obj->pages);
+ xen_obj->pages = NULL;
+}
+
+static struct xen_gem_object *gem_create_obj(struct drm_device *dev,
+ size_t size)
+{
+ struct xen_gem_object *xen_obj;
+ int ret;
+
+ xen_obj = kzalloc(sizeof(*xen_obj), GFP_KERNEL);
+ if (!xen_obj)
+ return ERR_PTR(-ENOMEM);
+
+ ret = drm_gem_object_init(dev, &xen_obj->base, size);
+ if (ret < 0) {
+ kfree(xen_obj);
+ return ERR_PTR(ret);
+ }
+
+ return xen_obj;
+}
+
+static struct xen_gem_object *gem_create(struct drm_device *dev, size_t size)
+{
+ struct xen_drm_front_drm_info *drm_info = dev->dev_private;
+ struct xen_gem_object *xen_obj;
+ int ret;
+
+ size = round_up(size, PAGE_SIZE);
+ xen_obj = gem_create_obj(dev, size);
+ if (IS_ERR_OR_NULL(xen_obj))
+ return xen_obj;
+
+ if (drm_info->cfg->be_alloc) {
+ /*
+ * backend will allocate space for this buffer, so
+ * only allocate array of pointers to pages
+ */
+ xen_obj->be_alloc = true;
+ ret = gem_alloc_pages_array(xen_obj, size);
+ if (ret < 0)
+ goto fail;
+
+ ret = alloc_xenballooned_pages(xen_obj->num_pages,
+ xen_obj->pages);
+ if (ret < 0) {
+ DRM_ERROR("Cannot allocate %zu ballooned pages: %d\n",
+ xen_obj->num_pages, ret);
+ goto fail;
+ }
+
+ return xen_obj;
+ }
+ /*
+ * need to allocate backing pages now, so we can share those
+ * with the backend
+ */
+ xen_obj->num_pages = DIV_ROUND_UP(size, PAGE_SIZE);
+ xen_obj->pages = drm_gem_get_pages(&xen_obj->base);
+ if (IS_ERR_OR_NULL(xen_obj->pages)) {
+ ret = PTR_ERR(xen_obj->pages);
+ xen_obj->pages = NULL;
+ goto fail;
+ }
+
+ return xen_obj;
+
+fail:
+ DRM_ERROR("Failed to allocate buffer with size %zu\n", size);
+ return ERR_PTR(ret);
+}
+
+static struct xen_gem_object *gem_create_with_handle(struct drm_file *filp,
+ struct drm_device *dev, size_t size, uint32_t *handle)
+{
+ struct xen_gem_object *xen_obj;
+ struct drm_gem_object *gem_obj;
+ int ret;
+
+ xen_obj = gem_create(dev, size);
+ if (IS_ERR_OR_NULL(xen_obj))
+ return xen_obj;
+
+ gem_obj = &xen_obj->base;
+ ret = drm_gem_handle_create(filp, gem_obj, handle);
+ /* handle holds the reference */
+ drm_gem_object_unreference_unlocked(gem_obj);
+ if (ret < 0)
+ return ERR_PTR(ret);
+
+ return xen_obj;
+}
+
+static int gem_dumb_create(struct drm_file *filp, struct drm_device *dev,
+ struct drm_mode_create_dumb *args)
+{
+ struct xen_gem_object *xen_obj;
+
+ args->pitch = DIV_ROUND_UP(args->width * args->bpp, 8);
+ args->size = args->pitch * args->height;
+
+ xen_obj = gem_create_with_handle(filp, dev, args->size, &args->handle);
+ if (IS_ERR_OR_NULL(xen_obj))
+ return xen_obj == NULL ? -ENOMEM : PTR_ERR(xen_obj);
+
+ return 0;
+}
+
+static void gem_free_object(struct drm_gem_object *gem_obj)
+{
+ struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
+
+ if (xen_obj->base.import_attach) {
+ drm_prime_gem_destroy(&xen_obj->base, xen_obj->sgt_imported);
+ gem_free_pages_array(xen_obj);
+ } else {
+ if (xen_obj->pages) {
+ if (xen_obj->be_alloc) {
+ free_xenballooned_pages(xen_obj->num_pages,
+ xen_obj->pages);
+ gem_free_pages_array(xen_obj);
+ } else
+ drm_gem_put_pages(&xen_obj->base,
+ xen_obj->pages, true, false);
+ }
+ }
+ drm_gem_object_release(gem_obj);
+ kfree(xen_obj);
+}
+
+static struct page **gem_get_pages(struct drm_gem_object *gem_obj)
+{
+ struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
+
+ return xen_obj->pages;
+}
+
+static struct sg_table *gem_get_sg_table(struct drm_gem_object *gem_obj)
+{
+ struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
+
+ if (!xen_obj->pages)
+ return NULL;
+
+ return drm_prime_pages_to_sg(xen_obj->pages, xen_obj->num_pages);
+}
+
+static struct drm_gem_object *gem_import_sg_table(struct drm_device *dev,
+ struct dma_buf_attachment *attach, struct sg_table *sgt)
+{
+ struct xen_drm_front_drm_info *drm_info = dev->dev_private;
+ struct xen_gem_object *xen_obj;
+ size_t size;
+ int ret;
+
+ size = attach->dmabuf->size;
+ xen_obj = gem_create_obj(dev, size);
+ if (IS_ERR_OR_NULL(xen_obj))
+ return ERR_CAST(xen_obj);
+
+ ret = gem_alloc_pages_array(xen_obj, size);
+ if (ret < 0)
+ return ERR_PTR(ret);
+
+ xen_obj->sgt_imported = sgt;
+
+ ret = drm_prime_sg_to_page_addr_arrays(sgt, xen_obj->pages,
+ NULL, xen_obj->num_pages);
+ if (ret < 0)
+ return ERR_PTR(ret);
+
+	/*
+	 * N.B. Although we have an API to create a display buffer from an
+	 * sgt, we use the pages API, because we still need the pages for
+	 * GEM handling, e.g. for mapping.
+	 */
+ ret = drm_info->front_ops->dbuf_create_from_pages(
+ drm_info->front_info,
+ xen_drm_front_dbuf_to_cookie(&xen_obj->base),
+ 0, 0, 0, size, xen_obj->pages);
+ if (ret < 0)
+ return ERR_PTR(ret);
+
+ DRM_DEBUG("Imported buffer of size %zu with nents %u\n",
+ size, sgt->nents);
+
+ return &xen_obj->base;
+}
+
+static int gem_mmap_obj(struct xen_gem_object *xen_obj,
+ struct vm_area_struct *vma)
+{
+ unsigned long addr = vma->vm_start;
+ int i;
+
+ /*
+ * clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
+ * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
+ * the whole buffer.
+ */
+ vma->vm_flags &= ~VM_PFNMAP;
+ vma->vm_flags |= VM_MIXEDMAP;
+ vma->vm_pgoff = 0;
+ vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
+
+	/*
+	 * The vm_operations_struct.fault handler would normally insert
+	 * pages on first CPU access, but a GPU never touches the memory
+	 * from the CPU side, so no fault would ever be triggered. Insert
+	 * all pages now, so both CPU and GPU are happy.
+	 * FIXME: as all pages are inserted here the .fault handler can
+	 * never be called, so none is provided.
+	 */
+ for (i = 0; i < xen_obj->num_pages; i++) {
+ int ret;
+
+ ret = vm_insert_page(vma, addr, xen_obj->pages[i]);
+ if (ret < 0) {
+ DRM_ERROR("Failed to insert pages into vma: %d\n", ret);
+ return ret;
+ }
+
+ addr += PAGE_SIZE;
+ }
+ return 0;
+}
+
+static int gem_mmap(struct file *filp, struct vm_area_struct *vma)
+{
+ struct xen_gem_object *xen_obj;
+ struct drm_gem_object *gem_obj;
+ int ret;
+
+ ret = drm_gem_mmap(filp, vma);
+ if (ret < 0)
+ return ret;
+
+ gem_obj = vma->vm_private_data;
+ xen_obj = to_xen_gem_obj(gem_obj);
+ return gem_mmap_obj(xen_obj, vma);
+}
+
+static void *gem_prime_vmap(struct drm_gem_object *gem_obj)
+{
+ struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
+
+ if (!xen_obj->pages)
+ return NULL;
+
+ return vmap(xen_obj->pages, xen_obj->num_pages,
+ VM_MAP, pgprot_writecombine(PAGE_KERNEL));
+}
+
+static void gem_prime_vunmap(struct drm_gem_object *gem_obj, void *vaddr)
+{
+ vunmap(vaddr);
+}
+
+static int gem_prime_mmap(struct drm_gem_object *gem_obj,
+ struct vm_area_struct *vma)
+{
+ struct xen_gem_object *xen_obj;
+ int ret;
+
+ ret = drm_gem_mmap_obj(gem_obj, gem_obj->size, vma);
+ if (ret < 0)
+ return ret;
+
+ xen_obj = to_xen_gem_obj(gem_obj);
+ return gem_mmap_obj(xen_obj, vma);
+}
+
+static const struct xen_drm_front_gem_ops xen_drm_gem_ops = {
+ .free_object_unlocked = gem_free_object,
+ .prime_get_sg_table = gem_get_sg_table,
+ .prime_import_sg_table = gem_import_sg_table,
+
+ .prime_vmap = gem_prime_vmap,
+ .prime_vunmap = gem_prime_vunmap,
+ .prime_mmap = gem_prime_mmap,
+
+ .dumb_create = gem_dumb_create,
+
+ .mmap = gem_mmap,
+
+ .get_pages = gem_get_pages,
+};
+
+const struct xen_drm_front_gem_ops *xen_drm_front_gem_get_ops(void)
+{
+ return &xen_drm_gem_ops;
+}
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.h b/drivers/gpu/drm/xen/xen_drm_front_gem.h
new file mode 100644
index 000000000000..d1e1711cc3fc
--- /dev/null
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.h
@@ -0,0 +1,46 @@
+/*
+ * Xen para-virtual DRM device
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Copyright (C) 2016-2018 EPAM Systems Inc.
+ *
+ * Author: Oleksandr Andrushchenko <[email protected]>
+ */
+
+#ifndef __XEN_DRM_FRONT_GEM_H
+#define __XEN_DRM_FRONT_GEM_H
+
+#include <drm/drmP.h>
+
+struct xen_drm_front_gem_ops {
+ void (*free_object_unlocked)(struct drm_gem_object *obj);
+
+ struct sg_table *(*prime_get_sg_table)(struct drm_gem_object *obj);
+ struct drm_gem_object *(*prime_import_sg_table)(struct drm_device *dev,
+ struct dma_buf_attachment *attach,
+ struct sg_table *sgt);
+ void *(*prime_vmap)(struct drm_gem_object *obj);
+ void (*prime_vunmap)(struct drm_gem_object *obj, void *vaddr);
+ int (*prime_mmap)(struct drm_gem_object *obj,
+ struct vm_area_struct *vma);
+
+ int (*dumb_create)(struct drm_file *file_priv, struct drm_device *dev,
+ struct drm_mode_create_dumb *args);
+
+ int (*mmap)(struct file *filp, struct vm_area_struct *vma);
+
+ struct page **(*get_pages)(struct drm_gem_object *obj);
+};
+
+const struct xen_drm_front_gem_ops *xen_drm_front_gem_get_ops(void);
+
+#endif /* __XEN_DRM_FRONT_GEM_H */
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem_cma.c b/drivers/gpu/drm/xen/xen_drm_front_gem_cma.c
new file mode 100644
index 000000000000..5ffcbfa652d5
--- /dev/null
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem_cma.c
@@ -0,0 +1,93 @@
+/*
+ * Xen para-virtual DRM device
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Copyright (C) 2016-2018 EPAM Systems Inc.
+ *
+ * Author: Oleksandr Andrushchenko <[email protected]>
+ */
+
+#include <drm/drmP.h>
+#include <drm/drm_gem.h>
+#include <drm/drm_fb_cma_helper.h>
+#include <drm/drm_gem_cma_helper.h>
+
+#include "xen_drm_front.h"
+#include "xen_drm_front_drv.h"
+#include "xen_drm_front_gem.h"
+
+static struct drm_gem_object *gem_import_sg_table(struct drm_device *dev,
+ struct dma_buf_attachment *attach, struct sg_table *sgt)
+{
+ struct xen_drm_front_drm_info *drm_info = dev->dev_private;
+ struct drm_gem_object *gem_obj;
+ struct drm_gem_cma_object *cma_obj;
+ int ret;
+
+ gem_obj = drm_gem_cma_prime_import_sg_table(dev, attach, sgt);
+ if (IS_ERR_OR_NULL(gem_obj))
+ return gem_obj;
+
+ cma_obj = to_drm_gem_cma_obj(gem_obj);
+
+ ret = drm_info->front_ops->dbuf_create_from_sgt(
+ drm_info->front_info,
+ xen_drm_front_dbuf_to_cookie(gem_obj),
+ 0, 0, 0, gem_obj->size,
+ drm_gem_cma_prime_get_sg_table(gem_obj));
+ if (ret < 0)
+ return ERR_PTR(ret);
+
+ DRM_DEBUG("Imported CMA buffer of size %zu\n", gem_obj->size);
+
+ return gem_obj;
+}
+
+static int gem_dumb_create(struct drm_file *filp, struct drm_device *dev,
+ struct drm_mode_create_dumb *args)
+{
+ struct xen_drm_front_drm_info *drm_info = dev->dev_private;
+
+ if (drm_info->cfg->be_alloc) {
+ /* This use-case is not yet supported and probably won't be */
+ DRM_ERROR("Backend allocated buffers and CMA helpers are not supported at the same time\n");
+ return -EINVAL;
+ }
+
+ return drm_gem_cma_dumb_create(filp, dev, args);
+}
+
+static struct page **gem_get_pages(struct drm_gem_object *gem_obj)
+{
+ return NULL;
+}
+
+static const struct xen_drm_front_gem_ops xen_drm_front_gem_cma_ops = {
+ .free_object_unlocked = drm_gem_cma_free_object,
+ .prime_get_sg_table = drm_gem_cma_prime_get_sg_table,
+ .prime_import_sg_table = gem_import_sg_table,
+
+ .prime_vmap = drm_gem_cma_prime_vmap,
+ .prime_vunmap = drm_gem_cma_prime_vunmap,
+ .prime_mmap = drm_gem_cma_prime_mmap,
+
+ .dumb_create = gem_dumb_create,
+
+ .mmap = drm_gem_cma_mmap,
+
+ .get_pages = gem_get_pages,
+};
+
+const struct xen_drm_front_gem_ops *xen_drm_front_gem_get_ops(void)
+{
+ return &xen_drm_front_gem_cma_ops;
+}
--
2.7.4
From: Oleksandr Andrushchenko <[email protected]>
Read configuration values from XenStore according
to the xen/interface/io/displif.h protocol:
- read connector(s) configuration
- read buffer allocation mode (backend/frontend)
Signed-off-by: Oleksandr Andrushchenko <[email protected]>
---
drivers/gpu/drm/xen/Makefile | 3 +-
drivers/gpu/drm/xen/xen_drm_front.c | 9 ++++
drivers/gpu/drm/xen/xen_drm_front.h | 3 ++
drivers/gpu/drm/xen/xen_drm_front_cfg.c | 84 +++++++++++++++++++++++++++++++++
drivers/gpu/drm/xen/xen_drm_front_cfg.h | 45 ++++++++++++++++++
5 files changed, 143 insertions(+), 1 deletion(-)
create mode 100644 drivers/gpu/drm/xen/xen_drm_front_cfg.c
create mode 100644 drivers/gpu/drm/xen/xen_drm_front_cfg.h
diff --git a/drivers/gpu/drm/xen/Makefile b/drivers/gpu/drm/xen/Makefile
index 967074d348f6..0a2eae757f0c 100644
--- a/drivers/gpu/drm/xen/Makefile
+++ b/drivers/gpu/drm/xen/Makefile
@@ -1,5 +1,6 @@
# SPDX-License-Identifier: GPL-2.0
-drm_xen_front-objs := xen_drm_front.o
+drm_xen_front-objs := xen_drm_front.o \
+ xen_drm_front_cfg.o
obj-$(CONFIG_DRM_XEN_FRONTEND) += drm_xen_front.o
diff --git a/drivers/gpu/drm/xen/xen_drm_front.c b/drivers/gpu/drm/xen/xen_drm_front.c
index d0306f9d660d..0a90c474c7ce 100644
--- a/drivers/gpu/drm/xen/xen_drm_front.c
+++ b/drivers/gpu/drm/xen/xen_drm_front.c
@@ -32,6 +32,15 @@ static void xen_drv_remove_internal(struct xen_drm_front_info *front_info)
static int backend_on_initwait(struct xen_drm_front_info *front_info)
{
+ struct xen_drm_front_cfg *cfg = &front_info->cfg;
+ int ret;
+
+ cfg->front_info = front_info;
+ ret = xen_drm_front_cfg_card(front_info, cfg);
+ if (ret < 0)
+ return ret;
+
+	DRM_INFO("Have %d connector(s)\n", cfg->num_connectors);
return 0;
}
diff --git a/drivers/gpu/drm/xen/xen_drm_front.h b/drivers/gpu/drm/xen/xen_drm_front.h
index 8af46850f126..62b0d4e3e12b 100644
--- a/drivers/gpu/drm/xen/xen_drm_front.h
+++ b/drivers/gpu/drm/xen/xen_drm_front.h
@@ -19,8 +19,11 @@
#ifndef __XEN_DRM_FRONT_H_
#define __XEN_DRM_FRONT_H_
+#include "xen_drm_front_cfg.h"
+
struct xen_drm_front_info {
struct xenbus_device *xb_dev;
+ struct xen_drm_front_cfg cfg;
};
#endif /* __XEN_DRM_FRONT_H_ */
diff --git a/drivers/gpu/drm/xen/xen_drm_front_cfg.c b/drivers/gpu/drm/xen/xen_drm_front_cfg.c
new file mode 100644
index 000000000000..58fe50bc52a5
--- /dev/null
+++ b/drivers/gpu/drm/xen/xen_drm_front_cfg.c
@@ -0,0 +1,84 @@
+/*
+ * Xen para-virtual DRM device
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Copyright (C) 2016-2018 EPAM Systems Inc.
+ *
+ * Author: Oleksandr Andrushchenko <[email protected]>
+ */
+
+#include <drm/drmP.h>
+
+#include <linux/device.h>
+
+#include <xen/interface/io/displif.h>
+#include <xen/xenbus.h>
+
+#include "xen_drm_front.h"
+#include "xen_drm_front_cfg.h"
+
+static int cfg_connector(struct xen_drm_front_info *front_info,
+ struct xen_drm_front_cfg_connector *connector,
+ const char *path, int index)
+{
+ char *connector_path;
+
+ connector_path = devm_kasprintf(&front_info->xb_dev->dev,
+ GFP_KERNEL, "%s/%d", path, index);
+ if (!connector_path)
+ return -ENOMEM;
+
+ connector->xenstore_path = connector_path;
+ if (xenbus_scanf(XBT_NIL, connector_path, XENDISPL_FIELD_RESOLUTION,
+ "%d" XENDISPL_RESOLUTION_SEPARATOR "%d",
+ &connector->width, &connector->height) < 0) {
+ /* either no entry configured or wrong resolution set */
+ connector->width = 0;
+ connector->height = 0;
+ return -EINVAL;
+ }
+
+ DRM_INFO("Connector %s: resolution %dx%d\n",
+ connector_path, connector->width, connector->height);
+ return 0;
+}
+
+int xen_drm_front_cfg_card(struct xen_drm_front_info *front_info,
+ struct xen_drm_front_cfg *cfg)
+{
+ struct xenbus_device *xb_dev = front_info->xb_dev;
+ int ret, i;
+
+ if (xenbus_read_unsigned(front_info->xb_dev->nodename,
+ XENDISPL_FIELD_BE_ALLOC, 0)) {
+ DRM_INFO("Backend can provide display buffers\n");
+ cfg->be_alloc = true;
+ }
+
+ cfg->num_connectors = 0;
+ for (i = 0; i < ARRAY_SIZE(cfg->connectors); i++) {
+ ret = cfg_connector(front_info,
+ &cfg->connectors[i], xb_dev->nodename, i);
+ if (ret < 0)
+ break;
+ cfg->num_connectors++;
+ }
+
+ if (!cfg->num_connectors) {
+ DRM_ERROR("No connector(s) configured at %s\n",
+ xb_dev->nodename);
+ return -ENODEV;
+ }
+
+ return 0;
+}
+
diff --git a/drivers/gpu/drm/xen/xen_drm_front_cfg.h b/drivers/gpu/drm/xen/xen_drm_front_cfg.h
new file mode 100644
index 000000000000..1ac4948a13e5
--- /dev/null
+++ b/drivers/gpu/drm/xen/xen_drm_front_cfg.h
@@ -0,0 +1,45 @@
+/*
+ * Xen para-virtual DRM device
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Copyright (C) 2016-2018 EPAM Systems Inc.
+ *
+ * Author: Oleksandr Andrushchenko <[email protected]>
+ */
+
+#ifndef __XEN_DRM_FRONT_CFG_H_
+#define __XEN_DRM_FRONT_CFG_H_
+
+#include <linux/types.h>
+
+#define XEN_DRM_FRONT_MAX_CRTCS 4
+
+struct xen_drm_front_cfg_connector {
+ int width;
+ int height;
+ char *xenstore_path;
+};
+
+struct xen_drm_front_cfg {
+ struct xen_drm_front_info *front_info;
+ /* number of connectors in this configuration */
+ int num_connectors;
+ /* connector configurations */
+ struct xen_drm_front_cfg_connector connectors[XEN_DRM_FRONT_MAX_CRTCS];
+ /* set if dumb buffers are allocated externally on backend side */
+ bool be_alloc;
+};
+
+int xen_drm_front_cfg_card(struct xen_drm_front_info *front_info,
+ struct xen_drm_front_cfg *cfg);
+
+#endif /* __XEN_DRM_FRONT_CFG_H_ */
--
2.7.4
From: Oleksandr Andrushchenko <[email protected]>
Introduce skeleton of the para-virtualized Xen display
frontend driver. This patch only adds required
essential stubs.
Signed-off-by: Oleksandr Andrushchenko <[email protected]>
---
drivers/gpu/drm/Kconfig | 2 +
drivers/gpu/drm/Makefile | 1 +
drivers/gpu/drm/xen/Kconfig | 17 ++++++++
drivers/gpu/drm/xen/Makefile | 5 +++
drivers/gpu/drm/xen/xen_drm_front.c | 83 +++++++++++++++++++++++++++++++++++++
5 files changed, 108 insertions(+)
create mode 100644 drivers/gpu/drm/xen/Kconfig
create mode 100644 drivers/gpu/drm/xen/Makefile
create mode 100644 drivers/gpu/drm/xen/xen_drm_front.c
diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index deeefa7a1773..757825ac60df 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -289,6 +289,8 @@ source "drivers/gpu/drm/pl111/Kconfig"
source "drivers/gpu/drm/tve200/Kconfig"
+source "drivers/gpu/drm/xen/Kconfig"
+
# Keep legacy drivers last
menuconfig DRM_LEGACY
diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
index 50093ff4479b..9d66657ea117 100644
--- a/drivers/gpu/drm/Makefile
+++ b/drivers/gpu/drm/Makefile
@@ -103,3 +103,4 @@ obj-$(CONFIG_DRM_MXSFB) += mxsfb/
obj-$(CONFIG_DRM_TINYDRM) += tinydrm/
obj-$(CONFIG_DRM_PL111) += pl111/
obj-$(CONFIG_DRM_TVE200) += tve200/
+obj-$(CONFIG_DRM_XEN) += xen/
diff --git a/drivers/gpu/drm/xen/Kconfig b/drivers/gpu/drm/xen/Kconfig
new file mode 100644
index 000000000000..4cca160782ab
--- /dev/null
+++ b/drivers/gpu/drm/xen/Kconfig
@@ -0,0 +1,17 @@
+config DRM_XEN
+ bool "DRM Support for Xen guest OS"
+ depends on XEN
+ help
+ Choose this option if you want to enable DRM support
+ for Xen.
+
+config DRM_XEN_FRONTEND
+ tristate "Para-virtualized frontend driver for Xen guest OS"
+ depends on DRM_XEN
+ depends on DRM
+ select DRM_KMS_HELPER
+ select VIDEOMODE_HELPERS
+ select XEN_XENBUS_FRONTEND
+ help
+ Choose this option if you want to enable a para-virtualized
+ frontend DRM/KMS driver for Xen guest OSes.
diff --git a/drivers/gpu/drm/xen/Makefile b/drivers/gpu/drm/xen/Makefile
new file mode 100644
index 000000000000..967074d348f6
--- /dev/null
+++ b/drivers/gpu/drm/xen/Makefile
@@ -0,0 +1,5 @@
+# SPDX-License-Identifier: GPL-2.0
+
+drm_xen_front-objs := xen_drm_front.o
+
+obj-$(CONFIG_DRM_XEN_FRONTEND) += drm_xen_front.o
diff --git a/drivers/gpu/drm/xen/xen_drm_front.c b/drivers/gpu/drm/xen/xen_drm_front.c
new file mode 100644
index 000000000000..fd372fb464a1
--- /dev/null
+++ b/drivers/gpu/drm/xen/xen_drm_front.c
@@ -0,0 +1,83 @@
+/*
+ * Xen para-virtual DRM device
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Copyright (C) 2016-2018 EPAM Systems Inc.
+ *
+ * Author: Oleksandr Andrushchenko <[email protected]>
+ */
+
+#include <drm/drmP.h>
+
+#include <xen/platform_pci.h>
+#include <xen/xen.h>
+#include <xen/xenbus.h>
+
+#include <xen/interface/io/displif.h>
+
+static void backend_on_changed(struct xenbus_device *xb_dev,
+ enum xenbus_state backend_state)
+{
+}
+
+static int xen_drv_probe(struct xenbus_device *xb_dev,
+ const struct xenbus_device_id *id)
+{
+ return 0;
+}
+
+static int xen_drv_remove(struct xenbus_device *dev)
+{
+ return 0;
+}
+
+static const struct xenbus_device_id xen_drv_ids[] = {
+ { XENDISPL_DRIVER_NAME },
+ { "" }
+};
+
+static struct xenbus_driver xen_driver = {
+ .ids = xen_drv_ids,
+ .probe = xen_drv_probe,
+ .remove = xen_drv_remove,
+ .otherend_changed = backend_on_changed,
+};
+
+static int __init xen_drv_init(void)
+{
+ if (!xen_domain())
+ return -ENODEV;
+
+ if (xen_initial_domain()) {
+ DRM_ERROR(XENDISPL_DRIVER_NAME " cannot run in initial domain\n");
+ return -ENODEV;
+ }
+
+ if (!xen_has_pv_devices())
+ return -ENODEV;
+
+ DRM_INFO("Registering XEN PV " XENDISPL_DRIVER_NAME "\n");
+ return xenbus_register_frontend(&xen_driver);
+}
+
+static void __exit xen_drv_cleanup(void)
+{
+ DRM_INFO("Unregistering XEN PV " XENDISPL_DRIVER_NAME "\n");
+ xenbus_unregister_driver(&xen_driver);
+}
+
+module_init(xen_drv_init);
+module_exit(xen_drv_cleanup);
+
+MODULE_DESCRIPTION("Xen para-virtualized display device frontend");
+MODULE_LICENSE("GPL");
+MODULE_ALIAS("xen:"XENDISPL_DRIVER_NAME);
--
2.7.4
On 21/02/18 09:03, Oleksandr Andrushchenko wrote:
> From: Oleksandr Andrushchenko <[email protected]>
>
> Introduce skeleton of the para-virtualized Xen display
> frontend driver. This patch only adds required
> essential stubs.
>
> Signed-off-by: Oleksandr Andrushchenko <[email protected]>
> ---
> [...]
> diff --git a/drivers/gpu/drm/xen/xen_drm_front.c b/drivers/gpu/drm/xen/xen_drm_front.c
> new file mode 100644
> index 000000000000..fd372fb464a1
> --- /dev/null
> +++ b/drivers/gpu/drm/xen/xen_drm_front.c
> @@ -0,0 +1,83 @@
> +/*
> + * Xen para-virtual DRM device
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
Use SPDX identifier instead (same applies for all other new
sources):
// SPDX-License-Identifier: GPL-2.0
> + *
> + * Copyright (C) 2016-2018 EPAM Systems Inc.
> + *
> + * Author: Oleksandr Andrushchenko <[email protected]>
> + */
> +
> +#include <drm/drmP.h>
> +
> +#include <xen/platform_pci.h>
> +#include <xen/xen.h>
> +#include <xen/xenbus.h>
> +
> +#include <xen/interface/io/displif.h>
> +
> +static void backend_on_changed(struct xenbus_device *xb_dev,
> + enum xenbus_state backend_state)
> +{
> +}
> +
> +static int xen_drv_probe(struct xenbus_device *xb_dev,
> + const struct xenbus_device_id *id)
> +{
> + return 0;
> +}
> +
> +static int xen_drv_remove(struct xenbus_device *dev)
> +{
> + return 0;
> +}
> +
> +static const struct xenbus_device_id xen_drv_ids[] = {
> + { XENDISPL_DRIVER_NAME },
> + { "" }
> +};
> +
> +static struct xenbus_driver xen_driver = {
> + .ids = xen_drv_ids,
> + .probe = xen_drv_probe,
> + .remove = xen_drv_remove,
> + .otherend_changed = backend_on_changed,
> +};
> +
> +static int __init xen_drv_init(void)
> +{
> + if (!xen_domain())
> + return -ENODEV;
> +
> + if (xen_initial_domain()) {
> + DRM_ERROR(XENDISPL_DRIVER_NAME " cannot run in initial domain\n");
> + return -ENODEV;
> + }
Why not? Wouldn't that be possible in case of the backend living in a
driver domain?
Juergen
On 02/21/2018 10:19 AM, Juergen Gross wrote:
> On 21/02/18 09:03, Oleksandr Andrushchenko wrote:
>> From: Oleksandr Andrushchenko <[email protected]>
>>
>> Introduce skeleton of the para-virtualized Xen display
>> frontend driver. This patch only adds required
>> essential stubs.
>>
>> Signed-off-by: Oleksandr Andrushchenko <[email protected]>
>> ---
>> [...]
>> diff --git a/drivers/gpu/drm/xen/xen_drm_front.c b/drivers/gpu/drm/xen/xen_drm_front.c
>> new file mode 100644
>> index 000000000000..fd372fb464a1
>> --- /dev/null
>> +++ b/drivers/gpu/drm/xen/xen_drm_front.c
>> @@ -0,0 +1,83 @@
>> +/*
>> + * Xen para-virtual DRM device
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License as published by
>> + * the Free Software Foundation; either version 2 of the License, or
>> + * (at your option) any later version.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
>> + * GNU General Public License for more details.
> Use SPDX identifier instead (same applies for all other new
> sources):
>
> // SPDX-License-Identifier: GPL-2.0
Will update, thank you
>> [...]
>> +static int __init xen_drv_init(void)
>> +{
>> + if (!xen_domain())
>> + return -ENODEV;
>> +
>> + if (xen_initial_domain()) {
>> + DRM_ERROR(XENDISPL_DRIVER_NAME " cannot run in initial domain\n");
>> + return -ENODEV;
>> + }
> Why not? Wouldn't that be possible in case of the backend living in a
> driver domain?
It is possible (and in my use-case the backend indeed runs
in a driver domain). I was just not sure if it is a
good idea to allow that. If you think this is OK, then
I'll remove this check.
>
> Juergen
Thank you,
Oleksandr
On 02/21/2018 10:23 AM, Juergen Gross wrote:
> On 21/02/18 09:03, Oleksandr Andrushchenko wrote:
>> From: Oleksandr Andrushchenko <[email protected]>
>>
>> Initial handling for Xen bus states: implement
>> Xen bus state machine for the frontend driver according to
>> the state diagram and recovery flow from display para-virtualized
>> protocol: xen/interface/io/displif.h.
>>
>> Signed-off-by: Oleksandr Andrushchenko <[email protected]>
>> ---
>> drivers/gpu/drm/xen/xen_drm_front.c | 124 +++++++++++++++++++++++++++++++++++-
>> drivers/gpu/drm/xen/xen_drm_front.h | 26 ++++++++
>> 2 files changed, 149 insertions(+), 1 deletion(-)
>> create mode 100644 drivers/gpu/drm/xen/xen_drm_front.h
>>
>> diff --git a/drivers/gpu/drm/xen/xen_drm_front.c b/drivers/gpu/drm/xen/xen_drm_front.c
>> index fd372fb464a1..d0306f9d660d 100644
>> --- a/drivers/gpu/drm/xen/xen_drm_front.c
>> +++ b/drivers/gpu/drm/xen/xen_drm_front.c
>> @@ -24,19 +24,141 @@
>>
>> #include <xen/interface/io/displif.h>
>>
>> +#include "xen_drm_front.h"
>> +
>> +static void xen_drv_remove_internal(struct xen_drm_front_info *front_info)
>> +{
>> +}
>> +
>> +static int backend_on_initwait(struct xen_drm_front_info *front_info)
>> +{
>> + return 0;
>> +}
>> +
>> +static int backend_on_connected(struct xen_drm_front_info *front_info)
>> +{
>> + return 0;
>> +}
>> +
>> +static void backend_on_disconnected(struct xen_drm_front_info *front_info)
>> +{
>> + xenbus_switch_state(front_info->xb_dev, XenbusStateInitialising);
>> +}
>> +
>> static void backend_on_changed(struct xenbus_device *xb_dev,
>> enum xenbus_state backend_state)
>> {
>> + struct xen_drm_front_info *front_info = dev_get_drvdata(&xb_dev->dev);
>> + int ret;
>> +
>> + DRM_DEBUG("Backend state is %s, front is %s\n",
>> + xenbus_strstate(backend_state),
>> + xenbus_strstate(xb_dev->state));
>> +
>> + switch (backend_state) {
>> + case XenbusStateReconfiguring:
>> + /* fall through */
>> + case XenbusStateReconfigured:
>> + /* fall through */
>> + case XenbusStateInitialised:
>> + break;
>> +
>> + case XenbusStateInitialising:
>> + /* recovering after backend unexpected closure */
>> + backend_on_disconnected(front_info);
>> + break;
>> +
>> + case XenbusStateInitWait:
>> + /* recovering after backend unexpected closure */
>> + backend_on_disconnected(front_info);
>> + if (xb_dev->state != XenbusStateInitialising)
>> + break;
>> +
>> + ret = backend_on_initwait(front_info);
>> + if (ret < 0)
>> + xenbus_dev_fatal(xb_dev, ret, "initializing frontend");
>> + else
>> + xenbus_switch_state(xb_dev, XenbusStateInitialised);
>> + break;
>> +
>> + case XenbusStateConnected:
>> + if (xb_dev->state != XenbusStateInitialised)
>> + break;
>> +
>> + ret = backend_on_connected(front_info);
>> + if (ret < 0)
>> + xenbus_dev_fatal(xb_dev, ret, "initializing DRM driver");
>> + else
>> + xenbus_switch_state(xb_dev, XenbusStateConnected);
>> + break;
>> +
>> + case XenbusStateClosing:
>> + /*
>> + * in this state backend starts freeing resources,
>> + * so let it go into closed state, so we can also
>> + * remove ours
>> + */
>> + break;
>> +
>> + case XenbusStateUnknown:
>> + /* fall through */
>> + case XenbusStateClosed:
>> + if (xb_dev->state == XenbusStateClosed)
>> + break;
>> +
>> + backend_on_disconnected(front_info);
>> + break;
>> + }
>> }
>>
>> static int xen_drv_probe(struct xenbus_device *xb_dev,
>> const struct xenbus_device_id *id)
>> {
>> - return 0;
>> + struct xen_drm_front_info *front_info;
>> +
>> + front_info = devm_kzalloc(&xb_dev->dev,
>> + sizeof(*front_info), GFP_KERNEL);
>> + if (!front_info) {
>> + xenbus_dev_fatal(xb_dev, -ENOMEM, "allocating device memory");
> No need for message in case of allocation failure: this is
> handled in memory allocation already.
Will remove, thank you
>
> Juergen
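Following this suggestion, the reworked probe path could drop the message and simply propagate -ENOMEM. A minimal sketch (the front_info->xb_dev assignment and drvdata wiring are assumptions based on the fragments quoted in this thread):

```c
static int xen_drv_probe(struct xenbus_device *xb_dev,
		const struct xenbus_device_id *id)
{
	struct xen_drm_front_info *front_info;

	front_info = devm_kzalloc(&xb_dev->dev,
			sizeof(*front_info), GFP_KERNEL);
	if (!front_info)
		/* the allocator has already logged the failure */
		return -ENOMEM;

	front_info->xb_dev = xb_dev;
	dev_set_drvdata(&xb_dev->dev, front_info);
	return 0;
}
```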
On 02/21/2018 11:09 AM, Juergen Gross wrote:
> On 21/02/18 09:47, Oleksandr Andrushchenko wrote:
>> On 02/21/2018 10:19 AM, Juergen Gross wrote:
>>> On 21/02/18 09:03, Oleksandr Andrushchenko wrote:
>>>> From: Oleksandr Andrushchenko <[email protected]>
>>>>
>>>> Introduce skeleton of the para-virtualized Xen display
>>>> frontend driver. This patch only adds required
>>>> essential stubs.
>>>>
>>>> Signed-off-by: Oleksandr Andrushchenko <[email protected]>
>>>> ---
>>>> drivers/gpu/drm/Kconfig | 2 +
>>>> drivers/gpu/drm/Makefile | 1 +
>>>> drivers/gpu/drm/xen/Kconfig | 17 ++++++++
>>>> drivers/gpu/drm/xen/Makefile | 5 +++
>>>> drivers/gpu/drm/xen/xen_drm_front.c | 83 +++++++++++++++++++++++++++++++++++++
>>>> 5 files changed, 108 insertions(+)
>>>> create mode 100644 drivers/gpu/drm/xen/Kconfig
>>>> create mode 100644 drivers/gpu/drm/xen/Makefile
>>>> create mode 100644 drivers/gpu/drm/xen/xen_drm_front.c
>>>>
>>>> diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
>>>> index deeefa7a1773..757825ac60df 100644
>>>> --- a/drivers/gpu/drm/Kconfig
>>>> +++ b/drivers/gpu/drm/Kconfig
>>>> @@ -289,6 +289,8 @@ source "drivers/gpu/drm/pl111/Kconfig"
>>>> source "drivers/gpu/drm/tve200/Kconfig"
>>>> +source "drivers/gpu/drm/xen/Kconfig"
>>>> +
>>>> # Keep legacy drivers last
>>>> menuconfig DRM_LEGACY
>>>> diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
>>>> index 50093ff4479b..9d66657ea117 100644
>>>> --- a/drivers/gpu/drm/Makefile
>>>> +++ b/drivers/gpu/drm/Makefile
>>>> @@ -103,3 +103,4 @@ obj-$(CONFIG_DRM_MXSFB) += mxsfb/
>>>> obj-$(CONFIG_DRM_TINYDRM) += tinydrm/
>>>> obj-$(CONFIG_DRM_PL111) += pl111/
>>>> obj-$(CONFIG_DRM_TVE200) += tve200/
>>>> +obj-$(CONFIG_DRM_XEN) += xen/
>>>> diff --git a/drivers/gpu/drm/xen/Kconfig b/drivers/gpu/drm/xen/Kconfig
>>>> new file mode 100644
>>>> index 000000000000..4cca160782ab
>>>> --- /dev/null
>>>> +++ b/drivers/gpu/drm/xen/Kconfig
>>>> @@ -0,0 +1,17 @@
>>>> +config DRM_XEN
>>>> + bool "DRM Support for Xen guest OS"
>>>> + depends on XEN
>>>> + help
>>>> + Choose this option if you want to enable DRM support
>>>> + for Xen.
>>>> +
>>>> +config DRM_XEN_FRONTEND
>>>> + tristate "Para-virtualized frontend driver for Xen guest OS"
>>>> + depends on DRM_XEN
>>>> + depends on DRM
>>>> + select DRM_KMS_HELPER
>>>> + select VIDEOMODE_HELPERS
>>>> + select XEN_XENBUS_FRONTEND
>>>> + help
>>>> + Choose this option if you want to enable a para-virtualized
>>>> + frontend DRM/KMS driver for Xen guest OSes.
>>>> diff --git a/drivers/gpu/drm/xen/Makefile b/drivers/gpu/drm/xen/Makefile
>>>> new file mode 100644
>>>> index 000000000000..967074d348f6
>>>> --- /dev/null
>>>> +++ b/drivers/gpu/drm/xen/Makefile
>>>> @@ -0,0 +1,5 @@
>>>> +# SPDX-License-Identifier: GPL-2.0
>>>> +
>>>> +drm_xen_front-objs := xen_drm_front.o
>>>> +
>>>> +obj-$(CONFIG_DRM_XEN_FRONTEND) += drm_xen_front.o
>>>> diff --git a/drivers/gpu/drm/xen/xen_drm_front.c b/drivers/gpu/drm/xen/xen_drm_front.c
>>>> new file mode 100644
>>>> index 000000000000..fd372fb464a1
>>>> --- /dev/null
>>>> +++ b/drivers/gpu/drm/xen/xen_drm_front.c
>>>> @@ -0,0 +1,83 @@
>>>> +/*
>>>> + * Xen para-virtual DRM device
>>>> + *
>>>> + * This program is free software; you can redistribute it and/or modify
>>>> + * it under the terms of the GNU General Public License as published by
>>>> + * the Free Software Foundation; either version 2 of the License, or
>>>> + * (at your option) any later version.
>>>> + *
>>>> + * This program is distributed in the hope that it will be useful,
>>>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>>>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
>>>> + * GNU General Public License for more details.
>>> Use SPDX identifier instead (same applies for all other new
>>> sources):
>>>
>>> // SPDX-License-Identifier: GPL-2.0
>> Will update, thank you
>>>> + *
>>>> + * Copyright (C) 2016-2018 EPAM Systems Inc.
>>>> + *
>>>> + * Author: Oleksandr Andrushchenko <[email protected]>
>>>> + */
>>>> +
>>>> +#include <drm/drmP.h>
>>>> +
>>>> +#include <xen/platform_pci.h>
>>>> +#include <xen/xen.h>
>>>> +#include <xen/xenbus.h>
>>>> +
>>>> +#include <xen/interface/io/displif.h>
>>>> +
>>>> +static void backend_on_changed(struct xenbus_device *xb_dev,
>>>> + enum xenbus_state backend_state)
>>>> +{
>>>> +}
>>>> +
>>>> +static int xen_drv_probe(struct xenbus_device *xb_dev,
>>>> + const struct xenbus_device_id *id)
>>>> +{
>>>> + return 0;
>>>> +}
>>>> +
>>>> +static int xen_drv_remove(struct xenbus_device *dev)
>>>> +{
>>>> + return 0;
>>>> +}
>>>> +
>>>> +static const struct xenbus_device_id xen_drv_ids[] = {
>>>> + { XENDISPL_DRIVER_NAME },
>>>> + { "" }
>>>> +};
>>>> +
>>>> +static struct xenbus_driver xen_driver = {
>>>> + .ids = xen_drv_ids,
>>>> + .probe = xen_drv_probe,
>>>> + .remove = xen_drv_remove,
>>>> + .otherend_changed = backend_on_changed,
>>>> +};
>>>> +
>>>> +static int __init xen_drv_init(void)
>>>> +{
>>>> + if (!xen_domain())
>>>> + return -ENODEV;
>>>> +
>>>> + if (xen_initial_domain()) {
>>>> + DRM_ERROR(XENDISPL_DRIVER_NAME " cannot run in initial domain\n");
>>>> + return -ENODEV;
>>>> + }
>>> Why not? Wouldn't that be possible in case of the backend living in a
>>> driver domain?
>> It is possible (and in my use-case backend indeed runs in
>> a driver domain). I was just not sure if it is really a
>> good idea to allow that. If you think this is ok, then
>> I'll remove this check
> I don't think the driver should decide that. This would be the job of
> Xen tools IMO.
Agree, will remove
>
> Juergen
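With the initial-domain check dropped as agreed, the module init could be sketched as below (the xenbus_register_frontend() call is an assumption based on how other xenbus frontends register themselves):

```c
static int __init xen_drv_init(void)
{
	if (!xen_domain())
		return -ENODEV;

	/*
	 * No xen_initial_domain() check here: whether the frontend may
	 * run in a given domain is a policy decision for the Xen tools.
	 */
	return xenbus_register_frontend(&xen_driver);
}
```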
On 02/21/2018 11:17 AM, Roger Pau Monné wrote:
> On Wed, Feb 21, 2018 at 10:03:34AM +0200, Oleksandr Andrushchenko wrote:
>> From: Oleksandr Andrushchenko <[email protected]>
>>
>> Introduce skeleton of the para-virtualized Xen display
>> frontend driver. This patch only adds required
>> essential stubs.
>>
>> Signed-off-by: Oleksandr Andrushchenko <[email protected]>
>> ---
>> drivers/gpu/drm/Kconfig | 2 +
>> drivers/gpu/drm/Makefile | 1 +
>> drivers/gpu/drm/xen/Kconfig | 17 ++++++++
>> drivers/gpu/drm/xen/Makefile | 5 +++
>> drivers/gpu/drm/xen/xen_drm_front.c | 83 +++++++++++++++++++++++++++++++++++++
>> 5 files changed, 108 insertions(+)
>> create mode 100644 drivers/gpu/drm/xen/Kconfig
>> create mode 100644 drivers/gpu/drm/xen/Makefile
>> create mode 100644 drivers/gpu/drm/xen/xen_drm_front.c
>>
>> diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
>> index deeefa7a1773..757825ac60df 100644
>> --- a/drivers/gpu/drm/Kconfig
>> +++ b/drivers/gpu/drm/Kconfig
>> @@ -289,6 +289,8 @@ source "drivers/gpu/drm/pl111/Kconfig"
>>
>> source "drivers/gpu/drm/tve200/Kconfig"
>>
>> +source "drivers/gpu/drm/xen/Kconfig"
>> +
>> # Keep legacy drivers last
>>
>> menuconfig DRM_LEGACY
>> diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
>> index 50093ff4479b..9d66657ea117 100644
>> --- a/drivers/gpu/drm/Makefile
>> +++ b/drivers/gpu/drm/Makefile
>> @@ -103,3 +103,4 @@ obj-$(CONFIG_DRM_MXSFB) += mxsfb/
>> obj-$(CONFIG_DRM_TINYDRM) += tinydrm/
>> obj-$(CONFIG_DRM_PL111) += pl111/
>> obj-$(CONFIG_DRM_TVE200) += tve200/
>> +obj-$(CONFIG_DRM_XEN) += xen/
>> diff --git a/drivers/gpu/drm/xen/Kconfig b/drivers/gpu/drm/xen/Kconfig
>> new file mode 100644
>> index 000000000000..4cca160782ab
>> --- /dev/null
>> +++ b/drivers/gpu/drm/xen/Kconfig
>> @@ -0,0 +1,17 @@
>> +config DRM_XEN
>> + bool "DRM Support for Xen guest OS"
>> + depends on XEN
>> + help
>> + Choose this option if you want to enable DRM support
>> + for Xen.
>> +
>> +config DRM_XEN_FRONTEND
>> + tristate "Para-virtualized frontend driver for Xen guest OS"
>> + depends on DRM_XEN
>> + depends on DRM
>> + select DRM_KMS_HELPER
>> + select VIDEOMODE_HELPERS
>> + select XEN_XENBUS_FRONTEND
>> + help
>> + Choose this option if you want to enable a para-virtualized
>> + frontend DRM/KMS driver for Xen guest OSes.
>> diff --git a/drivers/gpu/drm/xen/Makefile b/drivers/gpu/drm/xen/Makefile
>> new file mode 100644
>> index 000000000000..967074d348f6
>> --- /dev/null
>> +++ b/drivers/gpu/drm/xen/Makefile
>> @@ -0,0 +1,5 @@
>> +# SPDX-License-Identifier: GPL-2.0
>> +
>> +drm_xen_front-objs := xen_drm_front.o
>> +
>> +obj-$(CONFIG_DRM_XEN_FRONTEND) += drm_xen_front.o
>> diff --git a/drivers/gpu/drm/xen/xen_drm_front.c b/drivers/gpu/drm/xen/xen_drm_front.c
>> new file mode 100644
>> index 000000000000..fd372fb464a1
>> --- /dev/null
>> +++ b/drivers/gpu/drm/xen/xen_drm_front.c
>> @@ -0,0 +1,83 @@
>> +/*
>> + * Xen para-virtual DRM device
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License as published by
>> + * the Free Software Foundation; either version 2 of the License, or
>> + * (at your option) any later version.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
>> + * GNU General Public License for more details.
> Most Xen drivers in Linux use a dual GPL/BSD license, so that they can
> be imported into other non GPL OSes:
>
> This program is free software; you can redistribute it and/or
> modify it under the terms of the GNU General Public License version 2
> as published by the Free Software Foundation; or, when distributed
> separately from the Linux kernel or incorporated into other
> software packages, subject to the following license:
>
> Permission is hereby granted, free of charge, to any person obtaining a copy
> of this source file (the "Software"), to deal in the Software without
> restriction, including without limitation the rights to use, copy, modify,
> merge, publish, distribute, sublicense, and/or sell copies of the Software,
> and to permit persons to whom the Software is furnished to do so, subject to
> the following conditions:
>
> The above copyright notice and this permission notice shall be included in
> all copies or substantial portions of the Software.
>
> THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> IN THE SOFTWARE.
>
> IMO it would be good to release this driver under the same license, so
> it can be incorporated into other OSes.
I am in no way an expert in licensing, but the above seems to be
/* SPDX-License-Identifier: (GPL-2.0 OR MIT) */
At least this is what I see at [1] for MIT.
Could you please tell me which license(s), as listed at [1],
would be appropriate for Xen drivers in terms of how they are
expected to appear in the kernel code, e.g. the expected
SPDX-License-Identifier?
> Thanks, Roger.
Thank you,
Oleksandr
[1] https://spdx.org/licenses/
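If the dual GPL/MIT licensing is chosen, the per-file boilerplate could shrink to the SPDX form sketched below (the exact identifier string is an assumption and should be confirmed against the kernel's license-rules documentation):

```c
// SPDX-License-Identifier: (GPL-2.0 OR MIT)
/*
 * Xen para-virtual DRM device
 *
 * Copyright (C) 2016-2018 EPAM Systems Inc.
 *
 * Author: Oleksandr Andrushchenko <[email protected]>
 */
```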
On 02/21/2018 03:03 AM, Oleksandr Andrushchenko wrote:
> +static struct xenbus_driver xen_driver = {
> + .ids = xen_drv_ids,
> + .probe = xen_drv_probe,
> + .remove = xen_drv_remove,
> + .otherend_changed = backend_on_changed,
What does "_on_" stand for?
-boris
On 02/21/2018 03:03 AM, Oleksandr Andrushchenko wrote:
> +
> +static int cfg_connector(struct xen_drm_front_info *front_info,
> + struct xen_drm_front_cfg_connector *connector,
> + const char *path, int index)
> +{
> + char *connector_path;
> +
> + connector_path = devm_kasprintf(&front_info->xb_dev->dev,
> + GFP_KERNEL, "%s/%d", path, index);
> + if (!connector_path)
> + return -ENOMEM;
> +
> + connector->xenstore_path = connector_path;
> + if (xenbus_scanf(XBT_NIL, connector_path, XENDISPL_FIELD_RESOLUTION,
> + "%d" XENDISPL_RESOLUTION_SEPARATOR "%d",
> + &connector->width, &connector->height) < 0) {
> + /* either no entry configured or wrong resolution set */
> + connector->width = 0;
> + connector->height = 0;
Do you also need to set connector->xenstore_path to NULL? Or maybe just
set it after xenbus_scanf() call.
-boris
On 02/21/2018 03:03 AM, Oleksandr Andrushchenko wrote:
> +
> +static irqreturn_t evtchnl_interrupt_ctrl(int irq, void *dev_id)
> +{
> + struct xen_drm_front_evtchnl *evtchnl = dev_id;
> + struct xen_drm_front_info *front_info = evtchnl->front_info;
> + struct xendispl_resp *resp;
> + RING_IDX i, rp;
> + unsigned long flags;
> +
> + spin_lock_irqsave(&front_info->io_lock, flags);
> +
> + if (unlikely(evtchnl->state != EVTCHNL_STATE_CONNECTED))
> + goto out;
Do you need to check the state under lock? (in other routines too).
...
> +
> +static void evtchnl_free(struct xen_drm_front_info *front_info,
> + struct xen_drm_front_evtchnl *evtchnl)
> +{
> + unsigned long page = 0;
> +
> + if (evtchnl->type == EVTCHNL_TYPE_REQ)
> + page = (unsigned long)evtchnl->u.req.ring.sring;
> + else if (evtchnl->type == EVTCHNL_TYPE_EVT)
> + page = (unsigned long)evtchnl->u.evt.page;
> + if (!page)
> + return;
> +
> + evtchnl->state = EVTCHNL_STATE_DISCONNECTED;
> +
> + if (evtchnl->type == EVTCHNL_TYPE_REQ) {
> + /* release all who still waits for response if any */
> + evtchnl->u.req.resp_status = -EIO;
> + complete_all(&evtchnl->u.req.completion);
> + }
> +
> + if (evtchnl->irq)
> + unbind_from_irqhandler(evtchnl->irq, evtchnl);
> +
> + if (evtchnl->port)
> + xenbus_free_evtchn(front_info->xb_dev, evtchnl->port);
> +
> + /* end access and free the page */
> + if (evtchnl->gref != GRANT_INVALID_REF)
> + gnttab_end_foreign_access(evtchnl->gref, 0, page);
> +
> + if (evtchnl->type == EVTCHNL_TYPE_REQ)
> + evtchnl->u.req.ring.sring = NULL;
> + else
> + evtchnl->u.evt.page = NULL;
> +
> + memset(evtchnl, 0, sizeof(*evtchnl));
Since you are zeroing out the structure you don't need to set fields to
zero.
I also think you need to free the page.
-boris
On 02/21/2018 03:03 AM, Oleksandr Andrushchenko wrote:
>
> static int __init xen_drv_init(void)
> {
> + /* At the moment we only support case with XEN_PAGE_SIZE == PAGE_SIZE */
> + BUILD_BUG_ON(XEN_PAGE_SIZE != PAGE_SIZE);
Why BUILD_BUG_ON? This should simply not load if page sizes are different.
> + ret = gnttab_map_refs(map_ops, NULL, buf->pages, buf->num_pages);
> + BUG_ON(ret);
We should try not to BUG*(). There are a few in this patch (and possibly
others) that I think can be avoided.
> +
> +static int alloc_storage(struct xen_drm_front_shbuf *buf)
> +{
> + if (buf->sgt) {
> + buf->pages = kvmalloc_array(buf->num_pages,
> + sizeof(struct page *), GFP_KERNEL);
> + if (!buf->pages)
> + return -ENOMEM;
> +
> + if (drm_prime_sg_to_page_addr_arrays(buf->sgt, buf->pages,
> + NULL, buf->num_pages) < 0)
> + return -EINVAL;
> + }
> +
> + buf->grefs = kcalloc(buf->num_grefs, sizeof(*buf->grefs), GFP_KERNEL);
> + if (!buf->grefs)
> + return -ENOMEM;
> +
> + buf->directory = kcalloc(get_num_pages_dir(buf), PAGE_SIZE, GFP_KERNEL);
> + if (!buf->directory)
> + return -ENOMEM;
You need to clean up on errors.
-boris
> +
> + return 0;
> +}
On 02/23/2018 12:23 AM, Boris Ostrovsky wrote:
> On 02/21/2018 03:03 AM, Oleksandr Andrushchenko wrote:
>> +static struct xenbus_driver xen_driver = {
>> + .ids = xen_drv_ids,
>> + .probe = xen_drv_probe,
>> + .remove = xen_drv_remove,
>> + .otherend_changed = backend_on_changed,
> What does "_on_" stand for?
Well, it is somewhat of a hint that this is called "on" an event,
e.g. the event when the other end's state has changed, the backend in
this case. It could be something like "backend_on_state_changed"
> -boris
On 02/23/2018 01:20 AM, Boris Ostrovsky wrote:
> On 02/21/2018 03:03 AM, Oleksandr Andrushchenko wrote:
>> +
>> +static int cfg_connector(struct xen_drm_front_info *front_info,
>> + struct xen_drm_front_cfg_connector *connector,
>> + const char *path, int index)
>> +{
>> + char *connector_path;
>> +
>> + connector_path = devm_kasprintf(&front_info->xb_dev->dev,
>> + GFP_KERNEL, "%s/%d", path, index);
>> + if (!connector_path)
>> + return -ENOMEM;
>> +
>> + connector->xenstore_path = connector_path;
>> + if (xenbus_scanf(XBT_NIL, connector_path, XENDISPL_FIELD_RESOLUTION,
>> + "%d" XENDISPL_RESOLUTION_SEPARATOR "%d",
>> + &connector->width, &connector->height) < 0) {
>> + /* either no entry configured or wrong resolution set */
>> + connector->width = 0;
>> + connector->height = 0;
> Do you also need to set connector->xenstore_path to NULL? Or maybe just
> set it after xenbus_scanf() call.
Will move it down in the code, after the "if", thank you
> -boris
>
>
>
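Moving the assignment after the xenbus_scanf() call, as agreed above, might look like the sketch below (based on the cfg_connector() fragment quoted in this thread; the early return on the unconfigured case is an assumption):

```c
	if (xenbus_scanf(XBT_NIL, connector_path, XENDISPL_FIELD_RESOLUTION,
			"%d" XENDISPL_RESOLUTION_SEPARATOR "%d",
			&connector->width, &connector->height) < 0) {
		/* either no entry configured or wrong resolution set */
		connector->width = 0;
		connector->height = 0;
		return 0;
	}

	/* only remember the path for connectors that are configured */
	connector->xenstore_path = connector_path;
	return 0;
```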
On 02/23/2018 01:50 AM, Boris Ostrovsky wrote:
> On 02/21/2018 03:03 AM, Oleksandr Andrushchenko wrote:
>> +
>> +static irqreturn_t evtchnl_interrupt_ctrl(int irq, void *dev_id)
>> +{
>> + struct xen_drm_front_evtchnl *evtchnl = dev_id;
>> + struct xen_drm_front_info *front_info = evtchnl->front_info;
>> + struct xendispl_resp *resp;
>> + RING_IDX i, rp;
>> + unsigned long flags;
>> +
>> + spin_lock_irqsave(&front_info->io_lock, flags);
>> +
>> + if (unlikely(evtchnl->state != EVTCHNL_STATE_CONNECTED))
>> + goto out;
> Do you need to check the state under lock? (in other routines too).
Not really, I will move the check out of the lock in the interrupt
handlers. In the other places (I assume you refer to be_stream_do_io)
it is set under the lock as part of an atomic operation, e.g.
we get a new request pointer from the ring and reset the completion.
So, those places still seem to be ok
> ...
>
>> +
>> +static void evtchnl_free(struct xen_drm_front_info *front_info,
>> + struct xen_drm_front_evtchnl *evtchnl)
>> +{
>> + unsigned long page = 0;
>> +
>> + if (evtchnl->type == EVTCHNL_TYPE_REQ)
>> + page = (unsigned long)evtchnl->u.req.ring.sring;
>> + else if (evtchnl->type == EVTCHNL_TYPE_EVT)
>> + page = (unsigned long)evtchnl->u.evt.page;
>> + if (!page)
>> + return;
>> +
>> + evtchnl->state = EVTCHNL_STATE_DISCONNECTED;
>> +
>> + if (evtchnl->type == EVTCHNL_TYPE_REQ) {
>> + /* release all who still waits for response if any */
>> + evtchnl->u.req.resp_status = -EIO;
>> + complete_all(&evtchnl->u.req.completion);
>> + }
>> +
>> + if (evtchnl->irq)
>> + unbind_from_irqhandler(evtchnl->irq, evtchnl);
>> +
>> + if (evtchnl->port)
>> + xenbus_free_evtchn(front_info->xb_dev, evtchnl->port);
>> +
>> + /* end access and free the page */
>> + if (evtchnl->gref != GRANT_INVALID_REF)
>> + gnttab_end_foreign_access(evtchnl->gref, 0, page);
>> +
>> + if (evtchnl->type == EVTCHNL_TYPE_REQ)
>> + evtchnl->u.req.ring.sring = NULL;
>> + else
>> + evtchnl->u.evt.page = NULL;
>> +
>> + memset(evtchnl, 0, sizeof(*evtchnl));
> Since you are zeroing out the structure you don't need to set fields to
> zero.
good catch, thank you
> I also think you need to free the page.
it is freed by gnttab_end_foreign_access, please see [1]
> -boris
[1]
https://elixir.bootlin.com/linux/v4.11-rc1/source/drivers/xen/grant-table.c#L380
On 02/23/2018 02:25 AM, Boris Ostrovsky wrote:
> On 02/21/2018 03:03 AM, Oleksandr Andrushchenko wrote:
>>
>> static int __init xen_drv_init(void)
>> {
>> + /* At the moment we only support case with XEN_PAGE_SIZE == PAGE_SIZE */
>> + BUILD_BUG_ON(XEN_PAGE_SIZE != PAGE_SIZE);
>
> Why BUILD_BUG_ON? This should simply not load if page sizes are different.
>
>
This is a compile-time check, so if the kernel/Xen is configured
to use a page size combination which is not supported by the
driver, the build will fail. This seems correct to me,
because a driver which cannot handle the configured page sizes
should not even be loadable, so it cannot do any harm.
>
>
>
>> + ret = gnttab_map_refs(map_ops, NULL, buf->pages, buf->num_pages);
>> + BUG_ON(ret);
>
> We should try not to BUG*(). There are a few in this patch (and possibly
> others) that I think can be avoided.
>
I will rework BUG_* for map/unmap code to handle errors,
but will still leave
/* either pages or sgt, not both */
BUG_ON(cfg->pages && cfg->sgt);
which is a real driver bug and must not happen
>
>
>
>> +
>> +static int alloc_storage(struct xen_drm_front_shbuf *buf)
>> +{
>> + if (buf->sgt) {
>> + buf->pages = kvmalloc_array(buf->num_pages,
>> + sizeof(struct page *), GFP_KERNEL);
>> + if (!buf->pages)
>> + return -ENOMEM;
>> +
>> + if (drm_prime_sg_to_page_addr_arrays(buf->sgt, buf->pages,
>> + NULL, buf->num_pages) < 0)
>> + return -EINVAL;
>> + }
>> +
>> + buf->grefs = kcalloc(buf->num_grefs, sizeof(*buf->grefs), GFP_KERNEL);
>> + if (!buf->grefs)
>> + return -ENOMEM;
>> +
>> + buf->directory = kcalloc(get_num_pages_dir(buf), PAGE_SIZE, GFP_KERNEL);
>> + if (!buf->directory)
>> + return -ENOMEM;
> You need to clean up on errors.
this is called from xen_drm_front_shbuf_alloc and the buffer is
properly cleaned up on failure, e.g.:
ret = alloc_storage(buf);
if (ret)
goto fail;
[...]
fail:
xen_drm_front_shbuf_free(buf);
> -boris
>
>> +
>> + return 0;
>> +}
On 02/23/2018 02:53 AM, Oleksandr Andrushchenko wrote:
> On 02/23/2018 02:25 AM, Boris Ostrovsky wrote:
>> On 02/21/2018 03:03 AM, Oleksandr Andrushchenko wrote:
>>> static int __init xen_drv_init(void)
>>> {
>>> + /* At the moment we only support case with XEN_PAGE_SIZE ==
>>> PAGE_SIZE */
>>> + BUILD_BUG_ON(XEN_PAGE_SIZE != PAGE_SIZE);
>>
>> Why BUILD_BUG_ON? This should simply not load if page sizes are
>> different.
>>
>>
> This is a compile-time check, so if the kernel/Xen is configured
> to use a page size combination which is not supported by the
> driver, the build will fail. This seems correct to me,
> because a driver which cannot handle the configured page sizes
> should not even be loadable, so it cannot do any harm.
This will prevent the whole kernel from building. So, for example,
randconfig builds will fail.
>>
>>
>>
>>> + ret = gnttab_map_refs(map_ops, NULL, buf->pages, buf->num_pages);
>>> + BUG_ON(ret);
>>
>> We should try not to BUG*(). There are a few in this patch (and possibly
>> others) that I think can be avoided.
>>
> I will rework BUG_* for map/unmap code to handle errors,
> but will still leave
> /* either pages or sgt, not both */
> BUG_ON(cfg->pages && cfg->sgt);
> which is a real driver bug and must not happen
Why not return an error?
In fact, AFAICS you only call it in patch 9 where both of these can be
tested, in which case something like -EINVAL would look reasonable.
-boris
On 02/23/2018 01:37 AM, Oleksandr Andrushchenko wrote:
> On 02/23/2018 12:23 AM, Boris Ostrovsky wrote:
>> On 02/21/2018 03:03 AM, Oleksandr Andrushchenko wrote:
>>> +static struct xenbus_driver xen_driver = {
>>> + .ids = xen_drv_ids,
>>> + .probe = xen_drv_probe,
>>> + .remove = xen_drv_remove,
>>> + .otherend_changed = backend_on_changed,
>> What does "_on_" stand for?
> Well, it is somewhat like a hint that this is called "on" event,
> e.g. event when the other end state has changed, backend in this
> case. It could be something like "backend_on_state_changed"
If you look at other xenbus_drivers, none of them uses this, so I think
we should stick to conventional naming (and the same applies to the
other backend_on_* routines).
-boris
On 02/23/2018 02:00 AM, Oleksandr Andrushchenko wrote:
> On 02/23/2018 01:50 AM, Boris Ostrovsky wrote:
>> On 02/21/2018 03:03 AM, Oleksandr Andrushchenko wrote:
>>> +
>>> +static irqreturn_t evtchnl_interrupt_ctrl(int irq, void *dev_id)
>>> +{
>>> + struct xen_drm_front_evtchnl *evtchnl = dev_id;
>>> + struct xen_drm_front_info *front_info = evtchnl->front_info;
>>> + struct xendispl_resp *resp;
>>> + RING_IDX i, rp;
>>> + unsigned long flags;
>>> +
>>> + spin_lock_irqsave(&front_info->io_lock, flags);
>>> +
>>> + if (unlikely(evtchnl->state != EVTCHNL_STATE_CONNECTED))
>>> + goto out;
>> Do you need to check the state under lock? (in other routines too).
> not really, will move out of the lock in interrupt handlers
> other places (I assume you refer to be_stream_do_io)
I was mostly referring to evtchnl_interrupt_evt().
-boris
> it is set under lock as a part of atomic operation, e.g.
> we get a new request pointer from the ring and reset completion
> So, those places still seem to be ok
On 02/23/2018 04:36 PM, Boris Ostrovsky wrote:
> On 02/23/2018 02:53 AM, Oleksandr Andrushchenko wrote:
>> On 02/23/2018 02:25 AM, Boris Ostrovsky wrote:
>>> On 02/21/2018 03:03 AM, Oleksandr Andrushchenko wrote:
>>>> static int __init xen_drv_init(void)
>>>> {
>>>> + /* At the moment we only support case with XEN_PAGE_SIZE ==
>>>> PAGE_SIZE */
>>>> + BUILD_BUG_ON(XEN_PAGE_SIZE != PAGE_SIZE);
>>> Why BUILD_BUG_ON? This should simply not load if page sizes are
>>> different.
>>>
>>>
>> This is a compile time check, so if kernel/Xen is configured
>> to use page size combination which is not supported by the
>> driver it will fail during compilation. This seems correct to me,
>> because you shouldn't even try to load the driver which
>> cannot handle different page sizes to not make any harm.
>
> This will prevent the whole kernel from building. So, for example,
> randconfig builds will fail.
>
Makes a lot of sense, thank you.
I will rework it so the driver refuses to load if the requirement is not met.
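A runtime rejection instead of BUILD_BUG_ON could be sketched as below, so that a kernel built with an unsupported XEN_PAGE_SIZE/PAGE_SIZE combination still compiles but the driver refuses to load (the xenbus_register_frontend() call is an assumption based on other xenbus frontends):

```c
static int __init xen_drv_init(void)
{
	/* at the moment we only support the XEN_PAGE_SIZE == PAGE_SIZE case */
	if (XEN_PAGE_SIZE != PAGE_SIZE) {
		DRM_ERROR(XENDISPL_DRIVER_NAME
				": XEN_PAGE_SIZE != PAGE_SIZE, not loading\n");
		return -ENODEV;
	}

	if (!xen_domain())
		return -ENODEV;

	return xenbus_register_frontend(&xen_driver);
}
```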
>>>
>>>
>>>> + ret = gnttab_map_refs(map_ops, NULL, buf->pages, buf->num_pages);
>>>> + BUG_ON(ret);
>>> We should try not to BUG*(). There are a few in this patch (and possibly
>>> others) that I think can be avoided.
>>>
>> I will rework BUG_* for map/unmap code to handle errors,
>> but will still leave
>> /* either pages or sgt, not both */
>> BUG_ON(cfg->pages && cfg->sgt);
>> which is a real driver bug and must not happen
> Why not return an error?
>
> In fact, AFAICS you only call it in patch 9 where both of these can be
> tested, in which case something like -EINVAL would look reasonable.
ok, will remove BUG_ON as well
> -boris
On 02/23/2018 04:44 PM, Boris Ostrovsky wrote:
> On 02/23/2018 02:00 AM, Oleksandr Andrushchenko wrote:
>> On 02/23/2018 01:50 AM, Boris Ostrovsky wrote:
>>> On 02/21/2018 03:03 AM, Oleksandr Andrushchenko wrote:
>>>> +
>>>> +static irqreturn_t evtchnl_interrupt_ctrl(int irq, void *dev_id)
>>>> +{
>>>> + struct xen_drm_front_evtchnl *evtchnl = dev_id;
>>>> + struct xen_drm_front_info *front_info = evtchnl->front_info;
>>>> + struct xendispl_resp *resp;
>>>> + RING_IDX i, rp;
>>>> + unsigned long flags;
>>>> +
>>>> + spin_lock_irqsave(&front_info->io_lock, flags);
>>>> +
>>>> + if (unlikely(evtchnl->state != EVTCHNL_STATE_CONNECTED))
>>>> + goto out;
>>> Do you need to check the state under lock? (in other routines too).
>> not really, will move out of the lock in interrupt handlers
>> other places (I assume you refer to be_stream_do_io)
>
> I was mostly referring to evtchnl_interrupt_evt().
ah, then we are on the same page: I will move the check
in interrupt handlers
> -boris
>
>
>> it is set under lock as a part of atomic operation, e.g.
>> we get a new request pointer from the ring and reset completion
>> So, those places still seem to be ok
On 02/23/2018 04:39 PM, Boris Ostrovsky wrote:
> On 02/23/2018 01:37 AM, Oleksandr Andrushchenko wrote:
>> On 02/23/2018 12:23 AM, Boris Ostrovsky wrote:
>>> On 02/21/2018 03:03 AM, Oleksandr Andrushchenko wrote:
>>>> +static struct xenbus_driver xen_driver = {
>>>> + .ids = xen_drv_ids,
>>>> + .probe = xen_drv_probe,
>>>> + .remove = xen_drv_remove,
>>>> + .otherend_changed = backend_on_changed,
>>> What does "_on_" stand for?
>> Well, it is somewhat like a hint that this is called on an event,
>> e.g. the event when the other end's state has changed, backend in this
>> case. It could be something like "backend_on_state_changed"
> If you look at other xenbus_drivers none of the uses this so I think we
> should stick to conventional naming. (and the same applies to other
> backend_on_* routines).
ok, no problem. will rename to be aligned with other frontends
>
> -boris
On 02/21/2018 03:03 AM, Oleksandr Andrushchenko wrote:
> +
> +struct drm_driver xen_drm_driver = {
> + .driver_features = DRIVER_GEM | DRIVER_MODESET |
> + DRIVER_PRIME | DRIVER_ATOMIC,
> + .lastclose = lastclose,
> + .gem_free_object_unlocked = free_object,
> + .gem_vm_ops = &xen_drm_vm_ops,
> + .prime_handle_to_fd = drm_gem_prime_handle_to_fd,
> + .prime_fd_to_handle = drm_gem_prime_fd_to_handle,
> + .gem_prime_import = drm_gem_prime_import,
> + .gem_prime_export = drm_gem_prime_export,
> + .gem_prime_get_sg_table = prime_get_sg_table,
> + .gem_prime_import_sg_table = prime_import_sg_table,
> + .gem_prime_vmap = prime_vmap,
> + .gem_prime_vunmap = prime_vunmap,
> + .gem_prime_mmap = prime_mmap,
> + .dumb_create = dumb_create,
> + .fops = &xendrm_fops,
> + .name = "xendrm-du",
> + .desc = "Xen PV DRM Display Unit",
> + .date = "20161109",
You must have been working on this for a while ;-)
I assume this needs to be updated.
> +bool xen_drm_front_drv_is_used(struct platform_device *pdev)
> +{
> + struct xen_drm_front_drm_info *drm_info = platform_get_drvdata(pdev);
> + struct drm_device *dev;
> +
> + if (!drm_info)
> + return false;
> +
> + dev = drm_info->drm_dev;
> + if (!dev)
> + return false;
> +
> + /*
> + * FIXME: the code below must be protected by drm_global_mutex,
> + * but it is not accessible to us. Anyways there is a race condition,
> + * but we will re-try.
> + */
> + return dev->open_count != 0;
Would it be a problem, given the race, if you report that the frontend
is not in use, while it actually is?
-boris
On 02/23/2018 05:12 PM, Boris Ostrovsky wrote:
> On 02/21/2018 03:03 AM, Oleksandr Andrushchenko wrote:
>
>> +
>> +struct drm_driver xen_drm_driver = {
>> + .driver_features = DRIVER_GEM | DRIVER_MODESET |
>> + DRIVER_PRIME | DRIVER_ATOMIC,
>> + .lastclose = lastclose,
>> + .gem_free_object_unlocked = free_object,
>> + .gem_vm_ops = &xen_drm_vm_ops,
>> + .prime_handle_to_fd = drm_gem_prime_handle_to_fd,
>> + .prime_fd_to_handle = drm_gem_prime_fd_to_handle,
>> + .gem_prime_import = drm_gem_prime_import,
>> + .gem_prime_export = drm_gem_prime_export,
>> + .gem_prime_get_sg_table = prime_get_sg_table,
>> + .gem_prime_import_sg_table = prime_import_sg_table,
>> + .gem_prime_vmap = prime_vmap,
>> + .gem_prime_vunmap = prime_vunmap,
>> + .gem_prime_mmap = prime_mmap,
>> + .dumb_create = dumb_create,
>> + .fops = &xendrm_fops,
>> + .name = "xendrm-du",
>> + .desc = "Xen PV DRM Display Unit",
>> + .date = "20161109",
> You must have been working on this for a while ;-)
yes, this is true ;)
>
> I assume this needs to be updated.
It can be, but I would either stick to the current value
for historical reasons or would update it in the final version
of the driver, so it reflects the date of issuing ;)
>> +bool xen_drm_front_drv_is_used(struct platform_device *pdev)
>> +{
>> + struct xen_drm_front_drm_info *drm_info = platform_get_drvdata(pdev);
>> + struct drm_device *dev;
>> +
>> + if (!drm_info)
>> + return false;
>> +
>> + dev = drm_info->drm_dev;
>> + if (!dev)
>> + return false;
>> +
>> + /*
>> + * FIXME: the code below must be protected by drm_global_mutex,
>> + * but it is not accessible to us. Anyways there is a race condition,
>> + * but we will re-try.
>> + */
>> + return dev->open_count != 0;
> Would it be a problem, given the race, if you report that the frontend
> is not in use, while it actually is?
no, backend will not be able to activate us again
> -boris
On 02/21/2018 03:03 AM, Oleksandr Andrushchenko wrote:
> +static struct xen_gem_object *gem_create(struct drm_device *dev, size_t size)
> +{
> + struct xen_drm_front_drm_info *drm_info = dev->dev_private;
> + struct xen_gem_object *xen_obj;
> + int ret;
> +
> + size = round_up(size, PAGE_SIZE);
> + xen_obj = gem_create_obj(dev, size);
> + if (IS_ERR_OR_NULL(xen_obj))
> + return xen_obj;
> +
> + if (drm_info->cfg->be_alloc) {
> + /*
> + * backend will allocate space for this buffer, so
> + * only allocate array of pointers to pages
> + */
> + xen_obj->be_alloc = true;
If be_alloc is a flag (which I am not sure about) --- should it be set
to true *after* you've successfully allocated your things?
> + ret = gem_alloc_pages_array(xen_obj, size);
> + if (ret < 0) {
> + gem_free_pages_array(xen_obj);
> + goto fail;
> + }
> +
> + ret = alloc_xenballooned_pages(xen_obj->num_pages,
> + xen_obj->pages);
Why are you allocating balloon pages?
-boris
> + if (ret < 0) {
> + DRM_ERROR("Cannot allocate %zu ballooned pages: %d\n",
> + xen_obj->num_pages, ret);
> + goto fail;
> + }
> +
> + return xen_obj;
> + }
> + /*
> + * need to allocate backing pages now, so we can share those
> + * with the backend
> + */
> + xen_obj->num_pages = DIV_ROUND_UP(size, PAGE_SIZE);
> + xen_obj->pages = drm_gem_get_pages(&xen_obj->base);
> + if (IS_ERR_OR_NULL(xen_obj->pages)) {
> + ret = PTR_ERR(xen_obj->pages);
> + xen_obj->pages = NULL;
> + goto fail;
> + }
> +
> + return xen_obj;
> +
> +fail:
> + DRM_ERROR("Failed to allocate buffer with size %zu\n", size);
> + return ERR_PTR(ret);
> +}
> +
>
On 02/23/2018 05:26 PM, Boris Ostrovsky wrote:
> On 02/21/2018 03:03 AM, Oleksandr Andrushchenko wrote:
>> +static struct xen_gem_object *gem_create(struct drm_device *dev, size_t size)
>> +{
>> + struct xen_drm_front_drm_info *drm_info = dev->dev_private;
>> + struct xen_gem_object *xen_obj;
>> + int ret;
>> +
>> + size = round_up(size, PAGE_SIZE);
>> + xen_obj = gem_create_obj(dev, size);
>> + if (IS_ERR_OR_NULL(xen_obj))
>> + return xen_obj;
>> +
>> + if (drm_info->cfg->be_alloc) {
>> + /*
>> + * backend will allocate space for this buffer, so
>> + * only allocate array of pointers to pages
>> + */
>> + xen_obj->be_alloc = true;
> If be_alloc is a flag (which I am not sure about) --- should it be set
> to true *after* you've successfully allocated your things?
this is a configuration option telling about the way
the buffer gets allocated: either by the frontend or
backend (be_alloc -> buffer allocated by the backend)
>
>> + ret = gem_alloc_pages_array(xen_obj, size);
>> + if (ret < 0) {
>> + gem_free_pages_array(xen_obj);
>> + goto fail;
>> + }
>> +
>> + ret = alloc_xenballooned_pages(xen_obj->num_pages,
>> + xen_obj->pages);
> Why are you allocating balloon pages?
in this use-case we map pages provided by the backend
(yes, I know this can be a problem from both security
POV and that DomU can die holding pages of Dom0 forever:
but still it is a configuration option, so user decides
if her use-case needs this and takes responsibility for
such a decision).
Please see description of the buffering modes in xen_drm_front.h
specifically for backend allocated buffers:
*******************************************************************************
* 2. Buffers allocated by the backend
*******************************************************************************
*
* This mode of operation is run-time configured via guest domain
configuration
* through XenStore entries.
*
* For systems which do not provide IOMMU support, but have specific
* requirements for display buffers, it is possible to allocate such buffers
* at the backend side and share those with the frontend.
* For example, if the host domain is 1:1 mapped and has DRM/GPU hardware
* expecting physically contiguous memory, this allows implementing
* zero-copying use-cases.
>
> -boris
>
>> + if (ret < 0) {
>> + DRM_ERROR("Cannot allocate %zu ballooned pages: %d\n",
>> + xen_obj->num_pages, ret);
>> + goto fail;
>> + }
>> +
>> + return xen_obj;
>> + }
>> + /*
>> + * need to allocate backing pages now, so we can share those
>> + * with the backend
>> + */
>> + xen_obj->num_pages = DIV_ROUND_UP(size, PAGE_SIZE);
>> + xen_obj->pages = drm_gem_get_pages(&xen_obj->base);
>> + if (IS_ERR_OR_NULL(xen_obj->pages)) {
>> + ret = PTR_ERR(xen_obj->pages);
>> + xen_obj->pages = NULL;
>> + goto fail;
>> + }
>> +
>> + return xen_obj;
>> +
>> +fail:
>> + DRM_ERROR("Failed to allocate buffer with size %zu\n", size);
>> + return ERR_PTR(ret);
>> +}
>> +
>>
Hi, all!
Last Friday some concerns were raised on #dri-devel wrt "yet
another driver" for Xen and why not virtio-gpu. Let me explain
why we need a new para-virtualized driver for Xen and why we
can't just use virtio. Hope this helps the communities (both Xen
and DRI) to better understand this work and our motivation.
Disclaimer: some or all of the below may sound like weak arguments or
not be 100% correct, so any help clarifying these points is more
than welcome ;)
1. First of all, we are targeting ARM embedded use-cases and for
ARM we do not use QEMU [1]: "...Xen on ARM is not just a straight
1:1 port of x86 Xen... Xen on ARM does not need QEMU because it does
not do any emulation. It accomplishes the goal by exploiting
virtualization support in hardware as much as possible and using
paravirtualized interfaces for IO."
That being said, it is still possible to run virtio-gpu with Xen+QEMU [2].
In this case QEMU can be used for device virtualization, e.g. network,
block, console. But these already exist as Xen para-virtualized drivers,
again eliminating the need for QEMU: a typical ARM system runs
para-virtualized drivers for network, block, console etc.
2. virtio-gpu requires PCI/MMIO emulation
virtio-gpu (virtio-gpu-pci) requires virtio-pci, but para-virtualized
device drivers do not need this.
3. No need for 3d/virgl.
There are use-cases which either do not use OpenGL at all or will use
custom virtualization solutions allowing sharing of a real GPU with the
guest, e.g. the vGPU approach.
4. More freedom for buffer allocation.
As of now virtio-gpu is only capable of allocating buffers via TTM, while
there are use-cases where we need to have more freedom:
for systems which do not provide IOMMU support, but have specific
requirements for display buffers, it is possible to allocate such buffers
at backend side and share those with the frontend driver.
For example, if host domain is 1:1 mapped and has DRM/GPU hardware expecting
physically contiguous memory (in PA, not IPA), this allows implementing
zero-copying use-cases.
5. Zero-copying support at backend side
Having a native Xen implementation allows implementing zero-copying
use-cases on the backend side with the help of a supporting DRM driver [3],
which we hope to upstream as well (it is not yet ready in terms of code
cleanup).
6. QEMU backends for virtio-gpu cannot be used as is: guest displays
could be just a part of the final user experience. Thus, a QEMU backend
must be modified to interact, for example, with the Automotive Grade Linux
display manager.
In our use-case we have a backend which supports multi-touch and guest
display(s), running either as a Weston client (which is not supported
by QEMU at the moment?) or a KMS/DRM client. This allows us to enable many
more use-cases without the need to run QEMU.
Thank you,
Oleksandr Andrushchenko
[1] https://wiki.xen.org/wiki/Xen_ARM_with_Virtualization_Extensions_whitepaper
[2] https://elinux.org/R-Car/Virtualization
[3] https://github.com/xen-troops/linux/blob/ces2018/drivers/gpu/drm/xen/xen_drm_zcopy_drv.c
On 02/21/2018 10:03 AM, Oleksandr Andrushchenko wrote:
> From: Oleksandr Andrushchenko <[email protected]>
>
> Hello!
>
> This patch series adds support for Xen [1] para-virtualized
> frontend display driver. It implements the protocol from
> include/xen/interface/io/displif.h [2].
> Accompanying backend [3] is implemented as a user-space application
> and its helper library [4], capable of running as a Weston client
> or DRM master.
> Configuration of both backend and frontend is done via
> Xen guest domain configuration options [5].
>
> *******************************************************************************
> * Driver limitations
> *******************************************************************************
> 1. Configuration options 1.1 (contiguous display buffers) and 2 (backend
> allocated buffers) below are not supported at the same time.
>
> 2. Only primary plane without additional properties is supported.
>
> 3. Only one video mode is supported, whose resolution is configured via XenStore.
>
> 4. All CRTCs operate at fixed frequency of 60Hz.
>
> *******************************************************************************
> * Driver modes of operation in terms of display buffers used
> *******************************************************************************
> Depending on the requirements for the para-virtualized environment, namely
> requirements dictated by the accompanying DRM/(v)GPU drivers running in both
> host and guest environments, a number of operating modes of the
> para-virtualized display driver are supported:
> - display buffers can be allocated by either frontend driver or backend
> - display buffers can be allocated to be contiguous in memory or not
>
> Note! Frontend driver itself has no dependency on contiguous memory for
> its operation.
>
> *******************************************************************************
> * 1. Buffers allocated by the frontend driver.
> *******************************************************************************
>
> The below modes of operation are configured at compile-time via
> frontend driver's kernel configuration.
>
> 1.1. Front driver configured to use GEM CMA helpers
> This use-case is useful when used with an accompanying DRM/vGPU driver in
> the guest domain which was designed to only work with contiguous buffers,
> e.g. a DRM driver based on GEM CMA helpers: such drivers can only import
> contiguous PRIME buffers, thus requiring the frontend driver to provide
> such buffers. In order to implement this mode of operation the
> para-virtualized frontend driver can be configured to use GEM CMA helpers.
>
> 1.2. Front driver doesn't use GEM CMA
> If accompanying drivers can cope with non-contiguous memory then, to
> lower pressure on the CMA subsystem of the kernel, the driver can allocate
> buffers from system memory.
>
> Note! If used with accompanying DRM/(v)GPU drivers this mode of operation
> may require IOMMU support on the platform, so accompanying DRM/vGPU
> hardware can still reach display buffer memory while importing PRIME
> buffers from the frontend driver.
>
> *******************************************************************************
> * 2. Buffers allocated by the backend
> *******************************************************************************
>
> This mode of operation is run-time configured via guest domain configuration
> through XenStore entries.
>
> For systems which do not provide IOMMU support, but have specific
> requirements for display buffers, it is possible to allocate such buffers
> at the backend side and share those with the frontend.
> For example, if the host domain is 1:1 mapped and has DRM/GPU hardware
> expecting physically contiguous memory, this allows implementing
> zero-copying use-cases.
>
>
> I would like to thank, last but not least, the following
> people/communities who helped make this driver happen ;)
>
> 1. My team at EPAM for continuous support
> 2. Xen community for answering tons of questions on different
> modes of operation of the driver with respect to virtualized
> environment.
> 3. Rob Clark for "GEM allocation for para-virtualized DRM driver" [6]
> 4. Maarten Lankhorst for "Atomic driver and old remove FB behavior" [7]
> 5. Ville Syrjälä for "Questions on page flips and atomic modeset" [8]
>
> Thank you,
> Oleksandr Andrushchenko
>
> P.S. There are two dependencies for this driver limiting some of the
> use-cases which are on review now:
> 1. "drm/simple_kms_helper: Add {enable|disable}_vblank callback support" [9]
> 2. "drm/simple_kms_helper: Fix NULL pointer dereference with no active CRTC" [10]
>
> [1] https://wiki.xen.org/wiki/Paravirtualization_(PV)#PV_IO_Drivers
> [2] https://elixir.bootlin.com/linux/v4.16-rc2/source/include/xen/interface/io/displif.h
> [3] https://github.com/xen-troops/displ_be
> [4] https://github.com/xen-troops/libxenbe
> [5] https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=docs/man/xl.cfg.pod.5.in;h=a699367779e2ae1212ff8f638eff0206ec1a1cc9;hb=refs/heads/master#l1257
> [6] https://lists.freedesktop.org/archives/dri-devel/2017-March/136038.html
> [7] https://www.spinics.net/lists/dri-devel/msg164102.html
> [8] https://www.spinics.net/lists/dri-devel/msg164463.html
> [9] https://patchwork.freedesktop.org/series/38073/
> [10] https://patchwork.freedesktop.org/series/38139/
>
> Oleksandr Andrushchenko (9):
> drm/xen-front: Introduce Xen para-virtualized frontend driver
> drm/xen-front: Implement Xen bus state handling
> drm/xen-front: Read driver configuration from Xen store
> drm/xen-front: Implement Xen event channel handling
> drm/xen-front: Implement handling of shared display buffers
> drm/xen-front: Introduce DRM/KMS virtual display driver
> drm/xen-front: Implement KMS/connector handling
> drm/xen-front: Implement GEM operations
> drm/xen-front: Implement communication with backend
>
> drivers/gpu/drm/Kconfig | 2 +
> drivers/gpu/drm/Makefile | 1 +
> drivers/gpu/drm/xen/Kconfig | 30 ++
> drivers/gpu/drm/xen/Makefile | 17 +
> drivers/gpu/drm/xen/xen_drm_front.c | 712 ++++++++++++++++++++++++++++
> drivers/gpu/drm/xen/xen_drm_front.h | 154 ++++++
> drivers/gpu/drm/xen/xen_drm_front_cfg.c | 84 ++++
> drivers/gpu/drm/xen/xen_drm_front_cfg.h | 45 ++
> drivers/gpu/drm/xen/xen_drm_front_conn.c | 125 +++++
> drivers/gpu/drm/xen/xen_drm_front_conn.h | 35 ++
> drivers/gpu/drm/xen/xen_drm_front_drv.c | 294 ++++++++++++
> drivers/gpu/drm/xen/xen_drm_front_drv.h | 73 +++
> drivers/gpu/drm/xen/xen_drm_front_evtchnl.c | 399 ++++++++++++++++
> drivers/gpu/drm/xen/xen_drm_front_evtchnl.h | 89 ++++
> drivers/gpu/drm/xen/xen_drm_front_gem.c | 360 ++++++++++++++
> drivers/gpu/drm/xen/xen_drm_front_gem.h | 46 ++
> drivers/gpu/drm/xen/xen_drm_front_gem_cma.c | 93 ++++
> drivers/gpu/drm/xen/xen_drm_front_kms.c | 299 ++++++++++++
> drivers/gpu/drm/xen/xen_drm_front_kms.h | 30 ++
> drivers/gpu/drm/xen/xen_drm_front_shbuf.c | 430 +++++++++++++++++
> drivers/gpu/drm/xen/xen_drm_front_shbuf.h | 80 ++++
> 21 files changed, 3398 insertions(+)
> create mode 100644 drivers/gpu/drm/xen/Kconfig
> create mode 100644 drivers/gpu/drm/xen/Makefile
> create mode 100644 drivers/gpu/drm/xen/xen_drm_front.c
> create mode 100644 drivers/gpu/drm/xen/xen_drm_front.h
> create mode 100644 drivers/gpu/drm/xen/xen_drm_front_cfg.c
> create mode 100644 drivers/gpu/drm/xen/xen_drm_front_cfg.h
> create mode 100644 drivers/gpu/drm/xen/xen_drm_front_conn.c
> create mode 100644 drivers/gpu/drm/xen/xen_drm_front_conn.h
> create mode 100644 drivers/gpu/drm/xen/xen_drm_front_drv.c
> create mode 100644 drivers/gpu/drm/xen/xen_drm_front_drv.h
> create mode 100644 drivers/gpu/drm/xen/xen_drm_front_evtchnl.c
> create mode 100644 drivers/gpu/drm/xen/xen_drm_front_evtchnl.h
> create mode 100644 drivers/gpu/drm/xen/xen_drm_front_gem.c
> create mode 100644 drivers/gpu/drm/xen/xen_drm_front_gem.h
> create mode 100644 drivers/gpu/drm/xen/xen_drm_front_gem_cma.c
> create mode 100644 drivers/gpu/drm/xen/xen_drm_front_kms.c
> create mode 100644 drivers/gpu/drm/xen/xen_drm_front_kms.h
> create mode 100644 drivers/gpu/drm/xen/xen_drm_front_shbuf.c
> create mode 100644 drivers/gpu/drm/xen/xen_drm_front_shbuf.h
>
On 02/23/2018 10:35 AM, Oleksandr Andrushchenko wrote:
> On 02/23/2018 05:26 PM, Boris Ostrovsky wrote:
>> On 02/21/2018 03:03 AM, Oleksandr Andrushchenko wrote:
>>> +static struct xen_gem_object *gem_create(struct drm_device *dev,
>>> size_t size)
>>> +{
>>> + struct xen_drm_front_drm_info *drm_info = dev->dev_private;
>>> + struct xen_gem_object *xen_obj;
>>> + int ret;
>>> +
>>> + size = round_up(size, PAGE_SIZE);
>>> + xen_obj = gem_create_obj(dev, size);
>>> + if (IS_ERR_OR_NULL(xen_obj))
>>> + return xen_obj;
>>> +
>>> + if (drm_info->cfg->be_alloc) {
>>> + /*
>>> + * backend will allocate space for this buffer, so
>>> + * only allocate array of pointers to pages
>>> + */
>>> + xen_obj->be_alloc = true;
>> If be_alloc is a flag (which I am not sure about) --- should it be set
>> to true *after* you've successfully allocated your things?
> this is a configuration option telling about the way
> the buffer gets allocated: either by the frontend or
> backend (be_alloc -> buffer allocated by the backend)
I can see how drm_info->cfg->be_alloc might be a configuration option
but xen_obj->be_alloc is set here and that's not how configuration
options typically behave.
>>
>>> + ret = gem_alloc_pages_array(xen_obj, size);
>>> + if (ret < 0) {
>>> + gem_free_pages_array(xen_obj);
>>> + goto fail;
>>> + }
>>> +
>>> + ret = alloc_xenballooned_pages(xen_obj->num_pages,
>>> + xen_obj->pages);
>> Why are you allocating balloon pages?
> in this use-case we map pages provided by the backend
> (yes, I know this can be a problem from both security
> POV and that DomU can die holding pages of Dom0 forever:
> but still it is a configuration option, so user decides
> if her use-case needs this and takes responsibility for
> such a decision).
Perhaps I am missing something here but when you say "I know this can be
a problem from both security POV ..." then there is something wrong with
your solution.
-boris
>
> Please see description of the buffering modes in xen_drm_front.h
> specifically for backend allocated buffers:
> *******************************************************************************
>
> * 2. Buffers allocated by the backend
> *******************************************************************************
>
> *
> * This mode of operation is run-time configured via guest domain
> configuration
> * through XenStore entries.
> *
> * For systems which do not provide IOMMU support, but having specific
> * requirements for display buffers it is possible to allocate such
> buffers
> * at backend side and share those with the frontend.
> * For example, if host domain is 1:1 mapped and has DRM/GPU hardware
> expecting
> * physically contiguous memory, this allows implementing zero-copying
> * use-cases.
>
>>
>> -boris
>>
>>> + if (ret < 0) {
>>> + DRM_ERROR("Cannot allocate %zu ballooned pages: %d\n",
>>> + xen_obj->num_pages, ret);
>>> + goto fail;
>>> + }
>>> +
>>> + return xen_obj;
>>> + }
>>> + /*
>>> + * need to allocate backing pages now, so we can share those
>>> + * with the backend
>>> + */
>>> + xen_obj->num_pages = DIV_ROUND_UP(size, PAGE_SIZE);
>>> + xen_obj->pages = drm_gem_get_pages(&xen_obj->base);
>>> + if (IS_ERR_OR_NULL(xen_obj->pages)) {
>>> + ret = PTR_ERR(xen_obj->pages);
>>> + xen_obj->pages = NULL;
>>> + goto fail;
>>> + }
>>> +
>>> + return xen_obj;
>>> +
>>> +fail:
>>> + DRM_ERROR("Failed to allocate buffer with size %zu\n", size);
>>> + return ERR_PTR(ret);
>>> +}
>>> +
>>>
>
On 02/27/2018 01:47 AM, Boris Ostrovsky wrote:
> On 02/23/2018 10:35 AM, Oleksandr Andrushchenko wrote:
>> On 02/23/2018 05:26 PM, Boris Ostrovsky wrote:
>>> On 02/21/2018 03:03 AM, Oleksandr Andrushchenko wrote:
>>>> +static struct xen_gem_object *gem_create(struct drm_device *dev,
>>>> size_t size)
>>>> +{
>>>> + struct xen_drm_front_drm_info *drm_info = dev->dev_private;
>>>> + struct xen_gem_object *xen_obj;
>>>> + int ret;
>>>> +
>>>> + size = round_up(size, PAGE_SIZE);
>>>> + xen_obj = gem_create_obj(dev, size);
>>>> + if (IS_ERR_OR_NULL(xen_obj))
>>>> + return xen_obj;
>>>> +
>>>> + if (drm_info->cfg->be_alloc) {
>>>> + /*
>>>> + * backend will allocate space for this buffer, so
>>>> + * only allocate array of pointers to pages
>>>> + */
>>>> + xen_obj->be_alloc = true;
>>> If be_alloc is a flag (which I am not sure about) --- should it be set
>>> to true *after* you've successfully allocated your things?
>> this is a configuration option telling about the way
>> the buffer gets allocated: either by the frontend or
>> backend (be_alloc -> buffer allocated by the backend)
>
> I can see how drm_info->cfg->be_alloc might be a configuration option
> but xen_obj->be_alloc is set here and that's not how configuration
> options typically behave.
you are right, I will put be_alloc down the code and will slightly
rework error handling for this function
>
>>>> + ret = gem_alloc_pages_array(xen_obj, size);
>>>> + if (ret < 0) {
>>>> + gem_free_pages_array(xen_obj);
>>>> + goto fail;
>>>> + }
>>>> +
>>>> + ret = alloc_xenballooned_pages(xen_obj->num_pages,
>>>> + xen_obj->pages);
>>> Why are you allocating balloon pages?
>> in this use-case we map pages provided by the backend
>> (yes, I know this can be a problem from both security
>> POV and that DomU can die holding pages of Dom0 forever:
>> but still it is a configuration option, so user decides
>> if her use-case needs this and takes responsibility for
>> such a decision).
>
> Perhaps I am missing something here but when you say "I know this can be
> a problem from both security POV ..." then there is something wrong with
> your solution.
well, in this scenario there are actually 2 concerns:
1. If DomU dies, the pages/grants from Dom0/DomD cannot be
reclaimed back.
2. A misbehaving guest may send too many requests to the
backend, exhausting grant references and memory of Dom0/DomD
(this is the only concern from a security POV). Please see [1].
But we are focusing on embedded use-cases, so the systems we use
are not that "dynamic" with respect to 2).
Namely: we have a fixed number of domains and their functionality
is well known, so we can make rather precise assumptions on resource
usage. This is why I try to warn about such a use-case and rely on
the end user, who understands the caveats.
I'll probably add a more precise description of this use-case,
clarifying what that security POV is, so there is no confusion.
Hope this explanation answers your questions
> -boris
>
>> Please see description of the buffering modes in xen_drm_front.h
>> specifically for backend allocated buffers:
>> *******************************************************************************
>>
>> * 2. Buffers allocated by the backend
>> *******************************************************************************
>>
>> *
>> * This mode of operation is run-time configured via guest domain
>> configuration
>> * through XenStore entries.
>> *
>> * For systems which do not provide IOMMU support, but having specific
>> * requirements for display buffers it is possible to allocate such
>> buffers
>> * at backend side and share those with the frontend.
>> * For example, if host domain is 1:1 mapped and has DRM/GPU hardware
>> expecting
>> * physically contiguous memory, this allows implementing zero-copying
>> * use-cases.
>>
>>> -boris
>>>
>>>> + if (ret < 0) {
>>>> + DRM_ERROR("Cannot allocate %zu ballooned pages: %d\n",
>>>> + xen_obj->num_pages, ret);
>>>> + goto fail;
>>>> + }
>>>> +
>>>> + return xen_obj;
>>>> + }
>>>> + /*
>>>> + * need to allocate backing pages now, so we can share those
>>>> + * with the backend
>>>> + */
>>>> + xen_obj->num_pages = DIV_ROUND_UP(size, PAGE_SIZE);
>>>> + xen_obj->pages = drm_gem_get_pages(&xen_obj->base);
>>>> + if (IS_ERR_OR_NULL(xen_obj->pages)) {
>>>> + ret = PTR_ERR(xen_obj->pages);
>>>> + xen_obj->pages = NULL;
>>>> + goto fail;
>>>> + }
>>>> +
>>>> + return xen_obj;
>>>> +
>>>> +fail:
>>>> + DRM_ERROR("Failed to allocate buffer with size %zu\n", size);
>>>> + return ERR_PTR(ret);
>>>> +}
>>>> +
>>>>
Thank you,
Oleksandr
[1]
https://lists.xenproject.org/archives/html/xen-devel/2017-07/msg03100.html
Please find some more clarifications on VirtIO use with Xen
(I would like to thank Xen community for helping with this)
1. Possible security issues - VirtIO devices are PCI bus masters, thus
allowing a real device (running, for example, in an untrusted driver domain)
to get control over the guest's memory by writing to its memory.
2. VirtIO currently uses GFNs written into the shared ring, without Xen
grants support. This will require a generic grant-mapping/sharing layer
to be added to VirtIO.
3. VirtIO requires QEMU PCI emulation for setting up a device. Xen PV
(and PVH) domains don't use QEMU for platform emulation in order to
reduce the attack surface. (PVH is in the process of gaining PCI config
space emulation, but it is optional, not a requirement.)
4. Most of the PV drivers a guest uses at the moment are Xen PV drivers,
e.g. net, block, console, so only virtio-gpu would require QEMU to run.
Although this use-case would work on x86, it would require additional
changes to get it running on ARM, which is my target platform.
Thank you,
Oleksandr
On 02/26/2018 10:21 AM, Oleksandr Andrushchenko wrote:
> Hi, all!
>
> Last Friday some concerns on #dri-devel were raised wrt "yet
> another driver" for Xen and why not virtio-gpu. Let me highlight
> why we need a new paravirtualized driver for Xen and why we
> can't just use virtio. Hope this helps the communities (both Xen
> and DRI) to have a better understanding of this work and our motivation.
>
> Disclaimer: some or all of the below may sound like weak arguments or
> not be 100% correct, so any help on clarifying the below is more
> than welcome ;)
>
> 1. First of all, we are targeting ARM embedded use-cases, and for
> ARM we do not use QEMU [1]: "...Xen on ARM is not just a straight
> 1:1 port of x86 Xen... Xen on ARM does not need QEMU because it does
> not do any emulation. It accomplishes the goal by exploiting
> virtualization support in hardware as much as possible and using
> paravirtualized interfaces for IO."
>
> That being said, it is still possible to run virtio-gpu with Xen+QEMU: [2]
>
> In this case QEMU can be used for device virtualization, e.g. network,
> block, console. But these already exist as Xen para-virtualized drivers,
> again eliminating the need for QEMU: a typical ARM system runs
> para-virtualized drivers for network, block, console etc.
>
> 2. virtio-gpu requires PCI/MMIO emulation
> virtio-gpu (virtio-gpu-pci) requires virtio-pci, but para-virtualized
> device drivers do not need this.
>
> 3. No need for 3d/virgl.
> There are use-cases which either do not use OpenGL at all or will use
> custom virtualization solutions allowing sharing of a real GPU with a
> guest, e.g. the vGPU approach.
>
> 4. More freedom for buffer allocation.
> As of now virtio-gpu is only capable of allocating buffers via TTM, while
> there are use-cases where we need to have more freedom:
> for systems which do not provide IOMMU support, but have specific
> requirements for display buffers, it is possible to allocate such buffers
> at the backend side and share those with the frontend driver.
> For example, if the host domain is 1:1 mapped and has DRM/GPU hardware
> expecting physically contiguous memory (in PA, not IPA), this allows
> implementing zero-copying use-cases.
>
> 5. Zero-copying support at the backend side
> Having a native Xen implementation allows implementing zero-copying
> use-cases on the backend side with the help of a supporting DRM driver [3],
> which we hope to upstream as well (it is not yet ready in terms of code
> cleanup).
>
> 6. QEMU backends for virtio-gpu cannot be used as is, e.g. guest displays
> could be just a part of the final user experience. Thus, a QEMU backend
> must be modified to interact, for example, with the Automotive Grade Linux
> display manager. So, the QEMU part needs modifications.
> In our use-case we have a backend which supports multi-touch and guest
> display(s), running either as a Weston client (which is not supported
> by QEMU at the moment?) or a KMS/DRM client. This allows us to enable many
> more use-cases without the need to run QEMU.
>
> Thank you,
> Oleksandr Andrushchenko
>
> [1] https://wiki.xen.org/wiki/Xen_ARM_with_Virtualization_Extensions_whitepaper
>
> [2] https://elinux.org/R-Car/Virtualization
>
> [3] https://github.com/xen-troops/linux/blob/ces2018/drivers/gpu/drm/xen/xen_drm_zcopy_drv.c
>
(+ Stefano and Wei)
Hi,
On 02/27/2018 12:40 PM, Oleksandr Andrushchenko wrote:
> Please find some more clarifications on VirtIO use with Xen
> (I would like to thank Xen community for helping with this)
>
> 1. Possible security issues - VirtIO devices are PCI bus masters, thus
> allowing real device (running, for example, in untrusted driver domain)
> to get control over guest's memory by writing to its memory
>
> 2. VirtIO currently uses GFNs written into the shared ring, without Xen
> grants support. This will require generic grant-mapping/sharing layer
> to be added to VirtIO.
>
> 3. VirtIO requires QEMU PCI emulation for setting up a device. Xen PV
> (and PVH)
> domains don't use QEMU for platform emulation in order to reduce attack
> surface.
> (PVH is in the process of gaining PCI config space emulation though, but
> it is
> optional, not a requirement)
I don't think the support of PCI configuration space emulation for PVH
would help there. The plan is to emulate it in Xen; QEMU is still out of
the equation there.
>
> 4. Most of the PV drivers a guest uses at the moment are Xen PV drivers,
> e.g. net,
> block, console, so only virtio-gpu will require QEMU to run.
> Although this use case would work on x86 it will require additional changes
> to get this running on ARM, which is my target platform.
All guest types except x86 HVM avoid using QEMU for device emulation.
I would even be stronger here. Using QEMU would require a significant
amount of engineering to make it work and would increase the cost of
safety certification for automotive use cases. So IMHO, the Xen PV
display solution is the best.
The protocol was accepted and merged in Xen 4.9. This is the standard
way to have a para-virtualized display for guests on Xen. Having the
driver merged in Linux would help users get out-of-the-box display
support in guests.
Cheers,
--
Julien Grall
On 02/27/2018 01:52 AM, Oleksandr Andrushchenko wrote:
> On 02/27/2018 01:47 AM, Boris Ostrovsky wrote:
>> On 02/23/2018 10:35 AM, Oleksandr Andrushchenko wrote:
>>> On 02/23/2018 05:26 PM, Boris Ostrovsky wrote:
>>>> On 02/21/2018 03:03 AM, Oleksandr Andrushchenko wrote:
>>
>>>>> + ret = gem_alloc_pages_array(xen_obj, size);
>>>>> + if (ret < 0) {
>>>>> + gem_free_pages_array(xen_obj);
>>>>> + goto fail;
>>>>> + }
>>>>> +
>>>>> + ret = alloc_xenballooned_pages(xen_obj->num_pages,
>>>>> + xen_obj->pages);
>>>> Why are you allocating balloon pages?
>>> in this use-case we map pages provided by the backend
>>> (yes, I know this can be a problem from both security
>>> POV and that DomU can die holding pages of Dom0 forever:
>>> but still it is a configuration option, so user decides
>>> if her use-case needs this and takes responsibility for
>>> such a decision).
>>
>> Perhaps I am missing something here but when you say "I know this can be
>> a problem from both security POV ..." then there is something wrong with
>> your solution.
> well, in this scenario there are actually two concerns:
> 1. If DomU dies, the pages/grants from Dom0/DomD cannot be
> reclaimed back
> 2. A misbehaving guest may send too many requests to the
> backend, exhausting grant references and memory of Dom0/DomD
> (this is the only concern from a security POV). Please see [1]
>
> But, we are focusing on embedded use-cases,
> so those systems we use are not that "dynamic" with respect to 2).
> Namely: we have a fixed number of domains and their functionality
> is well known, so we can make rather precise assumptions about
> resource usage. This is why I try to warn about such a use-case and
> rely on the end user who understands the caveats
How will dom0/backend know whether or not to trust the frontend (and
thus whether or not to provide pages to it)? Will there be
something in xenstore, for example, to indicate such trusted frontends?
-boris
>
> I'll probably add a more precise description of this use-case
> clarifying what that security POV is, so there is no confusion
>
> Hope this explanation answers your questions
>> -boris
>>
>>> Please see description of the buffering modes in xen_drm_front.h
>>> specifically for backend allocated buffers:
>>>
>>> *******************************************************************************
>>>
>>> * 2. Buffers allocated by the backend
>>>
>>> *******************************************************************************
>>>
>>> *
>>> * This mode of operation is run-time configured via guest domain
>>> configuration
>>> * through XenStore entries.
>>> *
>>> * For systems which do not provide IOMMU support, but having specific
>>> * requirements for display buffers it is possible to allocate such
>>> buffers
>>> * at backend side and share those with the frontend.
>>> * For example, if host domain is 1:1 mapped and has DRM/GPU hardware
>>> expecting
>>> * physically contiguous memory, this allows implementing zero-copying
>>> * use-cases.
>>>
>>>> -boris
>>>>
>>>>> + if (ret < 0) {
>>>>> + DRM_ERROR("Cannot allocate %zu ballooned pages: %d\n",
>>>>> + xen_obj->num_pages, ret);
>>>>> + goto fail;
>>>>> + }
>>>>> +
>>>>> + return xen_obj;
>>>>> + }
>>>>> + /*
>>>>> + * need to allocate backing pages now, so we can share those
>>>>> + * with the backend
>>>>> + */
>>>>> + xen_obj->num_pages = DIV_ROUND_UP(size, PAGE_SIZE);
>>>>> + xen_obj->pages = drm_gem_get_pages(&xen_obj->base);
>>>>> + if (IS_ERR_OR_NULL(xen_obj->pages)) {
>>>>> + ret = PTR_ERR(xen_obj->pages);
>>>>> + xen_obj->pages = NULL;
>>>>> + goto fail;
>>>>> + }
>>>>> +
>>>>> + return xen_obj;
>>>>> +
>>>>> +fail:
>>>>> + DRM_ERROR("Failed to allocate buffer with size %zu\n", size);
>>>>> + return ERR_PTR(ret);
>>>>> +}
>>>>> +
>>>>>
> Thank you,
> Oleksandr
>
> [1]
> https://lists.xenproject.org/archives/html/xen-devel/2017-07/msg03100.html
>
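
The two allocation paths discussed above can be summarized in a
simplified sketch (a hypothetical fragment modeled on the quoted
xen_drm_front_gem.c code, not the exact upstream source; gem_create_obj
and the helper names are abbreviations):

```c
/*
 * Sketch of the two buffer allocation modes discussed above:
 *  - backend-allocated: reserve ballooned pages locally, so the
 *    backend's grants can later be mapped into them;
 *  - frontend-allocated: take shmem-backed pages from system
 *    memory and grant them to the backend.
 */
static struct xen_gem_object *gem_create(struct drm_device *dev,
					 size_t size, bool be_alloc)
{
	struct xen_gem_object *xen_obj = gem_create_obj(dev, size);
	int ret;

	xen_obj->num_pages = DIV_ROUND_UP(size, PAGE_SIZE);

	if (be_alloc) {
		/* Backend provides the memory: balloon pages act as
		 * placeholders for the to-be-mapped foreign frames. */
		ret = gem_alloc_pages_array(xen_obj, size);
		if (!ret)
			ret = alloc_xenballooned_pages(xen_obj->num_pages,
						       xen_obj->pages);
	} else {
		/* Frontend provides the memory. */
		xen_obj->pages = drm_gem_get_pages(&xen_obj->base);
		ret = IS_ERR(xen_obj->pages) ?
			PTR_ERR(xen_obj->pages) : 0;
	}

	return ret < 0 ? ERR_PTR(ret) : xen_obj;
}
```

The balloon pages in the first branch are exactly what the security
discussion above is about: they hold foreign (backend) memory that
cannot be reclaimed if the frontend domain dies while holding it.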
On 02/28/2018 09:46 PM, Boris Ostrovsky wrote:
> How will dom0/backend know whether or not to trust the frontend (and
> thus whether or not to provide pages to it)? Will there be
> something in xenstore, for example, to indicate such trusted frontends?
Exactly, there is a dedicated xl configuration option available [1] for
vdispl:
"be-alloc=BOOLEAN
Indicates if backend can be a buffer provider/allocator for this domain.
See display protocol for details."
Thus, one can enable this per domain for trusted frontends in the
corresponding xl configuration files.
[1] https://xenbits.xen.org/docs/4.10-testing/man/xl.cfg.5.html
Indicates if backend can be a buffer provider/allocator for this
domain. See display protocol for details.
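
For illustration, enabling this for a trusted guest could look like the
following xl configuration fragment (hypothetical: the backend domain
name and connector geometry are made up; see xl.cfg(5) for the
authoritative syntax):

```
# vdispl with backend-allocated buffers enabled for this (trusted) domain
vdispl = [ 'backend=DomD, be-alloc=1, connectors=id0:1920x1080' ]
```

Untrusted guests would simply omit be-alloc (or set it to 0), keeping
the frontend as the buffer allocator.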
Hi all,
just as a clarification, this patch series implements the frontend
driver for the "vdispl" protocol, which was reviewed, approved and
committed in xen.git back in April:
https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=xen/include/public/io/displif.h
As Xen maintainer, if a competing PV DRM protocol proposal comes up,
I'll try to steer it toward evolving the existing vdispl protocol, as we
like to have only one protocol per device class.
I am really looking forward to having this driver upstream in Linux.
Thanks Oleksandr!
Cheers,
Stefano
On Wed, 28 Feb 2018, Julien Grall wrote:
> (+ Stefano and Wei)
>
> Hi,
>
> On 02/27/2018 12:40 PM, Oleksandr Andrushchenko wrote:
> > Please find some more clarifications on VirtIO use with Xen
> > (I would like to thank Xen community for helping with this)
> >
> > 1. Possible security issues - VirtIO devices are PCI bus masters, thus
> > allowing real device (running, for example, in untrusted driver domain)
> > to get control over guest's memory by writing to its memory
> >
> > 2. VirtIO currently uses GFNs written into the shared ring, without Xen
> > grants support. This will require generic grant-mapping/sharing layer
> > to be added to VirtIO.
This is important. VirtIO doesn't allow for driver domains (running the
backend inside a virtual machine).
> > 3. VirtIO requires QEMU PCI emulation for setting up a device. Xen PV (and
> > PVH)
> > domains don't use QEMU for platform emulation in order to reduce attack
> > surface.
> > (PVH is in the process of gaining PCI config space emulation though, but it
> > is
> > optional, not a requirement)
> I don't think the support of PCI configuration space emulation for PVH would
> help there. The plan is to emulate in Xen, QEMU is still out of the equation
> there.
Right: there is no infrastructure to run IO emulation in userspace for
PV, PVH and ARM guests. We do have a QEMU instance running for PV, PVH
and ARM guests but only to implement PV backends, such as the vdispl
backend for example, which handle asynchronous requests from
frontends using the traditional grant-table maps/unmaps.
> > 4. Most of the PV drivers a guest uses at the moment are Xen PV drivers,
> > e.g. net,
> > block, console, so only virtio-gpu will require QEMU to run.
> > Although this use case would work on x86 it will require additional changes
> > to get this running on ARM, which is my target platform.
>
> All type of guests but x86 HVM are not using QEMU for device emulation.
>
> I would even be stronger here. Using QEMU would require a significant amount
> of engineering to make it work and increase the cost of safety certification
> for automotive use cases. So IHMO, the Xen PV display solution is the best.
>
> The protocol was accepted and merged in Xen 4.9. This is the standard way to
> have a para-virtualized display for guests on Xen. Having the driver merged in
> Linux would help users get an out-of-the-box display in guests.
That's right. I don't think it really makes sense to introduce virtio
support in Xen on ARM as it is today.
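The grant-table point made above (backends handle requests "using the traditional grant table maps/unmaps", so a backend can only touch pages the frontend explicitly shared, unlike raw GFNs written into a virtio ring) can be sketched in userspace C. This is purely illustrative; `grant_frame` and `backend_may_map` are made-up names, not Xen APIs:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative only: the guest "grants" specific frames to the backend.
 * With grants, the backend can map only frames the guest opted in to share;
 * with raw GFNs in a shared ring, any guest frame number would do. */
#define MAX_GRANTS 4

static unsigned long grant_table[MAX_GRANTS]; /* granted frame numbers */
static size_t nr_grants;

/* guest side: publish a frame, returning a grant reference (or -1) */
static int grant_frame(unsigned long gfn)
{
	if (nr_grants == MAX_GRANTS)
		return -1;
	grant_table[nr_grants++] = gfn;
	return (int)(nr_grants - 1);
}

/* backend side: a map attempt is honored only for granted frames */
static bool backend_may_map(unsigned long gfn)
{
	for (size_t i = 0; i < nr_grants; i++)
		if (grant_table[i] == gfn)
			return true;
	return false;
}
```

The point of the indirection is exactly the driver-domain argument in this thread: an untrusted backend is confined to the granted set instead of the guest's whole address space.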
Hi,
> 1. Possible security issues - VirtIO devices are PCI bus masters, thus
> allowing real device (running, for example, in untrusted driver domain)
> to get control over guest's memory by writing to its memory
>
> 2. VirtIO currently uses GFNs written into the shared ring, without Xen
> grants support. This will require generic grant-mapping/sharing layer
> to be added to VirtIO.
>
> 3. VirtIO requires QEMU PCI emulation for setting up a device. Xen PV
> (and PVH) domains don't use QEMU for platform emulation in order to
> reduce attack surface. (PVH is in the process of gaining PCI config
> space emulation though, but it is optional, not a requirement)
Well, that is wrong. virtio doesn't require pci. There are other
transports (mmio, ccw), and it should be possible to create a xen
specific transport which uses grant tables properly. Seems there even
was an attempt to implement that in 2011, see
https://wiki.xenproject.org/wiki/Virtio_On_Xen
> 4. Most of the PV drivers a guest uses at the moment are Xen PV drivers,
> e.g. net, block, console, so only virtio-gpu will require QEMU to run.
You are not forced to use qemu, you can certainly create an alternative
host side implementation (and still use the existing virtio guest
drivers).
Whether writing a xenbus implementation for both guest and host or
writing a virtio implementation for the host only is better -- dunno.
The virtio path obviously needs some infrastructure work for virtio
support in Xen, which may pay off long term. Your call.
cheers,
Gerd
On 03/01/2018 10:26 AM, Gerd Hoffmann wrote:
> Hi,
>
>> 1. Possible security issues - VirtIO devices are PCI bus masters, thus
>> allowing real device (running, for example, in untrusted driver domain)
>> to get control over guest's memory by writing to its memory
>>
>> 2. VirtIO currently uses GFNs written into the shared ring, without Xen
>> grants support. This will require generic grant-mapping/sharing layer
>> to be added to VirtIO.
>>
>> 3. VirtIO requires QEMU PCI emulation for setting up a device. Xen PV
>> (and PVH) domains don't use QEMU for platform emulation in order to
>> reduce attack surface. (PVH is in the process of gaining PCI config
>> space emulation though, but it is optional, not a requirement)
> Well, that is wrong. virtio doesn't require pci. There are other
> transports (mmio, ccw), and it should be possible to create a xen
> specific transport which uses grant tables properly.
You are correct, PCI is not a requirement here.
> Seems there even
> was an attempt to implement that in 2011, see
> https://wiki.xenproject.org/wiki/Virtio_On_Xen
Moreover, this was also raised at the Linux Plumbers
Conference in 2013 [1]. But still, there is no implementation
available.
>> 4. Most of the PV drivers a guest uses at the moment are Xen PV drivers,
>> e.g. net, block, console, so only virtio-gpu will require QEMU to run.
> You are not forced to use qemu, you can certainly create an alternative
> host side implementation (and still use the existing virtio guest
> drivers).
Yes, this is true. We also discussed virtio with the Xen
community; Stefano Stabellini says:
"the issues with virtio are (in order of seriousness):
1) virtio assumes that the backend is able to map any page in memory
In other words, it is not possible to do driver domains with virtio
2) virtio doesn't work with PV guests, only HVM (I think PVH would have
the same issue)
Virtio does synchronous IO emulation. QEMU for PV guests is not capable
of handling synchronous IO requests. The infrastructure is just not
there.
3) virtio performance is poor with Xen
There are multiple reasons for this, but the main one is that the
backends are in QEMU, running in Dom0. They become the bottleneck
quickly.
"
> Whether writing a xenbus implementation for both guest and host or
> writing a virtio implementation for the host only is better -- dunno.
> The virtio path obviously needs some infrastructure work for virtio
> support in Xen, which may pay off long term. Your call.
Well, I do agree that long term virtio might be a better choice.
But at the moment I still prefer to have a dedicated Xen PV DRM
implementation.
That being said, I would kindly ask the DRI community to review
the driver and consider it for inclusion.
> cheers,
> Gerd
>
Thank you,
Oleksandr Andrushchenko
[1] https://www.youtube.com/watch?v=FcVDHBQInxA
On Wed, Feb 21, 2018 at 10:03:42AM +0200, Oleksandr Andrushchenko wrote:
> From: Oleksandr Andrushchenko <[email protected]>
>
> Handle communication with the backend:
> - send requests and wait for the responses according
> to the displif protocol
> - serialize access to the communication channel
> - time-out used for backend communication is set to 3000 ms
> - manage display buffers shared with the backend
>
> Signed-off-by: Oleksandr Andrushchenko <[email protected]>
After the demidlayering it probably makes sense to merge this with the
overall kms/basic-drm-driver patch. Up to you really.
-Daniel
> ---
> drivers/gpu/drm/xen/xen_drm_front.c | 327 +++++++++++++++++++++++++++++++++++-
> drivers/gpu/drm/xen/xen_drm_front.h | 5 +
> 2 files changed, 327 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/gpu/drm/xen/xen_drm_front.c b/drivers/gpu/drm/xen/xen_drm_front.c
> index 8de88e359d5e..5ad546231d30 100644
> --- a/drivers/gpu/drm/xen/xen_drm_front.c
> +++ b/drivers/gpu/drm/xen/xen_drm_front.c
> @@ -31,12 +31,146 @@
> #include "xen_drm_front_evtchnl.h"
> #include "xen_drm_front_shbuf.h"
>
> +/* timeout in ms to wait for backend to respond */
> +#define VDRM_WAIT_BACK_MS 3000
> +
> +struct xen_drm_front_dbuf {
> + struct list_head list;
> + uint64_t dbuf_cookie;
> + uint64_t fb_cookie;
> + struct xen_drm_front_shbuf *shbuf;
> +};
> +
> +static int dbuf_add_to_list(struct xen_drm_front_info *front_info,
> + struct xen_drm_front_shbuf *shbuf, uint64_t dbuf_cookie)
> +{
> + struct xen_drm_front_dbuf *dbuf;
> +
> + dbuf = kzalloc(sizeof(*dbuf), GFP_KERNEL);
> + if (!dbuf)
> + return -ENOMEM;
> +
> + dbuf->dbuf_cookie = dbuf_cookie;
> + dbuf->shbuf = shbuf;
> + list_add(&dbuf->list, &front_info->dbuf_list);
> + return 0;
> +}
> +
> +static struct xen_drm_front_dbuf *dbuf_get(struct list_head *dbuf_list,
> + uint64_t dbuf_cookie)
> +{
> + struct xen_drm_front_dbuf *buf, *q;
> +
> + list_for_each_entry_safe(buf, q, dbuf_list, list)
> + if (buf->dbuf_cookie == dbuf_cookie)
> + return buf;
> +
> + return NULL;
> +}
> +
> +static void dbuf_flush_fb(struct list_head *dbuf_list, uint64_t fb_cookie)
> +{
> + struct xen_drm_front_dbuf *buf, *q;
> +
> + list_for_each_entry_safe(buf, q, dbuf_list, list)
> + if (buf->fb_cookie == fb_cookie)
> + xen_drm_front_shbuf_flush(buf->shbuf);
> +}
> +
> +static void dbuf_free(struct list_head *dbuf_list, uint64_t dbuf_cookie)
> +{
> + struct xen_drm_front_dbuf *buf, *q;
> +
> + list_for_each_entry_safe(buf, q, dbuf_list, list)
> + if (buf->dbuf_cookie == dbuf_cookie) {
> + list_del(&buf->list);
> + xen_drm_front_shbuf_unmap(buf->shbuf);
> + xen_drm_front_shbuf_free(buf->shbuf);
> + kfree(buf);
> + break;
> + }
> +}
> +
> +static void dbuf_free_all(struct list_head *dbuf_list)
> +{
> + struct xen_drm_front_dbuf *buf, *q;
> +
> + list_for_each_entry_safe(buf, q, dbuf_list, list) {
> + list_del(&buf->list);
> + xen_drm_front_shbuf_unmap(buf->shbuf);
> + xen_drm_front_shbuf_free(buf->shbuf);
> + kfree(buf);
> + }
> +}
> +
> +static struct xendispl_req *be_prepare_req(
> + struct xen_drm_front_evtchnl *evtchnl, uint8_t operation)
> +{
> + struct xendispl_req *req;
> +
> + req = RING_GET_REQUEST(&evtchnl->u.req.ring,
> + evtchnl->u.req.ring.req_prod_pvt);
> + req->operation = operation;
> + req->id = evtchnl->evt_next_id++;
> + evtchnl->evt_id = req->id;
> + return req;
> +}
> +
> +static int be_stream_do_io(struct xen_drm_front_evtchnl *evtchnl,
> + struct xendispl_req *req)
> +{
> + reinit_completion(&evtchnl->u.req.completion);
> + if (unlikely(evtchnl->state != EVTCHNL_STATE_CONNECTED))
> + return -EIO;
> +
> + xen_drm_front_evtchnl_flush(evtchnl);
> + return 0;
> +}
> +
> +static int be_stream_wait_io(struct xen_drm_front_evtchnl *evtchnl)
> +{
> + if (wait_for_completion_timeout(&evtchnl->u.req.completion,
> + msecs_to_jiffies(VDRM_WAIT_BACK_MS)) <= 0)
> + return -ETIMEDOUT;
> +
> + return evtchnl->u.req.resp_status;
> +}
> +
> static int be_mode_set(struct xen_drm_front_drm_pipeline *pipeline, uint32_t x,
> uint32_t y, uint32_t width, uint32_t height, uint32_t bpp,
> uint64_t fb_cookie)
>
> {
> - return 0;
> + struct xen_drm_front_evtchnl *evtchnl;
> + struct xen_drm_front_info *front_info;
> + struct xendispl_req *req;
> + unsigned long flags;
> + int ret;
> +
> + front_info = pipeline->drm_info->front_info;
> + evtchnl = &front_info->evt_pairs[pipeline->index].req;
> + if (unlikely(!evtchnl))
> + return -EIO;
> +
> + mutex_lock(&front_info->req_io_lock);
> +
> + spin_lock_irqsave(&front_info->io_lock, flags);
> + req = be_prepare_req(evtchnl, XENDISPL_OP_SET_CONFIG);
> + req->op.set_config.x = x;
> + req->op.set_config.y = y;
> + req->op.set_config.width = width;
> + req->op.set_config.height = height;
> + req->op.set_config.bpp = bpp;
> + req->op.set_config.fb_cookie = fb_cookie;
> +
> + ret = be_stream_do_io(evtchnl, req);
> + spin_unlock_irqrestore(&front_info->io_lock, flags);
> +
> + if (ret == 0)
> + ret = be_stream_wait_io(evtchnl);
> +
> + mutex_unlock(&front_info->req_io_lock);
> + return ret;
> }
>
> static int be_dbuf_create_int(struct xen_drm_front_info *front_info,
> @@ -44,7 +178,69 @@ static int be_dbuf_create_int(struct xen_drm_front_info *front_info,
> uint32_t bpp, uint64_t size, struct page **pages,
> struct sg_table *sgt)
> {
> + struct xen_drm_front_evtchnl *evtchnl;
> + struct xen_drm_front_shbuf *shbuf;
> + struct xendispl_req *req;
> + struct xen_drm_front_shbuf_cfg buf_cfg;
> + unsigned long flags;
> + int ret;
> +
> + evtchnl = &front_info->evt_pairs[GENERIC_OP_EVT_CHNL].req;
> + if (unlikely(!evtchnl))
> + return -EIO;
> +
> + memset(&buf_cfg, 0, sizeof(buf_cfg));
> + buf_cfg.xb_dev = front_info->xb_dev;
> + buf_cfg.pages = pages;
> + buf_cfg.size = size;
> + buf_cfg.sgt = sgt;
> + buf_cfg.be_alloc = front_info->cfg.be_alloc;
> +
> + shbuf = xen_drm_front_shbuf_alloc(&buf_cfg);
> + if (!shbuf)
> + return -ENOMEM;
> +
> + ret = dbuf_add_to_list(front_info, shbuf, dbuf_cookie);
> + if (ret < 0) {
> + xen_drm_front_shbuf_free(shbuf);
> + return ret;
> + }
> +
> + mutex_lock(&front_info->req_io_lock);
> +
> + spin_lock_irqsave(&front_info->io_lock, flags);
> + req = be_prepare_req(evtchnl, XENDISPL_OP_DBUF_CREATE);
> + req->op.dbuf_create.gref_directory =
> + xen_drm_front_shbuf_get_dir_start(shbuf);
> + req->op.dbuf_create.buffer_sz = size;
> + req->op.dbuf_create.dbuf_cookie = dbuf_cookie;
> + req->op.dbuf_create.width = width;
> + req->op.dbuf_create.height = height;
> + req->op.dbuf_create.bpp = bpp;
> + if (buf_cfg.be_alloc)
> + req->op.dbuf_create.flags |= XENDISPL_DBUF_FLG_REQ_ALLOC;
> +
> + ret = be_stream_do_io(evtchnl, req);
> + spin_unlock_irqrestore(&front_info->io_lock, flags);
> +
> + if (ret < 0)
> + goto fail;
> +
> + ret = be_stream_wait_io(evtchnl);
> + if (ret < 0)
> + goto fail;
> +
> + ret = xen_drm_front_shbuf_map(shbuf);
> + if (ret < 0)
> + goto fail;
> +
> + mutex_unlock(&front_info->req_io_lock);
> return 0;
> +
> +fail:
> + mutex_unlock(&front_info->req_io_lock);
> + dbuf_free(&front_info->dbuf_list, dbuf_cookie);
> + return ret;
> }
>
> static int be_dbuf_create_from_sgt(struct xen_drm_front_info *front_info,
> @@ -66,26 +262,144 @@ static int be_dbuf_create_from_pages(struct xen_drm_front_info *front_info,
> static int be_dbuf_destroy(struct xen_drm_front_info *front_info,
> uint64_t dbuf_cookie)
> {
> - return 0;
> + struct xen_drm_front_evtchnl *evtchnl;
> + struct xendispl_req *req;
> + unsigned long flags;
> + bool be_alloc;
> + int ret;
> +
> + evtchnl = &front_info->evt_pairs[GENERIC_OP_EVT_CHNL].req;
> + if (unlikely(!evtchnl))
> + return -EIO;
> +
> + be_alloc = front_info->cfg.be_alloc;
> +
> + /*
> + * for the backend allocated buffer release references now, so backend
> + * can free the buffer
> + */
> + if (be_alloc)
> + dbuf_free(&front_info->dbuf_list, dbuf_cookie);
> +
> + mutex_lock(&front_info->req_io_lock);
> +
> + spin_lock_irqsave(&front_info->io_lock, flags);
> + req = be_prepare_req(evtchnl, XENDISPL_OP_DBUF_DESTROY);
> + req->op.dbuf_destroy.dbuf_cookie = dbuf_cookie;
> +
> + ret = be_stream_do_io(evtchnl, req);
> + spin_unlock_irqrestore(&front_info->io_lock, flags);
> +
> + if (ret == 0)
> + ret = be_stream_wait_io(evtchnl);
> +
> + /*
> + * do this regardless of communication status with the backend:
> + * if we cannot remove remote resources remove what we can locally
> + */
> + if (!be_alloc)
> + dbuf_free(&front_info->dbuf_list, dbuf_cookie);
> +
> + mutex_unlock(&front_info->req_io_lock);
> + return ret;
> }
>
> static int be_fb_attach(struct xen_drm_front_info *front_info,
> uint64_t dbuf_cookie, uint64_t fb_cookie, uint32_t width,
> uint32_t height, uint32_t pixel_format)
> {
> - return 0;
> + struct xen_drm_front_evtchnl *evtchnl;
> + struct xen_drm_front_dbuf *buf;
> + struct xendispl_req *req;
> + unsigned long flags;
> + int ret;
> +
> + evtchnl = &front_info->evt_pairs[GENERIC_OP_EVT_CHNL].req;
> + if (unlikely(!evtchnl))
> + return -EIO;
> +
> + buf = dbuf_get(&front_info->dbuf_list, dbuf_cookie);
> + if (!buf)
> + return -EINVAL;
> +
> + buf->fb_cookie = fb_cookie;
> +
> + mutex_lock(&front_info->req_io_lock);
> +
> + spin_lock_irqsave(&front_info->io_lock, flags);
> + req = be_prepare_req(evtchnl, XENDISPL_OP_FB_ATTACH);
> + req->op.fb_attach.dbuf_cookie = dbuf_cookie;
> + req->op.fb_attach.fb_cookie = fb_cookie;
> + req->op.fb_attach.width = width;
> + req->op.fb_attach.height = height;
> + req->op.fb_attach.pixel_format = pixel_format;
> +
> + ret = be_stream_do_io(evtchnl, req);
> + spin_unlock_irqrestore(&front_info->io_lock, flags);
> +
> + if (ret == 0)
> + ret = be_stream_wait_io(evtchnl);
> +
> + mutex_unlock(&front_info->req_io_lock);
> + return ret;
> }
>
> static int be_fb_detach(struct xen_drm_front_info *front_info,
> uint64_t fb_cookie)
> {
> - return 0;
> + struct xen_drm_front_evtchnl *evtchnl;
> + struct xendispl_req *req;
> + unsigned long flags;
> + int ret;
> +
> + evtchnl = &front_info->evt_pairs[GENERIC_OP_EVT_CHNL].req;
> + if (unlikely(!evtchnl))
> + return -EIO;
> +
> + mutex_lock(&front_info->req_io_lock);
> +
> + spin_lock_irqsave(&front_info->io_lock, flags);
> + req = be_prepare_req(evtchnl, XENDISPL_OP_FB_DETACH);
> + req->op.fb_detach.fb_cookie = fb_cookie;
> +
> + ret = be_stream_do_io(evtchnl, req);
> + spin_unlock_irqrestore(&front_info->io_lock, flags);
> +
> + if (ret == 0)
> + ret = be_stream_wait_io(evtchnl);
> +
> + mutex_unlock(&front_info->req_io_lock);
> + return ret;
> }
>
> static int be_page_flip(struct xen_drm_front_info *front_info, int conn_idx,
> uint64_t fb_cookie)
> {
> - return 0;
> + struct xen_drm_front_evtchnl *evtchnl;
> + struct xendispl_req *req;
> + unsigned long flags;
> + int ret;
> +
> + if (unlikely(conn_idx >= front_info->num_evt_pairs))
> + return -EINVAL;
> +
> + dbuf_flush_fb(&front_info->dbuf_list, fb_cookie);
> + evtchnl = &front_info->evt_pairs[conn_idx].req;
> +
> + mutex_lock(&front_info->req_io_lock);
> +
> + spin_lock_irqsave(&front_info->io_lock, flags);
> + req = be_prepare_req(evtchnl, XENDISPL_OP_PG_FLIP);
> + req->op.pg_flip.fb_cookie = fb_cookie;
> +
> + ret = be_stream_do_io(evtchnl, req);
> + spin_unlock_irqrestore(&front_info->io_lock, flags);
> +
> + if (ret == 0)
> + ret = be_stream_wait_io(evtchnl);
> +
> + mutex_unlock(&front_info->req_io_lock);
> + return ret;
> }
>
> static void xen_drm_drv_unload(struct xen_drm_front_info *front_info)
> @@ -183,6 +497,7 @@ static void xen_drv_remove_internal(struct xen_drm_front_info *front_info)
> {
> xen_drm_drv_deinit(front_info);
> xen_drm_front_evtchnl_free_all(front_info);
> + dbuf_free_all(&front_info->dbuf_list);
> }
>
> static int backend_on_initwait(struct xen_drm_front_info *front_info)
> @@ -310,6 +625,8 @@ static int xen_drv_probe(struct xenbus_device *xb_dev,
>
> front_info->xb_dev = xb_dev;
> spin_lock_init(&front_info->io_lock);
> + mutex_init(&front_info->req_io_lock);
> + INIT_LIST_HEAD(&front_info->dbuf_list);
> front_info->drm_pdrv_registered = false;
> dev_set_drvdata(&xb_dev->dev, front_info);
> return xenbus_switch_state(xb_dev, XenbusStateInitialising);
> diff --git a/drivers/gpu/drm/xen/xen_drm_front.h b/drivers/gpu/drm/xen/xen_drm_front.h
> index c6f52c892434..db32d00145d1 100644
> --- a/drivers/gpu/drm/xen/xen_drm_front.h
> +++ b/drivers/gpu/drm/xen/xen_drm_front.h
> @@ -137,6 +137,8 @@ struct xen_drm_front_info {
> struct xenbus_device *xb_dev;
> /* to protect data between backend IO code and interrupt handler */
> spinlock_t io_lock;
> + /* serializer for backend IO: request/response */
> + struct mutex req_io_lock;
> bool drm_pdrv_registered;
> /* virtual DRM platform device */
> struct platform_device *drm_pdev;
> @@ -144,6 +146,9 @@ struct xen_drm_front_info {
> int num_evt_pairs;
> struct xen_drm_front_evtchnl_pair *evt_pairs;
> struct xen_drm_front_cfg cfg;
> +
> + /* display buffers */
> + struct list_head dbuf_list;
> };
>
> #endif /* __XEN_DRM_FRONT_H_ */
> --
> 2.7.4
>
> _______________________________________________
> dri-devel mailing list
> [email protected]
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
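The request/response flow in the patch quoted above (post a request under the ring spinlock, then sleep on a completion with the 3000 ms timeout, returning the backend's response status) can be modeled in a few lines of plain C. This is a hedged single-threaded sketch: `evtchnl_model` and friends are stand-ins mirroring the driver's names, not its actual types:

```c
#include <assert.h>
#include <stdbool.h>

/* local error codes mirroring the kernel's -EIO / -ETIMEDOUT returns */
enum { EIO = 5, ETIMEDOUT = 110 };

struct evtchnl_model {
	bool connected;   /* EVTCHNL_STATE_CONNECTED analogue     */
	bool completed;   /* struct completion analogue           */
	int resp_status;  /* filled in by the "interrupt handler" */
};

/* be_stream_do_io() analogue: arm the completion, refuse if disconnected,
 * otherwise push the request to the ring and notify the backend */
static int model_do_io(struct evtchnl_model *e)
{
	e->completed = false;          /* reinit_completion() */
	if (!e->connected)
		return -EIO;
	/* the real driver flushes the ring and kicks the event channel here */
	return 0;
}

/* the response path run from the event channel interrupt handler */
static void model_complete(struct evtchnl_model *e, int status)
{
	e->resp_status = status;
	e->completed = true;
}

/* be_stream_wait_io() analogue: the 3000 ms wait either times out
 * or yields the status the backend wrote into the response */
static int model_wait_io(struct evtchnl_model *e)
{
	if (!e->completed)             /* wait_for_completion_timeout() */
		return -ETIMEDOUT;
	return e->resp_status;
}
```

The mutex in the patch (`req_io_lock`) exists because this do-then-wait pair must not interleave between callers: only one request may be outstanding per channel, while the spinlock only protects the ring shared with the interrupt handler.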
On 03/05/2018 11:25 AM, Daniel Vetter wrote:
> On Wed, Feb 21, 2018 at 10:03:42AM +0200, Oleksandr Andrushchenko wrote:
>> From: Oleksandr Andrushchenko <[email protected]>
>>
>> Handle communication with the backend:
>> - send requests and wait for the responses according
>> to the displif protocol
>> - serialize access to the communication channel
>> - time-out used for backend communication is set to 3000 ms
>> - manage display buffers shared with the backend
>>
>> Signed-off-by: Oleksandr Andrushchenko <[email protected]>
> After the demidlayering it probably makes sense to merge this with the
> overall kms/basic-drm-driver patch. Up to you really.
The reason for such partitioning, here and before, was to keep the
Xen/DRM parts separate, so those are easier to review by the
Xen/DRM communities. So, I would prefer to keep it as it is.
> -Daniel
>> ---
>> drivers/gpu/drm/xen/xen_drm_front.c | 327 +++++++++++++++++++++++++++++++++++-
>> drivers/gpu/drm/xen/xen_drm_front.h | 5 +
>> 2 files changed, 327 insertions(+), 5 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xen/xen_drm_front.c b/drivers/gpu/drm/xen/xen_drm_front.c
>> index 8de88e359d5e..5ad546231d30 100644
>> --- a/drivers/gpu/drm/xen/xen_drm_front.c
>> +++ b/drivers/gpu/drm/xen/xen_drm_front.c
>> @@ -31,12 +31,146 @@
>> #include "xen_drm_front_evtchnl.h"
>> #include "xen_drm_front_shbuf.h"
>>
>> +/* timeout in ms to wait for backend to respond */
>> +#define VDRM_WAIT_BACK_MS 3000
>> +
>> +struct xen_drm_front_dbuf {
>> + struct list_head list;
>> + uint64_t dbuf_cookie;
>> + uint64_t fb_cookie;
>> + struct xen_drm_front_shbuf *shbuf;
>> +};
>> +
>> +static int dbuf_add_to_list(struct xen_drm_front_info *front_info,
>> + struct xen_drm_front_shbuf *shbuf, uint64_t dbuf_cookie)
>> +{
>> + struct xen_drm_front_dbuf *dbuf;
>> +
>> + dbuf = kzalloc(sizeof(*dbuf), GFP_KERNEL);
>> + if (!dbuf)
>> + return -ENOMEM;
>> +
>> + dbuf->dbuf_cookie = dbuf_cookie;
>> + dbuf->shbuf = shbuf;
>> + list_add(&dbuf->list, &front_info->dbuf_list);
>> + return 0;
>> +}
>> +
>> +static struct xen_drm_front_dbuf *dbuf_get(struct list_head *dbuf_list,
>> + uint64_t dbuf_cookie)
>> +{
>> + struct xen_drm_front_dbuf *buf, *q;
>> +
>> + list_for_each_entry_safe(buf, q, dbuf_list, list)
>> + if (buf->dbuf_cookie == dbuf_cookie)
>> + return buf;
>> +
>> + return NULL;
>> +}
>> +
>> +static void dbuf_flush_fb(struct list_head *dbuf_list, uint64_t fb_cookie)
>> +{
>> + struct xen_drm_front_dbuf *buf, *q;
>> +
>> + list_for_each_entry_safe(buf, q, dbuf_list, list)
>> + if (buf->fb_cookie == fb_cookie)
>> + xen_drm_front_shbuf_flush(buf->shbuf);
>> +}
>> +
>> +static void dbuf_free(struct list_head *dbuf_list, uint64_t dbuf_cookie)
>> +{
>> + struct xen_drm_front_dbuf *buf, *q;
>> +
>> + list_for_each_entry_safe(buf, q, dbuf_list, list)
>> + if (buf->dbuf_cookie == dbuf_cookie) {
>> + list_del(&buf->list);
>> + xen_drm_front_shbuf_unmap(buf->shbuf);
>> + xen_drm_front_shbuf_free(buf->shbuf);
>> + kfree(buf);
>> + break;
>> + }
>> +}
>> +
>> +static void dbuf_free_all(struct list_head *dbuf_list)
>> +{
>> + struct xen_drm_front_dbuf *buf, *q;
>> +
>> + list_for_each_entry_safe(buf, q, dbuf_list, list) {
>> + list_del(&buf->list);
>> + xen_drm_front_shbuf_unmap(buf->shbuf);
>> + xen_drm_front_shbuf_free(buf->shbuf);
>> + kfree(buf);
>> + }
>> +}
>> +
>> +static struct xendispl_req *be_prepare_req(
>> + struct xen_drm_front_evtchnl *evtchnl, uint8_t operation)
>> +{
>> + struct xendispl_req *req;
>> +
>> + req = RING_GET_REQUEST(&evtchnl->u.req.ring,
>> + evtchnl->u.req.ring.req_prod_pvt);
>> + req->operation = operation;
>> + req->id = evtchnl->evt_next_id++;
>> + evtchnl->evt_id = req->id;
>> + return req;
>> +}
>> +
>> +static int be_stream_do_io(struct xen_drm_front_evtchnl *evtchnl,
>> + struct xendispl_req *req)
>> +{
>> + reinit_completion(&evtchnl->u.req.completion);
>> + if (unlikely(evtchnl->state != EVTCHNL_STATE_CONNECTED))
>> + return -EIO;
>> +
>> + xen_drm_front_evtchnl_flush(evtchnl);
>> + return 0;
>> +}
>> +
>> +static int be_stream_wait_io(struct xen_drm_front_evtchnl *evtchnl)
>> +{
>> + if (wait_for_completion_timeout(&evtchnl->u.req.completion,
>> + msecs_to_jiffies(VDRM_WAIT_BACK_MS)) <= 0)
>> + return -ETIMEDOUT;
>> +
>> + return evtchnl->u.req.resp_status;
>> +}
>> +
>> static int be_mode_set(struct xen_drm_front_drm_pipeline *pipeline, uint32_t x,
>> uint32_t y, uint32_t width, uint32_t height, uint32_t bpp,
>> uint64_t fb_cookie)
>>
>> {
>> - return 0;
>> + struct xen_drm_front_evtchnl *evtchnl;
>> + struct xen_drm_front_info *front_info;
>> + struct xendispl_req *req;
>> + unsigned long flags;
>> + int ret;
>> +
>> + front_info = pipeline->drm_info->front_info;
>> + evtchnl = &front_info->evt_pairs[pipeline->index].req;
>> + if (unlikely(!evtchnl))
>> + return -EIO;
>> +
>> + mutex_lock(&front_info->req_io_lock);
>> +
>> + spin_lock_irqsave(&front_info->io_lock, flags);
>> + req = be_prepare_req(evtchnl, XENDISPL_OP_SET_CONFIG);
>> + req->op.set_config.x = x;
>> + req->op.set_config.y = y;
>> + req->op.set_config.width = width;
>> + req->op.set_config.height = height;
>> + req->op.set_config.bpp = bpp;
>> + req->op.set_config.fb_cookie = fb_cookie;
>> +
>> + ret = be_stream_do_io(evtchnl, req);
>> + spin_unlock_irqrestore(&front_info->io_lock, flags);
>> +
>> + if (ret == 0)
>> + ret = be_stream_wait_io(evtchnl);
>> +
>> + mutex_unlock(&front_info->req_io_lock);
>> + return ret;
>> }
>>
>> static int be_dbuf_create_int(struct xen_drm_front_info *front_info,
>> @@ -44,7 +178,69 @@ static int be_dbuf_create_int(struct xen_drm_front_info *front_info,
>> uint32_t bpp, uint64_t size, struct page **pages,
>> struct sg_table *sgt)
>> {
>> + struct xen_drm_front_evtchnl *evtchnl;
>> + struct xen_drm_front_shbuf *shbuf;
>> + struct xendispl_req *req;
>> + struct xen_drm_front_shbuf_cfg buf_cfg;
>> + unsigned long flags;
>> + int ret;
>> +
>> + evtchnl = &front_info->evt_pairs[GENERIC_OP_EVT_CHNL].req;
>> + if (unlikely(!evtchnl))
>> + return -EIO;
>> +
>> + memset(&buf_cfg, 0, sizeof(buf_cfg));
>> + buf_cfg.xb_dev = front_info->xb_dev;
>> + buf_cfg.pages = pages;
>> + buf_cfg.size = size;
>> + buf_cfg.sgt = sgt;
>> + buf_cfg.be_alloc = front_info->cfg.be_alloc;
>> +
>> + shbuf = xen_drm_front_shbuf_alloc(&buf_cfg);
>> + if (!shbuf)
>> + return -ENOMEM;
>> +
>> + ret = dbuf_add_to_list(front_info, shbuf, dbuf_cookie);
>> + if (ret < 0) {
>> + xen_drm_front_shbuf_free(shbuf);
>> + return ret;
>> + }
>> +
>> + mutex_lock(&front_info->req_io_lock);
>> +
>> + spin_lock_irqsave(&front_info->io_lock, flags);
>> + req = be_prepare_req(evtchnl, XENDISPL_OP_DBUF_CREATE);
>> + req->op.dbuf_create.gref_directory =
>> + xen_drm_front_shbuf_get_dir_start(shbuf);
>> + req->op.dbuf_create.buffer_sz = size;
>> + req->op.dbuf_create.dbuf_cookie = dbuf_cookie;
>> + req->op.dbuf_create.width = width;
>> + req->op.dbuf_create.height = height;
>> + req->op.dbuf_create.bpp = bpp;
>> + if (buf_cfg.be_alloc)
>> + req->op.dbuf_create.flags |= XENDISPL_DBUF_FLG_REQ_ALLOC;
>> +
>> + ret = be_stream_do_io(evtchnl, req);
>> + spin_unlock_irqrestore(&front_info->io_lock, flags);
>> +
>> + if (ret < 0)
>> + goto fail;
>> +
>> + ret = be_stream_wait_io(evtchnl);
>> + if (ret < 0)
>> + goto fail;
>> +
>> + ret = xen_drm_front_shbuf_map(shbuf);
>> + if (ret < 0)
>> + goto fail;
>> +
>> + mutex_unlock(&front_info->req_io_lock);
>> return 0;
>> +
>> +fail:
>> + mutex_unlock(&front_info->req_io_lock);
>> + dbuf_free(&front_info->dbuf_list, dbuf_cookie);
>> + return ret;
>> }
>>
>> static int be_dbuf_create_from_sgt(struct xen_drm_front_info *front_info,
>> @@ -66,26 +262,144 @@ static int be_dbuf_create_from_pages(struct xen_drm_front_info *front_info,
>> static int be_dbuf_destroy(struct xen_drm_front_info *front_info,
>> uint64_t dbuf_cookie)
>> {
>> - return 0;
>> + struct xen_drm_front_evtchnl *evtchnl;
>> + struct xendispl_req *req;
>> + unsigned long flags;
>> + bool be_alloc;
>> + int ret;
>> +
>> + evtchnl = &front_info->evt_pairs[GENERIC_OP_EVT_CHNL].req;
>> + if (unlikely(!evtchnl))
>> + return -EIO;
>> +
>> + be_alloc = front_info->cfg.be_alloc;
>> +
>> + /*
>> + * for the backend allocated buffer release references now, so backend
>> + * can free the buffer
>> + */
>> + if (be_alloc)
>> + dbuf_free(&front_info->dbuf_list, dbuf_cookie);
>> +
>> + mutex_lock(&front_info->req_io_lock);
>> +
>> + spin_lock_irqsave(&front_info->io_lock, flags);
>> + req = be_prepare_req(evtchnl, XENDISPL_OP_DBUF_DESTROY);
>> + req->op.dbuf_destroy.dbuf_cookie = dbuf_cookie;
>> +
>> + ret = be_stream_do_io(evtchnl, req);
>> + spin_unlock_irqrestore(&front_info->io_lock, flags);
>> +
>> + if (ret == 0)
>> + ret = be_stream_wait_io(evtchnl);
>> +
>> + /*
>> + * do this regardless of communication status with the backend:
>> + * if we cannot remove remote resources remove what we can locally
>> + */
>> + if (!be_alloc)
>> + dbuf_free(&front_info->dbuf_list, dbuf_cookie);
>> +
>> + mutex_unlock(&front_info->req_io_lock);
>> + return ret;
>> }
>>
>> static int be_fb_attach(struct xen_drm_front_info *front_info,
>> uint64_t dbuf_cookie, uint64_t fb_cookie, uint32_t width,
>> uint32_t height, uint32_t pixel_format)
>> {
>> - return 0;
>> + struct xen_drm_front_evtchnl *evtchnl;
>> + struct xen_drm_front_dbuf *buf;
>> + struct xendispl_req *req;
>> + unsigned long flags;
>> + int ret;
>> +
>> + evtchnl = &front_info->evt_pairs[GENERIC_OP_EVT_CHNL].req;
>> + if (unlikely(!evtchnl))
>> + return -EIO;
>> +
>> + buf = dbuf_get(&front_info->dbuf_list, dbuf_cookie);
>> + if (!buf)
>> + return -EINVAL;
>> +
>> + buf->fb_cookie = fb_cookie;
>> +
>> + mutex_lock(&front_info->req_io_lock);
>> +
>> + spin_lock_irqsave(&front_info->io_lock, flags);
>> + req = be_prepare_req(evtchnl, XENDISPL_OP_FB_ATTACH);
>> + req->op.fb_attach.dbuf_cookie = dbuf_cookie;
>> + req->op.fb_attach.fb_cookie = fb_cookie;
>> + req->op.fb_attach.width = width;
>> + req->op.fb_attach.height = height;
>> + req->op.fb_attach.pixel_format = pixel_format;
>> +
>> + ret = be_stream_do_io(evtchnl, req);
>> + spin_unlock_irqrestore(&front_info->io_lock, flags);
>> +
>> + if (ret == 0)
>> + ret = be_stream_wait_io(evtchnl);
>> +
>> + mutex_unlock(&front_info->req_io_lock);
>> + return ret;
>> }
>>
>> static int be_fb_detach(struct xen_drm_front_info *front_info,
>> uint64_t fb_cookie)
>> {
>> - return 0;
>> + struct xen_drm_front_evtchnl *evtchnl;
>> + struct xendispl_req *req;
>> + unsigned long flags;
>> + int ret;
>> +
>> + evtchnl = &front_info->evt_pairs[GENERIC_OP_EVT_CHNL].req;
>> + if (unlikely(!evtchnl))
>> + return -EIO;
>> +
>> + mutex_lock(&front_info->req_io_lock);
>> +
>> + spin_lock_irqsave(&front_info->io_lock, flags);
>> + req = be_prepare_req(evtchnl, XENDISPL_OP_FB_DETACH);
>> + req->op.fb_detach.fb_cookie = fb_cookie;
>> +
>> + ret = be_stream_do_io(evtchnl, req);
>> + spin_unlock_irqrestore(&front_info->io_lock, flags);
>> +
>> + if (ret == 0)
>> + ret = be_stream_wait_io(evtchnl);
>> +
>> + mutex_unlock(&front_info->req_io_lock);
>> + return ret;
>> }
>>
>> static int be_page_flip(struct xen_drm_front_info *front_info, int conn_idx,
>> uint64_t fb_cookie)
>> {
>> - return 0;
>> + struct xen_drm_front_evtchnl *evtchnl;
>> + struct xendispl_req *req;
>> + unsigned long flags;
>> + int ret;
>> +
>> + if (unlikely(conn_idx >= front_info->num_evt_pairs))
>> + return -EINVAL;
>> +
>> + dbuf_flush_fb(&front_info->dbuf_list, fb_cookie);
>> + evtchnl = &front_info->evt_pairs[conn_idx].req;
>> +
>> + mutex_lock(&front_info->req_io_lock);
>> +
>> + spin_lock_irqsave(&front_info->io_lock, flags);
>> + req = be_prepare_req(evtchnl, XENDISPL_OP_PG_FLIP);
>> + req->op.pg_flip.fb_cookie = fb_cookie;
>> +
>> + ret = be_stream_do_io(evtchnl, req);
>> + spin_unlock_irqrestore(&front_info->io_lock, flags);
>> +
>> + if (ret == 0)
>> + ret = be_stream_wait_io(evtchnl);
>> +
>> + mutex_unlock(&front_info->req_io_lock);
>> + return ret;
>> }
>>
>> static void xen_drm_drv_unload(struct xen_drm_front_info *front_info)
>> @@ -183,6 +497,7 @@ static void xen_drv_remove_internal(struct xen_drm_front_info *front_info)
>> {
>> xen_drm_drv_deinit(front_info);
>> xen_drm_front_evtchnl_free_all(front_info);
>> + dbuf_free_all(&front_info->dbuf_list);
>> }
>>
>> static int backend_on_initwait(struct xen_drm_front_info *front_info)
>> @@ -310,6 +625,8 @@ static int xen_drv_probe(struct xenbus_device *xb_dev,
>>
>> front_info->xb_dev = xb_dev;
>> spin_lock_init(&front_info->io_lock);
>> + mutex_init(&front_info->req_io_lock);
>> + INIT_LIST_HEAD(&front_info->dbuf_list);
>> front_info->drm_pdrv_registered = false;
>> dev_set_drvdata(&xb_dev->dev, front_info);
>> return xenbus_switch_state(xb_dev, XenbusStateInitialising);
>> diff --git a/drivers/gpu/drm/xen/xen_drm_front.h b/drivers/gpu/drm/xen/xen_drm_front.h
>> index c6f52c892434..db32d00145d1 100644
>> --- a/drivers/gpu/drm/xen/xen_drm_front.h
>> +++ b/drivers/gpu/drm/xen/xen_drm_front.h
>> @@ -137,6 +137,8 @@ struct xen_drm_front_info {
>> struct xenbus_device *xb_dev;
>> /* to protect data between backend IO code and interrupt handler */
>> spinlock_t io_lock;
>> + /* serializer for backend IO: request/response */
>> + struct mutex req_io_lock;
>> bool drm_pdrv_registered;
>> /* virtual DRM platform device */
>> struct platform_device *drm_pdev;
>> @@ -144,6 +146,9 @@ struct xen_drm_front_info {
>> int num_evt_pairs;
>> struct xen_drm_front_evtchnl_pair *evt_pairs;
>> struct xen_drm_front_cfg cfg;
>> +
>> + /* display buffers */
>> + struct list_head dbuf_list;
>> };
>>
>> #endif /* __XEN_DRM_FRONT_H_ */
>> --
>> 2.7.4
>>
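The buffer bookkeeping in the patch discussed above (`dbuf_add_to_list` / `dbuf_get` / `dbuf_free`) boils down to a list of per-buffer state keyed by a 64-bit cookie that both frontend and backend use to name the same display buffer. A userspace approximation, with illustrative names standing in for the `xen_drm_front_dbuf` machinery:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Sketch of cookie-keyed buffer tracking: the DRM side refers to buffers
 * by opaque 64-bit cookies, and the driver translates a cookie back to
 * its private per-buffer state via a linear list walk. */
struct dbuf {
	struct dbuf *next;
	uint64_t dbuf_cookie; /* names the display buffer            */
	uint64_t fb_cookie;   /* framebuffer later attached to it    */
};

static int dbuf_add(struct dbuf **list, uint64_t cookie)
{
	struct dbuf *d = calloc(1, sizeof(*d));

	if (!d)
		return -1; /* the driver returns -ENOMEM here */
	d->dbuf_cookie = cookie;
	d->next = *list;
	*list = d;
	return 0;
}

static struct dbuf *dbuf_get(struct dbuf *list, uint64_t cookie)
{
	for (; list; list = list->next)
		if (list->dbuf_cookie == cookie)
			return list;
	return NULL;
}

static void dbuf_free(struct dbuf **list, uint64_t cookie)
{
	for (struct dbuf **p = list; *p; p = &(*p)->next)
		if ((*p)->dbuf_cookie == cookie) {
			struct dbuf *victim = *p;

			*p = victim->next;
			/* the real driver also unmaps and frees the shbuf */
			free(victim);
			return;
		}
}
```

Linear search is fine here because a guest only ever has a handful of display buffers in flight; the interesting part is the unlink-before-free ordering, which the patch mirrors when releasing backend-allocated buffers before telling the backend to destroy them.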
On Wed, Feb 21, 2018 at 10:03:39AM +0200, Oleksandr Andrushchenko wrote:
> From: Oleksandr Andrushchenko <[email protected]>
>
> Implement essential initialization of the display driver:
> - introduce required data structures
> - handle DRM/KMS driver registration
> - perform basic DRM driver initialization
> - register driver on backend connection
> - remove driver on backend disconnect
> - introduce essential callbacks required by DRM/KMS core
> - introduce essential callbacks required for frontend operations
>
> Signed-off-by: Oleksandr Andrushchenko <[email protected]>
> ---
> drivers/gpu/drm/xen/Makefile | 1 +
> drivers/gpu/drm/xen/xen_drm_front.c | 169 ++++++++++++++++++++++++-
> drivers/gpu/drm/xen/xen_drm_front.h | 24 ++++
> drivers/gpu/drm/xen/xen_drm_front_drv.c | 211 ++++++++++++++++++++++++++++++++
> drivers/gpu/drm/xen/xen_drm_front_drv.h | 60 +++++++++
> 5 files changed, 462 insertions(+), 3 deletions(-)
> create mode 100644 drivers/gpu/drm/xen/xen_drm_front_drv.c
> create mode 100644 drivers/gpu/drm/xen/xen_drm_front_drv.h
>
> diff --git a/drivers/gpu/drm/xen/Makefile b/drivers/gpu/drm/xen/Makefile
> index f1823cb596c5..d3068202590f 100644
> --- a/drivers/gpu/drm/xen/Makefile
> +++ b/drivers/gpu/drm/xen/Makefile
> @@ -1,6 +1,7 @@
> # SPDX-License-Identifier: GPL-2.0
>
> drm_xen_front-objs := xen_drm_front.o \
> + xen_drm_front_drv.o \
> xen_drm_front_evtchnl.o \
> xen_drm_front_shbuf.o \
> xen_drm_front_cfg.o
> diff --git a/drivers/gpu/drm/xen/xen_drm_front.c b/drivers/gpu/drm/xen/xen_drm_front.c
> index 0d94ff272da3..8de88e359d5e 100644
> --- a/drivers/gpu/drm/xen/xen_drm_front.c
> +++ b/drivers/gpu/drm/xen/xen_drm_front.c
> @@ -18,6 +18,8 @@
>
> #include <drm/drmP.h>
>
> +#include <linux/of_device.h>
> +
> #include <xen/platform_pci.h>
> #include <xen/xen.h>
> #include <xen/xenbus.h>
> @@ -25,15 +27,161 @@
> #include <xen/interface/io/displif.h>
>
> #include "xen_drm_front.h"
> +#include "xen_drm_front_drv.h"
> #include "xen_drm_front_evtchnl.h"
> #include "xen_drm_front_shbuf.h"
>
> +static int be_mode_set(struct xen_drm_front_drm_pipeline *pipeline, uint32_t x,
> + uint32_t y, uint32_t width, uint32_t height, uint32_t bpp,
> + uint64_t fb_cookie)
> +
> +{
> + return 0;
> +}
> +
> +static int be_dbuf_create_int(struct xen_drm_front_info *front_info,
> + uint64_t dbuf_cookie, uint32_t width, uint32_t height,
> + uint32_t bpp, uint64_t size, struct page **pages,
> + struct sg_table *sgt)
> +{
> + return 0;
> +}
> +
> +static int be_dbuf_create_from_sgt(struct xen_drm_front_info *front_info,
> + uint64_t dbuf_cookie, uint32_t width, uint32_t height,
> + uint32_t bpp, uint64_t size, struct sg_table *sgt)
> +{
> + return be_dbuf_create_int(front_info, dbuf_cookie, width, height,
> + bpp, size, NULL, sgt);
> +}
> +
> +static int be_dbuf_create_from_pages(struct xen_drm_front_info *front_info,
> + uint64_t dbuf_cookie, uint32_t width, uint32_t height,
> + uint32_t bpp, uint64_t size, struct page **pages)
> +{
> + return be_dbuf_create_int(front_info, dbuf_cookie, width, height,
> + bpp, size, pages, NULL);
> +}
> +
> +static int be_dbuf_destroy(struct xen_drm_front_info *front_info,
> + uint64_t dbuf_cookie)
> +{
> + return 0;
> +}
> +
> +static int be_fb_attach(struct xen_drm_front_info *front_info,
> + uint64_t dbuf_cookie, uint64_t fb_cookie, uint32_t width,
> + uint32_t height, uint32_t pixel_format)
> +{
> + return 0;
> +}
> +
> +static int be_fb_detach(struct xen_drm_front_info *front_info,
> + uint64_t fb_cookie)
> +{
> + return 0;
> +}
> +
> +static int be_page_flip(struct xen_drm_front_info *front_info, int conn_idx,
> + uint64_t fb_cookie)
> +{
> + return 0;
> +}
> +
> +static void xen_drm_drv_unload(struct xen_drm_front_info *front_info)
> +{
> + if (front_info->xb_dev->state != XenbusStateReconfiguring)
> + return;
> +
> + DRM_DEBUG("Can try removing driver now\n");
> + xenbus_switch_state(front_info->xb_dev, XenbusStateInitialising);
> +}
> +
> static struct xen_drm_front_ops front_ops = {
> - /* placeholder for now */
> + .mode_set = be_mode_set,
> + .dbuf_create_from_pages = be_dbuf_create_from_pages,
> + .dbuf_create_from_sgt = be_dbuf_create_from_sgt,
> + .dbuf_destroy = be_dbuf_destroy,
> + .fb_attach = be_fb_attach,
> + .fb_detach = be_fb_detach,
> + .page_flip = be_page_flip,
> + .drm_last_close = xen_drm_drv_unload,
> +};
This looks like a midlayer/DRM abstraction in your driver. Please remove it,
and instead hook your xen-front code directly into the relevant drm
callbacks.
In general, please also make sure you don't implement dummy callbacks that do
nothing; we've tried really hard to make them all optional in the drm
infrastructure.
-Daniel
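To make the suggestion concrete, here is a rough, untested sketch of what direct hooking could look like. The helper name xen_drm_front_dbuf_destroy is hypothetical (the current patch routes this through front_ops->dbuf_destroy); only the shape of the wiring matters:

```c
/*
 * Untested sketch: the DRM callback calls the frontend IO helper
 * directly, with no struct xen_drm_front_ops table in between.
 * Function names are illustrative, based on this patch.
 */
static void xen_drm_drv_free_object(struct drm_gem_object *obj)
{
	struct xen_drm_front_drm_info *drm_info = obj->dev->dev_private;

	/* direct call into the frontend backend-IO code */
	xen_drm_front_dbuf_destroy(drm_info->front_info,
			xen_drm_front_dbuf_to_cookie(obj));
}

static struct drm_driver xen_drm_driver = {
	.driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_ATOMIC,
	.gem_free_object_unlocked = xen_drm_drv_free_object,
	/* ... remaining callbacks wired the same way, and dummy
	 * callbacks simply left unset since they are optional ... */
};
```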
> +
> +static int xen_drm_drv_probe(struct platform_device *pdev)
> +{
> + /*
> + * The device is not spawn from a device tree, so arch_setup_dma_ops
> + * is not called, thus leaving the device with dummy DMA ops.
> + * This makes the device return error on PRIME buffer import, which
> + * is not correct: to fix this call of_dma_configure() with a NULL
> + * node to set default DMA ops.
> + */
> + of_dma_configure(&pdev->dev, NULL);
> + return xen_drm_front_drv_probe(pdev, &front_ops);
> +}
> +
> +static int xen_drm_drv_remove(struct platform_device *pdev)
> +{
> + return xen_drm_front_drv_remove(pdev);
> +}
> +
> +struct platform_device_info xen_drm_front_platform_info = {
> + .name = XENDISPL_DRIVER_NAME,
> + .id = 0,
> + .num_res = 0,
> + .dma_mask = DMA_BIT_MASK(32),
> };
>
> +static struct platform_driver xen_drm_front_front_info = {
> + .probe = xen_drm_drv_probe,
> + .remove = xen_drm_drv_remove,
> + .driver = {
> + .name = XENDISPL_DRIVER_NAME,
> + },
> +};
> +
> +static void xen_drm_drv_deinit(struct xen_drm_front_info *front_info)
> +{
> + if (!front_info->drm_pdrv_registered)
> + return;
> +
> + if (front_info->drm_pdev)
> + platform_device_unregister(front_info->drm_pdev);
> +
> + platform_driver_unregister(&xen_drm_front_front_info);
> + front_info->drm_pdrv_registered = false;
> + front_info->drm_pdev = NULL;
> +}
> +
> +static int xen_drm_drv_init(struct xen_drm_front_info *front_info)
> +{
> + int ret;
> +
> + ret = platform_driver_register(&xen_drm_front_front_info);
> + if (ret < 0)
> + return ret;
> +
> + front_info->drm_pdrv_registered = true;
> + /* pass card configuration via platform data */
> + xen_drm_front_platform_info.data = &front_info->cfg;
> + xen_drm_front_platform_info.size_data = sizeof(front_info->cfg);
> +
> + front_info->drm_pdev = platform_device_register_full(
> + &xen_drm_front_platform_info);
> + if (IS_ERR_OR_NULL(front_info->drm_pdev)) {
> + DRM_ERROR("Failed to register " XENDISPL_DRIVER_NAME " PV DRM driver\n");
> + front_info->drm_pdev = NULL;
> + xen_drm_drv_deinit(front_info);
> + return -ENODEV;
> + }
> +
> + return 0;
> +}
> +
> static void xen_drv_remove_internal(struct xen_drm_front_info *front_info)
> {
> + xen_drm_drv_deinit(front_info);
> xen_drm_front_evtchnl_free_all(front_info);
> }
>
> @@ -59,13 +207,27 @@ static int backend_on_initwait(struct xen_drm_front_info *front_info)
> static int backend_on_connected(struct xen_drm_front_info *front_info)
> {
> xen_drm_front_evtchnl_set_state(front_info, EVTCHNL_STATE_CONNECTED);
> - return 0;
> + return xen_drm_drv_init(front_info);
> }
>
> static void backend_on_disconnected(struct xen_drm_front_info *front_info)
> {
> + bool removed = true;
> +
> + if (front_info->drm_pdev) {
> + if (xen_drm_front_drv_is_used(front_info->drm_pdev)) {
> + DRM_WARN("DRM driver still in use, deferring removal\n");
> + removed = false;
> + } else
> + xen_drv_remove_internal(front_info);
> + }
> +
> xen_drm_front_evtchnl_set_state(front_info, EVTCHNL_STATE_DISCONNECTED);
> - xenbus_switch_state(front_info->xb_dev, XenbusStateInitialising);
> +
> + if (removed)
> + xenbus_switch_state(front_info->xb_dev, XenbusStateInitialising);
> + else
> + xenbus_switch_state(front_info->xb_dev, XenbusStateReconfiguring);
> }
>
> static void backend_on_changed(struct xenbus_device *xb_dev,
> @@ -148,6 +310,7 @@ static int xen_drv_probe(struct xenbus_device *xb_dev,
>
> front_info->xb_dev = xb_dev;
> spin_lock_init(&front_info->io_lock);
> + front_info->drm_pdrv_registered = false;
> dev_set_drvdata(&xb_dev->dev, front_info);
> return xenbus_switch_state(xb_dev, XenbusStateInitialising);
> }
> diff --git a/drivers/gpu/drm/xen/xen_drm_front.h b/drivers/gpu/drm/xen/xen_drm_front.h
> index 13f22736ae02..9ed5bfb248d0 100644
> --- a/drivers/gpu/drm/xen/xen_drm_front.h
> +++ b/drivers/gpu/drm/xen/xen_drm_front.h
> @@ -19,6 +19,8 @@
> #ifndef __XEN_DRM_FRONT_H_
> #define __XEN_DRM_FRONT_H_
>
> +#include <linux/scatterlist.h>
> +
> #include "xen_drm_front_cfg.h"
>
> #ifndef GRANT_INVALID_REF
> @@ -30,16 +32,38 @@
> #define GRANT_INVALID_REF 0
> #endif
>
> +struct xen_drm_front_drm_pipeline;
> +
> struct xen_drm_front_ops {
> + int (*mode_set)(struct xen_drm_front_drm_pipeline *pipeline,
> + uint32_t x, uint32_t y, uint32_t width, uint32_t height,
> + uint32_t bpp, uint64_t fb_cookie);
> + int (*dbuf_create_from_pages)(struct xen_drm_front_info *front_info,
> + uint64_t dbuf_cookie, uint32_t width, uint32_t height,
> + uint32_t bpp, uint64_t size, struct page **pages);
> + int (*dbuf_create_from_sgt)(struct xen_drm_front_info *front_info,
> + uint64_t dbuf_cookie, uint32_t width, uint32_t height,
> + uint32_t bpp, uint64_t size, struct sg_table *sgt);
> + int (*dbuf_destroy)(struct xen_drm_front_info *front_info,
> + uint64_t dbuf_cookie);
> + int (*fb_attach)(struct xen_drm_front_info *front_info,
> + uint64_t dbuf_cookie, uint64_t fb_cookie,
> + uint32_t width, uint32_t height, uint32_t pixel_format);
> + int (*fb_detach)(struct xen_drm_front_info *front_info,
> + uint64_t fb_cookie);
> + int (*page_flip)(struct xen_drm_front_info *front_info,
> + int conn_idx, uint64_t fb_cookie);
> /* CAUTION! this is called with a spin_lock held! */
> void (*on_frame_done)(struct platform_device *pdev,
> int conn_idx, uint64_t fb_cookie);
> + void (*drm_last_close)(struct xen_drm_front_info *front_info);
> };
>
> struct xen_drm_front_info {
> struct xenbus_device *xb_dev;
> /* to protect data between backend IO code and interrupt handler */
> spinlock_t io_lock;
> + bool drm_pdrv_registered;
> /* virtual DRM platform device */
> struct platform_device *drm_pdev;
>
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_drv.c b/drivers/gpu/drm/xen/xen_drm_front_drv.c
> new file mode 100644
> index 000000000000..b3764d5ed0f6
> --- /dev/null
> +++ b/drivers/gpu/drm/xen/xen_drm_front_drv.c
> @@ -0,0 +1,211 @@
> +/*
> + * Xen para-virtual DRM device
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> + *
> + * Copyright (C) 2016-2018 EPAM Systems Inc.
> + *
> + * Author: Oleksandr Andrushchenko <[email protected]>
> + */
> +
> +#include <drm/drmP.h>
> +#include <drm/drm_gem.h>
> +#include <drm/drm_atomic_helper.h>
> +
> +#include "xen_drm_front.h"
> +#include "xen_drm_front_cfg.h"
> +#include "xen_drm_front_drv.h"
> +
> +static int dumb_create(struct drm_file *filp,
> + struct drm_device *dev, struct drm_mode_create_dumb *args)
> +{
> + return -EINVAL;
> +}
> +
> +static void free_object(struct drm_gem_object *obj)
> +{
> + struct xen_drm_front_drm_info *drm_info = obj->dev->dev_private;
> +
> + drm_info->front_ops->dbuf_destroy(drm_info->front_info,
> + xen_drm_front_dbuf_to_cookie(obj));
> +}
> +
> +static void on_frame_done(struct platform_device *pdev,
> + int conn_idx, uint64_t fb_cookie)
> +{
> +}
> +
> +static void lastclose(struct drm_device *dev)
> +{
> + struct xen_drm_front_drm_info *drm_info = dev->dev_private;
> +
> + drm_info->front_ops->drm_last_close(drm_info->front_info);
> +}
> +
> +static int gem_mmap(struct file *filp, struct vm_area_struct *vma)
> +{
> + return -EINVAL;
> +}
> +
> +static struct sg_table *prime_get_sg_table(struct drm_gem_object *obj)
> +{
> + return NULL;
> +}
> +
> +static struct drm_gem_object *prime_import_sg_table(struct drm_device *dev,
> + struct dma_buf_attachment *attach, struct sg_table *sgt)
> +{
> + return NULL;
> +}
> +
> +static void *prime_vmap(struct drm_gem_object *obj)
> +{
> + return NULL;
> +}
> +
> +static void prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +{
> +}
> +
> +static int prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> +{
> + return -EINVAL;
> +}
> +
> +static const struct file_operations xendrm_fops = {
> + .owner = THIS_MODULE,
> + .open = drm_open,
> + .release = drm_release,
> + .unlocked_ioctl = drm_ioctl,
> +#ifdef CONFIG_COMPAT
> + .compat_ioctl = drm_compat_ioctl,
> +#endif
> + .poll = drm_poll,
> + .read = drm_read,
> + .llseek = no_llseek,
> + .mmap = gem_mmap,
> +};
> +
> +static const struct vm_operations_struct xen_drm_vm_ops = {
> + .open = drm_gem_vm_open,
> + .close = drm_gem_vm_close,
> +};
> +
> +struct drm_driver xen_drm_driver = {
> + .driver_features = DRIVER_GEM | DRIVER_MODESET |
> + DRIVER_PRIME | DRIVER_ATOMIC,
> + .lastclose = lastclose,
> + .gem_free_object_unlocked = free_object,
> + .gem_vm_ops = &xen_drm_vm_ops,
> + .prime_handle_to_fd = drm_gem_prime_handle_to_fd,
> + .prime_fd_to_handle = drm_gem_prime_fd_to_handle,
> + .gem_prime_import = drm_gem_prime_import,
> + .gem_prime_export = drm_gem_prime_export,
> + .gem_prime_get_sg_table = prime_get_sg_table,
> + .gem_prime_import_sg_table = prime_import_sg_table,
> + .gem_prime_vmap = prime_vmap,
> + .gem_prime_vunmap = prime_vunmap,
> + .gem_prime_mmap = prime_mmap,
> + .dumb_create = dumb_create,
> + .fops = &xendrm_fops,
> + .name = "xendrm-du",
> + .desc = "Xen PV DRM Display Unit",
> + .date = "20161109",
> + .major = 1,
> + .minor = 0,
> +};
> +
> +int xen_drm_front_drv_probe(struct platform_device *pdev,
> + struct xen_drm_front_ops *front_ops)
> +{
> + struct xen_drm_front_cfg *cfg = dev_get_platdata(&pdev->dev);
> + struct xen_drm_front_drm_info *drm_info;
> + struct drm_device *dev;
> + int ret;
> +
> + DRM_INFO("Creating %s\n", xen_drm_driver.desc);
> +
> + drm_info = devm_kzalloc(&pdev->dev, sizeof(*drm_info), GFP_KERNEL);
> + if (!drm_info)
> + return -ENOMEM;
> +
> + drm_info->front_ops = front_ops;
> + drm_info->front_ops->on_frame_done = on_frame_done;
> + drm_info->front_info = cfg->front_info;
> +
> + dev = drm_dev_alloc(&xen_drm_driver, &pdev->dev);
> + if (!dev)
> + return -ENOMEM;
> +
> + drm_info->drm_dev = dev;
> +
> + drm_info->cfg = cfg;
> + dev->dev_private = drm_info;
> + platform_set_drvdata(pdev, drm_info);
> +
> + ret = drm_vblank_init(dev, cfg->num_connectors);
> + if (ret) {
> + DRM_ERROR("Failed to initialize vblank, ret %d\n", ret);
> + return ret;
> + }
> +
> + dev->irq_enabled = 1;
> +
> + ret = drm_dev_register(dev, 0);
> + if (ret)
> + goto fail_register;
> +
> + DRM_INFO("Initialized %s %d.%d.%d %s on minor %d\n",
> + xen_drm_driver.name, xen_drm_driver.major,
> + xen_drm_driver.minor, xen_drm_driver.patchlevel,
> + xen_drm_driver.date, dev->primary->index);
> +
> + return 0;
> +
> +fail_register:
> + drm_dev_unregister(dev);
> + drm_mode_config_cleanup(dev);
> + return ret;
> +}
> +
> +int xen_drm_front_drv_remove(struct platform_device *pdev)
> +{
> + struct xen_drm_front_drm_info *drm_info = platform_get_drvdata(pdev);
> + struct drm_device *dev = drm_info->drm_dev;
> +
> + if (dev) {
> + drm_dev_unregister(dev);
> + drm_atomic_helper_shutdown(dev);
> + drm_mode_config_cleanup(dev);
> + drm_dev_unref(dev);
> + }
> + return 0;
> +}
> +
> +bool xen_drm_front_drv_is_used(struct platform_device *pdev)
> +{
> + struct xen_drm_front_drm_info *drm_info = platform_get_drvdata(pdev);
> + struct drm_device *dev;
> +
> + if (!drm_info)
> + return false;
> +
> + dev = drm_info->drm_dev;
> + if (!dev)
> + return false;
> +
> + /*
> + * FIXME: the code below must be protected by drm_global_mutex,
> + * but it is not accessible to us. Anyways there is a race condition,
> + * but we will re-try.
> + */
> + return dev->open_count != 0;
> +}
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_drv.h b/drivers/gpu/drm/xen/xen_drm_front_drv.h
> new file mode 100644
> index 000000000000..aaa476535c13
> --- /dev/null
> +++ b/drivers/gpu/drm/xen/xen_drm_front_drv.h
> @@ -0,0 +1,60 @@
> +/*
> + * Xen para-virtual DRM device
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> + *
> + * Copyright (C) 2016-2018 EPAM Systems Inc.
> + *
> + * Author: Oleksandr Andrushchenko <[email protected]>
> + */
> +
> +#ifndef __XEN_DRM_FRONT_DRV_H_
> +#define __XEN_DRM_FRONT_DRV_H_
> +
> +#include <drm/drmP.h>
> +
> +#include "xen_drm_front.h"
> +#include "xen_drm_front_cfg.h"
> +
> +struct xen_drm_front_drm_pipeline {
> + struct xen_drm_front_drm_info *drm_info;
> +
> + int index;
> +};
> +
> +struct xen_drm_front_drm_info {
> + struct xen_drm_front_info *front_info;
> + struct xen_drm_front_ops *front_ops;
> + struct drm_device *drm_dev;
> + struct xen_drm_front_cfg *cfg;
> +};
> +
> +static inline uint64_t xen_drm_front_fb_to_cookie(
> + struct drm_framebuffer *fb)
> +{
> + return (uint64_t)fb;
> +}
> +
> +static inline uint64_t xen_drm_front_dbuf_to_cookie(
> + struct drm_gem_object *gem_obj)
> +{
> + return (uint64_t)gem_obj;
> +}
> +
> +int xen_drm_front_drv_probe(struct platform_device *pdev,
> + struct xen_drm_front_ops *front_ops);
> +
> +int xen_drm_front_drv_remove(struct platform_device *pdev);
> +
> +bool xen_drm_front_drv_is_used(struct platform_device *pdev);
> +
> +#endif /* __XEN_DRM_FRONT_DRV_H_ */
> +
> --
> 2.7.4
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
On Wed, Feb 21, 2018 at 10:03:40AM +0200, Oleksandr Andrushchenko wrote:
> From: Oleksandr Andrushchenko <[email protected]>
>
> Implement kernel modesetting/connector handling using
> DRM simple KMS helper pipeline:
>
> - implement KMS part of the driver with the help of DRM
> simple pipeline helper which is possible due to the fact
> that the para-virtualized driver only supports a single
> (primary) plane:
> - initialize connectors according to XenStore configuration
> - handle frame done events from the backend
> - generate vblank events
> - create and destroy frame buffers and propagate those
> to the backend
> - propagate set/reset mode configuration to the backend on display
> enable/disable callbacks
> - send page flip request to the backend and implement logic for
> reporting backend IO errors on prepare fb callback
>
> - implement virtual connector handling:
> - support only pixel formats suitable for single plane modes
> - make sure the connector is always connected
> - support a single video mode as per para-virtualized driver
> configuration
>
> Signed-off-by: Oleksandr Andrushchenko <[email protected]>
I think once you've removed the midlayer in the previous patch it would
make sense to merge the 2 patches into 1.
Bunch more comments below.
-Daniel
> ---
> drivers/gpu/drm/xen/Makefile | 2 +
> drivers/gpu/drm/xen/xen_drm_front_conn.c | 125 +++++++++++++
> drivers/gpu/drm/xen/xen_drm_front_conn.h | 35 ++++
> drivers/gpu/drm/xen/xen_drm_front_drv.c | 15 ++
> drivers/gpu/drm/xen/xen_drm_front_drv.h | 12 ++
> drivers/gpu/drm/xen/xen_drm_front_kms.c | 299 +++++++++++++++++++++++++++++++
> drivers/gpu/drm/xen/xen_drm_front_kms.h | 30 ++++
> 7 files changed, 518 insertions(+)
> create mode 100644 drivers/gpu/drm/xen/xen_drm_front_conn.c
> create mode 100644 drivers/gpu/drm/xen/xen_drm_front_conn.h
> create mode 100644 drivers/gpu/drm/xen/xen_drm_front_kms.c
> create mode 100644 drivers/gpu/drm/xen/xen_drm_front_kms.h
>
> diff --git a/drivers/gpu/drm/xen/Makefile b/drivers/gpu/drm/xen/Makefile
> index d3068202590f..4fcb0da1a9c5 100644
> --- a/drivers/gpu/drm/xen/Makefile
> +++ b/drivers/gpu/drm/xen/Makefile
> @@ -2,6 +2,8 @@
>
> drm_xen_front-objs := xen_drm_front.o \
> xen_drm_front_drv.o \
> + xen_drm_front_kms.o \
> + xen_drm_front_conn.o \
> xen_drm_front_evtchnl.o \
> xen_drm_front_shbuf.o \
> xen_drm_front_cfg.o
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_conn.c b/drivers/gpu/drm/xen/xen_drm_front_conn.c
> new file mode 100644
> index 000000000000..d9986a2e1a3b
> --- /dev/null
> +++ b/drivers/gpu/drm/xen/xen_drm_front_conn.c
> @@ -0,0 +1,125 @@
> +/*
> + * Xen para-virtual DRM device
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> + *
> + * Copyright (C) 2016-2018 EPAM Systems Inc.
> + *
> + * Author: Oleksandr Andrushchenko <[email protected]>
> + */
> +
> +#include <drm/drm_atomic_helper.h>
> +#include <drm/drm_crtc_helper.h>
> +
> +#include <video/videomode.h>
> +
> +#include "xen_drm_front_conn.h"
> +#include "xen_drm_front_drv.h"
> +
> +static struct xen_drm_front_drm_pipeline *
> +to_xen_drm_pipeline(struct drm_connector *connector)
> +{
> + return container_of(connector, struct xen_drm_front_drm_pipeline, conn);
> +}
> +
> +static const uint32_t plane_formats[] = {
> + DRM_FORMAT_RGB565,
> + DRM_FORMAT_RGB888,
> + DRM_FORMAT_XRGB8888,
> + DRM_FORMAT_ARGB8888,
> + DRM_FORMAT_XRGB4444,
> + DRM_FORMAT_ARGB4444,
> + DRM_FORMAT_XRGB1555,
> + DRM_FORMAT_ARGB1555,
> +};
> +
> +const uint32_t *xen_drm_front_conn_get_formats(int *format_count)
> +{
> + *format_count = ARRAY_SIZE(plane_formats);
> + return plane_formats;
> +}
> +
> +static enum drm_connector_status connector_detect(
> + struct drm_connector *connector, bool force)
> +{
> + if (drm_dev_is_unplugged(connector->dev))
> + return connector_status_disconnected;
> +
> + return connector_status_connected;
> +}
> +
> +#define XEN_DRM_NUM_VIDEO_MODES 1
> +#define XEN_DRM_CRTC_VREFRESH_HZ 60
> +
> +static int connector_get_modes(struct drm_connector *connector)
> +{
> + struct xen_drm_front_drm_pipeline *pipeline =
> + to_xen_drm_pipeline(connector);
> + struct drm_display_mode *mode;
> + struct videomode videomode;
> + int width, height;
> +
> + mode = drm_mode_create(connector->dev);
> + if (!mode)
> + return 0;
> +
> + memset(&videomode, 0, sizeof(videomode));
> + videomode.hactive = pipeline->width;
> + videomode.vactive = pipeline->height;
> + width = videomode.hactive + videomode.hfront_porch +
> + videomode.hback_porch + videomode.hsync_len;
> + height = videomode.vactive + videomode.vfront_porch +
> + videomode.vback_porch + videomode.vsync_len;
> + videomode.pixelclock = width * height * XEN_DRM_CRTC_VREFRESH_HZ;
> + mode->type = DRM_MODE_TYPE_PREFERRED | DRM_MODE_TYPE_DRIVER;
> +
> + drm_display_mode_from_videomode(&videomode, mode);
> + drm_mode_probed_add(connector, mode);
> + return XEN_DRM_NUM_VIDEO_MODES;
> +}
> +
> +static int connector_mode_valid(struct drm_connector *connector,
> + struct drm_display_mode *mode)
> +{
> + struct xen_drm_front_drm_pipeline *pipeline =
> + to_xen_drm_pipeline(connector);
> +
> + if (mode->hdisplay != pipeline->width)
> + return MODE_ERROR;
> +
> + if (mode->vdisplay != pipeline->height)
> + return MODE_ERROR;
> +
> + return MODE_OK;
> +}
> +
> +static const struct drm_connector_helper_funcs connector_helper_funcs = {
> + .get_modes = connector_get_modes,
> + .mode_valid = connector_mode_valid,
> +};
> +
> +static const struct drm_connector_funcs connector_funcs = {
> + .detect = connector_detect,
> + .fill_modes = drm_helper_probe_single_connector_modes,
> + .destroy = drm_connector_cleanup,
> + .reset = drm_atomic_helper_connector_reset,
> + .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
> + .atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
> +};
> +
> +int xen_drm_front_conn_init(struct xen_drm_front_drm_info *drm_info,
> + struct drm_connector *connector)
> +{
> + drm_connector_helper_add(connector, &connector_helper_funcs);
> +
> + return drm_connector_init(drm_info->drm_dev, connector,
> + &connector_funcs, DRM_MODE_CONNECTOR_VIRTUAL);
> +}
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_conn.h b/drivers/gpu/drm/xen/xen_drm_front_conn.h
> new file mode 100644
> index 000000000000..708e80d45985
> --- /dev/null
> +++ b/drivers/gpu/drm/xen/xen_drm_front_conn.h
> @@ -0,0 +1,35 @@
> +/*
> + * Xen para-virtual DRM device
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> + *
> + * Copyright (C) 2016-2018 EPAM Systems Inc.
> + *
> + * Author: Oleksandr Andrushchenko <[email protected]>
> + */
> +
> +#ifndef __XEN_DRM_FRONT_CONN_H_
> +#define __XEN_DRM_FRONT_CONN_H_
> +
> +#include <drm/drmP.h>
> +#include <drm/drm_crtc.h>
> +#include <drm/drm_encoder.h>
> +
> +#include <linux/wait.h>
> +
> +struct xen_drm_front_drm_info;
> +
> +const uint32_t *xen_drm_front_conn_get_formats(int *format_count);
> +
> +int xen_drm_front_conn_init(struct xen_drm_front_drm_info *drm_info,
> + struct drm_connector *connector);
> +
> +#endif /* __XEN_DRM_FRONT_CONN_H_ */
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_drv.c b/drivers/gpu/drm/xen/xen_drm_front_drv.c
> index b3764d5ed0f6..e8862d26ba27 100644
> --- a/drivers/gpu/drm/xen/xen_drm_front_drv.c
> +++ b/drivers/gpu/drm/xen/xen_drm_front_drv.c
> @@ -23,6 +23,7 @@
> #include "xen_drm_front.h"
> #include "xen_drm_front_cfg.h"
> #include "xen_drm_front_drv.h"
> +#include "xen_drm_front_kms.h"
>
> static int dumb_create(struct drm_file *filp,
> struct drm_device *dev, struct drm_mode_create_dumb *args)
> @@ -41,6 +42,13 @@ static void free_object(struct drm_gem_object *obj)
> static void on_frame_done(struct platform_device *pdev,
> int conn_idx, uint64_t fb_cookie)
> {
> + struct xen_drm_front_drm_info *drm_info = platform_get_drvdata(pdev);
> +
> + if (unlikely(conn_idx >= drm_info->cfg->num_connectors))
> + return;
> +
> + xen_drm_front_kms_on_frame_done(&drm_info->pipeline[conn_idx],
> + fb_cookie);
> }
>
> static void lastclose(struct drm_device *dev)
> @@ -157,6 +165,12 @@ int xen_drm_front_drv_probe(struct platform_device *pdev,
> return ret;
> }
>
> + ret = xen_drm_front_kms_init(drm_info);
> + if (ret) {
> + DRM_ERROR("Failed to initialize DRM/KMS, ret %d\n", ret);
> + goto fail_modeset;
> + }
> +
> dev->irq_enabled = 1;
>
> ret = drm_dev_register(dev, 0);
> @@ -172,6 +186,7 @@ int xen_drm_front_drv_probe(struct platform_device *pdev,
>
> fail_register:
> drm_dev_unregister(dev);
> +fail_modeset:
> drm_mode_config_cleanup(dev);
> return ret;
> }
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_drv.h b/drivers/gpu/drm/xen/xen_drm_front_drv.h
> index aaa476535c13..563318b19f34 100644
> --- a/drivers/gpu/drm/xen/xen_drm_front_drv.h
> +++ b/drivers/gpu/drm/xen/xen_drm_front_drv.h
> @@ -20,14 +20,24 @@
> #define __XEN_DRM_FRONT_DRV_H_
>
> #include <drm/drmP.h>
> +#include <drm/drm_simple_kms_helper.h>
>
> #include "xen_drm_front.h"
> #include "xen_drm_front_cfg.h"
> +#include "xen_drm_front_conn.h"
>
> struct xen_drm_front_drm_pipeline {
> struct xen_drm_front_drm_info *drm_info;
>
> int index;
> +
> + struct drm_simple_display_pipe pipe;
> +
> + struct drm_connector conn;
> + /* these are only for connector mode checking */
> + int width, height;
> + /* last backend error seen on page flip */
> + int pgflip_last_error;
> };
>
> struct xen_drm_front_drm_info {
> @@ -35,6 +45,8 @@ struct xen_drm_front_drm_info {
> struct xen_drm_front_ops *front_ops;
> struct drm_device *drm_dev;
> struct xen_drm_front_cfg *cfg;
> +
> + struct xen_drm_front_drm_pipeline pipeline[XEN_DRM_FRONT_MAX_CRTCS];
> };
>
> static inline uint64_t xen_drm_front_fb_to_cookie(
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_kms.c b/drivers/gpu/drm/xen/xen_drm_front_kms.c
> new file mode 100644
> index 000000000000..ad94c28835cd
> --- /dev/null
> +++ b/drivers/gpu/drm/xen/xen_drm_front_kms.c
> @@ -0,0 +1,299 @@
> +/*
> + * Xen para-virtual DRM device
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> + *
> + * Copyright (C) 2016-2018 EPAM Systems Inc.
> + *
> + * Author: Oleksandr Andrushchenko <[email protected]>
> + */
> +
> +#include "xen_drm_front_kms.h"
> +
> +#include <drm/drmP.h>
> +#include <drm/drm_atomic.h>
> +#include <drm/drm_atomic_helper.h>
> +#include <drm/drm_gem.h>
> +#include <drm/drm_gem_framebuffer_helper.h>
> +
> +#include "xen_drm_front.h"
> +#include "xen_drm_front_conn.h"
> +#include "xen_drm_front_drv.h"
> +
> +static struct xen_drm_front_drm_pipeline *
> +to_xen_drm_pipeline(struct drm_simple_display_pipe *pipe)
> +{
> + return container_of(pipe, struct xen_drm_front_drm_pipeline, pipe);
> +}
> +
> +static void fb_destroy(struct drm_framebuffer *fb)
> +{
> + struct xen_drm_front_drm_info *drm_info = fb->dev->dev_private;
> +
> + drm_info->front_ops->fb_detach(drm_info->front_info,
> + xen_drm_front_fb_to_cookie(fb));
> + drm_gem_fb_destroy(fb);
> +}
> +
> +static struct drm_framebuffer_funcs fb_funcs = {
> + .destroy = fb_destroy,
> +};
> +
> +static struct drm_framebuffer *fb_create(struct drm_device *dev,
> + struct drm_file *filp, const struct drm_mode_fb_cmd2 *mode_cmd)
> +{
> + struct xen_drm_front_drm_info *drm_info = dev->dev_private;
> + static struct drm_framebuffer *fb;
> + struct drm_gem_object *gem_obj;
> + int ret;
> +
> + fb = drm_gem_fb_create_with_funcs(dev, filp, mode_cmd, &fb_funcs);
> + if (IS_ERR_OR_NULL(fb))
> + return fb;
> +
> + gem_obj = drm_gem_object_lookup(filp, mode_cmd->handles[0]);
> + if (!gem_obj) {
> + DRM_ERROR("Failed to lookup GEM object\n");
> + ret = -ENOENT;
> + goto fail;
> + }
> +
> + drm_gem_object_unreference_unlocked(gem_obj);
> +
> + ret = drm_info->front_ops->fb_attach(
> + drm_info->front_info,
> + xen_drm_front_dbuf_to_cookie(gem_obj),
> + xen_drm_front_fb_to_cookie(fb),
> + fb->width, fb->height, fb->format->format);
> + if (ret < 0) {
> + DRM_ERROR("Back failed to attach FB %p: %d\n", fb, ret);
> + goto fail;
> + }
> +
> + return fb;
> +
> +fail:
> + drm_gem_fb_destroy(fb);
> + return ERR_PTR(ret);
> +}
> +
> +static const struct drm_mode_config_funcs mode_config_funcs = {
> + .fb_create = fb_create,
> + .atomic_check = drm_atomic_helper_check,
> + .atomic_commit = drm_atomic_helper_commit,
> +};
> +
> +static int display_set_config(struct drm_simple_display_pipe *pipe,
> + struct drm_framebuffer *fb)
> +{
> + struct xen_drm_front_drm_pipeline *pipeline =
> + to_xen_drm_pipeline(pipe);
> + struct drm_crtc *crtc = &pipe->crtc;
> + struct xen_drm_front_drm_info *drm_info = pipeline->drm_info;
> + int ret;
> +
> + if (fb)
> + ret = drm_info->front_ops->mode_set(pipeline,
> + crtc->x, crtc->y,
> + fb->width, fb->height, fb->format->cpp[0] * 8,
> + xen_drm_front_fb_to_cookie(fb));
> + else
> + ret = drm_info->front_ops->mode_set(pipeline,
> + 0, 0, 0, 0, 0,
> + xen_drm_front_fb_to_cookie(NULL));
This is a bit too much layering: the if (fb) case corresponds to the
display_enable/disable hooks, so please fold that in instead of the
indirection. The simple helpers guarantee that when the display is on, you
have an fb.
Maybe we need to fix the docs; please check, and if that's not clear, submit
a kernel-doc patch for the simple pipe helpers.
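For illustration, the fold-in could look roughly like the following user-space model (all types and names here are made-up stand-ins, not the real DRM API): enable always receives an fb and programs the mode from it, while disable always sends the all-zero mode, so no if (fb) branch is needed.

```c
#include <assert.h>
#include <stdint.h>

/* A sketch of folding the mode-set into enable/disable: the simple pipe
 * helpers guarantee an fb on enable, so each hook calls "mode_set"
 * directly. model_mode_set() just records its arguments. */
struct model_mode { uint32_t x, y, width, height, bpp; uint64_t fb_cookie; };

static struct model_mode last_set; /* what "mode_set" was last called with */

static void model_mode_set(struct model_mode m)
{
	last_set = m;
}

static void model_display_enable(uint32_t x, uint32_t y,
				 uint32_t w, uint32_t h, uint32_t bpp,
				 uint64_t fb_cookie)
{
	/* simple helpers guarantee an fb here, so no NULL branch */
	struct model_mode m = { x, y, w, h, bpp, fb_cookie };

	model_mode_set(m);
}

static void model_display_disable(void)
{
	/* disable maps to the all-zero mode, as in the NULL-fb case above */
	struct model_mode m = { 0 };

	model_mode_set(m);
}
```

With this shape, display_set_config() disappears and each hook talks to the backend directly.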
> +
> + if (ret)
> + DRM_ERROR("Failed to set mode to back: %d\n", ret);
> +
> + return ret;
> +}
> +
> +static void display_enable(struct drm_simple_display_pipe *pipe,
> + struct drm_crtc_state *crtc_state)
> +{
> + struct drm_crtc *crtc = &pipe->crtc;
> + struct drm_framebuffer *fb = pipe->plane.state->fb;
> +
> + if (display_set_config(pipe, fb) == 0)
> + drm_crtc_vblank_on(crtc);
I get the impression your driver doesn't support vblanks (the page flip
code at least looks like it's only generating a single event), and you also
don't have an enable/disable_vblank implementation. If there's no vblank
handling then this shouldn't be needed.
> + else
> + DRM_ERROR("Failed to enable display\n");
> +}
> +
> +static void display_disable(struct drm_simple_display_pipe *pipe)
> +{
> + struct drm_crtc *crtc = &pipe->crtc;
> +
> + display_set_config(pipe, NULL);
> + drm_crtc_vblank_off(crtc);
> + /* final check for stalled events */
> + if (crtc->state->event && !crtc->state->active) {
> + unsigned long flags;
> +
> + spin_lock_irqsave(&crtc->dev->event_lock, flags);
> + drm_crtc_send_vblank_event(crtc, crtc->state->event);
> + spin_unlock_irqrestore(&crtc->dev->event_lock, flags);
> + crtc->state->event = NULL;
> + }
> +}
> +
> +void xen_drm_front_kms_on_frame_done(
> + struct xen_drm_front_drm_pipeline *pipeline,
> + uint64_t fb_cookie)
> +{
> + drm_crtc_handle_vblank(&pipeline->pipe.crtc);
Hm, again this doesn't look like a real vblank, but only a page-flip done
event. If that's correct then please don't use the vblank machinery, but
just store the event internally (protected by your own private spinlock)
and send it out using drm_crtc_send_vblank_event directly. No calls to
arm_vblank_event or any of the other vblank infrastructure should be
needed.
Also please remove the drm_vblank_init() call, since your hw doesn't
really have vblanks. Exposing vblanks to userspace without
implementing them is confusing.
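The suggested bookkeeping can be sketched in user space as follows (all names are illustrative stand-ins; in the real driver the ->pending accesses sit under the private spinlock and completion goes through drm_crtc_send_vblank_event()):

```c
#include <assert.h>
#include <stddef.h>

/* Model of storing the pending page-flip event privately instead of
 * using the vblank machinery. */
struct model_event { int completed; };

struct model_pipeline {
	/* in the kernel: spinlock_t lock, taken around ->pending accesses */
	struct model_event *pending;
};

/* atomic-update path: stash the event instead of arming a vblank event */
static void model_arm(struct model_pipeline *p, struct model_event *e)
{
	p->pending = e;
}

/* frame-done path: complete and clear the stashed event, if any */
static void model_frame_done(struct model_pipeline *p)
{
	struct model_event *e = p->pending;

	p->pending = NULL;
	if (e)
		e->completed = 1; /* stands in for drm_crtc_send_vblank_event() */
}
```

A spurious frame-done with no pending event then falls through harmlessly instead of touching vblank state.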
> +}
> +
> +static void display_send_page_flip(struct drm_simple_display_pipe *pipe,
> + struct drm_plane_state *old_plane_state)
> +{
> + struct drm_plane_state *plane_state = drm_atomic_get_new_plane_state(
> + old_plane_state->state, &pipe->plane);
> +
> + /*
> + * If old_plane_state->fb is NULL and plane_state->fb is not,
> + * then this is an atomic commit which will enable display.
> + * If old_plane_state->fb is not NULL and plane_state->fb is NULL,
> + * then this is an atomic commit which will disable display.
> + * Ignore these and do not send page flip as this framebuffer will be
> + * sent to the backend as a part of display_set_config call.
> + */
> + if (old_plane_state->fb && plane_state->fb) {
> + struct xen_drm_front_drm_pipeline *pipeline =
> + to_xen_drm_pipeline(pipe);
> + struct xen_drm_front_drm_info *drm_info = pipeline->drm_info;
> + int ret;
> +
> + ret = drm_info->front_ops->page_flip(drm_info->front_info,
> + pipeline->index,
> + xen_drm_front_fb_to_cookie(plane_state->fb));
> + pipeline->pgflip_last_error = ret;
> + if (ret) {
> + DRM_ERROR("Failed to send page flip request to backend: %d\n", ret);
> + /*
> + * As we are at the commit stage, the DRM core will anyway
> + * wait for the vblank and knows nothing about our
> + * failure. The best we can do is to handle the
> + * vblank now, so there are no vblank/flip_done
> + * timeouts
> + */
> + drm_crtc_handle_vblank(&pipeline->pipe.crtc);
> + }
> + }
> +}
> +
> +static int display_prepare_fb(struct drm_simple_display_pipe *pipe,
> + struct drm_plane_state *plane_state)
> +{
> + struct xen_drm_front_drm_pipeline *pipeline =
> + to_xen_drm_pipeline(pipe);
> +
> + if (pipeline->pgflip_last_error) {
> + int ret;
> +
> + /* if previous page flip didn't succeed then report the error */
> + ret = pipeline->pgflip_last_error;
> + /* and let us try to page flip next time */
> + pipeline->pgflip_last_error = 0;
> + return ret;
> + }
Nope, this isn't how the uapi works. If your flips fail then we might need
to add some error status thing to the drm events, but you can't make the
next flip fail.
-Daniel
> + return drm_gem_fb_prepare_fb(&pipe->plane, plane_state);
> +}
> +
> +static void display_update(struct drm_simple_display_pipe *pipe,
> + struct drm_plane_state *old_plane_state)
> +{
> + struct drm_crtc *crtc = &pipe->crtc;
> + struct drm_pending_vblank_event *event;
> +
> + event = crtc->state->event;
> + if (event) {
> + struct drm_device *dev = crtc->dev;
> + unsigned long flags;
> +
> + crtc->state->event = NULL;
> +
> + spin_lock_irqsave(&dev->event_lock, flags);
> + if (drm_crtc_vblank_get(crtc) == 0)
> + drm_crtc_arm_vblank_event(crtc, event);
> + else
> + drm_crtc_send_vblank_event(crtc, event);
> + spin_unlock_irqrestore(&dev->event_lock, flags);
> + }
> + /*
> + * Send page flip request to the backend *after* we have event armed/
> + * sent above, so on page flip done event from the backend we can
> + * deliver it while handling vblank.
> + */
> + display_send_page_flip(pipe, old_plane_state);
> +}
> +
> +static const struct drm_simple_display_pipe_funcs display_funcs = {
> + .enable = display_enable,
> + .disable = display_disable,
> + .prepare_fb = display_prepare_fb,
> + .update = display_update,
> +};
> +
> +static int display_pipe_init(struct xen_drm_front_drm_info *drm_info,
> + int index, struct xen_drm_front_cfg_connector *cfg,
> + struct xen_drm_front_drm_pipeline *pipeline)
> +{
> + struct drm_device *dev = drm_info->drm_dev;
> + const uint32_t *formats;
> + int format_count;
> + int ret;
> +
> + pipeline->drm_info = drm_info;
> + pipeline->index = index;
> + pipeline->height = cfg->height;
> + pipeline->width = cfg->width;
> +
> + ret = xen_drm_front_conn_init(drm_info, &pipeline->conn);
> + if (ret)
> + return ret;
> +
> + formats = xen_drm_front_conn_get_formats(&format_count);
> +
> + return drm_simple_display_pipe_init(dev, &pipeline->pipe,
> + &display_funcs, formats, format_count,
> + NULL, &pipeline->conn);
> +}
> +
> +int xen_drm_front_kms_init(struct xen_drm_front_drm_info *drm_info)
> +{
> + struct drm_device *dev = drm_info->drm_dev;
> + int i, ret;
> +
> + drm_mode_config_init(dev);
> +
> + dev->mode_config.min_width = 0;
> + dev->mode_config.min_height = 0;
> + dev->mode_config.max_width = 4095;
> + dev->mode_config.max_height = 2047;
> + dev->mode_config.funcs = &mode_config_funcs;
> +
> + for (i = 0; i < drm_info->cfg->num_connectors; i++) {
> + struct xen_drm_front_cfg_connector *cfg =
> + &drm_info->cfg->connectors[i];
> + struct xen_drm_front_drm_pipeline *pipeline =
> + &drm_info->pipeline[i];
> +
> + ret = display_pipe_init(drm_info, i, cfg, pipeline);
> + if (ret) {
> + drm_mode_config_cleanup(dev);
> + return ret;
> + }
> + }
> +
> + drm_mode_config_reset(dev);
> + return 0;
> +}
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_kms.h b/drivers/gpu/drm/xen/xen_drm_front_kms.h
> new file mode 100644
> index 000000000000..65a50033bb9b
> --- /dev/null
> +++ b/drivers/gpu/drm/xen/xen_drm_front_kms.h
> @@ -0,0 +1,30 @@
> +/*
> + * Xen para-virtual DRM device
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> + *
> + * Copyright (C) 2016-2018 EPAM Systems Inc.
> + *
> + * Author: Oleksandr Andrushchenko <[email protected]>
> + */
> +
> +#ifndef __XEN_DRM_FRONT_KMS_H_
> +#define __XEN_DRM_FRONT_KMS_H_
> +
> +#include "xen_drm_front_drv.h"
> +
> +int xen_drm_front_kms_init(struct xen_drm_front_drm_info *drm_info);
> +
> +void xen_drm_front_kms_on_frame_done(
> + struct xen_drm_front_drm_pipeline *pipeline,
> + uint64_t fb_cookie);
> +
> +#endif /* __XEN_DRM_FRONT_KMS_H_ */
> --
> 2.7.4
>
> _______________________________________________
> dri-devel mailing list
> [email protected]
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
On Wed, Feb 21, 2018 at 10:03:41AM +0200, Oleksandr Andrushchenko wrote:
> From: Oleksandr Andrushchenko <[email protected]>
>
> Implement GEM handling depending on driver mode of operation:
> depending on the requirements for the para-virtualized environment, namely
> requirements dictated by the accompanying DRM/(v)GPU drivers running in both
> host and guest environments, a number of operating modes of the
> para-virtualized display driver are supported:
> - display buffers can be allocated by either frontend driver or backend
> - display buffers can be allocated to be contiguous in memory or not
>
> Note! Frontend driver itself has no dependency on contiguous memory for
> its operation.
>
> 1. Buffers allocated by the frontend driver.
>
> The below modes of operation are configured at compile-time via
> frontend driver's kernel configuration.
>
> 1.1. Front driver configured to use GEM CMA helpers
> This use-case is useful when used with accompanying DRM/vGPU driver in
> guest domain which was designed to only work with contiguous buffers,
> e.g. DRM driver based on GEM CMA helpers: such drivers can only import
> contiguous PRIME buffers, thus requiring the frontend driver to provide
> such. In order to implement this mode of operation the para-virtualized
> frontend driver can be configured to use GEM CMA helpers.
>
> 1.2. Front driver doesn't use GEM CMA
> If accompanying drivers can cope with non-contiguous memory then, to
> lower pressure on the CMA subsystem of the kernel, the driver can allocate
> buffers from system memory.
>
> Note! If used with accompanying DRM/(v)GPU drivers this mode of operation
> may require IOMMU support on the platform, so accompanying DRM/vGPU
> hardware can still reach display buffer memory while importing PRIME
> buffers from the frontend driver.
>
> 2. Buffers allocated by the backend
>
> This mode of operation is run-time configured via guest domain configuration
> through XenStore entries.
>
> For systems which do not provide IOMMU support but have specific
> requirements for display buffers, it is possible to allocate such buffers
> on the backend side and share those with the frontend.
> For example, if host domain is 1:1 mapped and has DRM/GPU hardware expecting
> physically contiguous memory, this allows implementing zero-copying
> use-cases.
>
> Note! Configuration options 1.1 (contiguous display buffers) and 2 (backend
> allocated buffers) are not supported at the same time.
>
> Signed-off-by: Oleksandr Andrushchenko <[email protected]>
Some suggestions below for some larger cleanup work.
-Daniel
> ---
> drivers/gpu/drm/xen/Kconfig | 13 +
> drivers/gpu/drm/xen/Makefile | 6 +
> drivers/gpu/drm/xen/xen_drm_front.h | 74 ++++++
> drivers/gpu/drm/xen/xen_drm_front_drv.c | 80 ++++++-
> drivers/gpu/drm/xen/xen_drm_front_drv.h | 1 +
> drivers/gpu/drm/xen/xen_drm_front_gem.c | 360 ++++++++++++++++++++++++++++
> drivers/gpu/drm/xen/xen_drm_front_gem.h | 46 ++++
> drivers/gpu/drm/xen/xen_drm_front_gem_cma.c | 93 +++++++
> 8 files changed, 667 insertions(+), 6 deletions(-)
> create mode 100644 drivers/gpu/drm/xen/xen_drm_front_gem.c
> create mode 100644 drivers/gpu/drm/xen/xen_drm_front_gem.h
> create mode 100644 drivers/gpu/drm/xen/xen_drm_front_gem_cma.c
>
> diff --git a/drivers/gpu/drm/xen/Kconfig b/drivers/gpu/drm/xen/Kconfig
> index 4cca160782ab..4f4abc91f3b6 100644
> --- a/drivers/gpu/drm/xen/Kconfig
> +++ b/drivers/gpu/drm/xen/Kconfig
> @@ -15,3 +15,16 @@ config DRM_XEN_FRONTEND
> help
> Choose this option if you want to enable a para-virtualized
> frontend DRM/KMS driver for Xen guest OSes.
> +
> +config DRM_XEN_FRONTEND_CMA
> + bool "Use DRM CMA to allocate dumb buffers"
> + depends on DRM_XEN_FRONTEND
> + select DRM_KMS_CMA_HELPER
> + select DRM_GEM_CMA_HELPER
> + help
> + Use DRM CMA helpers to allocate display buffers.
> + This is useful for use-cases when the guest driver needs to
> + share or export buffers to other drivers which only expect
> + contiguous buffers.
> + Note: in this mode the driver cannot use buffers allocated
> + by the backend.
> diff --git a/drivers/gpu/drm/xen/Makefile b/drivers/gpu/drm/xen/Makefile
> index 4fcb0da1a9c5..12376ec78fbc 100644
> --- a/drivers/gpu/drm/xen/Makefile
> +++ b/drivers/gpu/drm/xen/Makefile
> @@ -8,4 +8,10 @@ drm_xen_front-objs := xen_drm_front.o \
> xen_drm_front_shbuf.o \
> xen_drm_front_cfg.o
>
> +ifeq ($(CONFIG_DRM_XEN_FRONTEND_CMA),y)
> + drm_xen_front-objs += xen_drm_front_gem_cma.o
> +else
> + drm_xen_front-objs += xen_drm_front_gem.o
> +endif
> +
> obj-$(CONFIG_DRM_XEN_FRONTEND) += drm_xen_front.o
> diff --git a/drivers/gpu/drm/xen/xen_drm_front.h b/drivers/gpu/drm/xen/xen_drm_front.h
> index 9ed5bfb248d0..c6f52c892434 100644
> --- a/drivers/gpu/drm/xen/xen_drm_front.h
> +++ b/drivers/gpu/drm/xen/xen_drm_front.h
> @@ -34,6 +34,80 @@
>
> struct xen_drm_front_drm_pipeline;
>
> +/*
> + *******************************************************************************
> + * Para-virtualized DRM/KMS frontend driver
> + *******************************************************************************
> + * This frontend driver implements Xen para-virtualized display
> + * according to the display protocol described at
> + * include/xen/interface/io/displif.h
> + *
> + *******************************************************************************
> + * Driver modes of operation in terms of display buffers used
> + *******************************************************************************
> + * Depending on the requirements for the para-virtualized environment, namely
> + * requirements dictated by the accompanying DRM/(v)GPU drivers running in both
> + * host and guest environments, a number of operating modes of the
> + * para-virtualized display driver are supported:
> + * - display buffers can be allocated by either frontend driver or backend
> + * - display buffers can be allocated to be contiguous in memory or not
> + *
> + * Note! Frontend driver itself has no dependency on contiguous memory for
> + * its operation.
> + *
> + *******************************************************************************
> + * 1. Buffers allocated by the frontend driver.
> + *******************************************************************************
> + *
> + * The below modes of operation are configured at compile-time via
> + * frontend driver's kernel configuration.
> + *
> + * 1.1. Front driver configured to use GEM CMA helpers
> + * This use-case is useful when used with accompanying DRM/vGPU driver in
> + * guest domain which was designed to only work with contiguous buffers,
> + * e.g. DRM driver based on GEM CMA helpers: such drivers can only import
> + * contiguous PRIME buffers, thus requiring the frontend driver to provide
> + * such. In order to implement this mode of operation the para-virtualized
> + * frontend driver can be configured to use GEM CMA helpers.
> + *
> + * 1.2. Front driver doesn't use GEM CMA
> + * If accompanying drivers can cope with non-contiguous memory then, to
> + * lower pressure on the CMA subsystem of the kernel, the driver can allocate
> + * buffers from system memory.
> + *
> + * Note! If used with accompanying DRM/(v)GPU drivers this mode of operation
> + * may require IOMMU support on the platform, so accompanying DRM/vGPU
> + * hardware can still reach display buffer memory while importing PRIME
> + * buffers from the frontend driver.
> + *
> + *******************************************************************************
> + * 2. Buffers allocated by the backend
> + *******************************************************************************
> + *
> + * This mode of operation is run-time configured via guest domain configuration
> + * through XenStore entries.
> + *
> + * For systems which do not provide IOMMU support but have specific
> + * requirements for display buffers, it is possible to allocate such buffers
> + * on the backend side and share those with the frontend.
> + * For example, if host domain is 1:1 mapped and has DRM/GPU hardware expecting
> + * physically contiguous memory, this allows implementing zero-copying
> + * use-cases.
> + *
> + *******************************************************************************
> + * Driver limitations
> + *******************************************************************************
> + * 1. Configuration options 1.1 (contiguous display buffers) and 2 (backend
> + * allocated buffers) are not supported at the same time.
> + *
> + * 2. Only primary plane without additional properties is supported.
> + *
> + * 3. Only one video mode is supported, which is configured via XenStore.
> + *
> + * 4. All CRTCs operate at a fixed frequency of 60Hz.
> + *
> + ******************************************************************************/
Since you've typed this all up, please convert it to kernel-doc and pull it
into a xen-front.rst driver section in Documentation/gpu/. There are a few
examples for i915 and vc4 already.
> +
> struct xen_drm_front_ops {
> int (*mode_set)(struct xen_drm_front_drm_pipeline *pipeline,
> uint32_t x, uint32_t y, uint32_t width, uint32_t height,
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_drv.c b/drivers/gpu/drm/xen/xen_drm_front_drv.c
> index e8862d26ba27..35e7e9cda9d1 100644
> --- a/drivers/gpu/drm/xen/xen_drm_front_drv.c
> +++ b/drivers/gpu/drm/xen/xen_drm_front_drv.c
> @@ -23,12 +23,58 @@
> #include "xen_drm_front.h"
> #include "xen_drm_front_cfg.h"
> #include "xen_drm_front_drv.h"
> +#include "xen_drm_front_gem.h"
> #include "xen_drm_front_kms.h"
>
> static int dumb_create(struct drm_file *filp,
> struct drm_device *dev, struct drm_mode_create_dumb *args)
> {
> - return -EINVAL;
> + struct xen_drm_front_drm_info *drm_info = dev->dev_private;
> + struct drm_gem_object *obj;
> + int ret;
> +
> + ret = drm_info->gem_ops->dumb_create(filp, dev, args);
> + if (ret)
> + goto fail;
> +
> + obj = drm_gem_object_lookup(filp, args->handle);
> + if (!obj) {
> + ret = -ENOENT;
> + goto fail_destroy;
> + }
> +
> + drm_gem_object_unreference_unlocked(obj);
> +
> + /*
> + * In case of CONFIG_DRM_XEN_FRONTEND_CMA gem_obj is constructed
> + * via DRM CMA helpers and doesn't have ->pages allocated
> + * (xendrm_gem_get_pages will return NULL), but can instead provide
> + * an sg table
> + */
My recommendation is to use an sg table for everything if you deal with
mixed objects (CMA, special blocks 1:1 mapped from host, normal pages).
That avoids the constant get_pages vs. get_sgt differences. For examples
see how e.g. i915 handles the various gem object backends.
> + if (drm_info->gem_ops->get_pages(obj))
> + ret = drm_info->front_ops->dbuf_create_from_pages(
> + drm_info->front_info,
> + xen_drm_front_dbuf_to_cookie(obj),
> + args->width, args->height, args->bpp,
> + args->size,
> + drm_info->gem_ops->get_pages(obj));
> + else
> + ret = drm_info->front_ops->dbuf_create_from_sgt(
> + drm_info->front_info,
> + xen_drm_front_dbuf_to_cookie(obj),
> + args->width, args->height, args->bpp,
> + args->size,
> + drm_info->gem_ops->prime_get_sg_table(obj));
> + if (ret)
> + goto fail_destroy;
> +
> + return 0;
> +
> +fail_destroy:
> + drm_gem_dumb_destroy(filp, dev, args->handle);
> +fail:
> + DRM_ERROR("Failed to create dumb buffer: %d\n", ret);
> + return ret;
> }
>
> static void free_object(struct drm_gem_object *obj)
> @@ -37,6 +83,7 @@ static void free_object(struct drm_gem_object *obj)
>
> drm_info->front_ops->dbuf_destroy(drm_info->front_info,
> xen_drm_front_dbuf_to_cookie(obj));
> + drm_info->gem_ops->free_object_unlocked(obj);
> }
>
> static void on_frame_done(struct platform_device *pdev,
> @@ -60,32 +107,52 @@ static void lastclose(struct drm_device *dev)
>
> static int gem_mmap(struct file *filp, struct vm_area_struct *vma)
> {
> - return -EINVAL;
> + struct drm_file *file_priv = filp->private_data;
> + struct drm_device *dev = file_priv->minor->dev;
> + struct xen_drm_front_drm_info *drm_info = dev->dev_private;
> +
> + return drm_info->gem_ops->mmap(filp, vma);
Uh, so one midlayer for the kms stuff and another midlayer for the gem
stuff. That's way too much indirection.
> }
>
> static struct sg_table *prime_get_sg_table(struct drm_gem_object *obj)
> {
> - return NULL;
> + struct xen_drm_front_drm_info *drm_info;
> +
> + drm_info = obj->dev->dev_private;
> + return drm_info->gem_ops->prime_get_sg_table(obj);
> }
>
> static struct drm_gem_object *prime_import_sg_table(struct drm_device *dev,
> struct dma_buf_attachment *attach, struct sg_table *sgt)
> {
> - return NULL;
> + struct xen_drm_front_drm_info *drm_info;
> +
> + drm_info = dev->dev_private;
> + return drm_info->gem_ops->prime_import_sg_table(dev, attach, sgt);
> }
>
> static void *prime_vmap(struct drm_gem_object *obj)
> {
> - return NULL;
> + struct xen_drm_front_drm_info *drm_info;
> +
> + drm_info = obj->dev->dev_private;
> + return drm_info->gem_ops->prime_vmap(obj);
> }
>
> static void prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> {
> + struct xen_drm_front_drm_info *drm_info;
> +
> + drm_info = obj->dev->dev_private;
> + drm_info->gem_ops->prime_vunmap(obj, vaddr);
> }
>
> static int prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> {
> - return -EINVAL;
> + struct xen_drm_front_drm_info *drm_info;
> +
> + drm_info = obj->dev->dev_private;
> + return drm_info->gem_ops->prime_mmap(obj, vma);
> }
>
> static const struct file_operations xendrm_fops = {
> @@ -147,6 +214,7 @@ int xen_drm_front_drv_probe(struct platform_device *pdev,
>
> drm_info->front_ops = front_ops;
> drm_info->front_ops->on_frame_done = on_frame_done;
> + drm_info->gem_ops = xen_drm_front_gem_get_ops();
> drm_info->front_info = cfg->front_info;
>
> dev = drm_dev_alloc(&xen_drm_driver, &pdev->dev);
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_drv.h b/drivers/gpu/drm/xen/xen_drm_front_drv.h
> index 563318b19f34..34228eb86255 100644
> --- a/drivers/gpu/drm/xen/xen_drm_front_drv.h
> +++ b/drivers/gpu/drm/xen/xen_drm_front_drv.h
> @@ -43,6 +43,7 @@ struct xen_drm_front_drm_pipeline {
> struct xen_drm_front_drm_info {
> struct xen_drm_front_info *front_info;
> struct xen_drm_front_ops *front_ops;
> + const struct xen_drm_front_gem_ops *gem_ops;
> struct drm_device *drm_dev;
> struct xen_drm_front_cfg *cfg;
>
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
> new file mode 100644
> index 000000000000..367e08f6a9ef
> --- /dev/null
> +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
> @@ -0,0 +1,360 @@
> +/*
> + * Xen para-virtual DRM device
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> + *
> + * Copyright (C) 2016-2018 EPAM Systems Inc.
> + *
> + * Author: Oleksandr Andrushchenko <[email protected]>
> + */
> +
> +#include "xen_drm_front_gem.h"
> +
> +#include <drm/drmP.h>
> +#include <drm/drm_crtc_helper.h>
> +#include <drm/drm_fb_helper.h>
> +#include <drm/drm_gem.h>
> +
> +#include <linux/dma-buf.h>
> +#include <linux/scatterlist.h>
> +#include <linux/shmem_fs.h>
> +
> +#include <xen/balloon.h>
> +
> +#include "xen_drm_front.h"
> +#include "xen_drm_front_drv.h"
> +#include "xen_drm_front_shbuf.h"
> +
> +struct xen_gem_object {
> + struct drm_gem_object base;
> +
> + size_t num_pages;
> + struct page **pages;
> +
> + /* set for buffers allocated by the backend */
> + bool be_alloc;
> +
> + /* this is for imported PRIME buffer */
> + struct sg_table *sgt_imported;
> +};
> +
> +static inline struct xen_gem_object *to_xen_gem_obj(
> + struct drm_gem_object *gem_obj)
> +{
> + return container_of(gem_obj, struct xen_gem_object, base);
> +}
> +
> +static int gem_alloc_pages_array(struct xen_gem_object *xen_obj,
> + size_t buf_size)
> +{
> + xen_obj->num_pages = DIV_ROUND_UP(buf_size, PAGE_SIZE);
> + xen_obj->pages = kvmalloc_array(xen_obj->num_pages,
> + sizeof(struct page *), GFP_KERNEL);
> + return xen_obj->pages == NULL ? -ENOMEM : 0;
> +}
> +
> +static void gem_free_pages_array(struct xen_gem_object *xen_obj)
> +{
> + kvfree(xen_obj->pages);
> + xen_obj->pages = NULL;
> +}
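For reference, the page-count math in gem_alloc_pages_array() above is a plain ceiling division; a stand-alone sketch (MODEL_PAGE_SIZE is an illustrative stand-in for the architecture-dependent PAGE_SIZE):

```c
#include <assert.h>
#include <stddef.h>

#define MODEL_PAGE_SIZE 4096 /* illustrative; the real PAGE_SIZE is arch-dependent */

/* Mirrors DIV_ROUND_UP(buf_size, PAGE_SIZE): any partial trailing page
 * still needs a full page pointer slot. */
static size_t model_num_pages(size_t buf_size)
{
	return (buf_size + MODEL_PAGE_SIZE - 1) / MODEL_PAGE_SIZE;
}
```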
> +
> +static struct xen_gem_object *gem_create_obj(struct drm_device *dev,
> + size_t size)
> +{
> + struct xen_gem_object *xen_obj;
> + int ret;
> +
> + xen_obj = kzalloc(sizeof(*xen_obj), GFP_KERNEL);
> + if (!xen_obj)
> + return ERR_PTR(-ENOMEM);
> +
> + ret = drm_gem_object_init(dev, &xen_obj->base, size);
> + if (ret < 0) {
> + kfree(xen_obj);
> + return ERR_PTR(ret);
> + }
> +
> + return xen_obj;
> +}
> +
> +static struct xen_gem_object *gem_create(struct drm_device *dev, size_t size)
> +{
> + struct xen_drm_front_drm_info *drm_info = dev->dev_private;
> + struct xen_gem_object *xen_obj;
> + int ret;
> +
> + size = round_up(size, PAGE_SIZE);
> + xen_obj = gem_create_obj(dev, size);
> + if (IS_ERR_OR_NULL(xen_obj))
> + return xen_obj;
> +
> + if (drm_info->cfg->be_alloc) {
> + /*
> + * the backend will allocate space for this buffer, so
> + * only allocate an array of pointers to pages
> + */
> + xen_obj->be_alloc = true;
> + ret = gem_alloc_pages_array(xen_obj, size);
> + if (ret < 0) {
> + gem_free_pages_array(xen_obj);
> + goto fail;
> + }
> +
> + ret = alloc_xenballooned_pages(xen_obj->num_pages,
> + xen_obj->pages);
> + if (ret < 0) {
> + DRM_ERROR("Cannot allocate %zu ballooned pages: %d\n",
> + xen_obj->num_pages, ret);
> + goto fail;
> + }
> +
> + return xen_obj;
> + }
> + /*
> + * need to allocate backing pages now, so we can share those
> + * with the backend
> + */
> + xen_obj->num_pages = DIV_ROUND_UP(size, PAGE_SIZE);
> + xen_obj->pages = drm_gem_get_pages(&xen_obj->base);
> + if (IS_ERR_OR_NULL(xen_obj->pages)) {
> + ret = PTR_ERR(xen_obj->pages);
> + xen_obj->pages = NULL;
> + goto fail;
> + }
> +
> + return xen_obj;
> +
> +fail:
> + DRM_ERROR("Failed to allocate buffer with size %zu\n", size);
> + return ERR_PTR(ret);
> +}
> +
> +static struct xen_gem_object *gem_create_with_handle(struct drm_file *filp,
> + struct drm_device *dev, size_t size, uint32_t *handle)
> +{
> + struct xen_gem_object *xen_obj;
> + struct drm_gem_object *gem_obj;
> + int ret;
> +
> + xen_obj = gem_create(dev, size);
> + if (IS_ERR_OR_NULL(xen_obj))
> + return xen_obj;
> +
> + gem_obj = &xen_obj->base;
> + ret = drm_gem_handle_create(filp, gem_obj, handle);
> + /* handle holds the reference */
> + drm_gem_object_unreference_unlocked(gem_obj);
> + if (ret < 0)
> + return ERR_PTR(ret);
> +
> + return xen_obj;
> +}
> +
> +static int gem_dumb_create(struct drm_file *filp, struct drm_device *dev,
> + struct drm_mode_create_dumb *args)
> +{
> + struct xen_gem_object *xen_obj;
> +
> + args->pitch = DIV_ROUND_UP(args->width * args->bpp, 8);
> + args->size = args->pitch * args->height;
> +
> + xen_obj = gem_create_with_handle(filp, dev, args->size, &args->handle);
> + if (IS_ERR_OR_NULL(xen_obj))
> + return xen_obj == NULL ? -ENOMEM : PTR_ERR(xen_obj);
> +
> + return 0;
> +}
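The pitch/size computation in gem_dumb_create() above can be modeled stand-alone; the helper names below are made up for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors the dumb_create math: pitch is bytes per row, rounded up from
 * width * bpp bits (DIV_ROUND_UP(width * bpp, 8)); size covers all rows. */
static uint32_t model_pitch(uint32_t width, uint32_t bpp)
{
	return (width * bpp + 7) / 8;
}

static uint64_t model_size(uint32_t width, uint32_t height, uint32_t bpp)
{
	return (uint64_t)model_pitch(width, bpp) * height;
}
```

The rounding matters only for sub-byte formats; for bpp multiples of 8 the pitch is exactly width * bpp / 8.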
> +
> +static void gem_free_object(struct drm_gem_object *gem_obj)
> +{
> + struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
> +
> + if (xen_obj->base.import_attach) {
> + drm_prime_gem_destroy(&xen_obj->base, xen_obj->sgt_imported);
> + gem_free_pages_array(xen_obj);
> + } else {
> + if (xen_obj->pages) {
> + if (xen_obj->be_alloc) {
> + free_xenballooned_pages(xen_obj->num_pages,
> + xen_obj->pages);
> + gem_free_pages_array(xen_obj);
> + } else
> + drm_gem_put_pages(&xen_obj->base,
> + xen_obj->pages, true, false);
> + }
> + }
> + drm_gem_object_release(gem_obj);
> + kfree(xen_obj);
> +}
> +
> +static struct page **gem_get_pages(struct drm_gem_object *gem_obj)
> +{
> + struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
> +
> + return xen_obj->pages;
> +}
> +
> +static struct sg_table *gem_get_sg_table(struct drm_gem_object *gem_obj)
> +{
> + struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
> +
> + if (!xen_obj->pages)
> + return NULL;
> +
> + return drm_prime_pages_to_sg(xen_obj->pages, xen_obj->num_pages);
> +}
> +
> +static struct drm_gem_object *gem_import_sg_table(struct drm_device *dev,
> + struct dma_buf_attachment *attach, struct sg_table *sgt)
> +{
> + struct xen_drm_front_drm_info *drm_info = dev->dev_private;
> + struct xen_gem_object *xen_obj;
> + size_t size;
> + int ret;
> +
> + size = attach->dmabuf->size;
> + xen_obj = gem_create_obj(dev, size);
> + if (IS_ERR_OR_NULL(xen_obj))
> + return ERR_CAST(xen_obj);
> +
> + ret = gem_alloc_pages_array(xen_obj, size);
> + if (ret < 0)
> + return ERR_PTR(ret);
> +
> + xen_obj->sgt_imported = sgt;
> +
> + ret = drm_prime_sg_to_page_addr_arrays(sgt, xen_obj->pages,
> + NULL, xen_obj->num_pages);
> + if (ret < 0)
> + return ERR_PTR(ret);
> +
> + /*
> + * N.B. Although we have an API to create a display buffer from an sgt,
> + * we use the pages API, because we still need the pages for GEM handling,
> + * e.g. for mapping etc.
> + */
> + ret = drm_info->front_ops->dbuf_create_from_pages(
> + drm_info->front_info,
> + xen_drm_front_dbuf_to_cookie(&xen_obj->base),
> + 0, 0, 0, size, xen_obj->pages);
> + if (ret < 0)
> + return ERR_PTR(ret);
> +
> + DRM_DEBUG("Imported buffer of size %zu with nents %u\n",
> + size, sgt->nents);
> +
> + return &xen_obj->base;
> +}
> +
> +static int gem_mmap_obj(struct xen_gem_object *xen_obj,
> + struct vm_area_struct *vma)
> +{
> + unsigned long addr = vma->vm_start;
> + int i;
> +
> + /*
> + * clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
> + * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
> + * the whole buffer.
> + */
> + vma->vm_flags &= ~VM_PFNMAP;
> + vma->vm_flags |= VM_MIXEDMAP;
> + vma->vm_pgoff = 0;
> + vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
> +
> + /*
> + * The vm_operations_struct.fault handler would be called on CPU
> + * access to the VMA. For GPUs this isn't the case, because the CPU
> + * doesn't touch the memory. Insert the pages now, so both CPU and GPU
> + * are happy.
> + * FIXME: as we insert all the pages now, no .fault handler should
> + * be called, so don't provide one
> + */
> + for (i = 0; i < xen_obj->num_pages; i++) {
> + int ret;
> +
> + ret = vm_insert_page(vma, addr, xen_obj->pages[i]);
> + if (ret < 0) {
> + DRM_ERROR("Failed to insert pages into vma: %d\n", ret);
> + return ret;
> + }
> +
> + addr += PAGE_SIZE;
> + }
> + return 0;
> +}
> +
> +static int gem_mmap(struct file *filp, struct vm_area_struct *vma)
> +{
> + struct xen_gem_object *xen_obj;
> + struct drm_gem_object *gem_obj;
> + int ret;
> +
> + ret = drm_gem_mmap(filp, vma);
> + if (ret < 0)
> + return ret;
> +
> + gem_obj = vma->vm_private_data;
> + xen_obj = to_xen_gem_obj(gem_obj);
> + return gem_mmap_obj(xen_obj, vma);
> +}
> +
> +static void *gem_prime_vmap(struct drm_gem_object *gem_obj)
> +{
> + struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
> +
> + if (!xen_obj->pages)
> + return NULL;
> +
> + return vmap(xen_obj->pages, xen_obj->num_pages,
> + VM_MAP, pgprot_writecombine(PAGE_KERNEL));
> +}
> +
> +static void gem_prime_vunmap(struct drm_gem_object *gem_obj, void *vaddr)
> +{
> + vunmap(vaddr);
> +}
> +
> +static int gem_prime_mmap(struct drm_gem_object *gem_obj,
> + struct vm_area_struct *vma)
> +{
> + struct xen_gem_object *xen_obj;
> + int ret;
> +
> + ret = drm_gem_mmap_obj(gem_obj, gem_obj->size, vma);
> + if (ret < 0)
> + return ret;
> +
> + xen_obj = to_xen_gem_obj(gem_obj);
> + return gem_mmap_obj(xen_obj, vma);
> +}
> +
> +static const struct xen_drm_front_gem_ops xen_drm_gem_ops = {
> + .free_object_unlocked = gem_free_object,
> + .prime_get_sg_table = gem_get_sg_table,
> + .prime_import_sg_table = gem_import_sg_table,
> +
> + .prime_vmap = gem_prime_vmap,
> + .prime_vunmap = gem_prime_vunmap,
> + .prime_mmap = gem_prime_mmap,
> +
> + .dumb_create = gem_dumb_create,
> +
> + .mmap = gem_mmap,
> +
> + .get_pages = gem_get_pages,
> +};
> +
> +const struct xen_drm_front_gem_ops *xen_drm_front_gem_get_ops(void)
> +{
> + return &xen_drm_gem_ops;
> +}
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.h b/drivers/gpu/drm/xen/xen_drm_front_gem.h
> new file mode 100644
> index 000000000000..d1e1711cc3fc
> --- /dev/null
> +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.h
> @@ -0,0 +1,46 @@
> +/*
> + * Xen para-virtual DRM device
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> + *
> + * Copyright (C) 2016-2018 EPAM Systems Inc.
> + *
> + * Author: Oleksandr Andrushchenko <[email protected]>
> + */
> +
> +#ifndef __XEN_DRM_FRONT_GEM_H
> +#define __XEN_DRM_FRONT_GEM_H
> +
> +#include <drm/drmP.h>
> +
> +struct xen_drm_front_gem_ops {
> + void (*free_object_unlocked)(struct drm_gem_object *obj);
> +
> + struct sg_table *(*prime_get_sg_table)(struct drm_gem_object *obj);
> + struct drm_gem_object *(*prime_import_sg_table)(struct drm_device *dev,
> + struct dma_buf_attachment *attach,
> + struct sg_table *sgt);
> + void *(*prime_vmap)(struct drm_gem_object *obj);
> + void (*prime_vunmap)(struct drm_gem_object *obj, void *vaddr);
> + int (*prime_mmap)(struct drm_gem_object *obj,
> + struct vm_area_struct *vma);
> +
> + int (*dumb_create)(struct drm_file *file_priv, struct drm_device *dev,
> + struct drm_mode_create_dumb *args);
> +
> + int (*mmap)(struct file *filp, struct vm_area_struct *vma);
> +
> + struct page **(*get_pages)(struct drm_gem_object *obj);
> +};
> +
> +const struct xen_drm_front_gem_ops *xen_drm_front_gem_get_ops(void);
> +
> +#endif /* __XEN_DRM_FRONT_GEM_H */
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem_cma.c b/drivers/gpu/drm/xen/xen_drm_front_gem_cma.c
> new file mode 100644
> index 000000000000..5ffcbfa652d5
> --- /dev/null
> +++ b/drivers/gpu/drm/xen/xen_drm_front_gem_cma.c
> @@ -0,0 +1,93 @@
> +/*
> + * Xen para-virtual DRM device
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> + *
> + * Copyright (C) 2016-2018 EPAM Systems Inc.
> + *
> + * Author: Oleksandr Andrushchenko <[email protected]>
> + */
> +
> +#include <drm/drmP.h>
> +#include <drm/drm_gem.h>
> +#include <drm/drm_fb_cma_helper.h>
> +#include <drm/drm_gem_cma_helper.h>
> +
> +#include "xen_drm_front.h"
> +#include "xen_drm_front_drv.h"
> +#include "xen_drm_front_gem.h"
> +
> +static struct drm_gem_object *gem_import_sg_table(struct drm_device *dev,
> + struct dma_buf_attachment *attach, struct sg_table *sgt)
> +{
> + struct xen_drm_front_drm_info *drm_info = dev->dev_private;
> + struct drm_gem_object *gem_obj;
> + struct drm_gem_cma_object *cma_obj;
> + int ret;
> +
> + gem_obj = drm_gem_cma_prime_import_sg_table(dev, attach, sgt);
> + if (IS_ERR_OR_NULL(gem_obj))
> + return gem_obj;
> +
> + cma_obj = to_drm_gem_cma_obj(gem_obj);
> +
> + ret = drm_info->front_ops->dbuf_create_from_sgt(
> + drm_info->front_info,
> + xen_drm_front_dbuf_to_cookie(gem_obj),
> + 0, 0, 0, gem_obj->size,
> + drm_gem_cma_prime_get_sg_table(gem_obj));
> + if (ret < 0)
> + return ERR_PTR(ret);
> +
> + DRM_DEBUG("Imported CMA buffer of size %zu\n", gem_obj->size);
> +
> + return gem_obj;
> +}
> +
> +static int gem_dumb_create(struct drm_file *filp, struct drm_device *dev,
> + struct drm_mode_create_dumb *args)
> +{
> + struct xen_drm_front_drm_info *drm_info = dev->dev_private;
> +
> + if (drm_info->cfg->be_alloc) {
> + /* This use-case is not yet supported and probably won't be */
> + DRM_ERROR("Backend allocated buffers and CMA helpers are not supported at the same time\n");
> + return -EINVAL;
> + }
> +
> + return drm_gem_cma_dumb_create(filp, dev, args);
> +}
> +
> +static struct page **gem_get_pages(struct drm_gem_object *gem_obj)
> +{
> + return NULL;
> +}
> +
> +static const struct xen_drm_front_gem_ops xen_drm_front_gem_cma_ops = {
> + .free_object_unlocked = drm_gem_cma_free_object,
> + .prime_get_sg_table = drm_gem_cma_prime_get_sg_table,
> + .prime_import_sg_table = gem_import_sg_table,
> +
> + .prime_vmap = drm_gem_cma_prime_vmap,
> + .prime_vunmap = drm_gem_cma_prime_vunmap,
> + .prime_mmap = drm_gem_cma_prime_mmap,
> +
> + .dumb_create = gem_dumb_create,
> +
> + .mmap = drm_gem_cma_mmap,
> +
> + .get_pages = gem_get_pages,
> +};
Again quite a midlayer you have here. Please inline this to avoid
confusion for other people (since it looks like you only have 1
implementation).
> +
> +const struct xen_drm_front_gem_ops *xen_drm_front_gem_get_ops(void)
> +{
> + return &xen_drm_front_gem_cma_ops;
> +}
> --
> 2.7.4
>
> _______________________________________________
> dri-devel mailing list
> [email protected]
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
On 03/05/2018 11:13 AM, Daniel Vetter wrote:
> On Wed, Feb 21, 2018 at 10:03:39AM +0200, Oleksandr Andrushchenko wrote:
>> From: Oleksandr Andrushchenko <[email protected]>
>>
>> Implement essential initialization of the display driver:
>> - introduce required data structures
>> - handle DRM/KMS driver registration
>> - perform basic DRM driver initialization
>> - register driver on backend connection
>> - remove driver on backend disconnect
>> - introduce essential callbacks required by DRM/KMS core
>> - introduce essential callbacks required for frontend operations
>>
>> Signed-off-by: Oleksandr Andrushchenko <[email protected]>
>> ---
>> drivers/gpu/drm/xen/Makefile | 1 +
>> drivers/gpu/drm/xen/xen_drm_front.c | 169 ++++++++++++++++++++++++-
>> drivers/gpu/drm/xen/xen_drm_front.h | 24 ++++
>> drivers/gpu/drm/xen/xen_drm_front_drv.c | 211 ++++++++++++++++++++++++++++++++
>> drivers/gpu/drm/xen/xen_drm_front_drv.h | 60 +++++++++
>> 5 files changed, 462 insertions(+), 3 deletions(-)
>> create mode 100644 drivers/gpu/drm/xen/xen_drm_front_drv.c
>> create mode 100644 drivers/gpu/drm/xen/xen_drm_front_drv.h
>>
>> diff --git a/drivers/gpu/drm/xen/Makefile b/drivers/gpu/drm/xen/Makefile
>> index f1823cb596c5..d3068202590f 100644
>> --- a/drivers/gpu/drm/xen/Makefile
>> +++ b/drivers/gpu/drm/xen/Makefile
>> @@ -1,6 +1,7 @@
>> # SPDX-License-Identifier: GPL-2.0
>>
>> drm_xen_front-objs := xen_drm_front.o \
>> + xen_drm_front_drv.o \
>> xen_drm_front_evtchnl.o \
>> xen_drm_front_shbuf.o \
>> xen_drm_front_cfg.o
>> diff --git a/drivers/gpu/drm/xen/xen_drm_front.c b/drivers/gpu/drm/xen/xen_drm_front.c
>> index 0d94ff272da3..8de88e359d5e 100644
>> --- a/drivers/gpu/drm/xen/xen_drm_front.c
>> +++ b/drivers/gpu/drm/xen/xen_drm_front.c
>> @@ -18,6 +18,8 @@
>>
>> #include <drm/drmP.h>
>>
>> +#include <linux/of_device.h>
>> +
>> #include <xen/platform_pci.h>
>> #include <xen/xen.h>
>> #include <xen/xenbus.h>
>> @@ -25,15 +27,161 @@
>> #include <xen/interface/io/displif.h>
>>
>> #include "xen_drm_front.h"
>> +#include "xen_drm_front_drv.h"
>> #include "xen_drm_front_evtchnl.h"
>> #include "xen_drm_front_shbuf.h"
>>
>> +static int be_mode_set(struct xen_drm_front_drm_pipeline *pipeline, uint32_t x,
>> + uint32_t y, uint32_t width, uint32_t height, uint32_t bpp,
>> + uint64_t fb_cookie)
>> +
>> +{
>> + return 0;
>> +}
>> +
>> +static int be_dbuf_create_int(struct xen_drm_front_info *front_info,
>> + uint64_t dbuf_cookie, uint32_t width, uint32_t height,
>> + uint32_t bpp, uint64_t size, struct page **pages,
>> + struct sg_table *sgt)
>> +{
>> + return 0;
>> +}
>> +
>> +static int be_dbuf_create_from_sgt(struct xen_drm_front_info *front_info,
>> + uint64_t dbuf_cookie, uint32_t width, uint32_t height,
>> + uint32_t bpp, uint64_t size, struct sg_table *sgt)
>> +{
>> + return be_dbuf_create_int(front_info, dbuf_cookie, width, height,
>> + bpp, size, NULL, sgt);
>> +}
>> +
>> +static int be_dbuf_create_from_pages(struct xen_drm_front_info *front_info,
>> + uint64_t dbuf_cookie, uint32_t width, uint32_t height,
>> + uint32_t bpp, uint64_t size, struct page **pages)
>> +{
>> + return be_dbuf_create_int(front_info, dbuf_cookie, width, height,
>> + bpp, size, pages, NULL);
>> +}
>> +
>> +static int be_dbuf_destroy(struct xen_drm_front_info *front_info,
>> + uint64_t dbuf_cookie)
>> +{
>> + return 0;
>> +}
>> +
>> +static int be_fb_attach(struct xen_drm_front_info *front_info,
>> + uint64_t dbuf_cookie, uint64_t fb_cookie, uint32_t width,
>> + uint32_t height, uint32_t pixel_format)
>> +{
>> + return 0;
>> +}
>> +
>> +static int be_fb_detach(struct xen_drm_front_info *front_info,
>> + uint64_t fb_cookie)
>> +{
>> + return 0;
>> +}
>> +
>> +static int be_page_flip(struct xen_drm_front_info *front_info, int conn_idx,
>> + uint64_t fb_cookie)
>> +{
>> + return 0;
>> +}
>> +
>> +static void xen_drm_drv_unload(struct xen_drm_front_info *front_info)
>> +{
>> + if (front_info->xb_dev->state != XenbusStateReconfiguring)
>> + return;
>> +
>> + DRM_DEBUG("Can try removing driver now\n");
>> + xenbus_switch_state(front_info->xb_dev, XenbusStateInitialising);
>> +}
>> +
>> static struct xen_drm_front_ops front_ops = {
>> - /* placeholder for now */
>> + .mode_set = be_mode_set,
>> + .dbuf_create_from_pages = be_dbuf_create_from_pages,
>> + .dbuf_create_from_sgt = be_dbuf_create_from_sgt,
>> + .dbuf_destroy = be_dbuf_destroy,
>> + .fb_attach = be_fb_attach,
>> + .fb_detach = be_fb_detach,
>> + .page_flip = be_page_flip,
>> + .drm_last_close = xen_drm_drv_unload,
>> +};
> This looks like a midlayer/DRM-abstraction in your driver. Please remove,
> and instead directly hook your xen-front code into the relevant drm
> callbacks.
ok, will do
>
> In general also pls make sure you don't implement dummy callbacks that do
> nothing, we've tried really hard to make them all optional in the drm
> infrastructure.
sure
> -Daniel
>
>> +
>> +static int xen_drm_drv_probe(struct platform_device *pdev)
>> +{
>> + /*
>> +	 * The device is not spawned from a device tree, so arch_setup_dma_ops
>> +	 * is not called, thus leaving the device with dummy DMA ops.
>> +	 * This makes the device return an error on PRIME buffer import, which
>> +	 * is not correct: to fix this, call of_dma_configure() with a NULL
>> + * node to set default DMA ops.
>> + */
>> + of_dma_configure(&pdev->dev, NULL);
>> + return xen_drm_front_drv_probe(pdev, &front_ops);
>> +}
>> +
>> +static int xen_drm_drv_remove(struct platform_device *pdev)
>> +{
>> + return xen_drm_front_drv_remove(pdev);
>> +}
>> +
>> +struct platform_device_info xen_drm_front_platform_info = {
>> + .name = XENDISPL_DRIVER_NAME,
>> + .id = 0,
>> + .num_res = 0,
>> + .dma_mask = DMA_BIT_MASK(32),
>> };
>>
>> +static struct platform_driver xen_drm_front_front_info = {
>> + .probe = xen_drm_drv_probe,
>> + .remove = xen_drm_drv_remove,
>> + .driver = {
>> + .name = XENDISPL_DRIVER_NAME,
>> + },
>> +};
>> +
>> +static void xen_drm_drv_deinit(struct xen_drm_front_info *front_info)
>> +{
>> + if (!front_info->drm_pdrv_registered)
>> + return;
>> +
>> + if (front_info->drm_pdev)
>> + platform_device_unregister(front_info->drm_pdev);
>> +
>> + platform_driver_unregister(&xen_drm_front_front_info);
>> + front_info->drm_pdrv_registered = false;
>> + front_info->drm_pdev = NULL;
>> +}
>> +
>> +static int xen_drm_drv_init(struct xen_drm_front_info *front_info)
>> +{
>> + int ret;
>> +
>> + ret = platform_driver_register(&xen_drm_front_front_info);
>> + if (ret < 0)
>> + return ret;
>> +
>> + front_info->drm_pdrv_registered = true;
>> + /* pass card configuration via platform data */
>> + xen_drm_front_platform_info.data = &front_info->cfg;
>> + xen_drm_front_platform_info.size_data = sizeof(front_info->cfg);
>> +
>> + front_info->drm_pdev = platform_device_register_full(
>> + &xen_drm_front_platform_info);
>> + if (IS_ERR_OR_NULL(front_info->drm_pdev)) {
>> + DRM_ERROR("Failed to register " XENDISPL_DRIVER_NAME " PV DRM driver\n");
>> + front_info->drm_pdev = NULL;
>> + xen_drm_drv_deinit(front_info);
>> + return -ENODEV;
>> + }
>> +
>> + return 0;
>> +}
>> +
>> static void xen_drv_remove_internal(struct xen_drm_front_info *front_info)
>> {
>> + xen_drm_drv_deinit(front_info);
>> xen_drm_front_evtchnl_free_all(front_info);
>> }
>>
>> @@ -59,13 +207,27 @@ static int backend_on_initwait(struct xen_drm_front_info *front_info)
>> static int backend_on_connected(struct xen_drm_front_info *front_info)
>> {
>> xen_drm_front_evtchnl_set_state(front_info, EVTCHNL_STATE_CONNECTED);
>> - return 0;
>> + return xen_drm_drv_init(front_info);
>> }
>>
>> static void backend_on_disconnected(struct xen_drm_front_info *front_info)
>> {
>> + bool removed = true;
>> +
>> + if (front_info->drm_pdev) {
>> + if (xen_drm_front_drv_is_used(front_info->drm_pdev)) {
>> + DRM_WARN("DRM driver still in use, deferring removal\n");
>> + removed = false;
>> + } else
>> + xen_drv_remove_internal(front_info);
>> + }
>> +
>> xen_drm_front_evtchnl_set_state(front_info, EVTCHNL_STATE_DISCONNECTED);
>> - xenbus_switch_state(front_info->xb_dev, XenbusStateInitialising);
>> +
>> + if (removed)
>> + xenbus_switch_state(front_info->xb_dev, XenbusStateInitialising);
>> + else
>> + xenbus_switch_state(front_info->xb_dev, XenbusStateReconfiguring);
>> }
>>
>> static void backend_on_changed(struct xenbus_device *xb_dev,
>> @@ -148,6 +310,7 @@ static int xen_drv_probe(struct xenbus_device *xb_dev,
>>
>> front_info->xb_dev = xb_dev;
>> spin_lock_init(&front_info->io_lock);
>> + front_info->drm_pdrv_registered = false;
>> dev_set_drvdata(&xb_dev->dev, front_info);
>> return xenbus_switch_state(xb_dev, XenbusStateInitialising);
>> }
>> diff --git a/drivers/gpu/drm/xen/xen_drm_front.h b/drivers/gpu/drm/xen/xen_drm_front.h
>> index 13f22736ae02..9ed5bfb248d0 100644
>> --- a/drivers/gpu/drm/xen/xen_drm_front.h
>> +++ b/drivers/gpu/drm/xen/xen_drm_front.h
>> @@ -19,6 +19,8 @@
>> #ifndef __XEN_DRM_FRONT_H_
>> #define __XEN_DRM_FRONT_H_
>>
>> +#include <linux/scatterlist.h>
>> +
>> #include "xen_drm_front_cfg.h"
>>
>> #ifndef GRANT_INVALID_REF
>> @@ -30,16 +32,38 @@
>> #define GRANT_INVALID_REF 0
>> #endif
>>
>> +struct xen_drm_front_drm_pipeline;
>> +
>> struct xen_drm_front_ops {
>> + int (*mode_set)(struct xen_drm_front_drm_pipeline *pipeline,
>> + uint32_t x, uint32_t y, uint32_t width, uint32_t height,
>> + uint32_t bpp, uint64_t fb_cookie);
>> + int (*dbuf_create_from_pages)(struct xen_drm_front_info *front_info,
>> + uint64_t dbuf_cookie, uint32_t width, uint32_t height,
>> + uint32_t bpp, uint64_t size, struct page **pages);
>> + int (*dbuf_create_from_sgt)(struct xen_drm_front_info *front_info,
>> + uint64_t dbuf_cookie, uint32_t width, uint32_t height,
>> + uint32_t bpp, uint64_t size, struct sg_table *sgt);
>> + int (*dbuf_destroy)(struct xen_drm_front_info *front_info,
>> + uint64_t dbuf_cookie);
>> + int (*fb_attach)(struct xen_drm_front_info *front_info,
>> + uint64_t dbuf_cookie, uint64_t fb_cookie,
>> + uint32_t width, uint32_t height, uint32_t pixel_format);
>> + int (*fb_detach)(struct xen_drm_front_info *front_info,
>> + uint64_t fb_cookie);
>> + int (*page_flip)(struct xen_drm_front_info *front_info,
>> + int conn_idx, uint64_t fb_cookie);
>> /* CAUTION! this is called with a spin_lock held! */
>> void (*on_frame_done)(struct platform_device *pdev,
>> int conn_idx, uint64_t fb_cookie);
>> + void (*drm_last_close)(struct xen_drm_front_info *front_info);
>> };
>>
>> struct xen_drm_front_info {
>> struct xenbus_device *xb_dev;
>> /* to protect data between backend IO code and interrupt handler */
>> spinlock_t io_lock;
>> + bool drm_pdrv_registered;
>> /* virtual DRM platform device */
>> struct platform_device *drm_pdev;
>>
>> diff --git a/drivers/gpu/drm/xen/xen_drm_front_drv.c b/drivers/gpu/drm/xen/xen_drm_front_drv.c
>> new file mode 100644
>> index 000000000000..b3764d5ed0f6
>> --- /dev/null
>> +++ b/drivers/gpu/drm/xen/xen_drm_front_drv.c
>> @@ -0,0 +1,211 @@
>> +/*
>> + * Xen para-virtual DRM device
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License as published by
>> + * the Free Software Foundation; either version 2 of the License, or
>> + * (at your option) any later version.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
>> + * GNU General Public License for more details.
>> + *
>> + * Copyright (C) 2016-2018 EPAM Systems Inc.
>> + *
>> + * Author: Oleksandr Andrushchenko <[email protected]>
>> + */
>> +
>> +#include <drm/drmP.h>
>> +#include <drm/drm_gem.h>
>> +#include <drm/drm_atomic_helper.h>
>> +
>> +#include "xen_drm_front.h"
>> +#include "xen_drm_front_cfg.h"
>> +#include "xen_drm_front_drv.h"
>> +
>> +static int dumb_create(struct drm_file *filp,
>> + struct drm_device *dev, struct drm_mode_create_dumb *args)
>> +{
>> + return -EINVAL;
>> +}
>> +
>> +static void free_object(struct drm_gem_object *obj)
>> +{
>> + struct xen_drm_front_drm_info *drm_info = obj->dev->dev_private;
>> +
>> + drm_info->front_ops->dbuf_destroy(drm_info->front_info,
>> + xen_drm_front_dbuf_to_cookie(obj));
>> +}
>> +
>> +static void on_frame_done(struct platform_device *pdev,
>> + int conn_idx, uint64_t fb_cookie)
>> +{
>> +}
>> +
>> +static void lastclose(struct drm_device *dev)
>> +{
>> + struct xen_drm_front_drm_info *drm_info = dev->dev_private;
>> +
>> + drm_info->front_ops->drm_last_close(drm_info->front_info);
>> +}
>> +
>> +static int gem_mmap(struct file *filp, struct vm_area_struct *vma)
>> +{
>> + return -EINVAL;
>> +}
>> +
>> +static struct sg_table *prime_get_sg_table(struct drm_gem_object *obj)
>> +{
>> + return NULL;
>> +}
>> +
>> +static struct drm_gem_object *prime_import_sg_table(struct drm_device *dev,
>> + struct dma_buf_attachment *attach, struct sg_table *sgt)
>> +{
>> + return NULL;
>> +}
>> +
>> +static void *prime_vmap(struct drm_gem_object *obj)
>> +{
>> + return NULL;
>> +}
>> +
>> +static void prime_vunmap(struct drm_gem_object *obj, void *vaddr)
>> +{
>> +}
>> +
>> +static int prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
>> +{
>> + return -EINVAL;
>> +}
>> +
>> +static const struct file_operations xendrm_fops = {
>> + .owner = THIS_MODULE,
>> + .open = drm_open,
>> + .release = drm_release,
>> + .unlocked_ioctl = drm_ioctl,
>> +#ifdef CONFIG_COMPAT
>> + .compat_ioctl = drm_compat_ioctl,
>> +#endif
>> + .poll = drm_poll,
>> + .read = drm_read,
>> + .llseek = no_llseek,
>> + .mmap = gem_mmap,
>> +};
>> +
>> +static const struct vm_operations_struct xen_drm_vm_ops = {
>> + .open = drm_gem_vm_open,
>> + .close = drm_gem_vm_close,
>> +};
>> +
>> +struct drm_driver xen_drm_driver = {
>> + .driver_features = DRIVER_GEM | DRIVER_MODESET |
>> + DRIVER_PRIME | DRIVER_ATOMIC,
>> + .lastclose = lastclose,
>> + .gem_free_object_unlocked = free_object,
>> + .gem_vm_ops = &xen_drm_vm_ops,
>> + .prime_handle_to_fd = drm_gem_prime_handle_to_fd,
>> + .prime_fd_to_handle = drm_gem_prime_fd_to_handle,
>> + .gem_prime_import = drm_gem_prime_import,
>> + .gem_prime_export = drm_gem_prime_export,
>> + .gem_prime_get_sg_table = prime_get_sg_table,
>> + .gem_prime_import_sg_table = prime_import_sg_table,
>> + .gem_prime_vmap = prime_vmap,
>> + .gem_prime_vunmap = prime_vunmap,
>> + .gem_prime_mmap = prime_mmap,
>> + .dumb_create = dumb_create,
>> + .fops = &xendrm_fops,
>> + .name = "xendrm-du",
>> + .desc = "Xen PV DRM Display Unit",
>> + .date = "20161109",
>> + .major = 1,
>> + .minor = 0,
>> +};
>> +
>> +int xen_drm_front_drv_probe(struct platform_device *pdev,
>> + struct xen_drm_front_ops *front_ops)
>> +{
>> + struct xen_drm_front_cfg *cfg = dev_get_platdata(&pdev->dev);
>> + struct xen_drm_front_drm_info *drm_info;
>> + struct drm_device *dev;
>> + int ret;
>> +
>> + DRM_INFO("Creating %s\n", xen_drm_driver.desc);
>> +
>> + drm_info = devm_kzalloc(&pdev->dev, sizeof(*drm_info), GFP_KERNEL);
>> + if (!drm_info)
>> + return -ENOMEM;
>> +
>> + drm_info->front_ops = front_ops;
>> + drm_info->front_ops->on_frame_done = on_frame_done;
>> + drm_info->front_info = cfg->front_info;
>> +
>> + dev = drm_dev_alloc(&xen_drm_driver, &pdev->dev);
>> + if (!dev)
>> + return -ENOMEM;
>> +
>> + drm_info->drm_dev = dev;
>> +
>> + drm_info->cfg = cfg;
>> + dev->dev_private = drm_info;
>> + platform_set_drvdata(pdev, drm_info);
>> +
>> + ret = drm_vblank_init(dev, cfg->num_connectors);
>> + if (ret) {
>> + DRM_ERROR("Failed to initialize vblank, ret %d\n", ret);
>> + return ret;
>> + }
>> +
>> + dev->irq_enabled = 1;
>> +
>> + ret = drm_dev_register(dev, 0);
>> + if (ret)
>> + goto fail_register;
>> +
>> + DRM_INFO("Initialized %s %d.%d.%d %s on minor %d\n",
>> + xen_drm_driver.name, xen_drm_driver.major,
>> + xen_drm_driver.minor, xen_drm_driver.patchlevel,
>> + xen_drm_driver.date, dev->primary->index);
>> +
>> + return 0;
>> +
>> +fail_register:
>> + drm_dev_unregister(dev);
>> + drm_mode_config_cleanup(dev);
>> + return ret;
>> +}
>> +
>> +int xen_drm_front_drv_remove(struct platform_device *pdev)
>> +{
>> + struct xen_drm_front_drm_info *drm_info = platform_get_drvdata(pdev);
>> + struct drm_device *dev = drm_info->drm_dev;
>> +
>> + if (dev) {
>> + drm_dev_unregister(dev);
>> + drm_atomic_helper_shutdown(dev);
>> + drm_mode_config_cleanup(dev);
>> + drm_dev_unref(dev);
>> + }
>> + return 0;
>> +}
>> +
>> +bool xen_drm_front_drv_is_used(struct platform_device *pdev)
>> +{
>> + struct xen_drm_front_drm_info *drm_info = platform_get_drvdata(pdev);
>> + struct drm_device *dev;
>> +
>> + if (!drm_info)
>> + return false;
>> +
>> + dev = drm_info->drm_dev;
>> + if (!dev)
>> + return false;
>> +
>> + /*
>> + * FIXME: the code below must be protected by drm_global_mutex,
>> +	 * but it is not accessible to us. Anyway, there is a race condition,
>> +	 * but we will retry.
>> + */
>> + return dev->open_count != 0;
>> +}
>> diff --git a/drivers/gpu/drm/xen/xen_drm_front_drv.h b/drivers/gpu/drm/xen/xen_drm_front_drv.h
>> new file mode 100644
>> index 000000000000..aaa476535c13
>> --- /dev/null
>> +++ b/drivers/gpu/drm/xen/xen_drm_front_drv.h
>> @@ -0,0 +1,60 @@
>> +/*
>> + * Xen para-virtual DRM device
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License as published by
>> + * the Free Software Foundation; either version 2 of the License, or
>> + * (at your option) any later version.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
>> + * GNU General Public License for more details.
>> + *
>> + * Copyright (C) 2016-2018 EPAM Systems Inc.
>> + *
>> + * Author: Oleksandr Andrushchenko <[email protected]>
>> + */
>> +
>> +#ifndef __XEN_DRM_FRONT_DRV_H_
>> +#define __XEN_DRM_FRONT_DRV_H_
>> +
>> +#include <drm/drmP.h>
>> +
>> +#include "xen_drm_front.h"
>> +#include "xen_drm_front_cfg.h"
>> +
>> +struct xen_drm_front_drm_pipeline {
>> + struct xen_drm_front_drm_info *drm_info;
>> +
>> + int index;
>> +};
>> +
>> +struct xen_drm_front_drm_info {
>> + struct xen_drm_front_info *front_info;
>> + struct xen_drm_front_ops *front_ops;
>> + struct drm_device *drm_dev;
>> + struct xen_drm_front_cfg *cfg;
>> +};
>> +
>> +static inline uint64_t xen_drm_front_fb_to_cookie(
>> + struct drm_framebuffer *fb)
>> +{
>> + return (uint64_t)fb;
>> +}
>> +
>> +static inline uint64_t xen_drm_front_dbuf_to_cookie(
>> + struct drm_gem_object *gem_obj)
>> +{
>> + return (uint64_t)gem_obj;
>> +}
>> +
>> +int xen_drm_front_drv_probe(struct platform_device *pdev,
>> + struct xen_drm_front_ops *front_ops);
>> +
>> +int xen_drm_front_drv_remove(struct platform_device *pdev);
>> +
>> +bool xen_drm_front_drv_is_used(struct platform_device *pdev);
>> +
>> +#endif /* __XEN_DRM_FRONT_DRV_H_ */
>> +
On 03/05/2018 11:23 AM, Daniel Vetter wrote:
> On Wed, Feb 21, 2018 at 10:03:40AM +0200, Oleksandr Andrushchenko wrote:
>> From: Oleksandr Andrushchenko <[email protected]>
>>
>> Implement kernel modesetting/connector handling using
>> DRM simple KMS helper pipeline:
>>
>> - implement KMS part of the driver with the help of DRM
>>    simple pipeline helper which is possible due to the fact
>>    that the para-virtualized driver only supports a single
>> (primary) plane:
>> - initialize connectors according to XenStore configuration
>> - handle frame done events from the backend
>> - generate vblank events
>> - create and destroy frame buffers and propagate those
>> to the backend
>> - propagate set/reset mode configuration to the backend on display
>> enable/disable callbacks
>> - send page flip request to the backend and implement logic for
>> reporting backend IO errors on prepare fb callback
>>
>> - implement virtual connector handling:
>> - support only pixel formats suitable for single plane modes
>> - make sure the connector is always connected
>> - support a single video mode as per para-virtualized driver
>> configuration
>>
>> Signed-off-by: Oleksandr Andrushchenko <[email protected]>
> I think once you've removed the midlayer in the previous patch it would
> makes sense to merge the 2 patches into 1.
ok, will squash the two
>
> Bunch more comments below.
> -Daniel
>
>> ---
>> drivers/gpu/drm/xen/Makefile | 2 +
>> drivers/gpu/drm/xen/xen_drm_front_conn.c | 125 +++++++++++++
>> drivers/gpu/drm/xen/xen_drm_front_conn.h | 35 ++++
>> drivers/gpu/drm/xen/xen_drm_front_drv.c | 15 ++
>> drivers/gpu/drm/xen/xen_drm_front_drv.h | 12 ++
>> drivers/gpu/drm/xen/xen_drm_front_kms.c | 299 +++++++++++++++++++++++++++++++
>> drivers/gpu/drm/xen/xen_drm_front_kms.h | 30 ++++
>> 7 files changed, 518 insertions(+)
>> create mode 100644 drivers/gpu/drm/xen/xen_drm_front_conn.c
>> create mode 100644 drivers/gpu/drm/xen/xen_drm_front_conn.h
>> create mode 100644 drivers/gpu/drm/xen/xen_drm_front_kms.c
>> create mode 100644 drivers/gpu/drm/xen/xen_drm_front_kms.h
>>
>> diff --git a/drivers/gpu/drm/xen/Makefile b/drivers/gpu/drm/xen/Makefile
>> index d3068202590f..4fcb0da1a9c5 100644
>> --- a/drivers/gpu/drm/xen/Makefile
>> +++ b/drivers/gpu/drm/xen/Makefile
>> @@ -2,6 +2,8 @@
>>
>> drm_xen_front-objs := xen_drm_front.o \
>> xen_drm_front_drv.o \
>> + xen_drm_front_kms.o \
>> + xen_drm_front_conn.o \
>> xen_drm_front_evtchnl.o \
>> xen_drm_front_shbuf.o \
>> xen_drm_front_cfg.o
>> diff --git a/drivers/gpu/drm/xen/xen_drm_front_conn.c b/drivers/gpu/drm/xen/xen_drm_front_conn.c
>> new file mode 100644
>> index 000000000000..d9986a2e1a3b
>> --- /dev/null
>> +++ b/drivers/gpu/drm/xen/xen_drm_front_conn.c
>> @@ -0,0 +1,125 @@
>> +/*
>> + * Xen para-virtual DRM device
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License as published by
>> + * the Free Software Foundation; either version 2 of the License, or
>> + * (at your option) any later version.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
>> + * GNU General Public License for more details.
>> + *
>> + * Copyright (C) 2016-2018 EPAM Systems Inc.
>> + *
>> + * Author: Oleksandr Andrushchenko <[email protected]>
>> + */
>> +
>> +#include <drm/drm_atomic_helper.h>
>> +#include <drm/drm_crtc_helper.h>
>> +
>> +#include <video/videomode.h>
>> +
>> +#include "xen_drm_front_conn.h"
>> +#include "xen_drm_front_drv.h"
>> +
>> +static struct xen_drm_front_drm_pipeline *
>> +to_xen_drm_pipeline(struct drm_connector *connector)
>> +{
>> + return container_of(connector, struct xen_drm_front_drm_pipeline, conn);
>> +}
>> +
>> +static const uint32_t plane_formats[] = {
>> + DRM_FORMAT_RGB565,
>> + DRM_FORMAT_RGB888,
>> + DRM_FORMAT_XRGB8888,
>> + DRM_FORMAT_ARGB8888,
>> + DRM_FORMAT_XRGB4444,
>> + DRM_FORMAT_ARGB4444,
>> + DRM_FORMAT_XRGB1555,
>> + DRM_FORMAT_ARGB1555,
>> +};
>> +
>> +const uint32_t *xen_drm_front_conn_get_formats(int *format_count)
>> +{
>> + *format_count = ARRAY_SIZE(plane_formats);
>> + return plane_formats;
>> +}
>> +
>> +static enum drm_connector_status connector_detect(
>> + struct drm_connector *connector, bool force)
>> +{
>> + if (drm_dev_is_unplugged(connector->dev))
>> + return connector_status_disconnected;
>> +
>> + return connector_status_connected;
>> +}
>> +
>> +#define XEN_DRM_NUM_VIDEO_MODES 1
>> +#define XEN_DRM_CRTC_VREFRESH_HZ 60
>> +
>> +static int connector_get_modes(struct drm_connector *connector)
>> +{
>> + struct xen_drm_front_drm_pipeline *pipeline =
>> + to_xen_drm_pipeline(connector);
>> + struct drm_display_mode *mode;
>> + struct videomode videomode;
>> + int width, height;
>> +
>> + mode = drm_mode_create(connector->dev);
>> + if (!mode)
>> + return 0;
>> +
>> + memset(&videomode, 0, sizeof(videomode));
>> + videomode.hactive = pipeline->width;
>> + videomode.vactive = pipeline->height;
>> + width = videomode.hactive + videomode.hfront_porch +
>> + videomode.hback_porch + videomode.hsync_len;
>> + height = videomode.vactive + videomode.vfront_porch +
>> + videomode.vback_porch + videomode.vsync_len;
>> + videomode.pixelclock = width * height * XEN_DRM_CRTC_VREFRESH_HZ;
>> + mode->type = DRM_MODE_TYPE_PREFERRED | DRM_MODE_TYPE_DRIVER;
>> +
>> + drm_display_mode_from_videomode(&videomode, mode);
>> + drm_mode_probed_add(connector, mode);
>> + return XEN_DRM_NUM_VIDEO_MODES;
>> +}
>> +
>> +static int connector_mode_valid(struct drm_connector *connector,
>> + struct drm_display_mode *mode)
>> +{
>> + struct xen_drm_front_drm_pipeline *pipeline =
>> + to_xen_drm_pipeline(connector);
>> +
>> + if (mode->hdisplay != pipeline->width)
>> + return MODE_ERROR;
>> +
>> + if (mode->vdisplay != pipeline->height)
>> + return MODE_ERROR;
>> +
>> + return MODE_OK;
>> +}
>> +
>> +static const struct drm_connector_helper_funcs connector_helper_funcs = {
>> + .get_modes = connector_get_modes,
>> + .mode_valid = connector_mode_valid,
>> +};
>> +
>> +static const struct drm_connector_funcs connector_funcs = {
>> + .detect = connector_detect,
>> + .fill_modes = drm_helper_probe_single_connector_modes,
>> + .destroy = drm_connector_cleanup,
>> + .reset = drm_atomic_helper_connector_reset,
>> + .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
>> + .atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
>> +};
>> +
>> +int xen_drm_front_conn_init(struct xen_drm_front_drm_info *drm_info,
>> + struct drm_connector *connector)
>> +{
>> + drm_connector_helper_add(connector, &connector_helper_funcs);
>> +
>> + return drm_connector_init(drm_info->drm_dev, connector,
>> + &connector_funcs, DRM_MODE_CONNECTOR_VIRTUAL);
>> +}
>> diff --git a/drivers/gpu/drm/xen/xen_drm_front_conn.h b/drivers/gpu/drm/xen/xen_drm_front_conn.h
>> new file mode 100644
>> index 000000000000..708e80d45985
>> --- /dev/null
>> +++ b/drivers/gpu/drm/xen/xen_drm_front_conn.h
>> @@ -0,0 +1,35 @@
>> +/*
>> + * Xen para-virtual DRM device
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License as published by
>> + * the Free Software Foundation; either version 2 of the License, or
>> + * (at your option) any later version.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
>> + * GNU General Public License for more details.
>> + *
>> + * Copyright (C) 2016-2018 EPAM Systems Inc.
>> + *
>> + * Author: Oleksandr Andrushchenko <[email protected]>
>> + */
>> +
>> +#ifndef __XEN_DRM_FRONT_CONN_H_
>> +#define __XEN_DRM_FRONT_CONN_H_
>> +
>> +#include <drm/drmP.h>
>> +#include <drm/drm_crtc.h>
>> +#include <drm/drm_encoder.h>
>> +
>> +#include <linux/wait.h>
>> +
>> +struct xen_drm_front_drm_info;
>> +
>> +const uint32_t *xen_drm_front_conn_get_formats(int *format_count);
>> +
>> +int xen_drm_front_conn_init(struct xen_drm_front_drm_info *drm_info,
>> + struct drm_connector *connector);
>> +
>> +#endif /* __XEN_DRM_FRONT_CONN_H_ */
>> diff --git a/drivers/gpu/drm/xen/xen_drm_front_drv.c b/drivers/gpu/drm/xen/xen_drm_front_drv.c
>> index b3764d5ed0f6..e8862d26ba27 100644
>> --- a/drivers/gpu/drm/xen/xen_drm_front_drv.c
>> +++ b/drivers/gpu/drm/xen/xen_drm_front_drv.c
>> @@ -23,6 +23,7 @@
>> #include "xen_drm_front.h"
>> #include "xen_drm_front_cfg.h"
>> #include "xen_drm_front_drv.h"
>> +#include "xen_drm_front_kms.h"
>>
>> static int dumb_create(struct drm_file *filp,
>> struct drm_device *dev, struct drm_mode_create_dumb *args)
>> @@ -41,6 +42,13 @@ static void free_object(struct drm_gem_object *obj)
>> static void on_frame_done(struct platform_device *pdev,
>> int conn_idx, uint64_t fb_cookie)
>> {
>> + struct xen_drm_front_drm_info *drm_info = platform_get_drvdata(pdev);
>> +
>> + if (unlikely(conn_idx >= drm_info->cfg->num_connectors))
>> + return;
>> +
>> + xen_drm_front_kms_on_frame_done(&drm_info->pipeline[conn_idx],
>> + fb_cookie);
>> }
>>
>> static void lastclose(struct drm_device *dev)
>> @@ -157,6 +165,12 @@ int xen_drm_front_drv_probe(struct platform_device *pdev,
>> return ret;
>> }
>>
>> + ret = xen_drm_front_kms_init(drm_info);
>> + if (ret) {
>> + DRM_ERROR("Failed to initialize DRM/KMS, ret %d\n", ret);
>> + goto fail_modeset;
>> + }
>> +
>> dev->irq_enabled = 1;
>>
>> ret = drm_dev_register(dev, 0);
>> @@ -172,6 +186,7 @@ int xen_drm_front_drv_probe(struct platform_device *pdev,
>>
>> fail_register:
>> drm_dev_unregister(dev);
>> +fail_modeset:
>> drm_mode_config_cleanup(dev);
>> return ret;
>> }
>> diff --git a/drivers/gpu/drm/xen/xen_drm_front_drv.h b/drivers/gpu/drm/xen/xen_drm_front_drv.h
>> index aaa476535c13..563318b19f34 100644
>> --- a/drivers/gpu/drm/xen/xen_drm_front_drv.h
>> +++ b/drivers/gpu/drm/xen/xen_drm_front_drv.h
>> @@ -20,14 +20,24 @@
>> #define __XEN_DRM_FRONT_DRV_H_
>>
>> #include <drm/drmP.h>
>> +#include <drm/drm_simple_kms_helper.h>
>>
>> #include "xen_drm_front.h"
>> #include "xen_drm_front_cfg.h"
>> +#include "xen_drm_front_conn.h"
>>
>> struct xen_drm_front_drm_pipeline {
>> struct xen_drm_front_drm_info *drm_info;
>>
>> int index;
>> +
>> + struct drm_simple_display_pipe pipe;
>> +
>> + struct drm_connector conn;
>> + /* these are only for connector mode checking */
>> + int width, height;
>> + /* last backend error seen on page flip */
>> + int pgflip_last_error;
>> };
>>
>> struct xen_drm_front_drm_info {
>> @@ -35,6 +45,8 @@ struct xen_drm_front_drm_info {
>> struct xen_drm_front_ops *front_ops;
>> struct drm_device *drm_dev;
>> struct xen_drm_front_cfg *cfg;
>> +
>> + struct xen_drm_front_drm_pipeline pipeline[XEN_DRM_FRONT_MAX_CRTCS];
>> };
>>
>> static inline uint64_t xen_drm_front_fb_to_cookie(
>> diff --git a/drivers/gpu/drm/xen/xen_drm_front_kms.c b/drivers/gpu/drm/xen/xen_drm_front_kms.c
>> new file mode 100644
>> index 000000000000..ad94c28835cd
>> --- /dev/null
>> +++ b/drivers/gpu/drm/xen/xen_drm_front_kms.c
>> @@ -0,0 +1,299 @@
>> +/*
>> + * Xen para-virtual DRM device
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License as published by
>> + * the Free Software Foundation; either version 2 of the License, or
>> + * (at your option) any later version.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
>> + * GNU General Public License for more details.
>> + *
>> + * Copyright (C) 2016-2018 EPAM Systems Inc.
>> + *
>> + * Author: Oleksandr Andrushchenko <[email protected]>
>> + */
>> +
>> +#include "xen_drm_front_kms.h"
>> +
>> +#include <drm/drmP.h>
>> +#include <drm/drm_atomic.h>
>> +#include <drm/drm_atomic_helper.h>
>> +#include <drm/drm_gem.h>
>> +#include <drm/drm_gem_framebuffer_helper.h>
>> +
>> +#include "xen_drm_front.h"
>> +#include "xen_drm_front_conn.h"
>> +#include "xen_drm_front_drv.h"
>> +
>> +static struct xen_drm_front_drm_pipeline *
>> +to_xen_drm_pipeline(struct drm_simple_display_pipe *pipe)
>> +{
>> + return container_of(pipe, struct xen_drm_front_drm_pipeline, pipe);
>> +}
>> +
>> +static void fb_destroy(struct drm_framebuffer *fb)
>> +{
>> + struct xen_drm_front_drm_info *drm_info = fb->dev->dev_private;
>> +
>> + drm_info->front_ops->fb_detach(drm_info->front_info,
>> + xen_drm_front_fb_to_cookie(fb));
>> + drm_gem_fb_destroy(fb);
>> +}
>> +
>> +static struct drm_framebuffer_funcs fb_funcs = {
>> + .destroy = fb_destroy,
>> +};
>> +
>> +static struct drm_framebuffer *fb_create(struct drm_device *dev,
>> + struct drm_file *filp, const struct drm_mode_fb_cmd2 *mode_cmd)
>> +{
>> + struct xen_drm_front_drm_info *drm_info = dev->dev_private;
>> + static struct drm_framebuffer *fb;
>> + struct drm_gem_object *gem_obj;
>> + int ret;
>> +
>> + fb = drm_gem_fb_create_with_funcs(dev, filp, mode_cmd, &fb_funcs);
>> + if (IS_ERR_OR_NULL(fb))
>> + return fb;
>> +
>> + gem_obj = drm_gem_object_lookup(filp, mode_cmd->handles[0]);
>> + if (!gem_obj) {
>> + DRM_ERROR("Failed to lookup GEM object\n");
>> + ret = -ENOENT;
>> + goto fail;
>> + }
>> +
>> + drm_gem_object_unreference_unlocked(gem_obj);
>> +
>> + ret = drm_info->front_ops->fb_attach(
>> + drm_info->front_info,
>> + xen_drm_front_dbuf_to_cookie(gem_obj),
>> + xen_drm_front_fb_to_cookie(fb),
>> + fb->width, fb->height, fb->format->format);
>> + if (ret < 0) {
>> + DRM_ERROR("Back failed to attach FB %p: %d\n", fb, ret);
>> + goto fail;
>> + }
>> +
>> + return fb;
>> +
>> +fail:
>> + drm_gem_fb_destroy(fb);
>> + return ERR_PTR(ret);
>> +}
>> +
>> +static const struct drm_mode_config_funcs mode_config_funcs = {
>> + .fb_create = fb_create,
>> + .atomic_check = drm_atomic_helper_check,
>> + .atomic_commit = drm_atomic_helper_commit,
>> +};
>> +
>> +static int display_set_config(struct drm_simple_display_pipe *pipe,
>> + struct drm_framebuffer *fb)
>> +{
>> + struct xen_drm_front_drm_pipeline *pipeline =
>> + to_xen_drm_pipeline(pipe);
>> + struct drm_crtc *crtc = &pipe->crtc;
>> + struct xen_drm_front_drm_info *drm_info = pipeline->drm_info;
>> + int ret;
>> +
>> + if (fb)
>> + ret = drm_info->front_ops->mode_set(pipeline,
>> + crtc->x, crtc->y,
>> + fb->width, fb->height, fb->format->cpp[0] * 8,
>> + xen_drm_front_fb_to_cookie(fb));
>> + else
>> + ret = drm_info->front_ops->mode_set(pipeline,
>> + 0, 0, 0, 0, 0,
>> + xen_drm_front_fb_to_cookie(NULL));
> This is a bit much layering, the if (fb) case corresponds to the
> display_enable/disable hooks, pls fold that in instead of the indirection.
> simple helpers guarantee that when the display is on, then you have an fb.
1. Ok, the only reason for having this function was to keep
the front_ops->mode_set calls in one place (it will be
refactored into a direct call, not via front_ops).
2. The if (fb) check was not meant to guard against the simple
helpers handing us an unexpected value: there is nothing wrong
with them. The check distinguishes the two cases in which this
function is called: with fb != NULL on display enable and with
fb == NULL on display disable, i.e. fb was used as a flag here.
3. I will remove this function entirely and make direct calls
to the backend from .display_{enable|disable}
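To make the plan concrete, here is a rough, self-contained C sketch of
what the reworked hooks could look like, with the backend call stubbed
out so it can stand alone. The type and function names (mode_req,
backend_mode_set, the parameter layout) are illustrative placeholders,
not the actual frontend/backend API:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical stand-in for the parameters of a backend mode-set request. */
struct mode_req {
	int x, y, width, height, bpp;
	unsigned long fb_cookie;
};

/* Records the last request "sent" to the backend, for illustration. */
static struct mode_req last_req;

/* Stub for the direct backend call that replaces front_ops->mode_set(). */
static int backend_mode_set(const struct mode_req *req)
{
	last_req = *req;
	return 0;
}

/* On enable: push the current framebuffer's mode to the backend. */
static int display_enable(int x, int y, int fb_w, int fb_h, int fb_bpp,
			  unsigned long fb_cookie)
{
	struct mode_req req = { x, y, fb_w, fb_h, fb_bpp, fb_cookie };

	return backend_mode_set(&req);
}

/* On disable: push an all-zero mode, i.e. "no framebuffer". */
static int display_disable(void)
{
	struct mode_req req;

	memset(&req, 0, sizeof(req));
	return backend_mode_set(&req);
}
```

This removes the fb-as-a-flag indirection: each hook knows which case
it is in and calls the backend directly.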
>
> Maybe we need to fix the docs, pls check and if that's not clear, submit a
> kernel-doc patch for the simple pipe helpers.
no, nothing wrong here, just see my reasoning above
>> +
>> + if (ret)
>> + DRM_ERROR("Failed to set mode to back: %d\n", ret);
>> +
>> + return ret;
>> +}
>> +
>> +static void display_enable(struct drm_simple_display_pipe *pipe,
>> + struct drm_crtc_state *crtc_state)
>> +{
>> + struct drm_crtc *crtc = &pipe->crtc;
>> + struct drm_framebuffer *fb = pipe->plane.state->fb;
>> +
>> + if (display_set_config(pipe, fb) == 0)
>> + drm_crtc_vblank_on(crtc);
> I get the impression your driver doesn't support vblanks (the page flip
> code at least looks like it's only generating a single event),
yes, this is true
> you also
> don't have a enable/disable_vblank implementation.
this is because with my previous patches [1] these are now
handled by the simple helpers, so there is no need to provide
dummy ones in the driver
> If there's no vblank
> handling then this shouldn't be needed.
yes, I will rework the code, please see below
>> + else
>> + DRM_ERROR("Failed to enable display\n");
>> +}
>> +
>> +static void display_disable(struct drm_simple_display_pipe *pipe)
>> +{
>> + struct drm_crtc *crtc = &pipe->crtc;
>> +
>> + display_set_config(pipe, NULL);
>> + drm_crtc_vblank_off(crtc);
>> + /* final check for stalled events */
>> + if (crtc->state->event && !crtc->state->active) {
>> + unsigned long flags;
>> +
>> + spin_lock_irqsave(&crtc->dev->event_lock, flags);
>> + drm_crtc_send_vblank_event(crtc, crtc->state->event);
>> + spin_unlock_irqrestore(&crtc->dev->event_lock, flags);
>> + crtc->state->event = NULL;
>> + }
>> +}
>> +
>> +void xen_drm_front_kms_on_frame_done(
>> + struct xen_drm_front_drm_pipeline *pipeline,
>> + uint64_t fb_cookie)
>> +{
>> + drm_crtc_handle_vblank(&pipeline->pipe.crtc);
> Hm, again this doesn't look like real vblank, but only a page-flip done
> event. If that's correct then please don't use the vblank machinery, but
> just store the event internally (protected with your own private spinlock)
Why can't I use &dev->event_lock? I will need to take it anyway
while handling page-flip events, so that I can call
drm_crtc_send_vblank_event?
> and send it out using drm_crtc_send_vblank_event directly. No calls to
> arm_vblank_event or any of the other vblank infrastructure should be
> needed.
will re-work, e.g. store the drm_pending_vblank_event on
.display_update and send it out on the page flip done event
from the backend
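The handoff I have in mind looks roughly like this (a self-contained C
model, purely illustrative: a C11 atomic stands in for the spinlock-
protected pointer, and pending_event/on_frame_done are made-up names,
with drm_crtc_send_vblank_event replaced by a flag):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

/* Hypothetical stand-in for struct drm_pending_vblank_event. */
struct pending_event {
	int sent;
};

/* Event stashed by .display_update, consumed on "page flip done". */
static _Atomic(struct pending_event *) pending;

/* .display_update: stash the event instead of arming vblank machinery. */
static void display_update(struct pending_event *ev)
{
	/* real code: take the lock, store crtc->state->event */
	atomic_store(&pending, ev);
}

/* Backend "page flip done" notification: deliver the event exactly once. */
static void on_frame_done(void)
{
	/* atomically take and clear, so the event cannot be sent twice */
	struct pending_event *ev = atomic_exchange(&pending, NULL);

	if (ev)
		ev->sent = 1; /* real code: drm_crtc_send_vblank_event() */
}
```

So no calls into the vblank infrastructure at all: the event is stored
on update and sent out directly when the backend reports completion.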
> Also please remove the drm_vblank_init() call, since your hw doesn't
> really have vblanks. And exposing vblanks to userspace without
> implementing them is confusing.
will remove vblank handling entirely with the re-work above
>
>> +}
>> +
>> +static void display_send_page_flip(struct drm_simple_display_pipe *pipe,
>> + struct drm_plane_state *old_plane_state)
>> +{
>> + struct drm_plane_state *plane_state = drm_atomic_get_new_plane_state(
>> + old_plane_state->state, &pipe->plane);
>> +
>> + /*
>> + * If old_plane_state->fb is NULL and plane_state->fb is not,
>> + * then this is an atomic commit which will enable display.
>> + * If old_plane_state->fb is not NULL and plane_state->fb is,
>> + * then this is an atomic commit which will disable display.
>> + * Ignore these and do not send page flip as this framebuffer will be
>> + * sent to the backend as a part of display_set_config call.
>> + */
>> + if (old_plane_state->fb && plane_state->fb) {
>> + struct xen_drm_front_drm_pipeline *pipeline =
>> + to_xen_drm_pipeline(pipe);
>> + struct xen_drm_front_drm_info *drm_info = pipeline->drm_info;
>> + int ret;
>> +
>> + ret = drm_info->front_ops->page_flip(drm_info->front_info,
>> + pipeline->index,
>> + xen_drm_front_fb_to_cookie(plane_state->fb));
>> + pipeline->pgflip_last_error = ret;
>> + if (ret) {
>> + DRM_ERROR("Failed to send page flip request to backend: %d\n", ret);
>> + /*
>> + * As we are at commit stage the DRM core will anyways
>> + * wait for the vblank and knows nothing about our
>> + * failure. The best we can do is to handle
>> + * vblank now, so there is no vblank/flip_done
>> + * time outs
>> + */
>> + drm_crtc_handle_vblank(&pipeline->pipe.crtc);
>> + }
>> + }
>> +}
>> +
>> +static int display_prepare_fb(struct drm_simple_display_pipe *pipe,
>> + struct drm_plane_state *plane_state)
>> +{
>> + struct xen_drm_front_drm_pipeline *pipeline =
>> + to_xen_drm_pipeline(pipe);
>> +
>> + if (pipeline->pgflip_last_error) {
>> + int ret;
>> +
>> + /* if previous page flip didn't succeed then report the error */
>> + ret = pipeline->pgflip_last_error;
>> + /* and let us try to page flip next time */
>> + pipeline->pgflip_last_error = 0;
>> + return ret;
>> + }
> Nope, this isn't how the uapi works. If your flips fail then we might need
> to add some error status thing to the drm events, but you can't make the
> next flip fail.
Well, yes, there is no way for me to tell user-space that a
page flip has failed, which is why I tried this workaround with
the next page flip. The reason for it is that if, for example,
we are disconnected from the backend, there is no way for me to
tell user-space: please stop sending page flips. If the backend
can recover and it was a one-time error, then yes, this code
does the wrong thing (it fails the current page flip); but if
the error state is persistent, then I can tell user-space to
stop by returning errors. This is a trade-off which I am not
sure how to resolve correctly. Do you think I should remove
this workaround completely?
> -Daniel
>
>> + return drm_gem_fb_prepare_fb(&pipe->plane, plane_state);
>> +}
>> +
>> +static void display_update(struct drm_simple_display_pipe *pipe,
>> + struct drm_plane_state *old_plane_state)
>> +{
>> + struct drm_crtc *crtc = &pipe->crtc;
>> + struct drm_pending_vblank_event *event;
>> +
>> + event = crtc->state->event;
>> + if (event) {
>> + struct drm_device *dev = crtc->dev;
>> + unsigned long flags;
>> +
>> + crtc->state->event = NULL;
>> +
>> + spin_lock_irqsave(&dev->event_lock, flags);
>> + if (drm_crtc_vblank_get(crtc) == 0)
>> + drm_crtc_arm_vblank_event(crtc, event);
>> + else
>> + drm_crtc_send_vblank_event(crtc, event);
>> + spin_unlock_irqrestore(&dev->event_lock, flags);
>> + }
>> + /*
>> + * Send page flip request to the backend *after* we have event armed/
>> + * sent above, so on page flip done event from the backend we can
>> + * deliver it while handling vblank.
>> + */
>> + display_send_page_flip(pipe, old_plane_state);
>> +}
>> +
>> +static const struct drm_simple_display_pipe_funcs display_funcs = {
>> + .enable = display_enable,
>> + .disable = display_disable,
>> + .prepare_fb = display_prepare_fb,
>> + .update = display_update,
>> +};
>> +
>> +static int display_pipe_init(struct xen_drm_front_drm_info *drm_info,
>> + int index, struct xen_drm_front_cfg_connector *cfg,
>> + struct xen_drm_front_drm_pipeline *pipeline)
>> +{
>> + struct drm_device *dev = drm_info->drm_dev;
>> + const uint32_t *formats;
>> + int format_count;
>> + int ret;
>> +
>> + pipeline->drm_info = drm_info;
>> + pipeline->index = index;
>> + pipeline->height = cfg->height;
>> + pipeline->width = cfg->width;
>> +
>> + ret = xen_drm_front_conn_init(drm_info, &pipeline->conn);
>> + if (ret)
>> + return ret;
>> +
>> + formats = xen_drm_front_conn_get_formats(&format_count);
>> +
>> + return drm_simple_display_pipe_init(dev, &pipeline->pipe,
>> + &display_funcs, formats, format_count,
>> + NULL, &pipeline->conn);
>> +}
>> +
>> +int xen_drm_front_kms_init(struct xen_drm_front_drm_info *drm_info)
>> +{
>> + struct drm_device *dev = drm_info->drm_dev;
>> + int i, ret;
>> +
>> + drm_mode_config_init(dev);
>> +
>> + dev->mode_config.min_width = 0;
>> + dev->mode_config.min_height = 0;
>> + dev->mode_config.max_width = 4095;
>> + dev->mode_config.max_height = 2047;
>> + dev->mode_config.funcs = &mode_config_funcs;
>> +
>> + for (i = 0; i < drm_info->cfg->num_connectors; i++) {
>> + struct xen_drm_front_cfg_connector *cfg =
>> + &drm_info->cfg->connectors[i];
>> + struct xen_drm_front_drm_pipeline *pipeline =
>> + &drm_info->pipeline[i];
>> +
>> + ret = display_pipe_init(drm_info, i, cfg, pipeline);
>> + if (ret) {
>> + drm_mode_config_cleanup(dev);
>> + return ret;
>> + }
>> + }
>> +
>> + drm_mode_config_reset(dev);
>> + return 0;
>> +}
>> diff --git a/drivers/gpu/drm/xen/xen_drm_front_kms.h b/drivers/gpu/drm/xen/xen_drm_front_kms.h
>> new file mode 100644
>> index 000000000000..65a50033bb9b
>> --- /dev/null
>> +++ b/drivers/gpu/drm/xen/xen_drm_front_kms.h
>> @@ -0,0 +1,30 @@
>> +/*
>> + * Xen para-virtual DRM device
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License as published by
>> + * the Free Software Foundation; either version 2 of the License, or
>> + * (at your option) any later version.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
>> + * GNU General Public License for more details.
>> + *
>> + * Copyright (C) 2016-2018 EPAM Systems Inc.
>> + *
>> + * Author: Oleksandr Andrushchenko <[email protected]>
>> + */
>> +
>> +#ifndef __XEN_DRM_FRONT_KMS_H_
>> +#define __XEN_DRM_FRONT_KMS_H_
>> +
>> +#include "xen_drm_front_drv.h"
>> +
>> +int xen_drm_front_kms_init(struct xen_drm_front_drm_info *drm_info);
>> +
>> +void xen_drm_front_kms_on_frame_done(
>> + struct xen_drm_front_drm_pipeline *pipeline,
>> + uint64_t fb_cookie);
>> +
>> +#endif /* __XEN_DRM_FRONT_KMS_H_ */
>> --
>> 2.7.4
>>
>> _______________________________________________
>> dri-devel mailing list
>> [email protected]
>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
[1] https://patchwork.kernel.org/patch/10211997/
On 03/05/2018 11:32 AM, Daniel Vetter wrote:
> On Wed, Feb 21, 2018 at 10:03:41AM +0200, Oleksandr Andrushchenko wrote:
>> From: Oleksandr Andrushchenko <[email protected]>
>>
>> Implement GEM handling depending on driver mode of operation:
>> depending on the requirements for the para-virtualized environment, namely
>> requirements dictated by the accompanying DRM/(v)GPU drivers running in both
>> host and guest environments, number of operating modes of para-virtualized
>> display driver are supported:
>> - display buffers can be allocated by either frontend driver or backend
>> - display buffers can be allocated to be contiguous in memory or not
>>
>> Note! Frontend driver itself has no dependency on contiguous memory for
>> its operation.
>>
>> 1. Buffers allocated by the frontend driver.
>>
>> The below modes of operation are configured at compile-time via
>> frontend driver's kernel configuration.
>>
>> 1.1. Front driver configured to use GEM CMA helpers
>> This use-case is useful when used with accompanying DRM/vGPU driver in
>> guest domain which was designed to only work with contiguous buffers,
>> e.g. DRM driver based on GEM CMA helpers: such drivers can only import
>> contiguous PRIME buffers, thus requiring frontend driver to provide
>> such. In order to implement this mode of operation para-virtualized
>> frontend driver can be configured to use GEM CMA helpers.
>>
>> 1.2. Front driver doesn't use GEM CMA
>> If accompanying drivers can cope with non-contiguous memory then, to
>> lower pressure on CMA subsystem of the kernel, driver can allocate
>> buffers from system memory.
>>
>> Note! If used with accompanying DRM/(v)GPU drivers this mode of operation
>> may require IOMMU support on the platform, so accompanying DRM/vGPU
>> hardware can still reach display buffer memory while importing PRIME
>> buffers from the frontend driver.
>>
>> 2. Buffers allocated by the backend
>>
>> This mode of operation is run-time configured via guest domain configuration
>> through XenStore entries.
>>
>> For systems which do not provide IOMMU support, but having specific
>> requirements for display buffers it is possible to allocate such buffers
>> at backend side and share those with the frontend.
>> For example, if host domain is 1:1 mapped and has DRM/GPU hardware expecting
>> physically contiguous memory, this allows implementing zero-copying
>> use-cases.
>>
>> Note! Configuration options 1.1 (contiguous display buffers) and 2 (backend
>> allocated buffers) are not supported at the same time.
>>
>> Signed-off-by: Oleksandr Andrushchenko <[email protected]>
> Some suggestions below for some larger cleanup work.
> -Daniel
>
>> ---
>> drivers/gpu/drm/xen/Kconfig | 13 +
>> drivers/gpu/drm/xen/Makefile | 6 +
>> drivers/gpu/drm/xen/xen_drm_front.h | 74 ++++++
>> drivers/gpu/drm/xen/xen_drm_front_drv.c | 80 ++++++-
>> drivers/gpu/drm/xen/xen_drm_front_drv.h | 1 +
>> drivers/gpu/drm/xen/xen_drm_front_gem.c | 360 ++++++++++++++++++++++++++++
>> drivers/gpu/drm/xen/xen_drm_front_gem.h | 46 ++++
>> drivers/gpu/drm/xen/xen_drm_front_gem_cma.c | 93 +++++++
>> 8 files changed, 667 insertions(+), 6 deletions(-)
>> create mode 100644 drivers/gpu/drm/xen/xen_drm_front_gem.c
>> create mode 100644 drivers/gpu/drm/xen/xen_drm_front_gem.h
>> create mode 100644 drivers/gpu/drm/xen/xen_drm_front_gem_cma.c
>>
>> diff --git a/drivers/gpu/drm/xen/Kconfig b/drivers/gpu/drm/xen/Kconfig
>> index 4cca160782ab..4f4abc91f3b6 100644
>> --- a/drivers/gpu/drm/xen/Kconfig
>> +++ b/drivers/gpu/drm/xen/Kconfig
>> @@ -15,3 +15,16 @@ config DRM_XEN_FRONTEND
>> help
>> Choose this option if you want to enable a para-virtualized
>> frontend DRM/KMS driver for Xen guest OSes.
>> +
>> +config DRM_XEN_FRONTEND_CMA
>> + bool "Use DRM CMA to allocate dumb buffers"
>> + depends on DRM_XEN_FRONTEND
>> + select DRM_KMS_CMA_HELPER
>> + select DRM_GEM_CMA_HELPER
>> + help
>> + Use DRM CMA helpers to allocate display buffers.
>> + This is useful for the use-cases when guest driver needs to
>> + share or export buffers to other drivers which only expect
>> + contiguous buffers.
>> + Note: in this mode driver cannot use buffers allocated
>> + by the backend.
>> diff --git a/drivers/gpu/drm/xen/Makefile b/drivers/gpu/drm/xen/Makefile
>> index 4fcb0da1a9c5..12376ec78fbc 100644
>> --- a/drivers/gpu/drm/xen/Makefile
>> +++ b/drivers/gpu/drm/xen/Makefile
>> @@ -8,4 +8,10 @@ drm_xen_front-objs := xen_drm_front.o \
>> xen_drm_front_shbuf.o \
>> xen_drm_front_cfg.o
>>
>> +ifeq ($(CONFIG_DRM_XEN_FRONTEND_CMA),y)
>> + drm_xen_front-objs += xen_drm_front_gem_cma.o
>> +else
>> + drm_xen_front-objs += xen_drm_front_gem.o
>> +endif
>> +
>> obj-$(CONFIG_DRM_XEN_FRONTEND) += drm_xen_front.o
>> diff --git a/drivers/gpu/drm/xen/xen_drm_front.h b/drivers/gpu/drm/xen/xen_drm_front.h
>> index 9ed5bfb248d0..c6f52c892434 100644
>> --- a/drivers/gpu/drm/xen/xen_drm_front.h
>> +++ b/drivers/gpu/drm/xen/xen_drm_front.h
>> @@ -34,6 +34,80 @@
>>
>> struct xen_drm_front_drm_pipeline;
>>
>> +/*
>> + *******************************************************************************
>> + * Para-virtualized DRM/KMS frontend driver
>> + *******************************************************************************
>> + * This frontend driver implements Xen para-virtualized display
>> + * according to the display protocol described at
>> + * include/xen/interface/io/displif.h
>> + *
>> + *******************************************************************************
>> + * Driver modes of operation in terms of display buffers used
>> + *******************************************************************************
>> + * Depending on the requirements for the para-virtualized environment, namely
>> + * requirements dictated by the accompanying DRM/(v)GPU drivers running in both
>> + * host and guest environments, number of operating modes of para-virtualized
>> + * display driver are supported:
>> + * - display buffers can be allocated by either frontend driver or backend
>> + * - display buffers can be allocated to be contiguous in memory or not
>> + *
>> + * Note! Frontend driver itself has no dependency on contiguous memory for
>> + * its operation.
>> + *
>> + *******************************************************************************
>> + * 1. Buffers allocated by the frontend driver.
>> + *******************************************************************************
>> + *
>> + * The below modes of operation are configured at compile-time via
>> + * frontend driver's kernel configuration.
>> + *
>> + * 1.1. Front driver configured to use GEM CMA helpers
>> + * This use-case is useful when used with accompanying DRM/vGPU driver in
>> + * guest domain which was designed to only work with contiguous buffers,
>> + * e.g. DRM driver based on GEM CMA helpers: such drivers can only import
>> + * contiguous PRIME buffers, thus requiring frontend driver to provide
>> + * such. In order to implement this mode of operation para-virtualized
>> + * frontend driver can be configured to use GEM CMA helpers.
>> + *
>> + * 1.2. Front driver doesn't use GEM CMA
>> + * If accompanying drivers can cope with non-contiguous memory then, to
>> + * lower pressure on CMA subsystem of the kernel, driver can allocate
>> + * buffers from system memory.
>> + *
>> + * Note! If used with accompanying DRM/(v)GPU drivers this mode of operation
>> + * may require IOMMU support on the platform, so accompanying DRM/vGPU
>> + * hardware can still reach display buffer memory while importing PRIME
>> + * buffers from the frontend driver.
>> + *
>> + *******************************************************************************
>> + * 2. Buffers allocated by the backend
>> + *******************************************************************************
>> + *
>> + * This mode of operation is run-time configured via guest domain configuration
>> + * through XenStore entries.
>> + *
>> + * For systems which do not provide IOMMU support, but having specific
>> + * requirements for display buffers it is possible to allocate such buffers
>> + * at backend side and share those with the frontend.
>> + * For example, if host domain is 1:1 mapped and has DRM/GPU hardware expecting
>> + * physically contiguous memory, this allows implementing zero-copying
>> + * use-cases.
>> + *
>> + *******************************************************************************
>> + * Driver limitations
>> + *******************************************************************************
>> + * 1. Configuration options 1.1 (contiguous display buffers) and 2 (backend
>> + * allocated buffers) are not supported at the same time.
>> + *
>> + * 2. Only primary plane without additional properties is supported.
>> + *
>> + * 3. Only one video mode supported which is configured via XenStore.
>> + *
>> + * 4. All CRTCs operate at fixed frequency of 60Hz.
>> + *
>> + ******************************************************************************/
> Since you've typed this all up, pls convert it to kernel-doc and pull it
> into a xen-front.rst driver section in Documentation/gpu/ There's a few
> examples for i915 and vc4 already.
Do you mean to move it, or to keep it in the driver and also
add it to Documentation? I would prefer to move it, so the
description lives in a single place.
>
>> +
>> struct xen_drm_front_ops {
>> int (*mode_set)(struct xen_drm_front_drm_pipeline *pipeline,
>> uint32_t x, uint32_t y, uint32_t width, uint32_t height,
>> diff --git a/drivers/gpu/drm/xen/xen_drm_front_drv.c b/drivers/gpu/drm/xen/xen_drm_front_drv.c
>> index e8862d26ba27..35e7e9cda9d1 100644
>> --- a/drivers/gpu/drm/xen/xen_drm_front_drv.c
>> +++ b/drivers/gpu/drm/xen/xen_drm_front_drv.c
>> @@ -23,12 +23,58 @@
>> #include "xen_drm_front.h"
>> #include "xen_drm_front_cfg.h"
>> #include "xen_drm_front_drv.h"
>> +#include "xen_drm_front_gem.h"
>> #include "xen_drm_front_kms.h"
>>
>> static int dumb_create(struct drm_file *filp,
>> struct drm_device *dev, struct drm_mode_create_dumb *args)
>> {
>> - return -EINVAL;
>> + struct xen_drm_front_drm_info *drm_info = dev->dev_private;
>> + struct drm_gem_object *obj;
>> + int ret;
>> +
>> + ret = drm_info->gem_ops->dumb_create(filp, dev, args);
>> + if (ret)
>> + goto fail;
>> +
>> + obj = drm_gem_object_lookup(filp, args->handle);
>> + if (!obj) {
>> + ret = -ENOENT;
>> + goto fail_destroy;
>> + }
>> +
>> + drm_gem_object_unreference_unlocked(obj);
>> +
>> + /*
>> + * In case of CONFIG_DRM_XEN_FRONTEND_CMA gem_obj is constructed
>> + * via DRM CMA helpers and doesn't have ->pages allocated
>> + * (xendrm_gem_get_pages will return NULL), but instead can provide
>> + * sg table
>> + */
> My recommendation is to use an sg table for everything if you deal with
> mixed objects (CMA, special blocks 1:1 mapped from host, normal pages).
> That avoids the constant get_pages vs. get_sgt differences. For examples
> see how e.g. i915 handles the various gem object backends.
Indeed, I tried doing it that way before, i.e. all sgt based.
But at the end of the day the Xen shared buffer code in the
driver works with pages (the Xen API is page based there), so
the sgt would have to be converted into a page array anyway.
For that reason I prefer to work with pages from the beginning,
not sgt. As for the constant get_pages etc.: this is the only
place in the driver where that is expected, so the
_from_sgt/_from_pages API is only used here.
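To illustrate the extra walk I mean: flattening an sg-table-like list
of contiguous segments into the flat page-frame array that page-based
Xen grant calls expect looks roughly like this (self-contained model;
sg_chunk and chunks_to_pfns are made-up names, not the kernel's
scatterlist API):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical model of one contiguous sg-table segment. */
struct sg_chunk {
	unsigned long first_pfn; /* first page frame of the segment */
	size_t n_pages;          /* number of contiguous pages in it */
};

/*
 * Expand the segments into a flat page-frame array.
 * Returns the number of entries written, capped at max_pfns.
 */
static size_t chunks_to_pfns(const struct sg_chunk *chunks, size_t n_chunks,
			     unsigned long *pfns, size_t max_pfns)
{
	size_t i, j, out = 0;

	for (i = 0; i < n_chunks; i++)
		for (j = 0; j < chunks[i].n_pages; j++) {
			if (out == max_pfns)
				return out;
			pfns[out++] = chunks[i].first_pfn + j;
		}
	return out;
}
```

Starting from pages avoids doing this conversion on every shared
buffer, which is why I keep the page-based path as the primary one.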
>
>> + if (drm_info->gem_ops->get_pages(obj))
>> + ret = drm_info->front_ops->dbuf_create_from_pages(
>> + drm_info->front_info,
>> + xen_drm_front_dbuf_to_cookie(obj),
>> + args->width, args->height, args->bpp,
>> + args->size,
>> + drm_info->gem_ops->get_pages(obj));
>> + else
>> + ret = drm_info->front_ops->dbuf_create_from_sgt(
>> + drm_info->front_info,
>> + xen_drm_front_dbuf_to_cookie(obj),
>> + args->width, args->height, args->bpp,
>> + args->size,
>> + drm_info->gem_ops->prime_get_sg_table(obj));
>> + if (ret)
>> + goto fail_destroy;
>> +
>> + return 0;
>> +
>> +fail_destroy:
>> + drm_gem_dumb_destroy(filp, dev, args->handle);
>> +fail:
>> + DRM_ERROR("Failed to create dumb buffer: %d\n", ret);
>> + return ret;
>> }
>>
>> static void free_object(struct drm_gem_object *obj)
>> @@ -37,6 +83,7 @@ static void free_object(struct drm_gem_object *obj)
>>
>> drm_info->front_ops->dbuf_destroy(drm_info->front_info,
>> xen_drm_front_dbuf_to_cookie(obj));
>> + drm_info->gem_ops->free_object_unlocked(obj);
>> }
>>
>> static void on_frame_done(struct platform_device *pdev,
>> @@ -60,32 +107,52 @@ static void lastclose(struct drm_device *dev)
>>
>> static int gem_mmap(struct file *filp, struct vm_area_struct *vma)
>> {
>> - return -EINVAL;
>> + struct drm_file *file_priv = filp->private_data;
>> + struct drm_device *dev = file_priv->minor->dev;
>> + struct xen_drm_front_drm_info *drm_info = dev->dev_private;
>> +
>> + return drm_info->gem_ops->mmap(filp, vma);
> Uh, so 1 midlayer for the kms stuff and another midlayer for the gem
> stuff. That's way too much indirection.
If by the KMS one you mean front_ops, then that is one midlayer
fewer: I will remove front_ops. As to gem_ops, please see below
>> }
>>
>> static struct sg_table *prime_get_sg_table(struct drm_gem_object *obj)
>> {
>> - return NULL;
>> + struct xen_drm_front_drm_info *drm_info;
>> +
>> + drm_info = obj->dev->dev_private;
>> + return drm_info->gem_ops->prime_get_sg_table(obj);
>> }
>>
>> static struct drm_gem_object *prime_import_sg_table(struct drm_device *dev,
>> struct dma_buf_attachment *attach, struct sg_table *sgt)
>> {
>> - return NULL;
>> + struct xen_drm_front_drm_info *drm_info;
>> +
>> + drm_info = dev->dev_private;
>> + return drm_info->gem_ops->prime_import_sg_table(dev, attach, sgt);
>> }
>>
>> static void *prime_vmap(struct drm_gem_object *obj)
>> {
>> - return NULL;
>> + struct xen_drm_front_drm_info *drm_info;
>> +
>> + drm_info = obj->dev->dev_private;
>> + return drm_info->gem_ops->prime_vmap(obj);
>> }
>>
>> static void prime_vunmap(struct drm_gem_object *obj, void *vaddr)
>> {
>> + struct xen_drm_front_drm_info *drm_info;
>> +
>> + drm_info = obj->dev->dev_private;
>> + drm_info->gem_ops->prime_vunmap(obj, vaddr);
>> }
>>
>> static int prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
>> {
>> - return -EINVAL;
>> + struct xen_drm_front_drm_info *drm_info;
>> +
>> + drm_info = obj->dev->dev_private;
>> + return drm_info->gem_ops->prime_mmap(obj, vma);
>> }
>>
>> static const struct file_operations xendrm_fops = {
>> @@ -147,6 +214,7 @@ int xen_drm_front_drv_probe(struct platform_device *pdev,
>>
>> drm_info->front_ops = front_ops;
>> drm_info->front_ops->on_frame_done = on_frame_done;
>> + drm_info->gem_ops = xen_drm_front_gem_get_ops();
>> drm_info->front_info = cfg->front_info;
>>
>> dev = drm_dev_alloc(&xen_drm_driver, &pdev->dev);
>> diff --git a/drivers/gpu/drm/xen/xen_drm_front_drv.h b/drivers/gpu/drm/xen/xen_drm_front_drv.h
>> index 563318b19f34..34228eb86255 100644
>> --- a/drivers/gpu/drm/xen/xen_drm_front_drv.h
>> +++ b/drivers/gpu/drm/xen/xen_drm_front_drv.h
>> @@ -43,6 +43,7 @@ struct xen_drm_front_drm_pipeline {
>> struct xen_drm_front_drm_info {
>> struct xen_drm_front_info *front_info;
>> struct xen_drm_front_ops *front_ops;
>> + const struct xen_drm_front_gem_ops *gem_ops;
>> struct drm_device *drm_dev;
>> struct xen_drm_front_cfg *cfg;
>>
>> diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
>> new file mode 100644
>> index 000000000000..367e08f6a9ef
>> --- /dev/null
>> +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
>> @@ -0,0 +1,360 @@
>> +/*
>> + * Xen para-virtual DRM device
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License as published by
>> + * the Free Software Foundation; either version 2 of the License, or
>> + * (at your option) any later version.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
>> + * GNU General Public License for more details.
>> + *
>> + * Copyright (C) 2016-2018 EPAM Systems Inc.
>> + *
>> + * Author: Oleksandr Andrushchenko <[email protected]>
>> + */
>> +
>> +#include "xen_drm_front_gem.h"
>> +
>> +#include <drm/drmP.h>
>> +#include <drm/drm_crtc_helper.h>
>> +#include <drm/drm_fb_helper.h>
>> +#include <drm/drm_gem.h>
>> +
>> +#include <linux/dma-buf.h>
>> +#include <linux/scatterlist.h>
>> +#include <linux/shmem_fs.h>
>> +
>> +#include <xen/balloon.h>
>> +
>> +#include "xen_drm_front.h"
>> +#include "xen_drm_front_drv.h"
>> +#include "xen_drm_front_shbuf.h"
>> +
>> +struct xen_gem_object {
>> + struct drm_gem_object base;
>> +
>> + size_t num_pages;
>> + struct page **pages;
>> +
>> + /* set for buffers allocated by the backend */
>> + bool be_alloc;
>> +
>> + /* this is for imported PRIME buffer */
>> + struct sg_table *sgt_imported;
>> +};
>> +
>> +static inline struct xen_gem_object *to_xen_gem_obj(
>> + struct drm_gem_object *gem_obj)
>> +{
>> + return container_of(gem_obj, struct xen_gem_object, base);
>> +}
>> +
>> +static int gem_alloc_pages_array(struct xen_gem_object *xen_obj,
>> + size_t buf_size)
>> +{
>> + xen_obj->num_pages = DIV_ROUND_UP(buf_size, PAGE_SIZE);
>> + xen_obj->pages = kvmalloc_array(xen_obj->num_pages,
>> + sizeof(struct page *), GFP_KERNEL);
>> + return xen_obj->pages == NULL ? -ENOMEM : 0;
>> +}
>> +
>> +static void gem_free_pages_array(struct xen_gem_object *xen_obj)
>> +{
>> + kvfree(xen_obj->pages);
>> + xen_obj->pages = NULL;
>> +}
>> +
>> +static struct xen_gem_object *gem_create_obj(struct drm_device *dev,
>> + size_t size)
>> +{
>> + struct xen_gem_object *xen_obj;
>> + int ret;
>> +
>> + xen_obj = kzalloc(sizeof(*xen_obj), GFP_KERNEL);
>> + if (!xen_obj)
>> + return ERR_PTR(-ENOMEM);
>> +
>> + ret = drm_gem_object_init(dev, &xen_obj->base, size);
>> + if (ret < 0) {
>> + kfree(xen_obj);
>> + return ERR_PTR(ret);
>> + }
>> +
>> + return xen_obj;
>> +}
>> +
>> +static struct xen_gem_object *gem_create(struct drm_device *dev, size_t size)
>> +{
>> + struct xen_drm_front_drm_info *drm_info = dev->dev_private;
>> + struct xen_gem_object *xen_obj;
>> + int ret;
>> +
>> + size = round_up(size, PAGE_SIZE);
>> + xen_obj = gem_create_obj(dev, size);
>> + if (IS_ERR_OR_NULL(xen_obj))
>> + return xen_obj;
>> +
>> + if (drm_info->cfg->be_alloc) {
>> + /*
>> + * backend will allocate space for this buffer, so
>> + * only allocate array of pointers to pages
>> + */
>> + xen_obj->be_alloc = true;
>> + ret = gem_alloc_pages_array(xen_obj, size);
>> + if (ret < 0) {
>> + gem_free_pages_array(xen_obj);
>> + goto fail;
>> + }
>> +
>> + ret = alloc_xenballooned_pages(xen_obj->num_pages,
>> + xen_obj->pages);
>> + if (ret < 0) {
>> + DRM_ERROR("Cannot allocate %zu ballooned pages: %d\n",
>> + xen_obj->num_pages, ret);
>> + goto fail;
>> + }
>> +
>> + return xen_obj;
>> + }
>> + /*
>> + * need to allocate backing pages now, so we can share those
>> + * with the backend
>> + */
>> + xen_obj->num_pages = DIV_ROUND_UP(size, PAGE_SIZE);
>> + xen_obj->pages = drm_gem_get_pages(&xen_obj->base);
>> + if (IS_ERR_OR_NULL(xen_obj->pages)) {
>> + ret = PTR_ERR(xen_obj->pages);
>> + xen_obj->pages = NULL;
>> + goto fail;
>> + }
>> +
>> + return xen_obj;
>> +
>> +fail:
>> + DRM_ERROR("Failed to allocate buffer with size %zu\n", size);
>> + return ERR_PTR(ret);
>> +}
>> +
>> +static struct xen_gem_object *gem_create_with_handle(struct drm_file *filp,
>> + struct drm_device *dev, size_t size, uint32_t *handle)
>> +{
>> + struct xen_gem_object *xen_obj;
>> + struct drm_gem_object *gem_obj;
>> + int ret;
>> +
>> + xen_obj = gem_create(dev, size);
>> + if (IS_ERR_OR_NULL(xen_obj))
>> + return xen_obj;
>> +
>> + gem_obj = &xen_obj->base;
>> + ret = drm_gem_handle_create(filp, gem_obj, handle);
>> + /* handle holds the reference */
>> + drm_gem_object_unreference_unlocked(gem_obj);
>> + if (ret < 0)
>> + return ERR_PTR(ret);
>> +
>> + return xen_obj;
>> +}
>> +
>> +static int gem_dumb_create(struct drm_file *filp, struct drm_device *dev,
>> + struct drm_mode_create_dumb *args)
>> +{
>> + struct xen_gem_object *xen_obj;
>> +
>> + args->pitch = DIV_ROUND_UP(args->width * args->bpp, 8);
>> + args->size = args->pitch * args->height;
>> +
>> + xen_obj = gem_create_with_handle(filp, dev, args->size, &args->handle);
>> + if (IS_ERR_OR_NULL(xen_obj))
>> + return xen_obj == NULL ? -ENOMEM : PTR_ERR(xen_obj);
>> +
>> + return 0;
>> +}
>> +
>> +static void gem_free_object(struct drm_gem_object *gem_obj)
>> +{
>> + struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
>> +
>> + if (xen_obj->base.import_attach) {
>> + drm_prime_gem_destroy(&xen_obj->base, xen_obj->sgt_imported);
>> + gem_free_pages_array(xen_obj);
>> + } else {
>> + if (xen_obj->pages) {
>> + if (xen_obj->be_alloc) {
>> + free_xenballooned_pages(xen_obj->num_pages,
>> + xen_obj->pages);
>> + gem_free_pages_array(xen_obj);
>> + } else
>> + drm_gem_put_pages(&xen_obj->base,
>> + xen_obj->pages, true, false);
>> + }
>> + }
>> + drm_gem_object_release(gem_obj);
>> + kfree(xen_obj);
>> +}
>> +
>> +static struct page **gem_get_pages(struct drm_gem_object *gem_obj)
>> +{
>> + struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
>> +
>> + return xen_obj->pages;
>> +}
>> +
>> +static struct sg_table *gem_get_sg_table(struct drm_gem_object *gem_obj)
>> +{
>> + struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
>> +
>> + if (!xen_obj->pages)
>> + return NULL;
>> +
>> + return drm_prime_pages_to_sg(xen_obj->pages, xen_obj->num_pages);
>> +}
>> +
>> +static struct drm_gem_object *gem_import_sg_table(struct drm_device *dev,
>> + struct dma_buf_attachment *attach, struct sg_table *sgt)
>> +{
>> + struct xen_drm_front_drm_info *drm_info = dev->dev_private;
>> + struct xen_gem_object *xen_obj;
>> + size_t size;
>> + int ret;
>> +
>> + size = attach->dmabuf->size;
>> + xen_obj = gem_create_obj(dev, size);
>> + if (IS_ERR_OR_NULL(xen_obj))
>> + return ERR_CAST(xen_obj);
>> +
>> + ret = gem_alloc_pages_array(xen_obj, size);
>> + if (ret < 0)
>> + return ERR_PTR(ret);
>> +
>> + xen_obj->sgt_imported = sgt;
>> +
>> + ret = drm_prime_sg_to_page_addr_arrays(sgt, xen_obj->pages,
>> + NULL, xen_obj->num_pages);
>> + if (ret < 0)
>> + return ERR_PTR(ret);
>> +
>> +	/*
>> +	 * N.B. Although we have an API to create a display buffer from an
>> +	 * sgt, we use the pages API here, because we still need the pages
>> +	 * for GEM handling, e.g. for mapping etc.
>> +	 */
>> + ret = drm_info->front_ops->dbuf_create_from_pages(
>> + drm_info->front_info,
>> + xen_drm_front_dbuf_to_cookie(&xen_obj->base),
>> + 0, 0, 0, size, xen_obj->pages);
>> + if (ret < 0)
>> + return ERR_PTR(ret);
>> +
>> + DRM_DEBUG("Imported buffer of size %zu with nents %u\n",
>> + size, sgt->nents);
>> +
>> + return &xen_obj->base;
>> +}
>> +
>> +static int gem_mmap_obj(struct xen_gem_object *xen_obj,
>> + struct vm_area_struct *vma)
>> +{
>> + unsigned long addr = vma->vm_start;
>> + int i;
>> +
>> + /*
>> + * clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
>> + * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
>> + * the whole buffer.
>> + */
>> + vma->vm_flags &= ~VM_PFNMAP;
>> + vma->vm_flags |= VM_MIXEDMAP;
>> + vma->vm_pgoff = 0;
>> + vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
>> +
>> +	/*
>> +	 * The vm_operations_struct.fault handler would normally be called
>> +	 * on CPU access to the VM area. For GPUs this isn't the case,
>> +	 * because the CPU doesn't touch the memory. Insert the pages now,
>> +	 * so both CPU and GPU are happy.
>> +	 * FIXME: as all pages are inserted now, no .fault handler should
>> +	 * ever be called, so we don't provide one.
>> +	 */
>> + for (i = 0; i < xen_obj->num_pages; i++) {
>> + int ret;
>> +
>> + ret = vm_insert_page(vma, addr, xen_obj->pages[i]);
>> + if (ret < 0) {
>> + DRM_ERROR("Failed to insert pages into vma: %d\n", ret);
>> + return ret;
>> + }
>> +
>> + addr += PAGE_SIZE;
>> + }
>> + return 0;
>> +}
>> +
>> +static int gem_mmap(struct file *filp, struct vm_area_struct *vma)
>> +{
>> + struct xen_gem_object *xen_obj;
>> + struct drm_gem_object *gem_obj;
>> + int ret;
>> +
>> + ret = drm_gem_mmap(filp, vma);
>> + if (ret < 0)
>> + return ret;
>> +
>> + gem_obj = vma->vm_private_data;
>> + xen_obj = to_xen_gem_obj(gem_obj);
>> + return gem_mmap_obj(xen_obj, vma);
>> +}
>> +
>> +static void *gem_prime_vmap(struct drm_gem_object *gem_obj)
>> +{
>> + struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
>> +
>> + if (!xen_obj->pages)
>> + return NULL;
>> +
>> + return vmap(xen_obj->pages, xen_obj->num_pages,
>> + VM_MAP, pgprot_writecombine(PAGE_KERNEL));
>> +}
>> +
>> +static void gem_prime_vunmap(struct drm_gem_object *gem_obj, void *vaddr)
>> +{
>> + vunmap(vaddr);
>> +}
>> +
>> +static int gem_prime_mmap(struct drm_gem_object *gem_obj,
>> + struct vm_area_struct *vma)
>> +{
>> + struct xen_gem_object *xen_obj;
>> + int ret;
>> +
>> + ret = drm_gem_mmap_obj(gem_obj, gem_obj->size, vma);
>> + if (ret < 0)
>> + return ret;
>> +
>> + xen_obj = to_xen_gem_obj(gem_obj);
>> + return gem_mmap_obj(xen_obj, vma);
>> +}
>> +
>> +static const struct xen_drm_front_gem_ops xen_drm_gem_ops = {
>> + .free_object_unlocked = gem_free_object,
>> + .prime_get_sg_table = gem_get_sg_table,
>> + .prime_import_sg_table = gem_import_sg_table,
>> +
>> + .prime_vmap = gem_prime_vmap,
>> + .prime_vunmap = gem_prime_vunmap,
>> + .prime_mmap = gem_prime_mmap,
>> +
>> + .dumb_create = gem_dumb_create,
>> +
>> + .mmap = gem_mmap,
>> +
>> + .get_pages = gem_get_pages,
>> +};
>> +
>> +const struct xen_drm_front_gem_ops *xen_drm_front_gem_get_ops(void)
>> +{
>> + return &xen_drm_gem_ops;
>> +}
>> diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.h b/drivers/gpu/drm/xen/xen_drm_front_gem.h
>> new file mode 100644
>> index 000000000000..d1e1711cc3fc
>> --- /dev/null
>> +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.h
>> @@ -0,0 +1,46 @@
>> +/*
>> + * Xen para-virtual DRM device
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License as published by
>> + * the Free Software Foundation; either version 2 of the License, or
>> + * (at your option) any later version.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
>> + * GNU General Public License for more details.
>> + *
>> + * Copyright (C) 2016-2018 EPAM Systems Inc.
>> + *
>> + * Author: Oleksandr Andrushchenko <[email protected]>
>> + */
>> +
>> +#ifndef __XEN_DRM_FRONT_GEM_H
>> +#define __XEN_DRM_FRONT_GEM_H
>> +
>> +#include <drm/drmP.h>
>> +
>> +struct xen_drm_front_gem_ops {
>> + void (*free_object_unlocked)(struct drm_gem_object *obj);
>> +
>> + struct sg_table *(*prime_get_sg_table)(struct drm_gem_object *obj);
>> + struct drm_gem_object *(*prime_import_sg_table)(struct drm_device *dev,
>> + struct dma_buf_attachment *attach,
>> + struct sg_table *sgt);
>> + void *(*prime_vmap)(struct drm_gem_object *obj);
>> + void (*prime_vunmap)(struct drm_gem_object *obj, void *vaddr);
>> + int (*prime_mmap)(struct drm_gem_object *obj,
>> + struct vm_area_struct *vma);
>> +
>> + int (*dumb_create)(struct drm_file *file_priv, struct drm_device *dev,
>> + struct drm_mode_create_dumb *args);
>> +
>> + int (*mmap)(struct file *filp, struct vm_area_struct *vma);
>> +
>> + struct page **(*get_pages)(struct drm_gem_object *obj);
>> +};
>> +
>> +const struct xen_drm_front_gem_ops *xen_drm_front_gem_get_ops(void);
>> +
>> +#endif /* __XEN_DRM_FRONT_GEM_H */
>> diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem_cma.c b/drivers/gpu/drm/xen/xen_drm_front_gem_cma.c
>> new file mode 100644
>> index 000000000000..5ffcbfa652d5
>> --- /dev/null
>> +++ b/drivers/gpu/drm/xen/xen_drm_front_gem_cma.c
>> @@ -0,0 +1,93 @@
>> +/*
>> + * Xen para-virtual DRM device
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License as published by
>> + * the Free Software Foundation; either version 2 of the License, or
>> + * (at your option) any later version.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
>> + * GNU General Public License for more details.
>> + *
>> + * Copyright (C) 2016-2018 EPAM Systems Inc.
>> + *
>> + * Author: Oleksandr Andrushchenko <[email protected]>
>> + */
>> +
>> +#include <drm/drmP.h>
>> +#include <drm/drm_gem.h>
>> +#include <drm/drm_fb_cma_helper.h>
>> +#include <drm/drm_gem_cma_helper.h>
>> +
>> +#include "xen_drm_front.h"
>> +#include "xen_drm_front_drv.h"
>> +#include "xen_drm_front_gem.h"
>> +
>> +static struct drm_gem_object *gem_import_sg_table(struct drm_device *dev,
>> + struct dma_buf_attachment *attach, struct sg_table *sgt)
>> +{
>> + struct xen_drm_front_drm_info *drm_info = dev->dev_private;
>> + struct drm_gem_object *gem_obj;
>> + struct drm_gem_cma_object *cma_obj;
>> + int ret;
>> +
>> + gem_obj = drm_gem_cma_prime_import_sg_table(dev, attach, sgt);
>> + if (IS_ERR_OR_NULL(gem_obj))
>> + return gem_obj;
>> +
>> + cma_obj = to_drm_gem_cma_obj(gem_obj);
>> +
>> + ret = drm_info->front_ops->dbuf_create_from_sgt(
>> + drm_info->front_info,
>> + xen_drm_front_dbuf_to_cookie(gem_obj),
>> + 0, 0, 0, gem_obj->size,
>> + drm_gem_cma_prime_get_sg_table(gem_obj));
>> + if (ret < 0)
>> + return ERR_PTR(ret);
>> +
>> + DRM_DEBUG("Imported CMA buffer of size %zu\n", gem_obj->size);
>> +
>> + return gem_obj;
>> +}
>> +
>> +static int gem_dumb_create(struct drm_file *filp, struct drm_device *dev,
>> + struct drm_mode_create_dumb *args)
>> +{
>> + struct xen_drm_front_drm_info *drm_info = dev->dev_private;
>> +
>> + if (drm_info->cfg->be_alloc) {
>> + /* This use-case is not yet supported and probably won't be */
>> + DRM_ERROR("Backend allocated buffers and CMA helpers are not supported at the same time\n");
>> + return -EINVAL;
>> + }
>> +
>> + return drm_gem_cma_dumb_create(filp, dev, args);
>> +}
>> +
>> +static struct page **gem_get_pages(struct drm_gem_object *gem_obj)
>> +{
>> + return NULL;
>> +}
>> +
>> +static const struct xen_drm_front_gem_ops xen_drm_front_gem_cma_ops = {
>> + .free_object_unlocked = drm_gem_cma_free_object,
>> + .prime_get_sg_table = drm_gem_cma_prime_get_sg_table,
>> + .prime_import_sg_table = gem_import_sg_table,
>> +
>> + .prime_vmap = drm_gem_cma_prime_vmap,
>> + .prime_vunmap = drm_gem_cma_prime_vunmap,
>> + .prime_mmap = drm_gem_cma_prime_mmap,
>> +
>> + .dumb_create = gem_dumb_create,
>> +
>> + .mmap = drm_gem_cma_mmap,
>> +
>> + .get_pages = gem_get_pages,
>> +};
> Again quite a midlayer you have here. Please inline this to avoid
> confusion for other people (since it looks like you only have 1
> implementation).
There are 2 implementations depending on driver compile-time options:
the GEM operations can be implemented either with the DRM CMA helpers
or with the driver's own GEM code. This is why the midlayer exists, e.g.
to eliminate the need for something like
#ifdef DRM_XEN_FRONTEND_CMA
drm_gem_cma_...()
#else
xen_drm_front_gem_...()
#endif
So, I would prefer to have ops rather than ifdefs
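That ops-table pattern can be sketched like this (illustrative names only, not the actual driver symbols): one small vtable per backing-store implementation, with the choice made exactly once at compile time so callers never see an ifdef:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative only: a tiny vtable per GEM backing-store implementation,
 * mirroring the xen_drm_front_gem_ops idea from the patch. */
struct gem_ops {
	const char *name;
	int (*dumb_create)(size_t size);
};

static int cma_dumb_create(size_t size) { return size ? 0 : -1; }
static int own_dumb_create(size_t size) { return size ? 0 : -1; }

static const struct gem_ops cma_ops = { "cma", cma_dumb_create };
static const struct gem_ops own_ops = { "own", own_dumb_create };

/* The only conditional in the whole scheme: pick the table once. */
#ifdef DRM_XEN_FRONTEND_CMA
static const struct gem_ops *active_ops = &cma_ops;
#else
static const struct gem_ops *active_ops = &own_ops;
#endif

const struct gem_ops *get_gem_ops(void)
{
	return active_ops;
}
```

Callers go through get_gem_ops() and never learn which implementation was built in, which is what the gem_ops indirection in the patch achieves without scattering ifdefs through every call site.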
>
>> +
>> +const struct xen_drm_front_gem_ops *xen_drm_front_gem_get_ops(void)
>> +{
>> + return &xen_drm_front_gem_cma_ops;
>> +}
>> --
>> 2.7.4
>>
>> _______________________________________________
>> dri-devel mailing list
>> [email protected]
>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
On Mon, Mar 05, 2018 at 02:59:23PM +0200, Oleksandr Andrushchenko wrote:
> On 03/05/2018 11:23 AM, Daniel Vetter wrote:
> > On Wed, Feb 21, 2018 at 10:03:40AM +0200, Oleksandr Andrushchenko wrote:
> > > From: Oleksandr Andrushchenko <[email protected]>
> > >
> > > Implement kernel modesetting/connector handling using
> > > DRM simple KMS helper pipeline:
> > >
> > > - implement KMS part of the driver with the help of DRM
> > > simple pipeline helper which is possible due to the fact
> > > that the para-virtualized driver only supports a single
> > > (primary) plane:
> > > - initialize connectors according to XenStore configuration
> > > - handle frame done events from the backend
> > > - generate vblank events
> > > - create and destroy frame buffers and propagate those
> > > to the backend
> > > - propagate set/reset mode configuration to the backend on display
> > > enable/disable callbacks
> > > - send page flip request to the backend and implement logic for
> > > reporting backend IO errors on prepare fb callback
> > >
> > > - implement virtual connector handling:
> > > - support only pixel formats suitable for single plane modes
> > > - make sure the connector is always connected
> > > - support a single video mode as per para-virtualized driver
> > > configuration
> > >
> > > Signed-off-by: Oleksandr Andrushchenko <[email protected]>
> > I think once you've removed the midlayer in the previous patch it would
> > makes sense to merge the 2 patches into 1.
> ok, will squash the two
> >
> > Bunch more comments below.
> > -Daniel
> >
> > > ---
> > > drivers/gpu/drm/xen/Makefile | 2 +
> > > drivers/gpu/drm/xen/xen_drm_front_conn.c | 125 +++++++++++++
> > > drivers/gpu/drm/xen/xen_drm_front_conn.h | 35 ++++
> > > drivers/gpu/drm/xen/xen_drm_front_drv.c | 15 ++
> > > drivers/gpu/drm/xen/xen_drm_front_drv.h | 12 ++
> > > drivers/gpu/drm/xen/xen_drm_front_kms.c | 299 +++++++++++++++++++++++++++++++
> > > drivers/gpu/drm/xen/xen_drm_front_kms.h | 30 ++++
> > > 7 files changed, 518 insertions(+)
> > > create mode 100644 drivers/gpu/drm/xen/xen_drm_front_conn.c
> > > create mode 100644 drivers/gpu/drm/xen/xen_drm_front_conn.h
> > > create mode 100644 drivers/gpu/drm/xen/xen_drm_front_kms.c
> > > create mode 100644 drivers/gpu/drm/xen/xen_drm_front_kms.h
> > >
> > > diff --git a/drivers/gpu/drm/xen/Makefile b/drivers/gpu/drm/xen/Makefile
> > > index d3068202590f..4fcb0da1a9c5 100644
> > > --- a/drivers/gpu/drm/xen/Makefile
> > > +++ b/drivers/gpu/drm/xen/Makefile
> > > @@ -2,6 +2,8 @@
> > > drm_xen_front-objs := xen_drm_front.o \
> > > xen_drm_front_drv.o \
> > > + xen_drm_front_kms.o \
> > > + xen_drm_front_conn.o \
> > > xen_drm_front_evtchnl.o \
> > > xen_drm_front_shbuf.o \
> > > xen_drm_front_cfg.o
> > > diff --git a/drivers/gpu/drm/xen/xen_drm_front_conn.c b/drivers/gpu/drm/xen/xen_drm_front_conn.c
> > > new file mode 100644
> > > index 000000000000..d9986a2e1a3b
> > > --- /dev/null
> > > +++ b/drivers/gpu/drm/xen/xen_drm_front_conn.c
> > > @@ -0,0 +1,125 @@
> > > +/*
> > > + * Xen para-virtual DRM device
> > > + *
> > > + * This program is free software; you can redistribute it and/or modify
> > > + * it under the terms of the GNU General Public License as published by
> > > + * the Free Software Foundation; either version 2 of the License, or
> > > + * (at your option) any later version.
> > > + *
> > > + * This program is distributed in the hope that it will be useful,
> > > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> > > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> > > + * GNU General Public License for more details.
> > > + *
> > > + * Copyright (C) 2016-2018 EPAM Systems Inc.
> > > + *
> > > + * Author: Oleksandr Andrushchenko <[email protected]>
> > > + */
> > > +
> > > +#include <drm/drm_atomic_helper.h>
> > > +#include <drm/drm_crtc_helper.h>
> > > +
> > > +#include <video/videomode.h>
> > > +
> > > +#include "xen_drm_front_conn.h"
> > > +#include "xen_drm_front_drv.h"
> > > +
> > > +static struct xen_drm_front_drm_pipeline *
> > > +to_xen_drm_pipeline(struct drm_connector *connector)
> > > +{
> > > + return container_of(connector, struct xen_drm_front_drm_pipeline, conn);
> > > +}
> > > +
> > > +static const uint32_t plane_formats[] = {
> > > + DRM_FORMAT_RGB565,
> > > + DRM_FORMAT_RGB888,
> > > + DRM_FORMAT_XRGB8888,
> > > + DRM_FORMAT_ARGB8888,
> > > + DRM_FORMAT_XRGB4444,
> > > + DRM_FORMAT_ARGB4444,
> > > + DRM_FORMAT_XRGB1555,
> > > + DRM_FORMAT_ARGB1555,
> > > +};
> > > +
> > > +const uint32_t *xen_drm_front_conn_get_formats(int *format_count)
> > > +{
> > > + *format_count = ARRAY_SIZE(plane_formats);
> > > + return plane_formats;
> > > +}
> > > +
> > > +static enum drm_connector_status connector_detect(
> > > + struct drm_connector *connector, bool force)
> > > +{
> > > + if (drm_dev_is_unplugged(connector->dev))
> > > + return connector_status_disconnected;
> > > +
> > > + return connector_status_connected;
> > > +}
> > > +
> > > +#define XEN_DRM_NUM_VIDEO_MODES 1
> > > +#define XEN_DRM_CRTC_VREFRESH_HZ 60
> > > +
> > > +static int connector_get_modes(struct drm_connector *connector)
> > > +{
> > > + struct xen_drm_front_drm_pipeline *pipeline =
> > > + to_xen_drm_pipeline(connector);
> > > + struct drm_display_mode *mode;
> > > + struct videomode videomode;
> > > + int width, height;
> > > +
> > > + mode = drm_mode_create(connector->dev);
> > > + if (!mode)
> > > + return 0;
> > > +
> > > + memset(&videomode, 0, sizeof(videomode));
> > > + videomode.hactive = pipeline->width;
> > > + videomode.vactive = pipeline->height;
> > > + width = videomode.hactive + videomode.hfront_porch +
> > > + videomode.hback_porch + videomode.hsync_len;
> > > + height = videomode.vactive + videomode.vfront_porch +
> > > + videomode.vback_porch + videomode.vsync_len;
> > > + videomode.pixelclock = width * height * XEN_DRM_CRTC_VREFRESH_HZ;
> > > + mode->type = DRM_MODE_TYPE_PREFERRED | DRM_MODE_TYPE_DRIVER;
> > > +
> > > + drm_display_mode_from_videomode(&videomode, mode);
> > > + drm_mode_probed_add(connector, mode);
> > > + return XEN_DRM_NUM_VIDEO_MODES;
> > > +}
> > > +
> > > +static int connector_mode_valid(struct drm_connector *connector,
> > > + struct drm_display_mode *mode)
> > > +{
> > > + struct xen_drm_front_drm_pipeline *pipeline =
> > > + to_xen_drm_pipeline(connector);
> > > +
> > > + if (mode->hdisplay != pipeline->width)
> > > + return MODE_ERROR;
> > > +
> > > + if (mode->vdisplay != pipeline->height)
> > > + return MODE_ERROR;
> > > +
> > > + return MODE_OK;
> > > +}
> > > +
> > > +static const struct drm_connector_helper_funcs connector_helper_funcs = {
> > > + .get_modes = connector_get_modes,
> > > + .mode_valid = connector_mode_valid,
> > > +};
> > > +
> > > +static const struct drm_connector_funcs connector_funcs = {
> > > + .detect = connector_detect,
> > > + .fill_modes = drm_helper_probe_single_connector_modes,
> > > + .destroy = drm_connector_cleanup,
> > > + .reset = drm_atomic_helper_connector_reset,
> > > + .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
> > > + .atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
> > > +};
> > > +
> > > +int xen_drm_front_conn_init(struct xen_drm_front_drm_info *drm_info,
> > > + struct drm_connector *connector)
> > > +{
> > > + drm_connector_helper_add(connector, &connector_helper_funcs);
> > > +
> > > + return drm_connector_init(drm_info->drm_dev, connector,
> > > + &connector_funcs, DRM_MODE_CONNECTOR_VIRTUAL);
> > > +}
> > > diff --git a/drivers/gpu/drm/xen/xen_drm_front_conn.h b/drivers/gpu/drm/xen/xen_drm_front_conn.h
> > > new file mode 100644
> > > index 000000000000..708e80d45985
> > > --- /dev/null
> > > +++ b/drivers/gpu/drm/xen/xen_drm_front_conn.h
> > > @@ -0,0 +1,35 @@
> > > +/*
> > > + * Xen para-virtual DRM device
> > > + *
> > > + * This program is free software; you can redistribute it and/or modify
> > > + * it under the terms of the GNU General Public License as published by
> > > + * the Free Software Foundation; either version 2 of the License, or
> > > + * (at your option) any later version.
> > > + *
> > > + * This program is distributed in the hope that it will be useful,
> > > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> > > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> > > + * GNU General Public License for more details.
> > > + *
> > > + * Copyright (C) 2016-2018 EPAM Systems Inc.
> > > + *
> > > + * Author: Oleksandr Andrushchenko <[email protected]>
> > > + */
> > > +
> > > +#ifndef __XEN_DRM_FRONT_CONN_H_
> > > +#define __XEN_DRM_FRONT_CONN_H_
> > > +
> > > +#include <drm/drmP.h>
> > > +#include <drm/drm_crtc.h>
> > > +#include <drm/drm_encoder.h>
> > > +
> > > +#include <linux/wait.h>
> > > +
> > > +struct xen_drm_front_drm_info;
> > > +
> > > +const uint32_t *xen_drm_front_conn_get_formats(int *format_count);
> > > +
> > > +int xen_drm_front_conn_init(struct xen_drm_front_drm_info *drm_info,
> > > + struct drm_connector *connector);
> > > +
> > > +#endif /* __XEN_DRM_FRONT_CONN_H_ */
> > > diff --git a/drivers/gpu/drm/xen/xen_drm_front_drv.c b/drivers/gpu/drm/xen/xen_drm_front_drv.c
> > > index b3764d5ed0f6..e8862d26ba27 100644
> > > --- a/drivers/gpu/drm/xen/xen_drm_front_drv.c
> > > +++ b/drivers/gpu/drm/xen/xen_drm_front_drv.c
> > > @@ -23,6 +23,7 @@
> > > #include "xen_drm_front.h"
> > > #include "xen_drm_front_cfg.h"
> > > #include "xen_drm_front_drv.h"
> > > +#include "xen_drm_front_kms.h"
> > > static int dumb_create(struct drm_file *filp,
> > > struct drm_device *dev, struct drm_mode_create_dumb *args)
> > > @@ -41,6 +42,13 @@ static void free_object(struct drm_gem_object *obj)
> > > static void on_frame_done(struct platform_device *pdev,
> > > int conn_idx, uint64_t fb_cookie)
> > > {
> > > + struct xen_drm_front_drm_info *drm_info = platform_get_drvdata(pdev);
> > > +
> > > + if (unlikely(conn_idx >= drm_info->cfg->num_connectors))
> > > + return;
> > > +
> > > + xen_drm_front_kms_on_frame_done(&drm_info->pipeline[conn_idx],
> > > + fb_cookie);
> > > }
> > > static void lastclose(struct drm_device *dev)
> > > @@ -157,6 +165,12 @@ int xen_drm_front_drv_probe(struct platform_device *pdev,
> > > return ret;
> > > }
> > > + ret = xen_drm_front_kms_init(drm_info);
> > > + if (ret) {
> > > + DRM_ERROR("Failed to initialize DRM/KMS, ret %d\n", ret);
> > > + goto fail_modeset;
> > > + }
> > > +
> > > dev->irq_enabled = 1;
> > > ret = drm_dev_register(dev, 0);
> > > @@ -172,6 +186,7 @@ int xen_drm_front_drv_probe(struct platform_device *pdev,
> > > fail_register:
> > > drm_dev_unregister(dev);
> > > +fail_modeset:
> > > drm_mode_config_cleanup(dev);
> > > return ret;
> > > }
> > > diff --git a/drivers/gpu/drm/xen/xen_drm_front_drv.h b/drivers/gpu/drm/xen/xen_drm_front_drv.h
> > > index aaa476535c13..563318b19f34 100644
> > > --- a/drivers/gpu/drm/xen/xen_drm_front_drv.h
> > > +++ b/drivers/gpu/drm/xen/xen_drm_front_drv.h
> > > @@ -20,14 +20,24 @@
> > > #define __XEN_DRM_FRONT_DRV_H_
> > > #include <drm/drmP.h>
> > > +#include <drm/drm_simple_kms_helper.h>
> > > #include "xen_drm_front.h"
> > > #include "xen_drm_front_cfg.h"
> > > +#include "xen_drm_front_conn.h"
> > > struct xen_drm_front_drm_pipeline {
> > > struct xen_drm_front_drm_info *drm_info;
> > > int index;
> > > +
> > > + struct drm_simple_display_pipe pipe;
> > > +
> > > + struct drm_connector conn;
> > > + /* these are only for connector mode checking */
> > > + int width, height;
> > > + /* last backend error seen on page flip */
> > > + int pgflip_last_error;
> > > };
> > > struct xen_drm_front_drm_info {
> > > @@ -35,6 +45,8 @@ struct xen_drm_front_drm_info {
> > > struct xen_drm_front_ops *front_ops;
> > > struct drm_device *drm_dev;
> > > struct xen_drm_front_cfg *cfg;
> > > +
> > > + struct xen_drm_front_drm_pipeline pipeline[XEN_DRM_FRONT_MAX_CRTCS];
> > > };
> > > static inline uint64_t xen_drm_front_fb_to_cookie(
> > > diff --git a/drivers/gpu/drm/xen/xen_drm_front_kms.c b/drivers/gpu/drm/xen/xen_drm_front_kms.c
> > > new file mode 100644
> > > index 000000000000..ad94c28835cd
> > > --- /dev/null
> > > +++ b/drivers/gpu/drm/xen/xen_drm_front_kms.c
> > > @@ -0,0 +1,299 @@
> > > +/*
> > > + * Xen para-virtual DRM device
> > > + *
> > > + * This program is free software; you can redistribute it and/or modify
> > > + * it under the terms of the GNU General Public License as published by
> > > + * the Free Software Foundation; either version 2 of the License, or
> > > + * (at your option) any later version.
> > > + *
> > > + * This program is distributed in the hope that it will be useful,
> > > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> > > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> > > + * GNU General Public License for more details.
> > > + *
> > > + * Copyright (C) 2016-2018 EPAM Systems Inc.
> > > + *
> > > + * Author: Oleksandr Andrushchenko <[email protected]>
> > > + */
> > > +
> > > +#include "xen_drm_front_kms.h"
> > > +
> > > +#include <drm/drmP.h>
> > > +#include <drm/drm_atomic.h>
> > > +#include <drm/drm_atomic_helper.h>
> > > +#include <drm/drm_gem.h>
> > > +#include <drm/drm_gem_framebuffer_helper.h>
> > > +
> > > +#include "xen_drm_front.h"
> > > +#include "xen_drm_front_conn.h"
> > > +#include "xen_drm_front_drv.h"
> > > +
> > > +static struct xen_drm_front_drm_pipeline *
> > > +to_xen_drm_pipeline(struct drm_simple_display_pipe *pipe)
> > > +{
> > > + return container_of(pipe, struct xen_drm_front_drm_pipeline, pipe);
> > > +}
> > > +
> > > +static void fb_destroy(struct drm_framebuffer *fb)
> > > +{
> > > + struct xen_drm_front_drm_info *drm_info = fb->dev->dev_private;
> > > +
> > > + drm_info->front_ops->fb_detach(drm_info->front_info,
> > > + xen_drm_front_fb_to_cookie(fb));
> > > + drm_gem_fb_destroy(fb);
> > > +}
> > > +
> > > +static struct drm_framebuffer_funcs fb_funcs = {
> > > + .destroy = fb_destroy,
> > > +};
> > > +
> > > +static struct drm_framebuffer *fb_create(struct drm_device *dev,
> > > + struct drm_file *filp, const struct drm_mode_fb_cmd2 *mode_cmd)
> > > +{
> > > + struct xen_drm_front_drm_info *drm_info = dev->dev_private;
> > > + static struct drm_framebuffer *fb;
> > > + struct drm_gem_object *gem_obj;
> > > + int ret;
> > > +
> > > + fb = drm_gem_fb_create_with_funcs(dev, filp, mode_cmd, &fb_funcs);
> > > + if (IS_ERR_OR_NULL(fb))
> > > + return fb;
> > > +
> > > + gem_obj = drm_gem_object_lookup(filp, mode_cmd->handles[0]);
> > > + if (!gem_obj) {
> > > + DRM_ERROR("Failed to lookup GEM object\n");
> > > + ret = -ENOENT;
> > > + goto fail;
> > > + }
> > > +
> > > + drm_gem_object_unreference_unlocked(gem_obj);
> > > +
> > > + ret = drm_info->front_ops->fb_attach(
> > > + drm_info->front_info,
> > > + xen_drm_front_dbuf_to_cookie(gem_obj),
> > > + xen_drm_front_fb_to_cookie(fb),
> > > + fb->width, fb->height, fb->format->format);
> > > + if (ret < 0) {
> > > + DRM_ERROR("Back failed to attach FB %p: %d\n", fb, ret);
> > > + goto fail;
> > > + }
> > > +
> > > + return fb;
> > > +
> > > +fail:
> > > + drm_gem_fb_destroy(fb);
> > > + return ERR_PTR(ret);
> > > +}
> > > +
> > > +static const struct drm_mode_config_funcs mode_config_funcs = {
> > > + .fb_create = fb_create,
> > > + .atomic_check = drm_atomic_helper_check,
> > > + .atomic_commit = drm_atomic_helper_commit,
> > > +};
> > > +
> > > +static int display_set_config(struct drm_simple_display_pipe *pipe,
> > > + struct drm_framebuffer *fb)
> > > +{
> > > + struct xen_drm_front_drm_pipeline *pipeline =
> > > + to_xen_drm_pipeline(pipe);
> > > + struct drm_crtc *crtc = &pipe->crtc;
> > > + struct xen_drm_front_drm_info *drm_info = pipeline->drm_info;
> > > + int ret;
> > > +
> > > + if (fb)
> > > + ret = drm_info->front_ops->mode_set(pipeline,
> > > + crtc->x, crtc->y,
> > > + fb->width, fb->height, fb->format->cpp[0] * 8,
> > > + xen_drm_front_fb_to_cookie(fb));
> > > + else
> > > + ret = drm_info->front_ops->mode_set(pipeline,
> > > + 0, 0, 0, 0, 0,
> > > + xen_drm_front_fb_to_cookie(NULL));
> > This is a bit much layering, the if (fb) case corresponds to the
> > display_enable/disable hooks, pls fold that in instead of the indirection.
> > simple helpers guarantee that when the display is on, then you have an fb.
> 1. Ok, the only reason for having this function was to keep
> front_ops->mode_set calls at one place (will be refactored
> to be a direct call, not via front_ops).
> 2. The if (fb) check was not meant to guard against the simple
> helpers handing us an unexpected value: there is nothing wrong
> with them. The check distinguished the 2 cases in which this
> function was called: with fb != NULL on display enable and
> with fb == NULL on display disable, i.e. fb was used as a
> flag in this check.
Yeah that's what I meant - it is needlessly confusing: You get 2 explicit
enable/disable callbacks, then you squash them into 1 function call, only
to require an
if (do_I_need_to_enable_or_disable) {
/* code that really should be directly put in the enable callback */
} else {
/* code that really should be directly put in the disable callback */
}
Just a bit of indirection where I didn't see the point.
An aside on why this matters: When refactoring the entire subsystem you need
to be able to quickly understand how all the drivers work in a specific
case, without being an expert on that driver. If there's very little
indirection between the shared drm concepts/structs/callbacks and the
actual driver code, then that's easy. If there's a bunch of callback
layers or indirections like the above, you make subsystem refactoring
harder for no reason. And in upstream we optimize for the overall
subsystem, not individual drivers.
> 3. I will remove this function at all and will make direct calls
> to the backend on .display_{enable|disable}
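With the indirection folded away, the two callbacks could look roughly like this (a sketch only; xen_drm_front_mode_set stands for the future direct backend call mentioned above, and its exact signature may differ):

```c
static void display_enable(struct drm_simple_display_pipe *pipe,
		struct drm_crtc_state *crtc_state)
{
	struct xen_drm_front_drm_pipeline *pipeline = to_xen_drm_pipeline(pipe);
	/* simple helpers guarantee a valid fb while the display is on */
	struct drm_framebuffer *fb = pipe->plane.state->fb;

	if (xen_drm_front_mode_set(pipeline, pipe->crtc.x, pipe->crtc.y,
			fb->width, fb->height, fb->format->cpp[0] * 8,
			xen_drm_front_fb_to_cookie(fb)))
		DRM_ERROR("Failed to enable display\n");
}

static void display_disable(struct drm_simple_display_pipe *pipe)
{
	struct xen_drm_front_drm_pipeline *pipeline = to_xen_drm_pipeline(pipe);

	if (xen_drm_front_mode_set(pipeline, 0, 0, 0, 0, 0,
			xen_drm_front_fb_to_cookie(NULL)))
		DRM_ERROR("Failed to disable display\n");
}
```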
> >
> > Maybe we need to fix the docs, pls check and if that's not clear, submit a
> > kernel-doc patch for the simple pipe helpers.
> no, nothing wrong here, just see my reasoning above
> > > +
> > > + if (ret)
> > > + DRM_ERROR("Failed to set mode to back: %d\n", ret);
> > > +
> > > + return ret;
> > > +}
> > > +
> > > +static void display_enable(struct drm_simple_display_pipe *pipe,
> > > + struct drm_crtc_state *crtc_state)
> > > +{
> > > + struct drm_crtc *crtc = &pipe->crtc;
> > > + struct drm_framebuffer *fb = pipe->plane.state->fb;
> > > +
> > > + if (display_set_config(pipe, fb) == 0)
> > > + drm_crtc_vblank_on(crtc);
> > I get the impression your driver doesn't support vblanks (the page flip
> > code at least looks like it's only generating a single event),
> yes, this is true
> > you also
> > don't have a enable/disable_vblank implementation.
> this is because with my previous patches [1] these are now handled
> by simple helpers, so no need to provide dummy ones in the driver
> > If there's no vblank
> > handling then this shouldn't be needed.
> yes, I will rework the code, please see below
> > > + else
> > > + DRM_ERROR("Failed to enable display\n");
> > > +}
> > > +
> > > +static void display_disable(struct drm_simple_display_pipe *pipe)
> > > +{
> > > + struct drm_crtc *crtc = &pipe->crtc;
> > > +
> > > + display_set_config(pipe, NULL);
> > > + drm_crtc_vblank_off(crtc);
> > > + /* final check for stalled events */
> > > + if (crtc->state->event && !crtc->state->active) {
> > > + unsigned long flags;
> > > +
> > > + spin_lock_irqsave(&crtc->dev->event_lock, flags);
> > > + drm_crtc_send_vblank_event(crtc, crtc->state->event);
> > > + spin_unlock_irqrestore(&crtc->dev->event_lock, flags);
> > > + crtc->state->event = NULL;
> > > + }
> > > +}
> > > +
> > > +void xen_drm_front_kms_on_frame_done(
> > > + struct xen_drm_front_drm_pipeline *pipeline,
> > > + uint64_t fb_cookie)
> > > +{
> > > + drm_crtc_handle_vblank(&pipeline->pipe.crtc);
> > Hm, again this doesn't look like real vblank, but only a page-flip done
> > event. If that's correct then please don't use the vblank machinery, but
> > just store the event internally (protected with your own private spinlock)
> Why can't I use &dev->event_lock? Anyway, for handling
> page-flip events I will need to lock on it, so I can do
> drm_crtc_send_vblank_event?
Yeah you can reuse the event_lock too, that's what many drivers do.
> > and send it out using drm_crtc_send_vblank_event directly. No calls to
> > arm_vblank_event or any of the other vblank infrastructure should be
> > needed.
> will re-work, e.g. will store drm_pending_vblank_event
> on .display_update and send out on page flip event from the
> backend
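A sketch of that re-work (it assumes a new 'pending_event' field added to struct xen_drm_front_drm_pipeline; the field name is illustrative):

```c
/* in .update: take over the CRTC event instead of arming vblank */
spin_lock_irq(&crtc->dev->event_lock);
pipeline->pending_event = crtc->state->event;
crtc->state->event = NULL;
spin_unlock_irq(&crtc->dev->event_lock);

/* later, on the page-flip-done notification from the backend */
spin_lock_irqsave(&crtc->dev->event_lock, flags);
if (pipeline->pending_event)
	drm_crtc_send_vblank_event(crtc, pipeline->pending_event);
pipeline->pending_event = NULL;
spin_unlock_irqrestore(&crtc->dev->event_lock, flags);
```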
> > Also please remove the drm_vblank_init() call, since your hw doesn't
> > really have vblanks. And exposing vblanks to userspace without
> > implementing them is confusing.
> will remove all vblank handling at all with the re-work above
> >
> > > +}
> > > +
> > > +static void display_send_page_flip(struct drm_simple_display_pipe *pipe,
> > > + struct drm_plane_state *old_plane_state)
> > > +{
> > > + struct drm_plane_state *plane_state = drm_atomic_get_new_plane_state(
> > > + old_plane_state->state, &pipe->plane);
> > > +
> > > + /*
> > > + * If old_plane_state->fb is NULL and plane_state->fb is not,
> > > + * then this is an atomic commit which will enable display.
> > > + * If old_plane_state->fb is not NULL and plane_state->fb is,
> > > + * then this is an atomic commit which will disable display.
> > > + * Ignore these and do not send page flip as this framebuffer will be
> > > + * sent to the backend as a part of display_set_config call.
> > > + */
> > > + if (old_plane_state->fb && plane_state->fb) {
> > > + struct xen_drm_front_drm_pipeline *pipeline =
> > > + to_xen_drm_pipeline(pipe);
> > > + struct xen_drm_front_drm_info *drm_info = pipeline->drm_info;
> > > + int ret;
> > > +
> > > + ret = drm_info->front_ops->page_flip(drm_info->front_info,
> > > + pipeline->index,
> > > + xen_drm_front_fb_to_cookie(plane_state->fb));
> > > + pipeline->pgflip_last_error = ret;
> > > + if (ret) {
> > > + DRM_ERROR("Failed to send page flip request to backend: %d\n", ret);
> > > + /*
> > > + * As we are at commit stage the DRM core will anyways
> > > + * wait for the vblank and knows nothing about our
> > > + * failure. The best we can do is to handle
> > > + * vblank now, so there is no vblank/flip_done
> > > + * time outs
> > > + */
> > > + drm_crtc_handle_vblank(&pipeline->pipe.crtc);
> > > + }
> > > + }
> > > +}
> > > +
> > > +static int display_prepare_fb(struct drm_simple_display_pipe *pipe,
> > > + struct drm_plane_state *plane_state)
> > > +{
> > > + struct xen_drm_front_drm_pipeline *pipeline =
> > > + to_xen_drm_pipeline(pipe);
> > > +
> > > + if (pipeline->pgflip_last_error) {
> > > + int ret;
> > > +
> > > + /* if previous page flip didn't succeed then report the error */
> > > + ret = pipeline->pgflip_last_error;
> > > + /* and let us try to page flip next time */
> > > + pipeline->pgflip_last_error = 0;
> > > + return ret;
> > > + }
> > Nope, this isn't how the uapi works. If your flips fail then we might need
> > to add some error status thing to the drm events, but you can't make the
> > next flip fail.
> Well, yes, there is no way for me to tell that the page flip
> has failed, so this is why I tried to do this workaround with
> the next page-flip. The reason for that is that if, for example,
> we are disconnected from the backend for some reason, there is
> no way for me to tell the user-space that hey, please, do not
> send any other page flips. If the backend can recover and that was
> a one-time error then yes, the code I have will do the wrong thing
> (fail the current page flip), but if the error state is persistent
> then I will be able to tell user-space to stop by returning errors.
> This is a trade-off which I am not sure how to solve correctly.
>
> Do you think I can remove this workaround completely?
Yes. If you want to tell userspace that the backend is gone, send a
hotplug uevent and update the connector status to disconnected. Hotplug
uevents are how we tell userspace about asynchronous changes. We also have
special stuff to signal display cable issue that might require picking a
lower resolution (DP link training) and when HDCP encryption failed.
Sending back random errors on pageflips just confuses the compositor, and
all correctly working compositors will listen to hotplug events and
reprobe all the outputs and change the configuration if necessary.
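On backend disconnect the driver could then flip the connector state and notify user-space roughly like this (a sketch; 'conn_connected' is a hypothetical flag that the connector's .detect/.get_modes path would consult):

```c
pipeline->conn_connected = false;
/* triggers a hotplug uevent; compositors will reprobe the outputs */
drm_kms_helper_hotplug_event(drm_info->drm_dev);
```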
-Daniel
> > -Daniel
> >
> > > + return drm_gem_fb_prepare_fb(&pipe->plane, plane_state);
> > > +}
> > > +
> > > +static void display_update(struct drm_simple_display_pipe *pipe,
> > > + struct drm_plane_state *old_plane_state)
> > > +{
> > > + struct drm_crtc *crtc = &pipe->crtc;
> > > + struct drm_pending_vblank_event *event;
> > > +
> > > + event = crtc->state->event;
> > > + if (event) {
> > > + struct drm_device *dev = crtc->dev;
> > > + unsigned long flags;
> > > +
> > > + crtc->state->event = NULL;
> > > +
> > > + spin_lock_irqsave(&dev->event_lock, flags);
> > > + if (drm_crtc_vblank_get(crtc) == 0)
> > > + drm_crtc_arm_vblank_event(crtc, event);
> > > + else
> > > + drm_crtc_send_vblank_event(crtc, event);
> > > + spin_unlock_irqrestore(&dev->event_lock, flags);
> > > + }
> > > + /*
> > > + * Send page flip request to the backend *after* we have event armed/
> > > + * sent above, so on page flip done event from the backend we can
> > > + * deliver it while handling vblank.
> > > + */
> > > + display_send_page_flip(pipe, old_plane_state);
> > > +}
> > > +
> > > +static const struct drm_simple_display_pipe_funcs display_funcs = {
> > > + .enable = display_enable,
> > > + .disable = display_disable,
> > > + .prepare_fb = display_prepare_fb,
> > > + .update = display_update,
> > > +};
> > > +
> > > +static int display_pipe_init(struct xen_drm_front_drm_info *drm_info,
> > > + int index, struct xen_drm_front_cfg_connector *cfg,
> > > + struct xen_drm_front_drm_pipeline *pipeline)
> > > +{
> > > + struct drm_device *dev = drm_info->drm_dev;
> > > + const uint32_t *formats;
> > > + int format_count;
> > > + int ret;
> > > +
> > > + pipeline->drm_info = drm_info;
> > > + pipeline->index = index;
> > > + pipeline->height = cfg->height;
> > > + pipeline->width = cfg->width;
> > > +
> > > + ret = xen_drm_front_conn_init(drm_info, &pipeline->conn);
> > > + if (ret)
> > > + return ret;
> > > +
> > > + formats = xen_drm_front_conn_get_formats(&format_count);
> > > +
> > > + return drm_simple_display_pipe_init(dev, &pipeline->pipe,
> > > + &display_funcs, formats, format_count,
> > > + NULL, &pipeline->conn);
> > > +}
> > > +
> > > +int xen_drm_front_kms_init(struct xen_drm_front_drm_info *drm_info)
> > > +{
> > > + struct drm_device *dev = drm_info->drm_dev;
> > > + int i, ret;
> > > +
> > > + drm_mode_config_init(dev);
> > > +
> > > + dev->mode_config.min_width = 0;
> > > + dev->mode_config.min_height = 0;
> > > + dev->mode_config.max_width = 4095;
> > > + dev->mode_config.max_height = 2047;
> > > + dev->mode_config.funcs = &mode_config_funcs;
> > > +
> > > + for (i = 0; i < drm_info->cfg->num_connectors; i++) {
> > > + struct xen_drm_front_cfg_connector *cfg =
> > > + &drm_info->cfg->connectors[i];
> > > + struct xen_drm_front_drm_pipeline *pipeline =
> > > + &drm_info->pipeline[i];
> > > +
> > > + ret = display_pipe_init(drm_info, i, cfg, pipeline);
> > > + if (ret) {
> > > + drm_mode_config_cleanup(dev);
> > > + return ret;
> > > + }
> > > + }
> > > +
> > > + drm_mode_config_reset(dev);
> > > + return 0;
> > > +}
> > > diff --git a/drivers/gpu/drm/xen/xen_drm_front_kms.h b/drivers/gpu/drm/xen/xen_drm_front_kms.h
> > > new file mode 100644
> > > index 000000000000..65a50033bb9b
> > > --- /dev/null
> > > +++ b/drivers/gpu/drm/xen/xen_drm_front_kms.h
> > > @@ -0,0 +1,30 @@
> > > +/*
> > > + * Xen para-virtual DRM device
> > > + *
> > > + * This program is free software; you can redistribute it and/or modify
> > > + * it under the terms of the GNU General Public License as published by
> > > + * the Free Software Foundation; either version 2 of the License, or
> > > + * (at your option) any later version.
> > > + *
> > > + * This program is distributed in the hope that it will be useful,
> > > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> > > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> > > + * GNU General Public License for more details.
> > > + *
> > > + * Copyright (C) 2016-2018 EPAM Systems Inc.
> > > + *
> > > + * Author: Oleksandr Andrushchenko <[email protected]>
> > > + */
> > > +
> > > +#ifndef __XEN_DRM_FRONT_KMS_H_
> > > +#define __XEN_DRM_FRONT_KMS_H_
> > > +
> > > +#include "xen_drm_front_drv.h"
> > > +
> > > +int xen_drm_front_kms_init(struct xen_drm_front_drm_info *drm_info);
> > > +
> > > +void xen_drm_front_kms_on_frame_done(
> > > + struct xen_drm_front_drm_pipeline *pipeline,
> > > + uint64_t fb_cookie);
> > > +
> > > +#endif /* __XEN_DRM_FRONT_KMS_H_ */
> > > --
> > > 2.7.4
> > >
> > > _______________________________________________
> > > dri-devel mailing list
> > > [email protected]
> > > https://lists.freedesktop.org/mailman/listinfo/dri-devel
> [1] https://patchwork.kernel.org/patch/10211997/
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
On Mon, Mar 05, 2018 at 03:46:07PM +0200, Oleksandr Andrushchenko wrote:
> On 03/05/2018 11:32 AM, Daniel Vetter wrote:
> > On Wed, Feb 21, 2018 at 10:03:41AM +0200, Oleksandr Andrushchenko wrote:
> > > From: Oleksandr Andrushchenko <[email protected]>
> > >
> > > Implement GEM handling depending on driver mode of operation:
> > > depending on the requirements for the para-virtualized environment, namely
> > > requirements dictated by the accompanying DRM/(v)GPU drivers running in both
> > > host and guest environments, number of operating modes of para-virtualized
> > > display driver are supported:
> > > - display buffers can be allocated by either frontend driver or backend
> > > - display buffers can be allocated to be contiguous in memory or not
> > >
> > > Note! Frontend driver itself has no dependency on contiguous memory for
> > > its operation.
> > >
> > > 1. Buffers allocated by the frontend driver.
> > >
> > > The below modes of operation are configured at compile-time via
> > > frontend driver's kernel configuration.
> > >
> > > 1.1. Front driver configured to use GEM CMA helpers
> > > This use-case is useful when used with accompanying DRM/vGPU driver in
> > > guest domain which was designed to only work with contiguous buffers,
> > > e.g. DRM driver based on GEM CMA helpers: such drivers can only import
> > > contiguous PRIME buffers, thus requiring frontend driver to provide
> > > such. In order to implement this mode of operation para-virtualized
> > > frontend driver can be configured to use GEM CMA helpers.
> > >
> > > 1.2. Front driver doesn't use GEM CMA
> > > If accompanying drivers can cope with non-contiguous memory then, to
> > > lower pressure on CMA subsystem of the kernel, driver can allocate
> > > buffers from system memory.
> > >
> > > Note! If used with accompanying DRM/(v)GPU drivers this mode of operation
> > > may require IOMMU support on the platform, so accompanying DRM/vGPU
> > > hardware can still reach display buffer memory while importing PRIME
> > > buffers from the frontend driver.
> > >
> > > 2. Buffers allocated by the backend
> > >
> > > This mode of operation is run-time configured via guest domain configuration
> > > through XenStore entries.
> > >
> > > For systems which do not provide IOMMU support, but having specific
> > > requirements for display buffers it is possible to allocate such buffers
> > > at backend side and share those with the frontend.
> > > For example, if host domain is 1:1 mapped and has DRM/GPU hardware expecting
> > > physically contiguous memory, this allows implementing zero-copying
> > > use-cases.
> > >
> > > Note! Configuration options 1.1 (contiguous display buffers) and 2 (backend
> > > allocated buffers) are not supported at the same time.
> > >
> > > Signed-off-by: Oleksandr Andrushchenko <[email protected]>
> > Some suggestions below for some larger cleanup work.
> > -Daniel
> >
> > > ---
> > > drivers/gpu/drm/xen/Kconfig | 13 +
> > > drivers/gpu/drm/xen/Makefile | 6 +
> > > drivers/gpu/drm/xen/xen_drm_front.h | 74 ++++++
> > > drivers/gpu/drm/xen/xen_drm_front_drv.c | 80 ++++++-
> > > drivers/gpu/drm/xen/xen_drm_front_drv.h | 1 +
> > > drivers/gpu/drm/xen/xen_drm_front_gem.c | 360 ++++++++++++++++++++++++++++
> > > drivers/gpu/drm/xen/xen_drm_front_gem.h | 46 ++++
> > > drivers/gpu/drm/xen/xen_drm_front_gem_cma.c | 93 +++++++
> > > 8 files changed, 667 insertions(+), 6 deletions(-)
> > > create mode 100644 drivers/gpu/drm/xen/xen_drm_front_gem.c
> > > create mode 100644 drivers/gpu/drm/xen/xen_drm_front_gem.h
> > > create mode 100644 drivers/gpu/drm/xen/xen_drm_front_gem_cma.c
> > >
> > > diff --git a/drivers/gpu/drm/xen/Kconfig b/drivers/gpu/drm/xen/Kconfig
> > > index 4cca160782ab..4f4abc91f3b6 100644
> > > --- a/drivers/gpu/drm/xen/Kconfig
> > > +++ b/drivers/gpu/drm/xen/Kconfig
> > > @@ -15,3 +15,16 @@ config DRM_XEN_FRONTEND
> > > help
> > > Choose this option if you want to enable a para-virtualized
> > > frontend DRM/KMS driver for Xen guest OSes.
> > > +
> > > +config DRM_XEN_FRONTEND_CMA
> > > + bool "Use DRM CMA to allocate dumb buffers"
> > > + depends on DRM_XEN_FRONTEND
> > > + select DRM_KMS_CMA_HELPER
> > > + select DRM_GEM_CMA_HELPER
> > > + help
> > > + Use DRM CMA helpers to allocate display buffers.
> > > + This is useful for the use-cases when guest driver needs to
> > > + share or export buffers to other drivers which only expect
> > > + contiguous buffers.
> > > + Note: in this mode driver cannot use buffers allocated
> > > + by the backend.
> > > diff --git a/drivers/gpu/drm/xen/Makefile b/drivers/gpu/drm/xen/Makefile
> > > index 4fcb0da1a9c5..12376ec78fbc 100644
> > > --- a/drivers/gpu/drm/xen/Makefile
> > > +++ b/drivers/gpu/drm/xen/Makefile
> > > @@ -8,4 +8,10 @@ drm_xen_front-objs := xen_drm_front.o \
> > > xen_drm_front_shbuf.o \
> > > xen_drm_front_cfg.o
> > > +ifeq ($(CONFIG_DRM_XEN_FRONTEND_CMA),y)
> > > + drm_xen_front-objs += xen_drm_front_gem_cma.o
> > > +else
> > > + drm_xen_front-objs += xen_drm_front_gem.o
> > > +endif
> > > +
> > > obj-$(CONFIG_DRM_XEN_FRONTEND) += drm_xen_front.o
> > > diff --git a/drivers/gpu/drm/xen/xen_drm_front.h b/drivers/gpu/drm/xen/xen_drm_front.h
> > > index 9ed5bfb248d0..c6f52c892434 100644
> > > --- a/drivers/gpu/drm/xen/xen_drm_front.h
> > > +++ b/drivers/gpu/drm/xen/xen_drm_front.h
> > > @@ -34,6 +34,80 @@
> > > struct xen_drm_front_drm_pipeline;
> > > +/*
> > > + *******************************************************************************
> > > + * Para-virtualized DRM/KMS frontend driver
> > > + *******************************************************************************
> > > + * This frontend driver implements Xen para-virtualized display
> > > + * according to the display protocol described at
> > > + * include/xen/interface/io/displif.h
> > > + *
> > > + *******************************************************************************
> > > + * Driver modes of operation in terms of display buffers used
> > > + *******************************************************************************
> > > + * Depending on the requirements for the para-virtualized environment, namely
> > > + * requirements dictated by the accompanying DRM/(v)GPU drivers running in both
> > > + * host and guest environments, number of operating modes of para-virtualized
> > > + * display driver are supported:
> > > + * - display buffers can be allocated by either frontend driver or backend
> > > + * - display buffers can be allocated to be contiguous in memory or not
> > > + *
> > > + * Note! Frontend driver itself has no dependency on contiguous memory for
> > > + * its operation.
> > > + *
> > > + *******************************************************************************
> > > + * 1. Buffers allocated by the frontend driver.
> > > + *******************************************************************************
> > > + *
> > > + * The below modes of operation are configured at compile-time via
> > > + * frontend driver's kernel configuration.
> > > + *
> > > + * 1.1. Front driver configured to use GEM CMA helpers
> > > + * This use-case is useful when used with accompanying DRM/vGPU driver in
> > > + * guest domain which was designed to only work with contiguous buffers,
> > > + * e.g. DRM driver based on GEM CMA helpers: such drivers can only import
> > > + * contiguous PRIME buffers, thus requiring frontend driver to provide
> > > + * such. In order to implement this mode of operation para-virtualized
> > > + * frontend driver can be configured to use GEM CMA helpers.
> > > + *
> > > + * 1.2. Front driver doesn't use GEM CMA
> > > + * If accompanying drivers can cope with non-contiguous memory then, to
> > > + * lower pressure on CMA subsystem of the kernel, driver can allocate
> > > + * buffers from system memory.
> > > + *
> > > + * Note! If used with accompanying DRM/(v)GPU drivers this mode of operation
> > > + * may require IOMMU support on the platform, so accompanying DRM/vGPU
> > > + * hardware can still reach display buffer memory while importing PRIME
> > > + * buffers from the frontend driver.
> > > + *
> > > + *******************************************************************************
> > > + * 2. Buffers allocated by the backend
> > > + *******************************************************************************
> > > + *
> > > + * This mode of operation is run-time configured via guest domain configuration
> > > + * through XenStore entries.
> > > + *
> > > + * For systems which do not provide IOMMU support, but having specific
> > > + * requirements for display buffers it is possible to allocate such buffers
> > > + * at backend side and share those with the frontend.
> > > + * For example, if host domain is 1:1 mapped and has DRM/GPU hardware expecting
> > > + * physically contiguous memory, this allows implementing zero-copying
> > > + * use-cases.
> > > + *
> > > + *******************************************************************************
> > > + * Driver limitations
> > > + *******************************************************************************
> > > + * 1. Configuration options 1.1 (contiguous display buffers) and 2 (backend
> > > + * allocated buffers) are not supported at the same time.
> > > + *
> > > + * 2. Only primary plane without additional properties is supported.
> > > + *
> > > + * 3. Only one video mode is supported, which is configured via XenStore.
> > > + *
> > > + * 4. All CRTCs operate at fixed frequency of 60Hz.
> > > + *
> > > + ******************************************************************************/
> > Since you've typed this all up, pls convert it to kernel-doc and pull it
> > into a xen-front.rst driver section in Documentation/gpu/ There's a few
> > examples for i915 and vc4 already.
> Do you mean to move it, or to keep it in the driver and also add
> it to Documentation? I would prefer to move it, to have the
> description in a single place.
Keep it where it is, but reformat as a correct kerneldoc (it's RST format)
and pull it in as a DOC: section. See
https://dri.freedesktop.org/docs/drm/doc-guide/kernel-doc.html
and the other sections in that chapter.
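A minimal sketch of what that conversion looks like (the section title below is illustrative). In the driver header:

```c
/**
 * DOC: Driver modes of operation in terms of display buffers used
 *
 * Depending on the requirements for the para-virtualized environment, ...
 */
```

and then in a hypothetical Documentation/gpu/xen-front.rst:

```
.. kernel-doc:: drivers/gpu/drm/xen/xen_drm_front.h
   :doc: Driver modes of operation in terms of display buffers used
```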
> >
> > > +
> > > struct xen_drm_front_ops {
> > > int (*mode_set)(struct xen_drm_front_drm_pipeline *pipeline,
> > > uint32_t x, uint32_t y, uint32_t width, uint32_t height,
> > > diff --git a/drivers/gpu/drm/xen/xen_drm_front_drv.c b/drivers/gpu/drm/xen/xen_drm_front_drv.c
> > > index e8862d26ba27..35e7e9cda9d1 100644
> > > --- a/drivers/gpu/drm/xen/xen_drm_front_drv.c
> > > +++ b/drivers/gpu/drm/xen/xen_drm_front_drv.c
> > > @@ -23,12 +23,58 @@
> > > #include "xen_drm_front.h"
> > > #include "xen_drm_front_cfg.h"
> > > #include "xen_drm_front_drv.h"
> > > +#include "xen_drm_front_gem.h"
> > > #include "xen_drm_front_kms.h"
> > > static int dumb_create(struct drm_file *filp,
> > > struct drm_device *dev, struct drm_mode_create_dumb *args)
> > > {
> > > - return -EINVAL;
> > > + struct xen_drm_front_drm_info *drm_info = dev->dev_private;
> > > + struct drm_gem_object *obj;
> > > + int ret;
> > > +
> > > + ret = drm_info->gem_ops->dumb_create(filp, dev, args);
> > > + if (ret)
> > > + goto fail;
> > > +
> > > + obj = drm_gem_object_lookup(filp, args->handle);
> > > + if (!obj) {
> > > + ret = -ENOENT;
> > > + goto fail_destroy;
> > > + }
> > > +
> > > + drm_gem_object_unreference_unlocked(obj);
> > > +
> > > + /*
> > > + * In case of CONFIG_DRM_XEN_FRONTEND_CMA gem_obj is constructed
> > > + * via DRM CMA helpers and doesn't have ->pages allocated
> > > + * (xendrm_gem_get_pages will return NULL), but instead can provide
> > > + * sg table
> > > + */
> > My recommendation is to use an sg table for everything if you deal with
> > mixed objects (CMA, special blocks 1:1 mapped from host, normal pages).
> > That avoids the constant get_pages vs. get_sgt differences. For examples
> > see how e.g. i915 handles the various gem object backends.
> Indeed, I tried to do it this way before, i.e. have everything sgt-based.
> But at the end of the day the Xen shared-buffer code in the driver works
> with pages (the Xen API is page-based there), so an sgt would anyway need
> to be converted into a page array.
> For that reason I prefer to work with pages from the beginning, not sgt.
> As to constant get_pages etc. - this is the only expected place in the
> driver for that, so the _from_sgt/_from_pages API is only used here.
Yeah was just a suggestion to simplify the code. But if you have to deal
with both, there's not much point.
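For the cases where the driver does receive sgt-backed buffers (e.g. PRIME imports), the conversion to the page array the Xen code needs can lean on an existing helper. A sketch, with error handling elided and num_pages assumed to be computed from the buffer size:

```c
struct page **pages;
int ret;

pages = kvmalloc_array(num_pages, sizeof(struct page *), GFP_KERNEL);
if (!pages)
	return -ENOMEM;

/* fill 'pages' from the imported scatter-gather table */
ret = drm_prime_sg_to_page_addr_arrays(sgt, pages, NULL, num_pages);
```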
>
> >
> > > + if (drm_info->gem_ops->get_pages(obj))
> > > + ret = drm_info->front_ops->dbuf_create_from_pages(
> > > + drm_info->front_info,
> > > + xen_drm_front_dbuf_to_cookie(obj),
> > > + args->width, args->height, args->bpp,
> > > + args->size,
> > > + drm_info->gem_ops->get_pages(obj));
> > > + else
> > > + ret = drm_info->front_ops->dbuf_create_from_sgt(
> > > + drm_info->front_info,
> > > + xen_drm_front_dbuf_to_cookie(obj),
> > > + args->width, args->height, args->bpp,
> > > + args->size,
> > > + drm_info->gem_ops->prime_get_sg_table(obj));
> > > + if (ret)
> > > + goto fail_destroy;
> > > +
> > > + return 0;
> > > +
> > > +fail_destroy:
> > > + drm_gem_dumb_destroy(filp, dev, args->handle);
> > > +fail:
> > > + DRM_ERROR("Failed to create dumb buffer: %d\n", ret);
> > > + return ret;
> > > }
> > > static void free_object(struct drm_gem_object *obj)
> > > @@ -37,6 +83,7 @@ static void free_object(struct drm_gem_object *obj)
> > > drm_info->front_ops->dbuf_destroy(drm_info->front_info,
> > > xen_drm_front_dbuf_to_cookie(obj));
> > > + drm_info->gem_ops->free_object_unlocked(obj);
> > > }
> > > static void on_frame_done(struct platform_device *pdev,
> > > @@ -60,32 +107,52 @@ static void lastclose(struct drm_device *dev)
> > > static int gem_mmap(struct file *filp, struct vm_area_struct *vma)
> > > {
> > > - return -EINVAL;
> > > + struct drm_file *file_priv = filp->private_data;
> > > + struct drm_device *dev = file_priv->minor->dev;
> > > + struct xen_drm_front_drm_info *drm_info = dev->dev_private;
> > > +
> > > + return drm_info->gem_ops->mmap(filp, vma);
> > Uh, so 1 midlayer for the kms stuff and another midlayer for the gem
> > stuff. That's way too much indirection.
> If by KMS you mean front_ops then -1: I will remove front_ops.
> As to gem_ops, please see below
> > > }
> > > static struct sg_table *prime_get_sg_table(struct drm_gem_object *obj)
> > > {
> > > - return NULL;
> > > + struct xen_drm_front_drm_info *drm_info;
> > > +
> > > + drm_info = obj->dev->dev_private;
> > > + return drm_info->gem_ops->prime_get_sg_table(obj);
> > > }
> > > static struct drm_gem_object *prime_import_sg_table(struct drm_device *dev,
> > > struct dma_buf_attachment *attach, struct sg_table *sgt)
> > > {
> > > - return NULL;
> > > + struct xen_drm_front_drm_info *drm_info;
> > > +
> > > + drm_info = dev->dev_private;
> > > + return drm_info->gem_ops->prime_import_sg_table(dev, attach, sgt);
> > > }
> > > static void *prime_vmap(struct drm_gem_object *obj)
> > > {
> > > - return NULL;
> > > + struct xen_drm_front_drm_info *drm_info;
> > > +
> > > + drm_info = obj->dev->dev_private;
> > > + return drm_info->gem_ops->prime_vmap(obj);
> > > }
> > > static void prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> > > {
> > > + struct xen_drm_front_drm_info *drm_info;
> > > +
> > > + drm_info = obj->dev->dev_private;
> > > + drm_info->gem_ops->prime_vunmap(obj, vaddr);
> > > }
> > > static int prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> > > {
> > > - return -EINVAL;
> > > + struct xen_drm_front_drm_info *drm_info;
> > > +
> > > + drm_info = obj->dev->dev_private;
> > > + return drm_info->gem_ops->prime_mmap(obj, vma);
> > > }
> > > static const struct file_operations xendrm_fops = {
> > > @@ -147,6 +214,7 @@ int xen_drm_front_drv_probe(struct platform_device *pdev,
> > > drm_info->front_ops = front_ops;
> > > drm_info->front_ops->on_frame_done = on_frame_done;
> > > + drm_info->gem_ops = xen_drm_front_gem_get_ops();
> > > drm_info->front_info = cfg->front_info;
> > > dev = drm_dev_alloc(&xen_drm_driver, &pdev->dev);
> > > diff --git a/drivers/gpu/drm/xen/xen_drm_front_drv.h b/drivers/gpu/drm/xen/xen_drm_front_drv.h
> > > index 563318b19f34..34228eb86255 100644
> > > --- a/drivers/gpu/drm/xen/xen_drm_front_drv.h
> > > +++ b/drivers/gpu/drm/xen/xen_drm_front_drv.h
> > > @@ -43,6 +43,7 @@ struct xen_drm_front_drm_pipeline {
> > > struct xen_drm_front_drm_info {
> > > struct xen_drm_front_info *front_info;
> > > struct xen_drm_front_ops *front_ops;
> > > + const struct xen_drm_front_gem_ops *gem_ops;
> > > struct drm_device *drm_dev;
> > > struct xen_drm_front_cfg *cfg;
> > > diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
> > > new file mode 100644
> > > index 000000000000..367e08f6a9ef
> > > --- /dev/null
> > > +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
> > > @@ -0,0 +1,360 @@
> > > +/*
> > > + * Xen para-virtual DRM device
> > > + *
> > > + * This program is free software; you can redistribute it and/or modify
> > > + * it under the terms of the GNU General Public License as published by
> > > + * the Free Software Foundation; either version 2 of the License, or
> > > + * (at your option) any later version.
> > > + *
> > > + * This program is distributed in the hope that it will be useful,
> > > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> > > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> > > + * GNU General Public License for more details.
> > > + *
> > > + * Copyright (C) 2016-2018 EPAM Systems Inc.
> > > + *
> > > + * Author: Oleksandr Andrushchenko <[email protected]>
> > > + */
> > > +
> > > +#include "xen_drm_front_gem.h"
> > > +
> > > +#include <drm/drmP.h>
> > > +#include <drm/drm_crtc_helper.h>
> > > +#include <drm/drm_fb_helper.h>
> > > +#include <drm/drm_gem.h>
> > > +
> > > +#include <linux/dma-buf.h>
> > > +#include <linux/scatterlist.h>
> > > +#include <linux/shmem_fs.h>
> > > +
> > > +#include <xen/balloon.h>
> > > +
> > > +#include "xen_drm_front.h"
> > > +#include "xen_drm_front_drv.h"
> > > +#include "xen_drm_front_shbuf.h"
> > > +
> > > +struct xen_gem_object {
> > > + struct drm_gem_object base;
> > > +
> > > + size_t num_pages;
> > > + struct page **pages;
> > > +
> > > + /* set for buffers allocated by the backend */
> > > + bool be_alloc;
> > > +
> > > + /* this is for imported PRIME buffer */
> > > + struct sg_table *sgt_imported;
> > > +};
> > > +
> > > +static inline struct xen_gem_object *to_xen_gem_obj(
> > > + struct drm_gem_object *gem_obj)
> > > +{
> > > + return container_of(gem_obj, struct xen_gem_object, base);
> > > +}
> > > +
> > > +static int gem_alloc_pages_array(struct xen_gem_object *xen_obj,
> > > + size_t buf_size)
> > > +{
> > > + xen_obj->num_pages = DIV_ROUND_UP(buf_size, PAGE_SIZE);
> > > + xen_obj->pages = kvmalloc_array(xen_obj->num_pages,
> > > + sizeof(struct page *), GFP_KERNEL);
> > > + return xen_obj->pages == NULL ? -ENOMEM : 0;
> > > +}
> > > +
> > > +static void gem_free_pages_array(struct xen_gem_object *xen_obj)
> > > +{
> > > + kvfree(xen_obj->pages);
> > > + xen_obj->pages = NULL;
> > > +}
> > > +
> > > +static struct xen_gem_object *gem_create_obj(struct drm_device *dev,
> > > + size_t size)
> > > +{
> > > + struct xen_gem_object *xen_obj;
> > > + int ret;
> > > +
> > > + xen_obj = kzalloc(sizeof(*xen_obj), GFP_KERNEL);
> > > + if (!xen_obj)
> > > + return ERR_PTR(-ENOMEM);
> > > +
> > > + ret = drm_gem_object_init(dev, &xen_obj->base, size);
> > > + if (ret < 0) {
> > > + kfree(xen_obj);
> > > + return ERR_PTR(ret);
> > > + }
> > > +
> > > + return xen_obj;
> > > +}
> > > +
> > > +static struct xen_gem_object *gem_create(struct drm_device *dev, size_t size)
> > > +{
> > > + struct xen_drm_front_drm_info *drm_info = dev->dev_private;
> > > + struct xen_gem_object *xen_obj;
> > > + int ret;
> > > +
> > > + size = round_up(size, PAGE_SIZE);
> > > + xen_obj = gem_create_obj(dev, size);
> > > + if (IS_ERR_OR_NULL(xen_obj))
> > > + return xen_obj;
> > > +
> > > + if (drm_info->cfg->be_alloc) {
> > > + /*
> > > + * backend will allocate space for this buffer, so
> > > + * only allocate array of pointers to pages
> > > + */
> > > + xen_obj->be_alloc = true;
> > > + ret = gem_alloc_pages_array(xen_obj, size);
> > > + if (ret < 0) {
> > > + gem_free_pages_array(xen_obj);
> > > + goto fail;
> > > + }
> > > +
> > > + ret = alloc_xenballooned_pages(xen_obj->num_pages,
> > > + xen_obj->pages);
> > > + if (ret < 0) {
> > > + DRM_ERROR("Cannot allocate %zu ballooned pages: %d\n",
> > > + xen_obj->num_pages, ret);
> > > + goto fail;
> > > + }
> > > +
> > > + return xen_obj;
> > > + }
> > > + /*
> > > + * need to allocate backing pages now, so we can share those
> > > + * with the backend
> > > + */
> > > + xen_obj->num_pages = DIV_ROUND_UP(size, PAGE_SIZE);
> > > + xen_obj->pages = drm_gem_get_pages(&xen_obj->base);
> > > + if (IS_ERR_OR_NULL(xen_obj->pages)) {
> > > + ret = PTR_ERR(xen_obj->pages);
> > > + xen_obj->pages = NULL;
> > > + goto fail;
> > > + }
> > > +
> > > + return xen_obj;
> > > +
> > > +fail:
> > > + DRM_ERROR("Failed to allocate buffer with size %zu\n", size);
> > > + return ERR_PTR(ret);
> > > +}
> > > +
> > > +static struct xen_gem_object *gem_create_with_handle(struct drm_file *filp,
> > > + struct drm_device *dev, size_t size, uint32_t *handle)
> > > +{
> > > + struct xen_gem_object *xen_obj;
> > > + struct drm_gem_object *gem_obj;
> > > + int ret;
> > > +
> > > + xen_obj = gem_create(dev, size);
> > > + if (IS_ERR_OR_NULL(xen_obj))
> > > + return xen_obj;
> > > +
> > > + gem_obj = &xen_obj->base;
> > > + ret = drm_gem_handle_create(filp, gem_obj, handle);
> > > + /* handle holds the reference */
> > > + drm_gem_object_unreference_unlocked(gem_obj);
> > > + if (ret < 0)
> > > + return ERR_PTR(ret);
> > > +
> > > + return xen_obj;
> > > +}
> > > +
> > > +static int gem_dumb_create(struct drm_file *filp, struct drm_device *dev,
> > > + struct drm_mode_create_dumb *args)
> > > +{
> > > + struct xen_gem_object *xen_obj;
> > > +
> > > + args->pitch = DIV_ROUND_UP(args->width * args->bpp, 8);
> > > + args->size = args->pitch * args->height;
> > > +
> > > + xen_obj = gem_create_with_handle(filp, dev, args->size, &args->handle);
> > > + if (IS_ERR_OR_NULL(xen_obj))
> > > + return xen_obj == NULL ? -ENOMEM : PTR_ERR(xen_obj);
> > > +
> > > + return 0;
> > > +}
> > > +
> > > +static void gem_free_object(struct drm_gem_object *gem_obj)
> > > +{
> > > + struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
> > > +
> > > + if (xen_obj->base.import_attach) {
> > > + drm_prime_gem_destroy(&xen_obj->base, xen_obj->sgt_imported);
> > > + gem_free_pages_array(xen_obj);
> > > + } else {
> > > + if (xen_obj->pages) {
> > > + if (xen_obj->be_alloc) {
> > > + free_xenballooned_pages(xen_obj->num_pages,
> > > + xen_obj->pages);
> > > + gem_free_pages_array(xen_obj);
> > > + } else
> > > + drm_gem_put_pages(&xen_obj->base,
> > > + xen_obj->pages, true, false);
> > > + }
> > > + }
> > > + drm_gem_object_release(gem_obj);
> > > + kfree(xen_obj);
> > > +}
> > > +
> > > +static struct page **gem_get_pages(struct drm_gem_object *gem_obj)
> > > +{
> > > + struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
> > > +
> > > + return xen_obj->pages;
> > > +}
> > > +
> > > +static struct sg_table *gem_get_sg_table(struct drm_gem_object *gem_obj)
> > > +{
> > > + struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
> > > +
> > > + if (!xen_obj->pages)
> > > + return NULL;
> > > +
> > > + return drm_prime_pages_to_sg(xen_obj->pages, xen_obj->num_pages);
> > > +}
> > > +
> > > +static struct drm_gem_object *gem_import_sg_table(struct drm_device *dev,
> > > + struct dma_buf_attachment *attach, struct sg_table *sgt)
> > > +{
> > > + struct xen_drm_front_drm_info *drm_info = dev->dev_private;
> > > + struct xen_gem_object *xen_obj;
> > > + size_t size;
> > > + int ret;
> > > +
> > > + size = attach->dmabuf->size;
> > > + xen_obj = gem_create_obj(dev, size);
> > > + if (IS_ERR_OR_NULL(xen_obj))
> > > + return ERR_CAST(xen_obj);
> > > +
> > > + ret = gem_alloc_pages_array(xen_obj, size);
> > > + if (ret < 0)
> > > + return ERR_PTR(ret);
> > > +
> > > + xen_obj->sgt_imported = sgt;
> > > +
> > > + ret = drm_prime_sg_to_page_addr_arrays(sgt, xen_obj->pages,
> > > + NULL, xen_obj->num_pages);
> > > + if (ret < 0)
> > > + return ERR_PTR(ret);
> > > +
> > > + /*
> > > + * N.B. Although we have an API to create display buffer from sgt
> > > + * we use pages API, because we still need those for GEM handling,
> > > + * e.g. for mapping etc.
> > > + */
> > > + ret = drm_info->front_ops->dbuf_create_from_pages(
> > > + drm_info->front_info,
> > > + xen_drm_front_dbuf_to_cookie(&xen_obj->base),
> > > + 0, 0, 0, size, xen_obj->pages);
> > > + if (ret < 0)
> > > + return ERR_PTR(ret);
> > > +
> > > + DRM_DEBUG("Imported buffer of size %zu with nents %u\n",
> > > + size, sgt->nents);
> > > +
> > > + return &xen_obj->base;
> > > +}
> > > +
> > > +static int gem_mmap_obj(struct xen_gem_object *xen_obj,
> > > + struct vm_area_struct *vma)
> > > +{
> > > + unsigned long addr = vma->vm_start;
> > > + int i;
> > > +
> > > + /*
> > > + * clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
> > > + * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
> > > + * the whole buffer.
> > > + */
> > > + vma->vm_flags &= ~VM_PFNMAP;
> > > + vma->vm_flags |= VM_MIXEDMAP;
> > > + vma->vm_pgoff = 0;
> > > + vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
> > > +
> > > +	/*
> > > +	 * A vm_operations_struct.fault handler would normally be called
> > > +	 * on CPU access to the VMA. For GPUs this is not the case, as the
> > > +	 * CPU doesn't touch the memory. Insert all pages now, so both CPU
> > > +	 * and GPU are happy.
> > > +	 * FIXME: since all pages are inserted here, the .fault handler
> > > +	 * can never be called, so don't provide one
> > > +	 */
> > > + for (i = 0; i < xen_obj->num_pages; i++) {
> > > + int ret;
> > > +
> > > + ret = vm_insert_page(vma, addr, xen_obj->pages[i]);
> > > + if (ret < 0) {
> > > + DRM_ERROR("Failed to insert pages into vma: %d\n", ret);
> > > + return ret;
> > > + }
> > > +
> > > + addr += PAGE_SIZE;
> > > + }
> > > + return 0;
> > > +}
> > > +
> > > +static int gem_mmap(struct file *filp, struct vm_area_struct *vma)
> > > +{
> > > + struct xen_gem_object *xen_obj;
> > > + struct drm_gem_object *gem_obj;
> > > + int ret;
> > > +
> > > + ret = drm_gem_mmap(filp, vma);
> > > + if (ret < 0)
> > > + return ret;
> > > +
> > > + gem_obj = vma->vm_private_data;
> > > + xen_obj = to_xen_gem_obj(gem_obj);
> > > + return gem_mmap_obj(xen_obj, vma);
> > > +}
> > > +
> > > +static void *gem_prime_vmap(struct drm_gem_object *gem_obj)
> > > +{
> > > + struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
> > > +
> > > + if (!xen_obj->pages)
> > > + return NULL;
> > > +
> > > + return vmap(xen_obj->pages, xen_obj->num_pages,
> > > + VM_MAP, pgprot_writecombine(PAGE_KERNEL));
> > > +}
> > > +
> > > +static void gem_prime_vunmap(struct drm_gem_object *gem_obj, void *vaddr)
> > > +{
> > > + vunmap(vaddr);
> > > +}
> > > +
> > > +static int gem_prime_mmap(struct drm_gem_object *gem_obj,
> > > + struct vm_area_struct *vma)
> > > +{
> > > + struct xen_gem_object *xen_obj;
> > > + int ret;
> > > +
> > > + ret = drm_gem_mmap_obj(gem_obj, gem_obj->size, vma);
> > > + if (ret < 0)
> > > + return ret;
> > > +
> > > + xen_obj = to_xen_gem_obj(gem_obj);
> > > + return gem_mmap_obj(xen_obj, vma);
> > > +}
> > > +
> > > +static const struct xen_drm_front_gem_ops xen_drm_gem_ops = {
> > > + .free_object_unlocked = gem_free_object,
> > > + .prime_get_sg_table = gem_get_sg_table,
> > > + .prime_import_sg_table = gem_import_sg_table,
> > > +
> > > + .prime_vmap = gem_prime_vmap,
> > > + .prime_vunmap = gem_prime_vunmap,
> > > + .prime_mmap = gem_prime_mmap,
> > > +
> > > + .dumb_create = gem_dumb_create,
> > > +
> > > + .mmap = gem_mmap,
> > > +
> > > + .get_pages = gem_get_pages,
> > > +};
> > > +
> > > +const struct xen_drm_front_gem_ops *xen_drm_front_gem_get_ops(void)
> > > +{
> > > + return &xen_drm_gem_ops;
> > > +}
> > > diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.h b/drivers/gpu/drm/xen/xen_drm_front_gem.h
> > > new file mode 100644
> > > index 000000000000..d1e1711cc3fc
> > > --- /dev/null
> > > +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.h
> > > @@ -0,0 +1,46 @@
> > > +/*
> > > + * Xen para-virtual DRM device
> > > + *
> > > + * This program is free software; you can redistribute it and/or modify
> > > + * it under the terms of the GNU General Public License as published by
> > > + * the Free Software Foundation; either version 2 of the License, or
> > > + * (at your option) any later version.
> > > + *
> > > + * This program is distributed in the hope that it will be useful,
> > > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> > > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> > > + * GNU General Public License for more details.
> > > + *
> > > + * Copyright (C) 2016-2018 EPAM Systems Inc.
> > > + *
> > > + * Author: Oleksandr Andrushchenko <[email protected]>
> > > + */
> > > +
> > > +#ifndef __XEN_DRM_FRONT_GEM_H
> > > +#define __XEN_DRM_FRONT_GEM_H
> > > +
> > > +#include <drm/drmP.h>
> > > +
> > > +struct xen_drm_front_gem_ops {
> > > + void (*free_object_unlocked)(struct drm_gem_object *obj);
> > > +
> > > + struct sg_table *(*prime_get_sg_table)(struct drm_gem_object *obj);
> > > + struct drm_gem_object *(*prime_import_sg_table)(struct drm_device *dev,
> > > + struct dma_buf_attachment *attach,
> > > + struct sg_table *sgt);
> > > + void *(*prime_vmap)(struct drm_gem_object *obj);
> > > + void (*prime_vunmap)(struct drm_gem_object *obj, void *vaddr);
> > > + int (*prime_mmap)(struct drm_gem_object *obj,
> > > + struct vm_area_struct *vma);
> > > +
> > > + int (*dumb_create)(struct drm_file *file_priv, struct drm_device *dev,
> > > + struct drm_mode_create_dumb *args);
> > > +
> > > + int (*mmap)(struct file *filp, struct vm_area_struct *vma);
> > > +
> > > + struct page **(*get_pages)(struct drm_gem_object *obj);
> > > +};
> > > +
> > > +const struct xen_drm_front_gem_ops *xen_drm_front_gem_get_ops(void);
> > > +
> > > +#endif /* __XEN_DRM_FRONT_GEM_H */
> > > diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem_cma.c b/drivers/gpu/drm/xen/xen_drm_front_gem_cma.c
> > > new file mode 100644
> > > index 000000000000..5ffcbfa652d5
> > > --- /dev/null
> > > +++ b/drivers/gpu/drm/xen/xen_drm_front_gem_cma.c
> > > @@ -0,0 +1,93 @@
> > > +/*
> > > + * Xen para-virtual DRM device
> > > + *
> > > + * This program is free software; you can redistribute it and/or modify
> > > + * it under the terms of the GNU General Public License as published by
> > > + * the Free Software Foundation; either version 2 of the License, or
> > > + * (at your option) any later version.
> > > + *
> > > + * This program is distributed in the hope that it will be useful,
> > > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> > > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> > > + * GNU General Public License for more details.
> > > + *
> > > + * Copyright (C) 2016-2018 EPAM Systems Inc.
> > > + *
> > > + * Author: Oleksandr Andrushchenko <[email protected]>
> > > + */
> > > +
> > > +#include <drm/drmP.h>
> > > +#include <drm/drm_gem.h>
> > > +#include <drm/drm_fb_cma_helper.h>
> > > +#include <drm/drm_gem_cma_helper.h>
> > > +
> > > +#include "xen_drm_front.h"
> > > +#include "xen_drm_front_drv.h"
> > > +#include "xen_drm_front_gem.h"
> > > +
> > > +static struct drm_gem_object *gem_import_sg_table(struct drm_device *dev,
> > > + struct dma_buf_attachment *attach, struct sg_table *sgt)
> > > +{
> > > + struct xen_drm_front_drm_info *drm_info = dev->dev_private;
> > > + struct drm_gem_object *gem_obj;
> > > + struct drm_gem_cma_object *cma_obj;
> > > + int ret;
> > > +
> > > + gem_obj = drm_gem_cma_prime_import_sg_table(dev, attach, sgt);
> > > + if (IS_ERR_OR_NULL(gem_obj))
> > > + return gem_obj;
> > > +
> > > + cma_obj = to_drm_gem_cma_obj(gem_obj);
> > > +
> > > + ret = drm_info->front_ops->dbuf_create_from_sgt(
> > > + drm_info->front_info,
> > > + xen_drm_front_dbuf_to_cookie(gem_obj),
> > > + 0, 0, 0, gem_obj->size,
> > > + drm_gem_cma_prime_get_sg_table(gem_obj));
> > > + if (ret < 0)
> > > + return ERR_PTR(ret);
> > > +
> > > + DRM_DEBUG("Imported CMA buffer of size %zu\n", gem_obj->size);
> > > +
> > > + return gem_obj;
> > > +}
> > > +
> > > +static int gem_dumb_create(struct drm_file *filp, struct drm_device *dev,
> > > + struct drm_mode_create_dumb *args)
> > > +{
> > > + struct xen_drm_front_drm_info *drm_info = dev->dev_private;
> > > +
> > > + if (drm_info->cfg->be_alloc) {
> > > + /* This use-case is not yet supported and probably won't be */
> > > + DRM_ERROR("Backend allocated buffers and CMA helpers are not supported at the same time\n");
> > > + return -EINVAL;
> > > + }
> > > +
> > > + return drm_gem_cma_dumb_create(filp, dev, args);
> > > +}
> > > +
> > > +static struct page **gem_get_pages(struct drm_gem_object *gem_obj)
> > > +{
> > > + return NULL;
> > > +}
> > > +
> > > +static const struct xen_drm_front_gem_ops xen_drm_front_gem_cma_ops = {
> > > + .free_object_unlocked = drm_gem_cma_free_object,
> > > + .prime_get_sg_table = drm_gem_cma_prime_get_sg_table,
> > > + .prime_import_sg_table = gem_import_sg_table,
> > > +
> > > + .prime_vmap = drm_gem_cma_prime_vmap,
> > > + .prime_vunmap = drm_gem_cma_prime_vunmap,
> > > + .prime_mmap = drm_gem_cma_prime_mmap,
> > > +
> > > + .dumb_create = gem_dumb_create,
> > > +
> > > + .mmap = drm_gem_cma_mmap,
> > > +
> > > + .get_pages = gem_get_pages,
> > > +};
> > Again quite a midlayer you have here. Please inline this to avoid
> > confusion for other people (since it looks like you only have 1
> > implementation).
> There are 2 implementations, selected by driver compile-time options:
> the GEM operations can be implemented either with the DRM CMA helpers
> or with the driver's own GEM code. This midlayer exists for that
> reason, e.g. to eliminate the need for something like
> #ifdef DRM_XEN_FRONTEND_CMA
> drm_gem_cma_...()
> #else
> xen_drm_front_gem_...()
> #endif
> So, I would prefer to have ops rather than having ifdefs
Ok, makes sense, but please review whether you really need all of them,
since for a lot of them (all except get_pages really) we already have
vfuncs. And if you only switch at compile time I think it's cleaner to
simply have 2 vfunc tables for those (e.g. struct drm_driver). That avoids
the indirection.
Cheers, Daniel
> >
> > > +
> > > +const struct xen_drm_front_gem_ops *xen_drm_front_gem_get_ops(void)
> > > +{
> > > + return &xen_drm_front_gem_cma_ops;
> > > +}
> > > --
> > > 2.7.4
> > >
> > > _______________________________________________
> > > dri-devel mailing list
> > > [email protected]
> > > https://lists.freedesktop.org/mailman/listinfo/dri-devel
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
On 03/06/2018 09:22 AM, Daniel Vetter wrote:
> On Mon, Mar 05, 2018 at 02:59:23PM +0200, Oleksandr Andrushchenko wrote:
>> On 03/05/2018 11:23 AM, Daniel Vetter wrote:
>>> On Wed, Feb 21, 2018 at 10:03:40AM +0200, Oleksandr Andrushchenko wrote:
>>>> From: Oleksandr Andrushchenko <[email protected]>
>>>>
>>>> Implement kernel modesetting/connector handling using
>>>> DRM simple KMS helper pipeline:
>>>>
>>>> - implement KMS part of the driver with the help of DRM
>>>> simple pipeline helper, which is possible due to the fact
>>>> that the para-virtualized driver only supports a single
>>>> (primary) plane:
>>>> - initialize connectors according to XenStore configuration
>>>> - handle frame done events from the backend
>>>> - generate vblank events
>>>> - create and destroy frame buffers and propagate those
>>>> to the backend
>>>> - propagate set/reset mode configuration to the backend on display
>>>> enable/disable callbacks
>>>> - send page flip request to the backend and implement logic for
>>>> reporting backend IO errors on prepare fb callback
>>>>
>>>> - implement virtual connector handling:
>>>> - support only pixel formats suitable for single plane modes
>>>> - make sure the connector is always connected
>>>> - support a single video mode as per para-virtualized driver
>>>> configuration
>>>>
>>>> Signed-off-by: Oleksandr Andrushchenko <[email protected]>
>>> I think once you've removed the midlayer in the previous patch it would
>>> make sense to merge the 2 patches into 1.
>> ok, will squash the two
>>> Bunch more comments below.
>>> -Daniel
>>>
>>>> ---
>>>> drivers/gpu/drm/xen/Makefile | 2 +
>>>> drivers/gpu/drm/xen/xen_drm_front_conn.c | 125 +++++++++++++
>>>> drivers/gpu/drm/xen/xen_drm_front_conn.h | 35 ++++
>>>> drivers/gpu/drm/xen/xen_drm_front_drv.c | 15 ++
>>>> drivers/gpu/drm/xen/xen_drm_front_drv.h | 12 ++
>>>> drivers/gpu/drm/xen/xen_drm_front_kms.c | 299 +++++++++++++++++++++++++++++++
>>>> drivers/gpu/drm/xen/xen_drm_front_kms.h | 30 ++++
>>>> 7 files changed, 518 insertions(+)
>>>> create mode 100644 drivers/gpu/drm/xen/xen_drm_front_conn.c
>>>> create mode 100644 drivers/gpu/drm/xen/xen_drm_front_conn.h
>>>> create mode 100644 drivers/gpu/drm/xen/xen_drm_front_kms.c
>>>> create mode 100644 drivers/gpu/drm/xen/xen_drm_front_kms.h
>>>>
>>>> diff --git a/drivers/gpu/drm/xen/Makefile b/drivers/gpu/drm/xen/Makefile
>>>> index d3068202590f..4fcb0da1a9c5 100644
>>>> --- a/drivers/gpu/drm/xen/Makefile
>>>> +++ b/drivers/gpu/drm/xen/Makefile
>>>> @@ -2,6 +2,8 @@
>>>> drm_xen_front-objs := xen_drm_front.o \
>>>> xen_drm_front_drv.o \
>>>> + xen_drm_front_kms.o \
>>>> + xen_drm_front_conn.o \
>>>> xen_drm_front_evtchnl.o \
>>>> xen_drm_front_shbuf.o \
>>>> xen_drm_front_cfg.o
>>>> diff --git a/drivers/gpu/drm/xen/xen_drm_front_conn.c b/drivers/gpu/drm/xen/xen_drm_front_conn.c
>>>> new file mode 100644
>>>> index 000000000000..d9986a2e1a3b
>>>> --- /dev/null
>>>> +++ b/drivers/gpu/drm/xen/xen_drm_front_conn.c
>>>> @@ -0,0 +1,125 @@
>>>> +/*
>>>> + * Xen para-virtual DRM device
>>>> + *
>>>> + * This program is free software; you can redistribute it and/or modify
>>>> + * it under the terms of the GNU General Public License as published by
>>>> + * the Free Software Foundation; either version 2 of the License, or
>>>> + * (at your option) any later version.
>>>> + *
>>>> + * This program is distributed in the hope that it will be useful,
>>>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>>>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
>>>> + * GNU General Public License for more details.
>>>> + *
>>>> + * Copyright (C) 2016-2018 EPAM Systems Inc.
>>>> + *
>>>> + * Author: Oleksandr Andrushchenko <[email protected]>
>>>> + */
>>>> +
>>>> +#include <drm/drm_atomic_helper.h>
>>>> +#include <drm/drm_crtc_helper.h>
>>>> +
>>>> +#include <video/videomode.h>
>>>> +
>>>> +#include "xen_drm_front_conn.h"
>>>> +#include "xen_drm_front_drv.h"
>>>> +
>>>> +static struct xen_drm_front_drm_pipeline *
>>>> +to_xen_drm_pipeline(struct drm_connector *connector)
>>>> +{
>>>> + return container_of(connector, struct xen_drm_front_drm_pipeline, conn);
>>>> +}
>>>> +
>>>> +static const uint32_t plane_formats[] = {
>>>> + DRM_FORMAT_RGB565,
>>>> + DRM_FORMAT_RGB888,
>>>> + DRM_FORMAT_XRGB8888,
>>>> + DRM_FORMAT_ARGB8888,
>>>> + DRM_FORMAT_XRGB4444,
>>>> + DRM_FORMAT_ARGB4444,
>>>> + DRM_FORMAT_XRGB1555,
>>>> + DRM_FORMAT_ARGB1555,
>>>> +};
>>>> +
>>>> +const uint32_t *xen_drm_front_conn_get_formats(int *format_count)
>>>> +{
>>>> + *format_count = ARRAY_SIZE(plane_formats);
>>>> + return plane_formats;
>>>> +}
>>>> +
>>>> +static enum drm_connector_status connector_detect(
>>>> + struct drm_connector *connector, bool force)
>>>> +{
>>>> + if (drm_dev_is_unplugged(connector->dev))
>>>> + return connector_status_disconnected;
>>>> +
>>>> + return connector_status_connected;
>>>> +}
>>>> +
>>>> +#define XEN_DRM_NUM_VIDEO_MODES 1
>>>> +#define XEN_DRM_CRTC_VREFRESH_HZ 60
>>>> +
>>>> +static int connector_get_modes(struct drm_connector *connector)
>>>> +{
>>>> + struct xen_drm_front_drm_pipeline *pipeline =
>>>> + to_xen_drm_pipeline(connector);
>>>> + struct drm_display_mode *mode;
>>>> + struct videomode videomode;
>>>> + int width, height;
>>>> +
>>>> + mode = drm_mode_create(connector->dev);
>>>> + if (!mode)
>>>> + return 0;
>>>> +
>>>> + memset(&videomode, 0, sizeof(videomode));
>>>> + videomode.hactive = pipeline->width;
>>>> + videomode.vactive = pipeline->height;
>>>> + width = videomode.hactive + videomode.hfront_porch +
>>>> + videomode.hback_porch + videomode.hsync_len;
>>>> + height = videomode.vactive + videomode.vfront_porch +
>>>> + videomode.vback_porch + videomode.vsync_len;
>>>> + videomode.pixelclock = width * height * XEN_DRM_CRTC_VREFRESH_HZ;
>>>> + mode->type = DRM_MODE_TYPE_PREFERRED | DRM_MODE_TYPE_DRIVER;
>>>> +
>>>> + drm_display_mode_from_videomode(&videomode, mode);
>>>> + drm_mode_probed_add(connector, mode);
>>>> + return XEN_DRM_NUM_VIDEO_MODES;
>>>> +}
>>>> +
>>>> +static int connector_mode_valid(struct drm_connector *connector,
>>>> + struct drm_display_mode *mode)
>>>> +{
>>>> + struct xen_drm_front_drm_pipeline *pipeline =
>>>> + to_xen_drm_pipeline(connector);
>>>> +
>>>> + if (mode->hdisplay != pipeline->width)
>>>> + return MODE_ERROR;
>>>> +
>>>> + if (mode->vdisplay != pipeline->height)
>>>> + return MODE_ERROR;
>>>> +
>>>> + return MODE_OK;
>>>> +}
>>>> +
>>>> +static const struct drm_connector_helper_funcs connector_helper_funcs = {
>>>> + .get_modes = connector_get_modes,
>>>> + .mode_valid = connector_mode_valid,
>>>> +};
>>>> +
>>>> +static const struct drm_connector_funcs connector_funcs = {
>>>> + .detect = connector_detect,
>>>> + .fill_modes = drm_helper_probe_single_connector_modes,
>>>> + .destroy = drm_connector_cleanup,
>>>> + .reset = drm_atomic_helper_connector_reset,
>>>> + .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
>>>> + .atomic_destroy_state = drm_atomic_helper_connector_destroy_state,
>>>> +};
>>>> +
>>>> +int xen_drm_front_conn_init(struct xen_drm_front_drm_info *drm_info,
>>>> + struct drm_connector *connector)
>>>> +{
>>>> + drm_connector_helper_add(connector, &connector_helper_funcs);
>>>> +
>>>> + return drm_connector_init(drm_info->drm_dev, connector,
>>>> + &connector_funcs, DRM_MODE_CONNECTOR_VIRTUAL);
>>>> +}
>>>> diff --git a/drivers/gpu/drm/xen/xen_drm_front_conn.h b/drivers/gpu/drm/xen/xen_drm_front_conn.h
>>>> new file mode 100644
>>>> index 000000000000..708e80d45985
>>>> --- /dev/null
>>>> +++ b/drivers/gpu/drm/xen/xen_drm_front_conn.h
>>>> @@ -0,0 +1,35 @@
>>>> +/*
>>>> + * Xen para-virtual DRM device
>>>> + *
>>>> + * This program is free software; you can redistribute it and/or modify
>>>> + * it under the terms of the GNU General Public License as published by
>>>> + * the Free Software Foundation; either version 2 of the License, or
>>>> + * (at your option) any later version.
>>>> + *
>>>> + * This program is distributed in the hope that it will be useful,
>>>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>>>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
>>>> + * GNU General Public License for more details.
>>>> + *
>>>> + * Copyright (C) 2016-2018 EPAM Systems Inc.
>>>> + *
>>>> + * Author: Oleksandr Andrushchenko <[email protected]>
>>>> + */
>>>> +
>>>> +#ifndef __XEN_DRM_FRONT_CONN_H_
>>>> +#define __XEN_DRM_FRONT_CONN_H_
>>>> +
>>>> +#include <drm/drmP.h>
>>>> +#include <drm/drm_crtc.h>
>>>> +#include <drm/drm_encoder.h>
>>>> +
>>>> +#include <linux/wait.h>
>>>> +
>>>> +struct xen_drm_front_drm_info;
>>>> +
>>>> +const uint32_t *xen_drm_front_conn_get_formats(int *format_count);
>>>> +
>>>> +int xen_drm_front_conn_init(struct xen_drm_front_drm_info *drm_info,
>>>> + struct drm_connector *connector);
>>>> +
>>>> +#endif /* __XEN_DRM_FRONT_CONN_H_ */
>>>> diff --git a/drivers/gpu/drm/xen/xen_drm_front_drv.c b/drivers/gpu/drm/xen/xen_drm_front_drv.c
>>>> index b3764d5ed0f6..e8862d26ba27 100644
>>>> --- a/drivers/gpu/drm/xen/xen_drm_front_drv.c
>>>> +++ b/drivers/gpu/drm/xen/xen_drm_front_drv.c
>>>> @@ -23,6 +23,7 @@
>>>> #include "xen_drm_front.h"
>>>> #include "xen_drm_front_cfg.h"
>>>> #include "xen_drm_front_drv.h"
>>>> +#include "xen_drm_front_kms.h"
>>>> static int dumb_create(struct drm_file *filp,
>>>> struct drm_device *dev, struct drm_mode_create_dumb *args)
>>>> @@ -41,6 +42,13 @@ static void free_object(struct drm_gem_object *obj)
>>>> static void on_frame_done(struct platform_device *pdev,
>>>> int conn_idx, uint64_t fb_cookie)
>>>> {
>>>> + struct xen_drm_front_drm_info *drm_info = platform_get_drvdata(pdev);
>>>> +
>>>> + if (unlikely(conn_idx >= drm_info->cfg->num_connectors))
>>>> + return;
>>>> +
>>>> + xen_drm_front_kms_on_frame_done(&drm_info->pipeline[conn_idx],
>>>> + fb_cookie);
>>>> }
>>>> static void lastclose(struct drm_device *dev)
>>>> @@ -157,6 +165,12 @@ int xen_drm_front_drv_probe(struct platform_device *pdev,
>>>> return ret;
>>>> }
>>>> + ret = xen_drm_front_kms_init(drm_info);
>>>> + if (ret) {
>>>> + DRM_ERROR("Failed to initialize DRM/KMS, ret %d\n", ret);
>>>> + goto fail_modeset;
>>>> + }
>>>> +
>>>> dev->irq_enabled = 1;
>>>> ret = drm_dev_register(dev, 0);
>>>> @@ -172,6 +186,7 @@ int xen_drm_front_drv_probe(struct platform_device *pdev,
>>>> fail_register:
>>>> drm_dev_unregister(dev);
>>>> +fail_modeset:
>>>> drm_mode_config_cleanup(dev);
>>>> return ret;
>>>> }
>>>> diff --git a/drivers/gpu/drm/xen/xen_drm_front_drv.h b/drivers/gpu/drm/xen/xen_drm_front_drv.h
>>>> index aaa476535c13..563318b19f34 100644
>>>> --- a/drivers/gpu/drm/xen/xen_drm_front_drv.h
>>>> +++ b/drivers/gpu/drm/xen/xen_drm_front_drv.h
>>>> @@ -20,14 +20,24 @@
>>>> #define __XEN_DRM_FRONT_DRV_H_
>>>> #include <drm/drmP.h>
>>>> +#include <drm/drm_simple_kms_helper.h>
>>>> #include "xen_drm_front.h"
>>>> #include "xen_drm_front_cfg.h"
>>>> +#include "xen_drm_front_conn.h"
>>>> struct xen_drm_front_drm_pipeline {
>>>> struct xen_drm_front_drm_info *drm_info;
>>>> int index;
>>>> +
>>>> + struct drm_simple_display_pipe pipe;
>>>> +
>>>> + struct drm_connector conn;
>>>> + /* these are only for connector mode checking */
>>>> + int width, height;
>>>> + /* last backend error seen on page flip */
>>>> + int pgflip_last_error;
>>>> };
>>>> struct xen_drm_front_drm_info {
>>>> @@ -35,6 +45,8 @@ struct xen_drm_front_drm_info {
>>>> struct xen_drm_front_ops *front_ops;
>>>> struct drm_device *drm_dev;
>>>> struct xen_drm_front_cfg *cfg;
>>>> +
>>>> + struct xen_drm_front_drm_pipeline pipeline[XEN_DRM_FRONT_MAX_CRTCS];
>>>> };
>>>> static inline uint64_t xen_drm_front_fb_to_cookie(
>>>> diff --git a/drivers/gpu/drm/xen/xen_drm_front_kms.c b/drivers/gpu/drm/xen/xen_drm_front_kms.c
>>>> new file mode 100644
>>>> index 000000000000..ad94c28835cd
>>>> --- /dev/null
>>>> +++ b/drivers/gpu/drm/xen/xen_drm_front_kms.c
>>>> @@ -0,0 +1,299 @@
>>>> +/*
>>>> + * Xen para-virtual DRM device
>>>> + *
>>>> + * This program is free software; you can redistribute it and/or modify
>>>> + * it under the terms of the GNU General Public License as published by
>>>> + * the Free Software Foundation; either version 2 of the License, or
>>>> + * (at your option) any later version.
>>>> + *
>>>> + * This program is distributed in the hope that it will be useful,
>>>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>>>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
>>>> + * GNU General Public License for more details.
>>>> + *
>>>> + * Copyright (C) 2016-2018 EPAM Systems Inc.
>>>> + *
>>>> + * Author: Oleksandr Andrushchenko <[email protected]>
>>>> + */
>>>> +
>>>> +#include "xen_drm_front_kms.h"
>>>> +
>>>> +#include <drm/drmP.h>
>>>> +#include <drm/drm_atomic.h>
>>>> +#include <drm/drm_atomic_helper.h>
>>>> +#include <drm/drm_gem.h>
>>>> +#include <drm/drm_gem_framebuffer_helper.h>
>>>> +
>>>> +#include "xen_drm_front.h"
>>>> +#include "xen_drm_front_conn.h"
>>>> +#include "xen_drm_front_drv.h"
>>>> +
>>>> +static struct xen_drm_front_drm_pipeline *
>>>> +to_xen_drm_pipeline(struct drm_simple_display_pipe *pipe)
>>>> +{
>>>> + return container_of(pipe, struct xen_drm_front_drm_pipeline, pipe);
>>>> +}
>>>> +
>>>> +static void fb_destroy(struct drm_framebuffer *fb)
>>>> +{
>>>> + struct xen_drm_front_drm_info *drm_info = fb->dev->dev_private;
>>>> +
>>>> + drm_info->front_ops->fb_detach(drm_info->front_info,
>>>> + xen_drm_front_fb_to_cookie(fb));
>>>> + drm_gem_fb_destroy(fb);
>>>> +}
>>>> +
>>>> +static struct drm_framebuffer_funcs fb_funcs = {
>>>> + .destroy = fb_destroy,
>>>> +};
>>>> +
>>>> +static struct drm_framebuffer *fb_create(struct drm_device *dev,
>>>> + struct drm_file *filp, const struct drm_mode_fb_cmd2 *mode_cmd)
>>>> +{
>>>> + struct xen_drm_front_drm_info *drm_info = dev->dev_private;
>>>> +	struct drm_framebuffer *fb;
>>>> + struct drm_gem_object *gem_obj;
>>>> + int ret;
>>>> +
>>>> + fb = drm_gem_fb_create_with_funcs(dev, filp, mode_cmd, &fb_funcs);
>>>> + if (IS_ERR_OR_NULL(fb))
>>>> + return fb;
>>>> +
>>>> + gem_obj = drm_gem_object_lookup(filp, mode_cmd->handles[0]);
>>>> + if (!gem_obj) {
>>>> + DRM_ERROR("Failed to lookup GEM object\n");
>>>> + ret = -ENOENT;
>>>> + goto fail;
>>>> + }
>>>> +
>>>> + drm_gem_object_unreference_unlocked(gem_obj);
>>>> +
>>>> + ret = drm_info->front_ops->fb_attach(
>>>> + drm_info->front_info,
>>>> + xen_drm_front_dbuf_to_cookie(gem_obj),
>>>> + xen_drm_front_fb_to_cookie(fb),
>>>> + fb->width, fb->height, fb->format->format);
>>>> + if (ret < 0) {
>>>> +		DRM_ERROR("Backend failed to attach FB %p: %d\n", fb, ret);
>>>> + goto fail;
>>>> + }
>>>> +
>>>> + return fb;
>>>> +
>>>> +fail:
>>>> + drm_gem_fb_destroy(fb);
>>>> + return ERR_PTR(ret);
>>>> +}
>>>> +
>>>> +static const struct drm_mode_config_funcs mode_config_funcs = {
>>>> + .fb_create = fb_create,
>>>> + .atomic_check = drm_atomic_helper_check,
>>>> + .atomic_commit = drm_atomic_helper_commit,
>>>> +};
>>>> +
>>>> +static int display_set_config(struct drm_simple_display_pipe *pipe,
>>>> + struct drm_framebuffer *fb)
>>>> +{
>>>> + struct xen_drm_front_drm_pipeline *pipeline =
>>>> + to_xen_drm_pipeline(pipe);
>>>> + struct drm_crtc *crtc = &pipe->crtc;
>>>> + struct xen_drm_front_drm_info *drm_info = pipeline->drm_info;
>>>> + int ret;
>>>> +
>>>> + if (fb)
>>>> + ret = drm_info->front_ops->mode_set(pipeline,
>>>> + crtc->x, crtc->y,
>>>> + fb->width, fb->height, fb->format->cpp[0] * 8,
>>>> + xen_drm_front_fb_to_cookie(fb));
>>>> + else
>>>> + ret = drm_info->front_ops->mode_set(pipeline,
>>>> + 0, 0, 0, 0, 0,
>>>> + xen_drm_front_fb_to_cookie(NULL));
>>> This is a bit much layering, the if (fb) case corresponds to the
>>> display_enable/disable hooks, pls fold that in instead of the indirection.
>>> simple helpers guarantee that when the display is on, then you have an fb.
>> 1. Ok, the only reason for having this function was to keep
>> front_ops->mode_set calls at one place (will be refactored
>> to be a direct call, not via front_ops).
>> 2. The if (fb) check was meant not to check if simple helpers
>> may give us some wrong value when we do not expect: there is
>> nothing wrong with them. The check was for 2 cases when this
>> function was called: with fb != NULL on display enable and
>> with fb == NULL on display disable, e.g. fb was used as a
>> flag in this check.
> Yeah that's what I meant - it is needlessly confusing: You get 2 explicit
> enable/disable callbacks, then you squash them into 1 function call, only
> to require an
>
> if (do_I_need_to_enable_or_disable) {
> /* code that really should be directly put in the enable callback */
> } else {
>    /* code that really should be directly put in the disable callback */
> }
>
> Just a bit of indirection where I didn't see the point.
>
> Aside for why this matters: When refactoring the entire subsystem you need
> to be able to quickly understand how all the drivers work in a specific
> case, without being an expert on that driver. If there's very little
> indirection between the shared drm concepts/structs/callbacks and the
> actual driver code, then that's easy. If there's a bunch of callback
> layers or indirections like the above, you make subsystem refactoring
> harder for no reason. And in upstream we optimize for the overall
> subsystem, not individual drivers.
ok, does make sense, will rework without yet another function
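For the record, the folded version could look roughly like the sketch below. This is not compiled code; xen_drm_front_mode_set() stands in for the future direct call that replaces front_ops->mode_set:

```c
/* Rough sketch of the rework discussed above: the if (fb) branches are
 * folded directly into the enable/disable hooks; simple helpers
 * guarantee a valid fb while the display is on.
 */
static void display_enable(struct drm_simple_display_pipe *pipe,
		struct drm_crtc_state *crtc_state)
{
	struct xen_drm_front_drm_pipeline *pipeline =
			to_xen_drm_pipeline(pipe);
	struct drm_crtc *crtc = &pipe->crtc;
	struct drm_framebuffer *fb = pipe->plane.state->fb;
	int ret;

	ret = xen_drm_front_mode_set(pipeline, crtc->x, crtc->y,
			fb->width, fb->height, fb->format->cpp[0] * 8,
			xen_drm_front_fb_to_cookie(fb));
	if (ret)
		DRM_ERROR("Failed to enable display: %d\n", ret);
}

static void display_disable(struct drm_simple_display_pipe *pipe)
{
	struct xen_drm_front_drm_pipeline *pipeline =
			to_xen_drm_pipeline(pipe);
	int ret;

	ret = xen_drm_front_mode_set(pipeline, 0, 0, 0, 0, 0,
			xen_drm_front_fb_to_cookie(NULL));
	if (ret)
		DRM_ERROR("Failed to disable display: %d\n", ret);
}
```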
>> 3. I will remove this function at all and will make direct calls
>> to the backend on .display_{enable|disable}
>>> Maybe we need to fix the docs, pls check and if that's not clear, submit a
>>> kernel-doc patch for the simple pipe helpers.
>> no, nothing wrong here, just see my reasoning above
>>>> +
>>>> + if (ret)
>>>> + DRM_ERROR("Failed to set mode to back: %d\n", ret);
>>>> +
>>>> + return ret;
>>>> +}
>>>> +
>>>> +static void display_enable(struct drm_simple_display_pipe *pipe,
>>>> + struct drm_crtc_state *crtc_state)
>>>> +{
>>>> + struct drm_crtc *crtc = &pipe->crtc;
>>>> + struct drm_framebuffer *fb = pipe->plane.state->fb;
>>>> +
>>>> + if (display_set_config(pipe, fb) == 0)
>>>> + drm_crtc_vblank_on(crtc);
>>> I get the impression your driver doesn't support vblanks (the page flip
>>> code at least looks like it's only generating a single event),
>> yes, this is true
>>> you also
>>> don't have a enable/disable_vblank implementation.
>> this is because with my previous patches [1] these are now handled
>> by simple helpers, so no need to provide dummy ones in the driver
>>> If there's no vblank
>>> handling then this shouldn't be needed.
>> yes, I will rework the code, please see below
>>>> + else
>>>> + DRM_ERROR("Failed to enable display\n");
>>>> +}
>>>> +
>>>> +static void display_disable(struct drm_simple_display_pipe *pipe)
>>>> +{
>>>> + struct drm_crtc *crtc = &pipe->crtc;
>>>> +
>>>> + display_set_config(pipe, NULL);
>>>> + drm_crtc_vblank_off(crtc);
>>>> + /* final check for stalled events */
>>>> + if (crtc->state->event && !crtc->state->active) {
>>>> + unsigned long flags;
>>>> +
>>>> + spin_lock_irqsave(&crtc->dev->event_lock, flags);
>>>> + drm_crtc_send_vblank_event(crtc, crtc->state->event);
>>>> + spin_unlock_irqrestore(&crtc->dev->event_lock, flags);
>>>> + crtc->state->event = NULL;
>>>> + }
>>>> +}
>>>> +
>>>> +void xen_drm_front_kms_on_frame_done(
>>>> + struct xen_drm_front_drm_pipeline *pipeline,
>>>> + uint64_t fb_cookie)
>>>> +{
>>>> + drm_crtc_handle_vblank(&pipeline->pipe.crtc);
>>> Hm, again this doesn't look like real vblank, but only a page-flip done
>>> event. If that's correct then please don't use the vblank machinery, but
>>> just store the event internally (protected with your own private spinlock)
>> Why can't I use &dev->event_lock? Anyways for handling
>> page-flip events I will need to lock on it, so I can do
>> drm_crtc_send_vblank_event?
> Yeah you can reuse the event_lock too, that's what many drivers do.
I was just clarifying the need for my own private lock ;)
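Putting the pieces of the plan together, a rough, uncompiled sketch could look like this. Note that `pending_event` is an assumed new field in struct xen_drm_front_drm_pipeline, and dev->event_lock is reused as discussed:

```c
/* Sketch of the planned rework: no vblank machinery at all. Store the
 * pending event on .update and complete it from the backend's
 * frame-done notification via drm_crtc_send_vblank_event().
 */
static void display_update(struct drm_simple_display_pipe *pipe,
		struct drm_plane_state *old_plane_state)
{
	struct xen_drm_front_drm_pipeline *pipeline =
			to_xen_drm_pipeline(pipe);
	struct drm_crtc *crtc = &pipe->crtc;
	struct drm_device *dev = crtc->dev;
	unsigned long flags;

	spin_lock_irqsave(&dev->event_lock, flags);
	if (crtc->state->event) {
		pipeline->pending_event = crtc->state->event;
		crtc->state->event = NULL;
	}
	spin_unlock_irqrestore(&dev->event_lock, flags);
}

void xen_drm_front_kms_on_frame_done(
		struct xen_drm_front_drm_pipeline *pipeline,
		uint64_t fb_cookie)
{
	struct drm_crtc *crtc = &pipeline->pipe.crtc;
	unsigned long flags;

	spin_lock_irqsave(&crtc->dev->event_lock, flags);
	if (pipeline->pending_event)
		drm_crtc_send_vblank_event(crtc, pipeline->pending_event);
	pipeline->pending_event = NULL;
	spin_unlock_irqrestore(&crtc->dev->event_lock, flags);
}
```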
>>> and send it out using drm_crtc_send_vblank_event directly. No calls to
>>> arm_vblank_event or any of the other vblank infrastructure should be
>>> needed.
>> will re-work, e.g. will store drm_pending_vblank_event
>> on .display_update and send out on page flip event from the
>> backend
>>> Also please remove the drm_vblank_init() call, since your hw doesn't
>>> really have vblanks. And exposing vblanks to userspace without
>>> implementing them is confusing.
>> will remove vblank handling altogether with the re-work above
>>>> +}
>>>> +
>>>> +static void display_send_page_flip(struct drm_simple_display_pipe *pipe,
>>>> + struct drm_plane_state *old_plane_state)
>>>> +{
>>>> + struct drm_plane_state *plane_state = drm_atomic_get_new_plane_state(
>>>> + old_plane_state->state, &pipe->plane);
>>>> +
>>>> + /*
>>>> + * If old_plane_state->fb is NULL and plane_state->fb is not,
>>>> + * then this is an atomic commit which will enable display.
>>>> + * If old_plane_state->fb is not NULL and plane_state->fb is,
>>>> + * then this is an atomic commit which will disable display.
>>>> + * Ignore these and do not send page flip as this framebuffer will be
>>>> + * sent to the backend as a part of display_set_config call.
>>>> + */
>>>> + if (old_plane_state->fb && plane_state->fb) {
>>>> + struct xen_drm_front_drm_pipeline *pipeline =
>>>> + to_xen_drm_pipeline(pipe);
>>>> + struct xen_drm_front_drm_info *drm_info = pipeline->drm_info;
>>>> + int ret;
>>>> +
>>>> + ret = drm_info->front_ops->page_flip(drm_info->front_info,
>>>> + pipeline->index,
>>>> + xen_drm_front_fb_to_cookie(plane_state->fb));
>>>> + pipeline->pgflip_last_error = ret;
>>>> + if (ret) {
>>>> + DRM_ERROR("Failed to send page flip request to backend: %d\n", ret);
>>>> + /*
>>>> + * As we are at commit stage the DRM core will anyways
>>>> + * wait for the vblank and knows nothing about our
>>>> + * failure. The best we can do is to handle
>>>> + * vblank now, so there is no vblank/flip_done
>>>> + * time outs
>>>> + */
>>>> + drm_crtc_handle_vblank(&pipeline->pipe.crtc);
>>>> + }
>>>> + }
>>>> +}
>>>> +
>>>> +static int display_prepare_fb(struct drm_simple_display_pipe *pipe,
>>>> + struct drm_plane_state *plane_state)
>>>> +{
>>>> + struct xen_drm_front_drm_pipeline *pipeline =
>>>> + to_xen_drm_pipeline(pipe);
>>>> +
>>>> + if (pipeline->pgflip_last_error) {
>>>> + int ret;
>>>> +
>>>> + /* if previous page flip didn't succeed then report the error */
>>>> + ret = pipeline->pgflip_last_error;
>>>> + /* and let us try to page flip next time */
>>>> + pipeline->pgflip_last_error = 0;
>>>> + return ret;
>>>> + }
>>> Nope, this isn't how the uapi works. If your flips fail then we might need
>>> to add some error status thing to the drm events, but you can't make the
>>> next flip fail.
>> Well, yes, there is no way for me to tell that the page flip
>> has failed, so this is why I tried to do this workaround with
>> the next page-flip. The reason for that is that if, for example,
>> we are disconnected from the backend for some reason, there is
>> no way for me to tell the user-space that hey, please, do not
>> send any other page flips. If the backend can recover and that was
>> a one-time error then yes, the code I have will do the wrong thing
>> (fail the current page flip), but if the error state is persistent
>> then I will be able to tell user-space to stop by returning errors.
>> This is a trade-off which I am not sure how to solve correctly.
>>
>> Do you think I can remove this workaround completely?
> Yes. If you want to tell userspace that the backend is gone, send a
> hotplug uevent and update the connector status to disconnected. Hotplug
> uevents is how we tell userspace about asynchronous changes. We also have
> special stuff to signal display cable issue that might require picking a
> lower resolution (DP link training) and when HDCP encryption failed.
Ah, then I'll need to plumb in connector hotplug machinery.
Will add that so I can report errors
> Sending back random errors on pageflips just confuses the compositor, and
> all correctly working compositors will listen to hotplug events and
> reprobe all the outputs and change the configuration if necessary.
Good point, thank you
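The hotplug plumbing mentioned above could be sketched roughly as follows. This is not compiled; `connected` is an assumed per-pipeline flag and the function names are placeholders:

```c
/* Sketch: report backend loss to user-space via connector status plus a
 * hotplug uevent, instead of failing subsequent page flips.
 */
static void backend_gone(struct xen_drm_front_drm_info *drm_info)
{
	int i;

	for (i = 0; i < drm_info->cfg->num_connectors; i++)
		drm_info->pipeline[i].connected = false;

	/* compositors listening for hotplug will reprobe all outputs */
	drm_kms_helper_hotplug_event(drm_info->drm_dev);
}

/* the connector's .detect hook then reports the stored state */
static enum drm_connector_status connector_detect(
		struct drm_connector *connector, bool force)
{
	struct xen_drm_front_drm_pipeline *pipeline =
		container_of(connector, struct xen_drm_front_drm_pipeline,
				conn);

	return pipeline->connected ? connector_status_connected :
			connector_status_disconnected;
}
```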
> -Daniel
>
>>> -Daniel
>>>
>>>> + return drm_gem_fb_prepare_fb(&pipe->plane, plane_state);
>>>> +}
>>>> +
>>>> +static void display_update(struct drm_simple_display_pipe *pipe,
>>>> + struct drm_plane_state *old_plane_state)
>>>> +{
>>>> + struct drm_crtc *crtc = &pipe->crtc;
>>>> + struct drm_pending_vblank_event *event;
>>>> +
>>>> + event = crtc->state->event;
>>>> + if (event) {
>>>> + struct drm_device *dev = crtc->dev;
>>>> + unsigned long flags;
>>>> +
>>>> + crtc->state->event = NULL;
>>>> +
>>>> + spin_lock_irqsave(&dev->event_lock, flags);
>>>> + if (drm_crtc_vblank_get(crtc) == 0)
>>>> + drm_crtc_arm_vblank_event(crtc, event);
>>>> + else
>>>> + drm_crtc_send_vblank_event(crtc, event);
>>>> + spin_unlock_irqrestore(&dev->event_lock, flags);
>>>> + }
>>>> + /*
>>>> + * Send page flip request to the backend *after* we have event armed/
>>>> + * sent above, so on page flip done event from the backend we can
>>>> + * deliver it while handling vblank.
>>>> + */
>>>> + display_send_page_flip(pipe, old_plane_state);
>>>> +}
>>>> +
>>>> +static const struct drm_simple_display_pipe_funcs display_funcs = {
>>>> + .enable = display_enable,
>>>> + .disable = display_disable,
>>>> + .prepare_fb = display_prepare_fb,
>>>> + .update = display_update,
>>>> +};
>>>> +
>>>> +static int display_pipe_init(struct xen_drm_front_drm_info *drm_info,
>>>> + int index, struct xen_drm_front_cfg_connector *cfg,
>>>> + struct xen_drm_front_drm_pipeline *pipeline)
>>>> +{
>>>> + struct drm_device *dev = drm_info->drm_dev;
>>>> + const uint32_t *formats;
>>>> + int format_count;
>>>> + int ret;
>>>> +
>>>> + pipeline->drm_info = drm_info;
>>>> + pipeline->index = index;
>>>> + pipeline->height = cfg->height;
>>>> + pipeline->width = cfg->width;
>>>> +
>>>> + ret = xen_drm_front_conn_init(drm_info, &pipeline->conn);
>>>> + if (ret)
>>>> + return ret;
>>>> +
>>>> + formats = xen_drm_front_conn_get_formats(&format_count);
>>>> +
>>>> + return drm_simple_display_pipe_init(dev, &pipeline->pipe,
>>>> + &display_funcs, formats, format_count,
>>>> + NULL, &pipeline->conn);
>>>> +}
>>>> +
>>>> +int xen_drm_front_kms_init(struct xen_drm_front_drm_info *drm_info)
>>>> +{
>>>> + struct drm_device *dev = drm_info->drm_dev;
>>>> + int i, ret;
>>>> +
>>>> + drm_mode_config_init(dev);
>>>> +
>>>> + dev->mode_config.min_width = 0;
>>>> + dev->mode_config.min_height = 0;
>>>> + dev->mode_config.max_width = 4095;
>>>> + dev->mode_config.max_height = 2047;
>>>> + dev->mode_config.funcs = &mode_config_funcs;
>>>> +
>>>> + for (i = 0; i < drm_info->cfg->num_connectors; i++) {
>>>> + struct xen_drm_front_cfg_connector *cfg =
>>>> + &drm_info->cfg->connectors[i];
>>>> + struct xen_drm_front_drm_pipeline *pipeline =
>>>> + &drm_info->pipeline[i];
>>>> +
>>>> + ret = display_pipe_init(drm_info, i, cfg, pipeline);
>>>> + if (ret) {
>>>> + drm_mode_config_cleanup(dev);
>>>> + return ret;
>>>> + }
>>>> + }
>>>> +
>>>> + drm_mode_config_reset(dev);
>>>> + return 0;
>>>> +}
>>>> diff --git a/drivers/gpu/drm/xen/xen_drm_front_kms.h b/drivers/gpu/drm/xen/xen_drm_front_kms.h
>>>> new file mode 100644
>>>> index 000000000000..65a50033bb9b
>>>> --- /dev/null
>>>> +++ b/drivers/gpu/drm/xen/xen_drm_front_kms.h
>>>> @@ -0,0 +1,30 @@
>>>> +/*
>>>> + * Xen para-virtual DRM device
>>>> + *
>>>> + * This program is free software; you can redistribute it and/or modify
>>>> + * it under the terms of the GNU General Public License as published by
>>>> + * the Free Software Foundation; either version 2 of the License, or
>>>> + * (at your option) any later version.
>>>> + *
>>>> + * This program is distributed in the hope that it will be useful,
>>>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>>>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
>>>> + * GNU General Public License for more details.
>>>> + *
>>>> + * Copyright (C) 2016-2018 EPAM Systems Inc.
>>>> + *
>>>> + * Author: Oleksandr Andrushchenko <[email protected]>
>>>> + */
>>>> +
>>>> +#ifndef __XEN_DRM_FRONT_KMS_H_
>>>> +#define __XEN_DRM_FRONT_KMS_H_
>>>> +
>>>> +#include "xen_drm_front_drv.h"
>>>> +
>>>> +int xen_drm_front_kms_init(struct xen_drm_front_drm_info *drm_info);
>>>> +
>>>> +void xen_drm_front_kms_on_frame_done(
>>>> + struct xen_drm_front_drm_pipeline *pipeline,
>>>> + uint64_t fb_cookie);
>>>> +
>>>> +#endif /* __XEN_DRM_FRONT_KMS_H_ */
>>>> --
>>>> 2.7.4
>>>>
>>>> _______________________________________________
>>>> dri-devel mailing list
>>>> [email protected]
>>>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
>> [1] https://patchwork.kernel.org/patch/10211997/
On 03/06/2018 09:26 AM, Daniel Vetter wrote:
> On Mon, Mar 05, 2018 at 03:46:07PM +0200, Oleksandr Andrushchenko wrote:
>> On 03/05/2018 11:32 AM, Daniel Vetter wrote:
>>> On Wed, Feb 21, 2018 at 10:03:41AM +0200, Oleksandr Andrushchenko wrote:
>>>> From: Oleksandr Andrushchenko <[email protected]>
>>>>
>>>> Implement GEM handling depending on driver mode of operation:
>>>> depending on the requirements for the para-virtualized environment, namely
>>>> requirements dictated by the accompanying DRM/(v)GPU drivers running in both
>>>> host and guest environments, a number of operating modes of the
>>>> para-virtualized display driver are supported:
>>>> - display buffers can be allocated by either frontend driver or backend
>>>> - display buffers can be allocated to be contiguous in memory or not
>>>>
>>>> Note! Frontend driver itself has no dependency on contiguous memory for
>>>> its operation.
>>>>
>>>> 1. Buffers allocated by the frontend driver.
>>>>
>>>> The below modes of operation are configured at compile-time via
>>>> frontend driver's kernel configuration.
>>>>
>>>> 1.1. Front driver configured to use GEM CMA helpers
>>>> This use-case is useful when the accompanying DRM/vGPU driver in the
>>>> guest domain was designed to only work with contiguous buffers,
>>>> e.g. DRM driver based on GEM CMA helpers: such drivers can only import
>>>> contiguous PRIME buffers, thus requiring frontend driver to provide
>>>> such. In order to implement this mode of operation para-virtualized
>>>> frontend driver can be configured to use GEM CMA helpers.
>>>>
>>>> 1.2. Front driver doesn't use GEM CMA
>>>> If accompanying drivers can cope with non-contiguous memory then, to
>>>> lower pressure on CMA subsystem of the kernel, driver can allocate
>>>> buffers from system memory.
>>>>
>>>> Note! If used with accompanying DRM/(v)GPU drivers this mode of operation
>>>> may require IOMMU support on the platform, so accompanying DRM/vGPU
>>>> hardware can still reach display buffer memory while importing PRIME
>>>> buffers from the frontend driver.
>>>>
>>>> 2. Buffers allocated by the backend
>>>>
>>>> This mode of operation is run-time configured via guest domain configuration
>>>> through XenStore entries.
>>>>
>>>> For systems which do not provide IOMMU support but have specific
>>>> requirements for display buffers, it is possible to allocate such buffers
>>>> at the backend side and share those with the frontend.
>>>> For example, if host domain is 1:1 mapped and has DRM/GPU hardware expecting
>>>> physically contiguous memory, this allows implementing zero-copying
>>>> use-cases.
>>>>
>>>> Note! Configuration options 1.1 (contiguous display buffers) and 2 (backend
>>>> allocated buffers) are not supported at the same time.
>>>>
>>>> Signed-off-by: Oleksandr Andrushchenko <[email protected]>
>>> Some suggestions below for some larger cleanup work.
>>> -Daniel
>>>
>>>> ---
>>>> drivers/gpu/drm/xen/Kconfig | 13 +
>>>> drivers/gpu/drm/xen/Makefile | 6 +
>>>> drivers/gpu/drm/xen/xen_drm_front.h | 74 ++++++
>>>> drivers/gpu/drm/xen/xen_drm_front_drv.c | 80 ++++++-
>>>> drivers/gpu/drm/xen/xen_drm_front_drv.h | 1 +
>>>> drivers/gpu/drm/xen/xen_drm_front_gem.c | 360 ++++++++++++++++++++++++++++
>>>> drivers/gpu/drm/xen/xen_drm_front_gem.h | 46 ++++
>>>> drivers/gpu/drm/xen/xen_drm_front_gem_cma.c | 93 +++++++
>>>> 8 files changed, 667 insertions(+), 6 deletions(-)
>>>> create mode 100644 drivers/gpu/drm/xen/xen_drm_front_gem.c
>>>> create mode 100644 drivers/gpu/drm/xen/xen_drm_front_gem.h
>>>> create mode 100644 drivers/gpu/drm/xen/xen_drm_front_gem_cma.c
>>>>
>>>> diff --git a/drivers/gpu/drm/xen/Kconfig b/drivers/gpu/drm/xen/Kconfig
>>>> index 4cca160782ab..4f4abc91f3b6 100644
>>>> --- a/drivers/gpu/drm/xen/Kconfig
>>>> +++ b/drivers/gpu/drm/xen/Kconfig
>>>> @@ -15,3 +15,16 @@ config DRM_XEN_FRONTEND
>>>> help
>>>> Choose this option if you want to enable a para-virtualized
>>>> frontend DRM/KMS driver for Xen guest OSes.
>>>> +
>>>> +config DRM_XEN_FRONTEND_CMA
>>>> + bool "Use DRM CMA to allocate dumb buffers"
>>>> + depends on DRM_XEN_FRONTEND
>>>> + select DRM_KMS_CMA_HELPER
>>>> + select DRM_GEM_CMA_HELPER
>>>> + help
>>>> + Use DRM CMA helpers to allocate display buffers.
>>>> + This is useful for the use-cases when guest driver needs to
>>>> + share or export buffers to other drivers which only expect
>>>> + contiguous buffers.
>>>> + Note: in this mode driver cannot use buffers allocated
>>>> + by the backend.
>>>> diff --git a/drivers/gpu/drm/xen/Makefile b/drivers/gpu/drm/xen/Makefile
>>>> index 4fcb0da1a9c5..12376ec78fbc 100644
>>>> --- a/drivers/gpu/drm/xen/Makefile
>>>> +++ b/drivers/gpu/drm/xen/Makefile
>>>> @@ -8,4 +8,10 @@ drm_xen_front-objs := xen_drm_front.o \
>>>> xen_drm_front_shbuf.o \
>>>> xen_drm_front_cfg.o
>>>> +ifeq ($(CONFIG_DRM_XEN_FRONTEND_CMA),y)
>>>> + drm_xen_front-objs += xen_drm_front_gem_cma.o
>>>> +else
>>>> + drm_xen_front-objs += xen_drm_front_gem.o
>>>> +endif
>>>> +
>>>> obj-$(CONFIG_DRM_XEN_FRONTEND) += drm_xen_front.o
>>>> diff --git a/drivers/gpu/drm/xen/xen_drm_front.h b/drivers/gpu/drm/xen/xen_drm_front.h
>>>> index 9ed5bfb248d0..c6f52c892434 100644
>>>> --- a/drivers/gpu/drm/xen/xen_drm_front.h
>>>> +++ b/drivers/gpu/drm/xen/xen_drm_front.h
>>>> @@ -34,6 +34,80 @@
>>>> struct xen_drm_front_drm_pipeline;
>>>> +/*
>>>> + *******************************************************************************
>>>> + * Para-virtualized DRM/KMS frontend driver
>>>> + *******************************************************************************
>>>> + * This frontend driver implements Xen para-virtualized display
>>>> + * according to the display protocol described at
>>>> + * include/xen/interface/io/displif.h
>>>> + *
>>>> + *******************************************************************************
>>>> + * Driver modes of operation in terms of display buffers used
>>>> + *******************************************************************************
>>>> + * Depending on the requirements for the para-virtualized environment, namely
>>>> + * requirements dictated by the accompanying DRM/(v)GPU drivers running in both
>>>> + * host and guest environments, a number of operating modes of the
>>>> + * para-virtualized display driver are supported:
>>>> + * - display buffers can be allocated by either frontend driver or backend
>>>> + * - display buffers can be allocated to be contiguous in memory or not
>>>> + *
>>>> + * Note! Frontend driver itself has no dependency on contiguous memory for
>>>> + * its operation.
>>>> + *
>>>> + *******************************************************************************
>>>> + * 1. Buffers allocated by the frontend driver.
>>>> + *******************************************************************************
>>>> + *
>>>> + * The below modes of operation are configured at compile-time via
>>>> + * frontend driver's kernel configuration.
>>>> + *
>>>> + * 1.1. Front driver configured to use GEM CMA helpers
>>>> + * This use-case is useful when the accompanying DRM/vGPU driver in the
>>>> + * guest domain was designed to only work with contiguous buffers,
>>>> + * e.g. DRM driver based on GEM CMA helpers: such drivers can only import
>>>> + * contiguous PRIME buffers, thus requiring frontend driver to provide
>>>> + * such. In order to implement this mode of operation para-virtualized
>>>> + * frontend driver can be configured to use GEM CMA helpers.
>>>> + *
>>>> + * 1.2. Front driver doesn't use GEM CMA
>>>> + * If accompanying drivers can cope with non-contiguous memory then, to
>>>> + * lower pressure on CMA subsystem of the kernel, driver can allocate
>>>> + * buffers from system memory.
>>>> + *
>>>> + * Note! If used with accompanying DRM/(v)GPU drivers this mode of operation
>>>> + * may require IOMMU support on the platform, so accompanying DRM/vGPU
>>>> + * hardware can still reach display buffer memory while importing PRIME
>>>> + * buffers from the frontend driver.
>>>> + *
>>>> + *******************************************************************************
>>>> + * 2. Buffers allocated by the backend
>>>> + *******************************************************************************
>>>> + *
>>>> + * This mode of operation is run-time configured via guest domain configuration
>>>> + * through XenStore entries.
>>>> + *
>>>> + * For systems which do not provide IOMMU support but have specific
>>>> + * requirements for display buffers, it is possible to allocate such buffers
>>>> + * at the backend side and share those with the frontend.
>>>> + * For example, if host domain is 1:1 mapped and has DRM/GPU hardware expecting
>>>> + * physically contiguous memory, this allows implementing zero-copying
>>>> + * use-cases.
>>>> + *
>>>> + *******************************************************************************
>>>> + * Driver limitations
>>>> + *******************************************************************************
>>>> + * 1. Configuration options 1.1 (contiguous display buffers) and 2 (backend
>>>> + * allocated buffers) are not supported at the same time.
>>>> + *
>>>> + * 2. Only primary plane without additional properties is supported.
>>>> + *
>>>> + * 3. Only one video mode is supported, and it is configured via XenStore.
>>>> + *
>>>> + * 4. All CRTCs operate at fixed frequency of 60Hz.
>>>> + *
>>>> + ******************************************************************************/
>>> Since you've typed this all up, pls convert it to kernel-doc and pull it
>>> into a xen-front.rst driver section in Documentation/gpu/ There's a few
>>> examples for i915 and vc4 already.
>> Do you mean to move or to keep in the driver and add in the
>> Documentation? I would prefer to move to have the description
>> at single place.
> Keep it where it is, but reformat as a correct kerneldoc (it's RST format)
> and pull it in as a DOC: section. See
>
> https://dri.freedesktop.org/docs/drm/doc-guide/kernel-doc.html
>
> and the other sections in that chapter.
Thank you, I already tried playing with that and I see
that it is way easier to move the description to an rst file
than to keep it in the header: for a description like the one
I have in the header it is not easy to format the text properly.
For example, if I had those small sections in different files,
then that probably would have worked fine.
So, I will move to xen-front.rst under Documentation/gpu
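As a sketch of what that could look like (wording and file paths are illustrative only), a DOC: section in the header:

```c
/**
 * DOC: Driver modes of operation in terms of display buffers used
 *
 * Depending on the requirements for the para-virtualized environment,
 * a number of operating modes of the para-virtualized display driver
 * are supported: display buffers can be allocated by either the
 * frontend driver or the backend, contiguous in memory or not.
 */
```

which Documentation/gpu/xen-front.rst would then pull in with a `.. kernel-doc:: drivers/gpu/drm/xen/xen_drm_front.h` directive carrying `:doc: Driver modes of operation in terms of display buffers used`.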
>>>> +
>>>> struct xen_drm_front_ops {
>>>> int (*mode_set)(struct xen_drm_front_drm_pipeline *pipeline,
>>>> uint32_t x, uint32_t y, uint32_t width, uint32_t height,
>>>> diff --git a/drivers/gpu/drm/xen/xen_drm_front_drv.c b/drivers/gpu/drm/xen/xen_drm_front_drv.c
>>>> index e8862d26ba27..35e7e9cda9d1 100644
>>>> --- a/drivers/gpu/drm/xen/xen_drm_front_drv.c
>>>> +++ b/drivers/gpu/drm/xen/xen_drm_front_drv.c
>>>> @@ -23,12 +23,58 @@
>>>> #include "xen_drm_front.h"
>>>> #include "xen_drm_front_cfg.h"
>>>> #include "xen_drm_front_drv.h"
>>>> +#include "xen_drm_front_gem.h"
>>>> #include "xen_drm_front_kms.h"
>>>> static int dumb_create(struct drm_file *filp,
>>>> struct drm_device *dev, struct drm_mode_create_dumb *args)
>>>> {
>>>> - return -EINVAL;
>>>> + struct xen_drm_front_drm_info *drm_info = dev->dev_private;
>>>> + struct drm_gem_object *obj;
>>>> + int ret;
>>>> +
>>>> + ret = drm_info->gem_ops->dumb_create(filp, dev, args);
>>>> + if (ret)
>>>> + goto fail;
>>>> +
>>>> + obj = drm_gem_object_lookup(filp, args->handle);
>>>> + if (!obj) {
>>>> + ret = -ENOENT;
>>>> + goto fail_destroy;
>>>> + }
>>>> +
>>>> + drm_gem_object_unreference_unlocked(obj);
>>>> +
>>>> + /*
>>>> + * In case of CONFIG_DRM_XEN_FRONTEND_CMA gem_obj is constructed
>>>> + * via DRM CMA helpers and doesn't have ->pages allocated
>>>> + * (xendrm_gem_get_pages will return NULL), but instead can provide
>>>> + * sg table
>>>> + */
>>> My recommendation is to use an sg table for everything if you deal with
>>> mixed objects (CMA, special blocks 1:1 mapped from host, normal pages).
>>> That avoids the constant get_pages vs. get_sgt differences. For examples
>>> see how e.g. i915 handles the various gem object backends.
>> Indeed, I tried to do that this way before, e.g. have everything sgt based.
>> But at the end of the day the Xen shared buffer code in the driver works
>> with pages (the Xen API is page based there), so the sgt would anyway need
>> to be converted into a page array.
>> For that reason I prefer to work with pages from the beginning, not sgt.
>> As to constant get_pages etc. - this is the only expected place in the
>> driver for that, so the _from_sgt/_from_pages API is only used here.
> Yeah was just a suggestion to simplify the code. But if you have to deal
> with both, there's not much point.
Agreed
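For reference, the sgt-to-pages conversion at that single boundary could be sketched like this (not compiled; the helper name `dbuf_pages_from_sgt` is made up, but drm_prime_sg_to_page_addr_arrays() is the DRM helper available at the time of this series):

```c
/* Sketch: when a buffer arrives as an sg table (e.g. from the CMA
 * helpers), convert it once into a page array so the page-based Xen
 * shared-buffer code can grant-map it.
 */
static int dbuf_pages_from_sgt(struct sg_table *sgt, size_t size,
		struct page ***out_pages)
{
	int nr_pages = DIV_ROUND_UP(size, PAGE_SIZE);
	struct page **pages;
	int ret;

	pages = kvmalloc_array(nr_pages, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return -ENOMEM;

	ret = drm_prime_sg_to_page_addr_arrays(sgt, pages, NULL, nr_pages);
	if (ret) {
		kvfree(pages);
		return ret;
	}

	*out_pages = pages;
	return 0;
}
```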
>>>> + if (drm_info->gem_ops->get_pages(obj))
>>>> + ret = drm_info->front_ops->dbuf_create_from_pages(
>>>> + drm_info->front_info,
>>>> + xen_drm_front_dbuf_to_cookie(obj),
>>>> + args->width, args->height, args->bpp,
>>>> + args->size,
>>>> + drm_info->gem_ops->get_pages(obj));
>>>> + else
>>>> + ret = drm_info->front_ops->dbuf_create_from_sgt(
>>>> + drm_info->front_info,
>>>> + xen_drm_front_dbuf_to_cookie(obj),
>>>> + args->width, args->height, args->bpp,
>>>> + args->size,
>>>> + drm_info->gem_ops->prime_get_sg_table(obj));
>>>> + if (ret)
>>>> + goto fail_destroy;
>>>> +
>>>> + return 0;
>>>> +
>>>> +fail_destroy:
>>>> + drm_gem_dumb_destroy(filp, dev, args->handle);
>>>> +fail:
>>>> + DRM_ERROR("Failed to create dumb buffer: %d\n", ret);
>>>> + return ret;
>>>> }
>>>> static void free_object(struct drm_gem_object *obj)
>>>> @@ -37,6 +83,7 @@ static void free_object(struct drm_gem_object *obj)
>>>> drm_info->front_ops->dbuf_destroy(drm_info->front_info,
>>>> xen_drm_front_dbuf_to_cookie(obj));
>>>> + drm_info->gem_ops->free_object_unlocked(obj);
>>>> }
>>>> static void on_frame_done(struct platform_device *pdev,
>>>> @@ -60,32 +107,52 @@ static void lastclose(struct drm_device *dev)
>>>> static int gem_mmap(struct file *filp, struct vm_area_struct *vma)
>>>> {
>>>> - return -EINVAL;
>>>> + struct drm_file *file_priv = filp->private_data;
>>>> + struct drm_device *dev = file_priv->minor->dev;
>>>> + struct xen_drm_front_drm_info *drm_info = dev->dev_private;
>>>> +
>>>> + return drm_info->gem_ops->mmap(filp, vma);
>>> Uh, so 1 midlayer for the kms stuff and another midlayer for the gem
>>> stuff. That's way too much indirection.
>> If by KMS you mean front_ops then -1: I will remove front_ops.
>> As to gem_ops, please see below
>>>> }
>>>> static struct sg_table *prime_get_sg_table(struct drm_gem_object *obj)
>>>> {
>>>> - return NULL;
>>>> + struct xen_drm_front_drm_info *drm_info;
>>>> +
>>>> + drm_info = obj->dev->dev_private;
>>>> + return drm_info->gem_ops->prime_get_sg_table(obj);
>>>> }
>>>> static struct drm_gem_object *prime_import_sg_table(struct drm_device *dev,
>>>> struct dma_buf_attachment *attach, struct sg_table *sgt)
>>>> {
>>>> - return NULL;
>>>> + struct xen_drm_front_drm_info *drm_info;
>>>> +
>>>> + drm_info = dev->dev_private;
>>>> + return drm_info->gem_ops->prime_import_sg_table(dev, attach, sgt);
>>>> }
>>>> static void *prime_vmap(struct drm_gem_object *obj)
>>>> {
>>>> - return NULL;
>>>> + struct xen_drm_front_drm_info *drm_info;
>>>> +
>>>> + drm_info = obj->dev->dev_private;
>>>> + return drm_info->gem_ops->prime_vmap(obj);
>>>> }
>>>> static void prime_vunmap(struct drm_gem_object *obj, void *vaddr)
>>>> {
>>>> + struct xen_drm_front_drm_info *drm_info;
>>>> +
>>>> + drm_info = obj->dev->dev_private;
>>>> + drm_info->gem_ops->prime_vunmap(obj, vaddr);
>>>> }
>>>> static int prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
>>>> {
>>>> - return -EINVAL;
>>>> + struct xen_drm_front_drm_info *drm_info;
>>>> +
>>>> + drm_info = obj->dev->dev_private;
>>>> + return drm_info->gem_ops->prime_mmap(obj, vma);
>>>> }
>>>> static const struct file_operations xendrm_fops = {
>>>> @@ -147,6 +214,7 @@ int xen_drm_front_drv_probe(struct platform_device *pdev,
>>>> drm_info->front_ops = front_ops;
>>>> drm_info->front_ops->on_frame_done = on_frame_done;
>>>> + drm_info->gem_ops = xen_drm_front_gem_get_ops();
>>>> drm_info->front_info = cfg->front_info;
>>>> dev = drm_dev_alloc(&xen_drm_driver, &pdev->dev);
>>>> diff --git a/drivers/gpu/drm/xen/xen_drm_front_drv.h b/drivers/gpu/drm/xen/xen_drm_front_drv.h
>>>> index 563318b19f34..34228eb86255 100644
>>>> --- a/drivers/gpu/drm/xen/xen_drm_front_drv.h
>>>> +++ b/drivers/gpu/drm/xen/xen_drm_front_drv.h
>>>> @@ -43,6 +43,7 @@ struct xen_drm_front_drm_pipeline {
>>>> struct xen_drm_front_drm_info {
>>>> struct xen_drm_front_info *front_info;
>>>> struct xen_drm_front_ops *front_ops;
>>>> + const struct xen_drm_front_gem_ops *gem_ops;
>>>> struct drm_device *drm_dev;
>>>> struct xen_drm_front_cfg *cfg;
>>>> diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
>>>> new file mode 100644
>>>> index 000000000000..367e08f6a9ef
>>>> --- /dev/null
>>>> +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
>>>> @@ -0,0 +1,360 @@
>>>> +/*
>>>> + * Xen para-virtual DRM device
>>>> + *
>>>> + * This program is free software; you can redistribute it and/or modify
>>>> + * it under the terms of the GNU General Public License as published by
>>>> + * the Free Software Foundation; either version 2 of the License, or
>>>> + * (at your option) any later version.
>>>> + *
>>>> + * This program is distributed in the hope that it will be useful,
>>>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>>>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
>>>> + * GNU General Public License for more details.
>>>> + *
>>>> + * Copyright (C) 2016-2018 EPAM Systems Inc.
>>>> + *
>>>> + * Author: Oleksandr Andrushchenko <[email protected]>
>>>> + */
>>>> +
>>>> +#include "xen_drm_front_gem.h"
>>>> +
>>>> +#include <drm/drmP.h>
>>>> +#include <drm/drm_crtc_helper.h>
>>>> +#include <drm/drm_fb_helper.h>
>>>> +#include <drm/drm_gem.h>
>>>> +
>>>> +#include <linux/dma-buf.h>
>>>> +#include <linux/scatterlist.h>
>>>> +#include <linux/shmem_fs.h>
>>>> +
>>>> +#include <xen/balloon.h>
>>>> +
>>>> +#include "xen_drm_front.h"
>>>> +#include "xen_drm_front_drv.h"
>>>> +#include "xen_drm_front_shbuf.h"
>>>> +
>>>> +struct xen_gem_object {
>>>> + struct drm_gem_object base;
>>>> +
>>>> + size_t num_pages;
>>>> + struct page **pages;
>>>> +
>>>> + /* set for buffers allocated by the backend */
>>>> + bool be_alloc;
>>>> +
>>>> + /* this is for imported PRIME buffer */
>>>> + struct sg_table *sgt_imported;
>>>> +};
>>>> +
>>>> +static inline struct xen_gem_object *to_xen_gem_obj(
>>>> + struct drm_gem_object *gem_obj)
>>>> +{
>>>> + return container_of(gem_obj, struct xen_gem_object, base);
>>>> +}
>>>> +
>>>> +static int gem_alloc_pages_array(struct xen_gem_object *xen_obj,
>>>> + size_t buf_size)
>>>> +{
>>>> + xen_obj->num_pages = DIV_ROUND_UP(buf_size, PAGE_SIZE);
>>>> + xen_obj->pages = kvmalloc_array(xen_obj->num_pages,
>>>> + sizeof(struct page *), GFP_KERNEL);
>>>> + return xen_obj->pages == NULL ? -ENOMEM : 0;
>>>> +}
>>>> +
>>>> +static void gem_free_pages_array(struct xen_gem_object *xen_obj)
>>>> +{
>>>> + kvfree(xen_obj->pages);
>>>> + xen_obj->pages = NULL;
>>>> +}
>>>> +
>>>> +static struct xen_gem_object *gem_create_obj(struct drm_device *dev,
>>>> + size_t size)
>>>> +{
>>>> + struct xen_gem_object *xen_obj;
>>>> + int ret;
>>>> +
>>>> + xen_obj = kzalloc(sizeof(*xen_obj), GFP_KERNEL);
>>>> + if (!xen_obj)
>>>> + return ERR_PTR(-ENOMEM);
>>>> +
>>>> + ret = drm_gem_object_init(dev, &xen_obj->base, size);
>>>> + if (ret < 0) {
>>>> + kfree(xen_obj);
>>>> + return ERR_PTR(ret);
>>>> + }
>>>> +
>>>> + return xen_obj;
>>>> +}
>>>> +
>>>> +static struct xen_gem_object *gem_create(struct drm_device *dev, size_t size)
>>>> +{
>>>> + struct xen_drm_front_drm_info *drm_info = dev->dev_private;
>>>> + struct xen_gem_object *xen_obj;
>>>> + int ret;
>>>> +
>>>> + size = round_up(size, PAGE_SIZE);
>>>> + xen_obj = gem_create_obj(dev, size);
>>>> + if (IS_ERR_OR_NULL(xen_obj))
>>>> + return xen_obj;
>>>> +
>>>> + if (drm_info->cfg->be_alloc) {
>>>> + /*
>>>> + * backend will allocate space for this buffer, so
>>>> + * only allocate array of pointers to pages
>>>> + */
>>>> + xen_obj->be_alloc = true;
>>>> + ret = gem_alloc_pages_array(xen_obj, size);
>>>> + if (ret < 0) {
>>>> + gem_free_pages_array(xen_obj);
>>>> + goto fail;
>>>> + }
>>>> +
>>>> + ret = alloc_xenballooned_pages(xen_obj->num_pages,
>>>> + xen_obj->pages);
>>>> + if (ret < 0) {
>>>> + DRM_ERROR("Cannot allocate %zu ballooned pages: %d\n",
>>>> + xen_obj->num_pages, ret);
>>>> + goto fail;
>>>> + }
>>>> +
>>>> + return xen_obj;
>>>> + }
>>>> + /*
>>>> + * need to allocate backing pages now, so we can share those
>>>> + * with the backend
>>>> + */
>>>> + xen_obj->num_pages = DIV_ROUND_UP(size, PAGE_SIZE);
>>>> + xen_obj->pages = drm_gem_get_pages(&xen_obj->base);
>>>> + if (IS_ERR_OR_NULL(xen_obj->pages)) {
>>>> + ret = PTR_ERR(xen_obj->pages);
>>>> + xen_obj->pages = NULL;
>>>> + goto fail;
>>>> + }
>>>> +
>>>> + return xen_obj;
>>>> +
>>>> +fail:
>>>> + DRM_ERROR("Failed to allocate buffer with size %zu\n", size);
>>>> + return ERR_PTR(ret);
>>>> +}
>>>> +
>>>> +static struct xen_gem_object *gem_create_with_handle(struct drm_file *filp,
>>>> + struct drm_device *dev, size_t size, uint32_t *handle)
>>>> +{
>>>> + struct xen_gem_object *xen_obj;
>>>> + struct drm_gem_object *gem_obj;
>>>> + int ret;
>>>> +
>>>> + xen_obj = gem_create(dev, size);
>>>> + if (IS_ERR_OR_NULL(xen_obj))
>>>> + return xen_obj;
>>>> +
>>>> + gem_obj = &xen_obj->base;
>>>> + ret = drm_gem_handle_create(filp, gem_obj, handle);
>>>> + /* handle holds the reference */
>>>> + drm_gem_object_unreference_unlocked(gem_obj);
>>>> + if (ret < 0)
>>>> + return ERR_PTR(ret);
>>>> +
>>>> + return xen_obj;
>>>> +}
>>>> +
>>>> +static int gem_dumb_create(struct drm_file *filp, struct drm_device *dev,
>>>> + struct drm_mode_create_dumb *args)
>>>> +{
>>>> + struct xen_gem_object *xen_obj;
>>>> +
>>>> + args->pitch = DIV_ROUND_UP(args->width * args->bpp, 8);
>>>> + args->size = args->pitch * args->height;
>>>> +
>>>> + xen_obj = gem_create_with_handle(filp, dev, args->size, &args->handle);
>>>> + if (IS_ERR_OR_NULL(xen_obj))
>>>> + return xen_obj == NULL ? -ENOMEM : PTR_ERR(xen_obj);
>>>> +
>>>> + return 0;
>>>> +}
>>>> +
>>>> +static void gem_free_object(struct drm_gem_object *gem_obj)
>>>> +{
>>>> + struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
>>>> +
>>>> + if (xen_obj->base.import_attach) {
>>>> + drm_prime_gem_destroy(&xen_obj->base, xen_obj->sgt_imported);
>>>> + gem_free_pages_array(xen_obj);
>>>> + } else {
>>>> + if (xen_obj->pages) {
>>>> + if (xen_obj->be_alloc) {
>>>> + free_xenballooned_pages(xen_obj->num_pages,
>>>> + xen_obj->pages);
>>>> + gem_free_pages_array(xen_obj);
>>>> + } else
>>>> + drm_gem_put_pages(&xen_obj->base,
>>>> + xen_obj->pages, true, false);
>>>> + }
>>>> + }
>>>> + drm_gem_object_release(gem_obj);
>>>> + kfree(xen_obj);
>>>> +}
>>>> +
>>>> +static struct page **gem_get_pages(struct drm_gem_object *gem_obj)
>>>> +{
>>>> + struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
>>>> +
>>>> + return xen_obj->pages;
>>>> +}
>>>> +
>>>> +static struct sg_table *gem_get_sg_table(struct drm_gem_object *gem_obj)
>>>> +{
>>>> + struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
>>>> +
>>>> + if (!xen_obj->pages)
>>>> + return NULL;
>>>> +
>>>> + return drm_prime_pages_to_sg(xen_obj->pages, xen_obj->num_pages);
>>>> +}
>>>> +
>>>> +static struct drm_gem_object *gem_import_sg_table(struct drm_device *dev,
>>>> + struct dma_buf_attachment *attach, struct sg_table *sgt)
>>>> +{
>>>> + struct xen_drm_front_drm_info *drm_info = dev->dev_private;
>>>> + struct xen_gem_object *xen_obj;
>>>> + size_t size;
>>>> + int ret;
>>>> +
>>>> + size = attach->dmabuf->size;
>>>> + xen_obj = gem_create_obj(dev, size);
>>>> + if (IS_ERR_OR_NULL(xen_obj))
>>>> + return ERR_CAST(xen_obj);
>>>> +
>>>> + ret = gem_alloc_pages_array(xen_obj, size);
>>>> + if (ret < 0)
>>>> + return ERR_PTR(ret);
>>>> +
>>>> + xen_obj->sgt_imported = sgt;
>>>> +
>>>> + ret = drm_prime_sg_to_page_addr_arrays(sgt, xen_obj->pages,
>>>> + NULL, xen_obj->num_pages);
>>>> + if (ret < 0)
>>>> + return ERR_PTR(ret);
>>>> +
>>>> + /*
>>>> + * N.B. Although we have an API to create display buffer from sgt
>>>> + * we use pages API, because we still need those for GEM handling,
>>>> + * e.g. for mapping etc.
>>>> + */
>>>> + ret = drm_info->front_ops->dbuf_create_from_pages(
>>>> + drm_info->front_info,
>>>> + xen_drm_front_dbuf_to_cookie(&xen_obj->base),
>>>> + 0, 0, 0, size, xen_obj->pages);
>>>> + if (ret < 0)
>>>> + return ERR_PTR(ret);
>>>> +
>>>> + DRM_DEBUG("Imported buffer of size %zu with nents %u\n",
>>>> + size, sgt->nents);
>>>> +
>>>> + return &xen_obj->base;
>>>> +}
>>>> +
>>>> +static int gem_mmap_obj(struct xen_gem_object *xen_obj,
>>>> + struct vm_area_struct *vma)
>>>> +{
>>>> + unsigned long addr = vma->vm_start;
>>>> + int i;
>>>> +
>>>> + /*
>>>> + * clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
>>>> + * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
>>>> + * the whole buffer.
>>>> + */
>>>> + vma->vm_flags &= ~VM_PFNMAP;
>>>> + vma->vm_flags |= VM_MIXEDMAP;
>>>> + vma->vm_pgoff = 0;
>>>> + vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
>>>> +
>>>> +	/*
>>>> +	 * The vm_operations_struct.fault handler would normally be called on
>>>> +	 * CPU access to this VM area. For GPUs this isn't the case, because
>>>> +	 * the CPU doesn't touch the memory. Insert pages now, so both CPU
>>>> +	 * and GPU are happy.
>>>> +	 * FIXME: as we insert all the pages now, no .fault handler will
>>>> +	 * ever be called, so don't provide one
>>>> +	 */
>>>> + for (i = 0; i < xen_obj->num_pages; i++) {
>>>> + int ret;
>>>> +
>>>> + ret = vm_insert_page(vma, addr, xen_obj->pages[i]);
>>>> + if (ret < 0) {
>>>> + DRM_ERROR("Failed to insert pages into vma: %d\n", ret);
>>>> + return ret;
>>>> + }
>>>> +
>>>> + addr += PAGE_SIZE;
>>>> + }
>>>> + return 0;
>>>> +}
>>>> +
>>>> +static int gem_mmap(struct file *filp, struct vm_area_struct *vma)
>>>> +{
>>>> + struct xen_gem_object *xen_obj;
>>>> + struct drm_gem_object *gem_obj;
>>>> + int ret;
>>>> +
>>>> + ret = drm_gem_mmap(filp, vma);
>>>> + if (ret < 0)
>>>> + return ret;
>>>> +
>>>> + gem_obj = vma->vm_private_data;
>>>> + xen_obj = to_xen_gem_obj(gem_obj);
>>>> + return gem_mmap_obj(xen_obj, vma);
>>>> +}
>>>> +
>>>> +static void *gem_prime_vmap(struct drm_gem_object *gem_obj)
>>>> +{
>>>> + struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
>>>> +
>>>> + if (!xen_obj->pages)
>>>> + return NULL;
>>>> +
>>>> + return vmap(xen_obj->pages, xen_obj->num_pages,
>>>> + VM_MAP, pgprot_writecombine(PAGE_KERNEL));
>>>> +}
>>>> +
>>>> +static void gem_prime_vunmap(struct drm_gem_object *gem_obj, void *vaddr)
>>>> +{
>>>> + vunmap(vaddr);
>>>> +}
>>>> +
>>>> +static int gem_prime_mmap(struct drm_gem_object *gem_obj,
>>>> + struct vm_area_struct *vma)
>>>> +{
>>>> + struct xen_gem_object *xen_obj;
>>>> + int ret;
>>>> +
>>>> + ret = drm_gem_mmap_obj(gem_obj, gem_obj->size, vma);
>>>> + if (ret < 0)
>>>> + return ret;
>>>> +
>>>> + xen_obj = to_xen_gem_obj(gem_obj);
>>>> + return gem_mmap_obj(xen_obj, vma);
>>>> +}
>>>> +
>>>> +static const struct xen_drm_front_gem_ops xen_drm_gem_ops = {
>>>> + .free_object_unlocked = gem_free_object,
>>>> + .prime_get_sg_table = gem_get_sg_table,
>>>> + .prime_import_sg_table = gem_import_sg_table,
>>>> +
>>>> + .prime_vmap = gem_prime_vmap,
>>>> + .prime_vunmap = gem_prime_vunmap,
>>>> + .prime_mmap = gem_prime_mmap,
>>>> +
>>>> + .dumb_create = gem_dumb_create,
>>>> +
>>>> + .mmap = gem_mmap,
>>>> +
>>>> + .get_pages = gem_get_pages,
>>>> +};
>>>> +
>>>> +const struct xen_drm_front_gem_ops *xen_drm_front_gem_get_ops(void)
>>>> +{
>>>> + return &xen_drm_gem_ops;
>>>> +}
>>>> diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.h b/drivers/gpu/drm/xen/xen_drm_front_gem.h
>>>> new file mode 100644
>>>> index 000000000000..d1e1711cc3fc
>>>> --- /dev/null
>>>> +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.h
>>>> @@ -0,0 +1,46 @@
>>>> +/*
>>>> + * Xen para-virtual DRM device
>>>> + *
>>>> + * This program is free software; you can redistribute it and/or modify
>>>> + * it under the terms of the GNU General Public License as published by
>>>> + * the Free Software Foundation; either version 2 of the License, or
>>>> + * (at your option) any later version.
>>>> + *
>>>> + * This program is distributed in the hope that it will be useful,
>>>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>>>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
>>>> + * GNU General Public License for more details.
>>>> + *
>>>> + * Copyright (C) 2016-2018 EPAM Systems Inc.
>>>> + *
>>>> + * Author: Oleksandr Andrushchenko <[email protected]>
>>>> + */
>>>> +
>>>> +#ifndef __XEN_DRM_FRONT_GEM_H
>>>> +#define __XEN_DRM_FRONT_GEM_H
>>>> +
>>>> +#include <drm/drmP.h>
>>>> +
>>>> +struct xen_drm_front_gem_ops {
>>>> + void (*free_object_unlocked)(struct drm_gem_object *obj);
>>>> +
>>>> + struct sg_table *(*prime_get_sg_table)(struct drm_gem_object *obj);
>>>> + struct drm_gem_object *(*prime_import_sg_table)(struct drm_device *dev,
>>>> + struct dma_buf_attachment *attach,
>>>> + struct sg_table *sgt);
>>>> + void *(*prime_vmap)(struct drm_gem_object *obj);
>>>> + void (*prime_vunmap)(struct drm_gem_object *obj, void *vaddr);
>>>> + int (*prime_mmap)(struct drm_gem_object *obj,
>>>> + struct vm_area_struct *vma);
>>>> +
>>>> + int (*dumb_create)(struct drm_file *file_priv, struct drm_device *dev,
>>>> + struct drm_mode_create_dumb *args);
>>>> +
>>>> + int (*mmap)(struct file *filp, struct vm_area_struct *vma);
>>>> +
>>>> + struct page **(*get_pages)(struct drm_gem_object *obj);
>>>> +};
>>>> +
>>>> +const struct xen_drm_front_gem_ops *xen_drm_front_gem_get_ops(void);
>>>> +
>>>> +#endif /* __XEN_DRM_FRONT_GEM_H */
>>>> diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem_cma.c b/drivers/gpu/drm/xen/xen_drm_front_gem_cma.c
>>>> new file mode 100644
>>>> index 000000000000..5ffcbfa652d5
>>>> --- /dev/null
>>>> +++ b/drivers/gpu/drm/xen/xen_drm_front_gem_cma.c
>>>> @@ -0,0 +1,93 @@
>>>> +/*
>>>> + * Xen para-virtual DRM device
>>>> + *
>>>> + * This program is free software; you can redistribute it and/or modify
>>>> + * it under the terms of the GNU General Public License as published by
>>>> + * the Free Software Foundation; either version 2 of the License, or
>>>> + * (at your option) any later version.
>>>> + *
>>>> + * This program is distributed in the hope that it will be useful,
>>>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>>>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
>>>> + * GNU General Public License for more details.
>>>> + *
>>>> + * Copyright (C) 2016-2018 EPAM Systems Inc.
>>>> + *
>>>> + * Author: Oleksandr Andrushchenko <[email protected]>
>>>> + */
>>>> +
>>>> +#include <drm/drmP.h>
>>>> +#include <drm/drm_gem.h>
>>>> +#include <drm/drm_fb_cma_helper.h>
>>>> +#include <drm/drm_gem_cma_helper.h>
>>>> +
>>>> +#include "xen_drm_front.h"
>>>> +#include "xen_drm_front_drv.h"
>>>> +#include "xen_drm_front_gem.h"
>>>> +
>>>> +static struct drm_gem_object *gem_import_sg_table(struct drm_device *dev,
>>>> + struct dma_buf_attachment *attach, struct sg_table *sgt)
>>>> +{
>>>> + struct xen_drm_front_drm_info *drm_info = dev->dev_private;
>>>> + struct drm_gem_object *gem_obj;
>>>> + struct drm_gem_cma_object *cma_obj;
>>>> + int ret;
>>>> +
>>>> + gem_obj = drm_gem_cma_prime_import_sg_table(dev, attach, sgt);
>>>> + if (IS_ERR_OR_NULL(gem_obj))
>>>> + return gem_obj;
>>>> +
>>>> + cma_obj = to_drm_gem_cma_obj(gem_obj);
>>>> +
>>>> + ret = drm_info->front_ops->dbuf_create_from_sgt(
>>>> + drm_info->front_info,
>>>> + xen_drm_front_dbuf_to_cookie(gem_obj),
>>>> + 0, 0, 0, gem_obj->size,
>>>> + drm_gem_cma_prime_get_sg_table(gem_obj));
>>>> + if (ret < 0)
>>>> + return ERR_PTR(ret);
>>>> +
>>>> + DRM_DEBUG("Imported CMA buffer of size %zu\n", gem_obj->size);
>>>> +
>>>> + return gem_obj;
>>>> +}
>>>> +
>>>> +static int gem_dumb_create(struct drm_file *filp, struct drm_device *dev,
>>>> + struct drm_mode_create_dumb *args)
>>>> +{
>>>> + struct xen_drm_front_drm_info *drm_info = dev->dev_private;
>>>> +
>>>> + if (drm_info->cfg->be_alloc) {
>>>> + /* This use-case is not yet supported and probably won't be */
>>>> + DRM_ERROR("Backend allocated buffers and CMA helpers are not supported at the same time\n");
>>>> + return -EINVAL;
>>>> + }
>>>> +
>>>> + return drm_gem_cma_dumb_create(filp, dev, args);
>>>> +}
>>>> +
>>>> +static struct page **gem_get_pages(struct drm_gem_object *gem_obj)
>>>> +{
>>>> + return NULL;
>>>> +}
>>>> +
>>>> +static const struct xen_drm_front_gem_ops xen_drm_front_gem_cma_ops = {
>>>> + .free_object_unlocked = drm_gem_cma_free_object,
>>>> + .prime_get_sg_table = drm_gem_cma_prime_get_sg_table,
>>>> + .prime_import_sg_table = gem_import_sg_table,
>>>> +
>>>> + .prime_vmap = drm_gem_cma_prime_vmap,
>>>> + .prime_vunmap = drm_gem_cma_prime_vunmap,
>>>> + .prime_mmap = drm_gem_cma_prime_mmap,
>>>> +
>>>> + .dumb_create = gem_dumb_create,
>>>> +
>>>> + .mmap = drm_gem_cma_mmap,
>>>> +
>>>> + .get_pages = gem_get_pages,
>>>> +};
>>> Again quite a midlayer you have here. Please inline this to avoid
>>> confusion for other people (since it looks like you only have 1
>>> implementation).
>> There are two implementations, selected by a driver compile-time option:
>> the GEM operations can be implemented either with the DRM CMA helpers
>> or with the driver's own GEM code. This midlayer exists for that reason, e.g.
>> to eliminate the need for something like
>> #ifdef DRM_XEN_FRONTEND_CMA
>> drm_gem_cma_...()
>> #else
>> xen_drm_front_gem_...()
>> #endif
>> So, I would prefer to have ops rather than having ifdefs
> Ok, makes sense, but please review whether you really need all of them,
> since for a lot of them (all except get_pages really) we already have
> vfuncs. And if you only switch at compile time I think it's cleaner to
> simply have 2 vfunc tables for those (e.g. struct drm_driver). That avoids
> the indirection.
Ok, makes sense.
Then I'll only have one #ifdef, at the point where struct drm_driver
is defined, and will take care of get_pages separately.
> Cheers, Daniel
>>>> +
>>>> +const struct xen_drm_front_gem_ops *xen_drm_front_gem_get_ops(void)
>>>> +{
>>>> + return &xen_drm_front_gem_cma_ops;
>>>> +}
>>>> --
>>>> 2.7.4
>>>>
>>>> _______________________________________________
>>>> dri-devel mailing list
>>>> [email protected]
>>>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
Thank you,
Oleksandr
On Mon, Mar 05, 2018 at 11:30:35AM +0200, Oleksandr Andrushchenko wrote:
> On 03/05/2018 11:25 AM, Daniel Vetter wrote:
> > On Wed, Feb 21, 2018 at 10:03:42AM +0200, Oleksandr Andrushchenko wrote:
> > > From: Oleksandr Andrushchenko <[email protected]>
> > >
> > > Handle communication with the backend:
> > > - send requests and wait for the responses according
> > > to the displif protocol
> > > - serialize access to the communication channel
> > > - time-out used for backend communication is set to 3000 ms
> > > - manage display buffers shared with the backend
> > >
> > > Signed-off-by: Oleksandr Andrushchenko <[email protected]>
> > After the demidlayering it probably makes sense to merge this with the
> > overall kms/basic-drm-driver patch. Up to you really.
> The reason for this partitioning, here and before, was to keep the
> Xen and DRM parts separate, so each is easier to review by the
> Xen and DRM communities. So, I would prefer to keep it
> as it is
Well, for reviewing the KMS parts I need to check what the Xen parts are
doing (at least sometimes), since the semantics of what you're doing matter,
and there are a few cases that new drivers tend to get wrong. So for me,
this splitting actually makes things harder to review.
And I guess for the Xen folks it won't hurt to see a bit more clearly how
it's used on the DRM side (even if they might not really understand what's
going on). If we have some superficial abstraction in between, each of the
subsystem maintainers might make assumptions about what the other side of
the code is doing which turn out to be wrong, and that's not good.
Just explaining my motivation for why I don't like abstractions and
splitting stuff up into patches that don't make much sense on their own
(because the code is just hanging out there without being wired up
anywhere).
-Daniel
> > -Daniel
> > > ---
> > > drivers/gpu/drm/xen/xen_drm_front.c | 327 +++++++++++++++++++++++++++++++++++-
> > > drivers/gpu/drm/xen/xen_drm_front.h | 5 +
> > > 2 files changed, 327 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/xen/xen_drm_front.c b/drivers/gpu/drm/xen/xen_drm_front.c
> > > index 8de88e359d5e..5ad546231d30 100644
> > > --- a/drivers/gpu/drm/xen/xen_drm_front.c
> > > +++ b/drivers/gpu/drm/xen/xen_drm_front.c
> > > @@ -31,12 +31,146 @@
> > > #include "xen_drm_front_evtchnl.h"
> > > #include "xen_drm_front_shbuf.h"
> > > +/* timeout in ms to wait for backend to respond */
> > > +#define VDRM_WAIT_BACK_MS 3000
> > > +
> > > +struct xen_drm_front_dbuf {
> > > + struct list_head list;
> > > + uint64_t dbuf_cookie;
> > > + uint64_t fb_cookie;
> > > + struct xen_drm_front_shbuf *shbuf;
> > > +};
> > > +
> > > +static int dbuf_add_to_list(struct xen_drm_front_info *front_info,
> > > + struct xen_drm_front_shbuf *shbuf, uint64_t dbuf_cookie)
> > > +{
> > > + struct xen_drm_front_dbuf *dbuf;
> > > +
> > > + dbuf = kzalloc(sizeof(*dbuf), GFP_KERNEL);
> > > + if (!dbuf)
> > > + return -ENOMEM;
> > > +
> > > + dbuf->dbuf_cookie = dbuf_cookie;
> > > + dbuf->shbuf = shbuf;
> > > + list_add(&dbuf->list, &front_info->dbuf_list);
> > > + return 0;
> > > +}
> > > +
> > > +static struct xen_drm_front_dbuf *dbuf_get(struct list_head *dbuf_list,
> > > + uint64_t dbuf_cookie)
> > > +{
> > > + struct xen_drm_front_dbuf *buf, *q;
> > > +
> > > + list_for_each_entry_safe(buf, q, dbuf_list, list)
> > > + if (buf->dbuf_cookie == dbuf_cookie)
> > > + return buf;
> > > +
> > > + return NULL;
> > > +}
> > > +
> > > +static void dbuf_flush_fb(struct list_head *dbuf_list, uint64_t fb_cookie)
> > > +{
> > > + struct xen_drm_front_dbuf *buf, *q;
> > > +
> > > + list_for_each_entry_safe(buf, q, dbuf_list, list)
> > > + if (buf->fb_cookie == fb_cookie)
> > > + xen_drm_front_shbuf_flush(buf->shbuf);
> > > +}
> > > +
> > > +static void dbuf_free(struct list_head *dbuf_list, uint64_t dbuf_cookie)
> > > +{
> > > + struct xen_drm_front_dbuf *buf, *q;
> > > +
> > > + list_for_each_entry_safe(buf, q, dbuf_list, list)
> > > + if (buf->dbuf_cookie == dbuf_cookie) {
> > > + list_del(&buf->list);
> > > + xen_drm_front_shbuf_unmap(buf->shbuf);
> > > + xen_drm_front_shbuf_free(buf->shbuf);
> > > + kfree(buf);
> > > + break;
> > > + }
> > > +}
> > > +
> > > +static void dbuf_free_all(struct list_head *dbuf_list)
> > > +{
> > > + struct xen_drm_front_dbuf *buf, *q;
> > > +
> > > + list_for_each_entry_safe(buf, q, dbuf_list, list) {
> > > + list_del(&buf->list);
> > > + xen_drm_front_shbuf_unmap(buf->shbuf);
> > > + xen_drm_front_shbuf_free(buf->shbuf);
> > > + kfree(buf);
> > > + }
> > > +}
> > > +
> > > +static struct xendispl_req *be_prepare_req(
> > > + struct xen_drm_front_evtchnl *evtchnl, uint8_t operation)
> > > +{
> > > + struct xendispl_req *req;
> > > +
> > > + req = RING_GET_REQUEST(&evtchnl->u.req.ring,
> > > + evtchnl->u.req.ring.req_prod_pvt);
> > > + req->operation = operation;
> > > + req->id = evtchnl->evt_next_id++;
> > > + evtchnl->evt_id = req->id;
> > > + return req;
> > > +}
> > > +
> > > +static int be_stream_do_io(struct xen_drm_front_evtchnl *evtchnl,
> > > + struct xendispl_req *req)
> > > +{
> > > + reinit_completion(&evtchnl->u.req.completion);
> > > + if (unlikely(evtchnl->state != EVTCHNL_STATE_CONNECTED))
> > > + return -EIO;
> > > +
> > > + xen_drm_front_evtchnl_flush(evtchnl);
> > > + return 0;
> > > +}
> > > +
> > > +static int be_stream_wait_io(struct xen_drm_front_evtchnl *evtchnl)
> > > +{
> > > + if (wait_for_completion_timeout(&evtchnl->u.req.completion,
> > > + msecs_to_jiffies(VDRM_WAIT_BACK_MS)) <= 0)
> > > + return -ETIMEDOUT;
> > > +
> > > + return evtchnl->u.req.resp_status;
> > > +}
> > > +
> > > static int be_mode_set(struct xen_drm_front_drm_pipeline *pipeline, uint32_t x,
> > > uint32_t y, uint32_t width, uint32_t height, uint32_t bpp,
> > > uint64_t fb_cookie)
> > > {
> > > - return 0;
> > > + struct xen_drm_front_evtchnl *evtchnl;
> > > + struct xen_drm_front_info *front_info;
> > > + struct xendispl_req *req;
> > > + unsigned long flags;
> > > + int ret;
> > > +
> > > + front_info = pipeline->drm_info->front_info;
> > > + evtchnl = &front_info->evt_pairs[pipeline->index].req;
> > > + if (unlikely(!evtchnl))
> > > + return -EIO;
> > > +
> > > + mutex_lock(&front_info->req_io_lock);
> > > +
> > > + spin_lock_irqsave(&front_info->io_lock, flags);
> > > + req = be_prepare_req(evtchnl, XENDISPL_OP_SET_CONFIG);
> > > + req->op.set_config.x = x;
> > > + req->op.set_config.y = y;
> > > + req->op.set_config.width = width;
> > > + req->op.set_config.height = height;
> > > + req->op.set_config.bpp = bpp;
> > > + req->op.set_config.fb_cookie = fb_cookie;
> > > +
> > > + ret = be_stream_do_io(evtchnl, req);
> > > + spin_unlock_irqrestore(&front_info->io_lock, flags);
> > > +
> > > + if (ret == 0)
> > > + ret = be_stream_wait_io(evtchnl);
> > > +
> > > + mutex_unlock(&front_info->req_io_lock);
> > > + return ret;
> > > }
> > > static int be_dbuf_create_int(struct xen_drm_front_info *front_info,
> > > @@ -44,7 +178,69 @@ static int be_dbuf_create_int(struct xen_drm_front_info *front_info,
> > > uint32_t bpp, uint64_t size, struct page **pages,
> > > struct sg_table *sgt)
> > > {
> > > + struct xen_drm_front_evtchnl *evtchnl;
> > > + struct xen_drm_front_shbuf *shbuf;
> > > + struct xendispl_req *req;
> > > + struct xen_drm_front_shbuf_cfg buf_cfg;
> > > + unsigned long flags;
> > > + int ret;
> > > +
> > > + evtchnl = &front_info->evt_pairs[GENERIC_OP_EVT_CHNL].req;
> > > + if (unlikely(!evtchnl))
> > > + return -EIO;
> > > +
> > > + memset(&buf_cfg, 0, sizeof(buf_cfg));
> > > + buf_cfg.xb_dev = front_info->xb_dev;
> > > + buf_cfg.pages = pages;
> > > + buf_cfg.size = size;
> > > + buf_cfg.sgt = sgt;
> > > + buf_cfg.be_alloc = front_info->cfg.be_alloc;
> > > +
> > > + shbuf = xen_drm_front_shbuf_alloc(&buf_cfg);
> > > + if (!shbuf)
> > > + return -ENOMEM;
> > > +
> > > + ret = dbuf_add_to_list(front_info, shbuf, dbuf_cookie);
> > > + if (ret < 0) {
> > > + xen_drm_front_shbuf_free(shbuf);
> > > + return ret;
> > > + }
> > > +
> > > + mutex_lock(&front_info->req_io_lock);
> > > +
> > > + spin_lock_irqsave(&front_info->io_lock, flags);
> > > + req = be_prepare_req(evtchnl, XENDISPL_OP_DBUF_CREATE);
> > > + req->op.dbuf_create.gref_directory =
> > > + xen_drm_front_shbuf_get_dir_start(shbuf);
> > > + req->op.dbuf_create.buffer_sz = size;
> > > + req->op.dbuf_create.dbuf_cookie = dbuf_cookie;
> > > + req->op.dbuf_create.width = width;
> > > + req->op.dbuf_create.height = height;
> > > + req->op.dbuf_create.bpp = bpp;
> > > + if (buf_cfg.be_alloc)
> > > + req->op.dbuf_create.flags |= XENDISPL_DBUF_FLG_REQ_ALLOC;
> > > +
> > > + ret = be_stream_do_io(evtchnl, req);
> > > + spin_unlock_irqrestore(&front_info->io_lock, flags);
> > > +
> > > + if (ret < 0)
> > > + goto fail;
> > > +
> > > + ret = be_stream_wait_io(evtchnl);
> > > + if (ret < 0)
> > > + goto fail;
> > > +
> > > + ret = xen_drm_front_shbuf_map(shbuf);
> > > + if (ret < 0)
> > > + goto fail;
> > > +
> > > + mutex_unlock(&front_info->req_io_lock);
> > > return 0;
> > > +
> > > +fail:
> > > + mutex_unlock(&front_info->req_io_lock);
> > > + dbuf_free(&front_info->dbuf_list, dbuf_cookie);
> > > + return ret;
> > > }
> > > static int be_dbuf_create_from_sgt(struct xen_drm_front_info *front_info,
> > > @@ -66,26 +262,144 @@ static int be_dbuf_create_from_pages(struct xen_drm_front_info *front_info,
> > > static int be_dbuf_destroy(struct xen_drm_front_info *front_info,
> > > uint64_t dbuf_cookie)
> > > {
> > > - return 0;
> > > + struct xen_drm_front_evtchnl *evtchnl;
> > > + struct xendispl_req *req;
> > > + unsigned long flags;
> > > + bool be_alloc;
> > > + int ret;
> > > +
> > > + evtchnl = &front_info->evt_pairs[GENERIC_OP_EVT_CHNL].req;
> > > + if (unlikely(!evtchnl))
> > > + return -EIO;
> > > +
> > > + be_alloc = front_info->cfg.be_alloc;
> > > +
> > > + /*
> > > + * for the backend allocated buffer release references now, so backend
> > > + * can free the buffer
> > > + */
> > > + if (be_alloc)
> > > + dbuf_free(&front_info->dbuf_list, dbuf_cookie);
> > > +
> > > + mutex_lock(&front_info->req_io_lock);
> > > +
> > > + spin_lock_irqsave(&front_info->io_lock, flags);
> > > + req = be_prepare_req(evtchnl, XENDISPL_OP_DBUF_DESTROY);
> > > + req->op.dbuf_destroy.dbuf_cookie = dbuf_cookie;
> > > +
> > > + ret = be_stream_do_io(evtchnl, req);
> > > + spin_unlock_irqrestore(&front_info->io_lock, flags);
> > > +
> > > + if (ret == 0)
> > > + ret = be_stream_wait_io(evtchnl);
> > > +
> > > + /*
> > > + * do this regardless of communication status with the backend:
> > > + * if we cannot remove remote resources remove what we can locally
> > > + */
> > > + if (!be_alloc)
> > > + dbuf_free(&front_info->dbuf_list, dbuf_cookie);
> > > +
> > > + mutex_unlock(&front_info->req_io_lock);
> > > + return ret;
> > > }
> > > static int be_fb_attach(struct xen_drm_front_info *front_info,
> > > uint64_t dbuf_cookie, uint64_t fb_cookie, uint32_t width,
> > > uint32_t height, uint32_t pixel_format)
> > > {
> > > - return 0;
> > > + struct xen_drm_front_evtchnl *evtchnl;
> > > + struct xen_drm_front_dbuf *buf;
> > > + struct xendispl_req *req;
> > > + unsigned long flags;
> > > + int ret;
> > > +
On 03/06/2018 11:26 AM, Daniel Vetter wrote:
> On Mon, Mar 05, 2018 at 11:30:35AM +0200, Oleksandr Andrushchenko wrote:
>> On 03/05/2018 11:25 AM, Daniel Vetter wrote:
>>> On Wed, Feb 21, 2018 at 10:03:42AM +0200, Oleksandr Andrushchenko wrote:
>>>> From: Oleksandr Andrushchenko <[email protected]>
>>>>
>>>> Handle communication with the backend:
>>>> - send requests and wait for the responses according
>>>> to the displif protocol
>>>> - serialize access to the communication channel
>>>> - time-out used for backend communication is set to 3000 ms
>>>> - manage display buffers shared with the backend
>>>>
>>>> Signed-off-by: Oleksandr Andrushchenko <[email protected]>
>>> After the demidlayering it probably makes sense to merge this with the
>>> overall kms/basic-drm-driver patch. Up to you really.
>> The reason for this partitioning, here and before, was to keep the
>> Xen and DRM parts separate, so that each is easier for the Xen and
>> DRM communities to review. So I would prefer to keep it
>> as it is
> Well for reviewing the kms parts I need to check what the xen parts are
> doing (at least sometimes), since semantics of what you're doing matter,
> and there's a few cases which new drivers tend to get wrong. So for me,
> this splitting makes stuff actually harder to review.
>
> And I guess for the xen folks it won't hurt if they see a bit more clearly
> how it's used on the drm side (even if they might not really understand
> what's going on). If we have some superficial abstraction in between, each
> of the subsystem maintainers might make assumptions about what the other
> side of the code is doing which turn out to be wrong, and that's not good.
>
> Just explaining my motivation for why I don't like abstractions and
> splitting stuff up into patches that don't make much sense on their own
> (because the code is just hanging out there without being wired up
> anywhere).
Ok, no problem here. Will squash the relevant patches then
> -Daniel
>>> -Daniel
>>>> ---
>>>> drivers/gpu/drm/xen/xen_drm_front.c | 327 +++++++++++++++++++++++++++++++++++-
>>>> drivers/gpu/drm/xen/xen_drm_front.h | 5 +
>>>> 2 files changed, 327 insertions(+), 5 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/xen/xen_drm_front.c b/drivers/gpu/drm/xen/xen_drm_front.c
>>>> index 8de88e359d5e..5ad546231d30 100644
>>>> --- a/drivers/gpu/drm/xen/xen_drm_front.c
>>>> +++ b/drivers/gpu/drm/xen/xen_drm_front.c
>>>> @@ -31,12 +31,146 @@
>>>> #include "xen_drm_front_evtchnl.h"
>>>> #include "xen_drm_front_shbuf.h"
>>>> +/* timeout in ms to wait for backend to respond */
>>>> +#define VDRM_WAIT_BACK_MS 3000
>>>> +
>>>> +struct xen_drm_front_dbuf {
>>>> + struct list_head list;
>>>> + uint64_t dbuf_cookie;
>>>> + uint64_t fb_cookie;
>>>> + struct xen_drm_front_shbuf *shbuf;
>>>> +};
>>>> +
>>>> +static int dbuf_add_to_list(struct xen_drm_front_info *front_info,
>>>> + struct xen_drm_front_shbuf *shbuf, uint64_t dbuf_cookie)
>>>> +{
>>>> + struct xen_drm_front_dbuf *dbuf;
>>>> +
>>>> + dbuf = kzalloc(sizeof(*dbuf), GFP_KERNEL);
>>>> + if (!dbuf)
>>>> + return -ENOMEM;
>>>> +
>>>> + dbuf->dbuf_cookie = dbuf_cookie;
>>>> + dbuf->shbuf = shbuf;
>>>> + list_add(&dbuf->list, &front_info->dbuf_list);
>>>> + return 0;
>>>> +}
>>>> +
>>>> +static struct xen_drm_front_dbuf *dbuf_get(struct list_head *dbuf_list,
>>>> + uint64_t dbuf_cookie)
>>>> +{
>>>> + struct xen_drm_front_dbuf *buf, *q;
>>>> +
>>>> + list_for_each_entry_safe(buf, q, dbuf_list, list)
>>>> + if (buf->dbuf_cookie == dbuf_cookie)
>>>> + return buf;
>>>> +
>>>> + return NULL;
>>>> +}
>>>> +
>>>> +static void dbuf_flush_fb(struct list_head *dbuf_list, uint64_t fb_cookie)
>>>> +{
>>>> + struct xen_drm_front_dbuf *buf, *q;
>>>> +
>>>> + list_for_each_entry_safe(buf, q, dbuf_list, list)
>>>> + if (buf->fb_cookie == fb_cookie)
>>>> + xen_drm_front_shbuf_flush(buf->shbuf);
>>>> +}
>>>> +
>>>> +static void dbuf_free(struct list_head *dbuf_list, uint64_t dbuf_cookie)
>>>> +{
>>>> + struct xen_drm_front_dbuf *buf, *q;
>>>> +
>>>> + list_for_each_entry_safe(buf, q, dbuf_list, list)
>>>> + if (buf->dbuf_cookie == dbuf_cookie) {
>>>> + list_del(&buf->list);
>>>> + xen_drm_front_shbuf_unmap(buf->shbuf);
>>>> + xen_drm_front_shbuf_free(buf->shbuf);
>>>> + kfree(buf);
>>>> + break;
>>>> + }
>>>> +}
>>>> +
>>>> +static void dbuf_free_all(struct list_head *dbuf_list)
>>>> +{
>>>> + struct xen_drm_front_dbuf *buf, *q;
>>>> +
>>>> + list_for_each_entry_safe(buf, q, dbuf_list, list) {
>>>> + list_del(&buf->list);
>>>> + xen_drm_front_shbuf_unmap(buf->shbuf);
>>>> + xen_drm_front_shbuf_free(buf->shbuf);
>>>> + kfree(buf);
>>>> + }
>>>> +}
>>>> +
>>>> +static struct xendispl_req *be_prepare_req(
>>>> + struct xen_drm_front_evtchnl *evtchnl, uint8_t operation)
>>>> +{
>>>> + struct xendispl_req *req;
>>>> +
>>>> + req = RING_GET_REQUEST(&evtchnl->u.req.ring,
>>>> + evtchnl->u.req.ring.req_prod_pvt);
>>>> + req->operation = operation;
>>>> + req->id = evtchnl->evt_next_id++;
>>>> + evtchnl->evt_id = req->id;
>>>> + return req;
>>>> +}
>>>> +
>>>> +static int be_stream_do_io(struct xen_drm_front_evtchnl *evtchnl,
>>>> + struct xendispl_req *req)
>>>> +{
>>>> + reinit_completion(&evtchnl->u.req.completion);
>>>> + if (unlikely(evtchnl->state != EVTCHNL_STATE_CONNECTED))
>>>> + return -EIO;
>>>> +
>>>> + xen_drm_front_evtchnl_flush(evtchnl);
>>>> + return 0;
>>>> +}
>>>> +
>>>> +static int be_stream_wait_io(struct xen_drm_front_evtchnl *evtchnl)
>>>> +{
>>>> + if (wait_for_completion_timeout(&evtchnl->u.req.completion,
>>>> + msecs_to_jiffies(VDRM_WAIT_BACK_MS)) <= 0)
>>>> + return -ETIMEDOUT;
>>>> +
>>>> + return evtchnl->u.req.resp_status;
>>>> +}
>>>> +
>>>> static int be_mode_set(struct xen_drm_front_drm_pipeline *pipeline, uint32_t x,
>>>> uint32_t y, uint32_t width, uint32_t height, uint32_t bpp,
>>>> uint64_t fb_cookie)
>>>> {
>>>> - return 0;
>>>> + struct xen_drm_front_evtchnl *evtchnl;
>>>> + struct xen_drm_front_info *front_info;
>>>> + struct xendispl_req *req;
>>>> + unsigned long flags;
>>>> + int ret;
>>>> +
>>>> + front_info = pipeline->drm_info->front_info;
>>>> + evtchnl = &front_info->evt_pairs[pipeline->index].req;
>>>> + if (unlikely(!evtchnl))
>>>> + return -EIO;
>>>> +
>>>> + mutex_lock(&front_info->req_io_lock);
>>>> +
>>>> + spin_lock_irqsave(&front_info->io_lock, flags);
>>>> + req = be_prepare_req(evtchnl, XENDISPL_OP_SET_CONFIG);
>>>> + req->op.set_config.x = x;
>>>> + req->op.set_config.y = y;
>>>> + req->op.set_config.width = width;
>>>> + req->op.set_config.height = height;
>>>> + req->op.set_config.bpp = bpp;
>>>> + req->op.set_config.fb_cookie = fb_cookie;
>>>> +
>>>> + ret = be_stream_do_io(evtchnl, req);
>>>> + spin_unlock_irqrestore(&front_info->io_lock, flags);
>>>> +
>>>> + if (ret == 0)
>>>> + ret = be_stream_wait_io(evtchnl);
>>>> +
>>>> + mutex_unlock(&front_info->req_io_lock);
>>>> + return ret;
>>>> }
>>>> static int be_dbuf_create_int(struct xen_drm_front_info *front_info,
>>>> @@ -44,7 +178,69 @@ static int be_dbuf_create_int(struct xen_drm_front_info *front_info,
>>>> uint32_t bpp, uint64_t size, struct page **pages,
>>>> struct sg_table *sgt)
>>>> {
>>>> + struct xen_drm_front_evtchnl *evtchnl;
>>>> + struct xen_drm_front_shbuf *shbuf;
>>>> + struct xendispl_req *req;
>>>> + struct xen_drm_front_shbuf_cfg buf_cfg;
>>>> + unsigned long flags;
>>>> + int ret;
>>>> +
>>>> + evtchnl = &front_info->evt_pairs[GENERIC_OP_EVT_CHNL].req;
>>>> + if (unlikely(!evtchnl))
>>>> + return -EIO;
>>>> +
>>>> + memset(&buf_cfg, 0, sizeof(buf_cfg));
>>>> + buf_cfg.xb_dev = front_info->xb_dev;
>>>> + buf_cfg.pages = pages;
>>>> + buf_cfg.size = size;
>>>> + buf_cfg.sgt = sgt;
>>>> + buf_cfg.be_alloc = front_info->cfg.be_alloc;
>>>> +
>>>> + shbuf = xen_drm_front_shbuf_alloc(&buf_cfg);
>>>> + if (!shbuf)
>>>> + return -ENOMEM;
>>>> +
>>>> + ret = dbuf_add_to_list(front_info, shbuf, dbuf_cookie);
>>>> + if (ret < 0) {
>>>> + xen_drm_front_shbuf_free(shbuf);
>>>> + return ret;
>>>> + }
>>>> +
>>>> + mutex_lock(&front_info->req_io_lock);
>>>> +
>>>> + spin_lock_irqsave(&front_info->io_lock, flags);
>>>> + req = be_prepare_req(evtchnl, XENDISPL_OP_DBUF_CREATE);
>>>> + req->op.dbuf_create.gref_directory =
>>>> + xen_drm_front_shbuf_get_dir_start(shbuf);
>>>> + req->op.dbuf_create.buffer_sz = size;
>>>> + req->op.dbuf_create.dbuf_cookie = dbuf_cookie;
>>>> + req->op.dbuf_create.width = width;
>>>> + req->op.dbuf_create.height = height;
>>>> + req->op.dbuf_create.bpp = bpp;
>>>> + if (buf_cfg.be_alloc)
>>>> + req->op.dbuf_create.flags |= XENDISPL_DBUF_FLG_REQ_ALLOC;
>>>> +
>>>> + ret = be_stream_do_io(evtchnl, req);
>>>> + spin_unlock_irqrestore(&front_info->io_lock, flags);
>>>> +
>>>> + if (ret < 0)
>>>> + goto fail;
>>>> +
>>>> + ret = be_stream_wait_io(evtchnl);
>>>> + if (ret < 0)
>>>> + goto fail;
>>>> +
>>>> + ret = xen_drm_front_shbuf_map(shbuf);
>>>> + if (ret < 0)
>>>> + goto fail;
>>>> +
>>>> + mutex_unlock(&front_info->req_io_lock);
>>>> return 0;
>>>> +
>>>> +fail:
>>>> + mutex_unlock(&front_info->req_io_lock);
>>>> + dbuf_free(&front_info->dbuf_list, dbuf_cookie);
>>>> + return ret;
>>>> }
>>>> static int be_dbuf_create_from_sgt(struct xen_drm_front_info *front_info,
>>>> @@ -66,26 +262,144 @@ static int be_dbuf_create_from_pages(struct xen_drm_front_info *front_info,
>>>> static int be_dbuf_destroy(struct xen_drm_front_info *front_info,
>>>> uint64_t dbuf_cookie)
>>>> {
>>>> - return 0;
>>>> + struct xen_drm_front_evtchnl *evtchnl;
>>>> + struct xendispl_req *req;
>>>> + unsigned long flags;
>>>> + bool be_alloc;
>>>> + int ret;
>>>> +
>>>> + evtchnl = &front_info->evt_pairs[GENERIC_OP_EVT_CHNL].req;
>>>> + if (unlikely(!evtchnl))
>>>> + return -EIO;
>>>> +
>>>> + be_alloc = front_info->cfg.be_alloc;
>>>> +
>>>> + /*
>>>> + * for the backend allocated buffer release references now, so backend
>>>> + * can free the buffer
>>>> + */
>>>> + if (be_alloc)
>>>> + dbuf_free(&front_info->dbuf_list, dbuf_cookie);
>>>> +
>>>> + mutex_lock(&front_info->req_io_lock);
>>>> +
>>>> + spin_lock_irqsave(&front_info->io_lock, flags);
>>>> + req = be_prepare_req(evtchnl, XENDISPL_OP_DBUF_DESTROY);
>>>> + req->op.dbuf_destroy.dbuf_cookie = dbuf_cookie;
>>>> +
>>>> + ret = be_stream_do_io(evtchnl, req);
>>>> + spin_unlock_irqrestore(&front_info->io_lock, flags);
>>>> +
>>>> + if (ret == 0)
>>>> + ret = be_stream_wait_io(evtchnl);
>>>> +
>>>> + /*
>>>> + * do this regardless of communication status with the backend:
>>>> + * if we cannot remove remote resources remove what we can locally
>>>> + */
>>>> + if (!be_alloc)
>>>> + dbuf_free(&front_info->dbuf_list, dbuf_cookie);
>>>> +
>>>> + mutex_unlock(&front_info->req_io_lock);
>>>> + return ret;
>>>> }
>>>> static int be_fb_attach(struct xen_drm_front_info *front_info,
>>>> uint64_t dbuf_cookie, uint64_t fb_cookie, uint32_t width,
>>>> uint32_t height, uint32_t pixel_format)
>>>> {
>>>> - return 0;
>>>> + struct xen_drm_front_evtchnl *evtchnl;
>>>> + struct xen_drm_front_dbuf *buf;
>>>> + struct xendispl_req *req;
>>>> + unsigned long flags;
>>>> + int ret;
>>>> +
>>>> + evtchnl = &front_info->evt_pairs[GENERIC_OP_EVT_CHNL].req;
>>>> + if (unlikely(!evtchnl))
>>>> + return -EIO;
>>>> +
>>>> + buf = dbuf_get(&front_info->dbuf_list, dbuf_cookie);
>>>> + if (!buf)
>>>> + return -EINVAL;
>>>> +
>>>> + buf->fb_cookie = fb_cookie;
>>>> +
>>>> + mutex_lock(&front_info->req_io_lock);
>>>> +
>>>> + spin_lock_irqsave(&front_info->io_lock, flags);
>>>> + req = be_prepare_req(evtchnl, XENDISPL_OP_FB_ATTACH);
>>>> + req->op.fb_attach.dbuf_cookie = dbuf_cookie;
>>>> + req->op.fb_attach.fb_cookie = fb_cookie;
>>>> + req->op.fb_attach.width = width;
>>>> + req->op.fb_attach.height = height;
>>>> + req->op.fb_attach.pixel_format = pixel_format;
>>>> +
>>>> + ret = be_stream_do_io(evtchnl, req);
>>>> + spin_unlock_irqrestore(&front_info->io_lock, flags);
>>>> +
>>>> + if (ret == 0)
>>>> + ret = be_stream_wait_io(evtchnl);
>>>> +
>>>> + mutex_unlock(&front_info->req_io_lock);
>>>> + return ret;
>>>> }
>>>> static int be_fb_detach(struct xen_drm_front_info *front_info,
>>>> uint64_t fb_cookie)
>>>> {
>>>> - return 0;
>>>> + struct xen_drm_front_evtchnl *evtchnl;
>>>> + struct xendispl_req *req;
>>>> + unsigned long flags;
>>>> + int ret;
>>>> +
>>>> + evtchnl = &front_info->evt_pairs[GENERIC_OP_EVT_CHNL].req;
>>>> + if (unlikely(!evtchnl))
>>>> + return -EIO;
>>>> +
>>>> + mutex_lock(&front_info->req_io_lock);
>>>> +
>>>> + spin_lock_irqsave(&front_info->io_lock, flags);
>>>> + req = be_prepare_req(evtchnl, XENDISPL_OP_FB_DETACH);
>>>> + req->op.fb_detach.fb_cookie = fb_cookie;
>>>> +
>>>> + ret = be_stream_do_io(evtchnl, req);
>>>> + spin_unlock_irqrestore(&front_info->io_lock, flags);
>>>> +
>>>> + if (ret == 0)
>>>> + ret = be_stream_wait_io(evtchnl);
>>>> +
>>>> + mutex_unlock(&front_info->req_io_lock);
>>>> + return ret;
>>>> }
>>>> static int be_page_flip(struct xen_drm_front_info *front_info, int conn_idx,
>>>> uint64_t fb_cookie)
>>>> {
>>>> - return 0;
>>>> + struct xen_drm_front_evtchnl *evtchnl;
>>>> + struct xendispl_req *req;
>>>> + unsigned long flags;
>>>> + int ret;
>>>> +
>>>> + if (unlikely(conn_idx >= front_info->num_evt_pairs))
>>>> + return -EINVAL;
>>>> +
>>>> + dbuf_flush_fb(&front_info->dbuf_list, fb_cookie);
>>>> + evtchnl = &front_info->evt_pairs[conn_idx].req;
>>>> +
>>>> + mutex_lock(&front_info->req_io_lock);
>>>> +
>>>> + spin_lock_irqsave(&front_info->io_lock, flags);
>>>> + req = be_prepare_req(evtchnl, XENDISPL_OP_PG_FLIP);
>>>> + req->op.pg_flip.fb_cookie = fb_cookie;
>>>> +
>>>> + ret = be_stream_do_io(evtchnl, req);
>>>> + spin_unlock_irqrestore(&front_info->io_lock, flags);
>>>> +
>>>> + if (ret == 0)
>>>> + ret = be_stream_wait_io(evtchnl);
>>>> +
>>>> + mutex_unlock(&front_info->req_io_lock);
>>>> + return ret;
>>>> }
>>>> static void xen_drm_drv_unload(struct xen_drm_front_info *front_info)
>>>> @@ -183,6 +497,7 @@ static void xen_drv_remove_internal(struct xen_drm_front_info *front_info)
>>>> {
>>>> xen_drm_drv_deinit(front_info);
>>>> xen_drm_front_evtchnl_free_all(front_info);
>>>> + dbuf_free_all(&front_info->dbuf_list);
>>>> }
>>>> static int backend_on_initwait(struct xen_drm_front_info *front_info)
>>>> @@ -310,6 +625,8 @@ static int xen_drv_probe(struct xenbus_device *xb_dev,
>>>> front_info->xb_dev = xb_dev;
>>>> spin_lock_init(&front_info->io_lock);
>>>> + mutex_init(&front_info->req_io_lock);
>>>> + INIT_LIST_HEAD(&front_info->dbuf_list);
>>>> front_info->drm_pdrv_registered = false;
>>>> dev_set_drvdata(&xb_dev->dev, front_info);
>>>> return xenbus_switch_state(xb_dev, XenbusStateInitialising);
>>>> diff --git a/drivers/gpu/drm/xen/xen_drm_front.h b/drivers/gpu/drm/xen/xen_drm_front.h
>>>> index c6f52c892434..db32d00145d1 100644
>>>> --- a/drivers/gpu/drm/xen/xen_drm_front.h
>>>> +++ b/drivers/gpu/drm/xen/xen_drm_front.h
>>>> @@ -137,6 +137,8 @@ struct xen_drm_front_info {
>>>> struct xenbus_device *xb_dev;
>>>> /* to protect data between backend IO code and interrupt handler */
>>>> spinlock_t io_lock;
>>>> + /* serializer for backend IO: request/response */
>>>> + struct mutex req_io_lock;
>>>> bool drm_pdrv_registered;
>>>> /* virtual DRM platform device */
>>>> struct platform_device *drm_pdev;
>>>> @@ -144,6 +146,9 @@ struct xen_drm_front_info {
>>>> int num_evt_pairs;
>>>> struct xen_drm_front_evtchnl_pair *evt_pairs;
>>>> struct xen_drm_front_cfg cfg;
>>>> +
>>>> + /* display buffers */
>>>> + struct list_head dbuf_list;
>>>> };
>>>> #endif /* __XEN_DRM_FRONT_H_ */
>>>> --
>>>> 2.7.4
>>>>
>>>> _______________________________________________
>>>> dri-devel mailing list
>>>> [email protected]
>>>> https://lists.freedesktop.org/mailman/listinfo/dri-devel