Date: Mon, 5 Mar 2018 10:32:25 +0100
From: Daniel Vetter
To: Oleksandr Andrushchenko
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
    dri-devel@lists.freedesktop.org, airlied@linux.ie,
    daniel.vetter@intel.com, seanpaul@chromium.org, gustavo@padovan.org,
    jgross@suse.com, boris.ostrovsky@oracle.com, konrad.wilk@oracle.com,
    Oleksandr Andrushchenko
Subject: Re: [PATCH 8/9] drm/xen-front: Implement GEM operations
Message-ID: <20180305093225.GK22212@phenom.ffwll.local>
References: <1519200222-20623-1-git-send-email-andr2000@gmail.com>
 <1519200222-20623-9-git-send-email-andr2000@gmail.com>
In-Reply-To: <1519200222-20623-9-git-send-email-andr2000@gmail.com>
User-Agent: Mutt/1.9.3 (2018-01-21)

On Wed, Feb 21, 2018 at 10:03:41AM +0200, Oleksandr Andrushchenko wrote:
> From: Oleksandr Andrushchenko
>
> Implement GEM handling depending on driver mode of operation:
> depending on the requirements for the para-virtualized environment, namely
> requirements dictated by the accompanying DRM/(v)GPU drivers running in both
> host and guest environments, number of operating modes of para-virtualized
> display driver are supported:
> - display buffers can be allocated by either frontend driver or backend
> - display buffers can be allocated to be contiguous in memory or not
>
> Note! Frontend driver itself has no dependency on contiguous memory for
> its operation.
>
> 1. Buffers allocated by the frontend driver.
>
> The below modes of operation are configured at compile-time via
> frontend driver's kernel configuration.
>
> 1.1. Front driver configured to use GEM CMA helpers
> This use-case is useful when used with accompanying DRM/vGPU driver in
> guest domain which was designed to only work with contiguous buffers,
> e.g. DRM driver based on GEM CMA helpers: such drivers can only import
> contiguous PRIME buffers, thus requiring frontend driver to provide
> such. In order to implement this mode of operation para-virtualized
> frontend driver can be configured to use GEM CMA helpers.
>
> 1.2. Front driver doesn't use GEM CMA
> If accompanying drivers can cope with non-contiguous memory then, to
> lower pressure on CMA subsystem of the kernel, driver can allocate
> buffers from system memory.
>
> Note! If used with accompanying DRM/(v)GPU drivers this mode of operation
> may require IOMMU support on the platform, so accompanying DRM/vGPU
> hardware can still reach display buffer memory while importing PRIME
> buffers from the frontend driver.
>
> 2. Buffers allocated by the backend
>
> This mode of operation is run-time configured via guest domain configuration
> through XenStore entries.
>
> For systems which do not provide IOMMU support, but having specific
> requirements for display buffers it is possible to allocate such buffers
> at backend side and share those with the frontend.
> For example, if host domain is 1:1 mapped and has DRM/GPU hardware expecting
> physically contiguous memory, this allows implementing zero-copying
> use-cases.
>
> Note! Configuration options 1.1 (contiguous display buffers) and 2 (backend
> allocated buffers) are not supported at the same time.
>
> Signed-off-by: Oleksandr Andrushchenko

Some suggestions below for some larger cleanup work.
-Daniel

> ---
>  drivers/gpu/drm/xen/Kconfig                 |  13 +
>  drivers/gpu/drm/xen/Makefile                |   6 +
>  drivers/gpu/drm/xen/xen_drm_front.h         |  74 ++++++
>  drivers/gpu/drm/xen/xen_drm_front_drv.c     |  80 ++++++-
>  drivers/gpu/drm/xen/xen_drm_front_drv.h     |   1 +
>  drivers/gpu/drm/xen/xen_drm_front_gem.c     | 360 ++++++++++++++++++++++++++++
>  drivers/gpu/drm/xen/xen_drm_front_gem.h     |  46 ++++
>  drivers/gpu/drm/xen/xen_drm_front_gem_cma.c |  93 +++++++
>  8 files changed, 667 insertions(+), 6 deletions(-)
>  create mode 100644 drivers/gpu/drm/xen/xen_drm_front_gem.c
>  create mode 100644 drivers/gpu/drm/xen/xen_drm_front_gem.h
>  create mode 100644 drivers/gpu/drm/xen/xen_drm_front_gem_cma.c
>
> diff --git a/drivers/gpu/drm/xen/Kconfig b/drivers/gpu/drm/xen/Kconfig
> index 4cca160782ab..4f4abc91f3b6 100644
> --- a/drivers/gpu/drm/xen/Kconfig
> +++ b/drivers/gpu/drm/xen/Kconfig
> @@ -15,3 +15,16 @@ config DRM_XEN_FRONTEND
>  	help
>  	  Choose this option if you want to enable a para-virtualized
>  	  frontend DRM/KMS driver for Xen guest OSes.
> +
> +config DRM_XEN_FRONTEND_CMA
> +	bool "Use DRM CMA to allocate dumb buffers"
> +	depends on DRM_XEN_FRONTEND
> +	select DRM_KMS_CMA_HELPER
> +	select DRM_GEM_CMA_HELPER
> +	help
> +	  Use DRM CMA helpers to allocate display buffers.
> + This is useful for the use-cases when guest driver needs to > + share or export buffers to other drivers which only expect > + contiguous buffers. > + Note: in this mode driver cannot use buffers allocated > + by the backend. > diff --git a/drivers/gpu/drm/xen/Makefile b/drivers/gpu/drm/xen/Makefile > index 4fcb0da1a9c5..12376ec78fbc 100644 > --- a/drivers/gpu/drm/xen/Makefile > +++ b/drivers/gpu/drm/xen/Makefile > @@ -8,4 +8,10 @@ drm_xen_front-objs := xen_drm_front.o \ > xen_drm_front_shbuf.o \ > xen_drm_front_cfg.o > > +ifeq ($(CONFIG_DRM_XEN_FRONTEND_CMA),y) > + drm_xen_front-objs += xen_drm_front_gem_cma.o > +else > + drm_xen_front-objs += xen_drm_front_gem.o > +endif > + > obj-$(CONFIG_DRM_XEN_FRONTEND) += drm_xen_front.o > diff --git a/drivers/gpu/drm/xen/xen_drm_front.h b/drivers/gpu/drm/xen/xen_drm_front.h > index 9ed5bfb248d0..c6f52c892434 100644 > --- a/drivers/gpu/drm/xen/xen_drm_front.h > +++ b/drivers/gpu/drm/xen/xen_drm_front.h > @@ -34,6 +34,80 @@ > > struct xen_drm_front_drm_pipeline; > > +/* > + ******************************************************************************* > + * Para-virtualized DRM/KMS frontend driver > + ******************************************************************************* > + * This frontend driver implements Xen para-virtualized display > + * according to the display protocol described at > + * include/xen/interface/io/displif.h > + * > + ******************************************************************************* > + * Driver modes of operation in terms of display buffers used > + ******************************************************************************* > + * Depending on the requirements for the para-virtualized environment, namely > + * requirements dictated by the accompanying DRM/(v)GPU drivers running in both > + * host and guest environments, number of operating modes of para-virtualized > + * display driver are supported: > + * - display buffers can be allocated by either frontend driver or 
backend > + * - display buffers can be allocated to be contiguous in memory or not > + * > + * Note! Frontend driver itself has no dependency on contiguous memory for > + * its operation. > + * > + ******************************************************************************* > + * 1. Buffers allocated by the frontend driver. > + ******************************************************************************* > + * > + * The below modes of operation are configured at compile-time via > + * frontend driver's kernel configuration. > + * > + * 1.1. Front driver configured to use GEM CMA helpers > + * This use-case is useful when used with accompanying DRM/vGPU driver in > + * guest domain which was designed to only work with contiguous buffers, > + * e.g. DRM driver based on GEM CMA helpers: such drivers can only import > + * contiguous PRIME buffers, thus requiring frontend driver to provide > + * such. In order to implement this mode of operation para-virtualized > + * frontend driver can be configured to use GEM CMA helpers. > + * > + * 1.2. Front driver doesn't use GEM CMA > + * If accompanying drivers can cope with non-contiguous memory then, to > + * lower pressure on CMA subsystem of the kernel, driver can allocate > + * buffers from system memory. > + * > + * Note! If used with accompanying DRM/(v)GPU drivers this mode of operation > + * may require IOMMU support on the platform, so accompanying DRM/vGPU > + * hardware can still reach display buffer memory while importing PRIME > + * buffers from the frontend driver. > + * > + ******************************************************************************* > + * 2. Buffers allocated by the backend > + ******************************************************************************* > + * > + * This mode of operation is run-time configured via guest domain configuration > + * through XenStore entries. 
> + *
> + * For systems which do not provide IOMMU support, but having specific
> + * requirements for display buffers it is possible to allocate such buffers
> + * at backend side and share those with the frontend.
> + * For example, if host domain is 1:1 mapped and has DRM/GPU hardware expecting
> + * physically contiguous memory, this allows implementing zero-copying
> + * use-cases.
> + *
> + *******************************************************************************
> + * Driver limitations
> + *******************************************************************************
> + * 1. Configuration options 1.1 (contiguous display buffers) and 2 (backend
> + * allocated buffers) are not supported at the same time.
> + *
> + * 2. Only primary plane without additional properties is supported.
> + *
> + * 3. Only one video mode supported which is configured via XenStore.
> + *
> + * 4. All CRTCs operate at fixed frequency of 60Hz.
> + *
> + ******************************************************************************/

Since you've typed this all up, pls convert it to kernel-doc and pull it
into a xen-front.rst driver section in Documentation/gpu/ There's a few
examples for i915 and vc4 already.
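For reference, such a conversion usually boils down to a kernel-doc DOC:
block in the header plus an include directive in the rst file; a minimal
sketch (the section title is only a placeholder, and the text is taken
from the comment above):

```
/**
 * DOC: Driver modes of operation in terms of display buffers used
 *
 * Depending on the requirements for the para-virtualized environment,
 * namely requirements dictated by the accompanying DRM/(v)GPU drivers
 * running in both host and guest environments, a number of operating
 * modes of the para-virtualized display driver are supported:
 *
 * - display buffers can be allocated by either frontend driver or backend
 * - display buffers can be allocated to be contiguous in memory or not
 */
```

A hypothetical Documentation/gpu/xen-front.rst would then pull this in
with a `.. kernel-doc:: drivers/gpu/drm/xen/xen_drm_front.h` directive
using `:doc: Driver modes of operation in terms of display buffers used`,
the same way drivers/gpu/drm/i915 and vc4 are documented.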
> +
>  struct xen_drm_front_ops {
>  	int (*mode_set)(struct xen_drm_front_drm_pipeline *pipeline,
>  		uint32_t x, uint32_t y, uint32_t width, uint32_t height,
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_drv.c b/drivers/gpu/drm/xen/xen_drm_front_drv.c
> index e8862d26ba27..35e7e9cda9d1 100644
> --- a/drivers/gpu/drm/xen/xen_drm_front_drv.c
> +++ b/drivers/gpu/drm/xen/xen_drm_front_drv.c
> @@ -23,12 +23,58 @@
>  #include "xen_drm_front.h"
>  #include "xen_drm_front_cfg.h"
>  #include "xen_drm_front_drv.h"
> +#include "xen_drm_front_gem.h"
>  #include "xen_drm_front_kms.h"
>
>  static int dumb_create(struct drm_file *filp,
>  		struct drm_device *dev, struct drm_mode_create_dumb *args)
>  {
> -	return -EINVAL;
> +	struct xen_drm_front_drm_info *drm_info = dev->dev_private;
> +	struct drm_gem_object *obj;
> +	int ret;
> +
> +	ret = drm_info->gem_ops->dumb_create(filp, dev, args);
> +	if (ret)
> +		goto fail;
> +
> +	obj = drm_gem_object_lookup(filp, args->handle);
> +	if (!obj) {
> +		ret = -ENOENT;
> +		goto fail_destroy;
> +	}
> +
> +	drm_gem_object_unreference_unlocked(obj);
> +
> +	/*
> +	 * In case of CONFIG_DRM_XEN_FRONTEND_CMA gem_obj is constructed
> +	 * via DRM CMA helpers and doesn't have ->pages allocated
> +	 * (xendrm_gem_get_pages will return NULL), but instead can provide
> +	 * sg table
> +	 */

My recommendation is to use an sg table for everything if you deal with
mixed objects (CMA, special blocks 1:1 mapped from host, normal pages).
That avoids the constant get_pages vs. get_sgt differences. For examples
see how e.g. i915 handles the various gem object backends.
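An sg-table-first variant could look roughly like the sketch below. This
is pseudocode against the patch's own names (dbuf_create_from_sgt,
prime_get_sg_table, xen_drm_front_dbuf_to_cookie all exist in the series);
the point is that every GEM backend exports exactly one representation, so
the pages/sgt branching in dumb_create disappears:

```
static int dumb_create(struct drm_file *filp, struct drm_device *dev,
		struct drm_mode_create_dumb *args)
{
	struct xen_drm_front_drm_info *drm_info = dev->dev_private;
	struct drm_gem_object *obj;
	struct sg_table *sgt;
	int ret;

	ret = drm_info->gem_ops->dumb_create(filp, dev, args);
	if (ret)
		goto fail;

	obj = drm_gem_object_lookup(filp, args->handle);
	if (!obj) {
		ret = -ENOENT;
		goto fail_destroy;
	}
	drm_gem_object_unreference_unlocked(obj);

	/*
	 * Every backend hands out an sg table: the CMA helper already
	 * does, and the page-backed path can build one with
	 * drm_prime_pages_to_sg(), as gem_get_sg_table() does below.
	 */
	sgt = drm_info->gem_ops->prime_get_sg_table(obj);
	if (!sgt) {
		ret = -ENOMEM;
		goto fail_destroy;
	}

	/* single backend entry point, no from_pages/from_sgt split */
	ret = drm_info->front_ops->dbuf_create_from_sgt(
			drm_info->front_info,
			xen_drm_front_dbuf_to_cookie(obj),
			args->width, args->height, args->bpp,
			args->size, sgt);
	if (ret)
		goto fail_destroy;

	return 0;

fail_destroy:
	drm_gem_dumb_destroy(filp, dev, args->handle);
fail:
	DRM_ERROR("Failed to create dumb buffer: %d\n", ret);
	return ret;
}
```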
> +	if (drm_info->gem_ops->get_pages(obj))
> +		ret = drm_info->front_ops->dbuf_create_from_pages(
> +				drm_info->front_info,
> +				xen_drm_front_dbuf_to_cookie(obj),
> +				args->width, args->height, args->bpp,
> +				args->size,
> +				drm_info->gem_ops->get_pages(obj));
> +	else
> +		ret = drm_info->front_ops->dbuf_create_from_sgt(
> +				drm_info->front_info,
> +				xen_drm_front_dbuf_to_cookie(obj),
> +				args->width, args->height, args->bpp,
> +				args->size,
> +				drm_info->gem_ops->prime_get_sg_table(obj));
> +	if (ret)
> +		goto fail_destroy;
> +
> +	return 0;
> +
> +fail_destroy:
> +	drm_gem_dumb_destroy(filp, dev, args->handle);
> +fail:
> +	DRM_ERROR("Failed to create dumb buffer: %d\n", ret);
> +	return ret;
>  }
>
>  static void free_object(struct drm_gem_object *obj)
> @@ -37,6 +83,7 @@ static void free_object(struct drm_gem_object *obj)
>
>  	drm_info->front_ops->dbuf_destroy(drm_info->front_info,
>  			xen_drm_front_dbuf_to_cookie(obj));
> +	drm_info->gem_ops->free_object_unlocked(obj);
>  }
>
>  static void on_frame_done(struct platform_device *pdev,
> @@ -60,32 +107,52 @@ static void lastclose(struct drm_device *dev)
>
>  static int gem_mmap(struct file *filp, struct vm_area_struct *vma)
>  {
> -	return -EINVAL;
> +	struct drm_file *file_priv = filp->private_data;
> +	struct drm_device *dev = file_priv->minor->dev;
> +	struct xen_drm_front_drm_info *drm_info = dev->dev_private;
> +
> +	return drm_info->gem_ops->mmap(filp, vma);

Uh, so 1 midlayer for the kms stuff and another midlayer for the gem
stuff. That's way too much indirection.
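One way to drop the gem_ops vtable, sketched under the assumption that the
CMA/non-CMA split stays compile-time: since the Makefile only ever builds
one of xen_drm_front_gem.c or xen_drm_front_gem_cma.c, both files can
provide the same externally visible symbols, declared once in the header,
and the drm_driver table references them directly (the
xen_drm_front_gem_* names here are illustrative, not from the patch):

```
/* xen_drm_front_gem.h: one set of prototypes, two alternative
 * implementations; the Makefile picks exactly one object file */
void xen_drm_front_gem_free_object(struct drm_gem_object *obj);
struct sg_table *xen_drm_front_gem_get_sg_table(struct drm_gem_object *obj);
struct drm_gem_object *xen_drm_front_gem_import_sg_table(
		struct drm_device *dev, struct dma_buf_attachment *attach,
		struct sg_table *sgt);
int xen_drm_front_gem_dumb_create(struct drm_file *filp,
		struct drm_device *dev, struct drm_mode_create_dumb *args);
int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma);

/* xen_drm_front_drv.c: direct calls, no drm_info->gem_ops lookup */
static struct drm_driver xen_drm_driver = {
	.gem_free_object_unlocked  = xen_drm_front_gem_free_object,
	.gem_prime_get_sg_table    = xen_drm_front_gem_get_sg_table,
	.gem_prime_import_sg_table = xen_drm_front_gem_import_sg_table,
	.dumb_create               = xen_drm_front_gem_dumb_create,
	/* ... */
};
```

With this shape the function-pointer hop per GEM operation goes away and
each backend file stands alone, at the cost of both files having to keep
the shared prototypes in sync.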
> } > > static struct sg_table *prime_get_sg_table(struct drm_gem_object *obj) > { > - return NULL; > + struct xen_drm_front_drm_info *drm_info; > + > + drm_info = obj->dev->dev_private; > + return drm_info->gem_ops->prime_get_sg_table(obj); > } > > static struct drm_gem_object *prime_import_sg_table(struct drm_device *dev, > struct dma_buf_attachment *attach, struct sg_table *sgt) > { > - return NULL; > + struct xen_drm_front_drm_info *drm_info; > + > + drm_info = dev->dev_private; > + return drm_info->gem_ops->prime_import_sg_table(dev, attach, sgt); > } > > static void *prime_vmap(struct drm_gem_object *obj) > { > - return NULL; > + struct xen_drm_front_drm_info *drm_info; > + > + drm_info = obj->dev->dev_private; > + return drm_info->gem_ops->prime_vmap(obj); > } > > static void prime_vunmap(struct drm_gem_object *obj, void *vaddr) > { > + struct xen_drm_front_drm_info *drm_info; > + > + drm_info = obj->dev->dev_private; > + drm_info->gem_ops->prime_vunmap(obj, vaddr); > } > > static int prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma) > { > - return -EINVAL; > + struct xen_drm_front_drm_info *drm_info; > + > + drm_info = obj->dev->dev_private; > + return drm_info->gem_ops->prime_mmap(obj, vma); > } > > static const struct file_operations xendrm_fops = { > @@ -147,6 +214,7 @@ int xen_drm_front_drv_probe(struct platform_device *pdev, > > drm_info->front_ops = front_ops; > drm_info->front_ops->on_frame_done = on_frame_done; > + drm_info->gem_ops = xen_drm_front_gem_get_ops(); > drm_info->front_info = cfg->front_info; > > dev = drm_dev_alloc(&xen_drm_driver, &pdev->dev); > diff --git a/drivers/gpu/drm/xen/xen_drm_front_drv.h b/drivers/gpu/drm/xen/xen_drm_front_drv.h > index 563318b19f34..34228eb86255 100644 > --- a/drivers/gpu/drm/xen/xen_drm_front_drv.h > +++ b/drivers/gpu/drm/xen/xen_drm_front_drv.h > @@ -43,6 +43,7 @@ struct xen_drm_front_drm_pipeline { > struct xen_drm_front_drm_info { > struct xen_drm_front_info *front_info; > struct 
xen_drm_front_ops *front_ops; > + const struct xen_drm_front_gem_ops *gem_ops; > struct drm_device *drm_dev; > struct xen_drm_front_cfg *cfg; > > diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c > new file mode 100644 > index 000000000000..367e08f6a9ef > --- /dev/null > +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c > @@ -0,0 +1,360 @@ > +/* > + * Xen para-virtual DRM device > + * > + * This program is free software; you can redistribute it and/or modify > + * it under the terms of the GNU General Public License as published by > + * the Free Software Foundation; either version 2 of the License, or > + * (at your option) any later version. > + * > + * This program is distributed in the hope that it will be useful, > + * but WITHOUT ANY WARRANTY; without even the implied warranty of > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the > + * GNU General Public License for more details. > + * > + * Copyright (C) 2016-2018 EPAM Systems Inc. 
> + * > + * Author: Oleksandr Andrushchenko > + */ > + > +#include "xen_drm_front_gem.h" > + > +#include > +#include > +#include > +#include > + > +#include > +#include > +#include > + > +#include > + > +#include "xen_drm_front.h" > +#include "xen_drm_front_drv.h" > +#include "xen_drm_front_shbuf.h" > + > +struct xen_gem_object { > + struct drm_gem_object base; > + > + size_t num_pages; > + struct page **pages; > + > + /* set for buffers allocated by the backend */ > + bool be_alloc; > + > + /* this is for imported PRIME buffer */ > + struct sg_table *sgt_imported; > +}; > + > +static inline struct xen_gem_object *to_xen_gem_obj( > + struct drm_gem_object *gem_obj) > +{ > + return container_of(gem_obj, struct xen_gem_object, base); > +} > + > +static int gem_alloc_pages_array(struct xen_gem_object *xen_obj, > + size_t buf_size) > +{ > + xen_obj->num_pages = DIV_ROUND_UP(buf_size, PAGE_SIZE); > + xen_obj->pages = kvmalloc_array(xen_obj->num_pages, > + sizeof(struct page *), GFP_KERNEL); > + return xen_obj->pages == NULL ? 
-ENOMEM : 0; > +} > + > +static void gem_free_pages_array(struct xen_gem_object *xen_obj) > +{ > + kvfree(xen_obj->pages); > + xen_obj->pages = NULL; > +} > + > +static struct xen_gem_object *gem_create_obj(struct drm_device *dev, > + size_t size) > +{ > + struct xen_gem_object *xen_obj; > + int ret; > + > + xen_obj = kzalloc(sizeof(*xen_obj), GFP_KERNEL); > + if (!xen_obj) > + return ERR_PTR(-ENOMEM); > + > + ret = drm_gem_object_init(dev, &xen_obj->base, size); > + if (ret < 0) { > + kfree(xen_obj); > + return ERR_PTR(ret); > + } > + > + return xen_obj; > +} > + > +static struct xen_gem_object *gem_create(struct drm_device *dev, size_t size) > +{ > + struct xen_drm_front_drm_info *drm_info = dev->dev_private; > + struct xen_gem_object *xen_obj; > + int ret; > + > + size = round_up(size, PAGE_SIZE); > + xen_obj = gem_create_obj(dev, size); > + if (IS_ERR_OR_NULL(xen_obj)) > + return xen_obj; > + > + if (drm_info->cfg->be_alloc) { > + /* > + * backend will allocate space for this buffer, so > + * only allocate array of pointers to pages > + */ > + xen_obj->be_alloc = true; > + ret = gem_alloc_pages_array(xen_obj, size); > + if (ret < 0) { > + gem_free_pages_array(xen_obj); > + goto fail; > + } > + > + ret = alloc_xenballooned_pages(xen_obj->num_pages, > + xen_obj->pages); > + if (ret < 0) { > + DRM_ERROR("Cannot allocate %zu ballooned pages: %d\n", > + xen_obj->num_pages, ret); > + goto fail; > + } > + > + return xen_obj; > + } > + /* > + * need to allocate backing pages now, so we can share those > + * with the backend > + */ > + xen_obj->num_pages = DIV_ROUND_UP(size, PAGE_SIZE); > + xen_obj->pages = drm_gem_get_pages(&xen_obj->base); > + if (IS_ERR_OR_NULL(xen_obj->pages)) { > + ret = PTR_ERR(xen_obj->pages); > + xen_obj->pages = NULL; > + goto fail; > + } > + > + return xen_obj; > + > +fail: > + DRM_ERROR("Failed to allocate buffer with size %zu\n", size); > + return ERR_PTR(ret); > +} > + > +static struct xen_gem_object *gem_create_with_handle(struct drm_file 
*filp, > + struct drm_device *dev, size_t size, uint32_t *handle) > +{ > + struct xen_gem_object *xen_obj; > + struct drm_gem_object *gem_obj; > + int ret; > + > + xen_obj = gem_create(dev, size); > + if (IS_ERR_OR_NULL(xen_obj)) > + return xen_obj; > + > + gem_obj = &xen_obj->base; > + ret = drm_gem_handle_create(filp, gem_obj, handle); > + /* handle holds the reference */ > + drm_gem_object_unreference_unlocked(gem_obj); > + if (ret < 0) > + return ERR_PTR(ret); > + > + return xen_obj; > +} > + > +static int gem_dumb_create(struct drm_file *filp, struct drm_device *dev, > + struct drm_mode_create_dumb *args) > +{ > + struct xen_gem_object *xen_obj; > + > + args->pitch = DIV_ROUND_UP(args->width * args->bpp, 8); > + args->size = args->pitch * args->height; > + > + xen_obj = gem_create_with_handle(filp, dev, args->size, &args->handle); > + if (IS_ERR_OR_NULL(xen_obj)) > + return xen_obj == NULL ? -ENOMEM : PTR_ERR(xen_obj); > + > + return 0; > +} > + > +static void gem_free_object(struct drm_gem_object *gem_obj) > +{ > + struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj); > + > + if (xen_obj->base.import_attach) { > + drm_prime_gem_destroy(&xen_obj->base, xen_obj->sgt_imported); > + gem_free_pages_array(xen_obj); > + } else { > + if (xen_obj->pages) { > + if (xen_obj->be_alloc) { > + free_xenballooned_pages(xen_obj->num_pages, > + xen_obj->pages); > + gem_free_pages_array(xen_obj); > + } else > + drm_gem_put_pages(&xen_obj->base, > + xen_obj->pages, true, false); > + } > + } > + drm_gem_object_release(gem_obj); > + kfree(xen_obj); > +} > + > +static struct page **gem_get_pages(struct drm_gem_object *gem_obj) > +{ > + struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj); > + > + return xen_obj->pages; > +} > + > +static struct sg_table *gem_get_sg_table(struct drm_gem_object *gem_obj) > +{ > + struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj); > + > + if (!xen_obj->pages) > + return NULL; > + > + return drm_prime_pages_to_sg(xen_obj->pages, 
xen_obj->num_pages); > +} > + > +static struct drm_gem_object *gem_import_sg_table(struct drm_device *dev, > + struct dma_buf_attachment *attach, struct sg_table *sgt) > +{ > + struct xen_drm_front_drm_info *drm_info = dev->dev_private; > + struct xen_gem_object *xen_obj; > + size_t size; > + int ret; > + > + size = attach->dmabuf->size; > + xen_obj = gem_create_obj(dev, size); > + if (IS_ERR_OR_NULL(xen_obj)) > + return ERR_CAST(xen_obj); > + > + ret = gem_alloc_pages_array(xen_obj, size); > + if (ret < 0) > + return ERR_PTR(ret); > + > + xen_obj->sgt_imported = sgt; > + > + ret = drm_prime_sg_to_page_addr_arrays(sgt, xen_obj->pages, > + NULL, xen_obj->num_pages); > + if (ret < 0) > + return ERR_PTR(ret); > + > + /* > + * N.B. Although we have an API to create display buffer from sgt > + * we use pages API, because we still need those for GEM handling, > + * e.g. for mapping etc. > + */ > + ret = drm_info->front_ops->dbuf_create_from_pages( > + drm_info->front_info, > + xen_drm_front_dbuf_to_cookie(&xen_obj->base), > + 0, 0, 0, size, xen_obj->pages); > + if (ret < 0) > + return ERR_PTR(ret); > + > + DRM_DEBUG("Imported buffer of size %zu with nents %u\n", > + size, sgt->nents); > + > + return &xen_obj->base; > +} > + > +static int gem_mmap_obj(struct xen_gem_object *xen_obj, > + struct vm_area_struct *vma) > +{ > + unsigned long addr = vma->vm_start; > + int i; > + > + /* > + * clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the > + * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map > + * the whole buffer. > + */ > + vma->vm_flags &= ~VM_PFNMAP; > + vma->vm_flags |= VM_MIXEDMAP; > + vma->vm_pgoff = 0; > + vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags)); > + > + /* > + * vm_operations_struct.fault handler will be called if CPU access > + * to VM is here. For GPUs this isn't the case, because CPU > + * doesn't touch the memory. Insert pages now, so both CPU and GPU are > + * happy. 
> + * FIXME: as we insert all the pages now then no .fault handler must > + * be called, so don't provide one > + */ > + for (i = 0; i < xen_obj->num_pages; i++) { > + int ret; > + > + ret = vm_insert_page(vma, addr, xen_obj->pages[i]); > + if (ret < 0) { > + DRM_ERROR("Failed to insert pages into vma: %d\n", ret); > + return ret; > + } > + > + addr += PAGE_SIZE; > + } > + return 0; > +} > + > +static int gem_mmap(struct file *filp, struct vm_area_struct *vma) > +{ > + struct xen_gem_object *xen_obj; > + struct drm_gem_object *gem_obj; > + int ret; > + > + ret = drm_gem_mmap(filp, vma); > + if (ret < 0) > + return ret; > + > + gem_obj = vma->vm_private_data; > + xen_obj = to_xen_gem_obj(gem_obj); > + return gem_mmap_obj(xen_obj, vma); > +} > + > +static void *gem_prime_vmap(struct drm_gem_object *gem_obj) > +{ > + struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj); > + > + if (!xen_obj->pages) > + return NULL; > + > + return vmap(xen_obj->pages, xen_obj->num_pages, > + VM_MAP, pgprot_writecombine(PAGE_KERNEL)); > +} > + > +static void gem_prime_vunmap(struct drm_gem_object *gem_obj, void *vaddr) > +{ > + vunmap(vaddr); > +} > + > +static int gem_prime_mmap(struct drm_gem_object *gem_obj, > + struct vm_area_struct *vma) > +{ > + struct xen_gem_object *xen_obj; > + int ret; > + > + ret = drm_gem_mmap_obj(gem_obj, gem_obj->size, vma); > + if (ret < 0) > + return ret; > + > + xen_obj = to_xen_gem_obj(gem_obj); > + return gem_mmap_obj(xen_obj, vma); > +} > + > +static const struct xen_drm_front_gem_ops xen_drm_gem_ops = { > + .free_object_unlocked = gem_free_object, > + .prime_get_sg_table = gem_get_sg_table, > + .prime_import_sg_table = gem_import_sg_table, > + > + .prime_vmap = gem_prime_vmap, > + .prime_vunmap = gem_prime_vunmap, > + .prime_mmap = gem_prime_mmap, > + > + .dumb_create = gem_dumb_create, > + > + .mmap = gem_mmap, > + > + .get_pages = gem_get_pages, > +}; > + > +const struct xen_drm_front_gem_ops *xen_drm_front_gem_get_ops(void) > +{ > + return 
&xen_drm_gem_ops; > +} > diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.h b/drivers/gpu/drm/xen/xen_drm_front_gem.h > new file mode 100644 > index 000000000000..d1e1711cc3fc > --- /dev/null > +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.h > @@ -0,0 +1,46 @@ > +/* > + * Xen para-virtual DRM device > + * > + * This program is free software; you can redistribute it and/or modify > + * it under the terms of the GNU General Public License as published by > + * the Free Software Foundation; either version 2 of the License, or > + * (at your option) any later version. > + * > + * This program is distributed in the hope that it will be useful, > + * but WITHOUT ANY WARRANTY; without even the implied warranty of > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the > + * GNU General Public License for more details. > + * > + * Copyright (C) 2016-2018 EPAM Systems Inc. > + * > + * Author: Oleksandr Andrushchenko > + */ > + > +#ifndef __XEN_DRM_FRONT_GEM_H > +#define __XEN_DRM_FRONT_GEM_H > + > +#include > + > +struct xen_drm_front_gem_ops { > + void (*free_object_unlocked)(struct drm_gem_object *obj); > + > + struct sg_table *(*prime_get_sg_table)(struct drm_gem_object *obj); > + struct drm_gem_object *(*prime_import_sg_table)(struct drm_device *dev, > + struct dma_buf_attachment *attach, > + struct sg_table *sgt); > + void *(*prime_vmap)(struct drm_gem_object *obj); > + void (*prime_vunmap)(struct drm_gem_object *obj, void *vaddr); > + int (*prime_mmap)(struct drm_gem_object *obj, > + struct vm_area_struct *vma); > + > + int (*dumb_create)(struct drm_file *file_priv, struct drm_device *dev, > + struct drm_mode_create_dumb *args); > + > + int (*mmap)(struct file *filp, struct vm_area_struct *vma); > + > + struct page **(*get_pages)(struct drm_gem_object *obj); > +}; > + > +const struct xen_drm_front_gem_ops *xen_drm_front_gem_get_ops(void); > + > +#endif /* __XEN_DRM_FRONT_GEM_H */ > diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem_cma.c 
b/drivers/gpu/drm/xen/xen_drm_front_gem_cma.c > new file mode 100644 > index 000000000000..5ffcbfa652d5 > --- /dev/null > +++ b/drivers/gpu/drm/xen/xen_drm_front_gem_cma.c > @@ -0,0 +1,93 @@ > +/* > + * Xen para-virtual DRM device > + * > + * This program is free software; you can redistribute it and/or modify > + * it under the terms of the GNU General Public License as published by > + * the Free Software Foundation; either version 2 of the License, or > + * (at your option) any later version. > + * > + * This program is distributed in the hope that it will be useful, > + * but WITHOUT ANY WARRANTY; without even the implied warranty of > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the > + * GNU General Public License for more details. > + * > + * Copyright (C) 2016-2018 EPAM Systems Inc. > + * > + * Author: Oleksandr Andrushchenko > + */ > + > +#include > +#include > +#include > +#include > + > +#include "xen_drm_front.h" > +#include "xen_drm_front_drv.h" > +#include "xen_drm_front_gem.h" > + > +static struct drm_gem_object *gem_import_sg_table(struct drm_device *dev, > + struct dma_buf_attachment *attach, struct sg_table *sgt) > +{ > + struct xen_drm_front_drm_info *drm_info = dev->dev_private; > + struct drm_gem_object *gem_obj; > + struct drm_gem_cma_object *cma_obj; > + int ret; > + > + gem_obj = drm_gem_cma_prime_import_sg_table(dev, attach, sgt); > + if (IS_ERR_OR_NULL(gem_obj)) > + return gem_obj; > + > + cma_obj = to_drm_gem_cma_obj(gem_obj); > + > + ret = drm_info->front_ops->dbuf_create_from_sgt( > + drm_info->front_info, > + xen_drm_front_dbuf_to_cookie(gem_obj), > + 0, 0, 0, gem_obj->size, > + drm_gem_cma_prime_get_sg_table(gem_obj)); > + if (ret < 0) > + return ERR_PTR(ret); > + > + DRM_DEBUG("Imported CMA buffer of size %zu\n", gem_obj->size); > + > + return gem_obj; > +} > + > +static int gem_dumb_create(struct drm_file *filp, struct drm_device *dev, > + struct drm_mode_create_dumb *args) > +{ > + struct xen_drm_front_drm_info 
*drm_info = dev->dev_private;
> +
> +	if (drm_info->cfg->be_alloc) {
> +		/* This use-case is not yet supported and probably won't be */
> +		DRM_ERROR("Backend allocated buffers and CMA helpers are not supported at the same time\n");
> +		return -EINVAL;
> +	}
> +
> +	return drm_gem_cma_dumb_create(filp, dev, args);
> +}
> +
> +static struct page **gem_get_pages(struct drm_gem_object *gem_obj)
> +{
> +	return NULL;
> +}
> +
> +static const struct xen_drm_front_gem_ops xen_drm_front_gem_cma_ops = {
> +	.free_object_unlocked = drm_gem_cma_free_object,
> +	.prime_get_sg_table = drm_gem_cma_prime_get_sg_table,
> +	.prime_import_sg_table = gem_import_sg_table,
> +
> +	.prime_vmap = drm_gem_cma_prime_vmap,
> +	.prime_vunmap = drm_gem_cma_prime_vunmap,
> +	.prime_mmap = drm_gem_cma_prime_mmap,
> +
> +	.dumb_create = gem_dumb_create,
> +
> +	.mmap = drm_gem_cma_mmap,
> +
> +	.get_pages = gem_get_pages,
> +};

Again quite a midlayer you have here. Please inline this to avoid
confusion for other people (since it looks like you only have 1
implementation).

> +
> +const struct xen_drm_front_gem_ops *xen_drm_front_gem_get_ops(void)
> +{
> +	return &xen_drm_front_gem_cma_ops;
> +}
> --
> 2.7.4
>
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch