From: Oleksandr Andrushchenko
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	dri-devel@lists.freedesktop.org, airlied@linux.ie,
	daniel.vetter@intel.com, seanpaul@chromium.org, gustavo@padovan.org,
	jgross@suse.com, boris.ostrovsky@oracle.com, konrad.wilk@oracle.com
Cc: andr2000@gmail.com, Oleksandr Andrushchenko
Subject: [PATCH 4/9] drm/xen-front: Implement Xen event channel handling
Date: Wed, 21 Feb 2018 10:03:37 +0200
Message-Id: <1519200222-20623-5-git-send-email-andr2000@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1519200222-20623-1-git-send-email-andr2000@gmail.com>
References: <1519200222-20623-1-git-send-email-andr2000@gmail.com>

From: Oleksandr Andrushchenko

Handle Xen event channels:
- create one for each configured connector and publish the corresponding
  ring references and event channels in Xen store, so the backend can
  connect
- implement the event channel interrupt handlers
- create and destroy the event channels in accordance with the Xen bus
  state

Signed-off-by: Oleksandr Andrushchenko
---
 drivers/gpu/drm/xen/Makefile                |   1 +
 drivers/gpu/drm/xen/xen_drm_front.c         |  16 +-
 drivers/gpu/drm/xen/xen_drm_front.h         |  22 ++
 drivers/gpu/drm/xen/xen_drm_front_evtchnl.c | 399 ++++++++++++++++++++++++++++
 drivers/gpu/drm/xen/xen_drm_front_evtchnl.h |  89 +++++++
 5 files changed, 526 insertions(+), 1 deletion(-)
 create mode 100644 drivers/gpu/drm/xen/xen_drm_front_evtchnl.c
 create mode 100644 drivers/gpu/drm/xen/xen_drm_front_evtchnl.h

diff --git a/drivers/gpu/drm/xen/Makefile b/drivers/gpu/drm/xen/Makefile
index 0a2eae757f0c..4ce7756b8437 100644
--- a/drivers/gpu/drm/xen/Makefile
+++ b/drivers/gpu/drm/xen/Makefile
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0

 drm_xen_front-objs := xen_drm_front.o \
+		      xen_drm_front_evtchnl.o \
		      xen_drm_front_cfg.o

 obj-$(CONFIG_DRM_XEN_FRONTEND) += drm_xen_front.o
diff --git a/drivers/gpu/drm/xen/xen_drm_front.c b/drivers/gpu/drm/xen/xen_drm_front.c
index 0a90c474c7ce..b558e0ae3b33 100644
--- a/drivers/gpu/drm/xen/xen_drm_front.c
+++ b/drivers/gpu/drm/xen/xen_drm_front.c
@@ -25,9 +25,15 @@
 #include

 #include "xen_drm_front.h"
+#include "xen_drm_front_evtchnl.h"
+
+static struct xen_drm_front_ops front_ops = {
+	/* placeholder for now */
+};

 static void xen_drv_remove_internal(struct xen_drm_front_info *front_info)
 {
+	xen_drm_front_evtchnl_free_all(front_info);
 }

 static int backend_on_initwait(struct xen_drm_front_info *front_info)
@@ -41,16 +47,23 @@ static int backend_on_initwait(struct xen_drm_front_info *front_info)
 		return ret;

 	DRM_INFO("Have %d conector(s)\n", cfg->num_connectors);
-	return 0;
+	/* Create event channels for all connectors and publish */
+	ret = xen_drm_front_evtchnl_create_all(front_info, &front_ops);
+	if (ret < 0)
+		return ret;
+
+	return xen_drm_front_evtchnl_publish_all(front_info);
 }

 static int backend_on_connected(struct xen_drm_front_info *front_info)
 {
+	xen_drm_front_evtchnl_set_state(front_info, EVTCHNL_STATE_CONNECTED);
 	return 0;
 }

 static void backend_on_disconnected(struct xen_drm_front_info *front_info)
 {
+	xen_drm_front_evtchnl_set_state(front_info,
+			EVTCHNL_STATE_DISCONNECTED);
 	xenbus_switch_state(front_info->xb_dev, XenbusStateInitialising);
 }

@@ -133,6 +146,7 @@ static int xen_drv_probe(struct xenbus_device *xb_dev,
 	}

 	front_info->xb_dev = xb_dev;
+	spin_lock_init(&front_info->io_lock);
 	dev_set_drvdata(&xb_dev->dev, front_info);
 	return xenbus_switch_state(xb_dev, XenbusStateInitialising);
 }
diff --git a/drivers/gpu/drm/xen/xen_drm_front.h b/drivers/gpu/drm/xen/xen_drm_front.h
index 62b0d4e3e12b..13f22736ae02 100644
--- a/drivers/gpu/drm/xen/xen_drm_front.h
+++ b/drivers/gpu/drm/xen/xen_drm_front.h
@@ -21,8 +21,30 @@

 #include "xen_drm_front_cfg.h"

+#ifndef GRANT_INVALID_REF
+/*
+ * Note on usage of grant reference 0 as invalid grant reference:
+ * grant reference 0 is valid, but never exposed to a PV driver,
+ * because of the fact it is already in use/reserved by the PV console.
+ */
+#define GRANT_INVALID_REF	0
+#endif
+
+struct xen_drm_front_ops {
+	/* CAUTION! this is called with a spin_lock held! */
+	void (*on_frame_done)(struct platform_device *pdev,
+			int conn_idx, uint64_t fb_cookie);
+};
+
 struct xen_drm_front_info {
 	struct xenbus_device *xb_dev;
+	/* to protect data between backend IO code and interrupt handler */
+	spinlock_t io_lock;
+	/* virtual DRM platform device */
+	struct platform_device *drm_pdev;
+
+	int num_evt_pairs;
+	struct xen_drm_front_evtchnl_pair *evt_pairs;

 	struct xen_drm_front_cfg cfg;
 };
diff --git a/drivers/gpu/drm/xen/xen_drm_front_evtchnl.c b/drivers/gpu/drm/xen/xen_drm_front_evtchnl.c
new file mode 100644
index 000000000000..697a0e4dcaed
--- /dev/null
+++ b/drivers/gpu/drm/xen/xen_drm_front_evtchnl.c
@@ -0,0 +1,399 @@
+/*
+ * Xen para-virtual DRM device
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Copyright (C) 2016-2018 EPAM Systems Inc.
+ *
+ * Author: Oleksandr Andrushchenko
+ */
+
+#include
+
+#include
+#include
+
+#include
+#include
+#include
+
+#include "xen_drm_front.h"
+#include "xen_drm_front_evtchnl.h"
+
+static irqreturn_t evtchnl_interrupt_ctrl(int irq, void *dev_id)
+{
+	struct xen_drm_front_evtchnl *evtchnl = dev_id;
+	struct xen_drm_front_info *front_info = evtchnl->front_info;
+	struct xendispl_resp *resp;
+	RING_IDX i, rp;
+	unsigned long flags;
+
+	spin_lock_irqsave(&front_info->io_lock, flags);
+
+	if (unlikely(evtchnl->state != EVTCHNL_STATE_CONNECTED))
+		goto out;
+
+again:
+	rp = evtchnl->u.req.ring.sring->rsp_prod;
+	/* ensure we see queued responses up to rp */
+	virt_rmb();
+
+	for (i = evtchnl->u.req.ring.rsp_cons; i != rp; i++) {
+		resp = RING_GET_RESPONSE(&evtchnl->u.req.ring, i);
+		if (unlikely(resp->id != evtchnl->evt_id))
+			continue;
+
+		switch (resp->operation) {
+		case XENDISPL_OP_PG_FLIP:
+		case XENDISPL_OP_FB_ATTACH:
+		case XENDISPL_OP_FB_DETACH:
+		case XENDISPL_OP_DBUF_CREATE:
+		case XENDISPL_OP_DBUF_DESTROY:
+		case XENDISPL_OP_SET_CONFIG:
+			evtchnl->u.req.resp_status = resp->status;
+			complete(&evtchnl->u.req.completion);
+			break;
+
+		default:
+			DRM_ERROR("Operation %d is not supported\n",
+					resp->operation);
+			break;
+		}
+	}
+
+	evtchnl->u.req.ring.rsp_cons = i;
+
+	if (i != evtchnl->u.req.ring.req_prod_pvt) {
+		int more_to_do;
+
+		RING_FINAL_CHECK_FOR_RESPONSES(&evtchnl->u.req.ring,
+				more_to_do);
+		if (more_to_do)
+			goto again;
+	} else
+		evtchnl->u.req.ring.sring->rsp_event = i + 1;
+
+out:
+	spin_unlock_irqrestore(&front_info->io_lock, flags);
+	return IRQ_HANDLED;
+}
+
+static irqreturn_t evtchnl_interrupt_evt(int irq, void *dev_id)
+{
+	struct xen_drm_front_evtchnl *evtchnl = dev_id;
+	struct xen_drm_front_info *front_info = evtchnl->front_info;
+	struct xendispl_event_page *page = evtchnl->u.evt.page;
+	uint32_t cons, prod;
+	unsigned long flags;
+
+	spin_lock_irqsave(&front_info->io_lock, flags);
+	if (unlikely(evtchnl->state != EVTCHNL_STATE_CONNECTED))
+		goto out;
+
+	prod = page->in_prod;
+	/* ensure we see ring contents up to prod */
+	virt_rmb();
+	if (prod == page->in_cons)
+		goto out;
+
+	for (cons = page->in_cons; cons != prod; cons++) {
+		struct xendispl_evt *event;
+
+		event = &XENDISPL_IN_RING_REF(page, cons);
+		if (unlikely(event->id != evtchnl->evt_id++))
+			continue;
+
+		switch (event->type) {
+		case XENDISPL_EVT_PG_FLIP:
+			evtchnl->u.evt.front_ops->on_frame_done(
+					front_info->drm_pdev, evtchnl->index,
+					event->op.pg_flip.fb_cookie);
+			break;
+		}
+	}
+	page->in_cons = cons;
+	/* ensure ring contents */
+	virt_wmb();
+
+out:
+	spin_unlock_irqrestore(&front_info->io_lock, flags);
+	return IRQ_HANDLED;
+}
+
+static void evtchnl_free(struct xen_drm_front_info *front_info,
+		struct xen_drm_front_evtchnl *evtchnl)
+{
+	unsigned long page = 0;
+
+	if (evtchnl->type == EVTCHNL_TYPE_REQ)
+		page = (unsigned long)evtchnl->u.req.ring.sring;
+	else if (evtchnl->type == EVTCHNL_TYPE_EVT)
+		page = (unsigned long)evtchnl->u.evt.page;
+	if (!page)
+		return;
+
+	evtchnl->state = EVTCHNL_STATE_DISCONNECTED;
+
+	if (evtchnl->type == EVTCHNL_TYPE_REQ) {
+		/* release all who still waits for response if any */
+		evtchnl->u.req.resp_status = -EIO;
+		complete_all(&evtchnl->u.req.completion);
+	}
+
+	if (evtchnl->irq)
+		unbind_from_irqhandler(evtchnl->irq, evtchnl);
+
+	if (evtchnl->port)
+		xenbus_free_evtchn(front_info->xb_dev, evtchnl->port);
+
+	/* end access and free the page */
+	if (evtchnl->gref != GRANT_INVALID_REF)
+		gnttab_end_foreign_access(evtchnl->gref, 0, page);
+
+	if (evtchnl->type == EVTCHNL_TYPE_REQ)
+		evtchnl->u.req.ring.sring = NULL;
+	else
+		evtchnl->u.evt.page = NULL;
+
+	memset(evtchnl, 0,
+	       sizeof(*evtchnl));
+}
+
+static int evtchnl_alloc(struct xen_drm_front_info *front_info, int index,
+		struct xen_drm_front_evtchnl *evtchnl,
+		enum xen_drm_front_evtchnl_type type)
+{
+	struct xenbus_device *xb_dev = front_info->xb_dev;
+	unsigned long page;
+	grant_ref_t gref;
+	irq_handler_t handler;
+	int ret;
+
+	memset(evtchnl, 0, sizeof(*evtchnl));
+	evtchnl->type = type;
+	evtchnl->index = index;
+	evtchnl->front_info = front_info;
+	evtchnl->state = EVTCHNL_STATE_DISCONNECTED;
+	evtchnl->gref = GRANT_INVALID_REF;
+
+	page = get_zeroed_page(GFP_NOIO | __GFP_HIGH);
+	if (!page) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	if (type == EVTCHNL_TYPE_REQ) {
+		struct xen_displif_sring *sring;
+
+		init_completion(&evtchnl->u.req.completion);
+		sring = (struct xen_displif_sring *)page;
+		SHARED_RING_INIT(sring);
+		FRONT_RING_INIT(&evtchnl->u.req.ring, sring, XEN_PAGE_SIZE);
+
+		ret = xenbus_grant_ring(xb_dev, sring, 1, &gref);
+		if (ret < 0)
+			goto fail;
+
+		handler = evtchnl_interrupt_ctrl;
+	} else {
+		evtchnl->u.evt.page = (struct xendispl_event_page *)page;
+
+		ret = gnttab_grant_foreign_access(xb_dev->otherend_id,
+				virt_to_gfn((void *)page), 0);
+		if (ret < 0)
+			goto fail;
+
+		gref = ret;
+		handler = evtchnl_interrupt_evt;
+	}
+	evtchnl->gref = gref;
+
+	ret = xenbus_alloc_evtchn(xb_dev, &evtchnl->port);
+	if (ret < 0)
+		goto fail;
+
+	ret = bind_evtchn_to_irqhandler(evtchnl->port,
+			handler, 0, xb_dev->devicetype, evtchnl);
+	if (ret < 0)
+		goto fail;
+
+	evtchnl->irq = ret;
+	return 0;
+
+fail:
+	DRM_ERROR("Failed to allocate ring: %d\n", ret);
+	return ret;
+}
+
+int xen_drm_front_evtchnl_create_all(struct xen_drm_front_info *front_info,
+		struct xen_drm_front_ops *front_ops)
+{
+	struct xen_drm_front_cfg *cfg;
+	int ret, conn;
+
+	cfg = &front_info->cfg;
+
+	front_info->evt_pairs = devm_kcalloc(&front_info->xb_dev->dev,
+			cfg->num_connectors,
+			sizeof(struct xen_drm_front_evtchnl_pair), GFP_KERNEL);
+	if (!front_info->evt_pairs) {
+		ret = -ENOMEM;
+		goto fail;
+	}
+
+	for (conn = 0; conn < cfg->num_connectors; conn++) {
+		ret = evtchnl_alloc(front_info, conn,
+				&front_info->evt_pairs[conn].req,
+				EVTCHNL_TYPE_REQ);
+		if (ret < 0) {
+			DRM_ERROR("Error allocating control channel\n");
+			goto fail;
+		}
+
+		ret = evtchnl_alloc(front_info, conn,
+				&front_info->evt_pairs[conn].evt,
+				EVTCHNL_TYPE_EVT);
+		if (ret < 0) {
+			DRM_ERROR("Error allocating in-event channel\n");
+			goto fail;
+		}
+
+		front_info->evt_pairs[conn].evt.u.evt.front_ops = front_ops;
+	}
+	front_info->num_evt_pairs = cfg->num_connectors;
+	return 0;
+
+fail:
+	xen_drm_front_evtchnl_free_all(front_info);
+	return ret;
+}
+
+static int evtchnl_publish(struct xenbus_transaction xbt,
+		struct xen_drm_front_evtchnl *evtchnl, const char *path,
+		const char *node_ring, const char *node_chnl)
+{
+	struct xenbus_device *xb_dev = evtchnl->front_info->xb_dev;
+	int ret;
+
+	/* write control channel ring reference */
+	ret = xenbus_printf(xbt, path, node_ring, "%u", evtchnl->gref);
+	if (ret < 0) {
+		xenbus_dev_error(xb_dev, ret, "writing ring-ref");
+		return ret;
+	}
+
+	/* write event channel ring reference */
+	ret = xenbus_printf(xbt, path, node_chnl, "%u", evtchnl->port);
+	if (ret < 0) {
+		xenbus_dev_error(xb_dev, ret, "writing event channel");
+		return ret;
+	}
+
+	return 0;
+}
+
+int xen_drm_front_evtchnl_publish_all(struct xen_drm_front_info *front_info)
+{
+	struct xenbus_transaction xbt;
+	struct xen_drm_front_cfg *plat_data;
+	int ret, conn;
+
+	plat_data = &front_info->cfg;
+
+again:
+	ret = xenbus_transaction_start(&xbt);
+	if (ret < 0) {
+		xenbus_dev_fatal(front_info->xb_dev, ret,
+				"starting transaction");
+		return ret;
+	}
+
+	for (conn = 0; conn < plat_data->num_connectors; conn++) {
+		ret = evtchnl_publish(xbt,
+				&front_info->evt_pairs[conn].req,
+				plat_data->connectors[conn].xenstore_path,
+				XENDISPL_FIELD_REQ_RING_REF,
+				XENDISPL_FIELD_REQ_CHANNEL);
+		if (ret < 0)
+			goto fail;
+
+		ret = evtchnl_publish(xbt,
+				&front_info->evt_pairs[conn].evt,
+				plat_data->connectors[conn].xenstore_path,
+				XENDISPL_FIELD_EVT_RING_REF,
+				XENDISPL_FIELD_EVT_CHANNEL);
+		if (ret < 0)
+			goto fail;
+	}
+
+	ret = xenbus_transaction_end(xbt, 0);
+	if (ret < 0) {
+		if (ret == -EAGAIN)
+			goto again;
+
+		xenbus_dev_fatal(front_info->xb_dev, ret,
+				"completing transaction");
+		goto fail_to_end;
+	}
+
+	return 0;
+
+fail:
+	xenbus_transaction_end(xbt, 1);
+
+fail_to_end:
+	xenbus_dev_fatal(front_info->xb_dev, ret, "writing Xen store");
+	return ret;
+}
+
+void xen_drm_front_evtchnl_flush(struct xen_drm_front_evtchnl *evtchnl)
+{
+	int notify;
+
+	evtchnl->u.req.ring.req_prod_pvt++;
+	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&evtchnl->u.req.ring, notify);
+	if (notify)
+		notify_remote_via_irq(evtchnl->irq);
+}
+
+void xen_drm_front_evtchnl_set_state(struct xen_drm_front_info *front_info,
+		enum xen_drm_front_evtchnl_state state)
+{
+	unsigned long flags;
+	int i;
+
+	if (!front_info->evt_pairs)
+		return;
+
+	spin_lock_irqsave(&front_info->io_lock, flags);
+	for (i = 0; i < front_info->num_evt_pairs; i++) {
+		front_info->evt_pairs[i].req.state = state;
+		front_info->evt_pairs[i].evt.state = state;
+	}
+	spin_unlock_irqrestore(&front_info->io_lock, flags);
+}
+
+void xen_drm_front_evtchnl_free_all(struct xen_drm_front_info *front_info)
+{
+	int i;
+
+	if (!front_info->evt_pairs)
+		return;
+
+	for (i = 0; i < front_info->num_evt_pairs; i++) {
+		evtchnl_free(front_info, &front_info->evt_pairs[i].req);
+		evtchnl_free(front_info, &front_info->evt_pairs[i].evt);
+	}
+
+	devm_kfree(&front_info->xb_dev->dev, front_info->evt_pairs);
+	front_info->evt_pairs = NULL;
+}
diff --git a/drivers/gpu/drm/xen/xen_drm_front_evtchnl.h b/drivers/gpu/drm/xen/xen_drm_front_evtchnl.h
new file mode 100644
index 000000000000..e72d3aa68b4e
--- /dev/null
+++ b/drivers/gpu/drm/xen/xen_drm_front_evtchnl.h
@@ -0,0 +1,89 @@
+/*
+ * Xen para-virtual DRM device
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Copyright (C) 2016-2018 EPAM Systems Inc.
+ *
+ * Author: Oleksandr Andrushchenko
+ */
+
+#ifndef __XEN_DRM_FRONT_EVTCHNL_H_
+#define __XEN_DRM_FRONT_EVTCHNL_H_
+
+#include
+#include
+
+#include
+#include
+
+/*
+ * All operations which are not connector oriented use this ctrl event channel,
+ * e.g. fb_attach/destroy which belong to a DRM device, not to a CRTC.
+ */
+#define GENERIC_OP_EVT_CHNL	0
+
+enum xen_drm_front_evtchnl_state {
+	EVTCHNL_STATE_DISCONNECTED,
+	EVTCHNL_STATE_CONNECTED,
+};
+
+enum xen_drm_front_evtchnl_type {
+	EVTCHNL_TYPE_REQ,
+	EVTCHNL_TYPE_EVT,
+};
+
+struct xen_drm_front_drm_info;
+
+struct xen_drm_front_evtchnl {
+	struct xen_drm_front_info *front_info;
+	int gref;
+	int port;
+	int irq;
+	int index;
+	enum xen_drm_front_evtchnl_state state;
+	enum xen_drm_front_evtchnl_type type;
+	/* either response id or incoming event id */
+	uint16_t evt_id;
+	/* next request id or next expected event id */
+	uint16_t evt_next_id;
+	union {
+		struct {
+			struct xen_displif_front_ring ring;
+			struct completion completion;
+			/* latest response status */
+			int resp_status;
+		} req;
+		struct {
+			struct xendispl_event_page *page;
+			struct xen_drm_front_ops *front_ops;
+		} evt;
+	} u;
+};
+
+struct xen_drm_front_evtchnl_pair {
+	struct xen_drm_front_evtchnl req;
+	struct xen_drm_front_evtchnl evt;
+};
+
+int xen_drm_front_evtchnl_create_all(struct xen_drm_front_info *front_info,
+		struct xen_drm_front_ops *front_ops);
+
+int xen_drm_front_evtchnl_publish_all(struct xen_drm_front_info *front_info);
+
+void xen_drm_front_evtchnl_flush(struct xen_drm_front_evtchnl *evtchnl);
+
+void xen_drm_front_evtchnl_set_state(struct xen_drm_front_info *front_info,
+		enum xen_drm_front_evtchnl_state state);
+
+void xen_drm_front_evtchnl_free_all(struct xen_drm_front_info *front_info);
+
+#endif /* __XEN_DRM_FRONT_EVTCHNL_H_ */
-- 
2.7.4