From: Oleksandr Andrushchenko
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
    dri-devel@lists.freedesktop.org, linux-media@vger.kernel.org,
    jgross@suse.com, boris.ostrovsky@oracle.com, konrad.wilk@oracle.com
Cc: daniel.vetter@intel.com, andr2000@gmail.com, dongwon.kim@intel.com,
    matthew.d.roper@intel.com, Oleksandr Andrushchenko
Subject: [PATCH v2 8/9] xen/gntdev: Implement dma-buf import functionality
Date: Fri, 1 Jun 2018 14:41:31 +0300
Message-Id: <20180601114132.22596-9-andr2000@gmail.com>
In-Reply-To: <20180601114132.22596-1-andr2000@gmail.com>
References: <20180601114132.22596-1-andr2000@gmail.com>

From: Oleksandr Andrushchenko

1. Import a dma-buf with the file descriptor provided and export
   granted references to the pages of that dma-buf into the array
   of grant references.

2. Add an API to close all references to an imported buffer, so it
   can be released by the owner. This is only valid for buffers
   created with IOCTL_GNTDEV_DMABUF_IMP_TO_REFS.

Signed-off-by: Oleksandr Andrushchenko
---
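For reviewers' convenience, below is a minimal user-space sketch of how the
import and release paths might be exercised. It is not part of this patch:
it assumes the UAPI added earlier in this series (<xen/gntdev.h> exposing
struct ioctl_gntdev_dmabuf_imp_to_refs with fd/count/domid and a trailing
refs[] array, and struct ioctl_gntdev_dmabuf_imp_release with fd), so the
ioctl names and field layout used here are assumptions rather than the
final interface.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <xen/gntdev.h>

static int import_and_release(int dmabuf_fd, int count, int domid)
{
        /* Hypothetical example; UAPI layout assumed from the series' UAPI patch. */
        struct ioctl_gntdev_dmabuf_imp_to_refs *imp;
        struct ioctl_gntdev_dmabuf_imp_release rel = { 0 };
        int gntdev_fd, i, ret = -1;

        gntdev_fd = open("/dev/xen/gntdev", O_RDWR);
        if (gntdev_fd < 0)
                return -1;

        /* refs[] is a variable-size trailing array in the assumed UAPI. */
        imp = calloc(1, sizeof(*imp) + (count - 1) * sizeof(imp->refs[0]));
        if (!imp)
                goto out_close;

        imp->fd = dmabuf_fd;    /* dma-buf to import */
        imp->count = count;     /* number of pages the buffer is expected to have */
        imp->domid = domid;     /* domain the pages are granted to */

        if (ioctl(gntdev_fd, IOCTL_GNTDEV_DMABUF_IMP_TO_REFS, imp) < 0) {
                perror("IOCTL_GNTDEV_DMABUF_IMP_TO_REFS");
                goto out_free;
        }

        /* One grant reference per page of the imported dma-buf. */
        for (i = 0; i < count; i++)
                printf("page %d -> grant ref %u\n", i, imp->refs[i]);

        /* Release is only valid for buffers imported with IMP_TO_REFS. */
        rel.fd = dmabuf_fd;
        if (ioctl(gntdev_fd, IOCTL_GNTDEV_DMABUF_IMP_RELEASE, &rel) < 0) {
                perror("IOCTL_GNTDEV_DMABUF_IMP_RELEASE");
                goto out_free;
        }
        ret = 0;

out_free:
        free(imp);
out_close:
        close(gntdev_fd);
        return ret;
}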
 drivers/xen/gntdev-dmabuf.c | 243 +++++++++++++++++++++++++++++++++++-
 1 file changed, 241 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/gntdev-dmabuf.c b/drivers/xen/gntdev-dmabuf.c
index f612468879b4..b5569a220f03 100644
--- a/drivers/xen/gntdev-dmabuf.c
+++ b/drivers/xen/gntdev-dmabuf.c
@@ -11,8 +11,20 @@
 #include
 #include
 
+#include
+#include
+
 #include "gntdev-dmabuf.h"
 
+#ifndef GRANT_INVALID_REF
+/*
+ * Note on usage of grant reference 0 as invalid grant reference:
+ * grant reference 0 is valid, but never exposed to a driver,
+ * because of the fact it is already in use/reserved by the PV console.
+ */
+#define GRANT_INVALID_REF 0
+#endif
+
 struct gntdev_dmabuf {
         struct gntdev_dmabuf_priv *priv;
         struct dma_buf *dmabuf;
@@ -29,6 +41,14 @@ struct gntdev_dmabuf {
                         void (*release)(struct gntdev_priv *priv,
                                         struct grant_map *map);
                 } exp;
+                struct {
+                        /* Granted references of the imported buffer. */
+                        grant_ref_t *refs;
+                        /* Scatter-gather table of the imported buffer. */
+                        struct sg_table *sgt;
+                        /* dma-buf attachment of the imported buffer. */
+                        struct dma_buf_attachment *attach;
+                } imp;
         } u;
 
         /* Number of pages this buffer has. */
@@ -53,6 +73,8 @@ struct gntdev_dmabuf_priv {
         struct list_head exp_list;
         /* List of wait objects. */
         struct list_head exp_wait_list;
+        /* List of imported DMA buffers. */
+        struct list_head imp_list;
         /* This is the lock which protects dma_buf_xxx lists. */
         struct mutex lock;
 };
@@ -424,21 +446,237 @@ int gntdev_dmabuf_exp_from_pages(struct gntdev_dmabuf_export_args *args)
 /* DMA buffer import support.                                        */
 /* ------------------------------------------------------------------ */
 
+static int
+dmabuf_imp_grant_foreign_access(struct page **pages, u32 *refs,
+                                int count, int domid)
+{
+        grant_ref_t priv_gref_head;
+        int i, ret;
+
+        ret = gnttab_alloc_grant_references(count, &priv_gref_head);
+        if (ret < 0) {
+                pr_err("Cannot allocate grant references, ret %d\n", ret);
+                return ret;
+        }
+
+        for (i = 0; i < count; i++) {
+                int cur_ref;
+
+                cur_ref = gnttab_claim_grant_reference(&priv_gref_head);
+                if (cur_ref < 0) {
+                        ret = cur_ref;
+                        pr_err("Cannot claim grant reference, ret %d\n", ret);
+                        goto out;
+                }
+
+                gnttab_grant_foreign_access_ref(cur_ref, domid,
+                                                xen_page_to_gfn(pages[i]), 0);
+                refs[i] = cur_ref;
+        }
+
+        ret = 0;
+
+out:
+        gnttab_free_grant_references(priv_gref_head);
+        return ret;
+}
+
+static void dmabuf_imp_end_foreign_access(u32 *refs, int count)
+{
+        int i;
+
+        for (i = 0; i < count; i++)
+                if (refs[i] != GRANT_INVALID_REF)
+                        gnttab_end_foreign_access(refs[i], 0, 0UL);
+}
+
+static void dmabuf_imp_free_storage(struct gntdev_dmabuf *gntdev_dmabuf)
+{
+        kfree(gntdev_dmabuf->pages);
+        kfree(gntdev_dmabuf->u.imp.refs);
+        kfree(gntdev_dmabuf);
+}
+
+static struct gntdev_dmabuf *dmabuf_imp_alloc_storage(int count)
+{
+        struct gntdev_dmabuf *gntdev_dmabuf;
+        int i;
+
+        gntdev_dmabuf = kzalloc(sizeof(*gntdev_dmabuf), GFP_KERNEL);
+        if (!gntdev_dmabuf)
+                goto fail;
+
+        gntdev_dmabuf->u.imp.refs = kcalloc(count,
+                                            sizeof(gntdev_dmabuf->u.imp.refs[0]),
+                                            GFP_KERNEL);
+        if (!gntdev_dmabuf->u.imp.refs)
+                goto fail;
+
+        gntdev_dmabuf->pages = kcalloc(count,
+                                       sizeof(gntdev_dmabuf->pages[0]),
+                                       GFP_KERNEL);
+        if (!gntdev_dmabuf->pages)
+                goto fail;
+
+        gntdev_dmabuf->nr_pages = count;
+
+        for (i = 0; i < count; i++)
+                gntdev_dmabuf->u.imp.refs[i] = GRANT_INVALID_REF;
+
+        return gntdev_dmabuf;
+
+fail:
+        dmabuf_imp_free_storage(gntdev_dmabuf);
+        return ERR_PTR(-ENOMEM);
+}
+
 struct gntdev_dmabuf *
 gntdev_dmabuf_imp_to_refs(struct gntdev_dmabuf_priv *priv, struct device *dev,
                           int fd, int count, int domid)
 {
-        return ERR_PTR(-ENOMEM);
+        struct gntdev_dmabuf *gntdev_dmabuf, *ret;
+        struct dma_buf *dma_buf;
+        struct dma_buf_attachment *attach;
+        struct sg_table *sgt;
+        struct sg_page_iter sg_iter;
+        int i;
+
+        dma_buf = dma_buf_get(fd);
+        if (IS_ERR(dma_buf))
+                return ERR_CAST(dma_buf);
+
+        gntdev_dmabuf = dmabuf_imp_alloc_storage(count);
+        if (IS_ERR(gntdev_dmabuf)) {
+                ret = gntdev_dmabuf;
+                goto fail_put;
+        }
+
+        gntdev_dmabuf->priv = priv;
+        gntdev_dmabuf->fd = fd;
+
+        attach = dma_buf_attach(dma_buf, dev);
+        if (IS_ERR(attach)) {
+                ret = ERR_CAST(attach);
+                goto fail_free_obj;
+        }
+
+        gntdev_dmabuf->u.imp.attach = attach;
+
+        sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
+        if (IS_ERR(sgt)) {
+                ret = ERR_CAST(sgt);
+                goto fail_detach;
+        }
+
+        /* Check number of pages that imported buffer has. */
+        if (attach->dmabuf->size != gntdev_dmabuf->nr_pages << PAGE_SHIFT) {
+                ret = ERR_PTR(-EINVAL);
+                pr_err("DMA buffer has %zu pages, user-space expects %d\n",
+                       attach->dmabuf->size, gntdev_dmabuf->nr_pages);
+                goto fail_unmap;
+        }
+
+        gntdev_dmabuf->u.imp.sgt = sgt;
+
+        /* Now convert sgt to array of pages and check for page validity. */
+        i = 0;
+        for_each_sg_page(sgt->sgl, &sg_iter, sgt->nents, 0) {
+                struct page *page = sg_page_iter_page(&sg_iter);
+                /*
+                 * Check if page is valid: this can happen if we are given
+                 * a page from VRAM or other resources which are not backed
+                 * by a struct page.
+                 */
+                if (!pfn_valid(page_to_pfn(page))) {
+                        ret = ERR_PTR(-EINVAL);
+                        goto fail_unmap;
+                }
+
+                gntdev_dmabuf->pages[i++] = page;
+        }
+
+        ret = ERR_PTR(dmabuf_imp_grant_foreign_access(gntdev_dmabuf->pages,
+                                                      gntdev_dmabuf->u.imp.refs,
+                                                      count, domid));
+        if (IS_ERR(ret))
+                goto fail_end_access;
+
+        pr_debug("Imported DMA buffer with fd %d\n", fd);
+
+        mutex_lock(&priv->lock);
+        list_add(&gntdev_dmabuf->next, &priv->imp_list);
+        mutex_unlock(&priv->lock);
+
+        return gntdev_dmabuf;
+
+fail_end_access:
+        dmabuf_imp_end_foreign_access(gntdev_dmabuf->u.imp.refs, count);
+fail_unmap:
+        dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
+fail_detach:
+        dma_buf_detach(dma_buf, attach);
+fail_free_obj:
+        dmabuf_imp_free_storage(gntdev_dmabuf);
+fail_put:
+        dma_buf_put(dma_buf);
+        return ret;
 }
 
 u32 *gntdev_dmabuf_imp_get_refs(struct gntdev_dmabuf *gntdev_dmabuf)
 {
+        if (gntdev_dmabuf)
+                return gntdev_dmabuf->u.imp.refs;
+
         return NULL;
 }
 
+/*
+ * Find the hyper dma-buf by its file descriptor and remove
+ * it from the buffer's list.
+ */
+static struct gntdev_dmabuf *
+dmabuf_imp_find_unlink(struct gntdev_dmabuf_priv *priv, int fd)
+{
+        struct gntdev_dmabuf *q, *gntdev_dmabuf, *ret = ERR_PTR(-ENOENT);
+
+        mutex_lock(&priv->lock);
+        list_for_each_entry_safe(gntdev_dmabuf, q, &priv->imp_list, next) {
+                if (gntdev_dmabuf->fd == fd) {
+                        pr_debug("Found gntdev_dmabuf in the import list\n");
+                        ret = gntdev_dmabuf;
+                        list_del(&gntdev_dmabuf->next);
+                        break;
+                }
+        }
+        mutex_unlock(&priv->lock);
+        return ret;
+}
+
 int gntdev_dmabuf_imp_release(struct gntdev_dmabuf_priv *priv, u32 fd)
 {
-        return -EINVAL;
+        struct gntdev_dmabuf *gntdev_dmabuf;
+        struct dma_buf_attachment *attach;
+        struct dma_buf *dma_buf;
+
+        gntdev_dmabuf = dmabuf_imp_find_unlink(priv, fd);
+        if (IS_ERR(gntdev_dmabuf))
+                return PTR_ERR(gntdev_dmabuf);
+
+        pr_debug("Releasing DMA buffer with fd %d\n", fd);
+
+        attach = gntdev_dmabuf->u.imp.attach;
+
+        if (gntdev_dmabuf->u.imp.sgt)
+                dma_buf_unmap_attachment(attach, gntdev_dmabuf->u.imp.sgt,
+                                         DMA_BIDIRECTIONAL);
+        dma_buf = attach->dmabuf;
+        dma_buf_detach(attach->dmabuf, attach);
+        dma_buf_put(dma_buf);
+
+        dmabuf_imp_end_foreign_access(gntdev_dmabuf->u.imp.refs,
+                                      gntdev_dmabuf->nr_pages);
+        dmabuf_imp_free_storage(gntdev_dmabuf);
+        return 0;
 }
 
 struct gntdev_dmabuf_priv *gntdev_dmabuf_init(void)
@@ -452,6 +690,7 @@ struct gntdev_dmabuf_priv *gntdev_dmabuf_init(void)
         mutex_init(&priv->lock);
         INIT_LIST_HEAD(&priv->exp_list);
         INIT_LIST_HEAD(&priv->exp_wait_list);
+        INIT_LIST_HEAD(&priv->imp_list);
 
         return priv;
 }
-- 
2.17.0
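For readers less familiar with the dma-buf importer API, the import path
implemented above by gntdev_dmabuf_imp_to_refs() reduces to the standard
importer call sequence. A condensed, hypothetical sketch follows (kernel
context; IS_ERR()/error handling and the actual grant operations are
elided, and the function name is invented for illustration):

#include <linux/device.h>
#include <linux/dma-buf.h>
#include <linux/mm.h>
#include <linux/scatterlist.h>

/* Condensed importer-side sequence; mirrors the import/release paths above. */
static int dmabuf_import_sketch(struct device *dev, int fd)
{
        struct dma_buf *dma_buf;
        struct dma_buf_attachment *attach;
        struct sg_table *sgt;
        struct sg_page_iter sg_iter;
        int ret = 0;

        dma_buf = dma_buf_get(fd);              /* take a reference on the dma-buf */
        attach = dma_buf_attach(dma_buf, dev);  /* attach this device as an importer */
        sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);

        /* Walk the backing pages; only struct-page backed memory can be granted. */
        for_each_sg_page(sgt->sgl, &sg_iter, sgt->nents, 0) {
                struct page *page = sg_page_iter_page(&sg_iter);

                if (!pfn_valid(page_to_pfn(page))) {    /* e.g. VRAM has no struct page */
                        ret = -EINVAL;
                        break;
                }
                /* ... gnttab_grant_foreign_access_ref() for this page ... */
        }

        /* Teardown mirrors gntdev_dmabuf_imp_release(). */
        dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
        dma_buf_detach(dma_buf, attach);
        dma_buf_put(dma_buf);
        return ret;
}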