From: Oleksandr Andrushchenko
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
    dri-devel@lists.freedesktop.org, linux-media@vger.kernel.org,
    jgross@suse.com, boris.ostrovsky@oracle.com, konrad.wilk@oracle.com
Cc: daniel.vetter@intel.com, andr2000@gmail.com, dongwon.kim@intel.com,
    matthew.d.roper@intel.com, Oleksandr Andrushchenko
Subject: [PATCH v5 8/8] xen/gntdev: Implement dma-buf import functionality
Date: Fri, 20 Jul 2018 12:01:50 +0300
Message-Id: <20180720090150.24560-9-andr2000@gmail.com>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20180720090150.24560-1-andr2000@gmail.com>
References: <20180720090150.24560-1-andr2000@gmail.com>

From: Oleksandr Andrushchenko

1. Import a dma-buf with the file descriptor provided and export
   granted references to the pages of that dma-buf into the array
   of grant references.

2. Add API to close all references to an imported buffer, so it can be
   released by the owner. This is only valid for buffers created with
   IOCTL_GNTDEV_DMABUF_IMP_TO_REFS.

Signed-off-by: Oleksandr Andrushchenko
Reviewed-by: Boris Ostrovsky
---
 drivers/xen/gntdev-dmabuf.c | 239 +++++++++++++++++++++++++++++++++++-
 1 file changed, 234 insertions(+), 5 deletions(-)

diff --git a/drivers/xen/gntdev-dmabuf.c b/drivers/xen/gntdev-dmabuf.c
index cc4f16c81919..e4c9f1f74476 100644
--- a/drivers/xen/gntdev-dmabuf.c
+++ b/drivers/xen/gntdev-dmabuf.c
@@ -21,6 +21,15 @@
 #include "gntdev-common.h"
 #include "gntdev-dmabuf.h"
 
+#ifndef GRANT_INVALID_REF
+/*
+ * Note on usage of grant reference 0 as invalid grant reference:
+ * grant reference 0 is valid, but never exposed to a driver,
+ * because of the fact it is already in use/reserved by the PV console.
+ */
+#define GRANT_INVALID_REF	0
+#endif
+
 struct gntdev_dmabuf {
 	struct gntdev_dmabuf_priv *priv;
 	struct dma_buf *dmabuf;
@@ -35,6 +44,14 @@ struct gntdev_dmabuf {
 			struct gntdev_priv *priv;
 			struct gntdev_grant_map *map;
 		} exp;
+		struct {
+			/* Granted references of the imported buffer. */
+			grant_ref_t *refs;
+			/* Scatter-gather table of the imported buffer. */
+			struct sg_table *sgt;
+			/* dma-buf attachment of the imported buffer. */
+			struct dma_buf_attachment *attach;
+		} imp;
 	} u;
 
 	/* Number of pages this buffer has. */
@@ -59,6 +76,8 @@ struct gntdev_dmabuf_priv {
 	struct list_head exp_list;
 	/* List of wait objects. */
 	struct list_head exp_wait_list;
+	/* List of imported DMA buffers. */
+	struct list_head imp_list;
 	/* This is the lock which protects dma_buf_xxx lists. */
 	struct mutex lock;
 };
@@ -491,21 +510,230 @@ static int dmabuf_exp_from_refs(struct gntdev_priv *priv, int flags,
 
 /* DMA buffer import support. */
 
+static int
+dmabuf_imp_grant_foreign_access(struct page **pages, u32 *refs,
+				int count, int domid)
+{
+	grant_ref_t priv_gref_head;
+	int i, ret;
+
+	ret = gnttab_alloc_grant_references(count, &priv_gref_head);
+	if (ret < 0) {
+		pr_debug("Cannot allocate grant references, ret %d\n", ret);
+		return ret;
+	}
+
+	for (i = 0; i < count; i++) {
+		int cur_ref;
+
+		cur_ref = gnttab_claim_grant_reference(&priv_gref_head);
+		if (cur_ref < 0) {
+			ret = cur_ref;
+			pr_debug("Cannot claim grant reference, ret %d\n", ret);
+			goto out;
+		}
+
+		gnttab_grant_foreign_access_ref(cur_ref, domid,
+						xen_page_to_gfn(pages[i]), 0);
+		refs[i] = cur_ref;
+	}
+
+	return 0;
+
+out:
+	gnttab_free_grant_references(priv_gref_head);
+	return ret;
+}
+
+static void dmabuf_imp_end_foreign_access(u32 *refs, int count)
+{
+	int i;
+
+	for (i = 0; i < count; i++)
+		if (refs[i] != GRANT_INVALID_REF)
+			gnttab_end_foreign_access(refs[i], 0, 0UL);
+}
+
+static void dmabuf_imp_free_storage(struct gntdev_dmabuf *gntdev_dmabuf)
+{
+	kfree(gntdev_dmabuf->pages);
+	kfree(gntdev_dmabuf->u.imp.refs);
+	kfree(gntdev_dmabuf);
+}
+
+static struct gntdev_dmabuf *dmabuf_imp_alloc_storage(int count)
+{
+	struct gntdev_dmabuf *gntdev_dmabuf;
+	int i;
+
+	gntdev_dmabuf = kzalloc(sizeof(*gntdev_dmabuf), GFP_KERNEL);
+	if (!gntdev_dmabuf)
+		goto fail;
+
+	gntdev_dmabuf->u.imp.refs = kcalloc(count,
+					    sizeof(gntdev_dmabuf->u.imp.refs[0]),
+					    GFP_KERNEL);
+	if (!gntdev_dmabuf->u.imp.refs)
+		goto fail;
+
+	gntdev_dmabuf->pages = kcalloc(count,
+				       sizeof(gntdev_dmabuf->pages[0]),
+				       GFP_KERNEL);
+	if (!gntdev_dmabuf->pages)
+		goto fail;
+
+	gntdev_dmabuf->nr_pages = count;
+
+	for (i = 0; i < count; i++)
+		gntdev_dmabuf->u.imp.refs[i] = GRANT_INVALID_REF;
+
+	return gntdev_dmabuf;
+
+fail:
+	dmabuf_imp_free_storage(gntdev_dmabuf);
+	return ERR_PTR(-ENOMEM);
+}
+
 static struct gntdev_dmabuf *
 dmabuf_imp_to_refs(struct gntdev_dmabuf_priv *priv, struct device *dev,
 		   int fd, int count, int domid)
 {
-	return ERR_PTR(-ENOMEM);
+	struct gntdev_dmabuf *gntdev_dmabuf, *ret;
+	struct dma_buf *dma_buf;
+	struct dma_buf_attachment *attach;
+	struct sg_table *sgt;
+	struct sg_page_iter sg_iter;
+	int i;
+
+	dma_buf = dma_buf_get(fd);
+	if (IS_ERR(dma_buf))
+		return ERR_CAST(dma_buf);
+
+	gntdev_dmabuf = dmabuf_imp_alloc_storage(count);
+	if (IS_ERR(gntdev_dmabuf)) {
+		ret = gntdev_dmabuf;
+		goto fail_put;
+	}
+
+	gntdev_dmabuf->priv = priv;
+	gntdev_dmabuf->fd = fd;
+
+	attach = dma_buf_attach(dma_buf, dev);
+	if (IS_ERR(attach)) {
+		ret = ERR_CAST(attach);
+		goto fail_free_obj;
+	}
+
+	gntdev_dmabuf->u.imp.attach = attach;
+
+	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
+	if (IS_ERR(sgt)) {
+		ret = ERR_CAST(sgt);
+		goto fail_detach;
+	}
+
+	/* Check number of pages that imported buffer has. */
+	if (attach->dmabuf->size != gntdev_dmabuf->nr_pages << PAGE_SHIFT) {
+		ret = ERR_PTR(-EINVAL);
+		pr_debug("DMA buffer has %zu pages, user-space expects %d\n",
+			 attach->dmabuf->size, gntdev_dmabuf->nr_pages);
+		goto fail_unmap;
+	}
+
+	gntdev_dmabuf->u.imp.sgt = sgt;
+
+	/* Now convert sgt to array of pages and check for page validity. */
+	i = 0;
+	for_each_sg_page(sgt->sgl, &sg_iter, sgt->nents, 0) {
+		struct page *page = sg_page_iter_page(&sg_iter);
+		/*
+		 * Check if page is valid: this can happen if we are given
+		 * a page from VRAM or other resources which are not backed
+		 * by a struct page.
+		 */
+		if (!pfn_valid(page_to_pfn(page))) {
+			ret = ERR_PTR(-EINVAL);
+			goto fail_unmap;
+		}
+
+		gntdev_dmabuf->pages[i++] = page;
+	}
+
+	ret = ERR_PTR(dmabuf_imp_grant_foreign_access(gntdev_dmabuf->pages,
+						      gntdev_dmabuf->u.imp.refs,
+						      count, domid));
+	if (IS_ERR(ret))
+		goto fail_end_access;
+
+	pr_debug("Imported DMA buffer with fd %d\n", fd);
+
+	mutex_lock(&priv->lock);
+	list_add(&gntdev_dmabuf->next, &priv->imp_list);
+	mutex_unlock(&priv->lock);
+
+	return gntdev_dmabuf;
+
+fail_end_access:
+	dmabuf_imp_end_foreign_access(gntdev_dmabuf->u.imp.refs, count);
+fail_unmap:
+	dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
+fail_detach:
+	dma_buf_detach(dma_buf, attach);
+fail_free_obj:
+	dmabuf_imp_free_storage(gntdev_dmabuf);
+fail_put:
+	dma_buf_put(dma_buf);
+	return ret;
 }
 
-static u32 *dmabuf_imp_get_refs(struct gntdev_dmabuf *gntdev_dmabuf)
+/*
+ * Find the hyper dma-buf by its file descriptor and remove
+ * it from the buffer's list.
+ */
+static struct gntdev_dmabuf *
+dmabuf_imp_find_unlink(struct gntdev_dmabuf_priv *priv, int fd)
 {
-	return NULL;
+	struct gntdev_dmabuf *q, *gntdev_dmabuf, *ret = ERR_PTR(-ENOENT);
+
+	mutex_lock(&priv->lock);
+	list_for_each_entry_safe(gntdev_dmabuf, q, &priv->imp_list, next) {
+		if (gntdev_dmabuf->fd == fd) {
+			pr_debug("Found gntdev_dmabuf in the import list\n");
+			ret = gntdev_dmabuf;
+			list_del(&gntdev_dmabuf->next);
+			break;
+		}
+	}
+	mutex_unlock(&priv->lock);
+	return ret;
 }
 
 static int dmabuf_imp_release(struct gntdev_dmabuf_priv *priv, u32 fd)
 {
-	return -EINVAL;
+	struct gntdev_dmabuf *gntdev_dmabuf;
+	struct dma_buf_attachment *attach;
+	struct dma_buf *dma_buf;
+
+	gntdev_dmabuf = dmabuf_imp_find_unlink(priv, fd);
+	if (IS_ERR(gntdev_dmabuf))
+		return PTR_ERR(gntdev_dmabuf);
+
+	pr_debug("Releasing DMA buffer with fd %d\n", fd);
+
+	dmabuf_imp_end_foreign_access(gntdev_dmabuf->u.imp.refs,
+				      gntdev_dmabuf->nr_pages);
+
+	attach = gntdev_dmabuf->u.imp.attach;
+
+	if (gntdev_dmabuf->u.imp.sgt)
+		dma_buf_unmap_attachment(attach, gntdev_dmabuf->u.imp.sgt,
+					 DMA_BIDIRECTIONAL);
+	dma_buf = attach->dmabuf;
+	dma_buf_detach(attach->dmabuf, attach);
+	dma_buf_put(dma_buf);
+
+	dmabuf_imp_free_storage(gntdev_dmabuf);
+	return 0;
 }
 
 /* DMA buffer IOCTL support. */
@@ -582,7 +810,7 @@ long gntdev_ioctl_dmabuf_imp_to_refs(struct gntdev_priv *priv,
 	if (IS_ERR(gntdev_dmabuf))
 		return PTR_ERR(gntdev_dmabuf);
 
-	if (copy_to_user(u->refs, dmabuf_imp_get_refs(gntdev_dmabuf),
+	if (copy_to_user(u->refs, gntdev_dmabuf->u.imp.refs,
 			 sizeof(*u->refs) * op.count) != 0) {
 		ret = -EFAULT;
 		goto out_release;
@@ -616,6 +844,7 @@ struct gntdev_dmabuf_priv *gntdev_dmabuf_init(void)
 	mutex_init(&priv->lock);
 	INIT_LIST_HEAD(&priv->exp_list);
 	INIT_LIST_HEAD(&priv->exp_wait_list);
+	INIT_LIST_HEAD(&priv->imp_list);
 
 	return priv;
 }
-- 
2.18.0
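
A minimal user-space sketch of how the import path above might be driven. This is illustrative only and not part of the patch: it assumes the UAPI added earlier in this series (struct ioctl_gntdev_dmabuf_imp_to_refs, struct ioctl_gntdev_dmabuf_imp_release and the IOCTL_GNTDEV_DMABUF_IMP_TO_REFS / IOCTL_GNTDEV_DMABUF_IMP_RELEASE numbers from include/uapi/xen/gntdev.h), a gntdev_fd obtained by opening /dev/xen/gntdev, and a dmabuf_fd exported by some other device; check that header for the exact field layout before relying on it.

/*
 * Illustrative only -- not part of the patch. Assumes the gntdev dma-buf
 * UAPI from the earlier patches in this series (include/uapi/xen/gntdev.h).
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <xen/gntdev.h>

/*
 * Import @dmabuf_fd spanning @count pages, grant its pages to @domid,
 * print the grant references handed back and then drop them again.
 */
static int import_and_release(int gntdev_fd, int dmabuf_fd, int count,
			      int domid)
{
	struct ioctl_gntdev_dmabuf_imp_to_refs *imp;
	struct ioctl_gntdev_dmabuf_imp_release rel;
	int i, ret;

	/* refs[] is carried inline after the fixed fields, so allocate room. */
	imp = calloc(1, sizeof(*imp) + (count - 1) * sizeof(imp->refs[0]));
	if (!imp)
		return -1;

	imp->fd = dmabuf_fd;
	imp->count = count;
	imp->domid = domid;

	ret = ioctl(gntdev_fd, IOCTL_GNTDEV_DMABUF_IMP_TO_REFS, imp);
	if (ret) {
		perror("IOCTL_GNTDEV_DMABUF_IMP_TO_REFS");
		goto out;
	}

	for (i = 0; i < count; i++)
		printf("page %d -> grant ref %u\n", i, imp->refs[i]);

	/* Once the other domain is done with the grants, release them. */
	memset(&rel, 0, sizeof(rel));
	rel.fd = dmabuf_fd;
	ret = ioctl(gntdev_fd, IOCTL_GNTDEV_DMABUF_IMP_RELEASE, &rel);
	if (ret)
		perror("IOCTL_GNTDEV_DMABUF_IMP_RELEASE");

out:
	free(imp);
	return ret;
}

The request is sized for @count references because the refs[] array follows the fixed fields, mirroring the copy_to_user(u->refs, ...) in the ioctl handler above; the release call is only valid for buffers previously imported with IOCTL_GNTDEV_DMABUF_IMP_TO_REFS, as the commit message notes.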